yonkshi's comments | Hacker News

Unfortunately, any AI system will encounter the verification dilemma: the more powerful an AI system becomes, the less verifiable it is.


Verification is easy, just like with humans: it's called a driver's license. And since the AI does not tire and can probably be sped up, you can quickly put it through hundreds of thousands of hours of driving in a good simulator, with adversarial and normal situations, then rate it on various tasks. Just as we should do with human drivers, but fail to.

Explanation is harder. But we probably shouldn't care; even in court, people often cannot explain what they did while driving or why, or they just lie. The thing is, for liability purposes you have to ensure that it is not a serial defect, and that a good human driver would not have been able to handle the situation either.


In the world of investments, the buzzword for that would be “backtesting”.

But is it sufficient to answer _why_ the machine chose to act a certain way given a certain set of instantaneous input criteria?


No. But, as the parent post said, we don't always know why a human driver chose to act a certain way either. So that shouldn't be a blocker.


But we evolved to have empathy to help us understand how humans act in cases where we don't have complete or good information.

This is why "crazy" people make us so uneasy. They don't fit our mental models for how a human should act. Would you be comfortable driving with a road full of unpredictable "crazies"?

I wonder if we'll ever have the same level of trust in AI as in humans if it is still being used as a black box.


Bad AI will suffer from the Dunning-Kruger effect and overestimate its abilities, while good AI will suffer from Imposter Syndrome and underestimate its abilities.


AGI is a gradient, not an arbitrary threshold.

We are not capable of recreating human-level intelligence yet, but our modern algorithms have become orders of magnitude better at generalization and sample efficiency. And this trend is not showing any signs of slowing down.

Take PPO, for example (it powers the OpenAI Five Dota agent): the same algorithm can be used for robotic arms as for video games. Two completely different task domains are now generalizable under one algorithm. That, to me, is a solid step toward more general AI.
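For context, the domain-agnostic core of PPO is just a clipped surrogate objective on action-probability ratios, which is why the same recipe transfers across domains. A minimal numpy sketch of that objective (illustrative only, not OpenAI's implementation; the function name and example numbers are made up, eps=0.2 is the value from the PPO paper):

    import numpy as np

    def ppo_clip_objective(ratio, advantage, eps=0.2):
        # ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
        # advantage: estimated advantage for each sampled action
        # eps:       clip range (0.2 in the PPO paper)
        unclipped = ratio * advantage
        clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
        return np.minimum(unclipped, clipped).mean()  # maximize this

    # An action that became 50% more likely, with advantage 2.0, is
    # credited with at most (1 + eps) * 2.0 = 2.4:
    print(ppo_clip_objective(np.array([1.5]), np.array([2.0])))  # 2.4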


Asking how close our computers and algorithms are to AGI is like asking how close our machines and power systems are to "human physicality".


It’s a gradient but according to the marketers it’s basically going to overtake humanity any week now.


I agree. I think a big part of this problem is that smaller companies usually cannot afford AI research. I would even go so far as to say there are more AI companies than capable AI researchers, and this leaves a large number of faux-AI companies poisoning the AI brand.


"AI any week now"

Which marketers proclaim that? Are they saying that, or are they saying there is _utility_ in AI, now? Because methinks there is real utility now, but it's going to take years until it overtakes us. Years!


I'm not sure anyone said "any week now", but Musk probably came closest: https://www.entrepreneur.com/article/323278


For the problem you’re trying to tackle, this startup has already solved it and will show you insights previously impossible!


My guess is that it is to prevent ad fraud by website owners. It's a lot harder to detect fraudulent clicks/impressions if all data are routed through the website.

The cost of potentially being blocked by ad blockers is finite (a percentage of total revenue), but the cost of ad fraud is not bounded.


>My guess is that it is to prevent ad fraud by website owners. It's a lot harder to detect fraudulent clicks/impressions if all data are routed through the website.

Isn't the solution to that problem a flat-rate fee (similar to how advertisements in TV, newspapers, and magazines work)?

Instead of pay-per-click, it could simply be $X and your ad will be visible for Y days/weeks.


I don't see how that would work. If my site gets zero traffic, would I still get paid a flat rate to 'serve' ads? Pay-per-impression/click pays proportionally to individual site traffic and the extent of a campaign.

The current solution is effectively a flat rate as far as an ad campaign is concerned: impressions/$


People would either (a) pay to place ads on sites they knew had a decent amount of traffic just from reputation, or (b) would hire ad-buying companies which made it their business to know what different sites' ad space is worth.

Needless to say, this could be inconvenient for the adwords-make-me-five-bucks-a-month scale sites. It'd work out OK for the New York Times-es of the world though.


What would happen if browsers simply didn't allow cross domain referencing? Would the web break (and would it be worse than NoScript)?


I've thought about this before, since NoScript is too disruptive for me. One issue is that it's common for scripts to be served from assets.whateverwebsite.com. I also thought of allowing anything from the same second-level domain (so anything on .whateverwebsite.com), but that would allow anything on .co.uk. ¯\_(ツ)_/¯ In Chrome I trust, for now.


Sounds like a job for the public suffix list.


Ooh, cool, hadn't heard of that before! TIL.

But even with the added complexity of regularly pulling in the public suffix list, the problems keep going: e.g., facebook.com's scripts are all served from static.xx.fbcdn.net.
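For what it's worth, one way to avoid pulling the list in yourself is the tldextract library, which bundles the public suffix list and computes the registrable domain for you. A minimal sketch (pip install tldextract; the idea of keying an allow-list on registered_domain is my assumption, not something from the thread):

    import tldextract

    print(tldextract.extract("assets.whateverwebsite.com").registered_domain)
    # -> whateverwebsite.com
    print(tldextract.extract("foo.example.co.uk").registered_domain)
    # -> example.co.uk  (not co.uk, thanks to the suffix list)
    print(tldextract.extract("static.xx.fbcdn.net").registered_domain)
    # -> fbcdn.net  (still not facebook.com, as noted above)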


Not only is it a dupe, Forbes also has the most clickbaity title among them all. Forbes is like BuzzFeed now.


Blame the Forbes Tech Council.


It appears they are using an algorithmic process to perform the heartbeat trading, and such processes are patentable[0] as long as they are not illegal. Tax loopholes are not illegal, but they are very unethical; I think the policymakers share equal if not more blame for this.

[0] https://www.quora.com/Is-it-possible-to-patent-an-algorithm


It is possible to patent a “business method,” no algorithm required [1].

Also, TFA addresses the ethics question in this case, and I don’t think it’s so obviously unethical:

> A lot of middle-class people love investing in ETFs and not paying taxes until they sell their shares, and politicians and regulators seem pretty happy to let them do it. It is also quite reasonable: People who buy ETFs pay taxes on their gains when they sell the ETFs and actually realize the gains, which feels like the right time to pay taxes, whereas people who buy mutual funds have to pay capital gains taxes at random times that have nothing to do with their own investment decisions or cash flows. From that perspective, the heartbeat trades are not an evil tax dodge but just a sensible mechanical use of the rules to achieve the logical result that everyone wants.

[1] https://en.m.wikipedia.org/wiki/Business_method_patent


It's no more unethical, or a loophole, to suppress taxes when an ETF reallocates holdings than it is to suppress taxes when a corporation sells a widget and then buys another one. There is no realized gain to the shareholders.


The "policy-makers" are the people exploiting the loopholes. They are the same people.


This is such a cool concept; I really wish these guys would succeed. Though the last I heard, they had given up on developing their own rocket (as a second stage to space). Anyone who's in the loop: does Stratolaunch currently have any potential rockets lined up?


Article states that they're going to use Northrop Grumman's Pegasus XL.


They abandoned all development of their own rockets shortly after Allen's death.


The concept is indeed good, which is why people have been air-launching Pegasus rockets for decades. I’m actually not sure what the new thing is with this aircraft. Just that it can launch three rockets per flight?


They have been contracted to launch Pegasus XL rockets.


Occam's razor applies here. Your theory is that she 1. intended to assassinate Assange, 2. attempted to pass it off as a joke, and then 3. covered it up further by implying she doesn't remember the statement.

Let me offer you another theory: she's almost certain that she didn't say it, but she can't remember every word she's ever said.

Which theory do you think is more likely?

Now, I am not saying you are wrong, but you'd need external evidence to support your theory; until then, Occam's razor applies. If you agree that my theory is more probable but do not like it, then it's possible your current belief is skewed by your prior belief.


Using Occam's razor like that is essentially a logical fallacy.


Elaborate? We are discussing event probability, not logical inference, so how can there be a fallacy in the first place?


Yes, and you are assigning event probabilities arbitrarily. This is why it is hard to use Occam's razor correctly in a non-subjective way. I can just as easily say that there is a very high probability that she said it and doesn't want it on the record. There is nothing making this more or less probable than what you said; it is all arbitrary, subjective probability assignment, since we are both going on nearly no information.


My friend Gibbs invented this really efficient way to learn.


More specifically, Markov chain Monte Carlo (MCMC).
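To make the joke concrete: Gibbs sampling is the MCMC variant where you repeatedly draw each variable from its conditional distribution given the others. A minimal sketch for a bivariate normal with correlation rho (the target and all numbers here are illustrative, chosen only because its conditionals are easy):

    import numpy as np

    rng = np.random.default_rng(0)
    rho, x, y = 0.8, 0.0, 0.0
    samples = []
    for _ in range(10_000):
        x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))  # draw x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))  # draw y | x
        samples.append((x, y))

    # After burn-in, the chain's samples show the target correlation:
    print(np.corrcoef(np.array(samples[1000:]).T)[0, 1])  # ~0.8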


Did something like that: An organization with some boats, quite a lot of boats, some that might be involved in global nuclear war, maybe limited to sea, wanted to know how long some of the boats might survive. The ocean had Red and Blue boats and airplanes, and the Reds and Blues were looking for each other and trying to kill each other.

So, the state of the system was the remaining Red/Blue inventories.

Some work by Koopmans showed that the encounters formed a Poisson process. So, the time to the next encounter had an exponential distribution, depending on the current state.

At an encounter, depending on the types, the Red could die, the Blue could die, both could die, or neither. Then after the encounter, the state of the system changed. So, the state of the system was a continuous-time, discrete-state-space Markov process subordinated to a Poisson process. That is, in part, a Markov chain.

Yes, there is a closed form solution, but the combinatorial explosion of the discrete state space size meant that a direct attack via the closed form solution was not reasonable.

But it was easy enough to do Monte Carlo, that is, generate a few hundred sample paths and average those, get confidence intervals, etc. While in grad school working on operations research, I did that. While the state space was enormous, the Monte Carlo was really fast. On any computer of today, the code would run before you could get your finger off the mouse button or the Enter key, and running 1 million sample paths would be feasible. For the random numbers I looked in Knuth's appropriate volume of The Art ... and used

X(n+1) = (X(n) * 5^15 + 1) mod 2^47

programmed in assembler.
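For a sense of what that loop looks like today, here is a sketch in Python rather than assembler, using that same generator as the uniform source. The inventories, encounter rates, and outcome probabilities below are made-up placeholders, not the real model's parameters:

    import math

    M = 2 ** 47

    def lcg(x):
        # X(n+1) = (X(n) * 5^15 + 1) mod 2^47, as above
        return (x * 5 ** 15 + 1) % M

    def uniforms(seed=12345):
        x = seed
        while True:
            x = lcg(x)
            yield x / M  # map to [0, 1)

    def sample_path(u, red=20, blue=20, horizon=100.0, pair_rate=0.001):
        # One sample path: exponential time to the next encounter (rate
        # depends on current inventories), then a random encounter outcome.
        t = 0.0
        while red > 0 and blue > 0:
            rate = pair_rate * red * blue
            t += -math.log(1.0 - next(u)) / rate  # exponential gap
            if t > horizon:
                break
            outcome = next(u)
            if outcome < 0.3:
                red -= 1    # Red dies
            elif outcome < 0.6:
                blue -= 1   # Blue dies
            elif outcome < 0.7:
                red -= 1
                blue -= 1   # both die
            # else: neither dies
        return red, blue

    u = uniforms()
    paths = [sample_path(u) for _ in range(1000)]
    print([sum(c) / len(paths) for c in zip(*paths)])  # average survivors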

Work passed review by famous applied probabilist J. Keilson.

Apparently the work was sold to some intelligence agency. I could guess which one, but then I'd have to ...!


Sure, theoretical advances may be evolutionary, but they have resulted in exponential reductions in parameter counts and sample complexity. These advances outpace Moore's law by a large margin.

Hardware served as a catalyst, but it was not a necessity.

