Verification is easy, just like with humans: it's called a driver's license. And since the AI does not tire and can probably be sped up, you can quickly put it through hundreds of thousands of hours of driving in a good simulator, with adversarial and normal situations, then rate it at various tasks. Just like we should do with human drivers, but fail to.
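A minimal sketch of that kind of simulator-based rating, with entirely hypothetical scenario names, a stand-in drive() call, and a made-up pass rate; nothing here is a real simulator API:

    import random
    from collections import defaultdict

    SCENARIOS = ["highway_merge", "pedestrian_crossing", "adversarial_cut_in", "night_rain"]

    def drive(scenario: str) -> bool:
        """Stand-in for running the driving policy through one simulated scenario."""
        return random.random() > 0.05  # pretend 95% of runs succeed

    def rate(runs_per_scenario: int = 10_000) -> dict:
        # Tally pass rates per scenario category, adversarial and normal alike.
        scores = defaultdict(float)
        for scenario in SCENARIOS:
            passes = sum(drive(scenario) for _ in range(runs_per_scenario))
            scores[scenario] = passes / runs_per_scenario
        return dict(scores)

    print(rate())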
Explanation is harder. But we probably shouldn't care; even in court, people often cannot explain what they did while driving or why, or they just lie. The thing is, for liability purposes you have to ensure it is not a series defect and that a good human driver would not have been able to handle the situation either.
But we evolved empathy to help us understand how humans act in cases where we don't have complete or good information.
This is why "crazy" people make us so uneasy. They don't fit our mental models for how a human should act. Would you be comfortable driving with a road full of unpredictable "crazies"?
I wonder if we'll ever have the same level of trust in AI as in humans if it is still being used as a black box.
Bad AI will suffer from the Dunning-Kruger effect and overestimate its abilities, while good AI will suffer from Imposter Syndrome and underestimate its abilities.
We are not capable of recreating human-level intelligence yet, but modern algorithms have become orders of magnitude better at generalization and sample efficiency. And this trend is not showing any signs of slowing down.
Take PPO, for example (it powers the OpenAI Five Dota agent): the same algorithm can be used for robotic arms as for video games. Two completely different task domains are now generalizable under one algorithm. That, to me, is a solid step towards more general AI.
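To make the point concrete, here is a minimal sketch (assuming the Gymnasium and stable-baselines3 APIs, and only meant as an illustration) of one PPO implementation trained unchanged on a video-game-like task and a robotic-arm task:

    import gymnasium as gym
    from stable_baselines3 import PPO

    # Same algorithm, same policy class, same default hyperparameters for both domains.
    for env_id in ["CartPole-v1", "Reacher-v4"]:  # a classic control game and a two-link arm
        env = gym.make(env_id)
        model = PPO("MlpPolicy", env, verbose=0)
        model.learn(total_timesteps=10_000)  # short run, just to show the identical interface
        env.close()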
I agree. I think a big part of this problem is that smaller companies usually cannot afford AI research. I would even go so far as to say there are more AI companies than capable AI researchers, which leaves a large number of faux-AI companies poisoning the AI brand.
Which marketers proclaim that? Are they saying that, or are they saying there is _utility_ in AI now? Because methinks there is real utility now, but it's going to take years until it overtakes us. Years!
I don't see how that would work. If my site gets zero traffic, would I still get paid a flat rate to 'serve' ads? Pay-per-impression/click pays proportionally to individual site traffic and the extent of a campaign.
The current solution is effectively a flat rate as far as an ad campaign is concerned: a fixed number of impressions per dollar.
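A toy calculation of that point, with invented numbers: the rate is fixed, so the payout simply scales with each site's traffic.

    cpm = 2.00                      # dollars per 1,000 impressions (made-up rate)
    site_a_impressions = 1_000_000  # a high-traffic site
    site_b_impressions = 10_000     # a low-traffic site

    cost_a = cpm * site_a_impressions / 1_000
    cost_b = cpm * site_b_impressions / 1_000
    print(cost_a, cost_b)  # 2000.0 20.0 -> same rate, payout proportional to traffic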
People would either (a) pay to place ads on sites they knew had a decent amount of traffic just from reputation, or (b) would hire ad-buying companies which made it their business to know what different sites' ad space is worth.
Needless to say, this could be inconvenient for the adwords-make-me-five-bucks-a-month scale sites. It'd work out OK for the New York Times-es of the world though.
I've thought about this before, since NoScript is too disruptive for me. One issue is that it's common for scripts to be served from assets.whateverwebsite.com. I also thought of allowing anything from the same second-level domain (so anything on .whateverwebsite.com), but that would also allow anything on .co.uk. ¯\_(ツ)_/¯ In Chrome I trust, for now.
But even with the added complexity of regularly pulling in the public suffix list, the problems keep going: e.g., facebook.com's scripts are all served from static.xx.fbcdn.net.
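For illustration, a hedged sketch of the "same registrable domain" check built on the public suffix list, assuming the third-party tldextract package; as noted above, it still does nothing for the CDN case:

    import tldextract

    def same_site(script_host: str, page_host: str) -> bool:
        # registered_domain collapses "assets.example.co.uk" to "example.co.uk",
        # but treats "co.uk" itself as a public suffix rather than a site.
        a = tldextract.extract(script_host).registered_domain
        b = tldextract.extract(page_host).registered_domain
        return a != "" and a == b

    same_site("assets.whateverwebsite.com", "whateverwebsite.com")  # True
    same_site("evil.co.uk", "innocent.co.uk")                       # False
    same_site("static.xx.fbcdn.net", "www.facebook.com")            # False, the CDN problem remains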
It appears they are using an algorithmic process to perform the heartbeat trading, and such processes are patentable[0] as long as they are not illegal. Tax loopholes are not illegal but are very unethical; I think policy makers share equal if not more blame for this.
It is possible to patent a “business method,” no algorithm required [1].
Also, TFA addresses the ethics question in this case, and I don’t think it’s so obviously unethical:
> A lot of middle-class people love investing in ETFs and not paying taxes until they sell their shares, and politicians and regulators seem pretty happy to let them do it. It is also quite reasonable: People who buy ETFs pay taxes on their gains when they sell the ETFs and actually realize the gains, which feels like the right time to pay taxes, whereas people who buy mutual funds have to pay capital gains taxes at random times that have nothing to do with their own investment decisions or cash flows. From that perspective, the heartbeat trades are not an evil tax dodge but just a sensible mechanical use of the rules to achieve the logical result that everyone wants.
It's no more unethical, or a loophole, to suppress taxes when an ETF reallocates holdings than it is to suppress taxes when a corporation sells a widget and then buys another one. There is no realized gain to the shareholders.
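A toy illustration (invented numbers) of the timing difference the quoted passage describes: the total tax is the same, only the year it falls due differs.

    gain_inside_fund = 10_000      # gain realized when the fund reallocates holdings
    investor_sells_in_year = 5
    tax_rate = 0.20

    # Mutual fund: the fund's internal sale passes a taxable distribution to the
    # holder in year 1, regardless of the holder's own decisions.
    mutual_fund_tax_by_year = {1: tax_rate * gain_inside_fund}

    # ETF with in-kind ("heartbeat") redemptions: no distribution, so the holder
    # pays the same tax only in the year they actually sell.
    etf_tax_by_year = {investor_sells_in_year: tax_rate * gain_inside_fund}

    print(mutual_fund_tax_by_year, etf_tax_by_year)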
This is such a cool concept; I really wish these guys would succeed. Though the last I heard, they gave up on developing their own rocket (as a second stage to space). For anyone who's in the loop: does Stratolaunch currently have any potential rockets lined up?
The concept is indeed good, which is why people have been air-launching Pegasus rockets for decades. I’m actually not sure what the new thing is with this aircraft. Just that it can launch three rockets per flight?
Occam's razor applies here. Your theory is that she 1. intended to assassinate Assange, 2. attempted to cover it up as a joke, and then 3. further covered it up by implying she doesn't remember the statement.
Let me offer you another theory: she's almost certain that she didn't say it, but she can't remember every word she's ever said.
Which theory do you think is more likely?
Now, I am not saying you are wrong, but you'd need external evidence to support your theory; until then, Occam's razor applies. If you agree that my theory is more probable but do not like it, then it's possible your current belief is skewed by your prior beliefs.
Yes, and you are assigning event probabilities arbitrarily. This is why it is hard to use Occam's razor correctly in a non-subjective way. I can just as easily say there is a very high probability that she said it and doesn't want it on the record. There is nothing making that more or less probable than what you said; it is all arbitrary, subjective probability assignment, since we are both going on nearly no information.
Did something like that: An organization with some boats, quite a lot of boats, some that might be involved in global nuclear war, maybe limited to sea, wanted to know how long some of the boats might survive. The ocean had Red and Blue boats and airplanes, and the Reds and Blues were looking for each other and trying to kill each other.
So, the state of the system was the remaining Red/Blue inventories.
Some work by Koopmans showed that the encounter rates were a Poisson process. So, the time to the next encounter had exponential distribution, depending on the current state.
At an encounter, depending on the types involved, the Red could die, the Blue could die, both could die, or neither. Then after the encounter, the state of the system changed. So the state of the system was a continuous-time, discrete-state-space Markov process subordinated to a Poisson process. That is, in part, a Markov chain.
Yes, there is a closed form solution, but the combinatorial explosion of the discrete state space size meant that a direct attack via the closed form solution was not reasonable.
But it was easy enough to do Monte Carlo, that is, generate a few hundred sample paths and average those, get confidence intervals, etc. I did that while in grad school working on operations research. While the state space was enormous, the Monte Carlo was really fast. On any computer of today, the code would run before you could get your finger off the mouse button or the Enter key, and running off a million sample paths would be feasible. For the random numbers I looked in Knuth's appropriate volume of The Art ... and used
X(n + 1) = (X(n) * 5^15 + 1) mod 2^47
programmed in assembler.
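For a flavor of the setup described above, here is a small sketch in Python rather than assembler, with made-up encounter rates and kill probabilities; the state is just the remaining (Red, Blue) inventories, inter-encounter times are exponential, and the random numbers come from the generator quoted above.

    import math

    # Linear congruential generator from the comment: X(n+1) = (X(n) * 5^15 + 1) mod 2^47
    class LCG:
        MULT = 5 ** 15
        MOD = 2 ** 47

        def __init__(self, seed: int = 12345):
            self.x = seed

        def uniform(self) -> float:
            self.x = (self.x * self.MULT + 1) % self.MOD
            return self.x / self.MOD

    def sample_path(red: int, blue: int, rate_per_pair: float, rng: LCG,
                    p_both_die: float = 0.1, p_red_dies: float = 0.3,
                    p_blue_dies: float = 0.3, horizon: float = 100.0):
        """One sample path of the continuous-time, discrete-state process."""
        t = 0.0
        while red > 0 and blue > 0:
            rate = rate_per_pair * red * blue                 # total encounter rate for the current state
            t += -math.log(1.0 - rng.uniform()) / rate        # exponential time to the next encounter
            if t > horizon:
                break
            u = rng.uniform()                                 # resolve the encounter outcome
            if u < p_both_die:
                red, blue = red - 1, blue - 1
            elif u < p_both_die + p_red_dies:
                red -= 1
            elif u < p_both_die + p_red_dies + p_blue_dies:
                blue -= 1
            # else: neither dies
        return red, blue

    rng = LCG()
    paths = [sample_path(red=20, blue=15, rate_per_pair=0.01, rng=rng) for _ in range(1000)]
    avg_red = sum(r for r, _ in paths) / len(paths)
    avg_blue = sum(b for _, b in paths) / len(paths)
    print(f"average survivors at the horizon: Red={avg_red:.1f}, Blue={avg_blue:.1f}")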
The work passed review by the famous applied probabilist J. Keilson.
Apparently the work was sold to some intelligence agency. I could guess which one, but then I'd have to ...!
Sure, theoretical advances may be evolutionary, but they have resulted in exponential reductions in parameter counts and sample complexity. These advances outpace Moore's law by a large margin.
Hardware served as a catalyst, but it was not a necessity.