Grassroots-level adoption was key for things like Dropbox, IMHO - it solved a real need for individual people, it worked well, and it was easy to use. Same for Docker - developers adopted it in droves, and then enterprises followed.
Inrupt is trying to bootstrap a two-sided marketplace of sorts: product builders won't care until enough potential customers demand support for the "data pods", and regular people won't care until "data pods" solve real everyday problems for them.
Hopefully Inrupt's team has enough business-savvy people on it to find ways to gain traction and get through the tough early stages of the product adoption cycle.
> "In many ways, 2017 marked the year that cryptocurrency stopped being about technologically innovative peer-to-peer cash and instead essentially became a new, unregulated penny stock market."
I don't think it stopped being about tech innovation - there is a ton of stuff happening around proof-of-stake, layer 2 networks, state channels, etc. It's just that stories about the speculative side of cryptocurrency assets dominated the narrative in 2017.
My toddler is allowed about half an hour of YouTube cartoons on the big TV per day. He's played with the iPad a few times in his life. Lots of books and Legos and physical toys in our house. Waiting for longitudinal studies to come out, not taking chances with the little brain.
Every other month I go on a media diet: no social media, no news, no unnecessary browsing of any kind - only books and a small number of podcasts are allowed (plus whatever Internet usage is necessary for work). I've been doing this for more than a year now, and it has helped me feel a lot less "fragmented". I highly recommend it.
What do you do when you need to know something? (recent example: my toilet stopped refilling, I had no idea what was wrong and found the information on the internet). Also, which podcasts do you prioritize?
> but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?
The principle of computational irreducibility [1] is what will stop us from "cloning" civilizations. That and chaos theory - any tiny deviation in the initial conditions of such a simulation or cloning process could produce unusable results.
"simulating them, but using vastly less power/resources" is a pipe dream.
'Better' as in: achieves more results per unit of time. That's a fundamental constraint for humans too. Many have the intellectual capacity to invent something like general relativity, but few could do it in the very limited time we have available, and even fewer actually do it instead of dedicating their thinking time elsewhere. More thinking and more output per unit of time should lead to significantly better results for both humans and AI, and results are generally the part that matters.
Hardware that has more memory, more processing speed, faster access to memory, and more parallelism is better than hardware without those characteristics.
The exact same software running on better hardware will run faster and can tackle larger problems.
We can't possibly build a human with twice the memory that thinks twice as fast. However, once we have an AI that is roughly equivalent to a human, an AI with twice as much memory that thinks twice as fast is just 2-5 years away. (How long depends on where the bottleneck is.)
Yes. Nowhere near as good as AlphaGo, but yes it would do better.
When Deep Blue beat Kasparov at chess, the program was not significantly better than what had been state of the art for the previous decade. They just threw enough hardware at it.
For chess programs there is an almost linear relationship between search depth and effective Elo rating, and search depth went up by a roughly constant amount with each generation of Moore's law.
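A rough back-of-the-envelope in Python of why that happens, assuming an effective branching factor of about 6 for alpha-beta chess search and a roughly linear Elo-per-ply slope (the specific numbers here are illustrative assumptions, not from the comment): since nodes searched grow exponentially with depth, doubling the hardware buys a fixed increment of depth, and depth maps roughly linearly onto Elo.

```python
# Back-of-the-envelope: hardware doublings -> extra search depth -> Elo estimate.
# All constants below are assumed for illustration only.

import math

BRANCHING_FACTOR = 6.0   # assumed effective branching factor with alpha-beta pruning
ELO_PER_PLY = 60.0       # assumed slope of the roughly linear depth-to-Elo relation
BASE_DEPTH = 8.0         # assumed starting search depth (plies)
BASE_ELO = 2200.0        # assumed rating at that starting depth

def depth_after_doublings(doublings):
    """Nodes ~ b^depth, so each hardware doubling adds log_b(2) plies of depth."""
    return BASE_DEPTH + doublings * math.log(2.0, BRANCHING_FACTOR)

for gen in range(0, 11, 2):  # each "generation" of Moore's law ~ one doubling
    depth = depth_after_doublings(gen)
    elo = BASE_ELO + (depth - BASE_DEPTH) * ELO_PER_PLY
    print(f"{gen:2d} doublings -> depth ~{depth:4.1f} plies, ~{elo:.0f} Elo")
```

The point of the sketch is only the shape of the relationship: a constant depth (and Elo) gain per hardware doubling, which is why "just throwing hardware at it" kept working for chess engines.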
But we, humans, aren't going at anything like the speed of light. What if we tweaked our DNA to produce human beings with a working memory capacity of 50 items instead of the normal 7-ish [1]? Such a researcher would be able to work faster, on more problems at once, and to consider more evidence and facts at a time. Their next bottleneck, of course, would be input/output capacity (reading, writing, typing, communicating), but even with those limitations, I bet they would be a lot more efficient than the average "normal" human. The question is - would you call such a person more "intelligent"?
Or we just get more humans, and then it's a coordination problem, right? There is a point to comparing individual vs. collective intelligence. It's a bit like communist systems: they work in theory because you get to plan the economy centrally, but in practice more chaotic (unplanned) systems do better (compare the growth of capitalist vs. communist countries).
> Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve.
Consider the internet to be the "new" environment: highly complex social networks, millions of applications to interact with, and so on. Our brains are far too limited to deal with it all. There's an opportunity for a much more powerful intelligence to arise that CAN effectively process that volume of data and therefore appear much more intelligent in that particular context.