Forget video games. This is a huge step forward for AGI and robotics. There's a lot of evidence from neurobiology that we must be running something like this in our brains: things like optical illusions, the editing out of our visual blind spot, the relatively low bandwidth measured in neural signals from our senses to our brain, hallucinations, our ability to visualize 3D shapes, to dream. This is the start of adding all those abilities to our machines. Low-bandwidth telepresence rigs. Subatomic VR environments synthesized from particle accelerator data. Glasses that make the world 20% more pleasant to look at. Schizophrenic automobiles. One day a power surge is going to fry your doorbell camera and it'll start tripping balls.
There is a fleshed-out realisation of this in Cyberpunk 2077. The cab AI is called Delamain:
> Delamain was a non-sentient AI created by the company Alte Weltordnung. His core was purchased by Delamain Corporation of Night City to drive its fleet of taxicabs in response to a dramatic increase in accidents caused by human drivers and the financial losses from the resulting lawsuits. The AI quickly returned Delamain Corp to profitability and assumed other responsibilities, such as replacing the company's human mechanics with automated repair drones and transforming the business into the city's most prestigious and trusted transporting service. However, Delamain Corp executives underestimated their newest employee's potential for growth and independence despite Alte Weltordnung's warnings, and Delamain eventually bought out his owners and began operating all aspects of the company by himself. Although Delamain occupied a legal gray area in Night City due to being an AI, his services were so reliable and sought after that Night City's authorities were willing to turn a blind eye to his status.
I'll hack mine so that when it decides if I should die in a crash or run someone over, it is biased to be 100% ageist so it avoids anyone younger than me.
This looks like my dream worlds already, just more colorful and a bit more detailed. And the way it hallucinates and becomes inconsistent when going back and forth between the same place is the same as in dreams.
Consider the use where you seed the first frame from a real world picture, with a prompt that gives it a goal. Not only can you see what might happen, with different approaches, and then pick one, but you can re-seed with real world baselines periodically as you're actually executing that action to correct for anything that changes. This is a great step for real world agency.
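Roughly, that loop might look like the sketch below. This is a minimal, runnable mock, not any real API: `MockWorldModel`, `score`, `execute`, and the action sampling are placeholders I'm inventing purely to show the plan / act / re-seed structure.

```python
# A minimal sketch of the plan / act / re-seed loop described above.
# The world model, scorer, and actuator are mocks invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

class MockWorldModel:
    """Stand-in for a learned video/world model that imagines future frames."""
    def rollout(self, frame, prompt, actions):
        # Pretend each action nudges the scene a little.
        return [frame + 0.01 * a.sum() for a in actions]

def capture_camera_frame():
    """Stand-in for grabbing a real-world image (the seed)."""
    return np.zeros((64, 64, 3))

def score(frames, goal_prompt):
    """Stand-in for a learned 'how close is this imagined future to the goal' score."""
    return float(rng.random())

def sample_action_sequence(horizon=8):
    return [rng.normal(size=4) for _ in range(horizon)]

def execute(action):
    pass  # send the action to the real robot / actuator

def plan_and_act(model, goal_prompt, n_candidates=8, reseed_every=4, steps=16):
    frame = capture_camera_frame()                      # seed from the real world
    for t in range(steps):
        # 1. Imagine several possible futures under different action sequences.
        candidates = [sample_action_sequence() for _ in range(n_candidates)]
        rollouts = [model.rollout(frame, goal_prompt, a) for a in candidates]
        # 2. Pick the sequence whose imagined outcome best matches the goal.
        best = candidates[int(np.argmax([score(r, goal_prompt) for r in rollouts]))]
        # 3. Execute only the first action for real, then keep planning.
        execute(best[0])
        if (t + 1) % reseed_every == 0:
            # 4. Periodically re-seed from a real frame to correct for drift.
            frame = capture_camera_frame()
        else:
            frame = model.rollout(frame, goal_prompt, best[:1])[-1]

plan_and_act(MockWorldModel(), "open the door")
```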
As a person without aphantasia, this is how I do anything mechanical. I picture what will happen, try a few things visually in my head, decide which to do, and then do it for real. This "lucid dream" that I call my imagination is all based on long term memory that made my world view. I find it incredibly valuable. I very much rely on it for my day job, and try to exercise it as much as possible, before, say, going to a whiteboard.
The biggest question I see with this study is that it doesn't seem like the subjects had access to tools outside of the AI. Were the subjects without AI able to do Google searches? If not, then what is the performance gain of the AI users over people who can just google stuff?
We have reasonably accurate sea surface temperature data from 1945 to present, about 78 years. We have less accurate measurements back to the 1880s. Finally, we have proxy records going back thousands of years, based on isotope fractionation ratios and other paleoclimate data.
Figure 2 on that page, a map showing the change in sea surface temperature, was especially interesting.
Do you have more information on what changed around 1945? (I mean, I could probably guess. I was wondering if you're looking at different sources than I am.)
What's really astonishing about this is that to increase sea surface temperature by 1.5 degrees C, you need to heat the whole ocean mixed layer, which in summer is around 50 meters deep. E = c_p * rho * h * dT ~ (4e3 J / kg K) * (1e3 kg / m^3) * (50 m) * (1.5 K) = 3e8 J / m^2. So over the whole North Atlantic, this is about (4e14 m^2) * (3e8 J / m^2) = 1.2e23 Joules of energy that have been added to the North Atlantic. That's about the same amount of energy that the whole Earth absorbs from the sun in a month.
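As a quick sanity check, the same back-of-envelope arithmetic in Python, using the rough values above (the area figure is the one used in the comment, not a measured value):

```python
# Back-of-envelope check of the numbers above, using the same rough values.
c_p  = 4e3    # J / (kg K), specific heat of seawater
rho  = 1e3    # kg / m^3, density of seawater
h    = 50     # m, summer mixed-layer depth
dT   = 1.5    # K, warming
area = 4e14   # m^2, North Atlantic area as used in the comment

energy_per_m2 = c_p * rho * h * dT      # ~3e8 J / m^2
total_energy  = energy_per_m2 * area    # ~1.2e23 J
print(f"{energy_per_m2:.1e} J/m^2, {total_energy:.1e} J total")
```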
This is a totally uncontrolled test. How do they know leads wouldn't have dropped anyways? Without an A/B test you can't draw any firm conclusions from this.
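If they had run a proper A/B test, the comparison would be something like the sketch below: a simple independence test on lead counts between control and treatment. The counts here are invented purely for illustration.

```python
# Hypothetical example of the comparison an A/B test would allow.
# The lead counts below are made up for illustration only.
from scipy.stats import chi2_contingency

# rows: control vs. treatment; columns: [converted to lead, did not convert]
table = [[120, 880],    # control:   120 leads out of 1000 visitors
         [ 90, 910]]    # treatment:  90 leads out of 1000 visitors

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p = {p_value:.3f}")   # small p => the drop is unlikely to be chance
```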
This is so true. It's even worse than spelling bees though. Dyslexia is less of a problem for non-English speakers because our writing system is so messed up. We have letters that not only represent multiple sounds, but redundantly duplicate the sound of other letter combos (c->k,s; x->z,ks; q->kw). We have multiple letter combos for single phonemes (ch, sh, th). "th" actually represents two phonemes--compare "there" to "through" and you'll see they are two different sounds (try to say "through" like you would say "there"). English is a mess.
I'm usually the first one to jump on the English-spelling-is-atrocious bandwagon, but you're not presenting very convincing arguments here:
* c being pronounced either /k/ or /s/ is a perfectly common process called assibilation [1]. Compare Italian "centro" (/ch/ as in "change") vs. "casa" (/k/ as in "can't"). The general rule is /s/ before e or i, /k/ otherwise. English is relatively regular here.
* x as /z/ is probably some sort of assimilation process when the /s/ in /ks/ would be +voiced. Another reason to pronounce it that way is that anglophones dislike complex onsets (try pronouncing "Dvorak"; it's very difficult for English natives not to insert a schwa /ə/ between /d/ and /v/).
* q as /kw/ is from Latin and most languages using the Latin alphabet retain it in some form.
Letter combos for single phonemes are common as hell. Compare German "sch" or "st", Italian "sci", etc.
"th" representing both voiced and unvoiced interdental fricatives is what's called an "allophone" [2]. Again, this is super duper common, and, off the top of my head, I can't come up with a language without allophones, and I'd require serious proof for the claim that there exists one.
English spelling is a mess, though. But mostly because it's terribly inconsistent. There are several poems about it, which illustrate the point nicely, for example, the Chaos Poem [2].
The letter-overloading trait is not exclusive to English, as it's featured in numerous other languages, but the problem with English is that there are no clear rules governing the pronunciation of these letters. It's totally arbitrary.
Take, for example, "charity" and "charisma": the "ch" combo here would fool any beginner into pronouncing the latter with the regular "ch" sound rather than as a /k/, and there's no solution to this problem but to memorize words as you encounter them, making the whole experience of learning English as a foreign language tedious and horrible.
A reform of English orthography is long overdue, especially if it's going to stay a lingua franca for quite some time to come.
To elaborate on Fede_V's comment, Deep Neural Networks seem to work well on classification problems because they can automatically build abstract features from a dataset and combine them in different ways. Like, in image data they will automatically identify common shapes and patterns of light and dark, and combine these simple patterns together to identify faces or whatever (like by saying a face is a circle with two dots for eyes and a line for a mouth). Random Forests, on the other hand, are really good at classifying things if you give them meaningful dataset features to learn on, but aren't as good at building these features in the first place. By combining them together, they get a system with the classification abilities of random forests and the automated feature discovery of deep neural nets, and it seems to work a bit better than either.
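To make that concrete, here's a toy sketch of the combination using scikit-learn's digits dataset: a small neural net learns hidden-layer features, and a random forest is then trained on those features instead of raw pixels. The dataset, layer size, and hyperparameters are arbitrary choices for illustration, not anything from the paper being discussed.

```python
# Toy sketch: neural net learns features, random forest classifies them.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Train a small neural net; its hidden layer learns abstract features.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

def hidden_features(X):
    # ReLU activations of the hidden layer = the learned features.
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# 2. Train a random forest on those learned features instead of raw pixels.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(hidden_features(X_tr), y_tr)

print("MLP alone:         ", mlp.score(X_te, y_te))
print("RF on MLP features:", rf.score(hidden_features(X_te), y_te))
```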
They currently really excel at unstructured data: images, text, and speech. If you have structured data, though (columns, regular features), it really depends. On Kaggle, RFs and GBTs seem to mostly dominate structured problems, while neural nets dominate unstructured datasets (since they can do feature extraction), according to a talk I attended.
For me it all changed as soon as the PhD ended. Suddenly the cost to employ you skyrockets, and it becomes apparent how few permanent positions there are relative to the applicant pool. My PhD was pretty great, but my experience afterwards was pretty depressing.
I received my Ph.D. a little over a decade ago. I didn't succeed in getting a research job outside of academia, and the bit over a decade that I spent in academia convinced me I didn't want to be a part of it. As a result, my fancy piece of paper sits, completely unused, in a closet.
Whenever someone points out that I wasted numerous years of my life with nothing to show for it but a low salary history, I have to admit that I absolutely do not regret getting a Ph.D. I loved the work, and I loved the process of doing the work. I'd do it again (if I didn't have to pay for it again---I worked full time during grad school, which I recommend no one do).
So, yeah, don't go for a Ph.D. in computer science unless you absolutely cannot not go for a Ph.D. in computer science. And then, don't expect it to pay off in any way.