VR's problem, in my opinion, is that I can get immersed (fully, exactly as the author describes it) in a 2D game just fine - the lack of stereo vision or head-tracking or motion controls is no more an impediment to my immersion than the limited binocular overlap or peripheral vision or lag in a VR headset. And 2D is a heck of a lot cheaper and more convenient (and less nauseating).
That's not to say VR can never be successful, but I think it needs to offer something more compelling than just "immersion." Exercise or AR might be viable routes.
I feel that's like saying "I can get just as fully immersed in a book so who needs movies?"
They're different experiences. I don't need Tetris or Pac-Man in VR. Conversely, Half-Life 1/2 etc. are not remotely as intense as Half-Life: Alyx. In the first two you're watching a movie. In the latter you're in the world of Half-Life.
Like deciding whether to go out to the movies, the frequency comes down to how often I think "I wish I could do this in VR".
Examples:
- Before going on a trip, pre-visiting the destination in Google Earth with VR is very spatially informative and makes directional intuition memorable when you arrive at the real-world destination.
- Virtual role-play with environmental cues that make the make-believe feel ever more real.
But most people don't need this very often. Picking up a book, or throwing on some earbuds to listen to one, is far more frequent and compatible with doing other things at the same time. VR feels the same: a high-demand, focused experience that is only infrequently worth the effort.
The most important quality of any successful trend (e.g. Windows, the internet, smartphones, cloud computing) has been convenience, which is also why I think Meta Glasses have a real chance to take off.
Exactly. It sounds like a minor detail that you can't eat and drink while you're in VR, but for a casual experience it's friction, and you fall back to a screen.
I recently started enjoying virtual bike tours on my exercise bike, but vertigo when the camera turns is an issue. I absolutely wouldn't do it on a treadmill.
There are many games for VR that cannot be done without the tech. It isn't all about immersion but about facilitating unique experiences.
What held it back from the mainstream, IMO, is an inherent space issue (you need room) and a lack of multiplayer participation (you need even more room). Compared to sitting on a couch in a small studio with a few friends, it doesn't stand a chance.
The other problem is that most people's first experience is with some shitty mall VR room where the “game” consists of free Unity assets slapped together in a way that makes Marky Mark's Horizons look polished. Few people start off with something like the Half-Life one.
I like VR and immersion in theory. I like being able to look around, but I absolutely hate the movement controls.
I know some people complain of motion sickness, but that doesn't bother me. I just want controls like Mario or Zelda on a regular joystick. Why can't this be done?
It doesn't even have to be first person. I'd play a third person game like Mario or Zelda with a VR camera tracking them. I just want that kind of movement.
Pushing a button to teleport in short hops is annoying as hell. I hate everything about it.
I always thought a great compromise would be games that gave you an overhead “god's-eye” third-person perspective. People seem to be obsessed with making VR games first person, but that’s where the movement problems come in.
The game Moss did this well for a platformer. But it could also be really fun for real-time strategy/simulation games (StarCraft, SimCity) or sports games like Madden.
Yes! After many years of using only Linux or Windows machines, I was assigned an iMac at an internship and noticed the friction with fullscreening things. I decided not to fight it and spent the next year happily working in little windows and making frequent use of the Mission Control gesture.
However, after the internship I went right back to fullscreen/window tiling in Linux, so I can't say I really preferred it. Even now as a GNOME user with a big monitor and a Magic Trackpad on my desk - which gives me ~equal access to either approach - I fullscreen everything.
I don't know what it is, but fullscreen on Mac (even dock-showing "fullish screen") feels wrong in a way that fullscreen on Windows/Linux feels "right".
I think it’s partially because on Macs, the desktop has always been a more pivotal component of the OS thanks to ubiquitous drag and drop support and mounted volumes showing on the desktop, among other things. At least for me, it’s not unusual to grab images, text snippets, and other things from apps and drop them on my desktop, making it more of a workbench than it is on other platforms.
Another component is how the ability to overlap windows is emphasized, allowing the currently relevant portion of them to be visible without taking center stage or stealing any space from your main window(s).
Both are part of a larger difference in mentality and workflow style.
If you stick with your OS/package manager-distributed version, installation isn't painful anymore (provided that version approximately overlaps with your generation of GPU). It's okay for inference, and okay for training if you don't stray too far beyond plain torch. If you want to run code from a paper or other more esoteric stuff you're still going to have a bad time.
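For what it's worth, a quick sanity check like the minimal sketch below (assuming a CUDA-enabled torch build; the sizes are just illustrative) is usually enough to tell whether the distro-packaged stack actually lines up with your GPU generation:

```python
# Minimal sanity check for a distro-packaged torch + CUDA install.
import torch

print(torch.__version__, torch.version.cuda)  # torch version and the CUDA it was built against
print(torch.cuda.is_available())              # False often points to a driver/toolkit generation mismatch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x                                 # one matmul exercises the GPU end to end
    print(y.sum().item())
```

If that check fails, you're likely hitting exactly the "approximately overlaps with your generation of GPU" caveat above.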
> even Alan M. Turing allowed himself to be drawn into the discussion of the question whether computers can think. The question is just as relevant and just as meaningful as the question whether submarines can swim.
(I am of the opinion that the thinking question is in fact a bit more relevant than the swimming one, but I understand where these are coming from.)
I've come across that quote several times, and reach the same conclusion as you.
While I share Dijkstra's sentiment that "thinking machines" is largely a marketing term we've been chasing for decades, and this new cycle is no different, it's still worth discussing and... thinking about. The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming. It's frankly disappointing that such a prominent computer scientist and philosopher would be so dismissive and uninterested in this fundamental CS topic.
Also, it's worth contextualizing that quote. It's from a panel discussion in 1983, which was between the two major AI "winters", and during the Expert Systems hype cycle. Dijkstra was clearly frustrated by the false advertising, to which I can certainly relate today, and yet he couldn't have predicted that a few decades later we would have computers that mimic human thinking much more closely and are thus far more capable than Expert Systems ever were. There are still numerous problems to resolve, w.r.t. reliability, brittleness, explainability, etc., but the capability itself has vastly improved. So while we can still criticize modern "AI" companies for false advertising and anthropomorphizing their products just like in the 1980s hype cycle, the technology has clearly improved, which arguably wouldn't have happened if we didn't consider the question of whether machines can "think".
> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming
It seems to me like too many people are missing this point.
Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, etcetera.
The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids who are obviously far less intelligent like objects as well? Seriously, we need to be talking more about this.
One of the most important questions about AI agents in my opinion should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause them to suffer!). I think we should be treading much more carefully than many of us are.
The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.
So I 100% side with Dijkstra on that point.
What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress wouldn't be fueled by greed, self-interest, and manipulation, or at the very least balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.
> The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term.
While I agree with your second sentence here, the first one gives me pause. Why isn't it "worth discussing"? Do you refuse to engage in conversation with all mentally challenged people? Do you avoid all interactions with human children? There are many, many folks living their lives as fully as they can right now who are convinced these things are alive. There are ethical implications to that assumption regardless of whether the things are actually alive, especially when people respond to them as if they are.
We need to have better arguments and refine them for different audiences.
Are you aware of the concept of philosophical zombies? Some of the top minds on the planet are telling us they can't even determine if you or I are conscious and sentient, let alone if a machine is. On the other hand, some of those people's peers are arguing that weather patterns might be conscious (among even more extreme claims). From the standpoint of logic and reason being paramount, we cannot claim to know the answers to these questions. What we can do is discuss the ethical implications of various people coming to different conclusions about them.
Because it's obviously not true. The second sentence follows the first.
> There are many, many folks living their lives as fully as they can right now who are convinced these things are alive.
And those people are living in a delusion, whether it's self-imposed or the result of false advertising. The way you get them out of that is by rationally explaining the technology in terms they can understand, not by mystifying it and bringing up existential topics.
> Are you aware of the concept of philosophical zombies?
I wasn't, no.
> Some of the top minds on the planet are telling us they can't even determine if you or I are conscious and sentient, let alone if a machine is.
Look, we can philosophize about the nature of existence until we're blue in the face. People have been pondering similar questions since the dawn of humanity. FWIW I don't believe in "top minds" as having the authority to tell us anything. What we know for certain is how the technology works, since we built it. And we damn well know that this technology has absolutely zero understanding of anything. Go ahead, ask it how it works. It will tell you that it doesn't understand a single word it's generating, but it sure can string together patterns that make it look like it does. And you think there's some deeper meaning here we should discuss seriously? Please.
Like I said, I think these are interesting thought experiments, and something we should keep thinking about. But it should be clear to anyone, especially technically minded people, that we're nowhere near being able to create artificial intelligence. What we have now are a bunch of grifters and snake oil salesmen selling us a neat statistical trick and telling us it's "AI". This should be criminally prosecuted, if you ask me.
That one was interesting - I found it a lot of work to plan in advance but trivial to complete because at every point there was only one sensible course of action. After a couple of rounds I didn't bother planning and just lined things up as I went.
$40/month at work and $10/month at home, and it's more than I can use.
I cannot imagine productively spending $250k/year on LLM coding - you'd need some kind of massive tree of agents reviewing each other's work and I think even then you would struggle to keep them on-task and sanity-checked. However, I don't make $500k a year so what do I know...