Hacker News | xenostar's comments

I'm also very interested in an invite; this sounds like a very novel concept!


for sure, email me erik at getdangerous.app


If the headsets are good enough you could actually use them to remove information from your view as you walk instead of adding more to it.


How exactly would they disrupt it?


Someone posted this earlier from Schneier on Security but it didn't get much traction: https://news.ycombinator.com/item?id=35162918

"AI Could Write Our Laws"

I didn't read it fully (yet) but early on he (I believe) coins the term "microlegislation".


Make it available to the public. Have a service where people can come together to decide what to lobby for.


If only there was some way every member of the public could have a say in political decisions. Hopefully one of those clever startups figures it out.


Introducing Representr: the first Democracy-as-a-Service.


When I change a CSS property in the Rules tab under the Inspector, and I press ctrl/cmd+z, it doesn't set the property back to what it was before I changed it. And obviously ctrl/cmd+y doesn't go forwards.

It's amazing how such a small change makes it so much more difficult to use the Firefox dev tools compared to Chrome. Sometimes I just want to toggle between CSS values and see what is changing, or undo a few changes I made.


Not a total fix, but I always just put a new rule under the one I'm replacing, and use the checkboxes to enable (override) or disable the rule.


Proper undo-redo is on the roadmap. It works in some cases but needs to work throughout the whole Inspector!


And is that a bad thing? The work still gets released and can be used by anyone.


In a world with less perverse incentives, these people, whose education was largely funded by taxpayers, would be working on actually valuable projects rather than optimizing how to get people to click more ads for a private corporation.


I agree with the sentiment that ads are such a savage application, but...

* I'm confused by why you bring up tax-payer funded education... do you think these employees don't pay taxes?

* What is your definition of "valuable"?


Ads are a core part of how today's world economy works, like it or not.


What do you think would go awry, if we suddenly abandoned machine learning for advertising?


A lot of businesses would go bankrupt. And I'm not talking about adtech, but advertisers. There are tons of them, from small mom-and-pop shops to big companies that depend on ads to get customers. And no, they aren't scams.

The HN crowd doesn't accept that, but most people here have no idea how business works.


Has the cost of customer acquisition objectively dropped over the last twenty years? If so, where can I read about that? If not, why do you believe that?


Nobody says it's a bad thing (licensing traps aside, if any, which I didn't check). It should just be noted that when big corporations or celebrities are struck by bad publicity, they must react with some good move as a lever to counteract the negative press. I'm not saying they wouldn't have done this anyway, but it's a very common move among politicians, corporations, celebrities, etc. Their PR office is just doing its work.


As far as I'm aware, not since the Wrath of the Lich King expansion.


> Or their kowtowing to the Chinese censors.

And what exactly is the alternative option for them there? They can either follow the rules in China or completely lose all of their business. That is on China, not Apple.


And I'm really curious why people here aren't mentioning that Apple gave the Chinese government access to iCloud data for Chinese users. Kind of a double standard, I have to say...


People savaged Google yesterday for launching a censored search in China. Apple essentially gives iCloud to the Chinese government and the outrage levels don't seem to be the same.


It's because Google has 'don't be evil' as its unofficial motto. Googlers joined Google because they agreed with its ethics. People at Apple knew what they were getting into, and consumers knew and expected it to remain the same. Change causes chaos in humanity.


They could at least let us know that they don't like it, so I know it's against their principles and they might push back in other cases when asked. That they seemed so willing to do it should be the fear, not that they had to (and by the way, they didn't have to; losing all of their business is an option for principled companies).


They can either violate the privacy rights of Chinese citizens, or not sell to Chinese citizens at all.


Pretty sure he means to wipe out your career within the company.


Exactly what I meant. And this results in wiping out your unvested stock.


I'm looking forward to the next generation of VR headsets immensely. A lot of people have been quick to jump on the "VR is already dead" train, but having picked one up during Christmas this year, it's obvious how much potential is there.

There are a few things that need to be accomplished before widespread adoption:

- Removal of wires. It restricts movement too much and removes immersion. The new HTC headset is a step toward this.

- Higher resolution screens. VR AMOLEDs like this are a step in the right direction.

- Prices for GPUs need to go down, and/or a few more years are needed for average computers to be able to render high frame-rates without breaking the bank.

- Headsets need to be lighter and smaller.

- Removal of sensor placement in the room. This will be harder to do, but cameras/sensors built into the headsets themselves could potentially accomplish this.

The way I see it, we're in the iPhone 1 stage of VR right now. Imagine the iPhone X version: lighter, smaller, higher resolution, more colors, higher frame-rate, less hassle. These are all inevitabilities, and at that point it will become much easier to adopt the technology. We're also missing a true "killer app" that will get people to purchase a headset JUST for that. I think it will take some sort of truly massive MMO the likes of WoW to accomplish that.

The future is definitely exciting in this field. I hope hardware vendors don't give up and can see the light at the end of the tunnel.


Holy server hug, batman! (Chief Blur Buster here, I noticed the traffic spike).

BTW, GPU is a problem, but we're expecting Frame Rate Amplification Technologies to solve the problem. Basically improved versions of Oculus Spacewarp that can do large framerate multiplication factors with zero parallax artifacts (unlike today).

I covered this topic near the bottom of a different article about the journey to 1000 Hz displays at https://www.blurbusters.com/1000hz-journey

The gist is that within five to ten years, we'll have many tricks to increase framerates with the same number of transistors, without needing to reduce detail levels or make textures/edges blurry, without input lag, and without interpolation artifacts.


I'm intrigued how lag-less frame interpolation would work - the algorithm can't look into the future, or can it?


Current headsets, at least the Rift, already do "look into the future" to lower the motion-to-photon latency (the amount of time between you moving your head and the screen updating based on that).

When you're dealing with a head moving, and very brief slices of time, inertia plays a large role and allows for fairly accurate prediction. After rendering the frame they check head position again, update their prediction for head position at time of display, and move/warp the frame slightly to match. This does require rendering a slightly larger view.

I remember when Oculus cracked the 20 ms mark and got down into imperceptible lag, it was very exciting. They bragged at the time that their predictive models would let them get down to 0 ms eventually, but I'm not sure if they've hit that yet.
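A toy version of that inertia-based prediction (the function and numbers here are mine for illustration, not Oculus's actual model, which is far more sophisticated):

```python
import math

def predict_yaw(yaw_now, yaw_prev, dt_sample, dt_ahead):
    """Extrapolate head yaw (radians) a few milliseconds into the future,
    assuming roughly constant angular velocity over the short prediction
    window (the 'inertia' assumption)."""
    angular_velocity = (yaw_now - yaw_prev) / dt_sample
    return yaw_now + angular_velocity * dt_ahead

# Head turning at 90 deg/s, sampled 1 ms apart; predict 10 ms ahead.
yaw_prev = 0.0
yaw_now = math.radians(0.09)   # 0.09 degrees moved in the last millisecond
future = predict_yaw(yaw_now, yaw_prev, dt_sample=0.001, dt_ahead=0.010)
# future is ~0.99 deg: roughly where the head will be when the frame displays
```

Over a 10 ms window the constant-velocity guess is usually close enough that the final warp only has to correct a tiny error.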


You can make educated guesses about the future, which is how Oculus's Asynchronous Spacewarp works. Rendering a whole frame is slow, but warping a pre-rendered frame is fast. If the next frame is taking too long to render, you can warp the last frame to roughly match the perspective that corresponds with the current head tracking data. You get some artefacts, but they're not as noticeable as the judder caused by a missed frame. Prediction can also be used to estimate the head-tracking data at the time the frame is drawn to the display, rather than at the time the frame starts to render.
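A crude sketch of that warp step (real Asynchronous Spacewarp reprojects in 3D using depth and motion vectors; this just shifts the image for a small yaw change, and every name here is made up):

```python
import numpy as np

def warp_frame(frame, yaw_delta_deg, fov_deg):
    """Approximate a small head rotation by shifting the last rendered
    frame horizontally - cheap compared to rendering a whole new frame."""
    height, width = frame.shape[:2]
    pixels_per_degree = width / fov_deg
    shift = int(round(yaw_delta_deg * pixels_per_degree))
    # Turning right means scene content slides left in the view.
    return np.roll(frame, -shift, axis=1)

frame = np.arange(12).reshape(3, 4)   # stand-in for a rendered image
warped = warp_frame(frame, yaw_delta_deg=1.0, fov_deg=4.0)  # 1 px/deg here
```

Note that np.roll wraps pixels around the edge; real systems render a slightly oversized view so the warp has fresh pixels to reveal instead.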

Similar techniques are used in video compression - encoding the exact value of every pixel is expensive, but you can trade bandwidth for processing by encoding transformations of a previous frame. A modern compressed video consists mainly of these interpolated frames, with only a minority of frames containing a full image. This interpolation can use data from both past and future frames (B frames) but can also use just the data in previous frames (P frames). This works extremely well most of the time, but there are some edge cases:

https://www.youtube.com/watch?v=r6Rp-uo6HmI

https://en.wikipedia.org/wiki/Motion_compensation
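For flavor, here is a toy block-matching search of the kind a P-frame encoder runs (sizes and names invented for illustration; real codecs search hierarchically and at sub-pixel precision):

```python
import numpy as np

def best_motion_vector(prev, block, top, left, search=2):
    """Find the offset into the previous frame that best predicts `block`
    (minimum sum of absolute differences), as P-frame encoding does."""
    h, w = block.shape
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > prev.shape[0] or x + w > prev.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(prev[y:y + h, x:x + w] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

prev = np.zeros((8, 8))
prev[2:4, 2:4] = 1.0            # a bright feature at (2, 2) in the old frame
block = np.ones((2, 2))         # the same feature, now seen at (3, 3)
mv, sad = best_motion_vector(prev, block, top=3, left=3)
# mv == (-1, -1): encode "copy from one pixel up-left" instead of raw pixels
```

When the match is good (SAD near zero), the encoder ships only the motion vector plus a small residual, which is why interpolated frames are so cheap.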


Not all things that look like "interpolators" need traditional lookforward lag.

Mice and head trackers can already run at 1000 Hz. It's the GPU that cannot keep up.

Instead of black-box interpolators (e.g. Sony MotionFlow), a smart interpolator can be made to know the high-frequency controller inputs in realtime, and doesn't even need to use guesswork-based interpolation for everything.

Just shift everything around based on the high-refresh 1000Hz controller input. (In other words, "reprojection").

Also, knowing more data about the source (e.g. a near-zero-lag controller input stream) eliminates a lot of interpolation guesswork. It's much like how H.264 (video compression) relies heavily on interpolation-based mathematics inside the codec, but has full awareness of the source video material, letting it compress virtually artifact-free.

So basically, you are simply giving a smart interpolator full awareness of things like geometry & input at a higher rate than the GPU renders. To avoid guesswork on those kinds of items.

Things like future multilayer Z-buffers can help solve a lot of parallax-reveal problems of trying to create intermediate frames, and there are future tweaks they are working on to eliminate reprojection artifacts. Like artifacts or reprojection distortions around edges of objects in front of objects. So adding intermediate frames with full parallax effects can eventually become artifact free because of the GPU's knowledge-in-advance of what-behind-what. Basically, more advanced reprojection algorithms that can create near-flawless intermediate GPU frames (without lookforward) without a full polygonal rerender.

Prediction helps (as it does for Oculus), but remember, we have controllers that already go at ultra high frequencies, and it is expected headtrackers will eventually become ultra high frequency too -- and that extra data can reduce the need to do lookforward prediction.

It's all very complex, with many researchers working on multiple solutions, but it can reduce the average processing-power-required per extra frame, and it can theoretically allow high reprojection ratios without lookforward lag (e.g. theoretical future 10:1, such as multiplying 100fps to 1000fps, at least with 1000Hz input devices like 1000Hz gaming mice, and 1000Hz head trackers).

Several VR scientists have indeed advocated the need for 1000Hz eventually, someday in humankind, as there are confirmed tangible immersion benefits to getting that high and beyond.

That's why I wrote that article full of motion demos explaining the visual science concepts of why 1000Hz displays are needed. It will be useful for passing a theoretical future Holodeck Turing Test (not telling apart a VR headset versus transparent ski goggles in a reality-versus-VR blind test), in terms of Moriarty-style or Matrix-style "it's real" VR.

Many tricks layered upon each other, to achieve what's being achieved today, and this creativity will only continue. Lagless lookbehind-only interpolation (utilizing ultra-high-Hz controller input to reproject new 3D positions). Foveated rendering too, yes. Realtime beamtracing with realtime denoising (NVIDIA scientist paper), perhaps. Maybe even all piled on top of each other simultaneously.


I think lag is defined as the time between when you make a movement and the movement is displayed, right?

So if that delay is 200ms, we know it makes people sick, for example. A delay of 0ms would be "zero lag" IMO.


I agree that VR is here to stay, and most of the things people complain about will be solved within a couple of years. I get the impression that most people here are chronically underwhelmed by things. My mind is still blown by the PSVR, the least advanced of the big 3. But I'm older, and I've been waiting for this since 1990. To me the killer app is VR. Myself and everyone I know (including non-techies) got VR to experience VR. I realize a true killer app would have greater reach. I don't think those are knowable except in hindsight. I thought Google Earth was killer, but I guess not everyone agrees. I would love to see game companies invest in decent ports of existing AAA games, including WoW. Again I don't think it needs any big whiz-bang extra, besides being in VR.


Just making a branded VR game (e.g. Zelda, Super Mario, et al.) would move VR hardware like hot cakes.


It's a chicken-and-egg problem though. No company will make VR games unless the market is large enough. It's up to the little guys, who can take risks, to prove the market before the big guys come in and take over.


And long term, this technology will definitely shake up who the little guys and big guys are, as big companies fail to adapt and small companies make hits.


I don't think VR is at the iPhone 1 stage. It's more like the first non-smartphones, let's say the Nokia 6110.

It's quite possible that, with progress in brain-wave reading, the true VR equivalent of the iPhone 1 will be a head cap you put on, with your sight/smell/touch perceptions overwritten by the cap's sensors. That would truly be regular VR/phone versus true VR/smartphone/iPhone 1.


> It's quite possible that, with progress in brain-wave reading, the true VR equivalent of the iPhone 1 will be a head cap you put on, with your sight/smell/touch perceptions overwritten by the cap's sensors.

That sort of headset would be such a massive breakthrough in neuroscience that the VR aspect of it would be tiny in comparison.


> Removal of sensor placement in the room. This will be harder to do, but cameras/sensors built on the headsets themselves could potentially accomplish this.

Inside-out tracking is a reality in consumer devices now. All Windows MR devices that shipped late last year have 6DOF inside-out tracking via cameras on the front of the headset, with no external sensors. Moving forward there will be more devices from other vendors that use inside-out tracking. Qualcomm has shown prototypes, Google + HTC were working on a Tango device that got cancelled, HTC is working on an inside-out standalone for the Chinese market, Oculus has shown standalone inside-out tracked prototypes, etc.


How reliable are they? The Vive's tracking is pretty rock solid. I'd be very disappointed with anything less (e.g., even 97% solid is not good enough) given that any glitches are REALLY jarring and nauseating in VR.


>How reliable are they?

They are very, very good. I've owned every major HMD since the DK2 came out in 2014, and I would say the Samsung Odyssey is the best one to date. The inside-out tracking is fantastic and just as good as Lighthouse (in practical usage, not theoretically). When you consider that there is no setup involved, it makes it a no brainer that this is the way forward.


The problem I find with inside-out tracking is the range of motion. One of the most powerful concepts in VR is being able to do things with your hands when you are not looking at them.

Maybe they could do inside-out tracking on the controllers?


>The problem I find with inside-out tracking is the range of motion. One of the most powerful concepts in VR is being able to do things with your hands when you are not looking at them.

Agreed, there is a bit of an occlusion issue when your arms are behind/above the HMD. I feel like they could probably solve this, though, with inductive tracking like Sixense's [0] integrated into the controllers and used in conjunction with IMU/camera data.

[0] https://www.sixense.com/platform/hardware/


I don't think we're even in the iPhone 1 stage - more like the Motorola Razr (the old one, back in 2004/05). The real cool stuff is barely even being thought of now, let alone built and sold.

Also, in a perfect world this will all just be a transition step before we get full-on holodecks.


The other thing that needs to happen is that the skill, time and money required to create quality VR & AR content needs to be significantly reduced. This will come with time too.

We're doing our own small part to try and make that happen with our automated 3D scanning platform (http://realityzero.one)


From my experience, I think there are several completely different axes of quality.

If it’s a cartoonish game, the imaging doesn’t need to be photo-realistic. If it’s ‘toon style or lit and textured by photography of real environments, the polygon count doesn’t need to be high.

What it does need, beyond the hardware, is immersive physics, and a gameplay-justified reason for why you can’t run through the furniture/wall/cable that you can no longer see. The game “I Expect You to Die” does that perfectly because you’re sitting down the whole time.


I don't think the skill, time, and money required for VR is the main problem. It's not all that much harder to make a VR title vs. a non-VR title; the cost is roughly the same. The problem is there isn't a large enough VR market to justify normal AAA-size budgets.


That's a very good point. No content = no point.


I have a similar opinion. And I don't think it's even at the iPhone 1 stage yet. It's more like an early smartphone. The iPhone managed to reach very decent and wide usage as a smartphone within five years. I don't see this happening yet with VR, as it is extremely limited by hardware and software.

We're finally realizing how far from real-time our OS, software, and hardware are, after all these years of abstraction and slight delays added everywhere in the stack. We finally have a motive to unwind and improve those.

Maybe someday we'll have a Sword Art Online-style game to move VR forward. Link start.


When I think about VR, I think first about a lot of other stuff before I think of gaming.

If only for remodeling my flat (kitchen, bathroom) or building a house.

I would also like to train on a virtual lathe before using a real one.

I might buy the new HTC Vive, and I will see it as an early-adopter beta hype thing because there is still work to do, but in general it already feels really good.

That Valve Portal demo - wow, that frightened me a little bit :)


Apart from the issues you list, the biggest problem for me is game quality / support. The VR-native titles, aside from a few exceptions, feel and play like tablet games; the non-native titles are usually riddled with rendering artifacts, despicable camera work, or really unstable framerates (a showstopper for me in VR).


> we're in the iPhone 1 stage of VR

You could immediately do a number of useful (read:productivity enhancing) things with the iPhone 1 - play your music, make calls, browse the web, take pictures, send emails. It deprecated a lot of what needed to be done on traditional phones, desktops, and laptop computers. It would last a day without charging, you wouldn't have to hook it up to a GPU or strap it to your body to use it, and it wouldn't give you motion sickness. Cellular technologies developed quickly to support the bandwidth needed for even better user experiences. Shipments jumped from 1M in 2007 to 20M in 2009.

There are very few polished games or apps available for VR 2 years after the "new" generation of VR headsets was released in 2016 by Oculus and HTC, and total headset unit shipments for the entire market (excluding phone-mounting headsets such as Gear) are probably in the low single digit millions for 2017. It hasn't yet deprecated any traditional dedicated communications technology or functions.

I wish I could get excited about the future in this field, but I really just don't see what the killer app will be for VR. Facebook thinks it will be virtual meetings for the enterprise and hanging out virtually with friends/family for the consumer market...I am very skeptical but want to be proven wrong as VR is one of the last platforms pushing hardware and software innovation forward at the moment.


> There are very few polished games or apps available for VR

Not true. Examples: Brass Tactics, Robo Recall, In Death, Lone Echo. There are already more quality games in the Oculus Store and Steam than most people will have the time to play.


I'd say VR is still in the Windows Mobile and Palm Pilot days.


Or in the MS-DOS 2.5D days of gaming (think DOOM). Despite the technical limitations, there's this awesome feeling of experiencing something revolutionary and new.

Previous VR endeavours, like those of the 90s and early 2000s, would then be the Pac-Man and Donkey Kong of this analogy :-)


> Or in the MS-DOS 2.5D days of gaming (Think DOOM).

We're most definitely there. Here's the Original DOOM, modded for VR:

http://rotatingpenguin.com/gz3doom/


I'm not convinced that removing sensor placement or wires is a huge problem. It's not any more involved or awkward than a big home theater setup is.

As someone who has had an oculus since the consumer version was released, my main problems are:

1) Eye strain. Even though you get a 3D effect, you're still looking at something a few inches from your eye, and that disconnect causes eye pain and headaches after playing for more than an hour or so.

2) Locomotion. I've yet to find any way of moving around in VR space that doesn't either make you nauseous or pull you right out of the realism of the experience.


Eye strain will be mitigated with eye tracking and varifocal displays (https://www.roadtovr.com/oculus-research-demonstrate-groundb...), as well as higher resolution. It might still be a problem because you're still staring at a screen on your face.

Locomotion is less of a problem than people think it is, and I think that perception stems from a lot of people in VR being hardcore gamers, with exploration of large spaces having been a core mechanic and selling point of 3D video games for the last 25 years. I don't think omni-directional treadmills or vestibular stimulation or anything inconvenient like that will catch on for locomotion; I think we'll use various forms of teleport or sliding (traditional 3D) locomotion for the foreseeable future, and people will mostly be OK with it. It's possible that with wireless and/or standalone headsets, redirected walking will become a popular option. You could imagine a headset where Chaperone/Guardian builds on the SLAM used for inside-out tracking and can give apps information about the layout of your house, allowing for large procedurally created virtual spaces. This still doesn't solve the problem for people who don't have medium-sized private spaces to play in, and it also makes it harder to do multiplayer games where players are playing in very different spaces.


One problem rarely mentioned regarding VR is input. Jim Sterling mentioned this in a recent Jimquisition[0] video, and I think it's a valid criticism - the complexity required of modern games can't often be replicated with VR as well as it can with a controller and/or keyboard, certainly not improved upon. Motion controls are often not precise enough - players can do more with buttons sitting down than they can waving their limbs around.

[0] https://youtu.be/7_h6GYI8ddA?t=6m56s


A similar thing was said about keyboard and mouse compared to dual analog.

And those things were true.

Games and people adapted.

Personally, I feel something worn that can pick up nerve impulses, as well as deliver modest feedback (tactile and/or electrical), will close much of this gap.

New input paradigms will advance too, just as they currently are for touch.

Touch today is getting good. The finger in the way problem is being chipped away.


Even if touch and motion controls improve, what evidence is there that they present a better paradigm for interaction than a keyboard or controller?


None. The question really is when does input get transparent enough to not be a bother. Whether it beats other forms and technologies is a different question. One that doesn't necessarily require an answer.


Unfortunately there seems to be no end in sight to this cryptocurrency nonsense, and it has made GPUs double in price.

VR was already too expensive. Now it’s completely out of the question for most people. Adding an absurdly high res and high refresh HMD to the mix right now doesn’t seem like a good idea.


>Imagine the iPhone X version

There's going to be a black bar across the top middle of my field of vision?


Actually there might be. Just like the iPhone, we need somewhere to put the face and eye trackers.


VR is not dead because the resolution isn't high enough. It's dead because it's been around for 30 years and no one has found a useful application for it.


Are you a troll or just brain-dead?


Google plans to stream vr content including games. So the GPU cost is not such a hurdle for the average consumer


It's physically impossible to stream VR games from a remote server without unacceptable lag.


I think it depends on the definition of "stream". You could maybe trade pre-rendering for file size; say, prerender a million versions of a 360-degree image along with some kind of 3D shadow map - something that takes a lot of the work off the unit.

We already have 3D video - I'd be surprised if the concept couldn't be expanded, leaving the unit to merge streams of background environment and 3D animated sprites.


> average computers to be able to render high frame-rates without breaking the bank

High-res stereo at 120Hz is never going to have the same graphics as the latest high-budget big game release. Current GPUs are already very powerful, but if people expect the same graphics when they use VR, they are going to be very frustrated.


I'm all for it if it means that AAA games will finally step off the photorealism treadmill and admit that pushing polygon counts is not a substitute for art direction. I'll take a Fortnite look over a PUBG look any day.


You can say this, but if you look at the most popular mods for Skyrim, they tend to make the game look more photorealistic, not less. The general trend for video game consumers is towards realism.


Never is a long time. Screens can only get so good before they're pointlessly higher resolution, and at that point GPUs will keep increasing in performance.

At some point your GPU runs out of things to do.


Sounds suspiciously like not ever needing more than 64KB of RAM.

The reality is that greater horsepower allows for greater abstraction, and easier to program APIs. Increasing developer productivity 2x reduces performance 10-100x, or something like that. So there’s never “enough” performance for the same reason there’s never “enough” powerful/usable APIs.


A current GTX 1080 Ti is overpowered for a 1080p display; it's too much GPU for too few pixels. If you're driving a 4K display or a head-mounted display with higher refresh rates it will break a sweat, but not on today's games with today's workloads.

Audio used to be really difficult to process in real-time but now it's trivial. There's only so much audio processing you can do before it's ridiculous and pointless.

The same goes for video. Once you have, say, a 40K display for each eye at 244Hz there's no point in going for more pixels or faster refresh rates. If a GPU can handle that, easily, then that GPU will probably be best put to use doing other things in addition to rendering graphics.

Memory is not tied to your senses, we can always find uses for more. Audio and video are, and at some point it's as good as real.
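Back-of-envelope arithmetic (with illustrative numbers of my own, not any real product's spec) shows how brutally per-eye resolution and refresh rate multiply:

```python
# Hypothetical headset: an "8K" panel per eye at 240 Hz.
width, height = 7680, 4320
eyes, hz = 2, 240
vr_pixels_per_second = width * height * eyes * hz

# A plain 1080p desktop at 60 Hz, for scale:
desktop_pixels_per_second = 1920 * 1080 * 60

ratio = vr_pixels_per_second / desktop_pixels_per_second
# ratio == 128.0: over two orders of magnitude more raw pixel throughput
```

Even that hypothetical setup demands ~16 billion pixels per second, which is why tricks like reprojection and foveated rendering matter as much as raw GPU growth.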


There are rumors about foveated rendering being demoed around the same time.


This is just the beginning though. The iPhone 1. Imagine the iPhone X version of this. It's open source now; it was made by one person. Now imagine it after 10,000 people have worked on that code. This will become so streamlined and efficient you won't be able to tell at all. Some of the good ones are already extremely convincing, you just have to give it appropriate training data, choose a good model to swap with, etc.


Hard to get excited about a technique that doesn't really work well, with the handwavey excuse that "it may actually work later".

