That just sounds like an attempt to save both of you time (elapsed time from interview to offer, not time spent) in case your phone interview went well.
After the phone interview, somebody needs to write it up, a hiring committee needs to look at it, and then they can decide whether to move forward or not. If they give you the challenge after that it's easily another week.
Frankly I am surprised they gave you a full response on Monday after a phone interview Friday, that's actually quite fast given all the steps and people involved.
I went and looked up the emails. I submitted my code challenge at 9:45pm on Sunday central time and received the response at 2:18am Monday. A couple of hours, and a strange time to get a response. I suspect the recruiter was not US based.
Given that the job was supposed to be west coast based, I have to imagine they’d made their decision on Friday.
I joined a nerdy martial arts club (Historic European Martial Arts), to play with swords, get fit, and meet other people who are nerdy but not in tech. Made many new friends that way.
It's a description of reality. We can agree the situation is unfair, and that the big companies show an arrogant approach, but if you want to get hired you have to conform to what is asked and expected.
I get what you're saying but this attitude is what essentially says "Developers are ok with this crap so keep doing it please". Or, in other words, you say it's unfair but then participate anyways and whatever you say no longer has value.
It really doesn't take much effort. Non-compliance with this kind of interview BS would make a lot of these unfair practices go away.
Fusion using lasers is an off-shoot of H-Bomb development, and advances by John Nuckolls from early laser-based fusion research in the 1960s(!) were fed back into H-Bomb research.
These fields are surprisingly related. For details, see Alex Wellerstein's book "Restricted Data", chapter 7.
I could see this as a virtual office, as some of the comments speculate.
If I had a set of AR glasses that projected what appeared to be an 8K monitor on top of my dining room table, and integrated with my MacBook Pro for input / output, and that had batteries to last a workday, I’d pay $2K-$3K. Even more if it worked well at brightness levels I’d have in my backyard.
Never mind gaming, mobile high-quality virtual office is good enough.
If the screens in the goggles themselves are 8k, the "monitor" is only going to be a small fraction of that unless you have your face up really close. (Although you could effectively have much higher than 8k when you do stick your face right up next to it.)
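The "small fraction" point can be sketched with some back-of-envelope numbers. All of the figures here are assumptions for illustration (a 7680-pixel-wide panel per eye, a 100-degree horizontal FOV, and a virtual 27" 16:9 monitor placed 75 cm away), not specs of any real device:

```python
import math

# Assumed numbers, not from the article: 8K-wide panel per eye,
# 100-degree horizontal FOV, virtual 27" 16:9 monitor at 75 cm.
panel_px = 7680        # horizontal pixels per eye
fov_deg = 100.0        # horizontal field of view in degrees
mon_width = 0.598      # metres, width of a 27" 16:9 screen
distance = 0.75        # metres, virtual viewing distance

# Pixels per degree across the whole panel
ppd = panel_px / fov_deg

# Angle subtended by the virtual monitor at that distance
mon_deg = 2 * math.degrees(math.atan(mon_width / 2 / distance))

# Horizontal panel pixels actually covering the virtual monitor
mon_px = ppd * mon_deg

print(f"{ppd:.0f} px/deg; monitor spans {mon_deg:.0f} deg -> ~{mon_px:.0f} px wide")
```

Under these assumptions the virtual "8K monitor" only gets a bit over 3000 horizontal pixels, i.e. less than half the panel, until you lean in close.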
Yeah, the idea of having a better virtual office really isn't as great as it seems once you try it out. Basically extra latency, not seeing any of the stuff on your desk, and the entire world skipping frames when you do something that maxes out your CPU or GPU. Not to mention the aforementioned image-in-an-image quality problems.
I think this can be solved. This may surprise many, but I use virtual reality for all of my heavy reading, without any issue whatsoever. It basically feels like I am reading on a movie screen.
While my use case is different from most, I personally have a print-related disability (severe convergence insufficiency), which requires the use of assistive technology. Anyways, the app I use is called Retinopsy Look VR, which is available in Viveport. It is intended for people with visual impairments and is super adjustable. I think the adjustability is key. To augment my reading experience, I use a screen reader called Kurzweil 3000, which reads the text aloud to me, with the sentence being read highlighted in yellow and the word currently being read aloud highlighted in green, simultaneously.
I use a Valve Index headset with prescription lenses that are adapted to the headset (I got them from VR Optician). I also use a laptop with an i9 10th gen processor, a 2080 Super video card, 32 GB RAM, and SSDs as hard drives. The only thing I do not like about my setup is the fact that it is not wireless and also the fact that base stations (“lighthouses”) are required.
I think specialized use cases are an absolute hit for VR devices. Where else can you get consumer-priced hardware that can be completely tuned to the individual user without needing custom hardware? That being said, I never did manage to find a customized version that was more efficient for the common use case; after all, monitors/books/phone screens are already the customized, optimized consumer hardware for most people.
Agreed. I am actually working on making an app that effectively does all of this natively, that is extremely customizable, for both Oculus and SteamVR systems.
You may want to check out SeeingVRToolkit, which was made by Microsoft to make VR accessible to the visually impaired. Retinopsy Look VR utilizes this.
With Retinopsy VR, reading dense and long material in VR is extremely easy and immersive. I find it far more enjoyable than reading physical books, even without using a screenreader.
I also have ADHD, and reading in VR is far more immersive (and especially with a screen reader utilizing multimodal highlighting). I can learn a lot better because the text is right in my face, being read aloud to me, with changing colors highlighted to the audio, which I cannot escape and drift off from in VR.
Anyways, I do all my coding in VR using Retinopsy VR. It allows me to really buckle down and focus. I sit on the couch reclined and I have my keyboard and mouse on a very stable lap desk, the Couchmaster Cycon 2. I also have headset strap stabilizers/modifiers from Studioform Creative so I can use my headset for several hours.
This was amazing to read thank you for taking the time to discuss all of these VR tools and how they are used. I have a friend in a similar situation who is getting an Oculus Quest 2 soon and this information will be very useful to them.
Best of luck with the app development, it sounds fantastic.
You have to read this article, which was written by somebody who is visually impaired. He explains why the experience of VR is so much better for people with visual impairments, in so many ways, compared to any other assistive technologies. He states why it is so much more helpful: https://www.alphr.com/virtual-reality/1008932/vr-vision-loss...
I think the software-side answers are there: just pump more high-quality information in faster and it's easily solved. The problem is it just punts the issue to hardware. Both the Lenovo product and HoloLens have abysmal FOVs and resolution, but at least you can see the real world great. On the other hand, normal headsets can display the virtual world in great detail at double the FOV, but not only is that taxing on the hardware, you need to bring in the real world instead. In either scenario you get pretty bad tradeoffs at this point, and I've tried both approaches for quite a while.
Why? I mostly have zero interest in being immersed in meetings. Most people aren't IRL most of the time. There's potentially value where people need to be on-site in other contexts such as repairing equipment.
That sounds utterly unappealing. If I'm in a coffeeshop I have zero reason to want to be cut off from the outside world. Otherwise I wouldn't be in a coffeeshop. Most of the reason I'd work in a coffeeshop is for an ambient social vibe. And for many things I do, a 13" laptop screen is fine. (And I probably wouldn't feel comfortable being utterly cut off from my environment in an urban space.)
You should look into how the Hololens is able to display floating windows. You're not cut off any more than the physically equivalent monitor would obscure.
A full face headset would probably give off a socially isolating presence, at least today, though.
The FOV of these devices is something like 100 degrees. There's not a lot of wasted space. Who knows what this theoretical Apple device is like though.
If the FoV is less, then it drives home that the pixels per degree are even higher!
This is a closed-face VR headset with presumably more traditional screens and lenses, unlike a HoloLens or Magic Leap. The device sounds closer to VR headsets like the Vive and Quest, which are mostly around 90-110 degrees.
This is described as a joint AR/VR device. I don’t expect we’d be totally cut off, probably more like an overlay of computery stuff while still being able to see the real world behind it. Sort of like a virtual screen.
Correct, it sounds like they are getting the AR using a 'passthrough' mode stitched together from the multiple cameras and some sensor fusion to give you an eye-level perspective live video feed they can manipulate with all manner of neat effects.
Wearing an HMD is always going to disconnect your attention from where you are and teleport it to someplace not shared by those around you, there is no way around that and it's not a bug either. AR just makes that separation a little fuzzier.
The real challenge is going to be finding some way to not look like a dork while using it in public. If I had to have faith in anyone to design something that could square that circle, it would be Apple (and maybe Sony). But it's a tall order.
Of course, inputs are weird too. Would it need to come with a pocketable keyboard or something?
People 50 years ago would have said that about people who aimlessly slide their thumb up and down a glass screen on a device the size of a deck of cards.
I have no idea how that works with AR overlaying monitors on a physical worldview. So you're writing something overlaid on the view around the coffeeshop? The idea of AR is more to give a HUD that provides information about what you're looking at.
AR is commonly used to insert solid objects into the real world. See Pokemon Go and most HoloLens games. And the HoloLens app where a virtual dog lives in your house.
HoloLens is not able to render opaque objects. They always have some level of translucency, and the darker the colour of the object, the more translucent it is.
When you share your screen in Zoom/Hangouts/whatever, the monitor boundary is nice to isolate stuff. This is of course because Zoom doesn’t let me “add” an app to an existing share after having selected a single app; nothing that can’t be fixed.
Lol. Just picturing the reactions of the hip baristas in SF who already disdain techbros for spending 5 hours at the coffee shop with their laptops/headphones and their one cup of coffee when said techbros upgrade to VR headsets.
People will get over it. Especially if Apple does it.
I remember clearly in 1999 going to dinner with co-workers, all of us pulling out our cell phones for something and one of the co-workers' wives calling us all geeks for even having a cell phone. I'm sure that same person is now more addicted to her smartphone than her husband.
The same will happen with AR. It will seem "ewww gross" until it doesn't.
Do you mean to pass the real world through, or for scanning the world?
If the display is see-through - perhaps with a removable view shield for switching AR/VR - you can have pass-through without a camera. Like magic leap, but without the magic "black pixels" tech.
Alternately, if it has lidar you could display a wireframe / point cloud version of the environment. I don't think people will consider a lidar scan to be "a picture of me," even if it actually captures more detail.
Alternately again, it could have a camera but just never expose that feed to the real world or allow recording. It's their own closed hardware after all.
>To do AR, it'll need a camera. People hated google glass because of the camera.
I remember the fuss, but it's so weird -- everyone carries a smartphone with camera with them all the time, even into public bathrooms. If someone wanted to record me I think I'd have a better chance of noticing someone looking at me with their face than I would someone doing it while pretending to be scrolling twitter.
To augment something you need to have whatever is there to begin with - so they either need a camera or you need transparent screens. An IR structured light depth camera is still ... well ... a camera.
I'm confused by people saying that they would use VR for work. My head hurts after one or maybe two hours of using it, and what if I worked in it the whole day?... ugh.
At least when I think of what could be done with a virtual work environment, I imagine it being used to go beyond the limitations of a workstation. We’ve been stuck with the same peripherals for 50+ years, the same GUIs for 25 and we’ve been trying to balance sedentary office work with health for as long as we’ve had swivel chairs. VR allows you to be on your feet moving around and literally putting your hands into the computer while engaging with a truly 3D environment, if we can’t revolutionize work with those gimmes then we deserve our RSI and back pain.
While that sounds like it would work in theory, we've been experimenting with 3D interfaces for work for decades and... it just hasn't caught on. I used to have a 3D desktop application back in the day, where your character could walk through a virtual room to open up a browser, apps, etc. It was a gimmick. Nothing beats hitting cmd+space and three letters to start an app (besides having it open already / cmd-tabbing to it)
Sure some things would stay the same but what if I rephrased it and asked: if you had an infinite budget, what would your ideal office look like?
There have already been inklings of the new interfaces we could invent like Tilt Brush. We shouldn’t be thinking along the lines of how we could do easy things differently, we need to think how can we do hard things intuitively?
Edit: another consideration is that if we want to be less sedentary maybe it is worthwhile actually getting up and walking over to a filing cabinet in order to open a file browser. If it were an option it would probably be more effective than setting reminders to get up and move.
The headaches are commonly a result of low resolution, low frame-rate, insufficient lighting, or bad hardware design (too tight or heavy or not adjustable for your head/face/glasses/etc). These are nontrivial problems to solve.
Very much disagree. I have a Vive Cosmos Elite which has 90Hz refresh rate (pretty standard apart from the Index) and 1440x1700 resolution, which is a bit higher than the average. IMO it has the most comfortable halo ring design for the headband.
That said, I can spend about an hour in it. I can be having the time of my life but after an hour, I need to take a break if I want to jump back in.
I'd laugh in your face if you told me you planned on working 40 hour weeks in this thing.
I know plenty of people who claim they can't stare at a monitor for more than 30 minutes or they get headaches yet we don't laugh in the face of people who do stare at monitors all day long.
It sucks if it doesn't work for you. Hopefully they'll find solutions so more people can be comfortable. For those of us who are already comfortable we'll be happy to use what's available now.
Close one eye and focus on something really close with the open eye. Notice that things that are far away are blurry. Now still with one eye open, focus on something far away, and notice that things nearby are blurry. VR doesn't replicate this.
This is a very active area of research, and I'm pretty confident it will be a standard feature within the next five years. There are also light field displays, which are super interesting, but I'm pretty sure they are cost prohibitive for consumer devices.
I can get that sensation once in a while when I'm driving for an hour or so in my car. My brain kinda makes the car an extended part of the body so it's a bit of a strange sensation to suddenly "rediscover" your arms and hands as your actual limbs. First time I experienced it I thought it was because I needed a break from driving but I wasn't even remotely tired or unfocused. I was maybe too much immersed like what happens in VR?
Same; even with 8K screens, I'm sure my mild eye problems (glasses) would cause enough issues for me to prefer a regular screen.
I mean I'd love to be proven wrong, put one of these things on and everything being full of stars, but at the moment I'm skeptical.
My experience with VR has been limited to an HTC Vive. While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?
> While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?
This is basically the textbook definition of convergence insufficiency, which is underdiagnosed at the population level. You should absolutely see an ophthalmologist about it, preferably one at an academic medical institution, as they will be less likely to miss it. They can also modify your eyeglass prescription to help with this issue.
I personally have severe convergence insufficiency (most middle-aged adults end up with mild convergence insufficiency and have prism/powered lenses) due to a rare immune-mediated neurological condition. This requires me to see a neuro-ophthalmologist. I also have astigmatism and nearsightedness. I only wear glasses to read/drive/VR. I have lenses from VR Optician for my VR headsets.
My favorite way to read is using VR. I recommend using Retinopsy Look VR (link: https://www.viveport.com/5445b338-0944-49a8-80ce-7c0f4ea7709...) (which is only available on Viveport, however I have it set up to boot from Steam), which allows you to adjust the distance from screen (focal viewing length), screen size, screen curvature, and screen tilt. This helps tremendously with convergence insufficiency.
Anyways, I personally use Retinopsy Look VR in combination with a screenreader called Kurzweil 3000 (https://www.kurzweiledu.com/k3000-firefly/overview.html) which reads the text aloud to me. When the text is being read aloud, the sentence being read is highlighted in yellow while the word currently being read aloud is highlighted in green. This is done simultaneously, and it helps me better absorb the material.
>My experience with VR has been limited to an HTC Vive. While a game like Elite Dangerous feels great, I had a lot of trouble reading text on it. Probably the same reason why I can't get along with binoculars, I just can't seem to focus on things with both eyes?
You might have a problem focusing with both your eyes, but the reason you couldn't read the text in Elite was simply that the Vive is a very low resolution headset compared to more modern models. Everyone had to lean into the display screens to read them on that display.
Wearing an HMD reminds me of a diving mask: it's a little awkward and can get fogged up, etc., but the overall experience of seeing underwater makes up for the inconvenience. My guess is most of the problems are a result of limited early hardware and limited early software, both of which are expected to change over time ;}
An AR headset like you describe would be killer: get rid of the laptop form factor, put everything into a phone sized box, and you can sit at a table with just a keyboard and mouse and work.
I'm wondering why Apple doesn't create a device that is much cheaper, has no augmented reality, and is just for consuming TV/movies with A/V quality matching that of the best cinemas. Pay off the device with a monthly subscription that includes Apple TV+. Something like $600-700 would get a lot of people interested.
The cheaper devices already exist. An Oculus Quest is basically that. However I don't think the refresh rate or image quality is really up to scratch.
And if this were just a question of image quality, I wouldn't make a big deal about it, but poor image quality means something resembling motion sickness will set in pretty quickly, even if you're just watching a video.
One time I tried to do some yoga while watching TV with an Oculus Go and I stopped immediately, huge mistake, never trying that again. But based on my knowledge of the tech, it seems plausible that such a thing could work at a $3000 price point.
If the external cameras are low enough latency that I could basically have a TV floating at the perfect viewing distance while walking around and doing chores or exercising I would buy one in a heartbeat. Though I'm not sure that's even possible or if that's just guaranteed motion sickness. I would say if it's possible, it's definitely not going to be possible for under $2000, at least not for 3-5 years.
The problem with image quality is that you either need to use the built-in desktop view, or an app that captures and pipes your desktop view to your VR headset. Both of these approaches don't do any upscaling of your monitor's resolution, so if your monitor itself isn't 4k you have to do one of two things:
- use a vr video viewer to view downloaded 4k content
- use Mirage desktop to create a virtual desktop[0], then start oculus/steamvr, and use that to stream the 4k desktop view. this isn't actually made for regular vr, though, so i don't consider it a great solution
As a side note, you can't view DRM-protected content with any current solution (maybe other than with webvr, but I haven't seen Netflix add a VR button yet), so the only way to actually watch content is via piracy - not something trivial to get into for people without the time to maintain a torrent client+VPN+Plex setup.
The trouble with VR video and Oculus Go is that 3-DOF has high degree of discomfort when you try to move and the world seems to move with you. There has been some interesting work on synthesizing new viewpoints using deep learning with rather impressive results. That is something that could bring a level of immersion comparable to 3D content to video sources.
In my imagination, teen girls would love to be able to use AR to talk to their friends with their friend appearing like a hologram in their room over their friend appearing on their phone ala facetime.
There's still 72 raw megapixels in the two displays. Even if you "cheat" and run rendered output for the edges in 4K or 1080p, something still has to drive each RGB emitter for each of those pixels, even if you're painting blocks of 4 or 16 of them all the same colour...
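For scale, here is the raw pixel count for two "8K" panels. The exact panel resolutions are assumptions (no specs have been published); the ballpark depends on which "8K" variant you mean:

```python
# Rough check of the dual-8K pixel count; both resolutions below
# are assumed variants, not confirmed panel specs.
uhd_8k = 7680 * 4320    # 8K UHD
full_8k = 8192 * 4320   # the slightly wider DCI/full-8K variant

print(f"dual 8K UHD : {2 * uhd_8k / 1e6:.1f} MP")
print(f"dual full 8K: {2 * full_8k / 1e6:.1f} MP")
```

Under these assumptions, dual 8K lands somewhere in the 66-71 raw megapixel range, consistent with the rough figure above.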
WiFi is a horrid idea: any retransmissions/interference would cause immediate, noticeable latency. The distance for full speed is low, and any object between the antennas, e.g. a human, would have noticeable effects.
I read it the same way. I guess the phrasing ("I would be willing") should have made it clear, but it's an understandable misunderstanding: it's used both ways.
I wonder if a little bit of dithering would be enough to compensate for that at that high a resolution. The actual area of our eyes that can resolve colour is quite small, so it might work.
With VR/AR you can’t “go anywhere” (not until the experience is Matrix-like anyway). You can just look at stuff. Like on a screen. Just a bit better than a traditional screen in a few ways, and worse in others. It’s not going to replace travelling to interesting places any time soon.
I never believed in VR as a mainstream thing but AR keeps me curious. There's a Hololens demo on youtube that shows a bunch of virtual monitors of arbitrary size and shape floating around the user. That's potentially awesome. Think the desktop "windows" concepts but with virtual monitors, 3D UI elements where useful. This could have potential. I'd say the technology to make it workable as something you wanna wear for 8 hours a day still isn't there yet, though, and probably won't be until some major breakthrough. It's not just resolution, it's battery, contrast, size... I don't see anything I'd personally want to use with current or next-year tech.
That's basically the premise of the Oculus Quest, and my experience with virtual offices there are extremely mixed. I don't even find any of the apps to be particularly bad, but the inconvenience of using VR as a user shell becomes apparent very quickly. Small gestures that used to be a centimeter of movement are abstracted into larger, easier to read gestures that just tire you out. With that being said, I still found a few "professional" uses for it: I was particularly impressed by how easy it was to load up a Blender project and step inside of it.
You may be interested in https://immersedvr.com/. Some videos suggest that their coders use their VR workspace to code the app itself. Not exactly what you described of course :)
Fwiw I use this a couple hours a day a few times a week for coding. It’s great for focus and completely comfortable/readable as long as the text size isn’t tiny.
iSpatial is also an interesting experience. ImmersedVR is basically a 4+ screen desktop extended workspace while iSpatial gives each window its own screen in a multi-level ops center environment. They both have some problems scaling certain content and are works in progress, but they are the standouts for getting things done in VR. Provided you can touch type well enough.
I'm pretty sure Apple's gaslighting field has not yet found a way to bend the laws of physics.
Everybody wants a sleek, everyday wearable device, without usability corner cases, but such are physically impossible to make.
Portability, visibility, usability. Choose one.
First, power requirements demand either external power or extreme power austerity that cuts into display and graphics.
Recon Instruments had the first really practical battery-powered HUD goggles, and they barely lasted more than an hour in real-life use.
The lowest possible power at which you can provide any kind of rich graphics is about 250 mW, the floor of the most energy-efficient SoCs. Possibly Apple can brute-force it with 5nm custom ASICs, but not by much; 100 mW is the best possible with CMOS cells.
Even if you have a magic chip with a hypothetical 0 W power draw, you will not improve the situation by much, as your display will still eat at least 1 W, or usually 2.5-4 W if you use any waveguides.
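Putting those estimates together gives a rough battery-life picture. The battery capacity here is an assumption (roughly what fits in a glasses-sized form factor); the SoC and display draws are the figures from the comment above:

```python
# Hypothetical power budget for standalone glasses. All numbers
# are estimates from the discussion, not measured values.
soc_w = 0.25       # ~250 mW, the most efficient SoC class
display_w = 2.5    # mid-range of the 1-4 W display estimate
battery_wh = 5.0   # assumed glasses-sized battery (~1350 mAh @ 3.7 V)

total_w = soc_w + display_w
hours = battery_wh / total_w
print(f"{total_w:.2f} W draw -> ~{hours:.1f} h per charge")
```

Under these assumptions you get a bit under two hours per charge, which lines up with the "barely more than an hour" real-world figure for the Recon goggles.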
Daylight visibility is essentially impossible without displays outputting at least 100000+ nits. There are no tricks around that, that's just physics.
There are very few technologies on the horizon which are physically capable of achieving anything better.
Monolithic devices are one, and the only one that can offer sub-1 W power use with any decent visual quality.
But they are very expensive to make. All existing makers manufacture at lab scale only. Getting manufacturing out of the lab and into fabs is impossible without the process technology getting out of the lab too. And this shortens the list of credible contenders to just one company in the world, and guess what, it's not Apple.
Third, even if you agree to a wired up design, where do you get 16k video from? Have you ever seen how thick DP 2.0 cables are?
How do you get latency down?
How do you make even simplest AR interactions not require the wearer to also wear a Quad SLI videogaming PC?
How do you transport 40 gigabits per second of video without the I/O PHY eating more power than the system itself? Only optics can do it under 1 W.
Apple can surely put its silicon to good use here, but even a purpose-made ASIC video system will be on the edge. It will limit them to the most basic graphics, and pretty much hardcoded, hand-coded use cases to extract maximum power efficiency and work around hardware limitations.
It will be a basic HUD, maybe with some GFX, and good video playback options. Essentially an Apple Watch, except you wear it on the head.
I bet they will intentionally limit its functionality so that users don't run into the performance limitations and corner cases too often.
I agree that the 8K displays sound ambitious, but they may have some tricks up their sleeves if they are doing foveated rendering using eye tracking. I think they'll primarily focus on VR, but they may do AR via camera passthrough. The graphics in VR mode will be much better than what you're claiming, though; current standalone headsets like the Quest 2 have good enough graphics to create a sense of immersion where you forget that what you're interacting with isn't actually there. That's more than enough for virtual meetings and office spaces. If they can nail the facial animations by tracking eye and mouth movement, so that non-verbal communication crosses the gap, they'll have a winner. The current offerings for meeting and social apps in VR are almost there, so Apple has a good shot at pushing it over the line.