One of the cool things about the 84" version is that it does 4K at 120 Hz, which allows for much lower touch and stylus latency. It's likely using custom internal hardware, like the 5K iMac, to achieve that refresh rate, so you're probably stuck with the built-in GPU.
It is odd that the 55" is only 1080p when you have 55" 4K Vizio TVs going for $999.
You can visibly see and feel the delay on the Wacom Cintiqs, and they're 60 Hz. You adjust, but it's not ideal: there's a mild tendency to under- or over-draw. That's probably less of a problem for a whiteboard, but for art it's more problematic.
That's probably due to input lag rather than refresh rate, though. For non-TN screens, input lag (generally >30 ms) tends to dwarf refresh-rate differences (16 ms vs. 8 ms).
Sorry to hear things didn't work out. I am curious to know more about your thoughts on VCs in Toronto versus those south of the border. Canadian VCs have this rep of being quite conservative, only wanting to invest at later stages when there is very little risk. Does your experience agree with this characterization?
The language support is quite impressive, covering Latin, CJK, and even Indic scripts. Notable omissions include RTL scripts like Arabic and Hebrew.
"Please note, we are using a standard Xbox One Kinect without any hardware modifications. The sunglasses are optional and were originally used for anonymity, and partly tongue-in-cheek."
It's an interesting experiment, but I don't think Google is aiming to please the crowd that needs "native apps" anyway, and they wouldn't be terribly interested in keeping the native VLC dependencies alive and/or compatible. It seems pretty obvious to me that Google is trying to push the mainstream consumer market into a "cloud computing/services lifestyle"; it only makes sense, because their whole business model revolves around web users. So VLC is out, Netflix/Play Store is in.
Not for me; don't get me wrong, I'm not that kind of consumer, and it may be safe to say that most people in the HN crowd aren't either. I'm personally following the Novena [1] laptop and waiting for open hardware to launch.
Even if Chrome OS is removed and GNU/Linux is loaded instead, Chromebooks' keyboards look abysmally ugly and useless to me; otherwise I would at least be excited about the inexpensive hardware.
So yeah, as a consumer I can distill my opinion about this product to "meh...". As a web developer, though, it's a different story: the possibility of Google hardware converting handheld mobile users into desktop-ish mobile users and reaching a broader international audience makes me almost enthusiastic about Chromebooks.
After all, until some potentially better hardware project (Firefox OS [2] or Indie Phone [3], who knows) expands to the netbook-ish form factor ("Lapfox"? "Indiebook"?), the not-so-open, inexpensive Chromebook, affordable by hundreds of millions (potentially billions, we'll see), introduces and welcomes new demographics to the web. In the short run, that's better for the world than almost-fully-open expensive hardware that only a few million can afford (for now), don't you think?
Hmm... If you can port VLC to Chrome OS with ARC, I wonder what happens if you try to shove Firefox for Android into it. Are there fundamental roadblocks that would prevent it from working, or would you just end up with a slow and buggy waltzing bear?
I suspect their sandbox doesn't allow code generation, since they statically verify you aren't using instructions they can't protect against, and generated code would break that. That means that while you could probably get Firefox running, it'd be with a JavaScript interpreter, not a JIT.
While I don't work on any of the related pieces, it should be noted that NaCl has dynamic "check this code" support precisely so you can JIT-compile code and execute it safely.
Looks like this is the first 14 nm Cherry Trail device, with a fanless quad-core Intel Atom x7-Z8700 processor. Hopefully it will be a great CPU for HTPC use.
I'm pretty sure the last generation was good enough for HTPC use, so this one should be better. Probably not good enough for real-world 4k, but up to 1080p, yes.
It's still snake oil. What they're designing is called a thinned phased array, or sparse phased array; see figure 19 in the white paper. Such a design suffers from a major flaw known to any radar engineer: the thinned-array curse. It even has a Wikipedia article:
In layman's terms, >99% of the transmitted power is lost to sidelobes and doesn't reach the intended users (a fraction 1 - a/A of the power is lost, if you want to get technical). Such a design can carry a small number of users, operating very close to the environmental noise floor. The maximum allowed number of users is approximately proportional to the number of transmitters, and each user added above this limit decreases the SNR for all users in the system, killing communication for everyone. Not a sound design for a cellular system.
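To put a rough number on that 1 - a/A figure, here's a back-of-the-envelope calculation; the fill factor is a hypothetical value I picked for illustration, not one from the white paper:

```python
# Thinned-array curse, back of the envelope: the fraction of transmitted
# power that makes it into the main beam scales with the fill factor a/A,
# where a is the combined aperture area of the actual elements and A is
# the area of the fully populated array they are spread across.
a_over_A = 0.005           # hypothetical: elements cover 0.5% of the aperture
power_lost = 1 - a_over_A  # fraction radiated into sidelobes instead
print(f"{power_lost:.1%} of the power is lost to sidelobes")  # 99.5%
```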
Phased-array design is complex and not always intuitive. Whoever invested in this company hasn't done proper due diligence.
Sorry, I don't think you are correct. What they are building is not just another phased-array system, and I don't think they are even doing any beamforming at each transmitter at all.
Instead, they are looking at a much more difficult task: using constructive and destructive interference from distributed transmitters to cohere the signal only at a single point around the receiving antenna. Think CDMA, but spatially (in 3D space).
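Here's a toy simulation of that idea; the transmitter positions, wavelength, and the phase-conjugation trick are all my own illustrative choices, not Artemis's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength = 0.1               # 10 cm carrier (~3 GHz), purely illustrative
k = 2 * np.pi / wavelength     # wavenumber

tx = rng.uniform(0, 100, size=(8, 2))   # 8 transmitters scattered over 100x100 m
target = np.array([55.0, 40.0])         # the intended receiver's location
elsewhere = np.array([60.0, 47.0])      # some other point nearby

def field_at(point, weights):
    """Complex sum of all transmitters' signals as seen at `point`."""
    d = np.linalg.norm(tx - point, axis=1)
    return np.sum(weights * np.exp(-1j * k * d))

# Pre-compensate each transmitter's phase so every path arrives in phase
# at the target (constructive interference there, incoherent elsewhere).
d_target = np.linalg.norm(tx - target, axis=1)
weights = np.exp(1j * k * d_target)

print(abs(field_at(target, weights)))     # exactly 8: all 8 phasors aligned
print(abs(field_at(elsewhere, weights)))  # much smaller: phases are scrambled
```

A few meters away, the 8 path lengths differ by many wavelengths, so the same weights produce an essentially random phasor sum; only the target point sees the coherent signal.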
> Radio frequency design and phased array design is complex and not always intuitive. Whoever invested in this company hasn't done proper due diligence.
One of the problems with what they are doing is that it is so new and flies in the face of decades of radio theory, which as you state is already incredibly difficult. Please give it another look, this video as well: https://www.youtube.com/watch?v=5bO0tjAdOIw
I'm not agreeing or disagreeing about this being snake oil... but how do you think phased-array and beam forming work if not by controlling phase delays between antennas so as to create constructive and destructive interference at a desired point, or points, in space?
Even if the air interface works, the main reason I can see for carriers not deploying this is backhaul. It would be dramatically more expensive (35x, if the air interface is indeed 35x faster) to provide bandwidth to all devices. Right now carriers actually rely on the LTE total-cap limitation to save money on backhaul costs.
> how do you think phased-array and beam forming work if not by controlling phase delays between antennas so as to create constructive and destructive interference at a desired point, or points, in space?
I consider DIDO separate from a phased array in that each transmitter sends its own separately computed signal to do the interfering, instead of a phase-shifted copy of a single signal.
Artemis/pCell is definitely not a phased array in that sense.
This could make the backhaul problem even worse, because in order to get that 35x bandwidth you have to transport ALL that data to a large group of base stations, unlike today, where you transport data only to the relevant base station.
On the other hand, if a company invented a new wireless technology, it's probably smart enough to be aware of the backhaul problem.
I don't know how loosely you want to apply the term "beamforming" but there are ways to send data over multiple antennas for multiple users that allows the signal to be recovered for each user independently even if the signals nominally appear to be interfering.
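One concrete version of that idea is zero-forcing precoding from multi-user MIMO: invert the channel so each user's receiver sees only its own symbol, even though every antenna transmits a mixture. A sketch with a random channel matrix (I'm not claiming this is pCell's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
users, antennas = 2, 4

# Random complex channel: H[u, t] is the path gain from transmit
# antenna t to user u's receiver.
H = rng.normal(size=(users, antennas)) + 1j * rng.normal(size=(users, antennas))

s = np.array([1.0 + 0j, -1.0 + 0j])  # one symbol intended for each user
W = np.linalg.pinv(H)                # zero-forcing precoder: H @ W = I
y = H @ (W @ s)                      # what each user actually receives

# Each user recovers their own symbol with no cross-user interference,
# even though the two signals nominally overlapped in the air.
print(np.round(y, 6))
```

The pseudoinverse acts as a right inverse here because H has full row rank (more antennas than users), which is exactly the regime where the "interfering" signals remain separable.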
"Constructive and destructive interference from distributed transmitters" is basically the definition of a sparse phased array... and it suffers from the Thinned Array Curse. Every radio engineer sooner or later re-discovers this classic "curse" :)
What they have demoed and are building out now would be a pretty incredible advance in modern radio systems, so there is healthy room for skepticism.
It seems like there would be some major engineering challenges ahead, with predicting localization and movement of each receiver to adjust the location of the 'cell', and dealing with multipath. So this might explain why they are going slowly with building it out.
But I think that it is not snake oil, just a disruptive innovation that is slowly trying to get a foothold after coming out of relative secrecy in development for years. If they get it right, though, then it will absolutely reinvent the cellular industry.
I went to school for EE/CE (double major) and I ended up getting a masters as well. I don't believe that there's anything that prevents this technology from working.
The main problem is that most of the "rules of thumb" you learn in school are based on a few assumptions that have served us well over the years: antennas aren't directional, or are only marginally so; my signal is your noise and vice versa. Things like that.
What this technology does (along with several others I've read about) is challenge those underlying assumptions, and by doing so it gets performance in excess of "what is possible", where "possible" only holds so long as those underlying assumptions do. But because a lot of people weren't taught WHY those assumptions were made, they believe the conclusions that follow from them are fundamental laws of the universe rather than a good model for understanding.
For example, the general noise floor in the 1-2 GHz range is about 1000 times higher than GPS signal levels, and according to classical models you shouldn't be able to recover them. http://www.gpssource.com/faqs/15 But as it turns out there's a LOT of redundancy built into the signal, and through a method called "processing gain" you can in fact recover it, even though it's way below what you'd originally think is possible.
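You can see processing gain in a toy despreading simulation; the code below uses a random +/-1 sequence as a stand-in for a real GPS Gold code, and the noise level and integration length are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
chips = 1023                                 # GPS C/A codes are 1023 chips long
periods = 50                                 # integrate across many code periods
code = rng.choice([-1.0, 1.0], size=chips)   # stand-in PN sequence, not a real Gold code

bit = -1.0                                   # the data bit we want to recover
tx = bit * np.tile(code, periods)

# Noise power ~1000x the signal power: roughly -30 dB SNR per sample,
# so the signal is completely invisible sample by sample.
rx = tx + rng.normal(scale=np.sqrt(1000), size=tx.size)

# Despreading: correlate with the known code. The signal adds coherently
# while the noise averages down -- that difference is the processing gain.
decision = np.dot(rx, np.tile(code, periods)) / tx.size
print(decision)  # comes out near -1 despite the -30 dB per-sample SNR
```

Correlating over N samples improves SNR by roughly a factor of N, which is why integrating longer (here, 50 code periods) pulls the bit decision well clear of the noise.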
Similarly, radio-controlled cars/planes/helicopters/boats/etc. have limited frequency bands in which they can operate. This poses a problem because you generally can't fly the airplane around your house; instead you go to an R/C airfield, where unfortunately there are a bunch of other enthusiasts, all vying for the same spectrum. If my radio interferes with yours, both of our hard work goes crashing to the ground. This problem has driven HUGE uptake of DSSS technology, whereby we can both transmit in the same band without interfering with one another. This is done by coding both signals with a pseudorandom sequence such that my signal looks like noise to you and your signal looks like noise to me, and our systems are capable of working even though there's some noise.
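The code-separation trick is easy to show with two orthogonal spreading codes; I'm using short Walsh codes here for clarity, whereas real DSSS gear uses long pseudorandom sequences:

```python
import numpy as np

# Two orthogonal 8-chip Walsh codes (rows of a Hadamard matrix):
# their dot product is exactly zero.
code_a = np.array([1, 1, 1, 1, 1, 1, 1, 1], dtype=float)
code_b = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)

bit_a, bit_b = 1.0, -1.0
on_air = bit_a * code_a + bit_b * code_b   # both transmit at once, same band

# Each receiver despreads with its own code; the other user's signal
# integrates to zero because the codes are orthogonal.
recovered_a = np.dot(on_air, code_a) / len(code_a)
recovered_b = np.dot(on_air, code_b) / len(code_b)
print(recovered_a, recovered_b)  # 1.0 -1.0
```

With pseudorandom codes the cross-correlation isn't exactly zero, just small, which is why the other user's signal shows up as a little extra noise rather than vanishing entirely.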
What pCell is doing is basically like DSSS, but where DSSS separates signals strictly in TIME, pCell separates them in both TIME and SPACE. My signal is coherent only where I am, your signal is coherent only where you are, and anywhere else it's just random noise.