Did Uber steal Google’s intellectual property? (newyorker.com)
176 points by Fricken on Oct 17, 2018 | 96 comments


"One day in 2011, a Google executive named Isaac Taylor learned that, while he was on paternity leave, Levandowski had modified the cars’ software so that he could take them on otherwise forbidden routes. A Google executive recalls witnessing Taylor and Levandowski shouting at each other. Levandowski told Taylor that the only way to show him why his approach was necessary was to take a ride together. The men, both still furious, jumped into a self-driving Prius and headed off.

The car went onto a freeway, where it travelled past an on-ramp. According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic, but Google’s software wasn’t prepared for this scenario. The cars continued speeding down the freeway side by side. The Camry’s driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median. Levandowski, who was acting as the safety driver, swerved hard to avoid colliding with the Camry, causing Taylor to injure his spine so severely that he eventually required multiple surgeries."

These things sound like they require way more oversight before they should be allowed on the road if a single engineer can override protections like these.


Nothing in the available descriptions of this incident dissuades me from the opinion that the Camry driver behaved very negligently.

There is no traffic violation called "boxing in".

The vehicle already in the freeway has the right of way; the merging vehicle must accommodate itself to the situation, not the other way around. If necessary, the merging vehicle must stop (way in advance of the merge point) and wait for a suitably long gap which takes into account the acceleration time.

It's possible that a vehicle on the freeway can make a hostile move which interferes with the merging driver's judgment, such as unexpectedly changing to the right lane or accelerating. I don't see any claim of this sort about this situation (which doesn't mean that it didn't happen).

For a merging vehicle to fail to adjust its speed to the traffic, and to drive side by side with a vehicle which has right-of-way, to the point of then having to swerve to avoid an obstacle, doesn't look good for the driver of the merging vehicle.

I get the point that the software should take into account psychotic maniacs of this nature; no disagreement there. It is a fact of driving in North America that people merge onto the freeway by seizing the right of way, relying on vehicles passing the on-ramp to hit their brakes, and will pull crazy stunts like driving side by side with a car that doesn't slow down for them, even past the end of the merge lane where they run out of room. Software should defend itself against these maniacs. I say defend; it is a defense against reckless, unlawful motorist behavior.


This is completely unrelated to the fact that a single engineer was able to override the software to take the car for what is effectively a joy ride.

This should never have been possible.

The fact that Waymo hushed this up also doesn't instill trust.


First off, the "single engineer" was Levandowski, the person who basically started Google's self-driving team and was the lead for the project, not any random person. Second, this was back in 2011, and that was in the early days, "Waymo" wasn't a thing until 2016. This is just a bunch of engineers experimenting, not a real product. Lastly, the person who did this is long gone.


> First off, the "single engineer" was Levandowski, the person who basically started Google's self-driving team and was the lead for the project, not any random person.

So what? That does not discount the fact that a single engineer rigged the software in a way which allowed a deadly machine to be used on public roads. If anything, it makes the issue much worse that the lead engineer performed such an arguably criminal hack.

Actually, I appreciate your answer. It illustrates perfectly why the SV crowd cannot be trusted with policy decisions regarding autonomous vehicles.


Better ban people from installing anything in their cars. Who knows if that cell phone holder will kill someone. (Here's where I show you the absurdity of what you're asking for.)

Preferably also prevent mechanics from taking apart cars, they make mistakes and install unapproved parts.

Those are far more dangerous than an actual test drive with an actual safety driver.


The problem isn't a single engineer overriding protections.

According to the article they all knew about it, including other executives, and failed to report or discipline Levandowski.

They only fired him after he tried to recruit other employees, which demonstrates exactly how they prioritize safety versus profits.

They should not be allowed on the road, not because of a single engineer but because of their corporate culture.

edit: he wasn't fired, he quit.


AFAIK they never fired him; he quit.


The part that bothers me most is that they never seemed to care about this while he worked there, so why are we hearing about it so long after the fact? Given the unnamed Google sources being quoted, they would appear to be the source for much of this story. This quote seems to sum it up for me:

“He was that kind of guy,” the co-worker said. “You know, an asshole. But a really gifted one. Our asshole, I guess.”


My take is they tried to drag him through the dirt in the press to increase the lawsuit’s chances of success, but now it’s coming back to bite them since it was basically Larry and other execs who were enabling him. Maybe it’s just some random news-cycle thing, or maybe it’s someone’s bidding (know any companies going for an IPO that would benefit?). Hard to tell.


> According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic

Shouldn't the Prius continue at a constant speed? And isn't it the merging car's job to adjust speed to get in?

A personal frustration when merging onto a freeway is when a driver in the main lane slows down for you, but 'just enough' that you're not sure whether they are slowing down or not, so you slow down to get behind them, so they slow down more... Just keep your speed constant and the merging car will adjust.


Until they don't and you plow into them. Never trust another driver. Ever.

It is better to cause slight confusion but be prepared. This is why people slow down when they see a car merging: to be ready to brake and to maneuver.


Every day I am more convinced we are 20+ years from viable self driving cars.


The unintentional (I think) phrasing of your statement reminds me of those bars that put signs on the sidewalk reading "Free Beer Tomorrow!" :)


If someone has access to the source code, there isn't much that can be done for prevention. I suppose reactive measures, such as attaching a hidden GPS tracker like the ones in expensive rental cars, could detect violations.


Code review, trusted build environments and code signing could entirely prevent a single engineer from modifying the code running on the car.
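A minimal sketch of the code-signing piece, assuming an Ed25519 signature check before the vehicle loads a software image (the function names and demo key handling here are hypothetical, not anything Google actually used):

    # Python sketch using the 'cryptography' package. Illustrative only:
    # the car would refuse to boot any image that fails this check.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def verify_image(pubkey, image: bytes, signature: bytes) -> bool:
        """Return True only if 'image' was signed by the trusted build key."""
        try:
            pubkey.verify(signature, image)
            return True
        except InvalidSignature:
            return False

    # Demo with a freshly generated keypair; in practice the private key
    # stays on the build server and only the public key ships in the car.
    build_key = Ed25519PrivateKey.generate()
    image = b"vehicle control software, build 1234"
    signature = build_key.sign(image)

    assert verify_image(build_key.public_key(), image, signature)
    assert not verify_image(build_key.public_key(), image + b"tamper", signature)

An engineer who can't get a change through review never gets it signed, so the modified build never runs on the car.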


None of those would stop a tech lead or engineering manager


No, but making an example of him would have reduced the incentive to do it again.

Also, removing him from the tech lead position would have helped prevent further incidents.


Structure the process as appropriate for the potential for public harm. If necessary, require that an executive sign off. If you can’t trust your executives, then you shouldn’t have projects that can harm the public. They could have killed the Camry driver.


It is more appropriate to say that the Camry driver who was at fault would have caused a fatal accident.


> None of those would stop a tech lead or engineering manager

Then something else should. Imagine one guy, gone nuts, changing the code on millions of vehicles in one shot.


That was 7 years ago. 7 years before that, video iPods had just come out, blogging became a thing, and the best phone out was a Razr. That was a whole technological epoch ago.


70 years ago, flying cars were just around the corner.


And yet, 7 years later there are still similar problems with self-driving cars. Don't get too hung up on dates.


There are, for the self-driving car program in question, which developed over those 7 years of progress? Can you name a similar incident in the past year of a Waymo car boxing in another car on a highway and sending people to the hospital?


It took 7 years for this particular incident to come to light. Until I learned about it, I thought that Google/Waymo was pretty good about disclosures, but now I have to wonder what present-day incidents we will only be finding out about 7 years from now.


Uber sent a woman to the grave. Don't know if that counts for you.


No, it doesn't, because their program sucked from the start, worse than most. And even if they were the best self-driving car program, Uber's has only been around for a few years and so is not a valid counterexample to the claim that '7 years won't make a difference'. It sure does seem to have for Waymo.


The same person (Levandowski) worked at Uber. So all his knowledge went to Uber too, obviously. They didn't start at point zero.


Ah yeah, just so you know, there is a lot more to building a self-driving car system than the general knowledge held in the head of the lead engineer. It is unlikely that he would have known even a tenth of the systems involved in the Waymo program at a level of detail required to quickly replicate them.


You mean the person who was fired almost as soon as he got there, and who the trial showed had near-zero contribution?


All I get from this is that Levandowski and his actions are dangerous for humans.


And several people died in human-caused wrecks while you were typing that sentence. Nobody seems too hung up on that. Not scary enough, I guess, and you can't blame one or two large monolithic corporations for it.

Self-driving cars are one of those instances where painstaking adherence to the precautionary principle is going to get a lot of people killed due to poor risk modeling. This article is a great example, where the reporter has engaged in elaborate rhetorical gymnastics to paint an incident arising from a human driver's poor judgement as the fault of a "rogue engineer."


> And several people died in human-caused wrecks while you were typing that sentence.

According to [0] 1.24 million people die in automobile related accidents annually. This works out to 0.04 per second. Unless parent is a very slow typist your number overstates the real toll several times.
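Working it out: 1.24 million deaths per year, divided by roughly 31.5 million seconds per year (365 × 24 × 3,600), is about 0.039 deaths per second, i.e. one death every ~25 seconds worldwide.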

[0] http://www.progressive-economy.org/trade_facts/traffic-accid...

Seems to be a mix of 2010 and 2014 data, but broadly speaking close enough.


There's no evidence thus far that self-driving cars will improve road safety. None whatsoever. For the interested, here is an excellent overview of the quality of evidence necessary to prove same: https://www.rand.org/content/dam/rand/pubs/research_reports/...
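For a sense of scale, here is a back-of-the-envelope version of the kind of calculation that report performs, using the standard zero-failure confidence bound and assuming the commonly cited US human fatality rate of about 1.09 per 100 million miles (the report's exact inputs differ):

    import math

    fatality_rate = 1.09e-8   # fatalities per mile, approximate human baseline
    confidence = 0.95

    # Miles of fatality-free driving needed to claim, at 95% confidence,
    # a fatality rate no worse than the human baseline.
    miles_needed = -math.log(1 - confidence) / fatality_rate
    print(f"~{miles_needed / 1e6:.0f} million miles")  # ~275 million miles

At realistic fleet sizes, accumulating that many miles takes decades, which is the core point about the limits of test-driving alone as proof of safety.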


Agreed, that looks like an interesting report and I'll definitely spend some time with it.

However, the whole idea behind the report is that the reason we don't have the necessary evidence to draw a conclusion is that the cars simply haven't been on the road long enough yet. It accomplishes nothing to pound your desk and proclaim loftily that "There's no evidence whatsoever."

I don't see why AVs can't outperform human drivers, and further, I don't see why they need to be built with NASA-level software and hardware quality to meet that goal, as many suggest. That will only increase costs and delay benefits. "Move fast and break things" is a perfectly valid strategy if it results in fatality numbers that spike temporarily and then plunge rapidly, which is what I personally suspect is going to happen.


> Nobody seems too hung up on that.

I don't think that's true. A lot of people campaign for better road safety.


And the people designing them are still just as irresponsible. It's not the technology that needs to be regulated; it's the humans.


I believe the point is the technology is irrelevant.

We need to learn to place reasonable restrictions on the use of potentially dangerous technology while it’s under nascent development.

I have no idea what that would look like, or if it’s possible. I do believe it’s an ongoing concern.

Edit: changed active to nascent to improve clarity.


I would wager you simply can't. Levandowski was spearheading the self-driving effort at Google.

He was guiding strategy and everything. When a senior employee misbehaves, it's hard to fire them. When they have strategic value, it's almost impossible, unless they are already lost, i.e. actively working against the company's interests to further their own or a competitor's interests.


Continuing:

> The Prius regained control and turned a corner on the freeway, leaving the Camry behind. Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.

> Levandowski, rather than being cowed by the incident, later defended it as an invaluable source of data, an opportunity to learn how to avoid similar mistakes. He sent colleagues an e-mail with video of the near-collision. Its subject line was “Prius vs. Camry.”

Holy shit. Why aren't there criminal charges here?


So now that Levandowski and Travis are out on their own, can they get back together and do whatever they were planning to, with no restrictions from Google or Uber?


Not stopping for the other car betrays their complete lack of ethics.


A few years ago I would have agreed with you. Now I want robot drivers here ASAP. We need to get these human texters off the road.


This is like saying we want drunk drivers off the street. Call your congressman/woman and have them work to pass a bill handing out harsh fines for texting while driving. This is not even technologically complicated, as your cellphone knows when you drive and, barring an emergency, should not let you text. Pushing an unproven and insecure technology that can't differentiate a plastic bag swirling in the wind from a baby that happens to fall out onto the street is like trying to fix a tooth cavity with a blowtorch.
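For what it's worth, the detection half really is simple; here is a toy version of the speed heuristic such a rule might use (all thresholds and names are made up for illustration, not any real phone OS API):

    # Python sketch: flag "probably driving" from recent GPS speed readings.
    DRIVING_SPEED_MPS = 8.0   # ~29 km/h, faster than walking or casual cycling
    SUSTAIN_SAMPLES = 5       # require several consecutive fast readings

    def is_probably_driving(speed_samples_mps):
        """speed_samples_mps: recent speed readings in m/s, oldest first."""
        recent = speed_samples_mps[-SUSTAIN_SAMPLES:]
        return (len(recent) == SUSTAIN_SAMPLES
                and all(s >= DRIVING_SPEED_MPS for s in recent))

    print(is_probably_driving([1.2, 9.0, 10.5, 11.0, 12.2, 13.0]))  # True
    print(is_probably_driving([1.2, 1.4, 9.0, 1.1, 1.3, 1.0]))      # False

As the replies below point out, the hard part isn't detecting motion; it's telling drivers apart from passengers.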


This is not even remotely like drunk driving, at all. Do you see pretty much every adult in the US walking around with a bottle of whiskey in their pocket? Didn’t think so. You could also argue that we could fix drunk driving by installing breathalyzers in every steering wheel and requiring a passing test before being able to drive. Good luck with that.


There have been several proposals to install breathalyzers in cars. Had the public education and law enforcement campaign (MADD etc) failed to improve the situation these might well have been the next step.


It is not illegal to walk or even drive with a bottle of whiskey in your pocket, so long as it's a closed container.


> This is not even technologically complicated, as your cellphone knows when you drive

How would the phone know if you're sitting in the left seat, the right seat, or the back seat? Would you propose preventing phone use for every car passenger as well?


Dear USERNAME, I just noticed we are moving at a fast pace; per the SAFEDRIVENOTTEXT Act, please state whether you are a driver or a passenger.

[_] Driver [_] Passenger [_] Not in a moving vehicle

Please be advised that per the SAFEDRIVENOTTEXT Act, it is a first-class misdemeanor to provide an untruthful answer, and your driving privilege and/or insurance coverage can be impacted if your statement is untrue.


So you want to require everyone to interact with their phone while they are driving to agree not to interact with their phone?


You don't see a difference between typing a message on your phone and clicking one button of consent?

Texting while driving is what causes the majority of crashes involving cellphones; clicking a button is statistical noise.


Don’t forget those of us who take the subway. My iPhone used to regularly berate me to turn on driving mode.


"Most of the race’s competitors had built automated cars, but Levandowski had constructed a self-driving motorcycle called Ghostrider—in part, he later admitted, because he hoped that its novelty would draw attention. Although Ghostrider performed rather pitifully in its début, breaking down a few feet from the starting line, in almost every other respect it was a success: the audacity of Levandowski’s creation, coupled with his talent for charming journalists, made him the competition’s star. The National Museum of American History acquired Ghostrider for its permanent collection, and in 2007 Levandowski—then twenty-seven years old, with only a master’s degree in engineering from U.C. Berkeley—was offered a job at Google worth millions of dollars."

Silicon Valley in a nutshell, folks. Completely disregard ethics and, hey presto, a few years down the line the total fraudster you hired turns out to have been unethical! Whodathunk!


This article made me feel nauseated. All these big name hotshots and Google execs so full of BS they can't even manage to fire this guy.

This is what a lot of Silicon Valley has become, but it has not always been this way, and there are still some legitimate companies that do in fact wear clothes - mostly semiconductor companies, though.


How are the sociopathic tendencies of the higher echelons of corporate management news to anyone? The only “newsworthy” part of it is that it relates to Google (which isn’t really news to anyone who’s been on the inside). If it was about some finance firm, nobody would bat an eye.


It’s not news in general but many people liked to pretend it wasn’t true of Silicon Valley or at least favorite companies like Google: sure, there are sharks in most places but we’re a strict meritocracy where only results matter!

That’s been cracking over time as, e.g., people noticed that all of the cool stuff Google does is a sideline for selling ads, Uber is a gypsy-cab company which would fail the instant it had to comply with the law or pay for externalities, etc. Rather than letting SV redesign the world, we’re hoping the world will rein it in enough before more people are hurt.

Given how strong that self-image has been in tech culture for decades, a lot of people are going to have strong emotions when it’s no longer possible to maintain cognitive dissonance.


> legitimate companies that that do in fact wear clothes - mostly semiconductor companies though

Hahahha. No.

The days of Noyce's Intel and so on are long gone. Only Nvidia really remains innovative.


I didn't say anything about innovative - I said legitimate.


What does "legitimate" mean? Global is funded by oil money and was almost sold to the Saudis.

I'm going to take a wild guess and say you work in the semi industry? ;)


Legitimate means whatever I want it to mean - that's why I like it. In this case, I used it to mean something like "profitable, with a business model that doesn't depend on monopoly, selling a product that isn't smoke and mirrors."


Both Apple and Google make money hand over fist, quite profitable.

Intel, the shining example, definitely relies on the x86 monopoly.


Delusional. Almost all of SV’s major success stories are grade-A psychopaths. It's simply how the game works. The rare cases that are benevolent usually get co-opted by their more devious counterparts and VCs (see Google and Twitter, e.g.).


You and I clearly have a different idea of what silicon valley's "major success stories" are.

Some non-psychopathic successful individuals: Bill Hewlett, David Packard, Gordon Moore, Andy Grove, Federico Faggin, Ross Freeman, Bob Noyce, the Varian brothers, Bob Taylor, Jerry Sanders, Wilfred Corrigan ...

It's only in the last 25 years that movie villains have become the norm.


Fair enough, I’m not familiar with that much older generation. But I think you’re off the mark; it’s more like 35-40 years. The notoriety of the Jobs, Gates, Ellison, and onwards era dwarfs the old school, so I think my characterization is accurate. And I doubt those old-school guys were any less ruthless; it’s just how the game works.


I would also highly recommend that you become familiar with these "much older generations" before generalizing about Silicon Valley's success stories. It's called "silicon" for a reason, and being unfamiliar with these individuals is pretty disqualifying in terms of being able to say anything about the area and its history.


Gates is very little like the current generation. Even Jobs, who is well known for being a jerk, was still very much human and not psychopathic.

Compare these guys to Page, Zuckerberg, Musk, Kalanick, Thiel, Kurzweil, ... and all of a sudden they seem very normal.


This is so true. The old adage is to under-promise and over-deliver... but really, if you over-promise and under-deliver, things can still work out if you sell it well enough.


I refer to the latter as 'people who fail up'

Every company seems to have one. They always intrigue me.


Well, I mean aren't Dark Triad traits supposed to be good predictors of both economic and romantic success?

I feel this is more a human-nature problem than a "tech is evil" problem.


So basically, according to The New Yorker, a Google self-driving car was at least partially at fault for a hit-and-run causing serious injury and property damage.

Everyone knew about it, everyone watched the video, but no one saw fit to report the incident or take responsibility.

So all the claims about 10 million miles without any serious incident are, effectively, false.

What else are they hiding? How safe are these cars, really?


I thought I was having intense deja vu, but Google has this archived from two days ago.

https://webcache.googleusercontent.com/search?q=cache:Ca1CO6...

Does hackernews move posts around and change their timestamps?


I can't remember if there's a specific term for it, but I believe HN has some way of giving a story a "second chance", which must have been used here and seems to involve resetting the timestamps.

I don't know if this link will continue working, but if you look at it through this view, the comment that you're replying to shows as "2 days ago": https://news.ycombinator.com/threads?id=oldgradstudent&next=...

But if you link to it directly, it shows as (currently) "4 hours ago": https://news.ycombinator.com/item?id=18223066

I can understand the second-chance idea, but lying about when comments were posted is very strange.


> So basically, according to The Newyorker, a Google self-driving car was at least partially at fault for a hit-and-run causing serious injury and property damage.

That's pretty hyperbolic and loose with the facts.

You can't call it a hit and run, because Google's car did not hit the other vehicle.

You cannot say that Google was at fault for the incident, because they had the right of way. While slowing down to let the other car merge would have been safer for everybody, it is on the merging vehicle to ensure they have space to merge safely, and to respond safely if vehicles in the lane they are trying to forcibly merge into don't make space for them.

There is certainly a point to be made about what type of incidents are being reported. Incidents like this, where the self driving car behaves unexpectedly but legally are important information as we try to create the regulatory framework for self-driving vehicles.


> You cannot say that Google was at fault for the incident because they had the right of way.

It is interesting to note that the concept of "right of way" has been removed from various jurisdictions in Australia. We now have many "give way" rules. Basically, if someone is supposed to give way to you and you could have prevented an accident and didn't then you are considered proportionally at fault with the other person.

Not that many people take notice, but you are supposed to drive defensively here and work to avoid accidents. The only ones who have any sort of "right of way" are emergency service vehicles like ambulance, fire and police.

One thing I encourage the young people I know to do is take the various "defensive" driving courses that are available.


Buried at the bottom, Levandowski's controversial worldview:

"The only thing that matters is the future,” [Levandowski] told me after the civil trial was settled. “I don’t even know why we study history. It’s entertaining, I guess—the dinosaurs and the Neanderthals and the Industrial Revolution, and stuff like that. But what already happened doesn’t really matter. You don’t need to know that history to build on what they made. In technology, all that matters is tomorrow."

To what extent does this attitude represent Silicon Valley? More usefully, could we avoid some ill-conceived start-up ideas and wasted venture capital if the attitude of Silicon Valley were changed to include just a little more care about history?


That is very sad. I can only hope that Levandowski someday stumbles upon James Burke’s documentary series Connections and gains an appreciation for how science and technology can develop in unpredictable ways far beyond the expectations or aims of the creators.

A hundred years from now I bet history professors will bring up Levandowski as “one of those Silicon Valley jackasses who thought they were building the future but had no idea how they arrived at the present.”


Levandowski's sentiment seems extraordinarily ignorant. I immediately thought of the "you should skate where the puck is going" quote, which, I think, implies the importance of having information about the history of the puck and how the puck behaves, while acknowledging that the future is the goal. Obviously you want to end up in the future, where the puck is, but it is well-nigh impossible to get there unless you have good information about the past.


Sadly 100%. Most startups try to "disrupt" things they do not understand. Often with predictable results.


But occasionally with unpredictable results, which topple incumbent corporations that refuse to innovate or reduce costs, and that is why startups succeed. In that respect, maybe the startup attitude isn't so bad.


The best thing about this article:

> Levandowski refused to discuss the case with reporters but attracted headlines with a curveball revelation: he had founded a church, called the Way of the Future, that was devoted to “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence.” Machines would eventually become more powerful than humans, he proclaimed, and the members of his church would prepare themselves, intellectually and spiritually, for that momentous transition. Some people wondered if the church, as a nonprofit organization, was a scheme to protect Levandowski’s fortune, but he assured reporters that he was sincere. “I don’t believe in God,” he told me. “But I do believe that we are creating something that basically, as far as we’re concerned, will be like God to us.”


AI can never be God, and won't ever be 'like God to us'. Not in a billion years.

It's funny how so many of the 'unfaithful' seem to lack the capacity for metaphysical thought.

This guy also seems a little bit crazy, and not in the good way.


Replying to my own comment as an edit: I used 'unfaithful' in jest, as almost academic sarcasm, not seriously in any way.

I am always amazed at both the narrow materialism, and hyper grandiosity of these people.

'Starting his own religion' (especially based around technology!) is a massive red flag ... extreme and also misplaced egoism.

This guy is a 'jump the shark' moment for the Valley.


One of Waymo's alleged trade secrets was that they used a fiber laser. This is well known LIDAR technology.

They also apparently think diode alignment pins are a trade secret.

In the trial they also made it seem like Waymo invented the concept of monostatic LIDAR.

My guess as to what happened is that Waymo has nothing solid on Levandowski, but he acted extremely suspiciously and they hoped to find anything he had stolen in the process of the lawsuit.


Did Uber steal Google's intellectual property? Probably not, if it's in the LIDAR area. Google's LIDAR is still an expensive spinning unit for experimental vehicles. The production solution is going to be some other technology - flash, MEMS, or something else that's more solid state. Continental and Luminar have both demoed reasonably good automotive flash LIDARs.

Google/Waymo may have some proprietary technology in LIDAR data reduction. There are hints of that in Urmson's 2016 SXSW talk. Lots of people are trying to apply machine learning to camera images, but applying it to LIDAR is probably more likely to work. Less ambiguous data. But there's no indication of that being an area of litigation.

I met Levandowski briefly when he was preparing for the DARPA Grand Challenge. He was still a student at UC Berkeley then. His real achievement was that, after the DARPA Grand Challenge, in 2008, he built a self-driving car able to cross the Bay Bridge for the Discovery Channel.[1] That's where Google got their start in self-driving. There was not a rush, after the Grand Challenge, to throw money at the problem.

[1] https://spectrum.ieee.org/robotics/artificial-intelligence/t...


The article dedicates a good deal of page space to adding fresh verse to the Ballad of Anthony Levandowski before it gets into addressing the question asked in the headline, but it's all very good reading.


I thought Google had fired Levandowski, but it seems he quit. It's quite interesting to recently get insights into how and when things went bad at Google, and how they weren't holding folks accountable, possibly even hiding information.


Is anybody naive enough to think that these companies would give up on creating self-driving cars if they were not able to patent some of their technology? I don't think you can really even make the argument that patents, in this case, encourage innovation (across the industry - they do incentivize grabbing potentially useful patents as fast as possible for individual companies), nor that they benefit consumers.


Curious if the invalidated patents[1] make this harder to prove. I suppose trade secrets don't have to be patents.

[1] https://arstechnica.com/cars/2018/10/lone-engineer-spanks-wa...


Patents, by definition, are not trade secrets, since they are made public.



> he was known for having a charismatic (and, to some, annoying) tendency to launch into awkward sermons about the power of technology to change the world.

> “If it is your job to advance technology, safety cannot be your No. 1 concern,” Levandowski told me.

> Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident. Levandowski, rather than being cowed by the incident, later defended it as an invaluable source of data, an opportunity to learn how to avoid similar mistakes. He sent colleagues an e-mail with video of the near-collision. Its subject line was “Prius vs. Camry.”

God save us all from engineers with a Messiah complex who think a few boring little lives are a small price to pay for progress.


I wonder how much Sebastian Thrun was involved during this time frame. Isn't he also a key player behind Google's self-driving car project? I don't even remember whether he was on the witness stand during the trial between Google and Uber.


The absolute hatred that this author has for Silicon Valley is a little disconcerting, to be honest. It makes it difficult to take the person seriously.


Also, interestingly enough, Uber's uncompleted "Spider" LIDAR used 1550 nm. Why pivot back to 905 nm?


> One juror, clearly a gamer, whispered to his neighbor and, as if holding a controller, mimed the start of a classic Nintendo cheat code: “up,” “up,” “down,” “down.”

This isn't the entire code (the full sequence continues with left, right, left, right, B, A), and it's normally referred to as the "Konami code" since it appears across platforms in Konami games.



