December 2015: "We're going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years."
January 2016: "In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY"
June 2016: "I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year," Musk said.
March 2017: "I think that [you will be able to fall asleep in a tesla] is about two years"
March 2018: "I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person."
Nov 15, 2018: "Probably technically be able to [self deliver Teslas to customers doors] in about a year then its up to the regulators"
Feb 19 2019: "We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year"
April 12th 2019: "I'd be shocked if not next year, at the latest that having the person, having human intervene will decrease safety. DECREASE! (in response to human supervision and adding driver monitoring system)"
April 22nd 2019: "We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year."
April 22nd 2019: “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything."
May 9th 2019: "We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too"
Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”
-
Elon’s just been repeating the same promise for over half a decade now. Oldest trick in the book.
This is pretty common in ML projects, and a big reason why there aren't many major companies whose core product depends on a complex ML algorithm that hasn't been fully baked by the academic community first.
In theory, if the approach to self-driving that Tesla is pursuing in any given year actually worked... then the release would be about two years away. In reality it hasn't been working well enough, and every year a new plan is drawn up to reach full autonomy in 2 years.
This is also coincidentally slightly longer than the average tenure for an engineer/scientist, and as such the champions of a given strategy/approach will have departed the company before someone observes the strategy not panning out.
As an ML researcher, I endorse this message. Casual readers may want to re-read what they wrote, because it's really true.
Exploratory AI should be thought of as "potentially kills your company if it doesn't work and you gamble on it working."
The ultimate truth is that you're outsourcing your thinking to ML researchers, much of the time. And as someone on the ground floor, let me tell you that we often don't have a clue how to achieve X any more than you do. We have ideas, testable theories, plans that may or may not work, and have a higher chance of success than random people. But it's still only a chance.
I don't think a lot of companies have fully internalized that. If your company is based around the premise of exploratory AI, you are literally gambling. And it's not the type of gambling you can win in the long run with. It's closer to "put it all on black" than "let's take a principled approach to poker."
I hope as an ML researcher, you're sensitive to the ML/AI usage here.
There are lots of ML algorithms that definitely work, but none of them involve claims of intelligence. So it's not just "exploratory AI" that you're talking about. It's "any AI": anything claiming to be "artificially intelligent" and to fully replace a human in a key decision-making position (but not "any ML", since ML encompasses much more boring stuff).
The number of companies that seem to be charging forward towards AGI is small. Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving) which everyone agrees will eventually happen, but no one is quite sure how to get there in every detail.
I try to be an optimist, mostly because of how many counterexamples you see from history. Didn't some newspaper claim that human flight "may be invented in the next several thousand years" shortly before Kitty Hawk?
But for some reason, rich people keep coming to me with what they're trying to do. My advice is the same: you're going to lose your money unless you bet on proven techniques that exist today.
Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful, if someone was willing to put in the effort to make it so (which Nick Walton did). Once AI dungeon worked, and it had a "minimum viable working thing," the rest was a matter of execution.
But a lot of the ideas seem to fall into the category of... Well, for example, someone came to me saying they wanted to build a "virtual universe, filled with virtual people that you can talk to."
It sounds lovely on paper. But what are you selling, really? There has to be some specific value proposition. So strike one is that it's an unproven market. You yourself want a virtual universe. But is that virtual universe going to lead to something that will solve a lot of people's problems? And we haven't even begun to discuss how you're going to get there. What do you mean exactly by "virtual person"?
It's easy to pick on some of the outliers. But unfortunately, the problem runs much deeper. There are people who genuinely believe that AGI is within reach within our lifetimes, or perhaps within one generation. But whenever I try to corner them into giving specific details on how precisely to get there, the handwaving begins.
This is now a complete tangent, but I found myself excited and enthusiastic to pursue AGI after a long conversation with a certain someone. They had "fresh eyes" -- a new way of viewing the situation, unlike anything that people were currently trying.
Unfortunately, after throwing myself into that mindset for several weeks, I had no choice but to conclude that their chance was closer to zero than 1%. And I was really trying hard to find that 1%, with all my mental effort (such as it is).
So what choice do we have but to let people pursue impossible dreams, and return to the work that we feel we can make an impact on? Live and let live. And of course, there's the likely outcome: our predictions will be incorrect, and we'll be talking with AGI in a virtual universe sooner than we think. But I wouldn't fall in love with their dream.
(I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.)
> Take AI dungeon as the prime example. That is what I would call a grade-A idea. It was incredibly obvious that it would become a company (now Latitude) and that it would be successful...
Not to further rain on the parade (without good reason) but I should mention...
"Recently we’ve learned that the server costs incurred for both the Griffin and Dragon models are higher than our total revenue. Nobody has ever deployed AI models this powerful before, and it requires extremely expensive hardware. If we don’t make changes to cover the costs of that hardware soon, AI Dungeon will not be financially viable."
AI Dungeon and GPT-3 are exactly the kind of superficial BS that fails over the longer run, or only serves to fool people. A more advanced Eliza, but with content lifted from real people.
> Most of the companies are doing what you might call "realistic AI": some definable goal (like self-driving)
Self-driving is absolutely the key problem, I'd say. I sympathize with your optimism. I'm optimistic about what "computers can do," but I'm pessimistic about DNNs + standard control becoming able to navigate the human-machine border.
Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction; more problems than people like to admit. And remember, even for problems just in the NP-complete class, an "average" instance can be easy; it's the few examples that prove troublesome. It seems to me that AGI-complete problems are similar (and yes, I realize the term is a neologism, defined only by extension, etc., but I'd still say it's valid).
I also think you were unfairly jumped on, and that you had a fine point, for what it's worth. Thanks for the prompt.
Theoretically having lots of "karma" should make me not care about it. Theoretically...
> Basically, I think a lot of problems are "AGI-complete", especially problems around human-computer interaction, more problems than people like to admit.
Having recently become a parent, I have a newfound appreciation for how complex spatial navigation tasks are. Children learn to recognize faces and objects within the first 6 weeks to 4 months of their lives. They learn to navigate environments over the next 1-5 years; language is fully understood for non-gibberish use cases over the next 5-20 years. It's not a fair comparison, but it provides roughly the only yardstick we know of.
What real world neural network algorithm is "fully baked by the academic community"? I don't think there are any.
I don't think there are companies with products based on AI where the AI has to work for the company to survive. Google uses AI for search, but search can screw up, and search returns a lot of just-indexed results. There's no "real world application" where AI works reliably (i.e., gives a result that you can count on). That doesn't stop deep networks from being an improvement on applications that were previously a combination of database queries. But this same only-relative usefulness can be problematic when companies and institutions delegate decisions to AI where being wrong doesn't hurt them but can mightily screw some random person (from credit to parole to whatever).
The relative improvement is both an oversell and an undersell depending on the context. For many applications the correct answer may be that a reasoned set of DB queries is about as good as it gets, owing to lack of data, no better algorithm existing, or the product experience being only mildly impacted by changes to the DB-fetching component.
When confronted with these uncertainties, internal stakeholders will often swing between "we just need more scientists working on this problem" and "it works fine, why would we spend time on this?" attitudes. The former almost always leads to over-investment, where 3 teams of people are working on what should be one individual's project. The latter can sometimes be right, but I've also seen Fortune 500 search rankings that have never been tuned, let alone leveraged an ML model.
There are a lot of amazing real-world technical accomplishments from SpaceX. They recently landed a 1st stage booster for the 9th time. There have been more than 50 successful landings now. The idea that you don't have to throw away 2/3rds of a rocket for every launch is a game changer.
I do wonder if some of these same things could, or would have been accomplished in the absence of Musk, but with the same amount of capital and under the leadership of Gwynne Shotwell. I think they would have. Musk has a big dream, and is great at hyping stuff, but it's not like he's personally engineering the Falcon 9 and its recovery system. Shotwell hired the right people to implement the grand vision.
What I'm concerned about is the people who think that Tesla can do no wrong and it's the most amazing thing ever, when the gap between the sales/marketing pitch, as you've documented above, and what actually exists in the real world on a given date is so divergent.
The optimization problem of hoverslamming a rocket is something that I can fairly easily wrap my head around. Even without the advances in convex optimization of solving that problem you likely could have done it with somewhat less robust approaches. I'm pretty certain that rockets could have been landed using the engineering pieces that existed in 2001 with incremental improvements and learning (maybe not as robust so your optimizer eats a few rockets in your first 100 landings).
When it comes to automated driving though you're really requiring solving entirely novel never-before-solved problems. If there is a spectrum between "found a company on doing some existing engineering twice as good" and "found a company on solving the Goldbach conjecture" then driverless cars are a bit more towards the latter than hoverslamming rockets is.
Musk has clearly been way off base with self-driving, and Tesla’s manufacturing is nowhere near where he said it would be by now. However, with SpaceX it seems like he really is directly in charge of engineering. It was him pushing the booster recovery program and leading it technically. Yes, of course it took a fantastic team to get it to work, but I honestly believe there’s no way they’d have got there without him first relentlessly trying out every technical workaround for every problem for years, and second being willing to throw massive resources at it. I can’t see Shotwell doing any of that. She’s great, but she’s the one that makes it a viable company, not a red-hot innovator.
I love SpaceX, but I don't find them quite as miraculously impressive as some people. SpaceX is "just" doing what NASA did in their glory days: rapidly iterating, innovating, and then carrying those innovations all the way through to real world flight. That last part is key. In industry we call that "shipping."
NASA didn't stop innovating per se. In the 1980s and 1990s they worked out their own version of vertical take-off and landing and actually test flew it.
What you don't say is that NASA was uber expensive in their innovation, with things like not using common CPUs because they were not well tested. And nobody takes risks there anymore.
The difference between SpaceX and NASA is mainly that SpaceX cares about cost.
Or we can say that NASA has incentives to absurdly increase cost, like any bureaucracy.
There is no way the DC-X could have been developed without enormously increasing the budget (and taxes on Americans). That was the reason it was not done.
DC-X was nowhere near to being a reusable first stage. The X-33 was a suborbital prototype.
Innovation when you have many, many billions is one thing; if you have to operate in a commercial market and still find a way to innovate at that level, it's quite amazing.
I have issues with how Musk communicates, but fundamentally I think Tesla is innovating as much as SpaceX. Many here, me included, have lots of issues with how Self-Driving is developed, but fundamentally it's the right concept.
What Tesla is doing with batteries is quite amazing. I have spent a lot of time over the last 2 years understanding the battery industry, and what Tesla is doing is actually incredibly impressive. People mostly don't understand that because batteries and battery production are a far more obscure topic.
However, whatever else you can say about Elon, he is committed to the projects that he commits to.
SpaceX will not give up on Starship, or at most will replace it with a changed design that tries to solve the same problem. Tesla will not stop trying to push down the price of batteries; they are committed to it and are willing to go all the way to vertically integrating mining if that's what it takes. Tesla will not stop developing Self-Driving; they will push forward and invest as much as they need to invest.
I do wish the communication around the Self-Driving technology would change. They should have just called it 'Co-Pilot', sold an 'Advanced Co-Pilot', and said they are working on 'Self-Driving' but it's not a product (yet). Stop promising it will come so soon. And I'm not sure how I feel about letting beta testers put up videos.
Sandy Munro was invited to SpaceX design review session, so he has some first-hand comments about that on this video (@6:17): https://youtu.be/S1nc_chrNQk?t=377
Problems that are conceptually simple can still be very complex once all of the details are added. Theoretically landing a rocket is a simple matter of turning it into a giant model aircraft and doing the math on how much fuel you need to reserve. Re-lighting an engine is conceptually easy too. In practice we know this is a huge accomplishment.
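As a toy illustration of the "doing the math" part: a one-dimensional, constant-thrust "suicide burn" fits in a few lines. Every number here (the 30 m/s² thrust acceleration, the 250 m/s fall speed, the 25 t dry mass, the 300 s Isp) is an illustrative assumption, not a real Falcon 9 figure.

```python
import math

G = 9.81  # m/s^2, surface gravity

def burn_start_altitude(v_fall, thrust_accel):
    """Altitude at which to light the engine so velocity reaches zero
    exactly at the ground, assuming constant net deceleration (v^2 = 2*a*h)."""
    a_net = thrust_accel - G  # gravity fights the burn the whole way down
    return v_fall ** 2 / (2 * a_net)

def burn_time(v_fall, thrust_accel):
    return v_fall / (thrust_accel - G)

def fuel_needed(m_dry, v_fall, thrust_accel, isp=300.0):
    """Rough propellant mass from the rocket equation, ignoring mass
    change during the burn (fine for a back-of-envelope check)."""
    t = burn_time(v_fall, thrust_accel)
    dv = v_fall + G * t  # velocity to kill, plus gravity losses
    ve = isp * G         # effective exhaust velocity
    return m_dry * (math.exp(dv / ve) - 1)

# A 25 t dry booster falling at 250 m/s with 30 m/s^2 of thrust accel:
h = burn_start_altitude(250.0, 30.0)   # ~1.5 km up
m = fuel_needed(25000.0, 250.0, 30.0)  # a few tonnes of propellant
```

The idealized 1-D math fits on a napkin; the hard part is everything the model leaves out (aerodynamics, throttle limits, engine relight, wind, dispersions), which is exactly the conceptually-simple-but-actually-complex point.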
Self-driving is a problem that starts out hard. Think about how you get a computer to recognize other objects, especially other cars, based on its fairly limited set of sensor inputs. Once you add in the details, like dealing with unexpected road conditions, detecting vehicles that are partially invisible to your sensors, etc., it's hard to see a future where the technology is viable, at least in the near term. All computer vision stuff currently has a sizeable false positive/negative error rate that you just have to accept. But on the road, a false negative or false positive can be fatal.
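A back-of-envelope sketch of why those error rates bite: tiny per-frame error probabilities compound over the huge number of perception decisions a car makes. The 30 fps figure and the one-in-a-million error rate below are assumptions for illustration only, and a single bad frame is not the same as a crash (tracking and sensor fusion smooth many of them out); the point is just the scale.

```python
def p_any_error(p_per_frame, n_frames):
    """Probability of at least one perception error across n frames,
    treating frames as independent (a simplification)."""
    return 1.0 - (1.0 - p_per_frame) ** n_frames

frames_per_hour = 30 * 3600  # assumed 30 fps vision pipeline
# Even a one-in-a-million per-frame error rate gives roughly a 10%
# chance of at least one bad frame in a single hour of driving:
p = p_any_error(1e-6, frames_per_hour)
```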
There's a list of quotes just as wrong for SpaceX.
The issue isn't the predictions. It's the time frames he makes the predictions about. We will have everything he's said one day, but in the meantime we need to call him out for what he's done (lying) and why he's done it (to get money).
My take as well. I've known so many people like this. I've been a person like this. Someone close to me is working for a small startup run by a guy like this.
It's an unfortunate flipside to their willingness to work so hard-- they almost need to believe in the impossible to keep up the pace. Ideally there is someone between them and the public face of the company, but all too often they can't help spouting off stuff they should be keeping under wraps in front of TV cameras or in tweets.
Yes, he's a believer. This belief in discounting the reasoning gap between humans and software is why Musk is both up at night worrying about evil AI taking over the world (it may already be too late!) and also genuinely believes that autonomous driving is a "solved problem" that just needs some of the rough edges removed (just a couple of years away!).
It's like there is this enormous gulf that most others can see, but to Elon it's invisible.
So don't let anyone tell you that philosophy and theology don't matter. We are witnessing a multi-billion dollar bet being made on the basis of some extreme views about the nature of man, and because this appears to be a core belief of Elon's, I doubt he will ever update his priors to make more effective investment decisions based on feedback from real-world tests. He will always view this as a project that is almost ready, with just a few technical glitches to overcome.
If someone in a position of power looks honest, it's either because they have such a huge competitive edge that they truly don't care, or because they are incredible bullshit artists. In the case of Musk it's a mix of both, and so far he manages to turn some hype into reality. He is the epitome of plausible deniability.
I think it's fair to say that an extra marginal tens of millions of dollars or so will not affect Musk's life in the slightest. I find it hard to believe that Musk is motivated by money at this point.
He was always like that, which is why he did two internships in one summer and dropped out of Stanford to found Zip2. He slept in the office. They had one computer, so it ran the service during the day and he used it as a dev machine through the night. Some people, that's just the way they are.
>>I do wonder if some of these same things could, or would have been accomplished in the absence of Musk, but with the same amount of capital and under the leadership of Gwynne Shotwell.
That assumes the same objectives would have been pursued without Musk, which I find unlikely, given the skepticism and criticism Musk had to endure, and dispel, in order to keep his companies on-target, without a scale-back of ambitions, all those years.
You forgot the best quote, I think, from Autonomy Day in April 2019: “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything."
I wonder if Musk is intentionally bullshitting or if he completely underestimates the problem space because he's confident of delivering big things (SpaceX, electric cars).
The folks at Waymo/Google, some of the smartest people in the industry, admit to how hard of a problem this is and set realistic expectations. For instance, Waymo (and every SDC company except Tesla) say Level 5 autonomy is impossible and they are strictly targeting Level 4. But here is Musk who says "Level 5 will be coming next year" every year. I feel like there is a serious lack of humility at Tesla (or maybe it's just Musk).
Recently I asked myself that same question and came up with a satisfying theory.
Part of Tesla's business model is selling $10k FSD packages based on a promise that they will eventually deliver full self-driving. If, at any point in the future, any evidence surfaces showing Musk or other senior leadership at Tesla did not have confidence in their ability to actually deliver FSD, this would amount to large-scale fraud and may result in a class-action lawsuit.
On the other hand, if in all communication, internal and external, Tesla leadership projects confidence in their ability to deliver FSD, but then "unforeseeable circumstances" prevent them from actually delivering it, then it's not a scam, just an unlucky turn of events. At the very worst, Tesla might have to partly reimburse their customers, but most likely they can keep the money.
So it does not really matter what Musk believes deep down. It's probably easier to do his job if he consciously makes himself believe that he can deliver.
> If, at any point in the future, any evidence surfaces showing Musk or other senior leadership at Tesla did not have confidence in their ability to actually deliver FSD
Tesla recently said that "Full Self Driving" is not capable of autonomous driving. They tell the regulator one thing and tell their customers another.
To be fair... he has convinced millions of people to put up $10k towards FSD. He’s laughing all the way to the bank. And we are....well...we’re on HackerNews complaining about him on a Friday night.
winning isn't on its own admirable, nor is complaining/criticizing on its own shameful. that's just the toxic ideology of "might makes right" that permeates our society
While I entirely agree with the sentiment of this post, I often find myself wondering - how does one effectively combat this attitude without making a full circle? How is anything to be established in society without some sort of "might", so to speak? Just a thought that resonated with me when I read your comment.
just to write it all out for clarity (not because I think you misunderstood):
the might makes right that I was pointing at is an attitude of "winning is everything. winners are superior, no matter how arbitrary or unfair the game."
the might makes right that you're pointing at is the fact that in the struggle to define laws and norms, there are winners and losers.
I think it's just a question of values. there is no "neutrality" - there is always a fight between competing values, and everyone picks a side. the value that says "winning is everything, winners deserve everything, losers can die in a ditch" is a bad value to me.
It’s interesting, a couple years ago I remember there was some debate about upgrading the public transport in Seattle(?), and I saw some comments saying maybe we need to hold off in case a full autopilot car infrastructure might emerge as a cheaper alternative. I think we’re all a lot more grounded at this point...
Not sure what this phrase means to you exactly, but it may not be the worst idea-- the only way I see a mass rollout of automated vehicles any time soon at this point is on dedicated roads/lanes with roadside sensors to assist. Anything else is gambling with human lives.
Oooh, that reminds me... it's been 3 years since my very gullible cousin called me an idiot for not believing that FSD would be here in 5 years. Two more years to wait for that 'toldyaso'. Shame that I didn't put money on it.
I was a fan of Tesla cars in 2016, didn’t know much about them or Musk but heard that they were fast, safe, looked good, and solved the problem of EV charging with the super charger network. Went to look at the stock price to see if I should buy some and almost fell out of my chair. Around the same time they were promising a solar roof that looked better than a normal roof and cost less, which sounded way too good to be true, and also the $35,000 model 3, which would be under 30k with the tax credit, also very difficult to believe. The final straw was the promises of full self driving, which clearly was a much harder problem than they were claiming and potentially impossible with their chosen tech stack. The more you look at this company the crazier the story gets, and now it’s worth more than the top few automakers in the world combined.
Ed Niedermeyer’s “Ludicrous” chronicles the story without the breathless media coverage that usually follows Musk. It’s incredible what the Musk empire has become, but there’s a lot more to it than meets the eye. https://www.amazon.com/Ludicrous-Unvarnished-Story-Tesla-Mot...
It took three years to go from "in two years" to "next year." That's three solar years per Elon year. Starting at two years in 2015, that means we'll get it for real in... 2021!
FSD is as hard to solve as general intelligence. I don't get why they bother going down this rabbit hole with sub-GI AI technology.
While waiting for an AI breakthrough, we should look into instrumenting our cities. That's how planes and drones can navigate, approach, and even land. Sure, cities are more complex, but instrumentation can automate the routes. And it would be a great infrastructure undertaking, the like of which we haven't had in decades.
I think it’s wrong to say that FSD requires AGI since driving is a specialised skill.
I do agree though that there’s so much that can be done right now without waiting for some magical future tech. Inner cities without cars can be solved with public transportation. De-urbanisation can be achieved with a combination of remote work, cashier-less shops and a well-run postal system. We don’t need to wait to do these things; they’re achievable now.
I don’t think it’s accurate to say Tesla is developing an AGI. I mean, the point of the project is to get a computer to drive a vehicle, not interpret French literature, which means the “AI” will be pretty “specialized,” which is kind of counter to the “general” in AGI.
So maybe they’re developing an “ASI.” But because there are already plenty of those and we call them “AI,” we might just say Elon’s trying for a harder, higher scale, and more commercialized version of a set of technologies that already exist. Kind of like all the other “impossible” things people mocked him for missing deadlines on in the past. And now, we’ve arrived at the point.
Driving may require some level of general intelligence and reasoning beyond what a traditional AI can accomplish if they truly want to handle all the edge cases implied by a cross-country summon.
From an available-information perspective: there have been prognostications for some time now that AGI would be available in the 2020s, if Moore's law continued.
When SpaceX formed in '02 and Tesla in '03, many observers would have estimated that privately funded moon rockets, luxury electric cars, and mass-produced solar panels would have been a bigger challenge.
Keep in mind that Tesla's approach for solving FSD still uses mathematics that's fundamentally half a century old and is by definition narrow AI, not AGI.
> uses mathematics that's fundamentally half a century old
I have news for you, the most common operations in AlphaGo, GPT-3 or any state of the art AI are: multiplication, addition, max, log, exp, sin, cos and random - all functions known for centuries.
It's the architecture, data and the compute that are making the difference, not the math, and all three are recent accomplishments.
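A minimal sketch of that claim, using a toy one-hidden-layer ReLU network (the weights are random placeholders, not a trained model): the entire forward pass really is just multiply, add, and max.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # toy weights
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # multiply, add, max (ReLU)
    return W2 @ h + b2                # multiply, add

y = forward(np.array([1.0, -0.5, 2.0]))  # a 2-dimensional output
```

The same centuries-old operations underlie GPT-3; what changed is how many of them are chained together, on how much data, with what architecture.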
Yes, that's my point. Neural networks were conceptualized roughly half a century ago. Obviously there have been a lot of advancements like convolution, dropout, attention, deep learning, etc. But fundamentally this is old mathematics, and while it's yielding good results at solving specific problems, it's not the answer for AGI. For AGI we will need new breakthroughs.
Exactly. Tackling AGI will require imo a significant breakthrough in the field of AI.
Meanwhile, it's much more productive and practical to look into how we can instrument our cities to make self driving cars possible with today's technology.
It's the other way round, and you hint at it in your last sentence: the critics have had huge opportunity costs, while Elon had virtually no cost for his repeated overselling of Tesla's FSD capabilities. To the contrary, a good chunk of Tesla's market cap (and thus Elon's net worth) is clearly attributable to its large followership of retail investors hyped by Elons predictions and overselling (and I'd guess that even some professional investors are falling for the "Level 5 is right around that next corner" claims). Tesla is valued as a tech company, not as an automotive company, and that is in good part due to its alleged competence in the software tech realm, of which FSD is the most coveted piece.
Don't make predictions or estimates when you're happy.
Marketers are usually overly happy, especially when in the act of estimating. This makes it hard for them to resist being overly generous with their predictions.
Elon Musk's "two years from now" (or even "later this year") is everyone else's "in five to ten years." His 2 being everyone else's 5 is also appropriate given that he describes their level 2 autonomy as level 5.
There's a big difference between promising tech in the future and saying that something is true now.
This kind of marketing is a standard release of information in the form of opinion of the future, which shareholders can decide for themselves whether to believe it or not.
When the SEC slapped him down previously it was because he conducted a release of information about "facts" - not opinion - that were supposedly true now (that he had a private buyer at $x price). This was deemed a market manipulation because it was presented as 100% true. Whereas self-driving cars in 2 years might be true and it's up to individual shareholders to do their own analysis.
It’s unreasonable to hold someone’s statements of opinion and future expectations to the same standard as statements of fact. Otherwise every single CEO would be guilty — they tend to be an optimistic bunch.
Still they are at the forefront in the field. IMO in the same ballpark as Waymo.
Elon operates by setting impossible goals and under-delivering on them, but still going faster than all the other players. Most FSD buyers are reasonably aware of the gamble they are taking.
If Tesla turns out not to be capable of delivering FSD, it's a straightforward class action to refund everyone. Meanwhile they are trying hard to build the damn thing, and my kudos for that.
> Still they are at the forefront in the field. IMO in the same ballpark as Waymo.
They are nowhere near Waymo, who is doing actually driverless rides for the public. Tesla's confidence level is still at "you need to keep your hands on the wheel at all times".
I have questions about Waymo's taxi service. I'm certain that they occasionally run into situations that require human intervention, or at least need to be prepared for that eventuality (see the examples below). In such scenarios, what happens? Does the passenger operate the car? Does a remote operator take control? Does the car refuse to move and become a hazard or cause congestion?
For example, you come up to a 4-way intersection where a traffic cop signals you to stop and says you can't go through because all the manhole covers are off, but since you want to turn left you can cut through the parking lot of the corner gas station to get onto the other road.
For another example, right after the St. Patrick's Day parade is over, or right after the college town's team has just won the NCAA championship, the street is full of people, the car has to inch forward at half a mile per hour, and the pedestrians don't get out of the way until the car is about to touch them.
In the scenarios you described, a remote operator would "help" the car. They've specifically said remote operators can't control or joystick the car, but can "answer questions," which I take to mean plotting a different course. The car can also pull over at a safe spot rather than getting stuck in the middle of an intersection, though in the videos I've watched nobody has run into this issue to test it.
The examples you gave sound like scenarios where a remote operator would instruct "don't go there, go here" and the car does it by itself. But specifically about the traffic cop: the car can actually detect hand signals from them (can't find the video though).
Waymo is a joke outside a trained, geofenced area. It is fascinating to me how a company went from gee we can index search results and sell ads to we can solve one of the hardest general automation problems in the world.
They definitely are not. People working in this industry overwhelmingly agree that Tesla is far behind the well-established self-driving car companies (Waymo, Cruise, ...), mainly because Tesla continues to rely on cameras only (and not lidar).
Solving vision is an incredibly difficult problem, made even harder by the lack of stereoscopic cameras. There is fundamentally no reason to rely on vision alone other than bragging rights; it provides almost no practical benefit.
It’s a common talking point, I doubt that comment was sarcastic. Musk has said that he thinks Lidar is useless because humans operate vehicles with just vision, which of course is an unfair comparison since humans have intelligence to help them.
This has always been a ridiculous argument. Human eyes have a much higher contrast ratio than commercial cameras, and they have built-in stereoscopic capability with some degree of rangefinding, thanks to being quickly adjustable. Normal cameras have none of that.
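To put rough numbers on the stereoscopic point: with two cameras a fixed distance apart, depth follows directly from triangulation. A minimal sketch, where the focal length and baseline are illustrative assumptions, not any production camera's specs:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    focal_px:     focal length, in pixels
    baseline_m:   distance between the two cameras, in meters
    disparity_px: horizontal shift of the same feature between
                  the left and right images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("feature not matched in both images; no depth estimate")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 6.5 cm baseline (roughly human eye spacing),
# 10 px disparity -> the feature is 6.5 m away.
print(stereo_depth(1000.0, 0.065, 10.0))  # 6.5
```

Note how depth resolution collapses as disparity shrinks: distant objects shift by fractions of a pixel, which is one reason even stereo vision is not a complete rangefinding solution on its own.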
It was not sarcastic. I'm not really sure what you're arguing. It's obvious that vision-only would be more advanced than using lidars. Lidars are expensive and take space, so that's why they're trying to use cameras only.
And it's quite obvious that there are at least some situations in which vision will not work whatsoever - e.g. dense fog. I personally would quite like my autonomous vehicle to operate happily through dense fog. If you're happy to pull over to the side of the road and wait it out, be my guest.
Having multiple orthogonal subsets of the electromagnetic spectrum at your disposal provides redundancy and diversity, two features that simply CANNOT be achieved with a single class of sensor, no matter how advanced it may be.
Yeah, that's true. Teslas have a front-facing long-range radar and multiple ultrasonic sensors around the car for close-range detection, like most modern cars. But those are there just to avoid hitting things; they can't be used alone for driving autonomously.
How can they be at the forefront of this field? A driverless Waymo will come and pick you up from in front of the Costco in Scottsdale AZ, right now. This Tesla can't do anything unsupervised.
Of all the self-driving companies it is likely that Tesla is dead last, behind Waymo, Cruise, Aurora, and Zoox.
It is outright scary how many Tesla fanboys think that Tesla are at the front of the pack. How much Koolaid can you drink without even checking what the experts in those industries are saying?
Dec 1, 2020: “I am extremely confident of achieving full autonomy and releasing it to the Tesla customer base next year. But I think at least some jurisdictions are going to allow full self-driving next year.”
-
Elon’s just been repeating the same promise for over half a decade now. Oldest trick in the book.
Disclaimer: I drive a Model 3