Waymo's collision avoidance testing (waymo.com)
382 points by EvgeniyZh on Dec 14, 2022 | 488 comments


I think the problem we have with self-driving cars is more social than technological at this point.

Consider the hypothetical: a million self-driving cars on the road that, collectively, will have 1/10th of the fatal accidents that human drivers would have[0]. But, the ones they do have are accidents a human driver would almost certainly have avoided.

Is this something we would accept?

My guess is that no, we wouldn't. Because the accidents avoided don't make the news, but the accidents that do occur -- especially the ones that make you say "my god, how did it screw that up?" -- will make the news, and our perception will be that they are more dangerous.

Until Waymo's cars are better than most humans in every single situation, they won't be able to win over the public perception war.

[0]I'm making those numbers up. I acknowledge that. But it's a hypothetical so give me some leeway on this!


You're hitting the nail spot on with this one for me. As a blind pedestrian, I very much feel I am in danger of falling into exactly that group of potential victims you are hinting at. Right now, I have the illusory comfort of the "Vertrauensgrundsatz" (principle of trust), which basically tells every driver obtaining a driver's licence that they need to take special care when it comes to disabled pedestrians. Sure, one might say these new self-driving systems will "just" have to follow that same rule as well, but I am very much doubtful this is technically possible. So currently, I feel like the drive to put innovation on the streets and pull money out of pockets is actually actively endangering me in the future. Not a very bright outlook, I must say.


A self-driving car should be far better at recognizing a human in most orientations/poses/outfits, regardless of the car's angle or the weather. It doesn't get distracted, can see 360 degrees around the vehicle, reacts faster, and is specifically timid around pedestrians. Even the best human driver could easily hit someone depending on conditions.

There have been suggestions that self-driving cars could make cities interesting for pedestrians, since in a place like New York pedestrians would no longer be afraid to just walk into traffic. The cars would just wait for a fully clear moment that may never come.


It's this adversarial environment that I fear is the complication.

As you say, if you know the cars will stop and let you go, this power will corrupt.

Extend this thinking to bicyclists, skateboarders, delivery riders, motorcyclists, car enthusiasts driving themselves - all able to simply cut off a self driving car knowing it will give way.

But there is more adversarial out there.

AI input hacking to get known responses. Cars disabled because of gang tagging on sensors.

Car enthusiasts or privacy enthusiasts (privacy shouldn't require enthusiasm but here we are...) updating car firmware/software. People with full physical access!

And we haven't touched on the ethical stuff which is a world of discussion in itself.

Self-driving cars arriving by evolution within current societal rules, etc. will take a LONG time.


> if you know the cars will stop and let you go, this power will corrupt.

> Extend this thinking to ... - all able to simply cut off a self driving car knowing it will give way.

But this is already the case; if I jump into slow moving traffic in a (western) city centre the cars will let me through, possibly even without honking. The only things to fear are distracted drivers (which is the same with the AI, because I can't know whether the AI sees me) and the occasional madman, but those are rare.

What's more interesting, as you point out, are situations where other drivers will take advantage of the AI's adherence to rules and safety. That's going to be especially fun in cities like Napoli, where things like driving through a red signal are completely normal. It's probably not going to work there at all in its current state.


It's long past time that we turn the streets back over to pedestrians. Communities are much more enjoyable when car-free.


>As you say, if you know the cars will stop and let you go, this power will corrupt.

Would stepping out in front of a car be illegal?


That's the thing, isn't it?

If "person walks in front of car" event is rare, then the automatic car stop saving a life or minimising injury is nice. Saved a person from a bad day for a minor inconvenience for the car people.

If a New York City's worth of people do it without regard, then cars will be gridlocked.

My guess is:

- Assuming self-driving cars that stop for pedestrians are common

- then laws against deliberately walking in front will exist

- to judge the dispute of "deliberate" there will be more cameras, because why wouldn't there be...


Which, in the abstract, is an interesting theoretical question, but California recently decided to basically legalize jaywalking.

https://laist.com/news/transportation/understanding-new-cali...


> [...], since a place like New York would have every pedestrian no longer afraid to just walk into traffic.

London already has near suicidal pedestrians.

And I say that with respect: it's the opposite of the usual tragedy of the commons. Every pedestrian in central London who just crosses the street without looking places themselves in appreciable danger, but contributes a tiny bit to training London drivers to be extra careful.

(Going off on a tangent: tipping is an interesting opposite.

If everyone in the US stopped tipping tomorrow, economic equilibrium would shift so that service sector people would have comparatively higher nominal wages and roughly the same overall income. That's how it is in Japan or Australia or Singapore (or the US before the 20th century), where tipping is either less common or severely frowned upon.

Because eg prospective waitstaff in the US expect to be tipped, they accept lower wages. Similarly, a single patron not tipping in the US gets a lot of social awkwardness without shifting the equilibrium much. A single waiter in the US not accepting tips loses out on income and also gets social awkwardness, also without shifting the equilibrium much.)


Potentially, but sometimes I think we do forget how surprisingly good humans can be at this, and how hard it can be for computer detection systems to work well enough without also having a high number of false-positives where it will slam on the brakes for something it didn’t need to stop for (and in doing so, potentially even causing an accident!)


Like my driving teacher told me in lessons (to admonish my safety focused thinking): do you really think everyone you're sharing the road with has a driving license? I guess he has a point. There are people driving out there without a license, or with suspended license, and all kinds of various situations. Which is why we drive defensively and have to be ready for drivers that don't always follow the rules.


I would argue that someone without a driving license will drive more carefully so as not to get caught. Also, a suspended license doesn't make you forget the rules.


> Also suspended license doesn't make you forget the rules.

No, but there's probably a good reason it's suspended. It's not that you forgot the rules. It's just that you're not following them.


You can get your driver's license suspended for all kinds of driving-unrelated stuff, including not paying child support (which is what I don't get at all - how is someone without a driver's license supposed to go to work to make the money?!).


"how is someone without a driver's license supposed to go to work to make the money?!"

Presumably by paying their child support.

I don't know if it works as intended, and it wouldn't surprise me if people have horror stories about clueless rulings by clueless judges, but obviously the intent is that the license is suspended because the person is intentionally not paying child support.


> > Also suspended license doesn't make you forget the rules.

> No, but there's probably a good reason it's suspended.

Not paying the renewal fee on time is a big reason. That's a rule, indeed, but nothing related to driving or safety.


That seems rather naive. Most people driving without a drivers license probably aren’t thinking that rationally…


My guess is that self driving cars are already far, far better than human drivers at not hitting pedestrians.


My guess is that it depends a lot on the driver. There are a lot of really reckless drivers, or really bad drivers, out there on the roads right now. I know people who've been in several (relatively minor) car crashes, driving the same-ish sort of route to/from college and work that I was during the same period. I think they're just bad drivers. I feel like they're driving recklessly when I'm a passenger in a car they're driving. So a self-driving car might be a better driver than them, while also being worse than the median driver, and much, much worse than the upper portion of the human driver spectrum, like the best 20% of human drivers.

Given how reckless some drivers are, I'd imagine the distribution of crashes among drivers is very skewed. I don't know numbers, but I'd expect to see something like 80% of car crashes involving (or objectively caused by) 20% of the drivers, with most car crashes involving people who have been involved in numerous crashes, and a large minority of drivers having been involved in zero car crashes.
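That 80/20 intuition is easy to play with in a toy simulation. The sketch below is purely illustrative: it assumes an invented, heavy-tailed (Pareto-like) distribution of per-driver risk and Poisson crash counts, not real accident data.

```python
import math
import random

random.seed(0)

# Toy model: per-driver crash risk follows a heavy-tailed (Pareto-like)
# distribution, and each driver's crash count over a driving "career"
# is Poisson. Every number here is invented, not real accident data.
n_drivers = 100_000
rates = [0.02 * random.paretovariate(1.5) for _ in range(n_drivers)]

def poisson(lam):
    # Knuth's multiplicative method; fine for the small means used here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

crashes = [poisson(rate * 20) for rate in rates]  # ~20 "risk-years" each

total = sum(crashes)
ranked = sorted(crashes, reverse=True)
top20 = sum(ranked[: n_drivers // 5])  # crashes among the riskiest 20%
zero = sum(1 for c in crashes if c == 0)

print(f"share of crashes from the worst 20% of drivers: {top20 / total:.0%}")
print(f"drivers with zero crashes: {zero / n_drivers:.0%}")
```

With a sufficiently fat tail, a majority of crashes concentrate in a small minority of drivers while a large fraction of drivers never crash at all, matching the guessed shape above.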


I think the important point is that self driving cars are likely better than the averaged real world population of existing human drivers at avoiding pedestrians.


The average is not the only stat though, and doesn't tell everything. Even if it were true that FSD was safer than the average of human drivers, replacing some manually-steered cars with FSD cars could result in more pedestrians getting hit, depending on the breakdown within the two groups. The overall average isn't enough information and obscures the internal distribution of skill/safety within the overall population of drivers.

It matters who is driving what cars. If you're taking safe drivers and having them drive FSD cars, and comparing the average of that against the average of all non-FSD driven cars (which includes a lot of really shitty drivers in the average), then the net safety impact of this could be nothing. Introducing FSD cars would have no impact to overall car safety, and might even have a negative impact, if the FSD software is worse than the good drivers who are now not driving manually-steered cars but are driving FSD cars instead.
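The distributional point can be made concrete with a back-of-the-envelope calculation (all numbers invented): FSD can beat the fleet average yet still make the fleet worse, if only the safest drivers adopt it.

```python
# Toy numbers (all invented): crashes per million miles.
good_drivers = 0.5   # safe drivers, the likely early FSD adopters
bad_drivers = 4.0    # the risky minority
fsd = 1.0            # better than the fleet average, worse than good drivers

# Fleet of 80% good drivers, 20% bad drivers:
fleet_avg = 0.8 * good_drivers + 0.2 * bad_drivers
print(f"fleet average before: {fleet_avg:.1f}")  # FSD at 1.0 beats this

# But if only the good drivers switch to FSD, the fleet gets worse:
after = 0.8 * fsd + 0.2 * bad_drivers
print(f"fleet average after:  {after:.1f}")
```

Here FSD is "safer than average" (1.0 vs 1.2), yet adoption raises the fleet rate to 1.6, because it replaced drivers who were safer than it.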


You might think that the average is not too terrible, but wow, there are some people at that one pointy end of the bell curve that make you shake your head.

I present, "Canada's Worst Driver," a series that I watched quite a bit when I had a cable subscription. Besides being hilarious, it should make you think a time or three before crossing the street. https://www.youtube.com/playlist?list=PLHB4WuM7Y_eP-PFv5F6To...


> My guess is that it depends a lot on the driver.

There is a lot of variation in drivers. Young ones with good reflexes and senses but bad on experience, old ones with worse reflexes and senses but better on experience. Ones that stop at stop signs, ones that blow through them if they think they can get away with it. Cities like New Orleans where drivers are much more aggressive because the cops don't care, etc...


The issue, I speculate, is that the tradeoffs are more imposing than with a human. Sure, I accept that the car can drive more safely than a human (all it has to do is never exceed 1 mph), but as safety increases I speculate it involves accepting tradeoffs like cars getting stuck, driving slowly, blocking traffic, hard braking when not necessary, or even a restricted operating domain.


By your use of German, I assume that you experience a far higher driving standard than people in North America and frankly most of the world, and even that is far from perfect.

The Vertrauensgrundsatz does not account for distracted, tired and inebriated drivers.


On the other hand, you might be able to let self driving car take you anywhere instead of asking someone give you a ride.


I seldom ask others for a ride; I prefer to pay a taxi cab. And no, I would still prefer a taxi driver over an autonomous vehicle even if they were available, because a human is still very much useful when it comes to the last 50 meters of a trip. I often ask for guidance to find the correct entrance, for instance. BTW, your argument is very cheap, I tend to hear it often. "You are afraid of robots? Here is one, play with it, you will like it!"


I don’t think a self-driving hearse will calm their nerves


People showing up to their destinations dead, having passed away in a self-driving vehicle, is inevitable. May as well make it work on our terms.


>Sure, one might say these new self driving systems will "just" have to follow that same rule as well,

Next round of CAPTCHA: click the squares with blind pedestrians

Think about it: this is how these things are being trained--by people frantically trying to click stupid images just to get to the webpage they wanted.


As a pedestrian who isn't disabled but just inattentive and erratic, I know I will feel much safer around 100 waymo cars than 100 human cars.


The crucial word in your sentence is "feel". If you think technology can only do good, you will likely feel safer around AVs. And since I have an inherent distrust regarding complicated tech, I "feel" less safe with more automation. Both of our feelings are subjective. The question is, will we be less or more safe?


Depends on the safety statistics of the specific tech. Around Waymo? Safer!


I wonder if we’ll all start wearing some kind of markers that these cars can more easily pick up. Like a blind person’s cane could be a specific hue or be otherwise easy to detect by driverless cars. Or cars could detect the UWB signal (like AirTags use) from our phones which is very precise.

Basically, maybe there’s better ways than standard vision (which for computers is still kinda basic compared to a person) to solve the pedestrian hitting problem.


Yeah but what a sad dystopia where you need to wear a beacon just to be outside without risking death, especially if like jogging on a sidewalk or riding a bike.

The cars are clearly the danger to everyone who isn’t in a car, why would the burden be on everyone else, all the time?


To be completely fair: joggers and bicyclists already wear such beacons.

I really don't want to be fair though, because I think your point is totally correct.


Why?

Because reality.

Since the advent of motor vehicles, I would argue risk has generally scaled with the efficiency and power of vehicles.

At the minimum, for 70 years we have known that danger to both pedestrians and safe drivers is ever present and growing. The reality is not about who SHOULD deal with this problem, but how to most effectively arrive at a functional solution which actually reduces risk for drivers and pedestrians alike.

As well, it's worth noting that most accidents occur in the areas where you drive most often and are most familiar. And the increase in traffic accidents and fatalities over the last 10+ years has been due to driver distraction, i.e. cell phones etc., more than anything else by a lot.

Yes there are reckless drivers out there doing stupid shit. However, that's always been the case. Now it's just easier than ever to capture them on camera and for them to go faster than ever for cheap. However, even back in the 70's, crash safety technology was laughably behind anything we have so it goes both ways imo.


> As well, it's worth noting most accidents occur in areas where you drive most often

It would be more remarkable if most accidents occurred in areas where you rarely go.


Where I am from, there used to be a time when certain people had to wear markers. We dont want to be reminded of that.


This is one reason I never understood Tesla's vision-based approach. In order to be accepted, self-driving cars don't need to be just somewhat better than humans most of the time. They need to vastly better in every situation, as you mention, to the point that they'll need every sensory advantage they can get.

I got out of a ticket once because I didn't see a "no through traffic" sign against a bright sunset. No chance that same cop gives a self-driving car a pass, nor should he.


One of my biggest concerns about Tesla's vision based approach is that it appears to be entirely about cutting costs. Nothing about it says they actually think it is superior; the cameras on a Model 3/Y are mediocre. They were mediocre the day the Model 3 was first released. If you were going to rely on a vision system in a serious way, you'd at least invest in better camera tech. Hell, Subaru EyeSight has a significantly better camera setup, last I checked, and who looks to Subaru as a technical leader?

Someone else said it here on HN, and I think they're absolutely right -- Tesla is all about vertical integration, and this is preventing them from excelling at anything other than saving pennies. A good part of why the new EV competition is doing everything better is they didn't roll their own tech. They bought packaged solutions from companies that only do one thing, but do it well.


FWIW, the Subaru system is fantastic — I use it daily and it’s very predictable and literally never been dangerous. And works really well in stop-and-go traffic!

It’s not FSD, but if you use it as a driver assist, it takes a giant amount of burden off of driving in traffic. Even on a busy highway that’s going somewhat fast, it’ll just follow the car in front of you. And truly, it’s never once been dangerous.

Only catch is that it can’t “see” very long distance — if you’re going 70mph and there’s a stopped car in your lane, it probably won’t stop (at least, I wouldn’t test it!). But again, it’s driver-assist and not FSD, so you’re still expected to notice that. And within its limitations, it is really exceptional.


The Subaru EyeSight driver assist system is pretty good. But I have seen it suddenly deactivate itself several times in heavy precipitation.


On the contrary, you can hear Andrej Karpathy explain how other sensors produce noise & are too low bandwidth when compared to vision. Other full self driving systems will probably follow suit & get rid of impossible to calibrate lidar systems. How are the cameras mediocre? Do you want even higher resolution cameras? This would cause compute turmoil. https://www.youtube.com/watch?v=_W1JBAfV4Io


Have you driven a Tesla on Autopilot at night in heavy rain with no visibility of the lanes? I did, and I was amazed at how good it was. I couldn't see the lanes, but it did, and judging by the picture on the screen it saw them very well. More pixels doesn't mean better, the same as more RAM doesn't mean better (see iOS).


Yes, I'm on my second Tesla in fact.

I have even earned the dubious honor of having been pulled over for suspected drunk driving because I used AP on a trip home from my MIL's house after Thanksgiving dinner. I was amused by how much trouble AP was having with mild corners on I-5, the state trooper behind me was less amused. Well, at least until he established that I was sober, then we had a good laugh about autopilot and he gave my kids sticker badges. A happy ending for everyone but I didn't activate AP again that night.

Maybe the problem was that it was clear and dry? Ya think it would have been better with some road spray on the cameras?

Subsequent to that, I ended up abandoning AP altogether because it was so jarring when it would decide an overpass looked like something it should slow down for. My wife suggested that she'd be happier if I just used my foot for the accelerator. For some reason we still ended up buying another one, but after the phantom braking on day one (empty freeway, overpass, apparently this is very much now "a thing"), I guess I'll just skip AP before even starting. I wonder if I could ask Tesla to deactivate AP altogether so I could have traditional cruise control?


Sounds like Tesla should update their autopilot software to force itself to be enabled and prevent driver control at night when it’s pouring rain.


Karpathy claimed they worked for years and could not reap the benefits from multiple sensors however hard they tried. He seemed really convinced and does not get to me as one who tells stuff just to justify cost reductions, like Elon sometimes is carried away.


It’s also possible they weren’t able to see benefits given the processors that were available to them at the time.

It’s also possible that they were so far behind Waymo in the journey to FSD that they weren’t yet at a point where multiple sensors would make a significant difference.


This is my read on it as well. I listened to the same interview (it was definitely on the front page, if not at #1 for a while). I stepped away feeling like Karpathy had described a lot of good business reasons for not using other sensors, but not a lot of good technical reasons. Sensor fusion is hard, yes, but maybe not harder than perfectly re-projecting 2D pixel images into 3D vector space.

Just my interpretation, but it felt to me like a hail mary because they were fully committed to being the first mover. Waiting around for LiDAR prices to come down would have meant that Waymo would have beat them.


The problem is, in reality, we have Tesla vehicles that sold with other sensors that are now disabled, and those cars are now very difficult to drive with any automation enabled, even simple cruise control, due to the limitations of vision-only driving. And this is leading to them slamming on brakes at inopportune times, blinding oncoming drivers with the bright headlights, etc. - issues that did not happen when other sensors were used.


In reality, we have Tesla vehicles that sold with other sensors that are now disabled, and those cars are now better to drive with automation enabled than with the extraneous sensors, even simple cruise control is better and phantom braking is actually decreased vs spurious radar returns that existed previously, due to the limitations of low resolution radar. Auto high beams is now better at not blinding oncoming drivers with bright headlights, etc. - issues that happened all the time when other sensors were used previously.


How is your experience the direct opposite of the parent post, given the same change?


Believe it or not, if you spend enough time with a Tesla, you quickly realize that actually detecting things is a solved problem.

The thing they need to improve and are doing so rapidly is actual trajectory policy calculations.

And that’s not going to get better with more sensors seeing the same things it already sees.


This claim is not true.

As someone who drives a Tesla with the FSD beta, the vehicle has been getting progressively better since 2018.

It drives smoother and brakes more predictably since they stopped using the front radar.


This claim is not true.

As someone who owned a 2019 Tesla and who owns a 2023 Tesla, the older car had better autopilot. It was starting to degrade in 2020. The new one is worse. Phantom braking was quite rare in 2019. The very first night I drove home the new car, it phantom braked on a lonely, empty stretch of I-5.

I want the radar back.


The FSD stack is disabled on highways. You are using the years-old code. Beta v11, when it comes out, will enable the improvements OP referenced for highways.


I used to mock my relatives for not wanting to put their kids in a Tesla. But yeah, I'm not subjecting my family to beta software for a critical safety feature, nor should anyone else.


I definitely have some relatives that I would trust less than Tesla to drive with my kids.


Wasn't the Summon feature downgraded when they moved to a vision-only approach? One YT video I saw compared a version 1 and a more recent version and the vision-only wasn't able to do as much. Contradicts the idea additional sensors do not add value.


I think the idea is less "additional sensors don't add value in various scenarios" and more "we are 100% certain a vision-only system can perform well on existing road infrastructure; we are not sure about sensor fusion systems".


You are misunderstanding the situation. Karpathy claimed that sensor fusion of radar, sonar and vision isn’t working well. He made no such claim about Lidar. Lidar is the sensor that is the crucial difference between Waymo and Tesla's approach to self driving.


The reason he claimed sensor fusion was not working well was due to vendor versioning. He claimed the same sensor from different manufacturing batches performed differently and thus needed to be re-characterized, which then has follow on effects in various math models. Multiply this by many sensors, and the need for replacement parts inventory for a decade or two and the problem becomes intractable was his claim on his most recent appearance on the LF podcast.


Almost sounds like a supply chain/manufacturing problem than a software problem.


Then something else was the bottleneck at the time. It is very easy to prove that some sensors in some situations will be able to perceive things that other sensors cannot. In those situations the additional sensors are crucial first steps. I would guess the bottleneck is shitty reliance on statistical machine learning with a long tail of unhandled edge cases. Each case very uncommon, but in aggregate a very important sum.


If you're referring to the Lex Friedman interview, at the start of his answer, he mentions it was a cost-based decision. And that radar/ultrasonic wasn't worth it for them, due to the additional time it took. Not that it wasn't helpful, just more effort that could be better spent elsewhere.


He clearly states that extra sensors "contribute noise and entropy into everything. And they bloat stuff." https://www.youtube.com/watch?v=_W1JBAfV4Io

Essentially that trying to utilize multiple sensors cripples any progress (given that resources will never be infinite).


While technically true that extra sensors contribute noise, surely with the proper programming they should be helpful. You just need to weight the information from your extra sensors appropriately, based on your confidence that the signal is accurate.

For example, if your lidar sensor is 99.999% percent sure there is an obstacle in front of you, surely it's helpful to take that information into account, even if it is a tiny bit uncertain/noisy.
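One standard form of that weighting is inverse-variance fusion, where each sensor's estimate counts in proportion to its certainty. A minimal sketch; the sensor names and numbers are invented for illustration, not anything Tesla or Waymo actually does:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent sensor estimates.

    estimates: list of (value, variance) pairs, one per sensor.
    Returns (fused_value, fused_variance); lower variance = more trust.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Illustrative: distance to an obstacle in metres.
# Camera is noisy at night (high variance); lidar is confident.
camera = (12.0, 4.0)   # thinks ~12 m, variance 4
lidar = (9.5, 0.01)    # thinks ~9.5 m, variance 0.01

fused, var = fuse([camera, lidar])
print(round(fused, 3), round(var, 5))
```

The fused estimate lands almost exactly on the confident lidar reading, which is exactly the point above: a noisy extra sensor, properly down-weighted, can only sharpen the estimate, never drown it out.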


It's too bad that none of their competitors, who all have better results, disagree.


Wouldn't that be more related to parallel processing power and throughput capabilities than feasibility?

More data in any situation where bandwidth is already maximized will lead to entropy and noise. However, if the capabilities were there to process all of that data in low latency scenarios with headroom to spare, surely adding additional sensors and data points would lead to a more complete model of spatial awareness for the car.

That's all hypothetical and reliant on ignoring the realities of running a business and tech development lol


A bit like our discussion here. More standpoints can ideally be merged into a coherent, more complete picture. The key is to sort out disagreements as coming from different backgrounds and biases, with everybody admitting they are not 100% right. Otherwise it is a quarrel of stubborn know-it-alls that can't agree.

Fusion is hard. As hard as getting humans to agree. Been there. In both situations.

And of course you can concentrate on improving one echo chamber, err single sensor. But you can never come past its fundamental limitations.


I always thought it came down to two reasons: cost and looks. Tesla has to sell a car people want to drive daily. They can't have a bunch of lidar sensors on their cars; no one would buy them even if they could drive themselves. Also, they would most likely cost a lot more, since lidar sensors are not cheap compared to normal cameras.

Unlike Waymo, which doesn't care about selling cars for people to drive daily. No one is going to care that the taxi they are taking looks ugly as long as it gets them where they are going for cheaper. The cost is also less of a factor, since the cars can produce an income by charging people to ride in them.


> They can't have a bunch of lidar sensors on their cars

Why not? There are production cars with lidar now that isn't the big spinning thing on top of the car.


I also never understood why we had to use "vision" approaches that have the same visual spectrum as what humans see. Any sort of sensor on a device is already synthetic, why limit the spectrum that you attach it to? Should use light sensors, sound sensors, gps, everything.


So far nobody has really cared how road markings, signs, etc. look outside of the human visual spectrum, so there's likely a lot more variation there, both in how things look when new and in what counts as acceptable wear.


Right. But that is in designing the existing roads. I'm talking about the cars. And I'm specifically asking why not adding more options? I don't mind a camera being part of the solution at all. Gives an obvious path to human labeling of training data. But, why not have more?


We have a good idea how all this stuff looks in the infrared spectrum, and that could help a lot in the sensor fusion.


And even then, the best cameras today are still a long ways off from matching human eyesight. Tesla's cameras are not state of the art, either.


I wouldn't be shocked if the best cameras are better. :)

Point stands that the ones being put in the cars aren't.


The best cameras have much lower dynamic range than eyes (particularly in low light), and much longer times to refocus. Not to mention higher feedback latency when focusing on a particular object: eyes and brain are very tightly connected.

The eye is an incredible piece of machinery, specifically for dynamic processing like this.


I meant that mostly tongue in cheek. That the best camera ever produced may be better than our eyes. I wouldn't be shocked if such a thing exists. Probably target large and would be way more than a single sensor camera, though.

And agreed that eyes are impressive.


Right so to approximate the capability of human eyes you would need multiple coaxial cameras with varying apertures. Then stitch the images together in software.
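That "stitch the images together" step is essentially exposure fusion: for each pixel, prefer whichever capture is neither crushed in the shadows nor blown out in the highlights. A toy sketch on 1-D "images"; the thresholds and pixel values are invented for illustration:

```python
def exposure_fusion(exposures, lo=0.05, hi=0.95):
    """Merge same-scene captures taken at different apertures/exposures.

    exposures: list of lists of pixel intensities in [0, 1].
    A pixel sample is "well exposed" if it is neither crushed (< lo)
    nor blown out (> hi); each output pixel averages the well-exposed
    samples, falling back to a plain average if none qualify.
    """
    n = len(exposures[0])
    out = []
    for i in range(n):
        samples = [img[i] for img in exposures]
        good = [s for s in samples if lo < s < hi]
        pool = good if good else samples
        out.append(sum(pool) / len(pool))
    return out

# Same scene, short and long exposure (values invented):
short = [0.02, 0.40, 0.90]   # shadows crushed, highlights intact
long_ = [0.30, 0.80, 1.00]   # shadows recovered, highlights blown
print(exposure_fusion([short, long_]))
```

Each output pixel comes from whichever exposure actually captured detail there, which is roughly what combining multiple apertures buys you over any single sensor.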




They are standing firm on the vision-only approach because it is the correct approach. FSD cannot be perfected unless the Tesla team puts 100% exclusive focus on perfecting vision based models that don't have lidar as a fallback. Tesla can only use lidar again for redundancy only after vision is fully solved problem.

https://tvtropes.org/pmwiki/pmwiki.php/Main/BurningTheShips


So the beatings will continue until morale improves, eh?

This is like saying we need to get to the moon using steam power and ONLY THEN we can improve by using rockets. What if it's completely infeasible to solely use vision for FSD?

Even humans use audio processing to augment the vision.


Combining multiple sensory inputs is a hard problem in itself. Say Tesla uses lidar with vision: they can have a great vision experience and a great lidar experience, but fusing the two together can create lots of problems that wind up being worse than either by itself.


I agree with you entirely.

Continuing with your hypothetical, even though we’d be 90% safer as a collective, the safety of the individual feels compromised: the risk of an accident is non-uniform when involving humans (depending on e.g. age, experience, safety, alertness, etc.), but becomes uniform (or at least more uniform) with an algorithm in charge.

That’s a tough thing for people to buy into.


A very good observation. Based on your comment, I think we can relax the requirement stated by OP by saying:

"Until Waymo's cars reduce any individual's chance of an accident."

So for example, suppose a Waymo car is better than humans overall, but tends to do worse than humans when there's a small bump on the road. And suppose that all humans (in a given regulator's area, e.g., California) tend to encounter such bumps at roughly the same rate (per mile driven) over their lifetime. In that case, it's probably going to be acceptable, since every individual is better off.

I don't know, maybe this is not impactful / obvious enough for people to care about?

What certainly is obvious is that the safest drivers are much safer than an average driver (does anyone know of a study that estimates this ratio?). Therefore, at the very least, the threshold for Waymo should be not the average accident rate, but the accident rate for the safest drivers.


> the safety of the individual feels compromised

Exactly. I have had zero accidents in 20 years; I'm not interested in a car that will lower the overall accident rate if it increases mine.


Even the most perfect defensive driving won't fully mitigate the unsafe driving of others though. I'd rather have them to be in autonomous vehicles.


> I'd rather have them to be in autonomous vehicles.

Who is "them" and who allocates the groups?

We can't even get any kind of car control skill testing or training to be part of a driver license issuance.


Sure, but nobody thinks they are the bad drivers.


I think you're mistaken. There are still massive technological challenges. I have seen nearly no evidence that current self-driving car technology is even remotely close to matching the ability of a novice human driver. Sure while the "don't crash into things" algorithms may generally be fine, these systems seem to frequently deadlock in completely mundane situations. They also seem dependent on remote operator assistance when encountering non-ideal conditions, greatly limiting their maximum speed.

If anything, legislation and social acceptance have moved faster than the technology. That's the opposite of what many of us observing this space expected 10 years ago.

At this point I'm starting to have doubts about whether the full dream of self-driving cars will even be realized within my lifetime.


I read this comment after taking a Cruise in SF, which is a self-driving cab with no driver. It basically reminds me of all the comments saying that VR has no future, written by people who have never tried VR and would get their minds blown if they tried the latest iteration. Maybe you should come to SF and try one of these self-driving cars yourself :)


I actually do live in the Bay Area and spend a lot of time in San Francisco. I applied for the Cruise waitlist well over a year ago but have not been accepted. I've tried to organize with friends who have access but we rarely have a reason to go to the Richmond or Golden Gate Park after 10PM. The coverage area is very limited.

I'm impressed that they're actually offering driverless rides on SF streets, but my point stands. The cars operate only on the slowest streets at the quietest hours. Any problem they encounter is handled by remote operators.

I'm not outright dismissive of self-driving cars. I truly want them to exist. I don't even own a car and dislike being behind the wheel. I just don't buy into infinite hype pushed by a revolving door of charlatans.

Also I do have a modern VR headset and celebrate the technology. But, to make a similar comparison, the metaverse "ready player one" vision is not within our lifetimes.


They don’t operate only on the slow streets. They operate mostly on a large rectangle in the north of SF (basically where I live so perfect for me).


How does having taken a ride make you an expert? That's a ridiculous argument on a level with trying to shut someone who criticizes Facebook's algorithms down by saying I use Facebook and never had an issue.

If you're into self driving tech, Cruise especially is regularly ridiculed for their vehicles blocking traffic. Take this latest example: https://twitter.com/as611/status/1597144790767788032

But more importantly, in case you're not aware, it's not true self-driving at all: they're geofenced, and both Waymo and Cruise have been shown to have issues with even minor changes like a construction site. The tech isn't there, we don't really have full self-driving, and if we actually put a million "self-driving" cars on the road as the above user suggested, it would be utter mayhem with current tech. Whether this will change in the future remains to be seen, and from what I've observed, progress has already slowed, as per the ninety-ninety rule. It's easy to have some autonomy functions. It's hard to have millions of fully automated cars. So far, we're nowhere near it.


Funny, because VR has no future outside of some niche areas. It's the new 3D TV. The average person doesn't want a computer strapped to their head all day.


I ALWAYS thought 3D TV was the biggest gimmick. I was the first to point it out. And I’ll be the first to tell you that VR is freaking mind-blowing and your comment is going to age poorly.


You might be right, but have you seen this: https://old.reddit.com/r/electricvehicles/comments/z9b1rq/wa...

This level of sophistication makes me think it will not "frequently deadlock".


A bigger problem is this: Say you need to prove to the public that the autonomous car is significantly safer, and you do an apples-to-apples comparison between a hypothetical Level 4/5 car and well designed new Level 2 electric car like a Volvo C40 or a BMW i4.

The modern Level 2 car is already today at below 1 fatality per billion vehicle miles travelled (VMT). The autonomous car then needs to be below 0.1 fatalities per billion VMT. Meaning that if you have 1 million vehicles of your make deployed, they each need to have driven 30,000-40,000 miles autonomously before you have enough statistics!

That means proving the safety of a Level 4/5 autonomous system is extremely expensive and slow, and requires significant public adoption before it's proven to be safe. The consequence is that, assuming proven safety is necessary before public adoption, it becomes impossible to prove safety.

Another point is that OTA upgrades for autonomy become entirely pointless, as you'll be polluting your statistics if you change the code more frequently than every ~3 years.
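To sanity-check that 30-40k figure with a quick one-sided Poisson sketch (my own made-up helper; this assumes the simplest possible case of zero observed fatalities and a 95% confidence bound):

```python
import math

def miles_needed(rate_per_mile: float, confidence: float = 0.95) -> float:
    # Fatality-free miles required before a one-sided Poisson test
    # bounds the true rate below `rate_per_mile` at `confidence`:
    # P(0 events in m miles) = exp(-rate * m) <= 1 - confidence.
    return -math.log(1.0 - confidence) / rate_per_mile

target = 0.1 / 1e9        # 0.1 fatalities per billion VMT
fleet = 1_000_000         # deployed vehicles

total_miles = miles_needed(target)
print(f"{total_miles:.2e} total miles")           # ~3.0e10
print(f"{total_miles / fleet:,.0f} per vehicle")  # ~30,000
```

Any actual fatality during the trial pushes the required mileage up further, so the 30,000 miles per vehicle is a lower bound.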


> The modern Level 2 car is already today at below 1 fatality per billion vehicle miles travelled (VMT)

Does this statistic count fatalities to people outside the car? I ask because (in the US, at least) car safety ratings don't take those into consideration.


> in the US, at least) car safety ratings don't take those into consideration

Which ratings? The NHTSA does include non-occupant fatalities in their statistics. E.g. https://www.nhtsa.gov/press-releases/2020-traffic-crash-data...


NHTSA safety ratings (https://www.nhtsa.gov/ratings) only count people in the car. It looks like they proposed a change to that this spring (https://www.nhtsa.gov/press-releases/five-star-safety-rating...), but I don't think they've implemented it yet. From that press release:

> For the first time ever, [the proposed rating program] includes technology recommendations not only for drivers and passengers but for road users outside the vehicle, like pedestrians.

News story on this: https://www.theverge.com/2022/3/3/22960262/nhtsa-ncap-five-s...


I think you’d use crashes, not fatalities.


To me, self-driving seems like a band-aid fix for terrible car infrastructure. Driving a car is already one of the most dangerous things the average American can do. Due in part to larger, heavier vehicles, high speeds in residential areas, etc. Even if you had a "perfect" driver that doesn't prevent someone from ramming into you.


> one of the most dangerous things the average American can do

About comparable to the risk of falling. Lower risk than suicide. Or death by opioid overdose. And of course, the most dangerous thing most Americans do, by a huge huge huge margin, is overeat and lounge on the couch.




I agree it's a social problem, but IMHO it's a rather different social problem: Current cars, roads and car culture is adapted to human drivers, and AI is expected to be able to integrate into that.

What if we made cars and rules that are adapted to AI cars and ignore human drivers? e.g. Ban human drivers from some roads, allow AI cars with designs that exploit AI advantages (e.g. much better reaction time) but do not require or even allow human backup (enabling us to put the passengers in a secured shell), etc. I suspect we could then reach a 1/20th rate today.


Self-driving trains are much easier to implement, so why are there not that many systems capable of doing that? Many newer metro lines are GoA 2 or 3, theoretically capable of running autonomously, but they always require a driver in the loop.

My partial answer is that making an extremely reliable system is hard. If someone wrote a deadly bug that only manifests in a rare corner case, it can still kill people. And it's quite hard to prove there are no such bugs.


We don’t have self-driving trains because there’s far, far less incentive.

You only need 1 or 2 drivers for a huge train carrying a lot of people/cargo. Optimizing away the driving barely reduces the cost of operating the train as a whole.

With cars, everyone drives themselves. There’s a lot more driving happening and so if you can automate that away, you create more value.


Regulations. Nobody ever got fired for buying Microsoft or for requiring a driver in the seat.


> Until Waymo's cars are better than most humans in every single situation, they won't be able to win over the public perception war.

The current situation is basically the opposite: Waymo's cars are better than humans in almost zero situations. It's hard to gain my trust when your car can barely drive in a drizzle.


I agree.

> But, the ones they do have are accidents a human driver would almost certainly have avoided.

I suspect most human-driver accidents are also accidents that (other) human drivers almost certainly would have avoided.

That's scant consolation for all the people dying in traffic accidents each day, of course.


It is a valid point but the financial incentives are so big that some jurisdictions will allow it. In fact they already do allow these autonomous systems on public roads. That is going to continue to expand and since the financial incentives are huge even when deaths happen the governments will continue to allow it.

And in fact some regulators fully understand the tradeoffs and will prefer autonomy for the better good of the public. An example of this is the Boeing 737 Max, those crashes wouldn't have happened if there were no autopilot systems. But regulators are not suggesting that all autonomous systems on planes be turned off because of the safety and financial advantages of keeping them in place even though they are obviously not perfect.


> Boeing 737 Max, those crashes wouldn't have happened if there were no autopilot systems.

Bad example. MCAS was an obvious case of a criminal corporate behavior, not a tradeoff between overall safety vs. technical perfection.


It doesn't matter, the point is that relying on autopilot will open up crash risks that do not exist with manual flight controls


It does matter! I'm assuming that any sane person, especially a highly trained professional, is not fully relying on automation and has at least some idea of its limitations. In the case of MCAS, corporate fraud distorted the level of trust pilots had in automation, leading to loss of life.


I think this is more true than not. But also underestimates the technical problem.

Airplanes are not fully autonomous, even in the instrument flight rules system which is highly standardized. You don't have non-standard or non-predictable things. In instrument conditions, only instrument planes and pilots exist. While portions of segments are automated, transitions between segments and phases of flight aren't. It's ripe for automation, yet isn't fully automated.

The automobile environment has more objects thus more density of complexity, more non-standard and non-predictable actors like non-autonomous vehicles along with bicycles, mopeds, pedestrians, etc.


Yup, exactly. "There was a 1% national accident rate before autopilot but now it is 0.5%, aren't things great?" Not really, because my personal accident risk just went up from 0.1% to 0.5%.
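
A toy calculation with made-up numbers shows how both of those statements can be true at once:

```python
# Made-up numbers: half the drivers are safe (0.1% annual risk),
# half are risky (1.9%); an autopilot gives everyone a uniform 0.5%.
human_risks = [0.001] * 50 + [0.019] * 50
autopilot_risk = 0.005

mean_human = sum(human_risks) / len(human_risks)
print(mean_human)                         # 0.01: fleet-wide risk halves
print(autopilot_risk / min(human_risks))  # 5.0: safest drivers are 5x worse off
```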


But you have no issues with paying for social insurance when you never use it yourself? Isn't this the same kind of tradeoff?


How is it the same tradeoff? I'm not increasing my chances of getting sick by buying insurance. The worst case is that I won't use it, which is fine. The worst case for autopilot is that I'll get into a crash because the camera couldn't see a massive truck in front of me because of glare and shoddy software.


It is not only a social, but a legal issue, too. If a human kills someone in an easily avoidable accident, breaking the traffic laws, he may go to jail and/or lose his driving privileges for a while. If an artificial neural network does the same, should it lose driving privileges for a while? Should someone go to jail?


To double down further on the social side of this, public transit is largely getting there faster than point to point driving. In that many trains and such are already largely "hands off the wheels" for operation.

Relatedly, another "problem" with "self-driving" cars is that we want all of the convenience and ease of use, without adjusting liability and ownership considerations. Consider, if Waymo gets to the point where they have a self driving car that you have to have a subscription to use, do you own the car? Are you the liable for any accidents it has?

To lean in on that hypothetical: I'd imagine a lot of families will use self-driving cars to send kids to school. It's effectively a bus that terminates at your house. Who is liable for a mistake if the operation of it is completely remote?


> public transit is largely getting there faster than point to point driving

Aside from some niche cases in very dense cities, is this generally true anywhere? I've visited a lot of cities with various levels of public transit, and I can't think of many where it was faster. More convenient sometimes, sure, cheaper, yep, but faster? Not often.


Cheaper: Questionable, unless the customer is unemployed. Tickets are practically always heavily subsidized, the taxpayers pay that bill. Maybe there is a city in the world where this isn't the case but I've never heard of it.

As far as convenience goes self-driving cars should be extremely convenient once we have them, no? I agree with you that public transport makes sense in very densely populated cities but most places in the world are actually wasting money on lots of public infrastructure that makes little rational sense, simply because it's a popular demand.


In the United States at least the subsidies for public transit pale in comparison to subsidies given to drivers in the form of road maintenance and parking subsidies. Subsidized public transit makes a ton of rational sense. The fewer cars there are driving around me the safer I am and the cleaner my air is. I want everyone to have an option to get around that doesn't require a huge upfront investment. It seems you and I are optimizing for different values but there are plenty of rational reasons to support it.


Faster to hands off the wheels, is all I meant.

I was leaning into it being a social thing. If you care about reducing fatalities and such, public transit has been a thing for a long time.


> Consider, if Waymo gets to the point where they have a self driving car that you have to have a subscription to use, do you own the car? Are you the liable for any accidents it has?

I'd imagine the company would take on liability, as long as humans can't drive the vehicle or they aren't driving when the accident occurred. Mercedes already got the ball rolling on this [1].

[1] https://www.roadandtrack.com/news/a39481699/what-happens-if-...


I can't see that path getting taken without you basically losing ownership of the car?


Agreed, but I can see it being solely a "lease" model. For companies to take on liability, they'd have to make sure a customer doesn't modify the vehicle and keeps up with the maintenance. They might also want the authority to disable "self driving" on non-compliant vehicles.


> But, the ones they do have are accidents a human driver would almost certainly have avoided.

> Is this something we would accept?

No, and that's good.

Human drivers behave mostly like humans, even the worst ones. We all have millenia of evolution fine-tuned to recognize human behavior based on the most subtle of cues. So human bad driving is recognizable and thus far more avoidable.

AI drivers are effectively an alien species who make errors that make no sense whatsoever to a human mind; thus they behave, for all practical purposes, completely randomly.


Can this be compared to seat belts? If seat belts needed to prevent injury in every single situation, we would claim they are ineffective, because there is always a chance that a seat belt saves someone from flying out of the window only to leave them strapped in, unconscious, in a burning car. Is the goal here to be 100% reliable with 0 deaths, or to achieve better stats than 46k deaths per year in the USA? This now becomes more of a philosophical question.


This is one reason for going to vision-only automatic driving systems like Tesla is doing. The system is more likely to fail when a human would also have had a difficult time: heavy snow, sun blinding, etc. Strange failures due to radar, lidar, and other sensors will not be understood or accepted.


That would be a reasonable argument only if Tesla's image processing were as good as a human brain (it's not) and if their cameras were reasonably comparable to human eyes (they're not). To take the argument to an absurd extreme, you can't cover a car in 240p webcams from 2003 and expect good driving performance.

Moreover, other players also have cars covered in cameras. If anyone thought vision only was the best path forward because they couldn't figure out sensor fusion, they'd already have done it and saved the BOM cost.

For what it's worth, my experience has been that cameras are one of the more problematic sensor systems overall. Vendor software is garbage, any particular tuning is finicky in extreme conditions, you have to clean the damn things, cameras streams take up lots of bandwidth, etc.


> will have 1/10th of the fatal accidents that human drivers would have[0].

If this becomes true then society would have an even bigger problem with organ donations - available supply would plummet. Some of the largest sources of organ donations are from car crash victims.


I always hear people cite this statistic and I’m never sure what they’re getting at with it.

Surely we’re in agreement that this is a good thing, right? That a shortage of organ donors due to the donors not dying is a good thing. Sometimes it feels like people are suggesting otherwise and I can’t fathom the logic.


I think there's a slightly charitable reading of that opinion. Replacement organs are already in short supply, it's a real problem. By "bigger problem" he's referring to bigger than it already is, not that organ recipients dying is a worse problem than all of the organ donors dying like the status quo. It's a good problem to have, but it's still something the medical profession is going to have to grapple with when their supply dries up.


My semi-serious suggestion is self-driving cars should be painted bright orange with big squishy bumpers and a maximum 20 mph speed limit. They would still be perfectly useful as taxis in big cities but it would greatly limit the damage they could do to anyone.


You joke, but this isn't a terrible suggestion. Mercedes has the best answer, in my opinion. Speed limited, carefully controlled environments, full liability resting on the manufacturer. I really don't like how Tesla is foisting off all the beta testing on their customers, pushing that risk to them and other people on (or near) the road, and then also giving them 100% of the liability too.

The incentives are misaligned. Tesla should want to make the software better not just to attract some more customers, but because if they screw it up they're going to be on the hook.


I really don't think so. And we are not nearly close to that situation either. We have some obvious crashes in cars that are nowhere near to be probably "safer then human". And then we have super confident claims of safety by manufacturer.


So far people have been pretty good about accepting self driving cars, even the Teslas on autopilot that crash into parked trucks quite regularly.


> quite regularly

How many times do you think this has happened?


I don't know - I must have seen five or so headlines in the papers but don't really follow the stuff. Here's one https://www.youtube.com/watch?v=LfmAG4dk-rU


Another part of this is that driving is a lot of fun. People (like me) really enjoy it and definitely wouldn’t give it up easily.


Piggybacking on that sentiment, car ownership becomes less an expression of individuality when cars are driving themselves. No point in owning an expensive sports car when it's the one doing the driving. Ride-sharing and fractional ownership start to make more sense than owning the car outright.


> I think the problem we have with self-driving cars is more social than technological at this point.

> it's a hypothetical so give me some leeway on this!

IMO you should not base (and broadcast) your opinions about safety on hypothetical statistics. I don't even believe it's true that overall statistics show self-driving is safer than humans. IIRC prior reports showed that companies were selectively picking statistics about safety.


If the accidents they did get into were ones humans would have avoided, we wouldn't be where we are today. I can't imagine any scenarios where these cars get into accidents that a human would have certainly avoided. You also say avoided accidents don't make the news, but I'm pretty sure footage of them avoiding accidents that humans would have no chance of avoiding will be a major part of their marketing.


>I can't imagine any scenarios where these cars get in accidents that a human would have certainly avoided.

Then you've not been following the space. The one that immediately comes into mind is the Tesla that slammed into the side of the semi truck because it was painted blue like the sky.


I've been following it pretty closely. I don't consider Teslas to be anywhere near self driving. By saying "these cars" I was referring to Waymo.


> I can't imagine any scenarios where these cars get in accidents that a human would have certainly avoided.

If you follow the events with self-driving accidents, most of them are nonsensical crashes that no human would ever have done.


I follow AV news very closely. I'm talking about Waymo. The small set of accidents they've been involved in have been limited to the fault of other drivers, with the vast majority being rear-endings.


It's not necessarily true for people who think they are in the 99th percentile of best drivers.


As a software engineer myself, I think they will always lose the argument. I've seen so many smart systems falter in some weird way that I would never trust a software system completely. I drive an EV with an AI-based system that automatically throttles the car, etc. But to trust my life and my family to this system (or any other)? No thank you.


This is the gist of it, but you have to ask yourself why it's just the accepted wisdom that it's okay to have the massive level of failure we have now.


> will have 1/10th of the fatal accidents

I think I'm relatively safe from cars on the sidewalk. Yet with fsd cars I'm not so sure anymore.


> I think I'm relatively safe from cars on the sidewalk.

Not even true. I've been hit by a car (with human driver) on a sidewalk. Driveways cross sidewalks, and drivers seem particularly inattentive when crossing them. When you're walking on the sidewalk, do you ever walk somewhere that's more than one block away? If so, you're going to have to cross a street anyway.


I spent a few years in self driving, I have immense respect for Waymo, and very little for Tesla. I think ultimately they will win the space.


This is my view as well. (I did self-driving related research, like platooning and taxi scheduling/allocation.) Waymo, Baidu, Didi, and others are the names that come to mind for places that produce research, produce data, and apply their technology in real-world practice.

My impression of Tesla is mostly shaped from (1) nonparticipation in the research community, (2) a very early "mission accomplished" declaration by calling their cars fully self driving, and (3) a longterm refusal to use LIDAR.

I don't consider Tesla a player in self-driving (edit: self-driving research), but I don't think Tesla does either. There's no reason for them to try to "win" the space.

From Tesla's side, it makes more sense to continue on their current tack: Applying results from existing research. I think Tesla's strategy is to be the highest bidder when it comes time for Waymo (or Didi, etc) to sell their tech.


They literally sell packages to car purchasers called Full Self-Driving and Autopilot. They claim their competitive advantage is all the miles of camera data collected. They put special boards in cars for it. They absolutely consider themselves a player.


Let me clarify my position: Tesla does not advance self-driving research, and they don't need to. Tesla won't be the first to release fully self-driving cars. (I think 'SAE levels' are bunk, but let's say this is level 4.5 for the sake of discussion.)

EDIT: Sorry, and to clarify, I meant "not a player in self-driving research." I also do not think anyone has any vehicles we should call "self-driving".

(I'll keep the rest of my pre-edit clarification below.)

To clarify further:

Tesla's offerings come from applying and engineering existing published research. That takes work, and they're making some money from that.

To the extent that "fully self driving" is an achievable goal, it makes no sense to expect Tesla to make the advances that get us there, when (1) they aren't doing that, and (2) they don't need to do that to make money.

To make this even more clear, let's make it concrete with one plausible future: In 2032, Waymo (or Didi, whoever) achieves true 'level 4' fully self-driving with proprietary technology. Their tech is seen in trucks, buses, and taxis, as well as being equipped to a few thousand private vehicles. The safety stats are superhuman, and insuring such a vehicle is cheap.

In this future, Tesla Motors would like to enter into an exclusive partnership to integrate this technology into the cars they manufacture.


Literally making their own AI training chips with a novel architecture doesn’t count as participating in research?

Google has tried and failed at commercializing similar technology in other verticals (like building environmental automation from their DC tech). The reason is actual incumbents (rightly) see the value of their position while Google comes at it as “our AI is the value, you just make dumb things”.

I expect the automakers to ship mediocre stacks that are put together by existing players like Bosch.

As an ex-Googler I would be floored if Waymo actually lands a sell-into deal with an automaker. A fully vertical taxi service is their path today because they tried and failed to sign any partnerships.


they have several now

august 2020, tesla friendly blog: "Waymo Has Partnerships With Fiat Chrysler, Jaguar, Nissan, Renault, Volvo, & Magna"

https://cleantechnica.com/2020/08/10/waymo-has-partnerships-...


Announcing partnerships is the easiest thing to do. The proof is in the pudding. Right now Waymo is paying carmakers for the vehicles, not the other way around.


I don't understand why this is relevant to the ongoing discussion. What are the implications of this statement, what is your point?


Quoting you: > To make this even more clear, let's make it concrete with one plausible future: In 2032, Waymo (or Didi, whoever) achieves true 'level 4' fully-self driving with proprietary technology. Their tech is seen in trucks, busses, taxis, as well as being equipped to a few thousand private vehicles.

I see no possibility where existing large automakers will let Google (Waymo) own the FSD stack (and thus own the data and huge piece of value chain) and be relegated to a mere maker of a dumb car.

And if GM is not going to buy Waymo tech, Tesla will definitely not.


To be clear, (1) we are discussing a hypothetical about the future, where things are different than how they are today, and (2) the hypothetical is only a clarification and the further argument does not rely on it.

So, I still have difficulty reading your comment and extracting a meaningful point from it. Any reply to you would be regurgitating things that have already been said.

More importantly, I've been trying to interpret your comments as made in good faith, but I don't think you're arguing with the continuity of the thread in mind. It really seems like you're here to defend Tesla, and that "Tesla might buy Waymo tech" might sound like an insult to Tesla? (To this extent, I'm worried this might actually be another proxy Musk argument, which I really have no interest in.)

With all this, out of courtesy, I don't get HN notifications and I won't be navigating back to this comment thread to check for further replies.


Don't get me wrong: Trawling the floodhose of AI publications, taking them together, and synthesizing them into a coherent and useful tool is serious, difficult, and creative work. This is R&D, and the people at Tesla working on autopilot are smart. The rift between "publication ready" work and "commercial ready" work is a vast one.

R&D for commercialization is a very difficult and very profitable task! But one would not expect this to advance the field of autonomous vehicles, nor would it require participating in the research.

(Also, I thought Waymo had partnerships, but I'm not close to Google. I don't know a lot about Waymo as a business, only as a research entity.)


You seem to imply that for progress to be classified as “R&D” it MUST be published.

A huge amount of R&D is never published. Eg: lithography tech at ASML.


> Tesla won't be the first to release fully self-driving cars.

Technically speaking: yes, they were the first ones, back in 2017.

Their car might've been more likely to crash than actually end up at the desired destination, but they did release first with a pretty hilariously bad product.


I don't think anyone is doubting that Tesla considers themselves to be a player, especially since it sells more add-ons to their cars. Repeatedly publicly knocking the benefits of LIDAR in self driving demonstrates otherwise.


> I don't think anyone is doubting that Tesla considers themselves to be a player

> I don't consider Tesla a player in self-driving, but I don't think Tesla does either


They clearly are a player in the self-driving cars market. They may not be a player in self-driving research. These things can both be true at the same time.

I think that's the distinction driving the confusion here.


They want customers to consider them a player.

That doesn't necessarily mean that they themselves believe their own marketing.


Maybe it depends on how you define "full" self-driving. Is it full if it works only for the scenarios it was designed? Are you working full time if you only work 30 hours a week?


How do you view the approach of comma.ai ?


I don't know anything about them to be honest! Just searched them up.

AFAIK, they aren't working on self-driving or trying to advance research there, so it doesn't make sense to compare them to Waymo either.

An open-source driver-assist upgrade package is interesting, but it doesn't overlap with my experience. Sorry I don't have anything more meaningful to say!


I like the level 5 or bust approach taken by Waymo and Cruise. Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful. The whole "the car is self-driving but you must also pay attention and have your hands on the wheel at all times" thing is idiotic.


Humans are notoriously bad at just paying attention and not being in charge. Those few seconds their actual attention is needed are critical.

I do appreciate that my car can do full distance control and assist if I am drifting, but it doesn't control itself, so I can never disengage. Personally I feel that this is wonderful and should be the limit. Anything past that should just be fully autonomous. Otherwise you're asking for trouble.


100%. If I'm driving and my attention wanders, I need the immediate feedback of drifting out of the lane and having to abruptly course correct, and being like "dang yeah, let's not do that again", possibly reinforced by passengers scowling at me from elsewhere in the vehicle.

I can't imagine trying to focus on supervising an AI pilot without that kind of feedback.


Interestingly, I don't have much trouble with that, as it works exactly like most airplanes that I fly as a pilot.

An airplane autopilot is a dumb device, in that it does execute _exactly_ the plan you tell it to, and it is up to the pilot to at all times decide whether the current plan still makes sense or needs to be altered. So the pilot makes the strategic decisions, and leaves most of the physical tasks of flying to the autopilot.

I find myself using my M3 w/FSD in exactly the same way, in that I put on autosteer pretty much immediately once I'm out of the driveway, but I constantly nudge it into the lane I want it to be in (by using the turn signal) or push the accelerator when I think it is taking too long pondering a turn. So I leave the physical driving (keeping lane and distance) to the car but manage the car to always go exactly where I want it.

I have no trouble staying alert this way when doing medium long drives. Long highway drives where autopilot is so good that it requires no manual interaction is where the trouble starts and I find it hard to keep paying attention.

This is where in an airplane you have a copilot and can discuss strategic things like overnight stops, fuel stops, etc... Maybe Tesla needs a built-in chatbot to make me do that :)


Isn't airplane autopilot considered to be significantly easier than automobile autopilot? Fundamentally, it's PID to track a bearing and elevation, and the worst kind of emergency it's likely to encounter is rough weather or some kind of mechanical problem with the plane itself, neither of which needs to be handled with the sub-second response time that many road emergencies do.

I get what you're saying about the physical/strategic split, but my perception is that automobile autopilots are simply not good enough to be trusted with the "physical" stuff in a hands-off way like an airplane's can. And that's mostly because driving a car in a straight line down a highway is way harder than flying a plane in a straight line in an empty sky.
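The "it's PID to track a bearing" point can be made concrete with a toy heading-hold loop. This is purely illustrative Python, with made-up gains and a made-up one-line vehicle model, not any real avionics code:

```python
# Toy heading-hold "autopilot": a PID loop driving heading error to zero.
# Gains, time step, and plant model are invented for illustration only.

def heading_hold(target_deg, initial_deg, steps=600, dt=0.1,
                 kp=2.0, ki=0.1, kd=0.5):
    heading = initial_deg
    integral = 0.0
    prev_error = target_deg - heading
    for _ in range(steps):  # 600 steps of 0.1s = 60 simulated seconds
        error = target_deg - heading
        integral += error * dt
        derivative = (error - prev_error) / dt
        # PID output interpreted as a commanded turn rate (deg/s)
        turn_rate = kp * error + ki * integral + kd * derivative
        heading += turn_rate * dt
        prev_error = error
    return heading

final = heading_hold(target_deg=90.0, initial_deg=0.0)
print(f"heading after 60s: {final:.2f} deg")
```

The contrast with a car is that nothing in this loop has to classify obstacles or react to surprises; the hard part of driving is everything that isn't in this loop.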


"I have no trouble staying alert this way when doing medium long drives. Long highway drives where autopilot is so good that it requires no manual interaction is where the trouble starts and I find it hard to keep paying attention."

Exactly!!!!!!

The trouble is when your attention wanes and you don't know exactly when you should be paying attention. What I found is that the small amount of automation my car does is wonderful for keeping my exhaustion down. When I drove for 4 hours in my 2012 Prius, it was a chore; I had to do ALL the mental math myself. When I drive my 2023 CX-50 I have to pay attention at least 30% less, and that 30% makes a massive difference; it feels like 90%. And any time my attention wanes, the car starts complaining because I am drifting or not responding quickly enough to the road. It becomes a "pay attention quickly" wake-up call that happens within seconds of attention waning.

The worry is when the car doesn't snap you back into that attention mode, and you just trust it, right up to the problem.


I suspect the below-"Level 5" driving systems will become more of an "augmented driving". I've driven in newer vehicles with automatic lane centering, pedestrian detection, etc. and they don't really seem like they're even doing anything, you still feel like you're the one driving, except that it's more precise with the occasional interruption by the car when it perceives risk of a collision.

These augmented systems will probably reduce the risk of accidents so greatly that the value proposition for Level 5 driving systems just won't be there.


"Augmented driving" is Level 2. That's where commercial products (GM, Mercedes, Tesla) are now.

Volvo was talking about level 3 back in 2017, but they gave up.[1] Level 3 means that the system may ask the driver to take over, but if the driver does not do so, the system must get the vehicle to a safe condition. Preferably pulled over out of traffic, but at least stopped without hitting anything. The driver is not required to watch the road.

The serious players are trying to get to level 4, where the driver is not expected to take over but the set of roads you can use is limited.

[1] https://www.youtube.com/watch?v=2q00jIBhkq4


The leveling system is a bit off. Level 3 doesn't mean "better" than Level 2. A Level 2 system might actually offer the best safety profile of any of the Levels. That's what I'm getting at: there's a lot of runway in Level 2 systems, and I think they'll be so good that it will kill momentum for Level 3+ systems.


Yes. Eliminating or vastly reducing the head-on collisions caused by drivers drifting across the center line, and the rear-end collisions where they don't see the stopped or slow car ahead of them, are a huge win. I'd trade full self-driving for really effective lane departure warnings and auto-braking collision avoidance any day of the week. Next step (or included) would be reacting to red lights/stop signs if it appears that the driver is not stopping. Deal with those things well and you've eliminated the causes of most serious accidents.


The problem is that when a level 2 system gets too smart it can confuse drivers and lead to bad reactions in response.

e.g. let's say you have a level 2 system which starts auto-evading, suddenly steering without user input, the user is likely to reflexively try counter-steering in response.


I think Mercedes has level 4 on highways now? I think this is the way forward actually: let cars drive themselves on the long boring bits (which are actually easy for AI) and leave the driving to the humans everywhere else. Having tried many augmented systems, I don't believe in self-driving in varied conditions within 10 years. I think the locations where Waymo operates are a good indication of what is possible at the moment.


> These augmented systems will probably reduce the risk of accidents so greatly that the value proposition for Level 5 driving systems just won't be there.

I've driven a lot for decades and frankly enjoy driving. I drove from TPA to SLC via PHX and back for fun. But I will pay $500/mo for a level-5 subscription to a comfortable car that drives itself.


I'm trying to figure out how many hours a month you're in a vehicle for that to make sense for you.


Having level 5 available honestly opens up tons of options that were not available before. A 90 minute commute is so much more palatable when you can be sleeping. So are road-trips.


I'm not sure making 90 minute commutes more palatable is going to be a good thing overall, unless we somehow incentivize reduced travel in other ways. Better distribution of amenities and logistical improvements, maybe... More distributed work environments, perhaps coworking spaces, but with a less-stupid business model.


$500 seems reasonable compared to loan payments, full insurance, and parking. The math may not make as much sense if you would otherwise own your car outright, though.


That's $17/day. If it gives you an extra hour a day and eliminates driving-based stress then it seems very well priced. Especially if it includes the car


I'll also add that I think what Waymo is doing right now is closer to a semi-autonomous streetcar. There's probably immense value in that approach, especially as an alternative to mass-transit systems that have costly labor, but it's not clear that they are imminently close to "anywhere, anytime" self-driving.


> These augmented systems will probably reduce the risk of accidents so greatly that the value proposition for Level 5 driving systems just won't be there.

The value proposition of L5 systems is also the not driving part.


Here's Chris Urmson talking about this 7 years ago when he was still working on (what is now) Waymo: https://youtu.be/tiwVMrTLUWg?t=169


Aren't they geofenced, which would make them level 4?


What are your thoughts on airplane autopilots then?


> Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful.

Even if a hypothetical, future Tesla FSD sometimes crashes in ways that could be prevented had the driver paid attention, it could still be statistically safer than a fully human driver (i.e. the number of FSD crashes even if left unattended < the number of crashes by humans driving).

To clarify, I'm not talking about the current state of FSD, I'm talking about a hypothetical, future Level 3.


Doing such aggregations is pointless. The majority of traffic accidents are caused by drunk drivers, drivers who are too young/too old, people on their phones or otherwise distracted, people driving in bad conditions, people driving unsafe cars etc. So yes, while Tesla autopilot may be better than all of them on average, I will still only use it if it is better than ME.


> I will still only use it if it is better than ME.

You're free to not use it. But if Tesla FSD is safer than the average driver (even if that's because the average driver is on their phone) then, going back to your statement, it is more useful than dangerous for the average driver.


What is your definition of "average driver"? There are ~250M drivers in the US and a little over 2 million car accidents that result in injuries every year. So the average driver is doing perfectly fine. In fact 99% of them are fine. Car accidents don't follow a bell curve. If you want to make the roads safer you need to target the bottom 1% of drivers, but there is no guarantee that they are the ones who will adopt autopilot. If everyone else takes their eyes off the road it will actually make things worse.
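As a sanity check on that "99% of them are fine" claim, here is the arithmetic using the comment's own figures (a rough sketch; it ignores accidents involving multiple drivers and repeat offenders, which would shift the number somewhat):

```python
# Figures taken from the comment above, not independently verified.
drivers = 250_000_000          # ~250M US drivers
injury_accidents = 2_000_000   # ~2M injury accidents per year

share = injury_accidents / drivers
print(f"{share:.1%} of drivers in an injury accident per year")  # 0.8%
print(f"{1 - share:.1%} not involved")                           # 99.2%
```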


>Anything in the middle (like what Tesla is doing) is IMO more dangerous than useful

Intuitively I would have agreed with you, except Tesla has been doing it for years and their cars are statistically safer by every metric (fatalities, incidents, etc.).


> their cars are statistically safer by every metric

Compared to what population? Older, tech-savvy people buying $60k-120k vehicles? And do you mean in Full Self-Driving Beta mode, which Tesla won't allow you to use unless you have a track record of safe driving?


I wouldn't rule out Cruise.

They already have robotaxis in SF and are expanding into Arizona and Texas by the end of this year.


Saying they have taxis in SF is a bit hyperbolic. I have an invite to that program and it's:

- only available after 10pm
- covering such an odd slice of the city that I not only can't get picked up, it doesn't GO anywhere I go.

I would love to use either of these programs both for the novelty and because I think Autonomous driving is great, but I literally can't use the program I do have access to.


I think the parent wasn't referring to it being perfect, but that open road real-life testing is something very few companies have, and was implying we shouldn't count out Cruise because they are at least at that stage.


as a fellow beta tester, they are expanding their service area to include a majority of the city incl Mission.

Totally agree with you though; right now it's basically a neat party trick I have to go out of my way to show off instead of a super useful service.


> used such an odd slice of the city I not only can't get picked up, it doesn't GO any where I go.

For the last month and a half, they've covered almost the entire city except FiDi/Union Square (and Twin Peaks)[1]. That's not a trivial omission, but do you really never take rides that start or end in any other area? In particular, that entire area is a fairly close walk from the only dense transit line in the city.

[1] https://twitter.com/kvogt/status/1587589014525448192


What does 'win' mean here? It seems like being able to pass on the costs of fleet management, insurance, gas, parking/storage, etc to drivers (the way taxis/ride sharing apps currently do) will always be cheaper than maintaining it yourself, even if you save on the driver fees.


At the rate things are going right now, Waymo will win when Tesla throws in the towel on developing in-house and licenses Waymo's tech in order to finally deliver on full self-driving.


Tesla has already sold self-driving for years. Waymo uses way more sensors than just cameras from what I can tell; not only would that bump unit cost, they would probably have to upgrade previous customers' cars because of their marketing.


> not only would that bump unit cost they would probably have to upgrade previous customers due to their marketing.

Only if they don't change the name. They can call it an upgraded RealDrive™ QuantumSense™ feature that no longer requires having your hands on the wheel.


and fuck with the aerodynamics, causing drag on the cars and decreasing range.

LIDAR isn't an option.


That depends highly on the lidar system. All players are working hard on miniaturizing LIDAR systems, and they fundamentally don't have to be as big as the Waymo systems.


There are also Chinese solutions in that space, like Baidu Apollo.


How things are going right now, I'd be surprised if anyone will be willing to license Tesla's tech.


This would imply that Uber/Lyft drivers on average are losing money by being on the service, which is obviously not the case. Having a large fleet of driverless taxis, even if you have to maintain them yourself, will be a very profitable business. There are other potential revenue sources as well, like licensing the tech to car manufacturers.


> obviously not the case

This is a strong claim. I thought there was a decent body of evidence that suggested most drivers make much less money than they think, when you take depreciation/repairs/etc. into account?


Noob questions, thanks in advance for humoring:

Do self-driving cars model other cars, learn from them, or both? Say there's a new obstruction (traffic cones around a maintenance crew). When deciding what to do, e.g. go left or right, does the self-driver observe what the preceding cars did?

Do self-drivers remember prior decisions? On the daily commute, there's a speed bump, pot hole, or whatever. Does the car anticipate the remembered road feature? Like maybe "Oh, last time I saw this pot hole, I had to swerve right. So this time I'm going tack right a little earlier."

Sorry for the noob questions. Imagining a ubiquitous self-driving future, I keep thinking of boids and uncoordinated collective action, like flocking and murmurations. Wouldn't it be cool if cars did similar stuff?


Any visibility or opinion into AutoX? I hadn't really heard much news about them until that Electrek[1] article the other day that presents AutoX as being far ahead on disengagements relative to Waymo (who I thought was the leader in the space).

[1]https://electrek.co/2022/12/14/tesla-full-self-driving-data-...


can you elaborate? something wrong with Tesla's approach?


Yes. Photogrammetry + ML is fundamentally inferior to a LiDAR-based solution, particularly on the time-scale motor vehicles operate on.


They have a habit of running into things.


How do the number of accidents compare to Waymo when fleet size is taken into account?


Talking of FSD beta here, I don't think there has been any death or even major accident to date. But that's mostly because (especially until a few months ago) it was bad enough that nobody would trust it, so people were always alert to intervene.

If you were to let FSD beta just drive by itself enough time, and intentionally never intervene, it would eventually crash, no doubt about it. Before v10.69 it was hard to get a 20 minute drive with no interventions (unless it was mostly straight roads).


Relevant HN: https://news.ycombinator.com/item?id=33984922

Comparing disengagement and driver intervention data might be useful to compare Tesla FSD vs others.


The stats don't matter much when it's clear they can't handle collision avoidance with anything outside their limited training set like airplanes. You have to ask what else can't they detect in front of the car if they're 100% dependent on ML to decide there's an obstruction. That is an irresponsible threat to public safety no matter how lucky they are scraping by on their current architecture.


Except the stats *do* matter. It's how we measure things that operate in the real world. The blind baby stroller benchmark might be academically interesting, but if distracted drivers are smashing kids on public roads for another decade while Waymo perfects its craft, the net result is just more flat people.


Waymo crashes a lot more. From 2 years ago:

"In its first report on its autonomous vehicle operations in Phoenix, Arizona, Waymo said that it was involved in 18 crashes" (in 6.1 million miles).

For comparison, in the latest Tesla safety report:

"we recorded one crash for every 4.31 million miles driven", which is roughly what they had 2 years ago.

So Tesla is objectively safer. And Teslas are driven anywhere, whereas Waymo only drove in Phoenix, which is extremely sunny and dry.
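Putting the two quoted figures in the same units makes the claimed gap explicit (a quick sketch of the arithmetic only; the replies point out the two datasets may not measure the same thing):

```python
# Figures as quoted in the comment above, not independently verified.
waymo_miles = 6_100_000
waymo_crashes = 18
tesla_miles_per_crash = 4_310_000

waymo_miles_per_crash = waymo_miles / waymo_crashes
print(round(waymo_miles_per_crash))   # 338889 miles per Waymo-reported crash
print(round(tesla_miles_per_crash / waymo_miles_per_crash, 1))  # 12.7x
```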


I don't know the answer, but are these numbers comparable? I would assume that Waymo is talking about crashes while in self-driving mode, is that true for Tesla as well or are they just talking about all miles driven, self driving mode and manual?


I really doubt you're comparing apples to apples here. How many of those 4.31 million miles per crash had FSD enabled?

Plus, the Tesla owner will save his/her $80k Tesla rather than prove FSD can actually handle or fail a situation.


This is FSD only data.


Autopilot is not FSD, only runs on freeways, and is apparently so bad that Tesla refuses to release statistics that allow for an apples to apples comparison against having it off.


Great appeal to authority. Why not explain the technical reasons you believe Waymo will win?


I have not spent a few years in self-driving, so correct me if I'm wrong, but how would Waymo win when they can only work with HD maps (aka nearly nowhere) while Tesla FSD works nearly perfectly now even on dirt roads (aka anywhere) with no map at all?


Waymo has proven driverless operations in Chandler, then Downtown Phoenix, then San Francisco. Truly driverless, no people in car. They’ve demonstrated driverless capability and the ability to expand to new regions, even if it means taking HD maps.

Tesla has not proven any reliable driverless operation, anywhere. They have removed hardware from their cars (radar, uss) and have not shown any meaningful progress in the past ~5 years nor any willingness to change from their “vision only, big data” strategy.

If things continue on the current trajectory Waymo will likely be operating in all major US cities and metros in a few years while Tesla’s self driving offering will probably be forcibly renamed by regulation and end in a class action lawsuit.

Basically, Waymo has proven N and N+1 capability, meanwhile Tesla has yet to prove N, and has lied to consumers and actually reduced their chances at achieving N due to cost cutting measures.


> and have not shown any meaningful progress in the past ~5 years

Really? I’ve had some casual interest in the progress of FSD beta and the past six months alone has seen dramatic improvements to numerous adversarial situations.

FSD beta is currently able to drive with confidence on unmarked roads at night in the rain, with only basic maps for wayfinding. This has been demonstrated by customers in their own cars, driving roads which haven’t been vetted by the developers.

I’m sure Waymo and other systems can do this too, but I haven’t seen it demonstrated.


> Waymo has proven driverless operations in Chandler, then Downtown Phoenix, then San Francisco.

And how many years did that take? Are they adding profitable operations in new cities year after year, with every year adding new cities?

As far as I can see, they simply lose billions and billions of dollars with no real success in actually having a product.

Tesla is actually using the technology to improve its Level 2 systems and make money with it.

> meanwhile Tesla has yet to prove N

First of all, Waymo has not proven N, because they don't make money on any of these things.

Tesla at least tries to drive in N+10000000 other cases and navigates many of them without seeing them first.

If you have to go one by one through every single city in the world, it's not clear to me that this is a better approach than solving the more general problem.

> reduced their chances at achieving N due to cost cutting measures

Tesla just made $3 billion of profit in a quarter. What cost cutting? They are currently making major investments in upgraded sensor suites and upgraded data centers, and overall their team is still growing.

How much did Waymo make again?


Just because Tesla is profitable and making money by selling vehicles does not mean they are on a better path to engineering a self driving system than Waymo.

The opposite is also true, just because Waymo does not make money does not reflect the capability of their self driving systems. Saying "Waymo has not proven N, because they don't make money on any of these things." doesn't make any sense, and is not even true.

I can go to downtown Phoenix right now and request (and pay for) a fully self-driving ride from point A to point B. Teslas can not reliably complete any self driving route without any disengagements.

We are discussing who is closer to realizing a fully self-driving system, not who runs a better business.


> Just because Tesla is profitable and making money by selling vehicles does not mean they are on a better path to engineering a self driving system than Waymo.

That's not what I said; what it means is that they have more staying power.

> doesn't make any sense, and is not even true

What data are you basing this on? Are any of their operations profitable? If they are, why are they not expanding those operations to make more profit?

> I can go to downtown Phoenix right now and request (and pay for) a fully self-driving ride from point A to point B. Teslas can not reliably complete any self driving route without any disengagements.

Yeah, but Tesla didn't spend 7 years only focusing on Phoenix, so using that as a comparison is just dishonest.

You are stacking the field in favor of what you want the winner to be.

Here is my proposal for a more fair test on who is actually 'winning' self driving:

"You take a car, and put it on any random road anywhere in the world, how well can it navigate to any other random road in the world"

How well does Waymo do on that test? I would guess worse than Tesla.

That test is much closer to what it actually takes to really claim that self driving is 'solved'.

In my opinion neither are close to this and both will burn many more billions and many more years before getting there. So to just confidently claim Waymo is way ahead is nonsense in my opinion.


> nor any willingness to change from their “vision only, big data” strategy.

It sounds like they might actually be including a new radar system in January. Nothing official yet though from what I've seen.

https://electrek.co/2022/12/06/tesla-radar-car-next-month-se...


"not shown any meaningful progress in the past ~5 years nor any willingness to change from their “vision only, big data” strategy."

I guess they did, now that they can make their own cheap lidars and are adding them back in 2023.


Reversing the "cameras only" position is a step in the right direction. They are currently ~3 years behind collecting lidar data compared to Cruise / Waymo. I wonder if they'll be able to make it up with "ghost-rider" volume in 2023.


? Where are you seeing that they're adding lidar? The big news recently was that they're going to bring back radar.


Collecting HD Maps is an 80/20 problem (I have a patent in a subfield of this, for better or worse lol) - you can get a ton of value from a small set of focused areas. If you can solve greater metro areas (no dirt roads?), you've got a real solution.

I also think that the mapping and routing component matters a lot less than how good your collision and realtime avoidance systems are. And in that arena, Tesla is an unmitigated disaster.


Thanks for your answer. I see a lot of bad things about Tesla FSD and I totally get the criticism.

Yet, I follow DirtyTesla's YouTube channel and I think FSD is quite impressive compared to any other self driving software I've seen.

Would you mind to direct me to similar videos from Waymo for example in similar situations? I can't find anything even remotely as good as what Tesla is doing now.

I'm not a fanboy, nor do I own any TSLA shares (or even a car for that matter); I'm just interested in the field, and until now I thought Tesla had the most promising tech (it seems I'm wrong, but I would really like to see it!).


Compare Waymo[0] with Tesla [1]..

These are easily searchable, which leads me to question your sincerity in feigning ignorance.

[0]https://youtu.be/mWvhw1KCmbo

[1]https://youtu.be/3mnG_Gbxf_w


I think you missed OPs point.

As I understand it, Waymo can’t drive on unmapped roads, and therefore there are no comparable videos of Waymo actually doing that.

You chose a Waymo video from their marketing channel, and a newspaper hit piece. And then questioned sincerity of OP…


did I? Here's a direct quote I was responding to:

> I see a lot of bad things on Tesla FSD and I totally get the critics... I think FSD is quite impressive compared to any other self driving software I've seen.
>
> Would you mind to direct me to similar videos from Waymo for example in similar situations? I can't find anything even remotely as good as what Tesla is doing now.

I can't believe one can make an honest argument that Tesla is ahead of Waymo on FSD


The honest argument is that we have seen Tesla’s technology exposed to unsupervised and uncontrolled adversarial conditions across hundreds of wildly diverse cities in the USA, whereas we haven’t seen Waymo vehicles doing anything outside of curated geofenced areas or curated marketing videos. Right now if you dropped a Waymo and a Tesla on an unsealed road in Michigan, one of them will drive at least as well as a human learner driver and the other will probably refuse to drive.

I agree that Waymo could well be far ahead of Tesla, but there isn't enough information in the public domain to say this with confidence. We don't have the ability to make a proper comparative assessment.


That was exactly my point, thanks for explaining it better than I did (as you must have guessed by now, English is not my native language, sorry).


Not to mention that there’s no evidence that any autonomous driving system was engaged on that Tesla.


They're just asking questions. Geeez.


This is something that seems really important, and is definitely a significant effort, but actually is inconsequential.

Think about a section of lightly used suburban road. The amount of work that went into making it was immense. A crew of road workers using expensive machines and large amounts of material was required to build it, and is required for its maintenance. Don't forget the surveyors and engineers who made a highly detailed map and plans in the first place! (Though that map format isn't useful to self-driving cars.)

Also consider the sheer number of cars that drive that patch in a day. One car every few minutes adds up over hours, days, months, years.

So, yeah they have to drive a mapping car down the street a bunch of times to expand their coverage area. However this is insignificant compared to the effort that goes into our transportation infrastructure already.


Not to mention that Google Street View has demonstrated that such effort is viable even with way less incentive!

Besides, most miles driven are spent on highways and other town-connecting roads. To the average consumer, self-driving cars are way more interesting for commuting or long-distance travel than they are for a 5-minute drive to the supermarket.


Waymo currently works with HD maps.

Tesla currently works not at all.

It's not valid to compare Waymo's current capability unfavorably to a version of Tesla's capability that only exists in someone's head.

I would bet on Waymo working on a dirt road before Tesla does.


??

Literally a Tesla with FSD working well on a dirt road: https://www.youtube.com/watch?v=wv1l6aTnB_I


my dumb car "works" for self driving assuming a straight enough road without obstacles. A safety critical system needs to have a very robust definition of "works" that is far beyond "it happened to not crash on this particular road at this particular time with this particular set of obstacles".


Fair enough; my "not at all" was hyperbole, Tesla's driver assistance software does not in fact crash every single time it is used without human intervention.

I do find it mildly disturbing that in that video, the driver points out the car making fully blind turns where it cannot see that there's nothing it would hit.


Did I miss the dirt road part? That looks like it starts on a rural but paved road, and ends in a town.


99% of the time it doesn't kill you isn't what we're shooting for.


With 1.5 fatalities per 100,000,000 miles[0], the benchmark to meet is 99.9999985% of the time it doesn't kill you. Injuries are going to be a lot higher, obviously. Still, I think most self-driving enthusiasts underestimate the bar that needs to be crossed wrt safety. And general vehicle safety isn't going to remain stagnant. I think it's going to be a cost vs injuries tradeoff for quite a while until we get human-level or better self-driving safety in all circumstances.

[0]https://www.statista.com/statistics/193018/number-of-us-cras...
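Working that benchmark out from the cited rate (simple arithmetic, nothing more):

```python
# Rate cited in the comment above: ~1.5 fatalities per 100M vehicle miles.
fatalities_per_100m_miles = 1.5
per_mile = fatalities_per_100m_miles / 100_000_000   # 1.5e-8 fatalities/mile

survival = 1 - per_mile
print(f"{survival * 100:.7f}% of miles are fatality-free")   # 99.9999985%
```

Eight-plus nines of reliability per mile is the kind of target normally discussed for aviation or telecom hardware, which is the commenter's point about underestimating the bar.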


This is a very good question. Elon is dumping on LIDAR and 3D high resolution mapping.

That may be a smokescreen. Tesla collects a lot of data from their cars. What they do not have are these supposedly superfluous high-resolution maps. If Tesla's camera-sourced data proves to be insufficient, that will have been a very bad gamble, on top of the open question of whether camera data is sufficient for real-time decisions.

When they pay off, bold gambles make businessmen look smart. That's why nearly all business hagiographies are the product of survivorship bias. Just like your buddy who won in Vegas.

We will see this risk-taking play out in Starship and Starlink, too.


The cars themselves have the hardware necessary to make an HD map.

That means that Tesla could make an HD map covering 95% of miles driven in the USA within a week with their fleet of users. And next week they could make an updated version of the same map.

So, making and updating an HD map isn't an issue.


I created some 3D models of a real world building and surrounding environment using photogrammetry from 20+ megapixel DSLR photos and decided the accuracy was totally inadequate and the artefacts were too hard to manually clean up.

I then hired a dude with a LIDAR scanner and did it properly. The difference in quality/accuracy is like 120x80 ASF video files in 1996 compared to 4K footage today.

Anyone who thinks you can build "HD" virtual worlds using the crappy cameras on a Tesla needs their heads examining. Maybe with thousands of passes and some epic compute and signal processing, but why bother? Just LIDAR it.

My Tesla can't even decide if a traffic light is a single traffic light or not on a sunny summer's day from a distance of twenty feet. Almost every time it is either dark or humid or winter (road grime), it tells me one or more cameras are obscured. But only after I've already started driving, obviously. This supposedly cutting-edge AI driving machine frequently thinks I'm leaving the carriageway on UK B-roads (it's almost dangerous) and is significantly less reliable at distance cruise control and lane-assist than my Škoda. (I presume VW just quietly bought a black box from Bosch or whoever to do this.)

Tesla are barking up the wrong tree IMO. At this point the camera-only stance feels like a religious thing, not based on sanity or the real practical world. I imagine that someone came to Elon and said "reconciling conflicting radar and camera signals is hard" and he applied his considerable genius and issued an edict to "let's not do that then!" like it would magically make all the actual hard problems go away.

Heck, Teslas can't even seem to reliably parallel park themselves, frequently getting stuck halfway, or hitting kerbs. If they can't solve that highly constrained problem, I'm hardly going to trust taking my eyes off it at 70mph.


Tesla cars do not have LIDAR sensors. The downside risk is that high resolution imaging using multiple sensors is a requirement for level 5 AVs to work well enough. That means all the data Tesla has collected could be of limited value.


Waymo's velocity seems to have slowed dramatically since 2015, when they first did fully driverless rides on public roads and started deploying to multiple regions.

Now, 2 billion dollars and 7 years later, they are still only in a handful of small regions with limited numbers of vehicles.

That tells me there is still some fundamental issue that is hard to solve. I wonder why they aren't more transparent and tell us what that issue is that they've been battling for 7 years?


(I know nothing about self driving)

It seems like the hardest 90% of the work is the last 10%.


The fundamental problem which is impossible to solve is game theory.

Suppose the collision avoidance is perfect. Now put it on the roads of NYC, or New Delhi.

There are a lot of people who will just walk in front of a car going 40mph, if they know for sure it will brake hard and stop.

The problem isn't technology, it's humanity.

The solution is to change the rules of the road, have protected lanes for self-driving buses and taxis and cars, and enforcement.

Let vehicles that can take full advantage of communicating with each other and the road go fast and use the infrastructure to maximum theoretical capacity, without having to worry about dumb human drivers.


> if they know for sure it will brake hard and stop.

I think the solution here is to issue tickets to those people. You could probably ticket them already under some statute like "endangering road users" or something.

With self driving cars having always on cameras, you only need to ticket each idiot once or twice, and they'll stop doing it.

We already punish people who run around on the runway of airports - seems no different.


> The solution is to change the rules of the road, have protected lanes for self-driving buses and taxis and cars, and enforcement.

We reached this conclusion about 150 years ago and came up with rails. In addition, you get cheap electricity so reliably that modern trains don't even bother having batteries.

Yes, rail lines as they are deployed now might not be the ideal future proof solution, but something similar which allows 'cars' to go off track for the last mile but otherwise not incur wear and tear on your own tires and engine/transmission for the long haul might be a practical idea.


Waymo first deployed on public roads in 2019

(I appreciate 2019 feels like 7 years ago)


Did you mean offered a service to the general public? Because Google's older self-driving car drove that one blind guy to the Taco Bell drive-thru more than 10 years ago. And they had been driving Googlers back and forth from their homes and offices for years prior.


I suspect the issue is cars being so cautious that they just stop as people keep walking, or at best inch forward in a herky-jerky way. In NYC, a car like that wouldn't get anywhere, since the pedestrians just won't stop. Pedestrians only stop when they see that the driver isn't going to stop and they're going to get hit.


So much of urban pedestrian-driver interactions depend on confirming eye contact and determining intent from body language.

I don't see how an automated car with no driver can deal with the "are they crossing or not?" question that you get every few minutes while driving in a city. Both because body language is a hard problem to get right, and because there's a lot of non-verbal communication that a driverless car doesn't have a way of participating in.


Waymo does pose estimation to help determine pedestrian intent and react accordingly [1].

[1] https://blog.waymo.com/2022/02/utilizing-key-point-and-pose-...
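For flavor, here's a toy illustration of the general idea of reading intent from pose (this is emphatically not Waymo's method; the keypoint format, threshold, and function are all invented for the sketch):

```python
def leaning_toward_road(keypoints: dict, road_direction_x: float) -> bool:
    """Crude intent heuristic from 2D pose keypoints.

    keypoints: {'hip': (x, y), 'shoulder': (x, y)} in image coordinates.
    road_direction_x: +1 if the road is to the pedestrian's right, -1 if left.
    Returns True if the torso leans toward the road beyond a threshold.
    """
    hip_x, _ = keypoints["hip"]
    shoulder_x, _ = keypoints["shoulder"]
    lean = (shoulder_x - hip_x) * road_direction_x
    return lean > 5.0  # pixels; arbitrary threshold for the sketch

# Shoulders 12px toward the road relative to hips -> flagged as leaning in:
print(leaning_toward_road({"hip": (100, 200), "shoulder": (112, 120)}, +1))  # True
```

A production system would of course fuse many keypoints over time rather than eyeball one torso angle in a single frame.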


Thanks, that's an interesting read. Not sure that I trust pose estimation algorithms with my life, but still interesting!


Ahaha, yup! Especially around midtown where crossing the road is a symphony between cars and pedestrians.


Regulation is one of the major factors slowing things down. You need more and more test cases to achieve higher reliability, but data collection at scale needs approvals, and regulators want to see that it's reliable enough before approving. This chicken-and-egg problem is not easy to solve, since at its heart it's a trust problem. Tesla was an exception because they chose to put all the responsibility on the drivers by making it technically ADAS while marketing it as "full self-driving".


It's clear which bits they haven't been focussing on... There are multiple videos on youtube of rides (some where it has gone wrong) and the user experience is terrible. The car has a robotic voice which plays a long and annoying unskippable message with every ride, and 'Rider support' sounding like they are following a strict script with no ability to be helpful or fix the problem [1]...

Imagine if every time you started your car, a robotic voice said "Welcome to your Ford Pickup XYZ model. Please ensure your seatbelts are fastened. If you are too hot, you can adjust the climate with the climate controls. If you want to lower the windows, please don't put your arms out. etc etc. Have a nice ride today in your Ford(tm) Pickup(tm).".

[1]: https://youtu.be/2ZmdxkBV5Tw?t=180


Not entirely different from getting on a plane? You have to listen to the safety protocol before take-off.


Most things about planes are pretty user unfriendly to be fair... "arrive 2 hours before departure"... "queue for hours through security"... "walk miles to your gate"... "have to buy your ticket a long time in advance, and then 'check in' more than 3 hours in advance but less than 48 hours"...

We're a long way from the ideal of "show up at the airport 5 mins before, hop on a plane, and hand cash to the pilot for your ride".


> We're a long way from the ideal of "show up at the airport 5 mins before, hop on a plane, and hand cash to the pilot for your ride".

Also known as the private jet model. I've experienced it, through a bit of lucky employment, but I remain too poorly funded to make it my normal method of air travel. I do wish I could afford it.


I realize this is essentially a PR piece, but still, it makes me feel much better about the potential future of automated driving than what Tesla is doing. If I owned TSLA right now I'd sell.


A canned test should not make you feel better. This could be the first time they actually passed the test. They might still fail with a cardboard cutout half the size.


Not to defend TSLA, but I don’t think self driving is the reason why Tesla cars sell, it is more about being arguably the best mass produced EV out there.


> more about being arguably the best mass produced EV out there.

In 2018 this would be a really good argument. What does Tesla do better now, compared to another modern purpose built EV, for example a Ford Mustang Mach E, or a Hyundai Ioniq 5, Kia EV6, etc?

I struggle to identify any particular feature I would say they are better at, much less something that would make it the best mass produced EV. I say this as a two-time Model 3 owner, having just bought the most recent one two weeks ago. I don't quite have buyers remorse yet, but it's nagging at me that I may have just made a foolish choice for the wrong reasons.


The fact that you don’t have to go through a dealership is a huge selling point for me. No markup, no dealing with high pressure sales tactics. Just buy it online.

As someone in the EV market I almost went for a Tesla for this reason.


Polestar (Volvo's EV spinoff) has copied this piece of the Tesla playbook, and their cars are pretty competitive in features and price as well. But the Polestar 2 is also much more "traditional" vehicle than Teslas, and most all reviews rate its closest competitor the Tesla 3 more highly.


I think Tesla's charging network is a nice part of the package. I'm pretty worried about going EV -- I'm not going tesla, because I don't like the way they look -- mostly about dealing with finding charging stations that work (and well) when I need them.


Look at a map of EV charging stations, and quickly realize you will never have a problem


There are no competitive cars on the market in their class that are as efficient and as performant right now.


Afaik, they are not. They have the best charging network in the United States. They rank low on reliability indexes. And many people like their software.

They are not the obvious winner among EVs currently. They were the first to do an actual high-end EV, and that vision changed the market back then.


They are the obvious winner among EVs based on sales.


The topic was "best EV now", not "best-selling EV" nor "best EV a few years back". They used to be the best EV, and Musk used to have huge charisma for many people. They were selling a dream too. All of that sells, but it is not a "how good the car is" metric.

Tesla also used to have a reputation for well-built cars a few years back. I don't know whether their quality went down due to mass production or whether the problems just weren't known about.


Will I ever be able to have self driving on a personal vehicle, or is this just centralized automating the work of a taxi driver? IMHO, these are two very different things for the consumer. This is why I actually prefer the Tesla approach, or actually Comma AI. (If it can be made to work robustly…)

It would suck to be in a world where the only way to do self-driving is indistinguishable from the Uber or taxi service we already have (and likely wouldn’t even be cheaper if it’s proprietary to one or two mega-companies who can extract nearly all the productivity surplus from this as monopoly rents).


I do not think the outcome is only Uber / Lyft but with AI, but if that is the outcome I still think it would be a win. Today supply of Uber / Lyft in my area at off hours is spotty, and that makes it unreliable. I have gotten stuck walking home 2+ miles multiple times in the last year because I couldn't get a ride at any price. That's not a problem in Manhattan, but not everywhere is Manhattan. Driverless cars would be on 24/7/365 so wouldn't have that problem. The more reliable these taxi services are, the more viable it is for people to get rid of their cars.

I also expect long term self driving cars will be safer than humans, and as a person that primarily walks around instead of driving that's a benefit to me even if I'm not in the car.


Why wouldn't driverless cars have the exact same problems? A driverless car is pretty expensive, so it needs to be making money a high fraction of the time or it's not economical for a company to invest in it, just like a regular taxi service (I'm really curious how they would handle 'surge' times - have fleets of cars that sit parked and unused 99% of the time??). Uber and Lyft actually have a lot of flexibility in this regard, since the cars already exist for other reasons (and don't cost Uber/Lyft anything when they're not driving). The idea that 'driverless' somehow means 'lots of cars, everywhere, at all times, very cheap' doesn't make any sense to me from an economics perspective.
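The utilization argument can be made concrete with some back-of-envelope math (every number below is a placeholder assumption, not industry data):

```python
def cost_per_paid_mile(vehicle_cost: float, lifetime_miles: float,
                       utilization: float) -> float:
    """Amortized capital cost per revenue mile.

    utilization: fraction of lifetime miles that carry a paying rider.
    """
    return vehicle_cost / (lifetime_miles * utilization)

# A hypothetical $150k sensor-laden AV amortized over 300k miles:
print(cost_per_paid_mile(150_000, 300_000, utilization=0.5))  # 1.0 ($/mile)
print(cost_per_paid_mile(150_000, 300_000, utilization=0.1))  # 5.0 ($/mile)
```

The point of the sketch: if the fleet sits idle outside surge hours, the capital cost per paid mile balloons, which is exactly the economics problem the comment raises.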


tbh car ownership for day-to-day is kind of silly, leave it to more commercial use and enthusiasts

i may be biased since i use public transit or bike for everything


Coming from a corn-fed midwesterner who got his license as soon as legally possible, car ownership is totally silly. We are all fleet managers of extremely complicated mechanical objects with huge liabilities from a financial, legal and moral perspective. If self-driving cars do one thing it could at least set people free from personal vehicle ownership, even if they still have car dependent lifestyle.


> i may be biased since i use public transit or bike for everything

It is good to recognize this. A very large portion of the population does not live somewhere that makes good sense for pervasive public transit, walking, or biking for regular transportation needs. And many of those people actively don't want to live somewhere like that. Personal vehicles have a use case, and that does not become invalid just because it does not match your own preferences.


Having just had to get around quite a bit via walking and scootering, I’m definitely not excited about a future totally without personal cars. This works very well if you’re childless or if you live in a place like Manhattan (loved the subway there) or with excellent weather, but it’s just not the same as the personal room and safe area with your personal belongings that a personally owned car provides.


What's the difference from a (better maintained) taxi service? Especially one that in this hypothetical future, would be driverless.

In general I think the trend of personal car ownership is something that will become somewhat of a hobby rather than a daily necessity, even outside of cities as long as Waymo (and others) are able to actually achieve their ambitious goals. The only way I see that reversing is if people are forced to live out of their cars due to absurd home costs, which is a very very bleak future.


Because you don’t have your personal belongings in the taxi, you have to take them in and out. A personal car is a little room, like a little part of your home, that you bring with you when you travel. With kids especially (diapers, wipes, books, toys, car seats, snacks, a place to change diapers or change clothes or breastfed in privacy or nap, etc… protected from the elements and climate controlled), this is really helpful.


I extensively use car sharing services in Europe and it covers almost all of my use cases, the only exception being long distance trips, those are just too expensive when you’re paying by the minute or kilometer at todays prices.

There’s options with fixed parking spots, and services that allow pick up and drop off anywhere.

You tend to structure your life a little different once you don’t own your own car anymore, you start to think twice about little trips you would’ve done otherwise. On the flip side I now have access to 5 different types of cars ranging from small to big (vans) from my phone. It doesn’t even require that much more planning considering it’s reached critical mass around here and there’s a ton of cars available.

The biggest player around is profitable too, so it's not going away any time soon. It's saved me thousands and spared me so much hassle surrounding car ownership. I consider myself an enthusiast, but I just got a motorcycle for the weekends instead; pennies on the dollar compared to a car.

All in all I notice I’m just happier not being in a car all the time anymore, you might consider it your safe area but it might as well be a golden cage at times.

I understand it’s different once your throw small children in the mix so it might make sense there, but the reality is that a lot of people could do with a lot less car at most points in their life.


Something that seems to happen a lot on HN is the pervasive assumption that everyone lives in an urban area, or wants to. It is totally fine that some people choose that life, but it makes for these one-sided conversations where someone explains in detail why they have the right answer, while describing things that largely do not even exist outside of a relatively dense urban environment.


I'm addressing the GP who references Manhattan, obviously all of the above does not apply to more rural areas. I'm merely providing an anecdote of a city dweller, I figured that much was obvious. Please live your best life and if that means living outside of the city and owning a car, I wish you all the best.


Usable public transport is not limited to urban centers. GP mentions Europe, and there you have rural areas connected by good bus or railway systems that people use to get to school or work.


Kids can use bicycles from when they are very young, and until then you can put them in a trailer you tow behind your bicycle.

If you are doing longer travel, using the train is actually awesome. Those trains have actual places for children to play in.

If your cities (and that goes for small cities as well) are properly designed, it's very possible. It's just that cities in the US are literally designed to make it impossible.


> i may be biased since i use public transit or bike for everything

That's just a different way of saying that you never have any cargo (children, groceries, etc) to move.

You should also bear in mind that not everyone wants to live in such high density that everything you could ever need is 5 minutes away on foot.


> That's just a different way of saying that you never have any cargo (children, groceries, etc) to move.

We have plenty of cargo (children and groceries) and do not own a car.


You can buy puts.


The market can stay irrational longer than you can stay solvent.


If you're buying puts you have bounded risk (the amount you invested), albeit with a more cliff-like risk profile than other strategies.


In this case replace "you can remain solvent" with "the put remains valid".


As well as a time/volatility element, so it's not necessarily "TSLA went down a lot, so you profit a lot".


The argument literally everybody always makes and 99.99% they are simply wrong and don't want to admit it.


Options are gambling, which is not how I play the market. Aside from some lucky YOLOs, you are far more likely to lose money in that game.


I recall reading an analysis of the self driving safety stats a few years back that concluded, if you counted incidents requiring human intervention as if they were accidents, then based on total miles driven, humans far outperformed self driving, by like an order of magnitude. In other words, companies (waymo included) were sugar coating their stats for good PR, though Waymo was still at the front of the pack.

Some other comments on this thread suggest that self-driving is only perceived to be unsafe, but is statistically much more safe. Unless the stats improved significantly since then (and what specifically achieved that?), and an independent analysis can agree, I'm not trusting corporate PR.


>...if you counted incidents requiring human intervention as if they were accidents...

The true number is somewhere between this worst case and the numbers Waymo presents.

Most driver interventions that I've seen on video were not narrowly missed accidents; they're the car being confused by road construction, a double parked driver, pedestrians spilling onto a street etc.

I have also seen (for Tesla at least), videos of driver interventions that definitely would have been accidents if the driver hadn't stepped in.

I definitely agree with your point that I'd love to see more in-depth figures and the CA DMV might release those more detailed figures? I'm not sure.

EDIT: Sure enough, the CA DMV disengagement reports list the exact cause (https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...)


I almost feel like AI-enabled vehicles need a special light on the exterior to indicate to other drivers that this thing is a robot. As I'm driving, I would probably learn to approach these vehicles differently from the average human driver.


This sounds like an excellent idea. There is a lingering issue about drivers not paying attention and even sleeping while their vehicle is in self-driving mode, that's contrary to the requirement that the driver is actively supervising.

I don't think it's a realistic expectation that drivers are always fully attentive and able to respond in time to a crash situation when the self-driving mode is active. Such a light would give me the heads up that I should stay on my toes.


I agree. But we end up adding more stressors to human drivers that way to gain what really?


It's not necessarily a "stressor" if self-driving cars behave somewhat consistently; it's just another signal you can use when making decisions on the road.

"Will this car suddenly decide to change lanes?", "Not likely, it's self driving and there aren't exits or major changes in traffic in front of it".

There will also be quirks in any automated system. Learning and then predicting these will make the streets safer and more comfortable. For example, perhaps self-driving cars are overly cautious around some local crosswalks. If I'm behind one of these things in the winter, I might be aware that I should leave even more extra room for the sudden slowdowns that other cars will be less likely to make. If I'm smart this won't be the difference between an accident or not, but it will make for a smoother ride.

My larger question is, will proliferation of AI cars increase or decrease net traffic flow. People seem to be driving larger and larger cars slower and slower anyway, so maybe this target is achievable, but this all worries me.


One reason to introduce self driving cars is to hopefully reduce the number of SUVs on city roads, which are only driven by people because they’re “safer and protect me”. At best they’re an absolute nuisance to every other car and pedestrian, and at worst they’re absolute death traps that clog up roads.


> are only driven by people because they’re “safer and protect me”.

That's a lovely straw man you built. I've not met anyone who bought an SUV for that reason. 99 times out of 100 it's for the utility, especially the third row. Which, if you ask me, isn't as useful as people think when they buy it, but still, it's a big factor when you expect to be regularly driving the kids around along with their friends.


Yeah a lot of HN forgets that people have families, and the kids typically are at school while the parent is driving downtown to the job in the morning. So in the moment it looks like a waste, but alas people are quick to judge.


I can think of 4 people off the top of my head who have no kids but have an SUV. Maybe it’s an Australian issue more than other countries.

https://www.theguardian.com/cities/2019/oct/07/a-deadly-prob...


There's a lot of people who buy them because they're the majority of what dealers are selling.

Want a minivan? Good luck.

Station wagon? You need a time machine.


The self driving cars will be SUV or CUV at a minimum and there will be more of them, not less.


We need personal car size limits on city streets, stricter regulation about viewing angles and heights for personally licensed cars, and car mass taxes in general. Some of these SUVs should not be drivable with normal consumer vehicle licences. They should require a higher license level and training on driving large vehicles.


There is a very, very long list of things that should change to turn society away from car dependence.

Free highways being a huge issue. Not having car registration be based on weight. Far higher requirements to be allowed to drive. There is a long list.


I don’t think OP is necessarily asking for less cars, just less big, dangerous cars.


The same logic that gets you from bigger cars to smaller cars will also get you from cars to no cars.

If your reason is safety, cars are the issue, not bigger cars.


Exactly. Just like we saw with Uber & Lyft, but it will be exponentially worse. When I can easily just tell the car to go be available somewhere, or take my kid somewhere, or this package, etc, then guess what -- I'm gonna do it.


You could also just make SUVs illegal.


I appreciate that in this they demonstrate not just rigs where a manikin is thrown into the path of danger, but actual humans performing regular/irregular tasks. This to me is akin to the bulletproof {vest,glass,etc} manufacturer willing to put themselves behind their product for a demonstration. With AI systems I think this is particularly important, because with such high-dimensional data it is possible that the vehicle picks up on things like the pull cable, or on the fact that it is a manikin and not a human (e.g. pneumonia predictions strongly correlating with medical equipment within x-rays rather than inflammation). A kind of two-for-one confidence builder here.


> akin to the bulletproof {vest,glass,etc} manufacturer willing to put themselves behind their product for a demonstration.

I suspect the cyclist in the video is not a $500k/year ML engineer, it's a $50K/year veteran trying to stay out of the welfare line.


I really want to see how testing goes for MobilEye when they start to open up their prototypes a little more. Of the youtube videos I've seen theirs is by far the most impressive and I have a lot of respect for Amnon Shashua.


I was driving through heavy rain recently and had a small epiphany: human drivers are willing to take way more risks under adverse weather conditions. There was no way the car could stop quickly from 65mph on a wet road if something appeared suddenly in front of it. The robot driver would really have to drive way slower than anybody else in order to be safe. That would be annoying for the passengers. In some countries it would also be constantly honked at.
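The intuition checks out with basic kinematics (a sketch that ignores reaction time and assumes constant deceleration; the friction coefficients are typical textbook values, not measurements):

```python
def stopping_distance_m(speed_mph: float, mu: float, g: float = 9.81) -> float:
    """Braking distance d = v^2 / (2 * mu * g), with v converted to m/s."""
    v = speed_mph * 0.44704  # mph -> m/s
    return v ** 2 / (2 * mu * g)

print(round(stopping_distance_m(65, mu=0.7), 1))  # dry asphalt: ~61.5 m
print(round(stopping_distance_m(65, mu=0.4), 1))  # wet asphalt: ~107.6 m
```

Nearly double the braking distance in the wet, before even counting reaction time, which is why a robot driving to the actual physics would feel painfully slow to the humans around it.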


i have to chuckle at the use of language to humanize their tech here - comparing "the waymo driver" to "NIEON". the one of those that sounds like the name of a robot from the future is actually just referring to a normal human.


The whole point is that it's not a normal human; it's a model that is better than any human could be.


also "real agent" which just means human


I think the problem with self-driving vehicles, to add to that conversation, is that the companies focus on the wrong market.

For people it is asking for them to give up control either as drivers or as pedestrians in a city. Most people like to have a sense of control of their lives. (My take anyways. I know lots of people like having tracks in their career and progress but I would argue they still feel in control.)

For the whole truck shipping lobby, on the other hand, it will be: save money. (I know that is a dystopian take, but I find it more likely for trucking or something similar to be the first fortress to "fall".)


One thing I'm very curious about with AI-driven cars is whether they will develop some of the overly cautious / paranoid habits human drivers get around bicycles.

Take people passing a bike in a bike lane, for example. Some will go way under the speed limit and not pass the bike even though it's in its own lane. Others will swing WAY wide to pass, even though, again, the bike is in its own lane.

Will AI itself learn that indeed, bikes do suddenly veer into your lane, or will AI in fact learn that no, they don't do that?


Personally, I don't understand the economics of lidar for self driving vehicles. 1) How many lidar units will one vehicle need and how much will this cost? 2) How will noise from other lidar systems be addressed? Eg two or more disparate lidar systems on other vehicles using similar frequencies? 3) How small can the lidar systems be made while still being effective in real world use? These units being used for testing are massive and probably stupendously expensive!


While it feels like Waymo is taking a better approach than Tesla, I still do not see a viable solution to the trolley problem.

These tests are nice and all but what is going to happen when a situation invariably arises where someone is going to get hurt? What's going to happen then? Is the car going to opt to protect the passenger (because customer) or the pedestrian (much more likely to be hurt/killed).


I suspect these cases are simply way more rare than people expect, especially if the car is driving defensively.

If the car is, for instance, tracking likely pedestrians that will enter the road and slows down for safety when they’re near, someone would have to irresponsibly run into the road. If the choice is to hit them or turn over a cliff, I’d expect the car to hit the careless pedestrian.


Answer: the passenger. Now can we move on from this inane argument? If they are safer than humans, it frankly doesn't even matter.


can't wait for the day I'll leave work, get into a self-driving car, watch a movie, have it stop for takeout, sleep as it drives me across the country and then wake up in the morning on the other side of the country ready to start my day. Rooting for Waymo!


I'd love to see an independent body conducting the same exact tests across all platforms (ie: Tesla).


It should be adversarial. An independent body should administer test suites provided by the manufacturers. Everybody has to pass 99% of the union of all test scenarios before being allowed on the road. The tests are run monthly.
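A minimal sketch of the proposed scheme (the 99% threshold and 10,000-scenario scale come from this thread; the function and scenario names are invented):

```python
def passes(results: dict[str, bool], threshold: float = 0.99) -> bool:
    """results maps scenario name -> pass/fail for one vehicle,
    over the union of all manufacturers' submitted suites."""
    return sum(results.values()) / len(results) >= threshold

# 10,000 scenarios in the union; a single failure still clears the 99% bar:
union = {f"scenario_{i}": i != 42 for i in range(10_000)}
print(passes(union))  # True (9,999/10,000 = 99.99%)
```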


>It should be adversarial. An independent body should administer test suites provided by the manufacturers. Everybody has to pass 99% of the union of all test scenarios before being allowed on the road. The tests are run monthly.

While I appreciate the spirit, you're being overly glib, and we should all be wary of overly simplistic answers, since reality can have surprising emergent effects. Your proposal as written would give manufacturers who cut corners and fell behind an effective veto over companies doing well. If, say, Tesla finds they have to pretty much start from scratch, that they're 5 years behind Waymo because they took a shoddy, haphazard approach, they might as well create a bunch of impossible tests that Waymo can't pass. It doesn't matter that they can't pass those tests themselves, since they couldn't pass the ones Waymo proposes either; this way they throw a spanner in the works and pull the leaders back. You could alter your suggestion so that manufacturers can only propose tests that they themselves can pass with ready-to-ship vehicles, so everyone has to meet each other's standards. But what if there really are tests that everyone should pass but that no one can yet? Or what if there is explicit or tacit collusion in the other direction, where everyone lowballs the tests because ultimately it's more profitable to get stuff shipping?

Basically I don't see any reason to not just have the government continue to be involved here and come up with independent road safety standards as advised by their own experts, with public comment and rationale. Ultimately it's the public interest at stake and the rules are about use of public infrastructure. Why not just have an aggressive federal standard course and set of tests that everyone must meet?

I also think in terms of incentives that FSD car manufacturers should be fully liable for any accidents caused by the car while FSD is active, simple as that.


What if you limited to test scenarios that a human ("NIEON") could reliably pass?


I'm sure motivated engineers could come up with tests that are extremely difficult for current AI but basically useless for deploying a real self driving system.

See the millions of trolley-problem examples, for instance.


Trolley problem examples are hard, but your "NIEON" won't reliably pass either.


If Tesla can't pass each and every one of their proposed tests then they aren't even in the game. What you describe would not occur.


Kinda crazy to think the tests would have to be run monthly, but they would, because of software updates. What crazy times we're approaching.


The "unit" in "unit test" will now refer to the car crashed on every Jenkins build


I watched this video yesterday which seems really applicable...

How to crash an airplane – Nickolas Means

https://www.youtube.com/watch?v=099cHWSbAL8

Reflecting on the video: they don't need to save everyone, but it should at least be the case that not everyone dies.


So out of 10,000 tests, it’s okay if they fail on 100 of them?


As nice as it may be to think that humans are perfect, it's not like they'd score 100% on this level of testing either.


Automated driving systems will be held to a far higher bar than human drivers. People will get even more upset about self-driving tech causing injuries/wrecks/deaths/endangerment than about what human drivers cause.

Long after self-driving systems are superior to human drivers on average, the headlines will still scream about humans being killed by self-driving tech. The sensationalism will still sell and people will still be very outraged about it.

The expectation will be no mistakes. Anything short of that will always draw a hyper emotional negative response, which will lure in political/regulatory responses.


> Long after self-driving systems are superior to human drivers on average

For starters, that's not the correct metric. Self driving systems have to at least surpass the median driver, not the average (mean). Auto-related fatality stats are heavily skewed by a small subset of drivers who engage in very risky habits.
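A toy illustration of that skew (all numbers invented): a small tail of very risky drivers drags the mean well above the typical driver, so "beats the average" and "beats the median driver" are very different claims.

```python
# Per-driver fatal crash rates (invented units: crashes per 100M miles).
# Eight typical drivers, one slightly risky, one extremely risky.
fatal_crash_rate = [1, 1, 1, 1, 1, 1, 1, 1, 2, 40]

mean = sum(fatal_crash_rate) / len(fatal_crash_rate)           # 5.0
median = sorted(fatal_crash_rate)[len(fatal_crash_rate) // 2]  # 1

print(mean, median)
# A hypothetical AV with a rate of 3 beats the mean (5.0) yet is 3x riskier
# than the median driver (1), so most drivers would be worse off switching.
```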


Why should they have to pass the median?

> Auto-related fatality stats are heavily skewed by a small subset of drivers who engage in very risky habits.

Right and it would be great if those people used self driving cars instead.


> Why should they have to pass the median?

Because you have to convince people like me to buy a self-driving car, and as long as that car is more likely to get me killed than I am, my family will remain in a car that I drive. I do not drive drunk, I avoid driving in inclement weather when not required, or at night, or when I'm really tired. I don't race, I don't road rage, I am a very defensive driver. I have not had an at-fault accident ever (in 30 years and counting since I got my license) and the only accidents I've ever been in at all were minor fender-benders.

So convince me why I should endanger myself so that you can have an unsafe computer driven auto on the road?

> Right and it would be great if those people used self driving cars instead.

So make a self-driving car for them. You will need to subsidize it, since these types of drivers are more likely than not unable to afford a fancy new toy. When the technology can finally cross the median point, then we can talk again about regular, good drivers hanging up their keys.


>People will get even more upset about self-driving tech causing injuries/wrecks/deaths/endangerment vs what human drivers cause.

Will they?

I mean, for example, tobacco companies lied, and the truth we know today is that smoking is very, very detrimental to smokers' health. It's also detrimental to non-smokers via secondhand smoke, and through secondary effects like cigarette-butt litter. It doesn't even provide any solid utility like transportation does; it just feels good.

Not only do people still smoke today, people _start_ smoking today given all the information we have.

So when I see behavior like that, I'm not confident that people won't want FSD just because it's 'dangerous'.


You're 100% correct. People will want FSD for themselves for sure. That won't stop them from blaming the tech companies when they read articles about the cars killing people. Ralph Nader's _Unsafe At Any Speed_ tanked Corvair sales after publication, although his critiques arguably applied to other cars more than the Corvair. The sales of other, similar contemporary cars weren't affected at all.

FSD will be incredibly convenient, which means humans will always be motivated to come up with a reason, valid or otherwise, that justifies their own use of the tech while allowing themselves to condemn others for mishaps incurred doing the exact same thing.

"They didn't maintain it correctly." "They didn't listen to the warnings." "They bought the wrong brand." "They weren't current on software updates."


I doubt it; the more common self-driving deaths become, the less newsworthy they will be.


In fact, we can even subject human drivers to the same tests and compare the results.


> As nice as it may be to think that humans are perfect, it's not like they'd score 100% on this level of testing either.

Someone posted upthread that current fatalities are something stupidly low, like 1.3 per 100 million miles.

Humans are currently ahead in the safety stats game.


Maybe have variable points per test and a minimum passing point total, so that failing an important test could fail you on its own.
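One sketch of that scoring idea (test names, weights, and thresholds here are all made up): each test carries points, a minimum total is required, and any failed "critical" test fails the run outright.

```python
# Hypothetical weighted scoring with critical-test veto.

def evaluate(results: dict[str, bool], weights: dict[str, int],
             critical: set[str], passing_total: int) -> bool:
    # Any failed critical test is an automatic fail, regardless of score.
    if any(not results[t] for t in critical):
        return False
    score = sum(weights[t] for t, ok in results.items() if ok)
    return score >= passing_total

weights = {"pedestrian-dart-out": 50, "lane-keep": 10, "parallel-park": 5}
critical = {"pedestrian-dart-out"}

# Passes: 50 + 10 = 60 points >= 55, no critical failures.
print(evaluate({"pedestrian-dart-out": True, "lane-keep": True,
                "parallel-park": False}, weights, critical, 55))
# Fails: plenty of points, but the critical test was failed.
print(evaluate({"pedestrian-dart-out": False, "lane-keep": True,
                "parallel-park": True}, weights, critical, 10))
```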


Yeah we can quibble over the details. The key aspect is adversaries.


And a meta body should coordinate with manufacturers to periodically submit faulty products to make sure they’re flagged.


100%


IMO, public road, public code.


That's a nice phrase, but it seems impractical given the enormous complexity of these systems.


Impractical or not, public streets belong to the public, and the public gets to dictate how they are used or not used.

I'm willing to negotiate. If companies don't want to release their code, they can put up $50M in a trust, and $1M bond per car, and automatically accept at-fault for any fatalities involving one of their self-driving units, with a minimum of $1M paid out for each deceased to their survivors.


They refer to NIEON as an undistractable human driver, but isn't it just another model? Is that a bit misleading, or can someone shed more light on this?


There's a footnote:

"NIEON is defined by (1) gaze being directed through the windshield toward the forward path during the conflict and (2) a lack of sleepiness and intoxication-related impairment."


So basically, a public transit driver.


NIEON is "a reference model that represents an ideal human state for driving"

You can use this for comparison, as in "what would NIEON do in this situation"?


It's just a hypothetical for comparison: "is our self driving car doing as good or better than a competent, undistracted human driver?"


As with all AI, it works until that special set of circumstances pop up where it fails and when it fails there is no bottom as to how it fails.


Self-driving cars are mostly a bad idea in most cases. The only thing worse than cars with one person in them driving around is cars with zero people driving around.

The goal should be fewer cars overall, especially in cities. Technology that will cause more cars and more traffic is just terrible.

Cities should ban cars, make themselves pedestrian- and bicycle-friendly, and build "self-driving" metros and S-Bahn-style train systems.


There's no need to ban cars - it would be quite sufficient for governments to stop building infrastructure (and making various laws around its usage) as though cars were the only type of transport that mattered. FWIW, in principle self-driving tech could make our cities far more "people" friendly and less attuned to the needs of cars, but only if there were some sort of motivation for the tech to be developed with that goal in mind.


Actually there is a huge need to ban cars in cities. Many cities are doing it and those cities are far better to live in and visit because of it.

Large-scale pedestrianization of cities is a gigantic force for improving the world: fewer accidents, cleaner air, a more attractive city, more efficient transit, better health from cycling and walking. I could go on.

So the program should be: ban cars from anything that could even remotely be considered a core part of the city, massively slow cars down around the city, remove all on-street parking on public land, and eliminate all minimum parking requirements. The list of policies goes on.

> self-driving tech could make our cities far more "people" friendly

Having cars with 1 person use up the most valuable space in the world is bad enough, doing it with cars that have 0 people in them is worse.

Self-driving should at best cover a very small percentage of trips, maybe less than 1%, in a few special cases: for old people, people with disabilities, and the like.


> Many cities are doing it

Can you give an example of such a city (other than the obvious examples of restricting car usage in sections of the inner core, which is pretty much the norm in many older European cities)? But it doesn't make any sense to build (or even maintain) a city with roads designed for cars and then not allow them to be used for that purpose. If the roads and other car-based infrastructure were re-purposed in a way that made them clearly unsuitable for cars then, again, as I said, you don't need to ban them - it just wouldn't be feasible or practical for the vast majority of people to use them that way.


There are lots of examples and trials going on. Yes, old cities in Europe have always had this, but these zones are expanding: Barcelona's superblocks, Paris's huge program, even Brugge is doing more of it, and many cities in the Netherlands. In North America, Montreal has been pushing forward. Even New York has done some of this, but not nearly enough.

> a city with roads designed for cars, then not allow them to be used for that purpose

Actually, walking, bikes, buses and commercial vehicles can use them just fine without personal cars being there.

You can also put things there: restaurants can expand, and the space can be used in lots of different ways.

Yes eventually you want to move away from asphalt streets towards something more reasonable (and more environmentally friendly) but there is lots of things you can do on asphalt.

> it just wouldn't be feasible or practical to use them for the vast majority of people.

Actually, removing cars and giving priority to walking, bicycles, and public transport increases the transport capacity of the streets. It's also better for businesses along those streets, as study after study on pedestrianization has shown.


> Actually there is a huge need to ban cars in cities. Many cities are doing it and those cities are far better to live in and visit because of it.

I think you missed the point. People drive cars on roads. Stop building, and start removing, roads and people no longer drive on them. There is no need to go through the process of banning cars if you just change how and what infrastructure gets built.


You don't need to remove roads, you simply need to repurpose them.

You can't just remove roads in the middle of cities. Like what does that even mean?

You just block of cars and let people retake those spaces.


> You can't just remove roads in the middle of cities

https://en.wikipedia.org/wiki/Freeway_removal

Plenty of other info online describes various successful transformation projects that basically involved getting rid of car-based (or worse, car-exclusive) infrastructure. My main point is that it doesn't need to be replaced with something where cars aren't allowed at all, just with infrastructure that gives equal priority to all modes of getting around. And it's not just removal: I'd hope to see that as a guiding policy for new urban developments too (sadly we're still some way off that in Australia - most development occurs on the city fringes and further entrenches car dependency).


While Waymo is spending $$$ gathering driving data, Tesla has hundreds of thousands of cars doing it for free. In terms of sheer data, Tesla wins this race. Whether Tesla can actually use that firehose of data and train models that use it productively remains to be seen. With the departure of Karpathy, I am not so sure.

If Tesla gave all the data to Waymo, Waymo would reach L5 in no time.


Waymo was collecting data before Tesla, and switched to simulated training a long time ago because it's more effective.

The self driving AI can gain 100 years experience in just 1 day using simulation: https://blog.waymo.com/2020/04/off-road-but-not-offline--sim...


Tesla uses simulation too, but there is no replacement for real-world data. The real world is crazy, and Tesla can see people driving in real conditions from Alaska to Miami, Florida.


Amateurs think that "simulation" is all you need. They see realistic environments on their PS4s and XBoxes and think that's enough.

If simulated data were enough, why _hasn't_ Waymo launched in places like NYC or Boston?


If you think driverless car training in simulations is a playstation game, then you're projecting with your comment. Also, because of weather.

Waymo just went live with full driverless for the public in Phoenix. Tesla can't do full driverless so I think that evidence settles the debate of simulated vs. data collection, unless you have evidence otherwise?


If the goal was to spend $10 billion establishing a money-losing operation in Phoenix, then maybe you would have a point.

But that isn't actually why the $10 billion was invested.


Waymo brought the first fully driverless vehicles to market, which Tesla has yet to do, so calling it a "losing operation" shows your confirmation bias here. Did you buy a Tesla on this promise, perhaps?

You still didn't answer the question - where's the evidence?


Seems like something I'd expect CNBC to say, not Hacker News.


Hacker News is superficial negative group think. Inverse hacker news is where it's at. Then you can predict the success of startups like Dropbox and Coinbase


I do have some experience building ML systems, and know enough to know that I don't know everything. But I do know the value of real-world data in a noisy environment like driving conditions.


Is any of Waymo Driver's design published? Like do they use RL or how do they approach control.


Tesla mentioned in this thread almost twice as many times as Waymo.

Elon love/hate is a powerful force.


Waymo and Tesla's approach to self-driving could not be more different. One of the scariest parts about Tesla is that they don't even seem to know what they don't know. In related news: It looks like Tesla may add radar back: https://www.forbes.com/sites/bradtempleton/2022/12/12/tesla-...


They need to add radar back. Ability to "see" slow downs or stopped traffic even if your vision is obscured is a great benefit for self driving.


They need LiDAR. The biggest problem for FSD is still around bounding box detection.

I just don't believe that you can infer the dimensions of objects using stereoscopic images with a reliability that you need to make FSD work.
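For a rough sense of why that skepticism is common: with a stereo pair, depth is recovered as Z = f·B/d (focal length f in pixels, baseline B, disparity d), and a fixed disparity-matching error produces a depth error that grows roughly with Z². The camera numbers below are invented for illustration, not Tesla's actual specs.

```python
# Back-of-envelope sketch of why stereo depth gets unreliable at range.

def depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px: float, baseline_m: float, z: float, dd_px: float) -> float:
    """First-order error propagation: dZ ~ Z^2 * dd / (f * B)."""
    return z * z * dd_px / (f_px * baseline_m)

f_px, baseline = 1000.0, 0.3  # assumed 1000 px focal length, 30 cm baseline
for z in (10.0, 50.0, 100.0):
    err = depth_error(f_px, baseline, z, 0.5)  # half-pixel matching error
    print(f"at {z:.0f} m: depth uncertain by roughly +/-{err:.1f} m")
# The error at 100 m is 100x the error at 10 m (quadratic growth),
# which is why bounding-box dimensions far up the road are so noisy.
```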


How do our eyes work?


Far more complex sensors than cameras attached to some of the most complicated and least understood processing systems running a general intelligence with millions and millions of years of training.

Also, when you're in the car, you probably move your head more than you realize. Moving your head and looking around gives you a better understanding of distance. Far more than a couple of cameras feeding a few megapixel images into an ML model.


By us constantly moving them around in three dimensions unlike a car.

And also having a computer behind them that deeply understands what an object is, the forms it should take and what its expected behaviour should be. We don’t ever confuse billboards for real people.


>By us constantly moving them around in three dimensions unlike a car.

Fair but to compensate the car has many more cameras than we do.


For looking forward, Teslas have three fixed cameras mounted at the rear-view mirror. None of the other cameras have stereoscopic vision or look forward. A camera looking out from a side pillar isn't helping gauge the distance to something far up the road.

On top of that these cameras can't move, can't be re-aimed, and have generally far worse dynamic range than our eyes.


Why make the difficult task of self-driving any harder, by artificially limiting the sensors you use? Planes have rigid aluminum wings and jet engines instead of bone muscle and feathers.


Our eyes work in concert with a supercomputer accessing huge stores of contextual data (far eclipsing anything in a Tesla neural database) to understand and react to unique situations.


Rather poorly, given the rate of traffic accidents by humans.


That's your opinion, but Tesla does not "need" to add anything from the point of view of actual Tesla customers!

Tesla cars are selling faster than they can make them, and adding radar will slow down manufacturing, not speed it up.


Tesla is currently being sued by actual customers based on this exact issue. They've promised FSD, and can't deliver it - especially without lidar.


> Tesla cars are selling faster than they can make them

If this is true, why are they reportedly cutting production and offering incentives? If they were booked solid, no incentives would be needed.

https://www.reuters.com/business/autos-transportation/tesla-...

> Radar will slow down the manufacturing, not make it faster.

? They are adding radar, though: https://electrek.co/2022/12/06/tesla-radar-car-next-month-se...


> adding a Radar will slow down the manufacturing, not make it faster

Tesla told the FCC that it plans to market a new radar starting next month.

https://electrek.co/2022/12/06/tesla-radar-car-next-month-se...


I agree. Tesla FSD has so many obvious limitations that can be worked out on closed courses that it has no legitimate reason to be tested on public roads. For example, it cannot drive directly into the sun. That's a flaw they could work out on the test track (by adding different sensors). There needs to be regulatory intervention to force FSD off the road.


Curious you mention this, I was using it this morning driving directly into the sun... even with the lane lines not yet repainted after they repaved the roads, it kept me in the right part of the road and even engaged the turn signal automatically as it stopped at the traffic light before our right turn... I know they had issues with direct sun a few years ago but it seems to me driving with it enabled into the sun works just fine...



Might’ve been because it perfectly obscured the straight-ahead view, given the hills where he’s driving. While not perfect, the lens on Tesla’s forward-facing camera is good at retaining detail when the sun is above the eyeline. https://twitter.com/greentheonly/status/1200626377097129984?...


Yeah, I see, that is a pretty intense road and situation... my road is much, much straighter... I'll try to capture it and share; definitely not difficult...


How is this an intense road? It looks pretty wide and clear of traffic with a few parked cars. If it can't handle this, it's got no chance in an average city in Western Europe.


What is a "pretty intense road and situation" for you is some people's daily commute.


That looks like an extremely normal residential road with no other drivers or pedestrians.


The really scary part is the one nobody in these comments seems to address: we let humans drive these things with barely any training, on roads that are terribly designed for the safety of the people around the vehicles.

This utter disaster of a situation leads to tens of thousands of deaths, despite having well-known solutions.

But instead of preventing those deaths with well-known low-tech solutions, a super-expensive technological holy grail is supposed to somehow fix the problem.


Fully agree here but what's the "well known low tech solution", besides public transit?


I agree with your and the parent poster's opinion that the best safest option is to encourage and fund buses and trains


Which is consistent with what they had said at the time when they removed the low-resolution radar they had been using. Specifically, they had said that radar would be useful if it were sufficiently high resolution. The radar in the works and rumored to be added soon is anticipated to be high resolution.


But Tesla's software version is 10.69.x....69...get it...LOL!!


And one of the scariest parts of Waymo is that they may never ship and it'll be 20 billion dollars down the drain


I mean, that’s good though right? If it doesn’t work it doesn’t work. Worst comes to worst taxi drivers still have a job.


I think you mean, worst comes to worst 1.35 million people still die in traffic accidents every year.


If companies really had the noble cause of reducing traffic fatalities, I'm pretty sure the billions they spend could be better spent advocating for light rail and buses. And hey, thousands fewer people would die of lung disease to boot.


I don't understand what you mean. Doesn't the Phoenix launch count? They've definitely shipped something to prod.


I don't see how that's supported here. Clearly Tesla does its own closed-track testing; I've seen coverage of it in the past. They don't release public software that doesn't pass these kinds of tests. Likewise, it's not like Waymo restricts its testing to closed tracks; they have vehicles on the streets too.

> One of the scariest parts about Tesla is that they don't even seem to know what they don't know.

I'm curious what the reference here is. Again, are you taking coverage of Waymo's test environment as evidence of its absence at its competitors?


Tesla claims they are safer than human drivers by fudging statistics about accidents per mile.

Waymo describes in detail how they test their algorithm against specific scenarios and makes statements about those experiments.

One company is trying to sell you cars and telling you what you want to hear, the other company is extremely careful with their statements.


> fudging statistics about accidents per mile.

How? The only problem with this page[0] is that they haven’t released 2022 stats. Otherwise:

> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

0: https://www.tesla.com/VehicleSafetyReport
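That counting rule can be sketched as a simple attribution filter (the event times below are invented for illustration, and this sketch assumes Autopilot was engaged at some point before the crash):

```python
# Sketch of the quoted rule: a crash counts as an Autopilot crash if the
# system was active at impact OR was deactivated within the 5 seconds
# before impact.

ATTRIBUTION_WINDOW_S = 5.0

def attributed_to_autopilot(deactivated_at, impact_at: float) -> bool:
    """deactivated_at is None if Autopilot was still active at impact,
    otherwise the timestamp (seconds) when it switched off."""
    if deactivated_at is None:
        return True  # active at impact
    return impact_at - deactivated_at <= ATTRIBUTION_WINDOW_S

print(attributed_to_autopilot(None, 100.0))  # active at impact -> counted
print(attributed_to_autopilot(96.0, 100.0))  # off 4 s before -> counted
print(attributed_to_autopilot(90.0, 100.0))  # off 10 s before -> not counted
```

The window matters because without it, a system could hand control back moments before an unavoidable crash and have the event attributed to the human.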


That "vehicle safety report" is 70 words, none of which answer the question of "how many crashes were there" or "how many people were injured or killed." One of the few numbers they give us, the percentage of incidents/hour for driving "with active safety but without autopilot," bounces around by like a factor of two from quarter to quarter, which doesn't make any sense. Literally every provided number is in Tesla's favor, and context is only provided in ways favorable to Tesla.

This is a press release, not a safety report.


I'm honestly curious what test tracks Tesla uses. Do they just rent private tracks for their road testing before it gets to public beta? It clearly isn't done onroad by safety drivers.

GM has Milford and Yuma + others, Waymo has Castle, Zoox uses Altamont Raceway, Nuro has a track at the Vegas speedway. I can't imagine Tesla's small Fremont loop and their winter track in Fairbanks are sufficient.


Tesla’s FSD is so terrifyingly bad at routine tasks (I used it for six months before giving up) that it’s natural to assume whatever closed track testing they did was ineffective. Or perhaps they did a lot, I don’t know—but it doesn’t feel like many of those lessons learned made it into the “production” system.


I really like AP in general, but for FSD Beta, I tend to agree. I've seen enough mistakes from even the really careful Youtubers that I don't understand why its still in the field.

And those are the mistakes that they were willing to show! To be clear, I mean situations where the driver needed to take over but either didn't, or didn't in sufficient time to avoid an illegal or dangerous maneuver.


From what I can tell of people posting FSD videos on YouTube, they are actively seeking adversarial conditions with a desire to show where it fails. I’m sure there are some YouTubers that are trying to sugarcoat FSD, but I haven’t seen any.


I partially agree. I mean, I find dirty tesla's videos to be pretty fair.

Having said that, he's also been really clear that video doesn't always convey just how many aspects of FSD are just plain weird. Even when it's not actively failing, it moves in odd ways that are uncomfortable.

He's also said that earlier versions resulted in curbed wheels.


Yep, my mate curbed his front passenger wheel that way. But I dunno how you fix that without LIDAR sensors mapping the terrain around the vehicle.


Agreed, that's one way. The other way is multiple downward facing cameras with stereoscopic vision of the area immediately around the car. Mobileye demonstrated a system like that and it worked really well.

Tesla's cameras have huge blind spots near the car and aren't stereoscopic in all directions. They had ultrasonic sensors at one time, but those had blind spots and major limitations too.


> They don't release public software that doesn't pass these kinds of tests.

Do we actually know that this is true? In October of last year, "internal QA" found regressions on an already-public release, and they rolled out an update less than 24 hours after that was published. Both the fact that it was already public when QA found an issue and that they were able to push a new public version in such a short time suggest to me that they don't necessarily have a release gauntlet for each version, or at least not a very robust one.

https://www.cnbc.com/2021/10/25/tesla-rolled-back-fsd-beta-v...


While both Waymo and Tesla are testing vehicles on the streets, and have done so for some time, their approaches could hardly be more different.


There's no reference. It's just fashionable to hate on anything associated with Elon at the moment.


The thing is – nobody cares!

People can scream bloody murder till the cows come home, but Tesla, for all its faults, is king of the hill: literally more valuable than the rest of the car industry combined. Musk is invincible, and irrespective of any shortcomings he's literally the richest person on Earth.

Tesla's approach is demonstrably the best approach in the court of the customer, irrespective of what we (others on the road) think or are put at risk.


"we (others on the road)" is not "nobody", and we definitely care. The law is supposed to intervene in tragedy of the commons cases like this for precisely this reason. The free market is known to have a few blind spots and this is one of them.


> literally the richest person on Earth

Not anymore https://www.bbc.com/news/business-63963239


"invincible"? He has literally lost over $100B this year.

Tesla's market share is eroding every year, and is expected to decline to less than 20% by 2025.

https://www.cnbc.com/2022/11/29/teslas-dominance-of-evs-is-e....

There are plenty of people like me, who are choosing non-Teslas when they buy EVs.


Market cap has absolutely no relation to the "court of the customer", nor should it in terms of what safety features we allow or disallow.


He sucks


So the AI learns - slow down when driving between shipping containers


Well… Yes. It is a scenario where you have a blind spot on the road ahead of you, so I’d hope a human driver would do the same.


Consensus from the two AV articles today seems to be that Tesla, despite releasing more data to the public on their FSD system, is evil, incompetent and endangering society (the data doesn't say so, but just the fact that Tesla won't release even more data is shady). Whereas Waymo making a PR statement with no substance is proof Tesla will fail and Waymo will save us.


> Tesla, despite releasing far more data to the public on their FSD system,

The same Tesla who argues to the California DMV that FSD is a level 2 system to dodge reporting requirements? The Tesla who has a major EV outlet demanding that they release meaningful disengagement data? [1]

Perhaps I missed something but I really have no idea what data you could be talking about, unless by "data" you mean access to the FSD beta.

[1] https://electrek.co/2022/12/14/tesla-full-self-driving-data-...


Tesla themselves released safety data quarterly for years. I don't know why it stopped, but internet guesses range from "Tesla is a fraud" to "the NHTSA started to question its accuracy, and they pulled it for liability reasons." Tesla actually updates miles driven on Twitter. And going on YouTube, you can find endless videos of user experiences. I said thousands of hours before, but it could very well be hundreds of thousands of hours.

Find anything on Waymo. The NHTSA released autopilot and AV vehicle crash data as raw numbers of crashes, and I wanted to normalize both Tesla and Waymo somehow, but I couldn't find anything from Waymo other than some repeated claim of "over 20 million miles" which dates back years. If it really is only 20 million, then Waymo is insanely dangerous and no one should be defending them, but I have a feeling they're into the hundreds of millions at this point. Also, look for similar YouTube experiences: Waymo has almost nothing, just a few people following their vehicles.


> Tesla themselves released safety data quarterly for years.

The reports they have released have been pretty rudimentary and therefore misleading. I'm glad they released the data when they did and they should keep doing it and expand it to FSD beta with disengagements: https://twitter.com/Tweetermeyer/status/1488673180403191808

> you can find endless videos of user experiences. I said thousands of hours before, but it very well could be hundreds of thousands of hours.

I don't really give Tesla all that much credit for allowing their users to publish video of them using the software and hardware that they paid for, especially when they have asked users to consider Tesla's reputation when choosing what to publish. Further, while videos can reveal interesting qualitative information, they are a really bad substitute for the quantitative data that Tesla is already sitting on and choosing not to release.

Reports for every AV company that tests in CA (except Tesla): https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


>Consensus from the two AV articles today

Consensus from informed industry observers who are sounding the alarm bells about how dangerous Tesla's approach is from a first-principles standpoint

The "data" Tesla released is a meaningless attempt to pull the wool over your eyes.


I wish HN would be better than that, but your summary is perfect.


What's your response to the fact that Tesla's data is actually very bad? [1]

[1]https://twitter.com/TaylorOgan/status/1602774335244177408?re...


I'm on the side of "LIDAR is a crutch." Not only is it crazy expensive with many moving parts, but the sensor output is just a bunch of dots. You need cameras anyway for high-density vision input.

With newer algorithms, you can stitch multiple cameras to reconstruct a scene and semantically identify important objects. I do like what Tesla is doing with its new voxel neural net.

Yeah, it will take a few more years, but the hard problem is doing what our visual cortex does with two cameras (our eyes). Also keep in mind a car has multiple cameras around it giving a full 360 view. They can also sense in the IR range, which our eyes cannot.


Tesla's vision-based system makes a prediction of what the environment around it is and how far away things are. A LIDAR-based system knows with certainty how far away things are.

LIDAR may be expensive now, but that's not to say it won't get cheaper, with fewer parts.

But the main issue is a social one: I don't think the public will accept any fatalities resulting from a vision-based system making a mistake, even if the death rate from accidents is lower than with a human driver.

A self-driving car can't be 50% better, or even 200% better; it needs to be thousands of times better than a human driver for it to succeed. I think the certainty of a LIDAR-based system is the only way to do that.

An AI-based vision system is Tesla saying they can make a "brain" better than a human at understanding vision.

LIDAR is saying: humans can't precisely measure how far away things are, so we can do better than humans by doing something they can't.
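The "measures directly" claim rests on time-of-flight: the sensor times a laser pulse's round trip and converts it to distance with the speed of light, no inference required. A back-of-envelope sketch (illustrative timing value, not any vendor's API):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_s: float) -> float:
    """Time-of-flight range: the pulse travels out and back, so halve the path."""
    return C * round_trip_s / 2

# A return pulse arriving ~333 ns after emission corresponds to roughly 50 m.
print(lidar_range(333e-9))
```

Real units still face their own error sources (multiple returns, rain/fog scatter, interference from other lidars), so "certainty" is relative, but the ranging itself is a direct physical measurement rather than a learned estimate.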


> I'm the side of LIDAR is a crutch. Not only is it crazy expensive and has many moving parts

Newer lidar units are orders of magnitude cheaper, and solid state. If I could only choose between a camera and lidar, I'm going with the latter.


The fact that they don't highlight any night testing is troubling, and that they have a single closed course facility in a place where it never snows.

I often wonder if these companies are truly trying to revolutionize driving, or just trying to put a couple of "Johnny Cabs" in the southwest and call it a day. Their strategy really does seem geared towards the latter outcome.


Recently there have been Waymo vehicles out on the streets in the Seattle area. I even saw one driving around the day it was snowing here.

I suspect we’ll start seeing more training and testing in places with less nice weather as time goes on.


Night isn't especially concerning for vehicles that have more sensors than just cameras; headlights and infrared cameras exist. Rather, dealing with winter conditions seems like the reason these will be confined to the southwest.


> Night isn't especially concerning for vehicles that have more sensors than simply cameras

This is precisely the type of presumption I'd like to see tested. I mean, if you're going to go to the trouble of all this and build a 118-acre road course, it seems the height of hubris to just say "well, the sensors will probably be better than cameras at night."

It's amazing to me that on Hacker News people have this puritanical embarrassment over obvious technical questions.


Oh this is definitely false!

Night is very concerning and is a problem for computer vision systems.

During the day illumination is fairly consistent. Sure the sun moves around, but it's not a spotlight. At night, illumination varies a lot. A detector that works during the day may not work at all at night. This is no joke, there are papers that show dramatic performance losses for tasks like pedestrian detection at night.

Cameras are worse at night. They need to be more sensitive which dramatically increases their noise. They may need to have longer exposures leading to blur, which is of course made worse if you are moving the camera.
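The noise claim can be quantified: photon arrival is a Poisson process, so in the shot-noise limit a pixel's SNR is the square root of the photons it collects. A rough sketch with made-up photon counts, just to show the scaling:

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Shot-noise-limited SNR: signal N over noise sqrt(N) gives sqrt(N)."""
    return math.sqrt(photons)

# Hypothetical: a daylight pixel collects ~10,000 photons per exposure,
# the same pixel in a dim night scene only ~100.
print(shot_noise_snr(10_000))  # 100.0
print(shot_noise_snr(100))     # 10.0 -- a 10x worse SNR at night
```

Raising gain or lengthening the exposure trades that noise for motion blur, which is exactly the bind a moving camera faces after dark.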

Headlights also don't provide the same visibility so your reactions must be faster. Reflections are a big problem too. Oncoming headlights are also an issue.

Nighttime testing will be critical.


>Cameras are worse at night. They need to be more sensitive which dramatically increases their noise.

This is solved by having multiple sensors. Nighttime is really not a problem, especially given broad-spectrum cameras and LIDAR.


It could just be that night testing doesn't look as good in a blog post.


Waymo has been testing in other cities, including ones with snow, for years.

One example: https://arstechnica.com/cars/2017/10/waymo-starts-testing-in...


These speech recognition systems don't really work that well when there are a lot of speakers present, talking over one another. Are these researchers even trying to revolutionize speech recognition? It seems their only goal is to make speech recognition work with one speaker in a silent room and call it a day.


What does that have to do with self driving cars? These are meant to be used on roads in varied conditions. Your comment isn’t applicable.


Some conditions don't exist in many places. Someone can have a self-driving car and use it for life in Arizona and it doesn't matter that it wouldn't work in Alaska. It's still progress and still useful to a subset of people. Hell, even a car that works 9 months out of the year in the Northeast would be useful. Don't see too many bikes out when it's -5 degrees but no one is claiming bikes aren't useful.


To be fair I have seen a large number of people say that bikes aren't useful because they can't perform a supercommute or can't be used when it's below 0.


I think the safety margins for a smart speaker and a car are two entirely different things. If your smart speaker fails to work for you, it has no potential to harm me.

I think it's worth putting them into different classes, don't you?


Research and all tech advancements are done in increments, there's no other way about it. I see no point in tackling rough weather conditions when the basics aren't even finished. Of course for the product to actually hit the roads it needs to be held to a very high standard but that's beside the point. You were criticizing their progress, even though we know they don't have a final product yet and they're probably some ways off.

It's like criticizing the Wright Brothers, saying stuff like "meh, they barely were off the ground, who cares"



