tl;dr: Autonomous driving normally uses a whole host of different kinds of sensors. Musk said “NO, WE WILL ONLY USE VISION CAMERA SENSORS.” And that doesn’t work.
Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can’t.
We also use way more than just our eyes to navigate. We have accelerometers (ear canals), pressure sensors (touch), and Doppler sensors (ears) to augment how we get around. It was a fool’s errand to try to figure everything out with cameras alone.
Also you can alter the vision input by moving your head, blocking the sun with your hand etc.
This seems like a classic case of ego from Musk.
What’s worse is that it will be hard to reverse this decision. Tesla is a data and AI company compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, the old data and the new data become hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on a promise to old customers.
Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they’ve promised with the tech they’re using today.
Old customers’ situation won’t change, and it would only get better going forward.
I don’t see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are there, then the additional sensors and the data those sensors collect can actually help train the processing of the visual-only data, right?
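That’s basically how self-supervised depth training works: the new sensor’s readings become training labels for the camera-only model, so even old camera-only fleet data stays usable at inference time. Here’s a toy sketch of the idea (all numbers and the pinhole-style relation `distance ≈ k / pixel_height` are made up for illustration, not Tesla’s actual pipeline):

```python
import numpy as np

# Toy sketch (hypothetical numbers): measurements from a newly added sensor
# (say, radar range) act as training labels for a vision-only estimator.
# Assumed relation: object distance d ≈ k / apparent pixel height h.
rng = np.random.default_rng(0)

true_k = 1500.0                                   # made-up camera constant
pixel_heights = rng.uniform(20, 200, size=500)    # camera-only feature
radar_distances = true_k / pixel_heights          # new sensor supplies labels
radar_distances += rng.normal(0, 0.5, size=500)   # plus sensor noise

# Fit the vision model's parameter k from radar-supervised pairs
# via least squares on d = k * (1/h).
x = 1.0 / pixel_heights
k_est = (x @ radar_distances) / (x @ x)

# The fitted model now predicts distance from camera pixels alone --
# including on fleet data collected before the new sensor existed.
def vision_only_distance(h):
    return k_est / h

print(k_est)  # should land close to 1500
```

The point of the sketch: the radar is only needed at training time. Once `k_est` is fit, `vision_only_distance` runs on cameras alone, which is why adding sensors wouldn’t orphan the existing vision dataset.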
How do we prove we’re not robots? Fucking select the pictures with traffic lights or buses, right? How was this allowed?
“Honey, the car ordered itself new tires again!”
He’s such a fucking moron