Hey Folks 👋
We're talking about cars yet again, because tech is everywhere and everything is now tech.
We humans have a simple way of driving cars. We see the vehicles around us through our eyes. Our brains help us make sense of that sight, and we use our hands and feet to take action.
This is almost second nature to many of us, but for machines, the story is quite different.
If you're in one of the few cities around the world experimenting with self-driving cars, you may have wondered: "How do these cars actually see and learn to drive?"
This article is for exactly that curious person who wants to know a little more about the driverless car gliding past without a real driver.
This article is the second in a three-part series on SK NEXUS covering the self-driving industry.
Later in this series, we'll discuss Apple's secret car project, which the company worked on for around a decade, and the crash that led to Cruise AI's suspension and exit from the self-driving race.
What is LiDAR?
We went over a brief intro to LiDAR in the previous article but I’ll still go over the basics here.
The term LiDAR stands for ‘Light Detection and Ranging’. These are radar-like sensors that use light, with the main purpose of recreating a digital map of their surroundings.
Inside a car, LiDAR systems fire rapid pulses of infrared laser light (sometimes hundreds of thousands per second) into the environment. These pulses hit objects such as cars, cyclists, buildings, poles, trees, or pedestrians.
Each pulse bounces back to the LiDAR sensor, and the system measures how long the round trip took. Because the speed of light is constant, the sensor converts this return time into a precise distance measurement (halving it to account for the out-and-back trip).
Then software stitches millions of these distance points into a real-time 3D map known as a point cloud. The same software talks to the digitally controlled actuators that drive the vehicle.
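The round-trip arithmetic described here is simple enough to sketch. Below is a minimal, illustrative Python version; the function names and sensor geometry are my own stand-ins, not any vendor's API:

```python
import math

# Speed of light in meters per second (constant, which is what makes
# the time-to-distance conversion reliable)
SPEED_OF_LIGHT = 299_792_458.0

def round_trip_to_distance(round_trip_seconds: float) -> float:
    """Convert a laser pulse's round-trip time into a one-way distance.
    The pulse travels out to the object and back, so we halve the path."""
    return round_trip_seconds * SPEED_OF_LIGHT / 2.0

def pulse_to_point(distance_m: float, azimuth_deg: float, elevation_deg: float):
    """Turn one distance reading plus the beam's firing angles into an
    (x, y, z) point. Millions of such points form the point-cloud map."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# A pulse that returns after roughly 333 nanoseconds hit something ~50 m away
distance = round_trip_to_distance(333.6e-9)
point = pulse_to_point(distance, azimuth_deg=0.0, elevation_deg=0.0)
```

Real sensors run this conversion hundreds of thousands of times per second across many beam angles, which is where the millions of points in the map come from.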
LiDAR is used in plenty of areas beyond self-driving vehicles, including robotics and industrial automation. Governments use LiDAR to map terrain, track landslides, and build flood-risk models.
Militaries use LiDAR for night-time scanning and surveillance. Autonomous drones also carry LiDAR units to map obstacles in their path.
Traditionally, LiDAR has been used by governments, militaries, and very large companies, but it's through self-driving vehicles that we hear about it in the press these days.
The Good and Bad of LiDAR
Some Pros of LiDAR
LiDAR sensors are highly accurate and can sketch detailed maps of the objects surrounding a self-driving car.
LiDAR works at night, unlike cameras, which go blind in pitch darkness.
LiDAR sensors provide added redundancy. Many companies don't use LiDAR alone, but in combination with cameras, radar, and AI software systems.
In self-driving cars especially, LiDAR is one player on a team of hardware and software that together deliver self-driving capabilities.
Some Cons of LiDARs
LiDAR sensors are notorious for being expensive. A single unit costs thousands of dollars, though prices have come down recently thanks to increased manufacturing.
Many LiDAR sensors rely on spinning assemblies, and the added hardware brings bulk and weight that may affect the vehicle's performance to some degree.
Weather conditions like rain, fog, and smog can scatter LiDAR beams, directly degrading the sensor data.
At this moment, LiDAR sensors, especially in self-driving cars, appear hard to scale to consumer-level pricing. The only players using them today are heavily funded by giants like Alphabet (Google's parent company).
Fun fact: Waymo's early prototypes carried LiDAR units worth $75,000; comparable units now cost roughly $1,000, showing how far costs have fallen.
What is Computer Vision?
Computer Vision is a branch of Artificial Intelligence whose goal is for computers to see and interpret the world around us through images and video.
We humans use our eyes to capture imagery of our surroundings. Our eyes have a direct link to our brains, which take that input and build the picture of the world we act on.
Computer Vision has a similar goal. It wants to teach computers how to make sense of raw image/video input from digital cameras and use AI software systems to process that input.
That's why Computer Vision has a direct application in self-driving cars: among its many uses, driving stands out because, done right, it can serve humanity's need to travel while saving lives and time.
In comparison to LiDAR systems, Computer Vision is a set of cameras plus a relatively small computer running sophisticated AI software trained to make decisions the way a human driver would.
This AI software uses deep learning and other breakthroughs in Artificial Intelligence, combined with digitally controlled actuators that control the car.
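To make the camera-to-actuator flow concrete, here's a deliberately toy sketch in Python. Nothing here reflects a real self-driving stack; the labels, thresholds, and actions are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # e.g. "pedestrian", "lane_line", "traffic_light_red"
    distance_est: float  # rough vision-derived distance estimate, in meters

def perceive(frame) -> list[Detection]:
    """Stand-in for a trained deep-learning model. Real systems run neural
    networks over every camera frame, many times per second."""
    ...

def decide(detections: list[Detection]) -> str:
    """Map what the model 'saw' to an action sent to the actuators.
    Real policies are learned and far more nuanced than these rules."""
    for d in detections:
        if d.label == "pedestrian" and d.distance_est < 20.0:
            return "brake"
        if d.label == "traffic_light_red":
            return "stop"
    return "cruise"

# A pedestrian estimated at 12 m ahead should trigger braking
action = decide([Detection("pedestrian", 12.0)])
```

The point of the sketch is the shape of the loop: frames in, detections out, detections mapped to control commands, repeated continuously while the car moves.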
LiDAR sensors can create a 3D map of a vehicle's surroundings, while Computer Vision is limited to a 2D view, much like a human looking through a windscreen.
A significant difference between LiDAR and Computer Vision is cost. Early LiDAR installations were many times more expensive than Computer Vision hardware, though that steep gap is gradually closing.
Part of Computer Vision's cost, however, lies in the company-operated or cloud data centers used to train the AI models these vehicles run on.
Still, that training cost spread across a big fleet leaves Computer Vision less expensive per vehicle than a LiDAR-based system.
This may sound like an analogy, but self-driving companies literally train their AI models on dashcam videos recorded by humans. The machines observe humans driving and try to replicate it as best they can.
Common examples of vision-only companies include Tesla, Comma.ai, and Mobileye.
The Good and Bad of Computer Vision
Having gone through the pros and cons of LiDAR, let's discuss some of the pros and cons of a Computer Vision-only approach.
Pros
Obviously the biggest pro of a Computer Vision approach is significantly cheaper hardware than a LiDAR-focused approach.
Because Computer Vision systems are cheaper, they're much easier to scale to big fleets of cars.
LiDAR sensors can't naturally see colors, so traffic signals and signs are lost on them; Computer Vision systems do see lane markings, traffic signals, road banking, and so on.
Computer Vision systems are closer to how humans learn to drive: they literally observe humans driving across millions of real-world miles (fleet learning).
Cons
Computer Vision systems are more prone to visual confusion. They struggle to see at night, and rain, glare, fog, and poor lighting affect them far more than LiDAR-based systems.
Computer Vision systems have limited depth perception unless paired with other sensors, meaning the actions they take rest on a more limited set of data than LiDAR-based options.
Most Computer Vision Only self-driving systems are limited to only SAE Level 2–3.
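For a sense of how vision systems can recover depth at all: with two cameras, depth can be triangulated from the pixel disparity of the same object between the two views. Here's a minimal sketch of that classic stereo formula, with purely illustrative numbers:

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo triangulation: depth = f * B / d.

    focal_length_px: camera focal length, expressed in pixels
    baseline_m:      distance between the two camera centers, in meters
    disparity_px:    horizontal pixel shift of the same object between views
    """
    if disparity_px <= 0:
        # Zero disparity means the object is effectively at infinity
        # (or the match between views failed)
        raise ValueError("object too far away or not matched between views")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 30 cm baseline, 10 px disparity
depth = stereo_depth(focal_length_px=700.0, baseline_m=0.3, disparity_px=10.0)
```

Note how depth precision degrades with distance: a far-away object produces a tiny disparity, so a one-pixel matching error swings the estimate wildly, which is part of why vision-only depth is considered noisier than a direct LiDAR measurement.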
If you don’t know what SAE Levels are, you may check out the first article in the Self-Driving series where I covered what these levels are and what each of them actually means for you:
The Two Approaches to Winning Self-Driving
When you study the tech stacks of most companies working on self-driving technology, you see a classic red-pill/blue-pill split in their approaches to the problem.
The two biggest players, Tesla and Waymo, embody this difference: the former focuses on Computer Vision only, while Waymo uses LiDAR sensors along with a bunch of other hardware.
The LiDAR-Focused Approach?
On one side of the spectrum, we have companies like Waymo who believe that we need LiDARs along with other expensive sensors to achieve self-driving.
The LiDAR focused approach is significantly more expensive when compared to Computer Vision systems, despite LiDAR prices going down in recent years.
In the early days, automotive-grade LiDAR systems sold for tens of thousands of dollars per sensor.
That's because they had moving parts and had to maintain a clear view while the vehicle moved, which was itself an engineering challenge.
Waymo is among the most successful companies taking the LiDAR-focused approach to self-driving, and many experts agree with its strategy of using Computer Vision too while keeping LiDAR as a redundant backup.
Companies like Waymo that champion the LiDAR approach use Computer Vision in addition to LiDAR, radar, geofencing, GPS, and other sensors.
This is how Waymo has gotten close to SAE Level 4, but that's not without its own challenges.
Many of the big proponents of LiDARs are companies that are purely self-driving companies with big backers to fund them. It’s Google’s parent company Alphabet in Waymo’s case.
The first challenge of a LiDAR-focused self-driving approach is cost. It's rumored that a single Waymo car costs somewhere between $150,000 and $300,000.
That's why some experts believe these companies are just burning cash waiting for a break-even moment that may never come.
The second challenge for this approach is limited geography. Currently, services like Waymo that have reached Level 3 or 4 are limited to just a few cities.
And that's by design: they rely on geofencing and other technologies that aren't as scalable across the world as Computer Vision could be.
The Computer Vision Only Approach?
On the other side of the spectrum are companies trying to solve self-driving using Computer Vision alone. Tesla is the biggest proponent of the vision-only approach.
It needs to be said that I’m in no way qualified to present a definite answer on which approach is better but I can present arguments from both sides.
The interesting thing about the LiDAR vs Vision debate is that there are reasonable voices from both sides of the aisle.
Many companies, most prominently Tesla, believe they can reach full autonomy using just cameras and AI software. Elon Musk has publicly expressed his dislike of using LiDAR for self-driving.
People on Musk's side believe Computer Vision will get really good with time, and once it does, it will scale a lot faster than self-driving cars that cost around $300,000 a unit.
Yet there's also a large pool of folks who believe the vision-only approach is a bad idea. They fear that giving software this level of control, especially when it isn't independently verified, is a risky move.
And their argument holds weight. We've seen with Cruise AI that bad software at this level of control can be life-threatening. Some even doubt that Computer Vision alone will get to SAE Level 3 or 4.
Some of the main debate is about scalability:
Vision only companies are betting on software getting to a point where it can replicate human behavioral driving without using expensive sensors.
LiDAR focused companies are betting on LiDAR hardware getting cheaper with time and other technologies like geofencing scaling to larger populations.
What the Future May Look Like
There are also two answers to this question:
Carcinization: Many experts believe that the industry will eventually merge both schools of thought. This means that in the future, we may see Computer Vision and LiDAR based approaches combined into self-driving solutions
Software will eat LiDAR - This is the hope of companies like Tesla. They want Computer Vision software to become so good that perception algorithms make LiDAR and other expensive sensors supplementary, not essential.
Still, even if self-driving software gets so good that extra sensors aren't strictly needed, we may see regulators push for them anyway, purely for redundancy and added safety.
Personally, I would love to see a world where cars drive themselves and humans focus on better things to do but I still think there is some time before we see cars take over and humans become comfortable with it.
But with the way AI has been advancing these last few years, who knows what's going to happen, or how much control we'll hand over to machines that learn from us.
Still, I'm excited for the new technologies that will come out of these self-driving investments.
In the next and final article of this series, I'll cover some of the biggest exits and controversies in the self-driving scene: why Apple cancelled its self-driving project after a decade, and how companies like Cruise AI crashed themselves out of trust, money, and years of work on driverless cars.
I really want to know how you feel about this technology battle. Are you team Computer Vision, team LiDAR, or do you have another take on the solution to self-driving?
Please do share what you think down in the comments and if you feel there was any value in this post, share it with a nerd who’s into cars or technology.