Adding radars, LiDARs, and other sensors to cameras does not meaningfully advance us toward full self-driving

Saturday, January 17th, 2026

Adding radars, LiDARs, and other sensors to cameras does not meaningfully advance us toward full self-driving, Genma_Jp argues:

Here are the six main reasons:

Marginal information gain: RADAR and LiDAR primarily provide depth and relative velocity, data that modern neural networks can already derive with sufficient accuracy from camera images alone, especially since depth-precision requirements relax at longer ranges.
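One rough way to see that scaling: under the classical stereo error model, camera depth error grows roughly with the square of range, but so does the time available to react. The Python sketch below is my own illustration, not part of the original argument, and the focal length, baseline, disparity error, and closing speed in it are assumed values.

```python
# Illustrative only: how camera depth error and available reaction time
# both grow with range. All parameters below are assumptions, not from the post.

f_px = 2000.0       # focal length in pixels (assumed)
baseline_m = 0.3    # stereo baseline in metres (assumed)
disp_err_px = 0.25  # sub-pixel disparity error (assumed)
closing_mps = 30.0  # closing speed toward the object, ~108 km/h (assumed)

for z in (10.0, 50.0, 100.0, 200.0):
    # Classical stereo error model: depth error grows roughly with z^2.
    depth_err = (z ** 2) * disp_err_px / (f_px * baseline_m)
    # Time until the object is reached at the assumed closing speed.
    time_to_reach = z / closing_mps
    print(f"range {z:5.0f} m: depth error ~{depth_err:4.1f} m, "
          f"time to react ~{time_to_reach:4.1f} s")
```

The absolute error at 200 m is far larger than at 10 m, but the vehicle also has many more seconds in which to refine the estimate and respond, which is the sense in which the precision requirement relaxes.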

LiDAR’s fundamental weaknesses: It performs poorly in rain, fog, and on reflective surfaces (blooming), produces sparse and noisy returns requiring fragile clustering, and lacks the angular resolution for reliable classification at distance.
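To make the clustering-fragility point concrete, here is a toy sketch of my own (not from the post) that runs DBSCAN over simulated returns; the point counts, spread, and DBSCAN parameters are assumptions chosen only to show how a distant object with a handful of returns can fall below the clusterer's minimum-points threshold and be dropped as noise.

```python
# Toy illustration: sparse returns from a distant object can fail to form
# a cluster at all. All numbers here are assumed for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# A nearby car returns many points; a distant one only a handful.
near_car = rng.normal(loc=[20.0, 0.0], scale=0.3, size=(60, 2))
far_car = rng.normal(loc=[120.0, 2.0], scale=0.3, size=(3, 2))
clutter = rng.uniform(low=[0, -15], high=[140, 15], size=(40, 2))

points = np.vstack([near_car, far_car, clutter])
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)

# Points labelled -1 are treated as noise; with only three returns the
# distant car never reaches min_samples and is simply dropped.
print("far-car labels:", labels[60:63])
print("clusters found:", sorted(set(labels) - {-1}))
```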

RADAR’s practical limitations: Despite better weather penetration, it delivers extremely sparse detections, suffers from clustering and classification challenges, and often masks weaker objects behind stronger reflectors — particularly problematic for static infrastructure in low-speed scenarios.
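The masking effect has a well-known mechanism in cell-averaging CFAR detection: a strong reflector inside the training window inflates the local noise estimate and raises the threshold over its neighbours. The sketch below is mine, with assumed window sizes, threshold factor, and signal levels.

```python
# Toy cell-averaging CFAR over a 1-D range profile. All magnitudes,
# window sizes, and the threshold factor are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
power = rng.exponential(scale=1.0, size=200)   # noise floor
power[80] += 500.0    # strong reflector (e.g. a metal barrier)
power[85] += 12.0     # weak target close behind it
power[150] += 12.0    # identical weak target in the clear

train, guard, alpha = 10, 2, 8.0
detections = []
for i in range(train + guard, len(power) - train - guard):
    # Average the training cells on both sides, skipping the guard cells.
    left = power[i - guard - train : i - guard]
    right = power[i + guard + 1 : i + guard + 1 + train]
    noise = np.mean(np.concatenate([left, right]))
    if power[i] > alpha * noise:
        detections.append(i)

# The weak target at 150 is detected; the one at 85 sits inside the
# strong reflector's training window, so its threshold is inflated.
print("detections:", detections)
```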

Irreplaceable role of vision: RADAR and LiDAR cannot detect critical semantic information — traffic signs, lights, lane markings, or pedestrian intent cues. Stellar computer vision is mandatory anyway; the other sensors cannot compensate for its absence.

Cameras are robust enough: Modern imagers match or exceed human-eye performance, and practical mitigations (wipers, airflow) handle issues like raindrops. In truly degraded visibility, the safe response is to slow down — something an AV can do systematically, just as humans do.
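As one way to make "slow down systematically" concrete, a simple policy caps speed so that reaction distance plus braking distance stays within the estimated visibility range. The function below is my own sketch; the reaction time and deceleration are assumed values.

```python
# Sketch of a visibility-limited speed cap: pick the highest speed whose
# stopping distance fits inside the estimated visibility range.
# stopping_distance(v) = v * t_react + v^2 / (2 * a_brake)
import math

def max_safe_speed(visibility_m: float,
                   t_react: float = 0.5,          # assumed system latency (s)
                   a_brake: float = 4.0) -> float:  # assumed comfortable decel (m/s^2)
    """Largest v (m/s) with v*t_react + v^2/(2*a_brake) <= visibility_m."""
    # Solve v^2/(2*a_brake) + v*t_react - visibility_m = 0 for the positive root.
    a = 1.0 / (2.0 * a_brake)
    disc = t_react ** 2 + 4.0 * a * visibility_m
    return (-t_react + math.sqrt(disc)) / (2.0 * a)

for vis in (200.0, 80.0, 30.0, 10.0):
    v = max_safe_speed(vis)
    print(f"visibility {vis:5.0f} m -> cap ~{v * 3.6:5.1f} km/h")
```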

Fusion as a crutch: Multi-sensor approaches deliver quick early wins by patching vision weaknesses, but they mask the need for true mastery of computer vision through massive data and compute. Companies end up over-investing in complex fusion logic instead of solving the hard problem.
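For a flavour of what that fusion logic tends to look like, here is a deliberately simplified late-fusion arbitration sketch of my own; every sensor, field, threshold, and rule in it is an assumption, meant only to show how hand-tuned special cases accumulate as each new sensor disagreement surfaces.

```python
# Deliberately simplified late-fusion arbitration. Every field, threshold,
# and rule here is an assumption, meant only to show how special cases pile up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    distance_m: float
    confidence: float

def fused_distance(camera: Optional[Detection],
                   radar: Optional[Detection],
                   heavy_rain: bool,
                   in_tunnel: bool) -> Optional[float]:
    # Rule 1: radar-only ghosts (manhole covers, overhead signs) are common,
    # so a radar detection with no camera confirmation is ignored...
    if camera is None:
        return None
    # Rule 2: ...unless it is raining hard, in which case trust radar more.
    if heavy_rain and radar is not None:
        return radar.distance_m
    # Rule 3: multipath in tunnels degrades radar, so prefer the camera there.
    if in_tunnel:
        return camera.distance_m
    # Rule 4: when both roughly agree, average weighted by confidence.
    if radar is not None and abs(radar.distance_m - camera.distance_m) < 5.0:
        w = camera.confidence / (camera.confidence + radar.confidence)
        return w * camera.distance_m + (1.0 - w) * radar.distance_m
    # Rule 5: on large disagreement, fall back to the nearer estimate.
    if radar is not None:
        return min(camera.distance_m, radar.distance_m)
    return camera.distance_m
```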
