Then something else was the bottleneck at the time. It is easy to show that some sensors, in some situations, can perceive things that other sensors cannot; in those situations the additional sensors are a crucial first step. My guess is that the real bottleneck is a shitty reliance on statistical machine learning with a long tail of unhandled edge cases: each case is individually uncommon, but in aggregate they add up to something that matters a lot.
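The long-tail point can be made concrete with back-of-envelope arithmetic. A sketch (every number below is invented for illustration, not taken from any real system):

```python
# Hypothetical numbers: many individually rare, unhandled edge cases
# can still dominate the overall failure rate.
num_edge_cases = 50_000   # assumed count of distinct rare scenarios
per_case_rate = 1e-8      # assumed chance per mile of hitting any one of them

# Treating the cases as roughly disjoint, their rates simply add up.
aggregate_rate = num_edge_cases * per_case_rate
miles_between_failures = 1 / aggregate_rate

print(f"aggregate failure rate per mile: {aggregate_rate:.1e}")
print(f"expected miles between failures: {miles_between_failures:,.0f}")
```

With these made-up numbers, each scenario is a one-in-a-hundred-million-miles event, yet together they produce a failure every couple of thousand miles.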