This is why we can't have nice things. The law needs to recognize that autopilots are likely to SAVE more lives than they cost. Why the expectation that they be "foolproof"? A self-driving system just needs to be better than the (totally fallible) human operator MOST of the time. I think the company is liable if there's a known problem that they fail to promptly acknowledge and move to correct. But they shouldn't be liable for huge damages just because a self-driving system isn't 100% successful, provided they can show a lower accident rate per mile than human drivers. In such cases, the damages should be covered by the owner's insurance, which spreads the risk reasonably. If the laws don't work that way, we'll never get serious automation that actually improves the world.
Last edited by WattAJag; 05-21-2019 at 12:46 AM.