At the end of the day, we have to look at the many ways the car has to get information. Visually there are simple cameras, multi-spectrum cameras, OCR technologies, etc. to help it "see" what is around it. With tech like that, something like fog becomes far less of a problem. When you add an advanced AI into the mix, you have the ability to make a car far safer than if a human were behind the wheel. The problem comes when the AI is required to make a decision that has potential impacts on the vehicles around it. It's the classic trolley problem. How do you teach an AI to rationalize the potential loss of life?
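To make the redundancy point concrete, here's a toy sketch (not any real AV stack; the sensor names and numbers are made up for illustration) of why a multi-sensor car shrugs off fog: even if the camera's confidence collapses, the combined detection probability stays high as long as one other sensor still sees the obstacle.

```python
# Toy illustration of multi-sensor redundancy, NOT a real AV pipeline.
# Assumes (unrealistically) that sensor detections are independent.

def fuse_detections(confidences):
    """Combine per-sensor detection probabilities.

    P(missed by all sensors) = product of (1 - p_i), so
    P(detected by at least one) = 1 - that product.
    """
    p_all_miss = 1.0
    for p in confidences:
        p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

# Heavy fog: the camera is nearly blind, but radar and lidar
# (hypothetical confidence values) still carry the detection.
foggy = {"camera": 0.05, "radar": 0.90, "lidar": 0.70}
print(round(fuse_detections(foggy.values()), 3))  # → 0.972
```

The point of the sketch is just that the fused probability (≈0.97 here) stays far above the blinded camera's 0.05, which is the intuition behind "fog is far less of a problem" for a car with several sensing modalities.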
Autonomous vehicles are something we won't escape; in fact, the tech is advancing rapidly.
This is not a problem at all, because if you ask three humans, you'll get four opinions on how to handle such a situation! No matter how an autonomous system answers this question, the answer will always be unsatisfactory, because we humans cannot answer the question ourselves!
Anyway, it's a constructed situation that will likely never occur to anyone in our lifetime; so even if an autonomous system won't react the way we'd like it to - whatever that is - the sheer number of avoided accidents, injuries, and deaths in other situations will easily outweigh it. Besides, only an autonomous system would even be fast enough to estimate the probable outcome: a human will almost always have to make an uninformed decision within a fraction of a second. That's why nobody thinks of blaming a human who made a wrong decision. Why blame an autonomous system that makes a different decision based on facts and calculations? Why should we even want it not to do that?
Or, to put it another way: the chance of dying in such a situation because an autonomous system decided to sacrifice you is much lower than the chance of dying from pretty much anything else we decide on every day, e.g. taking a plane that might crash, or simply crossing a street at the wrong moment. Nobody will say, afterwards, that it was your fault because you made the 'wrong' decision.
You can't blame the autonomous system for making a decision, as long as it was a reasonable one at the time it was made. If everyone used one, I'd feel much safer on the streets. The off-chance of my autonomous system killing me to save other people's lives is acceptable under the premise that the autonomous systems in other cars will do the same to protect me.