How is that handled nowadays with people instead of robots? We make our decision, voluntarily or involuntarily, then try to explain why it happened the way it did, and a court deems us guilty or not. I'd imagine something similar would happen here, with the added advantage that the software can be patched so it avoids that situation in the future, whereas human beings seldom learn from their traffic incidents.
As a programmer yourself, you know that, no matter how complex the code for such a situation is and no matter how many variables it considers, in the end it boils down to a series of "if X, then do Y" rules. What I mean is, a lot of things go through a human's head in such a situation; it's unpredictable. Most likely you'd act impulsively rather than rationally, because the required reaction is too fast for our brain to process logically. A court will always take that into account, and what it will judge is whether your action was caused, for example, by speeding, or by being drunk and therefore taking longer to react, and so on.
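To make the "if X, then do Y" point concrete, here's a toy sketch (all names and thresholds are made up for illustration): however many sensors and variables feed into it, the bottom layer is still a fixed mapping from observed conditions to actions that a programmer wrote down in advance.

```python
def choose_action(obstacle_ahead: bool, can_change_lane: bool, speed_kmh: float) -> str:
    """Toy rule table. A real planner weighs far more variables,
    but the structure is still condition -> predetermined action."""
    if not obstacle_ahead:
        return "continue"
    if can_change_lane:
        return "change_lane"
    if speed_kmh > 30:
        return "emergency_brake"
    return "brake"
```

Unlike a panicking human, the same inputs always produce the same output, which is exactly why a court could ask who decided the rule, not who reacted.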
A self-driving car, being a car driven by a computer, works totally differently. First off, we can only assume the car would not break any traffic rule that could lead to the accident, so I believe the driver could only be accused if he knew, or should have known, that the car had some damage or malfunction that could lead to such a consequence, or the car maker if it was caused by a defect or error. Then, when faced with an imminent accident, the AI would immediately analyse the situation, the multiple manoeuvres it could perform to avoid it or minimise the damage, and the predictable consequences of each of them, in a way a human never could, since it can instantly calculate stopping distances, probabilities and so on. Then, at least until quantum computing becomes available to consumers, the software has to tell the car what to do. You can program it to handle thousands of possible outcomes depending on the situation, but the programmer has to "tell" the AI what it should do in each specific scenario.
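The stopping-distance part is simple physics a computer evaluates instantly. A rough sketch (the friction coefficient and reaction times are assumed ballpark figures, not from any real vehicle spec): total stopping distance is reaction distance plus braking distance, v²/(2·μ·g).

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh: float, reaction_time_s: float, friction: float = 0.7) -> float:
    """Reaction distance (v * t) plus braking distance (v^2 / (2 * mu * g))."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_time_s + v ** 2 / (2 * friction * G)

# A human needs roughly 1-1.5 s just to react; a computer, a tiny fraction of that.
human_dist = stopping_distance(50, 1.5)
computer_dist = stopping_distance(50, 0.1)
```

At 50 km/h, shaving the reaction time cuts many metres off the total, which is the kind of margin the AI can account for when weighing manoeuvres.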
Now, suppose, simply put, that there's an occupant, there's a guy in the middle of the road (or a car crossing the street with a T-bone impact imminent, etc.), we assume the AI car can't be at fault since it's programmed to follow all the traffic rules at all times, and the only way to avoid the accident is by risking the occupant's life. What would it do? Saying "it does what it considers the best solution" isn't a viable answer, for the reasons I stated; neither is "act randomly" (that would be the most dangerous option IMO). The code has to lead the software to a specific solution. Basically, it can't work the way we humans do.
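Here's a hypothetical sketch of why "the code has to lead the software to a solution" is unavoidable. All the maneuver names, risk numbers and weights below are invented for illustration, but the structure is the point: the car just picks whichever option scores lowest under weights someone wrote down at coding time. Change the weights and you change who gets protected; that ethical choice is made by the programmer long before the crash, not by the car in the moment.

```python
# Invented example risks: (probability of harming occupant, probability of harming pedestrian)
MANEUVERS = {
    "brake_straight": (0.1, 0.6),
    "swerve_left":    (0.7, 0.1),
    "swerve_right":   (0.4, 0.4),
}

# Assumed weights -- this is the ethically loaded part the programmer must decide.
OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def pick_maneuver(options: dict) -> str:
    """Deterministically pick the maneuver with the lowest weighted expected harm."""
    def expected_harm(risks):
        occupant_risk, pedestrian_risk = risks
        return OCCUPANT_WEIGHT * occupant_risk + PEDESTRIAN_WEIGHT * pedestrian_risk
    return min(options, key=lambda name: expected_harm(options[name]))
```

Nothing here "considers" or "randomly acts"; it evaluates a formula whose terms a human chose in advance.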
As for limiting city centres/highways to self-driving only, I don't see why they'd have to do that. From a technology standpoint there's no reason to, since self-driving cars will behave like regular drivers, or even better, since they'll avoid most mistakes and infractions, and they will of course account for non-self-driving cars, given there will be at least a decade of overlap between them.
I didn't mean that as something that would happen in the next decade or two, but in the long term. If self-driving cars became widely available and reliable, there's just no reason not to eventually take that step. If all the cars in cities and on highways were self-driven, traffic as a whole could move quicker, and traffic lights and other city inconveniences would no longer be necessary. As I suggested, eventually all cars could be connected on a network, sharing and coordinating their movements so they could pass through junctions without having to stop. Roads in such places also wouldn't have to be designed around human comprehension: today an architect cannot design a road full of confusing junctions that, although more efficient for a particular case, would make it hard for a human to pick the correct road. Basically, I believe that removing human error from roads where high efficiency is a constant need, once the market has advanced to the point where such a thing becomes possible, would be the best solution.
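A minimal sketch of how networked cars could share a junction without stopping, under an assumed reservation scheme (this is one hypothetical coordination protocol, not how any real system works): instead of a traffic light, each car requests a time slot to cross, and a coordinator grants the earliest slot that doesn't overlap an existing reservation.

```python
def grant_slot(reservations: list, arrival: float, crossing_time: float) -> tuple:
    """Grant the earliest non-overlapping crossing window at or after `arrival`.

    `reservations` is a mutable list of (start, end) windows already granted.
    """
    start = arrival
    for (r_start, r_end) in sorted(reservations):
        if start + crossing_time <= r_start:
            break                      # gap before this reservation fits us
        start = max(start, r_end)      # otherwise wait until that slot frees up
    slot = (start, start + crossing_time)
    reservations.append(slot)
    return slot
```

A car arriving while the junction is busy is simply told a slightly later window, so it adjusts speed on approach instead of braking to a halt, which is the efficiency gain over human-readable signals.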