As artificial intelligence and machine learning are increasingly commercialized, they're going to begin challenging our legal and moral notions of agency, blame, and responsibility.
Google’s self-driving car had a very minor accident with a bus, and Reuters had this to report about it:
Alphabet Inc’s (GOOGL.O) Google said on Monday it bears “some responsibility” after one of its self-driving cars struck a municipal bus in a minor crash earlier this month.
The crash may be the first case of one of its autonomous cars hitting another vehicle and the fault of the self-driving car. The Mountain View, California-based Internet search leader said it made changes to its software after the crash to avoid future incidents.
Some stray observations:
- I’m surprised Google owned up to even “some responsibility”, as I would have thought they were eager to shed all responsibility early in the product’s existence. While in the long run there will be fewer accidents with robot drivers, I’m uncertain that the first batch will always be so fortunate.
- I imagine this is 100% the fault of the bus driver, and for a purely unfair reason: as a bike rider, I see how bus drivers in New York City act on the road, and it isn’t always friendly.