Self-driving cars are trickling onto the roadways, seemingly with a nod from government agencies. In fact, their introduction into traffic will likely accelerate thanks to recent guidelines from the U.S. Department of Transportation, which clarify the rules and regulations governing the vehicles, largely in the cars’ favor.

Self-driving cars are not currently illegal in any U.S. state, but neither are they expressly legal in all of them. And among the states where the cars are legal, some governments impose conditions, such as requiring the oversight of a human operator. Underlying the excitement over these technological developments is a public wariness about the safety of driverless vehicles, and that wariness may be warranted.

There have already been several crashes involving driverless cars, some of them fatal. Beyond the obvious concern about the safety hazards these vehicles pose lies an even murkier question underlying safety itself: who is to blame in the event of an accident involving a driverless car?

In theory, driverless cars are or could become safer than human-driven cars because they eliminate human error. Indeed, many technological advancements have already made roads safer for pedestrians. What is certain, however, is that the engineering behind driverless cars must account for an immense number of variables, both internal and external, for the cars to operate safely; whether programmers will succeed on that front remains to be seen.

What Is the Moral Machine?

The Moral Machine is a project created at MIT that collects public opinions about how human morals should inform machine programming. There is an obvious concern in programming a machine with a preferred answer to ethical dilemmas that humans themselves have never definitively resolved. Yet machines, at their most basic level, operate in a binary fashion, so executive decisions will likely need to be made about which option a driverless car defers to when it faces a roadway problem with no clearly preferable solution. Humans have the benefit of subjectivity and emotional nuance when questioned about their decisions in an accident; driverless cars do not.
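To see why such a default must be chosen in advance, consider a deliberately simplified sketch. Everything here is hypothetical: the function, the maneuvers, and the harm scores are invented for illustration and do not reflect any real autonomous-vehicle system.

```python
# A deliberately simplified, hypothetical sketch. The names, scores, and the
# decision rule are invented for illustration; no real autonomous-vehicle
# system is this simple.

from enum import Enum

class Maneuver(Enum):
    STAY_COURSE = "stay on course"
    SWERVE = "swerve"

def choose_maneuver(harm_if_stay: float, harm_if_swerve: float) -> Maneuver:
    """Return whichever maneuver has the lower estimated harm score."""
    if harm_if_stay < harm_if_swerve:
        return Maneuver.STAY_COURSE
    if harm_if_swerve < harm_if_stay:
        return Maneuver.SWERVE
    # Tie: neither option is clearly preferable, yet the function must
    # still return something. That default is the "executive decision"
    # a programmer has to make in advance; here, the car stays on course.
    return Maneuver.STAY_COURSE
```

The point is the final branch: a human driver can freeze, equivocate, or act on instinct, but software must ship with some answer already written down.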

Driverless Car Ethics and the Trolley Problem

The moral problem of self-driving cars is essentially an old thought experiment brought to life. “The Trolley Problem,” introduced by the British philosopher Philippa Foot, is the prototypical example.

The trolley problem presents a hypothetical scenario in which you happen upon an out-of-control trolley that, on its current trajectory, will run over five people. However, you are standing near a switch or lever that, when activated, will divert the trolley onto a different track where it will kill only one person instead. Many variations of the experiment exist, some introducing different demographics to force the subject to evaluate which types of people are more valuable.

These classic moral dilemmas are particularly relevant to self-driving cars because, in a sense, a self-driving car could well find itself playing both roles: the runaway trolley and the passerby standing next to the switch. For example, what should the car be programmed to do if a pedestrian runs into the street, but avoiding them would require swerving into another vehicle? Such concerns face growing scrutiny from a skeptical public as well as from the companies trying to improve self-driving vehicles.
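As a rough illustration of that dual role, the sketch below frames the choice as picking among candidate maneuvers by predicted harm. The maneuver names and scores are made up for this example; a real planner would involve far more than a lookup table.

```python
# A hypothetical illustration of the car as both trolley and bystander:
# the planner is simultaneously the moving hazard and the agent choosing
# the track. All maneuver names and harm scores are invented.

def pick_path(predicted_harm: dict[str, float]) -> str:
    """Choose the candidate maneuver with the lowest predicted harm."""
    return min(predicted_harm, key=predicted_harm.get)

# The pedestrian-versus-swerve scenario from the text, with made-up scores:
scenario = {
    "brake_in_lane": 0.9,  # likely strikes the pedestrian
    "swerve_left": 0.6,    # likely strikes the adjacent vehicle
}

print(pick_path(scenario))  # -> swerve_left
```

Whoever assigns those scores is, in effect, answering the trolley problem before the car ever leaves the lot.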

Liability Issues for Self-Driving Cars

Many innovations create legal discord through uncertainty and lack of precedent, but in the case of driverless cars, that uncertainty combined with the well-known deadly potential of vehicles is fueling particularly complex debates. Liability for traffic incidents involving driverless cars is a prominent facet of these issues. A car itself cannot be held accountable for unlawful behavior (or, at least, doing so would be pointless).

Who should be liable for accidents or unlawful behavior involving a driverless car: the owner of the car, passengers in the car, the company that produced it, the insurance company, or perhaps even other parties involved in the incident?

Some laws and precedents have already been established, although the issue is clearly shaping up to be a long volley between the courts and the legislatures. Moreover, most of those precedents rest on laws that predate driverless automotive technology. For example, John Villasenor, a professor at U.C.L.A., argues that the strong precedents of product liability law have factored, and will likely continue to factor, heavily into liability disputes over driverless vehicles.

How the chips fall on self-driving cars and liability will heavily shape these vehicles’ future in the market. A citizenry already suspicious of the repercussions for public safety will be hard-pressed to accept the cars if liability costs fall on private citizens, and the market will suffer if those costs make the cars more expensive to own.

Tort Liability for Self-Driving Car Accidents

Tort law governs civil wrongs between private parties, with remedies that usually involve compensating the injured party (if the court determines there was indeed an injured party). As a matter of negligence, car accidents usually fall under tort law, because they typically involve a private individual injuring another in a non-criminal manner (with some exceptions, such as many incidents involving pedestrians).

The tortfeasor, or the party who caused the injury, is said to have “tort liability,” which essentially means they are responsible for compensating the injured party as the court orders. Thus far, this framework has largely been applied to incidents involving self-driving cars as well. The first step in determining legal accountability is investigating which car or party was primarily at fault.

Product Liability for Self-Driving Cars

Although much of the legal landscape surrounding driverless cars draws on the same precedents that govern conventional vehicles, there are significant differences. As noted above, product liability precedents will likely also play a large role in the laws and regulations surrounding driverless cars. In all likelihood, whether an incident involving a self-driving car is scrutinized through the lens of product liability or tort liability will ultimately depend on whether the incident resulted from an automotive malfunction; a malfunction would push the matter toward a product liability case against the manufacturer.

Ride-Hailing Liability

Ride-hailing services are leading the charge in the use and real-world testing of driverless vehicles. Uber, in particular, has been incorporating driverless vehicles into its business model. This trend presents yet another set of challenges, however, because of the intricacies of owner/contractor liability. Layered on top of the existing legal questions about driverless cars, questions about liability for an accident involving a driverless Uber become all the more opaque.

I am the founding partner of Brauns Law Accident Injury Lawyers, PC. I represent only plaintiffs and handle only personal injury claims, which allows me to focus solely on personal injury litigation and devote myself to helping injured Georgia residents recover fair compensation for their damages.