Along with jetpacks, robot butlers and hover-boards, the concept of the driverless car seemed like a techno-utopian dream envisioned only by science fiction novelists. This dream, however, is fast becoming a reality: after two and a half years of trials, in early 2020 a driverless electric Nissan Leaf completed a 230-mile trip from Bedfordshire to Sunderland. The success of this project, the UK’s longest autonomous car journey, marks a “huge step towards the roll-out of driverless cars on UK streets”, according to Business Minister Nadhim Zahawi. The modified self-driving Leaf, developed for the HumanDrive project, uses laser scanners to search for obstacles, a detector on the bonnet to monitor vehicles in front and cameras to recognise speed limits and traffic signals.
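
To put the idea in concrete terms, the short sketch below is a purely hypothetical illustration – not code from the HumanDrive project – of how readings from the three kinds of sensor just described might be combined into a single driving decision. Every name, type and threshold in it is invented for illustration only.

```python
# Hypothetical sketch only: a toy illustration of combining the three inputs
# described above (laser scanners for obstacles, a forward-facing detector for
# the vehicle ahead, cameras for speed limits and signals) into one decision.
# None of these names or thresholds come from the HumanDrive project itself.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    obstacle_distance_m: float   # nearest obstacle reported by the laser scanners
    lead_vehicle_gap_m: float    # gap to the vehicle in front, from the bonnet detector
    speed_limit_mph: int         # current limit, read by the cameras
    signal_is_red: bool          # traffic signal state, read by the cameras

def choose_target_speed(snapshot: SensorSnapshot, current_speed_mph: float) -> float:
    """Return a target speed in mph for the next control cycle."""
    if snapshot.signal_is_red or snapshot.obstacle_distance_m < 5.0:
        return 0.0                                   # stop for a red light or a close obstacle
    if snapshot.lead_vehicle_gap_m < 20.0:
        # ease off when following another vehicle closely
        return min(current_speed_mph, float(snapshot.speed_limit_mph)) * 0.8
    return float(snapshot.speed_limit_mph)           # otherwise cruise at the posted limit

# Example: a clear road with a 60 mph limit
print(choose_target_speed(SensorSnapshot(80.0, 50.0, 60, False), 55.0))  # -> 60.0
```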

Following on from Nissan’s trial, in August 2020 the government launched a consultation on an automated system capable of taking vehicle control to make driving safer and easier. This technology is designed to enable drivers – for the first time ever – to delegate the task of driving to the vehicle. The government is seeking views from the industry on the role of the driver and proposed rules on the use of this system within the current legal framework.

These landmark achievements coincide with the government’s plan to ban the sale of new petrol, diesel or hybrid cars from 2035 at the latest. With members of the UK’s Intelligent Transport Society advising the Law Commission that a ‘Digital Highway Code’ is needed, it is clear that the introduction of driverless cars will have monumental legal repercussions.

Reconsidering the definition of a ‘driver’

Firstly, then, lawmakers will have to reconsider what it means to be a ‘driver’ as defined by statute. Under the provisions of the Road Traffic Act 1988 (RTA 1988), a ‘driver’ is interpreted to be “a steersman of a motor vehicle… as well as any other person engaged in the driving of the vehicle.” It was made clear in Pinner v Everett (1977) 64 Cr App R 160 that the defendant’s vehicle was still being ‘driven’ despite the ‘driver’ being seated behind the wheel of their stationary car. What if, however, I get into the back seat of my driverless car and from there on the journey is made without any intervention from me? Would I still be committing a criminal offence if I used my mobile phone?

In its preliminary Consultation Paper, the Law Commission proposed a new category of user, which it has coined the ‘user-in-charge’. “Where a vehicle is listed as capable of safely driving itself, and the automated driving system is correctly engaged,” the human user would not be a driver. As such, a ‘user-in-charge’ would not “generally be liable for criminal offences arising out of the driving task.” The Law Commission tentatively proposes, however, that a user-in-charge, while not responsible for the dynamic driving task, “must be capable of taking over the driving task in planned circumstances or after the vehicle has achieved a minimal risk condition. They must therefore be qualified and fit to drive.” If the HumanDrive project suggests, though, that it is possible to develop a fully autonomous vehicle, it must be asked why the human passenger needs to be fit to drive when they are never envisioned to be in control of the vehicle.

The very ability of the user to take over the driving task from the autonomous system poses a unique legal dilemma. As soon as the user has actual control of the vehicle, rather than the automated driving system, it is likely that they become a ‘driver’ at that point and that all liabilities would transfer to them. Judges have used the concept of control in earlier cases to decide who ought to be liable in the event of a road traffic accident. In Langman v Valentine [1952] 2 All ER 803, a driving instructor was considered to be a ‘driver’ because he had control of the vehicle, with one hand on the steering wheel and one on the handbrake. Similarly, in Nettleship v Weston [1971] 2 QB 691, though primary liability was imposed on the learner driver, the Court of Appeal considered that the instructor was contributorily negligent because they were partially in control of the vehicle.
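
To illustrate why the precise moment of handover matters, here is a purely hypothetical sketch – none of it drawn from the Law Commission’s proposals or from any real vehicle software – of a control log that records the instant the automated system hands control to the human user: the instant at which, on the reasoning above, that person would become the ‘driver’.

```python
# Hypothetical sketch only: a toy state machine showing why the exact moment of
# handover matters. Whoever holds control when an incident occurs is treated as
# the "driver" for liability purposes, so the timestamps recorded here stand in
# for the kind of evidence a court might need. All names are illustrative.
from datetime import datetime, timezone
from enum import Enum

class Controller(Enum):
    AUTOMATED_SYSTEM = "automated driving system"
    HUMAN_USER = "user-in-charge (now the driver)"

class ControlLog:
    def __init__(self) -> None:
        # The journey starts with the automated system in control.
        self.controller = Controller.AUTOMATED_SYSTEM
        self.events: list[tuple[datetime, Controller]] = [
            (datetime.now(timezone.utc), self.controller)
        ]

    def human_takes_over(self) -> None:
        # The instant this runs, the human has "actual control" and, on the
        # article's reasoning, becomes the driver in law.
        self.controller = Controller.HUMAN_USER
        self.events.append((datetime.now(timezone.utc), self.controller))

    def controller_at(self, incident_time: datetime) -> Controller:
        """Return who held control at the time an incident occurred."""
        holder = self.events[0][1]
        for timestamp, controller in self.events:
            if timestamp <= incident_time:
                holder = controller
        return holder

log = ControlLog()
log.human_takes_over()
print(log.controller_at(datetime.now(timezone.utc)))  # -> Controller.HUMAN_USER
```

On this toy model, establishing who the ‘driver’ was at the time of an accident reduces to reading back the log – exactly the kind of evidential record a court would need.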

Should it be possible for somebody to take over from the driverless system? Should we not, instead, rely on the driverless system and thus, if a road traffic accident occurs, liability would rest with whoever is in control of the system and not with the driver themselves?

Simon Beard, a researcher at the Centre for the Study of Existential Risk, explains:

“Modern AI, based on machine learning, is all about continually improving decision making to achieve the desired result (like safe and efficient transport) and avoid anything undesirable (like deaths). This can lead AI systems to function in unexpected ways, both brilliant and bizarre, and means that they will forever be learning from their mistakes and working out how to do better. What driver can say the same thing?”

Summary

There are some ideas that at first seem simple, but which become more complex and profound the more they are explored. Trying to write new legislation on automated vehicles is likely to be akin to navigating a minefield. Considering just this one ramification in isolation is likely to spark debate, and it is only the tip of the iceberg. Such ethical deliberations will need to happen within the legal field – and soon – given that science fiction is fast becoming a reality.