The problems of self-driving cars mirror the concerns of autonomous container ships

The problems of self-driving cars mirror the concerns of autonomous container ships (Photo: Rolls-Royce)

At the Autonomous Ship Technology conference in Amsterdam, Stig Peterson, senior scientist at Norway's independent research organization SINTEF, spoke about the lessons the fledgling autonomous ship domain can learn from parallel efforts in the self-driving automotive segment: staying vigilant about the scenarios that could go wrong, and better defining its roadmap to commercial deployment.

Peterson described how safety is a fundamental tenet of automobile development and how it can be achieved through effective barriers. "A typical definition of safety is freedom from unacceptable risk of harm to humans. But now, it also means freedom from harm or damage to property or the environment – the two things that are most relevant to ships," he said.

These barriers can be physical, or they can be computer-defined control systems that take over the work of a human. The latter is especially suited to repetitive tasks like driving, which a computer can perform substantially better because it does not suffer lapses in concentration – a frequent occurrence with humans behind the wheel.
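
As a loose illustration of the second kind of barrier (a minimal sketch of our own, not something presented at the conference; the `Command` type and the speed and steering limits are invented for the example), a software guard layer can sit between the autonomous controller and the actuators and clamp any command that leaves a predefined safety envelope:

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed_mps: float   # requested speed, metres per second
    steer_deg: float   # requested steering/rudder angle, degrees

# Hypothetical safety envelope; real limits would come from the
# vehicle's or vessel's certified operating constraints.
MAX_SPEED_MPS = 10.0
MAX_STEER_DEG = 25.0

def safety_barrier(cmd: Command) -> Command:
    """Clamp any command that leaves the safe envelope.

    The autonomous controller may propose anything; values outside
    the envelope never reach the actuators.
    """
    return Command(
        speed_mps=max(0.0, min(cmd.speed_mps, MAX_SPEED_MPS)),
        steer_deg=max(-MAX_STEER_DEG, min(cmd.steer_deg, MAX_STEER_DEG)),
    )

# An out-of-envelope request is clamped before execution.
print(safety_barrier(Command(speed_mps=14.2, steer_deg=-40.0)))
# Command(speed_mps=10.0, steer_deg=-25.0)
```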

One of the biggest challenges shared by autonomous ships and automobiles is collision avoidance and obstacle detection. Peterson pointed out that it is a problem that threatens the future of semi-autonomous technology: the better the technology performs, the more likely humans are to respond poorly during the few occasions when they are forced to take control of the vehicle.

For instance, if a vehicle's autonomous driving technology frequently encounters scenarios that require humans to take back control, the person in the driver's seat stays vigilant. However, if the self-driving technology is highly evolved and handles most of the driving complexity by itself, that person can become lethargic and inattentive behind the wheel, since they rarely need to take control.

Though this seems a favorable position to be in, closer scrutiny and practical experience have shown the opposite to be true: the fewer the interventions required, the slower the emergency response of the human behind the wheel, leading to accidents caused by sluggish reaction times.
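
To make the trade-off concrete, here is a deliberately crude back-of-the-envelope model (our illustration, not Peterson's data; the exponential shape and all the constants are assumptions) in which a supervisor's expected reaction time drifts upward the longer the automation runs without handing back control:

```python
import math

def reaction_time_s(hours_since_takeover: float,
                    baseline_s: float = 1.0,
                    ceiling_s: float = 6.0,
                    rate_per_hour: float = 0.05) -> float:
    """Toy vigilance-decay model: reaction time rises from a baseline
    toward a ceiling the longer automation runs without requiring an
    intervention. Shape and constants are illustrative only."""
    decay = 1.0 - math.exp(-rate_per_hour * hours_since_takeover)
    return baseline_s + (ceiling_s - baseline_s) * decay

# A supervisor who intervened minutes ago vs. one idle for days:
for h in (0.1, 1.0, 8.0, 40.0):
    print(f"{h:5.1f} h since last takeover -> ~{reaction_time_s(h):.1f} s to react")
```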

The human was at least partially to blame in the 2018 Uber incident that killed a pedestrian in Tempe, Arizona. The accident occurred because the driver did not take evasive action in the split second after the autonomous vehicle handed control back to the human in an attempt to avoid the pedestrian crossing the road.

Post-accident investigation revealed that the self-driving car's emergency braking system had been switched off because it behaved erratically whenever it was enabled. The system was so sensitive to even the smallest obstacles, braking for a leaf blowing in front of the car or a piece of paper flying across in the wind, that it was essentially unreliable.

With the emergency braking system switched off, however, the safety driver lost the time buffer needed to react, which led to the fatal incident.
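
A common generic remedy for an oversensitive detector (a sketch under our own assumptions, not a description of Uber's actual software) is a persistence filter: only brake once an obstacle has been confirmed across several consecutive sensor frames, so a single leaf-sized blip does not trip the brakes while a sustained detection still does:

```python
from collections import deque

class PersistenceFilter:
    """Trigger only after an obstacle is seen in N consecutive frames.

    Raising `frames_required` suppresses one-frame false positives
    (a leaf, a scrap of paper) at the cost of a slightly later brake
    on real obstacles: the exact trade-off the Uber case highlights.
    """
    def __init__(self, frames_required: int = 3):
        self.frames_required = frames_required
        self.history = deque(maxlen=frames_required)

    def should_brake(self, obstacle_detected: bool) -> bool:
        self.history.append(obstacle_detected)
        return (len(self.history) == self.frames_required
                and all(self.history))

f = PersistenceFilter(frames_required=3)
frames = [False, True, False, True, True, True]  # one blip, then a real obstacle
print([f.should_brake(d) for d in frames])
# [False, False, False, False, False, True]
```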

“As humans, you sit in the car, eight hours every day for a year, and nothing ever happens with the cars driving themselves. But could people be trusted or depended on to actually act on a second’s notice when something suddenly happens outside of the usual?” questioned Peterson. 

Ultimately, for autonomy to work on the roads, it is essential not just to define the levels of automation, but also to create a concrete framework for who is held accountable and liable when an autonomous vehicle causes an accident. "A system should be considered autonomous if it can legally accept accountability or liability for an operation. Technology today is not good enough for collision avoidance or obstacle detection, because a human operator is in control of the system now," said Peterson.
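
The "levels of automation" referred to here are conventionally the six SAE J3016 driving-automation levels. A minimal sketch of that taxonomy follows (the fallback shorthand is our illustration, not a legal definition of liability):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (0-5)."""
    NO_AUTOMATION      = 0  # human does everything
    DRIVER_ASSISTANCE  = 1  # e.g. adaptive cruise OR lane keeping
    PARTIAL_AUTOMATION = 2  # system steers and accelerates; human supervises
    CONDITIONAL        = 3  # system drives; human must take over on request
    HIGH_AUTOMATION    = 4  # no human fallback needed within a defined domain
    FULL_AUTOMATION    = 5  # no human fallback needed anywhere

def fallback_party(level: SAELevel) -> str:
    """Who is expected to respond when the system hits its limits.
    (Illustrative shorthand; actual liability is set by regulation.)"""
    return "human driver" if level <= SAELevel.CONDITIONAL else "the system"

for lvl in SAELevel:
    print(f"Level {int(lvl)} ({lvl.name}): fallback = {fallback_party(lvl)}")
```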

In essence, the issues that plague autonomous vehicles in the automotive segment parallel those facing the autonomous ship segment. Peterson was cautious in his statements, highlighting his concerns about the technology's shortcomings, and concluded that all the key stakeholders, including software programmers, original equipment manufacturers, users, and regulatory bodies, need to come together to create an ecosystem that is both foolproof and accountable when there is a slip-up.
