Enabling autonomous moral machines

Author : Alistair Hookway, EPDT

05 January 2017

The age of the autonomous vehicle is very much upon us. Numerous multinational corporations have thrown their weight behind the concept of autonomous driving, including Google, Apple, Nvidia, and Tesla.

However, with autonomous technology and the AI behind self-driving cars still at a relatively early stage, there are major issues still in need of serious consideration.

The most pressing of these is the ‘trolley problem’, which asks what course an autonomous vehicle should take when a serious collision is unavoidable. For example, if an autonomous car is cruising down a street at just under the speed limit and a pedestrian steps out into the road, does the car swerve to avoid the pedestrian, colliding with a wall and harming its occupants, or does it hold its course, killing the pedestrian but protecting the vehicle’s occupants?

The trolley problem is an ethical thought experiment that asks whether a person would save the lives of many at the expense of one, the utilitarian choice, or not interfere at all. The scenario was introduced by Philippa Foot in 1967 and originally involved a runaway tram heading for a group of five people, and the question of whether a bystander should divert it so that it hits just one person instead. Is directly taking action to kill one person more ethical than taking no action and allowing five to die?

Many of the questions raised by the problem are hypothetical, but they translate readily to the modern setting of autonomous driving. Here the moral question has very real consequences for those travelling in autonomous vehicles, and the decision is made by AI rather than by the human agent the original debate assumed. This adds a further layer of questions: how does an AI make an ethical decision? Who programmes the AI to make it? Can data be applied to a situation where morality is directly at stake? Who bears responsibility for any decision the AI takes? The answers are very much a matter of life and death, and need to be settled before autonomous cars become a mainstream option.

Increasing demand for autonomy

Tesla is perhaps the most progressive and renowned developer of autonomous and semi-autonomous vehicles. The company and its CEO, Elon Musk, have very much injected the idea of autonomous driving into the public consciousness, with Musk suggesting autonomous vehicles could be mainstream in just six years’ time. All new Teslas now ship with the hardware needed for full self-driving, and a new onboard computer, with more than 40 times the computing power of the previous generation, runs the Tesla-developed neural net for processing vision, sonar and radar data.

The latest Tesla, the Model 3, attracted over 370,000 reservations in the first two months after its announcement, indicating the significant demand for and interest in autonomous vehicle technology. Many videos have appeared online showing drivers using, or perhaps misusing, Tesla’s AutoPilot feature. Drivers are encouraged to keep their hands on the steering wheel at all times so that they can take control when necessary, but some videos appear to show drivers performing all manner of other tasks whilst in the driving seat, in order to highlight the capabilities of the AutoPilot feature.

It is clear that the warning to keep hands on the wheel is there to ensure the driver is paying attention to the road and can avoid collisions with other drivers: in essence, to oversee the AI. After all, as Google has found, the majority of collisions involving self-driving cars are the fault of a human driver crashing into the autonomous vehicle. Of 14 collisions involving Google’s self-driving cars, 13 were the fault of human drivers; the one exception was caused by Google’s AI assuming a bus would allow the car to merge. It is the rare failures, however, that can cause the most serious damage. In June 2016, a man in the US died after his Tesla crashed into a tractor trailer, the system having failed to distinguish the trailer’s white side against a brightly lit sky. Tesla said that the driver had not kept his hands on the steering wheel and therefore could not prevent the crash. Autonomous vehicles have now been tested over millions of miles, and these are two of very few instances where the AI has had to accept blame; but when lives are on the line, no chances can be taken in developing and optimising autonomous vehicle AI.

The utilitarian approach

To move beyond semi-autonomous driving and into the realm of fully autonomous vehicles operating at mass scale, AI software still needs development to ensure minimal defects. The trolley problem is a particular dilemma that the automotive industry must confront. In an attempt to gauge the public consensus, MIT researchers developed an online morality quiz based on the problem (http://moralmachine.mit.edu/). Entitled Moral Machine, it makes users choose one of two options in a series of scenarios where both options result in casualties from an autonomous car collision. Each scenario varies the pedestrians: male or female, young or old, law-abiding or law-breaking, athletic or unfit, and so on. The user must then use their conscience to decide which option they feel is the most appropriate outcome. The quiz then presents a graph showing the user’s preference with regard to each criterion and how their individual decisions compare with the results of all other users. Perhaps unsurprisingly, the option most preferred across all users is a utilitarian approach in which the number of deaths is minimised, regardless of whether it is pedestrians or the occupants of the car who perish.

The Moral Machine is certainly a useful tool for gauging the views of the public. After all, it would be more appropriate for an AI to make decisions influenced by society rather than by the company that developed the software. The quiz is somewhat flawed, however, as answers are given in the knowledge that they carry no real consequences. Everyone would like to think they would take the utilitarian stance of saving as many lives as possible, regardless of social standing, age, or sex, but would anyone want to get into a car built on this approach? The utilitarian approach takes no account of whether the lives being saved are those of the car’s occupants or of pedestrians. Would an autonomous car sell if people knew it was willing to kill them in certain situations?
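To make that concrete, here is a toy sketch in Python of what a strictly utilitarian decision rule amounts to. The names and casualty figures are entirely hypothetical, not any manufacturer’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    """A candidate action and its predicted outcome (hypothetical model)."""
    name: str
    expected_casualties: float  # predicted deaths, occupants and pedestrians alike

def utilitarian_choice(options: list[Manoeuvre]) -> Manoeuvre:
    # A strict utilitarian rule: minimise total expected casualties,
    # with no regard for who they are or where they sit.
    return min(options, key=lambda m: m.expected_casualties)

options = [
    Manoeuvre("hold course", expected_casualties=1.0),       # the pedestrian is hit
    Manoeuvre("swerve into wall", expected_casualties=2.0),  # both occupants are harmed
]
print(utilitarian_choice(options).name)  # -> hold course
```

The rule’s very simplicity is the problem: its indifference to whether the casualties are the car’s own occupants is exactly the property that would trouble prospective buyers.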

Embedding morality

These lose-lose situations are obviously very rare and should not prevent the continued development and advancement of AI software and autonomous technology. Perhaps the ideal would be a road network consisting solely of autonomous cars, removing the unpredictability of humans; that, however, is not feasible anytime soon. The heart of the issue lies with the developers of the hardware and software implemented in vehicles. Autonomous technology involves vast amounts of data being produced concurrently by the sensors and radars that monitor the space around the car. Such technology allows the car to act pre-emptively to avoid collisions, and it is far more efficient at assessing driving conditions and identifying hazards than any human driver. Automotive developers therefore have a pivotal role to play in the safety and mainstream adoption of autonomous vehicles.

The continuing challenge faced by automotive developers is that the AI must acquire and process vast amounts of data in order to navigate the road. This demands computational power and memory bandwidth that have previously not been feasible in a vehicle. Developers therefore have to streamline the process, optimising hardware and software to make it more efficient and less demanding whilst still achieving accurate results.

To process and make sense of these large amounts of data, a technique called the Convolutional Neural Network (CNN) is often used; this is the same technology incorporated in a Tesla. The ability to identify hazards in different driving conditions is critical for autonomous vehicles because of its direct impact on safety: a lack of accuracy in identifying road signs, traffic signals, or pedestrians could lead to a situation similar to the trolley problem. CNNs are layered systems of interconnected artificial neurons, each of which processes a small section of the input image. The outputs for these individual sections are then tiled so that the input sections overlap, creating a better representation of the original image, and the process is repeated for each layer. The tiling allows the CNN to tolerate slight shifts in the input image. The catch-22, however, is that streamlining CNNs in this way runs the risk of simplifying the decision process to the point where it fails to make the best decision possible, an obvious drawback for an autonomous vehicle.
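As a concrete illustration, the sketch below, written in Python with PyTorch, shows the structure described above in miniature: small filters slide over overlapping patches of the image, pooling adds tolerance to slight shifts, and stacked layers feed a classifier. It is a minimal example, not any vendor’s production network; the 43 output classes simply mirror the German traffic sign benchmark mentioned later.

```python
import torch
import torch.nn as nn

class TinyTrafficSignCNN(nn.Module):
    """A minimal CNN sketch: convolutional layers apply small filters to
    overlapping image patches; pooling adds tolerance to slight shifts."""
    def __init__(self, num_classes: int = 43):  # 43 classes, as in the German benchmark
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),  # 5x5 filters over overlapping patches
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample; tolerate small shifts
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 32, 8, 8) for a 32x32 input
        return self.classifier(x.flatten(1))

model = TinyTrafficSignCNN()
dummy = torch.randn(1, 3, 32, 32)         # one 32x32 RGB image
print(model(dummy).shape)                 # -> torch.Size([1, 43])
```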

CNNs require significant computational power, so optimisation is necessary for an automotive application. The task is already complicated enough in terms of a vehicle identifying an array of road signs and potential hazards in varying conditions using CNNs. Situations such as the trolley problem add further complexity, since the final output must be highly accurate given its safety-critical importance. As a result, functional safety requirements for automotive software have been steadily increasing.
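One common streamlining technique, sketched below in plain NumPy purely as an illustration (not a description of any particular vendor’s toolchain), is to quantise a network’s 32-bit floating-point weights to 8-bit integers, cutting memory footprint and bandwidth roughly fourfold at the cost of a small, bounded rounding error:

```python
import numpy as np

def quantise_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 using a single scale factor."""
    scale = np.abs(weights).max() / 127.0            # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(32, 16, 3, 3).astype(np.float32)  # one conv layer's weights
q, scale = quantise_int8(w)
err = np.abs(w - dequantise(q, scale)).max()
print(f"memory: {w.nbytes} -> {q.nbytes} bytes, max rounding error {err:.4f}")
```

The trade-off is exactly the catch-22 described above: each such simplification saves power and bandwidth, but pushed too far it erodes accuracy.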

The challenge for embedded developers

Advanced Driver Assistance Systems (ADAS) are classified into six levels, ranging from Level 0, no automation in the vehicle, up through partial automation to Level 5, full automation. Levels 4 and 5 make autonomous driving decisions and are therefore required to anticipate functional safety hazards in real time and mitigate driving risks with high assurance. Two trends have developed as a result of these increasing functional safety requirements. The first is that the complexity of ADAS systems is growing, as multiple sensors and sensor types work together simultaneously to make driving decisions. The second is that a black-box application-specific integrated circuit (ASIC) module design is no longer adequate for safety, because software alone cannot monitor and track all safety-critical events.
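For reference, the levels (following the widely used SAE J3016 taxonomy, with descriptions abbreviated here) can be captured in a simple data structure:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """SAE J3016 driving automation levels (descriptions abbreviated)."""
    NO_AUTOMATION      = 0  # the human does everything
    DRIVER_ASSISTANCE  = 1  # a single assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2  # steering plus speed; the driver supervises constantly
    CONDITIONAL        = 3  # the system drives; the human must take over on request
    HIGH_AUTOMATION    = 4  # no human fallback needed within a defined domain
    FULL_AUTOMATION    = 5  # no human needed anywhere a person could drive

def needs_realtime_safety_monitoring(level: AutomationLevel) -> bool:
    # Levels 4 and 5 make the driving decisions themselves, so they must
    # anticipate functional safety hazards in real time.
    return level >= AutomationLevel.HIGH_AUTOMATION
```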

Cadence is one company that has set out to try to solve these issues. It has set two goals for optimising its neural network structure whilst adhering to safety requirements. The first is to reduce the implementation cost of CNNs in terms of energy, memory footprint, memory bandwidth, compute resources, and latency. The second is to improve recognition accuracy by reducing overfitting: the over-adaptation of the network to anomalies in the training data, which results in poor predictive ability. To achieve these goals, Cadence is working on two methods: a new basic neural network structure, and automated optimisation by iterative improvement.
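Reducing overfitting is commonly tackled with generic regularisation techniques such as dropout and weight decay; the brief sketch below shows how these attach to a training setup. It is a generic illustration, not Cadence’s proprietary method:

```python
import torch
import torch.nn as nn

# Generic anti-overfitting measures: dropout randomly silences feature maps
# during training so the network cannot memorise anomalies, while weight
# decay (L2 regularisation) penalises large weights.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout2d(p=0.25),          # drop whole feature maps at random
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 43),
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01,
                            weight_decay=1e-4)   # L2 regularisation

print(model(torch.randn(1, 3, 32, 32)).shape)    # -> torch.Size([1, 43])
```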

As well as safety requirements, Cadence has to heed the limited power budget of automotive applications. The power efficiency of CNNs needs to be significantly improved: the most advanced ADAS chip consumes 3W yet still provides insufficient performance. An autonomous vehicle requires AI that is capable of working to its full ability, rather than one hampered by a lack of computational power.

The use of CNNs shows that data, rather than any kind of embedded or programmed morality, is responsible for an autonomous car’s behaviour. Cadence’s CNN currently achieves over 99% accuracy when tested on German traffic sign recognition, a clear indication that autonomous driving can be safer than human driving. To enable faster image processing, the company has also developed a new high-performance, low-power DSP for neural network applications. Compared to its predecessor, the Tensilica Vision P6 DSP delivers up to 4X the multiply-accumulate (MAC) performance, and against commercially available GPUs it can achieve twice the frame rate at much lower power consumption on a typical neural network.

Summary

Since the trolley problem has no single correct moral answer, it cannot be solved with data alone. Despite this, if such a situation ever does arise on the road, there will inevitably be uproar about the ‘rise of the machines’ and the failures of autonomous technology. The only course of action is for companies to continue making their solutions as optimised and accurate as possible. Until there is a road network consisting solely of autonomous vehicles, the risk of a trolley-problem scenario will remain; for now, it continues to be a hypothetical situation with no obvious solution.

