Bismillah, alhamdulillah: The driverless car is a clearly disruptive technology. With the rapid and breathtaking advances in the field, it seems the limit to its introduction is no longer technological but human.
Once the technology companies consistently demonstrate that a driverless car is 100 or 1,000 times safer than one driven by a human, they will have proven their technological prowess. But one significant obstacle will stand in their way: the human. By the human I mean our emotional reluctance, as humans, to be driven by computers. The thought of putting our lives in the hands of algorithmic robot drivers makes us balk. But why? Two things come to mind: autonomy and irrationality.
Autonomy appeals to a base human instinct: freedom. As humans we have grown used to driving our cars and having the autonomy to drive them wherever, whenever and, sadly, sometimes into whoever and whatever. The driverless car will virtually eliminate this negative side of driving: the roads will be safer, and little children who run out in front of a car will have a vastly improved survival rate. But this safety will come at the price of a loss of freedom. The freedom to go off-road, to park in tight spaces, to climb the pavement when no one is looking, to break the speed limit when the road is clear and empty. To let the wind rush through your hair and feel the exhilaration of adrenaline pumping through your body and awakening your senses.
Technology companies can try to talk logically about the issue: state the number of lives saved, the extra time we can spend answering emails and social media messages while being driven to work, and all the other wonderful economic reasons. But I am afraid we have become rather accustomed to driving ourselves around. The tech companies will have to learn how to pander to our hurt sense of freedom.
An idea would be to have ‘cheat’ modes that assuage this emotional resistance. The degree of ‘cheating’ should be enough to appeal to our emotional side but constrained enough not to cause harm. You can imagine a program that calculates the distance to the next car and tells the driver that he or she is in a zone where certain rules can be relaxed and the controls can be handed over. Ideas that appeal to younger drivers might include the option to drive on two wheels or pull off perfect handbrake turns. You get the idea.
Doctors sometimes do this with their patients. As a medical profession we emphasise the importance of taking medication daily, but we know that as many as 50 or 60% of patients don’t follow the rules, even though it’s for their benefit. A good doctor who perceives this problem in a particular patient may negotiate a ‘cheat’ day or two and achieve much better compliance than the stern-faced doctor who insists medication has to be taken every day. Patients can take a pill break, or simply be told it’s okay to miss a day or two of medication every now and then – as long as it doesn’t become a habit. Patients respond to this flexible approach.
The second problem is our tendency towards irrational thinking when we consider safety. The driverless car is demonstrably safer and is likely to become safer still. Yet, as with all machines, things will go wrong, as the recent Tesla fatality showed.
When a human drives a car, it is under their direct control. This gives the human a heightened but false sense of control, and hence of security. This sense of security is lacking with driverless cars: an accident is less likely to happen, but if it does, the reason is no longer connected to the human. This lack of connection and control heightens the perception of risk.
As humans we tend to analyse risk in an emotional and irrational way. This is a potential barrier to the adoption of this technology. In medicine, doctors face this problem frequently with patients who under- or over-estimate risk on the basis of irrational thoughts. It is not infrequent to find a patient consulting a doctor because a friend died of, or suffered from, a disease. The closeness of the friend and the emotional response to their plight induce empathy, and as part of that empathy the patient feels that they too are at risk of the same disease. Irrational, but real. Dealing with this ‘gut’ approach is what doctors, especially family doctors, do on a daily basis. So how do they overcome an irrational and wrong conclusion?
Trust is a key factor. ‘Doctor, I trust in you’ is not an infrequent statement in the consulting room. From a logical perspective this approach is equally irrational: what should decrease a patient’s anxiety is facts and figures. Trust in someone, on its own, should not decrease anxiety or worry – but it does. The irrational human problem is best treated with an irrational solution.
Turning back to the driverless car, what equivalent approach will help? One idea is to draw an analogy between the driverless car and another potentially driverless form of transport: the horse or camel.
These animals, once trained, can continue to take their riders on their journey even if they don’t pay attention to where they are going. Animals have been part and parcel of human history for thousands of years and they are a symbol of trust and reliability.
Harnessing this connection is likely to provide an avenue to bridge this gap. The technology giants have to turn back to the humble horse and camel, because the rider is the same.
I wanted to explore this idea because I can sense we are close to technological breakthroughs that will introduce the doctorless, or doctor-light, consultation. But I will leave that thought hanging in the air, for the moment.