AI Ethics, Machine Law and Robophobia
by Stefan Morcov
Ever since the word “robot” was coined in 1920 by the Czech science-fiction author Karel Capek, Artificial Intelligence has been associated with numerous dystopian fantasies in which machines take over the world in order to destroy mankind. Today’s media is full of such references. In The Matrix, Terminator, and Black Mirror, for instance, robots are portrayed as evil machines craving world domination, while in Asimov’s works, robots are portrayed as inspirational.
Karel Capek’s robots gave birth to a dedicated word for this fear: robophobia. But is robophobia anything more than simple technophobia, the general fear of technology and the new? Are intelligent machines really that dangerous? Are we ready to assign legal and moral responsibility to intelligent machines? And if not, how far are we from that moment?
Below, we’ll dive deeper into how robophobia relates to machine law and how understanding it can help us live better lives.
Legal and ethical implications
To better understand what AI Ethics is all about, let’s take a closer look at recent events in which autonomous cars were involved in deadly accidents. While some of these tragic accidents were caused by pedestrians, others were caused by software bugs or even operator mistakes.
The vast majority of self-driving car accidents are due to human drivers or pedestrians: 81 out of 88 accidents filed in California last year alone, according to an Axios report. The interesting question is who bears responsibility for the intelligent machine. At this point, the general public is divided: some assign responsibility to the last person interacting with it, be it the driver of an autonomous vehicle or a pedestrian, while others assign responsibility to the machine itself.
The key barrier separating humans from machines, and also the basis for the definition of a legal person, is consciousness and self-awareness. We cannot speak of free will and robot responsibility until robots can achieve true intent. In the end, it all revolves around the following question: have artificial intelligence algorithms reached the capacity to make their own, independent decisions?
The Magic of AI – A Sneak Peek Behind the Scenes
Most people perceive AI systems as black-box magical wonders, set to replace blue- and even white-collar workers in the near future. They can read orders, send out invoices and payments, sort and deliver packages, solve customer complaints, read hand and face movements and gestures, translate from one language to another, place restaurant reservations, read and format live minutes of meetings, and even build other robots. They are doing all this, and much more.
AI is the one technology that surprises and enchants me the most. Years ago, during my university years, I was fascinated by the AI promise and its wonderful mathematics – neural networks, expert systems – concepts that were all so enchanting. But those promises needed years to be fulfilled: all that theoretical beauty required huge processing power before it could deliver practical applications. Not anymore – data mining, data science, and raw processing power made it happen. AI works!
While most people find it truly incredible what technologies like AI, ML and RPA can accomplish in 2020, for data scientists and AI developers this is pure engineering and mathematics with immediate practical applications and benefits. Let’s explore some of them below:
- The number of partnerships, joint ventures, and initiatives between AI-focused companies and organizations from all industries and research areas is exploding. According to KPMG, 75% of biopharma companies consider that AI will play a key role in finding new drugs.
- The RPA market is expected to grow at a CAGR of 31.1% over the next years, reaching a volume of USD 3.97 billion by 2025.
- AI-based bots are revolutionizing the services, financial, telecom, and retail markets by automating communication with customers and solving increasingly complex requests with minimal or no human intervention. Enterprise use of AI grew by 270% over the past 4 years according to Gartner, with more than 37% of organizations already making use of some form of AI.
Most practical artificial intelligence algorithms are based on machine learning. While there is a valid paradigm built on algorithms that are not deterministic, we are far from AI freedom of choice, free will, or intentionality. AI/ML does indeed use heuristics and randomness, but it is mostly focused on data analysis, pattern detection, and optimization – parallel backtracking that leverages randomness and huge processing power.
Today’s AI is complicated analysis of large sets of data, too complicated and too large to ever be grasped by the human mind. But even if we treat AI as non-deterministic for all practical purposes due to its intrinsic structural complexity, which generates dynamic complexity phenomena such as emergence and chaos, it remains deterministic. We don’t even know yet whether binary computing will ever simulate truly non-deterministic algorithms, or whether true quantum and analog computing are needed for this. In all scenarios, AI remains for the moment a tool, performing programmed tasks, as ordained by humans.
Randomness and heuristics, even in highly complex systems, are not free-will – at least not yet.
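This point can be illustrated with a minimal sketch. The function below is a hypothetical example of a randomized heuristic – stochastic hill climbing, a simple cousin of the search and optimization techniques used in practical ML. Its "choices" are driven by a pseudo-random number generator, yet given the same seed it reproduces the exact same sequence of decisions every time: randomness in the engineering sense, not free will.

```python
import random

def stochastic_hill_climb(f, start, step=0.1, iters=1000, seed=42):
    """Randomized local search: propose a random step, keep it if it improves f."""
    rng = random.Random(seed)  # the "randomness" is a seeded, reproducible stream
    x = start
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# Maximize a simple function with a known peak at x = 3.
f = lambda x: -(x - 3.0) ** 2
run1 = stochastic_hill_climb(f, start=0.0)
run2 = stochastic_hill_climb(f, start=0.0)
assert run1 == run2  # same seed, bit-for-bit identical "decisions"
```

Two runs with the same seed are indistinguishable: the algorithm never decides anything that was not fully determined by its inputs, which is exactly why we cannot yet attribute intent to such systems.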
In fact, similar to the distinction between complicatedness and “real complexity” in systems theory, AI research needs to differentiate between artificial intelligence and “real artificial intelligence”, or even “Artificial Consciousness”.
Especially since, to this day, machines have yet to pass a full Turing test.
And this is my final point: responsibility
We need a serious discussion about quality, ethical, and legal standards for the design, development, testing, production, and operation of autonomous intelligent machines – AI, RPA, bots. This involves both:
- personal and corporate responsibility;
- responsibility for processes, products, and data.
The responsibility is situational but still lies with the designers and operators of the machine, not with the machine itself. Robophobia is not justified in any way. For the moment, the machine is and remains a tool – a complex and very useful tool, but still one without intrinsic ethical or legal responsibility.
If you want to learn more about Artificial Intelligence and dive into the future of AI, you can read our Insight here: The Past, Present and Future of AI