Why the race to build AI regulation has no losers

There is a race between corporations seeking to develop artificial intelligence (AI) and governments working to build the legislative frameworks needed to direct its development according to their ethical and social values. This race can be lost only by those who do not take part.

The global competition to reap the potential rewards offered by AI and algorithms has been ongoing for many years. Search engines, driverless vehicles and virtual assistants are just the most visible innovations in digital technology, and many more are likely to follow. Deep learning, for instance, is quickly evolving and could significantly augment data processing, reducing the need for human intervention in the improvement of algorithms.

The growing complexity of this technology, combined with the expanding number of areas in which it is applied, presents significant risks, which arise precisely from the difficulty of making procedures transparent and replicable. Machine-learning algorithms can become so complex that the steps between their inputs and outputs are no longer comprehensible, producing the so-called “black box” effect. Furthermore, algorithms process data sets and are developed by engineers, and neither of these is likely to be free from bias.

In a world where algorithms play a growing role in shaping people’s choices and life paths, in everything from healthcare plans to staff selection, the last thing we should want from them is discriminatory behaviour that reinforces racism or gender inequality. Yet this is exactly what happens: back in 2015, Amazon executives had to halt an AI system that automatically reviewed job applicants when they realized it was not rating candidates fairly and showed bias against women. Even though Amazon stated that the algorithm was only meant to support the recruitment process and that humans always had the final word, the entire programme was promptly shut down.

More recently, Facebook was forced to change its policies on targeted job ads when it was discovered that job adverts for nursing and secretarial roles were shown mainly to women, while ads for janitorial and taxi-driving jobs were shown primarily to men from minority backgrounds. A return to the old “human” selection systems is not a solution: it would only slow down processes that are now automated without resolving the dilemma, precisely because all these forms of discrimination have a human origin.

How AI should be regulated is a matter of human and ethical values, and there is no universal solution. For instance, thousands of AI-enabled cameras are installed in public spaces, and facial recognition algorithms make it possible to track people’s movements and behaviour, regardless of whether they consent. Even if not dystopian in themselves, at least from a European perspective, such applications risk eroding people’s trust in this technology and diminishing the benefits it can offer. There is no need to explain the advantages such an approach holds from an authoritarian regime’s perspective.

However, such surveillance mechanisms could be viewed positively by cultures with a different perception of values, and those values are precisely what must be weighed when regulating innovation. The economic model applied by the large US multinationals has shown that these differences exist even between Europeans and Americans, despite these countries supposedly being united under the label of the “Western bloc”.

Minor differences in approach can matter a great deal to the outcome: thinking, and planning, in advance is crucial to ensure that our society drives the inevitable changes rather than merely absorbing them. Building an adequate legislative framework, one that directs the development of these technologies without stifling their research and application, is essential to avoid being passively subjected to value-driven technological development by foreign corporations and public bodies, as has already happened with the digital economy.

Provided it is wisely regulated, AI development can bring unprecedented benefits for industry and the economy and help tackle our society’s most urgent collective challenges, including the impact of human biases reflected in algorithm-driven processes. The values that should guide this change are too subjective to provide an indisputable interpretation. Hence, transparency of processes and decisions is the only absolute value that can be relied upon to guide the drive towards regulation and development of artificial intelligence, ensuring that it benefits all.