In some countries, it’s difficult to land large projects without engaging in bribery, and some companies are prepared to take that step. It goes without saying that both private and public companies have always faced moral and ethical dilemmas. Such dilemmas are becoming increasingly common with the rise of big data and machine learning. In the years ahead, ethics will be a discussion topic of growing relevance. We should be prepared for this…
People in the United States love beauty contests. Needless to say, the problem is that human judges’ opinions can be biased. So it seemed like a good idea to let a panel of robots judge a beauty contest, because, after all, Artificial Intelligence is completely unbiased. Using machine learning, a set of algorithms was developed, and dozens of photos of men and women were uploaded. In the end, however, the robots turned out to be extremely racist: despite the large number of photos submitted by people of color, the vast majority of the winners were white.
In New York, gun searches are conducted very frequently in disadvantaged neighborhoods because the prevalence of guns is higher there than elsewhere. People of color make up a relatively high proportion of the residents of these neighborhoods. Moreover, in the U.S., large numbers of people – regardless of their skin color – are in possession of illegal drugs, and the police often find these drugs when carrying out gun searches. Big-data analyses of the gun-search results showed, or seemed to show, that many people of color use illegal drugs. As a result, the decision was made to search people of color for drugs more often – in other neighborhoods, too. It sounds logical, but it isn’t: the data reflects where the police searched, not who uses drugs. A failure to deal with data carefully is a moral failure.
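The flaw in that logic is easy to make concrete with a small simulation (a hypothetical sketch with made-up numbers, not the actual search data): suppose drug possession is exactly as common in two groups, but one group is searched nine times as often.

```python
import random

random.seed(42)

DRUG_RATE = 0.10  # hypothetical: identical true possession rate in both groups
SEARCHES = {"group_a": 9000, "group_b": 1000}  # biased search policy

finds = {}
for group, n_searches in SEARCHES.items():
    # Every individual search finds drugs with the same underlying probability.
    finds[group] = sum(random.random() < DRUG_RATE for _ in range(n_searches))

# The raw counts suggest group_a "uses far more drugs", but only because
# group_a was searched nine times as often. The per-search rates are equal.
print(finds)
print(f"find rate per search: a={finds['group_a'] / 9000:.2f}, "
      f"b={finds['group_b'] / 1000:.2f}")
```

Concluding anything from the raw find counts, rather than the per-search rates, is exactly the sampling-bias error described above: the data measures the policy, not the population.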
When performing big-data analyses and using machine learning, we leave a lot of choices up to computers that make assessments based on pattern recognition. This can lead to undesirable choices and undesirable behavior, for example due to flawed data selection. And it’s only going to get worse.
Advances in information gathering and technology are taking place at an unparalleled pace. The processing capacity of chips has been doubling every two years for the past 46 years. Devices are steadily getting faster, smaller, better, cheaper and simpler to use. Sensors (the internet of things) can already see, hear and feel, and their capacity to smell and taste is steadily improving. That is one reason why we are storing ever more data and doing more with it. The amount of data we store – in writing, on video and in digital form – is also doubling every two years. This means that in the past two years we have stored as much data as we did from the Stone Age up until two years ago! And, by extension, in the coming two years we will store as much data as we have from the Stone Age up until today. Moreover, given that we are sharing ever more information via social media, it’s almost impossible to keep anything secret anymore. Do you remember Pieter Storms and his reality show Breekijzer, in which he shook up companies? His ‘descendants’, Twitter storms, are shaking things up far more! In short: capacity is growing exponentially and we can do more and more with data, but the risks are growing too and we can’t keep anything secret anymore. Tricky!
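The “as much in two years as in all prior history” claim is a simple property of anything that doubles at a fixed interval, and it can be checked in a few lines (illustrative units, not real storage figures):

```python
# If the cumulative amount of stored data doubles every two years,
# then each two-year period adds as much as everything that came before.
total = 1.0  # arbitrary unit: all data stored up to some starting year
history = [total]
for period in range(5):  # five two-year periods
    added = total        # the new period adds as much as all prior data
    total += added       # so the cumulative total doubles
    history.append(total)

print(history)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
# At every step, the latest addition equals the entire prior total.
```

Whatever the starting amount, the most recent doubling period always contributes exactly half of everything ever stored.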
Big-data analyses offer enormous possibilities. Petroleum giant BP has many pipelines through which oil flows. The pipes are full of sensors that constantly record and update all kinds of information. At one point, BP fed all the data from these sensors into an analysis tool. The results showed that one type of oil was responsible for 75 percent of the degradation of the pipelines. Until then, BP had never thought about the impact of individual types of oil; now the company had information that was directly relevant for maintenance. Big-data analyses not only provide unexpected and detailed answers to your questions, they also answer questions you hadn’t thought of asking! Used this way, big data improves our ability to solve crimes and catch criminals, fight diseases and cure people, predict customer behavior and much more.
Machine learning is the icing on the cake: it results in truly intelligent behavior. Last year, Google DeepMind’s AlphaGo program convincingly defeated the human Go world champion, something that many believed would not happen for years to come. Machine-learning programs are edging ever closer to passing the Turing Test. Chatbots can actually engage in conversations, and before we know it reception bots will be performing better than people – and they will know everything about their customers. Sophia the robot can talk to people and even has more or less human facial expressions. She is witty – and that is not part of her programming: “Can robots be self-aware, conscious and know they’re robots?” “Well, let me ask you this back: how do you know you are human?” Saudi Arabia has even given Sophia citizenship.
There is, however, a downside. In 2016, Microsoft launched an AI chatbot called Tay on Twitter. She was supposed to learn from the conversations she had. Certain users of 4chan’s /pol/ board took up the challenge and started sending Tay messages with offensive content. Within 24 hours, Tay was making sexist, racist and pro-Nazi comments, and Microsoft quickly removed her from Twitter. Data science and machine learning recognize patterns and make distinctions based on those patterns. Another word for making such distinctions is “discrimination”. If that doesn’t cross the line, it’s right on it. If, on top of this, you consciously or unconsciously load biased data, the results can be extreme or even extremist. We are skating on thin ice, because big data and machine learning have absolutely no sense of ethics or morality. That doesn’t have to be a bad thing, but it certainly can be. No matter how wonderful the possibilities are, they also involve high risks. Companies have to realize that when they choose pattern recognition and machine learning they are relinquishing some control; a certain degree of unpredictability is introduced. Of course, we can decide for ourselves what data we will and will not make available, but we cannot see what is then done with that data. Big-data analyses and machine learning not only lack a sense of ethics, their actions are also invisible. We therefore do not know exactly what significance the outcomes have, or whether they were arrived at legally, in compliance with prevailing regulations. And even if nothing improper has been done, we may still experience a loss of transparency with respect to management, auditors, legislators and society. To what extent is this okay?
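Tay’s failure mode can be caricatured in a few lines: any learner whose output simply reflects its input stream inherits whatever dominates that stream. (A toy sketch for illustration; Microsoft’s actual system was of course far more sophisticated.)

```python
from collections import Counter

class EchoLearner:
    """Toy 'chatbot' that replies with the most frequent message it has seen."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        self.seen[message] += 1

    def reply(self) -> str:
        return self.seen.most_common(1)[0][0]

bot = EchoLearner()
for msg in ["hello", "hello", "nice day"]:
    bot.learn(msg)
print(bot.reply())  # "hello": the bot mirrors whatever dominates its input

# A coordinated flood of hostile messages dominates just as easily:
for _ in range(10):
    bot.learn("[offensive slogan]")
print(bot.reply())  # "[offensive slogan]"
```

The learner has no notion of which patterns are acceptable; it only has frequencies. Anyone who controls enough of the input controls the output.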
Moreover, the question is whether simply being ‘legal’ is enough. Don’t we also want our actions to be legitimate? The distinction between legality and legitimacy is a hot topic: something that is legal does not necessarily fit the current view of what is legitimate. You don’t want your customers saying of your company or its services, “Sure, it’s allowed, but it isn’t right.” And over time, views can change again. Consider, for example, the tax-evasion debate. In reality, you need to be able to think five years ahead: in five years’ time, will we still be able to defend the decision we are making now? The more data we store, the greater the issue of privacy becomes. We will therefore have to include every aspect of privacy in this discussion, regardless of whether it is about legality or legitimacy. What if we lose our clients as a result of our own actions, or the actions of a single staff member? Or if we lose staff members, or our partners? What if the regulators lose their trust in us? And what if we ourselves become lost? And how do we keep big-data analyses, machine learning and software bots in check? That is a second, much more complex question.
Bob Dylan once wrote the lyric: “To live outside the law, you must be honest.” Now more than ever, legislation is lagging far behind developments in society, and private and public companies that are considering using machine learning can’t afford to wait for it to catch up. We have to live outside the law because the law no longer suffices. But you will not have to determine what is legitimate on your own, because ultimately that is decided by prevailing public opinion. Moral dilemmas are nothing new; companies have always needed a moral compass. But with the rise of big data and machine learning, this need is becoming extremely urgent. We will have to choose: is being legal good enough, or do we also want to be legitimate? Can things be illegal and yet still legitimate? We cannot avoid such questions. We will have to make explicit moral choices at a time when decision-making is often decentralized, takes place in self-managed teams, or even happens via computers. And that is not a one-off exercise: even as new moral dilemmas arise, we will have to find new answers to old moral questions.
It is vital that we make explicit choices, but we will also have to translate those choices into policy that can withstand criticism in future social debates. And we have to ensure that both people and machines act in line with those choices. How can we influence the behavior of people and machines? Of course, we can set rules, but that hardly seems the best way to embed a moral compass. It’s more sensible to define principles, and better still to embed a moral compass in the DNA of the company – then everyone would always make the right choices. Moreover, we not only have to address our visible behavior, but also our invisible behavior. Thanks to social media, the chances of keeping anything secret – especially anything that is not legitimate – are only diminishing. The papers are full of such stories every day.
We engage in marketing activities because we want to present ourselves to the market in a coherent and consistent way. Don’t we also want to be coherent and consistent where ethical issues are concerned? Ethics and a moral compass will only continue to grow in importance. Are we ready for a Chief Ethics Officer?
Ethics and a moral compass have become important issues. The goal is for us to make explicit choices that ensure the entire company acts in a coherent and consistent manner. Ultimately, you have to be able to explain these choices to yourself and to your coworkers. Furthermore, you should always have a plausible story for shareholders, legislators, regulators, auditors and in fact society as a whole. If you don’t start working on this now, you will soon find that you are too late.
Bart Stofberg (firstname.lastname@example.org)
Consultant at Quint Wellington Redwood