(How) should researchers worry about ethics?
A few days ago I saw a tweet arguing that NLP researchers should start taking the ethical implications of their discoveries seriously. That is easier said than done.
Technology is amoral
First things first, I think we can all agree that technology itself is amoral. Technology is, ultimately, based on mathematics and the manipulation of numbers (this is what computers do). Formulas and algorithms are neither good nor bad on their own, as long as no meaning is attached to the numbers they manipulate. Morality emerges as soon as we give an interpretation to the variables: a tiny robot chasing and killing viruses may resolve the ongoing Ebola and HIV pandemics (I am obviously oversimplifying), which is good, but a drone chasing and killing people in the name of democracy or some god is, arguably, evil. Yet, these robots would probably use very similar control software.
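To make this concrete, here is a toy sketch (entirely made up for illustration; the function and parameter names are mine, not taken from any real system): a minimal proportional pursuit controller that steers an agent toward a target. The geometry is identical whether `target_xy` marks a pathogen inside a body or a person on the ground; morality only enters with the interpretation of the variables.

```python
import math

def pursuit_step(agent_xy, target_xy, heading, max_turn, speed, dt):
    """One step of a toy proportional pursuit controller.

    Turns the agent toward the target at no more than `max_turn`
    radians per second, then moves it forward. Pure geometry:
    nothing here knows what `target_xy` actually is.
    """
    ax, ay = agent_xy
    tx, ty = target_xy
    desired = math.atan2(ty - ay, tx - ax)                      # bearing to target
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    turn = max(-max_turn * dt, min(max_turn * dt, error))       # clamp turn rate
    heading += turn
    new_xy = (ax + speed * dt * math.cos(heading),
              ay + speed * dt * math.sin(heading))
    return new_xy, heading

# Example: the agent homes in on the target, whatever the target "means".
pos, heading = (0.0, 0.0), 0.0
for _ in range(100):
    pos, heading = pursuit_step(pos, (5.0, 3.0), heading,
                                max_turn=2.0, speed=1.0, dt=0.1)
```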
Let me cover a few more preliminary assumptions for what follows. First, I am only considering ethical concerns about the possible good or bad uses of an invention. I am not interested here in ethical research conduct, which should be a non-issue (apparently it is not, but I will not talk about it now). Second, I am assuming that, if an invention can be used to do bad, it will be used that way by somebody. Perhaps not today and not tomorrow, but eventually somebody will find it useful for their evil endeavors.
What does it mean for researchers to worry about ethics?
So, in this setting, what does it mean for researchers to worry about ethics? At the very least, they should try to conceive of, and be aware of, the possible good and bad use cases of their inventions. But then, what to do with this collection of use cases? Just mentioning them at the end of a research article is not sufficient, as it does not stop the evil guys. Worse, it could even give them a few more ideas!
Then, should researchers refuse to work on anything that could have bad uses? What about the good uses? Not spreading something that can do bad is good, but not spreading something that can do good is also bad. Are researchers really the best people to make such a decision? Not to mention that many people, and not only researchers but everybody working on a technology with potential bad uses, cannot afford to simply oppose their employer and be ready to quit. They have families to support, debts to pay off, and such a move could hurt their future career prospects. Our society makes it very hard to oppose your employer, and we cannot hold somebody liable for working on possibly evil products when, in practice, they have no other choice.

There have been some positive episodes on this front recently, involving some big tech companies (Google's search engine for China, Amazon's facial recognition for the police, and Microsoft's augmented reality platform for the US Army), and who knows how many negative ones we will never hear about. These episodes suggest that employee pushback can be viable, but would these people have quit their jobs if management had decided to proceed? Would management have fired them instead? It seems those big companies would not have much trouble replacing a few percent of their workforce, and perhaps complying was just a PR move after all.
What if there were an external entity to prevent technology misuse? Just as the police are supposed to prevent crime, this organization would monitor companies and prevent them from doing Evil Things™. While I am sure such organizations exist, I cannot comment on their effectiveness, beyond noting that several companies are doing evil things in plain sight. This might be because it takes time to officially reach a consensus on what counts as bad.
Should private companies be involved in politics?
This brings me to a tangentially related topic: should private companies be involved in politics? My answer is no. Companies should not take political stances because, while we citizens of democratic countries can choose our government, we cannot choose which companies do business with our country and our people. A bad government can be overthrown, but the market only listens to money. The right solution, in my opinion, would be ethical regulation by the government. To work, this requires swift action to keep up with the inventiveness of evil actors, and a relatively stable government (a bit like China, a bit unlike Italy). And, of course, cultural approval: this works well in Germany, but it would be unacceptable in the US, where the "free market" reigns supreme.
In practice, however, it is inevitable that business intersects with politics, especially when it is the government that puts pressure on companies (see the recent episodes involving China and the NBA, Blizzard, Apple, and many more). And do not forget that companies can alter a government's course of action by lobbying for their own (the companies', not the government's or the people's) best interests. I am convinced that the more money one wants to make, the more evil one has to be: after a certain point, profits can only be increased with more and more unethical actions. Hence, very evil players can always outbid their opponents. It is hard to bring positive change when the world only understands money.
Leaving my bitterness aside, there is a huge elephant in the room that I have not considered until now: what is evil? While some things are obviously evil and some others are obviously good, a lot of things that matter in practice fall into a gray area in between, with both good and bad uses involving trade-offs that are practically impossible to evaluate thoroughly. There is an even bigger problem, which is getting different people to agree on the same values. The ongoing cold war between the US and China, as well as the past cold war between the US and the USSR, clearly shows that people do have very different, and sometimes incompatible, values, and that this is unacceptable to some of these people.
What can I, as a researcher, do about it?
This looks like a messy knot that nobody can untangle alone. What can I, as a researcher, do about it? I like to think that my research is positive: I study how the body fights diseases, and how we can design better drugs to treat them. I really wanted to work on things that can bring positive change, and this was a determining factor in my choice to pursue a PhD. I did not want to work as a data scientist in industry, because my tasks would mostly have revolved around sales, marketing, and customers in general: reducing churn, increasing retention, improving engagement, and so on. In recent years, it has become clear that such activities can get very unethical. But hey, it's all in the best interest of the customer, am I right? Not many companies work on products whose core is machine learning, products that would simply be inconceivable without it; the few that do require actual research work, i.e. a PhD.
Back to my research: can it be used for evil purposes? I believe so. One could, for example, use it to design a "reverse vaccine" that teaches the body to attack and destroy itself. In practice this would not be so easy, as the body has safeguards in place to suppress autoimmunity, but with some modifications these safeguards could be accounted for in the design of such a drug. As of today, however, this would require a considerable amount of experimental research, so the possibility remains quite remote. Does this stop me from working on it? No, that would be ridiculous.

What if, for some reason, my discoveries turn out to work better on a certain part of the population? Or if my employer wants us to focus our efforts on the European population? Or if they want to focus on precision medicine? This would mean that my inventions would be used to create very specific drugs, tailored to a given patient and their very personal disease. The problem, obviously, is that only the rich and privileged could afford them. I am not necessarily talking about billionaires: we citizens of the developed world are privileged compared to the majority of the world population. Perhaps it is unfair, but, on the other hand, one has to start from somewhere, right? A good compromise would be to work on this with the assurance that such a technology, once developed with the help of the rich, would be made available to everybody else. This means that either it will not be patented, or my employer will make an effort to bring the innovation to those who cannot afford it but desperately need it. Both options are, from a business perspective, not optimal, to say the least. Perhaps my employer will listen, if not to me, then to the "market".