The Facebook experiment and the web’s ethical void


The Facebook Experiment has upset many people.[1] In the experiment, conducted in 2012, Facebook manipulated the News Feeds of some of its users, filling them with good or bad news to study the effect on users’ moods. After the results were published, some defended the experiment on the ground that users had agreed to it in the terms and conditions: the experiment was legal because people had consented.[2] A related question, though, is whether the experiment, and Facebook’s behaviour, was ethical.

Defending the company is the first priority.

Although people have focused on the experiment’s ethics, this blog focuses on the ethical content of terms and conditions and on the ethics of Facebook’s decision to manipulate some of its users. The issue is a salient one, as Facebook is not alone in relying on terms and conditions to defend the legality and ethicality of how it treats customers. Terms and conditions are a legal document. Lawyers write them for a legal purpose: to explain the relationship between the company and its users and to set out the legal rights and obligations of each. From the company’s perspective, the terms and conditions protect the company.[3] On the surface, this is understandable. Yet we have to ask whether this should come at the price of ethical behaviour towards the service user. Does it suggest an ethos in which whatever is legal must be ethical? If we consider that the law is the lowest ethical common denominator within a community, and is not the final or sole determinant of the common good, then such a view is problematic.

Imagine if the Constitution were based on Facebook’s terms and conditions.

What would society look like if it were based on Facebook’s terms and conditions? Although it is a speculative question, it reminds us that our lives are shaped increasingly by our digital experiences and expectations. In turn, those experiences shape our approach to ethics and, increasingly, politics. People see their relationship with the political community or the government as similar to, if not the same as, their relationship with web service providers. Despite the influence of the digital domain, the Facebook Experiment returns us to a fundamental problem. Is the legal order always an ethical order?

History repeats itself when the legal order is the ethical order.

The Facebook experiment, and the ethos it reflects, should remind us of a previous age. History has shown us a society that had a similar approach to law and ethics: the Weimar Republic in Germany. At that time, legal positivists such as Hans Kelsen helped to justify and sustain the Nazi legal order by arguing that the law was without normative content.[4] The content of the law did not determine its validity. In practice, this meant that any law was valid so long as the political authority within the state promulgated it. Even though we are demonstrably not in a political system similar to 1930s Germany, we find that the digital domain projects an ethos in which whatever is legal is ethical.

Terms and conditions may be legal, but are they ethical?

The issue we have to understand is the ethical content of the terms and conditions. To put the question directly: are terms and conditions written so that the company can take advantage of its clients and customers? Such an approach may be legal, but is it ethical? As the Facebook Experiment revealed, this is a debatable question. The deeper question is why the Facebook management team and its legal team failed to consider the ethical content of their terms and conditions and of the experiment itself. Both may be legal, but they are clearly not ethical. To change the terms and conditions to include a reference to research, and to believe that this substitutes for informed consent, suggests a gap between the ethical content and the legal content of their decisions and terms and conditions. It also suggests an ethical failure within the organisation. Yet Facebook is not alone in this approach.

Facebook is no different from many other social media service providers. Its terms and conditions are similar to those of other companies. They meet legal requirements without an apparent concern for their ethical content. As long as they are legal, that is sufficient. If terms and conditions are written from this perspective, we have to consider whether other decisions are made this way. Do companies recognise an ethical responsibility beyond the minimum of complying with the law or a legal obligation? Some might defend their approach by saying that the basic ethical responsibility is simply compliance with the law. However, the law is the lowest moral or ethical order within a society.

The law is a society’s lowest moral order

The law is the lowest ethical framework within a society. It represents the basic agreement about a form of justice needed to sustain a decent political life within a community. The laws create the foundation of the common good. However, the common good is not simply the highest good. Instead, it is formed in response to, and guided by, an approach to a higher good as revealed by rational enquiry into the best way to live. Even though this higher good is unobtainable by a society, it is not beyond an individual’s or a corporation’s understanding. In other words, corporations and citizens can act in ways that are ethically superior to simply complying with the law. Here we see the tension for Facebook. Facebook fell short of the ethical responsibility that we would expect from another person. It did not consider whether it would harm someone. To put it broadly, it would have had to consider whether its behaviour was just.

In considering the question of justice, or whether Facebook’s behaviour was just, we see that the “experiment” reveals something darker about human nature in the digital domain than we care to consider in our technologically empowered world. As one observer noted, the people conducting the “experiment” did not see the subjects as people like themselves. On the surface, this appears decisive. If they had been willing to see the experiment’s subjects as themselves, or as their family and friends, they might have come to a better decision.[5] Here is a key failing of most ethical programmes: the ethical problems are usually seen as someone else’s problems, easily dismissed because “we do not know” these people.[6] If the people designing the experiment had been subjected to it, they might have changed their view. Such a view, though, is potentially problematic.

If we test an ethical decision by asking whether the decision-maker would apply it to their own family, we make an assumption. We assume that the person who makes the decision will look out for the welfare of others. Such an approach, while interesting, would not ensure that the decision was ethical. We know that the Nazis and the Bolsheviks were quite willing to apply their brutality to their own families.[7] They would, and did, execute their own family members because that is what was legal within their society. In other words, the Party or the Führer required it. The political authority in their society said it was legal and therefore ethical.[8] Leaving aside the ethical content of the Facebook Experiment, we have to consider whether there are any limits to manipulating or experimenting within the digital domain.

Does Facebook’s ethical emptiness reflect a deeper ethical problem within the digital domain?

The Facebook experiment awakens us to an ethical problem: the digital domain appears to have an ethical deficit. The deficit suggests that technologists lack robust ethical training. As one commentator noted, the Web needs a moral operating system. Access to information allows companies to control, manipulate, and influence service users and customers.[9] This concern is not new. Philosophers, such as Martin Heidegger, have worried about technology’s power to shape a person’s ethical perspective. Heidegger argued that the essence of technology creates a worldview that reduces man to a standing reserve. In this worldview, we are conditioned to think about the world, and subsequently man, as objects to be harvested or used, just as wheat is harvested or coal is mined as a resource. As a resource, man loses his ethical standing. Instead of a moral being with intrinsic worth, man is reduced to a resource to be experimented on or consumed.[10] Even though the philosophical issues raised by Heidegger are beyond our scope here, we need to consider the ethical training of technologists and the constraints on companies given their power over the individual. How can we trust Facebook, or any social media service provider, to act ethically?

Has the Facebook experiment been the moment we left the digital Garden of Eden?

The experiment has woken many people up to a central problem for the digital domain: trust. The question that will be asked is, “Why should we trust anything that Facebook or any internet service provider says about its willingness to protect and respect the user?” If the terms and conditions are our only protection, and we find that they are written as minimum legal compliance to defend the organisation rather than with any ethical content, then we face a challenge. How can users be sure that they will be treated fairly or ethically? Facebook demonstrated that it is willing to manipulate its users unconditionally, without concern for their explicit informed consent or their welfare. If, as has been suggested, every social media provider seeks to manipulate or harvest its users in some way, what does that say about our ethical life? Has this been an implicit desire within corporations and people that the digital domain has allowed to manifest itself? For some this may be considered the will to power. If we have to engage the social media world with that assumption, then we live in a problematic age. Leaving aside the technologically powerful, who can protect themselves and benefit from this ethically dubious behaviour, who will protect the vulnerable and the weak? Who will protect you if you cannot protect yourself? Such a situation suggests that we have returned to a digital state of nature in which the technologically strong (Facebook, Google, and hackers) do as they will and the technologically weak (the rest of us) do as we must.[11] What can be done?

What can be done?

If the digital state of nature is to be avoided, digital businesses will need to demonstrate that they treat their customers and clients ethically. Those companies that can show they act ethically, and will continue to act ethically, will have a comparative advantage. If customers believe the terms and conditions are written to take advantage of them for the company’s profit, they will seek other providers. To demonstrate ethical behaviour, and to regain trust, companies will need to be transparent about how they use customer data and about the ethical safeguards they apply. In particular, are they making decisions about customers with an explicit concern for ethical behaviour? They can do this with a code of ethics, a training programme focused on ethical behaviour, and a compliance system that drives ethical behaviour. If the web-based economy cannot ensure trust and respect for the dignity of the human person, it may not be sustainable. If we cannot trust our social media companies, then we face a further question about the trust needed to sustain the digital economy. Without the trust that sustains intangible property rights, an advanced capitalist economy becomes difficult to sustain.

The ethical dilemma is more than the digital domain.

From the Great Recession, we learned that firms within the financial industry demonstrated unethical behaviour. The Governor of the Bank of England and the Head of the International Monetary Fund made public statements on the need for the industry to improve its ethical behaviour.[12] What we found in the financial crisis was that companies rarely, if ever, considered the ethical impact of their decisions. It would appear that ethics were the first victim in the pursuit of profits. However, the danger is actually greater in the digital domain. There, the lack of ethical behaviour is not about money; it is about reducing humans to a resource to be consumed, manipulated, and experimented upon without apparent limit. The approach may be legal, but is it ethical? History has shown us what can happen when people are reduced to an administrative decision. We need to decide whether we want history to repeat itself or whether we are willing to have an ethical digital domain. What the companies may find is that they have unleashed an ethical contagion in which they become subject to the same brutal logic that they are willing to apply to their customers.

[1] The Facebook experiment: “In January 2012, for one week, Facebook deliberately manipulated the News Feeds of nearly 700,000 of its users as part of an experiment. News Feed is a constantly updating list of stories from people and pages that you follow on Facebook, and includes status updates, photos, videos, links as well as app activity.” (Accessed 24 July 2014)

[2] Even though people pointed to references to research in the terms and conditions, these terms were added after the experiment was conducted. On the problematic use of terms and conditions to signal consent, consider that the Nuremberg Code requires that human experiments be based on freely given and fully informed consent. http://www.ushmm.org/information/exhibitions/online-features/special-focus/doctors-trial/nuremberg-code (accessed 24 July 2014)

[3] An ex-Facebook employee who worked as a data scientist for the company suggested that the experiment would have been vetted by the legal team and the PR team and there was no internal review board for such decisions. http://www.forbes.com/sites/kashmirhill/2014/07/07/ex-facebook-data-scientist-every-facebook-user-is-part-of-an-experiment-at-some-point/ (accessed 24 July 2014)

[4] One need only consider that Kelsen quite infamously said that a despotic order or a tyranny was still a legal order:

It is altogether senseless to assert that no legal order exists in a despotism, but that the despot’s arbitrary will holds sway . . . after all, the despotically ruled state, too, represents some sort of ordering of human behavior. . . . This ordering is, precisely, the legal order. To deny it the character of law is only an instance of the naiveté or presumption of natural-right thinking. What is interpreted as arbitrariness is merely the autocrat’s legal ability to assume the power of making every decision . . . and to abolish or alter . . . previously established norms. . . . Such a condition is a legal one, even if felt to be disadvantageous. As a matter of fact, it also has its good points. This is shown quite clearly by the not at all unusual call for dictatorship in the modern state ruled by law. [Emphasis added] Quoted from Leo Strauss’s Natural Right and History, p. 4 n. 2, which quotes Kelsen’s Allgemeine Staatslehre (1925), found here: http://www.commentarymagazine.com/article/leo-strauss/ (accessed 7 July 2014)

[5] See Ethics for Technologists (and Facebook) in HBR blog Michael Schrage http://blogs.hbr.org/2014/07/ethics-for-technologists-and-facebook/ (Accessed 15 July 2014)

[6] See for example the ethical experiment in an MBA programme where students are asked what they would do if they were on the board of a pharmaceutical company and they found a drug killed 20 people a year. After a debate, they decided to export the drug and fight the FDA rather than withdraw the drug. When asked if they would want their doctor prescribing it, they all said no. http://www.ethikospublication.com/html/mbas.html

[7] The case of Otto Ohlendorf should raise concerns about measuring a decision’s ethical content by whether you would apply it to your family. Ohlendorf was in charge of Einsatzgruppe D, a mobile extermination unit on the Eastern front. http://en.wikipedia.org/wiki/Otto_Ohlendorf

When he faced trial at Nuremberg, he was asked the following question in different ways. http://www.gnosticliberationfront.com/otto_ohlendorf_testimony_at_nuremberg.htm

“I asked him now whether if he found his own flesh and blood within the Hitler Order in Russia, what would have been his judgment, would it have been moral to kill his own flesh and blood, or immoral.”

After a series of attempts to avoid answering, he finally replied: “If this demand would have been made to me under the same prerequisites that is within the framework of an order, which is absolutely necessary militarily, then I would have executed that order.”

[8] The Bolsheviks willingly confessed to crimes against the party in the show trials of the 1930s. They believed in the legitimacy and necessity of their cause *even though* they knew the charges against them were false.

[9] This TED talk by Damon Horowitz suggests that technologists need to improve their ethical training. https://www.ted.com/talks/damon_horowitz The talk itself raises a troubling spectre. The audience reaction suggested that people are no longer trained to be moral. Instead, they seem to operate with a crude moral system. When that crude moral system is placed within a corporation, with its demand for profits and a dominant culture in which the employee is encouraged to go along to get along, it is not surprising that unethical decisions occur. The problem is so pervasive and so present because the political-philosophical crisis of the West, in which it no longer believes in its founding principles of political philosophy enabling a moral life and a common good, is manifested explicitly within the digital domain. What has only been debated or discussed within philosophy departments is now everyday practice in the digital domain.

[10] See the recent essay by Mark Blitz Understanding Heidegger on Technology in The New Atlantis. http://www.thenewatlantis.com/publications/understanding-heidegger-on-technology (accessed 22 July 2014) In this essay Blitz reviews the recent publication of Heidegger’s other essays around his Question Concerning Technology.

[11] If such a digital state of nature exists, one has to ask why some activists want to constrain the state, which acts as the only legal defender of the weak and the vulnerable. If the state is limited in its ability to monitor the web because of increased encryption, and it is the only legal defender of the weak, then, as Facebook demonstrated its willingness to prey upon its service users, who benefits? Hackers demonstrate against the power of the state on the Web, yet they never seem able to explain who they will turn to, except the state, when predators like Facebook emerge.

[12] See their speeches at the Inclusive Capitalism conference on 27 May 2014. http://www.inclusivecapitalism.org/ Mark Carney explained that without ethics capitalism will disappear. http://www.bankofengland.co.uk/publications/Documents/speeches/2014/speech731.pdf (accessed 28 May 2014)  Christine Lagarde warned that to restore trust in the markets ethical norms needed to be strengthened. https://www.imf.org/external/np/speeches/2014/052714.htm (accessed 28 May 2014)


About lawrence serewicz

An American living and working in the UK trying to understand the American idea and explain it to others. The views in this blog are my own for better or worse.
