Are Terms and Conditions there to take advantage of the user?

Facebook logo (Photo credit: Wikipedia)

The Facebook Experiment has upset many people.[1] In the experiment, conducted in 2012, Facebook manipulated the News Feeds of some of its users, filling them with positive or negative posts to study the effect on users’ moods. After the results were published, some defended the experiment on the ground that users had agreed to it in the terms and conditions: the experiment was legal because people had consented to those terms.[2] A related question, though, is whether the experiment, and Facebook’s behaviour, was ethical.

Defending the company is the first priority.

Although people have focused on the experiment’s ethics, this blog focuses on the ethical content of terms and conditions and on the ethics of Facebook’s decision to manipulate some of its users. The issue is a salient one, as Facebook is not alone in relying on terms and conditions to defend the legality and ethicality of how it treats customers. Terms and conditions are a legal document. Lawyers write them for a legal purpose. They explain the relationship between the company and its users and set out each party’s legal rights and obligations. From the company’s perspective, the terms and conditions protect the company.[3] On the surface, this is understandable. Yet we have to ask whether this should come at the price of ethical behaviour towards the service user. Does it suggest an ethos in which whatever is legal must be ethical? If we consider that the law is the lowest ethical common denominator within a community, and is not the final or sole determinant of the common good, then such a view is problematic.

Imagine if the Constitution were based on Facebook’s terms and conditions.

What would society look like if it were based on Facebook’s terms and conditions? Although it is a speculative question, it reminds us that our lives are shaped increasingly by our digital experiences and expectations. In turn, those experiences shape our approach to ethics and, increasingly, politics. People see their relationship with the political community or the government as similar to, if not the same as, their relationship with web service providers. Despite the influence of the digital domain, the Facebook Experiment returns us to a fundamental problem. Is the legal order always an ethical order?

History repeats itself when the legal order is the ethical order.

The Facebook Experiment, and the ethos it reflects, should remind us of a previous age. History has shown us a society with a similar approach to law and ethics: the Weimar Republic in Germany. At that time, legal positivists such as Hans Kelsen, along with others, helped to justify and sustain the Nazi legal order by explaining that the law was without normative content.[4] The content of the law did not determine its validity. In practice, this meant that any law was valid so long as the political authority within the state promulgated it. Even though we are demonstrably not in a political system similar to 1930s Germany, we find that the digital domain projects an ethos in which what is legal is ethical.

Terms and conditions may be legal, but are they ethical?

The issue we have to understand is the ethical content of the terms and conditions. To put the question directly: are terms and conditions written so that the company can take advantage of its clients and customers? Such an approach may be legal, but is it ethical? As the Facebook Experiment revealed, this is a debatable question. The deeper question is why the Facebook management team and its legal team failed to consider the ethical content of their terms and conditions and of the experiment. Both may be legal, but they are clearly not ethical. To change the terms and conditions to include a reference to research, and to believe that this substitutes for informed consent, suggests a gap between the ethical content and the legal content of their decisions and terms and conditions. It also suggests an ethical failure within the organisation. Yet Facebook is not alone in this approach.

Facebook is no different from many other social media service providers. Its terms and conditions are similar to other companies’. They meet legal requirements without an apparent concern for their ethical content. As long as they are legal, that is sufficient. If terms and conditions are written with this perspective, we have to consider whether other decisions are made this way. Do companies consider an ethical responsibility beyond the minimum of complying with the law or a legal obligation? Some might defend their approach by saying that the basic ethical responsibility is simply compliance with the law. However, the law is the lowest moral or ethical order within a society.

The law is a society’s lowest moral order.

The law is the lowest ethical framework within a society. It expresses the basic agreement about a form of justice that sustains a decent political life within a community. The laws create the foundation of the common good. However, the common good is not simply the highest good. Instead, it is formed in response to, and guided by, an approach to a higher good as revealed by rational enquiry into the best way to live. Even though this higher good is unobtainable by a society, it is not beyond an individual’s or a corporation’s understanding. In other words, corporations and citizens can act in ways that are ethically superior to simply complying with the law. Here we see the tension for Facebook. Facebook has fallen short of the ethical responsibility that we would expect from another person. It did not consider whether it would harm someone. To put it broadly, it would have had to consider whether its behaviour was just.

In considering the question of justice, or whether Facebook’s behaviour was just, we see that the “experiment” reveals something darker about human nature in the digital domain than we care to consider in our technologically empowered world. As one observer noted, the people conducting the “experiment” did not see its subjects as people like themselves. On the surface, this appears decisive. If they had been willing to see the experiment’s subjects as themselves or as their family and friends, they might have come to a better decision.[5] Here is a key failing of most ethical programmes. Ethical problems are usually seen as someone else’s problems, easily dismissed because “we do not know” these people.[6] If the people designing the experiment had been subjected to it, they might have changed their view. Such a view, though, is potentially problematic.

If we test an ethical decision by asking whether it would be applied to our own family, we make an assumption. We assume that the person who makes the decision will look out for the welfare of others. Such an approach, while interesting, would not ensure that the decision was ethical. We know that the Nazis and the Bolsheviks were quite willing to apply their brutality to their own families.[7] They would, and did, execute their own family members because that was what was legal within their society. In other words, the Party or the Führer required it. The political authority in their society said it was legal and therefore ethical.[8] Leaving aside the ethical content of the Facebook Experiment, we have to consider whether there are any limits to manipulation or experimentation within the digital domain.

Does Facebook’s ethical emptiness reflect a deeper ethical problem within the digital domain?

The Facebook Experiment awakens us to an ethical problem. The digital domain appears to have an ethical deficit, which suggests that technologists lack robust ethical training. As one commentator noted, the Web needs a moral operating system. Access to information allows companies to control, manipulate, and influence service users and customers.[9] This concern is not new. Philosophers such as Martin Heidegger have worried about technology’s power to shape a person’s ethical perspective. Heidegger argued that the essence of technology creates a worldview that reduces man to a standing reserve. In this worldview, we are conditioned to think about the world, and subsequently man, as objects to be harvested or used, just as wheat is harvested or coal is mined as a resource. As a resource, man loses his ethical standing. Instead of a moral being with intrinsic worth, man is reduced to a resource to be experimented on or consumed.[10] Even though the philosophical issues raised by Heidegger are beyond our scope here, we need to consider the ethical training of technologists and the constraints on companies given their power over the individual. How can we trust Facebook or any social media service provider to act ethically?

Has the Facebook experiment been the moment we left the digital Garden of Eden?

The experiment has woken many people up to a central problem for the digital domain: trust. The question that will be asked is, “Why should we trust anything that Facebook or any internet service provider says about its willingness to protect and respect the user?” If the terms and conditions are our only protection, and we find they are written as minimum legal compliance to defend the organisation, without ethical content, then we face a challenge. How can users be sure that they will be treated fairly or ethically? Facebook demonstrated that it is willing to manipulate its users unconditionally, without concern for their explicit informed consent or welfare. If, as has been suggested, every social media provider seeks to manipulate or harvest its users in some way, what does that say about our ethical life? Has this been an implicit desire within corporations and people that the digital domain has allowed to manifest itself? Some may consider this the will to power. If we have to engage the social media world with that assumption, then we live in a problematic age. Leaving aside the technologically powerful, who can protect themselves and benefit from this ethically dubious behaviour, who will protect the vulnerable and the weak? Who will protect you if you cannot protect yourself? Such a situation suggests that we have returned to a digital state of nature in which the technologically strong (Facebook, Google, and hackers) do as they will and the technologically weak (the rest of us) do as we must.[11] What can be done?

What can be done?

If the digital state of nature is to be avoided, digital businesses will need to demonstrate that they treat their customers and clients ethically. Those companies that can show they are acting ethically, and will continue to act ethically, will have a comparative advantage. If customers believe the terms and conditions are written to take advantage of them for the company’s profit, they will seek other providers. To demonstrate their ethical behaviour, and to regain trust, companies will need to be transparent about how they use customer data and about the ethical safeguards they have in place. In particular, are they making decisions about customers with an explicit concern for ethical behaviour? They can do this with a code of ethics, a training programme focused on ethical behaviour, and a compliance system that drives ethical behaviour. If the web-based economy cannot ensure trust and respect for the dignity of the human person, it may not be sustainable. If we cannot trust our social media companies, then we have a further question about the trust needed to sustain the digital economy. Without the trust that sustains intangible property rights, an advanced capitalist economy becomes difficult to sustain.

The ethical dilemma extends beyond the digital domain.

From the Great Recession, we learned that firms within the financial industry demonstrated unethical behaviour. The Governor of the Bank of England and the head of the International Monetary Fund made public statements on the need for the industry to improve its ethical behaviour.[12] What we found in the financial crisis was that companies rarely, if ever, considered the ethical impact of their decisions. It would appear that ethics was the first victim in the pursuit of profits. However, the danger is actually greater in the digital domain. There, the lack of ethical behaviour is not about money; it is about reducing humans to a resource to be consumed, manipulated, and experimented upon without apparent limit. The approach may have been legal, but is it ethical? History has shown us what can happen when people are reduced to an administrative decision. We need to decide whether we want history to repeat itself or whether we are willing to have an ethical digital domain. What companies may find is that they have unleashed an ethical contagion in which they become subject to the same brutal logic that they are willing to apply to their customers.

[1] The Facebook experiment: “In January 2012, for one week, Facebook deliberately manipulated the News Feeds of nearly 700,000 of its users as part of an experiment. News Feed is a constantly updating list of stories from people and pages that you follow on Facebook, and includes status updates, photos, videos, links as well as app activity.” (Accessed 24 July 2014)

[2] Even though people pointed to references to research in the terms and conditions, these terms were added after the experiment was conducted. On the problematic use of terms and conditions to signal consent, consider that the Nuremberg Code requires that human experiments be based on freely given and fully informed consent. (accessed 24 July 2014)

[3] An ex-Facebook employee who worked as a data scientist for the company suggested that the experiment would have been vetted by the legal team and the PR team and there was no internal review board for such decisions. (accessed 24 July 2014)

[4] One need only consider that Kelsen quite infamously said that a despotic order or a tyranny was still a legal order:

It is altogether senseless to assert that no legal order exists in a despotism, but that the despot’s arbitrary will holds sway . . . after all, the despotically ruled state, too, represents some sort of ordering of human behavior. . . . This ordering is, precisely, the legal order. To deny it the character of law is only an instance of the naiveté or presumption of natural-right thinking. What is interpreted as arbitrariness is merely the autocrat’s legal ability to assume the power of making every decision . . . and to abolish or alter . . . previously established norms. . . . Such a condition is a legal one, even if felt to be disadvantageous. As a matter of fact, it also has its good points. This is shown quite clearly by the not at all unusual call for dictatorship in the modern state ruled by law. [Emphasis added] Quoted from Leo Strauss’s Natural Right and History, p. 4 n. 2, which quotes Kelsen’s Allgemeine Staatslehre (1925), found here: accessed 7 July 2014

[5] See Michael Schrage, “Ethics for Technologists (and Facebook)”, HBR blog. (Accessed 15 July 2014)

[6] See for example the ethical experiment in an MBA programme where students are asked what they would do if they were on the board of a pharmaceutical company and they found a drug killed 20 people a year. After a debate, they decided to export the drug and fight the FDA rather than withdraw the drug. When asked if they would want their doctor prescribing it, they all said no.

[7] The case of Otto Ohlendorf should raise concerns about measuring a decision’s ethical content by whether you would apply it to your family. Ohlendorf was in charge of Einsatzgruppe D, a mobile extermination unit on the Eastern Front.

When he faced trial at Nuremberg he was asked the following question in different ways.

“I asked him now whether if he found his own flesh and blood within the Hitler Order in Russia, what would have been his judgment, would it have been moral to kill his own flesh and blood, or immoral.”

After a series of attempts to avoid answering, he finally replied: “If this demand would have been made to me under the same prerequisites, that is within the framework of an order, which is absolutely necessary militarily, then I would have executed that order.”

[8] The Bolsheviks willingly confessed to crimes against the party in the show trials of the 1930s. They believed in the legitimacy and necessity of their cause *even though* they knew the charges against them were false.

[9] This TED talk by Damon Horowitz suggests that technologists need to improve their ethical training. The talk itself raises a troubling spectre. The audience reaction suggested that people are no longer trained to be moral. Instead, they seem to operate with a crude moral system. When that crude moral system is placed within a corporation, with its demand for profits and a dominant culture in which the employee is encouraged to go along to get along, it is not surprising that unethical decisions occur. The problem is so pervasive and so present because the political-philosophical crisis of the West, in which it no longer believes in the founding principles by which political philosophy enables a moral life and a common good, is manifested explicitly within the digital domain. What has only been debated within philosophy departments is now everyday practice in the digital domain.

[10] See the recent essay by Mark Blitz, “Understanding Heidegger on Technology”, in The New Atlantis. (accessed 22 July 2014) In this essay Blitz reviews the recent publication of Heidegger’s other essays around his Question Concerning Technology.

[11] If such a digital state of nature exists, one has to ask why some activists want to constrain the state, which acts as the only legal defender of the weak and the vulnerable. If the state is limited in its ability to monitor the web because of increased encryption, and it is the only legal defender of the weak, then, given that Facebook demonstrated its willingness to prey upon its service users, who benefits? Hackers demonstrate against the power of the state on the Web, yet they never seem able to explain who they will turn to, except the state, when predators like Facebook emerge.

[12] See their speeches at the Inclusive Capitalism conference on 27 May 2014. Mark Carney explained that without ethics capitalism will disappear. (accessed 28 May 2014)  Christine Lagarde warned that to restore trust in the markets ethical norms needed to be strengthened. (accessed 28 May 2014)

Posted in compliance, learning organisation, management, privacy, Uncategorized

The myth of the transparent organisation.

Accountability vs. Responsibility (Photo credit: shareski)

We often hear that transparency is good for organisations, and organisations will even tout their transparency. In many cases, the organisations believe what they are doing is transparent. They publish information on a regular basis that describes decisions, financial positions, and future strategies. In this, transparency is a means to an end for the corporation: it appears to be a good corporate citizen. Here we see the beginning of the problem. The organisation wants to appear to be transparent. The appearance becomes the goal rather than the reality. This has two consequences: one external, the other internal.

Transparency becomes reputation management

We can see the external consequences very simply. In the external realm, the organisation sees transparency as an issue that affects its reputation. Transparency must be managed, like its reputation. The goal is not to be transparent but to appear transparent. The organisation will publish what best suits its interests and its reputation. Such an approach is not surprising. Human nature is such that we want others to see us as we see ourselves. We want to control how others view us. Transparency means that someone else can potentially see the organisation as it is rather than as it appears to be. Transparency in this sense can become a form of accountability, and it is hard to be accountable. For an organisation that focuses on its reputation, any transparency except that filtered and managed for appearance will threaten it. Such transparency will make the organisation accountable in a way it cannot control, and it will be resisted.

Vertical and horizontal transparency and accountability

The organisation sees transparency as a barrier to what it wants to do. One way to avoid the barrier is to claim it is already accountable. The problem, though, is that accountability can mean two things. The public will want vertical accountability; the organisation will want horizontal accountability. Vertical accountability refers to the audience that holds the organisation to account: the employees or the public. By contrast, horizontal accountability has a different audience: peers, such as the board or the regulators. The organisation does not exist, and does not hold power, whether corporate or political, in order to be held to account in ways that it cannot manage. Externally, the organisation wants to be seen for what it appears to be rather than what it is. This brings us to the internal consequences.

Transparency is difficult when you are opaque to yourself.

To manage its reputation, an organisation will become opaque to itself. The organisation will control what staff say to align with its reputation. The control is usually informal or cultural. Take, for instance, the public sector. Some public sector organisations publish their corporate management team minutes and transparency information. Some will publish more than the minimum because that is their culture. Others will publish the minimum and present it to suit their interests. If something is a sensitive topic, such as spending on consultants, the term “consultants” will be replaced with something less noticeable, like “professional services”. The organisation still considers itself transparent and accountable. Such a scenario may seem far-fetched. Perhaps it is. Yet it reflects a dysfunctional culture. The culture resists transparency. It may want to be praised for being open and transparent, but it resists accountability. Where this occurs, we often see a perception gap between senior managers, middle managers, and junior employees that creates perverse outcomes.

The perception gap creates perverse incentives.

The perception gap creates perverse outcomes in the following way. Senior managers agree a plan to deliver a widget in 10 days. They want to beat the target of 20 days. They tell the middle managers this without consulting them; the middle managers are “consulted” by being told the plan. The senior managers expect them to do as they are told. If the widget is to be delivered in 10 days, then it must be delivered in 10 days; it is for the middle manager to work out the details. The middle manager, in turn, has to deliver the 10-day target alongside their other work. To meet the target, the frontline team do perverse things. They ship a lower-quality good, or they massage the figures by counting delivery as the day shipped. The senior managers are pleased because they see the 10-day target being met. The frontline staff become disillusioned because they see the senior managers are out of touch. The middle managers lose respect because they cannot convince the senior managers the target is wrong, and they allow perverse outcomes so that they can show they can meet the targets.

Internal culture works to maintain appearances

The internal problem comes when the gap between appearance and reality becomes too great to manage. The desire to manage the external reputation, rather than let it reflect reality, infects the internal culture, where the same perception-perversion gap will occur. The issue is not the existence of a gap between appearance and reality but its size. As the gap increases, the internal culture becomes dysfunctional. In an extreme form, we can see this in the failure of Enron, where the image was maintained until it could not be maintained any longer and the market saw the company for the shell that it was. We may consider such cases aberrations; yet the underlying issue is that companies resist transparency that would show this gap, and their employees are trained to resist it.

“Do what is best for the company” hides the problems

In particular, the training to resist transparency can be seen in single-loop learning and blame avoidance. When a problem arises, managers act quickly to solve it. If the problem persists and threatens the appearance that “we are a good company at x”, the manager faces a choice. They have to explain a problem that threatens to undermine the reputation, the appearance, that the organisation is defending. Most employees want to be good employees and do what is good for the company. As a result, they may report the problem in a way that avoids blame without explaining that the reputation is wrong. In this regard, they do what is best for the company, or rather for the senior managers: they protect the reputation rather than explain the reality. If a junior officer tries to do the right thing and describe the reality, their senior managers may be embarrassed, or may seek to avoid blame by claiming that the junior officer “does not have all the details or the wider perspective on the issue”. When this occurs, the junior employee learns that it is better to deliver only as much transparency as the senior managers will accept.

Transparency if I am at risk; contain the crisis if the company is at risk

The culture changes so that employees embrace transparency when it affects or harms them personally. If it affects the organisation, the goal is to “contain” the crisis and limit transparency. The organisation’s reputation becomes the overriding goal for employees. For organisations, as for governments, silence protects them. They will resist anything that breaks that silence, especially if they cannot manage it. The organisation and its executives want transparency that they can manage. They want to decide how they are seen. The goal is to keep others from knowing the organisation as it is rather than as it appears. The transparent organisation, while well intentioned, becomes an exercise in reputation management rather than a change in culture or behaviour.

What is to be done to avoid the problem?

First, the organisation needs to work at being transparent to itself. This means it must have good internal communication so that bad news or news that contradicts the public reputation can be reported upwards.

Second, the organisation must align its reputation with its reality. If it is constantly seeking awards, the issue is whether it is in the business of winning awards or of delivering a superior product. The first is about reputation management; the second is about excellence.

Third, the organisation needs to focus on the outcomes that are best for the company, not just for senior managers. This is often the hardest part, as senior managers who hold power rarely like to be held to account.


Posted in compliance, corruption, learning organisation, transparency

How to write transparent investigation reports

Students photographing evidence in SUNY Canton’s Criminal Investigation program (Photo credit: Wikipedia)

In the age of Freedom of Information, public sector organisations, including the police, have to be prepared to respond to FOI requests about how they conduct investigations. For some organisations and some situations, the investigation report is made public, as in a public inquiry.[1] In many cases, the request will relate to an issue of public interest; in other cases, such as internal disciplinary matters, the case will not attract public interest. In those cases, the FOIA will be less likely to apply, because the personal data exemption (s.40(2) in the UK) will limit what can be disclosed. However, in cases where the public interest is high, the organisation may have to disclose some, if not most, of its investigation report, either under FOIA or as part of another regulatory requirement such as an Ombudsman investigation.[2] With that requirement in mind, it is a good idea to develop an investigation procedure and guidance that reflect the need for transparency after the investigation is completed. The benefits are twofold. First, you are likely to have a more robust investigation. Second, you are likely to be ready to be more transparent within your own organisation and, most importantly should the demand arise, to the public or regulator.

If the organisation is not prepared for FOIA, the way it conducts an investigation can appear to be a cover-up if it fails to follow these eight steps. In all cases, a balance must be struck between confidentiality, privacy, and the public interest. However, even if the investigation is not to be made public, the steps are important for the organisation to be transparent to itself within the legal confines of confidentiality.

First, draw up clear terms of reference for the investigation. You want the people doing the investigation, and those being investigated or otherwise involved, to understand what you are doing, why you are doing it, and how you are doing it. The same applies in a criminal investigation, where the subject has to know the charge against them and what they are under investigation for having done. If you are investigating the organisation’s own conduct because of a public complaint, you will need to let the complainant know the terms of reference in principle, even if you cannot provide all the details where that may prejudice the investigation. If you do not provide the terms of reference or the nature of the investigation, especially for a public complaint, you may create an expectation gap between what the complainant thinks you are investigating and what you are actually investigating.

After the investigation is completed, or as part of the final report, the terms of reference should be shared with all the people involved, subject to the FOIA caveats regarding confidentiality and prejudice to subsequent or ongoing investigations. In complaints about a service, rather than an individual, the terms of reference are likely to be implicit in the complaint. If they are not, it is important to let the complainant know what you are investigating. This is the first step to avoiding the appearance of a cover-up. If the organisation does not keep a copy of the terms of reference, or never has terms of reference, it can give the appearance of a less than robust approach to investigations. If the investigation is a simple complaint, then the complaint itself will be the terms of reference; in smaller organisations or basic investigations, this will be the case. Anything involving more than two people will likely need terms of reference, both to record what is being investigated and why and to explain the priority of interviews to the investigator. All of this bears in mind the critical point that, during any investigation, the disclosure of information relating to the investigation is on a need-to-know basis.

Second, set up a list of questions, themes, or issues to be explored that express the terms of reference. The questions should be enough to set the question map rather than a definitive list. The themes or issues that need to be covered can be disclosed even if the exact questions would reveal sources and methods that might prejudice an ongoing or future investigation. One caveat is that an investigation may take a number of iterations, so that questions asked in the first round influence the second round. As a mentor of mine once said, “Questions breed questions”: questions always lead to more questions, so one cannot determine all the possible issues before they emerge. At the same time, one would have to avoid disclosing any personal data, such as the names of people to be interviewed, or who has been interviewed, if doing so would prejudice the interview. If a copy cannot be shared because interviews are still being conducted, it should be shared as soon as the interviews are completed and it is no longer prejudicial to an investigation. In some cases, such as a disciplinary hearing or tribunal, the questions may be shared as part of the tribunal process. If the questions are not transparent after the event, it can give the appearance that the questions, and the outcome, were already determined. In other words, you are only asking for what you expect to find.

After the investigation, the questions may need to be disclosed in response to an FOIA request, because an investigation, especially one in the public interest, needs to be shown to be robust. In a small investigation, or one that does not attract a high degree of public interest, the questions or issues can be included in the terms of reference.

Even though the questions can be included in the terms of reference, it is best that they are drawn up separately, informed by the terms of reference rather than limited to them. The caveat here is a minor or small investigation, where the two can be combined.

Third, set a timetable for when the investigation is scheduled to be completed. This does not have to be set in stone, but it should be specific enough that people know the overall timetable for the investigation. No one likes to be involved in an open-ended investigation. Smaller investigations can have this set out clearly, as the issue may be easy to resolve. If the organisation cannot give a schedule for when the investigation is likely to be completed, it is a sure sign it cannot plan, and it will look as though a cover-up or a predetermined outcome is in place. The timetable will help keep the complainant informed: you can update them at certain points, or report that there is nothing to report if that is the case. This is especially important in complaints about a service.

Fourth, keep a list of the people interviewed and when they were interviewed. If the organisation cannot provide this list after the investigation, as required, it suggests that it is not organised and that the investigation was not well structured. Again, the issue arises after the report or the investigation is completed, as an FOIA request may ask the organisation to demonstrate that the appropriate people were interviewed. If an incident or complaint involved an officer and they were not interviewed, or other relevant people were not interviewed, this could prejudice the investigation. Even if the investigation is not to be made public, the organisation still needs to know, for its own transparency and accountability, how the investigation was conducted and who was interviewed.

Fifth, include something from the interviews within the report. Otherwise, it will appear that the report has not covered all the questions or drawn on the responses of all the people interviewed. If people are interviewed and their evidence is not included in the final investigation report, that will need to be explained in the report. In some cases, it may not be practical or wise to include the names of everyone interviewed, especially if there are confidential sources. The point is that the final report needs to tell the organisation what was found and what needs to be done.

After the investigation, an FOIA request may still require the organisation to withhold some of the report where it relates to personal data or confidential information. If the organisation interviews people but sees no need to include them in the final report, there may be an appearance of a cover-up or, at a minimum, of poor organisation. This can be overcome by keeping a full list for the organisation's own use and redacting it for disclosure in the public domain.

Sixth, the investigation report should guide the reader from the terms of reference to the recommendations. The reader should be able to follow from the report's terms of reference, through the questions, to the conclusions and on to the recommendations. A well-written report leads the reader step by step through this process. If the report does not follow the terms of reference, or the recommendations do not fit the questions, the report will raise more questions than it answers. A well-structured, clear report will demonstrate better transparency to the organisation and to the public.

Seventh, if the report has recommendations, there should be a follow-up action plan that shows how those recommendations are to be addressed. For any investigation report there should be a second report outlining the action plan for the recommendations from the investigation. If this does not exist, the complainant cannot be certain you are going to solve the problems that were identified. At the same time, they and others have no way to check that you have done what you recommended, or explained why you could not do what was recommended.

In a smaller investigation, this will not be needed, because the investigation's recommendations are likely to be the solution to the problem. In a larger organisation, or on an issue involving many people, there should be a clear action plan that the organisation can monitor to make sure it has completed what it promised to do.

Eighth, if at all possible, share all or most of the above with the person who made the complaint or raised the issue. At a bare minimum, this will help avoid the appearance of a cover-up, and it will demonstrate that you have done what the complainant asked. In a basic customer complaint, you need to tell them what went wrong, why it went wrong, and what you have done to fix it. The complainant may not need to see all the interviews and the investigation, even though the organisation may need that material for its own learning.

In more complex cases, such as where someone is the victim of a crime, it would be strange not to tell the victim what the organisation found and what it will do to make things right. This does not mean they receive the whole report or special access, but it is best to let victims know about the outcomes. For example, once the disciplinary hearings are finished and the investigation report is no longer as confidential, because the public interest has changed, the organisation should consider disclosing the full report, or as much as can be disclosed under the appropriate legislation. Again, this is driven by the public interest in the issue or the investigation. At a minimum, the organisation should be prepared to be transparent to the public and to itself.

Internally, the organisation needs a process to learn from each investigation, with learning outcomes circulated to all staff, if required, and more sensitive or more detailed information given to those with a need to know. For example, if an organisation investigates a fraud case, it will publicise that success without great detail for the public or general staff. However, it will likely circulate specific control improvements to those employees who have a need to know about the fraud and its consequences. The purpose of the investigation is to find the problem and fix it, or to assign blame if required for further criminal action; it should not be to avoid scrutiny or transparency. When the organisation shares information to learn from the investigation, it must still follow the duty of confidence to protect personal data from inappropriate or unauthorised disclosure.

Even if you do not end up sharing the information, for legal reasons, you should share it internally so that the organisation can learn from the issue. In all cases a balance must be struck: do not disclose so much that you kill the patient, but disclose enough that the public, if it is a public interest issue, and the organisation learn from the incident.

The eight steps might sound like common sense, but many public sector organisations do not prepare their investigations for transparency. As a result, they store up problems because they are transparent neither to themselves nor to the public. Because they are opaque to themselves, their investigations can appear, however unintentionally, to be a cover-up: they have not taken these steps or prepared for the possibility that they would have to disclose information relating to the investigation and its outcome. If an organisation does not follow these steps, it is a good indication that it is not a learning organisation. Most, if not all, of the points will be followed by organisations that want to learn from the complaint or the issue. If it is a small issue or complaint, most of the eight items will be covered by good customer service. In more complex cases, such as police or criminal investigations, a balance needs to be struck: there is a strong public interest in maintaining the integrity of the investigative process while demonstrating, if only to the regulator, that a robust investigation process satisfies the public interest. At a minimum, the eight steps will ensure the organisation is transparent to itself even if it is not transparent to anyone else.

I would like to thank Donna Boehme of the Compliance Strategists for comments on an earlier version of this post, published as 8 Steps to ensure your investigation does not appear to be a cover-up. I wish to thank her for her time and her comments. They improved the post by pointing out some errors and omissions. Any remaining mistakes are my own.

Compliance Strategists are a leading consulting firm based in the metropolitan New York area, specializing exclusively in compliance, ethics, risk and governance practice.


[1] See, for example, Serious Case Reviews: when a child dies or a serious outcome occurs in a safeguarding situation, a review has to be published. They are published with some personal data removed and confidentiality protected as required. The point is that they are now published, whereas previously they were not available to the public.

On the issue of public inquiries and royal commissions in the UK, see the following, as well as historical examples and the general issues of a public inquiry.

[2] See how the UK Local Government Ombudsman approaches investigations.


Enhanced by Zemanta
Posted in compliance, customer service, learning organisation, management, privacy

IAPP Privacy and Freedom: A review by Lawrence Serewicz (@lldzne)

Lawrence Serewicz:

Here is my review of Alan Westin’s book Privacy and Freedom. I would welcome your views on the review.
I would be particularly interested in what you think of the following thesis: privacy professionals have failed to deliver on the promise of privacy, as corporations show a disregard for it. The work of Westin and others, while well intentioned, has failed to deter the demand for personal data as a commodity, and this shows the weakness of privacy compliance work.

The book remains important which is why I think the questions need to be explored.


Originally posted on Blog Now:

The IAPP has republished Alan Westin's best-known book, Privacy and Freedom, which was first published in 1967. The new version is the same text, with several introductory essays added for a reader coming to it for the first time. The introductory essays, which include one by Westin on how he viewed his work and its impact, provide useful context for the author, the book, and its continuing relevance.


Although the introductory essays offer insight into the book's impact and the author's contribution to the privacy profession, a critical essay would have been welcome, because the privacy landscape has changed dramatically. The change is more than technological; it includes the change in cultural attitudes to privacy. Together, the cultural and technological changes have undermined his definition.

For most readers, Westin and his book are best known for providing a robust definition of privacy. His book…


Posted in Uncategorized

8 Steps to ensure your investigation does not appear to be a cover-up

This post has been removed as it has been superseded by the post How to write transparent investigation reports.

I would like to thank Donna Boehme of the Compliance Strategists for comments on 8 Steps to ensure your investigation does not appear to be a cover-up, as I have used them to revise the post. Any remaining mistakes are my own.



Posted in compliance, corruption, information management, learning organisation, local government, management

Has ECJ’s Google ruling made us forget there are other memories?

English: The Google search homepage, viewed in...

English: The Google search homepage, viewed in Google Chrome. (Photo credit: Wikipedia)

The recent ruling by the ECJ has raised some concerns about the right to be forgotten. Many commentators have suggested that the ruling means the right to be forgotten now exists. However, they have gotten ahead of themselves: the right to be forgotten, if it is to be created, will arrive when the EU's latest data protection legislation is agreed. The ruling creates a precedent, but it does not create a right. The issue, though, is neither the right to be forgotten nor the greater power to remove links, as these practical concerns hide the underlying issue. The focus on the search engine forgets that there are other memories, unaffected by this ruling, that intersect with the search engines. Moreover, the role of memory is important for representation: if people are not remembered, they cannot be represented.

What are the other memories?

There are three types of memories that dominate social media: Permanent, Corporate, and Individual.


The permanent memory is the state. The state is *the* record keeper: the state makes records, and records make the state. The state holds your permanent record.[1] It is also the holder of the "official record" (see link on accountability). You can see it is permanent in that you cannot erase your birth or your existence from a state's systems; it holds you in perpetuity. For instance, on marriage certificates in the United Kingdom, different places record different information about the parents, which suggests that the way the state "remembers" some people is a way of forgetting others.** In this sense, if a person is not remembered, they cannot be represented, which raises secondary questions about the nature of democracy and the institutions that represent individuals.


The corporate memory is the memory held by companies such as Google, Facebook, Experian, and Zurich. Individuals are captured by this memory when they interact with these companies as customers and provide information. The idea of a corporation goes back to the Middle Ages, when a corporation would have been a guild house that existed as an institution between the king and the individual. The ECJ ruling addresses this memory because Google is the one remembering the information, even though governments, corporations, or individuals may have supplied it.


The individual level of memory is whatever anyone can retain or remember personally or within their digital memory. Some commentators have explored how this memory is growing and challenging the other two. The web allows such memories and knowledge to be linked in ways that let individuals and corporations challenge the other types of memory. The initial political and social consequences have been dramatic, but they are still unfolding. States and corporations have succeeded through their ability to adapt to changes, and they are still developing their response to this challenge. At an immediate level, we can see the challenge from individual memories in the way people can use the web and their enhanced memory capacity to challenge the official history of events like Hillsborough and other incidents where an official version exists but is contested. The individual can create a memory and *share* it through links that challenge the state's role as a gatekeeper of knowledge and memory. The ECJ ruling has an indirect effect by limiting what can be found: it severs the link, but it does not remove the memory or the engram.

Our collective memory is more than Google. 

The discussion of the ECJ ruling has overlooked these memories. Instead, commentators and analysts have assumed that because Google's search engine will be changed, memories of genocides or disputed issues will disappear. However, this misses the wider context of memory within which even Google exists. One could say that the links Google holds are simply tears in an ocean of memory. The discussion of the three types of memory, though, only captures the surface, or public, view of memory. We need to look beneath that surface and remember that the public memory, while vast, is only a fraction of the private memory that exists.

Public memories are dwarfed by the private memories

For governments, there are the private memories held by the government and not known or seen by the citizen. This private memory is not limited to intelligence work or investigations by regulatory bodies like tax agencies. Instead, it means the memories created and used in the course of the government's work to deliver services to its citizens. These memories are created and used by the state without the citizen being aware of them, except perhaps through their effect. Note that this does not mean the state using such private memory to punish or coerce through blackmail or repression; I mean the bureaucratic shadow that all citizens have but may not recognise.

Do we owe our digital soul to the company store?

For corporations, the private memories can be the work they do with customer data for analytics or customer profiling. Recent news about actuaries and the work around health profiling has brought this to the public's awareness, and some people know of it because of the concerns over data mining and data analytics. Even so, it is often hidden from view. One need only look, in the UK, at the work of the demographics users group, which uses large data sets to profile people, just as the credit companies do, in work the public does not usually see. This is not to confuse memory and data, but to suggest that discussions about memory and links on Google overlook that memories are built from data. Data is the building block of memory.

Private memories to challenge the official record

Finally, the ruling misses the private memories of individuals, who can, as mentioned above, create rival memories to the state or the corporation. They can take screenshots or set up memory sites that would not be seen by the individual concerned. One effect of the ruling may be to encourage a private trade in such memories, where everyone has the potential to be an archivist or a private investigator. Here the ruling can never reach, and this is the fastest-growing and widest store of memory. While it is haphazard and less robust than either state or corporate memory, it is a reserve within which the state and the corporation exist, because individuals can use those private memories to rally others; they act like engrams within a society. The private memories become touchstones to remind people in a way that was previously limited to public archives or even privately controlled archives. The web allows private memories to become public, or at least accessible, in ways that were previously not available.

Forget about forgetting, we do not yet understand memory. 

Until we understand the full scale of memory, as well as its public and private faces, we cannot address the true concerns about privacy and autonomy. In that sense, the Google ruling will simply make us forget memories and what they mean. What we need to remember is that the ruling, and people's reaction to Google, is only a way in which the individual is trying to assert themselves within the community. One can argue quite persuasively that the ruling could eventually be applied to public archives to the extent that they are linked and searchable. Even though this is neither discussed nor considered in the ruling, the challenge to memory means that, in time, the public archives will become as contested as the web. The more they become available, the more the individual will assert their self-professed "right" to control their identity and the community's memory of them. In that sense, we are starting to see a new era of contested memories.


**I am grateful to Stephen Benham who made this point.

Scotland’s People <> “…, name and occupation of father, name and maiden name of mother, …”

Daily Telegraph <>

“North of the border, in Scotland, and in Northern Ireland, if you are getting married you will be asked to name both parents on your marriage documentation. So too, across the UK, if you are entering a civil partnership. But when it comes to marriages in England and Wales, mums are left off the official paperwork. The only, rare, exception, is if a mother has been authorised by a court as the ‘sole adopter’, then a couple can make a special request to have her name included, but without court papers, you are stuck.”

Petition on <>

[1] We must be careful to remember that there are other institutions that create and retain memories. A well-known institution is the church. As Jürgen Habermas pointed out in his book The Structural Transformation of the Public Sphere, the Church created a space between the state and the crown in terms of public representation. In these rival spaces, the individual could have their identity protected and represented. As the state expanded, the crown and the Church have, to some extent, receded as representative institutions. However, both remain viable memory stores to rival the state.

Posted in Uncategorized

Thoughts on the Trust, Risk, Information and the Law Conference (#TRILCon)

On the 29th of April, I attended the Trust, Risk, Information and the Law (TRIL) Conference in Winchester, hosted by the University of Winchester's Centre for Information Rights. The conference was well organised, with about 60 attendees. The day was structured around four sessions: the morning had the opening plenary and the first presentation session; the afternoon followed the same pattern with a plenary and a presentation session; the final session was the closing plenary. People live-tweeted from the event, and their tweets can be found under the hashtag #TRILcon on Twitter.

The opening plenary was by Matthew Reed, Chief Executive of the Children's Society, on "The role of trust and information in assessing risk and protecting the vulnerable." He gave an insight into how important information, and the trust of children, are to the Society's work. These issues resonated through the presentations, as trust is at the heart of concerns with data and surveillance. He spoke at length about child poverty, which helped the participants understand how large-scale data collection can build a better picture of child well-being, which in turn can be analysed for trends and other issues.

Questions to consider

An interesting question to consider from this presentation was how to understand the child as both a data subject and a legal person. We need to consider them an individual, a legal person, for data or information purposes, yet still regard them as a child in other contexts. In the context of the Data Protection Act (DPA), the test for a subject access request from a child usually relies upon the age of 12, at which point a data controller needs to consider whether the child can understand their own interests regarding the request. Yet society sets different ages for other legal acts: the age of sexual consent is 16, and the voting age is 18. At the same time, a child is a data subject from birth, even though an adult with parental responsibility will have a large influence on the child's access to data and their existence as a digital individual. A child in care therefore has to rely on the organisation or the state to act as their parent for data protection purposes.

The opening plenary helped set the stage for the first set of presentations. The schedule can be found here.

The conference had a number of presentation strands, with depth and variety. I attended my own panel: Surveillance, encryption, State secrets & fashion! The first paper was on Spain's transparency laws. It suggested that the political culture's view of transparency shaped the public's understanding of its success and its possible constraints. The challenge was whether the public could look beyond the headlines, given that the Spanish government appeared to have a greater influence over the Spanish media than is the case with the UK media.

My presentation was on Blinding the Leviathan: Encryption, Surveillance and the Digital State of Nature. In it, I argued that surveillance is necessary to fulfil the sovereign's fundamental responsibility and contract with the citizen. The sovereign is created to deliver public safety, and because it has the right to determine peace and war within the state, it must have the means to ensure that it is not threatened, which includes surveillance of the public space. I then suggested that the digital state of nature (DSON), which is similar to the state of nature that Hobbes argued man escaped by creating a sovereign, presents a new challenge. The DSON blurs the clear lines between domestic and foreign, public and private, and friend and enemy; therefore, the Leviathan's surveillance has to extend into these areas. Yet when individuals use encryption to thwart the state, it blinds the Leviathan and limits its ability to protect them. A blind Leviathan is still strong enough to deliver the benefits people want, their many and expanding rights, but unable to look into any areas that the individual, rather than the state, decides. The result, though, will not be increased freedom and autonomy but the opposite, as the state lacks the means to deliver the many and expanding rights of citizens.

The next presentation was excellent. The University of Winchester and the London College of Fashion collaborated on the paper. The multimedia presentation offered a fashion show to explore the ways in which wearable computing, like Google Glass and other devices, is changing how we hide from surveillance and the ways in which it enhances surveillance. A number of interesting points and ideas were presented on the ways that data, trust, risk, and information could and do intersect with our most intimate experiences.

Questions to consider

What is the relationship between fashion, our identity, and surveillance? If we wear various personae to fit within different contexts, does ubiquitous surveillance, through our lifestyle devices, penetrate those personae to reveal us? Our concern with surveillance may result in an iterative relationship, where technology defeats technology, so that fashion to thwart surveillance is only available to a few in much the same way haute couture is only available to a few.

Afternoon Plenary: Statistics

The afternoon plenary, a presentation by Professor Norman Fenton on "Improving probability and risk assessment in the law," looked at the use of statistics in law, in particular Bayes' theorem and likelihood tests. As the presenter explained, the problem with using statistics was not just that the public has difficulty understanding the maths; statistical experts often present the theorems and the inferences incorrectly, which creates problems. As many businesses, such as Amazon, use algorithms and Bayesian probability theory to profile customers based on their interactions and purchase trends, the session was useful beyond the law. Though focused on the use of statistics in law, it showed a wider application to fields such as behavioural advertising and other predictive systems that rely upon big data.
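A minimal sketch may help show the kind of error Bayes' theorem exposes. This example is my own illustration, not taken from the presentation, and the numbers are invented assumptions rather than real case data; it shows the classic mistake of confusing P(evidence | innocent) with P(innocent | evidence), sometimes called the prosecutor's fallacy.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' theorem for a binary hypothesis H: returns P(H | E)."""
    numerator = prior * likelihood_if_true
    # Total probability of seeing the evidence at all
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Suppose (illustrative figures) a forensic match occurs by chance for
# 1 in 10,000 innocent people, and the prior chance the suspect is
# guilty, drawn from a pool of 100,000, is 1 in 100,000.
p_guilty_given_match = posterior(
    prior=1 / 100_000,               # P(guilty) before the evidence
    likelihood_if_true=1.0,          # P(match | guilty)
    likelihood_if_false=1 / 10_000,  # P(match | innocent)
)

# The 0.01% chance-match rate is NOT the probability of innocence;
# the posterior probability of guilt here is only about 9%.
print(f"P(guilty | match) = {p_guilty_given_match:.1%}")
```

Under these assumed numbers the posterior comes out at roughly nine per cent, far from the near-certainty a naive reading of "1 in 10,000" suggests, which is exactly the kind of inference error the plenary described experts making.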

The afternoon presentations

I attended the session on Data linking, statistical disclosure control, Facebook privacy policies and the right to be forgotten.

The Facebook session looked at the problem of privacy statements being limited by what the customer can understand. In a survey of 100 university students (undergraduate and graduate), only 4% had read Facebook's privacy agreement. As a result, it may be difficult to assess how well these statements capture consent that is fully informed, specific, and freely given. Another problem highlighted by the paper was that privacy statements are usually written in English and then translated into a host country's language; a poor translation compounds the difficulty of understanding the consent. The user is then left vulnerable because they will not be aware of, or able to understand, the ways in which the privacy statement explains how their data is going to be used, stored, and potentially sold.

Questions to consider

An interesting question from this paper was the extent to which we take consent for granted in the digital domain. Even with well-designed privacy notices and opt-in or opt-out statements, how well do these capture consent, and could they really capture any future uses? The deeper problem, perhaps at a philosophical level, is how we demonstrate consent to other laws and to the government in general: we must give repeated and detailed consents when our data is used, yet our other behaviour, such as driving, does not attract the same requirements. We start to see a possible tension between the physical and digital domains.

The next paper, on big data and the right to be forgotten, offered an insight into whether we can be forgotten when large databases link data. Another problem was the tension between the digital person and the public person: a public act may be remembered or forgotten in a way that differs from the way a digital act is remembered or forgotten.

Questions to consider

In the digital age, who remembers determines whether something can be forgotten. The "official record" may be expunged, but the individual can now remember as well as the state can. Will the right to be forgotten extend to the private domain, where rival memories are created and maintained? If the concern about linking and big data relies upon data quality, can that quality be assured in the future? A further question is whether the linked data can resist or overcome strategies to muddle the history or paint a counter-narrative. In that sense, the session on statistics will help us determine whether the history we are reading, through the linked big data, is accurate.


The final plenary: [De]-anonymisation & technology panel

The final plenary brought together a number of speakers on this topic. Of particular interest was the presence of the ICO on the panel, as they had set the code for anonymisation and pseudonymisation; they pointed out that they were the first regulator in the EU to publish such a standard. The panel discussed the problems associated with the process and with making sure such data could not be re-identified by future, as yet unknown, methods.
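A small sketch can illustrate the panel's re-identification concern. This is my own illustrative example, not drawn from the ICO code: even when a direct identifier is replaced with a hash, the remaining quasi-identifiers (the field names and data below are invented) can be linked against an external source to recover who is who.

```python
import hashlib

def pseudonymise(record):
    """Replace the direct identifier ('name') with a short hash."""
    out = dict(record)
    out["id"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    del out["name"]
    return out

# A "released" data set with names removed but quasi-identifiers kept.
released = [pseudonymise(r) for r in [
    {"name": "Alice Smith", "postcode": "SO23", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Jones", "postcode": "SO22", "birth_year": 1975, "sex": "M"},
]]

# An attacker with an external source (say, an electoral roll) can
# match on the quasi-identifiers, despite the hashing.
external = {("SO23", 1980, "F"): "Alice Smith"}
for row in released:
    key = (row["postcode"], row["birth_year"], row["sex"])
    if key in external:
        print(f"Re-identified {external[key]} as pseudonym {row['id']}")
```

The point is the one the panel made: pseudonymisation removes the obvious identifier but does not, by itself, guarantee the data cannot be re-identified by linkage, now or by future methods.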

Questions to consider

Can the tension between useful, meaningful data and personally identifiable data be reconciled? The richer the personal data sets being used, the greater the potential to identify someone. Will the concern over data be mitigated by a natural law of data inertia or decay? Data quality cannot always be assured, so gaps and problems could render its use moot at worst or difficult at best. As the data decays or loses quality, can we be confident that any re-identification is correct?

Final thoughts

The conference was a success. I found the breadth of papers and presentations stimulating, and in my session I had a number of interested and insightful questions. All the papers sparked discussions and further ideas. The event was well managed and structured, and I would recommend that anyone involved with information governance attend future events. Having organised similar events, I appreciate the amount of work needed to host and run them. The Centre for Information Rights offered an excellent day with a lot of stimulating content and discussion, which is exactly what you want from a conference.


Posted in compliance, culture, data protection act, information management, privacy