On 21 April 2015, I attended the Trust, Risk, Information and the Law Conference (TRILCon) 2015, organised by the Centre for Information Rights. The theme was the Privacy Arms Race. My blog on the previous conference can be found here. A write-up by Paul Gibbons (author of the well-regarded FOI-Man blog) can be found here (accessed 25 April 2015).
The event, sponsored by Bond Dickinson, was a good mixture of academics and practitioners exploring the various challenges in the race for privacy against the increased capacity to broadcast, track, and find personal data.
Kieron O’Hara, who gave the opening talk in 2014, returned with an eloquent presentation on the Right to be Forgotten (RTBF) and the Google v Spain ruling. In that decision, he pointed out, we are not forgetting people so much as de-indexing them from search engines. Although free speech advocates worried that the ruling would stifle expression, it has less of an effect on speech and more of an effect on search systems. There will still be ways to find the information, for those interested, but it will not be as easy to find.
The ruling, as Prof O’Hara explained, may also act as a regulatory barrier for new entrants into the search engine business. New entrants will need to manage searches in response to the new regulatory burden as well as customer expectations. The costs and bureaucratic systems might be easy for Google to manage, but smaller or newer search engine companies might struggle.
Implications: What I found interesting from the talk was the way in which the ruling is a first step in setting the boundaries of the digital public space, a theme I will return to throughout this post. One way to look at the privacy arms race is that individuals are trying to become better digital citizens and expecting governments and corporations to behave digitally in line with their expectations. The RTBF is an example of the community, through the law, shaping the public memory. O’Hara explained that it is not a new right as such: societies have long controlled memory and recognised something like a right to be forgotten, for example by allowing historical cases to be expunged or removed from the record.
After the opening session, two breakout sessions were available.
One was stream 1B: The Information and Enforcement Battleground.
The papers were:
Privacy, Darknets, and the Consequences of Copyright Enforcement Andrew Black, David Komuves (University of Edinburgh)
The case of public documents of high public interest where private information can be found: a comparison between England and Spain. Pilar Cousido Gonzalez (University of Madrid, visiting Professor, University of Winchester)
Predicting your behaviour: the need for big data policies Andrew Kimble (Bond Dickinson LLP)
The second was 1A: The technology battleground.
I attended this session as I was presenting the second paper.
Possible Futures for personal data stores. Marion Oswald (University of Winchester) Kieron O’Hara, Max Van Kleek (University of Southampton) Dave Murray-Rust (University of Edinburgh)
Dave Murray-Rust presented the group paper, which explored how and why people lie online to protect their identity. The paper found that people often engage in this behaviour because privacy notices are too complex, too long, and too difficult to understand. When a privacy notice is poorly written or structured, concerns about how the organisation might use personal data rise, and people can become “privacy vigilantes” who protect their privacy by providing false or incomplete information. One solution the authors explored was the idea of a personal data store (PDS): people could hold their personal data and set access controls so that when they visited sites or used devices the settings were already in place. An alternative design was a PDS that contains links to the personal data, acting as a conduit to where the data is stored rather than a store in itself. The goal is a system that enables the user to manage their multiple identities across the various systems.
What underpinned this paper was the idea that people have multiple identities depending on the context. A person can be a member of a demographic group and a member of a voluntary group, and manage their identity within each. Another concern was who owns the data and whether the law on data ownership will change. At the moment, we do not own our personal data. [See the ruling in Fairstar.]
Implications: Can the user bundle their identities together without creating their own security-service dossier? By collecting the information together so that they can manage it, the user also makes it easier for anyone who gains access to profile them. Even if the PDS is encrypted, it provides a focal point that gives a near-instant understanding of a person across all their platforms. Does the benefit of convenience outweigh the surveillance or identity theft threats?
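No implementation was presented in the paper, but the access-control idea behind a PDS can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, with all names hypothetical: a store that holds a user's data and releases to each site only the fields its policy permits.

```python
# Minimal sketch of a personal data store (PDS) with per-site access
# policies. Illustrative only; names and design are not from the paper.

class PersonalDataStore:
    def __init__(self, data):
        self.data = data      # the user's personal data (field -> value)
        self.policies = {}    # site -> set of fields it may read

    def grant(self, site, fields):
        """Allow a site to read only the named fields."""
        self.policies.setdefault(site, set()).update(fields)

    def release(self, site):
        """Return only the data this site is permitted to see."""
        allowed = self.policies.get(site, set())
        return {k: v for k, v in self.data.items() if k in allowed}

# The user sets the controls once; every site then sees only its slice.
pds = PersonalDataStore({"name": "Alice", "email": "a@example.org", "age": 42})
pds.grant("shop.example", {"email"})
print(pds.release("shop.example"))     # {'email': 'a@example.org'}
print(pds.release("tracker.example"))  # {} - no policy, nothing released
```

The "conduit" variant discussed in the paper would differ only in that the dictionary would hold links to where the data lives rather than the data itself.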
I gave the second presentation.
Esotericism or Encryption: Can technology protect philosophy?
I argued that state surveillance and persecution are threats coeval with political life. Individuals who wanted to protect their privacy from the state or from the community have used various devices, and I compared and contrasted the practice of esotericism with encryption. The key change for esotericism was the rise of the modern state and the Enlightenment idea of free speech: the ability to speak openly and be tolerated reduced the need for esotericism because persecution was reduced. In the digital age, however, persecution has returned and individuals use encryption to protect themselves. The problem for both esotericism and encryption is that technology can defeat both. For a short summary of the argument see this blog post https://lawrenceserewicz.wordpress.com/2015/04/24/persecution-and-the-art-of-writing-the-return-to-an-ancient-problem/ (accessed 25 April 2015).
The result is a deeper problem. If philosophy is a radically private act and the Internet of Things threatens that privacy, is philosophy possible? If private philosophizing is not possible, so that all philosophizing has to be done publicly, then it becomes a political activity. In that domain, the technological state can entrench the status quo because it can use algorithms and computer-assisted linguistic analysis to find ideas and speakers who threaten it. Any change will be filtered through the technological status quo, which might moderate most change but also make some changes impossible. For a short blog post on this point see https://mediameditations.wordpress.com/2015/04/23/freedom-of-thought-and-the-public-domain/ (accessed 25 April 2015). All of this, however, assumes that the status quo is just and desirable. If it is not, then the ability to change it might be stifled. In a sense, the technological status quo encourages a dogmatic belief in itself that becomes tyrannical.
Implications: The struggle for privacy is part of a wider struggle of the individual against the community or the state. The danger in a search for privacy is that we surrender the public sphere to retain our privacy. In that sense, we ask the public to accept and condone our private lives and acts in exchange for our accepting the political status quo. If we trade public life for privacy, do we retain any basis to shape public life or to resist the state’s power, beyond defending the private life it might challenge? We would thereby accept the political status quo so long as it remains benign towards our private lives.
To wear it or not – that is the question! Melanie Eberlein-Scott, Spencer Wood (Facts International)
Wearable technology is the new black. Melanie discussed how wearable technology is becoming both a fashion item and a powerful tool for the marketing and health industries. Spencer was not able to make the conference, but provided a video clip for his parts of the presentation. The intersection of fashion and technology is telling because it shows how individuals, and society, seek to adapt to new technology. At TRILCon14, there was an interesting presentation on fashion and its response to surveillance.
This presentation went beyond the previous one in describing devices already in use and the services that rely on them. Although the focus was on benign technology, devices the user wants to wear rather than is required to wear, the technology has its origins in surveillance: the earliest wearable monitoring technology was military (global positioning systems) and law enforcement (prisoner tracking bracelets). The technology is becoming fashionable because it provides health and consumer data that allows for an individualised or personalised service. It allows us to track our performance and monitor our status. For marketing firms the technology can provide valuable insight, as companies can see in real time the effect of product placement and advertising. The wearable technology field is growing, with Google Glass and Apple Watch leading the way. It remains to be seen, though, whether Apple Watch can ignite interest beyond the low-key response to Google Glass.
Implications: Wearable technology will only increase. Even though the technology might not broadcast user information, it does provide a ready store that hackers or other interested parties would want. One point that was raised was whether the wearable technology could be used to modify behaviour or ensure compliance such as with health information for insurance purposes that limit the user’s freedom. Despite these concerns, the field looks to grow as there are many applications for industries seeking comparative advantages and users who want to benefit from the personalised services such as health monitoring or consumer advice that it can provide.
Lunch Plenary Address
After lunch, the plenary session was Professor Hankin discussing the Foresight report on Future Identities. https://www.gov.uk/government/publications/future-identities-changing-identities-in-the-uk (accessed 25 April 2015) The report looked at what goes into online identities and recognized that there is not a single, unchangeable identity for an individual. The report was the product of this research project: https://www.gov.uk/government/collections/future-of-identity (accessed 25 April 2015). The site is worth visiting because it contains the 20 background papers prepared to support the final report.
Implications: The future of identity will have to find a way to place the individual within a wider context even as it seeks to individualise or personalise identity and services. Someone’s group identity might have a greater impact on their behaviour than their individual identity.
After the plenary session, there were two choices. The first stream covered the regulatory battleground.
The expanding scope of the Data Protection Directive: The exception for a “purely personal or household activity” Oliver Butler (University of Cambridge)
Will Export Control Regulations Change the Way Corporations Use Cloud Computing Services? John Eustice, Timothy O’Toole (Miller & Chevalier)
The second stream, which I attended, was the online battleground.
A right to an online private life for employees? Megan Pearson (University of Winchester)
She discussed how employee privacy has evolved as the social media age has allowed surveillance to expand into time away from work, and how private life can affect professional life. The presentation covered the many employees who have been fired or punished for comments on social media, with cases based mainly on bringing the company into disrepute. A main issue was how the company became aware of the material: often it was an anonymous tip-off, or the comment had received wider circulation.
What was interesting was the expectation within the audience that people either should know better or that the issues could be solved by a company being clear about its social media policies. Also interesting was that the sensitive material was often broadcast or provided by friends, not by the state or the company monitoring the employee or the web. In some cases, there are people who seek out such comments or pictures to amplify them or report them to the company.
Implications: The danger to employees is similar to that faced by the average user who finds their tweet, post, or picture broadcast across the web. The issue was the balance between an organisation’s concern about its corporate reputation if an employee brought it into disrepute and the individual’s social media life away from work. What is emerging, though, is that companies and individuals are learning from these cases. We might be starting to see people become better digital citizens as they learn how to behave online and come to understand the boundaries of acceptable and unacceptable behaviour, much as we learn them in the public domain from childhood. If we accept that the digital domain is still in its relative infancy, with people able to publish directly to a global audience, then the law, society, and individuals are adapting quickly to develop acceptable social conduct.
The final plenary session was themed on Data protection & the right-to-be-forgotten – the future?
Is data protection law the new defamation? Judith Townend (Director, Centre for Law and Information Policy, Institute of Advanced Legal Studies)
Judith explored whether data protection law and the increased emphasis on privacy would have a chilling or deterrent effect on journalism, as it was suggested defamation law did. The results suggest that it has not had such an effect, in large part because of the exceptions that exist and the focus on other issues. The research showed that privacy and privacy-related issues had a much higher profile. However, in light of Leveson’s promised review of data protection and journalism, which has not yet arrived, it remains an open issue. Even though the ICO has published guidance on the subject, it is not statutory guidance. An interesting finding from the paper was that news organisations were receiving more requests for personal data, which suggests that the public now has a greater awareness of its information rights.
Implications: Even though the paper did not find a positive result, it did raise a point of continuing friction. Leveson’s promised review of journalism and data protection might have been overtaken by events. The Vidal-Hall v Google ruling reaffirmed the tort of misuse of private information. Although journalism is not directly affected, it does raise the bar for those who might access information in the hope of providing it to the media. A second issue is that the Right to be Forgotten is influencing the way that Google, which appears to want to increase its role as a news organisation, manages its search engine. As media companies have to rely on search engines, regulation of search engines can have a knock-on effect on journalism.
Where now for the right to be forgotten? Iain Bourne (Group Manager, Policy Delivery, Information Commissioner’s Office)
Iain discussed how the ICO is handling RTBF complaints, that is, complaints lodged by people whose de-indexing requests Google has refused. What the ICO found was that, of the thousands of requests Google has received, very few have led to an ICO complaint. The low conversion rate could mean either that Google has an effective procedure or that people are unwilling to appeal. One point to consider from the presentation was that Google v Spain focused on privacy rather than on the context of the search, even though not every search is the same. The presentation explored the ICO’s published criteria and the challenges, especially around information covering allegations of criminal offences. EU member states differ on this issue, with different approaches to what is available and what is not. The process is not seamless or without flaws: the web’s global reach means that mirror sites can host information outside the EU’s jurisdiction, and there are sites and organisations devoted to retaining links to data that has been de-indexed.
Implications: The low conversion rate suggests that the RTBF is having the desired effect. Borderline cases seem to be handled well by Google, and the clear-cut cases are handled easily. Even people who might have cause to appeal appear not to be appealing, because of the response provided by Google. What needs to be explored is whether the RTBF is having an effect on what people publish and on the way search algorithms are designed. Although more sites are changing their systems so that search engine robots cannot scan inside the site, it is uncertain whether there has been an effect on what is published. The challenge is whether the RTBF can be scaled to a global “right”. Even if it is, the effort to enforce it in all but the most obvious cases may prove difficult. However, it does signal intent, and it starts to shape the public domain, what is published, and how search engine companies operate in the new regulatory environment.
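For readers unfamiliar with how sites keep search engine robots out, the usual mechanism is the Robots Exclusion Protocol: a robots.txt file at the site root asks crawlers not to scan listed paths. A hypothetical example (the path is illustrative):

```
# Hypothetical robots.txt asking all crawlers to skip an archive section
User-agent: *
Disallow: /archive/
```

Compliance is voluntary on the crawler's part, which mirrors the enforcement gap noted above: exclusion and de-indexing depend on the cooperation of search engine operators, not on any technical guarantee.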
Trust as Collateral Damage in the Privacy Arms Race – the right to be forgotten Alastair McCapra, (Chief Executive, Chartered Institute of Public Relations)
The final paper of the plenary session looked at whether trust is possible in light of the right to be forgotten. Alastair discussed how the RTBF is affecting trust in the web as a record keeper and whether that record is being shaped by such requests. He looked at how Wikipedia entries have been affected by RTBF take-down requests. Another area of concern was that the RTBF gives the false impression that PR disasters can be “cleaned up”. The RTBF does not change the underlying issue, and it creates a false expectation that such issues can simply be made to disappear.
Implications: The continuing struggle between privacy and transparency, between the desire to control information and the desire to make it free, has consequences for how individuals will use the web. The right to be forgotten is the first attempt to manage the public domain in a way that responds to the individual and gives them some control over their personal data.
Overall, I found the day stimulating and full of interesting ideas. The presentations were excellent and they sparked good discussions with the audience. The organisers are to be commended for running a seamless event. The event has developed from its first year and I look forward to the next one. If you get a chance, visit the Centre for Information Rights site.
‘Electronic surveillance, fashion, marketing & the law’ Savithri Bartlett (University of Winchester), Matteo Montecchi (University of the Arts London, London College of Fashion) and Marion Oswald (University of Winchester)