Is data protection fit for the ‘big data’ era?


The EU’s General Data Protection Regulation (approved in 2016 and applicable from 2018) builds on a 20-year-old regime: the 1995 European Data Protection Directive, which was implemented in the UK by the 1998 Data Protection Act (DPA). It regulates the processing of personal data through restrictions on how such data, including social media data, can be recorded, stored, altered, used or disclosed. Under the DPA, personal data means data relating to a living individual who can be identified, either directly or indirectly, from the data or from other information held by the same organisation. Updating the regime was necessary because of rapid developments in technology and the emergence of new legal and ethical issues across many fields.

For example, under its terms of service Google could significantly influence an election by predicting which messages would engage an individual voter (positively or negatively) and then filtering content to influence that user’s vote. The predictions can be highly accurate, drawing on a user’s email in their Google-provided Gmail account, their search history, their Google+ updates and social network connections, their online purchasing history through Google Wallet, and even the data in their photograph collection. The filtering of information could include “recommended” videos on YouTube, selectively chosen to highlight where one political party agrees with the user’s views and where another disagrees. In Google News, articles could be given higher or lower visibility to help steer voters towards “the right choice”. Such services could not only be sold; companies could use them themselves to block the election of officials whose agenda runs contrary to their interests.

In the case of Facebook, we should remember that the company confirmed it sold more than $100,000 worth of political ads to Russian sources trying to sway the 2016 U.S. presidential election, and that leaked reports showed Facebook giving advertisers the option to target users with keywords such as “Jew hater”. The company has since stated that it will invest more in machine learning to “better understand when to flag and take down ads,” and will expand its advertising content policies to stop ads that use even “subtle expressions of violence”.

Such tactics had already been tested by Russia in major acts of cyber-enabled information warfare against rival states such as Estonia. According to analysts, certain patterns have emerged from these conflicts, allowing experts to draft a rough model of the techniques Russia uses to destabilise its opponents: “First, people’s trust in one another is broken down. Then comes fear, followed by hatred, and finally, at some point, shots are fired”. The pattern was particularly striking in Crimea. People posted reports on Facebook about gross mistreatment by Ukrainians; dramatic messages circulated on Instagram about streams of refugees fleeing the country. Billboards suddenly appeared in Kiev bearing pro-Russian slogans; demonstrations followed. Rising suspicion and mutual mistrust split Ukrainian society, and within months fighting broke out. Russia used the conflict as a pretext to send in “aid convoys,” presenting itself as a benevolent responder to an emergency.

The core of regulation is, and should be, the existence of consent. Under the DPA, individuals must give their consent for their personal data to be processed by an organisation, both at initial registration for a social media service and for any subsequent changes to the terms of use of the data. Since the EU’s new GDPR, this consent needs to be express rather than merely assumed. There are, however, further worries. The new European legislation has built on users’ right to withdraw consent (the ‘right to be forgotten’). This right does little to give users control over their personal information: it merely grants them a right to end the agreement they entered into upon joining the social network. Users’ information is removed, but at the price of being unable to continue using the social networking service. Furthermore, with social networking websites offering authorisation services to independent sites, a “forgotten” user loses the ability to access these third-party sites, and yet more personal information held on those sites may become orphaned in the process. For this reason, some defend the idea of more fine-grained controls, arguing that allowing users to remove specific pieces or clusters of personal information, without affecting their ability to use the social network, is essential if users are to have genuine control over their personal data.
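To make the idea of fine-grained control concrete, here is a minimal sketch of such a consent model. All names here (`UserDataStore`, `grant`, `withdraw`, the category labels) are hypothetical, not taken from any real platform’s API: the point is only that withdrawing consent for one category erases that category’s data while the account and every other category remain usable.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """One consent decision for one category of personal data."""
    category: str        # e.g. "location", "purchase_history"
    granted: bool
    updated_at: datetime


class UserDataStore:
    """Hypothetical store illustrating per-category consent withdrawal."""

    def __init__(self):
        self.consents = {}   # category -> ConsentRecord
        self.data = {}       # category -> list of stored items

    def grant(self, category):
        # Express (opt-in) consent, recorded with a timestamp.
        self.consents[category] = ConsentRecord(
            category, True, datetime.now(timezone.utc))

    def store(self, category, item):
        # Processing is only allowed while consent is active.
        rec = self.consents.get(category)
        if rec is None or not rec.granted:
            raise PermissionError(f"no active consent for {category!r}")
        self.data.setdefault(category, []).append(item)

    def withdraw(self, category):
        # Withdrawal erases only this category's data; the account
        # and all other categories are untouched.
        if category in self.consents:
            self.consents[category] = ConsentRecord(
                category, False, datetime.now(timezone.utc))
        self.data.pop(category, None)
```

Under this sketch, a user who withdraws consent for `"location"` keeps their purchase history and their account, in contrast to the all-or-nothing withdrawal the article criticises.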

The GDPR has improved the on-boarding side of consent-giving, but it has not paid the same attention to off-boarding. Perhaps regulators do not realise how deeply integrated our lives have become with technology, and how much greater a responsibility private companies must assume in order to guarantee a democratic society.

Article written by Andrew Macsad.

Ben Lin