Is data protection fit for the ‘big data’ era?
The EU’s recent General Data Protection Regulation (adopted in 2016 and applicable from May 2018) builds on a 20-year-old regime: the 1995 European Data Protection Directive, which was implemented in the UK by the Data Protection Act 1998. It regulates the processing of personal data through restrictions on how such data – including social media data – can be recorded, stored, altered, used or disclosed. Under the DPA, personal data means data relating to a living individual who can be identified, either directly or indirectly, from the data or from other information held by the same organisation. An update of the regime was needed because of rapid developments in technology and the emergence of new legal and ethical issues across many fields.
For example, under its terms of service, Google could significantly influence an election by predicting which messages would engage an individual voter (positively or negatively) and then filtering content to influence that user’s vote. The predictions can be highly accurate, drawing on a user’s e-mail in their Google-provided Gmail account, their search history, their Google+ updates and social network connections, their online purchasing history through Google Wallet, and even data in their photograph collection. The filtering of information could include “recommended” videos on YouTube – videos selectively chosen to highlight where one political party agrees with the user’s views and where another disagrees. In Google News, articles could be given higher or lower visibility to steer voters towards “the right choice”. Such services could not only be sold; companies could also use them themselves to block the election of officials whose agenda runs contrary to their interests.

In the case of Facebook, we should remember that the company confirmed it sold more than $100,000 worth of political ads to Russian sources trying to sway the 2016 U.S. presidential election, and that leaked reports showed Facebook was giving advertisers the option to target users with keywords such as “Jew hater”. The company has since stated that it will invest more in machine learning to “better understand when to flag and take down ads” and will expand its advertising content policies to stop ads that use even “subtle expressions of violence”.

Such tactics had already been tested by Russia through major acts of cyber-enabled information warfare against rival states such as Estonia. According to analysts, certain patterns have emerged from these conflicts, allowing experts to sketch a rough model of the techniques Russia uses to destabilise its opponents: “First, people’s trust in one another is broken down. Then comes fear, followed by hatred, and finally, at some point, shots are fired”. The pattern was particularly striking in Crimea. People posted reports on Facebook about gross mistreatment by Ukrainians; dramatic messages circulated on Instagram about streams of refugees fleeing the country. Billboards suddenly appeared in Kiev bearing pro-Russian slogans; demonstrations followed. Rising suspicion and mutual mistrust split Ukrainian society, and within a matter of months fighting broke out. Russia then used the conflict as a pretext to send in “aid convoys”, presenting itself as a benevolent responder to an emergency.
The GDPR has improved the on-boarding process of giving consent, but it has not paid the same attention to the off-boarding side of things. Perhaps regulators do not realize how deeply integrated our lives have become with technology, and how much greater a responsibility private companies must assume in order to guarantee a democratic society.
Article written by Andrew Macsad.