“Oil changed the world in the 1900s. It drove cars, it drove the whole chemical industry,” said Intel CEO Brian Krzanich, adding that “Data, I look at it as the new oil. It’s going to change most industries across the board.”
Data is the latest commodity, spawning a lucrative, booming industry and prompting antitrust regulators to intervene to restrain those who control its flow. One can argue that, less than a century ago, control of oil was a key factor in who won World War II. Today, similar concerns are being raised about the giants that deal in data, the oil of the digital era. These tech titans, namely Alphabet (Google’s parent company), Amazon, Apple, Facebook and Microsoft, look invincible.
The need to regulate privacy and to set new data policies across the globe is studied thoroughly in the report entitled ‘Data Policy in the Fourth Industrial Revolution: Insights on personal data’, published by the World Economic Forum (WEF) in collaboration with the United Arab Emirates’ Ministry of Cabinet Affairs and the Future.
The Fourth Industrial Revolution and the data question
Many believe that we are on the cusp of the Fourth Industrial Revolution. Unlike its predecessors (steam and water power, electricity and assembly lines, and computerisation), it will pose a more radical change to the way we live, work and relate to one another, and may even challenge our concepts of what we consider human.
With the fast-paced adoption of cyber-physical systems, the Internet of Things (IoT) and the Internet of Systems, this revolution is expected to impact all disciplines, industries and economies.
In his book, The Fourth Industrial Revolution, Professor Klaus Schwab, founder and executive chairman of the WEF, describes the enormous potential of the technologies of the Fourth Industrial Revolution, as well as the possible risks: “The changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril. My concern, however, is that decision-makers are too often caught in traditional, linear (and non-disruptive) thinking, or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.”
These advancements promise to help countries boost economic growth, create jobs, reduce poverty, promote trade, and improve the quality of people’s lives.
However, the same technologies that can be used to improve health and medicine, enable personal interaction and engagement, and streamline the way governments provide services, can also be used to limit access to information, justify discrimination, restrict opportunity, and magnify an array of other harmful practices.
According to the WEF report, data is at the centre of this broad digital transformation, as it is collected, created, used, processed, analysed, shared, transferred, copied and stored in unprecedented ways and at an extraordinary speed and volume. By 2020, an estimated 50bn devices will be wirelessly connected to the internet.
Understanding the ways data is generated has become crucial for any effective governance, the report states, as billions of sensors that passively collect data (without individuals being aware of it) come online, and as computer analytics generate and synthesise more ‘bits about bytes’.
Data collection, creation, processing and sharing have become inevitable, whatever the source: data may be volunteered by individuals, observed from behaviour, inferred by organisations or obtained from third parties.
Meanwhile, the global regulatory landscape for data is increasingly complex, and regulations remain unclear. Currently, there are more than 120 different national laws governing the collection and use of data, with new laws imminent in the EU, China and Brazil. California, the home state of many major technology companies, recently passed a new data-protection law set to go into effect in 2020, and a national privacy law is now being seriously contemplated in the US.
“It’s important to note the potential impact of conflicting regulation and data-localisation requirements on digital trade and commerce, which is reliant upon cross-border data flows, and which helps distribute economic benefits across the globe,” the report indicates.
Hence the dilemma of how to address new technologies that fall outside existing regulatory frameworks, and why regulators around the world are experimenting with new approaches to data policy.
The pace of technological advances means that existing laws and regulations can quickly become obsolete, frustrating both customers and businesses seeking to access new innovations. However, individuals can also become concerned if they feel governments are not sufficiently protecting them from new risks.
“One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead,” according to Schwab.
Privacy complexity: data protection versus security
Against this backdrop, a range of issues and concerns frames the modern privacy debate, which raises ethical, technological, legal, economic, cultural and even philosophical questions. The complexity of the challenges does not mean that solutions cannot be developed. It does mean that the solutions are unlikely to be simple and straightforward.
The report summarises four main sources of confusion and tension surrounding the issue of privacy:
First are the semantics of privacy: the term covers a variety of overlapping harms, including, for example, the appropriation of a person’s picture or name for commercial advantage, surveillance of individual affairs, and public disclosure of private facts.
Second are power asymmetries: understanding the complex and inscrutable data flows within many global platforms is increasingly impractical for individuals.
Third are macro approaches to privacy: jurisdictions, countries and cultures address the identified harms in different ways, without any coordinated global policy.
Finally are micro perceptions of privacy: individuals display a range of inconsistent behaviours driven by individual choice and economic rationales, often saying one thing and doing another.
The report indicates that new approaches are needed to help policy-makers address this complexity, and to understand, navigate and simplify the challenges. Policy protocols must be considered together to understand how each decision interacts with, or influences, other decisions within a single data policy framework.
The report stresses the importance of the notion of privacy (the right to private life, data protection and confidentiality of communications), noting that the characterisation of privacy as a right necessarily implicates a range of values and norms that may vary from country to country.
“A country that places less emphasis on individual autonomy may not value ‘the right to privacy’ to the same extent as other nations, particularly with respect to the relationship between the individual and the state,” the report indicates.
Furthermore, the WEF believes that clear and cohesive data protection frameworks will provide commercial actors with regulatory certainty, as policies that are flexible, iterative, and adaptive can address some of the differing stakeholder perspectives.
On the other hand, the relationship between privacy and security also warrants clarification. Although the two terms overlap and are complementary, they are fundamentally different.
According to the report, information security concerns the confidentiality, integrity and availability of information. Privacy risks may result from authorised activity that is beyond the scope of information security.
Thus, protecting individuals’ privacy cannot be achieved solely by securing personal data.
Security involves protecting information from unauthorised access, use, disclosure, disruption, modification or destruction.
Privacy, on the other hand, is concerned with managing the risks to individuals associated with the creation, collection, use, processing, storage, maintenance, dissemination, disclosure or disposal of personal data.
The way forward: revisiting FIPPs and the dimensions of trust
Nearly 40 years ago, the Fair Information Practice Principles (FIPPs) were published as an early attempt to develop a shared vocabulary and a common set of principles. The FIPPs are the basis of most privacy laws and data-protection frameworks in effect today.
The report argues that the FIPPs require further consideration and refinement. As machine learning and artificial intelligence (AI) find new ways to leverage data in larger volumes, along with new forms of ubiquitous and ambient data collection through IoT and connected devices, models of consent must change and adapt.
“Reinterpreting the FIPPs, or simply evaluating them in light of new technologies, may serve to effectively modernize any FIPPs-based regulation currently in effect,” the report states.
Moreover, the WEF identifies the widening trust gap as one of the barriers to the fast-paced adoption of the technologies of the Fourth Industrial Revolution.
In recent years, trust concerns within the digital ecosystem have been on the rise. Security breaches, identity theft and fraud; concern from individuals and organisations about the accuracy and use of personal data; companies confused about what they can and cannot do; and increasing attention and sanctions from regulators are just some of the indicators.
In September, for example, Facebook notified users of a massive data breach affecting over 50 million people. In the same month, Google announced that it allows third-party apps to access and share data from Gmail accounts, a service with over 1.4 billion users globally.
In fact, in 2017, the global Edelman Trust Barometer recorded its biggest-ever drop in trust across the institutions surveyed (government, business, media and NGOs) compared to the previous year. In 2018, though outliers to this trend persisted, little had improved, and some important markers were even worse.
What can be done?
The report concludes that one of the directions that needs to be adopted is a risk-based approach to data, which means going well beyond legal compliance: embracing rigorous analysis and deliberation and, at times, confronting issues of ethical uncertainty.
The aim is to fully inform decision-makers of potential risks, so they are not intentionally, or accidentally, ignored. The process is designed to render decisions that are informed, deliberate, and human.
Moreover, the report specifies three potential outcomes:
First is that decision-makers agree there is a high probability that a material set of adverse consequences could occur, so the initiative is terminated regardless of the potential benefits.
Second is that decision-makers consider all of the potential risks and decide that the initiative will go forward as proposed. No changes to mitigate risk are made, as the risks are viewed as insignificant or the potential benefits are judged to outweigh them.
Third is the scenario where decision-makers take some steps to mitigate risk, and the project goes forward with full knowledge and acceptance of any residual risks.
Consequently, identifying and setting specific rules for categories of sensitive data is now a core part of nearly every data protection framework, according to the report.
The report highlights the importance of the sensitivity of the data in a given context, and the potential risk of adverse consequences or harms to individuals from the processing of that data, whether or not it is labelled as personal.
“Understanding the sensitivity of a data element, or given category, is important not only in the context of privacy but for data security, information governance, and risk management more generally. Once categories of sensitive data are identified, a framework must identify the implications of being labelled as such within a given framework. Higher standards regarding consent, security, and legitimate use may be appropriate. In some cases, the collection and use of certain sensitive information may be prohibited outright,” the report concludes.