Today, users are increasingly concerned about their privacy online. Nine out of ten Americans have doubts about the security of their personal data on the Internet, and according to an Intouch International study, 67% believe that laws should be written to protect them. With this in mind, companies that use consumer information to build tailored products and campaigns should place Artificial Intelligence oversight and data protection at the top of their list of concerns in order to maintain their customers’ trust.
In September 2018, Facebook fell victim to a cyberattack in which the personal data of more than 30 million users were compromised, an incident that fueled these suspicions. Another example is Google, which in July 2019 admitted to listening to 0.2% of the conversations users have with its virtual assistant in order to improve its services.
Artificial Intelligence and data protection: two worlds in conflict
Europe’s General Data Protection Regulation (GDPR), which became effective in May 2018, was the first large-scale effort to offer consumers greater legal protection of their data. And the trend is spreading: in the United States, the state of California is preparing a similar law, the California Consumer Privacy Act, which will affect almost 40 million Americans.
However, all these regulations face a direct conflict with reality: with the rise of advanced technologies such as Artificial Intelligence, the need to collect data is stronger than ever. After all, these systems feed on user information and learn from it. This learning process is known as Machine Learning, a technique that builds mathematical models from the analysis of large amounts of data, extracting correlations that are subsequently applied to new information. In this way, the tool can learn from its own experience, even without direct human input.
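As a minimal sketch of that learning loop, the example below fits a model to historical data and then applies it to new information; the use of scikit-learn and the synthetic data are illustrative choices, not tools mentioned in this article.

```python
# Minimal sketch: learn correlations from past data, apply them to new data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative "historical" data: two features per user and a past outcome.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Learning step: the model extracts correlations from the data on its own.
model = LogisticRegression().fit(X_train, y_train)

# Application step: the learned model is applied to new information.
X_new = rng.normal(size=(5, 2))
print(model.predict(X_new))
```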
So, is it possible to find a balance between what society demands, the needs of the industry, and the obligations of governments? Can legal structures help manage the conflict between Artificial Intelligence and data protection?
The solution isn’t simple. In this scenario, some believe that legislation such as the GDPR will act as a brake on technological progress whenever the information analyzed contains personal data. However, for this technology to advance efficiently, privacy must be treated as the foundation on which it is built, not as a ceiling that slows its growth.
Companies and organizations, which are much more agile and flexible than governments, have it in their hands to calm their consumers’ doubts by applying certain limits. Here are some of them:
1. Follow the guidelines set by the GDPR
Whether your company operates inside the European Union or outside it, this regulation can help you assess your actions and set limits. Its five most important pillars are listed below (a short code sketch of the last two follows the list):
- Artificial Intelligence systems or data collection must be transparent.
- The information collected from the user must have a specific meaning and purpose.
- The user must be adequately informed about what this information is going to be used for.
- Consumers must be able to unsubscribe from the system.
- The data must be deleted upon the consumer’s request.
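As one possible reading of the last two pillars, here is a hypothetical sketch of a data store that honors opt-out and erasure requests; the class and method names are invented for illustration and do not come from any real framework.

```python
# Hypothetical sketch: honoring opt-out (pillar 4) and erasure (pillar 5).
from dataclasses import dataclass, field

@dataclass
class UserDataStore:
    records: dict = field(default_factory=dict)  # user_id -> personal data
    consent: dict = field(default_factory=dict)  # user_id -> opted in?

    def opt_out(self, user_id: str) -> None:
        # Pillar 4: the consumer can unsubscribe from the system.
        self.consent[user_id] = False

    def erase(self, user_id: str) -> None:
        # Pillar 5: delete the data upon the consumer's request.
        self.records.pop(user_id, None)
        self.consent.pop(user_id, None)

store = UserDataStore()
store.records["alice"] = {"email": "alice@example.com"}
store.consent["alice"] = True
store.opt_out("alice")
store.erase("alice")
assert "alice" not in store.records
```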
2. Introduce control elements in the algorithms
The news, back in March 2018, that a driverless autonomous vehicle had killed a woman also raised many legal questions. Who was responsible for the incident? The car, its creators…?
Machine Learning is closely tied to automated decision-making based on profiling. When such a decision has legal effects on the data subject, or affects them in a similarly significant way, the data processing must be disclosed, and the data controller must in turn provide the data subject with meaningful information about the logic involved in the decision. This can be a complication: due to the nature of this technology, it is not always possible to determine how a result was reached or to explain it in an understandable way. This phenomenon is known as the black box problem: the process the system followed to reach its conclusion is unknown, and the why is given up in pursuit of the what.
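One common way to make "the logic involved" reportable is to use an interpretable model whose per-feature contributions can be read off directly. The sketch below does this with a logistic regression; the feature names, data, and loan-style decision are all invented for illustration.

```python
# Sketch: surfacing the logic behind an automated decision via an
# interpretable model. All names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "account_age"]
rng = np.random.default_rng(seed=1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)  # synthetic outcome

model = LogisticRegression().fit(X, y)

# Per-feature contribution to one applicant's decision: coefficient * value.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name}: {value:+.2f}")
print("decision:", "approved" if model.predict([applicant])[0] else "rejected")
```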
This has prompted many AI software development companies to start introducing control mechanisms into their algorithms. Some of the principles to follow are:
- Alignment of objectives: humans and machines must have a common understanding of the goal.
- Alignment of tasks: humans and machines must understand the limits of each party's decision space.
- Human-machine interface: there needs to be an environment in which the human and the machine coordinate in real time to build trust (see the sketch after this list).
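To make the last two principles concrete, here is a hypothetical human-in-the-loop sketch in which the machine decides only within an agreed decision space and defers to a person otherwise; the threshold and function names are illustrative assumptions, not a prescribed design.

```python
# Hypothetical human-in-the-loop check: the machine acts only inside its
# agreed decision space and escalates everything else to a person.
from typing import Optional

CONFIDENCE_THRESHOLD = 0.9  # agreed limit of the machine's decision space

def automated_decision(score: float) -> Optional[str]:
    """Return a decision when the model is confident enough, else None."""
    if score >= CONFIDENCE_THRESHOLD:
        return "approve"
    if score <= 1 - CONFIDENCE_THRESHOLD:
        return "reject"
    return None  # outside the machine's decision space

def decide(score: float) -> str:
    decision = automated_decision(score)
    if decision is None:
        return "escalated to human review"  # real-time human coordination
    return decision

print(decide(0.95))  # -> approve
print(decide(0.50))  # -> escalated to human review
```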
3. Designate people to be in charge of Artificial Intelligence and data protection
As the GDPR establishes that there must be a person within the company responsible for ensuring compliance with the regulation (the Data Protection Officer), it will soon be commonplace to see professionals specialized in data protection, privacy, security…
Having someone within the company who is responsible for carrying out good data-protection practices will be one of the keys to maintaining user confidence.
Similarly, such professionals will prosper in other areas: lawyers specialized in privacy, algorithm consultants, auditors…
In the end, it is only a matter of time before the needs of Artificial Intelligence and the ethical issues it raises fit together like the pieces of a puzzle. The demands of users, the limits of the law, and the will of companies will make it necessary, in the near future, for obligations and responsibilities to be much clearer.