With the May 2018 deadline for compliance with the General Data Protection Regulation (GDPR) inching closer, U.S. multinational companies have been eagerly awaiting guidance from the Article 29 Working Party (WP29) on key provisions, such as the use of algorithms to make processing decisions, the new 72-hour response period for data breaches, the meaning of consent under the GDPR, and the appointment of a Data Protection Officer. Over the next few weeks, we will be providing our analysis of recent WP29 guidance.

Today, we begin with new guidelines (the “Guidelines”) addressing the use of algorithmic processing engines – what the GDPR calls “automated decision-making.” According to the Guidelines, profiling is an automated form of processing, carried out on personal data, with the objective of evaluating personal aspects of a natural person.

Companies use algorithms for a wide range of business processes. If, for example, your company compiles personal data of EU individuals for marketing purposes, uses applicants’ personal attributes such as credit rating to decide whether to extend a loan, or tracks consumers’ online behavior through cookies, the GDPR’s new rules will apply. This is true even for U.S. companies that have no physical presence in the EU, if they market to or collect data from EU residents.

A Ban on Online Profiling?

Generally speaking, the Guidelines prohibit entities from using “solely” automated profiling to make decisions that have a legal or “similarly significant” effect on EU residents. “Legal effects” are broadly defined to mean effects on a person’s legal rights or legal status, such as entitlement to a social benefit granted by law, admission at a border, or increased security measures or surveillance by the competent authorities.

A decision has a “similarly significant effect” if it has the potential to significantly influence the circumstances, behavior, or choices of the individual. Examples include the automatic refusal of an online credit application or e-recruiting practices without any human intervention. Online behavioral advertising may fall into this category where it is intrusive or targets individuals’ vulnerabilities – for example, regularly showing online gambling advertisements to individuals in financial difficulty, who may then sign up for those offers and incur further debt.

There are exceptions to the prohibition on solely automated decision-making. The GDPR permits automated processing if it is:

  1. necessary for entering into, or performance of, a contract between the individual and a data controller (although the WP29 makes it clear that this must be truly necessary and not just incidental to the contract);
  2. authorized by EU or Member State law; or
  3. based on the individual’s explicit consent (which the Guidelines suggest will be difficult in practice to obtain; a sketch of one way such consent might be recorded follows this list).
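
For illustration only, here is a minimal Python sketch of what recording explicit, purpose-specific consent might look like; the ConsentRecord structure and function names are hypothetical, and nothing in the Guidelines prescribes a particular implementation.

```python
# Hypothetical sketch of recording explicit consent for automated
# decision-making (illustrative only; not legal advice).
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                      # the specific processing purpose consented to
    granted_at: datetime
    withdrawn_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None


def record_explicit_consent(subject_id: str, purpose: str) -> ConsentRecord:
    """Store an affirmative, purpose-specific opt-in (never a pre-ticked box)."""
    return ConsentRecord(subject_id, purpose, granted_at=datetime.now(timezone.utc))


def withdraw_consent(record: ConsentRecord) -> None:
    """Withdrawal must be as easy as granting; processing should stop afterward."""
    record.withdrawn_at = datetime.now(timezone.utc)
```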

One mechanism for avoiding the ban on solely automated processing is for companies to introduce a layer of human oversight into the processing pipeline. The Guidelines address this possibility by clarifying that human oversight must be meaningful, and not just a token gesture. It should be carried out by someone who has access to all available data and has the authority and competence to change the decision.
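
To make that standard concrete, the hypothetical Python sketch below keeps the model in an advisory role and leaves the final decision to a reviewer who sees the full record and can override the suggestion. All names and the scoring placeholder are invented for the example, not drawn from the Guidelines.

```python
# Hypothetical human-in-the-loop pipeline: the model recommends, a human decides.
from dataclasses import dataclass


@dataclass
class Recommendation:
    subject_id: str
    score: float
    suggested_outcome: str            # e.g., "approve" or "decline"


def model_recommend(application: dict) -> Recommendation:
    score = 0.5                       # placeholder for a real scoring model
    outcome = "approve" if score >= 0.5 else "decline"
    return Recommendation(application["subject_id"], score, outcome)


def final_decision(rec: Recommendation, reviewer_outcome: str, reviewer_id: str) -> dict:
    """The reviewer's judgment controls. Rubber-stamping the model's output,
    without the authority or competence to change it, would not count as
    meaningful oversight."""
    return {
        "subject_id": rec.subject_id,
        "outcome": reviewer_outcome,               # may differ from the model's suggestion
        "model_suggestion": rec.suggested_outcome,
        "decided_by": reviewer_id,                 # audit trail: a human, not the model
    }
```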

Because of how narrowly the exceptions listed above will apply in practice, some have described the GDPR as imposing a de facto ban on online profiling. While this isn’t quite true – there will still be many solely automated decisions that produce no legal or similarly significant effects – there is no question that companies must carefully consider their use of automated processing engines that affect EU residents.

General Profiling

Even if the automated decision-making falls under one of the exceptions listed above, data controllers must, at the time the data is collected:

  1. tell the data subject that this type of processing is taking place;
  2. provide meaningful information about the logic involved; and
  3. explain the significance and envisaged consequences of the processing.

One of the biggest challenges for companies will be to devise language that provides meaningful information about the logic used for automated decision-making without revealing proprietary or competitively sensitive information.
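
One commonly discussed approach is to describe the principal factors behind a decision in plain language rather than exposing model internals. The Python sketch below is a hypothetical illustration of that idea; the factor names and wording are invented, not drawn from the Guidelines.

```python
# Hypothetical plain-language explanation of an automated decision; the
# factors shown are illustrative, not a disclosure of model internals.
def explain_decision(top_factors: list[str]) -> str:
    """Summarize the main inputs considered, plus the data subject's rights."""
    considered = ", ".join(top_factors)
    return (
        "This decision was made by automated means. The main factors "
        f"considered were: {considered}. You have the right to obtain human "
        "intervention, to express your point of view, and to contest the decision."
    )


print(explain_decision(["payment history", "current debt level", "length of credit history"]))
```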

Data controllers will also face challenges in complying with the GDPR’s requirements for “data minimization” (personal data can only be collected when necessary for a stated purpose), “purpose limitation” (personal data may only be used for that purpose) and “storage limitation” (personal data may only be retained for as long as necessary to achieve the stated purpose). This may entail operational adjustments for many companies.
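
By way of illustration, a storage-limitation rule might be enforced with a purge routine keyed to the purpose for which the data was collected. The sketch below is hypothetical; the purpose-to-retention mapping is invented, and actual retention periods must be set and documented by the controller.

```python
# Hypothetical storage-limitation enforcement: drop records once the
# retention period tied to their stated purpose has elapsed.
from datetime import datetime, timedelta, timezone

RETENTION = {                          # illustrative periods only
    "credit_decision": timedelta(days=365),
    "marketing": timedelta(days=90),
}


def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records still inside the retention window for their purpose.
    Assumes each record carries a timezone-aware 'collected_at' timestamp."""
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION[r["purpose"]]]
```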

Similarly, complying with “right to access” or “right to erasure” requests may require companies engaged in automated decision-making to develop user-accessible ways for data subjects to access, review, and correct the data collected about them.
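
A bare-bones sketch of what servicing such requests might look like follows; the in-memory store and function names are hypothetical stand-ins for a real system of record.

```python
# Hypothetical access/erasure handlers over an in-memory store.
STORE: dict[str, dict] = {}            # subject_id -> personal data held


def handle_access_request(subject_id: str) -> dict:
    """Right of access: return a copy of all data held on the subject."""
    return dict(STORE.get(subject_id, {}))


def handle_erasure_request(subject_id: str) -> bool:
    """Right to erasure: delete the subject's data, keeping only the fact
    of deletion (not the data itself) for accountability."""
    return STORE.pop(subject_id, None) is not None
```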

Due to the inherent risk of error or bias in automated decision-making, WP29 notes that a data protection impact assessment may be warranted to assess the risks associated with the processing. Special consideration must be given when engaging in profiling or automated decision-making in relation to children, who are a more vulnerable segment of society.

GDPR-covered entities that engage in automated decision-making should begin to reassess their practices and make the necessary changes as soon as possible. U.S. companies should carefully analyze and document their decisions concerning profiling activities, which, for many, will be a time-consuming and ongoing process.