Updating Canada’s Privacy Act for Artificial Intelligence

More and more of our lives take place online, where transactions, interactions, and conversations happen digitally. But the privacy rights of Canadians are protected by laws enacted before key technological developments such as artificial intelligence. The federal government is, as a result, in the process of reforming Canada’s privacy statutes. McGill University law professor Ignacio Cofone is among the experts who provided guidance to the Office of the Privacy Commissioner of Canada regarding the reform of Canada’s private-sector privacy statute. He and his co-authors condensed that guidance for Policy as a contribution to the ongoing consultation process on updating the Privacy Act.

 

Ignacio Cofone, Ana Qarri and Jeremy Wiener

February 4, 2021

Artificial intelligence (AI) is more embedded in our daily lives than ever before. Yet Canada’s federal privacy laws are unequipped to deal with the risks it poses. Recognizing this, last year the federal government tabled an overhaul of Canada’s federal private-sector privacy statute (the Personal Information Protection and Electronic Documents Act, or PIPEDA, enacted in 2000). If passed, Bill C-11 would enact the Consumer Privacy Protection Act. In November 2020, the federal government also launched a public consultation process to help reform Canada’s federal public-sector Privacy Act.

Since the Privacy Act was passed in 1983, technology has changed dramatically—algorithmic decision-making now plays a role in myriad daily situations. So has our understanding of privacy’s connection with other human rights, such as equality and non-discrimination. AI has led to crucial developments while also producing unique risks, particularly when used in government decision-making.

We seek to contribute to this public consultation on modernizing Canada’s Privacy Act by raising some key proposals laid out in more detail in the Policy Proposals for PIPEDA Reform to Address Artificial Intelligence Report, commissioned by the Office of the Privacy Commissioner of Canada. Central to this goal are strengthening public and private enforcement mechanisms and enacting new rights and obligations that protect Canadians’ privacy and human rights.

Privacy reform must start by explicitly adopting a rights-based approach that grounds the Privacy Act in Canadians’ human rights to privacy and equality and in related Charter rights. This is a natural step given that, as the Supreme Court has recognized, Canada’s Privacy Act has quasi-constitutional status because it is inextricably tied to Charter rights. In the AI context, where the risks to fundamental rights (such as the right to be free from discrimination) are heightened, adopting a rights-based approach is important and follows precedent set by the European Union as well as countries in South America and Asia. In practice, this means that principles of necessity, proportionality, and minimal intrusiveness, which are core to Canadian rights-based balancing tests, must run through any modified version of the Privacy Act.

A rights-based approach is only as effective as the mechanisms enforcing it. The Office of the Privacy Commissioner of Canada (OPC) should thus be granted the power to issue binding orders and financial penalties. The OPC currently has authority only to investigate alleged violations of the Privacy Act, to suggest steps toward compliance, to publicize its findings of violations, and to initiate court proceedings if the violating federal body does not adjust its privacy practices. As a result, enforcement is slow and largely depends on agencies’ voluntary compliance. Providing the OPC with the power to issue binding orders and financial penalties to public bodies, as Bill C-11 does for private entities, would redress this shortcoming.

However, under the current statute, even when the OPC successfully achieves compliance, those whose rights have been violated are left without compensation. When someone’s rights are breached, all they can do is file a complaint with the OPC and wait for the results of its investigation. This is insufficient: rights-based privacy legislation must give Canadians a mechanism to protect their rights without being fully dependent on public enforcement. Granting Canadians the right to sue agencies they believe have violated the Privacy Act would ensure that they are not left without redress. It also goes hand in hand with giving the OPC enforcement flexibility: if individuals can bring claims for violations of their rights, the OPC can be given greater freedom to decide which investigations to pursue without leaving statutory violations unaddressed. Such rights would also make enforcement more efficient by multiplying enforcement resources, thereby lightening the burden placed on government budgets.

Bill C-11 takes a step in the right direction but does not go far enough in granting private rights of action. Europeans and Californians, for example, have a right, within limits, to sue for violations of their respective privacy statutes. Canadians under Bill C-11, however, would only have a right to sue following an investigation by the OPC. This restriction undermines the purpose of granting private rights of action, which is to ensure that Canadians can obtain redress and to create disincentives even for minor violations of the Privacy Act on which the OPC might not focus. The Privacy Act reform should include private rights of action to ensure demonstrable accountability of public bodies and effective enforcement of the Act.

Two rights are key for Canadians: purpose specification and data minimization. The Privacy Act should – as Bill C-11 does – mandate purpose specification and data minimization as essential elements of privacy-protective design. Purpose specification limits the use of personal information to the purposes identified when the individual consents to the collection of their information. Data minimization means that organizations collect and use only the personal information they need (and no more) to fulfill that identified purpose. These principles promote transparency and accountability and reduce the risk of overarching surveillance and privacy harm. Consider, for example, the extent of information disclosure that takes place in the course of online shopping or subscribing to an online service. Users are often asked to provide personal contact information that is unnecessary for carrying out the transaction.
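
By way of illustration, the following is a minimal sketch of the logic of purpose specification, with hypothetical names and values that are not drawn from the report or from any bill: personal information is tagged with the purpose identified at collection, and any later use for a different purpose is refused.

```python
# A minimal, hypothetical sketch of purpose specification: personal information
# is tagged with the purpose identified when it was collected, and any later
# use for a different purpose is refused.

class PurposeError(Exception):
    """Raised when information is used for a purpose other than the one consented to."""

class PersonalInfo:
    def __init__(self, value: str, consented_purpose: str):
        self.value = value
        self.consented_purpose = consented_purpose

    def use_for(self, purpose: str) -> str:
        if purpose != self.consented_purpose:
            raise PurposeError(
                f"collected for '{self.consented_purpose}', not '{purpose}'")
        return self.value

email = PersonalInfo("user@example.com", consented_purpose="order confirmation")
print(email.use_for("order confirmation"))  # permitted: matches the identified purpose

try:
    email.use_for("marketing")              # refused: a different, unidentified purpose
except PurposeError as err:
    print(err)
```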

Data minimization and purpose specification should not be universally required. Data is often de-identified — stripped of identifying personal information such as names or addresses — for research or statistical purposes, or to develop AI. Federal public bodies should be encouraged to pursue such important purposes, which can lead, for example, to more efficient traffic management or public transport development. The difficulty is that organizations often find a useful purpose for de-identified data only after it has been collected and processed. To avoid unduly limiting AI’s benefits, Privacy Act reform should follow Bill C-11’s lead by exempting de-identified data from data minimization and purpose specification requirements.
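
To make the concept concrete, here is a minimal sketch of de-identification, again with hypothetical records and field names rather than anything taken from the report or any bill: direct identifiers are stripped before the data is used for statistical purposes.

```python
# A minimal, hypothetical sketch of de-identification: direct identifiers are
# removed before records are used for research or statistical purposes.
# (Real de-identification must also address indirect identifiers and
# re-identification risk; this only illustrates the basic idea.)

records = [
    {"name": "A. Tremblay", "address": "12 Rue X, Montréal",
     "postal_area": "H2X", "transit_trips": 48},
    {"name": "B. Singh", "address": "34 Main St, Toronto",
     "postal_area": "M5V", "transit_trips": 12},
]

DIRECT_IDENTIFIERS = {"name", "address"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

de_identified = [de_identify(r) for r in records]
print(de_identified)
# [{'postal_area': 'H2X', 'transit_trips': 48},
#  {'postal_area': 'M5V', 'transit_trips': 12}]
```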

The issue is that aggregated de-identified data still reveals information and trends about groups. This means it enables decisions that can affect de-identified individuals based on group attributes (such as gender, race, sexual orientation, and political preference). Such breaches of group privacy can amount not only to discrimination but also to infringements of other Charter rights, such as freedom of opinion and expression and freedom of association. Think of the risks to democracy revealed by the Cambridge Analytica scandal.

De-identified data is still personal information because it relates to identifiable (even if not identified) individuals. While it is impossible to eliminate all risk, de-identified data presents lower risks to human rights than identified data because it is one step removed from identifiable individuals. An exemption from data minimization and purpose specification requirements may thus encourage agencies to de-identify data, lowering (albeit not eliminating) the risks to individuals’ human rights. To ensure that data stays de-identified, an offence prohibiting re-identification, similar to the one proposed in Bill C-11, should be introduced.

These two rights relate more broadly to the importance of design for privacy. In the words of Ann Cavoukian, the former Information and Privacy Commissioner of Ontario who coined the term “privacy by design”, “embedding privacy into the design specifications of various technologies” is crucial. This entails proactively mitigating risks, such as the re-identification of de-identified data, before they materialize into privacy violations. The European Union, for example, already requires organizations to design for privacy and human rights in all phases of their data collection and processing by implementing “appropriate technical and organizational measures.” So should the Privacy Act. By establishing a way to proactively mitigate risks, privacy by design contributes to the goal of accountability in privacy.

Automated decision-making requires special provisions. It heightens the risks to human rights posed by data-driven technology. Protected categories, such as gender or race, are often statistically associated with seemingly inoffensive characteristics, such as height or postal code. By relying on such characteristics as proxies for protected traits, algorithmic decision-making can produce discriminatory results that adversely affect members of protected groups. There are numerous examples of this outcome around the world: last year, the U.K. stopped using algorithms to make decisions in welfare, immigration, and asylum cases after allegations that the systems were perpetuating racist patterns of decision-making.

To reduce the risk of such algorithmic discrimination, the Privacy Act should obligate federal public bodies to log and trace their data collection and processing systems. Traceability is an essential part of a transparent data processing architecture; it would promote accountability and increase public trust in the government’s privacy protection systems. In the case of the scrapped U.K. system for welfare decisions, for example, data traceability would ensure that individuals could gain access to a log of the data used to make decisions about them. Data traceability would also show where the data was collected from: directly from individuals, from another government body, or from an external party.
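
As a rough sketch of what such a traceability log could record, with illustrative structure and field names that are assumptions rather than features of any existing or proposed government system:

```python
# A hypothetical sketch of a data-traceability log: every piece of personal
# information used in an automated decision is recorded along with where it
# came from, when it was collected, and for what identified purpose, so an
# individual can later see what data informed the decision about them.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TraceEntry:
    data_item: str     # which piece of personal information was used
    source: str        # e.g. "individual", "other federal body", "external party"
    collected_on: str  # when the information was collected
    purpose: str       # the purpose identified at collection

@dataclass
class DecisionTrace:
    decision_id: str
    made_at: str
    entries: list = field(default_factory=list)

    def log(self, data_item: str, source: str, collected_on: str, purpose: str) -> None:
        self.entries.append(TraceEntry(data_item, source, collected_on, purpose))

trace = DecisionTrace(decision_id="benefit-claim-0001",
                      made_at=datetime.now(timezone.utc).isoformat())
trace.log("declared income", "individual", "2020-11-02", "benefit eligibility assessment")
trace.log("residency status", "other federal body", "2020-10-15", "benefit eligibility assessment")
print(asdict(trace))
```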

Granting Canadians a right to an explanation would provide key support for traceability. Quebec’s Bill 64, for example, proposes to give people the right to know what personal information was used and the reasons or factors that contributed to an automated decision, as well as the right to request the information’s source. The federal Treasury Board Secretariat’s Directive on Automated Decision-Making (DADM) and the tabled Bill C-11 also contain a right to explanation. The Directive provides that an explanation’s detail should increase with the decision’s impact on the individual. Welfare decisions, for example, affect individual livelihoods and are likely to be considered high-impact; the required explanation would accordingly be more detailed. The Privacy Act should build on — and legislate — this existing framework.

Individuals cannot exercise their right to explanation unless they know when it is applicable — in other words, when data about them has been used to make a decision affecting them. Public bodies should thus inform Canadians when they can request an explanation, lightening the load associated with requesting one.

Together, these provisions would mitigate the risks associated with automated decision-making, promoting the transparency that public trust in the government’s data collection and processing requires as the government moves to use AI.

In conclusion, the Privacy Act needs a rights-based approach. Such an approach, providing for data minimization and purpose specification, data traceability and a related right to explanation, and effective public and private enforcement mechanisms, would be a significant improvement on the Privacy Act we currently have.

According to a recent OPC survey, 90 percent of Canadians are concerned about their privacy. And they should be. Reforming the Privacy Act in this way is an essential step toward alleviating warranted concerns and turning Canada into the protector of human rights that it strives to be.

Ignacio Cofone is an Assistant Professor and Norton Rose Fulbright Faculty Scholar at McGill University Faculty of Law. Ana Qarri and Jeremy Wiener are JD/BCL candidates at McGill University Faculty of Law.