The Israeli Privacy Protection Authority Guidelines concerning privacy at the entrance to work and commercial places within the fight against Coronavirus

In the midst of the Coronavirus health crisis, as the economy gradually returns to routine and commercial activity and work resume, non-invasive temperature testing and health questioning are becoming a condition for allowing entry into workplaces or shopping centers, under the Emergency Regulations (New Coronavirus – Restriction of Activity), 2020, sec. 3A(1) (“the regulations”).

Employers in workplaces and commercial places are required to check those seeking to enter the premises under their responsibility and restrict access to certain individuals, where there are indications of potential health risks to others, all in an effort to prevent the spread of the epidemic.

The health questioning, conducted by employers or anyone on their behalf, consists of these three questions:
1. Are you coughing?
2. Has your body temperature been above 38°C, or have you had such a fever in the past week?
3. Have you been in close contact with a Corona patient in the past two weeks?

Entry will be denied to anyone found to have a body temperature above 38°C, as well as to anyone who does not answer each of the questions in the negative.
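The entry rule set by the regulations can be expressed as a minimal sketch (the question names and function below are illustrative assumptions, not part of the regulations): entry is denied on a reading above 38°C or on any affirmative answer.

```python
# Hypothetical keys for the three screening questions listed above.
QUESTIONS = ("coughing", "fever_past_week", "contact_past_two_weeks")

def may_enter(temp_c, answers):
    """Entry is denied on a temperature above 38°C or any affirmative answer."""
    if temp_c > 38.0:
        return False
    # Every question must be answered in the negative.
    return not any(answers[q] for q in QUESTIONS)

# A visitor with a normal reading and three negative answers may enter:
may_enter(36.8, {"coughing": False, "fever_past_week": False,
                 "contact_past_two_weeks": False})
```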

The Israeli Privacy Protection Authority (PPA) has recently issued guidelines for “Privacy at the entrance to work and commercial places within the fight against Coronavirus” (“the guidelines”), with the aim of clarifying the privacy aspects of the regulations, and providing guidance for employers, in particular.

The gist of these guidelines is adherence to the principle of purpose limitation. Namely, restricting the use of information collected during the process of temperature testing and health questioning to that required by law, and refraining from using it for any purpose other than preventing the entry of potentially infected individuals into work and commercial places. Any other use of the information may be deemed a violation of privacy.

Furthermore, although there is clear justification for examining the Coronavirus-related health status of individuals seeking to enter a work or commercial place, even though doing so infringes their privacy, it must be ensured that any such infringement is reasonable and proportionate, and carried out in accordance with the regulations and privacy protection principles.

The guidelines clarify and advise the following:

Health questioning

o Work and commercial places required to conduct questioning under the regulations shall avoid introducing additional questions about Coronavirus-related health aspects, or requesting other personal information.

o Work and commercial places shall, as far as possible, avoid collecting and storing personal, potentially identifiable information about individuals seeking to enter these places, including Coronavirus-related health details disclosed during the questioning process.

Body temperature testing

o Work and commercial places shall avoid using the information registered and processed during the technological examination, including the body temperature data of those entering their premises, for any other purpose.

o Work and commercial places are advised to refrain from retaining the information received through the use of heat measurement technology, particularly where the information can be associated with a specific, potentially identifiable individual.

o To the extent that the heat measurement technology employed includes sensing devices such as thermal cameras, it is recommended that these devices operate only in real time, that is, without automatic retention and documentation of the information they collect.

o Insofar as such information has been collected, it must be deleted within a few days (unless a special need for retaining it has arisen).

o The information gathered during testing shall not be used for other purposes and shall not be transferred to other parties, except as required by law or as part of a legal, mandatory requirement on the part of enforcement and public health bodies.
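The short retention window described above can be sketched in code (a minimal illustration with an assumed storage layout; the guidelines fix no exact number of days):

```python
from datetime import datetime, timedelta

# "A few days": an assumed value for illustration only.
RETENTION = timedelta(days=3)

def purge_expired(readings, now=None):
    """Keep only readings younger than the retention window."""
    now = now or datetime.utcnow()
    return [r for r in readings if now - r["taken_at"] < RETENTION]

readings = [
    {"taken_at": datetime(2020, 5, 1, 8, 0), "temp_c": 36.7},
    {"taken_at": datetime(2020, 5, 6, 8, 0), "temp_c": 38.2},
]
kept = purge_expired(readings, now=datetime(2020, 5, 7, 8, 0))
# Only the May 6 reading survives the 3-day window.
```

In practice such a purge would run on a schedule, so that no identifiable temperature record outlives the window by more than one run interval.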

For the original guidelines (in Hebrew) please click here.

Guidance to help K-12 School Administrators and Educators Protect Student Privacy during the COVID-19 Pandemic

Schools, including both educators and administrators, are facing tough privacy questions as COVID-19 continues to spread across the world. In particular, schools are grappling with how to inform communities and public health officials of health incidents among students and how to respond to those cases, while still respecting students’ privacy.

Israel’s Basic Law: Human Dignity and Liberty (5752 – 1992) provides constitutional protection to all persons, including students, from any arbitrary or illegal injury to the privacy of their person, family, or home. Under this law, schools are required to uphold and protect each individual student’s right to privacy in the physical school environment as well as outside it. Specifically, the Pupils Rights Law defines all student information received by, or accessible to, persons associated with the education system as confidential. In particular, section 14 of this law prohibits the disclosure of such student information, unless such disclosure is necessary to fulfill one’s job or duty.

These student privacy protections extend online as well. Relevant obligations and responsibilities are placed on schools to protect their students’ private information in the context of any school-operated website. Consent from a student and their parent or legal guardian is required for the disclosure of certain types of personal information, such as a student’s name or email, while other types of personal student information are explicitly prohibited from being posted on a school’s website at all. Additional online privacy protections are also afforded under Israel’s broad Privacy Protection Law, 5741-1981.

Finally, according to the Patient Rights Act of 1996 (Sec. 20), the disclosure of students’ health information (by either a caregiver or a medical institution), in the context of the present public health crisis, would be permitted only if: the student (or her legal representative) has provided consent to such disclosure; the caregiver or medical institution is obligated to do so by force of law; or the Ethics Committee, after giving the student (or her legal representative) the opportunity to be heard, has determined that such disclosure is essential for protecting the health of others or the public, and that the need for disclosure outweighs the student’s privacy interests. The disclosure of the student’s health information shall be done only to the extent necessary for the purpose of maintaining public health, and with the utmost avoidance of disclosing her identity.

However, the constitutional right to privacy and the additional specific protections afforded under these laws are not absolute. The right to privacy in Israel should not be applied to an extent greater than required. As well, disclosures of personal information may be ordered by a court, for example, when such disclosure is required to protect lives. In the face of the current public health emergency presented by COVID-19, schools face unique challenges when deciding how to best protect students’ privacy in a balanced way. Therefore, the following offers a framework of guiding principles and best practices to help schools navigate questions and dilemmas which may arise regarding the adequate protection of students’ private information.

The Underlying Framework

Consent has always been, and should remain, the core requirement for any disclosure of student information from school records. However, exceptions may be made to this basic rule if disclosure is needed to protect the health or safety of others in an emergency. Therefore, if a school determines they must disclose a student’s personal information without receiving consent due to such a threat, they should perform a case-by-case analysis considering all of the circumstances related to the threat.

In doing so, they should determine whether the following criteria are met: 

1. Significant and articulable threat: Is there an articulable and significant threat to the health or safety of a student or other students?

2. Necessity: Is the disclosure of student PII needed to protect against such health or safety threat?

3. Data Minimization: What is the minimum amount of student information needed to address the issue at hand? Who are the relevant parties who need to receive this information?

4. Document Disclosures: In the event such a disclosure was made, the school should record the specifics, including the significant and articulable threat, the information disclosed, and the parties who received the information.

As well, schools should consider the below best practices surrounding such communications if they choose to share student information:

1. Provide useful facts, not rumours: Schools should ensure that the information they share is tailored towards effective threat prevention rather than simply spreading rumours. For example, if a school learns indirectly that a student may be infected, they should verify this information before deciding if and what should be shared with the community or with health-care professionals.

2. Weigh potential harms against intended benefits: School administrators and educators should consider the potential harms that could occur if they identify a student, and should use alternative approaches to effectively advocate precautionary measures. Sharing information that a particular student may be infected could cause harm to the student, including bullying and/or shaming. As well, sharing that a student has symptoms before they have been tested, or before other possible conditions like the flu have been ruled out, may simply cause fear. Instead, it may be preferable simply to continue encouraging social distancing.

3. Consider additional school policies that may apply: Schools may be subject to additional policies, covering social media or other forms of communication or interaction. School staff should be cognizant of any such applicable policies, which may impose disciplinary action for posting or sharing this type of information.

4. Consider alternatives to personally identifiable information: If and when possible, schools should opt to provide non-identifiable information in lieu of personally identifying information. This may mean providing generalized information that does not directly or indirectly identify an individual student. It could also mean providing aggregated or de-identified information to various agencies to assist in their response to the pandemic.

The following are a number of concrete examples to illustrate how schools can share information about students while protecting their privacy during a public health emergency:

Can a school share with the community if they know or suspect a student has COVID-19?

To start, it is worth emphasizing that in many situations, in order to receive sufficient notification of risks to their children, parents do not need to know which student specifically was or may be infected (even if they would like to know). Therefore, schools should determine whether they can disclose that a student may have COVID-19 without directly or indirectly identifying the particular student. 

For example, let’s say that Eli on the school soccer team has tested positive for COVID-19, or the school suspects he has been infected. Eli is the only boy on the soccer team. Administrators will want to proactively notify the relevant community including the parents of other students on the team, that COVID-19 may be in the school community to facilitate prevention efforts and ensure that people have the information necessary to address a potential outbreak. Given COVID-19’s high degree of infectiousness, it may be wise for schools to err on the side of caution and notify the entire school when suspected but unconfirmed cases exist. 

Whether or not a school knows or merely suspects infection, it may not be necessary to identify Eli as the symptomatic individual. Schools should avoid identifying Eli either directly or indirectly. Therefore, because it is widely known that Eli is the only boy on the team, schools should not share that a boy on the soccer team has, or may have, COVID-19. Rather, they should generalize this announcement, sharing only that a student on the team who attended the most recent soccer match is, or may be, infected.

Of course, the school may want to specifically notify parents of other students who had close contact with Eli when he was potentially contagious so that they can take measures to self-quarantine. In this case, the school should contact Eli’s parents and obtain consent to release this information. However, if they determine that an exception to obtaining this consent is required, they should consider the aforementioned criteria in determining whether to disclose, and if so, what information specifically may be shared and with whom.

Articulable and significant threat of a health or safety emergency:

Is the school able to explain, based on all the information available at the time, what the threat is and why it is significant? If local public health authorities determine that a public health emergency, such as COVID-19, poses a significant threat to students or other individuals in the community, it is reasonable that an educational agency or institution in that community will likewise determine that an emergency exists. Sharing this information may be particularly necessary in the early stages of a pandemic to facilitate prevention efforts and ensure that people have the information necessary to address a potential outbreak.

The disclosure is necessary to protect the health or safety of the student or other individuals:

Schools should decide whether Eli’s teacher, classmates and their parents, or students with whom Eli spent significant time need to know that Eli has COVID-19 in order to protect their health. 

Only disclose the minimum amount of information required to address the issue at hand to the relevant parties:

Disclosures do not have to be “all or nothing”. Rather, the school should consider carefully how much information is actually necessary in order to address the issue at hand given the particular circumstances. Would it be sufficient, for example, to just say that “someone on the soccer team” has COVID-19, without identifying Eli as the infected student to his classmates? If the school does believe they need to identify Eli, they should make sure they provide the minimum information needed—that he has COVID-19 and perhaps a window of time when he may have been infectious, if known—and not additional information such as any other specifics regarding his health history. Likewise, disclosures should be limited to the parties to whom this information is pertinent.  For example, if administrators know that Eli is exhibiting symptoms of COVID-19 but hasn’t yet been diagnosed, they could choose to tell only immunocompromised or at-risk students and faculty that a student may have the virus, before communicating with the larger school community.  Schools can also combine communication approaches, for example by identifying Eli as necessary to classmates and their parents but sharing only de-identified information, such as “a sixth-grade student likely has contracted COVID-19,” with the broader school community.

Can a school share with health officials, for example a student’s primary care physician, if they suspect a student may have COVID-19?

If a school cannot reach a student or their parents, and suspects that student might have COVID-19, they may want to reach out to the student’s primary care physician to ask if the physician can confirm that the student has COVID-19 so the school can notify the community. If so, they should follow the above framework and best practices to determine the best course of action. Specifically, they should be aware that the physician may not be able to disclose health information back to the school due to medical confidentiality protected under the Patient Rights Act of 1996 (Sec. 10a), according to which a physician is obligated to maintain the dignity and privacy of her patient, as well as due to physicians’ ethical fiduciary duty towards their patients. However, if a school suspects a positive case, administrators could recommend that other parents take their children to get tested.

Can a school share student information in response to a voluntary request from a researcher, newspaper, or government agency in order to assist in responding to the COVID-19 outbreak?

As noted above, schools should feel free to share de-identified or aggregated information to help in the public response to the COVID-19 pandemic. However, if and when doing so, schools should keep in mind the widely accepted standard for properly de-identified information: whether a reasonable person who does not have personal knowledge of the relevant circumstances could identify the student with reasonable certainty, based on both the information the school discloses at that time and other information in the recipient’s possession that could be combined with it.

For example, let’s say an agency wants to learn about visits to the school nurse in late February involving typical COVID-19 symptoms. Utilizing the above framework and best practices, schools could provide an aggregated percentage of student visits, rather than information beyond the minimum required to address the issue at hand, such as: (i) more granular data that breaks down visits by class year, gender, and ethnicity, which could allow individual students to be identified; or (ii) the specific health records of individual students.
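The aggregation approach described above can be sketched as follows (a minimal illustration with assumed field names; the suppression threshold is a common de-identification practice, not a requirement stated in the guidance):

```python
# Release an aggregate percentage instead of granular, potentially
# identifying visit records, and suppress counts too small to share safely.

def aggregate_visit_rate(visits, enrollment, min_cell_size=5):
    """Return the share (%) of students with symptomatic visits, or None if
    the count is too small to release without re-identification risk."""
    symptomatic = {v["student_id"] for v in visits if v["symptomatic"]}
    if len(symptomatic) < min_cell_size:
        return None  # suppress small cells rather than risk identifying students
    return round(100 * len(symptomatic) / enrollment, 1)

visits = [
    {"student_id": 1, "symptomatic": True},
    {"student_id": 2, "symptomatic": True},
    {"student_id": 2, "symptomatic": True},   # repeat visits count once
    {"student_id": 3, "symptomatic": False},
    {"student_id": 4, "symptomatic": True},
    {"student_id": 5, "symptomatic": True},
    {"student_id": 6, "symptomatic": True},
]
rate = aggregate_visit_rate(visits, enrollment=500)  # a single percentage
```

The agency receives one number for the school, not a record set that could be cross-matched against class rosters.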

In Conclusion

The COVID-19 pandemic represents a public health crisis, and there may be significant public interest in sharing student data given the circumstances. Nevertheless, it remains vital to recognize that decisions to share such information could have a significant impact on student privacy, and, as outlined above, there may be effective alternatives that minimize the potential harms while still addressing the serious health risks at hand. Schools should keep these principles and guidelines in mind as they navigate the myriad situations they may face, in order to best safeguard their students and student privacy.

This resource is intended for informational purposes only and should not be considered as legal advice. You should contact your attorney to obtain advice with respect to any particular issue or problem.

Use of Digital Means to Fight the Coronavirus

This article was written on the morning of March 15, 2020.

On March 16, Israel’s government adopted a resolution, and on March 17 it approved Emergency Regulations which allow the General Security Service (SHABAK) to monitor the location of mobile devices owned by COVID-19 patients and by people who interacted with them in the 14 days before the patients were confirmed with the virus. The stated purpose of the monitoring is to notify confirmed and suspected patients by text message that they need to go into home quarantine, and to enforce the quarantine obligation. The specific process will be determined by the Health Ministry and approved by the Attorney General.

The directives apply only to the COVID-19 situation and are in place for 14 days. The SHABAK will not save the data produced and will make no other use of it. The data will be forwarded directly to the Health Ministry, which will be responsible for sending the text messages to the public. All information collected by the government will be purged after the regulations expire and after a 60-day period, during which the Health Ministry evaluates their operation, has ended. The data obtained by these means will not be used in any legal or criminal process.

Do Desperate Times indeed call for Desperate Measures? This adage is believed to have been coined by Hippocrates, the Greek physician and the father of Western medicine, who also gave us the tenets of medical ethics with the Hippocratic Oath.

Indeed, the current situation is unusual. It challenges our values and confronts us with moral dilemmas. Humanity has known epidemics before, but the technologies available to us today to monitor people’s movement and physiological biometrics, cross-reference Big Data, and generate insights did not exist in previous eras. The COVID-19 epidemic started a crisis that juxtaposes the basic human rights to life and health with the right to privacy and liberty. We are now on the verge of a decision which, unless restrained with significant measures, may constitute a dangerous precedent of the government penetrating citizens’ lives, transforming Israel into a Police State.

Last night, the Prime Minister announced that the government is planning to use technology to fight COVID-19. The resolution, which will be put before the government today, will probably involve adopting capabilities usually reserved for Israel’s General Security Service, which to this day have been restricted to fighting terrorism. The use of such capabilities against ill or suspected-ill civilians is a dramatic development. Let us not underestimate the danger that such a decision poses to our democratic values. The Prime Minister pointed out that this is an extreme measure, which is nonetheless required, even though it violates people’s privacy. The announcement provided no details of the plans, the types of information to be monitored, the purposes the information is meant to serve and by whom, or the implications of the analysis of the information collected. The Prime Minister provided no details on the checks and restraints that will apply to the use of these measures.

It is possible that only confirmed patients will be monitored, to enforce their isolation. At this stage, confirmed patients number merely several hundred people. It is possible, however, that the measures will include monitoring or collection of location data from all Israeli citizens and its cross-matching with the location data of confirmed patients, to identify the people who were in the vicinity of a confirmed patient and for how long. If this is the plan, the State will be able to tell each one of us how probable it is that we have been exposed to the virus or contracted it. Yet another possibility is requiring us to report our whereabouts and how we feel on a daily basis.

During regular times, a State agency that believes it needs information from a citizen’s cellphone must apply for a court order and present enough evidence to justify the invasion of privacy because of a suspected offense. This procedure applies whether the State seeks to investigate a cellphone in its possession or asks the telecom provider for data on communication between persons. The police or the government ministries are not allowed to use the measures reserved for the Secret Services to fight terrorism, namely eavesdropping and monitoring. In a democratic state, our most fundamental civilian rights include protection against surveillance and breach of our private space by the State. This protection is critical to preserve the freedom of opinion and faith, freedom of expression, and freedom of movement.

But during this particular time, the Health Ministry issues directives under the People’s Health Ordinance and orders which the Ministry’s Director General is authorized to issue. The government may be planning to rely on this framework. Note that in so doing, the government is crossing a significant boundary in a State that is built on fundamental values, where the Knesset grants authority to the government, and where the courts serve as auditors. Resorting to an ordinance that does not require approval by the Knesset or the judicial system is a potential risk to the elementary democratic rights of all Israeli citizens.

Regardless of the legal authorization framework approved by the Attorney General, some issues have not been clarified to the public (although the government probably discussed them). Clarification of these issues is essential for assessing the risk to individuals’ privacy and liberty posed by the State’s use of technology to protect health and life.

How will the mobile data be used? Is it to inform us of possible exposure to a confirmed patient so that we confine ourselves to our homes? Or is it to force disobedient confirmed patients into quarantine? On the continuum of civil liberty, there is a long distance between encouraging social responsibility and policing through forced monitoring. Is the government planning to scan the cellular signals of clusters of more than 100 people to enforce the prohibition on gathering? If so, the breach of privacy of each one of us would be smaller, since the information collected is aggregated, not personal. Or is the government going to track us individually at all times?

Given the current risk assessment, it makes sense to fine-tune the balance between civilian freedom (as reflected in individual privacy) and public health. However, any decision that modifies this balance must be accompanied by appropriate checks and restraints.

In grappling with the COVID-19 pandemic, humanity has many more tools than it ever had. These may come in handy in overcoming the disease at a significantly lower loss of human life. Nonetheless, we must be cautious, responsive, and proportionate in employing these measures. In the absence of independent checks and balances, we risk letting in one of the biggest threats to democracy: a Police State.

The virus that crowned us: COVID-19

A crisis that threatens public health, such as the outbreak of an epidemic or, worse still, a pandemic, entails a twofold test for human society: devising resourceful ways to outwit the biological health threat imposed upon us and, no less important, coping with this threat in a way that protects the human rights and interests of the individual.

The outbreak and spread of the novel coronavirus, COVID-19, is one such challenge. Grappling with COVID-19 is a complicated, multipronged battle with local, national, and global implications. It has to be waged on several fronts simultaneously: healthcare, economics, and foreign policy. Challenges of this scale call for inventive use of technology to help in developing various solutions.

The current outbreak is characterized by resorting to technology, in particular to data science and Artificial Intelligence (AI), to contain the virus’ spread and address its health implications. In the COVID-19 crisis, technology serves us in several ways:

A more ideal, forward-looking way involves harnessing technology to predict the outbreak of epidemics in advance and to understand the patterns in which they spread. This is done using AI, more particularly machine learning (ML), which is capable of detecting patterns within big data sets that indicate such an outbreak.

While we work diligently to mitigate the risk of transmission and appreciate the apparent and potential benefits offered by technology, we must not overlook the risks that technological applications may pose to human and civil rights, as well as to social values. These risks raise the following key ethical difficulties:

One may even argue that providing remote medical care via robots and cameras instead of human care violates the human dignity of the patient. This argument raises the question of the boundaries of the ethical-professional commitment of the medical personnel, in particular during an epidemic: does the Hippocratic Oath entail the provision of care while risking the medical professional’s health and life?

Being informed about the situation is essential for allowing people to choose how to protect (or not) themselves and their loved ones, against the health risk posed to them by the virus. Knowledge and transparency are crucial also for the medical and public health professionals, so they know what and whom to avoid and which medical measures to employ.

Individual autonomy is also physically at risk in the face of quarantine enforced, among other means, by technological ones. Imposing large-scale confinement on civilians primarily denies them the freedom of movement. While quarantine is designed to protect public health and should only be employed following careful consideration, it also compromises the right to health and the right to life of non-infected individuals. Forced into quarantine, these people cannot flee the infected area to free themselves and their loved ones from the risk of disease transmission.

Quarantine is a truly radical measure that restricts individual rights and liberties. In order to be ethically justified, it is required to meet a number of conditions:

The range of technological measures employed to deal with COVID-19, and the ingenuity of the solutions, reflect society’s determination to ward off the danger. However, the palpable threat to public health inevitably leads to the violation of the rights and liberties of individuals. We must never grow accustomed to this compromise, or come to regard the temporarily weakened umbrella of human rights as part of normal life. Human society, through its leaders and policymakers, will only pass the pandemic test if it navigates the right path, implementing technological measures in a manner that results in the most limited and temporary (short-term) violation of human and civil rights, and only where strictly necessary. Only then will the citizens of the world be allowed to celebrate the coronation of a humane and just society.

Patient Rights Regulations Memorandum

On January 6, 2020, ITPI released a letter to the Digital Health Division of the Israeli Ministry of Health regarding the “Patient Rights Regulations Memorandum.”

In the past few years, the State of Israel has been striving to conduct research on digital health information in various ways and seeks to apply a universal and updated government policy to balance the right to research and information, while simultaneously respecting the patients’ rights to health and privacy. The Israeli Ministry of Health’s “Patient Rights Regulations Memorandum,” as part of the National Digital Health Plan, attempts to reach such a balance.

Using the “Privacy by Design” (PbD) methodology, the Israel Tech Policy Institute’s position regarding this proposal is that it represents an achievable balance between two basic human rights, privacy and health, and is also constitutional under Israel’s Basic Law: Human Dignity and Liberty. The “Patient Rights Regulations Memorandum” is proportionate and reasonable, and constitutes a series of compromises that embody a necessary response to the constant technological changes of our era.

The “Privacy by Design” methodology is a set of privacy policy principles that creates a process putting privacy at the forefront from the start of a project. It also offers an evaluation procedure, used ultimately to find a compromise between two competing human rights principles. In the case of the “Patient Rights Regulations Memorandum,” the government strives to choose the measures and processes necessary to improve the quality of health care and advance medical research, while at the same time formulating rules for maintaining the confidentiality and privacy of information and protecting people’s rights. Between these two poles lies the point at which policy strikes its balance, representing the price we are willing to risk for the benefit we expect to gain.

The Privacy by Design analysis of the “Patient Rights Regulations Memorandum” concluded that the government took the right to privacy into consideration, examined the risks, and devised solutions to mitigate the potential risks and vulnerabilities. Multiple suggestions and recommendations were also given to better comply with privacy guidelines. Some of them are:

The arrangement is based on a distributed group of internal boards in the health organizations (sick funds, hospitals, the army, and the prison system) that will include experts and public representatives. These boards will evaluate the risk analysis and the mitigating measures to be applied, and will decide whether to approve the access request.

A risk assessment analysis should be performed on every application for access to patient health data for research, with use approved only if the expected benefit from the study outweighs the risk.

Appropriate measures should be taken to reduce the risk of re-identifying people, using different types of anonymization and encryption.

Health organizations must make binding contractual arrangements with researchers, enforce them, and sanction those who violate procedures.

There should be an obligation to examine the purpose of the research for which the information is requested, and to determine whether the use of the data will benefit the health of the individual or the public, contribute to improving the quality of health care or medical research, or promote human knowledge in the health field.

The data approved should be minimized to the research applicant’s actual needs.

The default should be giving access to the data in a virtual research platform controlled by the health organization. Releasing data to a researcher outside the health organization should require proof of significant circumstances that preclude the use of the virtual platform, together with considerable expected health gains from the research.

The arrangement is the product of a three-year process of research, consultation, and recommendations involving the multi-stakeholder National Health Council.

A full listing of the recommendations ITPI has made to the memorandum can be found in our full report.
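One of the recommendations above, reducing re-identification risk, is commonly implemented through pseudonymization. The sketch below is a generic illustration of salted one-way hashing of patient identifiers, not the memorandum’s prescribed method; the field names and record layout are hypothetical.

```python
import hashlib
import secrets

def pseudonymize(patient_id: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted one-way hash.

    Without the secret salt, reversing the hash back to the
    original identifier is computationally infeasible.
    """
    return hashlib.sha256(salt + patient_id.encode("utf-8")).hexdigest()

# A secret salt held by the health organization, never shared with researchers.
salt = secrets.token_bytes(16)

record = {"patient_id": "123456789", "diagnosis": "J45", "age_band": "40-49"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"], salt)}
```

Because the same salt maps the same patient to the same pseudonym across records, longitudinal research remains possible without exposing identities; rotating the salt per study would additionally prevent cross-study linkage.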

On December 25, 2019, ITPI, together with the Zvi Meitar Institute at IDC Herzliya, conducted a multi-stakeholder roundtable discussion on the suggested policy and on the application of the arrangement to hypothetical cases of health information requests. All sides of the debate, including academia and industry researchers, health organizations, security experts, and policymakers, came together to discuss their respective thoughts on the issue.

The researchers commented that anonymization and encryption of information make it difficult to cross-reference patient data; that big data and artificial intelligence research inherently requires large bodies of data and cannot be limited in scope; and that Israeli bureaucracy and regulation demand that the researcher be Israeli and the research be conducted in Israel, which limits the scope of research and international collaboration.

Health professionals voiced their concerns, mostly about the difficulty of navigating the policies and regulations, and recommended establishing a central state infrastructure in which all the health organizations’ information could be concentrated. Policymakers added concerns about the requirement of strict due diligence, the lack of a ranking for sensitive information, and the need for an appeal mechanism.

The entire ITPI report, in Hebrew, can be downloaded here.

Between Hong Kong and Tel Aviv: On facial recognition cameras, privacy, and freedom of expression

Last week, the Hong Kong government issued an order prohibiting demonstrators from wearing masks, so that law enforcement authorities can identify them. Demonstrators have also reportedly knocked down smart lampposts across the city, for fear that the Chinese government is using hidden cameras in the posts to spy on them.

Deployed initially to track illegal waste disposal and traffic conditions, including by capturing license plates, the lampposts in Hong Kong have embedded sensors and cameras. With masks no longer a useful aid, demonstrators have resorted to more creative means, such as jamming the facial recognition cameras by shining laser lights into them.

Facial recognition technology

Facial recognition technology can tell whether two images belong to the same person. It works by generating a unique biometric profile of a person’s face and matching it against biometric image data tagged to specific individuals. The technology is effective even if only part of the face is exposed, and sometimes even when a person’s back is turned to the camera. It works on photos acquired by either still or video photography, including while the person is on the move. When the system matches the facial profiles of two images, the program generates a numerical value indicating the probability that the two faces belong to the same person.
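The matching step described above can be sketched in code. The following is a minimal illustration, not any vendor’s actual algorithm: it assumes each face has already been converted into a fixed-length embedding vector by a neural network, and compares embeddings with cosine similarity against a threshold; the vectors and the threshold value here are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(embedding_1, embedding_2, threshold=0.8):
    """Return the similarity score and whether it exceeds the match threshold."""
    score = cosine_similarity(embedding_1, embedding_2)
    return score, score >= threshold

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions.
probe = [0.1, 0.9, 0.3, 0.2]
gallery = [0.12, 0.88, 0.28, 0.25]
score, is_match = same_person(probe, gallery)
```

The numerical score is exactly the “numerical value” the text refers to: it quantifies how likely the two faces are to belong to the same person, and the operator’s choice of threshold trades false positives against false negatives.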

Armed with this technology, government authorities can track both criminals and legitimate protesters. The emergence of facial recognition technology and the rapid deployment of cameras have led to a significant enhancement of surveillance and tracking strategies. Over 64 countries use facial recognition technology today, with China in the lead. It may come as no surprise that non-democratic nations are investing heavily in these technologies, but they are not alone: the website of the Chinese technology company Huawei lists quite a few customer success stories of European municipalities that implemented facial recognition technology in their smart city initiatives.

Demonstrating as a way of exercising freedom of speech

Public protests are far from new. From the protests in Tiananmen Square and across the Eastern Bloc in 1989 to the massive demonstrations by frustrated Israelis of Ethiopian origin in Israel this year, they are a popular means of expressing discontent. Interestingly, taking to the streets has not lost its power despite the prevalence of social networks, as is evident from the high turnout numbers. Most democratic states protect the right to demonstrate under law or constitution, based on the understanding that demonstrations allow people who have no access to decision-makers to voice their opinions and influence public policy and the public agenda. In Israel, the freedom to demonstrate is an integral part of freedom of speech, protected under Basic Law: Human Dignity and Liberty. The Israeli courts consider freedom of speech a supreme right, since it constitutes a precondition for exercising other rights.

Nonetheless, it is not unlimited. The right to demonstrate is protected only as long as people exercise it peacefully. When that is not the case, the police are authorized to take action to ensure that public order is maintained, while refraining, to the extent possible, from compromising human rights.

Technology’s impact on the freedom of speech

The debate on technology’s impact on democracy and human rights has intensified recently. Indeed, the use of facial recognition technology may discourage people from exercising their freedom of speech in legitimate ways (the “chilling effect”). This argument rests on the understanding that people feel at ease taking to the streets and expressing unpopular opinions anonymously, but will refrain from doing so when they can be identified, even if the facial recognition involves no sanction.

Opponents of the use of facial recognition technology argue, among other things, that regulation is needed, that the use of the technology is disproportionate, and that other means with a lesser impact on privacy are available. After all, people do not consent to being photographed and have no control over the biometric data the cameras collect. Worse still, research shows that some software programs base their matching on biased data, which may lead to false positives, particularly for dark-skinned persons or women.

There are valid concerns that the authorities will retain data and compile blacklists in a manner that infringes on human rights. Even more disturbing is the possibility of combining facial recognition with AI to mine physical and mental health data from a person’s facial features. While using biometric identification on millions of people to track down a few suspects may be crucial for many people’s safety, it might also turn us into a society under surveillance, with grave implications for democracy and the right to privacy. It is for these very reasons that the Governor of California approved a law this week prohibiting the police from using facial recognition technology in officers’ body cameras for three years.

The legal status

In Israel, the Databases Registrar’s directive governs surveillance cameras used in the public domain. While the law stipulates that taking a person’s image in his or her private domain (as opposed to the public domain) constitutes a privacy infringement, one can argue that using facial recognition technology in the public domain on a person who intentionally hides part of the face to avoid identification is equivalent to taking that person’s image in the private domain. Under particular circumstances, taking a person’s picture in public may be deemed to have been perpetrated in the private domain (see: Zadik, Civil Appeal 6902/06). Only prior, informed consent can prevent such a practice from constituting a privacy infringement, a condition that is very difficult to fulfill. Paradoxically, even the requirement that the authorities inform the public about the existence of surveillance cameras, intended to mitigate the privacy infringement, might deter people from attending a demonstration.

Nonetheless, the privacy protection law exempts law enforcement authorities from liability for a privacy infringement, provided the infringement was reasonable and necessary for them to fulfill their duty. The United Kingdom’s High Court recently ruled that the use of facial recognition cameras for maintaining public order and detecting criminals is lawful under both the European Convention on Human Rights and data protection law. Note that the British police complied with European data protection law before deploying the cameras: among other steps, they conducted a privacy impact assessment and used the findings to design a framework for data collection, processing, and retention that would comply with the law.


The legislator ought to ensure, through checks and controls imposed on the law enforcement authorities, that in the quest to reap the public safety benefits the technology offers, facial recognition technology does not infringe the rights to privacy and freedom of expression.

The law enforcement authorities, for their part, need to spell out the purpose for which they implement the technology, so that the police use facial recognition only when the public interest would otherwise be in real danger. They must commit to collecting only relevant, essential, and accurate data for the stated purpose, and to not using the data for any other purpose. Whenever a facial recognition program is run to identify specific persons and the match returns a negative result, the police must purge the biometric data related to that mismatch. Whenever the system detects a match, a named human examiner must inspect it before further action is taken, to minimize the chance of a false positive. The law enforcement body must also have rigorous information security controls in place and comply with all legal and regulatory requirements and restrictions that apply to the use of surveillance cameras in the public sphere.
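The retention discipline proposed above, purging biometric data on a mismatch and routing a match to a named human examiner, can be sketched as a simple control flow. The function name, threshold, and record layout below are illustrative assumptions, not drawn from any actual police system.

```python
def handle_recognition_result(match_score, biometric_record, threshold=0.9):
    """Apply the proposed safeguards to a single facial recognition result."""
    if match_score < threshold:
        # Negative result: purge the captured biometric data immediately.
        biometric_record.clear()
        return "purged"
    # Positive result: never act automatically; queue for a named human examiner
    # to inspect before any enforcement action, reducing false-positive harm.
    return "queued_for_human_review"

record = {"embedding": [0.1, 0.2], "captured_at": "2019-10-05T14:30:00"}
outcome = handle_recognition_result(0.42, record)
```

The design choice worth noting is that deletion is the default path: data survives the pipeline only when a match crosses the threshold, and even then it reaches a person, not an automated action.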

Limor Shmerling Magazanik, Managing Director, Israel Tech Policy Institute; Advocate Noam Rosen, Policy Counsel, Israel Tech Policy Institute

This article was published by CalcalistTech

Nudge Theory Methodology for Internet Smart Regulation

On Nov 7, a group of professionals, researchers, and policymakers engaged with Jonathan Winter, Google Policy Fellow at the Milken Innovation Center, and his excellent work on using Nudge Theory methodology for internet smart regulation. ITPI’s Managing Director, Limor Shmerling Magazanik, shared some thoughts about striking a balance between promoting innovative technology that benefits society and the economy, while maintaining our values and protecting human rights. Yoram Hacohen coined it: “Human Rights by Design.”

Limor Shmerling Magazanik summarizes the three principles for policy in technology-driven sectors:

The need for clear and updated norms and rules for the internet content space was apparent during 2019. The new decade of 2020 has thrown global society into an unprecedented situation. Humanity has known pandemics before, but not in an era characterized by such global connection, both in physical transportation and in digital communication. Although societies’ focus and priorities have changed, the root concerns about how things play out in our digital dimension still echo similar challenges.

Misinformation around COVID-19 medical analysis, safety measures, and treatment options requires a swift reaction. The use of personal information in fighting the spread of the virus, through applications and other means that calculate people’s proximity to identified patients and support decisions on forced quarantine, is an important part of the multifaceted coping strategies, but it creates privacy concerns and may lead to unwarranted restrictions of liberty and other human rights infringements.

And how will this look after the crisis subsides? What will humanity be left with, and how will we roll back measures that should be used only in times of severe risk to people’s lives, and only in order to save lives?

I wish to share with you some insights from the work of Mr. Jonathan Winter, Google Policy Fellow at the Milken Innovation Center – Jerusalem Institute for Policy Research, which I had the opportunity to advise on.

When designing policy for complex societal problems in technology-driven sectors, I suggest following these principles:

Based on this knowledge:

In the report “Smart Regulation of Harmful Internet Content: Insights from Behavioral Science” Mr. Winter addresses the problem of the spread of harmful (but not necessarily illegal) content online, and the steps that regulators are taking in response to it, drawing on examples from the United Kingdom and Israel.

Specifically, the report argues that mandating internet service providers (ISPs) to block content that the government deems harmful, as several Israeli Knesset (Parliament) members have proposed in relation to protecting children from inappropriate content, is both counterproductive and a threat to fundamental democratic principles, such as the right to privacy and the right to free speech.

Accordingly, the report provides an alternative regulatory approach to deal with the threat of children’s exposure to harmful content, a threat that is rightfully and understandably concerning to parents and policymakers alike. The alternative approach the report suggests relies on insights from behavioral science. These can inform regulators about internet users’ behavior and help them craft policies that propel users to take active measures to reduce the risk. Insights drawn from behavioral science can also guide policy interventions and potentially help regulators regulate effectively while reducing the adverse consequences of direct government intervention in online content.

For the full article by Jonathan Winter, click the link below:

JWinter.Behavioral Approach to Internet Content Policy. 2020

“Legislating Online” Conference – The Knesset, Israel Parliament

The Israel Tech Policy Institute was represented in this event by Ms. Kelsey Finch, Policy counsel at the Future of Privacy Forum and Senior Fellow at the Institute.

Alongside her were speakers from government, including the head of the Antitrust Authority and a senior counsel from the Privacy Protection Authority, as well as parliament members, prominent academics, and industry representatives.

Ms. Finch presented current data regulation trends, privacy technology trends, and evolving ethics in technology. She suggested that as Israel shapes its path forward on privacy, there is much to learn from the GDPR, both its successes and its challenges. Similarly, it will be useful to follow developments in US privacy law, where a major effort to develop legislation has been launched. One path Israel can take toward ensuring a robust digital economy while protecting privacy is to incentivize privacy-enhancing technologies and research, focus on ethics, and approach data policy and regulation holistically, so that individuals are protected from harm while companies and researchers are able to innovate with data and technology.

The New Israeli Privacy Protection (Data Security) Regulations

One of the most significant developments in data protection in Israel in the past year has been the publication of the Privacy Protection (Data Security) Regulations in May 2017, which came into effect in May 2018.

The Regulations classify databases into four groups according to the level of risk created by the processing activity in the database: high, medium, basic, and databases controlled by individuals that grant access to no more than three authorized individuals.

The controllers’ duties are determined in accordance with the ‘level of risk’, which is defined by the sensitivity of the data, the number of data subjects, and the number of authorized access holders.
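To illustrate how these three factors could combine, the sketch below maps a database’s attributes to a security level. The decision structure follows the four groups named above, but the numeric thresholds are placeholders invented for illustration; the actual criteria appear in the Regulations themselves.

```python
def security_level(sensitive_data: bool, num_subjects: int,
                   num_access_holders: int) -> str:
    """Hypothetical mapping of a database's attributes to a security level.

    The real thresholds appear in the Privacy Protection (Data Security)
    Regulations; the numbers below are placeholder values only.
    """
    # Small databases controlled by individuals form their own category.
    if num_access_holders <= 3 and not sensitive_data:
        return "individually managed"
    # Sensitive data at large scale drives the highest duty level.
    if sensitive_data and num_subjects > 100_000:
        return "high"
    if sensitive_data or num_subjects > 10_000:
        return "medium"
    return "basic"

level = security_level(sensitive_data=True, num_subjects=500_000,
                       num_access_holders=20)
```

A large database of sensitive medical information with many access holders would thus land in the highest category, carrying the most demanding controller duties.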

Just recently, the Israeli Privacy Protection Authority announced the launch of enforcement efforts to examine the implementation of the new Regulations, and its intention to examine over 150 different entities by the end of the year.

The head of the Privacy Protection Authority, Alon Bachar, said in a statement that the goal is to supervise the level of data protection in bodies that manage large amounts of sensitive personal information: customer clubs, medical institutes, bodies that provide information management platforms for minors, institutions of higher education, etc.

The Entry Into Force of GDPR

To launch the Israel Tech Policy Institute, we hosted two leading experts on privacy and data protection for an in-depth discussion.

ITPI Advisory Board Member Prof. Ken Bamberger of UC Berkeley Law School joined Sharon Shemesh Azaria, head of International Affairs at the Israeli Privacy Protection Authority. The two discussed key privacy issues of the day, from the GDPR to privacy and security law in Israel.