Surveillance, Segregation, and Spatial Politics
Introduction
“This surveillance system operated by Hikvision targets Uyghurs based on racial attributes and flags them for detention at mass internment camps,” asserted the Irish Council for Civil Liberties, highlighting the profound human rights abuses linked to new surveillance technologies.[1]
CCTV has become a ubiquitous aspect of social infrastructure, outfitting public and private establishments alike, and government buildings are no exception. However, technological advances and ethical concerns necessitate new considerations. For example, the CCTV cameras deployed in Irish government buildings, manufactured by the brand Hikvision, have been robustly criticized as not only implicated in grave human rights violations but also in breach of serious national security standards.[2] Additionally, Hikvision cameras have AI-enabled facial recognition capacities, another highly controversial subject.
The Irish Council for Civil Liberties noted in an open letter that the Irish government, in deploying Hikvision cameras, had unknowingly adopted tools actively implicated in a human rights crisis. The open letter additionally emphasizes that it has been empirically demonstrated that data reports from Hikvision cameras are sent back to China.[3] Hikvision exemplifies a new era in which technological advancement can produce dangerously discriminatory outcomes. While these technologies are often justified as tools for enhancing safety and efficacy, their implementation raises pressing ethical and legal concerns.
It is evident that CCTV, especially AI-enabled facial recognition, is a cornerstone of the ongoing debate over data privacy rights. However, its broader impact on society, specifically its ethical dimensions such as discriminatory effects, is critically understudied in the Irish context, resulting in failures such as the government's own use of Hikvision cameras.
This paper argues that the integration of artificial intelligence with CCTV presents critical challenges that extend beyond privacy violations to include systematic discrimination and ethical failings. These challenges are examined through the actions and inactions of the Data Protection Commission, the regulatory body bearing primary responsibility for data protection in Ireland. Through a focus on discrimination, this paper outlines the necessity of robust governance, ethical safeguards, and community engagement to prevent the perpetuation of systematic inequities in technological advancement.
Data Protection Commission
The Data Protection Commission (DPC) is the supervisory authority in Ireland responsible for ensuring compliance with data protection law, particularly the General Data Protection Regulation (GDPR) and the Data Protection Act 2018. The DPC has been heavily involved, especially recently, in the data protection concerns posed by CCTV and AI; its 2023 Annual Report specifically outlined a trend of increased litigation concerning surveillance technologies. CCTV is an often overlooked but rapidly emerging concern for Irish citizens. The existing regulatory environment is characterized by the Data Protection Act 2018 and the CCTV Guidelines issued by the DPC. Key points of the CCTV Guidelines include:
I. Justification: CCTV may only be used for legitimate, specific, and necessary purposes.
II. Proportionality: CCTV must be proportionate to the purpose it serves.
III. Transparency: individuals must be informed of the presence of CCTV.
IV. Access and retention: access to CCTV footage should be limited, with data retained only for as long as necessary.
V. Security: appropriate technical and organizational measures should be in place to ensure the security of CCTV data.[4]
However, these points fail to adequately address the threat that CCTV poses to rights in the modern age, especially in light of technological advancements such as AI and AI-enabled facial recognition. The guidelines additionally fail to acknowledge or address ethical concerns, including the discriminatory impacts of these technologies.
While Irish regulatory perspectives on AI remain limited, the DPC has reached one compelling conclusion:
“AI is, in effect, a form of profiling based on analysis of historic patterns in training data against which an individual may be assessed to infer other information about the individual or predict behaviours…greater emphasis is needed on ensuring appropriate caution when implementing AI based solutions.”[5]
This conclusion emerged from a report coordinated by the Data Protection Commission on child-oriented data processing. While produced specifically in the context of children's rights, it holds true for AI generally and is directly relevant to AI-enabled facial recognition technology.
The convergence of AI and CCTV was explicitly referenced by Deputy Patrick Costello in an Oireachtas debate on whether facial recognition technology should be permitted in Ireland:
“I am trying to balance two committees meeting at the same time. It is interesting that the children’s committee is dealing with the impact of AI on children at the same time we are discussing the impact of AI, in a way, at this committee.”[6]
As Costello's remark suggests, AI and its implications for facial recognition technology are formidable and require specific examination, especially regarding discriminatory impacts. The evolving categories of AI and CCTV are extremely broad; although this paper makes reference to both, its focus is facial recognition technology (FRT), the product of applying AI to CCTV.
Facial Recognition
Facial recognition technology occupies a contentious ethical space at the intersection of artificial intelligence and CCTV surveillance. The technology has a discriminatory impact, particularly against individuals with darker skin tones, prompting widespread concern and legal challenges. For instance, a 2019 study by Peter Fussey and Daragh Murray of the London Metropolitan Police's FRT trial revealed that 81% of individuals flagged as suspects were innocent, raising serious doubts about the technology's reliability.[7] More concerning still, an ethics audit by Cambridge researchers evaluated three UK police deployments of FRT and found that all failed to meet legal and ethical standards.[8]
UK case law corresponds with the ethics audit. In a landmark 2020 ruling, R (on the application of Bridges) v Chief Constable of South Wales Police, the Court of Appeal declared the use of live automated facial recognition technology (LAFR) by South Wales Police unlawful.[9] The court found that the deployment of LAFR, which involved capturing and processing images of the public for comparison against a watchlist, violated Article 8 of the European Convention on Human Rights, the right to privacy.[10] This decision stemmed from the police's failure to establish a clear legal framework for LAFR use, resulting in excessive discretion and a lack of adequate safeguards. The court also determined that the police's data protection impact assessment was deficient and did not comply with the Data Protection Act 2018. The ruling sets a crucial precedent, highlighting the need for robust legal frameworks and stringent safeguards to ensure that FRT deployment adheres to privacy rights and data protection principles.
Such evidence has led to bans on FRT in cities like San Francisco, Boston, and Oakland in 2019 and 2020, and in June 2023 the European Parliament voted to ban live FRT in public spaces.[11] Despite this, Ireland has continued to consider FRT legislation, risking the imposition of a discriminatory, ineffective, and intrusive technology that fails to meet ethical or legal standards.
Facial recognition technology, the intersection of AI and CCTV, demonstrates both direct and indirect discrimination in its application, particularly in the cases mentioned. Direct discrimination occurs when individuals are subjected to wrongful treatment directly linked to their protected characteristics, such as race.[12] For instance, the disproportionate misidentification of individuals with darker skin tones by facial recognition systems ties the impugned treatment (wrongful arrests) directly to their race.
Indirect discrimination, on the other hand, arises from ostensibly neutral provisions, such as the adoption of facial recognition technology, which result in unequal impacts on protected groups.[13] The use of such systems disadvantages individuals with darker skin tones relative to others due to inherent biases in the technology's design and training datasets. This reflects institutional discrimination, where neutral policies systematically disadvantage marginalized groups. Additionally, sociologists emphasize that historic discrimination has contemporary consequences: even ostensibly minor forms of discrimination, especially those rooted in historical precedent, must be addressed.[14]
Under EU and Irish law, this kind of discrimination would fall under the scope of “apparently neutral provisions,” which include criteria, practices, or policies that unintentionally create disparities. Facial recognition technology perpetuates systemic inequities in these contexts, necessitating legal and institutional interventions to address these harms. However, the harm caused by CCTV and AI extends far beyond the phenomena of facial recognition technology. At their core, these issues are deeply entrenched in societal inequalities, further highlighting the discriminatory challenges posed by facial recognition.
CCTV: South Africa Case Study
The use of surveillance technologies, particularly AI-enabled CCTV systems, raises profound ethical concerns due to their potential to perpetuate social inequities and erode civil liberties. These concerns are now emerging internationally, and they are most powerfully illustrated by the example of South Africa.
A surveillance and security company, Vumacam, runs a centralized, privatized surveillance operation in South Africa that utilizes AI and facial recognition.[15] With over 6,600 cameras, this massive network operates exclusively in the interest of paying clients rather than the broader public. Wealthier neighborhoods benefit disproportionately, while poorer areas like Soweto remain excluded for lack of paying customers. This spatial disparity highlights the inequality and discrimination inherent in CCTV surveillance, where public spaces are effectively monetized for private gain.
Further compounding these discriminatory and ethical issues is the integration of advanced AI technologies, such as facial recognition and predictive policing. In South Africa, these systems, especially AI-enabled facial recognition, have demonstrated significant racial biases, disproportionately flagging Black South Africans as potential threats. This phenomenon has been described as a form of “AI-powered apartheid,” echoing the passbook system of apartheid-era South Africa that restricted the physical movements of Black individuals while granting white individuals unrestricted mobility.[16] Such systems not only reproduce historical injustices but also exacerbate existing racial inequalities under the guise of technological neutrality. The imposition of this technology is definitively discriminatory.
A counterargument exists, in both South Africa and Ireland, that these measures are necessary for public safety. However, research suggests that they do not enhance safety at all. For example, the installation of Huawei's “Safe City” surveillance systems in various African urban centers resulted in no sustained reduction in violent crime.[17] In fact, cities like Nairobi and Islamabad experienced an increase in crime rates despite these technologies being marketed as solutions to urban insecurity. This underscores the ethical dilemma of investing in systems that prioritize control and surveillance over addressing the root causes of crime, such as inequality and systemic discrimination. It additionally underscores the human rights concerns that emerge from employing technologies that are inherently biased.
In light of this, it becomes evident that surveillance technologies, far from being neutral tools, are deeply embedded in and reflective of societal power dynamics. Their deployment must be critically evaluated to prevent the perpetuation of spatial and racial discrimination, and to ensure that the rights and dignity of all individuals are upheld.
Irish Context
Although there is limited official Irish government data on racial profiling in policing, significant evidence suggests its occurrence. Reports such as “Policing and Racial Discrimination in Ireland” highlight that racial profiling in Ireland does exist, violates human rights, erodes trust between police and minority communities, and undermines the credibility of law enforcement.[18] These practices disproportionately affect marginalized groups, compounding societal inequities and contradicting efforts to build safer communities and cities.
Facial recognition technology, and its potential use in Irish policing, sits at the centre of the ethical debate over CCTV and technological advancements such as AI. Advocates, such as Minister for Justice Helen McEntee, have argued that the technology could enhance investigations and free up Garda resources.[19] However, evidence from other jurisdictions, such as South Africa, shows that AI-enabled facial recognition systems often exacerbate existing biases, disproportionately targeting racialized groups and flagging them as potential threats. These biases risk reinforcing systemic discrimination, particularly in predictive policing models, which rely on flawed data and biased algorithms. Furthermore, as previously demonstrated, the use of these technologies not only fails to create safer cities but also produces unfair and unjust accusations. These considerations must inform any regulatory decision of such magnitude in Ireland.
Surveillance, Segregation, and Spatial Politics: Linking Ireland and South Africa
Ireland has a longstanding history of discriminatory and harmful surveillance practices that deeply informs the current debate on facial recognition technologies. One scholar, writing on the impact of surveillance on Catholic communities during the Troubles, observed that “the fact of being seen, being watched and thus being made a potential target of violent action is deeply inserted in the lives of people and materialised in their bodies.”[20]
In South Africa, CCTV technologies have been described as facilitating a form of “AI-powered apartheid,” disproportionately targeting Black communities while privileging wealthier, predominantly white neighborhoods.[21] Such practices perpetuate historical patterns of discrimination, and create deep ethical concerns.
This dynamic bears striking parallels to Northern Ireland's history of surveillance during the Troubles. Surveillance systems were heavily concentrated in Catholic-majority areas, such as the Falls Road in Belfast and the Bogside in Derry, reinforcing spatial segregation and subjecting these communities to heightened scrutiny. Cameras and microphones, ostensibly installed for security purposes, became tools of control, surveilling a population viewed as “the other.”
Spatial politics remain vital in contemporary Ireland, where the nexus of ethnicity, space, and body continues to shape societal dynamics. New CCTV initiatives, including Facial Recognition technology, risk echoing these historical injustices. By embedding surveillance technologies in specific neighborhoods, such measures may reinforce segregation and exacerbate existing divides. Surveillance in Ireland, much like in South Africa, can potentially serve to enforce boundaries—both physical and social—between communities, perpetuating cycles of mistrust and marginalization.
As highlighted by Zurawski, surveillance in conflict zones like Northern Ireland instilled a pervasive sense of being watched, transforming individuals into potential targets.[22] This legacy underscores the deeply embedded nature of surveillance in the cultural and political fabric of the region. The deployment of AI-enabled CCTV in modern Ireland, without critical oversight, risks reviving these historical patterns, transforming public spaces into sites of segregation and control rather than inclusion and safety.
By linking the systemic biases of South African surveillance practices to Ireland’s historical and contemporary experiences, it becomes clear that unregulated surveillance technologies, absent profound conversations on discriminatory impacts, pose a profound risk. They have the potential to replicate and exacerbate existing inequalities, highlighting the urgent need for ethical governance and robust safeguards to ensure that these systems do not undermine the fundamental principles of equality and human dignity.
Recommendations
The growing integration of AI and surveillance technologies, especially facial recognition technology, across various sectors presents significant data protection challenges that are best addressed through the expertise of the Data Protection Commission. The DPC should robustly consider the best means of preserving privacy, equality, and crucially ensuring non-discrimination throughout the implementation of new technologies. Three central recommendations are provided in tackling these emerging issues.
I. Encourage Data Protection Impact Assessments (DPIAs).
a. Ensure that thorough DPIAs are conducted prior to deploying surveillance technologies, specifically ones which are AI enabled.
II. Encourage Bias Audits.
a. Implementing formal assessments of surveillance technologies with a specific focus on non-discrimination.
III. Engaging Stakeholders and Experts.
a. Collaborating with AI ethicists, data scientists, and legal experts to develop robust guidelines for the responsible use of CCTV, AI, and facial recognition technology.
b. Facilitating community engagement by involving civil society organizations, human rights groups, and affected communities to understand the societal impact of such technologies.
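The bias audits recommended above can be made concrete. The sketch below is purely illustrative, with invented group labels and match records: it computes the false-positive rate of a hypothetical FRT system per demographic group, the kind of disparity metric a formal non-discrimination audit would examine.

```python
# Illustrative sketch only: a minimal bias audit comparing false-positive
# rates of a hypothetical facial recognition system across demographic groups.
# The group names and match records below are invented example data.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, was_flagged, on_watchlist) tuples.

    Returns, per group, the share of people NOT on the watchlist
    who were nonetheless flagged by the system (the false-positive rate).
    """
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, was_flagged, on_watchlist in records:
        if not on_watchlist:
            negatives[group] += 1
            if was_flagged:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives if negatives[g]}

# Example data: group_b is flagged twice as often despite no one
# in the sample actually being on the watchlist.
sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]

rates = false_positive_rates(sample)
# A ratio well above 1 between groups signals disparate impact
# that an audit should flag for investigation.
disparity = max(rates.values()) / min(rates.values())
```

On this toy data the audit reports a false-positive rate of 0.25 for group_a and 0.5 for group_b, a disparity ratio of 2.0; a real audit would apply the same comparison to operational deployment logs.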
Conclusion
Advancements in technology, such as the Hikvision cameras discussed in the introduction, are creating a crisis at the intersection of technology, ethics, and human rights. These systems, often promoted as enhancing security and efficiency, present profound challenges to privacy, equality, and non-discrimination.
Ireland is now at a critical juncture in deciding how to regulate and implement new technologies. Without robust governance, ethical safeguards, and comprehensive oversight, such as the recommendations above, the deployment of systems such as AI-enabled facial recognition risks severely perpetuating systematic discrimination. The Data Protection Commission has a crucial role to play in ensuring that new technologies serve the public interest without infringing on essential liberties. By enhancing its pre-existing engagement with CCTV and AI, the DPC can ensure that clear legal frameworks prevent discriminatory impacts.
By prioritizing Data Protection Impact Assessments, bias audits, and inclusive stakeholder engagement, Ireland can set a precedent for ethical and responsible technological integration.
Bibliography
Bell, Mark (2007) ‘Chapter Two: Direct Discrimination’ in D. Schiek et al. (eds.) Cases, Materials and Text on National, Supranational and International Non-Discrimination Law (Oxford: Hart Publishing), 185-322.
R (on the application of Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.
Council of Europe. (1950). European Convention on Human Rights, Article 8. Retrieved from https://www.echr.coe.int.
Digital Policy Alliance. (n.d.). Hikvision technology in the Irish Parliament: Privacy concerns. Retrieved December 18, 2024, from https://digitalpolicy.ie/hikvision-technology-in-the-irish-parliament-privacy-concerns/.
Data Protection Commission. (n.d.). Guidance on the use of CCTV. Retrieved December 22, 2024, from https://www.dataprotection.ie/en/dpc-guidance/guidance-on-the-use-of-cctv.
Data Protection Commission. (2021, December). For a child-oriented approach to data processing. https://www.dataprotection.ie/sites/default/files/uploads/2021-12/Fundamentals%20for%20a%20Child-Oriented%20Approach%20to%20Data%20Processing_FINAL_EN.pdf.
Diversity Matters (2024) Policing and Racial Discrimination in Ireland (Irish Council for Civil Liberties; Irish Network Against Racism). https://inar.ie/wp-content/uploads/2024/04/1.-POLICING-AND-RACIAL-DISCRIMINATION-1.pdf.
Fussey, P., & Murray, D. (2019). Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology. University of Essex Human Rights Centre.
Radiya-Dixit, Evani, A Sociotechnical Audit: Assessing Police Use of Facial Recognition (Cambridge: Minderoo Centre for Technology and Democracy, 2022).
Hao, K., & Swart, H. (2022, April 19). South Africa’s private surveillance machine is fueling a digital apartheid. MIT Technology Review.
Gagliardone, I. (2020). The Impact of Chinese Tech Provision on Civil Liberties in Africa. South African Institute of International Affairs. http://www.jstor.org/stable/resrep29588.
General Scheme of the Garda Síochána (Recording Devices) (Amendment) Bill: Discussion [Parliamentary Discussion]. (2023, June 15). Dáil Éireann, Dublin, Ireland.
Irish Council for Civil Liberties. (n.d.). ICCL calls for immediate removal of Hikvision cameras from Oireachtas. Retrieved December 18, 2024, from https://www.iccl.ie/news/iccl-calls-for-immediate-removal-of-hikvision-cameras-from-oireachtas/
Irish Council for Civil Liberties. (n.d.). Garda use of facial recognition technology poses extreme risk to human rights. Retrieved from https://www.iccl.ie/police-justice-reform/garda-use-of-facial-recognition-technology-poses-extreme-risk-to-human-rights/
Small, Mario L., and Devah Pager (2020) ‘Sociological perspectives on racial discrimination’, Journal of Economic Perspectives 34(2), 49-67.
Zurawski, N. (2004). “I Know Where You Live!” – Aspects of Watching, Surveillance and Social Control in a Conflict Zone (Northern Ireland). Surveillance & Society, 2(4).
[1] Irish Council for Civil Liberties. (n.d.). ICCL calls for immediate removal of Hikvision cameras from Oireachtas. Retrieved December 18, 2024, from https://www.iccl.ie/news/iccl-calls-for-immediate-removal-of-hikvision-cameras-from-oireachtas/.
[2] Digital Policy Alliance. (n.d.). Hikvision technology in the Irish Parliament: Privacy concerns. Retrieved December 18, 2024, from https://digitalpolicy.ie/hikvision-technology-in-the-irish-parliament-privacy-concerns/.
[3] Irish Council for Civil Liberties. (n.d.). ICCL calls for immediate removal of Hikvision cameras from Oireachtas. Retrieved December 18, 2024, from https://www.iccl.ie/news/iccl-calls-for-immediate-removal-of-hikvision-cameras-from-oireachtas/.
[4] Data Protection Commission. (n.d.). Guidance on the use of CCTV. Retrieved December 22, 2024, from https://www.dataprotection.ie/en/dpc-guidance/guidance-on-the-use-of-cctv.
[5] Data Protection Commission. (2021, December). For a child-oriented approach to data processing. https://www.dataprotection.ie/sites/default/files/uploads/2021-12/Fundamentals%20for%20a%20Child-Oriented%20Approach%20to%20Data%20Processing_FINAL_EN.pdf.
[6] General Scheme of the Garda Síochána (Recording Devices) (Amendment) Bill: Discussion [Parliamentary Discussion]. (2023, June 15). Dáil Éireann, Dublin, Ireland.
[7] Fussey, P., & Murray, D. (2019). Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology. University of Essex Human Rights Centre.
[8] Radiya-Dixit, Evani, A Sociotechnical Audit: Assessing Police Use of Facial Recognition (Cambridge: Minderoo Centre for Technology and Democracy, 2022).
[9] R (on the application of Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.
[10] Council of Europe. (1950). European Convention on Human Rights, Article 8. Retrieved from https://www.echr.coe.int.
[11] General Scheme of the Garda Síochána (Recording Devices) (Amendment) Bill: Discussion [Parliamentary Discussion]. (2023, June 15). Dáil Éireann, Dublin, Ireland.
[12] Bell, Mark (2007) ‘Chapter Two: Direct Discrimination’ in D. Schiek et al. (eds.) Cases, Materials and Text on National, Supranational and International Non-Discrimination Law (Oxford: Hart Publishing), 185-322.
[13] Collins, Hugh, and Tarunabh Khaitan (eds.) (2018) Foundations of Indirect Discrimination Law (Oxford: Hart Publishing). https://tinyurl.com/ych4lesk
[14] Small, M. L., & Pager, D. (2020). Sociological Perspectives on Racial Discrimination. The Journal of Economic Perspectives, 34(2), 49–67. https://www.jstor.org/stable/26913184
[15] Hao, K., & Swart, H. (2022, April 19). South Africa’s private surveillance machine is fueling a digital apartheid. MIT Technology Review.
[16] Ibid.
[17] Gagliardone, I. (2020). The Impact of Chinese Tech Provision on Civil Liberties in Africa. South African Institute of International Affairs. http://www.jstor.org/stable/resrep29588.
[18] Diversity Matters (2024) Policing and Racial Discrimination in Ireland (Irish Council for Civil Liberties; Irish Network Against Racism). https://inar.ie/wp-content/uploads/2024/04/1.-POLICING-AND-RACIAL-DISCRIMINATION-1.pdf.
[19] Irish Council for Civil Liberties. (n.d.). Garda use of facial recognition technology poses extreme risk to human rights. Retrieved from https://www.iccl.ie/police-justice-reform/garda-use-of-facial-recognition-technology-poses-extreme-risk-to-human-rights/.
[20] Zurawski, N. (2004). “ I Know Where You Live!”–Aspects of Watching, Surveillance and Social Control in a Conflict Zone (Northern Ireland). Surveillance & Society, 2(4).
[21] Hao, K., & Swart, H. (2022, April 19). South Africa’s private surveillance machine is fueling a digital apartheid. MIT Technology Review.
[22] Zurawski, N. (2004). “ I Know Where You Live!”–Aspects of Watching, Surveillance and Social Control in a Conflict Zone (Northern Ireland). Surveillance & Society, 2(4).
