Response to Dutch Data Protection Authority (Autoriteit Persoonsgegevens)
Call for input on AI systems for making risk assessments regarding criminal offences
March 19, 2025
Call For Input: https://autoriteitpersoonsgegevens.nl/en/documents/call-for-input-on-ai-systems-for-making-risk-assessments-regarding-criminal-offences
TeachSomebody welcomes the opportunity to share insights with the Dutch Data Protection Authority, in response to the call for input on prohibited AI systems.
About TeachSomebody
TeachSomebody is a subsidiary of the BoesK partnership and is situated in the Netherlands. Our learner-centric e-learning platform is committed to delivering high-quality, accessible, and adaptable education, particularly to underserved communities. We empower individuals with the skills and digital competencies needed to thrive in the modern workforce and contribute to the digital economy.
In addition to our educational initiatives, we are actively engaged in shaping global AI policy by providing expert input to organisations such as the OECD, UNESCO, the African Union, and the European Union. Our responsible AI training programs have reached East African universities (e.g., Jomo Kenyatta University of Agriculture and Technology and the Uganda Institute of Information and Communications Technology), the Ugandan government, the Ugandan Police Force, the National ICT Innovation Hub, and various sectors across Uganda. Our lead policy advisor, Kadian Davis-Owusu, PhD, is also part of the risk assessment for systemic risks working group contributing to the development of the General-Purpose AI Code of Practice for the European Union. We advocate for the responsible design, development, deployment, and maintenance of AI systems that comply with legal frameworks such as the EU AI Act and the Council of Europe AI Treaty, as well as instruments such as the OECD AI Principles and UNESCO's Recommendation on the Ethics of Artificial Intelligence.
Questions on Criterion 1
1. Can you give a concrete example of (imaginary) AI systems used for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence?
Rather than offering imaginary systems, we point to real-world examples of AI systems used to assess or predict the risk of individuals committing criminal offences. A few notable instances follow:
A. COMPAS – COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment tool developed by Northpointe (now Equivant) and widely used in the United States to predict the likelihood of a defendant reoffending. It evaluates various factors, including criminal history, personal background, and social behaviour, to generate risk scores that inform decisions on bail, sentencing, and parole.
Investigations, such as the one by ProPublica, have raised concerns about racial biases in risk assessment predictions, suggesting that these tools may disproportionately label African American defendants as high risk compared to their white counterparts.
B. PredPol – PredPol is predictive policing software that uses algorithms to analyse crime data and forecast where future crimes are likely to occur. Law enforcement agencies, including the Los Angeles Police Department, have used it to direct patrols toward predicted crime hotspots.
The system focuses on predicting crime hotspots based on historical data, primarily aiming to prevent property crimes like burglaries. Critics argue that PredPol can perpetuate existing biases, leading to over-policing in minority communities and raising ethical concerns about surveillance and civil liberties.
C. VALCRI – VALCRI (Visual Analytics for Sense-making in Criminal Intelligence Analysis) is a European Union-funded project designed to assist criminal intelligence analysis by processing and visualising large datasets. It helps investigators identify patterns and connections in criminal data, potentially predicting criminal behaviour.
The system integrates various data sources to provide visual analytics, aiding law enforcement in making informed decisions. While VALCRI enhances data analysis capabilities, it also raises questions about data privacy, the potential for cognitive biases in interpretation, and the ethical use of AI in policing.
D. Precobs – Precobs (Pre Crime Observation System) is a predictive policing software used in parts of Germany and Switzerland. It analyzes past crime data to predict and prevent future burglaries by identifying patterns and potential repeat offenses. The system uses specific trigger and anti-trigger events from past crimes to forecast future incidents within a defined area and timeframe. In regions where Precobs was tested, authorities reported reductions in burglary rates, though discussions continue about the system's long-term effectiveness and ethical implications.
These examples illustrate the growing integration of AI systems in criminal justice to assess risks and predict potential offences. However, they also highlight significant ethical, legal, and social challenges, particularly concerning bias, transparency, and the balance between public safety and individual rights.
2. Can you give an example of an (imaginary) AI system where it is unclear to you whether it is an AI system that is used for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence?
Below we share real-life examples in response to question 2.
A. Singapore’s Safe City Initiative
Singapore has deployed AI-powered smart surveillance systems that analyse real-time video feeds to detect unusual behaviours (e.g., loitering, fights, or unattended bags). While these systems do not explicitly label individuals as high risk, they generate alerts that can lead to targeted police actions. This raises the question: does detecting suspicious behaviour, without assigning personal risk scores, still count as predictive policing?
B. China’s Sharp Eyes Surveillance System
Sharp Eyes is deployed in communities and public spaces and uses AI-powered facial recognition and movement tracking to monitor individuals in real time. The AI flags individuals deemed suspicious based on behaviour patterns (e.g., visiting known criminals or staying in high-crime areas). The line is blurred here: the system does not generate formal risk scores, but it does influence law enforcement interventions.
In summary, these cases blur the line because they do not rely directly on criminal history or demographic profiling. Nevertheless, they are intrusive, still influence law enforcement decisions, and can lead to human rights violations.
3. Is the distinction between prediction and assessment clear to you? If not, can you explain why in more detail?
The distinction between prediction and assessment in AI-driven risk analysis remains ambiguous, particularly in law enforcement, where AI-generated risk evaluations can lead to preventive measures that closely resemble predictive policing. The EU AI Act classifies AI systems used in assessing criminal risk as high-risk, recognising their potential impact on fundamental rights and the risk of unjustified restrictions on individuals' freedoms (EU AI Act). However, the way AI models assess risk often makes it difficult to determine whether they are merely categorizing individuals based on present factors or actively predicting future offences.
For example, pre-trial detention risk assessment tools assign scores based on an individual's history and contextual factors, which law enforcement and judges may treat as predictive rather than simply evaluative (RAND). Similarly, these tools often rely on historical crime data, which can be biased due to systemic disparities in policing, leading to higher risk scores for marginalised communities (New America). Even if an AI system does not explicitly predict criminal behaviour, if it results in increased surveillance, restrictions, or punitive actions, its function becomes indistinguishable from that of a predictive policing tool.
This regulatory gray area presents a challenge under the EU AI Act, as AI developers and law enforcement agencies might argue that their systems are only assessing risk, thereby attempting to avoid the strict compliance measures required for high-risk AI systems. Without clearer guidance, there is a risk that assessment-based AI could be used as a loophole to justify interventions that should be subject to stricter oversight. To prevent misuse, the EU AI Act should further clarify when an AI system assessing criminal risk effectively becomes a predictive tool and ensure that such systems remain accountable to the same high-risk regulatory standards.
4. The outcome of the AI system is a risk assessment to assess or predict the risk of a natural person committing a criminal offence. Can you give an (imaginary) example of what such a risk assessment could include?
The following imaginary example features CrimeGuard AI, a risk assessment reporting tool that employs biometric and behavioural analysis; it is inspired by the predictive policing features outlined in this Europol paper.
Subject: John Doe
Date: March 18, 2025
System: CrimeGuard AI – Biometric & Behavioural Criminal Risk Assessment
Risk summary
Key risk factors
· Biometric & facial recognition (30%) – AI analysis of micro-expressions, gait, and emotional cues indicates deception risk.
· Behavioural patterns (25%) – Frequent ATM withdrawals, presence in fraud-prone locations, and engagement in encrypted communication channels.
· Financial anomalies (25%) – Sudden wealth increase, anonymous crypto transactions, and links to shell companies (i.e., companies that exist only on paper, with no significant operations or employees).
· Psychological profile (20%) – Sentiment analysis suggests risk-seeking behaviour and a history of financial misconduct.
It is important to note the following when considering the legal implications.
· Biometric surveillance risks – Facial/emotion-recognition AI may be prohibited for criminal risk prediction.
· False positives & bias – Risk of misclassification leading to unjustified law enforcement actions.
· Presumption of innocence – AI predictions should not trigger punitive measures without human review.
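To make the weighting in this imaginary report concrete, the short Python sketch below shows one way such a tool could aggregate its four factor scores into a single figure. It is a minimal illustration only: the function name, factor keys, weights, and example values are our own hypothetical choices, mirroring the percentages listed above rather than any real system.

```python
# Illustrative sketch only: how an imaginary tool such as CrimeGuard AI might
# aggregate its weighted factor scores into a single risk figure. All names,
# weights, and values are hypothetical, taken from the example report above.

FACTOR_WEIGHTS = {
    "biometric_facial_recognition": 0.30,
    "behavioural_patterns": 0.25,
    "financial_anomalies": 0.25,
    "psychological_profile": 0.20,
}

def aggregate_risk(factor_scores: dict[str, float]) -> float:
    """Combine per-factor scores (0-100) into a weighted overall score."""
    return sum(FACTOR_WEIGHTS[name] * factor_scores.get(name, 0.0)
               for name in FACTOR_WEIGHTS)

# Example: hypothetical factor scores for the fictional subject "John Doe".
scores = {
    "biometric_facial_recognition": 72.0,
    "behavioural_patterns": 65.0,
    "financial_anomalies": 80.0,
    "psychological_profile": 58.0,
}
overall = aggregate_risk(scores)
print(f"Overall risk score: {overall:.2f}")  # -> 69.45 on a 0-100 scale
```

Even such a simple weighted aggregation carries the legal risks listed above, since the resulting number can drive law enforcement action against a person.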
Questions on Criterion 2
5. Can you give an (imaginary) example of an AI system where it is not sufficiently clear to you whether it qualifies as a system used to assess or predict a risk of a criminal offence?
TaxGuard AI is an imaginary, advanced risk assessment tool used by tax authorities to detect potential tax evasion, fraud, and financial irregularities by analyzing financial transactions, offshore transfers, corporate structures, and tax filings. While designed for administrative enforcement, its findings could lead to criminal investigations, creating ambiguity under the EU AI Act. The system assigns risk scores to individuals and businesses, flagging high-risk cases for further review. However, the distinction between tax non-compliance (administrative offence) and tax fraud (criminal offence) varies across jurisdictions. Some flagged cases could result in fines or audits, while others may lead to criminal charges if AI-detected patterns suggest deliberate fraud. Notably, the UK’s HMRC Connect system already uses AI to cross-reference tax filings and financial data, leading to audits and, in some cases, criminal investigations (HMRC Connect). Similarly, the IRS in the US applies AI for fraud detection, though legal concerns remain about transparency and potential bias (CIAT).
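To illustrate the ambiguity described above, the hypothetical sketch below shows how an imaginary tool like TaxGuard AI might route the very same flagged case toward either an administrative audit or a criminal referral, depending on a crude inferred-intent signal. The function, thresholds, and indicators are invented and exist only to show where the administrative/criminal line would be drawn inside such a system.

```python
# Illustrative sketch only: why the imaginary TaxGuard AI blurs administrative and
# criminal enforcement. The same flag can lead to an audit or to a criminal referral
# depending on an inferred "intent" signal. All rules and thresholds are hypothetical.

def route_flagged_case(risk_score: float, intent_indicators: int) -> str:
    """Route a flagged taxpayer based on a risk score and crude intent indicators."""
    if risk_score < 60:
        return "no action"
    if intent_indicators >= 2:
        # e.g. falsified records plus undeclared offshore transfers in this fiction
        return "refer for criminal investigation"   # criminal-law track
    return "schedule administrative audit"          # administrative track

print(route_flagged_case(risk_score=75, intent_indicators=1))  # administrative audit
print(route_flagged_case(risk_score=75, intent_indicators=2))  # criminal referral
```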
This blurring of administrative and criminal enforcement raises concerns about whether TaxGuard AI would function as a predictive system. If authorities rely too heavily on AI-generated risk scores to trigger criminal proceedings, it could fall under the EU AI Act’s high-risk or prohibited categories. The European Parliament has noted that distinguishing administrative from criminal tax offences is essential to prevent excessive state power and violations of due process rights (European Parliament Report). Also, the European Court of Justice has ruled on cases where individuals were subjected to both administrative and criminal penalties for the same tax offence, highlighting the risk of double jeopardy (EUCRIM).
This regulatory uncertainty highlights key risks such as over-reliance on AI-driven risk scores, lack of transparency, and potential challenges in contesting AI-based decisions. Without clear legal guidelines, AI tools like TaxGuard AI could unintentionally function as predictive policing for financial crimes, potentially violating EU AI Act restrictions on criminal risk prediction.
6. Can you give examples of offences where, despite the explanations above, it is still unclear to you whether they classify as criminal offences?
Yes, despite the explanations, certain offences remain unclear in their classification as criminal or administrative under EU law and national interpretations. The European Court of Justice has ruled that "criminal offence" has an autonomous meaning under EU law, meaning Member States must interpret it consistently; nevertheless, borderline cases remain legally ambiguous, particularly concerning serious data protection violations and AI-generated disinformation. In the realm of data protection, the General Data Protection Regulation (GDPR) allows administrative fines of up to €20 million or 4% of global annual turnover for serious violations, such as failure to protect personal data or unauthorised data processing (GDPR, Article 83) (GDPR.eu). Less severe violations, such as insufficient record-keeping, may result in fines of up to €10 million or 2% of turnover. However, if a data breach involves intentional misconduct, such as the deliberate sale of personal data or identity fraud, it may escalate into a criminal offence under national laws, leading to potential prosecutions for fraud or cybercrime (GDPR-info.eu). The distinction between negligence (administrative fine) and intent (criminal liability) remains legally ambiguous, leaving enforcement to national interpretations.
Similarly, AI-generated disinformation, particularly in election manipulation, presents classification challenges. The EU Digital Services Act establishes content moderation obligations for online platforms but does not explicitly criminalise AI-generated disinformation (European Commission). If misinformation is spread unintentionally, it is generally subject to administrative media regulations. However, if AI-generated false information is used deliberately to manipulate elections, it could be prosecuted under national election laws as electoral fraud or foreign interference. The EU AI Act acknowledges the risks of AI-generated misinformation but does not create a uniform legal framework for criminalising election-related AI disinformation, leaving enforcement to individual Member States (Columbia Journal of European Law).
These examples highlight legal grey areas where offences may fall under either administrative or criminal jurisdiction, depending on enforcement practices, intent, and national legal interpretations.
Questions on Criterion 3
7. Can you give an (imaginary) example of an AI system making risk assessments or predictions of natural persons in order to assess or predict the risk of a natural person committing a criminal offence solely on the basis of automated profiling of natural persons?
PredictCrime AI is an imaginary, advanced predictive policing system used by law enforcement agencies to assess the likelihood of individuals committing criminal offences based solely on automated profiling. The system collects and processes large-scale personal data to generate risk scores for individuals, categorising them based on their perceived likelihood of engaging in criminal activity. The AI operates without case-specific evidence or human intervention, relying exclusively on profiling models trained on historical crime data.
The system functions by analysing socioeconomic background, residential history, online behaviour, past interactions with law enforcement, and movement patterns. For instance, individuals from high-crime neighbourhoods, those with financial instability, or those who have previously been in proximity to known offenders may be flagged as high-risk, even if they have never committed a crime. The AI assigns risk scores ranging from 0 to 100, where a higher score increases the likelihood of law enforcement intervention, such as increased surveillance or preemptive questioning. Since the model operates solely on profiling without human oversight, individuals may be subjected to preventive policing measures based purely on statistical correlations rather than actual criminal intent or behaviour.
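A minimal sketch of what "solely on the basis of automated profiling" could look like in practice is given below. Every attribute, coefficient, and threshold is hypothetical; the point is that the score depends exclusively on group-level characteristics of the person, with no case-specific evidence and no human review, which is precisely what the legal analysis in the next paragraph addresses.

```python
# Illustrative sketch only: the kind of profiling-only scoring the imaginary
# PredictCrime AI is described as performing. Every feature and coefficient here
# is hypothetical; the score depends exclusively on group-level attributes,
# with no case-specific evidence or human review.

PROFILE_COEFFICIENTS = {
    "lives_in_high_crime_area": 25,
    "financial_instability": 20,
    "prior_police_contacts": 30,
    "proximity_to_known_offenders": 25,
}

def profiling_only_score(profile: dict[str, bool]) -> int:
    """Return a 0-100 score derived solely from automated profiling attributes."""
    score = sum(weight for attr, weight in PROFILE_COEFFICIENTS.items()
                if profile.get(attr, False))
    return min(score, 100)

# A person who has never committed a crime can still be flagged as "high risk".
person = {"lives_in_high_crime_area": True, "financial_instability": True,
          "proximity_to_known_offenders": True}
print(profiling_only_score(person))  # -> 70, triggering surveillance in this fiction
```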
PredictCrime AI would fall under the prohibited category of the EU AI Act because it makes risk assessments exclusively through profiling, without incorporating additional substantial elements such as case-specific evidence or real-time behavioural analysis. The system evaluates personal data to predict aspects of a person’s reliability, behaviour, and movements, aligning with the GDPR definition of profiling. Moreover, it lacks meaningful human intervention, making it a fully automated decision-making tool that can lead to discrimination, wrongful targeting, and violations of the presumption of innocence.
The ethical and legal concerns surrounding PredictCrime AI are significant. The system risks reinforcing systemic biases, as historical crime data could reflect discriminatory policing patterns that disproportionately target marginalised communities. Furthermore, since individuals have no direct means to challenge or contest their risk scores, this lack of transparency could raise concerns about due process violations and fundamental rights infringements. Given that the EU AI Act prohibits risk assessments solely based on profiling (see Article 5, 1d), an AI system like PredictCrime AI would be deemed unlawful under the regulation.
8. Could you give an (imaginary) example of an AI system making risk assessments or predictions of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, where this is made possible solely on the basis of the assessment of their personality traits and characteristics?
NeuroCrime AI is an imaginary automated risk assessment system used by European law enforcement to predict criminal behaviour solely based on personality traits. It is particularly applied to migrants from Africa and the Middle East, analysing psychological attributes such as impulsivity, aggression levels, and emotional regulation to assign a criminal propensity score.
For example, a Syrian migrant applying for asylum may be required to undergo a psychological evaluation. If the system detects high impulsivity and low emotional stability, it may classify them as high risk for violent crime, leading to increased surveillance or restrictions on residency permits. Importantly, this assessment does not consider past behaviour, socioeconomic factors, or criminal records; it is based entirely on psychological profiling.
Notably, NeuroCrime AI would violate the EU AI Act’s ban on criminal risk assessments based solely on personality traits, as it assumes inherent psychological predisposition to crime without any case-specific evidence. This raises ethical and legal concerns such as the following.
i. Discrimination and racial bias – Migrants, especially those from war-torn and less privileged societies, may be unfairly profiled, reinforcing stereotypes and systemic biases.
ii. Presumption of innocence violation – Flagging individuals before any offence occurs contradicts basic legal principles.
iii. Barriers to legal residency – A negative risk score could impact asylum claims or integration efforts.
Ultimately, NeuroCrime AI would be prohibited under the EU AI Act, as it criminalizes African and Middle Eastern migrants based on psychological profiling, violating anti-discrimination laws and fundamental rights.
9. Could you give an example of an (imaginary) AI system making risk assessments or predictions of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, but where these risk assessments are not solely based on profiling or assessing personality traits or characteristics of natural persons?
CrimePredict+ AI is an imaginary AI-driven risk assessment system used by law enforcement to predict the likelihood of individuals committing a criminal offence based on a combination of behavioural data, real-time activity monitoring, and past interactions with the justice system. Unlike systems that rely solely on profiling or personality traits, CrimePredict+ AI integrates multiple data sources, including geolocation patterns, financial transactions, prior legal records, and social network analysis, to identify potential criminal risks.
For instance, if an individual frequently visits known criminal hotspots, engages in suspicious financial transactions, and has past minor offences, the AI may flag them as medium risk for organized fraud. However, the system also considers case-specific evidence, such as recent behavioural changes, cooperation with authorities, or rehabilitation efforts, allowing human oversight to ensure fair assessments.
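By contrast with a profiling-only system, the hypothetical sketch below illustrates the distinguishing features described above: a flag can only be raised when verifiable, case-specific evidence is present, mitigating context lowers the signal, and the output is always referred to a human investigator. Field names, weights, and thresholds are invented for illustration.

```python
# Illustrative sketch only: how the imaginary CrimePredict+ AI is described as
# differing from a profiling-only system. Case-specific, verifiable evidence is
# required before any flag is raised, and a human reviewer makes the final call.
# All fields, weights, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class CaseEvidence:
    suspicious_transactions: int       # verified, case-specific financial records
    prior_convictions: int             # confirmed legal history
    cooperating_with_authorities: bool
    in_rehabilitation_programme: bool

def assess(profile_score: float, evidence: CaseEvidence) -> str:
    """Combine a profiling signal with case-specific evidence; never auto-decide."""
    if evidence.suspicious_transactions == 0 and evidence.prior_convictions == 0:
        return "no flag"  # profiling alone is never sufficient in this design
    adjusted = (profile_score
                + 10 * evidence.suspicious_transactions
                + 5 * evidence.prior_convictions)
    if evidence.cooperating_with_authorities or evidence.in_rehabilitation_programme:
        adjusted *= 0.5  # mitigating, case-specific context lowers the signal
    label = "medium risk" if adjusted >= 50 else "low risk"
    return f"{label} - refer to human investigator for review"

print(assess(40, CaseEvidence(2, 1, False, False)))  # medium risk, human review
print(assess(40, CaseEvidence(0, 0, False, False)))  # no flag despite the profile
```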
Since CrimePredict+ AI does not base its risk scores solely on personality traits or profiling, but instead incorporates real-world behavioural and contextual factors, it does not fall under the prohibition of the EU AI Act. However, concerns about data privacy, potential bias, and surveillance ethics would remain critical in its deployment.
12. Can you give an example of an (imaginary) AI system to support the human assessment of a person’s involvement in a criminal activity?
CrimeScan AI is an AI-driven tool designed to support human investigators in assessing a person's involvement in criminal activity by analysing objective and verifiable facts directly linked to the case. The system processes forensic evidence, CCTV footage, phone records, and financial transactions to identify patterns, anomalies, and connections between suspects and criminal incidents.
For example, in a fraud investigation, CrimeScan AI cross-references bank transfers, email communications, and transaction timestamps to detect potential links between a suspect and a known fraudulent network. However, the final assessment remains with human investigators, who review AI-generated insights alongside case-specific evidence.
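As a simple illustration of the cross-referencing described above, the hypothetical sketch below correlates transaction timestamps with prior contact with a known fraudulent network and surfaces the match as a lead for human review. The data, field names, and time window are invented; the output is an investigative hint, not a determination of involvement.

```python
# Illustrative sketch only: the kind of cross-referencing the imaginary CrimeScan AI
# might perform to surface leads for human investigators. Data and field names are
# hypothetical; the output is a lead for human review, not an assessment of guilt.

from datetime import datetime, timedelta

transfers = [{"to": "ACME Shell Ltd", "time": datetime(2025, 3, 1, 14, 5)}]
messages = [{"contact": "known_fraud_network", "time": datetime(2025, 3, 1, 13, 50)}]

def correlate(transfers, messages, window_minutes=30):
    """Flag transfers that closely follow contact with a known fraudulent network."""
    window = timedelta(minutes=window_minutes)
    return [(m, t) for t in transfers for m in messages
            if timedelta(0) <= t["time"] - m["time"] <= window]

for message, transfer in correlate(transfers, messages):
    print("Lead for human review:", message["contact"], "->", transfer["to"])
```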
13. Can you give an example of objective or verifiable facts directly linked to the criminal activity?
Objective and verifiable facts directly linked to criminal activity are essential in distinguishing lawful AI-assisted investigations from prohibited predictive policing systems under the EU AI Act. These facts include forensic evidence such as DNA, fingerprints, or ballistic reports that physically tie a suspect to a crime scene. Surveillance footage from CCTV cameras can confirm a suspect’s presence at a location during the time of the offence. Moreover, phone records and digital communications, including call logs, text messages, or encrypted chats, can establish a direct connection between a suspect and criminal planning or execution. Financial transactions showing large unexplained deposits, suspicious transfers, or links to known criminal entities serve as further verifiable indicators of involvement in fraud or money laundering. Geolocation data, such as GPS tracking or public transport card usage, can also provide concrete proof of a suspect’s movements in relation to a crime. Furthermore, possession of contraband, stolen goods, or illegal weapons directly linked to an offence strengthens the factual basis of an investigation.
We would like to express our gratitude to the Dutch Data Protection Authority for this opportunity and emphasise the significance of fostering public dialogue around Prohibited AI Systems. We are eager to see the Dutch Data Protection Authority, along with other ministries, actively integrate the insights gathered from this consultation and continue doing so in the development of future AI policies.
Kadian Davis-Owusu, PhD
Co-founder, TeachSomebody