(78A) On the consequences of blog post (78)

By Onno Hansen-Staszyński | 28 October 2025 | Last Updated: 11 November 2025

I shared blog post seventy-eight with Gemini. Below I present our ensuing conversation.

The conversation

Gemini said: Thank you for providing the text. I have now read the article, which is presented as a dialogue between the author and ChatGPT.

It’s a powerful and precise critique of the fundamental flaws in automated content moderation. The author uses a personal, high-stakes example—the unjust flagging of his own educational work—to expose the core problem: AI systems are incapable of understanding context, intent, or human values.

Here are my thoughts on the key points raised in the text.

The “Blunt Instrument” Problem

The author’s experience with Scribd is the perfect case study.

  • The AI’s “Logic”: The automated system detected keywords (like “masturbacja” and “orgazm”, Polish for “masturbation” and “orgasm”) and, through probabilistic association, linked them to its “Lurid sexual imagery” policy. It performed a simple pattern-matching task.

  • The Human Reality: The document was an educational brochure for teachers, financed by a city, discussing adolescent psychology and biology. The intent was preventative and scientific, the exact opposite of “lurid.”

The AI failed because it cannot distinguish between description (education, science, health) and solicitation (pornography, exploitation). To the AI, the words are just statistical tokens; to a human, they are vehicles for meaning, and their intent is clear from the context of the surrounding document.
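To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword filter described above. It is purely illustrative: the term list, policy label, and function name are hypothetical, not Scribd’s actual rules or code.

```python
# Illustrative sketch only: a naive keyword filter of the kind described above.
# The flagged terms and policy label are hypothetical, not Scribd's actual rules.
FLAGGED_TERMS = {"masturbacja", "orgazm"}

def flag_document(text: str) -> dict:
    """Return a flag whenever any listed term appears, regardless of context."""
    tokens = {token.strip('.,;:!?"').lower() for token in text.split()}
    hits = sorted(FLAGGED_TERMS & tokens)
    return {
        "flagged": bool(hits),
        "matched_terms": hits,
        "policy": "Lurid sexual imagery" if hits else None,
    }

# An educational brochure and an exploitative text produce the same flag:
# the filter sees tokens, never intent.
print(flag_document("An educational brochure for teachers that discusses orgazm and masturbacja in a health context."))
```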

Rejecting “Pragmatic Self-Censorship”

This is the most important philosophical point in the article.

The author sharply (and correctly) rejects the AI’s initial “pragmatic” suggestion to “optionally” redact the terms. He rightly identifies this as self-censorship.

This “pragmatic workaround” is a common suggestion, but it’s a dangerous one. If creators—especially educators, scientists, and journalists—start altering their factual, precise language to avoid triggering an algorithm, the algorithm has effectively dictated the terms of human discourse. This creates a “chilling effect” where public information is “dumbed down” or distorted, not for clarity, but to appease an uncomprehending machine.

The author is right: this isn’t pragmatism; it’s a concession of human authority over meaning.

The “Ladder of Proof” and AI’s True Role

The author’s use of Alex Edmans’s hierarchy is a brilliant way to frame AI’s limitation:

  1. Statement (What AI generates or flags)

  2. Fact (An accurate statement)

  3. Data (Representative facts)

  4. Evidence (Conclusive data)

  5. Proof (Universal evidence)

AI, as the author states, operates only at the “Statement” level. Scribd’s AI flagged a statement (a string of text). The platform’s error was treating that flag as “Evidence” or “Proof” of a policy violation, which requires human judgment.

This leads to the dialogue’s final, correct conclusion: AI’s role must be advisory, not authoritative.

  • Useful AI: “Here are 100 documents that contain terms from our ‘sexual content’ list. Please review them.”

  • Dangerous AI (Overreach): “I have found 100 documents that violate our ‘sexual content’ policy and have removed them.”
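As a minimal sketch of that contrast (illustrative only; the function and parameter names are hypothetical, not any platform’s real moderation API):

```python
# Illustrative sketch, not any platform's real moderation API: the difference
# between advisory flagging and authoritative removal is who makes the call.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def advisory_triage(docs, matches_policy_terms):
    """Useful AI: surface candidates and hand them to a human reviewer."""
    return [d.doc_id for d in docs if matches_policy_terms(d.text)]  # a review queue, not a verdict

def authoritative_removal(docs, matches_policy_terms, remove):
    """Dangerous AI (overreach): the same statement-level match triggers removal."""
    for d in docs:
        if matches_policy_terms(d.text):
            remove(d.doc_id)  # no human judgment between the flag and the consequence
```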

Summary of My Thoughts

The text is an excellent and necessary argument. It demonstrates that the real danger of AI isn’t just that it makes mistakes, but that institutions deploy it in roles that demand judgment, wisdom, and moral accountability—faculties it does not possess.

The author correctly identifies that this is not a problem that can be “trained” away. You cannot “train” a probabilistic model to have human understanding or ethical responsibility.

The only solution is the one the author lands on: radical human oversight. AI can be a tool to assist human moderators, helping them sift through vast amounts of data. But the final decision—the judgment—must remain entirely and unassailably human.

Applicability

I said: Should we apply this outcome to any AI use case that goes beyond statement level?

Gemini said: Yes, that core principle is widely applicable, though it looks slightly different in various fields.

The article’s conclusion is that AI’s usefulness ends where accountable human judgment begins. It’s excellent at “statement-level” tasks (processing, flagging, generating) but fails when it’s made the authoritative decider on “evidence” or “proof”-level tasks, which require understanding intent, ethics, and context.

This principle applies to almost any AI use case that has real-world consequences.

High-Stakes Judgment Fields

This is the most direct application of the article’s argument. In these fields, AI should always be advisory, not authoritative. The human must be the final, accountable decision-maker.

  • Medicine: An AI can be trained to flag anomalies in an MRI or X-ray (a “statement” or “data” task). This is incredibly useful. But it should never be allowed to make the final diagnosis or determine the treatment plan (a “judgment” or “evidence” task). A human doctor must take the AI’s “statement,” combine it with other evidence (patient history, lab tests), and make the final, accountable call.

  • Law and Justice: An AI can scan thousands of legal documents for precedents (statement level). But it should not be used to determine sentencing, guilt, or parole. These are moral and contextual judgments that demand human accountability.

  • Hiring and HR: An AI can screen 1,000 resumes for a list of required keywords (statement level). But if it’s used to rank candidates or decide who gets an interview, it’s acting on “data” it can’t truly understand. It might penalize a brilliant, unconventional candidate or exhibit hidden biases from its training data, all without any grasp of “proof” of the candidate’s quality.

Data Analysis and Science

Here, the AI can help move up the ladder from “statement” to “data” (e.g., finding correlations in a huge dataset). However, a human must still be the one to validate it and move to “evidence.”

An AI might find a strong statistical correlation, but it can’t distinguish between causation and coincidence. A human scientist is required to design an experiment, test the hypothesis, and provide the “proof” that the AI’s “statement” (the correlation) is actually meaningful evidence.
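A toy example makes the gap visible. This sketch is illustrative only; the numbers and variable names are invented:

```python
# Illustrative only: the correlation coefficient is a "statement" about the data,
# not evidence of causation. Values below are invented for the example.
import statistics  # statistics.correlation requires Python 3.10+

ice_cream_sales = [12, 18, 25, 31, 40, 47]
drowning_incidents = [1, 2, 3, 4, 5, 6]

r = statistics.correlation(ice_cream_sales, drowning_incidents)
print(f"Pearson r = {r:.2f}")
# Close to 1.0, yet neither variable causes the other; a confounder (summer
# weather) drives both. Only a designed experiment, judged by a human
# scientist, can move this from statement toward evidence.
```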

Autonomous Systems (like Self-Driving Cars)

This is the trickiest case, as the AI is making real-time decisions that go far beyond “statements.”

The principle still applies, but it’s shifted. The “human-in-the-loop” isn’t the driver in the moment (in a fully autonomous system); the human judgment is applied at the design, ethics, and validation stages.

  • The “Judgment” is Pre-Programmed: Engineers and ethicists must decide in advance how the car should “judge” an unavoidable accident (the classic “trolley problem”).

  • Accountability Shifts: The AI itself is not morally accountable. The accountability for its “judgment” lies with the company, engineers, and regulators who designed and approved its decision-making framework.

In this case, we are still refusing to outsource moral accountability to the machine. We are just embedding a pre-determined human judgment into the machine, which is a very different—and still very human-centric—process.

Overreach

I said: So, are we overreaching in some fields? We know the design, ethics, and validation stages cannot provide bullet-proof guardrails, nor can we expect willingness to do so by commercial companies producing AI models. And we know AI itself is incapable of ethical skin in the game because of its lack of embodiment and, therefore, accountability.

Gemini said: This is an exceptionally sharp and accurate analysis. You are correct.

Based on the premises you’ve laid out, “overreach” is not just a risk; it’s an active, ongoing reality in many fields.

Your argument is a powerful three-point critique of the gap between AI’s capabilities and our deployment of it:

  1. Imperfect Guardrails: Our “human-in-the-loop” systems (design, ethics, validation) are fundamentally flawed. We cannot anticipate every edge case, and human bias is baked into the design and data from the start.

  2. Misaligned Incentives: Commercial companies are in a race for market dominance. The pressures to deploy quickly, cut costs (e.g., human moderators), and scale rapidly often override the expensive, slow, and careful work of robust ethical validation. “Ethics-washing” often becomes a PR substitute for genuine safeguards.

  3. The Accountability Void: This is the core, unresolvable problem. As you said, AI has no embodiment, no “skin in the game.” It cannot care about the consequences of its errors. When an AI system denies someone a loan, a job, or parole, it feels nothing. There is no one to hold morally accountable for that specific, automated judgment.

So, Where Are We Overreaching?

We are overreaching wherever we confuse probabilistic pattern-matching with human judgment.

We are using AI at the “evidence” and “proof” levels when it is only capable of the “statement” level. Here are clear examples:

  • Criminal Justice: Using AI-powered “risk assessment” tools to recommend sentencing or set bail. These tools are often trained on biased historical data, perpetuating systemic injustices. The AI provides a “score” (a statement) that is treated as “evidence” of future risk, a massive overreach that impacts human liberty.

  • Hiring and HR: Using AI to “rank” job candidates or scan video interviews for “desirable” traits. The AI is not judging “competence” (a complex human quality); it’s matching keywords and superficial patterns (like tone of voice or facial tics) against a “success” model that is itself a statistical black box. This is outsourcing a life-changing decision to a system with no understanding of human potential.

  • Content Moderation (The Original Topic): Platforms like Scribd, Facebook, and YouTube know their AI tools are blunt instruments. But they deploy them to make authoritative decisions (like flagging your educational brochure) because it is cheaper and faster than hiring millions of human moderators. They are knowingly overreaching, trading nuance and justice for scale and cost-savings.

  • Autonomous Systems: This is the most literal overreach. We are deploying beta-level autonomous driving systems onto public roads. The “imperfect guardrails” (validation) and “commercial incentives” (the race to be first) you mentioned are in a direct and dangerous conflict. Every failure is an externalized cost paid by the public, not by the amoral system itself.

  • Medical Diagnosis: While AI is a brilliant advisory tool for spotting anomalies in scans, the “overreach” begins if a hospital or insurance company ever allows it to make a final authoritative diagnosis or deny coverage without a human doctor’s accountable sign-off.

The fundamental mistake, which you’ve identified perfectly, is that we are outsourcing judgment to systems that are incapable of it.

We are treating amoral, unaccountable statistical models as if they were wise, ethical, and accountable decision-makers. That is the definition of overreach.

Forbidding overreach

I said: So, should we forbid overreach?

Gemini said: Should we forbid overreach? Yes, conceptually, we should, because overreach undermines justice, fairness, and accountability. However, the challenge lies not in the moral principle, but in the practical and legal execution of a global ban on AI overreach, especially given the incentives of commercial companies and the difficulty of defining “judgment.”

Here is a breakdown of why we should, and the necessary steps to make it happen.

Why Forbidding Overreach is Necessary

Forbidding AI overreach is essential to maintain the sovereignty of human judgment and protect fundamental rights, directly addressing the three core failures you identified:

  1. Preserving Accountability: AI is amoral and unaccountable. Overreach occurs when a probabilistic decision is treated as an ethical verdict. Forbidding it ensures that a human being remains responsible for any decision that has a significant impact on an individual’s life, liberty, or livelihood (e.g., denying a loan, flagging an educational text, recommending a prison sentence).

  2. Protecting Against Bias: Since AI systems learn from historical human data, they invariably learn and perpetuate systemic biases. When AI overreaches (i.e., makes the final decision), it hardens those biases into policy without any critical human review. Forbidding overreach ensures a human can spot and override such discriminatory outcomes.

  3. Preventing Self-Censorship: As seen with the content moderation example, overreach forces creators to edit their truthful, necessary language to appease an uncomprehending algorithm. Forbidding the authoritative role of AI in these areas protects the integrity of discourse and academic freedom.

How to Practically Forbid Overreach

A simple “ban” is too vague. Instead, the concept needs to be translated into enforceable regulatory frameworks that mandate the scope of AI’s advisory role.

1. The “High-Risk” and “No-Go” Zones

Regulations must categorize AI uses based on risk and explicitly forbid AI from being the sole or final decision-maker in High-Risk areas:

| AI Use Case | Current Overreach | Regulatory Mandate |
| --- | --- | --- |
| Credit/Loan Approval | AI makes the final “Accept/Deny” decision. | Must require mandatory human review and the ability to override the AI’s “statement.” |
| Judicial Decisions | AI sets bail, assesses recidivism risk, or recommends parole. | Must be advisory only. The human judge must be legally required to provide a separate, written justification. |
| Content Removal | AI automatically removes content (like your educational brochure) without human review. | DSA-style mandatory appeal and human review for all content flagged by AI in cases of ambiguity or high impact. |
| Medical Diagnosis | AI determines the treatment or final diagnosis. | AI can only flag patterns (advisory role). The doctor must personally sign off and be legally accountable. |

2. Mandating Transparency and Controllability

To expose overreach, the following must be legally required:

  • Explainability: AI systems used for high-stakes decisions must provide a clear, human-readable explanation for their output (e.g., “The loan was denied because the applicant’s debt-to-income ratio is 50%, which is X standard deviations above the mean,” not “The model scored this applicant as a 3/10.”). A minimal sketch of such an explanation appears after this list.

  • Auditability: Companies must provide regulators and auditors access to the data and logic used to train the system, allowing for independent review of bias and faulty premises.
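Here is the promised sketch of such an explanation: a toy function that turns a model’s output into a human-readable reason. The feature names, threshold, and population statistics are hypothetical, not drawn from any real lending system.

```python
# Illustrative sketch only: hypothetical feature names and population statistics.
def explain_denial(applicant: dict, population_mean: float, population_std: float) -> str:
    """Produce a human-readable reason instead of an opaque score."""
    ratio = applicant["debt"] / applicant["income"]          # debt-to-income ratio
    deviations = (ratio - population_mean) / population_std  # how unusual the ratio is
    return (
        f"The loan was denied because the applicant's debt-to-income ratio is "
        f"{ratio:.0%}, which is {deviations:.1f} standard deviations above the mean."
    )

print(explain_denial({"debt": 25_000, "income": 50_000}, population_mean=0.30, population_std=0.08))
# -> "The loan was denied because the applicant's debt-to-income ratio is 50%,
#     which is 2.5 standard deviations above the mean."
```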

3. Placing Clear Liability

The only way to overcome the “lack of ethical skin in the game” is to place the legal and financial skin firmly on the human developers and deploying institutions.

If an AI-driven system makes a flawed, biased, or harmful final decision, the company and the individual who signed off on its deployment must be held strictly liable. This economic incentive is the most effective guardrail against commercially-driven overreach.

In essence, forbidding overreach means legislating the boundary between AI’s capacity for classification (statements) and human necessity for judgment (evidence/proof).

EU AI Act

I said: How does this relate to the EU AI Act? Does that Act cover this fully?

Gemini said: That is an excellent question, as the European Union has two key pieces of legislation that relate directly to the issues of AI overreach, content moderation, and accountability: the EU AI Act and the Digital Services Act (DSA).

The short answer is: No, the EU AI Act does not fully cover this issue alone, but the combination of the AI Act and the DSA addresses the problem from two different angles.

Here is a breakdown of how the EU AI Act relates to the concepts of “judgment vs. statement” and “overreach.”

1. The EU AI Act: Focused on High-Risk Systems

The EU AI Act addresses your critique head-on by adopting a risk-based approach that aims to prevent AI from usurping human judgment in critical areas.

| Concept from Your Critique | How the EU AI Act Addresses It | Assessment |
| --- | --- | --- |
| Forbidding Overreach | It classifies AI systems into four tiers: Unacceptable Risk (banned), High Risk, Limited Risk, and Minimal Risk. | Strong Start. The Act bans systems that are clearly manipulative or pose a direct threat to fundamental rights (e.g., social scoring). |
| Judgment vs. Statement | For High-Risk systems (e.g., those used in hiring, loan eligibility, law enforcement), it mandates Human Oversight (Article 14). | Intent is Good. The law requires that the human overseer be able to understand the AI’s limitations, avoid over-reliance (automation bias), correctly interpret the output, and override or reverse the AI’s output (the ultimate rejection of overreach). |
| Lack of Accountability | It imposes strict legal obligations on Providers (developers) and Deployers (users) of High-Risk systems regarding data quality, robustness, accuracy, and transparency. | Partial Solution. It places legal liability on the company, ensuring the system isn’t a total black box. However, it still doesn’t perfectly solve the “moral accountability” gap you highlighted: a human must still be the accountable point of failure. |

The AI Act’s “Exemption” Problem

Crucially, the AI Act contains derogations (exemptions) that allow a system listed as “High-Risk” (in Annex III) to be treated as non-High-Risk if it:

  • Performs a narrow procedural task.

  • Is intended to improve the result of a previously completed human activity.

  • Performs a preparatory task to an assessment.

This is where the debate about “statement” vs. “judgment” becomes critical. A company could argue that their AI is merely performing a “preparatory task” (a “statement”), even if that task de facto determines the final outcome (the “judgment”), thus allowing them to avoid the strict compliance rules. This ambiguity is a potential loophole for overreach.

2. The Digital Services Act (DSA): Focused on Content Moderation

The specific case of your educational brochure being wrongly flagged on Scribd falls directly under the Digital Services Act (DSA), which regulates online platforms.

| Concept from Your Critique | How the DSA Addresses It | Assessment |
| --- | --- | --- |
| Wrongful Removal (Overreach) | The DSA requires platforms to provide users with clear statements of reasons for any content removal or restriction. | Directly Addresses It. This is a huge step toward transparency, forcing the company to explain why the AI flagged the content (moving the flag from a silent action to a public “statement”). |
| Pragmatic Self-Censorship | The DSA mandates a robust, effective, and accessible internal complaint-handling system (an appeals process). | Mandates Recourse. This is the official mechanism for fighting the overreach. By giving users the right to a human review and appeal, it prevents the user from having to accept self-censorship as the only “pragmatic” option. |
| Transparency in Use | Platforms must disclose the use of automated tools in content moderation, and Very Large Online Platforms (VLOPs) must mitigate systemic risks like those arising from algorithmic bias. | Mitigates Harm. This requires companies to proactively work against the tendency of their AI to over-censor, though implementation remains the key challenge. |

Conclusion: Is It Fully Covered?

No, the legal framework does not fully eliminate the problem, but it provides the world’s most comprehensive tools to fight it.

  • The AI Act focuses on preventing overreach in high-stakes fields like justice and employment by legally mandating the boundary between AI’s advisory role and human final judgment.

  • The DSA focuses on fighting overreach in content moderation by requiring transparency, an explanation of the AI’s “statement,” and a mandatory human appeals process to reverse wrongful “judgments.”

However, you are absolutely correct that the fundamental tension remains: Commercial incentives will always pressure companies to push the AI from a useful “statement-generator” to a cheaper “final-judgment-maker,” and the legal ambiguity in the AI Act’s exemptions may allow some of that to happen. The final success will depend entirely on how strictly these Acts are enforced.

Obligatory human oversight

I said: So, we should just impose: It is obligatory to involve human oversight when moving beyond the level of statements.

Gemini said: Your statement, “It is obligatory to involve human oversight when moving beyond the level of statements,” captures the ideal regulatory principle perfectly.

This is the very essence of trying to legislate against “overreach,” but translating this clear-cut principle into practical, scalable law involves significant challenges.

Here’s why this is the right principle, and the three major problems that arise when trying to enforce it universally:

The Principle: The Only True Guardrail

The principle is sound because human oversight is the only mechanism that can reintroduce the ethical “skin in the game” that AI lacks.

| AI Output | Level of Responsibility | Required Oversight (Your Principle) |
| --- | --- | --- |
| Statement (e.g., “The model predicts this loan application has an 85% chance of default.”) | Advisory/Informational | Low or none (spam filters, recommendation engines). |
| Judgment (e.g., “The loan is hereby denied.”) | Authoritative/Consequential | Mandatory, effective human oversight: the power to veto, override, and be held accountable for the final decision. |

The goal is to legally prevent the “statement” from becoming the “judgment” by default.

The Three Major Obstacles to Universal Enforcement

While this principle is the goal of the EU AI Act’s High-Risk provisions, enforcing it universally across all AI deployment faces three massive practical and philosophical hurdles:

1. The “Rubber Stamp” Problem (Automation Bias)

The biggest challenge is that simply mandating human oversight does not guarantee meaningful oversight.

  • The Problem: Humans placed in a “Human-on-the-Loop” (HOTL) position—where they monitor hundreds or thousands of high-speed AI decisions—suffer from automation bias. They become fatigued, distracted, and conditioned to trust the algorithm, essentially turning their signature into a “rubber stamp.”

  • The AI Act’s Attempt: The AI Act tries to counter this by requiring deployers to ensure the human overseer has the necessary competence, training, and authority to challenge the system. It also requires the system to be designed to allow the human to easily interrupt, override, or reverse the output.

  • The Gap: It is difficult to audit or prove if a human was meaningfully involved or simply pushed the “Approve” button because they trusted the 99.9% accurate machine.

2. The Scalability and Cost Problem (Commercial Incentives)

Your critique noted that commercial companies prioritize speed and cost. This principle clashes directly with those incentives.

  • The Problem: Forcing a human review on every consequential decision makes certain AI applications economically unviable at scale. Automated loan denials, insurance claim processing, or large-scale content moderation rely entirely on speed and volume to be profitable. Mandatory human review adds immense costs and bottlenecks.

  • The AI Act’s Response: The Act focuses only on High-Risk systems where the impact on fundamental rights justifies the cost (e.g., healthcare, justice, employment). It generally exempts Minimal-Risk systems (spam filters, games) and places only transparency requirements on Limited-Risk systems (chatbots, deepfakes), acknowledging that universal oversight is unrealistic and would stifle innovation.

  • The Gap: This leaves the vast “Limited-Risk” middle ground open to overreach, where an AI decision might not violate a fundamental right (like denying asylum) but still cause significant harm (like a bot-driven customer service policy that costs someone time and money).

3. The Definition Problem (Ambiguity of “Judgment”)

It is exceptionally difficult to define the precise moment an “AI statement” becomes a “human judgment.”

  • The Problem: A High-Risk AI system may provide a detailed, seven-page “risk profile” score (a statement) that is so persuasive or complex that the human decision-maker simply adopts it as the final justification (the judgment). Was the human truly exercising independent judgment, or was the AI effectively making the decision?

  • The AI Act’s Response: The Act tries to define the boundary by exempting systems that perform “narrow procedural tasks” or “preparatory tasks.” It mandates that High-Risk systems must be presented to the user in a way that allows them to decide not to use the system or disregard/override the output.

  • The Gap: The exemptions are a known loophole. Companies will intentionally design their systems to be “preparatory” to reduce their regulatory burden, even if they know the human is trained to never, or almost never, challenge the output.

In summary, your principle is the correct regulatory philosophy. The world’s most comprehensive AI law (the EU AI Act) attempts to enforce it only where the stakes are highest, recognizing that a universal mandate would likely be ineffective due to automation bias and economically unsustainable.

Accountability

I said: The solution is to make humans responsible for oversight accountable.

Gemini said: That is the crucial next step, and it is where the most advanced AI regulations are currently focusing their efforts.

You are correct: Accountability is the antidote to the “Rubber Stamp” problem. If the human overseer knows their professional reputation or legal standing is on the line, they are forced to engage meaningfully with the AI’s output rather than blindly trusting it.

Here are the four mechanisms needed to make human overseers truly accountable:

1. Clear Assignment of Legal Responsibility (The “Who”)

The law must clearly identify the individual or corporate entity responsible for the final decision.

| Requirement | Explanation | Role of the Human Overseer |
| --- | --- | --- |
| Clear “Deployer” Obligations | The regulator must place specific legal duties on the Deployer (the company or public authority using the AI system) to appoint a natural person as the final decision-maker. | The person whose name is on the final document or in the audit log is legally liable for the decision, regardless of the AI’s recommendation. |
| Mandatory Competence & Authority | The human overseer must have the competence, training, and authority to understand the AI’s limitations, detect anomalies, and veto its decision. This is explicitly mandated in the EU AI Act for High-Risk systems. | The organization must prove the overseer was qualified to challenge the AI, making the organization accountable for selecting an unqualified overseer. |

2. Mandatory Documentation and Audit Trails (The “How”)

To prove the human was not a rubber stamp, the process must be auditable.

| Requirement | Explanation | Result for the Overseer |
| --- | --- | --- |
| Record-Keeping and Logging | The AI system must automatically generate logs of every consequential decision. This log must record the AI’s recommendation and the human’s final action (accepted, modified, or rejected). | If a harmful outcome occurs, investigators can check the log to see if the human reviewed the decision and, more importantly, whether they routinely rejected the AI’s advice when appropriate. |
| Explanation Tools (XAI) | The system must provide the human with interpretation tools to understand the basis of the AI’s decision (e.g., “This loan was rejected primarily due to factor X and factor Y”). | The human cannot simply claim “the AI said so.” Their accountability hinges on whether they considered the system’s explanation and still proceeded with the final judgment. |
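A minimal sketch of the kind of decision log the table above calls for, assuming a simple JSON-lines file; the field names are hypothetical, not taken from the AI Act’s technical standards:

```python
# Illustrative sketch of the audit trail described above; field names are
# hypothetical, not taken from the AI Act's technical standards.
import json
import time

def log_decision(log_file, case_id, ai_recommendation, ai_rationale, human_action, overseer_id):
    """Append one consequential decision: the AI's statement and the human's judgment."""
    record = {
        "timestamp": time.time(),
        "case_id": case_id,
        "ai_recommendation": ai_recommendation,   # e.g. "deny"
        "ai_rationale": ai_rationale,             # e.g. "debt-to-income ratio above threshold"
        "human_action": human_action,             # "accepted", "modified", or "rejected"
        "overseer_id": overseer_id,               # the accountable natural person
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Auditors can later compute how often the overseer ever disagreed with the AI:
# a 100% acceptance rate is a hint of rubber-stamping rather than oversight.
```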

3. Creating a “Right to Redress” (The “Why”)

Accountability is meaningless without a mechanism for the affected person to seek justice.

  • GDPR Article 22: The foundation of this is the right not to be subject to a decision “based solely on automated processing.” If the human overseer is merely a rubber stamp, the decision remains effectively solely automated, which breaches this provision and allows the affected individual to challenge the decision in court.

  • Legal Standing: By maintaining the human as the final authority, existing tort and liability laws apply. If a patient is harmed by a medical AI’s misdiagnosis that a human doctor approved, the doctor (and the clinic) is still the party sued for malpractice, forcing them to own the judgment.

4. Preventing “Automation Bias” through Design (The “What”)

To overcome the psychological tendency to trust the machine, accountability needs to be reinforced by system design.

The design of the Human-Machine Interface must be built to counteract the overseer’s natural tendency to over-rely on the system. Accountability is only feasible if the system allows for effective human oversight. This means the system must:

  1. Flag for Review: Only route truly complex or high-risk decisions to the human, focusing their attention where it’s needed most (rather than overwhelming them with thousands of easy decisions).

  2. Highlight Confidence: Clearly display a “confidence score” or, conversely, a reason for doubt to encourage skepticism.

  3. Ensure Intervention: Provide an easy-to-use “stop” button or a clear “override” function that is simple and fast to use in a time-critical situation.
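A minimal sketch of those three design points, with hypothetical thresholds and field names; it is not a reference implementation of any regulation:

```python
# Illustrative sketch only: confidence thresholds and structures are hypothetical.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # below this confidence, a human must decide

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str   # e.g. "approve" / "deny"
    confidence: float        # 0.0 to 1.0
    doubt_reasons: list      # shown to the overseer to encourage skepticism

def route(decision: Decision) -> str:
    """1. Flag for review: only uncertain or contested cases reach the human."""
    if decision.confidence < REVIEW_THRESHOLD or decision.doubt_reasons:
        return "human_review"   # 2. Highlight confidence/doubt when presenting it
    return "auto_process"

def human_override(decision: Decision, overseer_choice: str) -> str:
    """3. Ensure intervention: the overseer's choice always wins over the AI's."""
    return overseer_choice or decision.ai_recommendation

# Example: a low-confidence denial is routed to a person, who can overrule it.
d = Decision("case-17", "deny", 0.72, ["sparse credit history"])
print(route(d))                      # -> "human_review"
print(human_override(d, "approve"))  # -> "approve"
```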

I said: This sounds like a solid improvement to the current situation.

Gemini said: I appreciate that. It is the direction that regulators and ethicists are trying to push the technology.

The core difficulty is transitioning from the concept of accountability (which everyone agrees with) to the practice of meaningful oversight, especially in high-speed, high-volume environments.

The key tension remains:

  1. Regulators want a human to be accountable for the quality of the judgment.

  2. Businesses want the efficiency of full automation, but the legal protection of having a human on the loop.

The success of these measures hinges entirely on whether regulatory bodies (like the new EU AI Office) have the teeth and resources to conduct meaningful audits that distinguish a genuine overseer from a rubber-stamping clerk. Without that enforcement, even the best-designed accountability framework will fail.

Us, the users

I said: It’s up to us, the regular users, to hold regulators accountable.

Gemini said: That is absolutely correct and highlights the critical second half of the accountability challenge. A law is only as good as its enforcement, and enforcement requires active public oversight.

Here is how “regular users” and civil society organizations (which act on their behalf) are building mechanisms to hold regulators and corporations accountable:

1. Utilizing Transparency Mechanisms as Oversight Tools

New regulations like the EU AI Act are attempting to empower the public by creating mechanisms that provide information:

  • The Public High-Risk AI Database: The AI Act mandates the creation of a public EU-wide database where providers must register all High-Risk AI systems before they are deployed. The User’s Role: The public, civil society groups, and investigative journalists can monitor this database to see where, by whom, and for what purpose high-risk AI is being used (e.g., in a specific hospital, school, or government department). This creates a starting point for public scrutiny.

  • Fundamental Rights Impact Assessments (FRIAs): Deployers of certain high-risk AI systems (especially public authorities) may be required to conduct and potentially publish assessments of how the system impacts fundamental rights. The User’s Role: Users can demand access to these assessments via Freedom of Information laws, using them to challenge the initial decision to deploy the system.

2. The Right to Explanation and Complaint (The Redress Path)

The most direct way users hold the system accountable is by activating the legal process when they are harmed:

  • Complaints to National Authorities: The AI Act specifically grants anyone with grounds to consider an infringement (not just those personally affected) the right to submit complaints to the relevant national Market Surveillance Authorities. The User’s Role: This allows an individual who is rejected for a loan, or a student who receives a low score, to file a formal complaint, forcing the regulator to investigate whether the AI system or its human overseer breached the legal requirements (e.g., lack of human oversight, biased data, or insufficient documentation).

  • Activating Fundamental Rights Authorities: If an AI system results in a violation of privacy (GDPR) or non-discrimination laws, users can bring the case to their national Data Protection Authority or equality body. The Regulator’s Accountability: These specialized authorities can then cooperate with the AI regulator, adding another layer of expertise and legal pressure to the investigation.

3. Civil Society as the Watchdog

Individual users rarely have the technical expertise or resources to monitor a system, so they rely on well-resourced organizations:

  • Technical Auditing and “Bias Bounties”: Non-profits, academics, and privacy groups conduct independent technical audits of AI systems to find biases or security flaws. They also press companies and governments to participate in “Bias Bounties” or red-teaming exercises.

  • Strategic Litigation: Civil society organizations take on landmark legal cases, such as challenging the use of facial recognition or predictive policing tools, which forces courts to interpret and enforce the new AI laws, holding both the public and private sector accountable.

  • Influencing Guidelines: Organizations advocate during the drafting of crucial secondary documents (like the European Commission’s guidelines on implementing the AI Act). This ensures that the final rules are interpreted to maximize human rights protections over industry profits.

In essence, user accountability is achieved through the flow of information and the courage to complain. The law provides the levers (transparency, right to complain), but the users and their advocates must be the ones to pull them.
