(70) Generative AI characteristics (1)

By Onno Hansen-Staszyński | 9 September 2025 | Last Updated: 11 October 2025

The following is a fragment from a draft version of an academic paper I’m currently working on regarding text-based generative artificial intelligence (GAI) / large language models (LLMs).

Introduction

It is uncommon to find a concise yet comprehensive overview of the characteristics of generative AI, arguably because there is no known set of universal laws that govern all AI and machine learning (Meert et al., 2025). Most research and scientific literature instead focus on specialized subdomains. Nevertheless, the following section presents the first of three broader characteristics of GAI, structured around its input, processing, and output processes.

Characteristic 1: probability, not certainty

GAIs’ knowledge base is derived from vast quantities of digital data. GAIs use “reams of available text and probability calculations, constructing a massive statistical model that associates each word with a vector which locates it in a high-dimensional abstract space, then establishes similarity, and next choosing randomly among the more likely words” (Hicks et al., 2024). According to Wolfram (2023), ChatGPT, for instance, continuously searches for a “reasonable continuation” of the text it is creating as an answer to a prompt.
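To make this mechanism concrete, here is a minimal sketch in Python of the final sampling step Hicks et al. describe: each candidate next word gets a probability, and the continuation is drawn randomly among the more likely words. The vocabulary, scores, and temperature value below are invented for illustration and come from no actual model.

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate
# continuations of "The cat sat on the ..." - invented for illustration.
logits = {"mat": 4.2, "floor": 3.1, "roof": 2.0, "moon": 0.3}

def next_word(logits, temperature=0.8):
    """Turn logits into probabilities (softmax) and sample one word.

    Lower temperature concentrates probability mass on the top words;
    higher temperature lets less likely words through more often.
    """
    scaled = {word: score / temperature for word, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {word: math.exp(s) / total for word, s in scaled.items()}
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Identical "prompts" yield varying continuations: the answer is drawn
# from a probability distribution, not looked up as a fact.
print([next_word(logits) for _ in range(5)])
```

The point is the last line: running the same “prompt” repeatedly yields different continuations, because the “reasonable continuation” (Wolfram, 2023) is sampled from a distribution rather than retrieved as a fact.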

The aim of GAIs is not to come up with true or even useful answers but to provide a response to a prompt that replicates human speech (Hicks et al., 2024). Because of GAIs’ “reckless disregard for the truth”, Hicks et al. (2024) call GAIs “bullshit machines”, after the concept coined by philosopher Harry Frankfurt.

GAIs’ set-up allows substantial biases to seep into their answers, resulting from training datasets, algorithms, and human subjectivity “that often exacerbates biases across both stages of LLM development (data and algorithm)” (Wei et al., 2025).

According to Segato (2025), the output of GAIs is an estimate, a probabilistic reply. He writes: “AI shines in ambiguity and uncertainty”. To him, this is not necessarily a bad thing: “It’s a new world, a world of wonder and possibilities, a world to discover and understand.”

Notwithstanding their probabilistic processes, GAIs present their output fluently, with projected confidence. OpenAI’s Kalai et al. (2025) present a possible cause for this: “We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty”.
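Their argument can be illustrated with a toy expected-score calculation. This is a sketch assuming made-up scoring rules and a made-up 30% confidence figure, not Kalai et al.’s actual formalism: under binary grading, where a wrong answer and “I don’t know” both score zero, even a very unsure model maximizes its expected score by guessing; only a penalty for wrong answers makes acknowledging uncertainty the better strategy.

```python
# Toy expected-score calculation illustrating the incentive described by
# Kalai et al. (2025). The scoring values and the 30% confidence figure
# are invented for illustration; this is not their formalism.

def expected_scores(p_correct, right=1.0, wrong=0.0, abstain=0.0):
    """Expected score of guessing versus answering 'I don't know'."""
    guess = p_correct * right + (1 - p_correct) * wrong
    return guess, abstain

p = 0.3  # the model is only 30% sure of its answer

# Binary grading (a wrong answer costs nothing): guessing wins, 0.3 > 0.0.
print(expected_scores(p))

# Grading that penalizes confident errors: abstaining wins, -0.4 < 0.0.
print(expected_scores(p, wrong=-1.0))
```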

Scholars diverge on how to assess this characteristic. While Neuman et al. (2025) state that GAIs “epitomize System 2 thinking”, after the dual modes of human thinking popularized by Daniel Kahneman, Pearl (2018) considers this insufficient in itself. He rejects the current statistical or model-free underpinnings of GAIs and instead proposes guidance by a model of reality to push them closer to the capability of understanding. Bishop (2021) and Thompson (2007) argue that even this is not enough. According to them, mental life is bodily life; since GAIs lack embodiment, they will never fully encapsulate human semantics. As a result, in their view, an unbridgeable gap remains: a humanity gap.

Coveney and Succi (2025) show that, for the current GAI design, “the scope for improvement is absolutely untenable on account of the accuracy required for most scientific applications, let alone the power demands of the approach”. In other words, current GAIs lack a feasible trajectory to become sufficiently accurate.

The implication of this characteristic is profound. Since GAIs are fundamentally indifferent towards truth, any truthfulness in their output is merely a byproduct of their processes, not a deliberate goal. It therefore follows that their output cannot be inherently trusted. The logical conclusion is that one’s default stance should be to treat GAI output as misinformation in the European Commission’s definition[1] until proven otherwise. This does not mean the output is always wrong, but rather that one should adopt a ‘zero-trust’ policy towards it. Ultimately, it is up to the user to verify GAIs’ claims by checking them against external, reliable sources.

Part (2)

Part (3)

Literature

· Bishop, J. (2021). Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2020.513474

· Coveney, P. & Succi, S. (2025). The wall confronting large language models. arXiv. https://arxiv.org/pdf/2507.19703v2

· Hicks, M. et al. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26, 38. https://doi.org/10.1007/s10676-024-09775-5

· Kalai, A. et al. (2025). Why Language Models Hallucinate. OpenAI. https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf

· Meert, W. et al. (2025). Artificial Intelligence: A Perspective from the Field. In Smuha, N. (Ed.), The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence. Cambridge University Press.

· Neuman, R. et al. (2025). Auditing the Ethical Logic of Generative AI Models. arXiv. https://arxiv.org/abs/2504.17544

· Pearl, J. (2018). Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution. arXiv. https://arxiv.org/abs/1801.04016

· Segato, G. (2025). Building AI Products In The Probabilistic Era. giansegato.com. https://giansegato.com/essays/probabilistic-era

· Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Harvard University Press.

· Wei, X. et al. (2025). Addressing bias in generative AI: Challenges and research opportunities in information management. Information & Management, 62(2). https://doi.org/10.1016/j.im.2025.104103

· Wolfram, S. (2023). What Is ChatGPT Doing … and Why Does It Work? stephenwolfram.com. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Footnote

[1] E.g. “Tackling online disinformation”: “Misinformation is false or misleading content shared without harmful intent though the effects can be still harmful.” https://digital-strategy.ec.europa.eu/en/policies/online-disinformation
