News

Lauri Järvilehto: Without the relevant data, AI starts to hallucinate

Lauri Järvilehto is a Professor of Practice who studies the relationship between the human mind and AI solutions. In this interview, he explores the reasoning, reflectivity and reliability of AI.
Photo: Lauri Järvilehto

Lauri Järvilehto, what do you research and what is your role at Aalto?

I’m a Professor of Practice at the Aalto University Department of Industrial Engineering and Management, two days a week. I’m currently studying the relationship between the human mind and AI, mainly based on the literature, but it would be interesting to test human and AI performance, especially in reasoning tasks. Until two years ago, AI didn't do well at all in those tasks. Now it is starting to reason through university-level maths tasks, for instance, rather than just parroting the training material.

But AI is not yet up to the task of everyday reasoning. Imagine the following problem: if you put five ice cubes into a hot frying pan every minute, how many cubes will there be after four minutes? The AI answers 20. If you ask a human, the answer is zero, because they'll melt.

What is AI hallucination?

If I ask an AI to retrieve information from, say, my new book Konemieli (Machine Mind), it might start hallucinating because it can't find the relevant data. It can't verify the probability of any set of words and starts producing text that doesn't match reality.

The AI system will in fact always hallucinate because it has no cognitive or sensory mechanism to verify the information. It just spits out sets of words based on their statistical frequency in the training data. And if there are inaccuracies, they may slip into the AI-generated data as well.

AI tools have been made more reliable by developing the underlying statistical methods that drive them, but they are never 100% reliable. A recent study found that if different AIs evaluate each other's results, the end result is usually much better.

How does DeepSeek compare to other AI systems?

The mathematical architecture can gradually give rise to a verification mechanism. The reasoning model can stop for a moment when it has doubts about the veracity of something, and a self-correcting process can emerge. Self-critical observations gradually narrow the margin of error. This is exactly how DeepSeek works and may be the reason for its breakthrough. It can start saying: ‘Here's something to think about, wait a minute, it didn't go like this...’

But even DeepSeek can't do everyday reasoning or reflection, because life experience can't be programmed.

AI has a hallucination rate of 1.3 to 90%. The range depends on the purpose and the input. For example, you can achieve a hallucination rate of around one per cent by asking the AI to summarise one or more articles. But if, for example, ChatGPT 4 was previously asked about an elephant swimming across the English Channel, in 90% of cases it would start telling some amazing story about an elephant called Kami in 1936. The Kami in question, accompanied by fanfare, swam into Dover harbour and was greeted with jubilation. Most language models don’t hallucinate like this anymore, but DeepSeek's answer is often complete nonsense.

Can you briefly describe the relationship between AI and the human mind?

The language model is very close to the human nervous system. Empirical studies have shown that humans have cognitive biases, and that thinking is largely based on heuristics, i.e. learned habits. For example, 89% of university students answer this task incorrectly:

Linda is 31 years old, single, talkative and very smart. She was a philosophy major and as a student she was concerned about discrimination and social issues. During her studies, she took part in anti-nuclear demonstrations. Which is more likely: Linda is a bank teller or a bank teller active in the feminist movement?

The right answer follows from the relationship between a set and its subset: a subset can never be more probable than the set that contains it. Linda is therefore more likely to be a bank teller.

Why and how could AI model the human unconscious?

Humans have a conscious or algorithmic mind, which is very limited. It has a processing capacity of 3-5 units at a time. The human unconscious mind is based on principles of association, i.e. certain types of things often occur together. Language models based on the interrelationship of words are capable of similar association.
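The associative principle described above can be illustrated with a toy sketch: counting which words tend to follow which in a tiny, made-up corpus. Real language models learn far richer relationships from vast amounts of text, but the underlying idea is similar.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is immediately followed by another,
# a crude model of "certain things often occur together".
cooccur = defaultdict(Counter)
for left, right in zip(corpus, corpus[1:]):
    cooccur[left][right] += 1

# The word most strongly associated with "the" in this corpus:
print(cooccur["the"].most_common(1))  # [('cat', 2)]
```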

Artificial intelligence systems just grind away word by word. Human language works in much the same way. For example, we sometimes put our foot in our mouth.

What essentials are missing from AI?

AI lacks memory, spontaneity and autonomy. A thinking human-like machine can perhaps be achieved if these three things can be programmed into AI, or if they emerge as AI systems diversify.

There are some good approaches to memory, but at least for the time being, AI memory is reset between sessions. Spontaneity refers to the human characteristic of having thoughts pop into your head every now and then. In contrast, AI systems do nothing unless a human asks them to. They just wait for the human to provide input before doing the maths. But there are already strategies in place to make them do the calculations on their own time.

In addition, AI lacks self-reflection, and I don't know how you could mathematically code it. For example, Roger Penrose argues that the human mind is fundamentally different from AI, meaning that consciousness cannot arise for AI.

Can you describe the principle of AI?

AI is a word processing machine. It turns all words into numbers, and the numbers model how the words are statistically related to each other. For example, if we have a definition of the word "king", we can remove "man" from the definition, add the definition of "woman", and the result is "queen".
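The king − man + woman = queen idea can be sketched with invented 2-D vectors; real models use hundreds of learned dimensions, and all the numbers below are made up purely for illustration.

```python
# Toy word vectors (hand-picked so the arithmetic works out).
vectors = {
    "king":  [0.9, 0.8],
    "man":   [0.9, 0.1],
    "woman": [0.1, 0.1],
    "queen": [0.1, 0.8],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

# king - man + woman ...
result = add(sub(vectors["king"], vectors["man"]), vectors["woman"])

# ... is closest to "queen" (squared Euclidean distance).
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

closest = min(vectors, key=lambda w: dist(vectors[w], result))
print(closest)  # queen
```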

If you ask an AI to tell you a fairy tale, for example, it does a huge matrix multiplication operation after each word. The end result is a probability distribution of which words are most likely to occur after that set of words, based on a detailed model of the training data. It then starts the whole process all over again.
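The matrix-multiplication step can be sketched in miniature: a context vector multiplied by a weight matrix yields one score per vocabulary word, and a softmax turns the scores into the probability distribution from which the next word is drawn. The vocabulary and all weights below are invented for illustration.

```python
import math

# Tiny made-up vocabulary and weights.
vocab = ["once", "upon", "a", "time"]
context = [1.0, 0.5]            # hypothetical hidden state for the context so far
W = [[0.1, 2.0, 0.3, 3.0],      # one column of weights per vocabulary word
     [0.2, 1.0, 0.1, 2.5]]

# Matrix multiplication: one score per word.
scores = [sum(context[i] * W[i][j] for i in range(2)) for j in range(4)]

# Softmax: scores -> probability distribution over the vocabulary.
total = sum(math.exp(s) for s in scores)
probs = [math.exp(s) / total for s in scores]

# Pick the most probable next word, then the process repeats.
next_word = vocab[probs.index(max(probs))]
print(next_word)  # time
```

In a real model this repeats after every generated word, exactly as the paragraph above describes.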

