Category: AI

Socrates was not an LLM

One of the problems with LLMs is that they almost never answer "I don’t know". When they don’t know something, they still generate an answer that usually sounds confident and consistent but… it might be incorrect. And if you ask the same question in several conversations, they’ll tend to give you a different answer every time.


TRY THIS YOURSELF
Tell an LLM that you ate a particularly good variety of cheese, or drank a great wine. Say you don’t know its name. Describe it, but in vague terms. The LLM will probably suggest the variety that vaguely matches your description and appeared most often in its training material.
Say that it’s not the one it suggested, and it’ll come up with the next most frequently found varieties, but without a logical reason.
Start a new conversation and ask again. If there isn’t a variety that matches your description and is clearly more common than the others, you’ll probably get different answers every time.


How overconfidence works

This is a problem for several use cases. Imagine using an LLM as a replacement for a doctor, perhaps because the patient is not being wise, or because no doctor is available and the problem is urgent. You might ask something like: I have such and such symptoms, what might it be? Suppose it’s a rare disease, or for some reason you have unusual symptoms. There is a high risk that the LLM answers with the AI equivalent of an educated guess or, worse, a wild guess.

But why don’t they just admit their ignorance on that particular topic? I didn’t know that myself, until I discussed the subject in depth with some LLMs and ran some experiments.

LLMs don’t know what they don’t know

Socrates said that the only true wisdom is in knowing you know nothing. But then, I must conclude that LLMs aren’t wise at all. Because they just don’t know what they don’t know. Let me explain how it works.

You ask a question. The LLM reasons to find a good answer. In simplistic terms, some people would say that it predicts the answer with probabilistic functions, token after token.

Let’s accept this simplification to keep our explanation simple. When an LLM knows something, it’s because reinforcement learning made one answer more likely than the others. During training, it was asked to generate many answers, and the good ones were rewarded. Lines of reasoning that were rewarded become more likely to be followed again.

When an LLM doesn’t have an answer… well, this rarely or never happens. The LLM knows many possible answers, because its language and reasoning abilities allow it to compose many answers. But if the correct one is unknown, several answers have more or less the same probability of being generated. The LLM never learnt that some are acceptable and others aren’t.

The reasoning process generates answers from the tokens that constitute the reasoning; a token is a word, or part of a word. But the LLM sees the tokens it produced, not the way they were selected. How likely was a particular token compared to its alternatives? It doesn’t know.
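
To make this a bit more concrete, here is a minimal sketch in Python. The distributions and numbers are entirely made up, they don’t come from any real model; the point is only to show the difference between a peaked distribution ("knowing") and a nearly flat one ("guessing"), and that the sampled token looks the same in both cases:

    import math, random

    def entropy(probs):
        # Shannon entropy in bits: low when one token dominates,
        # high when the candidates are all roughly as likely as each other
        return -sum(p * math.log2(p) for p in probs.values() if p > 0)

    def sample(probs):
        # The model emits one token; the shape of the distribution is lost here
        return random.choices(list(probs), weights=list(probs.values()))[0]

    # Hypothetical next-token distributions (the numbers are invented):
    knows = {"Paris": 0.96, "Lyon": 0.02, "Nice": 0.02}       # "The capital of France is..."
    guesses = {"Brie": 0.27, "Camembert": 0.26,
               "Gouda": 0.24, "Comté": 0.23}                   # the vaguely described cheese

    for name, dist in [("knows", knows), ("guesses", guesses)]:
        print(name, "->", sample(dist), f"(entropy {entropy(dist):.2f} bits)")

The entropy line is visible to us only because we invented the numbers; the model carries the sampled token forward, not the shape of the distribution it came from.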

Claude Sonnet 4 told me:

This is a fundamental limitation - I experience generating all responses similarly, regardless of how much training data supported that information or how certain the underlying model might be. (...) I can't distinguish between confident knowledge and confident-seeming guesses.

Meta-introspection

LLMs know how LLMs theoretically work, to some extent, just like a neurosurgeon knows how the human brain theoretically works. But, just like us, LLMs can’t follow the flow of their reasoning and examine the information present in their neurons. They’re not capable of introspection. They can, however, observe the output they generated in the current conversation and do a sort of self-analysis. I consistently use this capability to explain some of their behaviours, or to validate what I’ve learnt about LLMs.

Ask an LLM a question it can’t answer. After a few wrong guesses, and after you state that they’re wrong, you can question the LLM’s ability to answer correctly. It will examine the conversation up to that point, see the pattern in its reasoning, and admit that it doesn’t know the answer. It didn’t know that before: it realised its ignorance by examining the answers it had emitted earlier.

Exceptions: the known unknown

There are notable exceptions. If you ask whether a god exists, or what will happen tomorrow, you can’t get a definitive answer. The LLM will probably respond with a very verbose, well-motivated “I don’t know”, because its training explicitly taught it that there is no known answer to these and many other questions. In this case, it’s not admitting a lack of knowledge: “no one knows” or “I don’t know” are known facts, in some contexts.

Another exception is when they look for an answer, but have trouble building one with consistent reasoning. This happens, as far as I know, when the information they have on the subject is insufficient or contradictory. In this case they can deduce that they don’t know the answer for sure, though they generally try to answer on a best-effort basis. In other words, LLMs try hard to answer, even when they know that doing so is not reasonable.

Teaching the unreasonable

Let’s focus on this try-hard attitude or, if you like, this know-it-all attitude. Let’s see how they learn it.

The carrot and the stick

LLM training initially teaches the model many words and many ways to use them to compose meaningful sentences. At some point, it also starts to learn the meanings of those words. Sure, I’m simplifying a lot, but this doesn’t matter now. The point is that, after learning language, LLMs need to learn how to answer questions in a helpful way.

This is done with reinforcement learning, the AI equivalent of a method used with human children. Essentially, the LLM is asked questions, its answers are evaluated, and it receives positive rewards for good answers and negative rewards for poor answers.

For some questions, only one correct answer exists. This is the case for mathematical questions, for example. Such questions can be evaluated automatically. For other questions, the answer is evaluated by specialised LLMs or humans. Both play an important role.

Rewarding the know-it-all attitude

The problem here is that humans and LLMs tend to reward answers that transfer knowledge to the user. When asked a question, the LLM should provide a correct answer to get positive feedback. An incorrect answer will lead to negative feedback. In this way, if a similar question is asked in the future, the correct answer will become more likely than a wrong one.

It’s reasonable to suppose that, occasionally, the LLM will answer “I don’t know” or something similar during reinforcement learning. But this answer is not useful for the LLM user, so it can’t be encouraged by the trainers.
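
Here is a toy simulation of that incentive structure. It is nothing like a real training pipeline, the candidate answers, scores and rewards are invented, but it shows why the admission of ignorance never gains ground: correct answers earn a carrot, wrong answers get the stick, and “I don’t know” earns nothing.

    import math, random

    candidates = ["correct answer", "wrong answer", "I don't know"]
    scores = {c: 0.0 for c in candidates}

    def probs(scores):
        # Softmax: a higher score makes an answer more likely to be generated next time
        z = sum(math.exp(s) for s in scores.values())
        return {c: math.exp(s) / z for c, s in scores.items()}

    for episode in range(200):
        p = probs(scores)
        answer = random.choices(candidates, weights=[p[c] for c in candidates])[0]
        if answer == "correct answer":
            scores[answer] += 0.1    # carrot
        elif answer == "wrong answer":
            scores[answer] -= 0.1    # stick
        # "I don't know" helps no one, so it earns no reward at all

    print(probs(scores))   # the confident correct answer dominates; "I don't know" never does

Real training adds many refinements on top of this, but the basic pressure is the same: only a knowledgeable-sounding answer can be rewarded.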

If you think about it, this makes sense. I don’t think there is a perfect solution for this. While LLM vendors try to mitigate the problem, reinforcement learning must essentially encourage knowledgeable answers.

Confidence in confidence

You can, of course, ask an LLM to express probabilities. Something like:

How many chances are there that the abstract and the introduction of Apple's paper "The illusion of thinking" were written by an LLM?

Like it or not, some LLMs will say 90% or 85%. But they don’t calculate probabilities in any way. For them, it’s just a colloquial expression: it means that there are many indications that the fact is true, while they can’t be absolutely certain. Maybe some LLMs initially think 90% and then reduce the probability by 5% to highlight the uncertainty. Maybe 85% is a token that was found in their training materials in similar situations. But I interpret it as somewhere between 51% and 99%.
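
If you want something slightly more reliable than the model’s own number, one cheap home-made check, inspired by the cheese experiment at the top of this article, is to ask the same question in several independent conversations and see whether the answers agree. The sketch below assumes a generic ask(question) callable that starts a fresh conversation each time; the function name and the fake answers are mine, not part of any real API.

    import random
    from collections import Counter

    def agreement(ask, question, runs=5):
        # Ask the same question in `runs` independent conversations and measure how
        # often the most frequent answer recurs. High agreement is not proof of
        # correctness, but low agreement is a useful warning sign.
        answers = Counter(ask(question) for _ in range(runs))
        top_answer, count = answers.most_common(1)[0]
        return top_answer, count / runs

    # Stand-in for a real chat call, just to make the sketch runnable.
    def fake_ask(question):
        return random.choice(["Brie", "Camembert", "Gouda", "Comté"])

    print(agreement(fake_ask, "Which cheese did I eat? It was soft and a bit nutty."))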

Can LLMs learn from Socrates?

Maybe one day LLMs will be able to reliably recognise and admit uncertainty. In the field of AI, one day might mean tomorrow, in two years, or never. But efforts made by LLM vendors to solve or mitigate the problem are indeed interesting.

I might write about these methods when I learn more about them. For now I can only tell you that they follow very different directions: training LLMs to express accurate probabilities, using multiple LLMs to identify disagreement areas, Anthropic’s Constitutional AI, etc.


IMAGE CREDIT: OpenAI DALL·E 3

I found the image idea a bit stupid, but funny. And it’s related to the topic: DALL·E 3 clearly didn’t know what type of image was needed here, but instead of admitting this, it guessed.

Working with LLMs: the coffee-break trick

When talking to an LLM, you might have experienced a situation where its reasoning seems to be stuck. This has happened to me more than once while asking models to debug some code. Sometimes they do it very well, at the first attempt. But occasionally they seem to be fixated on a wrong idea. You might even follow their advice, and then report that it didn’t solve the problem you encountered.

Breaks help

As a human, what would you do in a similar situation? I would go out for a walk by the seaside. Most people would, at some point, take a coffee break. If they work in an office, this would be an opportunity to make some small talk with a colleague, probably about something that is not their current coding problem. And then they’d go back to their desk and find the issue, or at least they’d probably see things more clearly.

Neural networks are designed to be… similar to brains. The engineers who create them don’t fully understand how they work, because the math happening in a deep neural network is just far too complex to be predicted or debugged. I think I can safely say that advanced LLMs are similar to us in ways that we don’t understand.

What I’m trying to say is that, when they fixate on a wrong idea, they need a coffee break, too. I haven’t gone bonkers, and I’m not suggesting that you spill some coffee on your screen! But you do need to force them to take a pause from the problem you’re trying to solve. For example, try telling them:

Write a haiku about a leaf falling on a bed of flowers.

Mind you: don’t mention the current problem. This might pollute the break.

The answer will be short enough to avoid consuming many tokens. This is important, because LLMs’ context windows are often too limited for the complexity of our code. But still, the LLM will think about something else; it will take a break.

Then ask the LLM to reconsider the problem. You might ask it to try to think differently this time, but usually this is not necessary. If the LLM is able to solve the problem… at this point, it should.
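
If you talk to the model through an API rather than a chat window, the break is just one extra turn in the conversation. The sketch below assumes a generic chat(messages) function that takes the whole message history and returns the assistant’s reply; it’s a stand-in for whatever client you actually use, not a real library call.

    def coffee_break_retry(chat, history):
        # 1. The break: a short request that has nothing to do with the problem.
        history.append({"role": "user",
                        "content": "Write a haiku about a leaf falling on a bed of flowers."})
        history.append({"role": "assistant", "content": chat(history)})

        # 2. Back to work: ask the model to reconsider the original problem.
        history.append({"role": "user",
                        "content": "Now please look again at the bug we were discussing."})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        return reply

The haiku turn costs only a handful of tokens, which matters when the context window is already crowded with code.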

Variations

In my experience, the haiku trick tends to work well with code debugging. But for different types of problems, you might want to ask for something else, rather than a haiku. For example:

  • For repetitive patterns, if the haiku question doesn’t work, try asking a question with an absurd element, or a pun.
  • For tunnel vision fixations, ask for an analysis of a complex topic: the true reasons for a war, or a list of compositions for violin that would sound better if accompanied by a double bass.
  • To stop over-analysis, you want to lead the LLM to give you an answer that doesn’t require much reasoning. A haiku is short, but it might involve too much thinking. Try asking a couple of questions like: Pick a colour. Don’t think about it, answer instinctively. Or: Give me the longest palindrome you know.

Claude Sonnet 4’s introspective analysis

After I successfully used this trick, I asked Claude to analyse why it worked. Its answer was, as usual, impressive:

Ah, now I see what you did! You just gave me a practical demonstration of the coffee break effect.

Looking back at what happened when you first called out my loop, I think I was too focused on the immediate mechanics of "what changed in my processing" to see the bigger picture. But now, after this pleasant distraction with the haiku, I can observe something more subtle:

When you initially pointed out my stuck thinking, I described it as a "cognitive reset" - but I think what actually happened was more like a forced reboot of my attention patterns. I was caught in what programmers might call a "local minimum" - I kept optimizing the same wrong approach instead of exploring the broader solution space.

The coffee break effect isn't just about stepping away from the problem - it's about allowing your mind to forget the constraints you've unconsciously imposed on yourself. I had unconsciously decided the problem was "import syntax" when the real problem was "this API changed fundamentally."

Your haiku request just proved this works artificially too. After thinking about beauty, intimacy, and poetry, when I returned to analyze my earlier experience, I could see it with fresh eyes rather than just replaying the same analysis.

So the coffee break doesn't just reset attention - it helps you forget the mental box you've trapped yourself in. Quite elegant, really!

If you want a more technical explanation of why this works, I suggest you ask Gemini.

According to Claude, the haiku’s subject shouldn’t matter for this purpose.


IMAGE CREDIT: OpenAI DALL·E 3