Before we begin, a word of warning: I know very little about AI.
Because of this, some of my explanations below may be a bit off base. Certainly don’t take my word on anything technical – and if you think I’ve got something wrong, feel free to email me back explaining how ignorant I am.
However, even if that’s the case, there is a deeper point here than one about AI. It’s a point about us; about people. And on this one I’m not off base.
So even if the path we take to get there is a bit bumpy, stick with me until the destination.
___
When you get right down to it, there are only two types of question that can be asked:
- Strategic
- Non-strategic
Non-strategic questions are those with a fixed answer. How high is Kilimanjaro? What’s 234 divided by 763? Who was Henry VIII’s third wife? All questions with an unambiguously “right” answer.
Strategic questions on the other hand are those with no fixed answer. How do I lose weight? Should I divorce my husband? What is the one true religion? How do I grow my business? The reason these are “strategic” questions is because they require strategic answers: opinionated punts based on contextual knowledge. There are unlimited possible answers, and it is only experience which will determine what is “right” – and perhaps not even then.
The reason I’ve made this distinction is to tee us up for a discussion about AI – a discussion which, when you think about it, is unavoidably about questions. After all, that’s how it works, right? (At least with the most visible and hyped versions of AI). You ask it a question, and it gives you an answer. Or, to use the proper terminology:
The human —> prompts
The AI —> responds
Now clearly, when it comes to non-strategic questions, the abilities of ChatGPT etc. are truly staggering. After all, it “knows” pretty much everything. And unlike Google and other vast information repositories, it understands exactly what you’re looking for.
However, when it comes to strategic questions – ones which are context-reliant – its abilities are less impressive. More generic platitudes than useful insights.
So far as I understand, there are two key reasons for this:
- The AI desperately wants to be “right”. This means that it will give answers which are probabilistically most likely to be true, which in the case of strategic questions means the most obvious answers. It could provide a riskier and more insightful answer, of course, but because it doesn’t really understand what it’s saying, this would increase the chances of it giving a clearly idiotic answer. That in turn would erode trust, so it chooses instead to play it safe – precisely the opposite of what you’re looking for with such questions.
- The AI can never appreciate context. It might appear that AIs know quite a lot, and in one sense they do, but in another sense they know almost nothing, because they have no way of accessing knowledge about the real world. They do not understand the situational reality that you and I operate within – they only know “book information”; stuff that’s been committed to paper. Which, when you think about it, is an infinitesimal fraction of the information that actually exists. So if, say, a dental surgery were to ask ChatGPT to develop a growth strategy, the system would have no way of knowing that this particular surgery lies on the route of a very busy school run, meaning it has great potential to specialise in families. That’s what I mean by context. Information that requires “embodiment” to access.
What these two factors reveal, then, is that despite appearances, human beings are actually far more powerful than AI when it comes to a particular type of knowledge.
For one thing, humans understand what it is they are saying, and are thus able to judge whether what they are saying is useful or an empty truism. If you asked a human counsellor and an AI for relationship advice, even if they were operating from exactly the same educational texts, the human would be able to give you a more insightful and subtle response, simply because they don’t need to compensate for a lack of understanding by playing it safe.
And for another, humans have access to context. Thus they are able to fit their answers to reality far more easily than an AI can – for the obvious reason that they can see reality. They are in it. They don’t exist in a world of pure theory and abstraction. They exist in a world of blood, earth, air, and life.
Bottom line?
We are the knowledgeable ones.
Now assuming that I’m broadly right here, what does this mean for our relationship with the technology?
I think it means that when it comes to strategic questions, we’ve got it backwards.
You see, contrary to the way we normally think about it, our problem isn’t lack of information, or lack of answers. We have the knowledge we need. We don’t need something “smarter” to do our thinking for us.
No, what we need is help asking good questions.
Good questions are the source of strategic insight, not good information. Good questions don’t determine what we know, they determine how we interpret what we know: smart questions, smart answers – dumb questions, dumb answers.
Take business strategy as a case in point:
Every business can tell you why their strengths are good.
But few can tell you why their weaknesses are good.
Every business can tell you why their competitors suck.
But few can tell you why their competitors are great.
Every business can tell you how they are better than the other guy.
But few can tell you how they are the opposite of the other guy.
All this has got nothing to do with facts, and everything to do with angles of seeing – in other words, the questions you ask.
And this is most people’s problem.
They don’t have the imagination to ask interesting questions. They ask the obvious thing, and get an obvious answer. They think that their knowledge is to blame. They think that if only they knew more, then they’d alight on the killer answer.
But no!
They already know enough. They just need to be prompted to process their knowledge in a different way.
And that brings me back to AI.
What if instead of us asking it questions, it was asking us questions? What if the roles were reversed:
The AI —> prompts
The human —> responds
To me, so far as strategic questions are concerned, this is much more aligned with the skills of the two parties. Humans have the real world knowledge and sensitivity to provide genuinely profound answers. And the AI has the ability to ask provocative and probing questions, without needing to know things beyond its grasp.
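For what it’s worth, the flip is trivially easy to prototype. Here’s a minimal sketch of what such a tool might look like, assuming the openai Python package – the model name, the system prompt, and the five-round limit are all placeholder choices of mine, not anything prescribed:

```python
# A role-reversed session: the AI interviews, the human answers.
# Assumes the `openai` Python package (v1+); the model name, system
# prompt, and round count are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{
    "role": "system",
    "content": (
        "You are a strategy interviewer. Never give answers or advice. "
        "Ask one short, provocative question at a time, building on the "
        "human's previous replies, to help them reinterpret what they "
        "already know."
    ),
}]

for _ in range(5):  # five rounds of questioning, chosen arbitrarily
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = response.choices[0].message.content
    print(f"\nAI asks: {question}")
    messages.append({"role": "assistant", "content": question})
    answer = input("You answer: ")
    messages.append({"role": "user", "content": answer})
```

The design point is simply the inversion: the model’s only job is to prompt, and the transcript of your answers is where the strategic thinking actually accumulates.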
Perhaps somebody has already built a tool to do this – I’m not aware of one, but like I said, I’m no expert. Even so, regardless of whether they have, or whether we see this relationship flip occur, my core point remains:
We already know enough.
We know more than enough.
But we don’t ask interesting questions.
You know, sometimes prospective clients ask me, “How will we know if the strategy is the right one? How can we prove it?” And my answer is always the same: it will be obvious.
The reason it will be obvious is because strategy never relies on new information – new facts which may or may not be reliable. No, it always relies on reinterpreting the information you already have. On seeing that which is obvious, self-evident, and yet currently unseen.
An AI could help you do this, no question. But once you get the idea, it’s not very hard to do it yourself.