You join me again for part 7 of my cultural context series, where I examine some of the silent macro forces which influence the currents of the world in which we act and strategise. Here are the previous 6 for reference, with the most recent first:
The crushing tedium of excellence
The schizophrenia machine: how brands remake our identities
The post-cultural myth and how it permeates everything we do
Remapping the world: the rise of horizontal loyalty
Sugar, sex, and the supernormal
Concentration, the untold story of the 21st century
___
It may have passed you by, but a couple of weeks ago saw the launch of a new piece of AI software called ChatGPT. Essentially it’s a chatbot, similar to those used by companies with terrible customer service to “assist” you online, only this one is good. Scarily good.
To give you an example, I asked it a question requiring a degree of creative thought: “if a dog was a football fan and could talk, what would it say?”. Here’s a snippet of what it came back with:
“I am a huge football fan! I love watching the games, cheering on my favourite team, and feeling the excitement of the game. I even have a jersey with my name on it and a cosy spot on the couch to watch the games. I am a loyal and dedicated fan, and I never miss a game. I bark at the TV when my team scores and wag my tail when they make a great play. I also like to bark at the refs when they make a bad call, but my humans tell me to settle down”.
Obviously this was not a factual request which the AI could simply have Googled and regurgitated – it required a degree of originality. And in that respect I would suggest it did a better job than 80% of humans would have managed given the same instruction. It’s not exactly “good”, more “competent” – but one of the things we’ve learned is that competent tends to beat good in the market over the long run. Just look at… well, literally any creative field you care to mention.
What I’ve found interesting about this launch is not so much the AI itself (impressive though it is), but rather the response to it – which has, on the whole, been shockingly positive.
I say shockingly because it seems far easier to imagine negative consequences from this technology than positive ones. I’m not talking about any Skynet-esque apocalyptic wars between man and machine here. No, I’m talking about the technology as it is in the here-and-now, which clearly has the capacity to replace the majority of low-to-mid level knowledge workers.
We are used to tech replacing manual workers of course; automated production lines, self-checkouts at the supermarket, etc. But so far we smug middle classes have remained largely immune, thanks to our trade in “creativity” and abstraction rather than mechanistic tasks. Here however we have a machine that could quite easily replace, say, a junior SEO marketer with no bother. In an instant it can write a blog on any given topic which would happily grace your average corporate website, perfectly optimised for keywords which it also selected – and poof! That’s pretty much a whole industry right there.
Now I know what you’re thinking: wasn’t it always thus? Haven’t we always introduced new ideas and technologies into the world, and allowed their consequences to unfold? And haven’t we always adapted? This is little more than the latest in a long line of innovative concepts – from the Reformation, to the Spinning Jenny, to the light bulb, the jet engine, the smartphone, and beyond. This is just the way human society works!
Not quite.
In this piece I want to discuss what I think is a highly consequential change in the way that new ideas like ChatGPT emerge and spread; one which makes them behave very differently from the innovations of the past, and which calls into question the very ethics of innovation in the first place.
We can call this change “the friction deficit”. And it works something like this…
In the past the world was highly fragmented and divided – by geography, language, culture, resources, and all sorts of other things which made one locality and group of people separate from another. What this meant was that when a new idea emerged in one of these communities it would take a long time to spread to others – i.e. it had to overcome “friction”. I’m not only talking here about technologies and intellectual concepts, but literally anything that has the capacity to spread. Disease is an equally relevant example. If a virus emerged in one city it would struggle to spread to others thanks to the “firewall” of distance between them. Any particularly toxic threat such as this would have the opportunity to burn itself out before engulfing the entire planet. For something to achieve such a spread, it first needed to “survive” in a reasonably benign way in its original location before it ever got the chance.
If we consider the example of, say, Western industrialism – which has now become basically the default operating system for 90% of the planet – its progress was very slow, taking perhaps 150 years to truly gain global purchase. During this time there was plenty of opportunity to test the effects of this “way of being” in its original testbeds such as the UK. Had it been ruinously destructive, those places would have imploded or adapted long before the concept had the chance to “infect” the whole world – rather like what happened with Communism, which rotted at the root before its growth was complete. As it happened however the idea of Western industrialism proved durable. Durable enough to overcome geographical friction and earn its passage around the globe, at least.
(Of course, this durability doesn’t mean that the idea is benign over a longer time period. Western industrialism’s effect on culture, the environment, and human psychology could yet prove catastrophic – but nevertheless it has been “safe” enough to thrive for a fairly long period of time.)
In short then, under this fragmented “high friction” model where ideas travelled slowly, innovation in all its forms was kept largely in check. Things had to “earn the right” to spread before doing so.
Today however this fragmentation no longer applies. Globalisation has rendered the world flat and frictionless, meaning essentially that the “firewalls” have been removed. New things are no longer contained in a limited geography for an extended period of time to “see how they go” – instead they are everywhere all at once, rendering the entire planet the testbed.
The most obvious example of this is, of course, Covid. Reports now suggest that the virus was in fact already global as early as August 2019 – before we were even aware of it at all. This naturally suggests that our efforts to “control the virus” were even more futile than we already thought. The truth is that in a frictionless world there is no control – not of a virus, or anything else. When something exists, it exists everywhere immediately, for better or worse.
Another very different example can be seen on the cultural level. For most of our lifetimes, culture was geographically limited – for instance through different countries having their own TV stations, music, and movies. Global hits could emerge of course – just as global technological innovations could take hold – but there was never a sense that everything was global; that everything was ubiquitous. Now however the lack of friction is fast generating a global mono-culture, with platforms like Netflix, Disney+, and Amazon Prime coming to represent the globe’s TV stations. The effect of this is that we all gradually come to have the same cultural diet, the same experiences, and the same ideas, which in turn accelerates the homogenisation of culture and a decline in creativity.
Creativity after all requires a diversity of inputs and experience, in order to form new interesting connections. If we remove the friction from the world and allow information to travel more freely, the “strong” information rapidly overpowers the “weak”, thereby lowering the diversity of inputs, and thus the creative output.
The point of all this is simple:
Geographical friction is an essential component of healthy innovation and creativity.
It allows for low risk experimentation. It keeps bad ideas from spreading. It allows good ideas to mature before they spread. And it promotes the very diversity of experience that actually generates ideas in the first place.
Remove that friction, and innovation doesn’t only become less effective. It becomes outright dangerous. Ideas with disastrous consequences can spread freely before we’ve had a chance to adapt or question whether we really want them after all. The stakes are raised and mistakes start to matter – far more than they ever did before.
If this is true, then why don’t people worry about it?
On a basic level, I think this is because we are all used to the old fragmented paradigm, and the “safe innovation” it supported. We don’t realise yet that the ground has shifted under our feet, and the dynamics have changed. We still see novelty as a positive thing.
On a more philosophical level, I think people believe on the whole that if an innovation is “good” then it will spread, but if it’s “bad” then it won’t. This is a variation of the free-market concept of the “marketplace of ideas”, where via free and open competition the best ideas are “selected” for survival, the worst perish, and we all profit together. This understanding would say that AI (for example) will only triumph if the market (i.e. people) want it – thereby proving its virtue.
Whilst I am broadly a supporter of free markets, this strikes me as an obvious fallacy. There is a big difference between a strong idea (i.e. one that perpetuates itself and outcompetes its alternatives) and a good idea (i.e. one that produces the greatest amount of holistic flourishing for the largest number of people over the long term). Pornography and smoking, to pick a couple of obvious examples, are clearly “strong” ideas but probably not particularly “good” ones. It doesn’t follow that if we were to allow them to perpetuate themselves to their fullest expression that the world would be a better place. By the same token Covid was a strong idea capable of self-willed growth, but not exactly a good one either. People will say that a human innovation and a disease aren’t the same thing, but dynamically of course they are. They are all novel concepts seeking replication and growth, without regard for second or third order consequences.
This then is where strategy becomes relevant.
In a frictionless environment, where the capacity for testing has become heavily diminished, more care is required when releasing new ideas into the world. As recently as 20 years ago I doubt this was so much the case, since geographical friction served as a satisfactory safety net that was able to temper unpredictable consequences. It allowed us to more or less “innovate with impunity” because hey, what’s the worst that could happen?
But with that safety net now gone, the onus has shifted to us. We have to take responsibility for our ideas, rather than simply allowing them to flourish of their own volition.
In Silicon Valley, it’s common for people to talk about “what technology wants”, as if it were its own life force with its own agenda, rather than something we create and control – and they are spot on with this insight. That is how technology works; in fact how all ideas work. When developing a strategy for our own business, we aim to harness this power of “self-willed growth” by setting things up so that growth will follow organically. This is fine of course because the average business isn’t about to turn the world upside down no matter how successful it becomes. But there are some cases where the reverse applies; where we want to manage the automatic perpetuation of an idea if the consequences are sufficiently large.
This is not an argument against innovation. It is an argument for considered innovation, now that the stakes of the game have changed thanks to the friction deficit.
To that end I’ll leave you with this analogy. The Amish are famous for being “anti-technology”, but strictly speaking that isn’t quite accurate. There are in fact various forms of modern technology the Amish embrace, for instance disposable diapers. Rather than being “anti-technology”, the Amish are in fact selective about technology. When a new innovation emerges they test it, consider its consequences on their society, and then choose whether or not they think it would be a net positive to adopt it. Sometimes they do. Sometimes they don’t.
By comparison in our culture, we don’t select technology; we allow it to select us. If it has the capacity to spread, then we allow it to. We read strength as a synonym for goodness, for worth. And as such we allow things to develop in whatever direction technology deems fit (i.e. the direction which is best for technology).
I wouldn’t propose being Amish. But they do at least realise one thing that we don’t: we have a choice. And I don’t need to remind you that choice is strategy. So a dash of strategic thinking wouldn’t go amiss.