Fighting False Precision (or AI Isn't Everything)
There’s a lot of talk these days about Artificial Intelligence. Much of the buzz behind AI, and behind many of the “digital disruptors” that leverage it (think digital twins, IoT, blockchain applications and more), is well founded. Taken together, these capabilities and technologies will change almost every facet of our lives, from how we live to what we buy to how we work.
Some of this is happening already; other aspects will unfold over time. At work, in particular, AI and the insight it delivers promise to help us become more targeted, more precise and far more focused in what we do.
We will solve some of our most acute problems because we’ll have full visibility into what’s causing them. We’ll be able to offer far more to our customers because we’ll understand what and why and how they buy from us. We’ll be able to operate much more efficiently and seamlessly because our value chains will be far better understood, analyzed and reprogrammed.
But anyone hoping AI will be the “be-all and end-all” of our decision-making process is in for a surprise.
Yes, for a targeted set of problems, AI and the technologies it enables will take care of things on their own (smart automation, guided buying, smart warehousing, etc.). But the full extent of our business life is far more complex than these relatively simple use cases.
The thing is that human behavior adds material complexity to everyday situations, complexity that cannot easily be modeled or, even where it can be modeled, easily responded to. You and I might face the same situation, but our collective experiences, our environment, our individual realities and our decision criteria may well lead us to act and behave very differently. Even where the answer appears clear-cut, it may not be.
That’s simply the complexity of the human state. That’s life.
So the idea that “perfect” data, and the insight artificial intelligence delivers from it, will be the complete solution is misguided. It’s really the pursuit of false precision.
After all, the data might tell us one thing, but our environment and our specific situation could well suggest another. Context and interpretation are essential to reaching the right answer for a specific situation. So is assessing the “how-tos” needed to get that answer implemented: How will it be done in this environment? By whom? What are the implementation obstacles and considerations?
This is why, while the future of AI is bright, it isn’t one devoid of the human experience, or of human involvement.
It’s one where the human factor is actually enabled: enriched, and focused on high-value activities. The real point of AI is that it enables us to optimize the human experience.
Anything else is simply the pursuit of false precision.