Originally published on LinkedIn
More than ever, we have a number of choices when it comes to accessing information and completing tasks. Voice and sound, touch and contact-less gestures, biometric capabilities, fully autonomous systems… Convenient for some, critical for others who may only be able to interact with the world through a reduced set of senses. Accessibility, through innovative hardware and software solutions, has never been better. However, over the last year or so, with the arrival of generative AI in the mainstream, I still hear things like, “now we’ll need to really think about the customer experience,” or, “I need to structure my data so that AI can read it.” It’s interesting how accessibility has gained an elevated importance because we need a machine to do our bidding; accessibility is not ‘their’ problem anymore, it’s ‘our’ problem.
Your mind might immediately jump to the notion that AI will replace us all. With any leap in technology there will always be fear; just ask the bank tellers how they felt when ATMs arrived. I’ll say one thing though: building AI systems is HARD. Many are trying to force deterministic functions into a probabilistic system and hoping for the best. Evals will help keep us on course, right? But wait for one moment...
The Knowledge Funnel
There’s a fantastic concept that Roger Martin (Roger L. Martin, Inc.) introduced in his book The Design of Business called the Knowledge Funnel. He describes how knowledge starts out as a mystery and, through hard work, makes its way down the funnel to heuristic and finally algorithm. It’s a simple and powerful model. Mr. Martin has since followed this up with an article on how generative AI helps push knowledge through the funnel much, much faster. As I look around, I have to agree. Making sense of concepts and data we don’t understand, synthesising research, creating algorithms from initial rules of thumb: it aligns perfectly. As do the activities we perform within each stage of the knowledge funnel: exploring unknowns and pushing boundaries; formulating better frameworks that help us make decisions; ruthlessly optimising our software. We are always on the lookout for ways to scale ourselves.
All of that is to say, we’re trying to make sense of the world around us all of the time, and finally we have our own assistant to bring along for the ride at any hour of the day. The only catch is that you need to provide it with all the context you have in your head in order for it to properly understand what you’re asking.
And there’s the rub.
Help me help you help me
Generative AI models are typically trained on huge datasets in order to provide the best, most informed guess at what you want to know. To improve matters, you need to explain yourself… fully. To get a simple answer to the question, “what should I have for dinner tonight?” preferences, resources and context need to be provided, or else you’re getting a pretty wild guess. Even then, I’m sure there’s a bunch of coaxing and compromise that needs to happen. Giving examples of good answers, and potentially bad answers, helps, but only so much. In the end you may as well feed in all of your favourite cookbooks, restaurants and foodie social feeds and say “something like this please.” Then maybe, just maybe, you get something back that’s tailored for you.
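To make that concrete, here’s a rough sketch of what “explaining yourself fully” can look like in code. I’m using the OpenAI Python client purely as an example; the preferences, constraints and model name are all invented for illustration.

```python
# A minimal sketch of context-rich prompting. The point is how much
# context a genuinely useful answer demands; every detail below is
# something the model cannot know unless you tell it.
from openai import OpenAI

client = OpenAI()

context = """
Preferences: vegetarian, dislikes mushrooms, 30 minutes max to cook.
Resources: well-stocked pantry, no oven this week (kitchen repairs).
Situation: weeknight dinner for two, one of us is training for a race.

Example of a good answer: a specific dish, why it fits the constraints,
and a short ingredient list.
Example of a bad answer: "something healthy, like a salad or pasta."
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in whatever you use
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": "What should I have for dinner tonight?"},
    ],
)
print(response.choices[0].message.content)
```

Strip out the system message and you’re back to the wild guess.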
Let’s return to the notion of accessibility for a moment. It’s probably taken for granted now that we can upload basically anything into ChatGPT and it will make sense of it: PDFs, images, code, CSV files, text files, video. That was not always the case. Well-structured data is still at the heart of this, and perhaps now, finally, it’s something we’re taking seriously because we need a machine to understand. Better late than never, but this heightened level of accessibility was always a need; in the past it was just too hard to determine the return on investment, and so it was not really a priority for most. In fact, it was often seen as a way to avoid the huge expense of lawsuits rather than a way of enhancing usability. That is, until making information and systems accessible meant an AI agent could understand it too.
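As a toy illustration of the difference structure makes, here is the same information expressed two ways. The restaurant and its fields are made up, and this isn’t any particular schema standard, just a sketch.

```python
# The same restaurant information, twice. Structure makes meaning
# explicit instead of leaving a machine (or a screen reader, or an
# AI agent) to guess at it.

# Unstructured: a human can read this; a machine has to infer everything.
blob = "Luigi's, Tues-Sun til late, pasta etc, patio out back, ramp at side door"

# Structured: every field is named, typed and unambiguous.
restaurant = {
    "name": "Luigi's",
    "cuisine": "Italian",
    "opening_days": ["Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
    "closes": "23:00",
    "outdoor_seating": True,
    "wheelchair_access": {"entrance": "side door", "ramp": True},
}
```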
The problem (re)defined
I wasn’t joking earlier when I mentioned someone saying that, due to AI, they felt they needed to really dig into the customer experience. Being able to instruct a bot on how to interact with people means being able to adequately and accurately describe exactly what the situation is and how it needs to respond. For that you need a deep understanding of what is going on. There is more emphasis than ever on being able to describe what’s needed and what the desired outcomes are. Thinking deeply about the problem, truly defining it, and then considering meaningful solutions is what makes the real difference now; there is nowhere to hide.
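Here’s a sketch of what that looks like in practice. The scenario and every instruction in it are invented, but notice that each line is a product decision that requires genuinely understanding the situation.

```python
# A sketch of "describing exactly what the situation is" for a bot:
# a support-bot system prompt. This string would be passed as the
# system message to whichever model you use.
SUPPORT_BOT_INSTRUCTIONS = """
You handle billing questions for a small utilities provider.

Situation: callers are often stressed; many are on prepaid meters and a
mistake can leave them without power. Accuracy beats speed.

How to respond:
- Confirm the account before discussing any balance.
- If the caller mentions disconnection or hardship, offer the payment
  plan first, before any other option.
- Never guess a figure. If the balance is unavailable, say so and hand
  off to a human agent.
- Plain language, short sentences, no jargon.
"""
```

You can’t write that prompt without having done the thinking first.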
AI systems should ultimately help us be more effective, even more human, as we reconsider how software is built. But there are still frightening consequences of where those systems might take us. It’s one thing to make a machine act with more humanity; it’s another to lose our humanity to those same machines.
The companies winning with AI aren't the ones with the most sophisticated models or the biggest AI budgets. They're the ones who understood their problems deeply enough to know exactly how AI could help.
What's your experience with AI adoption in your organization? Are you seeing the pattern I'm describing? Join the conversation on LinkedIn