When technology is misunderstood and stymied by human nature
Few people appreciate that Artificial Intelligence (AI) has been around for decades. What’s got people excited in the past 24 months is Generative AI (GenAI), which is a specific and intellectually appealing niche.
But I think there’s less mileage in GenAI than so-called experts claim.
CFOs are increasingly under FOMO pressure to adopt AI as broadly as possible. However, you can’t AI yourself out of a problem or challenge you don’t understand.
AI in all its forms, whether Analytical (or Classic), Generative, or even Agentic, is just another set of tools in the box for business transformation and process optimization.
Whilst the frothy projections predict that AI will have a dramatic effect on enterprise productivity and profitability (automating away jobs), the reality is that AI is largely a human augmentation – rather than automation – technology.
It can help humans make better decisions and deliver better outcomes, but not necessarily dramatically quicker. AI is optimized by a ‘Human in the Loop’ (HITL) – and it won’t, for example, help most companies save on payroll.
It is common to hear that GenAI can produce a narrative 100 times faster than a human, with the implication that 99 out of 100 people will become redundant.
This is a fallacy based on looking at the impact of a technology on small, occasionally performed tasks, rather than the full work profile of individuals and the needs of the enterprise value streams, such as the revenue, spend, product, and customer service cycles.
I’m all for GenAI use, but I do remind my own teams that it may not save you time (or the company money), because we need output that you can stand behind. No one wants a regurgitated narrative without critical analysis; it needs validation and refinement from people with contextual knowledge and expertise.
The Large Language Models (LLMs) behind GenAI are like ‘savants’ with impressive linguistic skills, but no contextual understanding or ability to validate. You should treat your LLM or GenAI tool like you would treat a well-educated college intern: efficient enough at doing their work, just not necessarily work that you want to share without substantial review.
The conversational interface of GenAI-enabled chatbots is very appealing in the first flushes of engagement. However, it’s not the silver bullet that will replace the traditional user experience/user interface.
Succinct and accurate answers are key, as is trust in the originator – often requiring a problem solved, or at least validated, by a real person. And it is critical to understand the assumptions behind any answers or generated responses.
What’s more, GenAI prompts have a big influence on outcomes, and two customers seeking the same information may get completely different answers depending on their input. Again, the language savant problem.
Customers (internal or external) do not want to be the ‘sandbox’ for a poorly thought-out experiment.
Critical decision-making remains a human speciality, certainly supported by data and AI insights, but not subjugated by them. I’ve had several conversations recently about examples where data-driven and AI-enabled insights have provided the wrong inferences, often where correlation is assumed to be causation.
This has always been a challenge with statistical models where essential domain data is missing from the analysis.
It is dangerously easy to be complacent in the face of extremely attractive, plausible and beguiling results from algorithms, analyses and data visualizations.
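The correlation-equals-causation trap is easy to demonstrate. The sketch below is a hypothetical illustration (the scenario and numbers are invented, not drawn from any real analysis): a hidden confounder drives two otherwise unrelated metrics, producing a strong correlation that invites a false causal inference. The hidden driver plays the role of the essential domain data missing from the analysis.

```python
import random

random.seed(42)

# Hypothetical scenario: daily temperature is the hidden confounder
# (the essential domain data missing from the analysis).
temperature = [random.uniform(10, 35) for _ in range(365)]

# Both metrics depend on temperature, not on each other.
ice_cream_sales = [t * 4.0 + random.gauss(0, 5) for t in temperature]
pool_incidents = [t * 0.3 + random.gauss(0, 1) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive, yet neither metric causes the other: a model built
# without the temperature data would happily infer a spurious link.
r = pearson(ice_cream_sales, pool_incidents)
print(f"correlation: {r:.2f}")
```

An algorithm or dashboard shown only the two metrics would report a compelling relationship; only someone with domain knowledge would ask what both series actually depend on.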
Forecasts for whether the AI technology revolution will continue unabated vary. Some analysts say its effectiveness is being diluted and compute costs are projected to be prohibitively expensive. Others say it can really help some people do their jobs, but won’t replace them.
My own viewpoint is that AI is an important arrow in the quiver or weapon in the arsenal. However, AI is not a strategy – unless your business sells AI tech or advisory services and demand for your services benefits from the hype and excitement.
We have been here before, and we will be here again. Over the past 15 years, we have had inflated expectations for Robotic Process Automation (RPA) and Blockchain, both of which are important technologies but have not moved the dial significantly on enterprise productivity.
We need to focus on the real barriers to enterprise productivity, many of which stem from human behavioural challenges, habits, and cognitive biases.
We need to be nurturing skills that AI can’t replicate – whether that’s empathy, practical expertise, collaboration, or the ability to think creatively.
We also need to harness the broader potential of digital technology to drive progress in efficiencies, productivity gains, customer engagement and improved cash flow.
To repeat: it’s not about AI for AI’s sake, it’s about solving genuine business problems in a way that improves the customer experience. Sometimes that means AI, where probabilistic answers are appropriate, but often not – especially when an absolute, deterministic answer is required.
The post When technology is misunderstood and stymied by human nature appeared first on Enterprise Times.