Has AI Lost the Public Trust?
The potential for artificial intelligence (AI) technology to enhance our quality of life is enormous. From autonomous vehicles that reduce deadly accidents, to medical breakthroughs that will cure diseases, to increased productivity that will drive economic growth, AI will disrupt virtually every area of human endeavour. However, to realize these benefits, there must be a trust relationship between AI and the public. We have to believe that AI is a force for good.
Every day, there are multiple media stories about AI, and the majority are negative: hallucinations, security, safety, privacy, environmental impact, biased data, AI bubble, job losses, dystopian scenarios and so on. Need I say more?
At the risk of anthropomorphizing, what does it mean to have a trust “relationship” with AI products? Gemini says humans need to have “a firm, secure belief in their reliability, integrity and good intentions” (retrieved March 9, 2026). ChatGPT says our interactions with AI should “feel reliable, safe, and transparent” (retrieved March 10, 2026). Does today’s AI technology meet any of those conditions?
We are in the age of an AI gold rush. A small number of trillion-dollar companies are racing to be first to the finish line and create so-called artificial general intelligence (AGI). First prize is apparently vast wealth. But these companies are running on steroids.
Being first is more important than how you get to the finish line. To win the AGI race means that hallucinations are an algorithmic byproduct to be solved later; that security is something that gets in the way; that size matters, so we need more data no matter where it comes from; that spending hundreds of billions of dollars on data centers is critical despite the environmental impacts. To paraphrase the Dr. Ian Malcolm character in the Jurassic Park (1993) movie, "Your corporations are so preoccupied with whether or not they can create AGI, they don't stop to think if they should." This is not the way to build trust.
Stanford’s 2025 AI Index Report publishes data showing that in the United States only 33% of the survey population trusts AI companies to protect personal data. In addition, 59% do not trust AI applications to be bias-free. An American Automobile Association survey found that only 13% trust autonomous vehicles. A Gallup-Bentley University survey found that “77% of adults do not trust businesses much (44%) or at all (33%) to use AI responsibly.”
There is a serious AI credibility problem.
The Need for a Reset: Privacy-Focused AI
AI needs a reset. Instead of just thinking about bigger, more powerful (yet flawed) corporate AI models, how about contemplating smaller, more practical (and useful) personal AI models? Restoring trust means interacting with the individual at a level that is meaningful to them. Move from the data center to the desktop.
Consider AI that:
- Analyzes your personal data in depth
- Provides accurate answers to questions about your data, and if it cannot answer then it responds with “I don’t know”
- Can be used safely and securely
- Respects the privacy of your data
Some of this is already happening. For example, we now have smaller subsets of Large Language Models (LLMs), sometimes simply called Language Models (LMs), that run on the desktop. Further, local data can be integrated with an LM using Retrieval-Augmented Generation (RAG), so that answers grounded in your own data take priority over external data.
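The local-first idea can be sketched in a few lines. This is a toy illustration only: the `LOCAL_DOCS` store, the word-overlap retriever, and the similarity threshold are all stand-ins for what a real system would do with an embedding model and a local LM. It also shows the "I don't know" behavior from the list above, refusing to answer when no local document matches well enough.

```python
# Toy sketch of local Retrieval-Augmented Generation (RAG):
# retrieve the most relevant local document for a question,
# and answer "I don't know" when nothing matches well enough.
# (Illustrative only -- a real system would use an embedding
# model and a local LM instead of word overlap.)
from collections import Counter
import math

# Hypothetical local data store; in practice, your personal files.
LOCAL_DOCS = {
    "taxes.txt": "my 2024 tax return was filed on april 12",
    "health.txt": "annual physical scheduled for june 3 with dr smith",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity over simple word counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def answer(question: str, threshold: float = 0.2) -> str:
    doc, text = max(LOCAL_DOCS.items(),
                    key=lambda kv: similarity(question, kv[1]))
    if similarity(question, text) < threshold:
        return "I don't know"  # refuse rather than hallucinate
    return f"Based on {doc}: {text}"

print(answer("when was my tax return filed"))
print(answer("what is the capital of france"))  # -> I don't know
```

The key design point is the threshold: a privacy-focused local assistant should prefer an explicit refusal over a fabricated answer when its own data cannot support a response.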
But some issues remain painfully present, such as the well-known Lethal Trifecta for AI agents: the dangerous combination of access to private data, exposure to untrusted content, and the ability to communicate externally.
The Road Ahead for AI Innovation
There has been tremendous progress in developing applications that enhance workplace productivity, in particular those that streamline business processes, automate simple tasks, assist with creating documents and provide analytical capabilities. However, the majority of the world's population is currently not employed in jobs that would expose them to this sophisticated AI technology. Most people have little interest in AI, think that Claude is a person, are sure that Co-Pilot works in the airline industry, believe that an agent must be someone trying to make a sale, and assume that chatting is something done between two people.
There are two obvious ways of building trust with AI, both of which have to happen at scale sooner rather than later. First, it behooves all of us to increase the level of AI education. Much like computer literacy is a component of all modern K-12 curricula, AI literacy must also be integrated. We teach students today how to use computers safely so that they do not fall for a phishing scam or install a virus.
AI also has its share of safety concerns that include hallucinations, inappropriate chatbot behavior and the secure use of agents. The better AI education that the general populace has, the easier it will be to build acceptance for safe, secure, accurate and private AI technology.
Second, by concentrating on the biggest market, eight billion potential customers, companies can build AI products of the people, for the people, and by the people. If AI-based products do it right by prioritizing personal privacy, guaranteeing accuracy, ensuring safety, and enforcing security, then the AI industry has a chance to increase user adoption, build trust in the technology, and accelerate realizing the benefits of AI for all.