
But it’s a myth that should not go unchallenged. Last year, the London School of Economics (LSE) published research which argued that training, rather than your birthday, is a much better gauge of success.
The report – Bridging the Generational AI Gap: Unlocking Productivity for All Generations – found that almost all (93%) of employees who received AI training went on to use it in their work, compared to just over half (57%) who were left to their own devices.
Crucially, the research found that employees with training were 2x more productive, saving 11 hours per week, compared to the untrained.
Indeed, Dr Daniel Jolles, who co-led the research, added that creating “diverse AI teams helps [to] remove age-based divides between employees, fosters collaboration, and drives stronger team outcomes”.
But while the research makes a compelling case, I believe there is another quality that has been overlooked that we also need to consider. And that’s curiosity.
Curiosity drives AI adoption
For me, it’s a trait that transcends age, training, or any other differentiator you care to mention. That’s because from what I’ve seen, the people who get the most value from AI are those who are willing to experiment, test ideas, and learn by doing.
They’re the ones pushing the boundaries and discovering – through trial and error – what works and what doesn’t.
But it’s an approach that is likely to set the alarm bells ringing for anyone with the remotest interest in security and risk mitigation. Giving people unfettered access to systems, software, and data is a recipe for disaster, even if it is to further our understanding of AI.
And that poses a curious conundrum. How do you give people the room to experiment and make mistakes while keeping systems and data safe?
It’s an area of study that scientists call “psychological safety” – that intangible feeling within a company or organisation that allows you to “ask a question, admit a mistake, or challenge an idea without fear of embarrassment or retribution”. And according to one expert at least, psychological safety is “the foundation of AI adoption and team learning”.
Which means that if employees are to be encouraged to experiment with AI, they also need the protection of robust IT security.
When caution becomes constraint
After all, AI is not without risk. It doesn’t take much for a careless employee to paste sensitive data into public models and open the door to unintended data exposure, compliance breaches, and reputational damage.
Then there are more deliberate threats, which might include prompt injection attacks, whereby malicious instructions are embedded in seemingly harmless content to trick AI systems into overriding safeguards or exposing sensitive information.
And there are other concerns, too. The speed at which AI is developing is so fast that some employees are using tools that haven’t been properly vetted. This could lead to employees relying on shadow AI tools that create untold compliance headaches for security teams.
The answer isn’t to ban AI outright, but to embrace a more autonomous IT posture. One where organisations have continuous visibility across their estate and the ability to automatically detect, isolate or remediate risky behaviour before it escalates. In an AI-powered world, security operations must move at AI speed too.
Security as an enabler
But if you accept the argument that for AI to flourish, we must give people the space to experiment, then such an approach would surely do the opposite. To take such a heavy-handed line would snuff out any creative curiosity or simply drive such activity underground. Either way, the result would see innovation sacrificed in the name of safety.
The smarter approach is structured freedom – give employees room to explore but make sure they do so within clear guardrails.
That means defining what data can – and cannot – be used. It means providing secure sandboxes for experimentation, while maintaining visibility over which tools are being accessed across endpoints. And it means monitoring usage patterns without criminalising curiosity.
Leadership is key
In terms of approach, it couldn’t be more different from those who treat AI as just another software upgrade, where the focus is fixed firmly on return on investment (ROI) and results. Unfortunately, while it’s an approach that may have worked in the past, in the case of AI, it’s a mindset that tends to rebuke teams for mistakes rather than rewarding them for what they’ve learned.
The real divide in the skills gap isn’t solely down to individuals. It stems from the difference between companies that require a clear roadmap to ROI and those that are willing to create an environment for safe experimentation. And that, ultimately, is down – not to age, training or even curiosity – but to leadership.
Discover more from RSS Feeds Cloud
Subscribe to get the latest posts sent to your email.
