Secretly Using AI
When I first graduated, the tools we now take for granted simply did not exist. Today, the landscape has shifted dramatically, sparking a fierce debate about how AI-generated content is perceived. Interestingly, studies suggest that evaluations of creative writing often drop when a reader suspects AI involvement, even if the technology was used only for minor assistance. It seems we place a high premium on the struggle of the human process; if a work feels too easy to produce, we instinctively value the result less. I strongly believe in using tech for good: a quick proofread of an email in Gemini, for example, can help someone with dyslexia in the workplace. Using AI this way shouldn't need to feel sneaky or secretive.
This raises a challenging question. Is using AI like an athlete taking performance-enhancing drugs? Some argue that by bypassing the heavy lifting of research or drafting, we are losing the "mental muscle-building" that academia and creative practice are designed to provide. In some circles, there is even a certain prestige in claiming, "I do not use it," as proof that one's thoughts are entirely one's own. But in a world where these tools are becoming standard, we have to ask: are we being purists, or simply falling behind?
The data suggests the latter is already happening. According to 2025 surveys, roughly 45 per cent of authors now use AI, yet 74 per cent do not disclose it to their readers. Their primary uses are practical:
Research (81 per cent): Finding historical facts or niche data.
Marketing (73 per cent): Crafting blurbs, newsletters, and social posts.
Outlining (72 per cent): Stress-testing plots for continuity and pacing.
We see real-world impact in stories like that of "Coral Hart" (a pseudonym), who reportedly used Anthropic's Claude to produce over 200 romance novels in a single year, selling 50,000 copies and earning a six-figure income. While impressive, this speed challenges our traditional metrics of artistic merit.
During recent interviews, I have been surprised to find that "Do you use AI?" has become a standard question. Rather than hiding our usage, I believe the best approach is radical transparency. Using AI to summarise a 60-page PDF and extract the relevant data points is not cheating; it is efficiency. While companies fear data leaks and the exposure of corporate secrets, many workers have already come to rely on these tools for their productivity.
However, the choice of tool matters just as much as the method. In fields like academia and therapy, where confidentiality is paramount, using a platform that harvests data feels less like a shortcut and more like a breach of trust. This is why I have been particularly impressed by the direction of projects like Lumo. Built on a privacy-first framework, Lumo ensures that our data and the data of those we serve are not harvested for commercial gain. For those of us who care deeply about the impact of our work, choosing a tool that respects human rights transforms the act of using AI from a potential ethical compromise into a responsible evolution.
We are all learning to navigate this together. Whether you view AI as a brilliant research assistant or a potential distraction, the goal remains the same: to learn, to contribute and to stay true to our values. The question is not whether we use these tools, but how we wield them to enhance, rather than replace, the human spirit.