From AI Skeptic to Strategic User: My Journey of Changing Minds on Artificial Intelligence
Change is uncomfortable, especially when it involves admitting you might have been wrong about something you felt passionate about. As someone who spent the better part of two decades in software development and technology consulting, I prided myself on being able to distinguish overhyped trends from genuinely transformative technologies. For years, I was convinced that artificial intelligence fell squarely into the former category: a solution looking for problems, wrapped in Silicon Valley marketing speak.
Today, I find myself in an unexpected position: cautiously but actively using AI tools in my daily work. This isn’t a story of sudden conversion or evangelical enthusiasm. Instead, it’s an honest account of how real-world experience gradually chipped away at my certainty, forcing me to confront the gap between my assumptions and reality.
The Foundation of My Opposition
My skepticism wasn’t born from ignorance or fear of technology; quite the opposite. With more than two decades of experience building enterprise software systems, I’d witnessed countless “revolutionary” technologies that promised to change everything but delivered incremental improvements at best. It all felt familiar from the dot-com boom of the late 1990s: venture capital flowing toward buzzword-heavy startups, conference presentations heavy on promise but light on substance, and a general sense that we were being sold a future that didn’t yet exist.
The ethical concerns are valid
My primary objection centered on ethics. The training data powering these systems raised fundamental questions about consent and compensation. Artists, writers, and creators were finding their work scraped and repurposed without permission or payment. As someone who had freelanced as a technical writer early in my career, this felt like digital theft on an industrial scale.
The bias issues were equally troubling. I’d seen how algorithmic bias could perpetuate discrimination in hiring systems and loan approvals. The idea of scaling these problems through more sophisticated AI systems seemed reckless. When you’ve spent years debugging edge cases in code, you develop a healthy respect for unintended consequences.
Devaluing creatives
Perhaps more personally, I worried about the devaluation of human creativity and expertise. Writing had always been both a professional skill and a personal passion for me. The prospect of AI systems generating “good enough” content at scale felt like a race to the bottom: a world where human insight and craftsmanship would be replaced by statistical approximations of creativity.
Mistrust of the Hype Machine
The marketing around AI felt dishonest. Companies were slapping “AI-powered” labels on basic automation tools. Chatbots that followed decision trees were being rebranded as artificial intelligence. The gap between promise and reality was so wide that it reinforced my belief that the entire sector was built on inflated expectations.
Having lived through the dot-com bubble and various other tech hype cycles, I recognized the pattern: revolutionary claims, massive investment, inevitable disappointment, then gradual adoption of the actually useful pieces. I was content to wait for the bubble to burst and the genuinely valuable applications to emerge from the wreckage.
When did things change for me, personally?
The beginning of my perspective shift wasn’t dramatic (shocking, I know). In late 2022, a client asked me to evaluate whether ChatGPT could help their customer service team handle routine inquiries. My initial response was skeptical, but I owed them an honest answer, not a personal one. I’ve always admired the approach IBM took to consulting in the 1990s: customer-centric, where what mattered was the customer getting the best solution possible, even if that solution didn’t include IBM hardware. For those who don’t recall, this was a major shift from their self-centric approach of the 1980s, where everything centered on getting IBM products into customers’ hands. I remember reading about the change in the late ’90s, just out of high school and in college, and it blew my mind; I probably read that article three or four times over. That philosophy drove my approach to consulting when I started years later.
The rubber meets the road
I spent a week testing the system with real customer service scenarios. What I found surprised me. The AI wasn’t replacing human judgment but augmenting it. Customer service representatives could draft responses faster, but they still needed to review, edit, and personalize each interaction. The tool was reducing the time spent on routine tasks without eliminating the human element.
More importantly, the quality wasn’t just “good enough”; it was often better than the rushed responses overworked staff produced during busy periods. The AI had infinite patience and could maintain a consistent tone even when dealing with difficult customers. This wasn’t revolutionary, per se; salespeople have used scripts for ages. The big shift was that the AI didn’t get upset when it was called names (let me be very clear here: there’s no reason to be cruel to telemarketers or customer service reps; they’re doing a thankless job, often for low pay and with high turnover).
This experience forced me to confront an uncomfortable truth: my opposition might have been based more on principle than evidence. I don’t mind admitting when I’m wrong, but I really dislike being wrong (this is ego and pride talking, and something I’m actively working on).
A personal breakthrough
The real inflection point came during a particularly challenging project deadline. I was struggling to write clear technical documentation for a complex API integration: the kind of writing that requires translating technical concepts for a non-technical audience. I’m generally good at translating (as long as I don’t go off on tangents), but it was late, I was tired, and the last thing I wanted to do was documentation (really, is that ever the first thing we want to do?).
I fed the AI my technical specifications and asked it to draft user-friendly documentation. The result wasn’t perfect, but it gave me a solid foundation to build upon. More importantly, it helped me identify gaps in my own explanations and suggested clearer ways to structure the information.
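For the curious, the workflow was roughly this: paste the spec in, ask for a non-technical rewrite, then edit heavily. Here’s a minimal sketch of that loop using the OpenAI Python client; the model name and prompt wording are illustrative stand-ins, not a record of exactly what I ran that night.

```python
# A minimal sketch of the doc-drafting loop, assuming the OpenAI Python
# client (openai >= 1.0). The model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_docs(spec_text: str) -> str:
    """Ask the model for a first-draft, user-friendly rewrite of a spec."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable model will do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a technical writer. Rewrite the following API "
                    "specification as documentation for a non-technical "
                    "audience, and flag anything that seems ambiguous."
                ),
            },
            {"role": "user", "content": spec_text},
        ],
    )
    return response.choices[0].message.content

# The draft is a starting point, not a deliverable: every word still gets
# a human review and edit before it goes anywhere near a client.
```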
For the first time, I experienced AI as a collaborative tool rather than a replacement threat. It wasn’t doing my thinking for me; it was helping me think more clearly.
I’d like to take a moment and really reiterate that last point. One of the problems we’re facing in the software industry right now is that junior devs are either getting pushed out or being asked to build everything with AI. Neither path produces the experienced senior devs who can debug things when the AI struggles.
Wrestling with Cognitive Dissonance
Changing deeply held beliefs is psychologically uncomfortable. I found myself in the awkward position of using AI tools while still harboring philosophical objections to their existence. This tension forced me to examine why I was so resistant to evidence that contradicted my assumptions.
I like to think I notice when my beliefs and my experience diverge. That doesn’t make the process any easier, and I also know we lie to ourselves all the time to protect our self-image.
The discomfort of being wrong
Part of my resistance was professional pride. I’d built a reputation as someone who could spot technological snake oil from a mile away. Admitting that AI might actually be useful felt like acknowledging a blind spot in my expertise. It’s humbling to realize that your skepticism, however well-reasoned, might have prevented you from recognizing genuine value.
I also had to confront the possibility that my ethical concerns, while valid, might not be sufficient reason to reject the technology entirely. The world was moving forward with AI development whether I participated or not. The question became: could I use these tools in ways that aligned with my values while still benefiting from their capabilities?
Finding a middle ground
The breakthrough came when I stopped thinking in binary terms. AI didn’t have to be either revolutionary or worthless; it could be incrementally valuable. It didn’t have to be either perfectly ethical or completely corrupt; it could be a flawed tool that required careful, intentional use.
I began to see parallels with other technologies I’d initially approached with skepticism. Social media had obvious downsides, but I’d learned to use it professionally while avoiding its more problematic aspects. Cloud computing raised security concerns, but proper implementation could actually improve security posture.
My Current Perspective: Cautious Integration
Today, I use AI tools regularly in my work, but with clear boundaries and ongoing skepticism. This isn’t a conversion story—it’s an evolution toward a more nuanced understanding of technology’s role in professional life.
Where does AI add value?
I’ve found AI most valuable for tasks that benefit from rapid iteration and broad knowledge synthesis. Research and initial drafting, code review and debugging suggestions, and brainstorming alternative approaches to technical problems have all become more efficient with AI assistance.
I find AI works best when it amplifies human capabilities rather than replacing human judgment. I use it to generate multiple options quickly, then apply my experience and expertise to evaluate and refine those options.
Maintaining Human Responsibility
I believe we need to use AI governed by clear principles. I never publish AI-generated content without significant human review and editing. I’m transparent with clients about which tools I use and how they fit into my workflow. I maintain final responsibility for all deliverables, regardless of which tools contributed to their creation.
Most importantly, I continue to invest in developing my own skills and knowledge. AI tools are only as valuable as the human expertise guiding their use. The goal is enhancement, not replacement.
Keeping a healthy sense of skepticism
(the philosophy, not the band…which is also pretty great if you like doom metal)
Despite my practical adoption of AI tools, significant concerns remain. The environmental cost of training and running these models is substantial. The labor displacement issues haven’t been resolved; they’ve just been temporarily obscured by economic growth in other sectors.
The bias and misinformation problems are real and ongoing. I’ve seen AI systems confidently present incorrect information or perpetuate harmful stereotypes. These tools require constant vigilance and human oversight.
Perhaps most concerning is the tendency for AI capabilities to be oversold and human judgment to be undervalued. The technology is powerful, but it’s not magic. It requires thoughtful implementation and realistic expectations. I recently told a group that I believe (and I really do) we’re going to see an increased need for senior software engineers by 2028…perhaps sooner. This is a very AI-forward group, mostly entrepreneurs, who believe “build fast and we don’t care what breaks, ship it” is the way to do everything (see DOGE for an example of how that goes when pushed to an extreme). I was accused of being biased (and I admit my own personal bias…but I also know what happens when people build without concern for security and performance bottlenecks).
Lessons from Changing my Mind
This experience taught me valuable lessons about intellectual humility (which I believed myself to be pretty good at…I was wrong) and the importance of remaining open to evidence that challenges our assumptions. Strong convictions, while sometimes necessary, can also blind us to nuanced realities.
The most important insight is that we don’t have to choose between uncritical adoption and reflexive rejection. We can engage with new technologies thoughtfully, taking advantage of their benefits while remaining vigilant about their risks.
My journey from AI skeptic to cautious user isn’t a story of abandoning principles, but rather applying those principles more thoughtfully to a complex reality. The technology isn’t perfect, but neither is ignoring tools that can genuinely improve our work when used responsibly.
As AI continues to evolve, I expect my perspective will continue to evolve as well. The goal isn’t to find the “right” position and stick to it, but to remain thoughtful, ethical, and open to evidence…even when it challenges what we thought we knew.