I Admit It

I was wrong about Claude

This content was originally published on Medium (March 11th, 2026) and has been adapted by The Honking Goose platform.

Anthropic’s ethical values align with my own: I believe artificial intelligence should be developed responsibly so it does not harm society. Anthropic trains Claude using an approach it calls Constitutional AI. During training, developers give the model a set of principles and ask it to critique and revise its own outputs whenever one of those principles is violated, so the process does not rely solely on human labelers. A model that cannot self-correct during training is not yet considered safe. With a background in technology, I understood how Claude was designed. And yet I formed an opinion about it without ever trying the app.

Everyone I spoke with who had used Claude described it positively, including trusted family members, colleagues, and friends. They encouraged me to try it, and while I acknowledged them, I quietly set the suggestion aside. There was a free plan. Switching back to ChatGPT required nothing more than a keyboard shortcut. There was no commitment required, and minimal time cost. I had tried countless other AI tools over the years. There was no good reason to avoid Claude. And yet I did.

Last November, Google released its new code editor, Google Antigravity, which integrates both Google and non-Google AI tools. I built several small websites with it, and it made coding more fun. Eventually I ran into problems that Google’s AI models struggled with; starting new chats or typing “try another approach” in the chatbox did not help. I tried OpenAI’s Codex model next, but it wasn’t any better. So I gave in and tried Claude Opus. Yes, I was that hesitant: using Claude was a last resort, something I wouldn’t touch until everything else had failed. I recalled seeing that Opus was recommended for coding tasks and selected the model. It began fixing bugs that other AI models couldn’t. Its quotas were much stricter than those of Google’s own models, so I used it sparingly. I grew more comfortable with Claude, justifying it to myself as something I was only doing for coding.

Then March came. Maybe I saw a social media post, read a message on Discord, or had no reason at all. I downloaded the Claude app onto my iPhone.

When I opened the app, I expected questions about professional domains to trigger its guardrails or safety systems. I worried that I would need to word my prompts carefully to avoid being blocked, and that Claude would slow down my research workflows rather than accelerate them. Instead of probing for limits, though, I simply used Claude whenever I needed AI assistance, which felt like a fairer test. The results surprised me: I was able to ask about mental health and psychology topics related to my university coursework, and my time with Claude was genuinely productive for both personal and academic use.

Claude has been helpful in my personal life and with research and prewriting tasks for my academic work. I pivoted from technology to psychology, which feels like a fresh start. But I sometimes need concepts explained that a graduate program may gloss over, since many of my classmates spent their entire undergraduate years in the social sciences. I believe a combined background in psychology and technology will help me meet my goals.

When I have questions, web search tools like Copilot and Claude help me find what to read. Instant answers are harder to verify without a background in the field, so getting pointed to the right web resources and journal articles is really helpful when I feel stuck. With the right prompts, AI can also be a solid proofreading tool and a more affordable alternative to Grammarly or ProWritingAid.

When I need help with a CAPTCHA, my first move is usually to take a screenshot and send it to an AI tool. It is often correct, and if it is not, I can still phone a friend. That said, there are things I will not entrust to AI yet. Anything involving a password, account number, or other sensitive data is off-limits. AI is still a relatively new technology, and we do not fully understand its privacy and security limitations. Anything where the stakes are non-trivial requires double-checking against another source, which is actually a great way to build comfort with research tasks. I view that as a feature, not a limitation, of AI technology. I am excited to see how Claude and other AI tools evolve over the next five years.

I am unlikely to switch platforms, partially due to my accessibility needs. The Microsoft Windows and 365 ecosystems I use today fully support my screen reader, which is software that helps visually impaired people like me use computers. Alternative products like Google Workspace have been difficult to use with my screen reader.

Anthropic is an AI company, not a broader software company like Google or Microsoft, so it has not developed replacements for every app I use. I already pay for Microsoft 365 Premium, which includes Copilot’s premium features. I do not use AI enough to justify paying for multiple subscriptions.

The free plan of Claude offers generous quotas and access to many of Anthropic’s models. I will likely use Claude alongside Copilot, using my own judgment to decide which tool best fits a given problem. It also does not hurt to have a backup option ready for the occasional maintenance window or service disruption that any AI platform may experience.

If there is one thing I learned through this experience, it is to challenge my assumptions, stay curious, and experiment when uncertain. The risks of trying something new are low. I do not need to succeed or be right 100% of the time, but drawing on prior knowledge and experience certainly helps tip the odds in my favor. Be creative, take risks, and excel at everything you do.
