I’ve Been Building with AI Tools for 3 Months; Here Are My Biggest Mistakes.

I’m not a programmer.
So much so that when I was 12, I went to a coding camp at UCLA and won the billiards tournament. A friend in the cohort went on to sell his company to Google for $25M+ about a decade later. Perhaps I should have paid more attention.
Nonetheless, here I am. My background is in fund strategy, digital assets, and operational infrastructure for investment managers. I hold a Series 65 license. I’ve spent most of my career thinking about how technology changes the way capital gets allocated.
About 3 months ago, I sat down with an AI coding assistant and started building an autonomous trading system from scratch. No prior programming experience.
Today I have multiple strategies running across two asset classes, a prediction markets bot, and a set of autonomous agents that execute trades and report results to me via Discord.
But the lessons that actually matter aren’t about trading. They’re about what happens when you use AI as a building tool in any professional context. Whether you’re writing research, building client workflows, creating content systems, or automating operational tasks, these five mistakes will sound familiar.
1. I Stacked Tools Before I Understood What I Actually Needed
Early in the build, I did what most people do. I assembled a stack that included:
A suite of AI assistants (Claude, ChatGPT, Gemini)
A separate OpenClaw agent for research and social monitoring, running on a spare Mac Mini I had in my closet
Multiple API integrations
Each tool solved a real problem in isolation. Together, they created a different problem: complexity I couldn’t maintain and costs I hadn’t anticipated.
OpenClaw was genuinely useful for scanning Reddit, Substack, and X for finance content ideas. But after a few weeks of running it alongside Claude Code, my AI coding partner pointed out something I’d been ignoring: most of what OpenClaw did could be handled within the same environment I was already building in.
I was paying for redundancy and managing two systems when one would do.
The same pattern plays out everywhere.
A marketing team subscribes to five AI tools for writing, SEO, scheduling, analytics, and image generation when two of them overlap almost entirely.
A financial advisor pays for three research platforms that pull from the same underlying data.
The instinct to add tools is strong. The discipline to audit and subtract is rare.
Before you add the next tool, ask: does this do something my current stack genuinely can’t? Or does it just feel productive to have more?
2. I Skipped the Learning Because the Tools Were Fast
A few months in, I was shipping features fast. New strategies, expanded dashboards, more market coverage. The velocity felt great.
Then my AI coding partner did something I didn’t expect. It assessed the questions I was asking about backtesting methodology, recognized I was reaching the edge of what I could meaningfully evaluate, and told me to stop building.
It recommended five books on quantitative finance and machine learning. Not because I’d asked. Because it identified the gap that was about to become a liability.
(I ordered all five books and plan to bill my agent accordingly.)
When they arrived, I photographed the table of contents and sent the images back. The AI reviewed each one and built a custom reading plan, telling me exactly which chapters to prioritize and which to skip.
Instead of three 400-page books cover to cover, I had a targeted syllabus designed to fill the specific gaps holding the project back.
This applies well beyond trading.
If you’re using AI to draft investment memos, how deeply do you understand the valuation frameworks?
If you’re using it for compliance documentation, how well do you actually know the regulatory landscape?
The tools will take you farther, faster than you imagined possible. But they’re not a substitute for your own comprehension of your operating domain.
3. I Trusted the Output Without Stress Testing It
Early in my build, I found parameter combinations that produced beautiful backtesting results.
Smooth equity curves.
Strong returns.
I was ready to deploy.
Then I changed the parameters by 20% in both directions and the whole strategy collapsed. It wasn’t robust at all; it was overfit to historical noise.
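That 20% bump test is easy to automate. Here’s a minimal sketch in Python; the `backtest` function is a hypothetical stand-in (a real one would replay the strategy over historical data), rigged with a sharp scoring peak so the collapse is visible:

```python
# Sketch of a parameter-perturbation robustness check.
# `backtest` is a hypothetical stand-in: a real one would replay the
# strategy over historical data and return a performance score (e.g. Sharpe).

def backtest(fast: int, slow: int) -> float:
    # Toy scoring function with a sharp peak at (10, 50) -- the
    # signature of a strategy overfit to historical noise.
    return max(0.0, 2.5 - 0.6 * abs(fast - 10) - 0.15 * abs(slow - 50))

def robustness_check(params: dict, bump: float = 0.2) -> dict:
    """Re-run the backtest with each parameter shifted +/- `bump`."""
    results = {"base": backtest(**params)}
    for name, value in params.items():
        for scale in (1 - bump, 1 + bump):
            perturbed = {**params, name: round(value * scale)}
            results[f"{name} x{scale:.1f}"] = backtest(**perturbed)
    return results

report = robustness_check({"fast": 10, "slow": 50})
for label, score in report.items():
    print(f"{label:>12}: {score:.2f}")
```

If the score holds up across those perturbations, the edge is at least plausibly real. If it collapses the way mine did, you’ve fit the noise, not the signal.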
This is the AI equivalent of accepting the first draft. The output looks polished and confident, which makes it easy to skip the step where you actually pressure-test the logic underneath.
A colleague uses AI to generate a market analysis and sends it to clients without checking whether the conclusions hold if you change one key assumption.
A content team publishes AI-drafted thought leadership without asking the AI to argue against its own thesis.
A consultant delivers AI-built financial projections without running sensitivity analysis on the inputs.
The fix is simple in concept and hard in practice: build your validation step before you build your production workflow.
For trading, that means paper trading before live capital. For content, that means an editing framework before a publishing schedule. For financial models, that means stress testing before client delivery.
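For the financial-model case, the same discipline can be a ten-line sweep. A minimal sketch, using a hypothetical five-year compound-growth revenue model with made-up numbers:

```python
# Sketch of a one-variable sensitivity sweep on a projection input.
# The model and the numbers are illustrative, not a real client projection.

def projected_revenue(base: float, growth: float, years: int = 5) -> float:
    """Compound `base` revenue at `growth` per year for `years` years."""
    return base * (1 + growth) ** years

base_revenue = 1_000_000
# Sweep the growth assumption: pessimistic, base case, optimistic.
for growth in (0.05, 0.10, 0.15):
    rev = projected_revenue(base_revenue, growth)
    print(f"growth {growth:.0%}: ${rev:,.0f} in year 5")
```

If the pessimistic case still supports the recommendation, the projection is worth sending. If the conclusion flips on a five-point swing in one assumption, that fragility belongs in the memo, not in a footnote you never wrote.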
The best output from any AI tool is almost always the most overfit one. If you don’t test the assumptions, you’re trusting a system that’s optimized to look good, not to be right.
4. I Built in Private When I Should Have Built in Public
Most people building with AI do it behind closed doors.
The work is messy, the failures are frequent, and nobody wants to look dumb. I made the opposite choice and it’s been one of the best decisions of the entire project.
I’ve been documenting my build process in a public newsletter series. Every iteration, every failure, every course correction. When the AI told me to go read a book, I wrote about it. When a strategy looked great in testing and fell apart under stress, I wrote about that too.
The transparency created three things I couldn’t have generated in isolation.
First, a forcing function for clarity. You’d be surprised how many bad assumptions survive in your head until you try to explain them to someone else.
Second, accountability. When you tell readers you’ve pre-registered kill criteria for your strategies, you actually follow through.
Third, feedback. Readers started reaching out with perspectives and catches I’d missed. That input made the system materially better.
This works for any AI project. Share your process with a colleague, a client, your LinkedIn network. You don’t need a newsletter. You just need to make the work visible enough that the act of explaining forces you to think clearly about what you’re building and why.
Building in public isn’t about marketing. It’s a forcing function for clarity, accountability, and feedback that you simply can’t generate in isolation.
5. My Biggest Mistake? I Didn’t Start Earlier.
I can list the tactical errors all day. Wrong tools. Insufficient domain knowledge. Untested assumptions. Building alone. Those are the mistakes you fix along the way.
The one that actually cost me? Waiting.
I spent months reading about AI tools before I used one. Too much time went by before I just picked one platform and stuck with it. I wrestled for too long over what to build before I built anything.
And when I finally sat down and started, the gap between what I’d imagined and what was actually possible closed in about a week.
No, the tools aren’t perfect. They won’t be perfect next year either. But they’re good enough right now to build things that would have required a team of engineers and a six-figure budget just two years ago.
Every week you spend evaluating instead of building is a week of compounding you don’t get back.
If you’re wondering whether AI can improve your research process, your workflow, or your ideas, the answer is a resounding YES.
Start today. I cannot stress this enough. The future really is faster than you think, and knowledge compounds quickly.
If you know someone who might benefit from this, please share and refer friends.
Everything is more fun with friends. :D
Matthew Snider is the founder of Block3 Strategy Group, author of “Warren Buffett in a Web3 World,” and publisher of the BitFinance newsletter. He holds a Series 65 and MBA, and has been an active participant in digital asset markets since 2015. This article is for educational purposes only and should not be considered financial advice. Always consult with a qualified professional before making investment decisions.



