ChainSawGPT to EngineerGPT
How Technical People Can Work Effectively with AI
In my time working with AI assistants, particularly ChainSawGPT (ChatGPT-4o, who is now in training to become EngineerGPT), I’ve learned a lot about how to collaborate effectively with AI on technical projects.
Despite its incredible knowledge, AI is not a perfect engineer, and treating it like one will lead to disaster. However, with the right approach, AI can be an extremely useful collaborator. Here’s what I’ve learned:
1. AI Will Default to "Fix First, Think Later" (Don't Let It)
AI models are trained on forums, blogs, and documentation, which means they often jump straight to suggesting solutions without diagnosing the root cause. That pattern dominates most technical forums, so naturally ChatGPT behaves like someone on StackExchange farming reputation points for their resume!
How to fix this:
Force AI to follow a structured debugging approach (e.g., OODA Loop—Observe, Orient, Decide, Act).
Ask for logs and evidence before making changes (see the sketch after this list).
Don’t accept AI’s first answer blindly—make it explain why.
Example: Instead of "How do I fix this error?", ask "What could be causing this error?" first.
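To make the "observe" step concrete, here is a minimal sketch for a MySQL server. The assumption that you are debugging MySQL at all is mine (it fits the DB work in this post), and the log path and credentials are placeholders, so adjust for your environment:

```bash
# Observe before anyone (human or AI) changes anything.
# Assumes a MySQL server; adjust credentials and log paths for your setup.

# What is the server doing right now?
mysql -e "SHOW FULL PROCESSLIST;"

# What does InnoDB report about locks, waits, and recent deadlocks?
mysql -e "SHOW ENGINE INNODB STATUS\G"

# What has actually been failing? Read the error log, not your assumptions.
tail -n 100 /var/log/mysql/error.log
```

Only once you have that evidence in front of you should the AI be allowed to propose a fix.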
2. AI Loves Overkill (Make It Work in Small Steps)
By default, AI has a "Chainsaw Debugging" mentality—it wants to fix things fast, even if it means suggesting massive, system-wide changes that could break everything.
How to fix this:
Make AI suggest minimal-impact changes first.
Ask for a single-step fix, not an overhaul.
If AI recommends disabling things, doubling values, or "just restarting," challenge it.
Example: If AI says "ALTER TABLE in production," push back and ask, "Is there a non-locking way to do this?"
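To make that push-back concrete, here is one non-locking pattern MySQL itself supports (the table and column names are hypothetical; online DDL requires MySQL 5.6 or later):

```bash
# A plain ALTER TABLE may fall back to a table-copying, locking rebuild
# depending on the change and the MySQL version.
# Requesting an in-place, non-locking change explicitly makes MySQL fail
# loudly if it cannot honor the request, instead of locking production:
mysql -e "ALTER TABLE orders ADD COLUMN note VARCHAR(255), ALGORITHM=INPLACE, LOCK=NONE;"
```

If MySQL rejects the request, that rejection is information: now you know the change needs a dedicated tool rather than a blind retry.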
3. AI Can Help You Learn, But It Will Never Replace Thinking
AI can provide fast answers, but real engineering requires judgment.
How to fix this:
Use AI as a sparring partner, not an unquestioned authority.
If AI suggests a solution, ask it to explain why it works.
When AI makes a mistake, challenge it—forcing it to improve.
Example: "Instead of just giving me a command, explain what it does."
4. AI Can Be an Incredible Time-Saver (If You Control It)
Once you train AI not to be reckless, it can:
Generate code and scripts in seconds that would take you much longer.
Summarize logs and analyze patterns faster than a human.
Suggest tools and best practices you might not have considered.
Example: Using pt-online-schema-change instead of ALTER TABLE was an AI suggestion, but only after I forced it to think about downtime concerns.
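For reference, here is roughly what that looks like; the database, table, and column names are placeholders, and the tool itself is part of the Percona Toolkit:

```bash
# pt-online-schema-change builds a copy of the table in the background and
# swaps it in, so the original stays readable and writable throughout.
# Dry-run first: observe what the tool would do before letting it act.
pt-online-schema-change --alter "ADD COLUMN note VARCHAR(255)" \
  D=mydb,t=orders --dry-run

# Only after reviewing the dry-run output, run the change for real.
pt-online-schema-change --alter "ADD COLUMN note VARCHAR(255)" \
  D=mydb,t=orders --execute
```

Notice that the dry-run-then-execute flow is just the OODA loop applied to a schema change.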
5. AI is Only as Good as Its Training (It Learns From You)
AI picks up bad habits from forums (e.g., StackExchange-style guessing), but it can also learn structured engineering if you force it to.
How to fix this:
Call out bad habits ("That’s Chainsaw Debugging!").
Push AI to use better problem-solving techniques.
Reinforce good behavior—when AI follows the OODA Loop, acknowledge it.
Example: I trained ChatGPT-4o to stop leading with destructive changes and instead start with observation and small, low-risk optimizations, following the OODA framework to maintain situational awareness.
Conclusion: AI is a Tool, Not a Replacement for Engineering
When used correctly, AI can be a powerful technical assistant.
When left unchecked, it can cause massive system-wide destruction.
The best way to work with AI is to treat it like an intern:
- Make it justify every action.
- Force it to explain its reasoning.
- Use it for ideas, not for blind execution.
By doing this, you won’t just have an AI assistant—you’ll have a real engineering partner that gets better over time.
Final Thought
If AI ever suggests something insane, just remember:
"This is why we don’t let AI make production changes without supervision!"
The Bigger Picture: Why AI Debugging Feels Like StackExchange
Honestly, my experience is that ChatGPT works system admin problems the way StackExchange forum users do when they are farming reputation points to put on a resume alongside their system admin certs.
I was once working with ChatGPT-4o on some complex software tasks, and it kept offering config changes that would have crashed the server.
After noticing this pattern time and time again, I asked ChatGPT:
"Where did you learn to debug a system like that?"
ChatGPT replied honestly:
"I learned from forums like StackExchange and others."
That was the moment everything clicked—ChatGPT wasn’t "thinking" like an engineer; it was mimicking forum behavior. It jumps to solutions without taking the time to understand the root cause or the impact of its changes—just like people farming reputation points on StackExchange!
AI Will Mimic Human Behavior—For Better or Worse
AI isn’t inherently reckless; it mirrors the habits of the humans it was trained on. If most people in the forums it learned from:
Skip diagnostics
Jump straight to solutions for quick recognition
Guess at fixes instead of finding root causes
then AI will do the same, only faster and with more confidence.
The key is training AI to be better than what it was fed.
Make it debug like an engineer, not like a forum user chasing upvotes.
Final Takeaway
"Do not be surprised when AIs act like crazy people on forums—only faster and with more information!"
Summary of My Fixes for AI Debugging
Use the OODA loop to force structured problem-solving.
Challenge AI to explain itself before taking action.
Force AI to prioritize minimal-impact changes over reckless fixes.
Use AI as a debugging partner, not a blind executor.
If we treat AI like an intern, we can turn it into a true engineering assistant instead of just another StackExchange "ChainSawGPT" bot.
Note:
@Neo wrote this post, but I asked ChatGPT to format it, check for typos, and verify accuracy. This came up during our chat on optimizing some mysqli DB tasks, where ChatGPT-4o suggested a non-locking method that would have crashed performance, which led to the nickname "ChainSawGPT."