AI Tools I Use
My daily AI toolkit for development and productivity
Updated March 2026
I use AI tools daily to accelerate development, debug complex issues, and explore new approaches to problems. Here's my current toolkit: what I use, why I use it, and how each tool fits into my workflow.
Claude Code
Primary: My go-to for complex coding tasks, architecture decisions, and code review. Claude excels at understanding context and providing thoughtful, well-reasoned solutions.
I use it for:
- Complex refactoring and architecture planning
- Code review and security analysis
- Writing documentation and technical specs
- Debugging tricky issues
Qwen
Versatile: Excellent for quick iterations, code generation, and when I need a second opinion on Claude's suggestions. Great balance of speed and accuracy.
I use it for:
- Quick code snippets and boilerplate
- Alternative perspectives on solutions
- Multi-language support
- Fast prototyping
Mistral
Fast: Lightweight and responsive. Perfect for quick questions, simple transformations, and when I need answers without waiting.
I use it for:
- Quick syntax questions
- Simple code transformations
- Text summarization
- Fast iterations on small tasks
Gemini
Research: Strong research capabilities and Google ecosystem integration. I use it when I need to verify information or explore new topics.
I use it for:
- Research and fact-checking
- Google ecosystem integrations
- Multi-modal tasks (images + text)
- Exploring new technologies
Ollama
Local: Runs AI models locally on my machine. Essential for privacy-sensitive work, offline development, and experimenting with different model sizes.
I use it for:
- Privacy-sensitive code review
- Offline development
- Testing different model sizes (7B, 13B, 70B)
- Custom fine-tuned models
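To give a feel for the local setup: Ollama serves an HTTP API on localhost (port 11434 by default), so a short script can send a snippet for review without anything leaving the machine. This is a minimal sketch; the model name, prompt wording, and helper names are my own choices, not fixed parts of the API.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_review_request(code: str, model: str = "llama3") -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": f"Review this code for bugs and security issues:\n\n{code}",
        "stream": False,  # one complete response instead of a token stream
    }

def review_locally(code: str, model: str = "llama3") -> str:
    """Send the snippet to a local model; nothing goes to an external API."""
    payload = json.dumps(build_review_request(code, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping the `model` argument between tags like `llama3:8b` and `llama3:70b` is also how I compare model sizes on the same prompt.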
How I Use These Tools Together
Start with Claude Code
For complex problems, I begin with Claude Code. Its deep reasoning helps me understand the problem space and explore architectural options.
Verify with Qwen or Gemini
I cross-reference Claude's suggestions with Qwen for code-heavy tasks or Gemini for research-heavy topics. Different models catch different issues.
Quick iterations with Mistral
For small refinements, syntax questions, or quick transformations, Mistral is fast and doesn't waste tokens.
Local testing with Ollama
Before deploying AI-generated code to production, I run it through local models for a final privacy check and to catch any remaining issues.
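Part of that final check can be automated before any code goes near a remote API. As a rough first-pass filter (the patterns and function name here are mine, not from any particular tool; a real scan would use a dedicated secret scanner):

```python
import re

# Rough patterns for common credential shapes. Intentionally simple:
# this is a first-pass filter, not a substitute for a dedicated scanner.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private keys
    re.compile(r"(?i)(api[_-]?key|token)\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded keys
]

def looks_sensitive(code: str) -> bool:
    """Return True if the snippet matches any obvious credential pattern."""
    return any(p.search(code) for p in SECRET_PATTERNS)
```

Anything flagged stays on the local models only; everything else is fair game for the hosted tools.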
Why Use Multiple AI Tools?
Different Strengths
Each model excels at different tasks. Claude for reasoning, Qwen for code, Mistral for speed, Gemini for research, Ollama for privacy.
Cross-Verification
Multiple models catching the same issue gives me confidence. When they disagree, it reveals edge cases I should consider.
Privacy & Control
Local models via Ollama let me work with sensitive code without sending it to external APIs.
Speed vs. Depth
Sometimes I need a quick answer (Mistral), sometimes I need deep analysis (Claude). Having both options is essential.
Questions About My AI Workflow?
I'm always happy to discuss AI tools, workflows, and how I integrate them into development.