People ask me what tools I use to build AI applications. The answer is boring on purpose. Boring tech ships fast and breaks less.
Here's the stack and the reasoning behind each choice.
The Core Stack
Python + Flask for the backend. Not Django, not FastAPI, not the framework-of-the-week. Flask is minimal, flexible, and gets out of the way. When you're building custom tools for specific businesses, you don't want a framework making architectural decisions for you. Flask lets me structure each project around the client's actual needs.
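To make "minimal and gets out of the way" concrete, here's a sketch of the kind of Flask app a project starts from. Route and field names are illustrative, not from a real client project, and the in-memory list stands in for what would be a Postgres table:

```python
# Minimal Flask sketch: two routes, no framework-imposed structure.
# Names are hypothetical; the list stands in for a Postgres table.
from flask import Flask, jsonify, request

app = Flask(__name__)

WORK_ORDERS = []

@app.route("/work-orders", methods=["POST"])
def create_work_order():
    payload = request.get_json()
    order = {"id": len(WORK_ORDERS) + 1, "description": payload["description"]}
    WORK_ORDERS.append(order)
    return jsonify(order), 201

@app.route("/work-orders", methods=["GET"])
def list_work_orders():
    return jsonify(WORK_ORDERS)
```

Run it with `flask run` locally; in production a WSGI server takes over. The whole app structure fits in your head, which is the point.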
PostgreSQL for the database. It's reliable, it handles JSON fields well (useful when every client has slightly different data shapes), and every hosting provider supports it out of the box.
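The "different data shapes" trick usually looks like this: shared columns stay as real columns, and each client's one-off fields go into a JSONB `extras` column. A sketch, with hypothetical table and field names (the SQL is shown as a parameterized string a psycopg-style driver would execute, not run here):

```python
import json

# Shared columns stay relational; client-specific fields go into JSONB.
# Field and table names are illustrative.
COMMON_FIELDS = {"name", "email"}

def split_record(record: dict) -> tuple[dict, str]:
    """Separate shared columns from client-specific extras (serialized as JSON)."""
    common = {k: v for k, v in record.items() if k in COMMON_FIELDS}
    extras = {k: v for k, v in record.items() if k not in COMMON_FIELDS}
    return common, json.dumps(extras)

# Parameterized insert a Postgres driver could execute.
INSERT_SQL = """
    INSERT INTO contacts (name, email, extras)
    VALUES (%(name)s, %(email)s, %(extras)s::jsonb)
"""

common, extras_json = split_record(
    {"name": "Dana", "email": "dana@example.com", "policy_number": "P-1042"}
)
```

An insurance client gets `policy_number`; a property manager gets `unit_count`; the schema doesn't change either way.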
Heroku for deployment. Yes, in 2026. It's not trendy, but it works. `git push heroku main` and the app is live. Managed Postgres, easy environment variables, automatic SSL. For a consulting business where time is money, the simplicity is worth the premium over raw AWS/GCP.
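The deployment config itself is tiny. A sketch, assuming gunicorn as the WSGI server and an `app.py` module exposing the Flask object as `app` (both assumptions, not stated above), is a one-line Procfile:

```
web: gunicorn app:app
```

Heroku's managed Postgres add-on injects `DATABASE_URL` as an environment variable, so the app reads its connection string from the environment rather than from config files.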
Claude as the AI development partner. This is the force multiplier. Claude handles boilerplate, writes database models, builds out API endpoints, generates front-end components, writes tests, and debugs issues. I direct the architecture, make the business logic decisions, and handle the client relationship. The result: one senior developer shipping at 3-4x normal speed.
Why This Stack
Speed over scale. My clients are 10-500 person companies. They don't need Kubernetes. They need software that works, delivered fast. Flask + Postgres + Heroku gives me a production-ready app with zero DevOps overhead.
Flexibility over convention. Every client's business is different. A property management company needs different data models than an insurance agency. Flask doesn't force me into a structure that doesn't fit. I can build exactly what the client needs without fighting the framework.
Reliability over novelty. Flask has been around since 2010. Postgres since 1996. I know exactly how they behave in production. When a client's business depends on the software, I don't want to be debugging framework edge cases.
The Development Workflow
Here's how a typical project flows:
Week 1: Discovery and architecture. I talk to the client, map out their workflow, and design the data model. By the end of the week, I have a database schema and a clear picture of what the app needs to do.
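The Week 1 deliverable is a schema you can poke at. I often prototype it in memory before committing to Postgres; here's a sketch using stdlib SQLite purely for illustration, with made-up tables for a property-management client:

```python
import sqlite3

# Prototype the Week-1 schema in memory before it becomes Postgres DDL.
# Table and column names are hypothetical (property-management example).
SCHEMA = """
CREATE TABLE properties (
    id INTEGER PRIMARY KEY,
    address TEXT NOT NULL
);
CREATE TABLE work_orders (
    id INTEGER PRIMARY KEY,
    property_id INTEGER NOT NULL REFERENCES properties(id),
    description TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'open'
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute("INSERT INTO properties (address) VALUES ('12 Main St')")
conn.execute(
    "INSERT INTO work_orders (property_id, description) VALUES (1, 'Leaky faucet')"
)
open_orders = conn.execute(
    "SELECT description FROM work_orders WHERE status = 'open'"
).fetchall()
```

Ten minutes of this surfaces the questions ("can a work order span two properties?") that are cheap to answer in Week 1 and expensive in Week 4.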
Week 2: Core build. This is where Claude earns its keep. I describe what I need — models, routes, templates, API endpoints — and Claude generates the implementation. I review, adjust, and wire things together. A week of this produces what used to take a month.
Week 3: Client testing. The client gets a working version. They use it. They tell me what's wrong, what's missing, what they love. This feedback is worth more than any requirements document.
Week 4: Polish and ship. Address the feedback, handle the edge cases, write the deployment config, go live. The client has a real tool they're using in their actual business.
What About the AI Layer?
When the app itself needs AI capabilities (not just AI-assisted development), I add:
- Claude API for text analysis, classification, summarization, or generation
- Embeddings + vector search for semantic search over business data
- Structured outputs for parsing unstructured data into clean database records
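The semantic-search piece is less exotic than it sounds: embed the documents once, embed the query, rank by cosine similarity. A toy sketch with hypothetical three-dimensional vectors (real embeddings come from an embeddings API and have hundreds of dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy pre-computed "embeddings"; keys and vectors are made up.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "office hours": [0.0, 0.8, 0.2],
    "pricing tiers": [0.7, 0.2, 0.3],
}

def search(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]
```

At small-business scale, a Postgres table of vectors and a loop like this often beats standing up a dedicated vector database.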
The key insight: most small business AI use cases don't need fine-tuned models, RAG pipelines, or complex agent frameworks. They need a well-built application that calls an API at the right moment. A single Claude API call that classifies an incoming email and routes it to the right person is worth more to a 20-person company than the most sophisticated agent system.
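That email-routing example reduces to one API call plus a lookup. A sketch using the Anthropic Messages HTTP endpoint via stdlib `urllib`; the labels, inbox addresses, and model choice are all assumptions, and the request is built but not sent here:

```python
import json
import os
import urllib.request

# Hypothetical label -> inbox routing for a small company.
ROUTES = {
    "billing": "billing@example.com",
    "support": "support@example.com",
    "sales": "sales@example.com",
}

def build_classification_request(email_body: str) -> urllib.request.Request:
    """Build (but don't send) a Messages API request that classifies an email."""
    payload = {
        "model": "claude-sonnet-4-5",  # assumption: use whatever current model fits
        "max_tokens": 10,
        "messages": [{
            "role": "user",
            "content": (
                "Classify this email as exactly one of: billing, support, sales. "
                "Reply with the single word only.\n\n" + email_body
            ),
        }],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

def route_for(label: str) -> str:
    """Fall back to a catch-all inbox if the model returns something unexpected."""
    return ROUTES.get(label.strip().lower(), "office@example.com")
```

Notice where the engineering effort actually goes: not the AI call, but the fallback path for when the model returns something you didn't expect.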
The Point
The technology that ships AI applications is not complicated. The hard part is understanding the client's business, designing the right solution, and executing quickly. The stack should enable that — not get in the way of it.
If you're a developer getting into AI consulting: pick boring tools, learn them deeply, and focus on solving real problems. The clients don't care what framework you use. They care that it works.
Want to see this stack in action? Check out the TaskProp case study or get in touch about your own project.