LLMs – Chat Interfaces vs. Raw APIs: Why I Choose Conversations

I recently read Max Woolf's post on how he uses LLMs, where he explains why he rarely uses generative LLMs directly, preferring raw APIs for the control they offer. It's an interesting take, but I fundamentally disagree. For me, chat interfaces aren't just convenient; they are an essential part of how I build understanding.

LLMs are more than code generators. They are interactive partners. When I use ChatGPT, Mistral, or Copilot in chat mode, it's not just about fast results. It's about exploring ideas, challenging my thinking, and refining concepts. The back-and-forth, the debugging, the reflection—it’s like pair programming with a tireless assistant. If I need to test an idea or explore a concept, the chat interface is perfect for that: it's always available, from any device, no API or IDE needed.

Max argues that APIs give you finer-grained control: system prompts, temperature settings, output constraints. Sure. But in a chat session you can iterate, switch topics, revisit past decisions, and even run a post-mortem on the entire conversation as a way to learn from it and log your decisions. And yes, I archive everything. I link these sessions to tickets in TickTick so I can revisit ideas later. Try doing that with an API call.
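
To make the kind of control Max means concrete, here is a minimal sketch of a raw API call using the OpenAI Python SDK. The model name, prompts, and parameter values are illustrative placeholders of my own, not something taken from either post.

```python
# Minimal sketch of the "raw API" side of the comparison.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system prompt: one of the knobs a raw API exposes.
        {"role": "system", "content": "You are a terse code reviewer."},
        {"role": "user", "content": "Review this function for edge cases: ..."},
    ],
    temperature=0.2,  # lower temperature for more predictable output
    max_tokens=300,   # hard constraint on response length
)

print(response.choices[0].message.content)
```

Each call like this is a single, stateless exchange: the shared history and the ability to circle back, which are exactly what I value about chat, have to be rebuilt by hand on top of it.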

The chat interface is a workspace, not a magic wand. It's where you can think, break things, fix them, and learn. Reducing interactions to isolated API calls strips away that context and those learning moments. To me, that's missing the point.

APIs are fine when all you want is deterministic output. But I prefer the chaos of conversation: it forces me to engage more deeply, explore failures, and actually think. That's why I don't just use LLMs to generate. I use them to reason. Not just for hard skills, but for soft skills too.
