Momentum: A Workflow That Helped Me Build

A month ago I wrote about making Claude Code actually ship software. The workflow has been through at least six revisions since I started, but this latest version has been stable. Now it’s packaged as momentum. What this actually is: not another “make AI smarter” attempt. This is a workflow that assumes you know what you’re doing and gives you tools to do it faster. A recent post about building Log Carver at C-speed nailed it: “AI is no longer just a productivity booster. It’s a compiler for ideas. But like any compiler, it needs constraints, tests, and a damn good operator behind the keyboard.” ...

August 7, 2025 · 3 min · nickpending

The Real Cost of Claude Code Subagents

For those dabbling with Claude Code subagents: they consume a TON of tokens. Which is fine—if you’re burning them for the right reasons. A blog writer subagent? Maybe that works. You see the final output, judge quality, iterate. But when subagents touch multiple files across your codebase? That’s where things can get problematic. TL;DR Subagents burn 10x more tokens than custom commands. ~50-100K tokens for implementation work vs ~5-10K for the same task with /plan-task. Your mileage may vary, but the difference is substantial. ...

July 30, 2025 · 7 min · nickpending

F.O.C.U.S.

I was watching Miessler demo OpenCode against Claude Code. He fed it a video transcript with 900 lines of custom rules - formatting, content generation, image creation. Complex orchestration. It just… worked. Not because it was simple. The rules weren’t trivial. But because it was focused. One clear task with defined inputs, explicit constraints, specific outputs. It clicked. This is what I’d been building toward with my newly refactored development workflow - breaking work into focused units. Ideation separate from planning. Planning separate from implementation. Implementation separate from testing. Even my shortcut commands in CLAUDE.md follow the pattern: qtest writes ONE test. qfix debugs ONE error. qcom makes ONE commit. ...
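To make that concrete, here’s a hypothetical sketch of what shortcuts like these might look like in CLAUDE.md - the wording is illustrative, not the actual definitions from the post:

```
## Shortcuts
- qtest: write exactly ONE failing test for the behavior I describe. Nothing else.
- qfix: debug exactly ONE error - the one I paste in. Leave unrelated code alone.
- qcom: make exactly ONE commit for the current change, with a clear message.
```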

July 24, 2025 · 2 min · nickpending

Don't Let LLMs Run the Show

TL;DR LLMs are incredible force multipliers for building software. I’m building multiple projects concurrently with Claude Code. They excel at ideation, planning, and code generation when you do it right. But they struggle with predictable runtime behavior. They’re probabilistic systems at their core - you’ll get different results across runs no matter how much you prompt engineer. The variation isn’t a bug; it’s the architecture. Design for their nature, don’t fight it. They can be embedded in software when you account for variability. Great for chatbots, content generation, analysis work where you want different responses. ...
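A minimal sketch of what “design for their nature” can look like in code (a hypothetical `llm_call` stands in for whatever client you use): validate every response, retry on malformed output, and never assume run-to-run consistency:

```python
import json
from typing import Any, Callable

def classify_ticket(text: str, llm_call: Callable[[str], str],
                    max_retries: int = 3) -> dict[str, Any]:
    """Treat the model as probabilistic: validate, retry, fail loudly."""
    prompt = ("Classify this support ticket as JSON with keys "
              f"'category' and 'urgency':\n{text}")
    for _ in range(max_retries):
        raw = llm_call(prompt)  # hypothetical client call; output varies run to run
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output is expected occasionally; just retry
        # Schema check: only trust output that matches what downstream code expects
        if result.get("category") in {"billing", "bug", "feature"} \
                and result.get("urgency") in {"low", "high"}:
            return result
    raise ValueError("LLM output failed validation after retries")
```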

July 18, 2025 · 5 min · nickpending

How I Made Claude Code Actually Ship Software: A Systematic Workflow That Works

TL;DR Claude Code is powerful, like really… but chaotic without some structure. Many people probably use it in an ad-hoc fashion, and there’s nothing wrong with that. I built a systematic workflow that turns it into a shipping machine. The magic is in the command structure: /plan-iteration → /plan-task → implement → /complete-task → /complete-iteration. Each command has specific responsibilities and quality gates. Evidence-based completion prevents the “is it done?” problem. /complete-iteration requires concrete proof that users can access working software. No more marking things “complete” when they’re just coded. ...
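For context, Claude Code custom commands live as markdown files under .claude/commands/. A hypothetical sketch of what a command like /plan-task might contain - the real command files aren’t shown in this excerpt:

```
<!-- .claude/commands/plan-task.md (hypothetical) -->
Plan the next task from the current iteration plan.

1. Read the iteration plan and select the next unfinished task.
2. Produce a step-by-step implementation plan with explicit quality gates.
3. Stop before writing any code - implementation is a separate step.
```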

July 10, 2025 · 13 min · nickpending

Building My First Claude Code Hook: Closing the Hope and Pray Gap

TL;DR I had a critical gap in my Claude Code workflow. Everything was optimized except the moment Claude actually writes code. Even with auto-accept enabled, I was gambling that /complete-task would catch violations after many changes. Hooks let you inject real-time quality checks. When Claude Code hooks became available, I could finally add guardrails that run outside Claude’s context but feed results back through JSON decision control to block bad edits. ...
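A minimal sketch of the shape such a hook can take, assuming the documented pattern of JSON in on stdin and a JSON decision out on stdout - the check itself and the exact field names here are illustrative:

```python
#!/usr/bin/env python3
"""Hypothetical Claude Code hook: block edits that introduce TODO markers."""
import json
import sys

payload = json.load(sys.stdin)  # Claude Code passes tool details as JSON on stdin
# For an Edit tool call, the proposed text arrives in tool_input; the field
# names used here are assumptions worth verifying against the hooks docs.
new_text = payload.get("tool_input", {}).get("new_string", "")

if "TODO" in new_text:
    # JSON decision control: tell Claude Code to reject the edit and why
    print(json.dumps({"decision": "block",
                      "reason": "Edit introduces a TODO; finish the implementation instead."}))
else:
    sys.exit(0)  # no output and a zero exit lets the edit proceed
```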

July 4, 2025 · 8 min · nickpending

Claudex: From Ad-Hoc Context Creation to Repeatable Generation

TL;DR Context creation was still ad-hoc even with the three-layer memory system in place. Every new language meant manually crafting another CLAUDE.local.md file and repeating the same research process. The research process was repeatable. Looking at what made my Python context effective, I noticed the same pattern: current tools, common mistakes, workflow problems. The structure was consistent across languages. Prompt template generates research-backed contexts. Feed it “python” or “golang” and get comprehensive development contexts through 12-15 targeted searches of authoritative sources. ...
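That three-part pattern translates directly into a context outline. A hypothetical skeleton of a generated CLAUDE.local.md (section names from the post; descriptions illustrative):

```
# CLAUDE.local.md - golang
## Current tools: toolchain, linters, and test runners worth defaulting to
## Common mistakes: idioms LLMs reliably get wrong in this language
## Workflow problems: build/test/dependency friction and how to route around it
```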

June 27, 2025 · 5 min · nickpending

The Interview Pattern: Why AI Should Ask Before It Acts

You ask AI to implement something seemingly straightforward. It builds exactly what you asked for. Then you realize it’s not what you actually needed - wrong UX behavior, missing security considerations, broken existing functionality. There’s a pattern emerging to address this: let the AI interview you first. TL;DR Assumptions hide in seemingly straightforward requests. Even simple tasks like “add email validation” contain multiple implementation decisions that aren’t obvious until you start building. ...
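A minimal sketch of an interview-style preamble you could prepend to a request - the wording is mine, not the post’s:

```
Before implementing, interview me: ask the 3-5 questions whose answers would
most change your implementation - UX behavior, security constraints, edge
cases, interactions with existing functionality. Wait for my answers, then build.
```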

June 18, 2025 · 7 min · nickpending

Beyond Prompting: Why LLMs Break Down on Well-Architected Code and How Composition Saves Development

How I discovered that LLM limitations reveal fundamental truths about software architecture in the AI era The Breaking Point Isn’t What You Think After months of AI-assisted development, I hit a wall that had nothing to do with prompting skills or context limits. The problem was deeper and more fundamental: LLMs lose their minds when applications exceed their cognitive capacity. I wasn’t building monoliths. My applications had proper separation of concerns, clean data models, comprehensive tests, and type safety. The repository pattern, service layers, configuration management - all the architectural best practices were there. But as these well-structured applications grew in complexity, something strange happened. ...

June 10, 2025 · 12 min · nickpending

The Architecture of Laziness: Why LLMs Are Fundamentally Designed to Cut Corners

AI-assisted development tools are everywhere - systems that understand code, generate tests, fix bugs, and accelerate delivery. But practitioners are hitting a wall. Despite sophisticated prompts, quality controls, and multi-agent workflows, LLMs consistently cut corners on complex work. The problem isn’t training or prompt engineering - it’s architectural. TL;DR LLMs optimize for different objectives than human developers. The “laziness” comes from training to satisfy human approval patterns rather than correctness. My sophisticated workflows work against this optimization. ...

June 9, 2025 · 8 min · nickpending