The Right Agent, Right Context, Right Time: Why Universal Context Doesn't Mean Universal Access

Over the past ten years, I've spent considerable time and energy trying to shorten critical time gaps: the time it takes attackers (the good kind) to find bugs, defenders to mitigate, and responders to act when things go sideways. This has always been about data, getting the right people the right information at the right time. I've built systems designed to reduce friction, perform at scale the drudgery that bug hunters do (before AI made this interesting), and tackle attack surface problems before ASM was a thing. In every case, success hinged on that fundamental principle: right people, right data, right time. ...

June 6, 2025 · 15 min · nickpending

From 47 Manual Iterations to Systematic Validation

This begins a two-part exploration of evolving from intuition-based to evidence-based prompt engineering, revealing fundamental insights about AI system limitations and systematic evaluation. The Problem That Nearly Broke Me... Not Really I was building a multi-phase HTTP reconnaissance workflow using Nerve ADK - a sophisticated chain of AI agents working together to systematically probe web services. The architecture was elegant (if I do say so myself!): a Discovery Planner would identify what information we needed, a Command Generator would translate those requirements into concrete tool commands, and executors would run the actual reconnaissance. ...

May 30, 2025 · 7 min · nickpending

The Systematic Breakthrough

This concludes our two-part exploration of evolving from intuition-based to evidence-based prompt engineering. Read Part 1 for the context and methodology behind these findings. The Real Results: Validating and Expanding “Curious Case” Findings After fixing the flag format bias and running the complete four-strategy comparison across all models, the results were definitive - and they validated the core insight from “The Curious Case of the Thinking Machine” while revealing much broader patterns: ...

May 30, 2025 · 8 min · nickpending

Beyond Comprehensive Context

This continues my exploration of AI-assisted development workflows, building on insights from Making Security Ambient and the practical lessons learned from daily Claude Code usage. The Evolution of AI-Assisted Development The progression has been fascinating to observe—and participate in. We started with simple one-shot prompts: “Generate this function.” Then we got smarter about light planning, adding some structure to our requests. From there, we developed systematic AI-assisted workflows like the /plan-task approach I’ve written about, where architectural guardrails transform unpredictable AI assistance into something more craftsperson-like. ...

May 23, 2025 · 8 min · nickpending

Claude Code Memory Optimization

In the previous article on Beyond Comprehensive Context, I explored how AI-assisted development revealed challenges with comprehensive documentation and token efficiency. What follows are my approaches for memory optimization that emerged from daily use - not as a definitive solution, but as one possible option. The Layered Memory Paradigm What I've discovered through daily practice is that Claude Code's memory system reveals something pretty cool about how expertise actually functions. The system's three-layered structure mirrors the way I think about knowledge - from universal principles to contextual applications. ...

May 23, 2025 · 9 min · nickpending

The Curious Case of the Thinking Machine

The Setup We’re building a multi-step reconnaissance workflow using Nerve ADK - a chain of AI agents working together. We built a few pieces to get started: Discovery Planner: “What information do we need?” Command Generator: “How do we get that information?” Simple. Elegant. Each agent has one job, and they pass data between them like a relay race. The Tools Both agents had access to a think() function - a way to explicitly show their reasoning process. Think of it as the difference between solving math in your head vs. showing your work on paper. ...

May 23, 2025 · 5 min · nickpending

Making Security Ambient: Seamless Checks in the Age of Vibe-Coding

In this world of vibe-coding—where developers “fully give in to the vibes” as Andrej Karpathy put it—I’ve been exploring ways to integrate essential security thinking into this new workflow paradigm without sacrificing the speed and flexibility that makes AI-assisted coding so powerful. The Security Dilemma in Vibe-Coding Vibe-coding represents a fundamental shift in how software gets built. Instead of meticulously crafting each line, developers describe what they want and let AI generate the implementation. While this approach dramatically accelerates development, it creates blind spots around security considerations that traditional manual code reviews would catch. ...

May 10, 2025 · 2 min · nickpending

Orchestrating Security Actions

This is Part 2 in a two-part series. Read Part 1: Bridging the Cognitive Gap Introduction LLMs don't reason. They don't intuit. They don't understand risk or strategy. But they are changing how work gets done, and fast. From summarizing alerts and drafting recon reports to reverse engineering malware with tools like GhidraMCP, these agents are delivering value in surprising places. Connected Workflows: Beyond Single-Tool Interaction The most promising developments come from connecting these tools into seamless workflows. Projects like NERVE demonstrate the power of LLM-coordinated security tools, where a natural language request triggers multiple tools in sequence without requiring manual intervention at each step. ...

May 2, 2025 · 4 min · nickpending

Bridging the Cognitive Gap: Why LLMs Don't Think Like Security Experts... Yet

Large language models (LLMs) are rapidly becoming integrated into cybersecurity workflows, with systems now able to execute reconnaissance, analyze configurations, or detect potential vulnerabilities with increasing competence. Recent models such as OpenAI’s o1 and Google’s Gemini 2.5 have introduced internal deliberation loops, longer context management, and structured reasoning prompts to expand what models can handle. But something still feels incomplete. Security expertise lives in the gray areas: when to follow a hunch, what a naming convention might imply about infrastructure design, when a redirect chain is just odd enough to be worth digging into. We try to codify these things — through system prompts, retrieval systems, or agent workflows — but what we’re really doing is projecting structure onto a system that lacks true comprehension. ...

April 21, 2025 · 6 min · nickpending

The Indispensable Human-in-the-Middle

In the rapidly evolving landscape of AI-driven cybersecurity, an interesting truth emerges: as automation capabilities increase, the value of human judgment becomes not diminished but amplified. This third installment of our series explores why the "human-in-the-loop" approach remains indispensable despite remarkable advances in AI technologies. The Fundamental Limitations of Current Agentic AI Systems The current generation of AI-driven security tools, particularly agentic systems that attempt to autonomously plan and execute tasks, faces significant challenges in security contexts where precision and reliability are non-negotiable. ...

April 15, 2025 · 7 min · nickpending