A Practical Interlude in My Cybersecurity AI Series

I’m excited to share my new Censys MCP (Model Context Protocol) tools module, which demonstrates how AI capabilities can enhance security workflows when guided by domain expertise.

My previous articles here and here explored how the integration of human insight and AI will shape security’s future, introducing the hacker-strategist archetype and examining prompt engineering’s strategic dimensions. Today, I’m offering a practical example connecting those concepts through a project that brings natural language interfaces to OSINT workflows.

As Desmond Tutu wisely said, “There is only one way to eat an elephant: a bite at a time.” This philosophy applies perfectly to AI-assisted development, but with an important caveat—the process should begin before writing a single line of code.

What I’ve found most valuable is starting with the design and architectural phase, using AI as both a collaborator and sounding board. Before diving into implementation, spend time exploring the problem space and solution space with your AI assistant. This dialogue helps clarify requirements, identify potential challenges, and build a coherent mental model of what you’re trying to create.

Once you have a solid architectural foundation, AI can help you decompose the problem into manageable chunks. This is where effective prompting becomes crucial—you can work with the AI to identify logical component boundaries, dependencies, and implementation priorities. The AI doesn’t replace your domain expertise in this process; rather, it amplifies it by helping you articulate and refine your thinking.

During this project, I found myself sometimes taking a less structured approach—occasionally giving Claude comprehensive tasks and other times breaking things down methodically. In future projects, I’ll be more deliberate about this design-first, incremental implementation approach. I’ll explore these techniques further in a future article, but the key insight is clear: while AI tools aren’t security-specific, when coupled with domain expertise, they create powerful opportunities for innovation in our field.

The distribution of effort shifted significantly during development—less time writing boilerplate, more time on architecture and validation. That shift aligns directly with the hacker-strategist role I described earlier in this series.

The MCP Advantage: Direct Value Without Complex Agents

There’s currently a flurry of activity around MCP, and for good reason. It provides a straightforward way to extend an LLM’s capabilities without building complex agent architectures or multi-step workflows. While sophisticated agent systems certainly have their place, MCP offers immediate practical value through simple function-calling interfaces.
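
To make that concrete, here is a minimal sketch of the shape an MCP tool takes, using the FastMCP helper from the official Python SDK. The server name and the toy tool below are illustrative stand-ins, not pieces of the Censys module.

```python
# Minimal MCP server sketch: a single Python function exposed as a tool.
# Assumes the official MCP Python SDK is installed (pip install mcp).
import ipaddress

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("osint-helpers")  # illustrative server name

@mcp.tool()
def is_private_ip(ip: str) -> bool:
    """Tell the model whether an address sits in private-use address space."""
    return ipaddress.ip_address(ip).is_private

if __name__ == "__main__":
    mcp.run()  # stdio transport, the default for local MCP clients such as Claude Desktop
```

Once the server is registered in the client’s configuration, the model can call is_private_ip like any other tool; the input schema comes from the type hints and docstring, which is what keeps the interface so lightweight.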

The security community is recognizing this potential. Just recently, PortSwigger released the Burp Suite MCP Extension, bringing these same capabilities to one of security’s most essential tools. This integration allows researchers to leverage Claude’s analytical capabilities directly within their web application testing workflows.

The broader community has responded with similar enthusiasm: the Awesome MCP Servers repository already showcases numerous creative implementations that extend Claude’s capabilities in practical ways.

Think of MCP as the security equivalent of smart home automation. Just as setting up simple voice commands can eliminate dozens of small daily frictions—turning lights on, adjusting thermostats, checking door locks—MCP eliminates the context-switching tax in security workflows. Instead of jumping between terminal windows, browser tabs, and API documentation, analysts can stay in their thinking space while accessing critical intelligence. These micro-optimizations compound throughout the day, preserving analytical momentum and reducing cognitive load during complex investigations.
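
As a sketch of how that plays out with Censys data, imagine a tool that wraps a host lookup so an analyst can simply ask what an IP is exposing and stay in the conversation. The example below assumes the censys-python client with API credentials in the environment; the tool name and the fields it summarizes are illustrative choices, not a literal excerpt from my module.

```python
# Hypothetical Censys-backed MCP tool: the analyst asks about a host in plain
# language, the model calls this function, and the summary comes back in-context.
# Assumes censys-python is installed and CENSYS_API_ID / CENSYS_API_SECRET are set.
from censys.search import CensysHosts
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("censys-osint")
hosts = CensysHosts()  # credentials come from the environment (CENSYS_API_ID / CENSYS_API_SECRET)

@mcp.tool()
def host_services(ip: str) -> list[dict]:
    """Return a compact list of services Censys has observed on a host."""
    record = hosts.view(ip)
    # Field names assume the Censys Hosts v2 schema (services[].port, service_name).
    return [
        {"port": s.get("port"), "service": s.get("service_name")}
        for s in record.get("services", [])
    ]

if __name__ == "__main__":
    mcp.run()
```

Follow-up questions (“which of those ports look unusual for a mail server?”) then ride on the same context instead of another trip to the web console.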

This represents the subtle but powerful shift I’ve been discussing in my series: not replacing the analyst’s work, but augmenting their capabilities and enabling them to focus on the aspects of security that most require human insight and creativity.

Bridging Concepts and Implementation

This project sits at the intersection of theory and practice—demonstrating how AI can serve as a force multiplier for security work when thoughtfully implemented. The experience highlighted how structured prompting during development leads to better results; the more we plan and design, the better our prompts will be as implementation progresses.

Looking Forward

This modest project represents one experiment in the ongoing evolution of security tools. As AI capabilities improve, we’ll discover new ways to augment human expertise—but the core insight remains: the most valuable applications emerge from the intersection of human judgment and machine capability.

In my next major article, I’ll return to exploring the specialized knowledge requirements for effective security-focused prompt engineering. Until then, I invite you to experiment with your own AI-enhanced security tools and share your experiences.

This interlude connects to my ongoing series exploring cybersecurity’s AI-driven evolution. You can find the previous articles on my Substack.