Where I’ve Been

I’ve been quiet lately, not because I ran out of things to say, but because I’ve been busy: tending my modest garden, hunting for my next role, and, mostly, building an AI assistant that doesn’t forget everything the moment you close a terminal window. Some of you may recall my previous post, where I introduced a command-line vulnerability analysis tool. It did what it was supposed to: parsed CVEs, connected them to internal policies and configurations, and spat out reasonably coherent summaries and an adequate risk analysis for the enterprise.

That should’ve been enough. But of course, it wasn’t.

What started as a proof-of-concept slowly evolved into a fully-fledged side project: an LLM (Large Language Model)-powered CISO sidekick with document ingestion, memory, and orchestration that finds new and interesting ways to break every time I make a change.

As luck would have it, I have a bit of time to write today, because I’m currently waiting on a full re-ingestion of source documents used for memory and RAG (Retrieval Augmented Generation). It turns out rebuilding a vector DB of nearly 900,000 chunks takes a while. Long enough to reflect on how this all spiraled and tee up a few posts about what I’ve learned (or at least broken) along the way.


What Took So Long

The short version? This was supposed to be a small project. Then I started coding. Here are a few of my thoughts on building an AI-based agent/tool.

  • LLM-as-Coding-Assistant: Yes, I used an LLM to help build the LLM assistant. Yes, I see the irony. When it worked, it was super helpful, spitting out boilerplate code, fixing obvious bugs, scaffolding classes and modules, inserting logging statements, even writing decent first drafts of pipeline logic. But when it failed, it failed confidently—hallucinating imports from imaginary libraries, misreading its own context, forgetting what it suggested just minutes prior, and suggesting “fixes” that subtly broke everything. Eventually, I stopped trusting anything longer than 10 lines and started verifying every response like I was reviewing an audit report of my security program.
  • Vibe Coding: Since my coding copilot wasn’t being as helpful as I wanted, and my Python skills are mediocre at best, I thought I’d try vibe coding on my first attempt at a rewrite. I didn’t start with a design document. I started with, “Let’s just get something working.” That worked fine until I had five partially overlapping modules, three conflicting chunking strategies, and an internal function named maybe_fix_it(). At some point, I found myself reverse-engineering my own project to understand why it was behaving the way it was. I also found that, even with frontier models, the AI would often forget or willfully ignore existing decisions, reference old code that had been deprecated or refactored, or get stuck in a debugging loop (try X, try Y, try X, try Y). You have to stay on top of your vibe coding partner constantly, or you’ll end up with a broken app, reverting in git, or even branching your code from a known working version into a new repo and starting over.
  • Models: Local ones were private and cheap, but slow and occasionally unhelpful. Frontier models were fast and fluent, but expensive and a bit too eager to guess what I meant instead of what I asked. I eventually settled on a hybrid model setup that uses local LLMs for most use cases, a frontier API for niche ones, and a frontier AI for vibe coding.
  • RAG: Retrieval-augmented generation is conceptually simple—chunk documents, embed them, pull them back as needed. In practice, it was two weeks of debugging chunking logic, tuning retrieval thresholds, and reading embeddings with a level of obsession normally reserved for horoscopes. Getting RAG right is more complicated than it looks; ingestion is time-consuming, mistakes are often catastrophic, meaning a hard reset, and retrieval can be buggy and inconsistent (inconsistency is a recurring theme with AI).
  • Memory: I thought memory would be easy. Spoiler: it’s not. The assistant needs to remember enough to feel coherent, but not so much that it repeats itself or derails. Storing and recalling context across sessions involved building custom logic that now exists somewhere between “clever” and “duct-taped.” I’m sure the frontier systems (not just the LLM) are more the former, and mine is more the latter.
  • Orchestration: This thing now runs on a small army of containers—tokenizer, embedder, LLM, vector DB, orchestrator, API service, and a handful of background workers. When it works, it feels like magic. When it doesn’t, it’s 2 AM, and I’m debugging why a service named seed-orchestrator suddenly thinks it’s responsible for chunking PDFs upside down.
  • Security: As a CISO, I couldn’t not threat model it. That led to discovering potential data exposures, misconfigured API endpoints, and the sinking feeling that I had built the kind of tool I’d normally yell at someone about. Locking it down took time and is still not where I want it, because the nature of generative AI, augmented memory, and RAG makes keeping sensitive data sandboxed difficult.
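
To make the memory point above concrete: the core of my “clever meets duct-taped” session memory is just a rolling window of recent turns plus a running summary of whatever falls off the back of it. Here’s a minimal sketch of that idea; the class and the summarization shortcut are illustrative stand-ins (a real build would persist state and use an LLM call to compress evicted turns):

```python
# Minimal sketch of cross-session memory: keep a rolling window of recent
# turns verbatim, and fold evicted turns into a running summary so the
# assistant stays coherent without replaying the whole history.
# Hypothetical names throughout; a real build would persist to disk or a DB
# and summarize with an LLM call instead of truncation.

class SessionMemory:
    def __init__(self, window=6):
        self.window = window   # recent turns kept verbatim
        self.turns = []        # list of (role, text) tuples
        self.summary = ""      # compressed memory of older turns

    def add(self, role, text):
        self.turns.append((role, text))
        if len(self.turns) > self.window:
            old_role, old_text = self.turns.pop(0)
            # Stand-in for an LLM summarization of the evicted turn.
            self.summary = (self.summary + f" {old_role}: {old_text[:60]}").strip()

    def context(self):
        # What gets prepended to the next prompt.
        recent = "\n".join(f"{r}: {t}" for r, t in self.turns)
        if self.summary:
            return f"Earlier this session: {self.summary}\n{recent}"
        return recent
```

The hard part is deciding what counts as “enough to feel coherent”; the window size and the eviction policy are where the duct tape lives.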

What’s Coming Next (Maybe)

I want to say this is the first of a regular series where I will dive into one of the topics above or share an update on progress, but let’s be honest, I’m still kind of addicted to getting this thing working just right. Which means I’ll probably disappear again the next time I decide to re-embed 900,000 chunks, swap out my LLM backend, or rewrite an entire API because I don’t like how a container logs errors.

That said, I do plan to share more. Maybe between rebuilds. Maybe while waiting for my vector DB to re-index itself for the third time this week.

Here’s what’s on deck—unless something explodes, or I change chunking strategies again:

  • Vibe Coding an AI Sidekick – Building first, thinking later, and regretting just enough to keep going.
  • Model Wars: Local vs. Frontier – How I tried to balance performance, cost, and security.
  • RAG Isn’t Magic, It’s a Maintenance Plan – The dirty truth behind “just drop your docs into a folder and let Python do the rest.”
  • Memory Without Regret – How to make an assistant that remembers what matters and forgets what doesn’t (kind of).
  • Threat Modeling Myself – What happened when I looked at my own LLM stack with a critical eye.

Each post will go deeper into the pain, tradeoffs, and quiet victories of building a serious, usable AI assistant from scratch, as a security leader who knows better but did it anyway.

So What?

Have you ever sat through a security briefing, heard the words, “This CVE has a critical CVSS score of 9.8!” and thought to yourself, “Okay, great… but what does that actually mean for us?” You’re definitely not alone.

[Image: ‘So What’ album cover by Miles Davis. Caption: Great question, greater song]

Throughout my career as a CISO, I’ve spent a large chunk of my time asking exactly this question. Let’s face it: CVSS scores are helpful, but they’re also generic. They don’t account for the specifics of your enterprise — your infrastructure, your configurations, or your security posture. Essentially, they’re like a weather forecaster predicting rain “somewhere in Texas.” Helpful-ish, but you still don’t know whether you’ll need an umbrella.

This frustration is exactly why I decided to build an AI-powered risk assessment agent using synthetic data to simulate a mid-to-large enterprise environment. Because at the end of the day, cybersecurity isn’t about reacting to generic alarms, it’s about understanding your risks in your context, and making clear, informed decisions based on reality, not theory. I didn’t want another tool that simply echoed what public databases already told me. I wanted something that could reason, prioritize, and reflect the unique fingerprint of a real-world enterprise, something that could finally answer the question that every overwhelmed security team secretly asks: “Out of everything that’s happening, what actually matters right now?”

Meet My Blind Yet All-Seeing AI Sidekick

When I first kicked off this project, I had a basic plan: Can I ask an AI what a CVE actually means to the company instead of reading endless vendor bulletins that assume every system is exposed to the internet and ready to be set on fire?

[Image: the three Graeae stirring a cauldron in a dim cave. Caption: The Graeae see nothing, yet see all]

At the time, it seemed simple enough. I threw together some Python scripts, used an LLM to generate some synthetic network configurations for simulation purposes, and piped CVE summaries straight into an LLM with a “what does this mean” prompt.

The first results? They were…useless. I had to work through a series of LLMs to understand their strengths and weaknesses first. Then, when I finally settled on a few that worked for different portions of the project, the results were enthusiastic, let’s say. Long-winded, overly cautious, about as useful as an airport announcement that says ‘a flight is delayed’ without mentioning which one. The AI could talk about “potential risk” and “hypothetical impacts,” but it was like asking a Magic 8-Ball for incident response advice (now that I think about it… <shake> “very doubtful”).

Clearly, if I wanted real insights, I’d need to teach it to think more like a security analyst — breaking down context, assessing technical fit, and prioritizing risk based on reality, not worst-case fantasy.

That kicked off a lot of trial and error (and a lot of coffee).

One of the things I’ve learned over the years working with enterprise applications is that enterprise data doesn’t come neatly gift-wrapped. It’s messy, inconsistent, and often spread across spreadsheets, PDFs, exported scan results, policy documents, and firewall configs hastily copied and pasted into Notepad. So, I made a decision: the system had to be document-agnostic. Whether the input was a structured CMDB export, a raw Qualys scan CSV, a Word document full of access policies, or a block of firewall rules saved as plain text, the agent needed to ingest, normalize, and chunk it into usable pieces automatically. That way, I wouldn’t have to waste time hand-massaging inputs — I could just drop whatever artifacts I had, and the AI would do the heavy lifting to turn them into meaningful context for analysis. It wasn’t glamorous work, but it’s the difference between a system that works in a demo and a system that works under pressure in real-world environments.
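
A rough sketch of what document-agnostic ingestion reduces to: dispatch on file type, normalize to plain text, chunk with overlap. The loader functions here are toy stand-ins, not my actual parsers (real CSV/PDF/DOCX handling needs proper libraries):

```python
# Sketch of document-agnostic ingestion: pick a loader by extension,
# normalize to plain text, then cut into fixed-size chunks with overlap.
# Loaders are hypothetical stand-ins for real parsers.
import os

def load_csv(raw):
    return raw.replace(",", " ")   # crude normalization for illustration

def load_text(raw):
    return raw

LOADERS = {".csv": load_csv, ".txt": load_text, ".cfg": load_text}

def chunk(text, size=200, overlap=50):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def ingest(filename, raw):
    ext = os.path.splitext(filename)[1].lower()
    loader = LOADERS.get(ext, load_text)   # unknown types fall back to plain text
    return chunk(loader(raw))
```

The overlap matters more than it looks: without it, a firewall rule split across a chunk boundary simply vanishes from retrieval.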

Then, I realized the model needed more than just the CVE description. It needed to understand my simulated environment — the servers, the cloud zones, the endpoint devices, the policies in place. I built a semantic document chunking system that splits large artifacts into digestible pieces and indexed them using embeddings so the AI could “search” and “retrieve” the most relevant ones.
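
The mechanics of “search” and “retrieve” look roughly like this; I’ve swapped the real embedding model for a bag-of-words counter so the ranking logic stays visible, which means the similarity scores are illustrative only:

```python
# Toy chunk-embed-retrieve: a bag-of-words Counter stands in for a real
# embedding model; the mechanics (embed chunks, embed query, rank by
# cosine similarity) are the same in shape.
import math
from collections import Counter

def embed(text):
    # Stand-in for a sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ["apache log4j remote code execution on web tier",
        "hr onboarding policy for new employees",
        "internet-facing apache servers in the dmz"]
```

Even this toy shows the failure mode I kept hitting: filler words inflate similarity, which is exactly why the retrieval thresholds needed so much tuning.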

Keyword generation became the next big unlock. Rather than blindly guessing which documents to pull, I trained a secondary step where the model reads the CVE, extracts important concepts (“Apache,” “Log4j,” “remote code execution,” etc.), and uses those as retrieval anchors. That alone boosted the signal-to-noise ratio dramatically.
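
That keyword-anchor step can be sketched as a pre-filter before ranking. In my real pipeline the extraction is an LLM call; a crude stoplist stands in for it here, and the CVE text and chunks are invented:

```python
# Keyword-anchored retrieval as a pre-filter: extract key concepts from
# the CVE, then only consider chunks mentioning at least one anchor.
# The stoplist extractor is a stand-in for the secondary model step.

STOPWORDS = {"a", "an", "the", "in", "of", "to", "via", "and", "allows"}

def extract_keywords(cve_text, top_n=5):
    # Stand-in for the "read the CVE, pull out key concepts" model call.
    words = [w.strip(".,()") for w in cve_text.lower().split()]
    return [w for w in words if w not in STOPWORDS][:top_n]

def anchor_filter(chunks, keywords):
    # Keep only chunks that mention at least one anchor concept.
    return [c for c in chunks if any(k in c.lower() for k in keywords)]

chunks = ["apache log4j deployed on payment web tier",
          "badge reader firmware release notes",
          "remote access policy for contractors"]
anchors = extract_keywords("Log4j allows remote code execution via crafted JNDI lookups")
relevant = anchor_filter(chunks, anchors)
```

The filter is cheap, and everything it discards is noise the embedder never has to rank, which is where the signal-to-noise win comes from.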

Once I had the right context, I ran into another wall: the model’s tendency to “blob” everything together in one giant answer. It needed structure. So I built a prompt-chaining system — first summarizing the CVE, then identifying impacted systems, then scoring risk, and finally suggesting remediations.

Breaking the problem into bite-sized reasoning steps made a night-and-day difference in output quality.
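
Stubbing out the model calls, the chain looks something like this (the `ask` function is a placeholder that just echoes the instruction, not a real API):

```python
# The prompt chain with model calls stubbed out: each stage gets a narrow
# prompt and its output feeds the next stage. `ask` is hypothetical.

def ask(prompt):
    # Placeholder model call; echoes the first line of the prompt.
    return f"[answer to: {prompt.splitlines()[0]}]"

def analyze(cve_text, env_context):
    summary = ask(f"Summarize this CVE:\n{cve_text}")
    impacted = ask(f"List impacted systems given this environment:\n{env_context}\n{summary}")
    score = ask(f"Score the risk for these systems:\n{impacted}")
    fixes = ask(f"Suggest remediations:\n{impacted}\n{score}")
    return {"summary": summary, "impacted": impacted, "score": score, "fixes": fixes}
```

Each stage can be inspected, logged, and retried on its own, which is most of why the chained version beat the single giant prompt.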

Along the way, I layered in sampling controls — letting the pipeline randomly select, stratify, or cluster document samples depending on the risk appetite. I wired in tunables like temperature (creativity vs. precision) and top-p values (how adventurous the sampling is) so that depending on the need, I could dial up a “paint inside the lines” analysis or let it freewheel a bit when exploring remediation strategies.
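
The tunables boil down to per-task sampling profiles and a selectable document-sampling method. The values and the stratification key below are illustrative, not calibrated recommendations, and cluster sampling is omitted for brevity:

```python
# Per-task sampling profiles plus selectable document sampling.
# Profile values are illustrative placeholders, not tuned settings.
import random

PROFILES = {
    "risk_scoring": {"temperature": 0.1, "top_p": 0.5},   # paint inside the lines
    "remediation":  {"temperature": 0.8, "top_p": 0.95},  # freewheel a bit
}

def sample_documents(docs, method="random", n=2, seed=42):
    rng = random.Random(seed)  # seeded so runs are repeatable
    if method == "random":
        return rng.sample(docs, min(n, len(docs)))
    if method == "stratified":
        # One doc per "stratum"; here the stratum is just the first word.
        seen, out = set(), []
        for d in docs:
            key = d.split()[0]
            if key not in seen:
                seen.add(key)
                out.append(d)
        return out[:n]
    raise ValueError(f"unknown method: {method}")
```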

Of course, having a risk score pop out at the end is great…unless it’s wrong. So I also built a confidence scoring model. It looks at how tightly the evidence matches the CVE, whether the system is internet-exposed, whether there’s an existing patching policy, and other environmental factors. Then it generates a confidence rating alongside the risk assessment — helping me separate “this is critical” from “this might be critical, but we’re guessing.”
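
A caricature of that confidence layer: weight a few environmental signals into a 0-to-1 rating that rides alongside the risk score. The weights and thresholds here are invented for illustration:

```python
# Confidence as a weighted blend of environmental signals.
# Weights and thresholds are illustrative, not calibrated.

def confidence(evidence_match, exposure_known, patch_policy_exists):
    """evidence_match: 0-1 similarity between retrieved docs and the CVE."""
    score = 0.5 * evidence_match                      # evidence strength
    score += 0.3 if exposure_known else 0.0           # exposure clarity
    score += 0.2 if patch_policy_exists else 0.0      # policy coverage
    return round(score, 2)

def label(score):
    if score >= 0.75:
        return "this is critical"
    if score >= 0.4:
        return "this might be critical, but we're guessing"
    return "low confidence: treat as a lead, not a finding"
```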

Technology-wise, I wanted flexibility, not lock-in. So I designed the engine to be model-agnostic: I can hit frontier models like GPT-4 Turbo over an API when I want the big guns, or I can call a locally hosted LLM through Ollama when I want speed, privacy, or just to avoid burning API credits. It also made it easy to test different models and architectures without rewriting the entire system each time.
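
The routing itself is thin: one interface, two request shapes. The endpoints below match the public OpenAI chat-completions and Ollama generate APIs as I understand them, but the routing policy and function names are my own stand-ins, and nothing is actually called here, only request payloads are built:

```python
# Model-agnostic routing sketch: build the request for whichever backend
# the task calls for. Endpoints reflect the public OpenAI and Ollama
# APIs; the policy and names are hypothetical.

def build_request(backend, prompt, model=None):
    if backend == "frontier":
        return {  # OpenAI-style chat completion payload
            "url": "https://api.openai.com/v1/chat/completions",
            "json": {"model": model or "gpt-4-turbo",
                     "messages": [{"role": "user", "content": prompt}]},
        }
    if backend == "local":
        return {  # Ollama generate endpoint, typically on localhost
            "url": "http://localhost:11434/api/generate",
            "json": {"model": model or "llama3", "prompt": prompt},
        }
    raise ValueError(f"unknown backend: {backend}")

def route(task):
    # Hypothetical policy: big guns for deep analysis, local for the rest.
    return "frontier" if task in {"deep_analysis", "remediation"} else "local"
```

Keeping the payload-building behind one seam is what made swapping models a config change instead of a rewrite.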

While I designed this application for flexibility in a personal project setting, enterprise deployment would require proper governance, API security, and operational controls.

Honestly, this project taught me more about building reliable AI pipelines than any article or tutorial ever could. Every “small thing” — prompt design, chunk sizing, keyword filtering, sampling methods, temperature tuning, scoring logic — mattered. Miss one piece and the whole illusion of “smart AI” collapses into a pile of generic advice, random babbling, or my prompt being fed back to me, reworded.

Today, the agent doesn’t just tell me “this CVE has a 9.8 CVSS score.”

It tells me “this vulnerability could affect five critical systems in your PCI environment, two of which are internet-facing, patching is overdue on one, and based on our policies, your exposure window is about 14 days unless mitigated.”

It feels less like asking a Magic 8-Ball and more like having a junior analyst who’s fast, smart, and (mercifully) never asks for PTO.

The “So What?” Factor

One of the biggest lessons I’ve learned the hard way in cybersecurity is that volume is not the same as insight. Anyone can generate a wall of “critical vulnerabilities” and “urgent alerts.” But the ability to know which fires matter — and which ones are just smoke — is what separates chaos from control.

That was the real test for this AI agent. Could it help me get past “everything is bad” and tell me what matters, when it matters, to whom it matters?

At first, even after all the fancy retrieval, keywording, and prompt-chaining work, the outputs still felt a little…well, panicked. Models (especially when left to their own devices) have a tendency to be overly cautious. Everything becomes DEFCON 1. Every CVE is a crisis. Every server is a ticking time bomb.

I realized the agent needed some proportionality, communicating risk in terms that were realistic for the environment and for the risk tolerances of the business.

This is where the risk scoring and confidence layering came into play:

  • Stage 1: Analyze the CVE independently, focusing purely on the vulnerability’s technical impact.
  • Stage 2: Contextualize against the retrieved environment data.
  • Stage 3: Identify asset exposure (internal-only, DMZ, internet-facing, etc.).
  • Stage 4: Layer department/business criticality.
  • Stage 5: Generate a real-world risk score specific to the environment.
  • Stage 6: Attach a confidence rating based on evidence strength and exposure clarity.
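
Stages 3 through 5 can be caricatured as base severity adjusted by exposure and business criticality. The multipliers below are illustrative placeholders, not calibrated values:

```python
# Stages 3-5 as arithmetic: technical severity adjusted by exposure and
# business criticality into an environment-specific score.
# Multipliers are illustrative placeholders, not calibrated values.

EXPOSURE = {"internal-only": 0.5, "dmz": 0.8, "internet-facing": 1.0}
CRITICALITY = {"low": 0.6, "medium": 0.8, "high": 1.0}

def environment_risk(cvss_base, exposure, criticality, present=True):
    if not present:
        return 0.0  # "Not present in environment; no action needed."
    score = cvss_base * EXPOSURE[exposure] * CRITICALITY[criticality]
    return round(min(score, 10.0), 1)
```

This is how a screaming 9.8 on an internal, low-criticality box can land well below a 7.5 on an internet-facing payment system, which is exactly the proportionality I was after.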

The result was focused, realistic outputs that I could then use to make decisions regarding urgency for my team to take action.

[Image: terminal screenshot of the tool’s command-line interface, showing analysis options for threshold, output format, sampling method, temperature, and keywords. Caption: I added a ton of documentation and help along the way, mostly to remind myself of how to use my own app]

Instead of massive spreadsheets screaming “9.8!!!”, I get summaries like:

  • “Critical for payment processing; exposed; patch now.”
  • “Affects internal systems; covered by segmentation; patch during maintenance.”
  • “Not present in environment; no action needed.”

It turns theoretical chaos into navigable risk management.

Fewer sirens, more signal.

The Limits and Promise of AI in Risk Analysis

If there’s one thing building this agent taught me, it’s this: Large Language Models aren’t magic.

They’re not going to replace human cybersecurity expertise, no matter how many VC pitches or keynote slides try to tell you otherwise.

What they can do — and where they shine — is accelerating the grunt work that slows down human decision-making. They can connect dots faster than a junior analyst, summarize mountains of documentation in seconds, and offer “best guess” risk prioritization that can be validated and refined by actual practitioners.

In other words, AI is shaping up to be what early dot-com dreamers once promised “decision support systems” would be — only this time, it might actually work.

But it’s crucial to understand the limits:

  • Contextual Errors: Models often miss subtle nuances without tight context retrieval.
  • Overconfidence: Without proper guardrails, they hallucinate with swagger.
  • Blind Spots: They only know what they’re given; missing data equals missing judgment.
  • Integrity Risks: They can fabricate plausible but incorrect “facts”.

In short: AI can help us move faster, but it still can’t tell us when we’re sprinting in the wrong direction.

The critical judgment still belongs to human experts.

If we treat AI as a partner — a capable but imperfect junior analyst — we can unlock enormous value.

If we treat it as a replacement for judgment, we’re setting ourselves up for failure.

What I’m seeing is that LLMs and AI are not, at this point, the fantasized replacements for people, but amplifiers for skilled decision-makers.

And we’re going to need every bit of that amplification, because in cybersecurity, the real fight hasn’t even started yet.

My next article will be about how we, as an industry and a profession, are wholly unprepared for, and misaligned with, what is potentially coming.


The Cyber Ecosystem Shift

As federal cyber leadership pulls back, the balance is shifting across states, agencies, and industries. Here’s what that means—and why timing matters.


Ecosystems are interconnected, interdependent systems. Think of a forest, trees, insects, fungi, predators, all locked in this quiet, messy cooperation. When one part shifts, say, the apex predator disappears, the whole system doesn’t just hiccup, it rewires itself. Deer population booms, underbrush gets stripped bare, and the balance tilts. Same thing happens in our digital world. Poke one corner hard enough, and everything from infrastructure to policy starts reacting, sometimes in ways nobody quite expected.

Now apply that logic to our digital ecosystem. The internet isn’t a dump truck (or a series of tubes), it’s a snarled mess of cloud providers, enterprise intranets, municipal networks, and federal backbones, all duct-taped together with trust and APIs. Change one piece, say AWS tweaks its MFA defaults, or a state agency locks down a regional hub, and the ripple shows up somewhere you didn’t expect. What starts as a well-intentioned adjustment in one corner of the stack turns into a policy migraine or an outage in another. And the kicker? Most of the time, nobody sees it coming until the dominoes are already on the floor.


Federal cyber policy isn’t collapsing overnight, but the seams are starting to fray. Some of the changes, leaner structures and less bureaucracy, might even look good on paper. But others? They’re pulling apart the connective tissue that kept national coordination and trust intact. This isn’t a panic move; it’s a heads-up. Step back and you can see it: budget cuts, strategy shifts, decentralization creeping in from the edges. It doesn’t happen all at once. But if we don’t call it early, we’ll look up and realize the scaffolding’s already gone.

Picture a coral reef teeming with life, until the temperature spikes or pollutants creep in. Corals bleach, the energy source vanishes, and the entire chain starts to wobble. It’s the Great Barrier Reef right now, and the parallel to cybersecurity is clear. When core institutions, the “corals” of our digital ecosystem, lose capacity, collapse doesn’t announce itself. It just arrives, one broken link at a time.


Just like with coral reefs, when core structures in cybersecurity begin to erode, the impact isn’t immediate, but it’s inevitable. A funding cut here, a mission shift there, and the stability we’ve built up quietly starts to wobble. Unless you’re watching closely, you won’t notice until something breaks.

Federal agencies like CISA used to be the center of gravity, broadcasting threat intel, coordinating response, and keeping the edges from fraying. But that model is being redrawn. The 2023 National Cybersecurity Strategy shifts more burden to state and local governments, and while CISA’s $3B budget sounds solid, the number of plates it’s expected to spin keeps growing. The result? A patchwork of regional expectations, inconsistent readiness, and a shrinking safety net.

You can already see the strain. MS-ISAC, the real-time intelligence backbone for states and municipalities, is now dealing with internal pressure. Funding shifts, leadership turnover, and an evolving federal mission are testing its reach. If you’re a large state with dedicated cyber teams, you might be fine. If not? You’re on your own.

Then there’s NIST, the quiet architect of standards and sanity. Its frameworks have defined how we approach risk, compliance, procurement, and emerging tech. But as budget cuts take hold, we risk losing one of the few common denominators across the private sector and critical infrastructure. Industry leaders are already warning that NIST’s shrinking role could weaken U.S. standing in global cybersecurity and AI governance. And without that compass, organizations are left improvising, right when clarity matters most.

We’ve seen this before. When the Bush administration rolled back New Source Review requirements under the Clean Air Act, older power plants got a pass on modernization. On paper, it looked like efficiency. In practice, it pushed accountability to states, some prepared, some not. The same pattern is repeating: as federal guardrails loosen, we’re left with a fragmented map and no shared baseline.


For the private sector, this decentralization trend isn’t some abstract policy shift, it’s showing up in daily operations, audits, and post-incident writeups. With fewer federal signals to follow and uneven support across states, CISOs are left reading tea leaves to figure out what ‘good’ looks like. And it’s not just the big players feeling it. Small and mid-sized organizations, the ones without federal contracts or lobbyists, are being pushed to navigate a fragmented threat landscape with fewer tools, fewer signals, and more pressure. Here’s where that squeeze is hitting hardest:

  • More Responsibility, Less Backup: With the federal umbrella pulling back, companies are holding the bag, more responsibility, fewer lifelines. That means shoring up internal defenses, training teams harder, and pressure-testing incident response plans like it’s game day. Some orgs are ready. Many aren’t. And without consistent intel from federal partners, it’s like sending everyone into the storm with different weather maps, and hoping none of them are wrong.
  • Navigating Regulatory Fragmentation: The regulatory landscape’s starting to feel like a choose-your-own-adventure novel written by 50 different authors. Companies working across multiple states are juggling conflicting requirements, one state says report in 48 hours, another says 72. Some want threat modeling. Others want incident logs in triplicate. The only consistency is the confusion. This isn’t just a compliance headache, it’s a risk amplifier. In the cyber version of the Clean Air Act rollback, inconsistent oversight doesn’t just lead to fines, it creates gaps that adversaries can walk through. And the coordination bridges that used to help navigate it all? They’re getting washed out, just when the waters are rising.
  • Diminished Information Sharing and Collaboration: CIPAC didn’t make splashy headlines when it went away, but it should’ve. It was more than a mailing list. It was one of the last remaining connectors between federal agencies and the industries trying to stay ahead of threats. Without it, threat intel flows slower, silos grow taller, and coordination falls apart. DHS’s decision to sunset the platform was a quiet signal that collaboration just dropped a few notches in the priority stack. And if that sounds familiar, it’s because we’ve seen it before. When the EPA’s Office of Research and Development was nearly shut down and the State Department dropped its global air monitoring program, we didn’t just lose policy, we lost visibility. The same thing is happening in cyber. If we’re flying blind, it’s because we pulled the plug on the radar.
  • Supply Chain Vulnerabilities: Supply chains don’t respect jurisdiction lines, but cybersecurity policies increasingly do. When one supplier follows State A’s “guidance” and another is beholden to State B’s wish list, the gaps between them get wide enough for attackers to stroll through. Nobody’s waiting for a uniform standard anymore, and that’s a problem. Without federal coordination, we’re back to defending critical infrastructure with inconsistent controls and hope. In ecosystems, natural or digital, fragmentation doesn’t end in resilience. It ends in exposure.

So what do we do with all this? Step one: admit the terrain has shifted. This isn’t a future-state scenario, it’s already happening in real time. The federal safety net has holes, and we’re mid-air. That doesn’t mean we panic. It means we get serious about what we can control, and sharper about the gaps that need patching.

[Image: U.S. map of state cybersecurity capabilities, colored by established, emerging, or no public initiatives]
To help visualize the uneven progress across the country, I generated a map highlighting which states have already built—or are actively building—their own cybersecurity capabilities. Some have full-on fusion centers and dedicated commands. Others are just now hiring their first cyber lead. And then there are the quiet zones: states with little to no publicly visible investment.

Step two: focus where it counts. Here’s where we should put our energy:

  • Strengthen Internal Cybersecurity Frameworks: This one’s table stakes. Double down on what you own. That means well-tested playbooks, training that sticks, and threat modeling that doesn’t just check a box. If your incident response plan hasn’t been reviewed since your last compliance audit, it’s not a plan, it’s a liability.
  • Engage in State-Level Initiatives: It might not be exciting, and yes, it might feel like volunteering for a committee with no coffee, but this is where policy gets baked. Being in the room means you can steer things a little. Being outside the room means you’re stuck living with whatever gets passed. Pick your pain.
  • Foster Industry Collaboration: If the feds aren’t holding the flashlight anymore, we’ll need to light it ourselves. Start local. Pull together a threat-sharing group. Sync on near misses. Share real playbooks, not sanitized slide decks. It’s not about building consensus, it’s about staying in the fight together.
  • Monitor Regulatory Developments: This one’s not optional. With states going full-speed in different directions, the only way to stay out of trouble is to get proactive. Assign ownership. Track changes. Don’t wait for your legal team to panic after a breach disclosure deadline passes. If you don’t have a handle on the regulatory chessboard, you’re already in check.

As we navigate this new era of decentralized cybersecurity, the simplest and most effective move might just be reversing course on some of the recent federal cuts, especially those that hit information sharing, coordination, and standards development. Restoring funding to CISA, CIPAC, and NIST isn’t a silver bullet, but it buys time and stability while we figure out what a more distributed future should actually look like.

If that’s not in the cards, politically or financially, then we need new models. The ISAC approach has always been a strong foundation, but it has to evolve. Right now, it’s priced out of reach for a lot of the organizations that need it most. What we need is a modernized framework for collaboration: open-access, lower cost, built to scale for the reality most small and midsize companies live in. Maybe it’s clearinghouses. Maybe it’s subsidized platforms. Maybe AI finally earns its keep. Whatever it is, it has to prioritize shared visibility and actionability, not just slick dashboards.

We’ve protected ecosystems before, digital and otherwise. It doesn’t require perfection. Just coordination. Whether through renewed federal investment or smarter industry-led alternatives, we have to reconnect the threads before they snap. Because if they do, we won’t just be reacting to the next breach, we’ll be rebuilding from it.



FFFFFFFound in the archive

I was cleaning up my hard drive when I found an unpublished blog post I had written in 2008 during my stint at American Airlines as an information security architect. The funny thing is that my views here have stood the test of time during my career as a security professional. If someone asked me to write on this topic in 2025 (17 years after I first wrote it), I don’t know if my current take would be much different. It would be way saltier and cynical, but otherwise unchanged.

Tap Dancing in a Minefield
or
How to be an Effective Security Professional

I am learning very quickly that an information security professional must wear many hats and be a subject matter expert on a wide array of subjects. But ultimately, no matter how much training and how many controls and policies are put into place, the effectiveness of a security pro will be measured by the amount of buy-in the business has to security as a concept and how much security is on the minds of architects, developers, and administrators. If you don’t have their cooperation, you will find yourself chasing down and trying to correct fundamental problems that could have been prevented in the early stages of the life cycle. You will have a group of very talented people trying to find ways around your controls and policies from within your enterprise. Definitely not a win-win. So, how do security professionals deal with this issue? How do we get everyone on the same page to march toward a more secure enterprise? I find that there are times when the art of social engineering comes in quite handy when dealing with people within your own company. Depending on the ego, excuse me, person I am dealing with and my knowledge of the subject, I will employ some of the following techniques in order to better secure applications and systems from the ground up.


Let’s discuss what the options are…
I use this one when people come to me early in the SDLC. I want to encourage this behavior, so I will absolutely let the developers and architects help guide the security profile of their project. I actually enjoy this method the most as I learn the most from discussing why certain security features or functions may not be feasible, but something I hadn’t thought of could be used in its place.

If you were going to get into this app, how would you do it?
This is for the times when I am shown a completed architecture but am a little unfamiliar with the technology deployed, or when I don’t see any apparent weakness in the proposal. I don’t use it that often, but when I do, architects and developers typically have fun with the question and come up with some pretty wild scenarios I would never lose sleep over.


The Socratic Method (or Doing the Machiavelli)
This technique is for the times when I am given an architecture or proposal that has holes that are pretty easy to see, or when I know which direction I want to head in but don’t want to issue edicts. I will start asking pointed questions to get the architects and developers thinking on a certain track, slowly (sometimes painfully) leading them to the design flaws or the inherent weaknesses. Once I get them to see the issue, I will begin another line of questioning that moves them toward the solution I think is best. Of course, I will listen to them if they convince me that my fears are unfounded or that there are mitigating controls I don’t see or didn’t know about.

What do you want me to say?
I took this one straight from the auditors’ playbook. Sometimes, political pressures are put upon architects and developers to do things they know are wrong. So they come to me looking for my disapproval along with the rationale they can take back to their superiors. Sometimes, the security professional’s role is that of the official bad guy, and who am I to disappoint?

If we could tear this down and start over…
This one is reserved for legacy systems that are being refreshed using the same design and architecture as the previous version. Whether the reason is technical constraints, political pressure, or intellectual laziness, I try to reinvent the wheel whenever possible, so I use this method to get the creative juices of the developers and architects going. Sometimes, the architecture gets approved as-is with minimal changes, but I have also seen complete redesigns after I have posed this question in a meeting.

You will respect my authoritah!
This is the nuclear option for security professionals. I have to use this one every so often, but I have it at the ready at all times. If a project manager, developer, business unit manager, etc., is not willing to budge on their proposal despite my attempts to get them to make their system more secure, I will dig in my heels, draw the proverbial line in the sand, and basically force someone above my head to override me. I guess, in some ways, it’s a CYA maneuver that shifts the blame if something goes wrong and my vocalized fears are realized.


This is a very short list, and most of the time, I combine more than one of the above, along with others I haven’t included. Securing a large enterprise is a difficult task that cannot be solved with technology alone. I am fairly confident I will write about this subject in the future, perhaps with some concrete examples of how I used one of these techniques to turn an insecure proposal into a (more) secure system.

2008 Dan at Work


Secure the Vibe

Photo by Raimond Klavins on Unsplash

Let’s talk about a phenomenon that’s been creeping into codebases like weeds into my garden: vibe coding — a now-mainstream dev habit of writing code that feels intuitively correct, even when it skips over documentation, testing, or basic security checks. It’s the developer equivalent of jazz: all improvisation, no sheet music. Wikipedia defines it as writing code in a way that feels right, rather than strictly following specs, best practices or (let’s be honest) any actual documentation. There might be a vague understanding of the goal, but the execution? That’s all vibes, baby.

It’s become trendy. Somewhere along the way, dev culture started glamorizing the lone wolf hacker-genius who just “feels the code” — and now, in the age of autocomplete and generative AI, that intuition is often coming from a language model, not even the dev themself.

Developers are shipping MVPs built with half-understood frameworks, pasted-in code from decade-old forums, and function names that read like inside jokes. And hey — sometimes it actually works. But more often, it leaves behind a trail of bugs, breaches, and confused engineers wondering how their system got turned into Swiss cheese.


This new generation of vibe coding skips past all the boring stuff: planning, documenting, designing. But in 2025, it’s not just guts and caffeine driving this trend — it’s generative AI.

Developers aren’t making all these decisions themselves; they’re just riding the autocomplete wave. It’s like building a rocket out of duct tape because you asked your chatbot for blueprints and didn’t even bother double-checking its work. Sure, you might get off the ground — but will it land? Will it explode on reentry?

The core problem? No one stops to ask, “What happens if this fails?” or “How could someone abuse this?”

Security isn’t just missing from the checklist — there is no checklist. It’s all about speed. Vibe coders are shipping what the AI suggests with the confidence of someone who definitely did not write tests. I’ve seen production APIs with no rate limiting, no auth, and a data model that apparently communicates only in base64-encoded emoji. And when you ask why, the answer is something like, “It just felt cleaner this way.”
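For the record, rate limiting is the kind of "boring stuff" that fits in a page of code, not a framework migration. Here is a minimal token-bucket sketch in Python; the class name, rates, and usage are illustrative, not from any particular API:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/sec, burst up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 1 request/sec sustained, burst of 10. Fire 15 requests back to back:
bucket = TokenBucket(rate=1, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results.count(True))   # → 10: the burst passes, the rest are throttled
```

Gate each API handler on `bucket.allow()` (one bucket per client key) and the "no rate limiting" finding disappears for the cost of about twenty lines.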


Now, before the security folks start pointing fingers, let’s take a quick walk down memory lane. Infosec’s been vibing for years.

Remember when “just say no” was considered a valid strategy? Instead of doing actual risk assessments or modeling threats, security pros (myself included) would just deny the request outright or force an app into a design that was either over-protective or made no damn sense.

And when the friction got too high? We weren’t enabling secure solutions — we were creating the exact conditions that led to workarounds, misconfigurations, and a rise in shadow IT.

That’s the thing about security vibing: it skews hard toward overcorrection. We say no ’cause it’s easier than actually understanding a complex system. We throw in blanket restrictions because nuance takes time. We confuse rigidity with resilience.

This approach doesn’t prevent breaches. It delays delivery, alienates devs, and eventually backfires. When teams feel like they can’t count on security for help, they stop asking. They build without us. They operate in the dark. And ironically — we end up less secure.

We vibed our way through security decisions with the same gut-feel gusto we now criticize in developers.

“I don’t think that app should talk to the internet.”

“Just deny all and allow the exceptions.”

Entire security architectures have been built on not much more than vibes and the hope that the business doesn’t ask the hard questions.


Both dev and security cultures suffer from the same over-romanticization of cleverness. There’s this weird prestige in being the person who can do it all from memory — who doesn’t document anything because “they get it.”

But cleverness without discipline is like a ship with no rudder.

In dev, that means fragile codebases only the original author understands — and they just left for a startup. In security, it means inconsistent controls that look impressive but don’t actually work.

Worse — it means defenses that are brittle. Policies that get in the way instead of enabling safe behavior. Devs start routing around them. Shadow IT starts popping up. Because the official path is too slow, too strict, or too confusing.

Cleverness, unchecked by empathy and collaboration, builds walls — not bridges.

The fallout? Security becomes the department of “no.” Developers stop asking. The business tunes us out because we’ve cried wolf too many times without providing clear value. So while we roll our eyes at unreviewed code in prod, we also need to admit when our own clever shortcuts contributed to the chaos. We’ve all vibed before.


It’s time to do better.


Insecure code. Breaches. Delays. Compliance headaches. That’s the invoice vibe coding sends when it finally catches up with you.

It’s easy to miss the cost because the speed feels good. You’re hitting milestones, shipping features, getting high-fives. But that speed is usually hiding a mountain of tech debt — and worse, a pile of unaddressed security risk.

The truth is: good code should feel boring. Secure design should feel slow.

Documentation, testing, threat modeling, code reviews — these are not glamorous, but they’re what let you sleep at night knowing your containers won’t be out there on the public internet hemorrhaging data to anyone who knocks.

And just when the vibes couldn’t get more chaotic, here comes generative AI.


Generative AI tools like GitHub Copilot and ChatGPT have poured jet fuel on the vibe coding trend.

Now devs aren’t just skipping the fundamentals — they’re shipping code they don’t fully understand. Ask any developer who auto-completed their way through an entire function if they can explain it line by line. You’ll get a shrug and a nervous laugh.

That’s the danger. The AI doesn’t know your threat model. It doesn’t know your infra. It doesn’t know your customer data shouldn’t be logged in plaintext or that the code it just spit out skips verifying JWT signatures.

It hands you something that resembles code, delivered with the misplaced swagger of an intern who watched one YouTube video and now thinks they’re Linus Torvalds. (bless ’em, they really do try.)

A 2022 study by Stanford and NYU showed GitHub Copilot suggested code with security vulns in 39.33% of test scenarios. Input sanitization issues, broken crypto, missing auth checks — the hits keep coming. And devs just shipped it. Because it looked right.

These aren’t edge cases — these are common patterns.

Security teams need to be watching this closely. Code generation is not a shortcut if it creates silent vulnerabilities. If anything, it demands more scrutiny. The code might be from an AI, but that doesn’t mean it earned your trust.
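To make the signature-skipping failure concrete, here is a stdlib-only sketch. The helper names are hypothetical and an HMAC tag stands in for a real JWT signature, but the bug is the same one: a decode path that reads the payload and never checks the tag will happily accept a forged token.

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # illustrative key, not a real secret-management scheme

def sign(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + tag

def decode_insecure(token: str) -> dict:
    # The vibe-coded version: parses the payload, never checks the signature.
    body, _tag = token.split(".")
    return json.loads(base64.urlsafe_b64decode(body))

def decode_verified(token: str) -> dict:
    body, tag = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "alice", "admin": False})
# Forge: swap in an admin payload but keep the old signature.
evil = json.dumps({"user": "alice", "admin": True}).encode()
forged = base64.urlsafe_b64encode(evil).decode() + "." + token.split(".")[1]

print(decode_insecure(forged))   # happily returns admin=True
try:
    decode_verified(forged)
except ValueError as e:
    print("rejected:", e)
```

The two decoders differ by three lines, which is exactly why the insecure one sails through review when nobody is looking for it.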


Review it. Test it. Validate it. Assume nothing.


We’re not doomed to vibe forever. (Though some of y’all are really testing that theory.)

Some teams have figured this out. In a past life (2015–2018-ish), my team was building an AppSec program with lightweight threat modeling, pre-merge checks, and security champions embedded with devs — all designed to bake in security early, not bolt it on after the fact.

At OWASP Global AppSec 2023, Twilio shared a similar story: embedding security directly into CI/CD pipelines without killing dev velocity. The result? Fewer bugs, faster shipping, happier teams.

We didn’t kill the vibe. We channeled it.


Vibe coding might get you to MVP. But it won’t save you from the breach report that starts with:

“The exposed S3 bucket was traced back to an undocumented service running in production.”

The truth is — it’s not always the zero-days getting us. It’s the old stuff. The gut calls. The undocumented configs. The AI-suggested snippets we didn’t look too closely at.

But we’re not powerless. We know how to do this right.

Threat modeling ain’t rocket science. Reviewing AI code is not optional. Shipping fast is fine — as long as you’re not shipping security incidents to your customers.

So the next time the code feels done but looks like improv jazz — don’t ship the vibes.


Interrogate the decisions. Validate the assumptions.

Make sure your gut is not dragging you into prod without a seatbelt.

Because vibes aren’t security.


And clever isn’t coverage.


Follow-Up: Yes, Risk Assessments Matter. No, Accepting Risk Isn’t a Strategy.

First off, I’m thrilled the original post hit a nerve—the good kind, mostly. But it seems some folks thought I was saying we should throw risk assessments out the window and just YOLO our way through cybersecurity. That was not the point.

So let me clarify: Risk assessments are absolutely a core part of any real security program. If you don’t understand your environment, your assets, your exposure, and what could go wrong, then you’re not doing security—you’re just playing whack-a-mole with firewalls.

But here’s where things go off the rails: somewhere along the way, risk management became synonymous with security. And even worse, we decided that accepting risk was just as valid as mitigating it.

Security vs. Risk Management: Two Different Jobs

Let’s break it down:

  • Risk Management is about understanding, evaluating, and prioritizing risk. It’s essential for making informed decisions. Think of it like an insurance underwriter—calculate the odds, assign a value, and document the risk.
  • Security is about stopping bad things from happening. It’s technical. It’s operational. It’s messy. It’s hands-on. And it has one goal: reduce the likelihood and impact of a successful attack.

If your security team is acting like a bunch of underwriters instead of defenders, you’re gonna have a bad time.

The Risk Acceptance Trap

Accepting risk is not inherently wrong—but it should be the exception, not the plan. It should be reserved for cases where:

  • There is no feasible technical mitigation.
  • The exposure is limited and monitored.
  • Leadership has made an informed, time-bound decision.

And let me be blunt here: risk acceptance should never happen without the CISO’s strong objection being documented, discussed, and understood. If security leaders are just rubber-stamping business decisions to accept risk, then we’ve traded our role as protectors for the role of note-takers. That’s not why we’re here.

The CISO’s job is to say, loudly and clearly, “This is a bad idea.” And if the business still chooses to accept the risk? Fine. But everyone should walk away from that meeting knowing full well that the defender of the realm raised the red flag.

Far too often, risk acceptance becomes the default:

  • “We don’t have budget for that.” Accept the risk.
  • “It’s been like that for years.” Accept the risk.
  • “The vendor said it’s fine.” Accept the risk.

At that point, you’re not managing risk—you’re just giving up and filing paperwork to make it look intentional.

What Security Actually Is

Let’s get back to basics. Security is:

  • Hygiene: Patching, hardening configs, removing legacy crap, and eliminating known weaknesses.
  • Preventative Controls: Least privilege, segmentation, MFA, EDR, and all the other stuff that makes it harder for attackers to succeed.
  • Detection and Response: Assume breach, monitor everything, and respond quickly when something goes sideways.

Notice what’s not in that list? Writing down the problem and moving on.

And yes, before anyone lights up the comments again—risk quantification has its place. We absolutely need to understand potential impact and likelihood so we can prioritize wisely, especially when we’re fighting for resources. But let’s not confuse calculating the odds with actually reducing them. Security isn’t just an actuarial exercise—it’s a contact sport.

Risk Assessments Should Drive Action

A good risk assessment isn’t an end-state. It’s a call to arms. It should say:

  • Here’s what’s broken.
  • Here’s how bad it is.
  • Here’s how we’re fixing it.

And if we can’t fix it, we document it while actively trying to eliminate or reduce it.

Security is a fight. It’s a grind. It’s never-ending. But it’s not an actuarial exercise. If the output of your security team is a risk register with a long list of “accepted” items and very few actual changes to your attack surface, you’re not defending—you’re deferring.

The Bottom Line

Yes, assess your risks. No, don’t accept them by default. Security is about changing the outcome, not predicting it.

So let’s stop acting like underwriters and start acting like defenders. Let’s use risk assessments as fuel for action, not an excuse for inaction.

Now, back to work. That patch isn’t going to deploy itself.

Hackers Don’t Check the Risk Register

Why Over-Reliance on Risk Management is Hurting Cybersecurity

Imagine you’re gearing up for a fight. You know your opponent’s strengths, weaknesses, and favorite moves. But instead of training, sharpening your reflexes, and reinforcing your defenses, you meticulously write down everything you know in a notebook and call it a day. When the fight starts, you get knocked out in five seconds because, you guessed it, your opponent didn’t read your notes.

Welcome to modern cybersecurity, where organizations are so focused on documenting risks that they forget to actually defend themselves.

The Paper Fortress

Risk management, in theory, is supposed to make us more secure. Identify risks, assess impact, document them in a risk register, and take action. Sounds great, right? Except we’ve stopped at the documentation phase. The modern enterprise security program has become obsessed with spreadsheets, risk scores, and governance workflows while attackers gleefully take advantage of gaping security holes that remain unfixed.

Let me make this painfully clear: writing down that something is risky does not mitigate the risk. Documenting your exposure to ransomware doesn’t stop ransomware. Acknowledging that shadow IT exists in your environment doesn’t prevent your developers from deploying unpatched applications. Treating security as a paperwork exercise is like buying a fire extinguisher and never learning how to use it; compliance box checked, still completely unprepared.

The Illusion of Risk Treatment

The fundamental problem is that risk registers have become the de facto substitute for security action rather than a tool to enable it. In most organizations, risk treatment means either:

  1. Accepting the risk because it’s not currently on fire.
  2. Mitigating the risk with more documentation, policies, or training.
  3. Transferring the risk to a vendor or insurance company.

Notice what’s missing? Actually fixing the damn issue!

Organizations are spending more time on risk analysis than they are on asset and vulnerability management. They’re modeling threats instead of deploying zero trust controls. They’re debating impact levels while attackers are already inside their networks, undetected. Risk documentation should be an output of security efforts, not the primary focus.

The Real Alternative: Attack Surface Management and Active Defense

Security should not be a paper exercise. It should be a technical, hands-on discipline that prioritizes understanding your attack surface and minimizing opportunities for exploitation. Instead of over-indexing on risk registers, security teams should be focusing on:

1. Asset and Vulnerability Management: Know What You Own and Secure It

You can’t defend what you don’t know exists. Organizations need real-time asset discovery, vulnerability scanning, and automated patching, not just a line item in a risk register that says “Legacy servers pose a high risk.” If it’s high risk, fix it or remove it.

2. Zero Trust: Assume Breach and Lock Down Access

Instead of acknowledging in a risk review that “Users have excessive privileges,” enforce least privilege access, implement strong authentication, and monitor every access request. Trust nothing, verify everything, and design your architecture like an adversary is already inside.

3. Rapid Detection and Response: Stop Dwelling on Risks and Start Catching Attackers

Attackers aren’t waiting for you to finish a quarterly risk review. Invest in security operations, real-time detection, and automated response capabilities. Every security program should have robust endpoint detection, SIEM correlation, and an incident response playbook that’s actually tested, not just sitting in Confluence.

4. Configuration Hardening and Secure Defaults: Break the Attack Chain Before It Starts

Most successful attacks don’t involve zero-day exploits; they take advantage of bad configurations, weak passwords, and unpatched software. Instead of noting “Default credentials present a high risk,” enforce strong password policies, disable unnecessary services, and remove insecure legacy protocols.

Risk Registers: The Exception, Not the Rule

To be clear, I’m not saying we should burn the risk register (as cathartic as that would be). What I am saying is that the risk register should not be the centerpiece of your security program. It should be a supporting document, capturing edge cases where technical risks cannot be fully mitigated by technical controls. That should be the exception, not the default approach.

Security leaders should be asking: What have we done about this risk today? If the answer is “We documented it,” then congratulations, you’ve done nothing. The only acceptable answers should be:

  • We fixed it.
  • We implemented a technical control to reduce the exposure.
  • We have active monitoring in place and can detect and respond immediately.

If none of these are true, it’s not risk management, it’s risk procrastination.

Less Paper, More Security

Hackers don’t check the risk register before launching an attack. They don’t care how well you’ve documented your risks. They care about your unpatched servers, your exposed cloud assets, your weak credentials, and your unmonitored attack paths.


Security isn’t about documenting risk; it’s about reducing it. If your security program is primarily focused on risk registers rather than reducing your attack surface, implementing zero trust, and accelerating detection and response, then you’re not protecting the business. You’re just writing about how you could have.


The time for security theater is over, so close the spreadsheet and secure the network. Most of all, stop treating risk documentation as a security control.

The Real Cost of AI Isn’t Just the Price Tag

OpenAI’s rumored plan to charge $20,000 a month for “PhD-level” AI agents is making headlines, but the real concern isn’t the price—it’s the implications. This leak feels like a market test, a way to gauge reactions and refine pricing. But the bigger issue is what happens after AI reaches that perfect price point.

AI at this level will replace highly skilled roles at scale, fundamentally reshaping industries. That’s a given. What happens when AI handles all the critical thinking and high-end decision-making? Will we still have a way to train future experts?

We assume AI will continue to improve, but true breakthroughs come from experimentation, failure, and human insight. If we hand over too much to AI without keeping the means of learning and discovery open, we risk entering an era of stagnation.

This isn’t just about who can afford AI—it’s about who gets to develop the next big ideas.

We need to think carefully about what expertise means in an AI-driven world. If we don’t, we may find ourselves in a future where we still depend on certain types of work but have lost the ability to do it ourselves.


Lateral Movement is Ludicrous Speed, and Your Security Needs to Keep Up

Cyberattacks are no longer slow, methodical heists. They’re smash-and-grab operations at ludicrous speed. According to ReliaQuest, attackers can begin moving laterally inside your network in as little as 27 minutes, with an average breakout time of 48 minutes. That’s less time than it takes to find a movie on Netflix, decide you don’t actually want to watch it, and end up scrolling TikTok instead.

Bad News for Blue Team

Let’s put that in perspective: If someone broke into your office, you’d probably notice. But in the digital world, attackers waltz through your infrastructure faster than IT can finish its morning coffee, and before you even realize something’s up, they’re two departments over stealing sensitive data like it’s a Black Friday sale.

Ludicrous Speed? Ludicrous Speed!

So, what’s making lateral movement so fast? A few culprits:

1. The IoT Dumpster Fire

The Internet of Things (IoT) was supposed to make our lives smarter. Instead, it turned enterprise security into an all-you-can-hack buffet. Zscaler’s ThreatLabz reports a 45% year-over-year spike in IoT malware attacks—because, shocker, all those smart fridges and connected coffee makers aren’t exactly Fort Knox.

It’s not just consumer gadgets either. Businesses are drowning in IoT devices: security cameras, smart lighting, even industrial sensors. And since many of these barely have security baked in, they make for fantastic entry points. Attackers love them. It’s like leaving your back door open with a neon sign that says “Come on in, we’ve got data!”

No! Not the coffee!!!

Why do companies still use insecure IoT? Because ripping out “smart” security cameras and replacing them with secured ones is expensive, and nothing says “we care about cybersecurity” like a budget meeting that ends with ‘just accept the risk.’

2. AI is Helping the Bad Guys Too

Generative AI (GAI) and machine learning were supposed to make security smarter. Instead, they’re arming attackers with better, faster, and more creative ways to mess with your network.

AI Powered Attacks

The UK’s National Cyber Security Centre (NCSC) is already warning that AI is refining malware, helping hackers find vulnerabilities, and accelerating lateral movement. In other words, attackers aren’t just guessing where to go next—they’re letting AI crunch the numbers for them. It’s like having an evil Siri whispering, “I found five unpatched servers nearby. Would you like to exploit them now or set a reminder for later?”

And here’s the kicker—we’re making it easy for them.

Compounding these challenges is the persistent issue of unpatched software vulnerabilities. Despite the availability of patches, many organizations delay or neglect their implementation, leaving systems wide open. A 2019 Ponemon Institute survey found that 60% of breach victims were compromised due to known vulnerabilities they hadn’t patched. That’s right—attackers didn’t need cutting-edge zero-days, just lazy IT processes.

If you’re still rolling your eyes at the idea of patching, just remember: hackers don’t break in, they log in—often through a vulnerability that had a fix available six months ago.

3. Your “Perimeter” is Dead. Has Been for a While.

Remember when security was all about firewalls and VPNs? Good times. Too bad attackers don’t care about perimeters anymore. If your security plan still assumes you can keep threats outside the walls, you’re playing medieval defense in an age of cyber ninjas who teleport straight to your database.

Actual footage of a cyber ninja gaining access to your Oracle 11g cluster

Once attackers get inside, they don’t need to break anything—they just move sideways, blending in like an employee who “forgot their badge” but somehow has full admin access. And since most companies still have way too much implicit trust inside their networks, stopping lateral movement is basically hoping attackers will trip over a cable and knock themselves out.

The Fix: Zero Trust, Because Trust is for Suckers

If attackers are moving faster than ever, then the security model needs a serious upgrade. That’s where Zero Trust Architecture (ZTA) comes in.

Zero Trust operates on a simple but brutal philosophy: “Trust no one. Verify everything.” Every user, every device, every request—doesn’t matter if it’s coming from inside the network or outside, it gets scrutinized. The goal? Make lateral movement a nightmare for attackers.

With Zero Trust, your network stops being a big, open-plan office and becomes a series of locked rooms, where every door requires verification. An attacker getting in is bad, sure, but if they can’t go anywhere, you’ve already won half the battle.
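The "every door requires verification" idea reduces to a per-request authorization check that deliberately ignores network location. A toy sketch in Python (the `Request` fields and entitlements table are invented for illustration; real deployments use an identity provider and policy engine):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool
    resource: str
    source: str  # "internal" or "external" — deliberately unused below

# Hypothetical least-privilege entitlements, per resource.
ENTITLEMENTS = {"alice": {"payroll-db"}, "bob": {"wiki"}}

def authorize(req: Request) -> bool:
    # Zero Trust: every request is verified; being "inside" buys nothing.
    return (
        req.mfa_passed
        and req.device_compliant
        and req.resource in ENTITLEMENTS.get(req.user, set())
    )

# An internal request with no entitlement is denied just like an external one.
print(authorize(Request("bob", True, True, "payroll-db", "internal")))    # False
print(authorize(Request("alice", True, True, "payroll-db", "external")))  # True
```

Note that `source` never appears in the decision: that omission, not any single product, is the core of the model.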

The Other Fix: You Need Eyes Everywhere

Zero Trust isn’t enough on its own. You also need detection and response that works in real time—because no matter how good your defenses are, something will slip through. That’s where Managed Detection and Response (MDR) comes in.

MDR is like having a team of hyper-caffeinated security analysts and AI-powered sensors watching your network 24/7. It spots anomalies, flags sketchy behavior, and calls in the cavalry before attackers can get too comfy.

Here’s the winning combo:
✔ Zero Trust locks attackers down.
✔ MDR catches them in the act.
✔ You stop lateral movement before it turns into a disaster.

Final Thought: Slow Security is Dead Security

They say “slow and steady wins the race.” That’s great for tortoises, but in cybersecurity? Slow gets you breached.

The Tortoise Doesn’t Stand a Chance

If your security posture isn’t as fast and adaptive as the attackers coming for you, you’re already behind. The only way forward is to embrace Zero Trust, adopt real-time detection, and stop thinking like it’s 2015. Because in this fight, speed isn’t optional—it’s survival.

Now, go lock down your network before an attacker finishes this article, takes a sip of Red Bull, and owns your Active Directory.

How Legacy Code Confounds Modern Audits

I was writing a post last week about the misunderstandings happening after some young whippersnappers started poking around in COBOL. So, after a 25-year hiatus from the language, I decided to have some fun and take a trip down memory lane. It took me a few days filled with lots of “oh yeah, now I remember” moments. I know I had more fun writing this than you’ll have reading it! 😂😂😂

IDENTIFICATION DIVISION.
PROGRAM-ID. COBOL-LEGACY-INSIGHTS.
AUTHOR. Dan G.

ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
SOURCE-COMPUTER. IBM-7090.
OBJECT-COMPUTER. IBM-7090.

DATA DIVISION.
WORKING-STORAGE SECTION.
01 US-GOVERNMENT-SYSTEMS.
   05 SSA-SYSTEM      PIC X(10) VALUE 'ACTIVE'.
   05 IRS-SYSTEM      PIC X(10) VALUE 'ACTIVE'.

01 COBOL-EXPERTISE    PIC X(70) VALUE 'Master of Science in IT with extensive graduate COBOL coursework.'.

01 BENEFICIARY-AGE    PIC 9(3) VALUE 150.
01 VALID-AGE          PIC 9(3) VALUE 120.
01 AGE-ERROR-FLAG     PIC X VALUE 'N'.

PROCEDURE DIVISION.
BEGIN.
    DISPLAY 'COBOL—the programming language that refuses to retire, much like many members of the US Congress.'
    DISPLAY 'Despite being over six decades old, COBOL remains the backbone of many U.S. government systems, especially in agencies like the Social Security Administration (SSA) and the Internal Revenue Service (IRS).'
    DISPLAY 'Its reliability and efficiency in handling large-scale data processing have kept it in play, even as newer languages have emerged.'

    DISPLAY 'Recently, the Trump administration''s Department of Government Efficiency (DOGE) decided to take a deep dive into these legacy systems.'
    DISPLAY 'Armed with enthusiasm but perhaps lacking a COBOL-to-English dictionary, they stumbled upon records indicating beneficiaries aged 150 years or more.'
    DISPLAY 'Cue the headlines: "Millions of Dead People on Social Security?"'
    DISPLAY 'In reality, these eyebrow-raising ages are often placeholders resulting from missing or incomplete data in the antiquated COBOL-based systems.'
    DISPLAY 'It''s like finding a VHS tape and assuming it''s a high-tech security threat.'

    IF BENEFICIARY-AGE > VALID-AGE THEN
        MOVE 'Y' TO AGE-ERROR-FLAG
    END-IF.

    IF AGE-ERROR-FLAG = 'Y' THEN
        DISPLAY 'Error: Beneficiary age exceeds valid range. Likely due to placeholder date.'
    ELSE
        DISPLAY 'Beneficiary age is within the valid range.'
    END-IF.

    DISPLAY 'This misinterpretation highlights a critical issue: while COBOL systems are sturdy workhorses, they require knowledgeable handlers.'
    DISPLAY 'Without proper understanding, attempts to modernize or audit these systems can lead to more confusion than clarity.'
    DISPLAY 'It''s a bit like handing a smartphone to a caveman; without context, things can get messy.'

    DISPLAY 'The root of the "150-year-old" issue lies in COBOL''s handling of dates.'
    DISPLAY 'COBOL lacks a native date data type, so programmers often used a reference date, such as May 20, 1875—the date of the "Convention du Mètre," an international treaty that established a standardized metric system.'
    DISPLAY 'So when birthdate information is missing or incomplete, the system defaults to this reference point, making individuals appear 150 years old in the year 2025.'
    DISPLAY 'This quirk is a result of historical programming practices rather than evidence of fraud.'

    DISPLAY 'To prevent such misunderstandings and enhance efficiency, the focus should be on modernizing and transforming these legacy systems.'
    DISPLAY 'Updating the underlying code and data structures will not only resolve anomalies like the "150-year-old" issue but also improve overall system performance and security.'
    DISPLAY 'Modernization efforts are essential to ensure that critical government services operate accurately and efficiently in today''s technological landscape.'

    STOP RUN.
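For readers who don’t speak COBOL, the placeholder-date arithmetic those DISPLAY statements describe boils down to a few lines. A Python sketch of the same quirk (the 1875 epoch is the Convention du Mètre reference date from the post; the function name is mine):

```python
from datetime import date
from typing import Optional

EPOCH = date(1875, 5, 20)  # placeholder birthdate used when records are incomplete

def apparent_age(birth: Optional[date], today: date) -> int:
    # A missing birthdate silently falls back to the placeholder,
    # which is how a beneficiary "turns 150" in 2025.
    b = birth or EPOCH
    return today.year - b.year - ((today.month, today.day) < (b.month, b.day))

print(apparent_age(None, date(2025, 5, 20)))  # → 150
```

The record isn’t fraudulent; the sentinel value just leaked into a report that treated it as data.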