Just last weekend, I went to a new restaurant and, scanning the table for a menu, was handed a coaster with a pixelated square. Groaning, I pulled out my phone, held my breath, and scanned the code with my camera. While convenient, QR codes are risky. They can hide malicious links that lead to phishing sites or trigger unauthorized actions. Unlike traditional URLs, QR codes don’t reveal their destination, making it easier for bad actors to exploit them.
Cybercriminals have been abusing QR code-based authentication in apps like Signal, WhatsApp, and Discord. In Signal’s case, Russian-backed hackers have crafted malicious QR codes that, when scanned, link a victim’s account to an attacker-controlled device—allowing real-time message interception. WhatsApp users have also been targeted through phishing attacks with deceptive QR codes, compromising account security. Even Discord has seen scammers distribute fraudulent QR codes promising free perks, which, when scanned, grant attackers unauthorized access. These incidents underscore the vulnerabilities inherent in QR code-based authentication across platforms.
You’d forgive my double-take when I heard Google plans to replace SMS-based two-factor authentication with QR codes for Google sites. While I’m all for ending SMS-based 2FA, swapping one phish-prone system for another isn’t a clear win.
If Google is modeling their system after Steam, they are mitigating some of the most egregious flaws, but not all. Steam’s QR authentication uses dynamic codes that expire every 30 seconds to limit misuse. Still, concerns persist. Bypassing the traditional approve/deny prompt in the Steam Guard Mobile Authenticator could make unauthorized access easier if your phone is compromised, and weak encryption in some implementations might let attackers forge codes for phishing.
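Steam doesn't publish its exact scheme, but 30-second rotating codes are typically derived the way TOTP (RFC 6238) derives them: an HMAC over a time-step counter, truncated to a short code. A minimal sketch, with a placeholder secret, of how such a short-lived code can be generated:

```python
import hashlib
import hmac
import struct
import time

def rotating_code(secret, period=30, now=None):
    """Derive a short-lived numeric code from a shared secret (TOTP-style).

    The code changes every `period` seconds, so a captured code or QR
    payload goes stale quickly -- the property 30-second expiry relies on.
    """
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

# Two scans inside the same 30-second window yield the same code;
# the next window yields a fresh one.
```

Note what expiry does and doesn't buy you: it limits *replay* of a captured code, but it does nothing against an attacker who relays the code to the victim in real time, which is exactly the device-linking trick described above.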
We’ll see what the future holds for QR-based authentication, but unless there’s a fundamental change in how these codes are generated, scanned, and secured, they won’t become the panacea for digital security.
There’s something about kicking people when they’re down that really rubs me the wrong way.
North Korean hackers are targeting freelance developers with fake job interviews, tricking them into installing malware. Attackers know that job seekers want to impress, so they use urgency and too-good-to-be-true offers to lure victims in. The red flags? That very urgency, and offers that seem too good to be true.
I’m thrilled to share that I’ll be speaking at the RSA Conference Cyber Leaders Forum, serving on the program committee and presenting two “Hot Topics” (short, interactive sessions): 🔹 How to Partner with the CFO 🔹 Maximizing Your Cybersecurity Investments
Those of us in security leadership know that success isn’t just about deploying tech; it’s about ensuring security is viewed as a business enabler and that our investments deliver value.
I’m honored to be part of such an incredible program and can’t wait to connect with fellow cyber leaders. See you there!
The World Economic Forum’s Global Cybersecurity Outlook 2025 report highlights a concerning trend: despite the escalating risks posed by traditional and AI-driven attacks, many companies are still disturbingly complacent about cybercrime. The disconnect between stated risk tolerance and actual security posture not only jeopardizes sensitive data but also undermines stakeholder trust and organizational resilience.
The one-time costs of breaches can be existential for a company, so it’s important that cybersecurity is consistently funded as a part of doing business. Deploying effective security technologies, ensuring all systems are well maintained, investing in employee training, and conducting regular risk assessments are essential to fortify business operations against increasingly sophisticated cyber threats.
The only way to keep complacency from creeping back in is consistent, strategic investment in security—not just when an incident happens, but every single day. Senior leaders need to trust their CISO to make the right risk-based calls on where to focus time, talent, and budget. Security isn’t a one-time fix; it’s an ongoing commitment to staying ahead of threats before they become headlines.
There’s nothing like bonding over science with your seventh grader to make you feel both proud and profoundly inadequate. My son and I recently tackled his honors science project by diving headfirst into machine learning (ML) and exoplanet hunting. It was a bold move. I mean, who doesn’t want to turn a simple middle school project into a crash course on Python-based ML on Linux? As it turns out, finding exoplanets in a sea of “nothing-to-see-here” light curve data wasn’t just challenging — it was humbling.
Here’s the kicker: our struggles with this project made me realize something bigger — cybersecurity and exoplanet detection have a lot in common, and not in the “NASA uses Wi-Fi too” kind of way. Both involve sifting through an overwhelming amount of data, looking for that one-in-a-million anomaly, and both face a serious imbalance problem when it comes to training AI models.
So let’s take a journey through exoplanets, ML models, cybersecurity, and why AI might just be a hacker’s best friend.
The Pillars of Creation
How It Started: A Python, a CNN, and a Science Fair Walk into a Dataset
Our project began like most great adventures — with optimism and zero idea what we were getting into. The plan was simple: use light curve data from NASA’s Kepler mission to train a Python-based ML model to detect exoplanets. Light curves are like cosmic heartbeats — graphs showing how a star’s brightness changes over time. A dip in brightness might mean a planet is passing in front of the star, like a tiny celestial peekaboo.
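To make “dip in brightness” concrete, here’s a toy sketch (illustrative flux values, nothing from the actual Kepler pipeline) that flags points sitting well below a star’s baseline brightness:

```python
from statistics import mean, stdev

def find_transit_dips(flux, sigma=2.0):
    """Flag indices where brightness drops well below the star's baseline.

    A real pipeline would detrend and phase-fold the curve; this sketch
    just marks points more than `sigma` standard deviations below the
    mean, which is roughly how a transit shows up in an idealized curve.
    """
    mu, sd = mean(flux), stdev(flux)
    return [i for i, f in enumerate(flux) if f < mu - sigma * sd]

# A flat light curve with one transit-like dip at index 5 (made-up values):
curve = [1.00, 1.01, 0.99, 1.00, 1.01, 0.85, 1.00, 0.99, 1.01, 1.00]
print(find_transit_dips(curve))  # → [5]
```

Of course, real Kepler data is noisy, gappy, and full of stellar variability, which is precisely why we reached for neural networks instead of a threshold.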
We started with a feed-forward neural network (FFNN) because, well, it seemed approachable. Spoiler alert: it wasn’t. The FFNN essentially looked at the data and went, “Nah, I’m just going to guess ‘no exoplanet’ every time.” And you know what? It was technically accurate — just not helpful.
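That “technically accurate, just not helpful” trap has a name, the accuracy paradox, and it takes only a few lines to demonstrate with made-up label counts:

```python
# The accuracy paradox in miniature: with 99% negatives, a model that
# always answers "no exoplanet" scores 99% accuracy while finding nothing.
labels = [1] * 10 + [0] * 990          # 10 exoplanets, 990 non-exoplanets
predictions = [0] * len(labels)        # the lazy model's output

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / labels.count(1)

print(f"accuracy: {accuracy:.0%}")   # 99% -- looks great on paper
print(f"recall:   {recall:.0%}")     # 0%  -- finds zero exoplanets
```

This is why accuracy is the wrong metric for rare-event problems; recall and precision on the minority class tell the real story.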
Next came the heavy artillery: convolutional neural networks (CNNs). CNNs are like the Sherlock Holmes of ML, designed to pick up patterns in complex data. Still, even with a CNN, our model had one favorite prediction: “no exoplanet.” Every. Single. Time.
The Cartwheel Galaxy
Houston, We Have an Imbalance Problem
The real issue was our dataset. Exoplanet examples were outnumbered by non-exoplanet examples by about a bajillion to one (okay, a little less, but you get the idea). Machine learning models are like kids at a buffet — they’ll pick what’s easiest. In this case, it was easier for the model to just say “no exoplanet” and call it a day.
We tried everything to address this:
Class Weights: These are like putting your thumb on the scale to make the model pay more attention to the underrepresented class. Didn’t work.
SMOTE (Synthetic Minority Oversampling Technique): A fancy way of creating fake exoplanet data. Still didn’t work.
Ensemble Models: Multiple CNNs working together in a “voting” system. Better but not great.
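For the curious, the class-weight idea from the list above is simple to compute. This sketch uses the common inverse-frequency convention (the label counts are illustrative); frameworks accept such weights directly, e.g. as a per-class weighting of the loss:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by the inverse of its frequency so rare classes
    count more in the loss. Normalizing by the number of classes is one
    common convention (the same heuristic scikit-learn calls "balanced").
    """
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

labels = [0] * 990 + [1] * 10           # hypothetical 99:1 imbalance
print(inverse_frequency_weights(labels))  # class 1 weighted ~100x heavier
```

In our case the model happily absorbed the penalty and kept guessing “no exoplanet” anyway, which tells you something about how stubborn a 99:1 imbalance can be.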
No matter what we did, the imbalance problem reigned supreme.
The Cosmic Cliffs
The Cybersecurity Parallel: Attackers in a Galaxy of Defenders
Here’s where things got interesting. Our ML struggles mirrored a core challenge in cybersecurity: spotting attacks in oceans of normal behavior. Cybersecurity tools sift through terabytes of logs daily, looking for that one indicator of compromise (IoC). But like our exoplanet model, they’re battling an imbalance problem. Attack data is scarce and highly varied, while normal activity dominates the dataset.
Now imagine this: attackers leveraging AI. Offensive AI tools can learn to mimic normal behavior while crafting attacks. Think phishing emails indistinguishable from legitimate ones, or malware that adapts faster than you can say “zero-day exploit.”
Meanwhile, defensive AI tools struggle because:
Data Scarcity: Logs are packed with “no attack” entries, making true positives rare.
False Positives: Overreacting to anomalies is a quick way to lose credibility.
Adaptation: Attackers can train their tools on the same defenses, essentially gaming the system.
It’s like trying to catch a needle in a haystack when the needle has camouflage and knows how you’re searching.
Potential Exoplanets
The Models and Methods Behind the Madness
Let’s get nerdy for a minute. On Linux, Python is the de facto standard for ML, and tools like PyTorch and TensorFlow are the go-to frameworks. For our exoplanet project, we cycled through several model types:
Feed-Forward Neural Networks (FFNNs): Great for structured data but outmatched by the complexity of light curves.
Convolutional Neural Networks (CNNs): Designed for pattern recognition, like identifying cats in photos — or exoplanets in light curves. Still not magic.
Ensemble Learning: Combining multiple models to vote on predictions. Like democracy, it works better in theory than practice.
Cybersecurity AI often uses similar approaches but tailored for anomaly detection. Models like autoencoders and recurrent neural networks (RNNs) excel at spotting unusual sequences in time-series data. Still, they’re only as good as the data they’re trained on — and therein lies the rub.
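To make the autoencoder idea concrete without a deep-learning framework: a linear autoencoder is mathematically equivalent to PCA, so this sketch stands in a one-component PCA as the simplest possible “autoencoder” and flags inputs with high reconstruction error. All the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" activity: 2-D points hugging the line y = x (a stand-in for
# features of routine behavior), plus a little noise.
normal = rng.normal(0, 1, (500, 1)) * np.array([1.0, 1.0]) \
         + rng.normal(0, 0.05, (500, 2))

mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[0]                       # the learned 1-D "bottleneck"

def reconstruction_error(x):
    """Encode to 1-D, decode back, and measure what was lost."""
    centered = x - mean
    code = centered @ component         # encode
    recon = np.outer(code, component)   # decode
    return np.linalg.norm(centered - recon, axis=1)

ok = np.array([[2.0, 2.0]])             # fits the normal pattern
weird = np.array([[2.0, -2.0]])         # off-pattern: the "attack"
print(reconstruction_error(ok), reconstruction_error(weird))
```

The anomaly doesn’t compress well, so its reconstruction error is far larger; that gap is the detection signal, and picking the alert threshold on it is exactly where the false-positive pain begins.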
Galaxy light reaching the James Webb telescope from up to 13 billion years ago
Why Attackers Have the Edge
Let’s be real: AI is going to be a game-changer for attackers. They don’t have to deal with the same data imbalance issues because they’re the ones creating the anomalies.
Offensive AI can:
Generate Phishing Content: AI tools like ChatGPT can craft phishing emails that pass human scrutiny.
Mimic Legitimate Behavior: Malware that looks and acts like a benign application? Check.
Automate Attacks: Tools that probe for vulnerabilities faster than any human ever could.
Defenders, on the other hand, are stuck playing catch-up. Their AI tools rely on historical data, which may not capture the latest attack methods. And when they do detect something, they face a new problem: what now? Alert fatigue is real, and most SOC teams can’t chase every lead.
Imbalanced data is imbalanced
The Path Forward: Breaking the Cycle
So, is there hope? Maybe. Just like exoplanet detection could improve with better data (and maybe quantum computing), cybersecurity can evolve. Here’s what needs to happen:
Better Data Curation: Balance datasets with synthetic but realistic attack scenarios.
Continuous Learning: Deploy models that adapt to new threats in real time.
Collaboration: Share threat intelligence across organizations to improve the collective defense.
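On the data-curation point: SMOTE-style oversampling, which we tried on the exoplanet side, boils down to interpolating between real minority samples to synthesize new ones. A toy sketch with made-up “attack” feature rows (the real SMOTE interpolates toward k-nearest neighbors; random pairs keep this short):

```python
import random

def smote_like(minority, k_new, seed=0):
    """SMOTE in spirit: synthesize new minority-class rows by linear
    interpolation between random pairs of real ones."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(k_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

attacks = [[0.9, 0.1], [0.8, 0.2], [0.95, 0.05]]  # scarce "attack" rows
print(smote_like(attacks, k_new=5))               # 5 plausible new rows
```

The caveat, which bit us with exoplanets, is that interpolated samples are only as diverse as the originals; synthetic attack data that merely remixes known attacks won’t teach a model about novel ones.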
And maybe — just maybe — we need a breakthrough in AI technology, akin to going from the Wright brothers to supersonic jets.
The Confusion Matrix was very confused on this 3-ensemble run (there aren’t 31 exoplanets in the training set)
Final Thoughts: The Cosmos and the Cloud
At the end of our project, my son and I didn’t find exoplanets, but we did find perspective. In both the vastness of space and the chaos of cyberspace, the real challenge is the same: finding the lone anomalies that matter.
AI can help, but it’s not a silver bullet. Whether you’re searching for planets or preventing breaches, the key is recognizing the limits of technology and working smarter to overcome them.
Now if you’ll excuse me, I’ve got to explain to a seventh grader why “learning from failure” is as good as winning. Wish me luck.
Thanks to the James Webb telescope for providing all the images
Addendum: Confessions of an AI Newbie
Before you finish this piece and start worrying that I’m sitting here building Skynet in my free time, let me set the record straight: while I certainly know my way around the world of cybersecurity (it’s been my professional home for decades), I am a relative noob when it comes to the intricacies of AI and ML coding. My son and I may have dabbled in neural networks and Python, but let’s be honest — there are people out there who eat, sleep, and breathe this stuff. They’re the real wizards of the AI world, and I tip my hat to them.
If my light curve escapades taught me anything, it’s that AI is incredibly complex, and getting it to do something truly revolutionary — whether it’s spotting exoplanets or stopping a cyberattack — requires a level of expertise and innovation that far exceeds what I brought to my son’s seventh-grade science project. Thankfully, those folks are out there. As you’re reading this, I’m hopeful that brilliant data scientists, AI researchers, and other experts are tackling these challenges with tools and techniques I can only dream of understanding.
In the meantime, I’ll keep doing what I do best: asking hard questions, cracking a few dad jokes, and doing my part to make the cyber world a little safer. And maybe, just maybe, inspiring a seventh grader to aim for the stars — both figuratively and literally.
I published a rather lengthy blog post about the importance of patch management to the success of a security program. Given the length of the post, I thought I’d add a tl;dr version of its primary points here.
Bad security patching practices can and will undermine the budget spent on security tools
Security patching needs to be a priority coming from the top of the organization
CISOs must ensure they partner with IT and business units and foster a “self-service” approach to vulnerability scanning and testing
CISOs and CIOs must closely partner on configuration and patch management tools and their administration
CISOs must present security patching data that focuses on practical risk to the company
CISOs must present security patching data that specifies the performance of each IT and business unit
Leadership must hold IT and business leaders accountable for their group’s security patching performance
CISOs should not offer a risk-acceptance or security-exception path for software vulnerabilities
I’m going to cover something that arguably has the greatest impact on the security posture of an organization and is not something that information security is typically responsible for.
It’s something that can make or break a company’s entire security regime and negate every bit of security investment.
Poor security patch management.
A security patch is a piece of software designed to fix vulnerabilities or weaknesses in a system that could be exploited by malicious actors. By regularly applying security patches, organizations can protect their systems from being compromised and ensure that they remain secure and operational.
Traditionally, the information security team has processes for “vulnerability management” which is meant to discover, catalog, and track software vulnerabilities. Software vulnerabilities are weaknesses in code that can be exploited to gain unauthorized access or cause damage. While there are technical aspects to the vulnerability management process, the primary role is one of governance. The vulnerability management program is one of influence but does not directly fix vulnerabilities or patch systems. Therefore, the success of the program is primarily up to IT.
Knowing that the success of the vulnerability management program – and thus the entire information security program – depends on how well security patch management is executed, it is imperative that senior leadership prioritize security patching on par with or above business imperatives. The “tone from the top” can either help achieve the desired security patching outcomes or completely undermine them. For example, if a CISO reports to executive leadership that critical vulnerabilities put the organization at risk and the response is anything short of “get it fixed ASAP,” then most likely nothing will be done, or it will be done eventually, without urgency.
Even with executive buy-in, there needs to be a regular cadence of reporting that tracks not only raw numbers but also the aging of previously reported vulnerabilities. Senior leadership can then set an “I don’t want to see this vulnerability again” tone for critical vulnerabilities.
The vulnerability report must be meaningful to the audience. Don’t waste their time with huge aggregate numbers. Instead, the reports should clearly articulate which vulnerabilities present risk to the organization and what patches would have the greatest impact in lowering risk. As the old saying goes, “if everything is critical, nothing is critical” so you need to choose what you communicate carefully.
However, these reports have a reputation for “naming and shaming” and only work for so long. Instead, the CISO needs to be partnering with the responsible leaders to help them succeed, and the quickest way to do that is to help them help you.
I am a huge fan of self-service: give the IT operations team direct access to the vulnerability scanning system. It gives everyone the same set of facts to work from, and security can partner with IT the moment vulnerabilities are first detected to determine which patches should be prioritized. This not only speeds up patching; it also buys quite a bit of political goodwill, since business unit leadership gets to prioritize the patching effort. It also removes the CISO from the “gotcha” role they find themselves in when presenting reports to shocked leaders who have never seen the information before.
In addition, the CIO and CISO should partner on an effective system management product that can quickly and easily push out security patches. These tools are not cheap and require quite a bit of management to set up and maintain so it’s imperative that both IT and the security organization share both the costs and administrative burden of these tools.
Now for the other side of the partnership coin – accountability. The vulnerability reports must show aging critical vulnerabilities organized not only by platform type but also by business unit. The only way to get traction in security patch management is to ensure there is complete leadership alignment and that, if teams fall behind, there is accountability from the top, which may include timelines, follow-ups, and visible consequences for continued non-compliance.
One last note: there is a temptation to treat vulnerability and patch reporting as simply an administrative security control rather than a fundamental component of IT operational health. Unfortunately, this mentality may lead to critical vulnerabilities being handled with management exceptions or risk acceptance. As I’ve said to colleagues countless times, “attackers don’t check our risk register before launching their exploits.” Once again, tone from the top is imperative, and leadership needs to understand that security patching is critical to the health and safety of the IT ecosystem and cannot be documented away like other security risks.
When I hear a CISO speaking about threats on an information security podcast, I know most everyone probably thinks they’re talking about nation-state or criminal actors.
The truth is that they are more likely talking about things like retaining talent, holding onto budget, getting IT to get their shit together, over-zealous auditors, dealing with seemingly constant vendor failures, and trying to keep insurance underwriters in line.
More on these thoughts at a later time.
./dg
Photorealistic Rodin’s Thinker statue using a laptop – DALL-E
I’m glad that the federal government, along with the states, has started taking action against the app for its apparent theft of sensitive information. This is good and all, but what took so long?