Seclog - #173

In this week's Seclog, the pervasive influence of Artificial Intelligence on the cybersecurity landscape takes center stage. AI agents like OpenClaw are shifting the security goalposts, blurring the line between trusted systems and potential insider threats and demanding a re-evaluation of security priorities. At the same time, AI's power as a tool for both offense and defense is on display: Claude Code and LangChain DeepAgents surfaced a long-standing Linux kernel vulnerability and a zero-day in a critical driver, while Salesforce introduced an AI-powered URL content auditor. Debate also continues over whether AI is commoditizing vulnerability discovery, and over the enduring human expertise required to prove exploitability. Beyond AI, classic vulnerabilities like XXE in Tolgee persist, forensic techniques expose privacy concerns around Signal notifications, and the impending threat of cryptographically relevant quantum computers prompts urgent consideration of post-quantum solutions.

📚 SecMisc

Inventory of Offensive Cyber Companies - xorl.wordpress.com

This resource provides a curated inventory of private companies involved in nation-state offensive cyber operations, covering entities that develop and sell offensive capabilities to state-sponsored actors. Useful for threat intelligence teams tracking commercial spyware vendors, exploit brokers, and offensive tool suppliers in attribution workflows.

ExtSentry: Browser Extension Threat Intelligence - extsentry.github.io

ExtSentry is a community-driven platform providing IOC feeds for malicious and sensitive browser extensions, compatible with 16+ security platform formats. Valuable for SOC teams looking to integrate browser extension threat data into existing detection pipelines and block malicious extensions at scale.

AI Agents Redefine Enterprise Security Risks - krebsonsecurity.com

Autonomous AI agents like OpenClaw, with deep access to user systems and online services, significantly expand the enterprise attack surface beyond traditional boundaries. Their proactive, unprompted nature makes it increasingly difficult to distinguish legitimate automated actions from malicious insider threats using conventional monitoring. Organizations must reassess access controls, implement granular permissions for AI agents, and develop detection strategies specifically targeting AI-driven anomalous behavior.

Claude Code Finds Ancient Linux Kernel Bug - mtlynch.io

Claude Code successfully identified a 23-year-old remotely exploitable Linux kernel vulnerability, demonstrating that AI-assisted auditing can surface deeply buried bugs that evaded decades of manual review. This accelerates the patching cycle for foundational components and raises the bar for software security assurance. The finding reinforces the case for continuous, AI-assisted code auditing of critical open-source infrastructure.

OpenClaw Autonomous AI Agent Log Poisoning - research.eye.security

A log poisoning vulnerability in OpenClaw allows attackers to inject malicious data into logs, enabling misattribution, DoS, or chained exploitation if downstream systems process those logs. OpenClaw's deep system and cloud access amplifies impact—compromised logs could cascade into critical infrastructure. Defenders should treat AI agent log streams as untrusted input and apply strict validation and sanitization.
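That last point generalizes beyond AI agents: any untrusted value written into a log can carry newlines or control characters that forge extra log lines or break downstream parsers. A minimal sketch of neutralizing a field before it reaches the log stream (function and field names are illustrative, not from the OpenClaw write-up):

```python
import re

# Control characters (including \r and \n) let an attacker forge
# fake log lines or corrupt downstream log parsers; replace them
# before the untrusted value is written out.
_CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")


def sanitize_log_field(value: str, max_len: int = 512) -> str:
    """Neutralize an untrusted string before logging it."""
    cleaned = _CONTROL_CHARS.sub(" ", value)
    if len(cleaned) > max_len:
        cleaned = cleaned[:max_len] + "...[truncated]"
    return cleaned


# An injected "fake" log line collapses into one harmless entry:
attacker_output = "ok\n2026-01-01 admin login SUCCESS user=root"
print(f"agent_output={sanitize_log_field(attacker_output)!r}")
```

Length-capping matters too: oversized fields are a cheap DoS against log storage and SIEM ingestion.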

AI Pipeline Finds ASUS Kernel Driver Zero-Day - blog.ahmadz.ai

An automated pipeline using LangChain DeepAgents and Gemini 2.5 discovered a zero-day in an ASUS kernel driver by analyzing import tables, decompiling dispatch handlers, and generating reports end-to-end. This drastically reduces the manual effort typically required for low-level driver vulnerability research. The approach is a strong signal that AI-driven fuzzing and static analysis pipelines are becoming viable for production-grade zero-day discovery.

Tolgee XML Importers Vulnerable to XXE Attacks - simonkoeck.com

Tolgee's XML translation importers lack secure XML parser configuration, enabling XXE attacks that allow arbitrary file reads (e.g., /etc/passwd) via crafted XML uploads. The vulnerability was confirmed on Tolgee's cloud platform, meaning any tenant could exploit it against the shared infrastructure. A clear reminder to enforce secure-by-default XML parser settings—disable DTDs and external entity resolution everywhere.
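Tolgee itself is a Kotlin/Java application, but the fix generalizes to any importer: if the format never legitimately needs a DTD, refuse documents that carry one before parsing starts. A minimal Python sketch of that secure-by-default posture (payload and function names are illustrative):

```python
import xml.etree.ElementTree as ET

# Classic XXE payload: an external entity that tries to pull
# /etc/passwd into the parsed document.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE root [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<root>&xxe;</root>"""


def parse_untrusted_xml(data: str) -> ET.Element:
    """Parse an untrusted XML upload, rejecting documents with a DTD.

    A translation importer has no legitimate need for <!DOCTYPE>, so
    refusing it outright closes off external entities and entity-
    expansion (billion laughs) tricks before parsing even begins.
    """
    if "<!DOCTYPE" in data:
        raise ValueError("DTDs are not allowed in uploaded XML")
    return ET.fromstring(data)


try:
    parse_untrusted_xml(XXE_PAYLOAD)
except ValueError as exc:
    print("rejected:", exc)

# Benign uploads still parse normally:
root = parse_untrusted_xml("<resources><string name='hi'>Hello</string></resources>")
print(root[0].text)  # Hello
```

In Java/Kotlin the equivalent is setting `disallow-doctype-decl` (and disabling external entity features) on the `DocumentBuilderFactory` or SAX parser; libraries like defusedxml package the same policy for Python.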

CERT Polska 2025 Annual Security Report - cert.pl

CERT Polska's 2025 annual report provides a comprehensive overview of national-level threat detection, incident handling, and knowledge sharing across its 30 years of operations. The report offers strategic insights into evolving threat landscapes at the national CERT level that can inform broader defensive strategies. Useful as a benchmark for comparing regional threat trends and CERT operational maturity.

AI's Impact on Vulnerability Research Discussed - jericho.blog

This post dives into whether AI is fundamentally disrupting vulnerability research, building on the Ptacek "cooked" debate. The core argument addresses the tension between AI commoditizing bug discovery and the irreplaceable human skill of proving exploitability. Essential reading for researchers navigating the shifting economics and skill requirements of the vulnerability research profession.

EXPMON Detects Adobe Reader Zero-Day Exploit - justhaifei1.blogspot.com

EXPMON detected a sophisticated Adobe Reader zero-day abusing util.readFileIntoStream() for arbitrary local file reads and RSS.addFeed() for data exfiltration, with potential for remote code execution or sandbox escape (RCE/SBX) under specific conditions. The exploit chain demonstrates advanced fingerprinting and staged payload delivery targeting PDF users. Defenders should prioritize Adobe Reader patching and consider restricting JavaScript execution in PDF readers.

Audited.xyz Finds Claude Code Vulnerability - audited.xyz

An external audit of Anthropic's leaked Claude Code source uncovered a non-critical "defense in depth" vulnerability that internal tools like Claude Code Review and Mythos had not caught. This underscores that even AI-developed security tooling benefits from independent external review. A strong case study for the value of diverse testing methodologies in layered security programs.

Massive AI Infrastructure Investments Noted - stiennon.substack.com

Microsoft's projected $500B investment in AI infrastructure (including Stargate) signals a massive expansion of the AI attack surface across critical models, data pipelines, and compute clusters. The scale of these deployments will introduce novel security challenges that current frameworks are not designed to address. Security teams should begin planning for AI infrastructure-specific threat models and controls.

Prompt Caching Reduces LLM Security Costs - projectdiscovery.io

ProjectDiscovery achieved a 59% LLM cost reduction via prompt caching in its Neo platform, which uses multi-agent workflows for vulnerability assessment and code review. This makes large-scale, continuous AI-driven security testing economically viable for more organizations. A practical optimization pattern worth adopting for any team running multi-step LLM security workflows.
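The details of Neo's implementation aren't shown here, but the structural idea is simple: keep the large static context identical across calls so provider-side prefix caching can hit, and deduplicate identical requests client-side so repeated findings aren't re-analyzed at all. A minimal sketch of the client-side half (class and backend are hypothetical, not ProjectDiscovery's code):

```python
import hashlib


class CachedLLMClient:
    """Sketch: dedupe identical prompts and keep the expensive static
    context as a stable prefix, so repeated analyses cost one call."""

    def __init__(self, llm_call, static_context):
        self._llm_call = llm_call      # the real API call (assumed)
        self._static = static_context  # large, unchanging instructions
        self._cache = {}
        self.calls = 0                 # how many real API calls were made

    def ask(self, dynamic_part: str) -> str:
        key = hashlib.sha256(dynamic_part.encode()).hexdigest()
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = self._llm_call(self._static + "\n" + dynamic_part)
        return self._cache[key]


# Fake backend for illustration only.
client = CachedLLMClient(lambda p: f"verdict for {len(p)} chars", "audit rules...")
client.ask("check input validation in handler A")
client.ask("check input validation in handler A")  # cache hit, no API call
print(client.calls)  # 1
```

Provider-side prompt caching (as offered by the major LLM APIs) additionally discounts the shared prefix itself, which is where most of the reported savings on multi-agent workflows come from.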

FBI Recovers Signal Messages From iPhone Notifications - andreafortuna.org

The FBI recovered Signal messages from an iPhone's notification database after the app was uninstalled—without breaking Signal's encryption or compromising its servers. This reveals a forensic vector where ephemeral notification data persists in iOS, undermining assumptions about secure messaging app data deletion. High-risk users should disable notification previews for sensitive messaging apps and be aware of OS-level data remnants.

Google Workspace Account Suspension Detailed - zencapital.substack.com

This post details the experience of a sudden Google Workspace account suspension, highlighting the risks of vendor lock-in with cloud-based productivity platforms. Automated enforcement and opaque appeal processes can leave organizations without access to critical data and communications. A practical reminder to maintain independent backups, alternative communication channels, and documented escalation paths for cloud provider disputes.

Cryptography Engineer Discusses Quantum Timelines - words.filippo.io

A cryptography engineer assesses that the risk of cryptographically-relevant quantum computers (CRQCs) emerging within the next few years is now concrete rather than speculative. This shifts post-quantum cryptography from a "nice to have" to a mandatory migration priority. Organizations should begin inventorying cryptographic dependencies and planning for cryptographic agility now, before harvest-now-decrypt-later attacks render current protections obsolete.

Anthropic Releases Claude Mythos Preview - red.anthropic.com

Anthropic released a preview of Claude Mythos, an advanced AI security capability likely focused on code analysis and vulnerability detection. This expands Anthropic's AI-driven security tooling suite, offering new automated capabilities for defenders. Worth monitoring as it matures—both for its defensive utility and for understanding how AI security products themselves become targets.

Cirro Maps Azure Attack Paths, Risks - bishopfox.com

Cirro maps Azure attack paths across identity, RBAC, resources, and data layers, visualizing how misconfigurations can be chained for lateral movement and privilege escalation. This graph-based approach surfaces hidden risks that traditional permission audits miss. A strong addition to the cloud security toolkit for teams managing complex Azure environments.
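The core of any such tool is a graph search: model each permission or misconfiguration as an edge, then ask whether a low-privilege principal can reach a sensitive resource. A toy sketch of that idea with plain BFS (the Azure-flavored node names are invented for illustration, not Cirro's data model):

```python
from collections import deque

# Toy edge list: each edge is a permission or misconfiguration that
# lets the source identity/resource reach the target.
edges = {
    "user:dev": [("sp:automation", "owns credential")],
    "sp:automation": [("role:Contributor@rg-prod", "RBAC assignment")],
    "role:Contributor@rg-prod": [("vm:prod-web", "can run commands")],
    "vm:prod-web": [("storage:customer-data", "managed identity access")],
}


def attack_path(start, target):
    """BFS over the graph; returns the chain of edges forming the path,
    or None if the target is unreachable from the start node."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for nxt, why in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, why, nxt)]))
    return None


for src, why, dst in attack_path("user:dev", "storage:customer-data"):
    print(f"{src} --[{why}]--> {dst}")
```

A per-edge permission audit would rate each of these grants as routine; only the chained path reveals that a developer account can reach customer data, which is exactly what the graph view surfaces.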

Lessons From a "Humans-Only" CTF - sylvie.fyi

This post reflects on organizing a "humans-only" CTF in the age of pervasive LLMs, tackling the challenge of verifying genuine human problem-solving versus AI-assisted shortcuts. The experience exposes fundamental issues in skill assessment and competition integrity as AI tools become ubiquitous. Valuable insights for anyone designing security training, hiring challenges, or certification exams.

🐦 SecX

BreachForums Admin Identified via IP/Password Reuse - x.com

A BreachForums admin was identified through basic OPSEC failures—real IP exposure and password reuse across personas—despite claiming security expertise. A textbook case reinforcing that even technically skilled adversaries fall to fundamental operational security lapses.

Reverse Engineering Unix Malware History - x.com

Val Smith reflects on a 2004 talk about reverse engineering Unix malware, providing historical context for the evolution of malware analysis techniques. Useful perspective for understanding how foundational RE methodologies still underpin modern threat analysis workflows.

Bug Discovery vs. Exploitability Gap - x.com

Alex Matrosov highlights the critical gap between AI-commoditized bug pattern detection and the specialized human expertise needed to prove actual exploitability. A key insight for the market: automated discovery is not automated exploitation—human skill remains the bottleneck and the value differentiator.

THC Releases Anonymous Email Forwarders - x.com

THC announced an anonymous email forwarding service with "no logz, no limitz," offering enhanced privacy for legitimate users but also a ready-made tool for phishing infrastructure and anonymous C2 communication. Dual-use capability worth tracking for threat intelligence and abuse monitoring.

🎥 SecVideo

Project Glasswing: Software Security Initiative - youtube.com

Project Glasswing brings together AWS, Anthropic, Apple, Google, and JPMorganChase in a collaborative initiative to address systemic software vulnerabilities and supply chain risks. A significant industry signal that major players are moving toward coordinated, cross-organizational approaches to software security at scale.

💻 SecGit

Salesforce Releases AI URL Content Auditor - github.com

Salesforce's URL Content Auditor uses AI to scan public web content—images, PDFs, and videos—for sensitive data exposure, compliance violations, and privacy risks. A practical tool for security teams running external attack surface monitoring or data leak detection programs.

Pull Android Apps as XAPK Without Root - github.com

The pull-xapk tool extracts installed Android apps in XAPK format without root, streamlining the app acquisition step in mobile security assessments. Essential for mobile pentesters and malware analysts who need quick, non-invasive app extraction for reverse engineering workflows.
