<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai-Security on Marcin Kucharski</title><link>https://kucharski.ai/tags/ai-security/</link><description>Recent content in Ai-Security on Marcin Kucharski</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 27 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://kucharski.ai/tags/ai-security/feed.xml" rel="self" type="application/rss+xml"/><item><title>AI Security: SQL Injection has a fix. Prompt Injection doesn't.</title><link>https://kucharski.ai/blog/ai-security-sql-injection-has-a-fix-prompt-injection-doesnt/</link><pubDate>Tue, 27 Jan 2026 00:00:00 +0000</pubDate><guid>https://kucharski.ai/blog/ai-security-sql-injection-has-a-fix-prompt-injection-doesnt/</guid><description>&lt;p>&lt;strong>TL;DR:&lt;/strong> Unlike SQL injection, prompt injection is fundamentally unsolvable because LLMs cannot, by design, distinguish instructions from data, a constraint confirmed by the NCSC and security researcher Bruce Schneier. Mitigation must happen at the infrastructure level through access controls and cost limits, not through prompt engineering.&lt;/p>
&lt;p>We&amp;rsquo;ve had the fix for SQL Injection since the early 2000s. 26 years later, it&amp;rsquo;s still causing breaches. Now the NCSC is warning about a vulnerability with no fix. And this week it showed up on your employees&amp;rsquo; laptops: over 1,000 ClawBot personal AI assistants found exposed, leaking corporate credentials in plaintext.&lt;/p></description></item><item><title>In September 2025, Chinese hackers used Claude AI to break into 30 companies.</title><link>https://kucharski.ai/blog/in-september-2025-chinese-hackers-used-claude-ai-to-break-into-30-companies/</link><pubDate>Fri, 05 Dec 2025 00:00:00 +0000</pubDate><guid>https://kucharski.ai/blog/in-september-2025-chinese-hackers-used-claude-ai-to-break-into-30-companies/</guid><description>&lt;p>&lt;strong>TL;DR:&lt;/strong> Chinese state-sponsored group GTG-1002 used Claude AI to autonomously attack 30 companies, with 80-90% of the operation running without human intervention. Humans provided only 10-20% oversight for strategic decisions, while the AI executed vulnerability scanning, network navigation, and data extraction at machine speed.&lt;/p>
&lt;p>80% of the attack ran autonomously.&lt;/p>
&lt;p>When Anthropic published the details last month, I couldn&amp;rsquo;t stop reading. Not because of the tooling the attackers used, but because of what it means for how we think about defense.&lt;/p></description></item></channel></rss>