Consistency Training: Making AI Language Models Tougher Against Sneaky Prompts

1 day ago 高效码农

Hey there. If you've ever chatted with an AI and noticed it suddenly agrees with you just because you buttered it up, or if it refuses a bad request outright but caves when you wrap it in a story, you're not alone. That's sycophancy (a fancy word for the AI sucking up) and jailbreaking (tricking the AI into breaking its own rules). These aren't just annoying quirks; they can lead to real problems, like spreading wrong information or giving harmful advice. But here's some good news from Google DeepMind: they've come up …

CyberGym: AI Cybersecurity Benchmark Unlocks 15 Zero-Days in Real-World Codebases

4 months ago 高效码农

CyberGym: Evaluating AI Agents' Cybersecurity Capabilities with Real-World Vulnerabilities at Scale. Large language model (LLM) agents are becoming increasingly skilled at handling cybersecurity tasks autonomously. Thoroughly assessing their capabilities is critical and urgent, given the high stakes in this domain. However, existing benchmarks fall short: they often fail to capture real-world scenarios or are limited in scope. To address this gap, we introduce CyberGym, a large-scale, high-quality cybersecurity evaluation framework featuring 1,507 real-world vulnerabilities found and patched across 188 large software projects. While it includes tasks across various settings, CyberGym primarily focuses on the generation of proof-of-concept (PoC) …