• At the CETaS conference held on May 1, 2026, director Alexander Babuta highlighted the advances Anthropic's Claude Mythos Preview has made in mathematics, cybersecurity, software engineering, and vulnerability detection.
• The frontier model is expected to provide improved automation tools for security professionals.
• Experts expressed optimism about AI's defensive potential against sophisticated hacking threats.
• Anthropic released Claude API with native fine-tuning capabilities on April 19, allowing enterprises to customize the model on proprietary datasets without requiring base model retraining.
• The fine-tuning service supports context windows up to 200,000 tokens and is priced at $8 per million input tokens and $24 per million output tokens, competitive with OpenAI's pricing structure.
• This release directly challenges OpenAI's GPT-5 announcement and positions Anthropic as an alternative for organizations seeking customizable AI solutions with enhanced safety guardrails.
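Based on the published rates above ($8 per million input tokens, $24 per million output tokens, 200,000-token context window), a quick back-of-the-envelope cost estimate can be sketched as follows. The workload figures in the example are hypothetical, not from the announcement:

```python
# Published rates from the announcement: $8 per million input tokens,
# $24 per million output tokens.
INPUT_RATE_PER_M = 8.00
OUTPUT_RATE_PER_M = 24.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single API request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Hypothetical workload: one request filling the full 200,000-token
# context window and producing a 5,000-token response.
print(f"${request_cost(200_000, 5_000):.2f}")  # → $1.72
```

At these rates, even a maximally long prompt stays under two dollars per request, which is the comparison point against OpenAI's pricing that the announcement draws.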
• Anthropic postponed releasing Claude Mythos, an AI excelling at coding and vulnerability scanning, following high-level meetings with US financial regulators.
• Mythos demonstrated the ability to chain together unknown security flaws in software at unprecedented speed, sparking 'agent-to-agent war' concerns in cyberspace.
• Partners like Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, and JPMorgan Chase received restricted previews under Project Glasswing.
• Anthropic launched Claude Mythos, a cybersecurity AI model that has discovered thousands of zero-day bugs, enhancing defenses against exploits.
• The preview version identifies vulnerabilities in open-source projects, and Anthropic will soon begin charging clients for detailed release information.
• While boosting cybersecurity, the tool raises concerns over potential misuse by threat actors to weaponize the findings.
• Anthropic has discontinued third-party tools like OpenClaw for Claude subscribers, citing unsustainable demand straining its infrastructure.
• The move prioritizes core model stability as user growth surges, impacting developers relying on extensions for customized workflows.
• It highlights operational challenges for leading AI providers balancing openness with capacity limits in a rapidly scaling market.
Nearly 2,000 internal files were briefly leaked after 'human error', raising fresh security questions at the AI company.

Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to "human error", the company said on Tuesday. An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to the developer platform GitHub.

A post on X sharing a link to the leaked code had more than 29m views early on Wednesday, and a rewritten version of the source code quickly became GitHub's fastest-ever downloaded repository. Anthropic issued copyright takedown requests to try to contain the code's spread. Within the code, users spotted blueprints for a Tamagotchi-esque coding assistant and an always-on AI agent, per the Verge.
• Anthropic accidentally published a blog post revealing the 'Kairos' always-on AI agent in Claude's codebase, prompting internal cybersecurity reviews.
• The leak occurred last week, with cybersecurity teams addressing the exposure of sensitive agent details on April 1, 2026.
• It underscores risks in AI model transparency, potentially impacting US AI safety standards and developer trust.
• The U.S. government has designated Anthropic's Claude AI model as a potential supply-chain risk amid evaluations for military applications.
• This follows the Department of Defense's introduction of new guardrail policies for military AI use, with xAI's Grok also entering classified systems.
• The move highlights growing scrutiny on AI models' security and reliability for national defense.
• Anthropic released Claude Cowork on macOS, bringing agentic AI capabilities to everyday knowledge work beyond developer-focused tools and enabling multi-step task automation.
• Agentic AI systems can plan, execute, and complete multi-step tasks without constant human intervention, representing a significant capability leap in AI assistant functionality.
• Claude Cowork democratizes agentic capabilities for non-technical users, expanding the potential market for advanced AI assistants in office productivity and knowledge work.
The AI chatbot Claude going down is just one example of a recent IT outage. One of the main vulnerabilities of the modern internet is to blame for the growing number of incidents.