• At the CETaS conference on May 1, 2026, director Alexander Babuta highlighted the progress of Anthropic's Claude Mythos Preview in mathematics, cybersecurity, software engineering, and vulnerability detection.
• This frontier model promises better automated tools for security professionals.
• Experts express optimism about AI's defensive potential against AI-powered hacking threats.
• Anthropic released Claude API with native fine-tuning capabilities on April 19, allowing enterprises to customize the model on proprietary datasets without requiring base model retraining.
• The fine-tuning service supports context windows up to 200,000 tokens and is priced at $8 per million input tokens and $24 per million output tokens, competitive with OpenAI's pricing structure.
• This release directly challenges OpenAI's GPT-5 announcement and positions Anthropic as an alternative for organizations seeking customizable AI solutions with enhanced safety guardrails.
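The per-token rates quoted above imply a simple cost model. A minimal sketch of that arithmetic (the rate constants come from the figures in the item above; the function name is illustrative and not part of any Anthropic SDK):

```python
# Estimated fine-tuned Claude API cost from the quoted rates:
# $8 per million input tokens, $24 per million output tokens.
INPUT_RATE_PER_M = 8.00
OUTPUT_RATE_PER_M = 24.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request or batch."""
    return ((input_tokens / 1_000_000) * INPUT_RATE_PER_M
            + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M)

# A full 200,000-token context window plus a 4,000-token response:
print(f"${estimate_cost(200_000, 4_000):.2f}")  # → $1.70
```

At these rates, input cost dominates for long-context workloads: a maximal 200,000-token prompt costs $1.60 before any output is generated.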
• Anthropic postponed releasing Claude Mythos, an AI excelling at coding and vulnerability scanning, following high-level meetings with US financial regulators.
• Mythos demonstrated the ability to chain previously unknown security flaws in software at unprecedented speed, sparking 'agent-to-agent war' concerns in cyberspace.
• Partners like Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, and JPMorgan Chase received restricted previews under Project Glasswing.
• Anthropic launched Claude Mythos, a cybersecurity AI model that has discovered thousands of zero-day bugs, enhancing defenses against exploits.
• The preview version identifies vulnerabilities in open-source projects, and Anthropic will soon charge clients for detailed disclosure information.
• While boosting cybersecurity, the tool raises concerns over potential misuse by threat actors to weaponize the findings.
• Anthropic has discontinued support for third-party tools like OpenClaw for Claude subscribers, citing unsustainable demand straining its infrastructure.
• The move prioritizes core model stability as user growth surges, impacting developers relying on extensions for customized workflows.
• It highlights operational challenges for leading AI providers balancing openness with capacity limits in a rapidly scaling market.
Nearly 2,000 internal files were briefly leaked after 'human error', raising fresh security questions at the AI company.

Anthropic accidentally released part of the internal source code for its AI-powered coding assistant Claude Code due to "human error", the company said on Tuesday. An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to the developer platform GitHub.

A post on X sharing a link to the leaked code had more than 29m views early on Wednesday, and a rewritten version of the source code quickly became GitHub's fastest-ever downloaded repository. Anthropic issued copyright takedown requests to try to contain the code's spread. Within the code, users spotted blueprints for a Tamagotchi-esque coding assistant and an always-on AI agent, per the Verge.
• Anthropic accidentally published a blog post revealing the 'Kairos' always-on AI agent in Claude's codebase, prompting internal cybersecurity reviews.
• The leak occurred last week, with cybersecurity teams addressing the exposure of sensitive agent details on April 1, 2026.
• It underscores risks in AI model transparency, potentially impacting US AI safety standards and developer trust.
• The U.S. government has designated Anthropic's Claude AI model as a potential supply-chain risk amid evaluations for military applications.
• This follows the Department of Defense's introduction of new guardrail policies for military AI use, with xAI's Grok also entering classified systems.
• The move highlights growing scrutiny on AI models' security and reliability for national defense.
• Anthropic released Claude Cowork on macOS, bringing agentic AI capabilities to everyday knowledge work beyond developer-focused tools and enabling multi-step task automation.
• Agentic AI systems can plan, execute, and complete multi-step tasks without constant human intervention, representing a significant capability leap in AI assistant functionality.
• Claude Cowork democratizes agentic capabilities for non-technical users, expanding the potential market for advanced AI assistants in office productivity and knowledge work.