AI Pulse 🚨
A bite-sized curation of this week's most important AI news.
🚀 OpenAI launches GPT-5.2, which beats or ties top industry professionals on 70.9% of GDPval real‑world knowledge‑work tasks and scores 55.6% on SWE‑Bench Pro coding. On GDPval, GPT‑5.2 Thinking delivers outputs more than 11x faster than human experts and at under 1% of their cost.
🎯 OpenAI quietly adds skills to ChatGPT and Codex CLI, covering spreadsheets, PDFs, documents, and more.
🔬 Google launches the Deep Research API, letting developers build autonomous research agents that achieve 46.4% on Humanity’s Last Exam.
🖱️ Cursor launches a visual editor that maps drag-and-drop UI changes directly to React code. Users can adjust elements on live sites while agents trace the changes back to source files.
🖼️ OpenAI launches ChatGPT Image 1.5. The update brings better prompt understanding, higher-quality outputs, and faster generation times for creating images directly in ChatGPT.
🤖 Claude Agent SDK gets a major update with support for 1M-token context windows, sandboxing, and a new TypeScript interface for building custom agents.
🔌 Claude Code launches a first-party plugin marketplace, making it easier to discover and install popular plugins.
🏰 Disney invests $1 billion in OpenAI for a three-year exclusive deal licensing 200+ characters from Disney, Marvel, Pixar, and Star Wars for Sora and ChatGPT Images. Disney-curated videos expected on Disney+ in early 2026.
🎨 Adobe integrates with ChatGPT, making Photoshop, Adobe Express, and Acrobat available directly in conversation. Users can now edit images and PDFs and create designs without leaving the chat.
📊 OpenAI releases Enterprise AI report showing 320x growth in reasoning token consumption year-over-year. Workers save 40-60 minutes daily, and Projects and Custom GPTs usage surged 19x.
📖 Google launches Code Wiki, which maintains up-to-date, structured documentation for code repositories. Features include AI-powered chat, automated diagrams, and direct code linking.
⚡ NVIDIA debuts the Nemotron 3 family with 4x higher throughput than Nemotron 2 and support for 1M-token context windows. Models range from 30B to 500B parameters.