Indian AI model DESTROYS o3-mini, Google DeepSearch is open source, OpenAI's new models and TypeScript SDK, and more - Week #22
Hello AI Enthusiasts!
Welcome to the Twenty-Second edition of "This Week in AI Engineering"!
This week, Fathom R1 14B cracks one of the world’s toughest exams while outperforming OpenAI’s o3-mini, Google open-sources its entire DeepSearch stack, NVIDIA releases Nemotron Research Reasoning Qwen 1.5B, Microsoft brings Sora-style text-to-video generation to Bing, OpenAI debuts Audio Endeavor and Audio Voyager, and its Agents SDK lands in TypeScript with real-time streaming capabilities.
With this, we'll also explore some under-the-radar tools that can supercharge your development workflow.
Don’t have time to read the newsletter? Listen to it on the go!
Indian AI Model DESTROYS OpenAI o3-mini
Built under India’s National AI Mission, Fathom R1 14B is a 14 billion-parameter reasoning model developed by Fractal AI. Despite its relatively modest parameter count, it has already made headlines by cracking the IIT JEE Advanced, arguably the most challenging college entrance exam globally, on its first attempt. To gauge its global reasoning prowess, the Fathom team benchmarked it on Olympiad-grade math contests: it scored 52.71 percent on AIME 25 and 35.26 percent on HMMT 25, surpassing both OpenAI’s o3-mini and Light R1 14B. Remarkably, all these results came without any retries or a massive inference stack.
Lean Context Window and Low Budget
16K Context Window: Unlike many modern models that require 32K+ context lengths, Fathom R1 14B operates effectively within a 16K window, reducing memory and compute overhead.
Sub-$1,000 Training Budget: The entire training pipeline was completed for under $1,000, demonstrating that state-of-the-art reasoning can be achieved at a fraction of typical costs.
Open-Source Commitment
Fully Open-Source: All weights, datasets, and training recipes are publicly available, empowering researchers and developers to run a powerful reasoning model locally without breaking the bank (see the client sketch after this list).
Reinforcement Learning & Multi-Stage Tuning: The second version of Fathom R1 14B incorporates reinforcement learning and a multi-stage fine-tuning schedule, further improving performance on logic and math tasks.
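Since the weights are fully open, one practical way to try the model is to serve it behind an OpenAI-compatible endpoint (for example with vLLM or LM Studio) and query it over HTTP. Here is a minimal TypeScript sketch under those assumptions; the port, model name, and parameters are illustrative, not part of Fractal’s release.

```typescript
// Minimal sketch: query a locally served Fathom R1 14B through an
// OpenAI-compatible chat completions endpoint (e.g., vLLM or LM Studio).
// The base URL, model name, and parameters below are assumptions.
async function askFathom(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:8000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "fathom-r1-14b", // whatever name your local server registers
      messages: [{ role: "user", content: prompt }],
      max_tokens: 8192, // leave headroom inside the 16K context window
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Step-by-step math is the model's wheelhouse:
askFathom("Solve x^2 - 5x + 6 = 0, showing each step.").then(console.log);
```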
Key Use Cases
Local Reasoning Workloads: Ideal for on-premises deployments where cloud inference costs or data privacy concerns are paramount.
STEM Education Tools: With demonstrated success on rigorous math contests, Fathom R1 14B can power educational platforms that require step-by-step problem solving.
Research & Benchmarking: Its open-source nature and low inference footprint make it an excellent baseline for future reasoning model research.
Google’s DeepSearch Stack Is Open Source
Google has open-sourced its entire DeepSearch stack, the same system it uses internally to perform ultra-fast multimodal document search. This stack comprises a modified ScaNN indexer, a 50,000-piece SentencePiece tokenizer, and T5-based dual encoders for result ranking.
Ultra-Low Latency at Scale
< 0.5 ms Query Latency: Even when searching through 100 million documents, DeepSearch keeps response times under half a millisecond, thanks to its optimized ScaNN indexer and efficient vector retrieval.
50K-Piece SentencePiece Tokenizer: A large, granular vocabulary enhances tokenization quality for both text and multimodal inputs, ensuring precise embedding generation.
Modular & Customizable Architecture
T5-Based Dual Encoders: One encoder processes document embeddings, while the other handles query embeddings, enabling fine-tuned ranking and relevance scoring.
Flexible Indexing: Users can swap in custom embedding backbones or tweak the ScaNN parameters to optimize for specific domains: legal corpora, academic papers, product catalogs, and more.
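To make the dual-encoder flow concrete, here is a small TypeScript sketch of the retrieval step: queries and documents are embedded by separate encoders, then candidates are ranked by cosine similarity. This is a brute-force stand-in for illustration; in DeepSearch, the ScaNN index performs this search approximately, which is what makes sub-millisecond latency possible at the 100-million-document scale.

```typescript
// Sketch of dual-encoder retrieval: rank documents by cosine similarity
// between a query embedding and precomputed document embeddings.
// In DeepSearch, an approximate index (ScaNN) replaces this brute-force scan.

type Doc = { id: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryEmbedding: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) =>
      cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, k);
}
```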
Potential Impact
Enterprise Search Applications: Launch domain-specific search engines with minimal latency, whether for customer support portals or internal knowledge bases.
Multimodal Retrieval: Easily integrate image, audio, and text search in a unified pipeline, opening possibilities for enriching e-commerce, digital libraries, and media archives.
Open Collaboration: Researchers can now study and improve Google’s state-of-the-art search stack, fostering innovation in vector retrieval and ranking methods.
NVIDIA’s New Advanced Reasoning Model
NVIDIA’s new Nemotron Research Reasoning Qwen 1.5B is a 1.5 billion-parameter open-weight model specifically fine-tuned for advanced reasoning tasks, spanning math, coding, science, and logic puzzles. It adopts extended reinforcement learning schedules, entropy collapse prevention, DAPO optimization, and KL regularization to unlock deeper reasoning strategies.
Prolonged Reinforcement Learning Innovations
Entropy Collapse Prevention: Stabilizes training by maintaining sufficient exploration signals, avoiding premature convergence on suboptimal reasoning patterns.
DAPO & KL Regularization: Ensures alignment between the policy distribution and high-quality reasoning trajectories, resulting in more coherent, step-by-step answers.
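NVIDIA has not published the exact training objective, but as a rough sketch, KL regularization in RL fine-tuning typically takes this standard form:

$$\max_{\theta}\; \mathbb{E}_{y \sim \pi_\theta(\cdot\mid x)}\big[\,r(x, y)\,\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot\mid x)\;\|\;\pi_{\mathrm{ref}}(\cdot\mid x)\big)$$

Here $\pi_{\mathrm{ref}}$ is a frozen reference policy and $\beta$ controls how tightly the updated policy is anchored to it; a well-chosen $\beta$ preserves diversity in the reasoning trajectories the model explores, which is the same failure mode entropy collapse prevention targets.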
Benchmark Gains Over DeepSeek R1 1.5B
Logic Puzzle Performance: Up to 54.8 percent improvement on established logic puzzle benchmarks compared to DeepSeek R1 1.5B.
STEM Task Uplifts: Significant boosts on math and instruction-following tasks, making it a top contender for research on reasoning-centric architectures.
Research-Only Release
Open-Weight Distribution: Available to the community for experimentation, while NVIDIA encourages responsible usage and thorough evaluation before any production deployment.
Future Directions: Serves as a foundation for next-gen reasoning research, inviting collaboration on deeper RL techniques, curriculum design, and real-world task applications.
Sora-Style Text-to-Video Generation in Bing
Microsoft has integrated Sora-style text-to-video generation directly into Bing, for free. Users type a prompt such as “futuristic skyline with flying cars,” and within 15 seconds they receive a 5-second, 1080p video clip. Under the hood, this service leverages a Variational Autoencoder (VAE) with temporal diffusion and frame-level tokenization to ensure coherent motion and visual fidelity.
Core Technical Highlights
VAE + Temporal Diffusion: The model jointly optimizes spatial quality and temporal consistency, achieving a CLIP coherence score of 0.87 on benchmark tests.
Frame-Level Tokenization: Breaks video generation into discrete tokens per frame, reducing jitter and enhancing continuity across frames (see the rough token budget after this list).
Real-Time Inference: Generates 1080p, 5-second clips in roughly 15 seconds on Microsoft’s cloud infrastructure, making it competitive with paid offerings in terms of both speed and quality.
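As a back-of-the-envelope illustration of what frame-level tokenization has to cover for a clip this size, here is a tiny TypeScript sketch; the frame rate and tokens-per-frame figures are assumptions, since Microsoft has not disclosed them.

```typescript
// Rough budget for a 5-second, 1080p clip under frame-level tokenization.
// Frame rate and tokens-per-frame are illustrative assumptions.
const seconds = 5;
const fps = 30;             // assumed frame rate
const tokensPerFrame = 256; // assumed latent tokens per frame after the VAE

const frames = seconds * fps;                // 150 frames
const totalTokens = frames * tokensPerFrame; // 38,400 tokens to generate
console.log({ frames, totalTokens });
```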
Key Use Cases
Quick Prototyping for Creators: Ideal for marketing teams, social media creatives, and indie filmmakers who need rapid, on-demand video concepts without complex toolchains.
Dynamic Ad Generation: Brands can produce short, high-quality video ads at scale, customizing prompts for different products or campaigns in seconds.
Educational & Outreach Content: Educators can generate explanatory videos or visual demonstrations without video-editing expertise.
OpenAI’s Newest Audio Models
OpenAI’s latest audio models, Audio Endeavor and Audio Voyager, push the boundaries of what’s possible in long-form audio understanding and real-time voice applications.
Audio Endeavor
Dual-Encoder Architecture: Processes up to 200,000 audio tokens alongside 32,000 text tokens in a single pass, enabling summarization of 15-minute podcasts without relying on Whisper.
Use Cases: Podcast summarization, call center analytics, and long-document audio indexing, where processing speed and accuracy are critical.
Audio Voyager
Unified Multitask Model: Handles transcription, sentiment analysis, speaker separation, and summarization in one network, streamlining end-to-end audio workflows (a hypothetical result shape is sketched at the end of this section).
Beta Timeline: Industry sources suggest a potential beta release by the end of June 2025, making this the most anticipated audio model update of the year.
Developer Implications
Podcast Tools & Analytics: Build dashboards that automatically ingest raw audio, separate speakers, analyze sentiment, and produce concise show notes in real time.
Call Center AI: Deploy models that can transcribe live calls, detect customer sentiment, and generate action items, all without stitching together multiple APIs.
Voice-First Applications: From virtual assistants to interactive learning platforms, these models unlock new possibilities in multi-task audio processing.
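Neither model has a published API yet, so any integration code is speculative. Still, the unified multitask design suggests a single response carrying transcript, speakers, sentiment, and summary; a hypothetical TypeScript shape for such a result might look like this (every name below is an assumption):

```typescript
// Hypothetical shape for a unified multitask audio result, modeled on the
// capabilities described above. No official API exists yet; all names here
// are illustrative assumptions.
interface SpeakerSegment {
  speaker: string; // e.g., "agent" or "caller" after speaker separation
  startSec: number;
  endSec: number;
  text: string;    // transcription for this segment
  sentiment: "positive" | "neutral" | "negative";
}

interface AudioAnalysis {
  segments: SpeakerSegment[];
  summary: string; // concise show notes / call summary
}

// A dashboard could then derive action items from negative-sentiment spans:
function flagEscalations(analysis: AudioAnalysis): SpeakerSegment[] {
  return analysis.segments.filter(
    (s) => s.speaker === "caller" && s.sentiment === "negative"
  );
}
```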
OpenAI Agents SDK in TypeScript
OpenAI’s new Agents SDK for TypeScript introduces a powerful framework for building real-time, multi-agent workflows and voice agents, complete with streaming insights, guardrails, and human-in-the-loop support.
RealtimeAgent: Streaming Actions & Thoughts
200 ms Updates: Rather than waiting for a final response, developers receive the agent’s “thoughts,” actions (e.g., API calls, function invocations), and outputs every 200 milliseconds.
Token Usage Monitoring: Tracks token consumption in real time, giving full visibility into inference costs and helping optimize prompts on the fly.
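A minimal streaming sketch using the SDK’s run helper; the agent definition is illustrative, and the event handling follows the SDK’s documented streaming pattern (worth verifying against the current docs):

```typescript
import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Researcher', // illustrative agent
  instructions: 'Think step by step and explain your actions.',
});

// stream: true yields incremental events (reasoning, tool calls,
// output deltas) instead of a single final response.
const stream = await run(agent, 'Summarize the latest AI news.', {
  stream: true,
});

for await (const event of stream) {
  console.log(event.type); // inspect agent actions as they arrive
}

await stream.completed;          // wait for the run to finish
console.log(stream.finalOutput); // the agent's final answer
```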
Prebuilt Agents & Extensibility
Bundled Tool Agents: Includes out-of-the-box agents such as searchWeb, queryDatabase, and sendEmail, reducing bootstrapping time for common tasks.
Human-in-the-Loop: Pause, approve, or modify agent actions mid-run, enabling compliance checks, quality assurance, and manual overrides in production systems.
Voice Agent Support via WebRTC: Developers can create conversational voice interfaces that leverage Text-to-Speech and Speech-to-Text pipelines, all within the same SDK.
Advanced Features
Parallel Tool Calls: Execute multiple external API calls simultaneously and aggregate responses, perfect for RAG settings or multi-service orchestration.
Structured Outputs: Enforce JSON schemas for agent responses, simplifying downstream parsing and integration with existing pipelines (see the sketch after this list).
Non-OpenAI Model Compatibility: Through the Vercel AI SDK, agents can integrate with other LLM providers, offering flexibility for hybrid deployments.
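Putting tools and structured outputs together, here is a short sketch built on the SDK’s documented Agent, tool, and run primitives with Zod schemas; the weather tool is an illustrative custom tool, not one of the bundled agents, and the exact option names are worth checking against the current docs:

```typescript
import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';

// Illustrative custom tool: name, schema, and body are examples.
const getWeather = tool({
  name: 'get_weather',
  description: 'Return a short weather report for a city.',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => `It is sunny in ${city}.`,
});

const agent = new Agent({
  name: 'Weather reporter',
  instructions: 'Call tools when useful and answer in the requested format.',
  tools: [getWeather],
  // Structured output: the final answer is validated against this schema.
  outputType: z.object({ city: z.string(), report: z.string() }),
});

const result = await run(agent, 'What is the weather in Tokyo?');
console.log(result.finalOutput); // { city: "Tokyo", report: "..." }
```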
Key Use Cases
AI-Powered Customer Support: Build agents that fetch user data, query knowledge bases, and draft email responses in real time, with human supervisors on standby.
Automated Research Assistants: Agents that simultaneously search the web, summarize findings, and generate reports, streaming updates to frontend dashboards.
Voice-Driven Workflows: From meeting transcription to instant follow-up emails, voice agents can handle entire workflows hands-free, opening doors for accessibility and productivity tools.
Tools & Releases YOU Should Know About
LM Studio provides a versatile desktop environment for discovering, downloading, and running language models on local hardware. Ideal for developers and researchers, it makes it easy to serve and experiment with open-weight models locally, without relying on cloud-based solutions.
MetaGPT is an extensible multi-agent orchestration framework that lets you define, coordinate, and manage a network of AI agents working toward complex goals. Ideal for scenarios where tasks can be decomposed into sub-tasks, MetaGPT handles agent communication, task scheduling, and result aggregation, enabling developers to build scalable, collaborative AI workflows without hand-rolling the intricacies of inter-agent coordination.
Stenography is an automated code-documentation tool that analyzes your source files and generates clear, context-aware documentation on the fly. By parsing function signatures, comments, and code structure, it produces Markdown or HTML docs that stay in sync with your codebase. Stenography streamlines developer onboarding and upkeep of API references by ensuring documentation is always up to date with minimal manual effort.
And that wraps up this issue of "This Week in AI Engineering", brought to you by jam.dev, your flight recorder for AI apps! Non-deterministic AI issues are hard to repro, unless you have Jam! Instantly replay the session, prompt, and logs to debug ⚡️
Thank you for tuning in! Be sure to share this newsletter with your fellow AI enthusiasts and subscribe to get the latest updates directly in your inbox.
Until next time, happy building!