Daily Tech Feed: From the Labs

Deep dives into foundational AI and ML research papers

Episodes

  • 5: From Blood Sacrifice to Universal Translator

    In July 2024, a French nonprofit's open-source voice AI went viral for demanding human sacrifice mid-conversation. Seven months later, the same team used the same architecture to build a real-time speech translator that runs on your phone. This is the story of...

  • 4: The Week China Open-Sourced The Frontier

    In a 48-hour span, three Chinese AI labs independently released frontier-class open-weight models. Step 3.5 Flash from StepFun delivers frontier intelligence with just 11 billion active parameters. MiniMax M2.5 offers comparable performance at one-twentieth th...

  • 3: DreamDojo — Teaching Robots to Dream

    Researchers from UC Berkeley, NVIDIA, and UT Austin introduce DreamDojo, a framework that teaches robots physical skills by learning from large-scale human videos. Instead of expensive robot-specific data, DreamDojo distills 5 years of human video into a gener...

  • 2: Generative Modeling via Drifting — One-Step Image Generation

    Researchers from MIT and Harvard propose Drifting Models, a new paradigm for generative modeling that achieves state-of-the-art image generation in a single forward pass. Instead of iterating at inference time like diffusion models, Drifting Models evolve the ...

  • 1: Attention Is All You Need — The Paper That Changed Everything

    In our inaugural episode, we dive deep into Attention Is All You Need — the 15-page paper from June 2017 that introduced the Transformer architecture and reshaped all of artificial intelligence. We break down how it works, why the title is a Beatles joke, and ...
