🇺🇸 United States Episodes

14,860 episodes from the United States

#1040 - 4M Subscriber Q&A

From Modern Wisdom

I hit 4 million subscribers on YouTube! To celebrate, I asked for questions from YouTube, X, and Instagram, so here’s another 90ish minutes of me trying to answer as many as possible. Expect to learn what’s new with my haircut, how much longer until the new studio is built, if or when an Andrew Tate episode will be released, the most recurring thoughts I have when I feel sad or disappointed and why I think this occurs, my favourite thing about myself, and much more…

Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals

Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Massive Somali Fraud in Minnesota with Nick Shirley, California Asset Seizure, $20B Groq-Nvidia Deal

From All-In with Chamath, Jason, Sacks & Friedberg

(0:00) Bestie intros! Nick Shirley joins the show to discuss his recent investigation on potential daycare fraud in Minnesota
(3:32) Nick's background, how he got into investigative reporting and YouTube, independence, finding this story
(16:36) Why this fraud story is resonating, why the national press initially avoided it
(30:08) Future plans, California, possible Al-Shabaab connection, how high up does Minnesota's fraud go?
(49:15) What the scale of fraud means for America, Minnesota's future, potential patronage scheme
(1:09:06) CA's wealth tax: normalizing the seizure of private property
(1:33:56) Chamath breaks down the $20B Groq-Nvidia deal

Follow Nick Shirley: https://x.com/nickshirleyy

Follow the besties:
https://x.com/chamath
https://x.com/Jason
https://x.com/DavidSacks
https://x.com/friedberg

Follow on X: https://x.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@theallinpod
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
Intro Video Credit: https://x.com/TheZachEffect

Referenced in the show:
https://x.com/nickshirleyy/status/2004642794862961123
https://www.startribune.com/prosecutors-charge-5-people-in-a-minnesota-housing-fraud-scheme/601548944
https://www.nytimes.com/2025/11/29/us/fraud-minnesota-somali.html
https://www.fox9.com/news/fraud-minnesota-detailing-nearly-1-billion-schemes
https://x.com/EricLDaugh/status/2005410646603473256
https://x.com/kevinkileyca/status/2006053056660541840
https://x.com/chamath/status/2006087862492582084
https://x.com/C_3C_3/status/2005722313795440956
https://x.com/OliLondonTV/status/2005988021946999166
https://x.com/tomhennessey69/status/2005556784228909441
https://x.com/WallStreetApes/status/2005849513676923358
https://x.com/MarioNawfal/status/2005179409465299219
https://dcyf.mn.gov/programs-directory/child-care-assistance-program
https://x.com/susancrabtree/status/2006079778873565541
https://x.com/chamath/status/2005386348169953607
https://x.com/aaronburnett/status/2003874734661161064
https://newsletter.amuseonx.com/p/the-somali-patronage-system-has-taken
https://x.com/realdailywire/status/2006122428196442388
https://x.com/rightanglenews/status/2006375449404866720
https://www.auditor.ca.gov/reports/2025-601/

#488 – Infinity, Paradoxes that Broke Mathematics, Gödel Incompleteness & the Multiverse – Joel David Hamkins

From Lex Fridman Podcast

Joel David Hamkins is a mathematician and philosopher specializing in set theory, the foundations of mathematics, and the nature of infinity, and he’s the #1 highest-rated user on MathOverflow. He is also the author of several books, including Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics. And he has a great blog called Infinitely More. Thank you for listening ❤

Check out our sponsors: https://lexfridman.com/sponsors/ep488-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: https://lexfridman.com/joel-david-hamkins-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Joel’s X: https://x.com/JDHamkins
Joel’s Website: https://jdh.hamkins.org
Joel’s Substack: https://www.infinitelymore.xyz
Joel’s MathOverflow: https://mathoverflow.net/users/1946/joel-david-hamkins
Joel’s Papers: https://jdh.hamkins.org/publications
Joel’s Books:
Lectures on the Philosophy of Mathematics: https://amzn.to/3MThaAt
Proof and the Art of Mathematics: https://amzn.to/3YACc9A

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Perplexity: AI-powered answer engine. Go to https://www.perplexity.ai/
Fin: AI agent for customer service. Go to https://fin.ai/lex
Miro: Online collaborative whiteboard platform. Go to https://miro.com/
CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex
Chevron: Reliable energy for data centers. Go to https://chevron.com/power
Shopify: Sell stuff online. Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
MasterClass: Online classes from world-class experts. Go to https://masterclass.com/lexpod

OUTLINE:
(00:00) – Introduction
(01:58) – Sponsors, Comments, and Reflections
(15:40) – Infinity & paradoxes
(1:02:50) – Russell’s paradox
(1:15:57) – Gödel’s incompleteness theorems
(1:33:28) – Truth vs proof
(1:44:52) – The Halting Problem
(2:00:45) – Does infinity exist?
(2:18:19) – MathOverflow
(2:22:12) – The Continuum Hypothesis
(2:31:58) – Hardest problems in mathematics
(2:41:25) – Mathematical multiverse
(3:00:18) – Surreal numbers
(3:10:55) – Conway’s Game of Life
(3:13:11) – Computability theory
(3:23:04) – P vs NP
(3:26:21) – Greatest mathematicians in history
(3:40:05) – Infinite chess
(3:58:24) – Most beautiful idea in mathematics


[State of Code Evals] After SWE-bench, Code Clash & SOTA Coding Benchmarks recap — John Yang

From Latent Space: The AI Engineer Podcast

From creating SWE-bench in a Princeton basement to shipping CodeClash, SWE-bench Multimodal, and SWE-bench Multilingual, John Yang has spent the last year and a half watching his benchmark become the de facto standard for evaluating AI coding agents—trusted by Cognition (Devin), OpenAI, Anthropic, and every major lab racing to solve software engineering at scale. We caught up with John live at NeurIPS 2025 to dig into the state of code evals heading into 2026: why SWE-bench went from ignored (October 2023) to the industry standard after Devin's launch (and how Walden emailed him two weeks before the big reveal), how the benchmark evolved from Django-heavy to nine languages across 40 repos (JavaScript, Rust, Java, C, Ruby), why unit tests as verification are limiting and long-running agent tournaments might be the future (CodeClash: agents maintain codebases, compete in arenas, and iterate over multiple rounds), the proliferation of SWE-bench variants (SWE-bench Pro, SWE-bench Live, SWE-Efficiency, AlgoTune, SciCode) and how benchmark authors are now justifying their splits with curation techniques instead of just "more repos," why Tau-bench's "impossible tasks" controversy is actually a feature not a bug (intentionally including impossible tasks flags cheating), the tension between long autonomy (5-hour runs) vs. interactivity (Cognition's emphasis on fast back-and-forth), how Terminal-bench unlocked creativity by letting PhD students and non-coders design environments beyond GitHub issues and PRs, the academic data problem (companies like Cognition and Cursor have rich user interaction data, academics need user simulators or compelling products like LMArena to get similar signal), and his vision for CodeClash as a testbed for human-AI collaboration—freeze model capability, vary the collaboration setup (solo agent, multi-agent, human+agent), and measure how interaction patterns change as models climb the ladder from code completion to full codebase reasoning.

We discuss:
• John's path: Princeton → SWE-bench (October 2023) → Stanford PhD with Diyi Yang and the Iris Group, focusing on code evals, human-AI collaboration, and long-running agent benchmarks
• The SWE-bench origin story: released October 2023, mostly ignored until Cognition's Devin launch kicked off the arms race (Walden emailed John two weeks before: "we have a good number")
• SWE-bench Verified: the curated, high-quality split that became the standard for serious evals
• SWE-bench Multimodal and Multilingual: nine languages (JavaScript, Rust, Java, C, Ruby) across 40 repos, moving beyond the Django-heavy original distribution
• The SWE-bench Pro controversy: independent authors used the "SWE-bench" name without John's blessing, but he's okay with it ("congrats to them, it's a great benchmark")
• CodeClash: John's new benchmark for long-horizon development—agents maintain their own codebases, edit and improve them each round, then compete in arenas (programming games like Halite, economic tasks like GDP optimization)
• SWE-Efficiency (Jeffrey Maugh, John's high school classmate): optimize code for speed without changing behavior (parallelization, SIMD operations)
• AlgoTune, SciCode, Terminal-bench, Tau-bench, SecBench, SRE-bench: the Cambrian explosion of code evals, each diving into different domains (security, SRE, science, user simulation)
• The Tau-bench "impossible tasks" debate: some tasks are underspecified or impossible, but John thinks that's actually a feature (flags cheating if you score above 75%)
• Cognition's research focus: codebase understanding (retrieval++), helping humans understand their own codebases, and automatic context engineering for LLMs (research sub-agents)
• The vision: CodeClash as a testbed for human-AI collaboration—vary the setup (solo agent, multi-agent, human+agent), freeze model capability, and measure how interaction changes as models improve

John Yang
SWE-bench: https://www.swebench.com
X: https://x.com/jyangballin

Chapters:
00:00:00 Introduction: John Yang on SWE-bench and Code Evaluations
00:00:31 SWE-bench Origins and Devin's Impact on the Coding Agent Arms Race
00:01:09 SWE-bench Ecosystem: Verified, Pro, Multimodal, and Multilingual Variants
00:02:17 Moving Beyond Django: Diversifying Code Evaluation Repositories
00:03:08 Code Clash: Long-Horizon Development Through Programming Tournaments
00:04:41 From Halite to Economic Value: Designing Competitive Coding Arenas
00:06:04 Ofir's Lab: SWE-ficiency, AlgoTune, and SciCode for Scientific Computing
00:07:52 The Benchmark Landscape: TAU-bench, Terminal-bench, and User Simulation
00:09:20 The Impossible Task Debate: Refusals, Ambiguity, and Benchmark Integrity
00:12:32 The Future of Code Evals: Long Autonomy vs Human-AI Collaboration
00:14:37 Call to Action: User Interaction Data and Codebase Understanding Research

#2433 - James McCann

From Joe Rogan Experience

James Donald Forbes McCann is a comedian, author, and host of “The James Donald Forbes McCann Catamaran Plan.” His latest special, “James Donald Forbes McCann: Black Israelite,” is streaming on YouTube.
www.jdfmccann.com
www.youtube.com/@JamesDonaldForbesMcCann
www.patreon.com/jdfmccann

Perplexity: Download the app or ask Perplexity anything at https://pplx.ai/rogan.
Get a free welcome kit with your first subscription of AG1 at https://drinkag1.com/joerogan
50% off your first box at https://www.thefarmersdog.com/rogan!

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Indicators of the Year, Past and Future

From Planet Money

2025 is finally over. It was a wild year for the U.S. economy. Tariffs transformed global trading, consumer sentiment hit near-historic lows, and stocks hit dramatic new heights! So … which of these economic stories defined the year? We will square off in a family feud to make our case, debate, and decide it. Also, as we enter 2026, we are watching the trends and planning out what next year's stories are likely to be. So we're picking which indicators will become next year's most telling. On today's episode, our indicators of this past year AND our top indicator predictions for 2026.

Related episodes:
The Indicators of this year and next (2024)
This indicator hasn't flashed this red since the dot-com bubble
What would it mean to actually refund the tariffs?
What AI data centers are doing to your electric bill
What indicators will 2025 bring?

Pre-order the Planet Money book and get a free gift. / Subscribe to Planet Money+
Listen free: Apple Podcasts, Spotify, the NPR app or anywhere you get podcasts.
Facebook / Instagram / TikTok / Our weekly Newsletter.

This episode of Planet Money was produced by James Sneed. The episodes of The Indicator were produced by Angel Carreras, edited by Julia Ritchey, engineered by Robert Rodriguez and Kwesi Lee, and fact-checked by Sierra Juarez. Kate Concannon is the editor of the Indicator. Alex Goldmark is our executive producer.

For sponsor-free episodes of The Indicator and Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy

If you want a rich life, watch this before 2026

From My First Million

Get Jesse's guide to plan a massive 2026 (his exact system for building billion-dollar companies): https://clickhubspot.com/akd

Episode 780: Shaan Puri ( https://x.com/ShaanVP ) flies to Jesse Itzler’s ( https://x.com/JesseItzler ) home to plan an epic 2026.

Show Notes:
(0:00) Intro
(6:00) Step 1: Get Light
(11:30) Step 2: Close the books
(24:38) Step 3: Plan your year
(42:16) Step 4: 8 boxes

Links:
• The Big A## Calendar - https://thebigasscalendar.com/
• Jesse’s YouTube channel - https://www.youtube.com/channel/UCHs5VVcrc-CgIpx1G3ioZ-A

Check Out Shaan's Stuff:
• Shaan's weekly email - https://www.shaanpuri.com
• Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents.
• Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC

Check Out Sam's Stuff:
• Hampton - https://www.joinhampton.com/
• Ideation Bootcamp - https://www.ideationbootcamp.co/
• Copy That - https://copythat.com
• Hampton Wealth Survey - https://joinhampton.com/wealth
• Sam’s List - http://samslist.co/

My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano

An ode to living on Earth | Oliver Jeffers (re-release)

From TED Talks Daily

If you had to explain to a newborn -- or an alien -- what it means to be a human being living on Earth in the 21st century, what would you say? Visual artist Oliver Jeffers put his answer in a letter to his son, sharing pearls of wisdom on existence and the diversity of life. He shares observations of the "beautiful, fragile drama of human civilization" in this poetic talk paired with his original illustrations and animations.

Learn more about our flagship conference happening this April at attend.ted.com/podcast

Hosted on Acast. See acast.com/privacy for more information.

The Inside Story of Growth Investing at a16z

From a16z Podcast

This episode is a special replay of David George’s conversation with Harry Stebbings on 20VC. David is a General Partner on a16z’s growth team, and in this discussion he breaks down how he thinks about breakout growth investing: why great business models are now table stakes, where real edge comes from non-consensus views on TAM, and how to underwrite upside in a world of higher prices and increasing competition. They also dig into the mechanics behind the scenes: unit economics at growth, “pull vs push” products, winner-take-most market structures, and how David decides when to double or triple down on a company. Along the way, they touch on SPACs, the rise of crossover funds, single-trigger decision making, and how David manages fear, pressure, and performance over the long arc of an investing career.

[State of Post-Training] From GPT-4.1 to 5.1: RLVR, Agent & Token Efficiency — Josh McGrath, OpenAI

From Latent Space: The AI Engineer Podcast

From pre-training data curation to shipping GPT-4o, o1, o3, and now GPT-5 thinking and the shopping model, Josh McGrath has lived through the full arc of OpenAI's post-training evolution—from the PPO vs DPO debates of 2023 to today's RLVR era, where the real innovation isn't optimization methods but data quality, signal trust, and token efficiency. We sat down with Josh at NeurIPS 2025 to dig into the state of post-training heading into 2026: why RLHF and RLVR are both just policy gradient methods (the difference is the input data, not the math), how GRPO from DeepSeek Math was underappreciated as a shift toward more trustworthy reward signals (math answers you can verify vs. human preference you can't), why token efficiency matters more than wall-clock time (GPT-5 to 5.1 bumped evals and slashed tokens), how Codex has changed his workflow so much he feels "trapped" by 40-minute design sessions followed by 15-minute agent sprints, the infrastructure chaos of scaling RL ("way more moving parts than pre-training"), why long context will keep climbing but agents + graph walks might matter more than 10M-token windows, the shopping model as a test bed for interruptability and chain-of-thought transparency, why personality toggles (Anton vs Clippy) are a real differentiator users care about, and his thesis that the education system isn't producing enough people who can do both distributed systems and ML research—the exact skill set required to push the frontier when the bottleneck moves every few weeks.

We discuss:
• Josh's path: pre-training data curation → post-training researcher at OpenAI, shipping GPT-4o, o1, o3, GPT-5 thinking, and the shopping model
• Why he switched from pre-training to post-training: "Do I want to make 3% compute efficiency wins, or change behavior by 40%?"
• The RL infrastructure challenge: way more moving parts than pre-training (tasks, grading setups, external partners), and why babysitting runs at 12:30am means jumping into unfamiliar code constantly
• How Codex has changed his workflow: 40-minute design sessions compressed into 15-minute agent sprints, and the strange "trapped" feeling of waiting for the agent to finish
• The RLHF vs RLVR debate: both are policy gradient methods, the real difference is data quality and signal trust (human preference vs. verifiable correctness)
• Why GRPO (from DeepSeek Math) was underappreciated: not just an optimization trick, but a shift toward reward signals you can actually trust (math answers over human vibes)
• The token efficiency revolution: GPT-5 to 5.1 bumped evals and slashed tokens, and why thinking in tokens (not wall-clock time) unlocks better tool-calling and agent workflows
• Personality toggles: Anton (tool, no warmth) vs Clippy (friendly, helpful), and why Josh uses custom instructions to make his model "just a tool"
• The router problem: having a router at the top (GPT-5 thinking vs non-thinking) and an implicit router (thinking effort slider) creates weird bumps, and why the abstractions will eventually merge
• Long context: climbing Graph Blocks evals, the dream of 10M+ token windows, and why agents + graph walks might matter more than raw context length
• Why the education system isn't producing enough people who can do both distributed systems and ML research, and why that's the bottleneck for frontier labs
• The 2026 vision: neither pre-training nor post-training is dead, we're in the fog of war, and the bottleneck will keep moving (so emotional stability helps)

Josh McGrath
OpenAI: https://openai.com
https://x.com/j_mcgraph

Chapters:
00:00:00 Introduction: Josh McGrath on Post-Training at OpenAI
00:04:37 The Shopping Model: Black Friday Launch and Interruptability
00:07:11 Model Personality and the Anton vs Clippy Divide
00:08:26 Beyond PPO vs DPO: The Data Quality Spectrum in RL
00:01:40 Infrastructure Challenges: Why Post-Training RL is Harder Than Pre-Training
00:13:12 Token Efficiency: The 2D Plot That Matters Most
00:03:45 Codex Max and the Flow Problem: 40 Minutes of Planning, 15 Minutes of Waiting
00:17:29 Long Context and Graph Blocks: Climbing Toward Perfect Context
00:21:23 The ML-Systems Hybrid: What's Hard to Hire For
00:24:50 Pre-Training Isn't Dead: Living Through Technological Revolution

#2432 - Josh Dubin

From Joe Rogan Experience

Josh Dubin is the Executive Director of the Perlmutter Center for Legal Justice, a criminal justice reform advocate, and a civil rights attorney.
https://cardozo.yu.edu/directory/josh-dubin

Perplexity: Download the app or ask Perplexity anything at https://pplx.ai/rogan.
Visible. Live in the know. Join today at https://www.visible.com/
50% off your first box at https://www.thefarmersdog.com/rogan!

Learn more about your ad choices. Visit podcastchoices.com/adchoices

How to prepare yourself for 2026 (with 3 lessons from TED-Ed)

From TED Talks Daily

The end of the year is a time to reflect and think ahead. What hopes did you have for 2025, and what might be different for 2026? In this special episode, learn from three TED-Ed lessons on how to overcome your mistakes, make smarter decisions and get motivated even when you don’t feel like it.

Learn more about our flagship conference happening this April at attend.ted.com/podcast

Hosted on Acast. See acast.com/privacy for more information.

[State of RL/Reasoning] IMO/IOI Gold, OpenAI o3/GPT-5, and Cursor Composer — Ashvin Nair, Cursor

From Latent Space: The AI Engineer Podcast

From Berkeley robotics and OpenAI's 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI's reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn't change the world when o1 actually achieved it, how RL doesn't generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017–2022) and why most of it didn't pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory—where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity.

We discuss:
• Ashvin's path: Berkeley robotics PhD → OpenAI 2017 intern (Dota era) → o1/o3 reasoning team → Cursor ML lead in three months
• Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman's take)
• The IOI Gold paradox: "If you told me we'd achieve IOI Gold in 2022, I'd assume we could all go on vacation—AI solved, no point working anymore. But life is still the same."
• The RL research era (2017–2022) and why most of it didn't pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize
• Inside the o1 origin story: a dozen people, conviction from Ilya and Jakob Pachocki that RL would work, small-scale prototypes producing "surprisingly accurate reasoning traces" on math, and first-principles belief that scaled
• The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up
• Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, DataDog) living in the product
• Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20–25 person ML team with high-taste co-founders who code daily
• The next paradigm shift: continual learning with infinite memory—models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room)
• Why off-policy RL is unstable (Ashvin's favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews
• The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice

Ashvin Nair
Cursor: https://cursor.com
X: https://x.com/ashvinnair_

Chapters:
00:00:00 Introduction: From Robotics to Cursor via OpenAI
00:01:58 The Robotics to LLM Agent Transition: Why Code Won
00:09:11 RL Research Winter and Academic Overfitting
00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI
00:21:30 OpenAI's Reasoning Journey: From Codex to O1
00:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance
00:22:39 RL for Reasoning: The O-Series Conviction and Scaling
00:25:47 O1 to O3: Smooth Internal Progress vs External Hype Cycles
00:33:07 Why Cursor: Co-Designing Products and Models for Real Work
00:34:14 Composer and the Future: Online Learning Every Two Hours
00:35:15 Continual Learning: The Missing Paradigm Shift
00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable

The Most Hidden Path to Financial Freedom in America

From My First Million

Get 200 business ideas here: https://clickhubspot.com/fda

Episode 779: Sam Parr ( https://x.com/theSamParr ) and Shaan Puri ( https://x.com/ShaanVP ) talk to Alex Smereczniak ( https://x.com/AlexfromFranzy ) about one of the most overlooked paths to wealth creation.

Show Notes:
(0:00) Intro
(2:21) Turning $2K into $400K revenue
(8:48) A case for franchising
(10:56) The blueprint
(16:02) How one operator opened 100 franchises
(23:43) Another Nine
(30:19) Waterloo Turf
(33:47) PopUp Bagels
(36:36) Red Flags
(41:10) Nothing Bundt Cakes, Crumbl Cookie, home services
(46:06) Garage Kings
(50:15) Senior care
(51:52) Funeral homes, crime scene clean up, pet cremation
(55:24) Red flags
(1:02:21) The Flynn Group

Links:
• Franzy - https://franzy.com/
• List of Top Franchise Brands - https://go.franzy.com/download-franzys-top-ten-franchises-of-2026
• WakeWash - https://wakewashwfu.com/
• Dave’s Hot Chicken - https://daveshotchicken.com/
• Another Nine - https://anothernine.com/
• Waterloo Turf - https://waterlooturf.com/
• PopUp Bagels - https://www.popupbagels.com/
• Roark Capital - https://www.roarkcapital.com/
• Nothing Bundt Cakes - https://www.nothingbundtcakes.com/
• Benjamin Franklin Plumbing - https://www.benjaminfranklinplumbing.com/
• Garage Kings - https://garagekings.com/
• Bio 1 - https://bio1sd.com/
• Aftermath - https://aftermath.com/
• Flynn Group - https://flynn.com/

Check Out Shaan's Stuff:
• Shaan's weekly email - https://www.shaanpuri.com
• Visit https://www.somewhere.com/mfm to hire worldwide talent like Shaan and get $500 off for being an MFM listener. Hire developers, assistants, marketing pros, sales teams and more for 80% less than US equivalents.
• Mercury - Need a bank for your company? Go check out Mercury (mercury.com). Shaan uses it for all of his companies! Mercury is a financial technology company, not an FDIC-insured bank. Banking services provided by Choice Financial Group, Column, N.A., and Evolve Bank & Trust, Members FDIC

Check Out Sam's Stuff:
• Hampton - https://www.joinhampton.com/
• Ideation Bootcamp - https://www.ideationbootcamp.co/
• Copy That - https://copythat.com
• Hampton Wealth Survey - https://joinhampton.com/wealth
• Sam’s List - http://samslist.co/

My First Million is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano

253. Top 10: The Best Communication Tips from 2025

From Think Fast, Talk Smart

Our 10 favorite communication insights from 2025. The most transformative communication insights are the ones we actually remember to use. That’s why host Matt Abrahams is taking stock of his favorite communication tips from this year, so we can carry them into the next. In this annual Think Fast, Talk Smart tradition, Abrahams shares his top 10 communication insights from guests over the past year, from facilitating connection through Gina Bianchini's "proactive serendipity" to Jenn Wynn’s use of dialogue as a gateway to synergy. Whether you're looking to build trust, boost productivity, or speak more spontaneously, this year’s top 10 insights offer a reminder of all we’ve learned this year — and a roadmap for better communication in the year ahead.

Episode Reference Links:
Gina Bianchini: 244. Community Creates Change
Muriel Wilkins: 240. Belief It or Not
Jenn Wynn: 222. Discussing Through Discomfort
Richard Edelman: 215. The New Media Landscape
Alex Rodriguez: 201. Ballpark to the Boardroom
Chris Voss / Peter Sagal: 197. Prep or Perish / 198. Pause and Effect / 199. Blunder Pressure / 203. No Script, No Problem
Ada Aka: 191. Memorable Messages
Matt Lieberman: 188. Mind Reading 101
Arthur Brooks: 181. Why Happiness is a Direction, Not a Destination
Laurie Santos: 179. Finding Positive in Negative Emotions
Ep.177 Don’t Resolve, Evolve: Top 10 Lessons From 2024
Ep.120 A Few of Matt’s Favorite Things: 10 Communication Takeaways from 2023's TFTS Episodes

Connect:
Premium Signup >>>> Think Fast Talk Smart Premium
Email Questions & Feedback >>> [email protected]
Transcripts >>> Think Fast Talk Smart Website
Newsletter Signup + English Language Learning >>> FasterSmarter.io
Think Fast Talk Smart >>> LinkedIn, Instagram, YouTube
Matt Abrahams >>> LinkedIn

Chapters:
(00:00) - Introduction
(01:23) - Facilitation and Productive Serendipity
(02:58) - Toxic vs. Healthy Productivity
(05:21) - Dialogue as the Path to Synergy
(07:53) - How Actions Build Trust
(09:19) - Communication as an Unselfish Act
(11:14) - Be Present and Prepare to Be Spontaneous
(13:19) - Why Memorable Words Matter
(15:17) - Persuasion and Identity
(17:06) - Finding Meaning Through Purpose
(19:01) - Listening to Negative Emotions
(21:18) - Conclusion

How to Manage -- and Motivate -- Gen Z

From HBR IdeaCast

How different is the newest generation in the workforce, really? While stereotypes abound — some of them unfair — it’s important to understand what the young adults of Gen Z have in common and how they differ from Millennials, Gen X, and Boomers. Tim Elmore is a leadership coach and author who says that this generation in particular craves connection with their colleagues, meaningful work, and assurances that they’re seen as people, not commodities. He explains how organizational leaders can adapt to the needs of these workers while still maintaining high standards, providing feedback, and building grit and resilience. Elmore wrote the book "The Future Begins with Z: Nine Strategies to Lead Generation Z as They Disrupt the Workplace."

Why a16z's Martin Casado Believes the AI Boom Still Has Years to Run

From a16z Podcast

This episode is a special replay from The Generalist Podcast, featuring a conversation with a16z General Partner Martin Casado. Martin has lived through multiple tech waves as a founder, researcher, and investor, and in this discussion he shares how he thinks about the AI boom, why he believes we’re still early in the cycle, and how a market-first lens shapes his approach to investing. They also dig into the mechanics behind the scenes: why AI coding could become a multi-trillion-dollar market, how a16z evolved from a small generalist firm into a specialized organization, the growing role of open-source models, and why Martin believes AGI debates often obscure more meaningful questions about how technology actually creates value.

Pioneers of AI: Mark Cuban’s investment strategy in this new era of tech

From Masters of Scale

Mark Cuban has spent decades as a serial entrepreneur and investor, with one of the best track records on the planet (including celebrity status on ABC’s Shark Tank). In this episode of Pioneers of AI, Cuban joins host Rana El Kaliouby for a wide-ranging conversation about whether we are in an AI bubble, how he’s applying his investment philosophy to AI, and why the AI world tends to excite him less and less each day.

Learn more about Pioneers of AI: http://pioneersof.ai/
Visit the Rapid Response website here: https://www.rapidresponseshow.com/
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.


🇺🇸 About United States Episodes

Explore the diverse voices and perspectives from podcast creators in the United States. Each episode offers unique insights into the culture, language, and stories from this region.