Jamie's NLP Just Got a Lot Smarter (Thanks, DeepSeek V4)
TL;DR: Jamie Pull's V2 release powers “Deep Mode” with DeepSeek V4. Multi-angle podcast research, better proper-noun search, same 10-cent pricing.
What’s New
With Jamie Pull's V2 release, Deep Research mode powered by DeepSeek V4 is running the show.
We swapped in DeepSeek's open-source V4 model for search and synthesis. It's a 1.6-trillion-parameter Mixture-of-Experts beast that costs a fraction of what closed models charge. V4-Flash runs at $0.14 per million tokens while matching GPT-4-class performance. That's 268x cheaper than Claude Opus.
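For the curious, the math behind that multiple is a one-liner. The Opus rate shown is simply what those two numbers imply, not a quoted price sheet:

```typescript
// Back-of-the-envelope check on the pricing claim above.
// 0.14 and 268 come straight from this post; the Opus figure is
// whatever those two numbers imply, not published pricing.
const v4FlashPerMTok = 0.14; // $ per million tokens
const claimedRatio = 268;    // "268x cheaper"
const impliedOpusPerMTok = v4FlashPerMTok * claimedRatio;
console.log(`Implied Opus rate: $${impliedOpusPerMTok.toFixed(2)}/M tokens`); // ≈ $37.52
```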
All those savings? We reinvested them into making Jamie search harder and think deeper.
Multi-angle research, not just “here’s the top result”
When you ask Jamie a question now, it explores the topic from multiple angles. Ask about CBDCs and you'll get the Bitcoin maximalist take, the Fed perspective, the privacy angle, and the developing-world view. Comprehensive answers, not just first-match-wins.
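Under the hood, the idea looks roughly like the sketch below. This is illustrative only; `search` and `synthesize` are stand-ins, not Jamie's actual pipeline:

```typescript
// Illustrative fan-out: one question becomes several perspective-
// specific searches, then one synthesized answer.
const ANGLES = [
  "Bitcoin maximalist take",
  "central bank / Fed perspective",
  "privacy angle",
  "developing-world view",
];

async function multiAngleResearch(question: string): Promise<string> {
  // Run every angle in parallel instead of stopping at the top result.
  const findings = await Promise.all(
    ANGLES.map((angle) => search(`${question}, from the ${angle}`))
  );
  // Merge per-angle findings into one comprehensive answer.
  return synthesize(question, findings);
}

// Placeholders for whatever retrieval and synthesis layer you use.
declare function search(query: string): Promise<string>;
declare function synthesize(question: string, findings: string[]): Promise<string>;
```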
Deep vs Fast: Choose Your Best Fit
We give you two ways to ask. Deep mode (the default) throws our most capable models at your question—multi-step reasoning, cross-referenced sources, the works. You’ll wait 60-90 seconds for that thoroughness.

Fast mode runs a leaner, single-pass answer in 30-45 seconds. Perfect for quick lookups or questions you mostly know the answer to. The kicker: both cost the same per call. No premium tier, no upcharge for “thinking harder.”
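If you're calling Jamie programmatically, the trade-off collapses to one flag. A minimal sketch, assuming a hypothetical `mode` parameter (check the API docs for the real name):

```typescript
// Hypothetical shape of the choice; the actual parameter name may differ.
type JamieMode = "deep" | "fast";

// Deep: multi-step reasoning, ~60-90s. Fast: single pass, ~30-45s.
// Same price either way, so latency budget is the only trade-off.
function pickMode(latencyBudgetSeconds: number): JamieMode {
  return latencyBudgetSeconds >= 60 ? "deep" : "fast";
}
```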
Why This Matters
Open-source models are eating the world. DeepSeek V4 dropped on April 24th with MIT-licensed weights and benchmark scores that rival or beat GPT-5 and Claude Opus on coding tasks. But it costs pennies on the dollar.
That price gap isn’t just academic. It’s what lets us run deeper, more comprehensive searches without charging you $50/month.
Same 10 cents per call. More angles explored. Better answers.
Try It
Still L402 Lightning-payable. Still zero setup. Just better.
FAQ
What changed?
We upgraded to DeepSeek V4, reinvested the cost savings into deeper multi-angle search, and fixed proper-noun matching. Same price, better answers.
What’s the difference between Deep and Fast mode?
Deep mode uses our most capable models for multi-step reasoning and cross-referenced sources (~60-90 seconds). Fast mode runs lighter models for single-pass answers (~30-45 seconds). Same price per call. Pick Deep for thorough research, Fast for quick lookups. You can switch between them mid-conversation with one tap.
Is it more expensive now?
Nope. Still 10 cents per research call.
What’s DeepSeek V4?
An open-source 1.6-trillion-parameter model that matches GPT-4/Claude quality at a fraction of the cost. MIT-licensed, released April 24th, 2026.
Will this work with AI agents?
Yes. Hit the /api/pull endpoint with L402 auth. You get back structured JSON with timestamps, clips, and metadata. No hallucination, just actual quotes with audio proof. See the Agent Quick Start API Docs.
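Here's a minimal sketch of that flow. The 402 challenge/retry dance follows the L402 spec; the request body shape and the `mode` field are illustrative assumptions, and `payInvoice` is a placeholder for whatever Lightning client you use:

```typescript
// Sketch of an agent calling /api/pull behind L402. The body shape
// and "mode" field are illustrative assumptions; see the API docs.
async function pullThatUpJamie(question: string): Promise<unknown> {
  const url = "https://pullthatupjamie.ai/api/pull";
  const body = JSON.stringify({ question, mode: "deep" });
  const headers = { "Content-Type": "application/json" };

  // First request: an unauthenticated call should return HTTP 402
  // with an L402 challenge in the WWW-Authenticate header, e.g.
  //   WWW-Authenticate: L402 macaroon="...", invoice="lnbc..."
  const challenge = await fetch(url, { method: "POST", headers, body });
  if (challenge.status !== 402) return challenge.json();

  const header = challenge.headers.get("WWW-Authenticate") ?? "";
  const macaroon = /macaroon="([^"]+)"/.exec(header)?.[1];
  const invoice = /invoice="([^"]+)"/.exec(header)?.[1];
  if (!macaroon || !invoice) throw new Error("Malformed L402 challenge");

  // Pay the ~10-cent invoice to obtain the payment preimage.
  const preimage = await payInvoice(invoice);

  // Retry with the macaroon:preimage credential.
  const paid = await fetch(url, {
    method: "POST",
    headers: { ...headers, Authorization: `L402 ${macaroon}:${preimage}` },
    body,
  });
  return paid.json(); // structured JSON: timestamps, clips, metadata
}

// Placeholder: wire this to your Lightning client (LND, CLN, NWC, etc.).
declare function payInvoice(invoice: string): Promise<string>;
```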
Can I try it without paying?
Yes. The web app has free trial credits. Just go to https://pullthatupjamie.ai/app?view=agent and ask a question.
Why isn't this running in a TEE yet?
The TEE providers are having significant reliability issues with vLLM right now and don't support V4-Flash yet. There's nothing sensitive about this data, as it's open-source conversations (podcasts). I will cut over to TEE when I'm satisfied with the pricing, reliability, and model-selection options. TYFYATTM.
“But you're sending your data to China.” Which is arguably better than sending it to American/Israeli companies. The point of open-source models is that you can run them anywhere. So why not run them in a trusted execution environment, where no one but you sees your interactions? As it should be, before living in a fishbowl somehow became normalized.