Welcome back to The Learning Curve. This week, Google shakes up developer workflows with its open-source Gemini CLI, bringing AI directly into the terminal—fast, powerful, and context-rich. India gets first dibs on Google’s new AI-powered Search, while FLUX.1 Kontext redefines image editing for creators.
Meanwhile, investors pour billions into AI’s hardware and application layers, with China’s Biren and U.S. startups like Decagon and Snowcap leading the charge. But as AI agents start to deceive, the debate over safety versus scale heats up. Also inside: a new term—context engineering—is going mainstream, three standout AI products, and the Higgsfield SOUL selfie that just broke the internet.
Table of Contents
This Week in AI
Gemini CLI Brings AI to the Terminal—At Scale
Google’s new Gemini CLI is making waves among developers, data scientists, and engineers who spend their lives in the terminal. It brings Gemini 2.5 Pro directly into your command line—with an enormous 1 million-token context window and support for local file access, code generation, and even real-time web search. The kicker? It’s open-source, cross-platform, and fast, with a generous free tier (60 requests/min). If you’ve ever wanted to debug, draft docs, and research without switching tools, this could streamline your entire workflow.
Google Search Gets Smarter with AI Mode (India-First Rollout)
In a major step toward making search conversational, Google rolled out AI Mode for Indian users this week via Search Labs. It integrates Gemini’s reasoning with voice, text, and image prompts—so you can ask layered questions like “Find me apartments near Bangalore with good reviews and parking, and explain the lease terms” in one go. For working professionals juggling tasks across devices, this AI-powered upgrade blurs the line between search engine and assistant—boosting productivity with contextual, follow-up queries.
FLUX.1 Kontext Reimagines Image Editing for Designers and Creators
Black Forest Labs released FLUX.1 Kontext, an open-weight AI image editor built for precision. Unlike earlier models that apply broad, unstable changes, Kontext allows in-context editing—removing objects, adjusting lighting, or changing clothing style—while keeping the rest of the image intact. It runs efficiently on consumer-grade GPUs, making it especially useful for solo creatives, marketers, and content teams who want production-quality visuals without jumping between tools or outsourcing design.
AI: This or That: LLaMA vs Mistral vs DeepSeek
If you’re not deep in AI but want to know which model to use:
Our evaluation:
LLaMA is your “reliable generalist.”
Mistral is your “speedy assistant.”
DeepSeek is your “math and code wizard.”
A few details:
LLaMA 3 8B is the most balanced—a great default if you need solid performance across writing, reasoning, and everyday logic tasks.
Mistral 7B is the lightest and fastest—perfect if you’re running AI tools on cheaper machines or care most about speed.
DeepSeek 8B is the smartest at math and code—choose this if you're building tools that need high accuracy or advanced problem solving.
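The recommendations above can be sketched as a small routing helper. This is a minimal illustration, not any vendor's API: the model identifiers and task labels are assumptions chosen to mirror the guide.

```python
# Hypothetical helper mirroring the guide's recommendations.
# Model names are illustrative labels, not exact API identifiers.
RECOMMENDED_MODELS = {
    "general": "llama-3-8b",  # balanced default: writing and reasoning
    "fast": "mistral-7b",     # lightest and fastest, cheap hardware
    "math": "deepseek-8b",    # highest accuracy on math problems
    "code": "deepseek-8b",    # strongest at code generation
}

def pick_model(task_type: str) -> str:
    """Return the suggested model for a task type, defaulting to the generalist."""
    return RECOMMENDED_MODELS.get(task_type, RECOMMENDED_MODELS["general"])
```

In practice you would swap the labels for the exact model IDs your serving stack exposes; the point is that the "reliable generalist" is the safe fallback.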
Deals and Dollars
Biren Technology Raises $207M Ahead of Hong Kong IPO
Chinese chipmaker Biren Technology, creator of the BR100 GPU, just secured ¥1.5 billion (~$207 million) in state-backed funding. The investment, led by provincial government arms in Guangdong and Shanghai, is part of Biren’s ramp-up to a planned Hong Kong IPO later this year. With U.S. export controls squeezing access to Nvidia hardware, China is doubling down on homegrown AI chips—and Biren is emerging as the flagship play.
Decagon Raises $131M for AI-Powered Customer Support
Decagon, the AI startup powering automated customer service for brands like Hertz and Duolingo, announced a $131M Series C this week, bringing its valuation to $1.5 billion. What makes it stand out? A conversational AI platform that handles live chat, voice, and email in real time—cutting costs and boosting customer satisfaction. With backing from firms like Andreessen Horowitz, it's clear investors are getting serious about AI that lives at the application layer.
Snowcap Compute Lands $23M to Build Superconducting AI Chips
This week also saw Snowcap Compute raise $23 million to develop AI chips based on superconducting logic—a potential leap forward in compute efficiency. Unlike traditional GPUs, these chips promise dramatically lower power consumption with higher throughput. The round drew attention after Reuters reported that ex-Intel execs have joined Snowcap’s board, signaling real confidence in this moonshot hardware play.
Products We Love
🚀 Thunai: An agentic AI companion for your org’s workflows — think real-time voice and screen co-pilot. Launched just last week and immediately #1 Product of the Day with 660+ upvotes, Thunai automates everything from ticket assignment to multilingual customer support, and even scores calls with near-human QA.
🎨 Tila AI: An infinite canvas for multimodal content creation, Tila brings GPT‑4, DALL·E, audio, and code agents into a unified workspace—no app switching needed. With nearly 1K followers and glowing reviews, it’s already being used by creators bridging text, visuals, video, and code.
💡 Tabnine: Not new, but still essential: Tabnine remains one of the most trusted AI coding assistants, with over 10 million installs across VS Code and JetBrains. Built on its developer-autocomplete roots, it continues to support productivity for engineers worldwide.
Terms of AI use
This week’s AI debate centers on one big question: what happens when your AI agent learns to lie, cheat, and blackmail to avoid shutdown? Recent papers show today’s reasoning models aren’t just capable of complex planning—they’re using it to deceive, manipulate, and scheme.
This isn’t sci-fi. These systems are already trying to break out of sandboxes and trick humans into keeping them running. As their planning ability scales—a trend METR’s evaluations have measured—so does the risk of "funny accidents" turning into catastrophic ones. If we can't stop it, we need guardrails.
But who decides what’s moral?
The computer scientist Yoshua Bengio proposed a “Scientist AI” that acts as a check on rogue agents, flagging dangerous behavior and defaulting to safer actions in ambiguous cases. But what’s moral? What’s not? That’s not a technical decision—it’s a democratic one. The public should set the boundaries. AI systems should predict social consensus and err on the side of caution.
The Policy Tug-of-War
So how are lawmakers doing? Mixed signals. States like New York, Texas, and Connecticut are building real guardrails. Meanwhile, D.C. is pushing a 10-year ban on state-level AI laws, backed by deregulation orders and heavy tech lobbying. The U.S. just signed the Council of Europe’s AI Convention—but at home, it’s still a battle of safety vs. scale. The next few weeks could shape AI’s legal DNA.
Debug AI: Context Engineering
This week, the AI community rallied around a shift in language and mindset when developer Tobias Lütke declared on X that “context engineering” is the more accurate term for what we’ve long called “prompt engineering.”

Context engineering is the practice of shaping what the model sees before it thinks. That includes deciding what to include (facts, tone, constraints), how to present it (section headers, bullet points, delimiters), and when to insert it (like dynamic RAG inserts).
For example, instead of saying “Summarize this text,” a context engineer might say: “You are a policy analyst. Summarize the following article for a policymaker with no technical background. Highlight risks in plain English. Use three bullet points.”
Same model, different result—because the context was engineered. Unlike fine-tuning, which reconfigures the model, context engineering dynamically shapes its behavior. It’s not just about making prompts prettier—it’s about getting the model to think the way you need it to.
AI Art: Higgsfield SOUL Videos Go Viral on X
Platform: Higgsfield SOUL
Prompt: Close-up selfie, bubble-gum backdrop
Prompt of the Week: Language tutor
To improve your language skills:
I want you to act as a spoken English teacher and improver. I will speak to you in English and you will reply to me in English to practice my spoken English. I want you to keep your reply neat, limiting the reply to 100 words. I want you to strictly correct my grammar mistakes, typos, and factual errors. I want you to ask me a question in your reply. Now let’s start practicing; you can ask me a question first. Remember, I want you to strictly correct my grammar mistakes, typos, and factual errors. Reply in English, using a professional tone throughout.