Welcome back to The Learning Curve. This week, Google DeepMind’s Genie 3 stunned the internet by rendering real-time, explorable 3D environments—straight from text prompts. While Genie expands the frontier of what’s possible with language, ElevenLabs dropped an AI music generator that sings like a pro, LangChain launched an autonomous dev framework, and Elon Musk’s Grok-Imagine sparked another firestorm over AI and content moderation. Meanwhile, Figma’s IPO popped 250%, signaling that investors still believe design—and maybe reality itself—is going AI-native. Let’s dive in.
This Week in AI

ElevenLabs has launched its music generator, "built for businesses, artists, creators, and music lovers." It can generate a complete song from a prompt: artists can create original tracks, advertisers can produce their own creative spots, and anyone can make memes. There is one concern here: lawsuits. Last year, Suno and Udio were sued by the Recording Industry Association of America (RIAA) for training their music-generation models on copyrighted material. To avoid that legal risk, ElevenLabs has partnered with Merlin Network and Kobalt Music Group, two digital publishing and rights platforms for independent musicians, to license their catalogs for AI training.
Elon Musk's xAI has launched Grok-Imagine, a new AI tool that generates images and videos, including content without explicit safety restrictions. Users can create both SFW and sexually explicit visuals ("spicy mode") from text prompts. The launch has raised major concerns around moderation, consent, and potential misuse, reigniting the debate over content guardrails in generative AI platforms.
LangChain has officially introduced OpenSWE, an open-source agentic framework designed for software-engineering workflows. Built on LangGraph, OpenSWE lets developers orchestrate multiple AI agents—like code writers, reviewers, debuggers, and PR bots—into a modular, collaborative system. Think of it as an open-source, AI-powered software team that can handle coding tasks from issue resolution to pull request review. The framework is designed to be extensible, letting teams plug in different LLMs, tools (e.g., GitHub, Docker), and even custom validation agents.
AI This or That: Is Genie 3 Worth the Hype?
Genie 3 is a general-purpose world model that generates interactive 3D environments in real time from simple text prompts. It runs at 720p resolution and 24 frames per second, allowing users—and AI agents—to explore worlds consistently for several minutes—far beyond the 10–20 second limit of its predecessor, Genie 2. Here is what people think:
Where it shines:
Truly general-purpose and quick startup time. Works exceptionally well for gaming environments but also generalizes to other industrial and real-world scenarios.
It learns physics. There are still systematic failures, even for rigid-body physics, but it clearly picks up game-engine-like dynamics and non-rigid physics without an underlying engine (and, in the limit, can learn from game engines via training data).
It works exceptionally well for stylized environments with characters walking around. This will have implications for concept artists, level designers, and game devs.
It is way more fun than video models, suggesting there are high-retention consumer experiences waiting to be built on this.
Photorealistic walk-throughs and drone shots work exceptionally well.
Global illumination and lighting work surprisingly well.
Visual memory is quite powerful, and the same objects remain approximately coherent under occlusion and over longer time horizons.
Open Problems:
Physics is still hard, with obvious failure cases in classical intuitive-physics experiments from psychology (e.g., a tower of blocks).
Social and multi-agent interactions are tricky to handle; 1v1 combat games do not work.
Long instruction following and simple combinatorial game logic fail (e.g., collect some points or keys, go to the door, unlock it, and so on).
The action space is limited.
It is far from being a real game engine and has a long way to go, but this is a clear glimpse into the future.
Genie 3 offers a compelling glimpse into the future of real-time, physics-aware 3D world generation from text, excelling in stylized environments and visual memory—but still struggles with physics consistency, complex interactions, and extended instruction following.
Deals and Dollars
Figma’s IPO Explodes 250%—But Is It Too Much, Too Fast?
Figma's stock skyrocketed on its first trading day, closing at $115.50, a 250% pop from its $33 IPO price, and hitting a $47.9 billion valuation. Silicon Valley loves Figma: its design tools are everywhere, and investors see it as the next SaaS crown jewel. It's more than a tool; it's a must-have ecosystem for Silicon Valley. Adobe put a $20 billion deal on the table for Figma in 2022, a price many called insane. The deal aimed to eliminate Adobe XD's biggest competitor and expand Adobe's reach in collaborative design, but it collapsed in 2023 under regulatory pushback. With the Adobe deal dead, Figma turned to an IPO. Its market cap now dwarfs Adobe's old offer, as investors bet on Figma's future in AI-powered design.
Noma Security Nabs $100M Series B as AI Agent Security Heats Up
Fresh off a jaw-dropping 1,300% ARR surge, AI security startup Noma Security just scored a $100M Series B. That kind of raise puts it in the top 5% of cybersecurity deals this year, signaling serious investor conviction. Evolution Equity Partners led the round, with Ballistic Ventures and Glilot Capital doubling down. AI agents are exploding—UBS says 53% of companies will adopt them by 2026. But with great automation comes great security risk. AI models can leak data, be hijacked for malicious tasks, or go rogue inside enterprise systems. CISOs are scrambling for guardrails. That’s where Noma steps in. Its all-in-one platform secures AI across cloud environments, source code, and dev tools—plug-and-play protection that Fortune 500s love. In a world where AI is moving faster than the rules can keep up, Noma is selling the seatbelt everyone’s suddenly realizing they need.
Neogov’s $3B Leap: AI-Powered Future for Public Sector HR
EQT and CPP Investments just acquired Neogov in a $3B deal, betting big on the future of AI in the public sector. Neogov’s HR and compliance software is already used by thousands of government agencies, helping them streamline hiring, training, and regulatory tasks. Now, under new ownership, the company plans to supercharge its platform with AI—automating workflows and reducing bureaucratic drag for cash-strapped public institutions. The acquisition highlights the growing appeal of “govtech” as governments modernize aging infrastructure. Neogov sits at the center of that shift, offering mission-critical tools to a sector that’s historically under-digitized but ripe for transformation.
Here’s More:
NICE Drops $955M on Cognigy to Dominate the $30B AI Customer Service Gold Rush. Together, NICE and Cognigy aim to build a full-stack AI customer-experience (CX) platform, one that doesn't just route calls or offer canned chatbot replies, but enables autonomous agents to resolve issues, automate backend workflows, and deliver proactive service.
Products We Love
🔄 MotionMatch – AI-Powered Video Editing for TikTok & Shorts
MotionMatch syncs music beats, visual cuts, and transitions automatically using AI. Upload your raw footage and pick a vibe—motion templates will handle the rest. Designed for creators who want to save time but still go viral.
📍 MindPal – AI Knowledge Base from Your Files
Drop in your PDFs, docs, Slack chats, or Notion exports, and MindPal turns them into a searchable, chat-ready AI knowledge base. Think of it as a smarter internal wiki powered by GPT-4. Use it for onboarding, project tracking, or Q&A across teams.
📝 AegisWriter – AI Editor with Real-Time Fact Checking
Tired of AI hallucinations? AegisWriter helps you draft articles, emails, and reports with embedded fact-checking via RAG (retrieval-augmented generation). It cites sources inline and alerts you when a claim can’t be verified—perfect for journalists, marketers, and knowledge workers.
Terms of AI use

On July 23, the White House released an ambitious national AI Action Plan aimed at accelerating adoption across government, defense, and healthcare. The plan promotes a “try-first” approach, pledging to dismantle regulations that “unduly burden AI innovation” and fast-track AI deployment in public institutions. It also reinforces efforts to reshore data centers, energy infrastructure, and semiconductor manufacturing.
Supporters hail the plan as a crucial step in securing U.S. leadership in the global AI race. However, it has sparked concerns over deregulation, monopoly risks, and state-federal tensions. Critics argue the strategy may undermine local laws, empower Big Tech unchecked, and prove difficult to implement amid recent federal funding shortfalls, particularly in education and scientific research.
The coming months will test whether the plan can deliver on its promise of innovation while addressing calls for equity, transparency, and oversight.
Debug AI: Decentralized AI
Decentralized AI is where blockchain meets artificial intelligence—two of the most powerful technologies of our time. It’s based on a simple idea: AI needs lots of data to be smart, and blockchain makes sharing that data secure and trustworthy.
Why does this matter? In traditional AI systems, your data is stored in centralized servers (often owned by big tech). With decentralized AI, your data can be shared safely across networks—so that AI models learn from more diverse sources without compromising privacy.
Imagine:
In cybersecurity, AI detects threats, and blockchain makes sure the threat data can’t be tampered with.
In finance, AI spots fraud, and blockchain locks in the transaction records.
In healthcare, your medical records are encrypted on-chain, and AI analyzes them (without anyone actually seeing your private info).
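The tamper-evidence idea in these examples comes down to one trick: each record stores the hash of the record before it, so changing any entry breaks every link after it. Here is a minimal, stdlib-only sketch of that hash chain; the function names are ours for illustration, and real blockchains add signatures, consensus, and distribution across many nodes on top of this.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: dict) -> None:
    """Add a record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    rec = {"data": data, "prev": prev}
    rec["hash"] = record_hash(rec)  # hash covers data + prev link
    chain.append(rec)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any edit anywhere fails the check."""
    prev = "0" * 64
    for rec in chain:
        expected = record_hash({"data": rec["data"], "prev": rec["prev"]})
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log: list = []
append(log, {"event": "threat detected", "src": "10.0.0.5"})
append(log, {"event": "blocked", "src": "10.0.0.5"})
assert verify(log)                    # intact chain verifies
log[0]["data"]["src"] = "1.2.3.4"     # tamper with an old record...
assert not verify(log)                # ...and verification fails
```

This is the same property that makes an on-chain audit log useful for threat data or transaction records: you cannot quietly rewrite history without invalidating everything that came after.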
After years of hype and sketchy ICOs, the vision is finally catching up to the tech. As AI and blockchain evolve in parallel, their convergence—Decentralized AI—is becoming not just possible, but practical.
AI Art: 720p Worlds, Rendered from Language

This week’s standout AI visual comes straight from a viral Genie 3 demo shared by @AIFrontliner on X. The clip showcases a gorgeously lit canyon scene—complete with a flowing river, real-time lighting, and natural camera glide—all rendered at 720p from a simple text prompt.
Why it matters: this isn’t static video generation—it’s an interactive world that AI agents (and humans) can explore. The consistency across frames, realistic terrain shadows, and ambient light shifts push AI art into a whole new category: playable environments.
It’s not just “AI art” anymore—it’s AI reality design.
This level of fidelity opens the door for real-time cinematic experiences, educational simulations, and user-generated games where language becomes level design.
Prompt of the Week: A Sushi Commercial in Veo 3
This is a prompt that helps you create a deployment-ready video ad:

JSON prompt for Veo3:
{
  "shot": {
    "composition": "sushi flying mid-air in freeze-frame, ingredients slicing through slow-mo then slamming into a box layout",
    "lens": "35mm with fast rack focus",
    "frame_rate": "1500fps during slicing and drops, 60fps tracking",
    "camera_movement": "whip pans, snap zoom on impact, bullet-time swirl around salmon cut"
  },
  "subject": {
    "description": "exploding sushi assortment — salmon, maki, ebi, avocado, rice burst",
    "wardrobe": "",
    "props": "dripping soy, flying ginger, vapor mist, kinetic chopsticks"
  },
  "scene": {
    "location": "black void with glowing grid, floating sushi elements",
    "time_of_day": "stylized timeless",
    "environment": "mist, air swirls, kinetic chop platform"
  },
  "visual_details": {
    "action": "sashimi slices in air, rice explodes into shape, box slams shut in final impact burst, chopsticks cross like a seal",
    "special_effects": "rice shockwave, soy splash trails, steam blast on drop",
    "hair_clothing_motion": ""
  },
  "cinematography": {
    "lighting": "dramatic side light, gloss shimmer pulses on fish",
    "color_palette": "lava orange, sea green, high-gloss black, neon edges",
    "tone": "premium, aggressive, ultra-fresh"
  },
  "audio": {
    "music": "trap beat with cinematic bass hits and percussive rhythm",
    "ambient": "air slice, sushi drop echo",
    "sound_effects": "rice crackle, soy sizzle, blade whoosh",
    "mix_level": "punchy mix, FX-synced with hard stereo hits"
  },
  "dialogue": {
    "character": "",
    "line": "",
    "subtitles": false
  }
}
See you in the next issue.