I Tested Google’s Veo 3: The Truth Behind the Keynote
At Google’s I/O 2025 conference, the announcement of Veo 3 sent ripples across the internet. Viewers reported being unable to distinguish Veo 3’s output from footage made by humans. However, if you’ve been following Silicon Valley’s promises, this isn’t the first time you’ve heard such claims.
I still remember when OpenAI’s Sora “revolutionized” video generation in 2024. Later reporting revealed that its showcase clips required extensive human labor to fix continuity issues, smooth out errors, and splice multiple AI attempts into coherent narratives. Most of them were little more than glorified montages.
Then reality set in. When the raw outputs of Sora leaked, they displayed the familiar AI video problems: temporal inconsistencies, physics violations, morphing objects, and the infamous “AI hands” that plagued earlier systems.
Now Google claims that Veo 3 can generate flawless videos with synchronized audio from simple text prompts. I can’t help but feel a sense of déjà vu: awe-inspiring demos, adoring media coverage, and everyone talking about the displacement of human creatives.
Veo 3, like its predecessors, represents another chapter in Silicon Valley’s favorite fairy tale: that human labor can be automated away. Yet the reality is far more complicated than these companies are willing to admit.
Testing Veo 3: The Reality Behind Google’s Polished Demos
Google’s demos showcase Veo 3 generating videos that look genuinely convincing: humans with their fingers intact, no extra limbs, synchronized audio, and cinematic quality.
My Real-World Test Results
On May 27, 2025, I tested Veo 3 with a detailed cinematic prompt inspired by Spielberg’s style. I envisioned a soldier’s emotional reunion with his young son, complete with specific requirements for golden hour lighting, intimate camera movement, and scripted dialogue.
Here is my full prompt.
The reality was disappointing. After my one free attempt, Veo 3 produced only a brief, low-quality clip that bore no resemblance to my cinematic vision. The audio consisted of random exclamations (“Oh!” “La!”) instead of the requested dialogue, and the footage lacked the specified lighting, framing, and emotional depth.
Clearly, this was not good.
So, I decided to give it another try during the free trial. However, I didn’t have enough credits. Here are the payment tiers if you’re curious about how to get enough credits to continue prompting.
This single test session produced results that required extensive post-production work to become remotely usable—exactly the hidden human labor that Google’s keynote omits.
The Post-Production Work Google Won’t Admit
Google’s showcase videos (like the one above) most likely required extensive human intervention. Here’s the post-production work hidden behind those “effortless” results:
- Color Grading and Visual Consistency: AI-generated clips often suffer from inconsistent lighting, color temperature shifts, and visual artifacts that require professional colorists and online editors to correct. What appears as seamless output is actually the result of frame-by-frame adjustments.
- Audio Post-Production: While Veo 3 can generate synchronized sound, the quality clearly needs enhancement, judging by my test. There’s no way the AI created the sound design for the above video without human help.
- Composite Editing: The most impressive demos most likely combine multiple AI generations. Editors must select the best segments from dozens of attempts, splicing them together to hide the AI’s failures and amplify its successes.
- Prompt Engineering: Achieving professional results requires extensive experimentation with prompts, parameters, and generation seeds. Teams often generate hundreds of variations before achieving usable footage (a sketch of that brute-force loop follows below). Even in the above video, there are a lot of weird things happening in the car.
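To make that iteration burden concrete, here is a minimal sketch of the generate-and-review loop in practice. It assumes the google-genai Python SDK’s video endpoint; the model ID, config fields, and polling pattern are assumptions based on Google’s published Veo examples and may not match the current API, and the prompt variants are purely illustrative, not my test prompt.

```python
import time

from google import genai
from google.genai import types

# Assumption: the google-genai SDK's long-running video-generation API.
# The model ID and config fields below are illustrative and may differ.
client = genai.Client()  # expects an API key in the environment

base_prompt = "Golden hour, handheld close-up: a lighthouse keeper walks down to the shore."
variants = [
    base_prompt,
    base_prompt + " Shallow depth of field, 35mm film grain.",
    base_prompt + " Slow push-in, warm backlight, ambient ocean sound only.",
]

clips = []
for i, prompt in enumerate(variants):
    operation = client.models.generate_videos(
        model="veo-3.0-generate-preview",  # assumed model ID
        prompt=prompt,
        config=types.GenerateVideosConfig(aspect_ratio="16:9"),
    )
    # Generation is asynchronous; poll until the operation finishes.
    while not operation.done:
        time.sleep(20)
        operation = client.operations.get(operation)
    for video in operation.response.generated_videos:
        client.files.download(file=video.video)
        path = f"attempt_{i}_{len(clips)}.mp4"
        video.video.save(path)
        clips.append(path)

# The hidden labor: a human still reviews every clip, discards the failures,
# and cuts the survivors together in an editing suite.
print(f"{len(clips)} clips generated; the human selection and editing starts now.")
```

Scale that loop from three variants to the hundreds a professional project burns through, and the “effortless” framing starts to look very different.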
What’s marketed as “effortless AI generation” actually represents a hybrid workflow where human expertise remains essential—but systematically devalued and hidden from public view.
If you hope to make a full-length film where each shot doesn’t pull the viewer out of the experience, the color, sound, and edit timing need to be seamless; otherwise, the result is jarring and hard to watch. If you want the full desired effect, you are still better off going out and shooting this scene with real humans.
The Environmental Cost Hidden Behind “AI-Generated” Videos
Beyond the hidden human labor, there’s an environmental crisis that nobody discusses: the massive energy consumption of failed AI generation attempts.
The Hidden Energy Cost
- Each Veo 3 generation requires significant GPU processing power.
- Failed attempts (the majority) waste energy with no usable output.
- Professional projects might require 100+ attempts for a few shots (see the back-of-envelope sketch below).
- Training these models consumes energy equivalent to that used by thousands of homes annually.
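Here is a deliberately rough back-of-envelope sketch of how that waste compounds. Every figure in it is an assumption chosen for illustration (per-clip energy, discard rate, shot count), not a measured number from Google or any published audit; swap in your own estimates.

```python
# Back-of-envelope only: every number below is an illustrative assumption,
# not a measurement from Google or any published study.
WH_PER_CLIP = 100                # assumed energy for one short generation, in watt-hours
ATTEMPTS_PER_USABLE_SHOT = 100   # assumed attempts needed to get one keeper
SHOTS_PER_PROJECT = 40           # assumed number of shots in a small project

total_wh = WH_PER_CLIP * ATTEMPTS_PER_USABLE_SHOT * SHOTS_PER_PROJECT
wasted_wh = WH_PER_CLIP * (ATTEMPTS_PER_USABLE_SHOT - 1) * SHOTS_PER_PROJECT

print(f"Total energy for one small project: ~{total_wh / 1000:.0f} kWh")
print(f"Energy spent on clips that get thrown away: ~{wasted_wh / 1000:.0f} kWh "
      f"({wasted_wh / total_wh:.0%})")
```

Under those assumptions, roughly 99 percent of the energy goes to clips nobody will ever use.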
Google’s polished demos don’t mention the hundreds of discarded attempts behind each “successful” clip. When scaled across thousands of users generating millions of clips, the carbon footprint becomes staggering—all to produce content that still requires human expertise to become usable.
Traditional video production, while resource-intensive, produces predictable results. AI generation creates environmental waste through computational inefficiency, generating unusable output that gets discarded while consuming the same energy as successful attempts.
We’ve Seen This Before: The Predictable AI Hype Cycle
OpenAI’s Sora: From Revolutionary Demo to Reality Check
OpenAI’s Sora launch in early 2024 followed a similar playbook. Initial demos showcased breathtaking videos of woolly mammoths in snowy landscapes, bustling Tokyo street scenes, and surreal artistic visions that seemed impossible for AI to create.
Sora’s announcement sparked widespread speculation about AI’s role in filmmaking.
Media coverage often emphasized its potential to reduce production costs, while online forums debated whether human creativity could become obsolete.
Filmmaker Tyler Perry even paused a studio expansion after testing AI tools like Sora, stating, “Jobs are going to be lost—this is a real danger.” (Sounds like a good excuse to get out of a deal to me.)
Though no major studios announced immediate layoffs, freelance editors and VFX artists reported growing anxiety about job security in online communities like Reddit’s r/Filmmakers.
Follow the Money: Why AI Companies Oversell Automation
Understanding the AI hype cycle requires following the money. Tech companies don’t just oversell AI capabilities to consumers—they weaponize these claims to drive investment rounds, IPO valuations, and stock prices.
The promise of “job replacement at scale” has become Silicon Valley’s most potent fundraising tool.
The Venture Capital Pressure Cooker
Consider the economic incentives at play:
- Venture Capital Pressure: Investors demand exponential growth potential. A tool that “helps human creativity” doesn’t justify billion-dollar valuations the way “replacing entire creative industries” does. Companies learn to pitch disruption, not collaboration.
- Public Market Performance: AI companies trade at massive multiples based on automation promises. Admitting dependence on human labor would crater valuations overnight.
- Enterprise Sales: Corporate buyers don’t want to hear about hybrid human-AI workflows. They want to cut headcount and reduce labor costs. Sales teams learn to emphasize job replacement potential while downplaying implementation complexity.
This creates a cycle where companies must oversell AI capabilities to maintain their valuations, even when internal teams know the technology isn’t ready for autonomous operation. The result is premature deployment of AI systems that require extensive human support—support that gets hidden to maintain the automation narrative.
Executive teams at traditional companies, lacking technical expertise, buy into these promises and begin planning layoffs before understanding what the technology actually requires.
Workers get displaced not because AI can do their jobs, but because executives believe it can based on misleading marketing.
The Human Cost of AI Theater
The human cost of AI overselling extends far beyond Silicon Valley. When companies promise job replacement and fail to deliver, real workers bear the consequences of premature automation attempts.
Creative Industry Job Cuts Based on False Promises
- Creative Industry Destabilization: Creative companies have begun cutting human positions based on AI promises, only to discover that projects still require extensive human expertise. The result will be fewer stable jobs, more gig work, and increased pressure on remaining workers to compete with AI-assisted workflows.
- Skill Devaluation: By presenting human expertise as easily replaceable, AI marketing systematically devalues creative and technical skills. Workers who spent years developing their craft find their contributions reframed as inefficiencies to be automated away.
The Anxiety Economy: When Workers Compete with Myths
- Economic Anxiety: Even workers whose jobs remain safe face constant uncertainty about AI replacement. This anxiety depresses wages and working conditions as employers gain leverage by threatening automation.
- Community Impact: When major employers embrace premature automation, entire communities suffer. Local businesses that depend on stable employment face decline as workers experience job insecurity and reduced spending power.
Perhaps most perniciously, the AI hype cycle creates a self-fulfilling prophecy. By convincing executives that human work is obsolete, companies create conditions where workers are treated as temporary placeholders for future AI systems—regardless of whether those systems can actually deliver.
Building AI That Actually Helps Humans
The tragedy of current AI development isn’t that the technology lacks potential—it’s that this potential is being squandered in service of short-term profits rather than long-term human flourishing.
Veo 3 and similar tools could genuinely assist human creativity, reduce tedious work, and democratize creative production. But realizing this vision requires fundamental changes to how AI is developed, marketed, and deployed.
What Honest AI Development Looks Like
- Require Transparency in Marketing: Companies should disclose the human labor behind AI demonstrations. If a video needed 50 hours of post-production, say so. If systems require human oversight, admit it.
- Center Workers in Development: AI tools should be designed with input from affected workers. Focus on reducing repetitive tasks while preserving human agency and creativity, not eliminating jobs.
- Compensate Training Data Sources: Artists, writers, and creators whose work trains AI models deserve fair payment and consent. Training data should be sourced transparently with clear attribution.
- Choose Evolution Over Revolution: Focus on tools that help humans work more efficiently rather than promising wholesale replacement of creative workers.
Regulatory Solutions We Need Now
- Establish AI Capability Standards: Governments must prevent companies from marketing systems as autonomous when they depend on hidden human labor.
- Protect Workers During AI Transitions: Create retraining programs and transition support for workers whose industries are being disrupted by AI implementation.
- Mandate Environmental Impact Disclosure: Require companies to report the energy consumption and carbon footprint of AI training and generation processes.
The Pattern Repeats: From Promise to Reality Check
Google’s Veo 3 follows the same predictable script we’ve seen play out with Sora, Amazon’s “Just Walk Out,” and countless other AI “breakthroughs.”
Stunning keynote demos hide armies of human workers fixing what the machines can’t handle. Marketing teams sell disruption while engineering teams quietly build human-dependent workflows. Executives plan layoffs based on capabilities that exist only in edited highlight reels.
Tech companies have discovered they can inflate valuations and justify massive investments by promising to eliminate human workers, even when their own systems sometimes require more human intervention than traditional workflows.
The cost isn’t just measured in disappointed investors or overhyped technology—it’s measured in real careers destroyed by premature automation attempts and communities destabilized by economic anxiety.
A Different Path Forward
Here’s what honest AI development could look like: Companies could acknowledge that their tools augment rather than replace human capabilities. Marketing could emphasize collaboration instead of elimination. Development teams could work with creative professionals to build tools that reduce tedious work while preserving human agency and artistic vision.
This collaborative approach wouldn’t generate the same viral headlines or venture capital excitement, but it would create genuinely useful technology that improves working conditions rather than eliminating jobs.
More importantly, it would build trust between technologists and the creative communities whose expertise makes AI possible.
The Questions That Matter
The next time you see a breathtaking AI demo, ask yourself: How many attempts did this take? What human work happened after generation? Who benefited from the hype, and who paid the price when reality didn’t match the promises?
These aren’t just technical questions—they’re moral ones. In an industry that profits from automating human work, we need to demand transparency about what these tools actually require and honesty about their impact on real people’s livelihoods.
Veo 3 and other AI video software could become powerful creative tools. But first, we need to stop accepting Silicon Valley’s automation theater and start building technology that serves human creativity rather than replacing it.
The choice isn’t between embracing AI and rejecting progress; it’s between honest development that includes workers and continued deception that extracts value from the very people who make these breakthroughs possible.
The revolution we need isn’t about generating perfect videos from text prompts. It’s about creating an industry that values human expertise alongside technological capability. That future is still possible, but only if we stop pretending the current path leads anywhere good.