In 2015, Víctor Perez and Diego Rodriguez met in a Barcelona engineering lab built more for decibels and code than destiny: two AV engineering students chasing better sound. A few years later they landed in Cornell's graduate program, until a fellowship from the King of Spain (yes, that king) lit the exit sign. They dropped out, not because they couldn't finish but because they didn't need to. The idea was already louder than the degree: build the tool creatives didn't even know they needed yet.
Fast forward to 2025, and Krea has just locked in a $47 million Series B led by Bain Capital Ventures, with continued support from Andreessen Horowitz, Abstract Ventures, and a full bench that includes HF0, A*, and Gradient Ventures. That brings total funding to $83 million and a half-a-billion-dollar valuation that didn't come from smoke and mirrors; it came from precision, velocity, and the kind of product-market fit most startups would sell their roadmap for.
What makes Krea so lethal in the generative AI arms race isn't just the stack; it's the symphony: a unified platform that blends image, video, and 3D generation into a single workflow built for the tactile minds of creatives. Real-time editing. Touch-optimized controls. Instant feedback as you type. This isn't your usual "AI for creatives" paint-by-prompt interface; it's the first time digital tools have actually felt like an extension of human hands and imagination. Pixar's in. So are Samsung, LEGO, Loop Earplugs, and Perplexity AI. And when hobbyists and Hollywood are using the same platform? That's not product diversity. That's dominance.
And they didn't just build it pretty; they built it fast. Krea is pushing more than 5 million generated assets a month. February alone brought in 15.29 million site visits, up 3% month over month. Users have grown 38% since 2024, and 16% of that traffic comes from India, with strong footholds across the U.S., Russia, Saudi Arabia, and Brazil. That kind of global traction doesn't happen without infrastructure that slaps: self-hosted GPU clusters, 4K+ video upscaling, and a research lab obsessed with controllability and hyper-personalization.
Víctor and Diego now lead a 17-person team stacked with talent out of MIT, Berkeley, Princeton, and the University of Chicago. And the next phase? Enhanced video workflows, enterprise-ready model training, and deeper integrations into the creative universe: Photoshop, Figma, Canva, and direct publishing to social. They're building the highway between imagination and output, and there's no toll booth in sight.