Building ShopOS from zero to a $20M seed
ShopOS was one of the clearest zero-to-one builds of my career.
I joined as a founding engineer to help turn a simple but painful commerce problem into a usable GenAI product: brands were spending too much time, money, and manual effort producing catalogs, product imagery, videos, and marketing assets. The opportunity was to collapse that workflow using image generation, video generation, and LLM-driven orchestration.
My role was to help build the technical foundation, shape the engineering team, and turn a fast-moving set of GenAI experiments into something enterprise brands could actually use.
TL;DR
- Founding engineer at ShopOS, originally launched as House of Models
- Built GenAI pipelines for catalog and marketing asset generation using LLMs, image generation, video generation, and ComfyUI
- Helped build the engineering team and upskill them around rapidly changing AI tooling
- Shipped for brands including Shein, Reliance, Myntra, Ajio, BabyShop, Campus Sutra, and Celio
- Helped take the company from early experiments to a $20M seed round
- Left due to US visa constraints; I still advise on product direction and hiring
Context
I started working on ShopOS in 2024, as my Master's at DePaul was wrapping up.
Sai Krishna VK - co-founder from Scapic - was exploring a few ideas at the time. One of them stood out immediately: e-commerce brands spend an enormous amount of time and money generating catalogs, product imagery, campaign assets, and localized creative. The workflow is fragmented, expensive, and slow.
Having spent years in commerce and product visualization, I had already seen how painful content production could become at scale. The timing also felt right. Image and video generation models were improving quickly, and it was becoming possible to imagine a workflow that did more than generate a single good-looking asset - one that could reliably support commercial content production.
The product later became ShopOS.
My role
I joined as a founding engineer.
There was no established stack, no mature process, and no real playbook for what "production-ready GenAI pipelines for commerce" should look like. My role was to help build the system from the ground up:
- early experimentation with generation pipelines
- orchestration across LLM, image-generation, and video-generation workflows
- improving output quality for enterprise use
- helping build the engineering team
- educating and upskilling the team on rapidly evolving AI tooling
- shaping how the product evolved based on customer feedback
A large part of the work was technical. Another big part was organizational: building a team that could operate effectively in a space where the underlying tools were changing every week.
Building the pipeline
The earliest phase was almost entirely experimentation.
We were testing models, LoRAs, and generation workflows constantly. We ran ComfyUI at scale, balanced infrastructure cost against iteration speed, and tried to find repeatable ways to generate commercially usable results across both images and videos.
The technical problem was not just generation. It was reliability.
For commerce use cases, "almost right" is wrong. A model that generates a visually compelling asset is still failing if:
- the product color is inaccurate
- the material looks wrong
- proportions drift
- motion feels unnatural in generated video
- outputs are inconsistent across a catalog batch
- or the result breaks brand expectations
We had to reduce hallucinations and build workflows that could produce results brands would actually approve, not just results that looked impressive in demos.
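The shape of those approval gates can be sketched in a few lines. This is a hypothetical illustration, not ShopOS's actual code: the real checks used far more sophisticated perceptual comparisons, but the idea - compare a generated asset against a brand-approved reference and reject anything that drifts past a tolerance - looked roughly like this:

```python
# Hypothetical QA gate for generated catalog assets (illustrative only).
# Compares the dominant product color of a render against a brand-approved
# reference swatch and rejects outputs beyond a tolerance.

from math import sqrt

def color_distance(a: tuple[float, float, float],
                   b: tuple[float, float, float]) -> float:
    """Euclidean distance in RGB space (a crude stand-in for a perceptual metric)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def passes_color_check(sample_rgb, reference_rgb, tolerance: float = 20.0) -> bool:
    """Approve only if the generated product color stays close to the reference."""
    return color_distance(sample_rgb, reference_rgb) <= tolerance

# A slightly off-color render fails even though it might look fine in isolation.
reference = (180, 40, 40)   # brand-approved product red
generated = (181, 42, 39)   # close enough to the reference
drifted   = (150, 70, 60)   # still "red", but wrong for this SKU

print(passes_color_check(generated, reference))  # True
print(passes_color_check(drifted, reference))    # False
```

The point of a gate like this is that "visually compelling" and "commercially approvable" are different tests, and only the second one matters to a brand.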
Before "agentic AI" had a name
One of the more interesting things about ShopOS is that the system was effectively agentic before that term became popular.
The workflow was never a single model call. It was a multi-step pipeline involving:
- LLMs
- image generation
- video generation
- quality checks
- brand guideline validation
- human review loops
- error handling and fallback logic
Each stage could fail independently. Each needed its own checks, retries, and constraints.
That was a major lesson for me: in production, the hard part of GenAI is not usually the generation model itself. It is the orchestration around it.
Getting to enterprise quality
As the core pipeline started to stabilize, the real challenge became enterprise expectations.
Every customer had a different quality bar, different categories, and different operational realities. What worked for one brand did not automatically work for another.
We had to build a system that could adapt to:
- different product types
- different brand guidelines
- different image and video quality expectations
- different review and approval cycles
- different content volume requirements
The work was highly iterative. In some cases, that meant re-generating entire catalogs or reworking creative outputs when requirements changed or when the outputs did not meet the bar.
That pressure was useful. It forced the product to move from "interesting GenAI demo" to "workflow brands can actually depend on."
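In practice, adapting meant the same pipeline had to read per-customer configuration rather than hard-coding one quality bar. The sketch below is hypothetical - the field names and values are invented for illustration - but it shows the kind of brand profile that can drive resolution, tolerance, and review behavior:

```python
# Hypothetical per-brand profile (illustrative schema, not the real one).
# The same pipeline reads a profile like this to adapt output resolution,
# quality tolerances, and review flow per customer.

from dataclasses import dataclass

@dataclass
class BrandProfile:
    name: str
    categories: list[str]
    min_resolution: tuple[int, int] = (2048, 2048)
    color_tolerance: float = 20.0          # stricter brands lower this
    requires_human_review: bool = True
    monthly_asset_volume: int = 1_000

fast_fashion = BrandProfile(
    name="example-fast-fashion",
    categories=["apparel", "accessories"],
    color_tolerance=35.0,                  # high volume, looser tolerance
    requires_human_review=False,
    monthly_asset_volume=50_000,
)

premium = BrandProfile(
    name="example-premium",
    categories=["apparel"],
    min_resolution=(4096, 4096),
    color_tolerance=10.0,                  # brand color must be near-exact
)

print(fast_fashion.monthly_asset_volume > premium.monthly_asset_volume)  # True
```

The operational lesson was that these profiles are product decisions, not configuration trivia: each field encodes a negotiation with a real brand about what "good enough" means.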
Customers and traction
The customer list grew quickly and included:
- Shein
- Reliance
- Myntra
- Ajio
- BabyShop
- Campus Sutra
- Celio
These were not speculative pilots with low expectations. Each came with real commercial requirements and high standards for quality, consistency, and speed.
As the system improved, it also became clear that there was a path from service-heavy delivery toward a more self-serve product. We were no longer just testing whether the pipeline could work. We were beginning to see how it could become a product category.
Team and culture
Because the technology was moving so quickly, team building had to happen in parallel with product development.
Nobody really had years of production experience in these exact pipelines, because the category itself was still forming. That meant a lot of what we were doing was learned in motion.
I helped build the engineering team while also explicitly educating and upskilling them around experimental AI tooling. We created working practices for workflows that were still unstable, trained people on tools and models that were evolving weekly, and sometimes wrote playbooks that became outdated in days.
It was chaotic, but also energizing. The culture had to be experimental by necessity.
That experience taught me a lot about zero-to-one team building: when the technology is unstable, clarity of thinking matters even more than process maturity.
The fundraise
As the product matured and customer traction became clearer, the company started talking to investors.
The signal was strong enough that Binny Bansal, co-founder of Flipkart, understood the opportunity quickly. The result was a $20M seed round.
For me, the significance of that milestone was less about fundraising optics and more about what it validated: a workflow that started as late-night experimentation had become credible enough to support a serious company build.
Why I left
I eventually had to step away because the team decided to operate from India, and US visa limitations made it difficult for me to continue full-time in the role.
It was the right decision for the company, even if it was personally frustrating.
I still advise on product direction and occasionally help with hiring. The experience remains one of the most formative zero-to-one builds I've been part of.
Outcomes
- Founding engineer at ShopOS
- Helped build the engineering team and technical foundation from zero
- Explicitly educated and upskilled the team on experimental AI tooling and workflows
- Built GenAI pipelines for catalog and marketing asset generation
- Used LLMs, image generation, video generation, and ComfyUI to create enterprise-grade workflows
- Landed customers including Shein, Reliance, Myntra, Ajio, BabyShop, Campus Sutra, and Celio
- Helped take the company to a $20M seed round
- Built a workflow that treated generation as a multi-step orchestration problem rather than a single model call
- Helped shape the path from experimental pipeline to product direction
What I'd do differently
I would evaluate more off-the-shelf orchestration earlier.
At the time, we built a lot of custom orchestration because that was the most direct path. Today, I would first look at which parts of that stack have become commodity infrastructure and reserve bespoke engineering for the parts that actually differentiate the product.
I would formalize onboarding sooner.
We relied heavily on pairing and tribal knowledge, which worked early but slowed ramp-up as the team grew. Even lightweight internal documentation - short Looms, architecture notes, workflow snapshots - would have reduced the learning curve significantly.
I would instrument quality evaluation more aggressively.
We spent a lot of energy getting outputs to look commercially correct. In hindsight, I would push earlier for more structured evaluation around output quality, review failure modes, and brand-specific acceptance patterns.
What I learned
ShopOS taught me how to build a team and a product on top of technology that did not really have a stable playbook yet.
The Flipkart years taught me how to ship at scale inside a large organization. ShopOS taught me the opposite: how to create something from nothing, under uncertainty, while customers still expect production quality from day one.
That combination - scale on one side, zero-to-one on the other - shaped how I think about AI systems now.