When people talk about marketplaces, they usually talk about demand – growth, users, and marketing channels.

But in practice, marketplaces live or die by supply – and by how early you turn supply into a system rather than a manual operation.

I learned this the hard way while building and scaling a two-sided marketplace in the local services space. Over several years, my team and I grew the platform from an early proof-of-concept to tens of thousands of customers and thousands of active service providers, while maintaining high service quality and retention.

Having made plenty of our own mistakes on this journey, I thought I’d share some of our learnings on how we built a scalable supply system, what broke first, and what actually worked (trust me – founders tend to heavily underestimate some of these things!).

A Bit of Context

Hi, I’m Kirill. I’m a co-founder of a dog walking and pet sitting marketplace and a product-led operator who has spent the past decade building two-sided platforms. I’ve worked across product, operations, fundraising, and supply growth – and most of my hardest lessons came from scaling real-world services (where trust and reliability are everything!).

The Early Setup: Manual, Offline, and Fragile

At the very start of our project, we were really, really small. We had around a dozen dog walkers onboard, a small group of early customers who came through media mentions, and revenue that barely covered basic expenses.

We were doing almost everything manually.

Orders were distributed in chat groups. Providers replied with “+1” if they were available. Someone from the team then assigned the job by hand. New candidates came through occasional posts on social media or small publications and were invited to offline meetings.

At that stage, it worked – mostly because volume was low. But it also meant that the entire company depended on a handful of people manually holding everything together.

Learning to Grow Without Money

Like many first-time founders, we believed that growth would be driven by investment. So we built a pitch deck, prepared a financial model, and started talking to investors. The response was almost always the same: “Interesting idea, but too small. Too niche. Too early.”

We didn’t have strong traction yet. We didn’t have famous advisors. We didn’t have a network. So for a while, we stopped fundraising and focused on organic growth.

We partnered with industry experts, collaborated with larger companies, and relied heavily on free media coverage. It was slow, sometimes frustrating, but it forced us to learn how to operate efficiently. By the time revenue reached a more meaningful level, we had already built a culture of careful cost management and experimentation.

That mindset later became crucial when we started scaling supply.

The First Onboarding System: Offline and Exhausting

Our initial approach to recruiting service providers was entirely offline.

Every week or two, we organised in-person training sessions at a physical location. Candidates were invited to attend, learn the rules, and demonstrate basic skills. A quality manager observed them and made a judgment call.

From the outside, it looked professional. From the inside, it was exhausting.

Worst of all, we were making important decisions based on very little real-world data.

When Growth Broke the Process

Once marketing started working better and demand increased, the old system collapsed. We suddenly needed hundreds of new providers per month, while our funnel was built for dozens. Unsurprisingly, conversions dropped, costs increased, and quality control became borderline impossible – which is downright dangerous in the niche we had picked.

That’s when we realised we weren’t dealing with a hiring problem but with a structural one – and fixing it would require rebuilding the entire onboarding process from scratch.

Our first instinct was to digitise the existing process – but that didn’t work. Simply moving an offline training session to Zoom doesn’t make it scalable – it just makes it cheaper and worse.

So instead, we asked a different question:

“How do we evaluate people in the same environment where they’ll actually work?”

That question changed the way we approached the challenge.

Rebuilding the Funnel from First Principles

The breakthrough came when we stopped trying to replicate offline processes online – and instead designed the funnel around real behaviour and real usage.

The new system had one core idea: automate everything that doesn’t require human judgment, and deepen evaluation where it actually matters.

The new funnel still started online – but it wasn’t just a form.

Candidates applied through a simple entry point and were immediately directed into a learning environment – a very basic internal LMS we provided. There, they learned how the service actually worked – standards, rules, payments, tools, and expectations.

This step did two things at once: it taught candidates our standards before any human time was spent on them, and it filtered out people who wouldn’t engage seriously with the material.

After training, candidates completed a short knowledge test. This turned out to be one of the strongest predictors of future performance. People who couldn’t follow basic rules early almost always caused problems later.

Only after that did we introduce lightweight background screening – enough to reduce risk without destroying unit economics.
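
To make the shape of this funnel concrete, here’s a minimal sketch of a staged pipeline where each automated step either advances or drops a candidate. Everything here – the stage names, the Candidate fields, the 0.8 pass threshold – is illustrative, not our production code:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    APPLIED = auto()
    TRAINED = auto()   # completed the online LMS course
    TESTED = auto()    # passed the knowledge test
    SCREENED = auto()  # passed lightweight background screening

@dataclass
class Candidate:
    id: str
    test_score: float | None = None  # 0.0-1.0, set after the knowledge test
    screening_ok: bool | None = None
    stage: Stage = Stage.APPLIED

PASS_SCORE = 0.8  # illustrative bar for "can follow basic rules"

def advance(c: Candidate) -> bool:
    """Advance a candidate one stage if the automated check passes."""
    if c.stage is Stage.APPLIED:
        c.stage = Stage.TRAINED        # LMS completion is tracked, not judged
        return True
    if c.stage is Stage.TRAINED:
        if c.test_score is not None and c.test_score >= PASS_SCORE:
            c.stage = Stage.TESTED
            return True
        return False                   # cheap filter runs before paid screening
    if c.stage is Stage.TESTED and c.screening_ok:
        c.stage = Stage.SCREENED
        return True
    return False
```

The ordering is the point: the expensive step (screening) sits behind the cheap automated filter (the test) – the exact reordering I come back to in the iteration section below.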

The Real Innovation: Distributed Internships & Data Collection

The most important change came next. Instead of forcing candidates into a single physical location, we embedded onboarding into the existing network. Candidates could book internships across the city, working alongside experienced providers or assisting on real services under supervision.

This removed the biggest bottleneck instantly: onboarding was no longer limited by a single location or by our own team’s calendar.

More importantly, we finally saw how candidates behaved in real conditions.

After each internship, mentors filled out a structured evaluation. Not just gut feelings, but specific criteria combined with open feedback.

We then aggregated signals from the knowledge test, the background screening, and the structured mentor evaluations.

The system produced a clear recommendation, while leaving room for human oversight. We could finally make efficient, data-driven decisions about each service provider – at the required pace and scale.
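
As a rough illustration of what aggregating those signals can look like – the weights, field names, and thresholds below are hypothetical, not our actual model:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    test_score: float           # knowledge test result, 0.0-1.0
    mentor_scores: list[float]  # structured internship evaluations, 0.0-1.0 each
    screening_ok: bool          # lightweight background screening result

def recommend(s: Signals) -> str:
    """Fold funnel signals into a recommendation, leaving room for human review."""
    if not s.screening_ok or not s.mentor_scores:
        return "reject"
    mentor_avg = sum(s.mentor_scores) / len(s.mentor_scores)
    score = 0.4 * s.test_score + 0.6 * mentor_avg  # real-world behaviour weighs more
    if score >= 0.75:
        return "approve"
    if score >= 0.55:
        return "human_review"  # borderline cases go to a person, not the algorithm
    return "reject"
```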

Worth noting: the onboarding process didn’t end at activation. For every new provider, our Customer Success team contacted their first several clients and gathered qualitative feedback. If early signals were poor, we treated onboarding as incomplete – regardless of how good the funnel metrics looked. This final step aligned incentives across the organisation: supply growth meant nothing without real customer satisfaction.
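
Here’s a minimal sketch of that final gate, assuming a hypothetical feed of early ratings. The thresholds are placeholders; the idea is that activation only “sticks” once real customers are happy:

```python
def onboarding_complete(early_ratings: list[int],
                        min_jobs: int = 3, min_avg: float = 4.5) -> bool:
    """Treat onboarding as done only once the first few clients confirm quality.

    early_ratings: 1-5 star ratings from a new provider's first clients,
    collected by the Customer Success team. Thresholds are illustrative.
    """
    if len(early_ratings) < min_jobs:
        return False  # not enough real-world signal yet
    return sum(early_ratings) / len(early_ratings) >= min_avg
```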

What Changed in Practice

Once the system stabilised, the effect was dramatic.

Quality metrics improved rather than declined, and, even more important, supply growth became predictable.

In one representative period, the flow looked roughly like this:

Out of around 4,000–5,600 applicants:

– about 2,000 completed the online training,

– roughly 900 booked at least one internship,

– around 700 completed it,

– and just over 400 became fully activated providers.


End-to-end, only around 8–10% of applicants made it through the entire funnel. Not the most impressive figure at first glance, but to us it meant we could maintain service quality at scale. After all, we cared more about long-term reliability than skyrocketing sign-ups – especially being pet owners ourselves.
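
For readers who like to sanity-check funnel maths, here’s the same flow expressed as stage-by-stage conversion rates, using rough midpoints of the figures above:

```python
# Approximate midpoints of the figures quoted above.
funnel = [
    ("applied", 4800),
    ("completed online training", 2000),
    ("booked an internship", 900),
    ("completed the internship", 700),
    ("fully activated", 400),
]

for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.0%}")

print(f"end-to-end: {funnel[-1][1] / funnel[0][1]:.1%}")  # ~8.3%
```

Stage conversions come out around 42%, 45%, 78%, and 57% – consistent with the 8–10% end-to-end range.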

What Changed Within the Company

One structural decision made this sustainable: ownership. We introduced a dedicated role responsible for supply growth, with clear KPIs around activation volume, cost per activated provider, and early quality.

This person wasn’t just in charge of recruiting – their performance was judged first and foremost on growth metrics. The funnel itself was treated like a product: constantly iterated, reordered, and optimised based on data.

Key to Success: Iterate, Iterate, Iterate

In other words, we didn’t wake up one day with a perfect funnel in mind. We rebuilt it dozens of times.

For example, in early versions, we ran background checks before training. Later, we realised we were spending resources on candidates who would fail basic tests anyway. So we moved screening after training.

At another stage, we required two internships for every candidate. Over time, we noticed that providers who scored highly in the first internship almost always passed the second. So we made the second internship conditional.

We also experimented with the order of learning, testing, and feedback, trying to understand which steps actually predicted real-world performance.

Every change was evaluated against three metrics:

– cost per activated provider,

– early customer ratings,

– retention after the first few months.

If a change improved one metric but hurt the others, it was rolled back.
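
In code terms, each experiment ran against guardrails roughly like this – the metric names and the 2% tolerance are illustrative:

```python
# Guardrail metrics and the direction in which they must not regress.
GUARDRAILS = {
    "cost_per_activated_provider": "lower",   # lower is better
    "early_customer_rating": "higher",        # higher is better
    "retention_90d": "higher",                # higher is better
}

def keep_change(baseline: dict[str, float], variant: dict[str, float],
                tolerance: float = 0.02) -> bool:
    """Keep a funnel change only if no guardrail regresses beyond tolerance."""
    for metric, direction in GUARDRAILS.items():
        delta = (variant[metric] - baseline[metric]) / baseline[metric]
        if direction == "lower" and delta > tolerance:
            return False  # cost went up too much: roll back
        if direction == "higher" and delta < -tolerance:
            return False  # quality or retention dropped: roll back
    return True
```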

Treating this process as a product, with the usual startup mindset – build an MVP and keep experimenting until the results hold up – eventually paid off.

Lessons I’d Share with Marketplace Founders

If I had to summarise this experience in a few principles:

– Supply is a system, not a manual operation – start treating it that way earlier than feels necessary.

– Evaluate people in the environment where they’ll actually work, not in an artificial one.

– Automate everything that doesn’t require human judgment, and deepen evaluation where it actually matters.

– Put cheap automated filters before expensive steps like screening.

– Treat the onboarding funnel like a product: give it an owner, measure it, and iterate relentlessly.

– Supply growth means nothing without real customer satisfaction.

Final Thought

Many marketplaces don’t fail because they can’t acquire users – they fail because they can’t build and scale trust, a challenge far fewer founders talk about.

In industries like ours, where our customers’ furry family members were at stake, trust simply had to be at the core of our operating system.

Not every industry is the same in this respect, but these lessons still help me to this day in every other vertical I’ve worked in.

I hope others make use of them, too.