Spoiler: AI models are only as good as their training data, and Blazor just doesn't have enough of it yet.

Look, I get it. Blazor is cool. It's modern. It lets you write C# for the front end instead of JavaScript. It's what all the .NET developers want to use in 2025.

But here's the thing I discovered while rewriting a legacy lottery simulation app: GitHub Copilot absolutely crushes it with the more straightforward ASP.NET Razor Pages, but completely falls apart when you try to get it to generate Blazor code.

And I'm not talking about minor issues. I mean it straight-up doesn't work. Claude Sonnet 4.5? Same problem. ChatGPT? Yep, struggles there too.

Before you flame me in the comments—yes, I know this will probably change in the next 6 months. AI is evolving fast. But right now, at the time of writing, if you want AI to help you build ASP.NET web UIs, you're better off with the "boring" technology. Here's what happened.


Watch Video Here

https://youtu.be/sQdByQML_w8?si=rQnGHIPcEnd66Tzn


Source Code

The legacy Lottron2000 source code is available at this GitHub repository.

The rewrite (only parts of the code, NOT the full source) is available at this GitHub repository.

The Setup: Rewriting a Legacy .NET Framework App

I have this old lottery simulation app that needed to be modernized. You know the drill—ancient .NET Framework code that's been sitting there for years, doing its job but looking rough.

I didn't want to just slap some code together. I wanted this done right. So I built out the backend with Clean Architecture:

• Domain logic in its own project, completely separate from data access

• Entity models for the database, DTOs for the front end

• Services layer handling all the business logic

• Proper dependency injection setup

• Everything nicely organized in Visual Studio
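To make the layering concrete, here's a minimal sketch of what that kind of composition root looks like in a modern ASP.NET Core app. The names (`ILotteryService`, `EfLotteryRepository`, and friends) are my illustrative placeholders, not the actual Lottron2000 code:

```csharp
// Program.cs — composition root sketch (placeholder names, not the real project)
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();

// The UI layer only ever sees interfaces and DTOs; concrete
// implementations live in the data-access and services projects.
builder.Services.AddScoped<ILotteryRepository, EfLotteryRepository>();
builder.Services.AddScoped<ILotteryService, LotteryService>();

var app = builder.Build();
app.MapRazorPages();
app.Run();
```

The point of this structure for AI assistance: when the DI registrations spell out which abstractions exist, Copilot has an unambiguous map of what it's allowed to call from the front end.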

The backend was solid. GitHub Copilot actually helped quite a bit there. The architecture was clear, the patterns were well-established, and Copilot could easily infer what I was trying to do.

Then came the front end. And that's where things got interesting.

The Experiment: Let's Try Razor Pages First

Full disclosure: I'm not a front-end guy. I can write backend code all day, but asking me to design a UI? That's where I struggle. So I was genuinely curious if AI could fill that gap for me.

Here's how I set this up (and this is important):

I didn't just throw Copilot at an empty project and say "build me an app."

Instead, I:

1. Created an empty Razor Pages project in Visual Studio myself—gave Copilot the right template to work with

2. Had my entire well-structured backend already there—Copilot could see the domain models, the DTOs, the services

3. Wrote high-level prompts in a Word doc—basically pseudo-specs describing WHAT should be on the page, not HOW to build it

4. Let Copilot figure out the implementation details

My prompts were things like: "Display the DTO objects in this order, show the user options for number generation, display the results with profit/loss calculations."

I didn't specify drop-downs or text boxes or layouts. I didn't create mockups. Just high-level requirements.
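For reference, steps 1 and 2 above amount to a couple of CLI commands if you're not using the Visual Studio dialogs. The project names here are placeholders for illustration, not the real solution layout:

```shell
# Create an empty Razor Pages project (the "webapp" template) in the solution
dotnet new webapp -n Lottron.Web
dotnet sln add Lottron.Web/Lottron.Web.csproj

# Reference the backend projects so Copilot can "see" the
# domain models, DTOs, and services when generating pages
dotnet add Lottron.Web reference Lottron.Domain/Lottron.Domain.csproj
dotnet add Lottron.Web reference Lottron.Services/Lottron.Services.csproj
```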

The Results: Razor Pages Just... Worked

I'm not going to lie, I was surprised.

GitHub Copilot generated:

• A working PlayLottery page with proper form controls

• Clean code-behind that wasn't a mess

• ViewModels that actually made sense

• A UI that looked... decent

• Results display showing all the calculations correctly

It compiled. It ran. First try.

Sure, I wanted to refine some things—add the winning numbers to the display, style the tickets with borders, that kind of stuff. But each time I asked Copilot to make a change, it just... did it. Correctly.

The workflow felt natural: "Hey, add this feature." Copilot adds it. "Make it look like this." Copilot styles it. Done.
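To give a sense of the shape of the output (this is a rough reconstruction from memory, not the generated code verbatim, and names like `PlayLotteryViewModel` are my stand-ins), the code-behind followed the standard Razor Pages pattern:

```csharp
// PlayLottery.cshtml.cs — roughly the shape Copilot produced (illustrative names)
public class PlayLotteryModel : PageModel
{
    private readonly ILotteryService _lotteryService;

    public PlayLotteryModel(ILotteryService lotteryService)
        => _lotteryService = lotteryService;

    // Form inputs bound from the page's controls
    [BindProperty]
    public PlayLotteryViewModel Input { get; set; } = new();

    // Populated after a play; the page renders profit/loss from this
    public DrawResultDto? Result { get; private set; }

    public void OnGet() { }

    public IActionResult OnPost()
    {
        if (!ModelState.IsValid) return Page();
        Result = _lotteryService.Play(Input);
        return Page(); // redisplay the page with the results section filled in
    }
}
```

Nothing clever, and that's exactly the point: it's the pattern that appears in thousands of tutorials and repos, so the model reproduces it reliably.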

Then I Tried Blazor. It Was a Disaster.

Okay, so Razor Pages worked great. But come on—it's 2025. I should be using Blazor, right? It's the modern way to build .NET web apps. Component-based, interactive, all that good stuff.

So I tried the same approach with Blazor on a few different projects.

It failed. Every time.

Not "kind of worked but needed tweaking" failed. Like "this code won't even compile" failed. Or "it runs but crashes immediately" failed. Or "the architecture is just completely wrong" failed.

And it wasn't just GitHub Copilot. I tried Claude Sonnet 4.5. I tried ChatGPT. Same issues across the board.

These are supposed to be the best AI coding assistants available, and they're all struggling with the same thing.

Why Is Blazor So Hard for AI?

Here's my theory (and I stress these are ONLY educated guesses, not backed by actual research). I think it comes down to training data:

1. There's Just Not Enough Blazor Code Out There

Razor Pages has been around since 2017. That's 8 years of Stack Overflow answers, GitHub repos, blog posts, and production code for AI models to learn from.

Blazor? It only hit production in 2019, and honestly, a lot of the code written in those early days is outdated now because the framework kept changing.

2. Blazor Keeps Changing

What worked in Blazor Server in 2020 is different from Blazor WebAssembly in 2022, which is different from Blazor Web Apps in 2026. The patterns, the templates, the best practices—they're all evolving. That makes the training data inconsistent and confusing for AI models.

3. It's Not As Popular As React or Vue

Real talk: Blazor hasn't taken over the world. React, Vue, Angular—those frameworks have massive ecosystems. More developers using them means more training data for AI. Blazor is still relatively niche in comparison.

4. Too Many Configuration Options

Blazor Server or WebAssembly? Interactive or static rendering? Code-behind or inline code? Auto mode? The number of ways you can configure a Blazor project creates ambiguity, and AI models don't handle ambiguity well.
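You can see the ambiguity right in a component file. In .NET 8+ Blazor Web Apps, the same component behaves completely differently depending on a single directive, and the valid choices depend on how the host project was configured (a sketch of the real `@rendermode` syntax; `Counter.razor` is just an example component):

```razor
@* Counter.razor — pick exactly one render mode per component *@

@rendermode InteractiveServer
@* runs on the server over a SignalR circuit *@

@* Alternatives:
   @rendermode InteractiveWebAssembly — runs in the browser on the WASM runtime
   @rendermode InteractiveAuto        — server first, WebAssembly once cached
   (omit the directive entirely for static server-side rendering) *@
```

Four modes, each with different lifecycle behavior and project prerequisites. Training data that mixes all of them without labeling which setup it assumes is a recipe for AI-generated code that targets the wrong one.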

5. Microsoft Keeps Tweaking the Templates

The .NET team is still refining how Blazor projects are structured. Each release brings changes. That's great for innovation, but terrible for creating a stable corpus of training examples for AI to learn from.

What I Learned: AI Needs Good Training Data, Not Hype

Here's the bottom line: AI coding assistants are incredibly good at what they've seen a million times before. They're not so good at newer, less-common frameworks—even if those frameworks are technically superior.

If you want to maximize AI assistance:

• Pick mature, widely-adopted frameworks over cutting-edge ones

• Give AI a well-architected codebase to work with—good architecture amplifies what AI can do

• Set it up for success: don't ask it to generate everything from scratch, give it context

• Expect to iterate—AI gets you 80% of the way there; you refine the rest

• Know that this situation is temporary—what's hard today will be easy in 6 months

This isn't about Razor Pages being better than Blazor. It's not. Blazor is legitimately the future of .NET web development.

But right now, if you need AI to help you build a .NET web UI and you want it to actually work, Razor Pages is the play.

This Will Change (Probably Soon)

Here's the thing I want to emphasize: this is a snapshot at the time of writing.

AI is evolving insanely fast. By the time you read this, it might already be outdated. Blazor is maturing. More developers are adopting it. More code is being written. The training data is accumulating.

And AI models themselves are getting better at handling complex, multi-configuration frameworks. They're moving beyond simple pattern matching toward actual understanding of software architecture.

So yeah, by the time the next version of .NET rolls around, this whole article might be irrelevant. GitHub Copilot might be crushing Blazor projects just as easily as it does Razor Pages today.

But for right now? If you're in my shoes—trying to build a .NET web app with AI assistance—you need to know what actually works.

Final Thoughts

I went into this project excited about Blazor. I came out of it shipping a Razor Pages app.

Not because Razor Pages is better. Not because I'm anti-Blazor. But because when you're working with AI, you need to meet it where it is, not where you wish it was.

The real lesson here isn't about frameworks. It's about understanding that AI coding assistants are tools with limitations, and those limitations are tied to their training data.

Use the boring tech that works. Build a solid architecture. Give AI the context it needs. And iterate until you get what you want.

That's what worked for me. Hope it helps you too.

This article is based on a real project rewriting a legacy .NET Framework lottery simulation app using GitHub Copilot in Visual Studio 2026. Your mileage may vary. AI capabilities are evolving rapidly—what's true today might not be true tomorrow.