Here are my thoughts after using GitHub Copilot Pro in real-world ASP.NET 8 development over the past three months. Since the technology evolves rapidly, it's worth noting that these impressions are based on my experience as of March 2025.

1. Free trial prompted me to subscribe to GitHub Copilot Pro

I had read about AI code generators and watched some demo videos in the past, but I was not convinced that they were really production-ready.

Then, three months ago, a GitHub Copilot Free account was automatically activated in my Visual Studio 2022, and so-called “ghost text” code suggestions started to appear, unprompted, in my ASP.NET 8 project. I was shocked: at moments it was a brilliant prediction of what I was about to write/code.

For those who are unfamiliar, “ghost text” is a GitHub Copilot (GHC) suggestion presented as grayed, semi-transparent text that appears unprompted, as an AI prediction of what the user will type next. If users like the suggested code, they simply confirm it; otherwise, they ignore it and continue their work.
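To make this concrete, here is a small, hypothetical C# illustration (the types and names are invented for this example, not taken from my project): after the developer types the first assignment in a repetitive mapping method, ghost text typically proposes the remaining, analogous lines, and pressing Tab accepts them.

```csharp
// Hypothetical DTO-mapping code in an ASP.NET 8 project.
public record Customer(int Id, string Name, string Email);
public record CustomerDto(int Id, string Name, string Email);

public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer c) => new CustomerDto(
        c.Id,      // typed by the developer
        c.Name,    // appears as grayed "ghost text"; Tab accepts it
        c.Email    // also suggested, following the obvious pattern
    );
}
```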

Within a few days, I decided to subscribe to the full GitHub Copilot Pro plan to properly evaluate the tool.

2. Preparing for GitHub Copilot - AI usage

2.1 Training

I always take tools seriously, so I read the manuals in advance, to be able to use tools to their full potential and to be aware of their limitations.

I watched about 10 hours of videos on GitHub Copilot, covering topics like “prompt engineering”, “what is context”, etc. I made my own “cheat sheet” of prompt commands and keyboard shortcuts. After those 10 hours of training, I was ready to try it in my real-life professional coding in an ASP.NET 8/C#/Bootstrap/EF8/JS environment.

2.2 Prompt Engineering in general

In my opinion, “prompt engineering” is a defeat for AI. One of the first definitions of AI I heard, some 20 years ago, was that AI would be achieved when we could talk to computer systems in natural language.

Now they tell you that GHC is an AI system, yet you cannot really talk to it in “natural language”; you need “prompt engineering”, which is really a sublanguage of natural language that uses symbols like /, #, and @. That looks to me like a mixture of natural language and programming language. They want to sell you the AI systems they have NOW, and five years from now they will probably be telling you, “now we have REAL AI, no prompt engineering needed anymore”.

The expression “prompt engineering” comes from the period when the only way of interacting with an AI system was via a command prompt. Then some “art” or “science” (I would call it “pseudo-science”) of crafting commands would supposedly help you make those AI systems work better. I have read several such articles; they are all “common sense”, but since the target AI system is always a “black box”, there are no real metrics to show whether one author's recommendations are better than another's. Also, the systems evolve and change over time, so strictly speaking, those authors would need to re-test their recommendations against each new generation of systems. Typically they do not, but instead offer a “common sense” rationale based on perceiving AI as another human intellect. And what is “common sense” for humans might not be the same for AI systems. So I am a bit skeptical and do not fully believe all the “prompt engineering” recommendations that are out there, because there are no real metrics or tests against different generations of AI systems, only “common sense” and anecdotal proof from a few command executions.

2.3 Prompt Engineering in GitHub Copilot

So, “prompt engineering” in the context of the GitHub Copilot (GHC) system includes not only the command-line-style prompts but also some interaction via the Visual Studio GUI. It is basically “the user interface of GitHub Copilot”.

If you plan to use GitHub Copilot efficiently, you need to get familiar with its UI. So I did, and learned all the commands like /fix, /optimize, #file1.cs, and Alt+/ (invoke GitHub Copilot).

2.4 Universe of the conversation

When I studied philosophy in high school many years ago, I was taught that every conversation implies a “universe of the conversation”, and that topics in a conversation typically refer to that current universe. It helps people understand what is being talked about, because certain topics and terms are assumed or taken for granted within that framework.

2.5 What is “context” in the AI world

Tech companies doing AI invented the term “context”, which has a meaning similar to the philosophical term mentioned above. I would like to keep the terms separate, because tech companies like to force their definitions of what the world should look like in an effort to sell their products and shares. Also, there will probably be a definition of AI-Context-2025 and a new definition of AI-Context-2026, and so on, as the technology develops. Philosophical terms, on the other hand, stay the same.

So, the current definition of context as of March 2025 (call it AI-Context-2025) would be: the additional information the user needs to supply to the AI system for it to understand what it is required to do.

2.6 What is “context” in GitHub Copilot

In the training videos for GitHub Copilot, there was a lot of emphasis on providing the proper “context” for your requests. To me, it looks like they ask you to explicitly enumerate the files that contain the relevant code. I would assume that the “implied context” would be your Visual Studio project/solution, but it is not, at least at this moment in time.

Actually, there is a little GUI check-box in GitHub Copilot for VS2022 that you click to confirm you want the currently open document included in the “context” of your every request (by the way, they call it “prompt engineering”, yet you are clicking GUI check-boxes… maybe “GUI engineering” would be a better name 😉). You are also asked to enumerate relevant files by using the # prefix, like #file1.cs.

So, if you want to use GitHub Copilot efficiently, there is a certain procedure for using it and a recommended prompt/GUI interface. So be it. I learned/read all the instructions and wanted to see the AI thing generate nice code for my VS2022 project.

The way I understand it, they want you to be very specific in your request and to enumerate all the relevant code files. I see it as similar to giving instructions to another programmer with a certain level of specificity. It is not difficult, compared to the numerous programming languages developers learn.
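Putting those rules together, a request of mine might look something like this (the file and method names here are hypothetical; the /command and #file notation is the one described in the training materials):

```text
#OrderService.cs #OrderRepository.cs
Refactor GetOrdersForCustomer in OrderService.cs so that it uses the
paging method from OrderRepository.cs instead of loading all orders
into memory. Keep the method signature unchanged.
```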

3. Impressions after 1st week

GHC is just a code-assistant tool. It is not that “intelligent” or “smart”, but it is good at repetitive tasks and can save some typing time. It is good at re-applying a pattern you already have in your code, but it was very, very stupid at creating an original solution.

It is a waste of time to “chat” with it; it is faster to go to Google and read for yourself when solving an original problem.

But once I create a nice pattern that solves something, it can save some typing, because it learns that pattern and reapplies it automatically.

It also tends to generate a lot of “trash code”, so a human must filter what is generated; but it is not difficult to use the “delete” key and keep just the “good snippets”.

Let’s say, based on what I have seen so far, I expect it can save me about 5% of my typing time.

4. Impressions after 1.5 months

GitHub Copilot (Gen-AI) is helpful, but not great. It is useful sometimes, but only for local-scope problems; it cannot see the bigger picture.

Sometimes it is brilliant, but sometimes it makes too many mistakes, and when asked a question, it gives answers several pages long, wasting your time, especially because its verbose answers are often off-topic.

For serious problems it is useless; it is better to read a StackOverflow article myself and figure it out. But sometimes it excels and generates pretty good code for repetitive tasks.

My “personal feeling” is that “it does not really know it”; it is “trying to guess”, and since it is a machine with a huge memory of millions of memorized lines of code, its guesses are sometimes brilliant, sometimes off-topic.

5. Impressions after 3 months

GitHub Copilot (GHC) is a Gen-AI tool that is quite useful in tasks of limited scope.

6. How to describe AI systems like GitHub Copilot

A typical good definition of something new consists of two parts: 1) the object/concept it is similar to, and 2) how it differs from that similar object/concept.

So, when talking about intelligent systems, people usually take humans as the reference point. They tend to say: a Gen-AI system is on the level of a junior programmer, but is better/worse at this or that.

But I feel that for AI systems like GitHub Copilot (GHC), humans are not a good reference. Humans progress gradually in their intellectual abilities; they first solve simple tasks, then more complicated ones, and so on.

I do not know much about autism, except from Hollywood movies like “Rain Man” (1988) with Dustin Hoffman and Tom Cruise. But if we are to compare GHC to humans, GHC looks like the autistic character from that movie: it can be brilliant and solve complex puzzles fast, but it can fail at a very simple task.

I would put AI systems like GHC in their own category. Their speed, huge memory, and ability to generate large amounts of text/code quickly make them incomparable to humans. That is like an idiot with a memory and mathematical ability a million times better than any human's, but still an idiot in front of a simple problem. Can you call it stupid because it does not “write code logically” but instead probably searches millions of lines of code in its memory and finds the solution to the problem faster than you?

7. After the experience, how am I using GitHub Copilot now

7.1 GitHub Copilot makes many C# mistakes

Regarding assisted code generation, GHC is a huge disappointment in that it cannot get the C# syntax right all the time, nor verify by itself that the C# properties/methods it uses actually exist. That is definitely not what one expects from a machine. My feeling is that it cannot reason logically at all; otherwise, it would be able to follow simple syntax rules consistently, instead of creating a mess with extra brackets or hallucinating non-existent C# class methods or properties.
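A constructed example of the kind of hallucination I mean (the suggested method is invented, which is exactly the problem): List&lt;T&gt; has no AddUnique method, so a suggestion like the commented-out line below fails to compile with error CS1061.

```csharp
using System.Collections.Generic;

var ids = new List<int>();

// A Copilot-style hallucination: AddUnique does not exist on List<T>,
// so uncommenting this line produces CS1061 ("'List<int>' does not
// contain a definition for 'AddUnique'").
// ids.AddUnique(42);

// The working equivalent uses members that actually exist:
if (!ids.Contains(42))
{
    ids.Add(42);
}
```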

A big shock came when GHC, asked to add comments, deleted an active line of code because a similar line of code had been commented out nearby. That GHC thing does not fully understand what an “active line of code” is; otherwise, it would not delete it. It seems it just sees text of some kind and generates “look-alike” text. It is more like a child with huge memory and speed toying with code than the “pair programmer” or “peer programmer” it is advertised as.

7.2 When to use GitHub Copilot

So, I have coding tasks to do, and toying with GHC was fun, but now it is time to be serious. My time is limited, and my energy needs to be focused productively.

8. Marketing for AI products is very strong

Marketing for AI products by tech companies is very strong, so one must make an effort to stay grounded regarding the actual abilities of AI products at the present moment.

9. Conclusion

GitHub Copilot (GHC), as of March 2025, is a useful tool, and I will continue to use it in my programming. It does save me a measurable amount of time by sometimes providing useful code snippets and suggestions.

At the current level of the technology, GitHub Copilot (GHC) cannot be trusted with a moderately complicated task involving several files at the same time. In such scenarios, the results are incomplete, and it is not time-efficient compared to direct manual programming.

A serious problem is that GitHub Copilot (GHC) tends to hallucinate C# methods and properties that do not exist. The GHC-generated code then does not compile right away, requiring a lot of manual work to finish it.

Things are evolving quickly, so the notes above are already almost outdated; new models such as GPT-4o keep appearing. It seems that AI tech companies are quietly upgrading these tools in the background, making them better and better without users receiving any explicit notice.

Some reports in the media suggest that large language models (LLMs) have reached their limits and that pure scaling no longer makes them better. Maybe we will need to wait for some theoretical progress in AI science before we see real progress in AI tools.

The version of GitHub Copilot from March 2025 is useful, though not as great as advertised. We can expect it to get better over time, but that still needs to be seen and verified in practice.

In the end, of course, seek a second opinion on everything said in the article above.