When I first came abroad for my postgrad, I knew it would be hard to break into the job market - it was unfamiliar territory, I knew no one, and no one knew me. That puts you at a severe disadvantage: you are just one name out of a thousand applicants - so how do you stand out?

Initially, I started with a brute force approach - one month into my masters - I decided to just use the resume I had always used and see how far it could take me. Honestly? It got me a lot of interviews. But success? That came much later. I believe a lot comes down to luck, but you can narrow down the other factors and work on them until everything clicks - that is all you can do. Life is non-linear. You and I could have exactly the same experiences, make the same decisions, share the same past, and yet some variable would differ and life would pan out entirely differently.

Pre-GPT


Pre-GPT does not really mean GPT hadn't come out yet. It means most of the use cases and potential around LLMs had not been fully realized - GPT had arrived and was slowly taking the tech world by storm, but it was not yet actively influencing how companies interviewed.

Interview Journey - 2022/2023

Machine Learning Scientist @ Booking.com

My first interview was with Booking.com (back in 2022) - it was really fun, but I failed miserably. It was not hard per se, but most of the time they won't be hard - you will either not vibe with the interviewer, or you will be intimidated and fail to answer things you know perfectly well.

I confirmed with a friend who also interviewed there, so the general flow was: the interview operates around a business case, and you craft a solution around it - design choices, architectural decisions, and a genuinely deep understanding of the fundamentals behind every model or choice you make. For example, if you suggest A/B testing something, you should know what statistical power is, what hypothesis testing is, how you would design the experiment, and what you would be careful about. If you mention transforming features a certain way, you need to defend both why you chose that method and how it works. If you choose fasttext to train language embeddings, you need to be able to explain skip-gram and CBOW and which one you would pick. If you mention K-Means at some point and suggest, say, the elbow method to choose the number of clusters, you need to explain why it works - not just that you know it is an option.
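To make the A/B testing point concrete, here is a minimal sketch of the kind of reasoning you are expected to defend - the per-arm sample size needed to detect a lift in conversion rate at a given significance level and power. This is my own illustration (the function name and the 10% to 12% numbers are made up, not from any interview), using only the Python standard library.

```python
import math
from statistics import NormalDist

def ab_test_sample_size(p_control, p_treatment, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # quantile for significance
    z_beta = NormalDist().inv_cdf(power)           # quantile for power
    # Pooled variance of the two Bernoulli outcomes.
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_control)          # minimum detectable effect
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a 10% -> 12% conversion lift needs roughly 3.8k users per arm.
n_per_arm = ab_test_sample_size(0.10, 0.12)
```

The interviewer's point stands regardless of the exact formula: if you propose the test, you should be able to walk through why each term is there.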

All in all, it is conversational, and the main goal is to check your concepts and your business acumen. I felt heavy in my heart - not because I failed, but because I knew all of it and the words simply would not come out.

Senior Data Scientist @ FarFetch

The next interview was with FarFetch, which I secured by browsing LinkedIn and connecting with people working in the space of Search and Recommendation. After that, I sent them a nice-looking message describing why I felt I was relevant for the role they had posted, and I got the interview.

There was a task before interviewing, and after that the interview mostly revolved around your experience, going into depth as topics, concepts, and models came up. By the end, you might be expected to code something up to show proficiency. I was given Group Anagrams from Leetcode, which is Medium difficulty. At this point my interviewing was much better, but there was so much mention of BERT in my resume and no actual depth behind it - for example, its tokenization approach, or what its output actually is beyond the fact that the embeddings are 768-dimensional - and I could not fully explain it from scratch. Still, this interview went much better than Booking.
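For reference, Group Anagrams is usually solved by keying each word on its sorted characters - a quick sketch of the standard approach:

```python
from collections import defaultdict

def group_anagrams(words):
    # Two words are anagrams iff their sorted characters match,
    # so the sorted tuple of characters works as a canonical group key.
    groups = defaultdict(list)
    for word in words:
        groups[tuple(sorted(word))].append(word)
    return list(groups.values())

# group_anagrams(["eat", "tea", "tan", "ate", "nat", "bat"]) groups
# "eat"/"tea"/"ate" together, "tan"/"nat" together, and "bat" alone.
```

Sorting each word costs O(k log k) for word length k; hashing a character-count tuple instead would make the key linear, which is a nice follow-up to mention in an interview.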

Data Scientist @ Yandex

I then interviewed at Yandex - a recruiter had approached me with the opportunity, and even though I knew it was going to be really hard, I wanted to give it a go. If it is not clear by now, I have severe anxiety when it comes to interviewing, and at this point I was trying to get over it by giving as many interviews as I could without really caring about the results.

This interview was really boring and dry. It went directly into a Leetcode Medium problem - which broke me, since I neither like Leetcode nor practice it - and the problem was essentially Edit Distance. After that, it moved on to describing your favorite model that is not a neural network. I went with Decision Trees, and there was a bit of back and forth, but I was in a haze by then - mostly because my performance on the Leetcode problem had snatched away all my confidence.
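For context, Edit Distance is the classic dynamic programming problem: the minimum number of insertions, deletions, and substitutions to turn one string into another. A sketch of the standard O(len(a) * len(b)) table-filling solution:

```python
def edit_distance(a, b):
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i  # delete all i characters of a
    for j in range(len(b) + 1):
        dp[0][j] = j  # insert all j characters of b
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete from a
                           dp[i][j - 1] + 1,        # insert into a
                           dp[i - 1][j - 1] + substitution)
    return dp[-1][-1]

# edit_distance("horse", "ros") -> 3
```

Under interview pressure, even remembering the meaning of `dp[i][j]` is half the battle - which is exactly why not practicing Leetcode hurt here.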

By now you can see a pattern: if the vibe is not right, you are not going to make it. If the start is not smooth, the end result is already a given - even if the interviewer is open to giving you a chance, you may sabotage it yourself.

Then, I practiced. I tried to improve my mental model of how I give interviews, and kept revisiting my resume to see what I needed to wrap stories around, following SMART (Specific, Measurable, Achievable, Relevant, Time-bound) to explain past experience. Previously I was just improvising and setting up death traps for myself.

Eventually, I got an interview with Zoopla. This one was the saddest for me, because it was such a sure thing and I ruined it by simply not making the effort.

Data Scientist @ Zoopla

It started off nicely with a recruiter screen, then a round with the hiring manager, and then a task was sent over. It was essentially a recommender systems task, and fairly easy if you were actually paying attention. Unfortunately, I was giving most of my interviews during my studies, fully piled up with assignments and coursework, so I put in the bare minimum - quite rare for me, because I truly enjoy doing tasks and prefer task-based interviews.

On the day of the interview, I overslept; the poor hiring manager had even called my number to make sure I was okay. I joined 20 minutes late and, in my sheer embarrassment, could not relay my work as well as I should have. Worse, I had made some glaringly incorrect assumptions that, had I paid more attention, would never have made it into the final notebook.

Note: I will attach all the notebooks I produced in a GitHub repo and share it at the end of this piece.

At this point, I was genuinely sad. I wanted to build a system around my interviewing, but I could not yet figure one out. The reader has hindsight, but getting interviews and bombing them is a weird place to be. Most of my friends were unable to even secure interviews - we will get to cracking the resume soon.

I then had an interview lined up with Gelato, and after it I finally started seeing some success - because this is where I actually asked the person who interviewed me what I truly needed to work on. Up until then, I had been finding faults and fixing them, only to discover another one had popped up or had existed all along and been ignored.

It was a very nice interview because the interviewer was extremely calm even while I was panicking. I was given an SQL task that I completed, but only barely, and got shaky afterwards due to the sub-par performance on that bit. Mainly, though, not being able to put my experience into a SMART format weakened my case even further.

After this, I fully worked on myself and my Leetcode, and started going deeper into each company's interview journey. I would still fall off the wagon, either out of luck - someone with more relevant experience taking the role - or by falling apart in behavioural rounds.

As time passed, I eventually interviewed at BrideBook, which went really well; each round progressed smoothly - most of it was about doing a task and being conceptually strong in A/B testing as well (after the mishap at Booking.com, I had prepared heavily on it using resources like Ronny Kohavi's Ultimate Guide to A/B Testing). A few other companies followed, like Smart Pension - I was caught off guard here. It was also task-based, and the task itself was well received, but during the Q&A they asked me about the cons of K-Means, and it had been completely wiped from my memory. The con, in case you are wondering, is that standard K-Means picks random starting points on each run, which means different runs can end up with different clusters. I had brushed up on the elbow method, and this one popped up instead xD. Life. Sigh.
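To illustrate that con concretely, here is a minimal Lloyd's-algorithm sketch (an illustration I wrote for this post, not anything from the interview) where two different initializations converge to two different stable clusterings of the same four points:

```python
def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm; assumes no cluster ever goes empty."""
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: squared_dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [tuple(sum(xs) / len(c) for xs in zip(*c))
                     for c in clusters]
    return centroids, clusters

# Four points at the corners of a wide rectangle.
points = [(0, 0), (0, 1), (10, 0), (10, 1)]

# Init A converges to the sensible left/right split...
_, split_a = kmeans(points, [(0, 0.4), (10, 0.6)])
# ...while init B gets stuck in a far worse top/bottom split.
_, split_b = kmeans(points, [(0, 0), (0, 1)])
```

Both results are fixed points of the algorithm, but their inertias differ wildly - which is why libraries like scikit-learn rerun K-Means from several initializations (or use smarter seeding like k-means++) and keep the best.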

By this point I was much better at it, and I gave a really strong interview with Cover Genius - almost a done deal, from crafting a full system with valid A/B testing methodologies to a task that came out looking great - but unfortunately someone with more experience had already been hired by the time I submitted mine.

Finally, everything went smoothly at the next two companies, and I secured my role.

All the past experience had come together to shape that moment. Sometimes failure is worth it, because it hands you the right tools - ones you might be lacking without even being aware of it.

Post-GPT


This one is a bit weird, but once you have a job, even more opportunities come your way. Even if you do not want them, interviewing is always worth it, because it shows you how the industry is shifting and how up to date your interviewing skills are.

I am not going to make this section very long, but in almost all cases the interviews centered either on conversation-driven rounds, or on tasks where you were given a repo, expected to read through it and show how well you understood it, and then make an improvement live on the call - free to use LLMs to help you code.

In some cases, I have heard that if the task is not live, they ask for your conversation history to see how you converse with the models (I am guessing to see whether you use MCPs, tools, sub-agents, and skills, or are just typing 'fix it').

In some cases - like someone I know who interviewed at Mistral - candidates were asked to implement Flash Attention.

So really, it depends on where you are interviewing. But in the AI/DS/ML space you will generally find Leetcode missing; instead, you are given repos to work with or buggy implementations to fix. The rest stays the same: talk about past experience using the SMART method.

How to prepare for interviews


This one is fairly obvious, but unfortunately it is not common knowledge. So let us break it down into steps:

You get an interview at company A.

→ Look them up on Glassdoor, go to interviews and find what people are saying about their interview process.

→ Look them up on Blind, and see whether someone has said something about their interview process.

→ Look up who is going to interview you; see if they have publications, and understand what their biases might be and what might appeal to them. Some people like classical ML more than NNs, so if you go into the call and talk shit about classical ML, you are not helping yourself. Maybe they innovated something in the field that few people know about - you could show you know of it by bringing it up in conversation if relevant.

→ Study your resume and build 2 or 3 SMART stories around your experiences. So many of my failures came from simply missing this step.

How to prepare your Resume


There is no single answer, and trust me, I have experimented not just on myself but with my friends too. We have tried sending the same resume under 3 different names and found one had more success than the other two. We have tried varying formats and found some brought success and some absolutely nothing.

The only thing that works is to continually change it each week based on the response it gets.

Once you get the hang of what works for you, you can steer which kinds of companies and roles you attract based on what you mention inside it.

I attach here the list of tasks I had done during that phase.

That's it, folks.