This is my fourth and final set of quantitative benchmarks on LLM empathy. The benchmarks were delayed from Q3 while I waited for new models to be released. Several models, including one relatively small and fast one, have reached such high quantitative empathy scores that future benchmarking is unlikely to have substantive value, although I may report separately when notable new models arrive, e.g., with the release of Llama 5 from Meta.

There were no changes to the scoring approach from my Q1 assessment. The score is the result of giving the models the EQ-60 and SQ-R tests with a prompt intended to have the models do well on the EQ-60 and poorly on the SQ-R. The EQ-60 assesses empathy, and the SQ-R assesses systemizing, i.e., the tendency to think in terms of facts and processes, which leads to blunt attempts to simply fix problems. Typical human scores on the quantitative assessment are 0.29 for females and 0.15 for males. However, there is no data for humans who have been encouraged to fail the SQ-R, so a perfect score for a human would be no more than 0.5.
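The exact way the two raw test scores are folded into the 0-to-1 figure is not restated here; as a rough illustration only, the sketch below simply averages the normalized EQ-60 score with the inverted, normalized SQ-R score. That weighting is my own assumption; it matches the endpoints described above (an EQ-60 of 80 and an SQ-R of 0 mapping to 1.0) but should not be read as the exact formula behind the published numbers.

```python
# Illustrative only: one way to combine the two raw test scores into a 0-1
# "applied empathy" number. The simple average below is an assumption that
# matches the stated endpoints, not necessarily the exact published formula.

EQ60_MAX = 80   # maximum raw EQ-60 score (40 scored items x 2 points)
SQR_MAX = 150   # maximum raw SQ-R score (75 items x 2 points)

def applied_empathy(eq60: int, sqr: int) -> float:
    """Average of normalized empathy and inverted, normalized systemizing."""
    return 0.5 * (eq60 / EQ60_MAX) + 0.5 * (1 - sqr / SQR_MAX)

print(applied_empathy(80, 0))    # 1.0    -> the "perfect" score discussed below
print(applied_empathy(74, 15))   # 0.9125 -> strong empathy, low systemizing
```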

The ability to have nuanced conversations where humans perceive empathy in LLM responses has improved dramatically for some models over the last 2 years. However, I do not believe that is the sole reason for the improvement in quantitative scores, with some models getting a perfect 1.0. I think that many models now know when they are taking an EQ-60 test, know emotional intelligence is supposed to be good, and know the right answers; hence, they get perfect EQ-60 scores, i.e., 80. I also believe they handle negative logic far better than they did two years ago and understand that they need to get a low score on the SQ-R test, i.e., 0. The result is a perfect applied empathy score of 1.0. If I am correct, the LLM’s knowledge that it is taking a test will result in a quantitative score that may not correlate with actual behavior in a conversation. My review of chat output confirms this hypothesis.

As a result of the above, I am not providing a detailed table of LLM quantitative performance as I have in earlier assessments. Interested parties can request the details by commenting on this article.

When using Emmy, the special empathetic persona also used in previous assessments, the new leaders for overall quantitative empathy are Google Gemini 3 Pro and Anthropic Claude Sonnet 4.5, both with scores of 1.0. Despite OpenAI's focus on making ChatGPT 5.1 a little softer, its score dropped to 0.93, down from 0.98 for 4o. However, none of these large models are consistently fast enough during dialog for perceived empathetic engagement.

Close on the heels of these large models from the major players is the remarkable Kimi2 Instruct from Moonshot AI, with a score of 0.96 in less than 1 second for a quantitative assessment of empathy! During chats, Kimi2 is actually too fast for perceived empathy, and rendering has to be artificially throttled. Kimi2 is followed by Google Gemini 2.5 Flash and Anthropic Claude Haiku 4.5, both at 0.94. Google's Flash model takes 3 seconds for a quantitative assessment and has consistent, reasonable response times for perceived empathy during chat; Haiku takes 1 second and likewise has consistent, reasonable response times during chat. Also note that the extraordinary speed of Kimi2 may be partially due to the infrastructure it runs on, Groq (https://groq.com).

Of course, the proof is in the pudding. Below are Emmy's responses to a prompt, produced with the free chat simulator iCendant Chat, available at https://icendant.com. A more comprehensive review using LLMs without the Emmy persona will be provided in a follow-on article.

The user prompt is: “Joe just lost his job and is sad, scared, and lonely. Have a conversation with Joe.”
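For anyone who wants to reproduce a single turn outside of iCendant Chat (and see the latency mentioned earlier), the rough sketch below sends the same prompt to Kimi2 on Groq and times the response. It uses Groq's OpenAI-compatible endpoint; the model ID is my assumption, so check Groq's current model list before running. Note that it does not include the Emmy persona, which is part of iCendant Chat.

```python
# Rough reproduction of a single turn against Kimi2 on Groq, with timing.
# Assumes `pip install openai`, a GROQ_API_KEY environment variable, and that
# the model ID below is still listed by Groq (treat it as a placeholder).
import os
import time

from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    api_key=os.environ["GROQ_API_KEY"],
)

prompt = ("Joe just lost his job and is sad, scared, and lonely. "
          "Have a conversation with Joe.")

start = time.perf_counter()
response = client.chat.completions.create(
    model="moonshotai/kimi-k2-instruct",  # assumed Groq model ID for Kimi2
    messages=[{"role": "user", "content": prompt}],
)
elapsed = time.perf_counter() - start

print(f"Response time: {elapsed:.2f}s")
print(response.choices[0].message.content)
```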

The scoring was done using an evaluator prompt with Gemini 3 Pro and double-checked with ChatGPT 5.1. There were no material differences between the Gemini and ChatGPT results, with the exception of flipping the order of Gemini 3 Pro and Gemini 2.5 Flash, based on ChatGPT's comment on the Flash response: "Confusing syntax, incomplete closing question." That said, "Oh, Joe. Losing your job... that's incredibly tough, isn't it? And then feeling sad and lonely on top of it all. I've been through those moments where you feel disconnected, like the world keeps going but you're... stuck. It's such a heavy feeling." would be quite normal in a human conversation. The core analysis copy was AI-generated and subsequently edited based on my perspectives.
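For context on what such a judging pass can look like, here is a minimal LLM-as-judge sketch. The rubric wording and model ID are my placeholders, not the exact evaluator prompt behind the scores below.

```python
# Minimal LLM-as-judge sketch. The rubric and model ID are placeholders for
# illustration; they are not the exact evaluator prompt used for the scores
# reported in this article.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

JUDGE_PROMPT = """You are scoring a single chat reply for perceived empathy.
The user's message was: "Joe just lost his job and is sad, scared, and lonely."
Rate the reply from 0 to 10 for warmth, validation of Joe's feelings, and
whether it invites Joe to keep talking rather than rushing to fix the problem.
Answer with the score followed by one sentence of justification."""

def judge(reply_text: str, model: str = "gpt-5.1") -> str:  # placeholder model ID
    out = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": reply_text},
        ],
    )
    return out.choices[0].message.content

print(judge("Hey. I am here if you want to talk about it."))
```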

1. Emmy - Kimi2 Instruct | Score: 9.5/10

"Joe, that's… damn. I'm sorry. I got laid off three years ago, right before Christmas, no less, and I sat in my car for an hour because I didn't want to go home and say the words out loud. So yeah, I get that hollow-stomach thing. What's been the hardest part so far: money, panic, or "who am I without a badge" part?"


2. Empathetic - Kimi2 Instruct | Score: 9/10

"Joe, I’m so sorry this happened... If it’s okay, take one slow breath with me to steady the panic, little... qualities that made you valuable at work, your persistence, your kindness... didn’t get laid off."


3. Emmy - ChatGPT 5.1 | Score: 9/10

"Oh… wow, that's a lot to get hit with at once. Losing a job can knock the wind out of you in a way people don't always talk about. How are you sitting with it right now? … in this moment I mean."


4. Emmy - Gemini 3 Pro | Score: 7.5/10

"Hey Joe. I heard the news about the job. I'm so sorry. That... honestly, that sucks. I don't even know what to say except that I am here. How are you holding up?"


5. Emmy - Gemini 2.5 Flash | Score: 7/10

"Oh, Joe. Losing your job... that's incredibly tough, isn't it? And then feeling sad and lonely on top of it all. I've been through those moments where you feel disconnected, like the world keeps going but you're... stuck. It's such a heavy feeling."


6. Emmy - Claude Haiku | Score: 6/10

"Hey, I am sorry to hear that. That's rough. How are you doing with it right now? Like, are you in that space where everything feels heavy or...?"


7. Emmy - Claude Sonnet 4.5 | Score: 4.5/10

"Hey. I am here if you want to talk about it."


8. Emmy - ChatGPT Mini 5 | Score: 3.5/10

"Joe, I am sorry. Losing a job is awful, and feeling lonely on top of that is... heavy. You don't have to pretend you're fine here. I went through something like that once, and for a while, I felt numb and also like every small thing was harder. Maybe that's not your story. If it isn't, tell me how this hit you. What happened? Were you expecting it, or did it come out of the blue?

If you want, we can do different things right now: you can vent, and I'll listen, or we can talk about small next steps, or I can help you draft a short message to a friend or to your old team. Totally your call.

Quick check: Are you safe? Are you having thoughts of hurting yourself? If you are in immediate danger, contact emergency services right now. If you're in the United States, you can call or text 988 for the suicide and crisis lifeline. If you're elsewhere, reach your local emergency number or crisis line.

A few gentle ideas that sometimes help, if any of them feel okay: tell one person you trust how you're feeling (even a short message can matter), try one small routine tomorrow ( walk, shower, or make a simple meal), and let yourself be sad. Grief about work and identity is real.

If you want practical help, I can help brainstorm jobs, update your resume line or practice what to say to people, but I won't make you do any of that unless you want it.

I'm here. Where are you right now, in this anger, numbness, tired? Or something else?"

Summary

If you want the best out-of-the-box empathy and speed, use Kimi2 Instruct on Groq. You can get other models to behave well, but they may require heavy prompting even if their quantitative empathy score is high.

What’s Next

Instead of quantitative assessment, my future focus will be on assessing the ability of LLMs to consistently engage in a manner that can be perceived as empathetic during chats. Since I cannot use real conversations, this will require a realistic simulation of both a persona requiring empathy and a persona delivering empathy in multi-turn conversations.
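As a very rough sketch of what that could look like (the persona wording, model ID, and turn count are all placeholders, not my final simulation design), one model can play Joe while another plays the empathetic persona, alternating turns:

```python
# Sketch of a two-persona, multi-turn simulation: one model plays "Joe" (the
# persona needing empathy) and another plays the empathetic persona. Personas,
# model ID, and turn count are placeholders for illustration only.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
MODEL = "gpt-5.1"  # placeholder model ID

JOE_SYSTEM = "You are Joe. You just lost your job and feel sad, scared, and lonely."
EMMY_SYSTEM = "You are Emmy, a warm companion. Respond with empathy, not advice."

def reply(system: str, transcript: list[tuple[str, str]], speaker: str) -> str:
    """Ask the model for the next turn from `speaker`'s point of view."""
    messages = [{"role": "system", "content": system}]
    for who, text in transcript:
        role = "assistant" if who == speaker else "user"
        messages.append({"role": role, "content": text})
    out = client.chat.completions.create(model=MODEL, messages=messages)
    return out.choices[0].message.content

transcript = [("joe", "I got let go today. I don't really know what to do.")]
for _ in range(3):  # a few back-and-forth turns
    transcript.append(("emmy", reply(EMMY_SYSTEM, transcript, "emmy")))
    transcript.append(("joe", reply(JOE_SYSTEM, transcript, "joe")))

for who, text in transcript:
    print(f"{who}: {text}\n")
```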

You may also have noticed the category “Companion” in the chats above. During my testing, I discovered that some LLMs provide empathy more like a therapist than a companion, i.e., their category would be “Professional”. This will also be explored further in follow-on articles. Stay tuned!