This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: b722uo6gkWzomTteYggfpoeF6mEbIhW0I8q5fdKZQf0

Improving Text Embeddings with Large Language Models: Training

Written by @autoencoder | Published on 2025/3/1

TL;DR
This paper introduces a novel method for generating high-quality text embeddings using synthetic data, achieving state-of-the-art results with fewer than 1k training steps.

Authors:

(1) Liang Wang, Microsoft Corporation, and Correspondence to (wangliang@microsoft.com);

(2) Nan Yang, Microsoft Corporation, and Correspondence to (nanyang@microsoft.com);

(3) Xiaolong Huang, Microsoft Corporation;

(4) Linjun Yang, Microsoft Corporation;

(5) Rangan Majumder, Microsoft Corporation;

(6) Furu Wei, Microsoft Corporation and Correspondence to (fuwei@microsoft.com).

Table of Links

Abstract and 1 Introduction

2 Related Work

3 Method

3.1 Synthetic Data Generation

3.2 Training

4 Experiments

4.1 Statistics of the Synthetic Data

4.2 Model Fine-tuning and Evaluation

4.3 Main Results

4.4 Multilingual Retrieval

5 Analysis

5.1 Is Contrastive Pre-training Necessary?

5.2 Extending to Long Text Embeddings and 5.3 Analysis of Training Hyperparameters

6 Conclusion and References

A Implementation Details

B Test Set Contamination Analysis

C Prompts for Synthetic Data Generation

D Instructions for Training and Evaluation

3.2 Training
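As the paper describes, fine-tuning pairs each query with a relevant document, prepends a one-sentence task instruction to the query side only, takes the last-layer [EOS] hidden state of a pretrained LLM as the embedding, and optimizes the standard InfoNCE contrastive loss over in-batch and mined hard negatives, using temperature-scaled cosine similarity with τ fixed to 0.02. Below is a minimal sketch of that objective, assuming PyTorch, in-batch negatives only, and random tensors standing in for model outputs; `info_nce_loss` and its arguments are illustrative names, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  doc_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """InfoNCE over in-batch negatives: the i-th query's positive is the
    i-th document, and every other document in the batch is a negative."""
    # Cosine similarity = dot product of L2-normalized vectors.
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = (q @ d.T) / temperature                     # (batch, batch)
    labels = torch.arange(q.size(0), device=q.device)    # diagonal = positives
    return F.cross_entropy(logits, labels)

# Toy usage: random embeddings stand in for the LLM's [EOS] hidden states.
q = torch.randn(8, 4096)
d = torch.randn(8, 4096)
print(info_nce_loss(q, d).item())
```

The full setup also adds mined hard negatives to the softmax denominator; the low temperature sharpens the similarity distribution so the positive pair must outscore every negative by a clear margin.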

This paper is available on arXiv under the CC0 1.0 DEED license.

[story continues]


Written by @autoencoder
Research & publications on Auto Encoders, revolutionizing data compression and feature learning techniques.

Topics and tags
multilingual-ai|text-embeddings|synthetic-data-generation|natural-language-processing|contrastive-pre-training|language-models|beir-benchmark|ai-for-information-retrieval