Universal Sentence Encoder
Topics: AI (Deep Learning), LLMO / GEO, Semantic Search
This Google paper presents a new unsupervised approach to learning sentence embeddings for semantic similarity by training models to predict conversational input–response pairs. It compares two encoder architectures—a Deep Averaging Network and a Transformer—and shows that adding a supervised natural language inference objective in a multitask setup further improves the quality of the embeddings. The resulting representations achieve state-of-the-art performance among neural methods on the STS Benchmark and perform competitively on a community question-answering similarity task.
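To make the Deep Averaging Network idea concrete, here is a minimal sketch: a DAN embeds a sentence by averaging its token embeddings (the real model then passes that average through trained feed-forward layers), and semantic similarity between two sentences is scored with cosine similarity. The vocabulary, vector size, and random embeddings below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Toy word embeddings, randomly initialized for illustration only.
# The paper learns these jointly with the encoder on conversational data.
rng = np.random.default_rng(0)
words = "the cat sat on mat a dog lay rug".split()
vocab = {w: rng.normal(size=8) for w in words}

def dan_embed(sentence: str) -> np.ndarray:
    """DAN-style sentence embedding: average the token vectors.
    (The full DAN adds feed-forward layers on top of this average.)"""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, the scoring function used for STS-style tasks."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = dan_embed("the cat sat on the mat")
s2 = dan_embed("a dog lay on a rug")
print(cosine(s1, s2))
```

With trained embeddings, paraphrases land close together under this score; the Transformer encoder the paper compares against replaces the averaging step with self-attention over the tokens.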