Abstract
RexBERT, a family of BERT-style encoders designed for e-commerce semantics, achieves superior performance on domain-specific tasks through specialized pretraining and high-quality in-domain data.
Encoder-only transformers remain indispensable in retrieval, classification, and ranking systems where latency, stability, and cost are paramount. Most general-purpose encoders, however, are trained on generic corpora with limited coverage of specialized domains. We introduce RexBERT, a family of BERT-style encoders designed specifically for e-commerce semantics. We make three contributions. First, we release Ecom-niverse, a 350-billion-token corpus curated from diverse retail and shopping sources. We describe a modular pipeline that isolates and extracts e-commerce content from FineFineWeb and other open web resources, and we characterize the resulting domain distribution. Second, we present a reproducible pretraining recipe that builds on ModernBERT's architectural advances. The recipe consists of three phases: general pretraining, context extension, and annealed domain specialization. Third, we train RexBERT models ranging from 17M to 400M parameters and evaluate them on token classification, semantic similarity, and general natural language understanding tasks using e-commerce datasets. Despite having 2-3x fewer parameters, RexBERT outperforms larger general-purpose encoders and matches or surpasses modern long-context models on domain-specific benchmarks. Our results demonstrate that high-quality in-domain data combined with a principled training approach provides a stronger foundation for e-commerce applications than indiscriminate scaling alone.
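For readers who want to try a released checkpoint, the minimal sketch below runs masked-token prediction through the standard Hugging Face transformers API. The repository id `your-org/rexbert-base` and the example sentence are placeholders for illustration, not names taken from the paper; substitute the actual model id from the collection linked above.

```python
# Minimal sketch: masked-token prediction with a BERT-style encoder via transformers.
# "your-org/rexbert-base" is a hypothetical placeholder, not a confirmed repo id.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "your-org/rexbert-base"  # replace with the real RexBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

# E-commerce flavored example sentence with one masked token.
text = f"These running shoes have a breathable {tokenizer.mask_token} upper."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and print the top-5 predicted tokens.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```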
Community
RexBERT Paper is finally out!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- TabiBERT: A Large-Scale ModernBERT Foundation Model and A Unified Benchmark for Turkish (2025)
- Mecellem Models: Turkish Models Trained from Scratch and Continually Pre-trained for the Legal Domain (2026)
- Compass-Embedding v4: Robust Contrastive Learning for Multilingual E-commerce Embeddings (2025)
- Pantagruel: Unified Self-Supervised Encoders for French Text and Speech (2026)
- Domain-Adaptive and Scalable Dense Retrieval for Content-Based Recommendation (2026)
- Luxical: High-Speed Lexical-Dense Text Embeddings (2025)
- AdNanny: One Reasoning LLM for All Offline Ads Recommendation Tasks (2026)