
Releases: raphaelsty/neural-cherche

1.4.3

06 Jun 21:12

Update of the ColBERT Ranker:

  • Avoid raising an error when duplicate documents appear under two distinct ids.

1.4.2

02 Jun 20:06

This release reduces BM25 memory usage by casting sparse matrix values to float32, and avoids copying the sparse matrix when computing BM25 scores.
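
For illustration, here is a minimal sketch of the idea (not the library's actual code): the stored sparse-matrix values are cast to float32, and query scoring reuses the matrix in place instead of working on a converted copy.

```python
# Minimal sketch of the float32 idea, not neural-cherche internals.
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical document-term matrix of BM25 weights (float64 by default).
bm25_weights = csr_matrix(np.random.rand(100, 500))

# Cast the stored values to float32: halves the memory used by `.data`
# without touching the sparsity structure.
bm25_weights.data = bm25_weights.data.astype(np.float32)

# Keep query vectors in float32 too, so the dot product below does not
# trigger an implicit float64 copy of the document matrix.
queries = csr_matrix(np.random.rand(4, 500).astype(np.float32))

scores = queries @ bm25_weights.T  # sparse-sparse product, no dense copy
```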

1.4.0

01 Jun 16:50

Update dependencies and make the BM25 retriever normalization compatible with previous versions of scikit-learn.

1.3.1

01 Jun 14:13

Update the LeNLP dependency version so it runs without errors on older Ubuntu versions.

1.3.0

30 May 21:22

Version 1.3.0 introduces:

  • A new state-of-the-art BM25 retriever powered by the LeNLP vectorizer, which is written in Rust (a usage sketch follows this list).

  • An updated TfIdf retriever that now uses the LeNLP TfidfVectorizer by default, also written in Rust; not state of the art, but very fast.

  • A breaking change to the evaluation code: the evaluation module is much simpler and now handles duplicate queries.

  • Update of the documentation.
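
As a rough usage sketch, the new BM25 retriever could be used as below. The constructor arguments and method names follow the pattern of the existing neural-cherche retrievers, but they are assumptions here and should be checked against the documentation.

```python
# Hedged sketch; argument names and signatures are assumptions, not guaranteed API.
from neural_cherche import retrieve

documents = [
    {"id": 0, "document": "Paris is the capital of France."},
    {"id": 1, "document": "Bordeaux is known for its wine."},
]

retriever = retrieve.BM25(key="id", on=["document"])

# Index the documents, then score queries against them.
documents_embeddings = retriever.encode_documents(documents=documents)
retriever = retriever.add(documents_embeddings=documents_embeddings)

queries_embeddings = retriever.encode_queries(queries=["french wine"])
scores = retriever(queries_embeddings=queries_embeddings, k=10)
```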

1.1.0

27 Feb 16:56

Neural-Cherche 1.1.0 🥳

  • ColBERT Retriever is now available (complementary to the ColBERT ranker).
  • Improved default settings for every model.
  • Attention masks added to the models.
  • ColBERT and SparseEmbed pre-trained checkpoints on HuggingFace: raphaelsty/neural-cherche-colbert and raphaelsty/neural-cherche-sparse-embed (a loading sketch follows below).
  • Improved ranking loss.
  • Addition of benchmarks.

Overall, this version makes it easier to fine-tune ColBERT, SparseEmbed and Splade and to achieve excellent results with the default parameters.
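
As a hedged sketch, the published checkpoints can presumably be loaded as follows; the `model_name_or_path` and `device` argument names are assumptions here, so check the documentation for the exact constructor signatures.

```python
# Hedged sketch; constructor argument names are assumptions, not guaranteed API.
from neural_cherche import models

# Pre-trained checkpoints published with this release.
colbert = models.ColBERT(
    model_name_or_path="raphaelsty/neural-cherche-colbert",
    device="cpu",  # or "cuda"
)

sparse_embed = models.SparseEmbed(
    model_name_or_path="raphaelsty/neural-cherche-sparse-embed",
    device="cpu",
)
```

These checkpoints are meant to be fine-tuned further or used directly with the corresponding retrievers and rankers.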

1.0.0

16 Nov 22:56

Introducing Neural-Cherche 1.0.0: The Evolution of Sparsembed

I'm thrilled to announce the launch of Neural-Cherche 1.0.0, a significant upgrade from Sparsembed, packed with innovative features and enhancements:

  • ColBERT Fine-Tuning & Ranking: Enhance your search capabilities with fine-tuned ColBERT for more precise and efficient ranking.

  • Revamped Retrievers with Enhanced API: Experience our newly optimized retrievers. They now come with an improved API that enables users to comprehensively capture and analyze all model outputs.

  • Optimized Training with Refined Hyperparameters: Benefit from our enhanced training procedure, featuring good default hyperparameters for better performance.

  • Efficiency Boost with Splade and SparseEmbed: These components now rely on more memory-efficient sparse matrices.

  • Intelligent Embedding Management: Once computed, embeddings are transferred to the CPU and stay there until needed again. This enables large-scale offline neural search without overwhelming GPU resources (see the sketch at the end of these notes).

  • Comprehensive Documentation: Get up to speed quickly with the documentation.

  • Improved Evaluation API

  • A Fresh, New Look with a cool Logo

Embrace the future of neural search with Neural-Cherche 1.0.0 – a giant leap forward from Sparsembed!
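
The embedding-management point above boils down to keeping computed embeddings on the CPU and moving only what is being scored back to the GPU. Here is a minimal sketch of that pattern, with a placeholder encoder standing in for the real models.

```python
# Minimal sketch of CPU offloading for embeddings; the encoder is a placeholder.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def encode(texts: list[str]) -> torch.Tensor:
    # Stand-in for a real model forward pass on `device`.
    return torch.randn(len(texts), 128, device=device)

# Offload document embeddings to the CPU as soon as they are computed,
# so the GPU stays free while the whole corpus is being encoded.
document_embeddings = [encode(batch).cpu() for batch in (["a", "b"], ["c"])]

# At query time, move only what is needed back to the device for scoring.
query_embedding = encode(["query"])
scores = query_embedding @ torch.cat(document_embeddings).to(device).T
```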

0.1.1

26 Aug 16:19

Avoid intersection errors with Sparsembed

0.1.0

23 Aug 17:11

Update sparsembed

0.0.9

22 Aug 22:32

Update sparsembed