Commit
Merge pull request #17 from raphaelsty/raphaelsty-patch-3
Update README.md
raphaelsty authored Feb 27, 2024
2 parents 64ad0d3 + ae65ded commit 783c06d
Showing 1 changed file with 1 addition and 1 deletion.
README.md
@@ -163,7 +163,7 @@ Neural-Cherche also provides a `SparseEmbed`, a `SPLADE`, a `TFIDF` retriever an

### Pre-trained Models

- We provide to pre-trained checkpoints specifically designed for neural-cherche: [raphaelsty/neural-cherche-sparse-embed](https://huggingface.co/raphaelsty/neural-cherche-sparse-embed) and [raphaelsty/neural-cherche-colbert](https://huggingface.co/raphaelsty/neural-cherche-colbert). Those checkpoints are fine-tuned on a subset of the MS-MARCO dataset and would benefit from being fine-tuned on your specific dataset. You can fine-tune ColBERT from any Sentence Transformer pre-trained checkpoint in order to fit your specific language. You shoukd use a MLM based-checkpoint to fine-tune SparseEmbed.
+ We provide pre-trained checkpoints specifically designed for neural-cherche: [raphaelsty/neural-cherche-sparse-embed](https://huggingface.co/raphaelsty/neural-cherche-sparse-embed) and [raphaelsty/neural-cherche-colbert](https://huggingface.co/raphaelsty/neural-cherche-colbert). Those checkpoints are fine-tuned on a subset of the MS-MARCO dataset and would benefit from being fine-tuned on your specific dataset. You can fine-tune ColBERT from any Sentence Transformer pre-trained checkpoint in order to fit your specific language. You should use a MLM based-checkpoint to fine-tune SparseEmbed.

<table class="tg">
<thead>
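The README paragraph touched by this commit points readers to the two pre-trained checkpoints. As a minimal sketch of how they would be loaded, assuming the `neural_cherche.models` API (`ColBERT` and `SparseEmbed` classes taking a `model_name_or_path` argument) as described in the library's own documentation:

```python
# Minimal sketch: loading the two checkpoints referenced in the README.
# Assumes neural_cherche exposes models.ColBERT and models.SparseEmbed with a
# `model_name_or_path` argument; adjust to the version of the library you install.
import torch
from neural_cherche import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Late-interaction model fine-tuned on a subset of MS-MARCO.
colbert = models.ColBERT(
    model_name_or_path="raphaelsty/neural-cherche-colbert",
    device=device,
)

# Sparse model; per the README, further fine-tuning should start from an MLM-based checkpoint.
sparse_embed = models.SparseEmbed(
    model_name_or_path="raphaelsty/neural-cherche-sparse-embed",
    device=device,
)
```

Both models can then be fine-tuned on a domain-specific dataset, as the README recommends.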
