<details>
<summary>PointWise-Rerank</summary>

```python
from trustrag.modules.reranker.llm_reranker import LLMRerankerConfig, PointWiseReranker

# Build a pointwise LLM reranker from the configured model
reranker_config = LLMRerankerConfig(
    model_name_or_path="flan-t5-small"
)
llm_reranker = PointWiseReranker(reranker_config)
```
</details>

<details>
<summary>PairWise-Rerank</summary>

We currently implement two Pairwise ranking methods:

`all-pair`: LLMs are prompted to judge which of two documents is more relevant to the given query. Candidate documents are ranked by the number of comparisons they win. This method comes from [Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting](https://arxiv.org/pdf/2306.17563).

`bubble sort`: LLMs are prompted to judge which of two documents is more relevant to the given query. Candidate documents are reordered using a bubble-sort algorithm over such comparisons. This method comes from [Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting](https://arxiv.org/pdf/2306.17563). A conceptual sketch of both strategies follows the usage example below.

```python
from trustrag.modules.reranker.llm_reranker import LLMRerankerConfig, PairWiseReranker

# Build a pairwise LLM reranker from the configured model
reranker_config = LLMRerankerConfig(
    model_name_or_path="qwen2-7B-instruct"
)
llm_reranker = PairWiseReranker(reranker_config)
```
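
For intuition, here is a minimal sketch of how the two strategies can be realized. It is independent of TrustRAG's `PairWiseReranker`: the names `judge`, `allpair_rerank`, and `bubblesort_rerank` are illustrative, and the word-overlap `judge` merely stands in for the actual LLM comparison prompt.

```python
# A rough, self-contained sketch of the two pairwise strategies; NOT the TrustRAG
# implementation. judge() stands in for the single LLM call that decides which of
# two passages is more relevant (here a trivial word-overlap heuristic).
def judge(query: str, a: str, b: str) -> bool:
    return sum(w in a for w in query.split()) >= sum(w in b for w in query.split())

def allpair_rerank(query: str, docs: list[str]) -> list[str]:
    # All-pair: compare every pair once and rank documents by the number of wins.
    wins = {i: 0 for i in range(len(docs))}
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            winner = i if judge(query, docs[i], docs[j]) else j
            wins[winner] += 1
    return [docs[i] for i in sorted(wins, key=wins.get, reverse=True)]

def bubblesort_rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Bubble sort: swap adjacent documents so the most relevant ones bubble toward
    # the front; top_k passes are enough to fix the top_k positions.
    docs = list(docs)
    for _ in range(min(top_k, len(docs))):
        for j in range(len(docs) - 1, 0, -1):
            if judge(query, docs[j], docs[j - 1]):
                docs[j], docs[j - 1] = docs[j - 1], docs[j]
    return docs
```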
</details>

<details>
<summary>ListWise-Rerank</summary>
Work in progress...
</details>

<details>
<summary>TourRank</summary>
Work in progress...
</details>

<details>
<summary>SetWise-Rerank</summary>

We currently implement one Setwise ranking method:

`setwise likelihood`: LLMs are prompted to judge which document is the most relevant to the given query. Candidate documents are reranked based on the likelihood of the LLM generating the label of the most relevant document. It is the base rerank method used in [A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models](https://arxiv.org/pdf/2310.09497). A conceptual sketch of the scoring idea follows this section.

```python
from trustrag.modules.reranker.llm_reranker import LLMRerankerConfig, SetWiseReranker
```
</details>
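
As a rough illustration of the likelihood-based scoring (independent of TrustRAG's `SetWiseReranker`), the sketch below labels candidate passages A/B/C..., asks a Flan-T5 model which passage is most relevant, and scores each passage by the logit of its label at the first decoding step. The model name (`google/flan-t5-small`), prompt wording, and label handling are illustrative assumptions, not the library's actual prompt.

```python
# Conceptual sketch of setwise "likelihood" reranking; NOT the TrustRAG implementation.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

def setwise_likelihood_scores(query: str, docs: list[str]) -> list[float]:
    """Score each candidate by the likelihood of its label being generated as the answer."""
    labels = [chr(ord("A") + i) for i in range(len(docs))]
    passages = "\n".join(f"Passage {label}: {doc}" for label, doc in zip(labels, docs))
    prompt = (
        f'Given the query: "{query}", which of the following passages is the most relevant?\n'
        f"{passages}\nAnswer with the passage label only."
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    # Read the output distribution of the first generated token and pick out the label logits.
    decoder_start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_start).logits[0, -1]
    label_ids = [tokenizer(label, add_special_tokens=False).input_ids[0] for label in labels]
    return [logits[i].item() for i in label_ids]

query = "What is the capital of France?"
docs = [
    "Berlin is the capital of Germany.",
    "Paris is the capital and largest city of France.",
    "The Eiffel Tower was completed in 1889.",
]
scores = setwise_likelihood_scores(query, docs)
reranked = [doc for _, doc in sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)]
print(reranked)
```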