Commit d96bcbd

Authored Sep 26, 2024

add Self-taught-llama3.1-70B-dpo as an evaluator (#412)

1 parent 10051fd · commit d96bcbd

File tree

3 files changed: +35 -0 lines changed

 
@@ -0,0 +1,18 @@
+Self-taught-llama3.1-70B-dpo:
+  prompt_template: "Self-taught-llama3.1-70B-dpo/self_taught.txt"
+  fn_completions: "vllm_local_completions"
+  completions_kwargs:
+    model_name: "Self-taught-llama3.1-70B-dpo"
+    max_new_tokens: 512
+    temperature: 0
+    model_kwargs:
+      dtype: "half"
+      tensor_parallel_size: 8
+      enable_chunked_prefill: False
+      max_model_len: 5120
+      distributed_executor_backend: "ray"
+  fn_completion_parser: "regex_parser"
+  completion_parser_kwargs:
+    outputs_to_match:
+      1: "[[A]]"
+      2: "[[B]]"
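The `outputs_to_match` mapping drives `regex_parser`: the judge's completion is scanned for the verdict markers, and the matching key (1 for Assistant A, 2 for Assistant B) becomes the preference. The sketch below is a simplified stand-in for that step, not alpaca_eval's actual implementation; the tie-breaking rule (earliest match wins) is an assumption.

```python
import re

# Simplified stand-in for a parser like regex_parser (assumption: the key of
# the matched pattern becomes the preference; the earliest match wins).
OUTPUTS_TO_MATCH = {1: r"\[\[A\]\]", 2: r"\[\[B\]\]"}  # mirrors outputs_to_match

def parse_verdict(completion: str):
    """Return 1 or 2 depending on which verdict marker appears first."""
    hits = []
    for preference, pattern in OUTPUTS_TO_MATCH.items():
        match = re.search(pattern, completion)
        if match:
            hits.append((match.start(), preference))
    if not hits:
        return None  # unparseable completion: no verdict marker found
    return min(hits)[1]

print(parse_verdict("Assistant A answers correctly and concisely. [[A]]"))  # -> 1
```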
@@ -0,0 +1,16 @@
+<|start_header_id|>system<|end_header_id|>
+
+Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better. <|eot_id|><|start_header_id|>user<|end_header_id|><|eot_id|><|start_header_id|>user<|end_header_id|>
+
+[User Question]
+{instruction}
+
+[The Start of Assistant A's Answer]
+{output_1}
+[The End of Assistant A's Answer]
+
+[The Start of Assistant B's Answer]
+{output_2}
+[The End of Assistant B's Answer]<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+
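For illustration, here is how the placeholders above would be filled before the prompt is sent to the judge. This is a hypothetical sketch, not alpaca_eval's code; the question and answers are made up, and the template path mirrors the `prompt_template` value in the config.

```python
# Hypothetical sketch: fill the template's placeholders with one comparison.
with open("Self-taught-llama3.1-70B-dpo/self_taught.txt") as f:
    template = f.read()

prompt = template.format(
    instruction="What is the capital of France?",  # made-up user question
    output_1="Paris.",                             # shown as Assistant A
    output_2="I believe it is Lyon.",              # shown as Assistant B
)
print(prompt)  # filled prompt, ready to send to the vLLM-served judge
```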
src/alpaca_eval/leaderboards/evaluators/evaluators_leaderboard.csv

+1
@@ -7,6 +7,7 @@ alpaca_eval_cot_gpt4_turbo_fn,68.63874533448178,6.311349574632637,1988.601262671
 weighted_alpaca_eval_cot_gpt4_turbo,68.45771313115921,6.447465224111284,1869.2926495435856,0.9333333333333332,0.7743167748273401,,,0.6853932584269663,0.6576576576576577,0.5283575514995362,647,verified
 aviary_gpt4,68.3641975308642,12.781481481481482,1821.0640311000004,0.9205101496312952,0.9053426857899228,,,0.701123595505618,0.6486486486486487,0.5555555555555556,648,verified
 alpaca_eval_gpt4_turbo_fn,68.09413580246913,5.533981481481482,864.3023563021605,0.9333333333333332,0.817290435500228,30.246913580246915,15.625,0.651685393258427,0.6036036036036037,0.5381944444444444,2592,minimal
+Self-taught-llama3.1-70B-dpo,68.03937590094094,,206.82500262105583,0.7999999999999999,0.7516559995326958,30.34055727554179,13.015337123801086,0.6567505720823799,0.6146788990825688,0.5172549019607844,2550,minimal
 gpt4_turbo_cot_logprob,67.86974910317902,5.397145061728395,1568.9484159171295,0.6333333333333333,0.6310442120964042,,,0.5932584269662922,0.5855855855855856,0.5285319490509259,648,verified
 gpt4_turbo_cot_clf,67.59689922480621,5.3972248062015495,1528.4046718706977,0.6666666666666667,0.6326057742256878,,,0.5936794582392777,0.5855855855855856,0.5255813953488373,645,verified
 claude_ranking,67.5925925925926,4.954578395061729,218.4230414438272,0.9,0.90848221004591,,,0.7303370786516854,0.6576576576576577,0.4552469135802468,648,verified
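To sanity-check the addition, one could load the leaderboard and pull out the new row. The snippet below is an illustrative sketch, not part of this commit; it assumes the evaluator name sits in the first CSV column, as the diff rows suggest.

```python
import pandas as pd

# Illustrative check (assumption: the evaluator name is the first column,
# since each diff row above begins with the evaluator's name).
df = pd.read_csv("src/alpaca_eval/leaderboards/evaluators/evaluators_leaderboard.csv")
new_row = df[df.iloc[:, 0] == "Self-taught-llama3.1-70B-dpo"]
print(new_row.T)  # transpose so each leaderboard metric prints on its own line
```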
