Commit 05bd844

jerryzh168 authored and malfet committed
Only support newest versions of lm-eval (pytorch#556)
Summary: remove support for lm-eval 0.3 to reduce the options we have

Test Plan: CI

Reviewers:

Subscribers:

Tasks:

Tags:
1 parent ead68a4 commit 05bd844

File tree

2 files changed

+5
-19
lines changed


eval.py

Lines changed: 4 additions & 18 deletions

@@ -29,25 +29,11 @@
 torch._inductor.config.triton.cudagraphs = True
 torch._dynamo.config.cache_size_limit = 100000

-try:
-    import lm_eval
+import lm_eval

-    lm_eval_available = True
-except:
-    lm_eval_available = False
-
-
-if lm_eval_available:
-    try:  # lm_eval version 0.4
-        from lm_eval.evaluator import evaluate
-        from lm_eval.models.huggingface import HFLM as eval_wrapper
-        from lm_eval.tasks import get_task_dict
-    except:  # lm_eval version 0.3
-        from lm_eval import base, evaluator, tasks
-
-        eval_wrapper = base.BaseLM
-        get_task_dict = tasks.get_task_dict
-        evaluate = evaluator.evaluate
+from lm_eval.evaluator import evaluate
+from lm_eval.models.huggingface import HFLM as eval_wrapper
+from lm_eval.tasks import get_task_dict


 def setup_cache_padded_seq_input_pos_max_seq_length_for_prefill(
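The guarded-import fallback deleted above can be sketched in isolation. The helper below is hypothetical (not part of the repo) and probes a stand-in module name so it runs without lm-eval installed; it shows why the old path silently disabled evaluation instead of failing fast:

```python
import importlib


def try_import(module_name):
    """Old-style guarded import: report availability instead of raising.

    Hypothetical helper mirroring the removed try/except pattern; with
    it, a missing package silently turns a feature off rather than
    surfacing an ImportError at startup.
    """
    try:
        return True, importlib.import_module(module_name)
    except ImportError:
        return False, None


# With a module that does not exist, the old pattern swallows the error:
available, mod = try_import("definitely_not_installed_pkg")
print(available)  # False: every caller must remember to check this flag
```

After this commit, the plain `import lm_eval` and the 0.4 submodule imports raise immediately when the package is absent, which is the fail-fast behavior the change opts into.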

requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ snakeviz
 sentencepiece
 numpy
 gguf
-lm-eval
+lm-eval==0.4
 blobfile

 # Build tools
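Pinning `lm-eval==0.4` tells pip to accept only that release rather than whatever version happens to be newest. As a simplified illustration (hypothetical helpers, not pip's real PEP 440 specifier logic, which additionally treats 0.4 and 0.4.0 as equal), an exact pin behaves like this:

```python
def parse_version(v):
    """Split a dotted version string into an integer tuple (simplified)."""
    return tuple(int(part) for part in v.split("."))


def satisfies_exact_pin(installed, pinned):
    """Hypothetical check: does `installed` meet an `==pinned` requirement?

    This sketch compares parsed tuples directly; real specifier matching
    is more permissive about trailing zeros and pre-release tags.
    """
    return parse_version(installed) == parse_version(pinned)


print(satisfies_exact_pin("0.4", "0.4"))  # True: the pinned release
print(satisfies_exact_pin("0.3", "0.4"))  # False: lm-eval 0.3 is now rejected
```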

0 commit comments
