.. list-table:: Documentation files and their most recent commits
   :header-rows: 1

   * - File
     - Last commit
     - Last update
   * - ``accelerator_intro.md``
     - [Fix] Fix openai api tiktoken bug for api server (#1433)
     - 2024-08-20 22:02:14 +08:00
   * - ``circular_eval.md``
     - [Feature] Add circular eval (#610)
     - 2023-11-23 16:45:47 +08:00
   * - ``code_eval_service.md``
     - update links and checkers (#890)
     - 2024-03-13 11:01:35 +08:00
   * - ``code_eval.md``
     - [Feature] Support ModelScope datasets (#1289)
     - 2024-07-29 13:48:32 +08:00
   * - ``contamination_eval.md``
     - [Docs] Update contamination docs (#775)
     - 2024-01-08 16:37:28 +08:00
   * - ``custom_dataset.md``
     - [Sync] update configs (#734)
     - 2023-12-25 21:59:16 +08:00
   * - ``evaluation_lightllm.md``
     - Support prompt template for LightllmApi. Update LightllmApi token bucket. (#945)
     - 2024-03-06 15:33:53 +08:00
   * - ``evaluation_lmdeploy.md``
     - [Feature] Integrate lmdeploy pipeline api (#1198)
     - 2024-10-09 22:58:06 +08:00
   * - ``llm_judge.md``
     - [Feature] Add general math, llm judge evaluator (#1892)
     - 2025-02-26 15:08:50 +08:00
   * - ``longeval.md``
     - [Docs] Readme in longeval (#389)
     - 2023-09-18 17:06:00 +08:00
   * - ``math_verify.md``
     - [Feature] Add general math, llm judge evaluator (#1892)
     - 2025-02-26 15:08:50 +08:00
   * - ``needleinahaystack_eval.md``
     - [Feature] Make NeedleBench available on HF (#1364)
     - 2024-07-25 19:01:56 +08:00
   * - ``new_dataset.md``
     - [Feature] Add list of supported datasets at html page (#1850)
     - 2025-02-14 16:17:30 +08:00
   * - ``new_model.md``
     - [Docs] add en docs (#15)
     - 2023-07-06 12:58:44 +08:00
   * - ``objective_judgelm_evaluation.md``
     - [Fix] Fix Math Evaluation with Judge Model Evaluator & Add README (#1103)
     - 2024-04-28 21:58:58 +08:00
   * - ``prompt_attack.md``
     - [Feat] implementation for support promptbench (#239)
     - 2023-09-15 15:06:53 +08:00
   * - ``subjective_evaluation.md``
     - [Update] update docs and add compassarena (#1614)
     - 2024-10-17 14:39:06 +08:00