.. list-table:: Advanced guides
   :header-rows: 1

   * - File
     - Last commit
     - Date
   * - accelerator_intro.md
     - Add doc for accelerator function (#1252)
     - 2024-06-24 14:53:51 +08:00
   * - circular_eval.md
     - [Feature] Add circular eval (#610)
     - 2023-11-23 16:45:47 +08:00
   * - code_eval_service.md
     - update links and checkers (#890)
     - 2024-03-13 11:01:35 +08:00
   * - code_eval.md
     - [Sync] deprecate old mbpps (#1064)
     - 2024-04-19 20:49:46 +08:00
   * - compassbench_intro.md
     - [Feature] CompassBench v1_3 subjective evaluation (#1341)
     - 2024-07-19 23:12:23 +08:00
   * - compassbench_v2_0.md
     - [Feature] CompassBench v1_3 subjective evaluation (#1341)
     - 2024-07-19 23:12:23 +08:00
   * - contamination_eval.md
     - [Docs] Update contamination docs (#775)
     - 2024-01-08 16:37:28 +08:00
   * - custom_dataset.md
     - [Sync] update configs (#734)
     - 2023-12-25 21:59:16 +08:00
   * - evaluation_lightllm.md
     - Support prompt template for LightllmApi. Update LightllmApi token bucket. (#945)
     - 2024-03-06 15:33:53 +08:00
   * - evaluation_turbomind.md
     - update links and checkers (#890)
     - 2024-03-13 11:01:35 +08:00
   * - longeval.md
     - [Docs] Readme in longeval (#389)
     - 2023-09-18 17:06:00 +08:00
   * - needleinahaystack_eval.md
     - [Feature] Make NeedleBench available on HF (#1364)
     - 2024-07-25 19:01:56 +08:00
   * - new_dataset.md
     - [Docs] update invalid link in docs (#499)
     - 2023-10-25 13:15:42 +08:00
   * - new_model.md
     - updates docs (#1015)
     - 2024-04-02 10:30:04 +08:00
   * - objective_judgelm_evaluation.md
     - [Fix] Fix Math Evaluation with Judge Model Evaluator & Add README (#1103)
     - 2024-04-28 21:58:58 +08:00
   * - prompt_attack.md
     - [Feat] implementation for support promptbench (#239)
     - 2023-09-15 15:06:53 +08:00
   * - subjective_evaluation.md
     - [Refactor] Reorganize subjective eval (#1284)
     - 2024-07-05 22:11:37 +08:00