Songyang Zhang
be460fbb21
[Feature] Support OpenAI O1 models ( #1539 )
...
* [Feature] Support OpenAI O1 models
* Update README.md
---------
Co-authored-by: liushz <qq1791167085@163.com>
2024-09-18 22:41:17 +08:00
zhulinJulia24
3754dc1b67
update ( #1522 )
...
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-12 15:00:52 +08:00
Albert Yan
928d0cfc3a
[Feature] Add support for Rendu API ( #1468 )
...
* Add support for Rendu API
* fix lint issue
* fix lint issue
* fix lint issue
* Update
---------
Co-authored-by: 13190 <zeyu.yan@transn.com>
Co-authored-by: tonysy <sy.zhangbuaa@gmail.com>
2024-09-06 01:00:43 +08:00
Maxime SHE
45efdc994d
[Feature] Add an api_key attribute (default None) to TurboMindAPIModel ( #1475 )
...
Co-authored-by: Maxime <maximeshe@163.com>
Add an api_key attribute (default None) to TurboMindAPIModel so that an API key can be set when using lmdeploy to deploy the LLM model (a config sketch follows below).
2024-09-05 17:51:16 +08:00
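
A minimal sketch of how the new attribute might be used in a model config; apart from api_key (which the commit above adds, defaulting to None), the import path and field names are assumptions for illustration only.

    # Hypothetical OpenCompass model config; only api_key comes from the commit above.
    from opencompass.models import TurboMindAPIModel  # import path assumed

    models = [
        dict(
            type=TurboMindAPIModel,
            abbr='internlm2-chat-7b-lmdeploy-api',  # placeholder model alias
            api_addr='http://127.0.0.1:23333',      # assumed lmdeploy api_server address
            api_key='YOUR_API_KEY',                 # new attribute; defaults to None when omitted
            max_out_len=1024,
            batch_size=8,
        )
    ]
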
zhulinJulia24
716d46e1f5
[ci] fix badcase and add env info ( #1491 )
...
* update
* update
---------
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-05 16:43:45 +08:00
zhulinJulia24
fb6a0df652
[ci] fix test env for vllm and add vllm baselines ( #1481 )
...
* update
---------
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-04 19:24:09 +08:00
Alexander Lam
8b39225259
[Feature] Added extra_body support for OpenAISDK; added support for a proxy URL when connecting to OpenAI's API. ( #1467 )
...
* fix lint issues
* fix lint issues
2024-08-29 00:43:43 +08:00
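
A hedged sketch of what the commit above enables for OpenAISDK; the keyword names (path, key, openai_api_base) and values are assumptions for illustration, only the extra_body and proxy-URL idea comes from the commit.

    # Illustrative OpenAISDK config; parameter names are assumed, not verified against this commit.
    from opencompass.models import OpenAISDK  # import path assumed

    models = [
        dict(
            type=OpenAISDK,
            path='gpt-4o',                                      # placeholder model name
            key='YOUR_API_KEY',
            openai_api_base='https://my-proxy.example.com/v1',  # proxy URL used instead of api.openai.com
            extra_body={'top_k': 50},                           # server-specific fields forwarded with each request
        )
    ]
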
Guoli Yin
a488b9b4f5
[Feature] Make OPENAI_API_BASE compatible with openai default env ( #1461 )
...
* Make OPENAI_API_BASE compatible with openai default env
* Make OPENAI_API_BASE compatible with openai default env
---------
Co-authored-by: Guoli Yin <gyin@icloud.com>
2024-08-28 23:14:41 +08:00
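
One plausible reading of the compatibility change above is falling back between OpenCompass's historical variable and the one the official openai client reads by default; the exact lookup order here is an assumption, not taken from the commit.

    import os

    # Assumed fallback order between the two environment variables.
    api_base = (
        os.environ.get('OPENAI_API_BASE')
        or os.environ.get('OPENAI_BASE_URL')
        or 'https://api.openai.com/v1'
    )
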
seetimee
ac093fce53
[Update] Update openai_api.py ( #1438 )
...
Most models' token limits are above 32k. This fixes a long-context dataset test bug where some data was skipped.
2024-08-21 18:57:49 +08:00
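
A hedged config sketch of the kind of limit bump described above; the field name max_seq_len and the other fields are assumptions for illustration.

    # Illustrative only; raising the sequence limit so long-context samples are not skipped.
    from opencompass.models import OpenAI  # import path assumed

    models = [
        dict(
            type=OpenAI,
            path='gpt-4o-mini',  # placeholder model name
            max_seq_len=32768,   # assumed field; most current models accept 32k+ tokens
        )
    ]
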
liushz
e076dc5acf
[Fix] Fix openai api tiktoken bug for api server ( #1433 )
...
* Fix openai api tiktoken
* Fix openai api tiktoken
---------
Co-authored-by: liushz <liuhongwei@pjlab.rog.cn>
2024-08-20 22:02:14 +08:00
Linchen Xiao
8e55c9c6ee
[Update] Compassbench v1.3 ( #1396 )
...
* stash files
* compassbench subjective evaluation added
* evaluation update
* fix lint
* update docs
* Update lint
* changes saved
* changes saved
* CompassBench subjective summarizer added (#1349 )
* subjective summarizer added
* fix lint
[Fix] Fix MathBench (#1351 )
Co-authored-by: liuhongwei <liuhongwei@pjlab.org.cn>
[Update] Update model support list (#1353 )
* fix pip version
* fix pip version
* update model support
subjective summarizer updated
knowledge, math objective done (data need update)
remove secrets
objective changes saved
knowledge data added
* secrets removed
* changed added
* summarizer modified
* summarizer modified
* compassbench coding added
* fix lint
* objective summarizer updated
* compass_bench_v1.3 updated
* update files in config folder
* remove unused model
* lcbench modified
* removed model evaluation configs
* remove duplicated sdk implementation
---------
Co-authored-by: zhangsongyang <zhangsongyang@pjlab.org.cn>
2024-08-12 19:09:19 +08:00
changyeyu
59586a8b4a
[Feature] Enable Truncation of Mid-Section for Long Prompts in huggingface_above_v4_33.py ( #1373 )
...
* Retain the first and last halves of the tokens from the prompt, discarding the middle, to avoid exceeding the model's maximum length.
* Add default parameter: mode
* Modified a comment.
* Modified variable names.
* fix yapf lint
2024-08-09 11:36:30 +08:00
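
A minimal sketch of the mid-section truncation described in the commit above; the function name and interface are hypothetical, not the repository's code.

    def truncate_mid(token_ids, max_len):
        # Keep the first and last halves of an over-long prompt and drop the
        # middle so the sequence fits within the model's maximum length.
        if len(token_ids) <= max_len:
            return token_ids
        head = max_len // 2
        tail = max_len - head
        return token_ids[:head] + token_ids[len(token_ids) - tail:]
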
Songyang Zhang
c81329b548
[Fix] Fix Slurm ENV ( #1392 )
...
1. Support Slurm Cluster
2. Support automatic data download
3. Update InternLM2.5-1.8B/20B-Chat
2024-08-06 01:35:20 +08:00
Songyang Zhang
c09fc79ba8
[Feature] Support OpenAI ChatCompletion ( #1389 )
...
* [Feature] Support import configs/models/summarizers from whl
* Update
* Update openai sdk
* Update
* Update gemma
2024-08-01 19:10:13 +08:00
Songyang Zhang
46cc7894e1
[Feature] Support import configs/models/summarizers from whl ( #1376 )
...
* [Feature] Support import configs/models/summarizers from whl
* Update LCBench configs
* Update
2024-08-01 00:42:48 +08:00
Songyang Zhang
33ceaa0eb8
[Bug] Fix bug in turbomind ( #1377 )
2024-07-30 09:37:50 +08:00
Songyang Zhang
704853e5e7
[Feature] Update pip install ( #1324 )
...
* [Feature] Update pip install
* Update Configuration
* Update
* Update
* Update
* Update Internal Config
* Update collect env
2024-07-29 18:32:50 +08:00
jxd
12b84aeb3b
[Feature] Update CHARM Memorization ( #1230 )
...
* update gemini api and add gemini models
* add openai models
* update CHARM evaluation
* add CHARM memorization tasks
* add CharmMemSummarizer (outputs eval details for memorization-independent reasoning analysis)
* update CHARM readme
---------
Co-authored-by: wujiang <wujiang@pjlab.org.cn>
2024-07-26 18:42:30 +08:00
LeavittLang
8ee7fecb68
Adding support for Doubao API ( #1218 )
...
* Adding support for Doubao API
* Update doubao_api.py
Fixed a bug where the connection would be retried even when it was working normally.
* Update doubao_api.py
---------
Co-authored-by: bittersweet1999 <148421775+bittersweet1999@users.noreply.github.com>
2024-07-26 11:44:51 +08:00
heya5
73aa55af6d
[Fix] Support HF models deployed with an OpenAI-compatible API. ( #1352 )
...
* Support HF models deployed with an OpenAI-compatible API.
* resolve lint issue
* add extra_body arguments
There are many other arguments available when using an OpenAI-compatible API, as listed here: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#extra-parameters-for-chat-api
* fix linting issue
* fix yapf linting issue
2024-07-25 18:38:23 +08:00
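
A usage sketch of passing backend-specific parameters through extra_body with the official openai client against an OpenAI-compatible server; the endpoint, model name, and the extra parameter are placeholders.

    from openai import OpenAI

    # Placeholder endpoint for an HF model served behind an OpenAI-compatible
    # API, e.g. a vLLM api_server.
    client = OpenAI(base_url='http://localhost:8000/v1', api_key='EMPTY')

    resp = client.chat.completions.create(
        model='my-hf-model',
        messages=[{'role': 'user', 'content': 'Hello'}],
        extra_body={'top_k': 50},  # understood by the serving backend, not by OpenAI itself
    )
    print(resp.choices[0].message.content)
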
Que Haoran
a244453d9e
[Feature] Support inference ppl datasets ( #1315 )
...
* commit inference ppl datasets
* revised format
* revise
* revise
* revise
* revise
* revise
* revise
2024-07-22 17:59:30 +08:00
Mo Li
f40add2596
[Fix] Fix lint ( #1334 )
...
* update needlebench docs
* update model_name_mapping dict
* update README
* fix_lint
2024-07-18 17:15:06 +08:00
Xu Song
1bfb4217ff
Fix typing and typo ( #1331 )
2024-07-18 13:41:24 +08:00
Fengzhe Zhou
a32f21a356
[Sync] Sync with internal codes 2024.06.28 ( #1279 )
2024-06-28 14:16:34 +08:00
Fengzhe Zhou
2954913d9b
[Sync] bump version ( #1204 )
2024-05-28 23:09:59 +08:00
bittersweet1999
88c14d3d04
add support for lmdeploy api judge ( #1193 )
2024-05-24 23:28:56 +08:00
Fengzhe Zhou
2b3d4150f3
[Sync] update evaluator ( #1175 )
2024-05-21 14:22:46 +08:00
Fengzhe Zhou
5de85406ce
[Sync] add OC16 entry ( #1171 )
2024-05-17 16:50:58 +08:00
Fengzhe Zhou
8ea2c404d7
[Feat] enable HuggingFacewithChatTemplate with --accelerator via cli ( #1163 )
...
* enable HuggingFacewithChatTemplate with --accelerator via cli
* rm vllm_internlm2_chat_7b
2024-05-15 21:51:07 +08:00
Fengzhe Zhou
f10dd48f9c
[Fix] Update stop_words in huggingface_above_v4_33 ( #1160 )
2024-05-15 14:10:33 +08:00
Fengzhe Zhou
62dbf04708
[Sync] update github workflow ( #1156 )
2024-05-14 22:42:23 +08:00
Fengzhe Zhou
7505b3cadf
[Feature] Add huggingface apply_chat_template ( #1098 )
...
* add TheoremQA with 5-shot
* add huggingface_above_v4_33 classes
* use num_worker partitioner in cli
* update theoremqa
* update TheoremQA
* add TheoremQA
* rename theoremqa -> TheoremQA
* update TheoremQA output path
* rewrite many model configs
* update huggingface
* further update
* refine configs
* update configs
* update configs
* add configs/eval_llama3_instruct.py
* add summarizer multi faceted
* update bbh datasets
* update configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py
* rename class
* update readme
* update hf above v4.33
2024-05-14 14:50:16 +08:00
Yang Yong
53fe390454
fix LightllmApi workers bug ( #1113 )
2024-04-30 22:09:22 +08:00
Lyu Han
1013dce60c
adapt to lmdeploy v0.4.0 ( #1073 )
...
* adapt to lmdeploy v0.4.0
* compatible
2024-04-28 19:57:40 +08:00
Wang Xingjin
048d41a1c4
add vllm get_ppl ( #1003 )
...
* add vllm get_ppl
* add vllm get_ppl
* format
---------
Co-authored-by: xingjin.wang <xingjin.wang@mihoyo.com>
Co-authored-by: Leymore <zfz-960727@163.com>
2024-04-26 21:31:56 +08:00
klein
e4830a6926
Update CIBench ( #1089 )
...
* modify the requirements/runtime.txt: numpy==1.23.4 --> numpy>=1.23.4
* update cibench: dataset and evaluation
* cibench summarizer bug
* update cibench
* move extract_code import
---------
Co-authored-by: zhangchuyu@pjlab.org.cn <zhangchuyu@pjlab.org.cn>
Co-authored-by: Leymore <zfz-960727@163.com>
2024-04-26 18:46:02 +08:00
Ke Bao
81d0e4d793
[Feature] Add lmdeploy tis python backend model ( #1014 )
...
* add lmdeploy tis python backend model
* fix pr check
* update
2024-04-23 14:27:11 +08:00
Fengzhe Zhou
8c85edd1cd
[Sync] deprecate old mbpps ( #1064 )
2024-04-19 20:49:46 +08:00
Robin Chen
c172401323
[Fix] Fixed repeated loading of VLLM ( #1051 )
...
* [fix] Fixed the issue caused by repeated loading of the VLLM model during task segmentation.
* [fix] avoid TypeError: VLLM.__init__() got an unexpected keyword argument 'tokenizer_only'
* restore .pre-commit-config.yaml
* restore opencompass/tasks/openicl_infer.py
---------
Co-authored-by: IcyFeather <mengzhuo.happy@gmail.com>
Co-authored-by: Leymore <zfz-960727@163.com>
2024-04-17 20:36:08 +08:00
Fengzhe Zhou
7a41951dda
[Fix] logger.error -> logger.debug in OpenAI wrapper ( #1050 )
...
* logger.error -> logger.info in OpenAI
* logger.info -> logger.debug in OpenAI
2024-04-15 21:08:13 +08:00
Fengzhe Zhou
b39f501563
[Sync] update taco ( #1030 )
2024-04-09 17:50:23 +08:00
bittersweet1999
02e7eec911
[Feature] Support AlpacaEval_V2 ( #1006 )
...
* support alpacaeval_v2
* support alpacaeval
* update docs
* update docs
2024-03-28 16:49:04 +08:00
Ke Bao
e415ddf96a
[Fix] Fix turbomind_tis ( #992 )
2024-03-22 15:50:12 +08:00
Fengzhe Zhou
bdd85358cc
[Sync] update 20240308 ( #953 )
2024-03-11 22:34:19 +08:00
Yang Yong
3829be87b1
Fix LightllmApi ppl test ( #951 )
2024-03-08 12:04:44 +08:00
Yang Yong
107e022cf4
Support prompt template for LightllmApi. Update LightllmApi token bucket. ( #945 )
2024-03-06 15:33:53 +08:00
RunningLeon
c54a5d3b0f
Support get_ppl for TurbomindModel ( #878 )
...
* update ppl for turbomindmodel
* update api_server
* rename config and set thread_safe for pytorch engine if possible
2024-03-06 11:44:19 +08:00
Fengzhe Zhou
b03d5dc531
[Sync] Sync Internal ( #941 )
2024-03-04 14:42:36 +08:00
bittersweet1999
001e77fea2
[Feature] add support for gemini ( #931 )
...
* add gemini
* add gemini
* add gemini
2024-02-28 19:38:34 +08:00
RunningLeon
32ba0b074e
Support lmdeploy pytorch engine ( #875 )
...
* add lmdeploy pytorch model
* fix
* speed up encoding and decoding
* fix
* change tokenizer
2024-02-22 03:46:07 -03:00