Lyu Han
fb12c3f98a
[Update] strip stop_words ( #1635 )
2024-10-24 20:39:20 +08:00
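A minimal sketch of what stripping stop words from a generated completion can look like (the helper below is hypothetical, not the actual #1635 implementation):

def strip_stop_words(text, stop_words):
    # Hypothetical helper: cut the completion at the first stop word found.
    for stop in stop_words:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text.strip()

print(strip_stop_words("Paris is the capital.<|im_end|>junk", ["<|im_end|>"]))
# -> "Paris is the capital."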
Chenguang Li
5868d5afa4
[Bug] Fix-NPU-Support ( #1618 )
* bugfix NPU support
* formatting
---------
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2024-10-21 17:42:53 +08:00
Lyu Han
6e8adf5221
[Bug] Remove prefix bos_token from messages when using lmdeploy as the accelerator ( #1623 )
* remove prefix bos_token from messages when using lmdeploy as the accelerator
* update
2024-10-19 20:03:47 +08:00
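A hedged sketch of the bos_token removal described in #1623 (the token string and helper name are assumptions; presumably lmdeploy's own tokenization adds a BOS token, so one already rendered into the prompt would be duplicated):

def remove_prefix_bos(prompt, bos_token="<s>"):
    # Assumed token value; the real one comes from the model's tokenizer config.
    if bos_token and prompt.startswith(bos_token):
        return prompt[len(bos_token):]
    return prompt

print(remove_prefix_bos("<s>Hello"))  # -> "Hello"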
x54-729
2b1afa7d1e
[Fix] fix interntrain's tokenizer truncate ( #1605 )
Co-authored-by: x54-729 <xingshuhao.dispatch@pjlab.org.cn>
2024-10-15 16:03:57 +08:00
Lyu Han
4fde41036f
[Feature] Update TurboMindModel by integrating lmdeploy pipeline API ( #1556 )
* integrate lmdeploy's pipeline api
* fix linting
* update user guide
* rename
* update
* update
* update
* rollback class name
* update
* remove unused code
* update
* update
* use pipeline
* fix ci check
* compatibility
* compatibility
* remove concurrency
* update
* fix table content
* update
2024-10-14 15:33:40 +08:00
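For reference, lmdeploy's pipeline API that #1556 builds TurboMindModel on is used roughly as below (a sketch with a placeholder model path and generation settings, not the exact TurboMindModel wiring):

from lmdeploy import GenerationConfig, TurbomindEngineConfig, pipeline

# Placeholder model; TurboMindModel wraps an equivalent pipeline internally.
pipe = pipeline("internlm/internlm2_5-7b-chat",
                backend_config=TurbomindEngineConfig(tp=1))
outputs = pipe(["Hi, please introduce yourself"],
               gen_config=GenerationConfig(max_new_tokens=256))
print(outputs[0].text)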
Lyu Han
b52ba65c26
[Feature] Integrate lmdeploy pipeline api ( #1198 )
* integrate lmdeploy's pipeline api
* fix linting
* update user guide
* rename
* update
* update
* update
* rollback class name
* update
* remove unused code
* update
* update
* fix ci check
* compatibility
* remove concurrency
* Update configs/models/hf_internlm/lmdeploy_internlm2_chat_7b.py
* Update docs/zh_cn/advanced_guides/evaluation_lmdeploy.md
* [Bug] fix lint
---------
Co-authored-by: Songyang Zhang <tonysy@users.noreply.github.com>
Co-authored-by: tonysy <sy.zhangbuaa@gmail.com>
2024-10-09 22:58:06 +08:00
x54-729
4d6349dfe1
[FIX] fix interntrain get_loglikelihood ( #1584 )
2024-10-08 11:34:04 +08:00
x54-729
bbdca5eb4c
[BUG] Fix eos token handling and add comments for InternTrain ( #1569 )
Co-authored-by: x54-729 <xingshuhao.dispatch@pjlab.org.cn>
2024-09-30 15:46:06 +08:00
Yi Ding
85a28874aa
[BUG]: Fix Bailing API configs ( #1570 )
2024-09-27 11:56:57 +08:00
Songyang Zhang
e8437db98f
[Feature] Update BailingLM/OpenAI verbose ( #1568 )
* [Feature] 1. Update CoreBench Base; 2. Fix lint issue in BailingAPI
* Update
* [Feature] Update API
* Update
2024-09-27 11:15:25 +08:00
Songyang Zhang
7d50294117
[Feature] Update Bailing ( #1567 )
* [Feature] 1. Update CoreBench Base; 2. Fix lint issue in BailingAPI
* Update
* Update
* Update
2024-09-26 18:56:17 +08:00
Songyang Zhang
a7bacfdf7e
[Feature] Update CoreBench 2.0 ( #1566 )
* [Feature] 1. Update CoreBench Base; 2. Fix lint issue in BailingAPI
* Update
* Update
2024-09-26 18:44:00 +08:00
Yi Ding
3f833186dc
[Feature] Support the reasoning from BaiLing LLM ( #1541 )
* [Feature] Support the reasoning from BaiLing LLM
This commit adds access to the BaiLing LLM and retrieves its reasoning output.
* Add the API example
An example of evaluating the BaiLing API
* Revise the generation arguments
Based on current experiments, we updated some generation arguments for better reasoning
* [fix] set the batch size
* Retry under server-side flow control
* add dependent package to requirement.txt
add the dependent package retrying so the pre-commit check passes.
* correct the file names and copy the files
correct the file names.
copy the files under configs to opencompass
* fix the lint issue
---------
Co-authored-by: christopher.dy <christopher.dy@antgroup.com>
2024-09-26 16:49:52 +08:00
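The "retry under server-side flow control" step above leans on the retrying package added to the requirements; a hedged sketch of how that might look (attempt counts, wait times, and the predicate are assumptions, not the actual #1541 code):

from retrying import retry

def _is_flow_control_error(exc):
    # Hypothetical predicate: only retry when the server signals rate limiting.
    return "rate limit" in str(exc).lower()

@retry(retry_on_exception=_is_flow_control_error,
       stop_max_attempt_number=5,
       wait_exponential_multiplier=1000,  # back off 1s, 2s, 4s, ...
       wait_exponential_max=30000)
def call_bailing_api(payload):
    # Placeholder for the actual HTTP call to the BaiLing endpoint.
    ...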
x54-729
335667183a
[Feature] Add Interntrain model support ( #1548 )
Co-authored-by: x54-729 <xingshuhao.dispatch@pjlab.org.cn>
2024-09-23 19:10:26 +08:00
Songyang Zhang
ee058e25b2
[Feature] Support verbose for OpenAI API ( #1546 )
2024-09-20 17:12:52 +08:00
hailsham
a81bbb85bf
[FIX] Added handling for the "begin section" in meta_template to APITemplateParser ( #1405 )
Co-authored-by: leifei <nuuooo@icloud.com>
2024-09-19 18:12:04 +08:00
Songyang Zhang
be460fbb21
[Feature] Support OpenAI O1 models ( #1539 )
* [Feature] Support OpenAI O1 models
* Update README.md
---------
Co-authored-by: liushz <qq1791167085@163.com>
2024-09-18 22:41:17 +08:00
zhulinJulia24
3754dc1b67
update ( #1522 )
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-12 15:00:52 +08:00
Albert Yan
928d0cfc3a
[Feature] Add support for Rendu API ( #1468 )
* Add support for Rendu API
* fix lint issue
* fix lint issue
* fix lint issue
* Update
---------
Co-authored-by: 13190 <zeyu.yan@transn.com>
Co-authored-by: tonysy <sy.zhangbuaa@gmail.com>
2024-09-06 01:00:43 +08:00
Maxime SHE
45efdc994d
[Feature] Add an api_key attribute to TurboMindAPIModel, default None ( #1475 )
Co-authored-by: Maxime <maximeshe@163.com>
Adds an api_key attribute to TurboMindAPIModel, default None, so that an API key can be set when using lmdeploy to deploy the LLM model.
2024-09-05 17:51:16 +08:00
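A hedged config sketch of the new attribute; only api_key itself comes from #1475, and the surrounding field names and values are illustrative assumptions rather than a verified OpenCompass config:

from opencompass.models import TurboMindAPIModel

models = [
    dict(
        type=TurboMindAPIModel,
        abbr='internlm2-chat-7b-lmdeploy-api',  # assumed name
        api_addr='http://127.0.0.1:23333',      # assumed server address field
        api_key=None,                           # new attribute; set when the server requires auth
        max_out_len=1024,
        batch_size=8,
    ),
]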
zhulinJulia24
716d46e1f5
[ci] fix badcase and add env info ( #1491 )
* update
* update
---------
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-05 16:43:45 +08:00
zhulinJulia24
fb6a0df652
[ci] fix test env for vllm and add vllm baselines ( #1481 )
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
2024-09-04 19:24:09 +08:00
Alexander Lam
8b39225259
[Feature] Added extra_body support for OpenAISDK; Added support for proxy URL when connecting to OpenAI's API. ( #1467 )
* fix lint issues
* fix lint issues
2024-08-29 00:43:43 +08:00
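A hedged sketch of the two features from #1467 as they appear at the OpenAI SDK level (the key, proxy address, model, and extra_body payload are placeholders; httpx >= 0.26 takes proxy=, older versions use proxies=):

import httpx
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                                          # placeholder
    http_client=httpx.Client(proxy="http://127.0.0.1:7890"),   # proxy URL
)
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"custom_field": "value"},  # merged verbatim into the request body
)
print(resp.choices[0].message.content)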
Guoli Yin
a488b9b4f5
[Feature] Make OPENAI_API_BASE compatible with openai default env ( #1461 )
* Make OPENAI_API_BASE compatible with openai default env
* Make OPENAI_API_BASE compatible with openai default env
---------
Co-authored-by: Guoli Yin <gyin@icloud.com>
2024-08-28 23:14:41 +08:00
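A minimal sketch of the environment-variable fallback #1461 describes (the precedence and default are assumptions; the openai SDK itself reads OPENAI_BASE_URL):

import os

# Accept OpenCompass's historical OPENAI_API_BASE as well as the SDK default.
openai_api_base = (
    os.environ.get("OPENAI_API_BASE")
    or os.environ.get("OPENAI_BASE_URL")
    or "https://api.openai.com/v1"
)
print(openai_api_base)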
seetimee
ac093fce53
[Update] Update openai_api.py ( #1438 )
Most models' token limits are above 32k. This fixes a long-context dataset test bug where some data was skipped.
2024-08-21 18:57:49 +08:00
liushz
e076dc5acf
[Fix] Fix openai api tiktoken bug for api server ( #1433 )
* Fix openai api tiktoken
* Fix openai api tiktoken
---------
Co-authored-by: liushz <liuhongwei@pjlab.rog.cn>
2024-08-20 22:02:14 +08:00
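For context on #1433: model names served behind an API server are often unknown to tiktoken, so token counting typically needs a fallback encoding. A hedged sketch (the helper name and fallback choice are assumptions):

import tiktoken

def count_tokens(text, model):
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model name (e.g. a custom API-server deployment): fall back.
        enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

print(count_tokens("Hello world", "my-served-model"))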
Linchen Xiao
8e55c9c6ee
[Update] Compassbench v1.3 ( #1396 )
* stash files
* compassbench subjective evaluation added
* evaluation update
* fix lint
* update docs
* Update lint
* changes saved
* changes saved
* CompassBench subjective summarizer added (#1349 )
* subjective summarizer added
* fix lint
[Fix] Fix MathBench (#1351 )
Co-authored-by: liuhongwei <liuhongwei@pjlab.org.cn>
[Update] Update model support list (#1353 )
* fix pip version
* fix pip version
* update model support
subjective summarizer updated
knowledge, math objective done (data needs update)
remove secrets
objective changes saved
knowledge data added
* secrets removed
* changed added
* summarizer modified
* summarizer modified
* compassbench coding added
* fix lint
* objective summarizer updated
* compass_bench_v1.3 updated
* update files in config folder
* remove unused model
* lcbench modified
* removed model evaluation configs
* remove duplicated sdk implementation
---------
Co-authored-by: zhangsongyang <zhangsongyang@pjlab.org.cn>
2024-08-12 19:09:19 +08:00
changyeyu
59586a8b4a
[Feature] Enable Truncation of Mid-Section for Long Prompts in huggingface_above_v4_33.py ( #1373 )
* Retain the first and last halves of the tokens from the prompt, discarding the middle, to avoid exceeding the model's maximum length.
* Add default parameter: mode
* Modified a comment.
* Modified variable names.
* fix yapf lint
2024-08-09 11:36:30 +08:00
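A minimal sketch of the mid-section truncation from #1373: keep the first and last halves of the prompt tokens and drop the middle (the helper name and exact split are illustrative):

def truncate_middle(token_ids, max_len):
    # Keep the head and tail of the prompt so it fits the model's max length.
    if len(token_ids) <= max_len:
        return token_ids
    half = max_len // 2
    return token_ids[:half] + token_ids[len(token_ids) - (max_len - half):]

print(truncate_middle(list(range(10)), 4))  # -> [0, 1, 8, 9]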
Songyang Zhang
c81329b548
[Fix] Fix Slurm ENV ( #1392 )
1. Support Slurm Cluster
2. Support automatic data download
3. Update InternLM2.5-1.8B/20B-Chat
2024-08-06 01:35:20 +08:00
Songyang Zhang
c09fc79ba8
[Feature] Support OpenAI ChatCompletion ( #1389 )
* [Feature] Support import configs/models/summarizers from whl
* Update
* Update openai sdk
* Update
* Update gemma
2024-08-01 19:10:13 +08:00
Songyang Zhang
46cc7894e1
[Feature] Support import configs/models/summarizers from whl ( #1376 )
* [Feature] Support import configs/models/summarizers from whl
* Update LCBench configs
* Update
* Update
* Update
* Update
* update
* Update
* Update
* Update
* Update
* Update
2024-08-01 00:42:48 +08:00
Songyang Zhang
33ceaa0eb8
[Bug] Fix bug in turbomind ( #1377 )
2024-07-30 09:37:50 +08:00
Songyang Zhang
704853e5e7
[Feature] Update pip install ( #1324 )
* [Feature] Update pip install
* Update Configuration
* Update
* Update
* Update
* Update Internal Config
* Update collect env
2024-07-29 18:32:50 +08:00
jxd
12b84aeb3b
[Feature] Update CHARM Memorization ( #1230 )
* update gemini api and add gemini models
* add openai models
* update CHARM evaluation
* add CHARM memorization tasks
* add CharmMemSummarizer (output eval details for memorization-independent reasoning analysis)
* update CHARM readme
---------
Co-authored-by: wujiang <wujiang@pjlab.org.cn>
2024-07-26 18:42:30 +08:00
LeavittLang
8ee7fecb68
Adding support for Doubao API ( #1218 )
* Adding support for Doubao API
* Update doubao_api.py
Fixed a bug where the connection would be retried even when it was normal.
* Update doubao_api.py
---------
Co-authored-by: bittersweet1999 <148421775+bittersweet1999@users.noreply.github.com>
2024-07-26 11:44:51 +08:00
heya5
73aa55af6d
[Fix] Support HF models deployed with an OpenAI-compatible API. ( #1352 )
* Support HF models deployed with an OpenAI-compatible API.
* resolve lint issue
* add extra_body arguments
There are many other arguments available when using an OpenAI-compatible API; see https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#extra-parameters-for-chat-api
* fix linting issue
* fix yapf linting issue
2024-07-25 18:38:23 +08:00
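A hedged sketch of querying an HF model served behind an OpenAI-compatible endpoint (for example a vLLM server) and passing backend-specific options through extra_body; the URL, model id, and extra parameters are placeholders:

from openai import OpenAI

# Placeholder endpoint of an OpenAI-compatible server (e.g. started with vLLM).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",             # placeholder model id
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"top_k": 50, "repetition_penalty": 1.05},    # backend extra parameters
)
print(resp.choices[0].message.content)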
Que Haoran
a244453d9e
[Feature] Support inference ppl datasets ( #1315 )
* commit inference ppl datasets
* revised format
* revise
* revise
* revise
* revise
* revise
* revise
2024-07-22 17:59:30 +08:00
Mo Li
f40add2596
[Fix] Fix lint ( #1334 )
* update needlebench docs
* update model_name_mapping dict
* update README
* fix_lint
2024-07-18 17:15:06 +08:00
Xu Song
1bfb4217ff
Fix typing and typo ( #1331 )
2024-07-18 13:41:24 +08:00
Fengzhe Zhou
a32f21a356
[Sync] Sync with internal codes 2024.06.28 ( #1279 )
2024-06-28 14:16:34 +08:00
Fengzhe Zhou
2954913d9b
[Sync] bump version ( #1204 )
2024-05-28 23:09:59 +08:00
bittersweet1999
88c14d3d04
add support for lmdeploy api judge ( #1193 )
2024-05-24 23:28:56 +08:00
Fengzhe Zhou
2b3d4150f3
[Sync] update evaluator ( #1175 )
2024-05-21 14:22:46 +08:00
Fengzhe Zhou
5de85406ce
[Sync] add OC16 entry ( #1171 )
2024-05-17 16:50:58 +08:00
Fengzhe Zhou
8ea2c404d7
[Feat] enable HuggingFacewithChatTemplate with --accelerator via cli ( #1163 )
* enable HuggingFacewithChatTemplate with --accelerator via cli
* rm vllm_internlm2_chat_7b
2024-05-15 21:51:07 +08:00
Fengzhe Zhou
f10dd48f9c
[Fix] Update stop_words in huggingface_above_v4_33 ( #1160 )
2024-05-15 14:10:33 +08:00
Fengzhe Zhou
62dbf04708
[Sync] update github workflow ( #1156 )
2024-05-14 22:42:23 +08:00
Fengzhe Zhou
7505b3cadf
[Feature] Add huggingface apply_chat_template ( #1098 )
* add TheoremQA with 5-shot
* add huggingface_above_v4_33 classes
* use num_worker partitioner in cli
* update theoremqa
* update TheoremQA
* add TheoremQA
* rename theoremqa -> TheoremQA
* update TheoremQA output path
* rewrite many model configs
* update huggingface
* further update
* refine configs
* update configs
* update configs
* add configs/eval_llama3_instruct.py
* add summarizer multi faceted
* update bbh datasets
* update configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py
* rename class
* update readme
* update hf above v4.33
2024-05-14 14:50:16 +08:00
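For reference, the Hugging Face chat-template path the huggingface_above_v4_33 classes build on looks roughly like this (placeholder model; requires a transformers version that ships apply_chat_template and a tokenizer with a chat template):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b",
                                          trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
]
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)  # rendered prompt string, ready for generation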
Yang Yong
53fe390454
fix LightllmApi workers bug ( #1113 )
2024-04-30 22:09:22 +08:00
Lyu Han
1013dce60c
adapt to lmdeploy v0.4.0 ( #1073 )
* adapt to lmdeploy v0.4.0
* compatible
2024-04-28 19:57:40 +08:00