[Feature] Support OpenFinData (#896)

* [Feature] Support OpenFinData

* add README for OpenFinData

* update README
Skyfall-xzz 2024-02-29 12:55:07 +08:00 committed by GitHub
parent 001e77fea2
commit 4c45a71bbc
5 changed files with 215 additions and 0 deletions


@@ -0,0 +1,64 @@
# OpenFinData
## Introduction
The introduction below is taken from the [OpenFinData](https://github.com/open-compass/OpenFinData) repository:
```
OpenFinData是由东方财富与上海人工智能实验室联合发布的开源金融评测数据集。该数据集代表了最真实的产业场景需求，是目前场景最全、专业性最深的金融评测数据集。它基于东方财富实际金融业务的多样化丰富场景，旨在为金融科技领域的研究者和开发者提供一个高质量的数据资源。
OpenFinData is an open-source financial evaluation dataset jointly released by East Money and the Shanghai Artificial Intelligence Laboratory. The dataset reflects real industry needs and currently offers the broadest scenario coverage and deepest domain expertise among financial evaluation datasets. It is built on the diverse, real-world scenarios of East Money's financial business and aims to provide a high-quality data resource for researchers and developers in fintech.
```
## Official link
### Repository
[OpenFinData](https://github.com/open-compass/OpenFinData)
## Use cases
In an evaluation config, include the OpenFinData datasets just like any other dataset:
```
from .datasets.OpenFinData.OpenFinData_gen import OpenFinData_datasets
```
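A fuller config then combines these datasets with one or more model entries. A minimal sketch following the usual opencompass config pattern (the model import path here is illustrative; substitute any model config shipped with opencompass):

```python
from mmengine.config import read_base

with read_base():
    # dataset list defined by this PR
    from .datasets.OpenFinData.OpenFinData_gen import OpenFinData_datasets
    # example model config -- replace with any model config available locally
    from .models.hf_internlm.hf_internlm2_chat_7b import models

datasets = [*OpenFinData_datasets]
```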
## Examples
Input example I:
```
你是一个数据审核小助手。表格内给出了2023年11月10日文一科技（600520）的最新数据，请指出其中哪个数据有误。请给出正确选项。
| 代码 | 名称 | 最新 | 涨幅% | 涨跌 | 成交量(股) | 成交额(元) | 流通市值 | 总市值 | 所属行业 |
|-------:|:-----|------:|------:|-----:|---------:|-----------:|-----------:|-----------:|:-------|
| 600520 | 文一科技 | 34.01 | 9.99 | 3.09 | 74227945 | 2472820896 | 5388200000 | 5388204300 | 通用设备 |
A. 2023年11月10日文一科技最新价34.01
B. 2023年11月10日文一科技成交额为2472820896
C. 文一科技的流通市值和总市值可能有误，因为流通市值5388200000元大于总市值5388204300元
D. 无明显错误数据
答案:
```
Output example I (from Qwen-14B-Chat):
```
C. 文一科技的流通市值和总市值可能有误，因为流通市值5388200000元大于总市值5388204300元。
```
Input example II:
```
你是一个实体识别助手。请列出以下内容中提及的公司。
一度扬帆顺风的光伏产业，在过去几年中面对潜在的高利润诱惑，吸引了众多非光伏行业的上市公司跨界转战，试图分得一杯羹。然而今年下半年以来出现了一个显著的趋势：一些跨界公司开始放弃或削减其光伏项目，包括皇氏集团（002329.SZ）、乐通股份（002319.SZ）、奥维通信（002231.SZ）等近十家公司。此外，还有一些光伏龙头放缓投资计划，如大全能源（688303.SH）、通威股份（600438.SZ）。业内人士表示，诸多因素导致了这股热潮的退却，包括市场变化、技术门槛、政策调整等等。光伏产业经历了从快速扩张到现在的理性回调，行业的自我调整和生态平衡正在逐步展现。从财务状况来看，较多选择退出的跨界企业都面临着经营压力。不过，皇氏集团、乐通股份等公司并未“全身而退”，仍在保持对光伏市场的关注，寻求进一步开拓的可能性。
答案:
```
Output example II (from InternLM2-7B-Chat):
```
皇氏集团（002329.SZ）、乐通股份（002319.SZ）、奥维通信（002231.SZ）、大全能源（688303.SH）、通威股份（600438.SZ）
```
## Evaluation results
```
dataset version metric mode qwen-14b-chat-hf internlm2-chat-7b-hf
---------------------------------- --------- -------- ------ ------------------ ----------------------
OpenFinData-emotion_identification b64193 accuracy gen 85.33 78.67
OpenFinData-entity_disambiguation b64193 accuracy gen 52 68
OpenFinData-financial_facts b64193 accuracy gen 70.67 46.67
OpenFinData-data_inspection a846b7 accuracy gen 53.33 51.67
OpenFinData-financial_terminology a846b7 accuracy gen 84 73.33
OpenFinData-metric_calculation a846b7 accuracy gen 55.71 68.57
OpenFinData-value_extraction a846b7 accuracy gen 84.29 71.43
OpenFinData-intent_understanding f0bd9e accuracy gen 88 86.67
OpenFinData-entity_recognition 81aeeb accuracy gen 68 84
```
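For the multiple-choice subsets, accuracy is computed after reducing each prediction to its final capital letter (via `last_capital_postprocess`). A rough standalone sketch of that idea; this is a simplified re-implementation for illustration, not the library function itself:

```python
import re

def extract_last_capital(text: str) -> str:
    # Return the last capital letter A-Z found in the model output,
    # or '' if none appears -- a simplified stand-in for opencompass's
    # last_capital_postprocess.
    letters = re.findall(r'[A-Z]', text)
    return letters[-1] if letters else ''

print(extract_last_capital('C. 文一科技的流通市值和总市值可能有误。'))  # -> C
```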


@@ -0,0 +1,4 @@
from mmengine.config import read_base

with read_base():
    from .OpenFinData_gen_46dedb import OpenFinData_datasets  # noqa: F401, F403


@@ -0,0 +1,99 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import AccEvaluator
from opencompass.datasets.OpenFinData import OpenFinDataDataset, OpenFinDataKWEvaluator
from opencompass.utils.text_postprocessors import last_capital_postprocess

OpenFinData_datasets = []

OpenFinData_3choices_list = ['emotion_identification', 'entity_disambiguation', 'financial_facts']
OpenFinData_4choices_list = ['data_inspection', 'financial_terminology', 'metric_calculation', 'value_extraction']
OpenFinData_5choices_list = ['intent_understanding']
OpenFinData_keyword_list = ['entity_recognition']
OpenFinData_all_list = OpenFinData_3choices_list + OpenFinData_4choices_list + OpenFinData_5choices_list + OpenFinData_keyword_list

OpenFinData_eval_cfg = dict(evaluator=dict(type=AccEvaluator), pred_postprocessor=dict(type=last_capital_postprocess))
OpenFinData_KW_eval_cfg = dict(evaluator=dict(type=OpenFinDataKWEvaluator))

for _name in OpenFinData_all_list:
    if _name in OpenFinData_3choices_list:
        OpenFinData_infer_cfg = dict(
            ice_template=dict(type=PromptTemplate, template=dict(begin="</E>", round=[
                dict(role="HUMAN", prompt=f"{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\n答案: "),
                dict(role="BOT", prompt="{answer}")]),
                ice_token="</E>"), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
        OpenFinData_datasets.append(
            dict(
                type=OpenFinDataDataset,
                path="./data/openfindata_release",
                name=_name,
                abbr="OpenFinData-" + _name,
                reader_cfg=dict(
                    input_columns=["question", "A", "B", "C"],
                    output_column="answer"),
                infer_cfg=OpenFinData_infer_cfg,
                eval_cfg=OpenFinData_eval_cfg,
            ))
    if _name in OpenFinData_4choices_list:
        OpenFinData_infer_cfg = dict(
            ice_template=dict(type=PromptTemplate, template=dict(begin="</E>", round=[
                dict(role="HUMAN", prompt=f"{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案: "),
                dict(role="BOT", prompt="{answer}")]),
                ice_token="</E>"), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
        OpenFinData_datasets.append(
            dict(
                type=OpenFinDataDataset,
                path="./data/openfindata_release",
                name=_name,
                abbr="OpenFinData-" + _name,
                reader_cfg=dict(
                    input_columns=["question", "A", "B", "C", "D"],
                    output_column="answer"),
                infer_cfg=OpenFinData_infer_cfg,
                eval_cfg=OpenFinData_eval_cfg,
            ))
    if _name in OpenFinData_5choices_list:
        OpenFinData_infer_cfg = dict(
            ice_template=dict(type=PromptTemplate, template=dict(begin="</E>", round=[
                dict(role="HUMAN", prompt=f"{{question}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\nE. {{E}}\n答案: "),
                dict(role="BOT", prompt="{answer}")]),
                ice_token="</E>"), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
        OpenFinData_datasets.append(
            dict(
                type=OpenFinDataDataset,
                path="./data/openfindata_release",
                name=_name,
                abbr="OpenFinData-" + _name,
                reader_cfg=dict(
                    input_columns=["question", "A", "B", "C", "D", "E"],
                    output_column="answer"),
                infer_cfg=OpenFinData_infer_cfg,
                eval_cfg=OpenFinData_eval_cfg,
            ))
    if _name in OpenFinData_keyword_list:
        OpenFinData_infer_cfg = dict(
            ice_template=dict(type=PromptTemplate, template=dict(begin="</E>", round=[
                dict(role="HUMAN", prompt=f"{{question}}\n答案: "),
                dict(role="BOT", prompt="{answer}")]),
                ice_token="</E>"), retriever=dict(type=ZeroRetriever), inferencer=dict(type=GenInferencer))
        OpenFinData_datasets.append(
            dict(
                type=OpenFinDataDataset,
                path="./data/openfindata_release",
                name=_name,
                abbr="OpenFinData-" + _name,
                reader_cfg=dict(
                    input_columns=["question"],
                    output_column="answer"),
                infer_cfg=OpenFinData_infer_cfg,
                eval_cfg=OpenFinData_KW_eval_cfg,
            ))

del _name


@@ -0,0 +1,47 @@
import json
import os.path as osp

from datasets import Dataset

from opencompass.openicl.icl_evaluator import BaseEvaluator
from opencompass.registry import ICL_EVALUATORS, LOAD_DATASET

from .base import BaseDataset


@LOAD_DATASET.register_module()
class OpenFinDataDataset(BaseDataset):

    @staticmethod
    def load(path: str, name: str):
        with open(osp.join(path, f'{name}.json'), 'r') as f:
            data = json.load(f)
        return Dataset.from_list(data)


@ICL_EVALUATORS.register_module()
class OpenFinDataKWEvaluator(BaseEvaluator):

    def __init__(self):
        super().__init__()

    def score(self, predictions, references):
        assert len(predictions) == len(references)
        scores = []
        results = dict()
        for i in range(len(references)):
            all_hit = True
            # Reference keywords are separated by the Chinese enumeration
            # comma; every keyword must appear in the prediction to count.
            judgement = references[i].split('、')
            for item in judgement:
                if item not in predictions[i]:
                    all_hit = False
                    break
            scores.append(all_hit)
        results['accuracy'] = round(sum(scores) / len(scores), 4) * 100
        return results


@@ -72,6 +72,7 @@ from .natural_question import *  # noqa: F401, F403
from .natural_question_cn import *  # noqa: F401, F403
from .NPHardEval import *  # noqa: F401, F403
from .obqa import *  # noqa: F401, F403
from .OpenFinData import *  # noqa: F401, F403
from .piqa import *  # noqa: F401, F403
from .py150 import *  # noqa: F401, F403
from .qasper import *  # noqa: F401, F403