[Feature] add --dry-run option (#59)

Leymore 2023-07-17 10:41:38 +08:00 committed by GitHub
parent 840a8ebecb
commit e19a0c1cf8
3 changed files with 20 additions and 3 deletions


@@ -5,7 +5,7 @@
The program entry for the evaluation task is `run.py`; its usage is as follows:
```shell
python run.py $Config {--slurm | --dlc | None} [-p PARTITION] [-q QUOTATYPE] [--debug] [-m MODE] [-r [REUSE]] [-w WORKDIR] [-l]
python run.py $Config {--slurm | --dlc | None} [-p PARTITION] [-q QUOTATYPE] [--debug] [-m MODE] [-r [REUSE]] [-w WORKDIR] [-l] [--dry-run]
```
Here are some examples for launching the task in different environments:
@@ -24,6 +24,7 @@ The parameter explanation is as follows:
- `-r`: Reuse existing inference results, and skip the finished tasks. If followed by a timestamp, the result under that timestamp in the workspace path will be reused; otherwise, the latest result in the specified workspace path will be reused.
- `-w`: Specify the working path, default is `./outputs/default`.
- `-l`: Enable status reporting via Lark bot.
- `--dry-run`: When enabled, inference and evaluation tasks will be dispatched but not actually run, which is useful for debugging (see the example below).
Using run mode `-m all` as an example, the overall execution flow is as follows:
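
For illustration, a dry run can be launched like any normal evaluation by appending the flag; the config path `configs/eval_demo.py` below is a placeholder, not a file this change adds:

```shell
# Prepare (partition) the inference/evaluation tasks without actually running them.
# --dry-run also switches on debug mode internally (run.py sets args.debug = True).
python run.py configs/eval_demo.py --dry-run
```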


@@ -5,7 +5,7 @@
The program entry for the evaluation task is `run.py`; its usage is as follows:
```shell
python run.py $Config {--slurm | --dlc | None} [-p PARTITION] [-q QUOTATYPE] [--debug] [-m MODE] [-r [REUSE]] [-w WORKDIR] [-l]
python run.py $Config {--slurm | --dlc | None} [-p PARTITION] [-q QUOTATYPE] [--debug] [-m MODE] [-r [REUSE]] [-w WORKDIR] [-l] [--dry-run]
```
How to launch:
@@ -24,6 +24,7 @@ python run.py $Config {--slurm | --dlc | None} [-p PARTITION] [-q QUOTATYPE] [--
- `-r`: Reuse existing inference results. If followed by a timestamp, the results under that timestamp in the work path will be reused; otherwise, the latest results under the specified work path will be reused.
- `-w`: Specify the work path; defaults to `./outputs/default`.
- `-l`: Enable status reporting via the Lark bot.
- `--dry-run`: When enabled, inference and evaluation tasks will only be dispatched but not actually run, which is convenient for debugging.
Taking the run mode `-m all` as an example, the overall execution flow is as follows:

run.py

@@ -37,6 +37,12 @@ def parse_args():
'redirected to files',
action='store_true',
default=False)
parser.add_argument('--dry-run',
help='Dry run mode, in which the scheduler will not '
'actually run the tasks, but only print the commands '
'to run',
action='store_true',
default=False)
parser.add_argument('-m',
'--mode',
help='Running mode. You can choose "infer" if you '
@@ -135,7 +141,8 @@ def parse_dlc_args(dlc_parser):
def main():
args = parse_args()
if args.dry_run:
args.debug = True
# initialize logger
logger = get_logger(log_level='DEBUG' if args.debug else 'INFO')
@@ -197,6 +204,8 @@ def main():
max_task_size=args.max_partition_size,
gen_task_coef=args.gen_task_coef)
tasks = partitioner(cfg)
if args.dry_run:
return
# execute the infer subtasks
exec_infer_runner(tasks, args, cfg)
# If they have specified "infer" in config and haven't used --slurm
@@ -217,6 +226,8 @@ def main():
cfg['work_dir'], 'predictions/')
partitioner = PARTITIONERS.build(cfg.infer.partitioner)
tasks = partitioner(cfg)
if args.dry_run:
return
runner = RUNNERS.build(cfg.infer.runner)
runner(tasks)
@@ -235,6 +246,8 @@ def main():
partitioner = NaivePartitioner(
osp.join(cfg['work_dir'], 'results/'))
tasks = partitioner(cfg)
if args.dry_run:
return
# execute the eval tasks
exec_eval_runner(tasks, args, cfg)
# If they have specified "eval" in config and haven't used --slurm
@@ -255,6 +268,8 @@ def main():
'results/')
partitioner = PARTITIONERS.build(cfg.eval.partitioner)
tasks = partitioner(cfg)
if args.dry_run:
return
runner = RUNNERS.build(cfg.eval.runner)
runner(tasks)
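
For reference, a minimal, self-contained sketch of the pattern these run.py changes follow: the flag forces debug mode, the partitioner still produces the task list, and `main()` returns before any runner is invoked. The helper names `partition_tasks` and `execute_tasks` are illustrative stand-ins, not functions from this repository.

```python
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='dry-run gate sketch')
    parser.add_argument('--debug', action='store_true', default=False)
    parser.add_argument('--dry-run', action='store_true', default=False)
    return parser.parse_args()


def partition_tasks(cfg):
    """Stand-in partitioner: split the config into one task per model."""
    return [{'model': name} for name in cfg['models']]


def execute_tasks(tasks):
    """Stand-in runner: this is where real jobs would be launched."""
    for task in tasks:
        print(f'running {task}')


def main():
    args = parse_args()
    if args.dry_run:
        # A dry run implies debug mode so the partition result is visible.
        args.debug = True

    cfg = {'models': ['model_a', 'model_b']}
    tasks = partition_tasks(cfg)
    if args.debug:
        print(f'partitioned into {len(tasks)} task(s): {tasks}')

    if args.dry_run:
        # Stop here: nothing is dispatched or executed.
        return
    execute_tasks(tasks)


if __name__ == '__main__':
    main()
```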