
[ACM'MM 2024 Oral] Official code for "OneChart: Purify the Chart Structural Extraction via One Auxiliary Token"


Jinyue Chen*, Lingyu Kong*, Haoran Wei, Chenglong Liu, Zheng Ge, Liang Zhao, Jianjian Sun, Chunrui Han, Xiangyu Zhang

Release

  • [2024/9/16] 🔥 Added support for quickly trying the demo via Hugging Face.
  • [2024/7/21] 🎉🎉🎉 OneChart is accepted as an ACM'MM 2024 Oral (3.97% acceptance rate)!
  • [2024/4/21] 🔥🔥🔥 We have released the web demo on the Project Page. Have fun!!
  • [2024/4/15] 🔥 We have released the code, weights, and benchmark data.

Contents

0. Quickly try the demo using Hugging Face

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('kppkkp/OneChart', trust_remote_code=True, use_fast=False, padding_side="right")
model = AutoModel.from_pretrained('kppkkp/OneChart', trust_remote_code=True, low_cpu_mem_usage=True, device_map='cuda')
model = model.eval().cuda()

# input your test image
image_file = 'image.png'
res = model.chat(tokenizer, image_file, reliable_check=True)
print(res)
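
The exact output format is defined by the model's remote code; assuming `res` is a dict-style string matching the training target shown in Section 4, a minimal parsing sketch (the helper name is hypothetical) could look like this:

import ast
import json

# Minimal sketch, assuming `res` is a dict-style string such as
# '{"title": ..., "values": {...}}' (mirrors the training target in Section 4).
def parse_chart_result(res_str):
    try:
        return json.loads(res_str)           # standard JSON with double quotes
    except (json.JSONDecodeError, TypeError):
        return ast.literal_eval(res_str)     # fall back to Python-literal syntax

# chart = parse_chart_result(res)
# print(chart["title"], chart["values"])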

1. Benchmark Data and Evaluation Tool

  • Download the ChartSE images and JSON files here.
  • Modify the JSON path at the beginning of ChartSE_eval/eval_ChartSE.py, then run the eval script:
python ChartSE_eval/eval_ChartSE.py
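
If you want to generate your own predictions on the benchmark images before evaluation, a rough sketch reusing the Hugging Face interface from Section 0 is below. The image folder, output file, and prediction layout are placeholders, not the official pipeline; check the top of ChartSE_eval/eval_ChartSE.py for the paths and format it actually expects.

import json
import os
from transformers import AutoModel, AutoTokenizer

# Rough sketch: run OneChart over a folder of benchmark images and save the
# raw outputs for later evaluation. Paths below are placeholders.
image_dir = 'ChartSE_images/'               # assumed local path to the downloaded images
output_path = 'onechart_predictions.json'   # hypothetical output file

tokenizer = AutoTokenizer.from_pretrained('kppkkp/OneChart', trust_remote_code=True, use_fast=False, padding_side="right")
model = AutoModel.from_pretrained('kppkkp/OneChart', trust_remote_code=True, low_cpu_mem_usage=True, device_map='cuda').eval()

predictions = {}
for name in sorted(os.listdir(image_dir)):
    if name.lower().endswith(('.png', '.jpg', '.jpeg')):
        predictions[name] = model.chat(tokenizer, os.path.join(image_dir, name), reliable_check=True)

with open(output_path, 'w', encoding='utf-8') as f:
    json.dump(predictions, f, ensure_ascii=False, indent=1)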

2. Install

  • Clone this repository and navigate to the code folder
git clone https://github.com/LingyvKong/OneChart.git
cd OneChart/OneChart_code/
  • Install Package
conda create -n onechart python=3.10 -y
conda activate onechart
pip install -e .
pip install -r requirements.txt
pip install ninja
  • Download the OneChart weights here.

3. Demo

python vary/demo/run_opt_v1.py --model-name /onechart_weights_path/

Following the prompts, type 1 first, then enter the image path.

4. Train

  • Prepare your dataset JSON; the expected format is shown in the example below (a small helper sketch for writing this format follows at the end of this section):
[
 {
  "image": "000000.png",
  "conversations": [
   {
    "from": "human",
    "value": "<image>\nConvert the key information of the chart to a python dict:"
   },
   {
    "from": "gpt",
    "value": "{\"title\": \"Share of children who are wasted, 2010\", \"source\": \"None\", \"x_title\": \"None\", \"y_title\": \"None\", \"values\": {\"Haiti\": \"6.12%\", \"Libya\": \"5.32%\", \"Morocco\": \"5.11%\", \"Lebanon\": \"4.5%\", \"Colombia\": \"1.45%\"}}"
   }
  ]
 },
 {
   ...
 }
]
  • Fill in the data path in OneChart/OneChart_code/vary/utils/constants.py. An example training script is:
deepspeed /data/OneChart_code/vary/train/train_opt.py \
    --deepspeed /data/OneChart_code/zero_config/zero2.json \
    --model_name_or_path /data/checkpoints/varytiny/ \
    --vision_tower /data/checkpoints/varytiny/ \
    --freeze_vision_tower False \
    --freeze_lm_model False \
    --vision_select_layer -2 \
    --use_im_start_end True \
    --bf16 True \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 250 \
    --save_total_limit 1 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --report_to none \
    --per_device_train_batch_size 16 \
    --num_train_epochs 1 \
    --learning_rate 5e-5 \
    --datasets render_chart_en+render_chart_zh \
    --output_dir /data/checkpoints/onechart-pretrain/
  • Pay particular attention to these parameters and adjust them to your setup: --model_name_or_path, --freeze_vision_tower, --datasets, --output_dir
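
The sketch referenced above: a minimal, unofficial helper for writing training samples in the dataset format shown earlier. The function name and output file name are illustrative; the chart dict is copied from the example entry.

import json

# Minimal, unofficial sketch for producing the dataset format shown above.
def build_sample(image_name, chart_dict):
    return {
        "image": image_name,
        "conversations": [
            {"from": "human",
             "value": "<image>\nConvert the key information of the chart to a python dict:"},
            {"from": "gpt",
             "value": json.dumps(chart_dict, ensure_ascii=False)},
        ],
    }

samples = [
    build_sample("000000.png", {
        "title": "Share of children who are wasted, 2010",
        "source": "None", "x_title": "None", "y_title": "None",
        "values": {"Haiti": "6.12%", "Libya": "5.32%", "Morocco": "5.11%",
                   "Lebanon": "4.5%", "Colombia": "1.45%"},
    }),
]

with open("my_chart_train.json", "w", encoding="utf-8") as f:   # hypothetical file name
    json.dump(samples, f, ensure_ascii=False, indent=1)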

Acknowledgement

  • Vary: the codebase and initial weights we built upon!


Usage and License Notices: The data, code, and checkpoint are intended and licensed for research use only. They are further restricted to uses that follow the license agreements of Vary and OPT.

Citation

If you find our work useful in your research, please consider citing OneChart:

@misc{chen2024onechart,
      title={OneChart: Purify the Chart Structural Extraction via One Auxiliary Token}, 
      author={Jinyue Chen and Lingyu Kong and Haoran Wei and Chenglong Liu and Zheng Ge and Liang Zhao and Jianjian Sun and Chunrui Han and Xiangyu Zhang},
      year={2024},
      eprint={2404.09987},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
