This repository is an implementation of the paper accepted to ICMLA 2021, "Sketch2Vis: Generating Data Visualizations from Hand-drawn Sketches with Deep Learning".
It presents a deep learning solution for translating human sketches into data visualization source code.
- Create a conda environment with `conda env create -f environment.yml -n sketch2vis`.
- Activate the conda environment with `conda activate sketch2vis`.
We use synthetic hand-drawn-style data visualizations paired with a Domain-Specific Language (DSL) for model training. We provide three ways to generate datasets:
| Source | Currently Supported Types | Examples |
| --- | --- | --- |
| MatPlotLib | Bar, Line, Scatter, Pie, Box | |
| roughViz.js | Bar, Line, Pie, Scatter | |
| Photo-Sketching | Bar, Line, Scatter, Pie, Box | |
The detailed Sketch2Vis DSL grammar can be found in this Notebook.
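For intuition, each training example pairs a sketch image with a short DSL program describing the chart. The snippet below is purely hypothetical: the tokens and paths are invented for illustration, and the real grammar is defined in the notebook above.

```python
# Invented, illustrative tokens only -- the actual grammar is defined in
# the Sketch2Vis DSL notebook, not here.
training_pair = {
    "sketch": "raw_data/images/000001.png",      # hand-drawn-style chart image
    "dsl": "BAR X_LABELS A B C Y_VALUES 3 7 5",  # hypothetical target program
}
# The model learns to predict this token sequence from the image:
print(training_pair["dsl"].split())
```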
- MatPlotLib

  Run `python generate_data.py --task create --plot_source xkcd --output_dir raw_data --plot_number $number$`, where `$number$` is the number of plots to generate.
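  The hand-drawn look for this source comes from matplotlib's built-in xkcd mode. The snippet below is a minimal, self-contained sketch of that technique; it is an illustration only, not the repository's `generate_data.py`:

  ```python
  # Minimal illustration of matplotlib's xkcd ("hand-drawn") style.
  # Not the repository's generate_data.py -- just the underlying technique.
  import numpy as np
  import matplotlib
  matplotlib.use("Agg")  # render without a display
  import matplotlib.pyplot as plt

  rng = np.random.default_rng(seed=0)
  labels = ["A", "B", "C", "D"]
  values = rng.integers(1, 10, size=len(labels))

  with plt.xkcd():  # built-in hand-drawn rendering mode
      fig, ax = plt.subplots()
      ax.bar(labels, values)
      ax.set_title("Synthetic bar chart")
      fig.savefig("sample_xkcd_bar.png", dpi=100)
      plt.close(fig)
  ```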
- Photo-Sketching

  Run `python generate_data.py --task create --plot_source transfer --output_dir raw_data --plot_number $number$`.

  Then download the pre-trained PhotoSketch models and save them into `checkpoints/PhotoSketch/pretrained`.

  Then run `git submodule update --init` followed by `./transfer_style.sh`.
- roughViz.js

  We currently only provide pre-generated roughViz.js images.
- Merge Records

  Run `python generate_data.py --task merge --output_dir raw_data`.

  Then run `python preprocess.py` and `./preprocess.sh` to preprocess the merged records.
The implementation of the Transformer-based model, which translates the sketch into DSL code, is adapted from fairseq-image-captioning.
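For readers unfamiliar with the image-captioning setup, the sketch below outlines the general idea: a visual encoder turns the sketch image into a sequence of features, and a Transformer decoder autoregressively emits DSL tokens. This is a toy illustration under our own simplified assumptions (a one-layer CNN encoder and made-up dimensions), not the fairseq-based implementation used in this repository:

```python
# Toy image-to-sequence model: CNN encoder + Transformer decoder.
# Conceptual sketch only; the actual model is built on fairseq-image-captioning.
import torch
import torch.nn as nn

class SketchToDSL(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=3):
        super().__init__()
        # Toy encoder: one strided conv turns the image into a feature grid.
        self.cnn = nn.Conv2d(3, d_model, kernel_size=8, stride=8)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        # images: (B, 3, H, W); tokens: (B, T) DSL tokens generated so far
        memory = self.cnn(images).flatten(2).transpose(1, 2)  # (B, S, d_model)
        tgt = self.embed(tokens)                              # (B, T, d_model)
        # Causal mask so position t attends only to positions <= t.
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(hidden)                               # next-token logits

model = SketchToDSL(vocab_size=100)
logits = model(torch.randn(2, 3, 64, 64), torch.zeros(2, 5, dtype=torch.long))
print(logits.shape)  # torch.Size([2, 5, 100])
```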
Run `./train.sh` to train the model.
Run `./inference.sh` to run inference.
Run `python eval.py` to evaluate the results.
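As a rough illustration of sequence-level evaluation (an assumption on our part; see `eval.py` for the metrics actually used), generated DSL can be compared to a reference program with a metric such as BLEU:

```python
# Illustrative only: one plausible way to score generated DSL against a
# reference. The metrics actually reported live in eval.py.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "BAR X_LABELS A B C Y_VALUES 3 7 5".split()   # hypothetical tokens
hypothesis = "BAR X_LABELS A B C Y_VALUES 3 7 4".split()  # hypothetical tokens

smooth = SmoothingFunction().method1
print(sentence_bleu([reference], hypothesis, smoothing_function=smooth))
```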
If you like our work, please consider citing:
    @inproceedings{teng2021sketch2vis,
      title={Sketch2Vis: Generating Data Visualizations from Hand-drawn Sketches with Deep Learning},
      author={Teng, Zhongwei and Fu, Quchen and White, Jules and Schmidt, Douglas C},
      booktitle={2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)},
      pages={853--858},
      year={2021},
      organization={IEEE}
    }