From 9e8b671de01f1a8de40fdba092e255169d57834d Mon Sep 17 00:00:00 2001
From: lupantech
Date: Mon, 17 Oct 2022 01:00:23 -0700
Subject: [PATCH] first commit

---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index db7a386..bdbc739 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # ScienceQA: Science Question Answering
 
-![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain of Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)
+![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science_Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain_of_Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT--3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)
 
 Data and code for NeurIPS 2022 Paper "[Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](http://lupantech.github.io/papers/neurips22_scienceqa.pdf)".
 
@@ -25,7 +25,7 @@ For more details, you can find our project page [here](https://scienceqa.github.
 
 ## Download the dataset
 
-The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:
+The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/blob/main/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:
 
 ```sh
 . tools/download.sh
@@ -62,7 +62,7 @@ pip install -r requirements.txt
 
 ### Generate the image captions
 
-We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/data/problems.json).
+We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/blob/main/data/captions.json).
 
 (Optionally) You can generate the image captions with user-specific arguments with the following command, which will save the caption data in `data/captions_user.json`.
 
@@ -138,6 +138,7 @@ This work is licensed under a [MIT License](http://creativecommons.org/licenses/
 
 The ScienceQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
 
+
 ## Cite
 
 If the paper, codes, or the dataset inspire you, please cite us:
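
For reference, here is a minimal sketch (not part of the patch or the repository's tooling) of how the two files whose links the patch corrects could be inspected after downloading. It assumes `data/scienceqa/problems.json` and `data/captions.json` parse as JSON objects keyed by problem ID; the field names printed are whatever the files actually contain.

```python
import json
from pathlib import Path

# Paths as given in the patched README; adjust if your checkout differs.
problems_path = Path("data/scienceqa/problems.json")
captions_path = Path("data/captions.json")

# Load the text part of ScienceQA and the pre-generated image captions.
problems = json.loads(problems_path.read_text())
captions = json.loads(captions_path.read_text())

print(f"Loaded {len(problems)} problems and {len(captions)} caption entries.")

# Peek at one record to see which fields are available
# (assumes each value in problems.json is a dict of problem fields).
first_id, first_problem = next(iter(problems.items()))
print(first_id, sorted(first_problem.keys()))
```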