first commit
lupantech committed Oct 17, 2022
1 parent 360b71c · commit 9e8b671
Showing 1 changed file with 4 additions and 3 deletions.
README.md
@@ -1,6 +1,6 @@
# ScienceQA: Science Question Answering

-![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain of Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)
+![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science_Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain_of_Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT--3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)

Data and code for NeurIPS 2022 Paper "[Learn to Explain: Multimodal Reasoning via
Thought Chains for Science Question Answering](http://lupantech.github.io/papers/neurips22_scienceqa.pdf)".
@@ -25,7 +25,7 @@ For more details, you can find our project page [here](https://scienceqa.github.io/).

## Download the dataset

-The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:
+The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/blob/main/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:

```sh
. tools/download.sh
```
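Once downloaded, the text data can be inspected with a few lines of Python. This is a minimal sketch, assuming `problems.json` maps problem IDs to records with fields such as `question`, `choices`, and `answer` (those field names are assumptions, not a documented schema):

```python
import json

# Load the text part of ScienceQA (assumed: a dict keyed by problem ID).
with open("data/scienceqa/problems.json") as f:
    problems = json.load(f)

print(f"{len(problems)} problems loaded")

# Peek at one record; the field names are assumptions, so read them
# defensively with .get() rather than direct indexing.
pid, record = next(iter(problems.items()))
print(pid, record.get("question"), record.get("choices"), record.get("answer"))
```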
@@ -62,7 +62,7 @@ pip install -r requirements.txt

### Generate the image captions

-We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/data/problems.json).
+We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/blob/main/data/captions.json).

Optionally, you can generate the image captions with user-specified arguments using the following command, which will save the caption data in `data/captions_user.json`.
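Whichever caption file is used, the captions can later stand in for the images as plain text. Below is a minimal sketch of that downstream use, assuming `problems.json` and `captions.json` are both keyed by problem ID and store plain strings as captions (both assumptions, not the repository's documented layout):

```python
import json

# Assumed layout: both files map problem IDs to their entries.
with open("data/scienceqa/problems.json") as f:
    problems = json.load(f)
with open("data/captions.json") as f:
    captions = json.load(f)

def text_context(pid: str) -> str:
    """Combine a problem's caption, hint, and question into one text block."""
    record = problems[pid]
    parts = [captions.get(pid, ""), record.get("hint", ""), record.get("question", "")]
    return "\n".join(p for p in parts if p)
```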

@@ -138,6 +138,7 @@ This work is licensed under a [MIT License](http://creativecommons.org/licenses/
The ScienceQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).



## Cite

If the paper, code, or dataset inspires you, please cite us:
