Adding OBELICS DataLoader #663
Conversation
scripts/check_padding_mm.py (Outdated)

BATCH_NUMBER = 4

def main():
Maybe we can make this a unit test? WDYT?
I would add, as a unit test, some checks of shapes & types on the DP axis, rather than this script, which just checks the amount of padding in each batch.
Sorry for the super late review. I finally finished the first round of review; let me know what you think about my questions and comments. I will revisit this PR in the coming week.
Mapping[str, Any]: The sample with an updated "image" field and added
    "aspect_ratio" field.
"""
image = sample["image"]
I am wondering, instead of assuming an "image" field, shall we just pass in the image itself so that this part can be generic for other datasets as well?
The docs in `_process_obelics_sample` state clearly the structure that a sample_processor generates in order to work with `Llama3VisionTransform`. I guess we can mark this comment as resolved now that we have the sample_processor & text_processor + datasets.md in docs, right?
max_num_tiles (Optional[int]): Only used if possible_resolutions is NOT given.
    Maximum number of tiles to break an image into.
    This will be used to generate possible_resolutions,
    e.g. [(224, 224), (224, 448), (448, 224)] if max_num_tiles = 2 and tile_size = 224.
Why don't we do (448, 448) as well? Also, shouldn't max_num_tiles be 4?
Tiles have width and height, 2D. (448, 448) is 4 tiles, while the 3 resolutions in the example use at most 2 tiles ((224, 224) is a single tile, (224, 448) is one tile next to another & (448, 224) is one tile on top of another).

For `max_num_tiles` = 4 we have: [(224, 896), (448, 448), (224, 224), (896, 224), (224, 672), (672, 224), (224, 448), (448, 224)]. We can put this example in the docstring but it's larger.
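To make the example concrete, here is a minimal sketch of how such resolutions can be enumerated from `max_num_tiles` and `tile_size`. This is an assumption about the enumeration logic, not the actual torchtune/torchtitan implementation:

```python
from itertools import product

# Hypothetical sketch: enumerate every (height, width) canvas whose tile grid
# uses at most `max_num_tiles` tiles of size tile_size x tile_size.
def get_possible_resolutions(max_num_tiles: int, tile_size: int) -> list[tuple[int, int]]:
    resolutions = set()
    for rows, cols in product(range(1, max_num_tiles + 1), repeat=2):
        if rows * cols <= max_num_tiles:
            resolutions.add((rows * tile_size, cols * tile_size))
    return sorted(resolutions)

print(get_possible_resolutions(2, 224))  # [(224, 224), (224, 448), (448, 224)]
print(get_possible_resolutions(4, 224))  # 8 canvases, including (448, 448) and (224, 896)
```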
torchtitan/datasets/mm_datasets.py (Outdated)

].count(self.image_token)
self._sample_idx += 1
# Transform sample
processed_sample = self.transform(processed_sample)
I know you got this from torchtune, but can we make the name not look like a transformer? Maybe `preproc`? Because we will have transform or transformer in later stages of the model as well. And this part is not trainable, so to better differentiate, could you give it a different name?
Changed to `self.format(processed_sample)` & `Llama3VisionFormatter`.
>>> transform = VisionCrossAttentionMask(tile_size=400, patch_size=40, image_token_id=1)
>>> intervals = transform._get_image_attention_intervals(tokens)
>>> print(intervals)
[[0, 7], [1, 7], [7, 12]]
Hmmm, this is slightly different from what I have read about masking... I need more time to think about and validate the logic of this part.
Text tokens only attend to the previous image, or to multiple images if they are consecutive. From the example "Image1Image2 These are two dogs. Image3 This is a cat.": "These are two dogs." will attend to Images 1 & 2, and "This is a cat." only to Image 3.
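As a rough sketch of how those intervals can be derived (simplified and hypothetical, not the exact torchtune code), using the docstring example above:

```python
# Hypothetical, simplified interval computation: each image token attends from
# its own position up to the next image token that starts a *new* image group,
# or to the end of the sequence.
def get_image_attention_intervals(tokens: list[int], image_token_id: int) -> list[list[int]]:
    image_positions = [i for i, tok in enumerate(tokens) if tok == image_token_id]
    intervals = []
    for pos in image_positions:
        end = len(tokens)
        for later in image_positions:
            # An image token preceded by a non-image token starts a new group.
            if later > pos and tokens[later - 1] != image_token_id:
                end = later
                break
        intervals.append([pos, end])
    return intervals

# "<image><image> These are two dogs. <image> This is a cat." with image_token_id=1
# (all other token ids are made up for the example)
tokens = [1, 1, 9673, 9674, 9675, 9676, 9677, 1, 9678, 9679, 9680, 9681]
print(get_image_attention_intervals(tokens, image_token_id=1))  # [[0, 7], [1, 7], [7, 12]]
```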
torchtitan/datasets/hf_datasets.py (Outdated)

batch_size: int,
collator_fn: Callable,
):
super().__init__(dataset=hf_ds, batch_size=batch_size, collate_fn=collator_fn)
Where is this `collate_fn` being used or called?
Now we also have #1021, which is also adding the collator. For the HF datasets solution already in torchtitan we don't need it, as the samples produced by the Dataset are ready to go. For the multimodal one we need it to pad to the longest/biggest samples in the batch; otherwise, we could pad to the largest supported shapes directly in the dataset (I prefer the collator solution).
# NOTE Inspired from torchtune.data._collate.py
@dataclass
class MultiModalCollator:
IIUC, this generates all the data needed before sending it into the encoder, right? Also, for the MM model we need to feed text tokens into the decoder as well; shall we just reuse the existing dataloader for llama3?
> shall we just reuse the existing dataloader for llama3?

No! Now, with the Dataset & DataLoader in `mm_dataset.py` & `mm_dataloader.py` + the collator, we prepare all the inputs with a single DataLoader!

The collator returns the prepared batches with:
```python
batch_dict = {
    "tokens": collated_text["tokens"],
    "labels": collated_text["labels"],
    "encoder_input": {
        "images": collated_images,
        "aspect_ratio": collated_aspect_ratios,
    },
    "encoder_mask": concat_masks,
}
```
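For reference, as discussed in the padding section of the PR description further down, `encoder_input["images"]` ends up as a tensor of shape [Batch size, Number of images, Number of tiles, Channels, Tile size, Tile size] after collation.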
Sorry, I will try to answer all of these comments within the next 10 days.

@TJ-Solergibert I am wondering if we can split this PR and get it merged in pieces?

Hey @TJ-Solergibert, are you still interested in continuing to work on this PR?
Force-pushed from 9a02575 to 1daca4c.
Updated the PR to the main branch, incorporating new features from torchtitan like the …
Thank you for the hard work here. This looks good, but it includes more Llama 3.2 code than I think we need to enable MM in torchtitan. Since most modern VLMs use Early Fusion architectures instead of Deep Fusion like 3.2, we should choose to just support Early Fusion models for now. I left some comments on what could be removed or moved. After that it looks good to go.
self._sample_processor = sample_processor
self.image_token = "<|image|>"  # TODO(tj.solergibert) Hardcoded!

self.transform = Llama3VisionTransform(
Leave a todo comment here to make this not hardcoded
Left multiple TODOs. We have to decide which variables we want to expose through `JobConfig`.
This looks good, thank you for the quick turnaround. I left one additional comment, but I'd be happy to land it now and then we can iterate on it further in follow-up PRs. If you're ready to land it, you can just remove your personal TODO comments, remove [WIP], and I'll approve it.
Thanks for your work. I left some comments.
Please create a new customized `tiktoken.py` within the experiment folder.
from mm_dataset import build_mm_dataloader

PATH_TO_TOKENIZER = "/iopsstor/scratch/cscs/asolergi/torchtitan/tokenizer.model"
what is this path?
It's the path to the Llama 3 tokenizer. I've exposed all the args through `click`, but I can drop this script if you want!
What is this file for? Is it a test, or for a sanity check? If so, let's put it into a tests folder.
It's just for a sanity check. Should I delete it?
Hi @tianyu-l & @pbontrager, the PR is ready for a re-review! I would say I've addressed all your comments; they've been very helpful, thanks! In the last push I've mainly created a new …

I've left some TODO comments in the code, most of them regarding which arguments we should or shouldn't expose to the user: …

Toni
Looks beautiful! Thank you for building the foundation of multimodal training in torchtitan!
Please fix linting before we can merge.
Done! Thanks for your comments! Now that I have a bit of bandwidth, I will check if I can keep contributing! I'm excited to see how far torchtitan gets for multimodal training!
Hi,
In this PR I present a first draft of the Multimodal DataLoader. First I will describe how the batches are created and then I will explain the padding problem.
Let's begin checking the OBELICS dataset. For every sample in the dataset we have 4 keys, but we are just interested in 2 of them:

- `images`: a list either with URLs of images OR `None`s to specify the position of the text.
- `texts`: a list either with text strings OR `None`s to specify the position of the images.

It's important to highlight that `len(images) == len(texts)` and that, for each index, one element and only one is not `None`.
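For illustration, a made-up OBELICS-style sample (the URLs and text are invented) could look like this:

```python
# Made-up OBELICS-style sample for illustration: at every index, exactly one of
# images[i] / texts[i] is not None.
sample = {
    "images": ["https://example.com/dog1.jpg", "https://example.com/dog2.jpg", None],
    "texts": [None, None, "These are two dogs."],
}

assert len(sample["images"]) == len(sample["texts"])
for image, text in zip(sample["images"], sample["texts"]):
    assert (image is None) != (text is None)  # one and only one is set
```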
The `format_obelics` function will transform each sample to a format that can later be fed into the transform block that will prepare the samples to the target type. Each formatted sample will be a dictionary containing 2 keys:

- `images`: `List` of PIL Images with the loaded images.
- `text`: `str` with the text of the sample ready to be tokenized, including the image tokens.

Once formatted, we will process each sample with the transform block. This transform block is composed of the `CLIPPreprocess`, `TikTokenizer` & `VisionCrossAttentionMask` modules.

**CLIPPreprocess**
This module will prepare the list of images to be fed into the CLIP model. The most relevant steps are resizing the image without distortion, dividing the image into tiles, and padding if necessary. Note that it will still produce a list of tensors and NOT a single tensor, as every image can have a different number of tiles. This will be addressed in the collator, where we will pad the image tiles to the largest in the batch. Also, we keep the maximum number of tiles at 4 and the tile size at 448 for pretraining [1], [2].
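As a rough sketch of the tiling step only (assumed shapes, not the actual `CLIPPreprocess` code), an image already resized to one of the supported resolutions can be split into tiles like this:

```python
import torch

# Hypothetical tiling sketch: split an image whose height/width are already
# multiples of tile_size into a stack of (n_tiles, C, tile_size, tile_size).
def split_into_tiles(image: torch.Tensor, tile_size: int = 448) -> torch.Tensor:
    channels, _, _ = image.shape
    tiles = image.unfold(1, tile_size, tile_size).unfold(2, tile_size, tile_size)
    # (C, n_tiles_h, n_tiles_w, tile_size, tile_size) -> (n_tiles, C, tile_size, tile_size)
    return tiles.permute(1, 2, 0, 3, 4).reshape(-1, channels, tile_size, tile_size)

image = torch.rand(3, 448, 896)       # resized without distortion to a supported resolution
print(split_into_tiles(image).shape)  # torch.Size([2, 3, 448, 448])
```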
**TikTokenizer**

I've included a new method in the tokenizer to encode the multimodal text. In short, it just encodes the text adding the special `image_id` token and returns both the `input_ids` & `labels`, masking the `bos`, `eos` & `image_id` tokens.
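A minimal sketch of that masking (hypothetical helper, ignore index, and token ids; the actual TikTokenizer method may differ):

```python
# Hypothetical illustration of the label masking: special tokens are replaced
# by an ignore index so they don't contribute to the loss.
IGNORE_INDEX = -100

def build_labels(input_ids: list[int], special_ids: set[int]) -> list[int]:
    return [IGNORE_INDEX if tok in special_ids else tok for tok in input_ids]

bos_id, eos_id, image_id = 0, 2, 3  # made-up ids for the example
input_ids = [bos_id, image_id, 9673, 9674, 9675, eos_id]
print(build_labels(input_ids, {bos_id, eos_id, image_id}))
# [-100, -100, 9673, 9674, 9675, -100]
```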
**VisionCrossAttentionMask**

This module will create the attention mask for the fused layers. In short, for each TILE we will have 1025 `image_tokens`, and this mask will specify, for each `text_token`, which `image_tokens` it should attend to. We are returning again a list of tensors, as the quantity of `image_tokens` will depend on the number of tiles. Again, we will solve this in the collator.

**Padding & the collator**
As we've previously seen, the outputs of both `CLIPPreprocess` & `VisionCrossAttentionMask` are lists of tensors because of the different number of tiles. Within the same sample we should pad both artifacts to the maximum number of tiles, but the issue arises when we run with `batch_size > 1`, as we will also need to pad the `input_ids` (& `labels`), which is relatively cheap, BUT also the number of images, as the input to the CLIP model will be a tensor of shape [Batch size, Number of images, Number of tiles, Channels, Tile size, Tile size]. Padding to the maximum number of tiles is bad, but in the worst-case scenario you end up increasing the tensor x4 (from 1 tile to the maximum number of tiles = 4). For the number of images, however, it can get really, really big, as there are samples with 30+ images.

To check this phenomenon I've included `scripts/check_padding_mm.py`, which computes the % of padding in a sample. Feel free to give it a try, but it's very easy to get samples where the majority of the input is padding.

That's why I proposed to continue working on a DataLoader & Dataset that can pack multiple samples up to a given `input_ids` length OR number of images in a batch. Packing the `input_ids` is fairly easy, while packing the cross-attention masks will require a bit more effort. Let me know if you would be interested in supporting that feature, or if you just want to include in the repo an example of the multimodal pipeline despite the padding issue described. I also plan on including some unit tests to check the generated samples & the ability to recover from failures.
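To make the blow-up concrete, here is a rough, self-contained sketch (assumed shapes and a zero pad value, not the actual collator) that pads a batch to [Batch size, max images, max tiles, Channels, Tile size, Tile size] and reports how much of the result is padding:

```python
import torch

# Hypothetical padding sketch: each sample is a list of per-image tensors of
# shape (n_tiles, 3, tile_size, tile_size); pad to the batch maxima and measure
# what fraction of the final tensor is padding.
def pad_batch(samples: list[list[torch.Tensor]], tile_size: int = 448):
    max_images = max(len(sample) for sample in samples)
    max_tiles = max(image.shape[0] for sample in samples for image in sample)
    batch = torch.zeros(len(samples), max_images, max_tiles, 3, tile_size, tile_size)
    real_elements = 0
    for b, sample in enumerate(samples):
        for i, image in enumerate(sample):
            batch[b, i, : image.shape[0]] = image
            real_elements += image.numel()
    padding_pct = 100 * (1 - real_elements / batch.numel())
    return batch, padding_pct

# Sample A: 1 image with 1 tile; Sample B: 4 images with 4 tiles each.
sample_a = [torch.rand(1, 3, 448, 448)]
sample_b = [torch.rand(4, 3, 448, 448) for _ in range(4)]
batch, pct = pad_batch([sample_a, sample_b])
print(batch.shape, f"{pct:.0f}% padding")  # torch.Size([2, 4, 4, 3, 448, 448]) 47% padding
```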
Other comments:

- `scripts/check_padding_mm.py` script.
- From `torchtune`: cleaning the unnecessary parts like the code for the inference case. Also, in the `format_obelics` function we could drop the last images in case the sample ends with images and not text, as no token will attend to them and we don't compute the loss with the image tokens (so they are useless).
- `input_ids` / `tokens` across the repo.