[compiler] Support dynamic shape for reshape #13927
I think this rule is simple and doesn't need discussion, but I'll leave it here to ensure that we have the same understanding.
@llFreetimell suggests following the TensorFlow rules.
I think this is also a good idea :) BTW, other current shape inference implementations use
Due to my wrong comments, some misunderstanding has propagated 😅
For me, suggestions 1 and 2 are both fine :)
I have a small question. If my understanding below is correct, does this modification involve changing the base class? The current implementation always adds
The comment says it's because of the current requirement of the IR (CircleReshape).
I think it's because the current
Ah, I misunderstood the TensorFlow code 😢 Sorry for the confusion
Thank you for the detailed guide :) It looks good to me.
Not at all :)
I missed that this code exists... thanks..!
If I am misunderstanding something, please let me know :(
I think so. Hmm..
I didn't know that. Thank you for figuring it out 👍 If so, how about this? It's a slightly modified version of @llFreetimell's suggestion
@zetwhite Looks nice :D I agree with your final suggestion!
@jongwonyang Could you make a draft PR based on this discussion? +) I'll take a look at this recipe (#13866 (comment)) and try to fix it.
Of course! I'll create the draft PR ASAP. The final suggestion looks good to me :) Thank you for addressing this issue!
Thank you for this too :)
I have created the draft PR #13935.
No. This is not invalid. We had a model like this.
Thanks for pointing that out. But the current
Could you give me a more detailed explanation of how to utilize
Can you please explain how you got this from the code?
I guess it's from luci/import/src/Nodes/CircleReshape.cpp:

```cpp
auto *shape_node = (inputs.size() == 2) ? inputs.at(1) : nullptr;
if (shape_node == nullptr)
{
  const auto *options = op.builtin_options.AsReshapeOptions();
  if (options != nullptr)
  {
    // shape_node X, attribute O -> create input node based on attribute
    shape_node = create_shape_node(options->new_shape, graph);
  }
  else
  {
    // shape_node X, attribute X -> create CircleOutputDummy
    // If CircleOutputDummy exist, it means that both shape_node and attribute are NOT exist
    shape_node = graph->nodes()->create<CircleOutputDummy>();
    shape_node->dtype(loco::DataType::S32);
    shape_node->rank(0);
    shape_node->name("Reshape/dummy");
  }
}
```
+) Ah, maybe this situation can happen. Above, if a circle file has a
Isn't
BTW, I don't know that
I think the current
Not sure how to define valid/invalid models now, but we can have
Not sure. As the current
Thanks for the information. Since there is a recipe
Can you give the recipe name?
I was considering this recipe.
From the history (#1519), we had a real-world model with both X (i.e., neither a shape input nor a new_shape attribute).
We can remove 003 and add 004 for non-constant input, like a simple
I don't know the details and history of the frontend compiler. But you may refer to TFLite's behavior:

```cpp
TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  TF_LITE_ENSURE(context, NumInputs(node) == 1 || NumInputs(node) == 2);
  TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);
  OpData* op_data = reinterpret_cast<OpData*>(node->user_data);
  op_data->output_ptr = nullptr;
  // Always postpone sizing string tensors, even if we could in principle
  // calculate their shapes now. String tensors don't benefit from having their
  // shapes precalculated because the actual memory can only be allocated after
  // we know all the content.
  TfLiteTensor* output;
  TF_LITE_ENSURE_OK(context,
                    GetOutputSafe(context, node, kOutputTensor, &output));
  if (output->type != kTfLiteString) {
    const TfLiteTensor* input = GetInput(context, node, kInputTensor);
    const TfLiteTensor* shape = GetInput(context, node, kShapeTensor);
    if (NumInputs(node) == 1 || IsConstantOrPersistentTensor(shape)) {
      if (IsConstantOrPersistentTensor(input)) {
        SetTensorToPersistentRo(output);
        TF_LITE_ENSURE_OK(context, ResizeOutput(context, node));
        op_data->output_ptr = output->data.data;
        memcpy(output->data.data, input->data.data, input->bytes);
        return kTfLiteOk;
      } else {
        TF_LITE_ENSURE_OK(context, ResizeOutput(context, node));
      }
    } else {
      SetTensorToDynamic(output);
    }
  }
  return kTfLiteOk;
}
```
Sorry for the late response. I completely misunderstood the role of
Now, I have a better understanding of the implementation, taking into account the existence of
I'll be adding more details to the draft PR soon. Thanks again for your participation and the wonderful discussions!
About removing 003, do you mean completely deleting 003 or just excluding/commenting out 003?
Just commenting it out with some explanatory comments will do.
The following cases (from #13927 (comment)) have been implemented:
The following case is not implemented, since the problem was resolved by removing the Reshape_003 recipe.
The remaining case could be introduced when there is any issue with
For now, dynamic shape inference of reshape works well! Thanks for all of your support =)
While reviewing the above PR, I think we need to reach some consensus about the reshape shape inference rule. So, I made this issue to discuss the details.
I'd like to suggest the 2 things below. If you have some free time, please take a look and leave any kind of comments!
/cc @llFreetimell @jongwonyang @shs-park @seanshpark