Provide API to get runtime Tensor buffer, dtype, and shape. #1957
Comments
@ctodTT I believe you were investigating something similar about returning non-float types back. Let's discuss this and the other issues on your plate right now.
We might want to clean up the APIs as support for the callback grows. Other frontends are starting to integrate this into their ecosystems, so if you think there's a case for cleaner API support, definitely post your suggestions.
I think if the frontends are able to retrieve a buffer, shape, dtype, and strides, that should be enough for any frontend to construct a tensor in whatever framework they want. Strides are arguably redundant since ttnn tensors don't have strides and are always contiguous (at least the order of the logical data is; tile layout is technically not contiguous, but we convert to row major anyway).
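For illustration, here is a minimal, runnable sketch (with dummy data, not the actual runtime API) of how a frontend could rebuild a torch tensor from just a buffer, dtype, and shape:

```python
import torch

# Dummy zero-filled bytes standing in for whatever the runtime hands back.
raw = bytearray(2 * 3 * 4)  # 24 bytes: enough for a 2x3 float32 tensor
dtype = torch.float32
shape = (2, 3)

# frombuffer gives a flat 1-D view over the bytes; reshape restores the shape.
# No strides are needed because the logical data is contiguous row-major.
t = torch.frombuffer(raw, dtype=dtype).reshape(shape)
print(t.shape, t.dtype)  # torch.Size([2, 3]) torch.float32
```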
This change implements an API on runtime tensors to expose the metadata and contents of the underlying TTNN tensor. This functionality is pybound as well, allowing for easy casting to torch tensors. Please see `runtime/test/python/ttnn/test_runtime_api.py` for an example. NOTE: This is not the most efficient implementation possible, as there is effectively a double copy to get the data into a pybindable row-major format. There is probably some trickery that can be done later to avoid this, but I would like to get this functionality out ASAP and avoid premature optimization. Closes #1957
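As a rough sketch of the kind of usage this enables from an intermediate callback (the accessor names here are assumptions for illustration, not the confirmed binding names; the real, working example is the test file referenced above):

```python
import torch

# Hypothetical sketch of consuming the pybound runtime tensor in a callback.
# get_data_buffer and get_shape are placeholder names, not the actual API.
def on_op_output(runtime_tensor):
    buffer = runtime_tensor.get_data_buffer()  # row-major bytes (assumed)
    shape = runtime_tensor.get_shape()         # e.g. [2, 3] (assumed)
    # Assume float32 output for brevity; real code would map the runtime
    # dtype to the matching torch dtype.
    return torch.frombuffer(bytearray(buffer), dtype=torch.float32).reshape(shape)
```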
For the frontends to properly reconstruct a tensor during an intermediate callback, the data type, shape, and data buffer must be provided. Currently, we can only get the tensor data as a `std::vector<float>` or as a `tt::runtime::Tensor`, which itself holds a void pointer to a `ttnn::Tensor` that can't be used unless the frontend also depends on tt-metal. I believe that if the `tt::runtime::Tensor` held a `TensorDesc` for itself, or exposed its own `shape` and `dtype`, then calling `getOpOutputTensor` should suffice.
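Since a frontend reconstructing tensors this way also has to translate the runtime's data type into its own framework's dtype (including the non-float types mentioned above), here is a minimal sketch of such a mapping; the runtime dtype names on the left are assumptions, not the actual tt-runtime enum:

```python
import torch

# Illustrative mapping from assumed runtime dtype names to torch dtypes.
# The keys are placeholders; the real tt-runtime DataType names may differ.
_RUNTIME_TO_TORCH_DTYPE = {
    "Float32": torch.float32,
    "BFloat16": torch.bfloat16,
    "UInt32": torch.int64,  # widened: older torch has no native uint32
    "UInt16": torch.int32,  # widened for the same reason
}

def to_torch_dtype(runtime_dtype_name: str) -> torch.dtype:
    # Fail loudly on anything we don't know how to map.
    if runtime_dtype_name not in _RUNTIME_TO_TORCH_DTYPE:
        raise ValueError(f"Unsupported runtime dtype: {runtime_dtype_name}")
    return _RUNTIME_TO_TORCH_DTYPE[runtime_dtype_name]
```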