Improvements to device support #70
I tried to run the scikit-learn tests with
I'll take a look and investigate that failure further.
Commenting here because it partially relates to the OP and is directly related to @ogrisel's comment #70 (comment). Looking at https://data-apis.org/array-api/latest/design_topics/device_support.html#device-support, the spec is very light in this area: it only specifies local control over the target device for data allocation, plus syntax for device assignment. I agree that it makes sense that strict does not allow mixing of devices, but there are two different options here:
Current behaviour seems a bit conflicting. This works:

```python
from array_api_strict import asarray, Device

asarray([1, 2, 3], device=Device('device1'))
```

but this fails:

```python
asarray([asarray([1, 2], device=Device('device1')),
         asarray([1, 2], device=Device('device1'))])
```

due to this check in `array_api_strict/_array_object.py` (lines 166 to 167 in a8f567a).

This seems to be the only place where such a check happens, and it's probably the cause of the behaviour noticed by @ogrisel, where some tests on non-default devices fail with array-api-strict but not with Torch on a non-default device. I think we should agree on a behaviour and make it consistent. (Noticed while working on #134.)
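For concreteness, the inconsistency can be reproduced with a small stand-alone model (`Device`, `ToyArray`, and `toy_asarray` are hypothetical names, not array-api-strict's real internals): creating an array with an explicit non-default `device=` succeeds, but re-wrapping existing arrays funnels them through a NumPy-style conversion step, and it is that step which rejects non-default devices.

```python
class Device:
    """Toy device handle in the spirit of array_api_strict.Device."""
    def __init__(self, name="CPU_DEVICE"):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Device) and self.name == other.name
    def __repr__(self):
        return f"Device({self.name!r})"

CPU_DEVICE = Device()

class ToyArray:
    """Toy array: just data plus the device it nominally lives on."""
    def __init__(self, data, device=CPU_DEVICE):
        self.data = list(data)
        self.device = device
    def __array__(self):
        # Models the kind of check discussed above: only arrays on the
        # default device may be converted to a plain host buffer.
        if self.device != CPU_DEVICE:
            raise RuntimeError("can not convert non-default device array")
        return self.data

def toy_asarray(obj, device=CPU_DEVICE):
    # Existing ToyArray inputs go through __array__ first, just as
    # np.asarray would trigger it inside a strict asarray.
    flat = [x.__array__() if isinstance(x, ToyArray) else x for x in obj]
    return ToyArray(flat, device=device)

# Allocating directly on a non-default device works:
a = toy_asarray([1, 2, 3], device=Device("device1"))
assert a.device == Device("device1")

# Re-wrapping arrays that already live on that same device fails anyway:
try:
    toy_asarray([ToyArray([1, 2], Device("device1")),
                 ToyArray([1, 2], Device("device1"))])
except RuntimeError as exc:
    print(exc)  # a conversion error, not a device-mixing error
```

Note that the error the user sees talks about conversion, not about mixing devices, which is why it reads as confusing in the nested-`asarray` case.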
First and foremost, working on device support is very welcome, @lucyleeow! Two quick notes:
Agreed, having different devices is useful for testing.
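One way such fake devices pay off in a test suite, sketched here with a toy array class and hypothetical names (not real array-api-strict API), is looping a test over several device names and asserting that the function under test preserves the input's device:

```python
class ToyArray:
    """Minimal stand-in for an array that records its device."""
    def __init__(self, data, device="CPU_DEVICE"):
        self.data = list(data)
        self.device = device

def double(x):
    # Function under test: it should allocate its output on x's device.
    return ToyArray([v * 2 for v in x.data], device=x.device)

# Fake device names exercise the device-handling code paths without
# needing any special hardware:
for device in ["CPU_DEVICE", "device1", "device2"]:
    out = double(ToyArray([1, 2, 3], device=device))
    assert out.device == device
    assert out.data == [2, 4, 6]
```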
is more idiomatically written as
which should not fail.
The idea behind the "multi device support" in array-api-strict is to help projects test the behaviour of using more than one device, without needing actual special hardware (like a CUDA GPU). The decision I made was to allow arrays on the default device (`CPU_DEVICE`) to be converted back to NumPy arrays. The reasoning was that this was already possible (don't introduce breaking changes) and that there are libraries that allow this (so it would be useful to be able to test it). The "non-default devices" don't allow conversion to NumPy arrays, because there are libraries that don't allow that, and since there was no existing behaviour there we could do what we wanted. I think the error you get when you write
Thanks for the background @betatim, I think your decisions all sound reasonable. I'm afraid the context is very boring: while working on #134, I wanted to amend
which I think we now agree should have been written differently. I agree that the error is confusing. Indeed, this was also the error returned by bug #134, though that one is an actual bug to be fixed.
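The conversion rule described above can be modelled in a few stand-alone lines (a toy sketch with hypothetical names, not the real array-api-strict code): conversion back to a host buffer is permitted only on the default device, while non-default devices refuse, like libraries that forbid implicit device-to-host transfers.

```python
CPU_DEVICE = "CPU_DEVICE"  # toy stand-in for the default device

class ToyArray:
    """Toy model: host conversion is only allowed from the default device."""
    def __init__(self, data, device=CPU_DEVICE):
        self.data = list(data)
        self.device = device

    def to_host(self):
        # Default device: conversion allowed (it always worked, and some
        # libraries support it). Non-default devices refuse, mirroring
        # libraries that disallow device-to-host round-trips.
        if self.device != CPU_DEVICE:
            raise RuntimeError(
                f"array on {self.device!r} can not be converted to a host buffer")
        return self.data

assert ToyArray([1, 2]).to_host() == [1, 2]  # default device: fine
```

With this rule, any code path that implicitly converts a non-default-device array to a host buffer surfaces as a hard error, which is exactly what makes the fake devices useful for catching accidental transfers.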
I am now interested in what test failed with
#59 added support for basic devices. Some improvements that could be made: