
Implementation of convolve #2205

Open: wants to merge 10 commits into master
Conversation

AlexanderKalistratov (Collaborator)

Add an implementation of dpnp.convolve. The implementation is largely based on the functionality already developed for dpnp.correlate.
As in scipy.signal.convolve, a method keyword is introduced; unlike scipy.signal.convolve, however, dpnp.convolve works only with 1-d arrays.
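A minimal usage sketch (not taken from the PR itself), assuming the method keyword accepts the same values as scipy.signal.convolve ("auto", "direct", "fft"):

```python
import dpnp

a = dpnp.array([1.0, 2.0, 3.0, 4.0, 5.0])
v = dpnp.array([0.0, 1.0, 0.5])

# "method" mirrors scipy.signal.convolve; only 1-d inputs are supported
r = dpnp.convolve(a, v, method="auto")
print(r)
```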

  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • Have you checked performance impact of proposed changes?
  • If this PR is a work in progress, are you filing the PR as a draft?

Contributor

View rendered docs @ https://intelpython.github.io/dpnp/pull/2205/index.html

@coveralls (Collaborator) commented Mar 14, 2025

Coverage Status

coverage: 72.179% (+0.03%) from 72.152% when pulling 74ab8f3 on convolve into 04bfac7 on master.

github-actions bot (Contributor) commented Mar 14, 2025

Array API standard conformance tests for dpnp=0.18.0dev0=py312he4f9c94_53 ran successfully.
Passed: 1005
Failed: 0
Skipped: 45

@AlexanderKalistratov (Collaborator, Author)

@antonwolfy please review

)

if a.ndim == 0:
    a = dpnp.reshape(a, (1,))
Collaborator

not covered with tests

if a.ndim == 0:
    a = dpnp.reshape(a, (1,))
if v.ndim == 0:
    v = dpnp.reshape(v, (1,))
Collaborator

not covered with tests
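A test roughly along these lines could cover the 0-d branches; this is only a sketch, assuming dpnp.convolve promotes 0-d inputs to 1-d the same way numpy.convolve does:

```python
import numpy
from numpy.testing import assert_array_equal

import dpnp


def test_convolve_0d_inputs():
    # hypothetical test: both operands are 0-d and should be promoted to 1-d
    a = dpnp.asarray(3)
    v = dpnp.asarray(2)
    expected = numpy.convolve(numpy.asarray(3), numpy.asarray(2))
    result = dpnp.convolve(a, v)
    assert_array_equal(dpnp.asnumpy(result), expected)
```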

Comment on lines 167 to 176
if dtype == dpnp.bool:
    an = numpy.random.rand(a_size) > 0.9
    vn = numpy.random.rand(v_size) > 0.9
else:
    an = (100 * numpy.random.rand(a_size)).astype(dtype)
    vn = (100 * numpy.random.rand(v_size)).astype(dtype)

if dpnp.issubdtype(dtype, dpnp.complexfloating):
    an = an + 1j * (100 * numpy.random.rand(a_size)).astype(dtype)
    vn = vn + 1j * (100 * numpy.random.rand(v_size)).astype(dtype)
Collaborator

Can we use generate_random_numpy_array from helper here?
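For illustration only, the data generation above might then collapse to something like the following; the import path and the signature of generate_random_numpy_array are assumptions here, not verified against the helper module:

```python
# Hypothetical replacement, assuming generate_random_numpy_array(shape, dtype)
# handles bool, integer, float and complex dtypes internally.
from dpnp.tests.helper import generate_random_numpy_array

an = generate_random_numpy_array(a_size, dtype)
vn = generate_random_numpy_array(v_size, dtype)
```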

dpnp.issubdtype(dtype, dpnp.integer) or dtype == dpnp.bool
):
    # For 'direct' and 'auto' methods, we expect exact results for integer types
    assert_array_equal(result, expected)
Collaborator

Why can we expect an exact result for method="auto", which might fall back on "fft", but cannot expect it for "fft" itself?

Comment on lines 193 to 194
if dpnp.issubdtype(rdtype, dpnp.integer):
    rdtype = dpnp.default_float_type(ad.device)
Collaborator

Why do we need to change the output dtypes (and also below at lines 202 and 223)? If the numpy and dpnp outputs have different dtypes for a known reason, couldn't we use check_type or check_only_type_kind of assert_dtype_allclose to handle it?
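For instance, a check along these lines might avoid casting rdtype by hand; the keyword name check_only_type_kind is taken from this comment, and the import path is an assumption:

```python
from dpnp.tests.helper import assert_dtype_allclose

# Hypothetical: compare values and only the dtype "kind", so a mismatch such as
# int32 vs int64 (or float32 vs float64) between dpnp and numpy is tolerated.
assert_dtype_allclose(result, expected, check_only_type_kind=True)
```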

# For these outliers, the relative error can be significant.
# We can tolerate a few such outliers.
max_outliers = 8 if expected.size > 1 else 0
if invalid.sum() > max_outliers:
Collaborator

I do not get the logic here. First, it seems the code never goes inside this if condition, which means we do not compare the outputs at all. Second, as I understand it, invalid is True when the difference between result and expected is large (an outlier), so this condition means the outputs are compared only when there are more than 8 outliers. Shouldn't we skip the comparison when there is an outlier (or adjust the outlier values manually and then compare)?
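One possible restructuring along the lines suggested above (a sketch only; result, expected and rtol stand in for the arrays and tolerance used in the test):

```python
import numpy
from numpy.testing import assert_allclose

import dpnp

# Flag elements with a large relative error ("outliers"), require that there
# are at most max_outliers of them, and compare only the remaining elements.
result_np = dpnp.asnumpy(result)
invalid = numpy.abs(result_np - expected) > rtol * numpy.abs(expected)
max_outliers = 8 if expected.size > 1 else 0
assert invalid.sum() <= max_outliers
assert_allclose(result_np[~invalid], expected[~invalid], rtol=rtol)
```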

Labels: None yet
Projects: None yet
4 participants