Commit 085b0e7

Update pre-commit hooks (#10208)
* Update pre-commit hooks

  updates:
  - [github.com/astral-sh/ruff-pre-commit: v0.9.9 → v0.11.4](astral-sh/ruff-pre-commit@v0.9.9...v0.11.4)
  - [github.com/abravalheri/validate-pyproject: v0.23 → v0.24.1](abravalheri/validate-pyproject@v0.23...v0.24.1)
  - [github.com/crate-ci/typos: dictgen-v0.3.1 → v1](crate-ci/typos@dictgen-v0.3.1...v1)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 08fa7b9 commit 085b0e7

26 files changed: +63 −63 lines changed

.pre-commit-config.yaml (+3 −3)

@@ -25,7 +25,7 @@ repos:
       - id: text-unicode-replacement-char
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
-    rev: v0.9.9
+    rev: v0.11.4
     hooks:
       - id: ruff-format
       - id: ruff
@@ -69,12 +69,12 @@ repos:
       - id: taplo-format
         args: ["--option", "array_auto_collapse=false"]
   - repo: https://github.com/abravalheri/validate-pyproject
-    rev: v0.23
+    rev: v0.24.1
     hooks:
       - id: validate-pyproject
         additional_dependencies: ["validate-pyproject-schema-store[all]"]
   - repo: https://github.com/crate-ci/typos
-    rev: dictgen-v0.3.1
+    rev: v1
     hooks:
       - id: typos
         # https://github.com/crate-ci/typos/issues/347

design_notes/flexible_indexes_notes.md (+1 −1)

@@ -166,7 +166,7 @@ Besides `pandas.Index`, other indexes currently supported in Xarray like `CFTime
 
 Like for the indexes, explicit coordinate creation should be preferred over implicit coordinate creation. However, there may be some situations where we would like to keep creating coordinates implicitly for backwards compatibility.
 
-For example, it is currently possible to pass a `pandas.MulitIndex` object as a coordinate to the Dataset/DataArray constructor:
+For example, it is currently possible to pass a `pandas.MultiIndex` object as a coordinate to the Dataset/DataArray constructor:
 
 ```python
 >>> midx = pd.MultiIndex.from_arrays([['a', 'b'], [0, 1]], names=['lvl1', 'lvl2'])
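The typo fix above corrects a reference to `pandas.MultiIndex`. As a minimal sketch (pandas only, not touching the xarray constructor behavior the design note discusses), this is the object the excerpt builds and the level structure it carries:

```python
import pandas as pd

# The same MultiIndex constructed in the design-note excerpt:
# two levels, "lvl1" holding labels and "lvl2" holding integers.
midx = pd.MultiIndex.from_arrays([["a", "b"], [0, 1]], names=["lvl1", "lvl2"])

# Each entry of a MultiIndex is a tuple pairing one value per level.
assert midx[0] == ("a", 0)
assert list(midx.names) == ["lvl1", "lvl2"]

# Individual levels are recoverable, which is what lets xarray expand a
# MultiIndex coordinate into one coordinate variable per level.
assert list(midx.get_level_values("lvl1")) == ["a", "b"]
```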

doc/getting-started-guide/quick-overview.rst (+1 −1)

@@ -128,7 +128,7 @@ Operations also align based on index labels:
 
     data[:-1] - data[:1]
 
-For more, see :ref:`comput`.
+For more, see :ref:`compute`.
 
 GroupBy
 -------

doc/user-guide/computation.rst (+5 −5)

@@ -1,6 +1,6 @@
 .. currentmodule:: xarray
 
-.. _comput:
+.. _compute:
 
 ###########
 Computation
@@ -236,7 +236,7 @@ These operations automatically skip missing values, like in pandas:
 If desired, you can disable this behavior by invoking the aggregation method
 with ``skipna=False``.
 
-.. _comput.rolling:
+.. _compute.rolling:
 
 Rolling window operations
 =========================
@@ -308,7 +308,7 @@ We can also manually iterate through ``Rolling`` objects:
     # arr_window is a view of x
     ...
 
-.. _comput.rolling_exp:
+.. _compute.rolling_exp:
 
 While ``rolling`` provides a simple moving average, ``DataArray`` also supports
 an exponential moving average with :py:meth:`~xarray.DataArray.rolling_exp`.
@@ -354,7 +354,7 @@ You can also use ``construct`` to compute a weighted rolling sum:
 To avoid this, use ``skipna=False`` as the above example.
 
 
-.. _comput.weighted:
+.. _compute.weighted:
 
 Weighted array reductions
 =========================
@@ -823,7 +823,7 @@ Arithmetic between two datasets matches data variables of the same name:
 Similarly to index based alignment, the result has the intersection of all
 matching data variables.
 
-.. _comput.wrapping-custom:
+.. _compute.wrapping-custom:
 
 Wrapping custom computation
 ===========================

doc/user-guide/dask.rst (+1 −1)

@@ -282,7 +282,7 @@ we use to calculate `Spearman's rank-correlation coefficient <https://en.wikiped
 
 The only aspect of this example that is different from standard usage of
 ``apply_ufunc()`` is that we needed to supply the ``output_dtypes`` arguments.
-(Read up on :ref:`comput.wrapping-custom` for an explanation of the
+(Read up on :ref:`compute.wrapping-custom` for an explanation of the
 "core dimensions" listed in ``input_core_dims``.)
 
 Our new ``spearman_correlation()`` function achieves near linear speedup

doc/user-guide/data-structures.rst (+1 −1)

@@ -880,7 +880,7 @@ them into dataset objects:
 
 The merge method is particularly interesting, because it implements the same
 logic used for merging coordinates in arithmetic operations
-(see :ref:`comput`):
+(see :ref:`compute`):
 
 .. ipython:: python
 

doc/user-guide/indexing.rst (+1 −1)

@@ -276,7 +276,7 @@ This is particularly useful for ragged indexing of multi-dimensional data,
 e.g., to apply a 2D mask to an image. Note that ``where`` follows all the
 usual xarray broadcasting and alignment rules for binary operations (e.g.,
 ``+``) between the object being indexed and the condition, as described in
-:ref:`comput`:
+:ref:`compute`:
 
 .. ipython:: python
 

doc/whats-new.rst (+3 −3)

@@ -4281,7 +4281,7 @@ New Features
 ~~~~~~~~~~~~
 
 - Weighted array reductions are now supported via the new :py:meth:`DataArray.weighted`
-  and :py:meth:`Dataset.weighted` methods. See :ref:`comput.weighted`. (:issue:`422`, :pull:`2922`).
+  and :py:meth:`Dataset.weighted` methods. See :ref:`compute.weighted`. (:issue:`422`, :pull:`2922`).
   By `Mathias Hauser <https://github.com/mathause>`_.
 - The new jupyter notebook repr (``Dataset._repr_html_`` and
   ``DataArray._repr_html_``) (introduced in 0.14.1) is now on by default. To
@@ -6412,7 +6412,7 @@ Enhancements
 - New helper function :py:func:`~xarray.apply_ufunc` for wrapping functions
   written to work on NumPy arrays to support labels on xarray objects
   (:issue:`770`). ``apply_ufunc`` also support automatic parallelization for
-  many functions with dask. See :ref:`comput.wrapping-custom` and
+  many functions with dask. See :ref:`compute.wrapping-custom` and
   :ref:`dask.automatic-parallelization` for details.
   By `Stephan Hoyer <https://github.com/shoyer>`_.
 
@@ -7434,7 +7434,7 @@ Enhancements
   * x        (x) int64 0 1 2
   * y        (y) int64 0 1 2 3 4
 
-  See :ref:`comput.rolling` for more details. By
+  See :ref:`compute.rolling` for more details. By
   `Joe Hamman <https://github.com/jhamman>`_.
 
 Bug fixes

xarray/backends/zarr.py (+1 −1)

@@ -1290,7 +1290,7 @@ def _validate_and_autodetect_region(self, ds: Dataset) -> Dataset:
         region = self._write_region
 
         if region == "auto":
-            region = {dim: "auto" for dim in ds.dims}
+            region = dict.fromkeys(ds.dims, "auto")
 
         if not isinstance(region, dict):
             raise TypeError(f"``region`` must be a dict, got {type(region)}")
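This hunk is the first of many identical auto-fixes in this commit (presumably applied by the updated ruff hook): a dict comprehension that maps every key to the same constant value is rewritten as `dict.fromkeys`. A minimal sketch of the equivalence, with illustrative names (`dims` here stands in for the various key iterables in the real call sites):

```python
# A dict comprehension with a constant value and dict.fromkeys build
# equal dictionaries; fromkeys is shorter and avoids a per-key loop.
dims = ("x", "y", "time")

by_comprehension = {dim: "auto" for dim in dims}
by_fromkeys = dict.fromkeys(dims, "auto")
assert by_comprehension == by_fromkeys

# Caveat: dict.fromkeys stores the *same* object under every key, so it
# is only a drop-in replacement when the shared value is immutable
# (strings, ints, tuples) or intentionally shared -- which holds for
# every call site changed in this commit.
shared = dict.fromkeys(dims, [])
shared["x"].append(1)
assert shared["y"] == [1]  # all keys reference the one list
```

The same caveat is why the rewrite is safe for `bounds_defaults = dict.fromkeys(params, (-np.inf, np.inf))` below: the shared value is a tuple, which cannot be mutated in place.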

xarray/computation/fit.py (+2 −2)

@@ -80,8 +80,8 @@ def _initialize_feasible(lb, ub):
         )
         return p0
 
-    param_defaults = {p: 1 for p in params}
-    bounds_defaults = {p: (-np.inf, np.inf) for p in params}
+    param_defaults = dict.fromkeys(params, 1)
+    bounds_defaults = dict.fromkeys(params, (-np.inf, np.inf))
     for p in params:
         if p in func_args and func_args[p].default is not func_args[p].empty:
             param_defaults[p] = func_args[p].default

xarray/computation/rolling.py (+1 −1)

@@ -1087,7 +1087,7 @@ def __init__(
         if utils.is_dict_like(coord_func):
             coord_func_map = coord_func
         else:
-            coord_func_map = {d: coord_func for d in self.obj.dims}
+            coord_func_map = dict.fromkeys(self.obj.dims, coord_func)
         for c in self.obj.coords:
             if c not in coord_func_map:
                 coord_func_map[c] = duck_array_ops.mean  # type: ignore[index]

xarray/core/common.py (+3 −3)

@@ -457,7 +457,7 @@ def squeeze(
         numpy.squeeze
         """
         dims = get_squeeze_dims(self, dim, axis)
-        return self.isel(drop=drop, **{d: 0 for d in dims})
+        return self.isel(drop=drop, **dict.fromkeys(dims, 0))
 
     def clip(
         self,
@@ -1701,11 +1701,11 @@ def full_like(
 
     if isinstance(other, Dataset):
         if not isinstance(fill_value, dict):
-            fill_value = {k: fill_value for k in other.data_vars.keys()}
+            fill_value = dict.fromkeys(other.data_vars.keys(), fill_value)
 
         dtype_: Mapping[Any, DTypeLikeSave]
         if not isinstance(dtype, Mapping):
-            dtype_ = {k: dtype for k in other.data_vars.keys()}
+            dtype_ = dict.fromkeys(other.data_vars.keys(), dtype)
         else:
             dtype_ = dtype
 

xarray/core/coordinates.py (+5 −5)

@@ -309,7 +309,7 @@ def __init__(
                 var = as_variable(data, name=name, auto_convert=False)
                 if var.dims == (name,) and indexes is None:
                     index, index_vars = create_default_index_implicit(var, list(coords))
-                    default_indexes.update({k: index for k in index_vars})
+                    default_indexes.update(dict.fromkeys(index_vars, index))
                     variables.update(index_vars)
                 else:
                     variables[name] = var
@@ -384,7 +384,7 @@ def from_xindex(cls, index: Index) -> Self:
                 f"create any coordinate.\n{index!r}"
             )
 
-        indexes = {name: index for name in variables}
+        indexes = dict.fromkeys(variables, index)
 
         return cls(coords=variables, indexes=indexes)
 
@@ -412,7 +412,7 @@ def from_pandas_multiindex(cls, midx: pd.MultiIndex, dim: Hashable) -> Self:
         xr_idx = PandasMultiIndex(midx, dim)
 
         variables = xr_idx.create_variables()
-        indexes = {k: xr_idx for k in variables}
+        indexes = dict.fromkeys(variables, xr_idx)
 
         return cls(coords=variables, indexes=indexes)
 
@@ -1134,7 +1134,7 @@ def create_coords_with_default_indexes(
                 # pandas multi-index edge cases.
                 variable = variable.to_index_variable()
                 idx, idx_vars = create_default_index_implicit(variable, all_variables)
-                indexes.update({k: idx for k in idx_vars})
+                indexes.update(dict.fromkeys(idx_vars, idx))
                 variables.update(idx_vars)
                 all_variables.update(idx_vars)
             else:
@@ -1159,7 +1159,7 @@ def _coordinates_from_variable(variable: Variable) -> Coordinates:
 
     (name,) = variable.dims
     new_index, index_vars = create_default_index_implicit(variable)
-    indexes = {k: new_index for k in index_vars}
+    indexes = dict.fromkeys(index_vars, new_index)
     new_vars = new_index.create_variables()
    new_vars[name].attrs = variable.attrs
     return Coordinates(new_vars, indexes)

xarray/core/dataarray.py (+1 −1)

@@ -7078,7 +7078,7 @@ def weighted(self, weights: DataArray) -> DataArrayWeighted:
         --------
         :func:`Dataset.weighted <Dataset.weighted>`
 
-        :ref:`comput.weighted`
+        :ref:`compute.weighted`
             User guide on weighted array reduction using :py:func:`~xarray.DataArray.weighted`
 
         :doc:`xarray-tutorial:fundamentals/03.4_weighted`

xarray/core/dataset.py (+13 −13)

@@ -1122,7 +1122,7 @@ def _copy_listed(self, names: Iterable[Hashable]) -> Self:
                 coord_names.add(var_name)
                 if (var_name,) == var.dims:
                     index, index_vars = create_default_index_implicit(var, names)
-                    indexes.update({k: index for k in index_vars})
+                    indexes.update(dict.fromkeys(index_vars, index))
                     variables.update(index_vars)
                     coord_names.update(index_vars)
 
@@ -3012,7 +3012,7 @@ def head(
         if not isinstance(indexers, int) and not is_dict_like(indexers):
             raise TypeError("indexers must be either dict-like or a single integer")
         if isinstance(indexers, int):
-            indexers = {dim: indexers for dim in self.dims}
+            indexers = dict.fromkeys(self.dims, indexers)
         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "head")
         for k, v in indexers.items():
             if not isinstance(v, int):
@@ -3100,7 +3100,7 @@ def tail(
         if not isinstance(indexers, int) and not is_dict_like(indexers):
             raise TypeError("indexers must be either dict-like or a single integer")
         if isinstance(indexers, int):
-            indexers = {dim: indexers for dim in self.dims}
+            indexers = dict.fromkeys(self.dims, indexers)
         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "tail")
         for k, v in indexers.items():
             if not isinstance(v, int):
@@ -3186,7 +3186,7 @@ def thin(
         ):
             raise TypeError("indexers must be either dict-like or a single integer")
         if isinstance(indexers, int):
-            indexers = {dim: indexers for dim in self.dims}
+            indexers = dict.fromkeys(self.dims, indexers)
         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "thin")
         for k, v in indexers.items():
             if not isinstance(v, int):
@@ -4029,7 +4029,7 @@ def _rename_indexes(
         for index, coord_names in self.xindexes.group_by_index():
             new_index = index.rename(name_dict, dims_dict)
             new_coord_names = [name_dict.get(k, k) for k in coord_names]
-            indexes.update({k: new_index for k in new_coord_names})
+            indexes.update(dict.fromkeys(new_coord_names, new_index))
             new_index_vars = new_index.create_variables(
                 {
                     new: self._variables[old]
@@ -4315,7 +4315,7 @@ def swap_dims(
                 variables[current_name] = var
             else:
                 index, index_vars = create_default_index_implicit(var)
-                indexes.update({name: index for name in index_vars})
+                indexes.update(dict.fromkeys(index_vars, index))
                 variables.update(index_vars)
                 coord_names.update(index_vars)
         else:
@@ -4474,7 +4474,7 @@ def expand_dims(
         elif isinstance(dim, Sequence):
             if len(dim) != len(set(dim)):
                 raise ValueError("dims should not contain duplicate values.")
-            dim = {d: 1 for d in dim}
+            dim = dict.fromkeys(dim, 1)
 
         dim = either_dict_or_kwargs(dim, dim_kwargs, "expand_dims")
         assert isinstance(dim, MutableMapping)
@@ -4700,7 +4700,7 @@ def set_index(
                 for n in idx.index.names:
                     replace_dims[n] = dim
 
-            new_indexes.update({k: idx for k in idx_vars})
+            new_indexes.update(dict.fromkeys(idx_vars, idx))
             new_variables.update(idx_vars)
 
         # re-add deindexed coordinates (convert to base variables)
@@ -4816,7 +4816,7 @@ def drop_or_convert(var_names):
                 # instead replace it by a new (multi-)index with dropped level(s)
                 idx = index.keep_levels(keep_level_vars)
                 idx_vars = idx.create_variables(keep_level_vars)
-                new_indexes.update({k: idx for k in idx_vars})
+                new_indexes.update(dict.fromkeys(idx_vars, idx))
                 new_variables.update(idx_vars)
                 if not isinstance(idx, PandasMultiIndex):
                     # multi-index reduced to single index
@@ -4996,7 +4996,7 @@ def reorder_levels(
             level_vars = {k: self._variables[k] for k in order}
             idx = index.reorder_levels(level_vars)
             idx_vars = idx.create_variables(level_vars)
-            new_indexes.update({k: idx for k in idx_vars})
+            new_indexes.update(dict.fromkeys(idx_vars, idx))
             new_variables.update(idx_vars)
 
         indexes = {k: v for k, v in self._indexes.items() if k not in new_indexes}
@@ -5104,7 +5104,7 @@ def _stack_once(
         if len(product_vars) == len(dims):
             idx = index_cls.stack(product_vars, new_dim)
             new_indexes[new_dim] = idx
-            new_indexes.update({k: idx for k in product_vars})
+            new_indexes.update(dict.fromkeys(product_vars, idx))
             idx_vars = idx.create_variables(product_vars)
             # keep consistent multi-index coordinate order
             for k in idx_vars:
@@ -5351,7 +5351,7 @@ def _unstack_full_reindex(
         # TODO: we may depreciate implicit re-indexing with a pandas.MultiIndex
         xr_full_idx = PandasMultiIndex(full_idx, dim)
         indexers = Indexes(
-            {k: xr_full_idx for k in index_vars},
+            dict.fromkeys(index_vars, xr_full_idx),
             xr_full_idx.create_variables(index_vars),
         )
         obj = self._reindex(
@@ -10052,7 +10052,7 @@ def weighted(self, weights: DataArray) -> DatasetWeighted:
         --------
         :func:`DataArray.weighted <DataArray.weighted>`
 
-        :ref:`comput.weighted`
+        :ref:`compute.weighted`
             User guide on weighted array reduction using :py:func:`~xarray.Dataset.weighted`
 
         :doc:`xarray-tutorial:fundamentals/03.4_weighted`
