PDEP-14: Dedicated string data type for pandas 3.0 #58551
I see no reason not to use #57073 as the discussion issue as any further discussion will be here and #57073 can now focus on whether to reject PDEP-10 and what to do about the planned improvements to other dtypes.
My assumption is that approval of this PDEP should not, in itself, be a justification to overturn the PDEP-10 decision even though they are very much related and the implementation of the fallback option is only applicable if PDEP-10 is formally rejected.
Should you allow the possibility of a NumPy 2 improved type for pandas 3? With a hierarchy arrow -> np 2 -> np object?
This proposal does not preclude any further improvements for the numpy-based string dtype using numpy 2.0. A few lines below I explicitly mention it as a future improvement and in the "Object-dtype "fallback" implementation" section as well.
I just don't want to explicitly commit to anything for pandas 3.0 related to that, given it is hard to judge right now how well it will work / how much work it is to get it ready (not only our own implementation, but also support in the rest of the ecosystem). If it is ready by 3.0, then we can evaluate that separately, but this proposal doesn't stand or fall with it.
Regardless of whether to also use numpy 2.0, we have to agree on 1) making a "string" dtype the default for 3.0, 2) the missing value behaviour to use for this dtype, and 3) whether to provide an alternative for when PyArrow is not installed (in which case we need the object-dtype version anyway, since we also can't require numpy 2.0). I would like the proposal to focus on those aspects.
Is it worth mentioning why this has been objected to? As far as I am aware virtually all objections are due to the installation size effect, and not performance or compatibility.
I can certainly mention something, but would prefer to keep that brief to focus here on the strings context and not trigger discussion here about the merits of those objections.
(for example, it's not only installation size, but also the difficulty to install from source in case there are no wheels)
Added "(mostly around installation complexity and size)"
I don't think NumPy 2.0 will reduce the need to make pyarrow a dependency for strings; as far as I am aware it is not natively returned by any I/O operation and it has a completely different string architecture than pyarrow, so there is no zero-copy capability. Those seem like they either will require a large amount of string copying or a hefty amount of updates to make it natively work with our I/O, as well as with the larger Arrow ecosystem. That's a huge amount of things to gloss over
I think it can do that if your motivation for wanting pyarrow is the better performance compared to object-dtype. In that case, numpy 2.0's StringDType can give you a part of the speedup, without requiring pyarrow.
The discussion in #57073 also started from that point of view, mentioning numpy 2.0 as an alternative to requiring pyarrow, so based on that my feeling is that what I wrote here is correct (or at least seen as such by some people).
But you are completely right that there are a lot of things that would need to be implemented to make it fully usable for us. That's also the reason this PDEP does not say to use numpy 2.0, but defers that as a possible future enhancement, to discuss later. And you are also right that it has drawbacks compared to an Arrow-based solution (using the Arrow memory layout, but not necessarily using pyarrow the package), another reason for me personally to again defer that to a separate discussion.
I just wanted to mention it for the complete context of the string dtype history and discussion. Now, I already mention its existence in the previous paragraph, so could keep it shorter here.
(and if you have any concrete suggestions to word this better, I am all ears!)
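To make the NumPy 2.0 option being discussed here a bit more concrete, a minimal sketch of its native string dtype. This is hedged: `np.dtypes.StringDType` and the `np.strings` kernels only exist on NumPy >= 2.0, so the code guards for older versions and falls back to a fixed-width unicode array.

```python
import numpy as np

# NumPy 2.0 ships a native variable-width string dtype (np.dtypes.StringDType)
# plus vectorized kernels in np.strings; on older NumPy we fall back to a
# fixed-width unicode array, mirroring the "fallback" idea in this thread.
if hasattr(np, "dtypes") and hasattr(np.dtypes, "StringDType"):
    arr = np.array(["pandas", "strings"], dtype=np.dtypes.StringDType())
    upper = np.strings.upper(arr)  # runs natively on the new dtype
else:
    arr = np.array(["pandas", "strings"])  # fixed-width unicode fallback
    upper = np.char.upper(arr)
print(list(upper))
```

Either branch produces the same result; only the underlying memory layout (and hence performance and ecosystem interoperability) differs, which is exactly the trade-off discussed above.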
The `pyarrow_numpy` StringArray also returns numpy arrays as results for some operations. I think this is also important to mention.
At this point, I haven't yet mentioned that the original StringDtype returns masked arrays from operations (only that it uses `pd.NA`). I only mention that when going into more detail on this topic in the "Missing value semantics" subsection. Given that, I would also leave it here at the generic "missing value semantics" for the new variant as well, to not make the background section even longer. (I can certainly expand the "Missing value semantics" section if needed.)
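For context, a small illustration of the masked-array behaviour being referred to, using the python-storage `StringDtype` so it runs without pyarrow (exact reprs differ between pandas versions):

```python
import pandas as pd

# The original StringDtype uses pd.NA and returns nullable (masked) arrays
# from string operations ...
s_string = pd.Series(["a", None, "ccc"], dtype="string")
print(s_string.str.len())   # nullable integer result, missing value is <NA>

# ... while object dtype keeps NaN and falls back to float64 for the same op.
s_object = pd.Series(["a", None, "ccc"], dtype=object)
print(s_object.str.len())   # float64 result, missing value is NaN
```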
And do we consider adding a performance warning to the fallback as well?
I personally wouldn't do that always / for each method, because that would be super noisy (and in some cases, like smallish data, it doesn't matter that much, so getting those warnings would be annoying).
If we wanted to warn users to gently push them towards installing pyarrow, I think we could do a warning but only 1) raise it once, and 2) only when doing one of the string operations on a big enough dataset (with some threshold).
Now, your question reminds me that the current pyarrow-backed string dtype has those fallback warnings for very specific cases, which I personally think we should stop doing when it becomes the default dtype. Given this is already for the existing implementation (and to keep the many discussion lines here a bit more limited), I opened a separate issue for this: #58581.
(but if there is agreement on that other issue, can of course briefly mention that here later)
Fair point. Given the recent user feedback after adding the deprecation warning for the PyArrow requirement, maybe not having any warnings is wise.
+1
It would be nice to clarify that this is a separate dtype from the original `string[python]` dtype, just to make it clear that the original StringDtype is not changing (and will still return masked arrays, and use pd.NA as its missing sentinel).
I tried to clarify in the text that it is indeed a new variant of the string dtype, and that it uses a subclass to reuse most code.
I would drop this bit about nanoarrow (given it is not explained/introduced in the paragraphs beforehand).
If you want to add an explanation above, that's also fine with me.
I added a link to the discussion issues for both numpy 2.0 and nanoarrow, so people can find more explanation there if they want.
So we are reusing `pd.StringDtype()` in this case, right? Is that going to break existing use cases where users have relied on that using pd.NA as a sentinel?
Yes, and that is what already happens since pandas 2.1 with `future.infer_string` enabled.

Yes, I mentioned that in the "Backwards compatibility" section.
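To make the `future.infer_string` behaviour concrete, a sketch of what the flag changes. This is hedged: the option only exists from pandas 2.1, the arrow-backed dtype it enables needs pyarrow on 2.x, and the inferred dtype name differs across versions, so the code guards for both and just prints what inference gives.

```python
import pandas as pd

# Without the option, string data is inferred as object dtype (pandas < 3.0).
default_dtype = pd.Series(["a", "b"]).dtype
print("default inference:", default_dtype)

# With the future flag (pandas >= 2.1), the same data infers to the new
# string dtype; guarded since the option may not exist on older versions
# and the arrow-backed dtype needs pyarrow on pandas 2.x.
try:
    with pd.option_context("future.infer_string", True):
        print("with infer_string:", pd.Series(["a", "b"]).dtype)
except (pd.errors.OptionError, ImportError):
    print("future.infer_string not usable in this environment")
```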
Ah thanks, sorry for overlooking that. So I think it goes without saying that if we go this route we will no longer declare `pd.StringDtype()` experimental? Or are we still trying to keep that reservation, knowing even this is not considered a long-term design decision?
Yep, given the proposal is to enable this by default, I think that indeed means removing the experimental label (I can mention that somewhere explicitly if that helps).

Once we have a `"string"` dtype, we will always have one, I think. That aspect is the long-term decision this PDEP is proposing. We might later change the missing value semantics, but that doesn't mean the string dtype proposed here is still experimental (just like our default "int64" dtype is not experimental). At the time we would decide to enable new missing value semantics by default, `"string"` will "simply" start meaning something different.
`StringDtype(storage="pyarrow", semantics="numpy")`? Or instead of `semantics`, could use `na_value=np.nan`.
If I'm understanding the motivation for the change in dtype correctly (improved overall user experience), then moving forward I suspect that when we have improved/native dtypes for other data types (nested, date, etc.) the same logic would need to apply, i.e. we would need to have variants of these with NumPy semantics.

Now this probably falls under PDEP-13, but if we have `semantics` as an argument (that users would see and use), we could still end up with columns using different missing value indicators?
Or maybe `nullable=[True|False]`. However, at the moment we distinguish the nullable data types for the other dtypes (int, float, etc.) with capitalization, so for consistency we could also consider `string`/`String` as the dtypes.
PDEP-13 proposes `StringDtype(backend="pyarrow", na_marker=np.nan)`. I think the repr should just be updated to reflect that; trying to sift through the meaning of `int` versus `Int` versus `int[pyarrow]` compared to `string` versus `string[pyarrow]` versus `string[pyarrow_numpy]` would, I think, be a distraction for this proposal.
@jbrockmendel good point that we can also use other keywords than just `storage` to make the distinction.

Only if users explicitly specify a non-default value for this, and never by default. This is the same with whatever option we come up with (e.g. also when using `dtype_backend="pyarrow"` or explicitly asking for one of the masked dtypes with `dtype=Int64` or ..., you can end up with a DataFrame with columns with mixed semantics).

Yeah, only unfortunately, to be consistent with the other dtypes where we use capitalization, it would need to be `"string"` for the new NaN-based dtype and `"String"` for the "nullable" NA-based variant. And that doesn't help with backwards compatibility, because `"string"` right now means the nullable dtype. Given that, I would personally not use capitalization here (which is also only a solution for the string alias naming, not for the `StringDtype(..)` API).

To keep the sub-discussions manageable, I moved this specific topic out of this inline comment thread and into its own issue: #58613
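For reference, a sketch of how the existing constructor distinguishes variants today. Only the `storage` keyword is guaranteed across versions; the `na_value` keyword discussed in this thread was only added in later pandas releases, hence the guard.

```python
import numpy as np
import pandas as pd

# The storage keyword already distinguishes the python- and pyarrow-backed
# implementations of the one StringDtype class.
dt = pd.StringDtype(storage="python")
print(dt.storage)    # "python"
print(dt.na_value)   # the missing value sentinel for this variant

# The na_value keyword (pd.NA vs np.nan) is the newer axis of variation;
# guarded because it does not exist on older pandas versions.
try:
    dt_nan = pd.StringDtype(storage="python", na_value=np.nan)
    print("na_value variant:", dt_nan.na_value)
except TypeError:
    print("this pandas version has no na_value keyword")
```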
I don't think we can just claim this. I don't disagree, but this should be backed up more.
At least from the feedback received from #57073 and the other issue, there's at least a significant part of the user base that doesn't use strings.
There's also a significant chunk of the population that can't install pyarrow (due to size requirements or exotic platforms or whatever).
I am not sure this argument is that convincing either, although for slightly different reasons. I don't think we need to feel rushed for the next release
@lithomas1 can you clarify which part of the paragraph you think requires more backing up?
The fact that I say a "significant" part of our user base has pyarrow installed?
I don't think we can ever know exact numbers for this, but one data point is that pandas currently has 210M monthly downloads and pyarrow has 120M monthly downloads. Of course not all of those pyarrow users are also using pandas, but if we just assume that half of those pyarrow downloads come from people using pandas, that would mean around 30% of our users already have pyarrow installed, which I would consider a "significant part".
(and my guess is that for people working with larger datasets, where the speed of pyarrow becomes more important, this percentage will be higher, for example because of using the parquet IO)
But anyway, we are never going to know this exact number, but IMO we do know that a significant part of our userbase has pyarrow and will benefit from using that by default.
Yes, and then this PDEP is not relevant for them. But the fact that some users don't use strings doesn't mean we shouldn't improve life for the users that do use strings? (So I'm just not really understanding how this is a relevant argument.)
Yes, and this PDEP addresses that by allowing a fallback when pyarrow is not installed.
@WillAyd can you then clarify which other reasons?
My other reason is that I don't think there is ever a rush to get a release out; we have historically never operated that way
For the last six years, we have roughly released a new feature release every six months. We indeed never rush a specific release if there is something holding it up for a bit, but historically we have been releasing somewhat regularly.
At this point, a next feature release will be 3.0 given the amount of changes we already made on the main branch that require the next release cut from main to be 3.0 and not 2.3 (enforced deprecations etc).
(We can cut a 2.3 release from the 2.2.x maintenance branch, which we might want to do for several reasons, but I am not counting that as a feature release for this discussion, as it will not actually contain features.)
So I would say there is not necessarily a rush to do a release with a default "string" dtype (that is up for debate, i.e. this PDEP), but there is some rush to get a 3.0 release out. In the meaning that I think we don't want to delay 3.0 for like half a year or longer.
So for me delaying the string dtype, essentially means not including it in 3.0 but postponing it to pandas 4.0 (I should maybe be clearer in the paragraph above about that).
And then I try to argue in the text here that postponing it for 4.0 has a cost (or, missed benefit), because we have an implementation we could use for a default string dtype in pandas 3.0, and postponing introducing it makes that users will use the sub-optimal object dtype for longer, for (IMO) no good reason.
It'd be nice to add how much perf benefits Arrow strings are expected to bring (e.g. 20%? 2x? 10x?).
Putting in the part about how many users have pyarrow would also help.
It'd also be good to elaborate on the usability part. IIUC, the main benefit here is not having to manually check each element to see whether your object-dtype'd column contains strings (since I think all the string methods work on object-dtype'd columns).
I think it's also fair to amend this part to say "massive benefits to users that use strings" (instead of in general).
Benchmarks are going to be highly dependent on usage and context. If working in an Arrow native ecosystem, the speedup of strings may be a factor over 100x. If working in a space where you have to copy back and forth a lot with NumPy, that number goes way down.
I think trying to set expectations on one number / benchmark for performance is futile, but generally Arrow only helps, and makes it so that we as developers don't need to write custom I/O solutions (eg: ADBC Drivers, parquet, read_csv with pyarrow all work with Arrow natively with no extra pandas dev effort)
Indeed, for single operations you can easily get a >10x speedup, but of course a typical workflow does not consist of just string operations, and the overall speedup depends a lot (see this slide for one small example comparison: https://phofl.github.io/pydata-berlin/pydata-berlin-2023/intro.html#74, and this blogpost from Patrick showing the benefit in a dask example workflow: https://towardsdatascience.com/utilizing-pyarrow-to-improve-pandas-and-dask-workflows-2891d3d96d2b).
That is often true, but except for strings ;).
For strings, the faster compute kernels will still give a lot of value even if your IO wasn't done through Arrow (and give a lot more value compared to using pyarrow for numeric data)
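A tiny, unscientific timing sketch of the kind of single-operation difference being discussed here. Results vary wildly with data size and backend, and `"string"` resolves to the python- or pyarrow-backed implementation depending on what is installed locally, so no specific speedup is claimed:

```python
import time
import pandas as pd

s_obj = pd.Series(["some moderately long text"] * 50_000, dtype=object)
s_str = s_obj.astype("string")  # backend depends on whether pyarrow is installed

t0 = time.perf_counter()
res_obj = s_obj.str.upper()
t_obj = time.perf_counter() - t0

t0 = time.perf_counter()
res_str = s_str.str.upper()
t_str = time.perf_counter() - t0

# Same result either way; only the backend (and hence the speed) differs.
print(f"object: {t_obj:.4f}s  string: {t_str:.4f}s")
```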
I might be missing the intent but I don't understand why the larger issue of NA handling means we should be faster to implement this
It's not a reason to do it "faster", but I meant to say that the discussion regarding NA is not a reason to do it "slower" (to delay introducing a dedicated string dtype)
I think the flip side is that if we aren't careful about the NA handling we can introduce some new keywords / terminology that makes it very confusing in the long run (which is essentially one of the problems with our strings naming conventions)
As a practical example, if we decided we wanted `semantics=` as a keyword argument to `StringDtype` in this PDEP to move the NA discussion along, that might be counter-productive when we look at more data types and decide `semantics=` was not a clear way to allow datetime data types to support `pd.NaT` as the missing value.

(Not saying the above is necessarily the truth, just cherry-picking from the conversation so far.)
That's one reason that I personally would prefer not introducing a keyword specifically for the missing value semantics, for now (just for this PDEP / the string dtype). I just listed some options in #58613, and I think we can do without it.
This just retroactively clarifies the reasoning for `string[pyarrow_numpy]` to have existed in the first place, right? Or is it supposed to be hinting at some other feature that the implementation details of the PDEP are proposing?
Yes, it's indeed explaining why we did this, which is of course "retroactively", given I was asked to write this PDEP partly for changes that have already been released. So a big part of the PDEP is retroactive in that sense (which is not necessarily helping to write it clearly...).
However, more importantly, the PDEP makes this (the already added dtype) the default in 3.0. It would remain behind the future flag for the next release if enough people feel we are not ready.
Historically you would get this by using `dtype="string"` too, right? I'm a little wary that we are underestimating the scope of how breaking this could be; I didn't even realize we considered that dtype experimental all this time.
This has been available (as pyarrow-backed) since 1.3, so almost three years (July 2, 2021). Even though it is considered experimental, if the new string dtype is not accepted for 3.0, then maybe a deprecation warning should be added? (We could also do this if we decided a 2.3 release is needed?)
A deprecation warning about what exactly?
The scope of changing NaN to NA for all users is much bigger, though (essentially what was decided in PDEP-10, if we would follow it strictly to the letter).

And similarly, if we would in the future change NaN/NaT semantics to NA for all dtypes, the scope will be much bigger (because once that is enabled by default, for example a user that was doing `dtype="float64"` will probably get the new NA behaviour while now it uses NaN), but we are still considering that (granted, it's exactly those details that we have to discuss a lot more in detail (elsewhere) and figure out).

I know that this is not necessarily a good argument to justify this breaking change (because we certainly should be wary of the scope of those breaking changes), but I do want to point out again that the choice in this PDEP to use NaN semantics is to reduce the scope of the breaking changes for most users (at the expense of increasing the scope of breaking changes for the smaller subset of users that was already using `dtype="string"`).

If we don't want to make `dtype="string"` breaking, then either we need to come up with a different name for the dtype (not using "string", like "utf8" or "text"), or we need to delay introducing a default string dtype until after we have agreement on the NA discussions.

And personally I think "string" is by far the best name (and I find the small breakage worth it for being able to use that name), and as I argued elsewhere (and in the "Why not delay introducing a default string dtype?" section in the PDEP text), I think it is valuable for our users to not wait with adding a dedicated string dtype until we are ready with the NA discussion and implementation.
This is where I am a little uncomfortable - I don't know how to measure the size of that, but I am wary of assuming it is not a significant number of users. The fact that "string" returns NA as a missing value is a documented difference in our code base:
https://pandas.pydata.org/docs/dev/user_guide/text.html#behavior-differences
And its usage has been promoted for quite some time:
https://stackoverflow.com/a/60553529/621736
https://towardsdatascience.com/why-we-need-to-use-pandas-new-string-dtype-instead-of-object-for-textual-data-6fd419842e24
https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.1.0.html#all-dtypes-can-now-be-converted-to-stringdtype
Yeah, none of these options are great... but out of them I still would probably prefer waiting. I think right now we are marching down a path of "string" missing values:
I think we have to carefully specify what the user specifies in a `dtype` argument and how that gets interpreted, versus what we return as the dtype when they look at `Series.dtype`. So we could have a mapping that says:

| `dtype` | `Series.dtype` |
| --- | --- |
| … | `"string[pyarrow_numpy]"` OR `"string[python]"` |
| `"string"` | `"string[pyarrow]"` |
| `"string[pyarrow]"` | `"string[pyarrow]"` |
| `"string[python]"` | `"string[python]"` |
| `"string[pyarrow_numpy]"` | `"string[pyarrow_numpy]"` |

The first row depends on whether `pyarrow` is installed. For the second, third and fifth rows, if `pyarrow` is not installed, we raise an Exception.

Separately, we can then debate what the values in the second column should look like in #58613. I personally am not a fan of `"pyarrow_numpy"`.
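As one concrete data point for this mapping discussion, the storage-specific string aliases already resolve through `pandas_dtype` today (a sketch; `"string[python]"` has been accepted since the `storage` keyword was added in pandas 1.3):

```python
import pandas as pd

# A plain "string" resolves to the default StringDtype ...
dt_default = pd.api.types.pandas_dtype("string")
# ... while the bracketed aliases pin a specific storage backend.
dt_python = pd.api.types.pandas_dtype("string[python]")

print(type(dt_default).__name__)  # StringDtype
print(dt_python.storage)          # "python"
```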
Ah OK - I didn't realize you were proposing that change be a part of this PDEP; I just thought it was an idea you had for the future. But that's a completely new behavior... which then begs the question of whether we go back and change `dtype=object` to have that same behavior, or have `dtype="string"` exclusively have it. Ultimately we end up with the same issue.
Yeah, I also agree with Will that it's not fair to change this without warning for people already using "string". (pd.NA is also a big selling point of `dtype="string"`.)

Maybe a good compromise would be to use `string[pyarrow]` under the hood for those users (if they have it installed)?

If we were to move ahead with the move to nullable dtypes in general, I worry that this changing of the na value for `dtype="string"` from pd.NA -> np.nan -> pd.NA will cause a lot of confusion.

If we were to do 2.3 (like I suggested below), this might be addressable there (with a deprecation).
Still, adding some deprecation warnings in 2.x for current users of StringDtype is something we certainly could do. I am personally ambivalent about it, but fine with adding it if others think that is better (I do think it might become quite noisy, and it also does not change the fact that 3.0 would switch from NA to NaN).

The warning message could then point people to enable `pd.options.future.infer_string = True` in case they only care about having the (faster) string dtype, or otherwise update their dtype specification if they want the NA instead of the NaN version.
I created a variant of that table in #58613 (comment) with a concrete proposal.

(For clarity, this "second" row referred to specifying a dtype with `"string"`.)

If you explicitly ask for pyarrow, then yes, raising an exception is fine and expected. But a generic `"string"` (or `StringDtype()`) has to mean "whatever string dtype is the default" and so cannot raise an exception if pyarrow is not installed, but should return the object-dtype based fallback.
This part of the plan worries me a little.
Maybe it would be better to cut off a 2.3 from 2.2.x.
I think there's a significant proportion of the downloads for 2.2 that aren't on the latest patch release.
I think there's ~ 1/3 of the downloads that are fetching 2.2.0.
Also, it would be good to mention which version of pandas is expected to have `infer_string` be able to infer to the object fallback option.
A 2.3 release (maybe around the same time as the 3.0 rc) sounds reasonable.

If the features/bugfixes added to 2.3 are limited to the string dtype, then we shouldn't need many patch releases. We may not need to backport any string dtype related fixes made for 3.0, as these will be behind a flag in 2.3 and so shouldn't break existing code.
On the other hand, as these features are behind a flag, maybe releasing a 2.3 would not gain the field testing we hope for.
And therefore, instead of doing a 2.3, planning for at least a couple of release candidates for 3.0 would better achieve this.
@jorisvandenbossche Thoughts on this?
Yes, if we still plan to add a deprecation warning and change the naming scheme in `StringDtype`, calling that 2.3.0 sounds like the best option. (I had been planning to propose doing a 2.3.0 (from the 2.2.x branch) anyway, to bump the warning for CoW from DeprecationWarning to FutureWarning.)