unreads: Track total and per-stream unread counts, in addition to IDs #345
Conversation
Unlike for [mentions], we don't currently suspect that [streams] or [dms] will be missing information on messages' read state except for the reason explained at [oldUnreadsMissing].
Sometimes we get feedback from a lint rule -- https://dart.dev/tools/linter-rules/avoid_function_literals_in_foreach_calls -- whose documentation recommends using for loops instead of forEach. That rule didn't fire on these instances (shrug), but this file has had a mix of for loops and forEach loops, and it seems nicer to consistently use one or the other.
Just in case there was a performance bug in a Map.forEach implementation, I tested mark-as-read (by scrolling through the web app) to see if we might get a free performance benefit there; the norm on my device (with a profile build) has been about 14ms per event: zulip#304 (comment). But, unsurprisingly, it stayed at around 14ms as before.
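For illustration (this is not the file's actual code; the names are made up), here's what the two styles look like over a map of unreads per topic:
// Illustrative only; names are hypothetical.
void printCounts(Map<String, List<int>> unreadsByTopic) {
  // Map.forEach style:
  unreadsByTopic.forEach((topic, messageIds) {
    print('$topic: ${messageIds.length}');
  });

  // for/in style, which the lint rule's docs recommend:
  for (final MapEntry(key: topic, value: messageIds) in unreadsByTopic.entries) {
    print('$topic: ${messageIds.length}');
  }
}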
This won't help UI code that wants to filter out muted messages, but otherwise it should let the caller conveniently know how many total unreads there are, without worrying about performance and dropped frames.
We'll give it a `count` field next.
This won't help UI code that wants to filter out muted messages, but otherwise it should let the caller conveniently know how many unreads there are in a stream, without worrying about performance and dropped frames.
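As a minimal sketch of the idea (hypothetical class and names, not the model's actual code), the counts get maintained incrementally as messages become unread or read, so reading them costs O(1) instead of a traversal:
// Hypothetical sketch: keep running totals alongside the sets of IDs,
// so callers never have to walk the model and sum.
class UnreadCountsSketch {
  int total = 0;                               // whole-model unread count
  final Map<int, int> perStream = {};          // streamId -> unread count
  final Map<int, Set<int>> _idsByStream = {};  // streamId -> unread message IDs

  void markUnread(int streamId, int messageId) {
    if ((_idsByStream[streamId] ??= {}).add(messageId)) {
      total++;
      perStream[streamId] = (perStream[streamId] ?? 0) + 1;
    }
  }

  void markRead(int streamId, int messageId) {
    if (_idsByStream[streamId]?.remove(messageId) ?? false) {
      total--;
      perStream[streamId] = perStream[streamId]! - 1;
    }
  }
}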
I don't expect it to make the model's event-handling much more expensive. Here's how it looks at the tip of the PR branch, scrolling through messages (on web) to mark as read:
which looks roughly similar to what we saw before. Maybe one or two milliseconds are added, actually… hmm. Still, we expect to be able to cut down most of that duration: #304 (comment)
Force-pushed from 2c4b16b to 848d233 (compare).
Interesting, thanks. I'm not yet convinced we actually want to have these counts around, with muting not taken into account — see comment below.
I also read through the commits that are independent of that:
a7f7fff unreads: Be more precise in dartdoc's description of information gaps
cc76f2a unreads [nfc]: Use for/in style instead of forEach
aeb570f test: Fix jsonEquals for int-keyed Maps
but not the ones that actually implement the counts:
0ebaef1 unreads: Track whole-model unread count
ad074b2 unreads [nfc]: Add StreamUnreads class, to store per-stream unread count
848d233 unreads: Track per-stream unread counts
/// It may also be unknown for other reasons that differ by component; see each.
/// In [mentions], there's another more complex reason
/// the state might be unknown; see there.
nit: commit message can mark as NFC
for (
  final MapEntry(key: incomingStreamId, value: incomingTopics)
  in newlyUnreadInStreams.entries
) {
  for (
    final MapEntry(key: incomingTopic, value: incomingMessageIds)
    in incomingTopics.entries
  ) {
    _addAllInStreamTopic(incomingMessageIds..sort(), incomingStreamId, incomingTopic);
I wish I had a good idea for making these more concise, but I don't. :-/ Short of, like, an extension entryPairs on Map to return 2-tuples instead of MapEntry — which is probably what Map.entries would have done in the first place if tuples/records had existed in Dart from early on.
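A rough sketch of what such an extension could look like (hypothetical; entryPairs isn't an existing API):
// Hypothetical extension: expose a map's entries as (key, value) records,
// so the loop header can use a record pattern instead of MapEntry.
extension EntryPairs<K, V> on Map<K, V> {
  Iterable<(K, V)> get entryPairs => entries.map((e) => (e.key, e.value));
}

// Illustrative usage with the loops above:
//   for (final (incomingStreamId, incomingTopics) in newlyUnreadInStreams.entryPairs) {
//     ...
//   }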
Can at least make them more compact vertically, though:
Suggested change:
for (final MapEntry(key: incomingStreamId, value: incomingTopics)
    in newlyUnreadInStreams.entries) {
  for (final MapEntry(key: incomingTopic, value: incomingMessageIds)
      in incomingTopics.entries) {
    _addAllInStreamTopic(incomingMessageIds..sort(), incomingStreamId, incomingTopic);
// TODO excluded for now; would need to handle nuances around muting etc.
// int count;
/// Total unread messages.
Hmm, but do we have a use case for this?
I don't immediately see one — I feel like any place that we might want a count like this one, we'd actually want the version that accounts for muting.
If we don't have a use case where it'd be valid to use it, then it's probably best not to have it around: it looks appealingly like something one might want, in situations where in fact it'd be buggy to use.
(If we did have reason to keep it around, I'd want the doc to clearly warn that it's not the "total number of unreads" one wants in the UI.)
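To make the concern concrete (hypothetical shapes, not the model's API): a raw total counts unreads in muted streams and topics too, whereas the count that UI code typically wants would skip those:
// Hypothetical: streams maps streamId -> topic -> unread message IDs.
int rawTotal(Map<int, Map<String, List<int>>> streams) {
  var sum = 0;
  for (final topics in streams.values) {
    for (final ids in topics.values) {
      sum += ids.length;
    }
  }
  return sum;
}

// What UI code would more likely want: skip muted stream/topic pairs.
int visibleTotal(Map<int, Map<String, List<int>>> streams,
    bool Function(int streamId, String topic) isMuted) {
  var sum = 0;
  for (final MapEntry(key: streamId, value: topics) in streams.entries) {
    for (final MapEntry(key: topic, value: ids) in topics.entries) {
      if (!isMuted(streamId, topic)) sum += ids.length;
    }
  }
  return sum;
}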
case Map() when object.keys.every((k) => k is int):
  result = object.map((k, v) => MapEntry<String, dynamic>(k.toString(), deepToJson(v)));
Hmm interesting.
The idea in the existing code here is that if you have a Map whose keys aren't strings, that is not a valid JSON-like value — not a value that a toJson method should be emitting. Keys in actual JSON objects are strings (just like in JS objects). E.g. see the deepToJson doc:
/// All JSON atoms (numbers, booleans, null, and strings) are used directly.
/// All JSON containers (lists, and maps with string keys) are copied
/// as their elements are converted recursively.
/// For any other value, a dynamic call `.toJson()` is made and
/// should return either a JSON atom or a JSON container.
Object? deepToJson(Object? object) {
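Concretely, per that doc (illustrative examples; ExampleThing is hypothetical):
// Illustrative of the documented contract:
class ExampleThing {
  Map<String, dynamic> toJson() => {'kind': 'example'};
}

// deepToJson(42)                    -> 42                          (atom, used directly)
// deepToJson(['a', 1])              -> ['a', 1]                    (list, copied)
// deepToJson({'k': ExampleThing()}) -> {'k': {'kind': 'example'}}  (dynamic .toJson() call)
// deepToJson({1: 123})              -> throws, since int keys make it neither a
//                                      JSON container nor something with a toJson()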
Does jsonEncode accept a value like this? If it does, we should probably accommodate it here too. If not, but jsonEncode gives a more helpful error message about this, then it'd be good to try to give an equally helpful error.
In DartPad, this fails:
import 'dart:convert';
main() {
print(jsonEncode({ 1: 123 }));
}
with
Uncaught Error: Converting object to an encodable object failed: Instance of 'JsLinkedHashMap<int, int>'
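For comparison, jsonEncode does accept the same data once the keys are stringified (roughly what the proposed case clause above does):
import 'dart:convert';

main() {
  // Stringifying the keys first makes the map encodable.
  print(jsonEncode({1: 123}.map((k, v) => MapEntry(k.toString(), v))));
  // prints: {"1":123}
}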
Compare that with the output of this test on main (256be9a):
test('', () {
check({1: 1}).jsonEquals({1: 1});
});
$ flutter test test/model/unreads_test.dart
00:02 +0 -1: constructor [E]
Converting object to an encodable object failed: _Map len:1
test/stdlib_checks.dart 49:5 deepToJson
test/stdlib_checks.dart 79:26 JsonChecks.jsonEquals
test/model/unreads_test.dart 98:21 main.<fn>.<fn>
That's a JsonUnsupportedObjectError that we throw in deepToJson:
final Object? shallowlyConverted;
try {
shallowlyConverted = (object as dynamic).toJson();
} catch (e) {
throw JsonUnsupportedObjectError(object, cause: e);
}
Inspecting its cause with print(e) gives:
NoSuchMethodError: Class '_Map<int, int>' has no instance method 'toJson'.
Receiver: _Map len:1
Tried calling: toJson()
Maybe it would help to make that cause appear in the test failure output? If the test writer already knows not to expect deepToJson to convert an int-keyed map to a string-keyed map (as JS's JSON.stringify does), then that output is likely to be a helpful clue, because it points to an int-keyed map.
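One possible shape for that (hypothetical, not the repo's code): wrap the toJson call and fold the cause's text into the error message, e.g.:
// Hypothetical helper: convert via toJson, but surface the underlying cause
// in the message so the test failure output points at it.
Object? toJsonOrExplain(Object object) {
  try {
    return (object as dynamic).toJson();
  } catch (e) {
    throw StateError(
        'deepToJson: ${Error.safeToString(object)} is not JSON-encodable; cause: $e');
  }
}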
It looks like we give the same error as jsonEncode does here. It's different in DartPad vs. flutter test because the former is running Dart by compiling to JS; but in flutter test, the jsonEncode version gives the same "_Map len:1" stringification.
Which isn't a coincidence, because as the implementation comment says:
Object? deepToJson(Object? object) {
// Implementation is based on the recursion underlying [jsonEncode],
// at [_JsonStringifier.writeObject] in the stdlib's convert/json.dart .
// (We leave out the cycle-checking, for simplicity / out of laziness.)
and that includes largely throwing the same exceptions.
I'm inclined to not try to optimize the error messages beyond what jsonEncode gives. For example it's not clear that this would be helpful:
If the test writer already knows not to expect deepToJson to convert an int-keyed map to a string-keyed map (as JS's JSON.stringify does), then that output is likely to be a helpful clue,
because that doesn't seem to have been the scenario.
Thanks for the review! Yeah, that reasoning makes sense. I've left some comments on 492ee38 (unreads [nfc]: Use for/in style instead of forEach), and I'll close this PR.
Would it be helpful for the unreads model to track total and per-stream unread counts, so it can offer those numbers to callers without having to traverse the model and do sums for each call? Here's a PR that does that. I had this idea after reading @sirpengi's comment here and his referenced commit: #334 (comment).
With this PR, it's now efficient to learn the number of:
- unread messages across the whole model
- unread messages in a given stream
in addition to these cases that were already efficient:
- unread messages in a given topic
- unread messages in a given DM conversation