
bitbot disconnects finding the title of really large pages #328

OrichalcumCosmonaut opened this issue Sep 13, 2021 · 13 comments

@OrichalcumCosmonaut

when given a URI like https://git.causal.agency/scooper/tree/sqlite3.c?h=vendor-sqlite3 (a 24MiB page), bitbot disconnects while getting its title, probably because it tries to parse the entire page just to find the title, which causes it to time out.

this could probably be fixed by limiting the amount of the page to be parsed to 64KiB or thereabouts.
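
A minimal sketch of that idea, assuming the page is fetched with requests and parsed with BeautifulSoup; the helper name, the 64KiB cutoff, and the timeout are illustrative rather than bitbot's actual code:

import requests
from bs4 import BeautifulSoup

MAX_TITLE_BYTES = 64 * 1024  # only look at the first 64KiB for a <title>

def fetch_title(url):
    # stream the response and read at most MAX_TITLE_BYTES of the body
    with requests.get(url, stream=True, timeout=5) as response:
        head = response.raw.read(MAX_TITLE_BYTES, decode_content=True)
    soup = BeautifulSoup(head, "html.parser")
    if soup.title and soup.title.string:
        return soup.title.string.strip()
    return None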

@jesopo
Member

jesopo commented Sep 13, 2021

deadline for slow transfers
https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L240

max file size for large files that could oom the bot
https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L224

don't know what problem you've managed to stumble on, but there's already code intended to handle what you've described. do you have a stacktrace?
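
For reference, the general shape of those two safeguards is roughly the sketch below; this is illustrative only (requests-based, with made-up names and limits), not the code at the links above:

import time
import requests

MAX_BYTES = 100 * 1024 * 1024  # size cap so a huge response can't OOM the bot
DEADLINE_SECONDS = 10          # overall deadline for slow transfers

def bounded_download(url):
    start = time.monotonic()
    total = 0
    chunks = []
    with requests.get(url, stream=True, timeout=5) as response:
        for chunk in response.iter_content(chunk_size=8192):
            total += len(chunk)
            if total > MAX_BYTES:
                raise ValueError("response exceeds MAX_BYTES")
            if time.monotonic() - start > DEADLINE_SECONDS:
                raise TimeoutError("transfer exceeded deadline")
            chunks.append(chunk)
    return b"".join(chunks)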

@OrichalcumCosmonaut
Author

https://github.com/jesopo/bitbot/blob/4a6037c77405f3584efadc10ae75826b6b9ac422/src/utils/http.py#L35

it seems that the default for that variable is 100MiB, which probably doesn’t help for a page smaller than that, especially when the title probably isn’t very far into the file.

i don’t have a stacktrace; it was tildebot that disconnected, so maybe ben has one, assuming it did crash?

@jesopo
Member

jesopo commented Sep 13, 2021

I'm sure the machine it's running on can read and parse 100MiB of html unless it's somehow akin to a zipbomb. can you get the stacktrace from ben? we're going to be blind without it

@examknow
Member

I think that limit ought to be configurable. Some people's stuff can handle that but others obviously can't.

@causal-agent

My guess is it spends forever in html5lib trying to parse the page. Pure python parser = sadness.

@jesopo
Member

jesopo commented Sep 13, 2021

I've hit it with much worse in testing. eager to see a stacktrace

@jesopo
Member

jesopo commented Sep 13, 2021

if it is a timeout, especially on the .soup() call outside the deadline, I'd be inclined to do a much less thorough parse, even just a regex to grab the <title>. it'd work the majority of the time
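
A sketch of what that lighter-weight parse could look like; a hand-rolled regex rather than anything in bitbot, and it only handles the common case of a plain <title> element:

import html
import re

TITLE_RE = re.compile(rb"<title[^>]*>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def quick_title(page):
    # page is the raw response body as bytes; no full HTML parse involved
    match = TITLE_RE.search(page)
    if match is None:
        return None
    return html.unescape(match.group(1).decode("utf-8", "replace")).strip()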

@jesopo
Member

jesopo commented Sep 13, 2021

benchmarking lxml against html5lib puts the former far ahead of the latter, but I recall picking the latter for fault tolerance the former doesn't have
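
One way to reproduce that comparison (a sketch, assuming lxml and html5lib are installed and big.html is a saved copy of the offending page):

import timeit
import html5lib
import lxml.html

with open("big.html", "rb") as f:
    data = f.read()

# single parse each; html5lib is pure python, lxml is a C extension
print("lxml:    ", timeit.timeit(lambda: lxml.html.document_fromstring(data), number=1))
print("html5lib:", timeit.timeit(lambda: html5lib.parse(data), number=1))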

@benharri
Contributor

looks like my log level was too low; it just shut down

@causal-agent

>>> import timeit
>>> import html5lib
>>> def parseit():
...     with open("big.html", "rb") as f:
...         return html5lib.parse(f)
... 
>>> # note: timeit.timeit runs the callable 1,000,000 times by default
>>> timeit.timeit(parseit)

This has been running for over 20 minutes...

@jesopo
Member

jesopo commented Sep 13, 2021

😰

@jesopo
Member

jesopo commented Sep 13, 2021

I don't think the correct solution is limiting file size; I imagine it's trivial to code golf something html5lib finds hard to parse. I'd either put a deadline on soupifying the results or switch to something closer to O(1). lxml is undoubtedly faster, but I can't remember what exact case caused me to switch away from it
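
A sketch of the "deadline on soupifying" option, using concurrent.futures rather than whatever bitbot would actually use; note the timeout only bounds how long the caller waits, since the parse keeps running in the worker thread and its CPU and memory aren't reclaimed:

import concurrent.futures
import html5lib

def parse_with_deadline(page, seconds=5.0):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(html5lib.parse, page)
    try:
        # raises concurrent.futures.TimeoutError if the parse is too slow
        return future.result(timeout=seconds)
    finally:
        # don't block waiting for a stuck parse; the thread is abandoned, not killed
        pool.shutdown(wait=False)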

@causal-agent

causal-agent commented Sep 13, 2021

Well, I wanted to post the final count for timing html5lib for posterity, but it seems python got OOM-killed while I wasn't looking 🙁
