
Errors in ddtrace collectors when using with flask + gunicorn + gevent #11281

Closed
mubin-tirsaiwala opened this issue Nov 4, 2024 · 2 comments

@mubin-tirsaiwala

Description

We are hosting a flask app behind gunicorn. Until now we were using the default synchronous workers and had no ddtrace issues. Recently we switched to gevent workers for async handling, and ddtrace started throwing errors that appear to come from the profiling stack collector. Everything works perfectly well until the system comes under heavy load. I could not reproduce this behavior locally, but it happens in both our staging and production environments.

Note: The app still serves requests without a significant performance impact, but the errors keep flowing.
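
For reference, here is a minimal sketch of how the service is launched, assuming a standard gunicorn.conf.py plus ddtrace-run; the file name, bind address, and app module are illustrative, not our exact production config:

# gunicorn.conf.py -- illustrative minimal config mirroring our setup
bind = "0.0.0.0:8000"
worker_class = "gevent"   # switched from the default sync workers
workers = 12

# Profiling is enabled through environment variables read by ddtrace
# (DD_PROFILING_ENABLED=true, plus the usual DD_SERVICE/DD_ENV tags),
# and the server is started under ddtrace-run, e.g.:
#   ddtrace-run gunicorn -c gunicorn.conf.py app:app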

Running Environment

  • Python Version: 3.10.15
  • ddtrace version: 2.12.0
  • gevent version: 24.10.3
  • gunicorn version: 21.2.0
  • Number of gevent workers: 12
  • We are running the flask app in a kubernetes cluster.

What have I tried

  • Bumping ddtrace from 2.5.2 to 2.12.0, the latest release whose notes mention a gevent-related fix.

Expected behavior

  • ddtrace works without producing errors or killing the workers.

Logs

Apologies for such limited logs, but this is all I got!

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/ddtrace/profiling/collector/__init__.py", line 43, in periodic
    for events in self.collect():
  File "ddtrace/profiling/collector/stack.pyx", line 573, in ddtrace.profiling.collector.stack.StackCollector.collect
  File "ddtrace/profiling/collector/stack.pyx", line 321, in ddtrace.profiling.collector.stack.stack_collect
  File "ddtrace/profiling/collector/_task.pyx", line 102, in ddtrace.profiling.collector._task.list_tasks
  File "ddtrace/profiling/collector/_task.pyx", line 125, in ddtrace.profiling.collector._task.list_tasks
  File "/usr/local/lib/python3.10/weakref.py", line 137, in __getitem__
    o = self.data[key]()
KeyError: 132388630494432
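
For what it's worth, the top frame appears to be weakref.WeakValueDictionary.__getitem__, which raises KeyError when the entry for a key has already been dropped. The standalone snippet below reproduces that pattern in isolation; it is an illustration of the failure mode, not ddtrace code, and the Task class and registry names are made up:

import weakref

class Task:
    pass

registry = weakref.WeakValueDictionary()
task = Task()
registry[id(task)] = task

keys = list(registry.keys())   # snapshot of task ids, like the collector's task list
del task                       # last strong reference goes away, the entry is removed

for key in keys:
    try:
        registry[key]          # raises KeyError, same shape as the log above
    except KeyError as exc:
        print("KeyError:", exc)
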
@wconti27 added the Profiling label on Nov 18, 2024
@mubin-tirsaiwala (Author)

@taegyunkim Any updates on this?

@github-actions bot added the stale label on Feb 10, 2025
@github-actions bot (Contributor)

This issue has been automatically closed after a period of inactivity. If it's a
feature request, it has been added to the maintainers' internal backlog and will be
included in an upcoming round of feature prioritization. Please comment or reopen
if you think this issue was closed in error.

@github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on May 11, 2025