doc: ch9 fix style #11


Merged 1 commit on Feb 2, 2025
If you want to run the task after `t` seconds of non-blocking delay, use `apply_
```

### **Some important points on running the Celery worker**
1. The following command determines the number of workers that can be forked/spawned:
```shell
ulimit -n
```
This number is actually the maximum number of
[file descriptors](https://stackoverflow.com/questions/5256599/what-are-file-descriptors-explained-in-simple-terms)
that can be open at any point in time.

2. By running Celery's worker using `celery -A ex_9_1 worker`,
`n` processes are forked/spawned by default, where `n` is the number of CPU cores.
You can verify this by running the `pgrep celery` command (which returns their PIDs).

3. There are two important options for running the Celery worker, `concurrency` and `autoscale`:
`concurrency` determines the number of processes forked/spawned by the Celery worker
(which, under the hood, is upper-bounded by the maximum number of file descriptors).
`autoscale` limits the number of forked/spawned processes between two numbers
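The options above can be sketched on the command line; the app name `ex_9_1` comes from the chapter, while the pool sizes are arbitrary examples:

```shell
# Fixed pool: fork exactly 4 worker processes.
celery -A ex_9_1 worker --concurrency=4

# Autoscaling pool: at most 10 processes, at least 3, grown and shrunk by load.
celery -A ex_9_1 worker --autoscale=10,3

# List the PIDs of the forked worker processes.
pgrep celery
```

These commands assume a reachable broker; without one, the worker starts and then retries the broker connection.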
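The `ulimit -n` cap from point 1 can also be read from inside Python with the standard library's `resource` module (Unix-only), as a small self-contained check:

```python
import resource

# RLIMIT_NOFILE is the per-process limit on open file descriptors:
# the soft value is what `ulimit -n` reports, and it bounds how many
# pipes/sockets the forked worker processes can hold open at once.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```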
It provides Cron-style scheduling and some interval-based scheduling.

As the [APScheduler official document](https://apscheduler.readthedocs.io/en/3.x/) states,
APScheduler has four components:
1. Trigger: Used to determine when to run the task and has three types:
- a. Date
- b. Interval
- c. Cron
2. Job Store: Saves the scheduled jobs in memory or in a database.
3. Executor: Handles running of the jobs, using a thread or process pool.
4. Scheduler: Binds everything together and helps with configuring the job stores and executors.

Now let's see an example of a simple periodic task using APScheduler from
[Keshav Manglore](https://medium.com/@keshavmanglore/advanced-python-scheduler-scheduling-tasks-with-ap-scheduler-in-python-8c7998a4f116)