What is “Celery”?
I remember being very confused when I first encountered Python Celery (as well as how random the name seemed to me). I’ve had opportunities to work with Celery at work and I find it very interesting, so I’d like to leave notes about it here as I learn more about it.
Celery is a task queue.
What is a Task Queue?
It’s a queue of tasks. Task queues “are used as a mechanism to distribute work across threads or machines.”
A task queue’s input is a unit of work called a task. Dedicated worker processes constantly monitor task queues for new work to perform.
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task the client adds a message to the queue, the broker then delivers that message to a worker.
A Celery system can consist of multiple workers and brokers, giving way to high availability and horizontal scaling.
Celery is written in Python, but the protocol can be implemented in any language.
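To make that flow concrete, here is a minimal sketch of a task and the client call that enqueues it. The module name, broker URL, and `add` task are my own illustrative choices, not something from the original post:

```python
# tasks.py -- a minimal, illustrative module
from celery import Celery

# Both the client and the worker import this same app instance.
app = Celery("tasks", broker="amqp://guest@localhost//")  # assumes a local RabbitMQ

@app.task
def add(x, y):
    return x + y

if __name__ == "__main__":
    # Client side: .delay() only places a message on the queue and returns
    # immediately. A worker started separately with
    # `celery -A tasks worker --loglevel=INFO` picks the message up and runs it.
    result = add.delay(4, 4)
    print(result.id)  # the task id; the return value lives in a result backend, if one is configured
```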
Requirements
Celery requires a message transport/broker (ex: RabbitMQ, Redis) to send and receive messages.
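As a rough sketch of what that looks like in code (the URLs below assume brokers running locally with default settings and are only examples):

```python
from celery import Celery

# Either transport works; pick one.
app = Celery("proj", broker="amqp://guest@localhost//")      # RabbitMQ
# app = Celery("proj", broker="redis://localhost:6379/0")    # Redis

# The broker can also be set through configuration instead of the constructor.
app.conf.broker_url = "amqp://guest@localhost//"
```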
Characteristics of Celery
- Simple: easy to use and maintain, no need for configuration files
- Highly Available: automatically retry in the event of connection loss or failure
- Fast: a single Celery process can handle millions of tasks a minute, with sub-millisecond round-trip latency (using RabbitMQ and optimized settings)
- Flexible: “Almost every part of Celery can be extended or used on its own. Custom pool implementations, serializers, compression schemes, logging, schedulers, consumers, producers, broker transports, and much more.”
Supports
- Brokers (ex: RabbitMQ, Redis, Amazon SQS, etc)
- Concurrency (ex: prefork: multiprocessing, thread: multithreaded, solo: single threaded)
- Result Stores: AMQP, Redis, Memcached, SQLAlchemy, Amazon S3, File System, etc.
- Serialization: pickle, json, yaml, msgpack; zlib/bzip2 compression; cryptographic message signing (a configuration sketch tying a few of these options together follows this list)
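Here is a hedged sketch of how a broker, a result store, and serialization settings are typically wired up together; the URLs and values are only examples:

```python
from celery import Celery

app = Celery(
    "proj",
    broker="redis://localhost:6379/0",    # message broker
    backend="redis://localhost:6379/1",   # result store
)

# Serialization and compression are ordinary configuration settings.
app.conf.update(
    task_serializer="json",
    result_serializer="json",
    accept_content=["json"],   # refuse to deserialize anything that isn't JSON
    task_compression="zlib",   # compress task messages with zlib
)
```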
Features
- Monitoring: A stream of monitoring events is emitted by workers and is used by built-in and external tools to tell you what your cluster is doing, in real time
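Here is a rough sketch of consuming that event stream from Python, along the lines of the receiver pattern in Celery’s monitoring guide; workers need to be started with the `-E`/`--task-events` option for task events to be sent, and the details here should be treated as assumptions:

```python
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")  # example broker URL

def monitor(app):
    # Keeps an in-memory picture of workers and tasks as events arrive.
    state = app.events.State()

    def on_task_failed(event):
        state.event(event)
        task = state.tasks.get(event["uuid"])
        print(f"TASK FAILED: {task.name}[{task.uuid}]")

    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={
            "task-failed": on_task_failed,
            "*": state.event,  # feed every other event into the state object
        })
        recv.capture(limit=None, timeout=None)

if __name__ == "__main__":
    monitor(app)
```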
- Work-flows: Simple and complex workflows can be composed using a set of powerful primitives called the “canvas”
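A small sketch of those primitives, reusing the hypothetical `add` task from the earlier example (the chord line assumes a `tsum` task that sums a list, which isn’t defined here):

```python
from celery import chain, group

from tasks import add  # the hypothetical task from the earlier sketch

# chain: run tasks in sequence, feeding each result to the next -> add(add(2, 2), 4)
chain(add.s(2, 2), add.s(4))()

# group: run tasks in parallel and collect their results as a list
group(add.s(i, i) for i in range(10))()

# chord: a group whose collected results are passed to a callback task, e.g.
#   chord((add.s(i, i) for i in range(10)), tsum.s())()
# assuming a `tsum` task that sums a list of numbers.
```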
- Time & Rate Limits:
You can control how many tasks can be executed per second/minute/hour, or how long a task can be allowed to run, and this can be set as a default, for a specific worker or individually for each task type
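For example, per-task limits can be declared on the task itself; the numbers below are arbitrary and the task is hypothetical:

```python
from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery("proj", broker="redis://localhost:6379/0")  # example broker URL

# At most ten executions per minute per worker, a hard kill after 60 seconds,
# and a catchable soft timeout ten seconds before that.
@app.task(rate_limit="10/m", time_limit=60, soft_time_limit=50)
def crunch(numbers):
    try:
        return sum(n * n for n in numbers)  # stand-in for real work
    except SoftTimeLimitExceeded:
        return None  # last chance to clean up before the hard limit kills the task
```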
- Scheduling: You can specify the time to run a task in seconds or a datetime, or you can use periodic tasks for recurring events based on a simple interval, or Crontab expressions supporting minute, hour, day of week, day of month, and month of year
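A sketch of both styles, reusing the hypothetical tasks.py module from the first example; the schedule entries and times are arbitrary:

```python
from datetime import datetime, timedelta, timezone

from celery.schedules import crontab

from tasks import add, app  # the hypothetical module from the first sketch

# One-off scheduling: run about ten seconds from now, or at a specific datetime.
add.apply_async((2, 2), countdown=10)
add.apply_async((2, 2), eta=datetime.now(timezone.utc) + timedelta(hours=1))

# Recurring tasks, executed by the separate `celery beat` scheduler process.
app.conf.beat_schedule = {
    "add-every-30-seconds": {
        "task": "tasks.add",
        "schedule": 30.0,  # simple interval, in seconds
        "args": (16, 16),
    },
    "add-every-monday-morning": {
        "task": "tasks.add",
        "schedule": crontab(hour=7, minute=30, day_of_week=1),
        "args": (16, 16),
    },
}
```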
- Resource Leak Protection: The --max-tasks-per-child option is used for user tasks leaking resources, like memory or file descriptors, that are simply out of your control
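A minimal sketch of setting that limit, assuming the hypothetical tasks.py app from the first example; the value is arbitrary:

```python
from tasks import app  # the hypothetical app from the first sketch

# Equivalent to starting the worker with:
#   celery -A tasks worker --max-tasks-per-child=100
app.conf.worker_max_tasks_per_child = 100  # recycle each child process after 100 tasks
```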
- User Components: Each worker component can be customized, and additional components can be defined by the user. The worker is built up using “bootsteps”, a dependency graph enabling fine-grained control of the worker’s internals.
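Here is a sketch of a custom worker component along the lines of the bootstep examples in Celery’s extending documentation; treat the component itself and its printed messages as illustrative assumptions:

```python
from celery import Celery, bootsteps

class InfoStep(bootsteps.StartStopStep):
    """A custom worker component that only logs lifecycle events."""

    # Ensure the worker's pool component is set up before this step starts.
    requires = {"celery.worker.components:Pool"}

    def start(self, worker):
        print(f"{worker.hostname} is starting")

    def stop(self, worker):
        print(f"{worker.hostname} is stopping")

app = Celery("proj", broker="redis://localhost:6379/0")  # example broker URL
app.steps["worker"].add(InfoStep)  # register the step in the worker's bootstep graph
```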