Celery – Bound Tasks

A task being bound means the first argument to the task will always be the task instance (self).
Bound tasks are needed for retries (using app.Task.retry()), for accessing information about the current task request, and for any additional functionality you add to custom task base classes.

An example that prints the ID of the current task is given below (app here is the usual Celery application object):

@app.task(bind=True, name="tasks.get_ID")
def get_ID(self):
    print("hooo")
    print(self.request.id)
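
With a worker running, a quick way to call this task from client code is delay(); the AsyncResult it returns carries the same ID that the worker prints:

result = get_ID.delay()
print(result.id)  # same value as self.request.id inside the task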

It is always good practice to give tasks explicit names; otherwise Celery falls back to automatic naming, which in some situations can lead to a "task not registered" error.
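
As mentioned above, bound tasks are also what make retries work, since retry() is called on the task instance. A minimal sketch, where fetch_url is a hypothetical helper that may raise on network failures:

@app.task(bind=True, name="tasks.fetch", max_retries=3)
def fetch(self, url):
    try:
        return fetch_url(url)  # hypothetical helper, may raise IOError
    except IOError as exc:
        # re-queue this task; countdown delays the retry by 5 seconds
        raise self.retry(exc=exc, countdown=5)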

 


Celery in Production – Supervisor

In this tutorial we are going to see how Celery is set up in a production environment, where both the workers and other processes, such as the monitoring tool Flower, have to run continuously. During development, both the worker and Flower processes kept getting stopped somehow, forcing me to restart them every now and then. The solution, as suggested on the official Celery site, is to use a tool like Supervisor.

In production you will want to run the worker in the background as a daemon, and since the worker may sometimes stop on its own, it should be restarted automatically. Supervisor (supervisord) is a tool that handles both of these jobs.

Installing Supervisor

First we need to set up a Python virtual environment. Run the following command to create one for our demo project:
virtualenv env

Now move into the env folder and activate the virtual environment:
source bin/activate

(Celery also needs to be installed in this virtual environment using pip; note that the RabbitMQ server itself is a separate system service, not a pip package.)
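
For this demo, that boils down to something like the following (Flower is included here since we run it later):

pip install celery flower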

Now install Supervisor using the following command:

pip install supervisor

Installing the package also provides a command named echo_supervisord_conf, which prints a sample configuration.

Now run the following command to generate the config file:
echo_supervisord_conf > supervisord.conf

This would generate a config file, supervisord.conf, where all the keys for our magic live.
Now move this file to the folder where we have written the code for Celery. In my case I have a folder named project inside the env folder (it contains files such as tasks.py).

Now cd into the project folder.
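
With the env/project layout described above, these two steps amount to:

mv supervisord.conf project/
cd project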

Now open the file we just copied and add the following lines:

[program:tasks]
command=celery worker -A tasks --loglevel=INFO
stdout_logfile=celeryd.log
stderr_logfile=celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

[program:flower]
command=celery flower -A tasks
stdout_logfile=flower.log
stderr_logfile=flower.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

Since we need to run both the worker and Flower, they are added as two separate programs, as written above. We can also put them in a group so that they are started and stopped together; a sketch of that follows below. Most of the fields here are self-explanatory, but if you would like a clearer picture, see the Supervisor documentation.
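
For instance, a group section tying the two programs together (the group name is just illustrative) would look like this; with it in place, supervisorctl can address both processes at once as celerydemo:*.

[group:celerydemo]
programs=tasks,flower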

Starting the Daemons

Just go to the project directory (the folder where we copied the config file), open a terminal, and run the following command:
supervisord

This would start both Flower and the Celery worker as daemons.
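
To confirm that both processes are up, you can query Supervisor's control client from the same directory (so that it picks up supervisord.conf):

supervisorctl status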

Stopping Supervisord

If we want to stop supervisord, just run the following command:

killall supervisord
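
Alternatively, the control client gives a cleaner shutdown, stopping the managed programs before supervisord itself exits:

supervisorctl shutdown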

References:
https://micropyramid.com/blog/celery-with-supervisor/
http://jamie.curle.io/posts/bottle-and-supervisord/
https://serversforhackers.com/monitoring-processes-with-supervisord