Access denied issue with mysqlimport on a remote machine

When importing a CSV file into a table with the same structure on a different machine, using mysqlimport with the MySQL root user, it is common to end up with an access denied error message.

mysqlimport -h yourhostname --port 3306 -u root -pyourpassword --fields-terminated-by=';' --local csv_test tweet.csv

For instance, the above command, which works on some machines, may raise:

mysqlimport: Error: 1045 Access denied for user 'root'@'ipaddress-of-remote-host' (using password: YES)

The fix for this issue is as follows:

1. First check the my.cnf file (inside the /etc/mysql folder) and look for the line:

bind-address = 127.0.0.1

If it is not commented, comment it out:

 #bind-address = 127.0.0.1

This tells MySQL to listen on all interfaces, so that the server is reachable from other hosts as well.

2. Next we need to grant permission to the root user.

For that, log in to the MySQL prompt using mysql -u root -p and execute the following query:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
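Note that '%' lets root connect from any host, which is convenient but broad. If the client machine's address is known, the grant can be scoped to it instead; a hedged sketch (the IP below is a placeholder):

GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.1.100' IDENTIFIED BY 'password';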

3. Now restart mysql using the command:

service mysql restart
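Once MySQL has restarted, it is worth verifying remote access before retrying the import; a quick check from the client machine (reusing the placeholders from the failing command above):

mysql -h yourhostname --port 3306 -u root -pyourpassword -e "SELECT 1"

If this prints a result instead of an access denied error, the mysqlimport command should now go through.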

VirtualBox – Copy Paste Not working (Debian)

Even though you have set the Shared Clipboard option to Bidirectional in the settings (under General -> Advanced), sometimes it will not work. The following steps fixed this issue for me.

In the guest OS, do the following:

1. Update the sources.list file: enable the contrib repositories. For example, for Debian 8, make sure your /etc/apt/sources.list contains something like:

deb http://ftp.debian.org/debian jessie main contrib

2. Install the guest additions software:


apt-get update
apt-get install virtualbox-guest-dkms
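After the installation finishes, reboot the guest so the new clipboard service gets loaded. If the clipboard still does not sync, starting the service manually inside the guest may help; a hedged sketch:

VBoxClient --clipboard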

Error processing package nginx

During the installation of nginx using the apt repository, i.e. when we run apt-get install nginx, we may come across error messages like this:

Job for nginx.service failed. See 'systemctl status nginx.service' and 'journalctl -xn' for details.
invoke-rc.d: initscript nginx, action "start" failed.
dpkg: error processing package nginx-full (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of nginx:
nginx depends on nginx-full (>= 1.6.2-5+deb8u4) | nginx-light (>= 1.6.2-5+deb8u4) | nginx-extras (>= 1.6.2-5+deb8u4); however:
Package nginx-full is not configured yet.
Package nginx-light is not installed.
Package nginx-extras is not installed.
nginx depends on nginx-full (<< 1.6.2-5+deb8u4.1~) | nginx-light (<< 1.6.2-5+deb8u4.1~) | nginx-extras (<< 1.6.2-5+deb8u4.1~); however:
Package nginx-full is not configured yet.
Package nginx-light is not installed.
Package nginx-extras is not installed.

dpkg: error processing package nginx (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
nginx-full
nginx
E: Sub-process /usr/bin/dpkg returned an error code (1)

Fix

nginx fails to start here because Apache (or whichever web server is currently running) is already listening on port 80, so the post-installation script cannot bring the nginx service up. Stopping the Apache service before installing nginx solves the issue; once nginx is installed, Apache can be started again.

Hence the following steps should solve this issue:

1. sudo systemctl stop apache2.service
2. sudo apt-get install nginx
3. sudo systemctl start apache2.service
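If nginx still fails to start, it is worth checking what is actually listening on port 80 before retrying; a diagnostic sketch (tool availability varies by system):

sudo ss -tlnp | grep ':80'

On older systems, sudo netstat -tlnp | grep ':80' gives the same information.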

Celery in Production – Supervisor

In this tutorial, we are going to see how Celery is set up in a production environment, where both the workers and other processes, such as the monitoring tool Flower, need to run continuously. During development, both the worker and Flower processes used to get stopped somehow, forcing me to restart them every now and then. A solution for this, as suggested on the official Celery site, is to use a tool like Supervisor.

In production you will want to run the worker in the background as a daemon, and since the Celery worker may sometimes stop on its own, it should be restarted automatically. Tools like supervisord handle exactly these tasks.

Installing Supervisor

First we need to set up a Python virtual environment. Run the following command to create one for our demo project:
virtualenv env

Now move into this env folder and activate the virtual environment:
source bin/activate

Next, we need Celery and Flower installed inside this virtual environment via pip. (RabbitMQ itself is a separate message broker installed at the system level, not through pip.)
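A hedged sketch of those installs (unpinned; pin versions to match your deployment):

pip install celery flower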

Now install supervisor using the following command :

pip install supervisor

This makes a command named echo_supervisord_conf available, which prints a sample configuration.

Now run the following command to generate the config file :
echo_supervisord_conf > supervisord.conf

This would generate a config file, supervisord.conf, which holds all the keys to our magic.
Now move this file to the destination folder containing our Celery code. In my case I have a folder named project inside the env folder (which contains files such as tasks.py).

Now cd into the project folder.

Now open the file we have just moved, and add the following lines:

[program:tasks]
command=celery worker -A tasks --loglevel=INFO
stdout_logfile=celeryd.log
stderr_logfile=celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

[program:flower]
command=celery flower -A tasks
stdout_logfile=flower.log
stderr_logfile=flower.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

Since we need to run both the worker and Flower, they are added as two separate programs, as written above. We can also put them in a group so that they are started and stopped together; a sketch of that follows below. Most of these fields are self-explanatory; for a full picture, see the references at the end of this section.
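A minimal grouping sketch (the group name celeryd is my own choice, not something Supervisor requires):

[group:celeryd]
programs=tasks,flower

With this in place, both programs can be started, stopped and restarted together as celeryd:*.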

Now starting the daemons:
Just go to the project directory (the folder where we moved the config file), open a terminal,
and run the following command:
supervisord

This would start both the Flower and Celery worker processes as daemons.
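Supervisor also ships a control client, supervisorctl, which talks to the running supervisord. Assuming the default [unix_http_server] and [supervisorctl] sections generated by echo_supervisord_conf are still in place, the daemons can be inspected and restarted like this:

supervisorctl -c supervisord.conf status
supervisorctl -c supervisord.conf restart tasks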

Stopping Supervisord

If we want to stop supervisord, just run the following command:

killall supervisord

References:
https://micropyramid.com/blog/celery-with-supervisor/
http://jamie.curle.io/posts/bottle-and-supervisord/
https://serversforhackers.com/monitoring-processes-with-supervisord

Invoking Celery Tasks from Java Application – Part #2

In the previous post we saw how to invoke Celery tasks from a Java application, but that was based on sending messages to a RabbitMQ queue using the RabbitMQ client libraries. In this post, let's get familiar with a more convenient way: using REST APIs.

For this, we need to install a Celery monitoring tool called Flower. Not every version of Flower serves our purpose; what worked for me is the development version (the install command is below):
pip install https://github.com/mher/flower/zipball/master#egg=flower

So let me assume that we have tasks.py with a task named add

@app.task
def add(x, y):
    print(x + y)

Now run the worker:
celery -A tasks worker --loglevel=info

Starting flower
Finally, it is time to start Flower so that we can access and control both tasks and workers using Flower's REST APIs. For that, we need to run the following command:

celery flower -A appname (celery flower -A tasks)

Care should be taken to specify the project name (here, tasks) when starting Flower, because the APIs would not work properly otherwise.

Flower can now be viewed at the URL http://localhost:5555 (or using the respective hostname). It has different tabs showing the status of tasks, workers and so on. So basically, what we are going to do is use the APIs that Flower itself uses for the aforementioned features directly in our application.
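For instance, the worker list behind the Workers tab is also exposed over the API; a quick check (the exact fields returned may vary with the Flower version):

curl -X GET http://localhost:5555/api/workers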

To simulate the REST API calls, I am using the curl command throughout this post, as I am coming from a Linux background. These APIs can be integrated from any programming language.

1. Invoking a celery task

curl -X POST -d '{"args":[1,2]}' http://localhost:5555/api/task/async-apply/tasks.add

This would trigger the Celery task add with parameters 1 and 2 and generate output similar to the following:

{
"task-id": "81775ebb-7d88-4e91-b580-b3a2d79fe668",
"state": "PENDING"
}

So this API returns the task id of the generated task, which can be used to track it whenever we want.

2. Retrieving information regarding a specific task using its id

curl -X GET http://localhost:5555/api/task/info/81775ebb-7d88-4e91-b580-b3a2d79fe668

output :
{
"task-id": "81775ebb-7d88-4e91-b580-b3a2d79fe668",
"result": "'None'",
"clock": 371,
"routing_key": null,
"retries": 0,
"failed": false,
"state": "SUCCESS",
"kwargs": "{}",
"sent": false,
"expires": null,
"exchange": null,
"started": 1466248131.745754,
"timestamp": 1466248131.837694,
"args": "[1, 2]",
"worker": "celery@space-Vostro-3800",
"revoked": false,
"received": 1466248131.744577,
"exception": null,
"name": "tasks.add",
"succeeded": 1466248131.837694,
"traceback": null,
"eta": null,
"retried": false,
"runtime": 0.09263942600227892
}

3. Listing all the tasks sent to workers

curl -X GET http://localhost:5555/api/tasks

output :
{
"81775ebb-7d88-4e91-b580-b3a2d79fe668": {
"received": 1466248131.744577,
"revoked": false,
"name": "tasks.add",
"succeeded": 1466248131.837694,
"clock": 371,
"started": 1466248131.745754,
"timestamp": 1466248131.837694,
"args": "[1, 2]",
"retries": 0,
"failed": false,
"state": "SUCCESS",
"result": "'None'",
"retried": false,
"kwargs": "{}",
"runtime": 0.09263942600227892,
"sent": false,
"uuid": "81775ebb-7d88-4e91-b580-b3a2d79fe668"
},
"50c589e1-b613-496f-af1e-c94c04b163dc": {
"received": 1466248086.289584,
"revoked": false,
"name": "tasks.add",
"succeeded": 1466248086.339701,
"clock": 313,
"started": 1466248086.291148,
"timestamp": 1466248086.339701,
"args": "[4, 3]",
"retries": 0,
"failed": false,
"state": "SUCCESS",
"result": "'None'",
"retried": false,
"kwargs": "{}",
"runtime": 0.049509562999446644,
"sent": false,
"uuid": "50c589e1-b613-496f-af1e-c94c04b163dc"
}
}

4. Terminating a task
curl -X POST -d 'terminate=True' http://localhost:5555/api/task/revoke/81775ebb-7d88-4e91-b580-b3a2d79fe668

References:
https://pypi.python.org/pypi/flower
http://flower.readthedocs.io/en/latest/api.html
http://nbviewer.jupyter.org/github/mher/flower/blob/master/docs/api.ipynb

Reading Java property file in Python

Accessing a Java property file from Python code is an easy task. For this we need to install a Python module called pyjavaproperties. (There are many other ways to do this; I prefer this module.)

To install it, run the following command:

sudo pip install http://pypi.python.org/packages/source/p/pyjavaproperties/pyjavaproperties-0.6.tar.gz

How to use it 

Say we have a property file named config.properties, which is as follows:

config.properties
user=Crunchify
company1=Google
company2=eBay
company3=Yahoo
Now open a Python IDE and add the following lines:
from pyjavaproperties import Properties

p = Properties()
p.load(open('config.properties'))
p.list()            # prints all the properties and their values
print(p['user'])    # prints Crunchify
References:
https://pypi.python.org/pypi/pyjavaproperties
https://www.versioneye.com/python/pyjavaproperties/0.6

Uploading and Extracting data from mysql as csv

Extract data as csv from mysql table 

From the terminal, run the following command (if you are copying it, note that the quotes may appear in a different format and need retyping):

mysql -u root -pspace123 -e "SELECT * from employee" mydb > asd.csv

The above command exports the data from the table employee of the database mydb into a csv file called asd.csv. By default the delimiter in this file is a tab, and this can be overridden. Also note that there is no space after -p: unlike with -u, the password is expected to be written together with the -p option; otherwise an error is thrown.

asd.csv

empno   ename
1       ram
1000    Jeena

The header row can be omitted by adding the -N parameter to the previous command:

mysql -N -u root -pspace123 -e "SELECT * from employee" mydb > asd.csv

asd.csv

1       ram
1000    Jeena
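If a different delimiter is needed, one simple option (a sketch, not the only way) is to pipe the output through sed and turn the tabs into, say, semicolons:

mysql -N -u root -pspace123 -e "SELECT * from employee" mydb | sed 's/\t/;/g' > asd.csv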

Upload csv into mysql table

For this, the only requirement is that there is a table in the database with the same name and structure as the csv file we are going to upload. In my case I have an employee.csv file and a table with the same name, employee, and the same structure.

employee.csv
1;ram
1000;Jeena

Now type the following command from the terminal,

mysqlimport -u root -pspace123 --fields-terminated-by=';' --local mydb employee.csv

This would upload the csv file into the table employee, which can be verified from the mysql console:

mysql> select * from employee;
+-------+-------+
| empno | ename |
+-------+-------+
|     1 | ram   |
|  1000 | Jeena |
+-------+-------+

Upload csv file into Remote Table

mysqlimport -h remote-host-name --port 3306 -u username -ppassword --fields-terminated-by=';' --local remote-db-name filename.csv-on-local-machine

mysqlimport -h example.com --port 3306 -u testUser -ptestUser123 --fields-terminated-by=';' --local demo test.csv
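As a quick sanity check after a remote import, the row count can be queried over the same connection details (example.com, testUser and demo are the placeholders used above; mysqlimport derives the table name test from the file name test.csv):

mysql -h example.com --port 3306 -u testUser -ptestUser123 -e "SELECT COUNT(*) FROM test" demo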