Access denied issue with mysqlimport on a remote machine

When importing a CSV file into a table with the same structure on a different machine, using mysqlimport as the MySQL root user, it is common to end up with an access denied error.

mysqlimport -h yourhostname --port 3306 -u root -pyourpassword --fields-terminated-by=';' --local csv_test tweet.csv

For instance, the above command, which works on some machines, may raise:

mysqlimport: Error: 1045 Access denied for user 'root'@'ipaddress-of-remote-host' (using password: YES)

The fix for this issue is as follows:

  1. First check the my.cnf file (inside the /etc/mysql folder) and look for the line:

bind-address = 127.0.0.1

If it is not commented, comment it out, as:

 #bind-address = 127.0.0.1

This makes MySQL listen on all interfaces, so it is accessible from other hosts as well.

2. Next we need to grant permission to the root user.

For that, log in to the MySQL prompt using mysql -u root -p, then execute the following query:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password';

(Note: on MySQL 8.0 and later, GRANT no longer accepts IDENTIFIED BY; there you would create the user with CREATE USER first and then grant privileges.)

3. Now restart mysql using the command:

service mysql restart
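
If you want to verify the grant from the client side, here is a minimal sketch in Python, assuming the mysql-connector-python package is installed (pip install mysql-connector-python); the host and credentials are the placeholders used above:

import mysql.connector

# Connect to the remote server with the same credentials mysqlimport uses.
conn = mysql.connector.connect(
    host="yourhostname",
    port=3306,
    user="root",
    password="yourpassword",
    database="csv_test",
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_USER()")
print(cur.fetchone())  # should print ('root@%',) instead of raising an access denied error
conn.close()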

VirtualBox – Copy Paste Not working (Debian)

Even though you have set the Shared Clipboard option to Bidirectional in Settings (under General → Advanced), sometimes it will not work. The following steps fixed this issue for me.

In the guest OS do the following :

  1. Update the sources.list file.
  • Enable the contrib repositories; e.g. for Debian 8 (jessie), make sure your /etc/apt/sources.list contains something like:
    deb http://ftp.debian.org/debian jessie main contrib

2. Install the guest additions software:

apt-get update
apt-get install virtualbox-guest-dkms

Reboot the guest after the installation finishes; the shared clipboard should work after that.

RabbitMQ – Status Checking

To see the status of RabbitMQ:

sudo rabbitmqctl status

To stop RabbitMQ:

sudo rabbitmqctl stop

(Try the status command again to see that it's stopped.) To start it again, the recommended method is:

sudo invoke-rc.d rabbitmq-server start


Still not getting it up?

Try restarting your system; it should work after that.
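
If you want to check from code that the broker is actually accepting connections, here is a minimal sketch in Python, assuming the pika client library is installed (pip install pika):

import pika

# Open (and immediately close) a connection to the local broker.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
print("RabbitMQ is up and accepting connections")
connection.close()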

Celery - Bound Tasks

A task being bound means the first argument to the task will always be the task instance (self).
Bound tasks are needed for retries (using app.Task.retry()), for accessing information about the current task request, and for any additional functionality you add to custom task base classes (Ref: the Celery documentation).

An example that prints the task ID of the current task is added below:

@app.task(bind=True, name="tasks.get_ID")
def get_ID(self):
    print("hooo")
    print(self.request.id)

It is always good practice to give tasks explicit names; otherwise Celery falls back to automatic naming, which in some situations may lead to a "task not registered" error.
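
Since bind=True is also what makes self.retry() available, here is a short sketch of a retrying bound task; the task name and the do_work() helper are hypothetical, purely for illustration:

@app.task(bind=True, name="tasks.process_item", max_retries=3)
def process_item(self, item_id):
    try:
        return do_work(item_id)  # hypothetical helper that may fail transiently
    except Exception as exc:
        # Re-queue the task; countdown waits 5 seconds before the next attempt.
        raise self.retry(exc=exc, countdown=5)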


Java – Batch Upload to Database

Usually I/O operations are costly. When I tried to upload my huge CSV file of 1.8 lakh (180,000) records one by one into a MySQL table, it took more than 30 minutes, which was obviously not an acceptable result. So I had to resort to batch uploading. On writing the data in batches of 1000 records, everything was over in 30-40 seconds.

Let's see the code in detail.

The important thing to remember is that we need to turn off auto-commit mode. If this mode were enabled, every record pushed to the DB would immediately get written into the table, nullifying the effect of batch uploading. With it disabled, once enough records have been pushed into the DB cache, we write them to the table explicitly by calling commit().

In the following example, records are read from a CSV file named input.csv and their first three fields are written into a table called "batch" in the database test. MySQL was my DB of choice.

First, auto-commit mode is turned off by calling setAutoCommit(false) on the DB connection object. Each record is then read and added to the current batch using addBatch().

When we have 1000 records in the batch, i.e. the count variable becomes a multiple of 1000, we write them to the DB by calling executeBatch(). Since auto-commit mode is disabled, we also need to call commit() on the connection object to get this data actually written into the DB.

GitHub link here

BatchUpload.java

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchUpload {

    public static void main(String[] args) throws IOException, SQLException {

        String line = "";
        String delimiter = ",";
        int count = 1;

        DBConnector.createConnection();
        Connection dbConn = DBConnector.getDBConnection();
        PreparedStatement ps = DBConnector.getPSInstance();

        // Turn off auto-commit so that records are not written one at a time.
        dbConn.setAutoCommit(false);

        String inputFile = "input.csv";
        BufferedReader br = new BufferedReader(new FileReader(inputFile));

        while ((line = br.readLine()) != null) {
            String[] entities = line.split(delimiter);
            try {
                ps.setString(1, entities[0]);
                ps.setString(2, entities[1]);
                ps.setString(3, entities[2]);

                // Push the record into the DB cache.
                ps.addBatch();

                // Every 1000 records, flush the batch and commit.
                if (count % 1000 == 0) {
                    ps.executeBatch();
                    dbConn.commit();
                    System.out.println(count + " records inserted into the batch table!");
                }
                count++;

            } catch (SQLException e) {
                System.out.println(e.getMessage());
            }
        }
        br.close();

        /* Write the remaining records (the last partial batch) into the DB. */
        ps.executeBatch();
        dbConn.commit();
        dbConn.close();
    }
}

DBConnector.java 

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DBConnector {

    private static final String DB_DRIVER = "com.mysql.jdbc.Driver";
    private static final String DB_CONNECTION = "jdbc:mysql://localhost:3306/test";
    private static final String DB_USER = "root";
    private static final String DB_PASSWORD = "root";

    private static Connection conn;
    private static PreparedStatement ps;

    public static void createConnection() {

        try {
            // Load the MySQL JDBC driver.
            Class.forName(DB_DRIVER);
        } catch (ClassNotFoundException cnf) {
            System.out.println("Driver could not be loaded: " + cnf);
        }

        try {
            conn = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
            String query = "INSERT INTO batch"
                    + " (userID, username, address) VALUES"
                    + " (?,?,?)";
            ps = conn.prepareStatement(query);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public static Connection getDBConnection() {
        return conn;
    }

    public static PreparedStatement getPSInstance() {
        return ps;
    }
}
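
A side note on performance: with MySQL Connector/J, batch inserts can be sped up further by adding rewriteBatchedStatements=true to the JDBC URL, which makes the driver rewrite a batch into multi-row INSERT statements:

jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true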

Error processing package nginx

During the installation of nginx from the apt repository, i.e. when we run apt-get install nginx, we may come across error messages like this:

Job for nginx.service failed. See 'systemctl status nginx.service' and 'journalctl -xn' for details.
invoke-rc.d: initscript nginx, action "start" failed.
dpkg: error processing package nginx-full (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of nginx:
nginx depends on nginx-full (>= 1.6.2-5+deb8u4) | nginx-light (>= 1.6.2-5+deb8u4) | nginx-extras (>= 1.6.2-5+deb8u4); however:
Package nginx-full is not configured yet.
Package nginx-light is not installed.
Package nginx-extras is not installed.
nginx depends on nginx-full (<< 1.6.2-5+deb8u4.1~) | nginx-light (<< 1.6.2-5+deb8u4.1~) | nginx-extras (<< 1.6.2-5+deb8u4.1~); however:
Package nginx-full is not configured yet.
Package nginx-light is not installed.
Package nginx-extras is not installed.

dpkg: error processing package nginx (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
nginx-full
nginx
E: Sub-process /usr/bin/dpkg returned an error code (1)

Fix

Stopping the apache service (or whichever web server is currently running) before installing nginx solves this issue: nginx fails to start, and hence fails to configure, because port 80 is already in use. Once nginx is installed, we can start the apache service again.

Hence the following steps are supposed to solve this issue.

1. sudo systemctl stop apache2.service
2. sudo apt-get install nginx
3. sudo systemctl start apache2.service
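
If you want to confirm that a port conflict really is the culprit, here is a minimal Python sketch that checks whether port 80 is already taken (binding to ports below 1024 requires root):

import socket

# Try to bind to port 80; failure means another server already holds it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 80))
    print("port 80 is free")
except OSError as e:
    print("port 80 is in use:", e)
finally:
    s.close()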

Celery in Production – Supervisor

In this tutorial we are going to see how Celery is set up in a production environment, where both the workers and other processes, such as the monitoring tool flower, have to run continuously. During development, both the worker and flower processes kept getting stopped somehow, forcing me to restart them every now and then. The solution for this, as suggested on the official Celery site, is to use a tool like Supervisor.

In production you will want to run the worker in the background as a daemon, and since there is always a chance of the Celery worker stopping on its own, it should be restarted automatically. Tools like supervisord do exactly this.

Installing Supervisor

First we need to set up a Python virtual environment. Run the following command to create a virtual environment for our demo project:
virtualenv env

Now move into this env folder and activate the virtual environment:
source bin/activate

(Now install Celery and the RabbitMQ client library in this virtual environment using pip.)

Now install supervisor using the following command :

pip install supervisor

This installs a script named echo_supervisord_conf, which prints a sample configuration.

Now run the following command to generate the config file:
echo_supervisord_conf > supervisord.conf

This generates a config file, supervisord.conf, which holds all the keys to our magic.
Now move this file to the folder containing our Celery code. In my case I have a folder named project inside the env folder (it contains files such as tasks.py).
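
For reference, the supervisor config below assumes a tasks.py along these lines; this is just a minimal sketch, and the broker URL is an assumption for a default local RabbitMQ:

from celery import Celery

# Broker URL assumes RabbitMQ's default guest account on localhost.
app = Celery("tasks", broker="amqp://guest:guest@localhost//")

@app.task(name="tasks.add")
def add(x, y):
    return x + y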

Now cd to the project folder.

Now open the file we have just copied and add the following lines:

[program:tasks]
command=celery worker -A tasks --loglevel=INFO
stdout_logfile=celeryd.log
stderr_logfile=celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

[program:flower]
command=celery flower -A tasks
stdout_logfile=flower.log
stderr_logfile=flower.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600

Since we need to run both the worker and flower processes, they are added as two separate programs, as written above. We can also put them in a group so that they are started and stopped together (see the snippet below). Most of these fields are self-explanatory; however, if you would like a clearer picture, see the references below.
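
For instance, grouping the two programs looks like this in supervisord.conf (the group name celeryapp is an arbitrary choice):

[group:celeryapp]
programs=tasks,flower

supervisorctl can then start and stop them together as celeryapp:*.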

Now starting the daemons:
Just go to the project directory (the folder where we copied the config file), open a terminal,
and run the following command:
supervisord

This starts both the flower and Celery worker processes as daemons.
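
Once supervisord is running, the individual programs can be inspected and restarted with supervisorctl, for example:

supervisorctl status
supervisorctl restart tasks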

Stopping Supervisord

If we want to stop supervisord, just run the following command:

killall supervisord

Ref:
https://micropyramid.com/blog/celery-with-supervisor/
http://jamie.curle.io/posts/bottle-and-supervisord/
https://serversforhackers.com/monitoring-processes-with-supervisord