Copy large amounts of data between servers

There are several ways to move data between servers in a UNIX/Linux/BSD environment, for example scp, ftp, SMB shares, NFS shares or rsync. In my experience, netcat is one of the fastest methods when you have a large amount of data to copy, since it pushes a raw TCP stream with no encryption or protocol overhead.

On the receiving server:

cd /  (or the base directory under which you want to receive the data)
nc -l 1234 | tar -zxvf -
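Note that the listen syntax differs between netcat variants: the form above works with the OpenBSD nc, while traditional/GNU netcat expects the port after a -p flag:

nc -l -p 1234 | tar -zxvf -  (same command for netcat-traditional or GNU netcat)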

This will make netcat listen on port 1234 (don’t forget to open the port in the firewall and/or iptables).
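For example, if the receiving server's firewall is managed directly with iptables, a rule along these lines (adapt it to your own ruleset) opens the port:

iptables -A INPUT -p tcp --dport 1234 -j ACCEPT  (allow incoming connections on the netcat port)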

On the transmitting server:

tar -zcvf - /data/ | nc -q 1 targethostname 1234  (replace /data/ with the folder you want to copy and targethostname with the hostname or IP address of the receiving server)
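If you want to watch the throughput while the transfer runs, and assuming the pv utility is installed on the sending server, you can splice it into the pipe (the "v" is dropped from tar here so its file listing does not clobber pv's progress display):

tar -zcf - /data/ | pv | nc -q 1 targethostname 1234  (pv prints the bytes transferred and the current transfer rate)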

If you have more bandwidth than computing power (i.e. a slow machine on either end), consider leaving out the "z" in the tar commands on both sides and transferring the data uncompressed, as shown below.
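The uncompressed variant looks like this:

On the receiving server:

nc -l 1234 | tar -xvf -

On the transmitting server:

tar -cvf - /data/ | nc -q 1 targethostname 1234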

If you need to move data between physical locations, never underestimate the bandwidth of a car loaded with hard disks (a quote from a former colleague :)).

 
