Comments on Shell Script To Back Up All MySQL Databases, Each Table In An Individual File And Upload To Remote FTP

This script will create a backup of each table in every database (one file per table), compress it, and upload it to a remote FTP server.

10 Comment(s)


Comments

By: Dave

I would recommend AutoMySQLBackup as a better alternative.  You can find more about it on SourceForge at:

http://sourceforge.net/projects/automysqlbackup/

By: Webmaster

Hi Marc,

Many thanks for sharing your shell-scripting knowledge; this works perfectly.

On my first try I got two errors saying that "which mysql" and "which mysqldump" were not found, so I manually edited those two lines and put in the real paths of the two commands instead. I am using LAMPP, by the way.

I also put the script in my cron.hourly and tested it; it works fine, and I checked my FTP directory and everything is there as expected.

I ran it with root privileges, on Fedora 6, behind a firewall/proxy, and found no problems.

Once again, thanks for this great script. I hope more people will enjoy using it.
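The manual edit described above amounts to something like the following sketch: keep the `which` auto-detection, but fall back to hard-coded paths when it fails. The LAMPP locations are an assumption; adjust them to your own install.

```shell
# Try to auto-detect the binaries first; `which` prints nothing on failure.
MYSQL="$(which mysql 2>/dev/null)"
MYSQLDUMP="$(which mysqldump 2>/dev/null)"

# Fall back to hard-coded paths (LAMPP defaults, assumed) if detection failed.
[ -x "$MYSQL" ] || MYSQL="/opt/lampp/bin/mysql"
[ -x "$MYSQLDUMP" ] || MYSQLDUMP="/opt/lampp/bin/mysqldump"

echo "$MYSQL"
echo "$MYSQLDUMP"
```

Either way, the rest of the script can keep using `$MYSQL` and `$MYSQLDUMP` unchanged.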


By:

Quote from the famous movie Snatch:

"...if I throw a dog a bone, I don't want to know if it tastes good or not..."

This works perfectly for me. I just wanted to keep a backup of my important databases; they are quite small (under 50 MB) but contain very important data.

If you think it's poor, why don't you open a text editor and write a perfect script you could share with people?

Marc

By:

To fix the problem with consistency between the tables (see danielj's comment), I recommend replacing those lines:

### Create dir for each database, back up the whole database in one file ###
  mkdir $BACKUP/$NOW/$db
  FILE=$BACKUP/$NOW/$db/$db.sql.gz
  echo $db; $MYSQLDUMP --add-drop-table --allow-keywords -q -c -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done

### Compress all tables in one nice file to upload ###

Instead of creating a backup of individual tables, this will create one backup per database. You can open the .sql file afterwards in your favorite text editor and retrieve table data if needed.
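Retrieving a single table from a whole-database dump can also be done without a text editor: mysqldump separates tables with "-- Table structure for table" comment headers, so awk can slice out one section. A hypothetical example with a tiny fabricated dump file:

```shell
# Fabricated two-table dump, standing in for a real mysqldump output file.
cat > dump.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE users (id INT);
-- Table structure for table `orders`
CREATE TABLE orders (id INT);
EOF

# Print everything from the `users` header up to (not including) the next
# table's header.
USERS_SECTION=$(awk '/^-- Table structure for table `users`/{p=1}
     /^-- Table structure for table `/ && !/`users`/{p=0}
     p' dump.sql)
echo "$USERS_SECTION"
rm dump.sql
```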

By: Anonymous

# USERNAME
MyUSER="$(grep user /etc/mysql/debian.cnf | uniq | awk -F'=' '{print $2}' | sed -e 's/^ //')"
# PASSWORD
MyPASS="$(grep password /etc/mysql/debian.cnf | uniq | awk -F'=' '{print $2}' | sed -e 's/^ //')"
# Hostname
MyHOST="localhost"
# Servername
SERVER="$HOSTNAME"
# mysqldump options
MYSQLDUMPOPTS="--opt"
# Linux bin paths, change this if it can't be autodetected via which command
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
CHOWN="$(which chown)"
CHMOD="$(which chmod)"
GZIP="$(which gzip)"

# Backup Dest directory, change this if you have someother location
DEST="/root/backups"
# Main directory where backup will be stored
MBD="$DEST/mysql"
#create if not existing
if [ ! -e "$MBD" ]; then
        mkdir -p $MBD
fi
# Get date in mm-dd-yyyy format
NOW="$(date +"%m-%d-%Y")"
FNAME="mysql-backup"
# File to store current backup file
FILE=""
# Store list of databases
DBS=""
TMP="/tmp/$NOW"
mkdir -p $TMP
n=5;
# the first archive number must be zero
l=0;

while [ $n -gt 0 ]; do
        let "m = $n - 1"
        if [ -e $MBD/$FNAME.$m.tar.gz ]; then
                mv $MBD/$FNAME.$m.tar.gz $MBD/$FNAME.$n.tar.gz
        fi
        let "n = $n - 1"
done



# DO NOT BACKUP these databases
# IGGY="test"

[ ! -d $MBD ] && mkdir -p $MBD || :
[ ! -d $TMP ] && mkdir -p $TMP || :
# Only root can access it!
$CHOWN root:root -R $DEST
# Directories need the execute bit, otherwise even root's tools cannot traverse them
$CHMOD 700 $DEST

# Get all database list first
DBS="$($MYSQL -u $MyUSER -h $MyHOST -p$MyPASS -Bse 'show databases')"

for db in $DBS
do
   skipdb=-1
   if [ "$IGGY" != "" ];
   then
       for i in $IGGY
       do
           [ "$db" == "$i" ] && skipdb=1 || :
       done
   fi

   if [ "$skipdb" == "-1" ] ; then
       FILE="$db.$SERVER.$NOW.sql"
       # do all inone job in pipe,
       # connect to mysql using mysqldump for select mysql database
       # and pipe it out to gz file in backup dir :)
       $MYSQLDUMP -u $MyUSER -h $MyHOST -p$MyPASS $MYSQLDUMPOPTS $db > $TMP/$FILE
   fi
done
tar czvf $MBD/$FNAME.0.tar.gz $TMP &> /dev/null
rm -r "$TMP"
exit 0
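The `while` loop near the top of the script is what gives this version the retention the first script lacks: slot 0 always holds the newest archive, and older ones are shifted up until slot 5 falls off. A standalone demonstration of the same rotation, using empty dummy files instead of real tarballs:

```shell
# Demonstration only: dummy archives in a temp dir stand in for real backups.
MBD=$(mktemp -d)
FNAME="mysql-backup"
touch "$MBD/$FNAME.0.tar.gz" "$MBD/$FNAME.1.tar.gz"

n=5
while [ $n -gt 0 ]; do
  m=$((n - 1))
  # Shift archive m up into slot n, freeing slot 0 for the next backup.
  if [ -e "$MBD/$FNAME.$m.tar.gz" ]; then
    mv "$MBD/$FNAME.$m.tar.gz" "$MBD/$FNAME.$n.tar.gz"
  fi
  n=$m
done

ls "$MBD"
```

After one pass, the former `.0` and `.1` archives sit in slots `.1` and `.2`, and slot `.0` is free for the tarball the script creates at the end.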

By: danielj

This will create an inconsistent dump, since table A may be changed while table B is being dumped. This might for example break foreign key constraints. This is not necessarily a problem but should be kept in mind.

By:

Hi, indeed you are right. The tables will not be 100% consistent with each other, but for my usage that doesn't cause problems.

I use this with replication. When I do a checksum of my databases to check consistency with Maatkit (http://blogama.org/node/39), I get results table by table.

Most of the time only one table at a time is inconsistent (if it happens at all), so I didn't want a complete dump of the database.
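For readers who do need a consistent snapshot, mysqldump's --single-transaction option dumps all InnoDB tables inside one REPEATABLE READ transaction (MyISAM tables are not protected by it). A minimal sketch; the helper function and the database/user names are hypothetical:

```shell
# Hypothetical helper that assembles a consistent-dump command line.
# --single-transaction: one consistent snapshot for InnoDB tables.
# --quick: stream rows instead of buffering whole tables in memory.
consistent_dump_cmd() {
  db="$1"; user="$2"
  echo "mysqldump --single-transaction --quick --add-drop-table -u $user -p $db"
}

consistent_dump_cmd blogdb root
# On a real server you would pipe the dump through gzip, e.g.:
#   mysqldump --single-transaction --quick --add-drop-table -u root -p blogdb | gzip -9 > blogdb.sql.gz
```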


By: Anonymous

I think it's a very poor script.

Where is the retention?

What about the disk writes? You have to be careful with your disk when the databases are big.

And the bandwidth? Why not use a temporary directory and synchronize it to a final directory to save some bandwidth on the FTP connection? Or, even better, why FTP at all? You can save more time and bandwidth with rsync.

And what happens with big databases? You spend more time, CPU, and bandwidth making the dumps every hour and moving them to the FTP server than the applications spend using the database.

It is a script that does the job, but it is very poor.

By: bhavesh

mysql -u root -p -e "select * from databasename.tablename limit 4;" > /tmp/ABC.txt

By: Rosel

Great post!! Thanks for such a clear, step-by-step MySQL backup script.