Friday, December 20, 2013

Yahoo Messenger - Online Status and Quick Link



<a href="ymsgr:SendIM?YahooUser">
     <img border=0 src="http://opi.yahoo.com/online?u=YahooUser&m=g&t=2">

</a>

Change the "t=#" parameter to any number from blank through 25 to pick a different status image; many images are available.


http://opi.yahoo.com/online?m=g&t=21&u=artistan

Friday, December 13, 2013

Apache Permissions - CentOS

Ran into issues today with Apache and permissions on Linux.

Apparently the directories need to be WORLD executable.
This does not make sense to me, but there you go.
Note: our developers need group access, so we set the owner to nobody and the group to myusers; Apache will use the Other (World) permissions.

Create an alias to set up permissions in a web directory.


alias webperms="shopt -s dotglob; sudo chown -R nobody:myusers *; find . -type f -exec sudo chmod 0464 {} \; ; find . -type d -exec sudo chmod 2575 {} \; ; grep -Rl \#\!/usr/local/bin/php * | xargs sudo chmod 0474; grep -Rl \#\!/bin/bash * | xargs sudo chmod 0474; shopt -u dotglob;"

Breakdown of the aliased commands.

Include hidden files (dotfiles) in glob expansion

 shopt -s dotglob;

 CREATE AN ALIAS

alias webperms

OWNED BY NOBODY AND YOUR USER GROUP. (APACHE WILL USE "OTHER")

sudo chown -R nobody:myusers *;

FILES - OWNER READ - GROUP READ WRITE - WORLD READ

find . -type f -exec sudo chmod 0464 {} \; ;

DIRECTORIES - GROUP STICKY - OWNER READ EXECUTE - GROUP READ WRITE EXECUTE - WORLD READ EXECUTE

find . -type d -exec sudo chmod 2575 {} \; ;

FIND EXECUTABLE PHP FILES AND SET GROUP PERMISSIONS TO EXECUTE

grep -Rl \#\!/usr/local/bin/php * | xargs sudo chmod 0474;

FIND EXECUTABLE BASH FILES AND SET GROUP PERMISSIONS TO EXECUTE

grep -Rl \#\!/bin/bash * | xargs sudo chmod 0474;

Stop including hidden files (restore default glob behavior)

 shopt -u dotglob;
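
If you prefer something easier to read than the one-line alias, here is the same sequence sketched as a bash function. Nothing new is added; the nobody:myusers owner and chmod values simply mirror the setup above.

webperms() {
    shopt -s dotglob                                   # match dotfiles in globs
    sudo chown -R nobody:myusers *                     # owner nobody, group myusers
    find . -type f -exec sudo chmod 0464 {} \;         # files: owner r, group rw, other r
    find . -type d -exec sudo chmod 2575 {} \;         # dirs: setgid, owner rx, group rwx, other rx
    grep -Rl '#!/usr/local/bin/php' * | xargs -r sudo chmod 0474   # php scripts: group execute
    grep -Rl '#!/bin/bash' * | xargs -r sudo chmod 0474            # bash scripts: group execute
    shopt -u dotglob                                   # back to default globbing
}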

Thursday, September 19, 2013

Exception happening while executing LOAD DATA LOCAL INFILE

PDO Connection


    SQLSTATE[42000]: Syntax error or access violation: 1148 The used command is not allowed with this MySQL version

The fix is to enable LOCAL INFILE support when you create the connection.

    $connectionString = 'mysql:host='.$host.';dbname='.$schema;
    $conn = new PDO(
        $connectionString,
        $user,
        $pass,
        array(
            PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
            PDO::MYSQL_ATTR_LOCAL_INFILE => true
        )
    );
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

Mysqli Connection


    $conn = mysqli_init();
    mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, true);
    mysqli_real_connect($conn, _DBR_HST, _DBW_USR, _DBW_PAS, _DB_NAME);
    $result = mysqli_query($conn, $query);
    if($result){
        echo "yay";
    }
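
Once the connection allows it, the import itself is just a query. Here is a minimal sketch; the host, credentials, table name users_import, and CSV path are placeholders, not part of the original setup.

<?php
// Placeholder host, credentials, table, and file -- adjust for your setup.
$conn = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass', array(
    PDO::MYSQL_ATTR_LOCAL_INFILE => true,
));
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// LOAD DATA LOCAL INFILE reads the file on the client side; the MySQL
// server must also allow it (local_infile enabled).
$rows = $conn->exec("LOAD DATA LOCAL INFILE '/tmp/users.csv'
    INTO TABLE users_import
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES");

echo "Imported $rows rows\n";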

Wednesday, August 28, 2013

Worst Php Code EVER!





Quote from reddit PHP

Creating a user from the web problem.
I have a form that creates a user by entering the username and their password. The code I'm using in php is:
shell_exec("sudo useradd -p $encpass -g groupname -s /bin/bash $username");
I have used a whoami and have confirmed that it runs as http. In /etc/sudoers I have
http ALL=(ALL) NOPASSWD: ALL
root ALL=(ALL) ALL
%wheel ALL=(ALL) NOPASSWD: ALL
%sudo ALL=(ALL) ALL
I also added http to group wheel. The problem I am having is it's not setting the password correctly. The user is created, just the password isn't set. I know that $encpass has a value because I can display it. I also know the command works because it runs fine in command line. This was working before, but I had to reinstall Arch Linux, so does anyone have an idea for why this doesn't work?
Go check out the comments!!!
http://www.reddit.com/r/PHP/comments/1l7baq/creating_a_user_from_the_web_problem/

 

Wednesday, August 7, 2013

WinSCP and Cygwin


Most people believe Cygwin will not work with WinSCP.

I am not a fan of believing something will not work, so I set up "Custom Commands" in WinSCP to start Cygwin mintty terminals.

SSH KEYS

First of all, you should have SSH keys set up to allow automatic login to the server.
The following custom command added to WinSCP will add your SSH key to a server automatically. I set this command up with the hotkey Ctrl+0.

C:\cygwin\bin\mintty.exe /bin/bash -ile /cygdrive/c/Dropbox/WinSCP/ssh.sh !U !@ !P
 This is the ssh.sh script.

echo "USE THIS PASSWORD: $3"
cat ~/.ssh/id_dsa.pub | ssh $1@$2 'cat - >>~/.ssh/authorized_keys2'
This echoes the password (for entry via WinSCP) and installs your SSH key on the server you are connected to.

STARTING A CYGWIN TERMINAL (Mintty)


Now you need a command to launch Cygwin mintty.exe and log you into that server. I set this command up with the hotkey Ctrl+1, which is very easy to use.

C:\cygwin\bin\mintty.exe /bin/ssh !U@!@

SSHPASS

And that is all you need to use Cygwin with WinSCP.
If you really wanted to, you could avoid SSH keys by installing sshpass in Cygwin.

I created another file, sshpass.sh, like this:
/usr/local/bin/sshpass.exe -p "$3" ssh $1@$2
Then I added a custom command pointing at that bash script, set up with the hotkey Ctrl+2.

C:\cygwin\bin\mintty.exe /bin/bash -ile /cygdrive/c/Dropbox/WinSCP/sshpass.sh !U !@ !P

Tuesday, July 30, 2013

Machine Configs for Bash Php & Apache


System wide configs for Bash, Php, and Apache (Php).

 We are virtualizing our systems and want to get to a standard config that will allow our single code base to run on many servers that may be utilizing other servers for services such as MySQL, Memcached, NFS shares, and so on. The idea is that the server itself could hold the config necessary for it to communicate with its sibling systems.

Example Diagram:


Notice how the top section is web, middle is database, and bottom is storage.
Now each set of web servers could communicate with one or more databases and storage servers. But how do you let your code know which ones it should be using?

Here is where I am breaking off into Environment Variables!

   

Environment Variables.

By creating a single set of OS based environment variables, we can share the same config across all code on a server and standardize the setup for a new server that looks at other service providers.

Step 1: Profile Configs.

All login shell connections have a profile with "configs" that are loaded via /etc/profile.d (CentOS) or something similar. I created a file in profile.d that will load automatically in most situations, such as executing a Php CLI script via a login shell. Then it just takes a few tweaks for Apache and others.

/etc/profile.d/custom.config.sh
export MYVAR="this custom var"
export MYVAR2="another custom var"
Now, to load this into Apache you first need to add
. /etc/profile.d/custom.config.sh
to your /etc/sysconfig/httpd file. This tells Apache to source that file when it starts.
Next you need to tell Apache which variables to include in its SERVER variables. It is inclusive only, so you need to name each variable for it to be included. I did this in the main /etc/httpd/conf/httpd.conf file, but you should be able to do it in any of the /etc/httpd/conf.d/ files.

Example:

PassEnv MYVAR MYVAR2
Now, this will allow these variables to be passed to Apache Php scripts for use. Here is a Php script to test your output.

test.php

 <?php
echo "<pre>SERVER\n";
print_r($_SERVER);
echo "ENV\n";
print_r($_ENV);
exit;
This should show something like this:
SERVER
Array
(
    [HTTP_AUTHORIZATION] =>
    [MYVAR] => this custom var
    [MYVAR2] => another custom var

    [HTTP_HOST] => www.bla.com
    [HTTP_USER_AGENT] => Mozilla/5.0 (Windows NT 6.1; WOW64; rv:22.0) Gecko/20100101 Firefox/22.0
    [HTTP_ACCEPT] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    [HTTP_ACCEPT_LANGUAGE] => en-US,en;q=0.5
    [HTTP_ACCEPT_ENCODING] => gzip, deflate
    [HTTP_CONNECTION] => keep-alive
    [PATH] => /sbin:/usr/sbin:/bin:/usr/bin
    [SERVER_SIGNATURE] =>
Apache/2.2.15 (CentOS) Server at www.bla.com Port 80


    [SERVER_SOFTWARE] => Apache/2.2.15 (CentOS)
    [SERVER_NAME] => www.bla.com
    [SERVER_ADDR] => 10.128.1.100
    [SERVER_PORT] => 80
    [REMOTE_ADDR] => 10.128.1.58
    [DOCUMENT_ROOT] => /var/www/vhosts/bla/website/www/
    [SERVER_ADMIN] => admin@bla.com
    [SCRIPT_FILENAME] => /var/www/vhosts/bla/website/www/index.php
    [REMOTE_PORT] => 57829
    [GATEWAY_INTERFACE] => CGI/1.1
    [SERVER_PROTOCOL] => HTTP/1.1
    [REQUEST_METHOD] => GET
    [QUERY_STRING] =>
    [REQUEST_URI] => /
    [SCRIPT_NAME] => /index.php
    [PHP_SELF] => /index.php
    [REQUEST_TIME_FLOAT] => 1375211324.713
    [REQUEST_TIME] => 1375211324
)
ENV
Array
(
)
   

 Overriding the Variables per Virtual Host

Another awesome thing is that you can override the variables per virtual host on the machine. This lets us override the config for a single virtual host using SetEnv, which is part of the Apache module mod_env.

Example config:

# bla.com public
<VirtualHost *:80>
    DocumentRoot /var/www/vhosts/bla/website/www/
    ServerName www.bla.com
    ServerAlias bla.com
    ErrorLog logs/www.bla.com-error_log
#  CustomLog logs/www.bla.com-access_log combined
    <Directory /var/www/vhosts/bla/website/>
        AllowOverride All
    </Directory>
    SetEnv SITE_NAME bla.com
    SetEnv MYVAR 192.168.0.100

</VirtualHost>

BASH

Now some more details on how to get these variables into BASH and Crontab Php Scripts.
#!/bin/bash -l
This tells bash to act as a login shell (-l), which will load the profile configs and hence our custom environment variables.

Example BASH script to test variables

#!/bin/bash -l
set > test.sh.txt

CRONTAB

We are down to the end here; with a slight wave of the hand we can pass the environment variables to Php crontab scripts as well. This is done by using bash -l to get a login shell and -c to execute a command via bash.

Example CRONTAB

[root@web1 vhosts]# crontab -l
# *    *    *    *    *  command to execute
# .    .    .    .    .
# .    .    .    .    .
# .    .    .    .    .
# .    .    .    .    ...... day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names)
# .    .    .    ........... month (1 - 12)
# .    .    ................ day of month (1 - 31)
# .    ..................... hour (0 - 23)
# .......................... min (0 - 59)

* * * * * cd /var/www/vhosts/; bash -l -c ./test.php
* * * * * cd /var/www/vhosts/; ./test.sh

Example Php Script ./test.php

#!/usr/local/bin/php
<?php

file_put_contents('./test.php.txt',var_export($_SERVER,true));
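
To keep code identical whether it runs under Apache, cron, or the CLI, a tiny helper can hide where the variable comes from. A minimal sketch follows; the function name env_config() and the default argument are just examples, not part of the setup above.

<?php
// Minimal sketch: read a config value from $_SERVER first (Apache
// PassEnv/SetEnv), then from the process environment (CLI/cron via
// bash -l), falling back to a default.
function env_config($name, $default = null)
{
    if (isset($_SERVER[$name])) {
        return $_SERVER[$name];
    }
    $value = getenv($name);
    return ($value !== false) ? $value : $default;
}

echo env_config('MYVAR', 'not set'), "\n";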
 

 

mysql security using config editor

Here is a sweet tidbit for those who need to do mysql scripts often.
This is great for bash scripts.


mysql_config_editor — MySQL Configuration Utility
The mysql_config_editor utility (available as of MySQL 5.6.6) enables you to store authentication credentials in an encrypted login file named .mylogin.cnf. The file location is the %APPDATA%\MySQL directory on Windows and the current user's home directory on non-Windows systems. The file can be read later by MySQL client programs to obtain authentication credentials for connecting to MySQL Server.
Stores encrypted login information.
mysql_config_editor set --login-path=local --host=localhost --user=fooman --password  

Then just use --login-path in your scripts...
 cat some-random-queries.sql | mysql --login-path=local
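
For example, a minimal backup sketch that keeps the password off the command line by using the "local" login path created above; the database name and output path are placeholders.

#!/bin/bash
# Minimal sketch: dump a database using the stored "local" login path.
# The database name and backup directory are placeholders.
mysqldump --login-path=local --single-transaction mydatabase \
    | gzip -9 > /backup/mysql/mydatabase_$(date +%F).sql.gz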

Tuesday, July 16, 2013

Obvious For Some - How to tail log for connections on port, CentOS?

This may be obvious to a lot of users out there, but I am making myself a note.
Mostly because I am more likely to remember if I make a note!



tail -f /var/log/messages | grep "DPT=3306"
This tails the messages log for any logged connection to port 3306, typically MySQL.
It is a handy way to see which IPs are connecting to MySQL.
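
Those DPT= entries come from iptables LOG rules, so this only works if the firewall is actually logging the traffic. A minimal rule to log new connections to 3306 might look like this (an assumption about your firewall setup, including the log prefix):

iptables -I INPUT -p tcp --dport 3306 -m state --state NEW -j LOG --log-prefix "MYSQL_CONN: "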

Monday, July 15, 2013

Backup/Copy a MySQL table to another table on same server.

So I need to update our table copies at work because we are finally moving from MyISAM to InnoDB tables!

I have created a simple script, thanks to Todd's MySQL Blog for some insights into transportable tablespaces and the MySQL dev site on copying InnoDB tables. The trick here is to ensure that you can still copy MyISAM tables during the transition.

More Error handling/checking to come...

create file: copy_table.php

#!/usr/local/bin/php
<?php
define('_REPLICATION_USER', 'xxx');
define('_REPLICATION_PASS', 'xxx');

$db = new PDO('mysql:host='.$argv[1].';dbname=mysql', 'xxx', 'xxx');

$originalschema=$argv[2];
$originaltablename=$argv[3];
$copyedschema=$argv[4];
$copyedtablename=$argv[5];

//copy_table.php localhost originalschema originaltablename copyedschema copyedtablename


// check schema info
$engine = $db->query("select engine from information_schema.tables 
where table_schema = '$originalschema' AND  table_name = '$originaltablename'")->fetch(PDO::FETCH_COLUMN);
$engine2 = $db->query("select engine from information_schema.tables where table_schema = '$copyedschema' AND  table_name = '$copyedtablename'")->fetch(PDO::FETCH_COLUMN);


if($engine && $engine2!=$engine){
    echo "creating $copyedschema.$copyedtablename\n";
    // different table type for copy table, drop it like its HOT!
    $db->query( "DROP TABLE $copyedschema.$copyedtablename" );
    // create copy
    $db->query( "CREATE TABLE IF NOT EXISTS $copyedschema.$copyedtablename LIKE $originalschema.$originaltablename" );
    // make sure there is something in there.
    $db->query( "INSERT INTO $copyedschema.$copyedtablename SELECT * FROM $originalschema.$originaltablename LIMIT 1" );
}

switch ($engine){
//####################################################################################
case "InnoDB":
    echo "InnoDB:\n\n";
    if(file_exists("/var/lib/mysql/$originalschema/$originaltablename.ibd")){
        $db->query( "ALTER TABLE $copyedschema.$copyedtablename DISCARD TABLESPACE" );
        $db->query( "FLUSH TABLES $originalschema.$originaltablename FOR EXPORT" );

        # save data file.
        exec("cp -pf /var/lib/mysql/$originalschema/$originaltablename.ibd /var/lib/mysql/$copyedschema/$copyedtablename.ibd");

        $db->query( "UNLOCK TABLES" );
        $db->query( "ALTER TABLE $copyedschema.$copyedtablename IMPORT TABLESPACE" );
    } else {
        echo "InnoDB file does not exist.\n\n";;
    }

    break;
//####################################################################################
case "MyISAM":
    echo  "MyISAM:\n\n";
    if(file_exists("/var/lib/mysql/$originalschema/$originaltablename.MYI")){
        $db->query( "FLUSH TABLES $originalschema.$originaltablename WITH READ LOCK" );

        # save data files.
        exec("cp -pf /var/lib/mysql/$originalschema/$originaltablename.frm /var/lib/mysql/$copyedschema/$copyedtablename.frm");
        exec("cp -pf /var/lib/mysql/$originalschema/$originaltablename.MYI /var/lib/mysql/$copyedschema/$copyedtablename.MYI");
        exec("cp -pf /var/lib/mysql/$originalschema/$originaltablename.MYD /var/lib/mysql/$copyedschema/$copyedtablename.MYD");

        $db->query( "UNLOCK TABLES" );
        $db->query( "FLUSH TABLES $copyedschema.$copyedtablename" );
    } else {
        echo "MyISAM file does not exist.\n\n";;
    }

    break;
//####################################################################################
}

Make the file executable.
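For example (assuming copy_table.php is in the current directory):

chmod +x copy_table.php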
 

Usage:
copy_table.php localhost originalschema originaltablename copyedschema copyedtablename


 

Wednesday, July 3, 2013

Recursive copy of files overwriting existing files

Here is a trick I just learned, again. It is amazing the things you forget and come back to every so often.

The cp command, at least in CentOS, is aliased by default to cp -i (interactive). To bypass this and copy over existing files without prompting, use /bin/cp or unalias the cp command. I am unsure whether this is the default in other Linux distros.

/bin/cp -Rfu source /var/www/html/

or

unalias cp
cp -Rfu source /var/www/html/

-R: Recursively copy files from directory.
-f:  if an existing destination file cannot be opened, remove it and try again
-u: copy only when the SOURCE file is newer than the destination file or when the destination file is missing

Recursive Deletion of .svn directories


After some evil bad version control errors by some crazy lunatic people I had to clean up svn for a project. The problem was that the files were added to svn on two different checkouts. When trying to do an update on one after a commit on the other it freaked the hell out. So I needed to re-checkout a clean version and then copy the new files over the top of the other files. This does have the problem of losing code, but I am doing it for the production copy, which should be the primary code.

Here is how I cleaned up the live copy so I could copy those changes over the top of the clean checkout without copying the svn files over the top also.

find . -type d -name .svn -exec rm -rf {} \; # linux command line

The . in the find command means start from the current directory, -type d matches only directories, -name .svn matches the directory name, and rm -rf is the command executed on each item found.
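
One small refinement: find may complain about .svn directories it has just removed while still trying to descend into them. Pruning them does the same job with fewer warnings:

find . -type d -name .svn -prune -exec rm -rf {} \; # skip descending into matched dirs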

Friday, May 10, 2013

Regex to replace charset in MySQL dump.

Converting all our databases to charset utf8mb4 and collation utf8_general_ci.
I exported all the structure using mysqldump from my last post.
Now, to make it easier to handle over 5,000 tables, I wanted to use regex; this is what I got...

/////////////////// USING Komodo Edit!! //////////////////////////////////////////


---- COLLATE STATEMENTS ----
FIND: COLLATE([ =]).*?([ ;])
REPLACE: COLLATE\1utf8_general_ci\2

---- COLLATE STATEMENTS 2 ----
FIND: collation_connection      = [^ ]*
REPLACE: collation_connection      = utf8_general_ci

---- CHARSET STATEMENTS ----
FIND: CHARSET([ =]).*?([ ;])
REPLACE: CHARSET\1utf8mb4\2
---- CHARSET STATEMENTS 2 ----
FIND: character_set_client = [^ ]*
REPLACE: character_set_client = utf8mb4
---- CHARSET STATEMENTS 3 ----
FIND: CHARACTER SET [^ ]*
REPLACE: CHARACTER SET utf8mb4
---- CHARSET STATEMENTS 3 ----
FIND: (character_set_[^ ]*[ ]*)= [a-z0-9_]+
REPLACE: \1     = utf8mb4

---- DEFINER STATEMENTS ----
FIND: DEFINER=`.*`@`.*`
REPLACE: DEFINER=`myuser`@`localhost`

---- ENGINE STATEMENTS ----
FIND: ENGINE=[^ ]*
REPLACE: ENGINE=InnoDB

other things I changed


----- standardize column lengths -----
FIND: (`columnprefix_.*`) varchar\(.*\)
REPLACE: \1 varchar(36)

----- standardize column lengths 2 -----
FIND: (`columname`) varchar\(.*\)
REPLACE: \1 varchar(36)
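
If you would rather script it than click through an editor, a rough sed equivalent of the CHARSET and ENGINE replacements might look like this. It assumes GNU sed, the file name all_structure.sql is a placeholder, and the patterns are simplified because sed has no non-greedy matching.

sed -i -E \
    -e 's/CHARSET([ =])[^ ;]*([ ;])/CHARSET\1utf8mb4\2/g' \
    -e 's/ENGINE=[^ ]*/ENGINE=InnoDB/g' \
    all_structure.sql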

 

Thursday, May 9, 2013

MySQL export all databases structure with triggers

So, I had many issues exporting a dev db because of missing permissions and more.
I finally worked out a way that would finish on our development server...

    mysqldump -pmy1pass --no-data --allow-keywords --compact --triggers --single-transaction --all-databases | gzip -9 > backup/mysql/all_structure.zip

So this skips table locking thanks to --single-transaction,
allows keywords as column names with --allow-keywords,
and keeps the export compact with --compact.

Woot.

Monday, May 6, 2013

Going food geek now..


SPARKPEOPLE
 
I was curious about what to eat AFTER a workout.
I found this on SPARKPEOPLE.

So what does the ideal meal or snack look like?
  • Calories. Ideally, try to eat enough calories to equal 50% of the calories you burned during your workout. So if you burn about 600 calories during your workout, try to eat 300 calories afterward.
    Don’t worry about undoing the calorie-burning benefits of your workout–that’s not how weight loss works. As long as you're eating within your recommended calorie range (whether for weight loss or maintenance), you'll be on your way to reaching your goals.
     
  • Carbohydrates. Roughly 60% of the calories you eat at this time should come from carbohydrates. Contrary to popular belief, your body needs more carbohydrates than protein after a workout, to replace the muscle fuel (glycogen) you used up and to prepare for your next exercise session. Moderate exercisers need about 30-40 grams of carbohydrates after an hour of exercise, but high-intensity exercisers need more—around 50-60 grams for each hour they exercised.
    If you have some favorite high-carb foods that are lacking in the whole grains and fiber that are often recommended as part of a healthy diet, this is a good time to have them! Your body can digest refined carbohydrates faster during your "refueling window," but if you’re a whole foods foodie, don’t force yourself to eat processed foods.
     
  • Protein. While carbs are essential, it’s also important to include some high-quality protein in your post-workout meal or snack. This protein will stop your body from breaking down muscle tissue for energy and initiate the process of rebuilding and repairing your muscles. About 25% of the calories you eat after a workout should come from protein—that's about 10-15 grams for most people.
     
  • Fat. Fat doesn't play a big role in post-workout recovery, and eating too much fat after a workout won't help your weight control or fitness endeavors. Only 15% (or less) of your post-workout calories should come from fat—that's less than 10 grams.

The ideal time to eat after a workout is within 30 minutes to two hours, when your body is ready and waiting to top off its fuel tanks to prepare for your next workout.


Here are some sample food combinations for your post exercise meal:
  • Bread, a bagel, or an English muffin with cheese or peanut butter
  • Dried fruit and nuts
  • Cottage cheese with fruit
  • Fruit juice with cheese
  • Yogurt with fruit
  • Veggie omelet with toast or roll
  • Chocolate milk
  • Cereal with milk
  • Eggs and toast
  • Turkey, ham, chicken, or roast beef sandwich
  • Vegetable stir-fry with chicken, shrimp, edamame or tofu
  • Crackers with low fat cheese
  • Rice or popcorn cakes with nut butter
  • Smoothie (with milk, yogurt, or added protein powder)
  • A protein or energy bar
  • A protein or energy shake
  • Pancakes and eggs
  • Any regular meal that contains lean protein, starch, and vegetables

MySQL Error when importing LARGE files.

<quote> 
   Hi,

   I got the MySQL Error Nr.2006-MySQL server has gone away when trying 
   to restore a 580MB file with mySQL administrator 1.1.9. However when
    I set the max_allowed_packet=100M in the my.ini file everything works
    fine. I don't know if this helps anyone.
</quote> 

Yes Frank, years later it still helps.

I was getting a  
ERROR 2006 (HY000) at line #: MySQL server has gone away
message from MySQL and nothing in the error logs!
Very frustrating to not be able to SEE that it was hitting a limit.
 

Simply increase your max_allowed_packet while doing the import.

This will increase the limit temporarily (until mysql restarted).
<SQL>
   set global max_allowed_packet = 100*1024*1024;
   show global variables like "max_allowed_packet";
</SQL>
Note that 100*1024*1024 in the SQL above is the byte equivalent of 100M.
If you want to make this permanent, just add this to the [mysqld] section of your my.cnf:
 
   max_allowed_packet = 100M 

Happy Coding!

Friday, May 3, 2013

Argument list too long error for rm (linux remove command)

...Via StackOverflow...
    thanks to Dennis and j0k

The reason this occurs is that bash actually expands the asterisk to every matching file name, producing a very long command line.

Try this:
    find . -name "*.pdf" -print0 | xargs -0 rm

Warning: this is a recursive search and will find (and delete) files in subdirectories as well. Tack -f onto the rm command only if you are sure you don't want confirmation.

If you're on Linux, you can do the following to make the command non-recursive:
find . -name "*.pdf" -maxdepth 1 -print0 | xargs -0 rm

Export MySQL databases to file..

Needed to export MANY databases, but only based on a wildcard% search.
This could also be used to back up databases!
Here is the bash script I came up with...
Update: added another script that is table-specific.

------------------------------------------------------------------------

backup_db.sh

------------------------------------------------------------------------

#!/bin/bash

# http://codeinthehole.com/writing/mysqldump-with-wildcard-table-matching/


if [ $# -lt 1 ]
then
    echo "Usage: DB (auto add the wildcard at the end)"
    exit 1
fi;

database=$1
echo "# $database DUMP" > $database.sql

echo "SELECT DISTINCT TABLE_SCHEMA FROM information_schema.TABLES where TABLE_SCHEMA LIKE '$database%'" | mysql -pmypass --database=information_schema | while read -r line
do
    echo "$line";
    mysqldump --databases $line  -pmypass >> $database.sql
done;

gzip $database.sql

------------------------------------------------------------------------

backup_tables.sh

------------------------------------------------------------------------


#!/bin/bash

# http://codeinthehole.com/writing/mysqldump-with-wildcard-table-matching/


if [ $# -lt 1 ]
then
    echo "Usage: ./backup_tables.sh {DB} {TABLE} (auto add the wildcard at the end)"
    exit 1
fi;

database=$1
table=$2
echo "# $database $table DUMP" > $database.$table.sql

echo "SELECT \`TABLE_SCHEMA\` as \`db\`, \`TABLE_NAME\` as \`table\` FROM information_schema.TABLES WHERE \`TABLE_SCHEMA\` LIKE '$database%' AND \`TABLE_NAME\` LIKE '$table%'" | mysql -pmypass --database=information_schema | while read db table
do
    echo "$line";
    mysqldump -pmypass  --hex-blob --triggers --databases $db --tables $table >> $database.sql
done;

gzip $database-$table.sql
-----------------------------------------------------------------------------------

------ UNZIPPING AND IMPORTING TO SERVER ------


gunzip -c {filename}.sql.gz | mysql -u root -pmypass --force


MySQL Query Logging

Needed to log some queries today in MySQL.
Here is the long and the short of it...


set global log_queries_not_using_indexes = 'ON';
SET GLOBAL general_log = 'ON';
SET GLOBAL log_slow_queries = 'ON';   -- called slow_query_log on MySQL 5.6+
set global long_query_time= 1;

Now check your settings...
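
For example, with a standard SHOW statement (pick whatever pattern suits you):

show global variables like '%log%';
show global variables like 'long_query_time';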