Thursday, October 30, 2014

Monitor Directories For Changes And Perform Action

I needed to monitor files for changes and execute a script when that happens. After a bit of Google searching I found the article "Linux incrond inotify: Monitor Directories For Changes And Take Action" and decided to copy it here for archiving purposes. Basically I just want to be able to find it again!


The incrond (inotify cron daemon) is a daemon which monitors filesystem events (such as adding a new file, deleting a file, and so on) and executes commands or shell scripts. Its use is generally similar to cron.

Install incron

Type the following command under RHEL / Fedora / CentOS Linux:
$ sudo yum install incron
Type the following command under Debian / Ubuntu Linux:
$ sudo apt-get install incron

Configuration Files

  • /etc/incron.conf - Main incron configuration file
  • /etc/incron.d/ - This directory is examined by incrond for system table files. Put your config files here, one per directory or domain name.
  • /etc/incron.allow - This file lists users allowed to use incron.
  • /etc/incron.deny - This file lists users denied use of incron.
  • /var/spool/incron - This directory is examined by incrond for user table files, which are created by users running the incrontab command.

incron Syntax

The syntax is as follows:
<directory> <file change mask> <command or action>  options
/var/www/html IN_CREATE /root/scripts/backup.sh
/sales IN_DELETE /root/scripts/sync.sh
/var/named/chroot/var/master IN_CREATE,IN_ATTRIB,IN_MODIFY /sbin/rndc reload
Where,
  • <directory> - An absolute filesystem path such as /home/data. Any change made under this path triggers the command or action.
  • <file change mask> - The mask is the set of filesystem events (such as deleting a file) to watch for. Each matching event can trigger the command. Use the following masks:
    • IN_ACCESS - File was accessed (read)
    • IN_ATTRIB - Metadata changed (permissions, timestamps, extended attributes, etc.)
    • IN_CLOSE_WRITE - File opened for writing was closed
    • IN_CLOSE_NOWRITE - File not opened for writing was closed
    • IN_CREATE - File/directory created in watched directory
    • IN_DELETE - File/directory deleted from watched directory
    • IN_DELETE_SELF - Watched file/directory was itself deleted
    • IN_MODIFY - File was modified
    • IN_MOVE_SELF - Watched file/directory was itself moved
    • IN_MOVED_FROM - File moved out of watched directory
    • IN_MOVED_TO - File moved into watched directory
    • IN_OPEN - File was opened
    • The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above events.
  • <command or action> - The command or script to run when the mask matches in the given directory.
  • options - Any of the following wildcards can be passed as arguments to your command (see the example after this list):
    1. $$ - dollar sign
    2. $@ - watched filesystem path (see above)
    3. $# - event-related file name
    4. $% - event flags (textually)
    5. $& - event flags (numerically)
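For example, a single table entry can hand the watched path, the file name, and the event to a script. The /usr/local/bin/on-change.sh path below is just a placeholder for your own script:

/var/www/html IN_CREATE,IN_DELETE,IN_MODIFY /usr/local/bin/on-change.sh $@ $# $%

And a minimal on-change.sh to go with it (remember to chmod +x it):

#!/bin/bash
# called by incrond as: on-change.sh <watched path> <file name> <event flags>
logger "incron: $3 on $1/$2"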

Turn Service On

Type the following command:
# service incrond start
# chkconfig incrond on
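On newer systemd based distros the same thing is done with systemctl; the unit may be named incrond or incron depending on the package, so adjust accordingly:
# systemctl enable incrond
# systemctl start incrond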

Examples:

Type the following command to edit your incrontab
incrontab -e
Run the logger command when any event occurs in the /tmp directory:
/tmp IN_ALL_EVENTS logger "/tmp action for $# file"
Save and close the file. Now cd to /tmp and create a file:
$ cd /tmp
$ >foo
$ rm foo

To see the message, enter:
$ sudo tail -f /var/log/messages
Sample outputs:
Jul 17 18:39:25 vivek-desktop logger: "/tmp action for foo file"

How Do I Run Rsync Command To Replicate Files For /var/www/html/upload Directory?

Type the following command:
# incrontab -e
Append the following command:
/var/www/html/upload/ IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
Now, whenever files are uploaded to the /var/www/html/upload/ directory, rsync will be executed to sync the files to the www2.example.com server. Make sure ssh keys are set up for passwordless login.

How Do I Monitor /var/www/html/upload/ and Its Subdirectories Recursively?

You cannot monitor the /var/www/html/upload/ directory recursively with incrond. However, you can use the find command to add all of its sub-directories as follows:
find /var/www/html/upload -type d -print0 | xargs -0 -I{} echo "{} IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/" > /etc/incron.d/webroot.conf
This will create /etc/incron.d/webroot.conf config as follows:
/var/www/html/upload IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/css IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/1 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/js IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/3 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010/11 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010/12 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/files IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/images IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
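One caveat: sub-directories created later are not picked up automatically, so you may want to regenerate the table periodically. A sketch, as an /etc/cron.d entry (incrond should reload the table when the file changes; if not, restart it):
0 2 * * * root find /var/www/html/upload -type d -print0 | xargs -0 -I{} echo "{} IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/" > /etc/incron.d/webroot.conf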

How Do I Troubleshoot Problems?

You need to check the /var/log/cron log file:
# tail -f /var/log/cron
# grep something /var/log/cron
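For example (the log file name varies by distro; Debian/Ubuntu typically use /var/log/syslog instead of /var/log/cron):
# tail -f /var/log/cron | grep incrond
# grep incrond /var/log/messages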


Wednesday, October 22, 2014

dealing with mysql limits/errors/configurations

You may need to update your my.cnf (mysql configuration file) to deal with limitations in mysql.
Here are a couple examples...


/etc/my.cnf #add these...


# ERROR 2006 (HY000): MySQL server has gone away
max_allowed_packet=64M

# maybe also a wait timeout....
wait_timeout = 28800
interactive_timeout = 28800

# ERROR 1118 (42000) at line ####: Row size too large (> 8126).
# make it 10 times larger than largest blob.
innodb_log_file_size=512M 

InnoDB BLOB limited by size of redo log

  • Important Change: Redo log writes for large, externally stored BLOB fields could overwrite the most recent checkpoint. The 5.6.20 patch limits the size of redo log BLOB writes to 10% of the redo log file size. The 5.7.5 patch addresses the bug without imposing a limitation. For MySQL 5.5, the bug remains a known limitation.
    As a result of the redo log BLOB write limit introduced for MySQL 5.6, the innodb_log_file_size setting should be 10 times larger than the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). No action is required if your innodb_log_file_size setting is already sufficiently large or your tables contain no BLOB data.
    Note
    In MySQL 5.6.22, the redo log BLOB write limit is relaxed to 10% of the total redo log size (innodb_log_file_size * innodb_log_files_in_group).
    (Bug #16963396, Bug #19030353, Bug #69477)
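Before touching my.cnf it's worth checking what the server is currently running with (values are in bytes):
$ mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
$ mysql -e "SHOW VARIABLES LIKE 'innodb_log_file%'"
$ mysql -e "SHOW VARIABLES LIKE '%timeout%'"
Note that innodb_log_file_size is not dynamic: it takes a mysqld restart, and on servers older than 5.6.8 you also need a clean shutdown and to move the old ib_logfile* files out of the way first.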

Tuesday, October 14, 2014

ssh keys - for newbies

Add SSH Key

SSH (Secure Shell) can be set up with public/private key pairs so that you don't have to type the password each time. Because SSH is the transport for other services such as SCP (secure copy), SFTP (secure file transfer), and other services (CVS, etc), this can be very convenient and save you a lot of typing.

SSH Version 2 - Setting up SSH public/private keys

On the local machine, type the commands shown at the prompt; everything else is output or a prompt.

Step 1:

   % ssh-keygen -t dsa
   Generating public/private dsa key pair.
   Enter file in which to save the key (~/.ssh/id_dsa): (just type return)
   Enter passphrase (empty for no passphrase): (just type return)
   Enter same passphrase again: (just type return)
   Your identification has been saved in ~/.ssh/id_dsa
   Your public key has been saved in ~/.ssh/id_dsa.pub
   The key fingerprint is:
   Some really long string
   %

Step 2:

   Then, paste the content of the local ~/.ssh/id_dsa.pub file into the file ~/.ssh/authorized_keys on the remote host.
   RSA instead of DSA
       If you want something strong, you could try
       % ssh-keygen -t rsa -b 4096
       Instead of the names id_dsa and id_dsa.pub, it will be id_rsa and id_rsa.pub, etc.
       The rest of the steps are identical. 
That's it!
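If ssh-copy-id is available on your local machine, it automates step 2 (it appends the key and fixes the remote permissions for you):
   % ssh-copy-id -i ~/.ssh/id_rsa.pub user@remotehost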

FAQ:

   Q: I followed the exact steps, but ssh still asks me for my password!
   A: Check your remote .ssh directory. It should be readable/writable/accessible only by you (octal 700):
   % chmod 700 ~/.ssh 
   Q: cygwin: chmod 600 does not work as expected?
   A: chgrp -R Users ~/.ssh
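Another common cause of still being asked for a password: sshd (with the default StrictModes) also checks the permissions on the authorized_keys file and on your home directory, so on the remote host:
   % chmod 600 ~/.ssh/authorized_keys
   % chmod go-w ~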

SSH Version 1

   Step 1:
   % cd ~/.ssh
   % ssh-keygen -t rsa1
   Generating public/private rsa1 key pair.
   Enter file in which to save the key (~/.ssh/identity): (just type return)
   Enter passphrase (empty for no passphrase): (just type return)
   Enter same passphrase again: (just type return)
   Your identification has been saved in ~/.ssh/identity
   Your public key has been saved in ~/.ssh/identity.pub
   The key fingerprint is:
   Some really long string
   %
   Step 2:
   Then, paste the content of the local ~/.ssh/identity.pub file into the file ~/.ssh/authorized_keys on the remote host. 

I'm using Cygwin on the Windows 8 Consumer Preview, and I had the same issue; it's definitely a Cygwin bug, but there's a workaround.
Try running:
   chgrp -R Users ~/.ssh
The longer explanation: for some reason Cygwin's /etc/passwd / /etc/group generation puts the user's default/main group as None. You cannot change the permissions of None, so the chmod for group has no effect. I didn't try repairing the passwd / group files myself, but I did a chgrp -R Users ~/.ssh (or the group "HomeUsers" on the Windows 8 pre-release). After that, you can do the chmod 0600 and it'll work as expected. The chgrp to the Users group can be done in any other similar case you find; it works because Cygwin puts users in the Users group as a secondary group (instead of primary, which would be the correct behavior).

Adding hosts

One Host

cd into the .ssh directory and execute a bash script with these contents:
#!/bin/bash
# append the local public key to the remote root's authorized keys
SERVER=$*
echo $SERVER
cat id_dsa.pub | ssh root@$SERVER "cat - >> ~/.ssh/authorized_keys2"
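Usage, assuming you saved the script as (for example) pushkey.sh:
   cd ~/.ssh
   bash pushkey.sh web1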

All Hosts from /etc/hosts

cd into the .ssh directory and execute a bash script with these contents:
#!/bin/bash
# push the local public key to every host listed with an IP in /etc/hosts
for i in $(sed 's/#.*//;' /etc/hosts | awk ' /^[[:digit:]]/ {$1 = "";print tolower($0)}')
do
cat id_dsa.pub | ssh root@$i "cat - >> ~/.ssh/authorized_keys2"
done

Single Server command line

   cat /root/.ssh/id_dsa.pub | ssh root@(server) 'cat - >>~/.ssh/authorized_keys2'

EXAMPLE __ CHUCK:

   cat ~/.ssh/id_dsa.pub | ssh root@db4 'cat - >>~/.ssh/authorized_keys2'

general hosts file

 127.0.0.1 localhost
 127.0.1.1 (YOUR PC NAME HERE)
 
#Office
192.168.0.104 admin
10.0.0.202    bendev
10.0.0.58     support
10.0.0.52     office
10.0.0.168    zenddev
10.0.0.227    dev_db1
10.0.0.228    dev_db2
10.0.0.221    dev_db3
10.128.1.28   gltail
 
#Minnetonka
192.168.0.108 web1
192.168.0.109 web2
192.168.0.110 bb3
192.168.0.100 db1
192.168.0.106 db2
192.168.0.113 db4
192.168.0.102 dbnew
192.168.0.107 ein
192.168.0.103 eout
192.168.0.111 services4
192.168.0.112 data1
 
#Dallas
10.20.0.21   da_db1
10.20.0.22   da_db2
10.20.0.23   da_db3
10.20.0.31   da_web1
10.20.0.39   da_web2
74.249.6.120 chat
 
#Hosting
hosting.resellersolutions.com hosting1
hosting2.resellersolutions.com hosting2
 
 # The following lines are desirable for IPv6 capable hosts
 ::1     localhost ip6-localhost ip6-loopback
 fe00::0 ip6-localnet
 ff00::0 ip6-mcastprefix
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
 ff02::3 ip6-allhosts

sshfs

First install the module:
   sudo apt-get install sshfs
Load it to kernel
   sudo modprobe fuse
Setting permissions
   sudo adduser maythux fuse
   sudo chown root:fuse /dev/fuse
   sudo chmod +x /usr/bin/fusermount
Now we’ll create a directory to mount the remote folder in.
I chose to create it in my home directory and call it remoteDir.
   mkdir ~/remoteDir
Now run the command to mount it (mounted under your home directory).
   sshfs maythux@192.168.xx.xx:/home/maythuxServ/Mounted ~/remoteDir
Now it should be mounted
   cd ~/remoteDir
   ls -l 
To unmount,

   fusermount -u ~/remoteDir
To add it to your /etc/fstab,

   sshfs#$USER@far:/projects /home/$USER/remoteDir fuse defaults,idmap=user 0 0
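On newer distros the same entry can also be written with the fuse.sshfs filesystem type (note that fstab does not expand $USER, so substitute your real user and host):

   user@far:/projects /home/user/remoteDir fuse.sshfs defaults,idmap=user,noauto,user 0 0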

suggested .bashrc file

 # /etc/skel/.bashrc
 #
 # This file is sourced by all *interactive* bash shells on startup,
 # including some apparently interactive shells such as scp and rcp
 # that can't tolerate any output.  So make sure this doesn't display
 # anything or bad things will happen !
 
 
 # Test for an interactive shell.  There is no need to set anything
 # past this point for scp and rcp, and it's important to refrain from
 # outputting anything in those cases.
 if [[ $- != *i* ]] ; then
        # Shell is non-interactive.  Be done now!
        return
 fi
 
 if [ -f ~/.bash_aliases ]; then
     . ~/.bash_aliases
 fi
 
 #enable bash AutoComplete from known hosts
 if [ -f /etc/bash_completion ]; then
  . /etc/bash_completion
 fi
 
 export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
 
 # Put your fun stuff here.
 alias apt='sudo apt-get install'
 alias remove='sudo apt-get remove'
 alias search='apt-cache search'
 alias rar='sudo'

suggested .bash_aliases file

 function benmount {
        server="`echo $@ | tr '[:upper:]' '[:lower:]'`"
        for i in $server; do
                echo -n "Mounting ${i}... "
                if [ ! -d "/www/servers/${i}" ]; then
                        sudo mkdir /www 2> /dev/null && sudo chmod 777 /www
                        mkdir -p /www/servers/${i}
                fi
                [ -d "/www/servers/${i}" ] \
                && sshfs root@${i}:/ /www/servers/${i} -C \
                        -o reconnect \
                        -o workaround=all \
                        -o follow_symlinks \
                        -o transform_symlinks \
                && echo "DONE" && continue
                echo "UNSUCCESSFUL" && continue
        done
        [ -z "$server" ] && echo -e "\nUsage: benmount <SERVER> <SERVER> ...\n" && return 1 || return 0
 }
 
 function benumount {
        [ -z "$@" ] && list="`ls /www/servers | tr '[:upper:]' '[:lower:]'`" || list="`echo "$@" | tr '[:upper:]' '[:lower:]'`"
        for i in $list; do
                fusermount -u /www/servers/${i} && rmdir /www/servers/${i} && continue
                [ -d "/www/servers/${i}" ] && [ -z "$(ls /www/servers/${i})" ] && rmdir /www/servers/${i}
        done
        return 0
 }
 
 function benssh {
        [ -z "$1" ] && echo -e "\nUsage: benssh <SERVER>\n" && return 1
        server="`echo "$1" | tr '[:upper:]' '[:lower:]'`"
        ssh -C root@$server
 }
 
 function benpub {
        uname="$1"
        pword="$2"
        [ -z "$uname" -o -z "$pword" ] && [ ! -f "$HOME/.benpub" ] \
            && echo -e "\nUsage: benpub <USERNAME> <PASSWORD>\n(first time only/or if changing credentials)\n" \
            && return 1
        [ -n "$uname" -a -n "$pword" ] && echo -e "username=${uname}\npassword=${pword}" > $HOME/.benpub && chmod 600 $HOME/.benpub
        [ -n "$(mount | grep 10\.0\.0\.10/Public)" ] && echo "Public already mounted" && return 1
        [ ! -d "/www" ] && sudo mkdir /www 2> /dev/null && sudo chmod 777 /www
        mkdir -p /www/public && sudo mount -t cifs -o cred=$HOME/.benpub //10.0.0.10/Public /www/public/ && return 0 || rmdir /www/public
 }
 
 function benupub {
        [ -z "$(mount | grep 10\.0\.0\.10/Public)" ] && echo "Public not mounted" && return 1
        sudo umount /www/public/ && rmdir /www/public
 }
 
 complete -F _known_hosts benmount
 complete -F _known_hosts benumount
 complete -F _known_hosts benssh

You will need to reload your bash
 source ~/.bashrc
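After reloading, usage looks something like this (host names are whatever resolves from your /etc/hosts):
 benmount web1 db4      # sshfs-mount each server under /www/servers/<name>
 benssh db4             # compressed ssh session as root
 benumount web1         # unmount one server; with no argument it unmounts everything under /www/servers
 benpub myuser mypass   # mount the office Public share; credentials get cached in ~/.benpub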

GIT gitlab ssh key for composer install

Example when to use

example when running...
   composer install --dev
you get
     - Installing ben/support (v0.1.5)
       Cloning 506036c184d721d5b82a2f3056e3941759e2ded2
   git@git.usi.ben's password:

Generate RSA key for user

   ssh-keygen -t rsa -C "chuck@www.com"

add to github / gitlab
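The public half is what gets pasted into the GitLab / GitHub profile (Settings -> SSH Keys):
   cat ~/.ssh/id_rsa.pub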

test new key

   ssh git@git.usi.ben
DONE!

Monday, October 13, 2014

MySQL FullText index find and create

Find all Full Text Indexes and build create index statements.


This is useful when we create a new InnoDB database, convert some of the tables to MyISAM, and set up full text indexes on a new replica.

Make it MyISAM!
SELECT 
  CONCAT('ALTER TABLE `',TABLE_SCHEMA,'`.`',TABLE_NAME,'` ENGINE=MyISAM;')
FROM 
  information_schema.statistics 
WHERE index_type LIKE 'FULLTEXT%'
GROUP BY TABLE_SCHEMA, TABLE_NAME;
Create index statements:
SELECT 
  CONCAT('ALTER TABLE `',TABLE_SCHEMA,'`.`',TABLE_NAME,'` ADD FULLTEXT `',INDEX_NAME,'` (`',
    GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX SEPARATOR '`,`'),'`);')
FROM 
  information_schema.statistics 
WHERE index_type LIKE 'FULLTEXT%'
GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME;
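To actually run the generated statements, one option is to save the query above into a file and pipe the result back into mysql. The file names here are just examples, and --login-path=local is assumed to be set up (see the conversion script below):
mysql --login-path=local -BN < fulltext_statements.sql > run_me.sql
# review run_me.sql first, then:
mysql --login-path=local < run_me.sql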

Convert to InnoDB from MyISAM - MySQL

Bash IT!

This will convert any table that is in a schema prefixed with foo_ AND is MyISAM to InnoDB.

#!/bin/bash
#

# Linux bin paths, change this if it can not be autodetected via which command
MYSQL="$(which mysql)"

# Get hostname
HOST="$(hostname)"
x=1
# Get the list of matching MyISAM tables first
echo "select table_schema,table_name from information_schema.tables where (table_schema Like 'foo_%') and engine = 'MyISAM'" | $MYSQL --login-path=local -Bs | while read -r schema name
do
    # throttle: wait while more than 40 mysql clients are still running
    x=$(ps ax|grep -c "mysql --login")
    while [ $x -gt 40 ]
    do
       echo "Overload!!! Running = $x"
       sleep 10
       x=$(ps ax|grep -c "mysql --login")
    done
    echo "Running = $x"

    changequery="ALTER TABLE \`$schema\`.\`$name\` ENGINE=InnoDB;";
    echo "echo $changequery | $MYSQL --login-path=local -Bs -D$schema";
    nohup echo $changequery | $MYSQL --login-path=local -Bs -D$schema &
done
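The script relies on a saved login path called local. That can be created once with mysql_config_editor (MySQL 5.6+), which prompts for the password and stores the credentials in ~/.mylogin.cnf:
mysql_config_editor set --login-path=local --host=localhost --user=root --password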

Friday, October 3, 2014

CRONTAB run script or command with Environment Variables

CRONTAB

Allow environment variables to be passed to your bash script when using crontab. This is useful for custom configs, as my previous post about custom configs shows. It uses environment variables to define server configs for your scripts, allowing you to customize code/scripts per server with simple configs.

# echo environment variables into txt file
0 9 * * * cd /var/www/vhosts/techspecs/download; bash -l -c "`echo env` > /tmp/env.txt"
# executable file using environment variables
0 9 * * * cd /var/www/vhosts/techspecs/download; bash -l -c './runner.sh'
# php execute using environment variables
0 9 * * * cd /var/www/vhosts/techspecs/download; bash -l -c 'php -f ./doit.php'
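To confirm the login shell really picks up the variables, compare a plain shell with a login shell (SERVER_TYPE here is just one of the variables from the custom configs post below):
bash -c 'echo $SERVER_TYPE'      # empty, profile scripts are not read
bash -l -c 'echo $SERVER_TYPE'   # prints the value from /etc/profile.d/custom.config.sh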


BASH REFERENCE



-l, --login
Make bash act as if it had been invoked as a login shell, so it reads /etc/profile (which on most distros also pulls in /etc/profile.d/*.sh - that is how the environment variables above get loaded).
-c string
Read and execute commands from string after processing the options, then exit. Any remaining arguments are assigned to the positional parameters, starting with $0.

Wednesday, October 1, 2014

Custom Server Configs - Linux

Keywords

Custom Config
Server Variables
System Based Configurations

CONFIG

Custom environment variables to control IPs and other custom variables between different servers.
    vi /etc/profile.d/custom.config.sh
This is the file that gets loaded into bash / the system for custom configs. Example custom.config.sh file:
 export BROCK_READ_IP=192.168.0.118
 export BROCK_TEXT_IP=192.168.0.119
 export BROCK_WRITE_IP=192.168.0.100

 export FEEDP_IP=192.168.0.113
 export TECHDATA_IP=192.168.0.110
 export TECHSPECS_IP=10.30.0.155
 export DATALICENSE_IP=192.168.0.113
 export UPLOADER_IP=10.30.0.126
 export DATA1_IP=192.168.0.112

 export SERVER_TYPE=PRODUCTION
 export TASKS_USER=auto_tasks_dev

 export SEARCH_READ_DB_IP=192.168.0.119
 export SEARCH_WRITE_DB_IP=192.168.0.100
 export BROCK_JOOMLA_IP=192.168.0.100
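A quick sanity check after creating the file (new login shells pick it up automatically; an existing shell has to source it first):
    source /etc/profile.d/custom.config.sh
    echo "$SERVER_TYPE $BROCK_READ_IP"
    mysql -h "$BROCK_READ_IP" -e "SELECT 1"   # example: point a client at the read DB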

BASH CRONTAB

    0 9 * * * cd /var/www/vhosts/techspecs/download; bash -l -c ./download.php

APACHE STARTUP

SERVICE RESTART INCLUDE ENVIRONMENT VARIABLES
vi /etc/sysconfig/httpd

add the following line at the end to automatically source the environment
variable shell script when httpd is started via a service call:

. /etc/profile.d/custom.config.sh
ENVIRONMENT PASS THRU TO APACHE
vi /etc/httpd/conf/httpd.conf
OR
vi /etc/httpd/conf.d/virtual.host.name.conf

Use the following to pass the environment variables through to your application via PassEnv:

PassEnv TASKS_USER SERVER_TYPE ....
Example Usage
 # examples for PHP
 #  $_SERVER['DATALICENSE_IP']
 #  $_SERVER['BROKERB_IP']

 # examples for bash
 #  $DATALICENSE_IP
 #  $BROKERB_IP

This will allow you to have many VMs (virtual machines) with unique configurations without modifying your code base.  This is very useful for expansion of your network. Scalability is key.