Thursday, December 18, 2014

Increase MySQL auto-increment values so a clone can insert new rows without collisions (for a time)



-- this will update a schema (or schemas) so each auto-increment column starts at roughly half the maximum value of its data type (tinyint, smallint, mediumint, int...). Works well for dev servers that replicate live data when you also want to insert your own rows.
-- select the awesomeness of information schema

use information_schema;
-- create alter table statements into an outfile
SELECT CONCAT("ALTER TABLE `",table_schema,"`.`",table_name,"` AUTO_INCREMENT=",IF(DATA_TYPE='smallint',15000,IF(DATA_TYPE='tinyint',64,IF(DATA_TYPE='mediumint',4000000,IF(DATA_TYPE='int',1000000000,99999999999999)))),";")
FROM `COLUMNS` WHERE extra LIKE '%auto_increment%' AND table_schema IN ('schema name...')
INTO OUTFILE '/tmp/auto.sql';
-- note: no space between CONCAT and the parenthesis, or MySQL errors unless sql_mode includes IGNORE_SPACE
-- source the outfile
source /tmp/auto.sql;
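If INTO OUTFILE is blocked (it needs the FILE privilege and a writable secure_file_priv location), the same statements can be generated in a shell pipe instead. A minimal sketch; the awk step is shown against hard-coded sample rows (the schema and table names are made up) so the transformation is visible without a live server:

```shell
# Sample rows in the shape mysql -N would emit: schema, table, data_type.
printf '%s\t%s\t%s\n' yourschema users int yourschema tags smallint |
awk -F'\t' '{
  # pick a mid-range AUTO_INCREMENT per data type, as in the SQL above
  v = ($3 == "tinyint")   ? 64 :
      ($3 == "smallint")  ? 15000 :
      ($3 == "mediumint") ? 4000000 :
      ($3 == "int")       ? 1000000000 : 99999999999999
  printf "ALTER TABLE `%s`.`%s` AUTO_INCREMENT=%d;\n", $1, $2, v
}'
```

Against a real server you would replace the printf with `mysql -N information_schema -e "SELECT table_schema, table_name, data_type FROM COLUMNS WHERE extra LIKE '%auto_increment%'"` and pipe the awk output back into mysql.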

Magento Cart: show selected option value information from Custom Options selection

I wanted to get a custom "front_image_path" from the Custom Options I implemented with MageWorx. That way a shirt can offer multiple image options to select from for color, and the selected one shows in the cart. This code goes in "app/design/frontend/default/default/template/checkout/cart/item/default.phtml" to modify the front image.
  
    // this is for non-customizable products
    $_options = $_product->getOptions();
    // if option enabled = no && hasOptions = 0
    if (!$_options) $optionsArr = $this->getProduct()->getProductOptionsCollection();

    /** @var MageWorx_CustomOptions_Helper_Data $helper */
    $helper = Mage::helper('customoptions');
    foreach ($_options as $_option) {
        /** @var MageWorx_CustomOptions_Model_Catalog_Product_Option $_option */
        if(strtolower($_option->getTitle())=='color'){
            // this option id
            $option_id = $_option->getId();
            // cart item option info
            $item_option = $_item->getOptionByCode('option_' . $option_id);

            foreach ($_option->getValues() as $_value) {
                /** @var Mage_Catalog_Model_Product_Option_Value $_value */
                // this value's option_type_id, compared against the cart item's selection below
                $value_type = $_value->getOptionTypeId();
                if($item_option && $value_type == $item_option['value']){
                    $frontImgArr = $helper->getColorImgHtml($_value->getFrontImagePath(), $option_id, $value_type,true,true,'front');
                    $frontimage = $frontImgArr['url'];
                }
            }
        }
    }
    if(empty($frontimage)){
        $frontimage = $this->getProductThumbnail();
    }

Monday, December 8, 2014

Magento force secure urls (https) on all frontend pages.

This works in Magento 1.9.1.
Add this to your app/etc/config.xml file:
<?xml version="1.0"?>
<config>
  <frontend>
    <secure_url>
      <all>/</all>
    </secure_url>
  </frontend>
</config>

Once a user is on https, this forces all frontend URLs to be generated as https.

Tuesday, November 18, 2014

Magento - Category Reset

So I have a Magento install that needed to be cleared out. One of the problems is resetting the categories back to zero; the catch is that the admin area uses these categories too.

Solution: Don't delete the #1 entity_id.

MySQL Queries...
DELETE FROM catalog_category_entity WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity AUTO_INCREMENT = 2;

DELETE FROM catalog_category_entity_datetime WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity_datetime AUTO_INCREMENT = 2;

DELETE FROM catalog_category_entity_decimal WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity_decimal AUTO_INCREMENT = 2;

DELETE FROM catalog_category_entity_int WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity_int AUTO_INCREMENT = 2;

DELETE FROM catalog_category_entity_text WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity_text AUTO_INCREMENT = 2;

DELETE FROM catalog_category_entity_varchar WHERE entity_id <> 1;
ALTER TABLE catalog_category_entity_varchar AUTO_INCREMENT = 2;
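The six statement pairs above follow one pattern, so they can also be generated. A small sketch (pipe its output into mysql against your Magento schema):

```shell
# Emit the same DELETE/ALTER pair for the base table and each attribute table.
for t in '' _datetime _decimal _int _text _varchar; do
  printf 'DELETE FROM catalog_category_entity%s WHERE entity_id <> 1;\n' "$t"
  printf 'ALTER TABLE catalog_category_entity%s AUTO_INCREMENT = 2;\n' "$t"
done
```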

Friday, November 14, 2014

Install SB235 bluetooth headset on xubuntu


Thanks to Warreee I was able to get these connected to Xubuntu. The piece I was missing was installing pulseaudio-module-bluetooth.


If you're getting the following error while setting up a bluetooth headset on Xubuntu with PulseAudio: "Bluetooth Audio Sink: Stream Setup Failed", then you probably need to do the following:
Install the module: sudo apt-get install pulseaudio-module-bluetooth
And add the following command to your Session and Startup entries: pactl load-module module-bluetooth-discover.
Enjoy!



Then, in PulseAudio Volume Control, I just needed to set the device to SB235 for Chromium under Playback, and again under Output Devices.





Find all auto_increment tables

MySQL Information Schema to the rescue.

I needed to find all auto_increment tables.

SELECT 
    * 
FROM 
    `information_schema`.`COLUMNS`
WHERE 
    `EXTRA` = 'auto_increment' AND 
    `TABLE_SCHEMA` = 'foochoo'

Now I need to give all of these tables a larger auto_increment value.

use information_schema;
select table_name 
from tables 
where auto_increment is not null and table_schema=...;

You can then set the auto-increment value as per "Change auto increment starting number?"
Or, in a single shot (assuming Unix shell):

mysql information_schema -e 
'select concat("ALTER TABLE ",table_name," AUTO_INCREMENT=1000000000") `-- sql` 
from tables 
where auto_increment is not null and table_schema="your-schema";
'|mysql your-schema

(The `-- sql` column alias turns the header row into a SQL comment, so the second mysql skips it.)

Wednesday, November 12, 2014

mysqldump with INSERT … ON DUPLICATE

Thanks to stackoverflow for another helpful mysqldump tidbit. This logic will allow you to do updates to your database via a mysqldump. Similar to querying with...
  
  create table db1.table1 like db2.table1;
  insert into db1.table1 select a,b,c from db2.table1 as db2t1 ON DUPLICATE KEY UPDATE a=db2t1.a,b=db2t1.b,c=db2t1.c;
posted by RolandoMySQLDBA
  
  --insert-ignore     Insert rows with INSERT IGNORE.
  --replace           Use REPLACE INTO instead of INSERT INTO.
  -t, --no-create-info
                      Don't write table creation info.
  
Keep this paradigm in mind

1. mysqldump everything from DB1 into DUMP1
2. load DUMP1 into DB3
3. mysqldump everything from DB2 using --replace (or --insert-ignore) and --no-create-info into DUMP2
4. load DUMP2 into DB3
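The steps above can be sketched as shell commands. DB1/DB2/DB3 and the dump file names are placeholders and credentials are omitted; everything is wrapped in echo as a dry run, so drop the echos to actually execute it:

```shell
echo 'mysqldump DB1 > dump1.sql'                              # 1. full dump of DB1
echo 'mysql DB3 < dump1.sql'                                  # 2. load it into DB3
echo 'mysqldump --replace --no-create-info DB2 > dump2.sql'   # 3. data-only, REPLACE INTO
echo 'mysql DB3 < dump2.sql'                                  # 4. merge into DB3
```

Using --insert-ignore instead of --replace keeps DB1's version of any conflicting rows.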

Tuesday, November 11, 2014

Centos 7 Server LAMP Stack.

This is really just notes for a server install, but maybe it will help someone.

start with standard Centos 7 Server Install
#hostnames!!

/etc/hostname
vi /etc/hosts

### network config ###

#vmware... if using vmware, make sure the "hardware" is on the right network.
VM Network ##

# ifcfg...
vi /etc/sysconfig/network-scripts/ifcfg-eno...
---- EXAMPLE
#HWADDR
IPV4_FAILURE_FATAL=yes
#IPV6...
#UUID
---- /EXAMPLE

# restart machine (CentOS 7 requires this)
---------After this point you can ssh into the box.----------
ifconfig

# CentOS 7 uses firewalld (firewall-cmd), so I will customize it to allow external access to ports 80 (http), 443 (https) and 3306 (mysql).
firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --permanent --zone=public --add-service=mysql
firewall-cmd --reload

## php / apache
yum install httpd php php-gd php-mysql php-xml php-xmlrpc php-common php-cli php-mbstring php-soap wget git rsync mariadb



### Now configure your system to start Apache at boot time...
systemctl start httpd.service
systemctl enable httpd.service
curl http://icanhazip.com
systemctl restart httpd.service


### mcrypt
cd /tmp/ && wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
rpm -ivh epel-release-7-2.noarch.rpm
yum --enablerepo=epel install php-mcrypt

### PHP INI
# vi /etc/php.ini
# memory_limit = 512M

### SELINUX
# vi /etc/sysconfig/selinux
# SELINUX=disabled

### APC
yum install php-pear php-devel httpd-devel pcre-devel gcc make
pecl install apc
echo "extension=apc.so" > /etc/php.d/apc.ini
cp /usr/share/pear/apc.php /var/www/html/
#  Now set the username and password in the copied apc.php.
pecl upgrade apc
vi /etc/php.d/apc.ini
----- PASTE
apc.enabled = 1
apc.optimization  = 0
apc.shm_segments = 1
apc.shm_size = 768M
apc.ttl = 48000
apc.user_ttl  = 48000
apc.num_files_hint = 8096
apc.user_entries_hint = 8096
apc.mmap_file_mask = /tmp/apc.XXXXXX
apc.enable_cli = 1
apc.cache_by_default  = 1
apc.max_file_size = 10M
apc.include_once_override = 0
------ /PASTE

# ssh keys
## on server - setup ssh keys.
ssh-keygen -t dsa
chmod -R 600 ~/.ssh
## local - push key to server
cat ~/.ssh/id_dsa.pub | ssh root@mage.izon.usi.ben 'cat - >>~/.ssh/authorized_keys'

# setup virtual hosts in /etc/httpd/conf.d/hostnamex.conf
------  Example
<VirtualHost *:80>
    AllowEncodedSlashes On

    ServerName test.com
    ServerAlias 192.168.0.100
    ServerAlias www.test.com
    ServerAdmin admin@test.com
    DocumentRoot /var/www/html/

    DirectoryIndex index.html

    <Directory "/var/www/html/" >
        AllowOverride All
    </Directory>
</VirtualHost>
------ /Example

systemctl restart httpd.service

# mysql!! http://sharadchhetri.com/2014/07/31/how-to-install-mysql-server-5-6-on-centos-7-rhel-7/
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
# so you just added 2 new yum repos.
yum install mysql-server
systemctl start mysqld

# finish installation (use blank for initial password)
mysql_secure_installation

# recommend removing everything it offers to, for testing:
#  -- anonymous users, remote root login, test db
systemctl enable mysqld.service

# if not installing the mysql server... just use the mariadb client (drop-in compatible).
yum install mariadb

yum -y install nfs-utils inkscape

yum -y install htop zsh
curl -L http://install.ohmyz.sh | sh
# edit .zshrc , use theme ys, add more if you like...

############
# Preferred editor for local and remote sessions
if [[ -n $SSH_CONNECTION ]]; then
  export EDITOR='vim'
else
  export EDITOR='mvim'
fi

h=()
if [[ -r ~/.ssh/config ]]; then
  h=($h ${${${(@M)${(f)"$(cat ~/.ssh/config)"}:#Host *}#Host }:#*[*?]*})
fi
if [[ -r ~/.ssh/known_hosts ]]; then
  h=($h ${${${(f)"$(cat ~/.ssh/known_hosts{,2} || true)"}%%\ *}%%,*}) 2>/dev/null
fi
if [[ $#h -gt 0 ]]; then
  zstyle ':completion:*:ssh:*' hosts $h
  zstyle ':completion:*:slogin:*' hosts $h
fi

source $HOME/.aliases

function git_prompt_info() {
  ref=$(git symbolic-ref HEAD 2> /dev/null) || return
  echo "$ZSH_THEME_GIT_PROMPT_PREFIX${ref#refs/heads/}$ZSH_THEME_GIT_PROMPT_SUFFIX"
}
###################
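The host-completion part of that snippet boils down to: take each `Host` line from ~/.ssh/config and drop patterns containing wildcards. A plain-awk equivalent, run against an inline sample config (the host names are invented):

```shell
printf 'Host web1\n  HostName 192.168.0.108\nHost db*\n  User root\nHost gltail\n' |
awk '/^Host / { for (i = 2; i <= NF; i++) if ($i !~ /[*?]/) print $i }'
# web1 and gltail survive; the db* wildcard pattern is filtered out,
# just like the ${...:#*[*?]*} filter in the zsh version.
```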


# mount remote code share... (nfs)
vi /etc/fstab
# add this line...
xxx.yyyy.sss.com:/exports/public          /var/www/vhosts         nfs     defaults        0 0
# now mount the directory.
mkdir /var/www/vhosts
mount /var/www/vhosts

# /etc/idmapd.conf - to get permissions from nfs share.
set Domain = xxx.yyyy.sss.com

# link to version to html directory.
rm -rf /var/www/html
ln -s /var/www/vhosts/v1 /var/www/html


### mysql access user..
CREATE USER 'username'@'localhost' IDENTIFIED BY PASSWORD '*hashedpasswordvalue';
GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;


vi /etc/my.cnf
###### after ....
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
###### add ....

wait_timeout = 28800
interactive_timeout = 28800
max_allowed_packet=128M

read_rnd_buffer_size = 12M
sort_buffer_size = 12M

innodb_buffer_pool_size = 1G
key_buffer_size = 512M

innodb_table_locks=0
autocommit=1

# Size this cache to keep most tables open since opening tables can be expensive.
# The optimum value for table_cache is directly related to the number of tables
# that need to be opened simultaneously in order to perform multiple-table joins.
# The table_cache value should be no less than the number of concurrent connections
# times the largest number tables involved in any one join.
# You should check the Open_tables status variable to see if it is large compared to table_cache
table_open_cache=10000

tmp_table_size=512M
max_heap_table_size=512M
thread_cache=16
secure-auth=0

# logging
log-queries-not-using-indexes = 1
long_query_time = 10
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/slow_query_log


innodb_force_recovery = 0
# Avoid double buffering
innodb_flush_method = O_DIRECT

# cap InnoDB thread concurrency (0 = infinite threads) - http://www.mysqlperformanceblog.com/2011/12/02/kernel_mutex-problem-cont-or-triple-your-throughput/
innodb_thread_concurrency=8

innodb_log_files_in_group=6
innodb_log_file_size=512M
innodb_log_buffer_size=16M
# breaks ACID, can lose 1-2 seconds of data on an OS crash, but much greater performance
innodb_flush_log_at_trx_commit=0

# locks on rows that did not match the scan are released after the statement completes.
# prevents gap locks (write locks) on the table
transaction-isolation=READ-COMMITTED

max_connections=3000
# Check it however after a while and see if it is well used
query_cache_size=50M
query_cache_limit = 10M
skip-name-resolve=1



###################################
# MyISAM stuff
###################################
# http://www.mysqlperformanceblog.com/2007/09/17/mysql-what-read_buffer_size-value-is-optimal/
read_buffer_size = 2M
read_rnd_buffer_size = 12M
myisam_sort_buffer_size = 256M
thread_cache_size = 12
join_buffer_size = 4M
thread_concurrency = 0 # specific to solaris
ft_min_word_len = 3

###################################

Computer Theory - Need for speed.

I am frustrated right now that my computer is not processing fast enough. Who cares if it is 3 years old? Who cares if I am on a VPN, SSHing into an office machine (2 miles away), in turn SSHing into another machine in another location (150 miles away)? Latency? BAH!

ME: if i stare at the computer or hit it, then it should go faster.... right?
ZZ: lol, I found throwing it makes it go the fastest :P

Server Admin humor, excellent.  

And I am spent, damn it.

Monday, November 3, 2014

move all files in current directory into a subdirectory in the current directory

I needed to move all the files in the current working directory (mount point) into a sub-directory. I hope this helps others, and me in the future.

Use find with the -maxdepth 1 option, instead of globbing with "*" and ".*", something like this:

Code:
mkdir ./dest_dir
find . -maxdepth 1 | grep -v dest_dir | xargs -i mv {} ./dest_dir
(find also prints ".", so expect one harmless mv error; the -type f variant further down avoids it)
http://www.linuxquestions.org/questions/linux-newbie-8/move-all-files-in-current-directory-into-a-subdirectory-in-the-current-directory-637150/


  -i,--replace=[R]             Replace R in initial arguments with names
                               read from standard input. If R is
                               unspecified, assume {}

http://community.spiceworks.com/how_to/show/98496-move-all-files-in-current-directory-into-a-subdirectory-in-the-current-directory

---------

addition per spiceworks! (Leecallen)

Nice, and useful.
Slight simplification - explicitly tell find that you are only interested in files, and then remove the grep command:

     find . -maxdepth 1 -type f | xargs -i mv {} ./dest_dir;

or just for kicks:

     find . -maxdepth 1 -type f -exec mv {} ./dest_dir \;
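A runnable version of the same move, done in a throwaway directory so nothing real is touched. One behavioral difference worth knowing: find matches dot-files too, which the original "*" glob would have skipped:

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch a.txt b.txt .hidden
mkdir dest_dir
# -type f skips "." and dest_dir itself, so no grep -v is needed
find . -maxdepth 1 -type f -exec mv {} ./dest_dir \;
ls -A dest_dir    # a.txt, b.txt and .hidden all moved
```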

Thursday, October 30, 2014

Monitor Directories For Changes And Perform Action

I needed to monitor files for changes and execute a script when that happens. After a bit of google searching I found the article "Linux incrond inotify: Monitor Directories For Changes And Take Action" and decided to copy it here for archiving purposes. Basically I just want to be able to find it again!


The incrond (inotify cron daemon) is a daemon which monitors filesystem events (such as adding a new file, deleting a file and so on) and executes commands or shell scripts. Its use is generally similar to cron.

Install incron

Type the following command under RHEL / Fedora / CentOS Linux:
$ sudo yum install incron
Type the following command under Debian / Ubuntu Linux:
$ sudo apt-get install incron

Configuration Files

  • /etc/incron.conf - Main incron configuration file
  • /etc/incron.d/ - This directory is examined by incrond for system table files. You should put all your config file here as per directory or domain names.
  • /etc/incron.allow - This file contains users allowed to use incron.
  • /etc/incron.deny - This file contains users denied to use incron.
  • /var/spool/incron - This directory is examined by incrond for user table files which is set by users running the incrontab command.

incron Syntax

The syntax is as follows:
<directory> <file change mask> <command or action>  options
/var/www/html IN_CREATE /root/scripts/backup.sh
/sales IN_DELETE /root/scripts/sync.sh
/var/named/chroot/var/master IN_CREATE,IN_ATTRIB,IN_MODIFY /sbin/rndc reload
Where,
  • <directory> - An absolute filesystem path such as /home/data. Any change made to this path triggers the command or action.
  • <file change mask> - The mask names filesystem events such as deleting a file. Each event can trigger command execution. Use the following masks:
    • IN_ACCESS - File was accessed (read)
    • IN_ATTRIB - Metadata changed (permissions, timestamps, extended attributes, etc.)
    • IN_CLOSE_WRITE - File opened for writing was closed
    • IN_CLOSE_NOWRITE - File not opened for writing was closed
    • IN_CREATE - File/directory created in watched directory
    • IN_DELETE - File/directory deleted from watched directory
    • IN_DELETE_SELF - Watched file/directory was itself deleted
    • IN_MODIFY - File was modified
    • IN_MOVE_SELF - Watched file/directory was itself moved
    • IN_MOVED_FROM - File moved out of watched directory
    • IN_MOVED_TO - File moved into watched directory
    • IN_OPEN - File was opened
    • The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above events.
  • <command or action> - The command or script to run when the mask matches on the given directory.
  • options - Any of the following symbols can be passed as arguments to your command:
    1. $$ - dollar sign
    2. $@ - watched filesystem path (see above)
    3. $# - event-related file name
    4. $% - event flags (textually)
    5. $& - event flags (numerically)

Turn Service On

Type the following command:
# service incrond start
# chkconfig incrond on

Examples:

Type the following command to edit your incrontab
incrontab -e
Run logger command when file created or deleted from /tmp directory:
/tmp IN_ALL_EVENTS logger "/tmp action for $# file"
Save and close the file. Now cd to /tmp and create a file:
$ cd /tmp
$ >foo
$ rm foo

To see message, enter:
$ sudo tail -f /var/log/messages
Sample outputs:
Jul 17 18:39:25 vivek-desktop logger: "/tmp action for foo file"

How Do I Run Rsync Command To Replicate Files For /var/www/html/upload Directory?

Type the following command:
# incrontab -e
Append the following command:
/var/www/html/upload/ IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
Now, whenever files are uploaded to the /var/www/html/upload/ directory, rsync will be executed to sync files to the www2.example.com server. Make sure ssh keys are set up for passwordless login.

How Do I Monitor /var/www/html/upload/ and Its Subdirectories Recursively?

You cannot monitor /var/www/html/upload/ directory recursively. However, you can use the find command to add all sub-directories as follows:
find /var/www/html/upload -type d -print0 | xargs -0 -I{} echo "{} IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/" > /etc/incron.d/webroot.conf
This will create /etc/incron.d/webroot.conf config as follows:
/var/www/html/upload IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/css IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/1 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/js IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/3 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010/11 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2010/12 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/2 IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/files IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
/var/www/html/upload/images IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /usr/bin/rsync --exclude '*.tmp' -a /var/www/html/upload/ user@www2.example.com:/var/www/html/upload/
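The same find | xargs trick, demonstrated against a scratch tree; the long rsync invocation is abbreviated to a placeholder CMD so the generated table structure stays readable:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/upload/css" "$tmp/upload/images"
# one incrontab line per directory, the watched root included
find "$tmp/upload" -type d -print0 |
  xargs -0 -I{} echo "{} IN_CLOSE_WRITE,IN_CREATE,IN_DELETE CMD"
```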

How Do I Troubleshoot Problems?

Check the /var/log/cron log file:
# tail -f /var/log/cron
# grep something /var/log/cron


Wednesday, October 22, 2014

dealing with mysql limits/errors/configurations

You may need to update your my.cnf (mysql configuration file) to deal with limitations in mysql.
Here are a couple examples...


/etc/my.cnf #add these...


# ERROR 2006 (HY000): MySQL server has gone away
max_allowed_packet=64M
# maybe also a wait timeout....
wait_timeout = 28800
interactive_timeout = 28800

# ERROR 1118 (42000) at line ####: Row size too large (> 8126).
# make it 10 times larger than largest blob.
innodb_log_file_size=512M 

InnoDB BLOB limited by size of redo log

  • Important Change: Redo log writes for large, externally stored BLOB fields could overwrite the most recent checkpoint. The 5.6.20 patch limits the size of redo log BLOB writes to 10% of the redo log file size. The 5.7.5 patch addresses the bug without imposing a limitation. For MySQL 5.5, the bug remains a known limitation.
    As a result of the redo log BLOB write limit introduced for MySQL 5.6, the innodb_log_file_size setting should be 10 times larger than the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). No action is required if your innodb_log_file_size setting is already sufficiently large or your tables contain no BLOB data.
    Note
    In MySQL 5.6.22, the redo log BLOB write limit is relaxed to 10% of the total redo log size (innodb_log_file_size * innodb_log_files_in_group).
    (Bug #16963396, Bug #19030353, Bug #69477)
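The sizing rule reduces to simple arithmetic. A sketch with example numbers (a 48 MB largest BLOB is an assumption for illustration):

```shell
largest_blob_mb=48

# 5.6.20 rule: each redo log file must be at least 10x the largest BLOB row
echo $(( largest_blob_mb * 10 ))          # minimum innodb_log_file_size in MB -> 480

# 5.6.22 relaxation: the 10x bound applies to the total redo log
# (innodb_log_file_size * innodb_log_files_in_group), so per file:
files_in_group=2
echo $(( largest_blob_mb * 10 / files_in_group ))   # -> 240
```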

Tuesday, October 14, 2014

ssh keys - for newbies

Add SSH Key

SSH (Secure Shell) can be set up with public/private key pairs so that you don't have to type the password each time. Because SSH is the transport for other services such as SCP (secure copy), SFTP (secure file transfer), and other services (CVS, etc), this can be very convenient and save you a lot of typing.

SSH Version 2

Setting up SSH public/private keys

On the local machine, type the BOLD part. The non-bold part is what you might see as output or prompt.

Step 1:

   % ssh-keygen -t dsa
   Generating public/private dsa key pair.
   Enter file in which to save the key (~/.ssh/id_dsa): (just type return)
   Enter passphrase (empty for no passphrase): (just type return)
   Enter same passphrase again: (just type return)
   Your identification has been saved in ~/.ssh/id_dsa
   Your public key has been saved in ~/.ssh/id_dsa.pub
   The key fingerprint is:
   Some really long string
   %

Step 2:

   Then, paste the content of the local ~/.ssh/id_dsa.pub file into the file ~/.ssh/authorized_keys on the remote host.
   RSA instead of DSA
       If you want something strong, you could try
       % ssh-keygen -t rsa -b 4096
       Instead of the names id_dsa and id_dsa.pub, it will be id_rsa and id_rsa.pub, etc.
       The rest of the steps are identical. 
That's it!

FAQ:

   Q: I follow the exact steps, but ssh still asks me for my password!
   A: Check your remote .ssh directory. It should have only your own read/write/execute permission (octal 700):
   % chmod 700 ~/.ssh 
   Q: cygwin: chmod 600 does not work as expected?
   A: chgrp -R Users ~/.ssh

SSH Version 1

   Step 1:
   % cd ~/.ssh
   % ssh-keygen -t rsa1
   Generating public/private rsa1 key pair.
   Enter file in which to save the key (~/.ssh/identity): (just type return)
   Enter passphrase (empty for no passphrase): (just type return)
   Enter same passphrase again: (just type return)
   Your identification has been saved in ~/.ssh/identity
   Your public key has been saved in ~/.ssh/identity.pub
   The key fingerprint is:
   Some really long string
   %
   Step 2:
   Then, paste content of the local ~/.ssh/identity.pub file into the file ~/.ssh/authorized_keys on the remote host. 

I'm using Cygwin in the Win8CP, and I had the same issue; it's definitely a cygwin bug, but there's a workaround.
Try running:
   chgrp -R Users ~/.ssh
The longer explanation: for some reason, cygwin's /etc/passwd and /etc/group generation puts the user's default/main group as None. You cannot change the permission of None, so the chmod for group has no effect. I didn't try repairing the passwd / group files myself, but I did a chgrp -R Users ~/.ssh (or the group "HomeUsers" on the Windows 8 pre-release). After that, you can do the chmod 0600 and it'll work as expected. The chgrp to the Users group can be used in whichever other similar cases you find; it even works as expected, since cygwin puts users in the Users group as a secondary group (instead of primary, which would be the correct behavior).

Adding hosts

One Host

cd into .ssh directory and execute a bash file with these contents
#!/bin/bash
SERVER=$*
echo $SERVER
cat id_dsa.pub | ssh root@$SERVER "cat - >>authorized_keys2"

All Hosts from /etc/hosts

cd into .ssh directory and execute a bash file with these contents
#!/bin/bash
for i in $(sed 's/;.*//;' /etc/hosts | awk ' /^[[:digit:]]/ {$1 = "";print tolower($0)}')
do
:
cat id_dsa.pub | ssh root@$i "cat - >>authorized_keys2"
done
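The sed/awk pipeline in that loop is easy to check in isolation. Run against an inline sample hosts file (the entries are made up): comment-only lines are dropped by the leading-digit filter, `;` trailers are stripped, and hostnames come out lowercased, ready for the for loop's word splitting:

```shell
printf '127.0.0.1 LocalHost\n# a comment\n10.0.0.52 Office ; dev box\n' |
  sed 's/;.*//' |
  awk '/^[[:digit:]]/ { $1 = ""; print tolower($0) }'
# prints " localhost" and " office" (leading blank because $1 was emptied)
```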

Single Server command line

   cat /root/.ssh/id_dsa.pub | ssh root@(server) 'cat - >>~/.ssh/authorized_keys2'

EXAMPLE __ CHUCK:

   cat /.ssh/id_dsa.pub | ssh root@db4 'cat - >>~/.ssh/authorized_keys2'

general hosts file

 127.0.0.1 localhost
 127.0.1.1 (YOUR PC NAME HERE)
 
#Office
192.168.0.104 admin
10.0.0.202    bendev
10.0.0.58     support
10.0.0.52     office
10.0.0.168    zenddev
10.0.0.227    dev_db1
10.0.0.228    dev_db2
10.0.0.221    dev_db3
10.128.1.28   gltail
 
#Minnetonka
192.168.0.108 web1
192.168.0.109 web2
192.168.0.110 bb3
192.168.0.100 db1
192.168.0.106 db2
192.168.0.113 db4
192.168.0.102 dbnew
192.168.0.107 ein
192.168.0.103 eout
192.168.0.111 services4
192.168.0.112 data1
 
#Dallas
10.20.0.21   da_db1
10.20.0.22   da_db2
10.20.0.23   da_db3
10.20.0.31   da_web1
10.20.0.39   da_web2
74.249.6.120 chat
 
#Hosting
hosting.resellersolutions.com hosting1
hosting2.resellersolutions.com hosting2
 
 # The following lines are desirable for IPv6 capable hosts
 ::1     localhost ip6-localhost ip6-loopback
 fe00::0 ip6-localnet
 ff00::0 ip6-mcastprefix
 ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
 ff02::3 ip6-allhosts

sshfs

First install the module:
   sudo apt-get install sshfs
Load it to kernel
   sudo modprobe fuse
Setting permissions
   sudo adduser maythux fuse
   sudo chown root:fuse /dev/fuse
   sudo chmod +x /usr/bin/fusermount
Now we’ll create a directory to mount the remote folder in.
I chose to create it in my home directory and call it remoteDir.
   mkdir ~/remoteDir
Now I ran the command to mount it (mounted in my home directory).
   sshfs maythux@192.168.xx.xx:/home/maythuxServ/Mounted ~/remoteDir
Now it should be mounted:
   cd ~/remoteDir
   ls -l 
To unmount,

   fusermount -u ~/remoteDir
To add it to your /etc/fstab,

   sshfs#$USER@far:/projects /home/$USER/remoteDir fuse defaults,idmap=user 0 0

suggested .bashrc file

 # /etc/skel/.bashrc
 #
 # This file is sourced by all *interactive* bash shells on startup,
 # including some apparently interactive shells such as scp and rcp
 # that can't tolerate any output.  So make sure this doesn't display
 # anything or bad things will happen !
 
 
 # Test for an interactive shell.  There is no need to set anything
 # past this point for scp and rcp, and it's important to refrain from
 # outputting anything in those cases.
 if [[ $- != *i* ]] ; then
        # Shell is non-interactive.  Be done now!
        return
 fi
 
 if [ -f ~/.bash_aliases ]; then
     . ~/.bash_aliases
 fi
 
 #enable bash AutoComplete from known hosts
 if [ -f /etc/bash_completion ]; then
  . /etc/bash_completion
 fi
 
 export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
 
 # Put your fun stuff here.
 alias apt='sudo apt-get install'
 alias remove='sudo apt-get remove'
 alias search='apt-cache search'
 alias rar='sudo'

suggested .bash_aliases file

 function benmount {
        server="`echo $@ | tr '[:upper:]' '[:lower:]'`"
        for i in $server; do
                echo -n "Mounting ${i}... "
                if [ ! -d "/www/servers/${i}" ]; then
                        sudo mkdir /www 2> /dev/null && sudo chmod 777 /www
                        mkdir -p /www/servers/${i}
                fi
                [ -d "/www/servers/${i}" ] \
                && sshfs root@${i}:/ /www/servers/${i} -C \
                        -o reconnect \
                        -o workaround=all \
                        -o follow_symlinks \
                        -o transform_symlinks \
                && echo "DONE" && continue
                echo "UNSUCCESSFUL" && continue
        done
        [ -z "$server" ] && echo -e "\nUsage: benmount <SERVER> <SERVER> ...\n" && return 1 || return 0
 }
 
 function benumount {
        [ -z "$*" ] && list="`ls /www/servers | tr '[:upper:]' '[:lower:]'`" || list="`echo "$@" | tr '[:upper:]' '[:lower:]'`"
        for i in $list; do
                fusermount -u /www/servers/${i} && rmdir /www/servers/${i} && continue
                [ -d "/www/servers/${i}" ] && [ -z "$(ls /www/servers/${i})" ] && rmdir /www/servers/${i}
        done
        return 0
 }
 
 function benssh {
        [ -z "$1" ] && echo -e "\nUsage: benssh <SERVER>\n" && return 1
        server="`echo "$1" | tr '[:upper:]' '[:lower:]'`"
        ssh -C root@$server
 }
 
 function benpub {
        uname="$1"
        pword="$2"
        [ -z "$uname" -o -z "$pword" ] && [ ! -f "$HOME/.benpub" ] \
            && echo -e "\nUsage: benpub <USERNAME> <PASSWORD>\n(first time only/or if changing credentials)\n" \
            && return 1
        [ -n "$uname" -a -n "$pword" ] && echo -e "username=${uname}\npassword=${pword}" > $HOME/.benpub && chmod 600 $HOME/.benpub
        [ -n "$(mount | grep 10\.0\.0\.10/Public)" ] && echo "Public already mounted" && return 1
        [ ! -d "/www" ] && sudo mkdir /www 2> /dev/null && sudo chmod 777 /www
        mkdir -p /www/public && sudo mount -t cifs -o cred=$HOME/.benpub //10.0.0.10/Public /www/public/ && return 0 || rmdir /www/public
 }
 
 function benupub {
        [ -z "$(mount | grep 10\.0\.0\.10/Public)" ] && echo "Public not mounted" && return 1
        sudo umount /www/public/ && rmdir /www/public
 }
 
 complete -F _known_hosts benmount
 complete -F _known_hosts benumount
 complete -F _known_hosts benssh

You will need to reload your bash
 source ~/.bashrc

GIT gitlab ssh key for composer install

Example when to use

example when running...
   composer install --dev
you get
     - Installing ben/support (v0.1.5)
       Cloning 506036c184d721d5b82a2f3056e3941759e2ded2
   git@git.usi.ben's password:

Generate RSA key for user

   ssh-keygen -t rsa -C "chuck@www.com"

add to github / gitlab

test new key

   ssh git@git.usi.ben
DONE!