I have 3 nodes and I want to set up ssh between them. Normally I am fine with that, but I am a bit stumped because for the oracle user we have a shared mountpoint (/home/oracle) across all 3 nodes.
I create my RSA and DSA keys in ~/.ssh as node1_id_rsa / node1_id_dsa (for each of the 3 nodes) and then cat each of the .pub files (both rsa and dsa) into the authorized_keys file.
ssh still requires a password. The concept of the shared home area is confusing me a bit.
I am appending the files to $HOME/.ssh/authorized_keys.
There is only one .ssh directory. However, if I run ssh-keygen on node1 it creates .pub files with node1 in the comment line. So if I then try to connect from/to node2, it requires a password.
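One thing worth checking: ssh only offers keys with the default file names (identity, id_rsa, id_dsa) automatically. Keys with non-default names like node1_id_rsa have to be declared explicitly, for example in ~/.ssh/config. A sketch (the Host patterns here are assumptions):

```
# ~/.ssh/config -- point ssh at the non-default key file
Host node1 node2 node3
    IdentityFile ~/.ssh/node1_id_rsa
```

Without something like this, ssh never presents the node1_id_rsa key and falls back to password authentication.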
Bear with me, I know I am not explaining myself very well
I have a number of machines where I put exactly the same "identity" and "identity.pub" in my $HOME/.ssh directories because I am the same user.
This allows me to ssh and scp directly to any machine without having to use a password.
And you would only need one entry in authorized_keys: the contents of identity.pub.
known_hosts would accumulate the different machines you talk to of course.
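On a shared home area the whole setup reduces to one keypair, created once on any one node. A minimal sketch, assuming OpenSSH defaults (the RSA key type and file names are the stock ones, not anything site-specific):

```shell
# One keypair is enough: /home/oracle, and therefore ~/.ssh, is the
# same filesystem on all 3 nodes.
SSHDIR="$HOME/.ssh"
mkdir -p "$SSHDIR"
chmod 700 "$SSHDIR"
# Default file name, so ssh offers the key with no extra configuration.
[ -f "$SSHDIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSHDIR/id_rsa"
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 600 "$SSHDIR/authorized_keys"
```

Every node then reads the same private key and the same authorized_keys, so node-specific key names like node1_id_rsa are unnecessary.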
I use this to distribute my keys....
#!/bin/sh -x
# Push my SSH identity out to each host named on the command line.
# Run it from inside $HOME/.ssh, since identity and identity.pub are
# read by relative path.
ME=`whoami`
for d in "$@"
do
    # Append the local public key to the remote authorized_keys.
    ssh <identity.pub $ME@$d cat \>\>.ssh/authorized_keys
    if test "$?" = "0"
    then
        if ssh </dev/null $ME@$d chmod 600 .ssh/authorized_keys
        then
            # Copy the key pair itself, then lock down permissions.
            ssh <identity $ME@$d dd of=.ssh/identity
            ssh <identity.pub $ME@$d dd of=.ssh/identity.pub
            ssh </dev/null $ME@$d chmod 600 .ssh/identity .ssh/identity.pub
            ssh $ME@$d ls -l .ssh
        fi
    fi
done
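Assuming the script above is saved as push-keys.sh (the name is hypothetical) and run from $HOME/.ssh, an invocation might look like:

```
cd ~/.ssh
sh push-keys.sh node2 node3    # one password prompt per ssh until the key lands
```

On systems that ship OpenSSH's ssh-copy-id, that tool performs much the same append-and-chmod sequence as the first step of this script.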