Passing password with SSH command

Hi Experts,

I have a specific requirement where I want to pass the password with the ssh username@hostname command.

I don't want to use RSA public and private keys either, because this will be on a production server and no one wants to give out access like that.
Second, these are production servers, so we don't want to install any utilities like sshpass or expect.

I am looking for a solution where I can pass the password directly in the command or read it from a file.

Thanks

As far as I know, the whole idea of SSH is that you don't use or automate passwords with it. That's why tools like sshpass and expect exist to achieve this.
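For reference only - since you can't install it anyway - sshpass is normally used something like the lines below. The hostname and command are placeholders, and be aware the password ends up in your shell history and can be exposed to other users on the box:

  # only possible if sshpass were installed, and not recommended on a shared system
  sshpass -p 'MySecretPassword' ssh username@hostname 'uname -a'

  # marginally better: keep the password in a file that only you can read
  sshpass -f /path/to/password_file ssh username@hostname 'uname -a'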

Yeah, I agree with you.

But the requirement is that I don't have permission to install any utilities or to set up RSA key authentication. In that case, do we have any way to pass the password within the script?

In such scenarios the production environment requires certain rights which a developer generally does not have. So how do we get around this and directly hardcode the password in the script, or read it from some file?

Thanks again. Has anyone faced this kind of problem, or does anyone have a resolution for it?

So are you saying that you cannot (on the local system) issue ssh-keygen?

If you can, can you use SFTP (with user & passwd) to the target server? If it's a unix server, then normally you would need to edit ~/.ssh/authorized_keys and add in the newly generated public key. If it's Windoze, then I'm a bit stuck.
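If that route is open to you, the one-off manual bootstrap would look roughly like this, run from your home directory, with target-server just being a placeholder:

  # on the local system: generate the key pair
  ssh-keygen -t rsa -b 4096 -f .ssh/id_rsa

  # push the public key to the target over sftp, using the user & passwd you already have
  echo 'put .ssh/id_rsa.pub id_rsa.pub' | sftp username@target-server

  # then, in an ordinary password login on the target, append it:
  #   mkdir -p ~/.ssh && chmod 700 ~/.ssh
  #   cat ~/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys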

SSH is designed for key exchange really, with user & password being a manual process only. There are ways, but you are then exposing the credentials to anyone who can read the script. I doubt that meets your security and auditing requirements.

Can you elaborate on why you (or the sysadmin of the other server if it's not you) don't want to use keys?

I hope that this helps,
Robin,
Liverpool/Blackburn
UK

If you're not allowed to use the safe, secure, normal, day-to-day method of automatic login, it's pretty safe to assume your admin did not intend you to use some wildly insecure kludge to do so instead.

Yes Rbatte, here is the explanation.

We have Unix boxes in production which are replicated to other boxes for disaster recovery, and the script I am writing actually needs to run on the DR site. First, we won't have access to install anything on the DR servers, and we need to perform immediate tests using the script. Second, even if we set up key authentication for the production servers, it will be replicated to the DR servers, where the IPs and hostnames change, so the keys won't work on the DR site.

So the solution I was thinking of is just passing the password from the scripts and running SSH against multiple servers. I agree that SSH keys provide a safe and secure mechanism, but sometimes extra security becomes a bottleneck for a straightforward solution.

I appreciate you guys sticking to standards and guidelines, but I need some out-of-the-box solution here.

why not just generate 1 key each for each server in a production-dr server pair inside the ssh keys file? that way replication from production to dr will not clobber the dr servers' ssh key files in any way ...


Thanks Just Ice. Could you please explain how to generate the key pairs for two servers inside the ssh keys file?

It will be very helpful.

you generate the ssh keys in the same way as you would single server keys ... just add the public keys into the same authorized_keys file (rough sketch at the end of this post) ... but before you get lost there ...

first off, we need to know ...

  1. direction of replication (i.e., prod to dr, dr to prod, both ways, etc.)

  2. files and directories being replicated (i.e., ALL, /export/home, /, etc.)

  3. domain/network of prod and dr servers (i.e., prod servers are in prod.some.com and network 192.168.1.0 while dr servers are in dr.some.com and network 192.168.15.0)

  4. naming convention of prod and dr servers (i.e., server1 in prod and server1-d in dr)
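... and here is the rough sketch i mentioned above - hostnames are only placeholders and the key gets generated wherever the script will run from ...

  # one key pair per production-dr pair
  ssh-keygen -t rsa -f ~/.ssh/id_rsa_pair1

  # append the public key to the authorized_keys file on the prod box (one-time
  # manual step, password login is fine for this) ... prod-to-dr replication then
  # carries the same file across, so nothing gets clobbered on the dr side
  cat ~/.ssh/id_rsa_pair1.pub | ssh username@prod-server 'cat >> ~/.ssh/authorized_keys'

  # once replication has run, the dr twin answers to the same key
  ssh -i ~/.ssh/id_rsa_pair1 username@dr-server uptime

if you want a separate key for each server in the pair, just append both public keys to the same authorized_keys file ...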

Thanks Just Ice.

  1. Direction of replication: one way (prod to dr).
  2. Files and directories being replicated: everything (i.e., ALL, /export/home, /, etc.)
  3. Domain/network of prod and dr servers: production hostnames and DR hostnames will be the same, but the IPs will be different.
  4. Naming convention of prod and dr servers: different, e.g. Server-1 in prod and Server-2 in DR.

Regards,
Sourabh

I'm assuming that there must be some parts not replicated, such as the network configuration. How do you exclude these files? We might need to exclude a few others for the replication.

If you replicate absolutely everything, then when your DR server boots it will get the IP address of the production server. If it is not isolated, it could damage production services by getting in the way of your real production server. I'm assuming that you have identical hardware, else you might have issues booting anyway with device addresses all being wrong.

Is there a neat yet complete definition of what is replicated for the OS filesystems, rather than just 'everything'? Don't worry about the application and data filesystems.

Robin

Yes Robin,

Network configurations will be different, which means the IP address of each DR server will be different from the production ones. So in the hosts file, we will have different IP addresses for the DR servers.

So that tells me that you are not replicating everything. What is excluded?

Robin

Only the hosts file will be excluded. The DR servers will have a different hosts file than the production servers.

Okay, so could you, on production:-

  1. Generate an ssh key in the usual way with ssh-keygen and the options you find appropriate.
  2. Copy/add the public key into the authorized_keys file
  3. Copy the authorized_keys file to the target server

This way the authorized key will allow you to ssh/sftp to the local server too, and it will persist whenever you replicate 'everything' to the DR server.
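A rough sketch of those three steps, run from your home directory on production, with target-server just being a placeholder name:

  # 1. Generate an ssh key in the usual way (pick whatever type/options are appropriate)
  ssh-keygen -t rsa -b 4096 -f .ssh/id_rsa

  # 2. Add the public key to the authorized_keys file and tighten the permissions
  cat .ssh/id_rsa.pub >> .ssh/authorized_keys
  chmod 700 .ssh && chmod 600 .ssh/authorized_keys

  # 3. Copy the authorized_keys file to the target server over sftp
  #    (this one-off copy still uses the interactive password prompt)
  echo 'put .ssh/authorized_keys .ssh/authorized_keys' | sftp username@target-server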

I guess from what you are saying that the production server name will also be replicated, hence why /etc/hosts needs to be protected. Can I assume that /etc/hosts is very small and that you use DNS? If not, then you risk updates to /etc/hosts on production being missed on the DR server. Really, you should keep /etc/hosts to:-

  • One record per network card, plus one for the loopback
  • One record for the boot addresses and persistent addresses when operating in a cluster.
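For example, a minimal /etc/hosts along those lines might be nothing more than this (addresses and names are made up):

  127.0.0.1       localhost
  192.168.1.10    server-1.prod.some.com   server-1       # this host's own interface
  192.168.1.11    server-1-boot                           # boot/persistent address when clustered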

More than this is bad practice anyway. I struggle with colleagues chucking entries in all over the place because they see it as easier than asking for a DNS update. We end up in a mess every now and then because of typing errors or a complete failure to make the update everywhere. I am always taking them out and putting them in DNS behind the scenes - and they don't notice.

Anyway, does this give you an option to proceed?

Robin