bash: umount/: No such file or directory

I am trying to connect two systems (for the time being) so that they support clustering.

For this, I installed the following packages:

    # sudo apt-get install pacemaker sysv-rc-conf glusterfs-server glusterfs-examples glusterfs-client chkconfig nmap ntp
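Since sysv-rc-conf and chkconfig are in that list, presumably to enable the daemons at boot, here is a sketch of what that would look like (assuming the init script is named glusterfs-server, which is what the Debian/Ubuntu package installs; adjust if yours differs):

    node[x]:~# sysv-rc-conf glusterfs-server on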
 

Next, I did the following on both systems:

    node[x]:~# mkfs.ext3 /dev/sd??
    node[x]:~# blkid -g
    node[x]:~# blkid /dev/sd?? >> /etc/fstab
    node[x]:~# vi /etc/fstab

You should end up with a line like this in /etc/fstab:

    UUID=9dc20d6c-a3d7-4667-a9b1-e8939a0473f1    /export    ext3    defaults    0    2

    node[x]:~# mount /export
    node[x]:~# mkdir /export/part1
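As a quick sanity check (not part of the original steps), you can confirm the filesystem actually mounted before creating the brick directory:

    node[x]:~# mount | grep /export
    node[x]:~# df -h /export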
  

Then I set up two files inside /etc/glusterfs: glusterd.vol and raid1.vol.

Here are the contents of the two files; on the second system the IP addresses change accordingly.

This is glusterd.vol:

volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume

The lines above are the defaults: they are the only lines included in this file when the glusterfs package is installed. I commented them out at first, thinking they would not be needed, but then I got the transport-endpoint connection error when I tried to mount /mnt/glusterfs, so I put them back. The rest of the file follows; see the restart sketch after the full listing.

volume posix
  type storage/posix
  option directory /export/part1
end-volume

volume brick
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  subvolumes brick
  option transport-type tcp
  option transport.socket.bind-address 192.168.3.13 # system IP
  option transport.socket.listen-port 820
  option transport.socket.nodelay on
  option auth.addr.brick.allow 127.0.0.1,192.168.3.13,192.168.3.99 # systems' IPs
end-volume
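After editing the server volfile, the daemon has to be restarted so it re-reads the configuration. A minimal sketch, assuming the init script shipped by the glusterfs-server package:

    node[x]:~# /etc/init.d/glusterfs-server restart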

And this is the raid1.vol file:

volume VolNode1 
  type protocol/client 
  option remote-subvolume brick 
  option transport-type tcp/client 
  option remote-host 192.168.3.13 # system 1  IP
  option remote-port 820 
  option transport.socket.nodelay on 
end-volume 

volume VolNode2 
  type protocol/client 
  option remote-subvolume brick 
  option transport-type tcp/client 
  option remote-host 192.168.3.99 # system 2  IP
  option remote-port 820 
  option transport.socket.nodelay on 
end-volume 

volume afr 
  type cluster/replicate 
  subvolumes VolNode2 VolNode1 
  option read-subvolume VolNode2 
end-volume 

volume wb 
  type performance/write-behind 
  subvolumes afr 
  option cache-size 4MB 
end-volume 

volume cache 
  type performance/io-cache 
  subvolumes wb 
  option cache-size 1024MB 
  option cache-timeout 60 
end-volume 
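With both servers up, the replicated volume is mounted on each node through this client volfile. A sketch of the mount, assuming the /mnt/glusterfs mountpoint mentioned above (glusterfs -f starts the client with a local volfile):

    node[x]:~# mkdir -p /mnt/glusterfs
    node[x]:~# glusterfs -f /etc/glusterfs/raid1.vol /mnt/glusterfs

If this succeeds on both nodes, a file written under /mnt/glusterfs on one system should appear on the other, since the afr volume replicates across VolNode1 and VolNode2.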

Could anyone tell me how to make the two systems work properly and share the folder?