Wouldn’t it be nice and handy to go to your local home directory and from there just cd into a remote one (say university stuff, possibly over WLAN or other sometimes unsecured lines) as if it were local data? Of course there is NFS or GNOME’s network folders (which use ssh; Places -> Connect to Server…), and I guess there are heaps of other ways to do it. I chose the sshfs way because it’s
- easy to set up
- only needs client-side (local) preparations
- can be set up and mounted entirely by a “normal user”
- data line is encrypted just as ssh is (because the data does go via ssh)
So, what needs to be done? I’ll just list the steps with only the necessary explanation. For a further introduction see below.
- sudo aptitude install sshfs
- lsmod | grep fuse to see if the fuse module is there. Otherwise modprobe it (sudo modprobe fuse).
- see if your user name is listed in the fuse user group: grep fuse /etc/group. If not, do
sudo adduser yourusername fuse. You might need to log out and back in in order for this change to take effect.
- ls -la /dev/fuse should give you
crw-rw---- 1 root fuse .... The ownership root:fuse is important. If not, do
sudo chown root:fuse /dev/fuse
- Now create the mountpoint: mkdir ~/unihome
- Actually mount the remote fs (syntax is like the one from ssh or scp):
sshfs remoteuser@remotehost:remotepath ~/unihome. If no ssh-key stuff is configured you’ll be asked for your remote password.
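The client-side preparation steps above can be sketched as a small diagnostic script. It only reports the state of each check and suggests the fix command; it never changes anything itself (the user name is taken from id, so nothing needs adjusting):

```shell
#!/bin/sh
# Diagnostic sketch of the sshfs preparation checks; reports only, never aborts.

# 1. Is the fuse module loaded (or built into the kernel)?
if grep -qw fuse /proc/filesystems 2>/dev/null; then
    echo "fuse module: available"
else
    echo "fuse module: missing -- try: sudo modprobe fuse"
fi

# 2. Is the current user a member of the fuse group?
if id -nG | grep -qw fuse; then
    echo "fuse group: you are a member"
else
    echo "fuse group: not a member -- try: sudo adduser $(id -un) fuse"
fi

# 3. Does /dev/fuse exist (ownership should read root:fuse)?
if [ -c /dev/fuse ]; then
    ls -la /dev/fuse
else
    echo "/dev/fuse: missing"
fi
```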
You now can cd ~/unihome or otherwise use the data there as if it were local. To unmount the remote data do
fusermount -u mountpoint. Here it would be
fusermount -u ~/unihome.
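If you mount and unmount often, the two commands can be wrapped in a pair of tiny shell functions, e.g. in your .bashrc. The names uni_mount/uni_umount are my own invention; adjust remoteuser@remotehost and the mountpoint to your setup:

```shell
# Hypothetical convenience wrappers around the sshfs/fusermount commands above.
uni_mount() {
    mkdir -p "$HOME/unihome"                       # make sure the mountpoint exists
    sshfs remoteuser@remotehost: "$HOME/unihome"   # mount the remote home directory
}

uni_umount() {
    fusermount -u "$HOME/unihome"                  # release the fuse mount again
}
```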
To make your daily life easier you can add a file called config to your (local) home’s .ssh directory with the following lines (insert your personal data):
Host wsl01
    Hostname remotemachine's-name-or-ip
    User remoteuser
After that you can shorten the mount command to
sshfs -oreconnect wsl01: ~/unihome to mount the entire home directory (see bottom for why
-oreconnect). Of course this only works because ssh’s default is to go straight into your home directory after login. From the sshfs FAQ:
Automatic mounting, if desired, can be added to a shell script such as .bashrc (provided authentication is done using RSA/DSA keys).
See Kevin van Zonneveld’s blog for how to set everything up to automatically log in using ssh (and thus sshfs) without being prompted for a password. But beware not to give anyone access to your private key file (see Kevin’s note under “Pitfalls” at the bottom)! Even though the key is user and machine specific, anyone who gains access to your machine and your user can hop to the remote machine with your remote login as well. Once you are done generating and installing the keys, put the mount command from above into the .bashrc file in your home directory. The share will be unmounted on system shutdown or logout.
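One caveat with a plain mount command in .bashrc: it runs on every new shell, and a second sshfs call on an already-mounted directory fails. A slightly more defensive .bashrc fragment (a sketch, assuming the wsl01 alias and the ~/unihome mountpoint from above) only mounts when sshfs is installed and nothing is mounted there yet:

```shell
# ~/.bashrc sketch: mount the remote home on login, but only once.
# Assumes key-based authentication and the "wsl01" alias from .ssh/config.
if command -v sshfs >/dev/null 2>&1 \
    && ! grep -qs "$HOME/unihome" /proc/mounts; then
    sshfs -oreconnect wsl01: "$HOME/unihome"
fi
```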
Now, you’re done.
Update: Tweak timeout
I’ve experienced several disconnects when the connection had been idle for too long, so I dug into it. From
man 5 ssh_config:
BatchMode
If set to “yes”, passphrase/password querying will be disabled. In addition, the ServerAliveInterval and SetupTimeOut options will both be set to 300 seconds by default. This option is useful in scripts and other batch jobs where no user is present to supply the password, and where it is desirable to detect a broken network swiftly. The argument must be “yes” or “no”. The default is “no”.
ServerAliveCountMax
Sets the number of server alive messages (see below) which may be sent without ssh receiving any messages back from the server. If this threshold is reached while server alive messages are being sent, ssh will disconnect from the server, terminating the session. It is important to note that the use of server alive messages is very different from TCPKeepAlive (below). The server alive messages are sent through the encrypted channel and therefore will not be spoofable. The TCP keepalive option enabled by TCPKeepAlive is spoofable. The server alive mechanism is valuable when the client or server depend on knowing when a connection has become inactive.
The default value is 3. If, for example, ServerAliveInterval (see below) is set to 15, and ServerAliveCountMax is left at the default, if the server becomes unresponsive ssh will disconnect after approximately 45 seconds. This option works when using protocol version 2 only; in protocol version 1 there is no mechanism to request a response from the server to the server alive messages, so disconnection is the responsibility of the TCP stack.
ServerAliveInterval
Sets a timeout interval in seconds after which if no data has been received from the server, ssh will send a message through the encrypted channel to request a response from the server. The default is 0, indicating that these messages will not be sent to the server, or 300 if the BatchMode option is set. This option applies to protocol version 2 only. ProtocolKeepAlives is a Debian-specific compatibility alias for this option.
So, I added a line to my .ssh/config file saying BatchMode yes. Per default this gives a server-alive interval of 300 seconds, i.e.
$((300 / 60)) = 5 minutes (bash simple math, use with
echo on the command line), between the keep-alive messages that detect a dead ssh connection.
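If you don’t want BatchMode’s side effect of disabling password prompts, the same keep-alive behaviour can, to my understanding of the man page excerpts above, be configured explicitly instead. A sketch for .ssh/config (reusing the wsl01 alias; the interval value is just the BatchMode default):

```
Host wsl01
    # send a keep-alive probe through the encrypted channel every 5 minutes ...
    ServerAliveInterval 300
    # ... and disconnect after 3 unanswered probes (the default anyway)
    ServerAliveCountMax 3
```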
Update 2: Automounting
Add a line like the following to your
/etc/fstab file (open in graphical mode with
gksudo gvim /etc/fstab):
# <file system> <mount point> <type> <options>
sshfs#wsl01: /mountpointpath fuse optionsset 0 0
Remember to adapt the bits written in italics, i.e. wsl01, the path to your mount point, and the options. A typical option set could be
comment=sshfs,users,noauto,uid=1000,gid=1000,allow_other,reconnect,transform_symlinks. It’s a mixture of basic mount options and fuse- and sshfs-specific options, respectively. The main ones are:
users: anyone can mount this filesystem
noauto: don’t mount automatically on system start up since network is not up, yet
uid=1000,gid=1000: since mount is not run with your uid/gid this is needed (find out the numbers with
id)
Now configure fuse by using
/etc/fuse.conf (info locally in
less /usr/share/doc/fuse-utils/README.gz). Add
user_allow_other to be able to use the fstab option allow_other.
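On a Debian/Ubuntu system the resulting /etc/fuse.conf can be as simple as this (the comment is mine):

```
# /etc/fuse.conf
# let non-root users pass the allow_other mount option
user_allow_other
```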
I was writing this section in parallel while testing it myself, and I suddenly noticed it’s not what I was looking for (which was auto-reconnect). Moreover, this seems less secure than the original approach, since with this any local user could mount it. The only advantage is having icons on the GNOME desktop (because it’s in the fstab), or if you want to auto-mount on network up/down. See the original forum post for how to do that.
Automatic reconnect is easily done by using the -o reconnect option with sshfs:
sshfs -oreconnect wsl01: ~/mountpoint.