UNIX Recycle Bin - restore function

Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted!

  1. The problem statement, all variables and given/known data:

A set of Linux shell scripts is required to allow users to 'remove' files without them really disappearing until the 'dustbin' is emptied. The three shell scripts required are:

del <filename> - This script should move the file called <filename> (a full or relative pathname) to the
dustbin directory.

trash [-a] - This script should remove the contents of the dustbin directory.
If the -a option is not used, the script should print the filenames in the dustbin one by one and ask the
user for confirmation that they should be deleted.
Otherwise, if the -a option is used, the script should simply remove ALL files from the dustbin.

restore [-n] <filename> - This script should move the file called <filename> (a full or relative pathname)
back to its original directory.
If the -n option is used, the script should allow the file to be moved to a directory nominated by the user.

Help:

I need help with the restore function.
How can I save the original path of the file so that, after deletion, I can move it back there? (bash code would help)
Any relevant ideas on how to do this would help me very much.
Thanks!

  2. Relevant commands, code, scripts, algorithms:

bash code
using Puppy Linux OS
I used a dustbin directory to keep my deleted files

  3. The attempts at a solution (include all code and scripts):

To get the absolute path of the file I used this:
ABS_PATH=$(cd "$(dirname "$1")" && pwd)
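
Something like this is what I am aiming for (a sketch only, untested; the ~/dustbin location and the ".origin" sidecar files are assumptions of mine, not part of the assignment, and each function would really become its own script):

#!/bin/bash
# Sketch: del records the original directory in a sidecar file,
# restore reads it back (or asks the user when -n is given).
DUSTBIN="$HOME/dustbin"

del() {
    mkdir -p "$DUSTBIN"
    local name abs
    name=$(basename "$1")
    abs=$(cd "$(dirname "$1")" && pwd)             # original directory
    mv -- "$1" "$DUSTBIN/$name"
    printf '%s\n' "$abs" > "$DUSTBIN/$name.origin"
}

restore() {                                        # usage: restore [-n] <filename>
    local name dest
    name=$(basename "${2:-$1}")
    if [ "$1" = "-n" ]; then
        printf 'Restore %s to which directory? ' "$name"
        read -r dest
    else
        dest=$(cat "$DUSTBIN/$name.origin")        # the path saved by del
    fi
    mv -- "$DUSTBIN/$name" "$dest/$name" && rm -f "$DUSTBIN/$name.origin"
}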

  4. Complete Name of School (University), City (State), Country, Name of Professor, and Course Number (Link to Course):
    Edinburgh Napier University, Edinburgh, UK, Dr Alistair Armitage, CSN08101 Systems and Services

Note: Without school/professor/course information, you will be banned if you post here! You must complete the entire template (not just parts of it).

Create a clone tree ~/.snapshot/ with every subdirectory of ~ and every file hard linked. Users can delete the originals and recover them from ~/.snapshot/. A monitoring daemon can ensure that newly created files are linked in soon afterwards. The clone's path is the original path with .snapshot/ inserted after the home directory.
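
A rough sketch of building such a tree, assuming the snapshot lives at ~/.snapshot (GNU cp -al does much the same thing in one call, if you have it):

SRC="$HOME"
SNAP="$HOME/.snapshot"
mkdir -p "$SNAP"

# First recreate the directory structure (skipping the snapshot itself)...
find "$SRC" -path "$SNAP" -prune -o -type d -print | while IFS= read -r d; do
    mkdir -p "$SNAP${d#$SRC}"
done

# ...then hard-link every regular file into it.
find "$SRC" -path "$SNAP" -prune -o -type f -print | while IFS= read -r f; do
    ln -f "$f" "$SNAP${f#$SRC}"
done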

Now, if you want to be able to recover even after an overwrite, you have to copy, not move. If space is a problem, the .snapshot tree can be a zip file. The daemon can keep a directory of cksum values, or trust time stamps, to decide what is new and needs saving.
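
One pass of such a cksum checker could look roughly like this (the /tmp file names and the .snapshot exclusion are illustrative assumptions):

touch /tmp/cksums.old
find "$HOME" -path "$HOME/.snapshot" -prune -o -type f -print0 |
    xargs -0 cksum | sort > /tmp/cksums.new

# Lines only in the new list belong to new or modified files.
comm -13 /tmp/cksums.old /tmp/cksums.new | while IFS= read -r line; do
    file=${line#* * }                    # drop the CRC and size columns
    echo "new or changed: $file"         # a real daemon would link or copy it here
done

mv /tmp/cksums.new /tmp/cksums.old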

I have seen facilities where there were N snapshots holding files for N prior days, so you can recover an old version.

Great! I got something like this for the clone tree:

# cd /
# tar cf - . | (cd ../clonetree/; tar xpvf -)

And now, how do I actually restore the file from my dustbin to the original location? (How do I use the clone tree?)

What is it with tar, a command so old that using it is like going to rent a car and being handed a 1962 VW Beetle, especially when you are not even leaving the host!

Yes, tar will make copies, so you just copy the file you want back. To restore /a/b/c/d:

cp -p /clonetree/a/b/c/d /a/b/c/d

Copies are slower than links and take more space. You can use cpio -pl to make linked clone trees. Linking is only possible if you clone within one device (filesystem).
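
For example (the destination path is only illustrative):

cd /srcdir
# Pass mode: -p writes into the destination directory, -d creates
# subdirectories as needed, -l hard-links instead of copying where it can.
find . -depth -print | cpio -pdl /clonetree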

tar is simple, efficient, and works. cpio also works, but its syntax is far more tortuous.

tar also has advantages over cp in some ways. I occasionally prefer it in situations where I need to be very careful about where things end up, because cp -R is entirely willing to create a new destination directory even when you didn't mean it to; tar, on the other hand, is precise and strict about what it creates where.
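
To illustrate the difference (directory names invented for the example):

# cp -R invents the destination if it is missing, so the result differs
# depending on whether "dest" already existed (dest/ vs dest/src/).
cp -R src dest

# The tar pipe only extracts into a directory that already exists,
# so a missing or misspelled destination fails instead of being created.
mkdir dest
(cd src && tar cf - .) | (cd dest && tar xpf -)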

As for your continual harangues on tar's inefficiency:

# directory of tiny tiny files
$ du -hs ./code
187M        ./code
# creating a tarball of it
$ tar -cf code.tar code
# Did tar waste gigatons of space?
$ du -hs ./code.tar
184M
# NO, it didn't!  Don't believe me?  Let's look closer.
$ du -s ./code ./code.tar
188236	./code.tar
191296	./code
# Now we create a CPIO archive, which should be wayyyyy better right?
$ find ./code -type f -depth -print | cpio -ov > ./code.cpio
$ ls -l code.tar code.cpio
-rw-r--r-- 1 monttyle monttyle 192747520 Nov 17 08:14 ../code.tar
-rw-r--r-- 1 monttyle monttyle 191435776 Nov 17 08:22 ../code.cpio
$

...and even that minute 0.5% extra reduces to 26 kilobytes out of 100 megs when you compress -- 0.02%.

So tar works absolutely fine, and you really don't need to tell us how awful it is all the time. Thank you.

Maybe it was that tar I got on my favorite jacket at Mystic Seaport. :smiley:

You seem to have a newer tar that has compression and accepts '-' arguments, but there are some really old ones out there that do not behave as normally or as nicely. This forum is not UNIX-flavor-specific, and I hate it when an old command refuses my examples!

I do not memorize cpio syntax; I write it into scripts, or I get the equivalent for free from the recursive modes of cp, rcp, scp, scp2. Some scp2 implementations have a -d option that says the target must be a pre-existing directory, so the 'different result the second time' problem is averted.

Also, doesn't tar lack a link-to-original (rather than copy) option, like cpio's pass mode and cp have? Since space was a concern here and overwriting was not, the object was to create a clone tree of hard links.

I gzipped the files after.

tar and cpio were standardized at the same time. You might also find it illuminating that POSIX-2001 restandardized the tar format in a backwards-compatible way to support files >8GB, but cpio couldn't be adapted -- they removed it from POSIX instead. It's gone.
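
With GNU tar, for instance, the newer format can be requested explicitly (older tars may not know this option):

# The classic ustar header stores the size as fixed-width octal text,
# which tops out around 8 GB; the POSIX.1-2001 pax format lifts that limit.
tar --format=pax -cf big.tar huge_file.img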

In short -- tar won, cpio lost. On many of my systems cpio didn't even come with the base OS!

Preaching to the choir, my friend. :slight_smile: My personal bugbear is solaris /bin/sh...

This is true. tar and cpio are not totally equivalent in functionality.

So, what do POSIX cp/rcp/scp/scp2 use in recursive mode now that cpio is gone: old cpio libraries, or home-brew replacements (not rocket science, and easily replicated in shell scripting)?

I don't know, ftw()? It's not hard.
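
In shell, too, it is only a handful of lines (a rough sketch; the paths are illustrative):

SRC=./src; DST=./dst
# Recreate the directory tree, then copy the files one by one.
find "$SRC" -type d | while IFS= read -r d; do
    mkdir -p "$DST${d#$SRC}"
done
find "$SRC" -type f | while IFS= read -r f; do
    cp -p "$f" "$DST${f#$SRC}"
done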

If these commands are backed by cpio on some systems, that could explain why some of them still have the 8 GB limit even when the system itself doesn't...