(10-12-2018, 12:59 AM)deck_luck Wrote: Did you run the ddrescue via a script or periodically via a cronjob?
I ran ddrescue with a single command from a bash shell; there was no script or cronjob involved.
Quote:Did your script make sure the filesystems were not in use or unmounted before attempting to make an image?
Yes! (Pretty sure ddrescue would have complained otherwise, though.)
Quote:How did you monitor the image creation for successful completion? (200MB compressed should have been a red flag)
When creating an image, ddrescue prints its status directly in the terminal. It also reported that the image creation finished successfully and showed how much of the data could be recovered. On top of that, ddrescue wrote a logfile confirming all of that information. The created image was probably roughly the size of the hdd; at least the image file inside the later compressed archive has that size. Therefore, I'm fairly confident the image creation worked as well as ddrescue would allow it to work.
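For reference, a minimal ddrescue call looks roughly like this (device name and file names are only placeholders, not the exact command I used back then):
Code:
ddrescue /dev/sdb old-hdd.img old-hdd.map
The mapfile/logfile is what lets ddrescue resume later and retry bad areas, and it also documents how much of the drive could be recovered.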
After some consideration, the way I created the 7z archive is what I nowadays blame myself for. If I remember correctly, I used the GUI to do this, and the fact that the .7z file no longer had the *.tmp extension was seemingly enough for me. I'm currently evaluating how to improve my "compressing skills" as mentioned by elsandosgrande (using the pipe from dd to gzip).
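If I ever use 7z for this again, doing it on the command line and testing the archive afterwards would probably have caught the problem (file names are just examples):
Code:
7z a old-hdd.7z old-hdd.img
7z t old-hdd.7z
The second command verifies the archive integrity, so a truncated or broken archive wouldn't go unnoticed.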
It is not unusual for a compressed hdd image to have a high compression ratio, since the unused disk space of an image usually compresses to almost nothing in the archive. BUT 200 MB for a 20 GB hdd with about 75% of it being used disk space should have been a red flag, that is indeed true.
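A quick sanity check before deleting the source would have been to list the archive and compare the uncompressed size inside it with the size of the original image (archive names are just examples):
Code:
7z l old-hdd.7z          # shows the uncompressed size of the files inside the archive
gzip -l old-hdd.img.gz   # same idea for a gzip-compressed image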
Quote:How did you maintain your "backup" image rotations?
Why do you use quotes?
There is no backup rotation here. This hdd hasn't been in use since 2003, and the data on it is not supposed to change anymore. The goal is to keep an image of the data in case I ever need it, and then shred and throw the old hdd away.
Off-Topic: As far as rotations go, I usually use rsync/grsync/SyncToy (Windows) to regularly back up my personal data to the NAS. The RAID 1 partition of the NAS is backed up to an additional hdd on a regular basis, as mentioned. But this has nothing to do with this thread or my image creation of old hdds.
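Just to illustrate what that looks like on the Linux side, a typical rsync call in my setup would be something along these lines (paths are only placeholders):
Code:
rsync -av --delete /home/user/data/ /mnt/nas/backup/data/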
Quote:So, I don't know how to generate a checksum since I never needed one before, I'm sorry I can't help you there.
Regarding dd | gzip, you must add -c after gzip, just like in the wiki, and be sure to run the command from a root shell. (I tried to do it via sudo by typing sudo at the beginning of the entire line and before each command, but it would still spit out permission errors, so I just used the root shell since I already use it quite often anyway.) You can enter a root shell either by logging in as root if you have the root account password set (sudo passwd, follow the on-screen prompts and voilà; then just type su, enter the root password and you're golden), or by typing sudo su (which is good enough if you don't intend to use software that needs the root password in particular, for example VMware Workstation). That should be enough until you find something else that works, if you want something else.
Hope this helps and have a nice day!
Don't worry about the checksums. cleverwise already told me all I needed to know in this thread. I already did my homework and generated sha256 sums for all my archive files on the NAS with a small script.
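For anyone else reading along, the script essentially boils down to something like this (the NAS path is just a placeholder):
Code:
cd /mnt/nas/archives
sha256sum *.7z *.img.gz > checksums.sha256
sha256sum -c checksums.sha256   # run later to verify against the stored sums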
Generating the .img.gz file with the dd | gzip pipe worked for me.
After your post I experimented a bit. The result was the same whether or not I used the -c option. Afaik, gzip writes to standard output by default when used in a pipe like this.
The wiki also suggests using the command exactly like this:
Code:
dd if=/dev/sda1 | gzip > ~/image-compress_sda1.img.gz
What also worked for me was prefixing this command with sudo; a root shell wasn't necessary.
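If the permission errors you ran into come from the output redirection rather than from reading the device (the > part is executed by the normal user's shell, not by root), one common workaround is to wrap the whole pipeline in a root subshell, roughly like this (paths are just examples):
Code:
sudo sh -c 'dd if=/dev/sda1 | gzip > /root/image-compress_sda1.img.gz'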
Here's the deal: What I've read comes from the ubuntuusers wiki, and I'm only using Debian-based OSes. Seeing that you are sitting in front of an Arch-based distribution, I guess that is what makes the image generation and compression a little different for the two of us.
Thank you again for your participation and your help in this thread.
And also thanks to everyone else so far.