welzel Posted August 7, 2010

Hi,

I was hit by the "let's format all your disks" bug in unRAID. As a result I had to spend money and a lot of time to recover, mainly by restoring an older backup onto a new machine (buying new drives and installing a fresh unRAID). I restored files onto the wiped drives and got roughly 90% of the important files back. I now have a few TB of files in lost+found.

I found a nice tool, dupemap, which is part of the magicrescue package. After spending around 30 hours trying to get it working on Slackware, I gave up. I need help.

root@Tower:/boot/tmp/magicrescue-1.1.9# ./configure
Checking whether the C compiler (cc -O3 -Wall) works... yes
Checking for atoll... yes
Checking for Cygwin... no
Checking for ndbm.h... no    <---- so Berkeley DB is not present
Checking for getrlimit... yes
Checking the size of off_t... 8
Checking for perl... ok
Finding dependencies... ok

root@Tower:/boot/# perl -MCPAN -e 'install BerkeleyDB'
BerkeleyDB.xs: In function 'boot_BerkeleyDB':
BerkeleyDB.xs:5492: error: 'DB_VERSION_MAJOR' undeclared (first use in this function)
BerkeleyDB.xs:5492: error: 'DB_VERSION_MINOR' undeclared (first use in this function)
BerkeleyDB.xs:5493: error: 'DB_VERSION_PATCH' undeclared (first use in this function)
BerkeleyDB.xs:5507: error: 'DBT' undeclared (first use in this function)
BerkeleyDB.xs:5507: error: 'my_cxt_t' has no member named 'x_empty'
BerkeleyDB.xs:5508: error: 'my_cxt_t' has no member named 'x_empty'
BerkeleyDB.xs:5508: error: 'my_cxt_t' has no member named 'x_zero'
BerkeleyDB.xs:5509: error: 'my_cxt_t' has no member named 'x_empty'
BerkeleyDB.xs:5509: error: 'db_recno_t' undeclared (first use in this function)
BerkeleyDB.xs:5510: error: 'my_cxt_t' has no member named 'x_empty'
make: *** [berkeleyDB.o] Error 1
PMQS/BerkeleyDB-0.43.tar.gz
/usr/bin/make -- NOT OK
Warning (usually harmless): 'YAML' not installed, will not store persistent state
Running make test
Can't test without successful make
Running make install
Make had returned bad status, install seems impossible

So I downloaded eight different versions of Berkeley DB... none of them would install; something always fails. Honestly, it feels like something is seriously wrong with this system compared to Debian or any other distro I've worked with over the past 10 years.

My basic problem is that I cannot get Berkeley DB installed on unRAID; it will not compile (I tried eight different versions/sources/methods). So I looked into package managers for Slackware and tried three of them, but I always hit a wall because some libraries were missing or would not install. Clearly I am doing something wrong, so where should I start?

Example: installing slapt-get:

slapt-get: error while loading shared libraries: libgpgme.so.11: cannot open shared object file: No such file or directory
dependency: configure: error: libgpg-error was not found

I've read everything in this forum about switching to a more user-friendly distro, and it seems like Lime Tech is the only one who could do that, right? Why is unRAID based on Slackware? I am really trying to understand why I have to suffer so much.

Any ideas how to get this working? My basic goal is to find duplicate files and delete them. I have to use a DB-based solution, and it has to run on unRAID (having my workstation tied up for 2-3 weeks is not an option).

Thank you for your time,
Bernhard
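The configure output above ("Checking for ndbm.h... no") and the undeclared DB_VERSION_* symbols both point at missing Berkeley DB headers rather than a broken compiler. A quick way to confirm that is sketched below; the paths are guesses for a typical Slackware layout, not taken from this thread.

# Look for the Berkeley DB header and shared library the CPAN module needs;
# if nothing shows up, the compile errors above are exactly what you'd expect.
ls /usr/include/db.h /usr/include/db4*/db.h 2>/dev/null
ls /usr/lib/libdb*-4*.so* 2>/dev/null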
WeeboTech Posted August 7, 2010

Did you install the Berkeley libraries from a Slackware distro?

http://ftp.riken.go.jp/pub/Linux/slackware/slackware-12.2/slackware/l/

db44 is in there.

EDIT: And here are the dependencies of the shared lib:

root@atlas /boot/packages/libs #tar -xvzf db44-4.4.20-i486-2.tgz usr/lib/libdb_cxx-4.4.so
usr/lib/libdb_cxx-4.4.so
root@atlas /boot/packages/libs #ldd usr/lib/libdb_cxx-4.4.so
        linux-gate.so.1 => (0xb7750000)
        libpthread.so.0 => /lib/libpthread.so.0 (0xb7621000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb753b000)
        libm.so.6 => /lib/libm.so.6 (0xb7515000)
        libc.so.6 => /lib/libc.so.6 (0xb73c9000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0xb73bd000)
        /lib/ld-linux.so.2 (0xb7751000)
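If installpkg is available on the box (it normally ships with Slackware-based systems such as unRAID), one possible route is to install that db44 package and then rebuild the Perl BerkeleyDB module against it. This is only an untested sketch; the exact header and library paths inside the package are assumptions, so list the archive contents before relying on them.

# Fetch and install the Slackware 12.2 db44 package.
cd /boot/packages
wget http://ftp.riken.go.jp/pub/Linux/slackware/slackware-12.2/slackware/l/db44-4.4.20-i486-2.tgz
tar -tzf db44-4.4.20-i486-2.tgz | grep -E 'db\.h|libdb'   # confirm where db.h and libdb land
installpkg db44-4.4.20-i486-2.tgz

# If you then build the BerkeleyDB CPAN module by hand from its unpacked
# tarball, its config.in has INCLUDE/LIB settings -- point those at wherever
# the package actually put db.h and libdb-4.4, then re-run perl Makefile.PL && make.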
Joe L. Posted August 7, 2010

I cannot help with the Berkeley stuff, but this post http://lime-technology.com/forum/index.php?topic=7018.msg68073#msg68073 has an attached shell script that will run on unRAID and will find duplicate files regardless of their names. No database is needed, and no additional packages are needed: it uses only the utilities on stock unRAID. Perhaps it will help you weed out the files in lost+found that have identical counterparts restored from your older backups.

Joe L.
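For reference, the core idea of name-independent duplicate detection can be shown with stock tools. This is only a minimal illustration of the concept, assuming GNU coreutils are present; it is not the attached script, and /mnt/user is just an example path.

# Hash every file's contents and report hashes that occur more than once,
# so duplicates are found no matter what the files are called.
find /mnt/user -type f -print0 \
  | xargs -0 md5sum \
  | sort \
  | uniq -w32 --all-repeated=separate > /tmp/duplicates.txt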
welzel Posted August 9, 2010 (Author)

Thank you for this:

"I cannot help with the Berkeley stuff, but this post http://lime-technology.com/forum/index.php?topic=7018.msg68073#msg68073 has an attached shell script that will run on unRAID and will find duplicate files regardless of their names."

After some more failures getting magicrescue to compile, I started the script and am hoping for the best. Basically, it also looks like a good stress test for my unRAID machine. At this point I have "dupe_tmp1" ... "dupe_tmp5", each about 250 MB (around 1 TB overall), and I expect the script to finish within the next 2-3 days. Not bad at all.

Thank you very much,
Bernhard
Joe L. Posted August 9, 2010

(quoting welzel's reply above)

I'm glad it might be of some help. Obviously you have a lot of duplicate files, since the lost+found folder probably holds a copy of nearly everything you restored from backup. Normally people do not have nearly as many duplicates, or potential duplicates, so the script weeds out the unique files earlier in its processing (unique length in bytes, or unique within the first 4 MB of the file). Since the script is just a series of commands, you can also execute them one at a time if you need to. Unless you edited it, it is processing everything on your server, so you will also learn of other duplicates you may have saved over the years. From what you describe, it is on the last step, where it has to examine the full contents of every file it still considers a potential duplicate. As you said, it will give your disks a workout.

Joe L.
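The staged filtering described here (drop unique sizes first, then hash only the first 4 MB, and only fully read what still matches) can be sketched roughly as below. This is an illustration of the idea assuming GNU find and coreutils, not the actual attached script, and /mnt is just a placeholder path.

# Stage 1: list files with their sizes; a file with a unique size cannot be a duplicate.
find /mnt -type f -printf '%s %p\n' | sort -n > /tmp/sizes.txt
awk '{print $1}' /tmp/sizes.txt | uniq -d > /tmp/dupe_sizes.txt   # sizes seen 2+ times

# Stage 2: for files that share a size, hash only their first 4 MB.
while read -r size path; do
    grep -qx "$size" /tmp/dupe_sizes.txt || continue
    head -c 4194304 "$path" | md5sum | awk -v p="$path" '{ print $1, p }'
done < /tmp/sizes.txt | sort > /tmp/head_hashes.txt

# Stage 3: run a full md5sum only on the files whose 4 MB hash repeats in
# /tmp/head_hashes.txt -- only those few files need to be read end to end.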