numanuma

Everything posted by numanuma

  1. Can somebody point me in the right direction: how do I configure PIA VPN to work with this Docker container? I've put my username and password in and it doesn't appear to connect, and I'm not seeing where to put my chosen server. Do I need to configure something else first? Thanks in advance.
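     For what it's worth, this is roughly how I understand the VPN-enabled containers are set up: the credentials and endpoint go in as container environment variables rather than inside Deluge itself. A minimal sketch (the variable names and paths here are my assumptions from the container docs, not something I've verified against this exact image):
     docker run -d --name=delugevpn --cap-add=NET_ADMIN \
       -e VPN_ENABLED=yes \
       -e VPN_PROV=pia \
       -e VPN_USER=myPIAusername \
       -e VPN_PASS=myPIApassword \
       -e LAN_NETWORK=192.168.1.0/24 \
       -v /mnt/user/appdata/delugevpn:/config \
       binhex/arch-delugevpn
     As I understand it, the chosen PIA endpoint isn't typed into the Deluge settings at all; it's picked via the container's own variables / the .ovpn file it drops in /config, so that may be the missing piece.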
  2. Damn, I've tried that. A friend suggested GParted and a live USB; worth a shot?
  3. I will check. If it is not, is it worth trying it on another motherboard, as I don't have another hard drive controller/chipset to try it with? Realised about 20 minutes ago that my nephew's baby photos are on there too. I already owe all you guys beers from the first drive; I'll have to give a kidney to the guy who saves the photos!
  4. I've tried reseating the XFS drive but still nothing. Wouldn't it /not/ be spinning if it were dead?
  5. Slight noob reply, but am I right in thinking it's sdX1, because this is a single disk I want to repair and /not/ the array? Better safe than sorry!
  6. If I can get it to show up, as it is XFS, should I run xfs_repair on /dev/sdX or mdX?
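     In case it helps anyone searching later, this is my understanding of the difference (device names are placeholders, so please correct me if I've got it wrong): repairing through the md device keeps parity in sync, while going straight at the sd device bypasses the array.
     xfs_repair -n /dev/md1     # dry run against the md device, array started in maintenance mode (disk1 in this example)
     xfs_repair -n /dev/sdX1    # or, if the disk is outside the array entirely, point at the partition
     # only after reviewing the dry-run output, run the same command again without -n to actually write fixes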
  7. It is family photos, so the data is invaluable to me. Does anybody know of a UK-based service they can recommend? A Google search turned up a company called Kroll, who seem highly rated.
  8. Damn ;_; Can you advise on what to do next? I'm at a loss. Currently consolidating the 2x2TB into 1x2TB so I can remove the known-faulty ReiserFS drive I just repaired.
  9. Sorry, here it is. Please tell me I can still salvage my data? tower-diagnostics-20170801-1758.zip Edit: Is my 4TB drive showing up in df.txt as the 3.7TB one? The amounts look about right from when I last saw it mounted in the GUI. My 5TB parity drive isn't showing up in that file, though? It is in other files, but then the 4TB is not.
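     (If my arithmetic is right, that size would make sense: df reports in binary units, and 4 TB = 4 × 10^12 bytes ≈ 3.64 TiB, which df shows as roughly 3.7T. The parity drive not appearing in df.txt would also make sense if it simply has no mountable filesystem, but that part is just my guess.)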
  10. Connected the 4TB XFS drive; it is not showing up in either the GUI or /dev/. Ideas? The parity was going suuuper slow anyway; is that because the parity drive is showing as faulty in the GUI?
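      For my own notes, these are the checks I understand can confirm whether the kernel sees the disk at all (run from the console; the device name is a placeholder):
      lsblk                     # list every block device the kernel knows about
      dmesg | tail -n 50        # look for link-up/link-down or ATA error messages right after plugging it in
      smartctl -a /dev/sdX      # SMART health, only works once the device actually appears
      If it shows in none of these, my understanding is it's a cable/port/power or dead-drive problem rather than a filesystem one.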
  11. Is there anything I can do about my parity drive showing as faulty in the web GUI?
  12. Aaaaand swiftly stopping the parity sync! Edit: on second thought, can't I build parity for these 2x2TB media drives, then New Config tomorrow when it comes to fixing the XFS? Once that is fixed (pray), I can add back the 2x2TB drives and then add the parity back, which wouldn't lose anything? Just before I go clicking cancel on anything - I think that's sorta what you said, but I want to make sure.
  13. root@Tower:/dev# reiserfsck --rebuild-tree /dev/sdc1
      reiserfsck 3.6.24
      *************************************************************
      ** Do not run the program with --rebuild-tree unless       **
      ** something is broken and MAKE A BACKUP before using it.  **
      ** If you have bad sectors on a drive it is usually a bad  **
      ** idea to continue using it. Then you probably should get **
      ** a working hard drive, copy the file system from the bad **
      ** drive to the good one -- dd_rescue is a good tool for   **
      ** that -- and only then run this program.                 **
      *************************************************************
      Will rebuild the filesystem (/dev/sdc1) tree
      Will put log info to 'stdout'
      Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
      Replaying journal: Done.
      Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
      Zero bit found in on-disk bitmap after the last valid bit. Fixed.
      ########### reiserfsck --rebuild-tree started at Mon Jul 31 20:39:50 2017 ###########
      Pass 0:
      ####### Pass 0 #######
      Loading on-disk bitmap .. ok, 215905101 blocks marked used
      Skipping 23115 blocks (super block, journal, bitmaps) 215881986 blocks will be read
      0%....20%....40%....60%....80%....100%          left 0, 19325 /sec
      2102 directory entries were hashed with "r5" hash.
      "r5" hash is selected
      Flushing..finished
      Read blocks (but not data blocks) 215881986
          Leaves among those 214519
          Objectids found 2115
      Pass 1 (will try to insert 214519 leaves):
      ####### Pass 1 #######
      Looking for allocable blocks .. finished
      0%....20%....40%....60%....80%....100%          left 0, 114 /sec
      Flushing..finished
      214519 leaves read
          214510 inserted
          9 not inserted
      ####### Pass 2 #######
      Pass 2:
      0%....20%....40%....60%....80%....100%          left 0, 0 /sec
      Flushing..finished
      Leaves inserted item by item 9
      Pass 3 (semantic):
      ####### Pass 3 #########
      ... .Nixon.PDTV.x264-W4F/american.experience.s03e04-e06.nixon.pdtv.x264-w4f.r00
      vpf-10680: The file [2838554 2838567] has the wrong block count in the StatData (97664) - corrected to (6440)
      /media/downloading/American.Experience.S03E04-E06.Nixon.PDTV.x264-W4F
      rebuild_semantic_pass: The entry [2838554 2838571] ("american.experience.s03e04-e06.nixon.pdtv.x264-w4f.r01") in directory [2413036 2838554] points to nowhere - is removed
      /media/downloading/American.Experience.S03E04-E06.Nixon.PDTV.x264-W4F
      vpf-10650: The directory [2413036 2838554] has the wrong size in the StatData (2496) - corrected to (2424)
      Flushing..finished
          Files found: 1778
          Directories found: 325
          Names pointing to nowhere (removed): 1
      Pass 3a (looking for lost dir/files):
      ####### Pass 3a (lost+found pass) #########
      Looking for lost directories:
      Flushing..finished          67765 /sec
          Empty lost dirs removed 1
      Pass 4 - finished           done 136508, 45502 /sec
          Deleted unreachable items 14
      Flushing..finished
      Syncing..finished
      ########### reiserfsck finished at Tue Aug 1 00:28:28 2017 ###########
      root@Tower:/dev#
      ReiserFS is working again, thank you. I thought I was up to date but obviously not. Now doing a parity sync and will connect the XFS drive in the morning. When I checked yesterday it would not show up in the web GUI.
  14. I think I meant --fix-fixable first. I'm nearly at 80% on --rebuild-tree. From what I've read, --check, --fix-fixable and --rebuild-tree are the ones I should be using, but please, if anybody knows better, do chime in.
      0%....20%....40%....60%...          left 57960765, 21927 /sec
      How are the latest XFS repair tools on 6.3.5? I've read they aren't that reliable.
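      For anyone following along, the order I've pieced together (from the man page and these threads, so treat it as my reading rather than gospel) is least-destructive first, always against the partition device:
      reiserfsck --check /dev/sdc1          # read-only consistency check, tells you which fix it thinks is needed
      reiserfsck --fix-fixable /dev/sdc1    # fixes minor damage that doesn't need a tree rebuild
      reiserfsck --rebuild-tree /dev/sdc1   # last resort, rewrites the whole tree; back up or clone the disk first if at all possible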
  15. OK, I'll work on XFS once this ReiserFS is done. Stopped the parity sync because the data on disk2 can wait until disk3 is done. Updated to the latest unRAID; --check, --fix-fixable & --rebuild-tree tried and failed.
      root@Tower:/dev# reiserfsck --check /dev/sdc1
      reiserfsck 3.6.24
      Will read-only check consistency of the filesystem on /dev/sdc1
      Will put log info to 'stdout'
      Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
      ########### reiserfsck --check started at Mon Jul 31 20:19:37 2017 ###########
      Replaying journal: Done.
      Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
      Zero bit found in on-disk bitmap after the last valid bit.
      Checking internal tree..
      Bad root block 0. (--rebuild-tree did not complete)
      Aborted
      root@Tower:/dev# reiserfsck --fix-fixable /dev/sdc1
      reiserfsck 3.6.24
      Will check consistency of the filesystem on /dev/sdc1 and will fix what can be fixed without --rebuild-tree
      Will put log info to 'stdout'
      Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
      ########### reiserfsck --fix-fixable started at Mon Jul 31 20:33:13 2017 ###########
      Replaying journal: Done.
      Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
      Zero bit found in on-disk bitmap after the last valid bit. Fixed.
      Checking internal tree..
      Bad root block 0. (--rebuild-tree did not complete)
      Aborted
      Currently running rebuild-tree:
      root@Tower:/dev# reiserfsck --rebuild-tree /dev/sdc1
      reiserfsck 3.6.24
      *************************************************************
      ** Do not run the program with --rebuild-tree unless       **
      ** something is broken and MAKE A BACKUP before using it.  **
      ** If you have bad sectors on a drive it is usually a bad  **
      ** idea to continue using it. Then you probably should get **
      ** a working hard drive, copy the file system from the bad **
      ** drive to the good one -- dd_rescue is a good tool for   **
      ** that -- and only then run this program.                 **
      *************************************************************
      Will rebuild the filesystem (/dev/sdc1) tree
      Will put log info to 'stdout'
      Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
      Replaying journal: Done.
      Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
      Zero bit found in on-disk bitmap after the last valid bit. Fixed.
      ########### reiserfsck --rebuild-tree started at Mon Jul 31 20:39:50 2017 ###########
      Pass 0:
      ####### Pass 0 #######
      Loading on-disk bitmap .. ok, 215905101 blocks marked used
      Skipping 23115 blocks (super block, journal, bitmaps) 215881986 blocks will be read
      0%.          left 196606499, 24743 /sec
  16. I'll grab the latest zip and move the bz files. Any ideas on my XFS issue?
  17. I've backed up my old configs and done Tools > New Config, moved the 5TB parity drive (which is also showing the faulty triangle on the web GUI) and formatted it to XFS, then New Config again and moved it back to parity. I've got the only one of my 3 data drives, in slot 2, which is also a 2TB drive, doing a parity sync now. 12hr to go, ~100MB/sec. I have no idea what to do after that completes, assuming there are no errors.
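      (Back-of-the-envelope check on that estimate, assuming the sync has to run the full length of the 5TB parity disk: 5 × 10^12 bytes at ~100 MB/s is about 50,000 seconds, roughly 14 hours, so 12hr remaining shortly after starting sounds about right.)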
  18. Hiya guys, my problem is with XFS & ReiserFS. I have a 1x2TB ReiserFS media drive and the more important family photos drive, which is 4TB and XFS. Brand new build with old drives (currently replacing all): i7 6800K + MSI X99A mobo + 8GB Corsair Vengeance 2400MHz, passive MSI GeForce 1030 2GB GPU (in an Antec S10 case - it's my new baby). I would be really grateful for any help; like I said, there is a really important drive (aren't they all). Family pictures are the most important thing to get off the drive.
      The 2TB Reiser has been through reiserfsck --check, reiserfsck --fix-fixable, reiserfsck --rebuild-sb, and --rebuild-tree.
      root@Tower:~# reiserfsck --fix-fixable /dev/sdc1
      reiserfsck 3.6.25
      Will check consistency of the filesystem on /dev/sdc1 and will fix what can be fixed without --rebuild-tree
      Will put log info to 'stdout'
      Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
      ########### reiserfsck --fix-fixable started at Mon Jul 31 15:48:14 2017 ###########
      Replaying journal: Done.
      Reiserfs journal '/dev/sdc1' in blocks [18..8211]: 0 transactions replayed
      Zero bit found in on-disk bitmap after the last valid bit. Fixed.
      Checking internal tree..
      Bad root block 0. (--rebuild-tree did not complete)
      Aborted
      That is the most common error now, the bad root block 0 on the ReiserFS disk. It has media on it and I really don't want to lose it. Currently showing as unmountable if not in maintenance mode.
      The XFS 4TB WD Green (main) disk has been disconnected and turned off in the hope that somehow Jesus or Al Gore works some magic while it is off. No such luck. I think it is a bad sector on the XFS, but I will need to connect it, run xfs_repair and post the results. Any advice on this would be greatly appreciated. More so would be reassurance that my almost 20 years of family photos have not been lost. (Yes, I know they should have been backed up before now.) Disk1 (XFS) failed, parity was corrupt and disk3 (ReiserFS) failed. Thank you.
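      Since I suspect bad sectors: the reiserfsck warning above says to copy the filesystem off the bad drive first, and the way I understand people usually do that is with GNU ddrescue onto a known-good drive of at least the same size, then repair the copy (device names below are placeholders, and this is just my reading of the ddrescue docs, not something I've done yet):
      ddrescue -f -n /dev/sdX /dev/sdY rescue.map     # first pass: grab everything that reads easily, skip the bad areas
      ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map    # retry the bad areas up to 3 times, reusing the same map file
      Then the filesystem repair (xfs_repair or reiserfsck) would be run against the copied partition, e.g. /dev/sdY1, instead of the failing disk.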
  19. I'm trying to get any YaRSS2 version to work with the Deluge 1.3.12 binhex Docker container I just installed, but unlike every other plugin, YaRSS2 refuses to stay ticked and activate. Can anybody help me with this? It's driving me up the wall.
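      If it helps anyone diagnose it, my understanding of how these plugins load (which may well be where I'm going wrong): the plugin is a versioned .egg file that has to sit in the plugins folder of the Deluge config directory and match the Python version the daemon runs, otherwise the tick silently doesn't stick. Something along these lines, with the appdata path and log filename being assumptions about how my container is mapped:
      ls /mnt/user/appdata/binhex-deluge/plugins/          # the YaRSS2-x.y.z-pyX.Y.egg should be sitting in here
      tail /mnt/user/appdata/binhex-deluge/deluged.log     # any plugin load error should show up in the daemon log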
  20. "What do you have set under Settings -> Global Share Settings? Unless you want to specifically exclude certain disks from being used for any shares, I would leave the include and exclude entries there as blank. Also, you should only ever set the include or exclude entries for a share - not both (assuming you are not leaving them both blank)."
      That got it. I had 2 disks for the longest time; I assumed that once a 3rd was added it would automagically show up. Thank you for fixing an issue I have had for a very long time!
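      For anyone else who lands here, the way I've ended up setting it: Settings -> Global Share Settings left with blank include/exclude so all disks are eligible globally, and then on the individual share only the include side filled in, e.g. Included disk(s): disk3,disk4 with Excluded disk(s) left blank. (The comma-separated format is just how mine looks; double-check against your own GUI version.)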
  21. I need disks 3 and 4 in a share, and to exclude disks 1 and 2; however, when I go to the share settings it only shows disks 1 and 2 as selectable options. Any ideas? Thank you.
  22. Hiya, I can't enable plugins. I have downloaded YaRSS2 1.2, 1.32, and 1.3.3 and none of them will enable. I can tick the box and apply, but the change does not take effect and the box is unchecked when I open the plugins preference pane again. Any ideas please?