JustinAiken

Everything posted by JustinAiken

  1. Heavy docker user, non-VM user here - updated yesterday without incident, haven't seen any issues since. EDIT - Did have to add `ntlm auth = yes` to my samba conf
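     A minimal sketch of that change, assuming the usual Unraid convention of putting extra Samba directives in `/boot/config/smb-extra.conf` (Settings → SMB → SMB Extras); newer Samba releases default to `ntlm auth = no`, which can break older clients after an upgrade:

     ```
     # Re-allow NTLM authentication for clients that still need it
     [global]
         ntlm auth = yes
     ```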
  2. Hahahaha, it's *so* much faster now!
  3. Just the dockers in my sig; with my tiny E5200, I have to pause everything but Crashplan when I want to back up, then pause Crashplan and unpause everything else.
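     A rough sketch of how that pause/unpause shuffle could be scripted (the container name `crashplan` is an assumption):

     ```bash
     # Backup window: pause every running container except CrashPlan.
     docker ps --format '{{.Names}}' | grep -v '^crashplan$' | xargs -r docker pause

     # Afterwards: pause CrashPlan and wake everything else back up.
     docker pause crashplan
     docker ps --filter status=paused --format '{{.Names}}' | grep -v '^crashplan$' | xargs -r docker unpause
     ```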
  4. My aging E5200 can't keep up with the dockers... Just ordered:
     - i5-6600k: https://www.newegg.com/Product/Product.aspx?Item=N82E16819117561
     - ASRock Z170 Pro4S: https://www.newegg.com/Product/Product.aspx?Item=N82E16813157636
     - 16GB GSKILL Aegis DDR4: https://www.newegg.com/Product/Product.aspx?Item=N82E16820232249

     Will that be a good upgrade for lots of dockers, no VMs?
  5. - Smooth update from 6.2.3 here
     - Seeing all those CVEs fixed also reminded me to `brew upgrade curl` on my Mac
  6. Ahh, finally finished this morning! I was getting worried this was going to be one of those 400 day things
  7. - After replacing/rebuilding a failed hardware drive, the filesystem was still corrupt
     - Started the array in maintenance mode
     - Did a check through the WebUI; it told me to do `--rebuild-tree`
     - Started the `--rebuild-tree` through the WebUI
     - ...but now it's been running for 3 days, and I'm missing the array
     - I started it through the UI, which added `--quiet`, so now I can't see much of the progress; this morning it made it to Pass 3 of 3 (semantic), but the log hasn't moved for about 8 hours

     ```
     root@Tower:/# ps aux |grep [r]eiser
     root 11162 2.9 9.0 373176 362956 ? D Sep25 95:37 /sbin/reiserfsck /dev/md13 --yes --quiet --rebuild-tree
     ```

     Is there a way to get the array/shares back while this finishes?

     EDIT - More info: Here's the log: https://gist.github.com/JustinAiken/fb84a4b62e2d6509743b65cd76809a32

     ```
     ...
     vpf-10680: The file [4472 4481] has the wrong block count in the StatData (619528) - corrected to (496)
     vpf-10680: The file [4472 4482] has the wrong block count in the StatData (630328) - corrected to (4176)
     vpf-10680: The file [4472 4483] has the wrong block count in the StatData (599536) - corrected to (5152)
     vpf-10680: The file [4472 4484] has the wrong block count in the StatData (606216) - corrected to (4536)
     vpf-10680: The file [4472 4485] has the wrong block count in the StatData (752344) - corrected to (5336)
     vpf-10680: The file [4472 4486] has the wrong block c
     ```

     For some reason it cuts off mid-sentence, but the process is still running.
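     Not an answer to the array question, but since the WebUI's `--quiet` hides the normal progress output, one indirect way to confirm the fsck is still alive is to watch its I/O counters in `/proc` (a sketch; 11162 is the PID from the `ps` output above):

     ```bash
     # If read_bytes/write_bytes keep climbing between samples, reiserfsck
     # is still chewing through the tree even though the log is silent.
     watch -n 60 cat /proc/11162/io
     ```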
  8. Thanks! After I get this mess resolved, I'll start using that.

     Haha... yeah. I'd updated to the 6.2.0 RCs once they came out because I was excited for dual parity, I just hadn't gotten around to setting it up yet. Learned my lesson - new 8TB ordered and on its way!

     Yeah... For the files I do have a backup of on these drives, I think I'll just nuke them and restore them from Crashplan (family pics and other important things are backed up!). For files I don't... guess I need to scan through lots of movies to see if they're corrupt or not!!

     Excellent plan - as soon as my advance RMA from Seagate comes, I'll preclear and do exactly that - thanks for the advice!
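     For the scan-the-movies problem, one common triage is to have ffmpeg decode each file and report only errors; a sketch, with the library path and extension as assumptions:

     ```bash
     find /mnt/user/Video -name '*.mkv' -print0 |
     while IFS= read -r -d '' f; do
       # -nostdin stops ffmpeg from eating the file list; with -v error,
       # anything on stderr is a real decode problem worth a look.
       err=$(ffmpeg -nostdin -v error -i "$f" -f null - 2>&1)
       [ -n "$err" ] && echo "POSSIBLY CORRUPT: $f"
     done
     ```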
  9. https://gist.github.com/JustinAiken/d4fc4fb7886de1a00da6a4bef0b90390

     There's:
     - The syslog from just before I kicked off the rebuild, until the log quit (the other one was 0 bytes)
     - Misc other little files

     Was afraid of that... I don't - any recommendations for mass-checksumming in the future?

     Unfortunately, I ran a preclear on it at the same time as the rebuild - wanted to 0 it out before I RMA'd it off.
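     On the mass-checksumming question, the low-tech approach is a manifest per share that can be re-verified after any rebuild; a sketch, with the share path and manifest location as assumptions:

     ```bash
     # Build a checksum manifest for a share (relative paths, so the
     # manifest stays valid if the share ever moves).
     cd /mnt/user/Pictures
     find . -type f -print0 | xargs -0 sha256sum > /boot/pictures.sha256

     # Later, after a rebuild/RMA cycle, list anything that changed:
     cd /mnt/user/Pictures && sha256sum -c /boot/pictures.sha256 | grep -v ': OK$'
     ```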
  10. - Had a drive (drive 13, a 4TB Seagate NAS) tell me it was about to die - end-to-end SMART was "FAILING NOW"
      - So before I left town for a couple of days, I stopped the array, assigned a warm spare to the slot, and kicked off a rebuild
      - Came home to find that whilst drive 13 was being rebuilt, a different drive (drive 3) died
      - This is bad - I don't have dual parity
      - Attached are the notifications
      - Clicking about, everything from drive 13 -looks- to have been rebuilt okay
      - Clicking about, everything on the simulated-with-parity drive 3 -looks- to be intact
      - What should I do next? I had stopped my Crashplan docker before - I'm not starting it back up, to make sure that if any of these files are corrupted, they don't upload over the good copies in the cloud
      - Is there any way to verify the files on drive3/drive13 to see if any got corrupted?
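      On the verification question: if a second copy of any of the data exists (say, a CrashPlan restore pulled down to another disk), a checksum-only rsync dry run will list files that differ without copying anything; paths here are placeholders:

      ```bash
      # -r recurse, -c compare full checksums (not size/mtime), -n dry run,
      # -i itemize; lines starting with ">f" are files that would change.
      rsync -rcni /mnt/disks/restore/ /mnt/user/Pictures/ | grep '^>f'
      ```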
  11. Most are linuxserver-based, but not all. Unraid 6.2-rc1 - not sure if this is a bug introduced here or fixed since then
  12. Will the new ISOs share be created if I don't enable VMs?
  13. Is it possible to use a VPN for downloads with this Transmission/docker setup? Or would I need to switch to team Deluge for that?
  14. My weak server's been running the beta for 11 days now:
      - Moderate docker use
      - No VMs
      - SMB shares from both Mac/Windows
      - Precleared (w/ beta plugin) an 8TB drive, added it to the array
      - Did a parity check

      So far all very smooth!
  15. Just kicked off a Preclear of a new 8TB Seagate archive using the beta plugin... Will report back when it's done!
  16. Just updated straight from 6.1.9 to 6.2.0-beta21...
      - Smooth update overall
      - It took longer than expected for the Docker migration; some way to check progress would be good
      - All my existing Dockers are functioning okay!
  17. Fair enough! Thanks for clarification, will give the beta a try today
  18. This is important - I want to try the beta, and will help report issues/etc found, but I can't install something that's going to hold all my data hostage if I can't connect to the internet.
  19. Awesome! Excited for dual parity... time to watch HD deals again! Also excited for the modern Docker version...

      Question though... My dockers are merrily humming along - most have shares mounted such as `/config` -> `/mnt/cache/apps/sabnzbd`. With all the new 'Default volume mapping for appdata' and 'auto-share creation', what will happen to my existing setup? I'd rather keep everything where it is...
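      For reference, the kind of explicit mapping in question, as it would look from the command line (a sketch; the image and paths are examples, not the exact setup):

      ```bash
      # The host path on the cache drive is bind-mounted to /config inside
      # the container; the question above is whether the upgrade leaves an
      # explicit -v mapping like this alone.
      docker run -d --name sabnzbd \
        -v /mnt/cache/apps/sabnzbd:/config \
        linuxserver/sabnzbd
      ```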
  20. 3 year warranty! Don't have any of this kind yet, tempting..
  21. Mover is breaking for me sometimes:

      ```
      root@Tower:~# ps aux |grep rsync
      root 21647 0.0 0.0 11780 1448 ? D 03:40 0:00 rsync -i -dIWRpEAXogt --numeric-ids --inplace ./Video/TV/Transparent/Season 01/Transparent.S01E08.720p.WEB-DL-BATV.en.srt /mnt/user0/
      root 21648 0.0 0.0 0 0 ? Z 03:40 0:00 [rsync] <defunct>
      root 24486 0.0 0.0 5104 1640 pts/4 S+ 10:24 0:00 grep rsync
      ```

      That subtitle file was a few hundred KB, and I have to hard-reset my server, since I can't unmount the drive due to it being locked up by the failed rsync.
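      For anyone debugging the same hang, the telltale is the `D` in the STAT column above (uninterruptible sleep); a quick sketch for spotting such processes:

      ```bash
      # Processes in uninterruptible sleep ("D") are blocked in the kernel,
      # usually on I/O; they ignore signals, which is why the unmount hangs
      # and a hard reset ends up being the only way out.
      ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
      ```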