Everything posted by deleteme

  1. Great - thanks for the clear response!
  2. I'm trying out the legacy 1.0.2 version of this container to see if FileBot works for me and whether I can figure out the setup and rename conventions. Everything seems to be working well through the WebGUI, but I'm seeing no sign of the AMC script running at all. Nothing in the Docker log shows any of the "[amc] ...." lines previous posters have referenced, despite setting the AMC frequency in the Docker config to an extreme (run every 10 seconds) and quadruple-checking all my Docker paths. Is AMC a function that simply didn't work in 1.0.2, something to sort out after getting a license? Or is there an additional file or script I'm missing that needs to be added to the container to make it work?
  3. > Ethernet

    Do you use a Docker or plug-in on the unRAID box for DHCP assignments? An ad blocker such as Pi-hole or the like?
  4. Well, that opens some interesting edge cases. Is there a resource that goes deeper into how unRAID handles rebuilds under dual parity, with more detail than just "2 parity = 2 failed drives are recoverable"? The official wiki still just links to limetech's forum posts saying P/Q parity will be implemented, and the 6.2 release info is equally slim.

    When rebuilding one data drive with dual parity, does unRAID default to trusting one parity while checking against the second? Specifically (see the sketch after this list):

    > If both parities are available and only one drive is being rebuilt, is Parity 1 (P) used for the rebuild by default?
    > If a sync error is detected between a newly rebuilt bit and the other parity's corresponding bit, what corrective action is taken, or what error is thrown? Would the rebuild stop to get user input (disabling one of the parities)?
    > Some people recommend performing a parity check after a drive rebuild to make sure there were no errors in the rebuild process. Would the "check" against the second parity essentially complete this at the same time as the rebuild?

    This might be a good thing to publish for those being conservative: going to dual parity would cut the rebuild time and the number of full disk reads in half when replacing multiple drives, and the second parity could then be swapped back to data as the last step.
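    To make the single-drive question concrete, here's a minimal sketch of the P-based recovery path, assuming Parity 1 is the plain XOR of the data disks (the byte values are made up for illustration). Whether unRAID also reads Parity 2 to cross-check the result is exactly the open question:

      # Recover a lost data byte from Parity 1 alone (illustrative values).
      surviving = [0x0F, 0xA3]   # bytes from the healthy data disks
      p = 0x0F ^ 0xA3 ^ 0x5C     # Parity 1 computed before the disk failed
      rebuilt = p
      for byte in surviving:
          rebuilt ^= byte        # XOR out the surviving disks...
      assert rebuilt == 0x5C     # ...leaving the lost disk's byte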
  5. Can't speak for the WD Reds in particular, but SMR and unRAID get along just fine; there's a super thorough post from back when Seagate introduced the Archive line. In fact, I've had a server running solely on SMR Seagate Archive drives, for both parity and data, without a single problem for several years. Replacing them with a non-SMR drive isn't an issue either; it's transparent to unRAID. The important thing for performance with these drives is to have a fast cache drive (an SSD or a high-performance HDD). That way you'll rarely encounter the SMR drives' downsides: most writes will land on your fast cache until the Mover takes them to the array at its leisure.
  6. General question here, after reading as much as I could on the whole Parity 1 / Parity 2 functionality. I'm upgrading my server to larger capacities and decided to assign the first larger drive as a new Parity 2, both as a stress test and to try it out for future knowledge. That went fine. I've now replaced my Parity 1 drive with its larger-capacity successor (by un-assigning the original disk and assigning the new one while the array was stopped; no "new config") and initiated the parity rebuild. See the image below: the Parity 2 drive is being read during the Parity 1 rebuild at the same rate as the data drives. Is that expected behavior? I thought the two parities were more or less independent of each other, and that Parity 1 is just the XOR of the data disks. Shouldn't Parity 2 idle along while the data drives are read to rebuild Parity 1? Is this a function of re-assigning drives vs. New Config? A sketch of what I mean by "independent" is below.
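    For reference, here's a minimal sketch of why I'd expect the two calculations to be independent, assuming unRAID's dual parity follows the usual RAID-6-style P/Q scheme (the GF(2^8) arithmetic and byte values below are illustrative, not a statement of unRAID internals):

      # P is the plain XOR of the data bytes; Q weights each disk by a
      # power of 2 in GF(2^8), so it depends on disk order. Note that
      # neither calculation reads the other parity.
      def gf_mul(a, b):
          """Multiply two bytes in GF(2^8), reducing by the 0x11d polynomial."""
          r = 0
          for _ in range(8):
              if b & 1:
                  r ^= a
              carry = a & 0x80
              a = (a << 1) & 0xFF
              if carry:
                  a ^= 0x1D
              b >>= 1
          return r

      def gf_pow2(i):
          """Compute 2**i in GF(2^8)."""
          r = 1
          for _ in range(i):
              r = gf_mul(r, 2)
          return r

      data = [0x0F, 0xA3, 0x5C]                 # one byte per data disk
      p = 0
      for byte in data:
          p ^= byte                             # Parity 1: plain XOR
      q = 0
      for i, byte in enumerate(data):
          q ^= gf_mul(byte, gf_pow2(i))         # Parity 2: weighted XOR sum
      print(hex(p), hex(q))

    If that model holds, rebuilding Parity 1 should only need the data disks, which is why the Parity 2 reads surprised me.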
  7. Overall, I'm sitting around 9TB of data across 7 drives + parity + cache. Here's a capture from the WebGUI before I re-assigned all the drives when recovering: http://i.imgur.com/pp3Dqmf.png Disk3 was the target of the removal. I guess my question becomes more generic now: what do I trust, the collective data drives or the parity that I had swapped out? If I go back through a rebuild onto a new disk4 and some of the files I check are fine, does that actually say the overall parity is correct? From how I've understood unRAID, that's not how corruption works. (Shoutout to limetech to get P+Q parity and integrity checking on the short list ;P)
  8. Realized I was running reiserfsck on the physical drive rather than the logical one. Re-ran it on /dev/md4 and it comes back clean:

      ########### reiserfsck --check started at Wed Dec 18 21:46:22 2013 ###########
      Replaying journal: Done.
      Reiserfs journal '/dev/md4' in blocks [18..8211]: 0 transactions replayed
      Checking internal tree.. finished
      Comparing bitmaps..finished
      Checking Semantic tree: finished
      No corruptions found
      There are on the filesystem:
          Leaves 234174
          Internal nodes 1400
          Directories 7359
          Other files 44141
          Data block pointers 231168541 (0 of them are zero)
          Safe links 0
      ########### reiserfsck finished at Wed Dec 18 21:56:19 2013 ###########

    Still don't know what to do about the parity errors. Like I said, the New Config was done on a different drive. Would there have been writes to the data disks during the first 10 seconds of a New Config?
  9. So tonight I attempted to remove an old (emptied) drive from my array and recalculate parity, to minimize the number of disks I had running and improve my parity check speed. I was conservative when doing this operation:

    A) Screencapped my disk configuration from the main menu
    B) Used a precleared new disk for a new parity
    C) Removed the old drive
    D) Backed up my flash before restarting

    When I brought the server back up, I used "New Config" and re-assigned all my drives as they were previously (with the new parity drive). When I brought the array online to recalculate, one disk (1TB, "disk4") started reporting hundreds of read errors. ["shiz"]. I stopped the calcs, shut the machine down, re-checked all the cable connections, and brought it back up. Now this disk was being reported as Unformatted. Ran reiserfsck --check on it and was told it had a bad superblock. ["super shiz"]. I decided to back out and rebuild disk4: I re-installed the original parity disk and the old drive, and re-flashed my USB key with the old configuration. This time the box started up like a charm and began the array on its own. The entire file structure of disk4 is even visible, and the few pictures and videos I sampled off the drive seem to be fine. However, if I do a non-correcting parity check, it errors out instantly with over 248 parity errors.

    So I ask the experts: what should my next move be? I have a 2TB precleared drive sitting in standby. Do I swap disk4 with the precleared drive and rebuild, assuming it was damaged somehow by the read errors? Do I run SMART on every drive with the array offline and attempt to find a different drive that's reporting errors? Do I do something I don't even know about?

    *I'll note that I ran two parity checks before starting this operation, so everything was fine at the beginning!

    Running version 5.0.4

    Thanks for any help you can offer!