About jnheinz

  1. It's actually Step 2 -- and yes, you just select No Device. It's a good idea after doing that to Start the array, so it shows a missing drive -- THEN shut down and physically replace the drive. [if there's room to have all drives in the system, you don't actually have to remove the drive -- you'll simply assign the "new" drive in its place when you reboot] Thanks, this makes perfect sense. I will test it this way when I am to that point.
  2. Sorry, I meant Step 2. I am just testing a rebuild. I recently got burned by a 2TB SAS disk that half-failed during other maintenance while I was rebuilding parity (xfs_repair and mounting it have both failed, so I am going to attempt a ddrescue from a live CD to clone it to another disk), so I would like to make sure my array is capable of rebuilding a disk in a standard scenario at least. I will pre-clear the disks in advance, and I will retain the "original/good" data on the 1TB disk being replaced... so even if the "new" drive is bad, I still have the old data.
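The ddrescue attempt described above might look like the following (a sketch only; /dev/sdX stands in for the failing 2TB SAS disk and /dev/sdY for the clone target; double-check device names with lsblk before running, since ddrescue overwrites the destination):

```shell
# Sketch: device names are placeholders. GNU ddrescue records progress in a
# map file, so an interrupted run resumes and already-rescued areas are skipped.

# Pass 1: grab everything readable quickly, skipping the scraping of bad areas (-n).
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Pass 2: retry the bad areas a few times with direct disc access.
ddrescue -f -d -r3 /dev/sdX /dev/sdY rescue.map

# Only once the clone is complete, run repairs against the *copy*, never the original.
xfs_repair /dev/sdY1
```

Cloning the whole disk also copies the partition table, which is why the repair step targets a partition on the clone.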
  3. https://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive On Step 3, I recall my only choices were No device or the drive itself. There was no Unassigned. Am I selecting No device? Does it matter that the drive hasn't actually failed? I have several blank 1TB disks that I could replace an active 1TB disk with. I will retain the original disk (with its data) until I have confirmed the rebuild is complete.
  4. I have a SAS disk that was part of the array for a while, with about 1TB of data on it that I want to copy off. I suspect it may be failing; I ran into a complicated scenario with it and ended up removing it from the array, and I have since rebuilt parity. I would rather not add it back to the array. What are my options? Should I run a filesystem check outside of the array first? Thanks in advance.
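A read-only check outside the array could be sketched like this (assumptions: /dev/sdX1 is a placeholder for the disk's partition, /mnt/rescue is an arbitrary mount point, and /mnt/disk5/recovered/ is a placeholder destination on the array):

```shell
# Sketch: all device names and paths below are placeholders.

# Dry-run filesystem check: -n reports problems without writing to the disk.
xfs_repair -n /dev/sdX1

# Mount read-only so nothing can make the suspect disk worse.
mkdir -p /mnt/rescue
mount -o ro /dev/sdX1 /mnt/rescue

# Copy the data off, then unmount.
rsync -a /mnt/rescue/ /mnt/disk5/recovered/
umount /mnt/rescue
```

Keeping the mount read-only means the copy can be retried safely even if the disk is deteriorating.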
  5. Hi jnheinz, well, gracefully isn't possible. I'd do `killall rsync` then `killall unbalance`. You'd be left with a partially copied folder. The log will tell you which folders were copied and which one was in progress. Then you'd have to do some manual tending after (delete the partially copied folder on the target disk, maybe?) Ok, thanks. I will probably hold off. Thank you for this plug-in, it has been very helpful. Unrelated question, I'll lay out the scenario to see if this is something unBALANCE can handle. I have a share called TV shows that is r
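The manual tending mentioned above might look like this (a sketch; the path is a placeholder, and in practice you would take the in-progress folder name from the unBALANCE log; a scratch directory stands in for the target disk here so the commands are safe to try):

```shell
# Sketch: the path is a placeholder; in practice you would take the folder name
# from the unBALANCE log. A scratch directory stands in for the target disk.
TARGET="/tmp/unbalance-demo/Partially.Copied.Folder"
mkdir -p "$TARGET"        # stand-in for the half-copied folder on the target disk

ls -ld "$TARGET"          # sanity-check exactly what you are about to delete
rm -rf "$TARGET"          # remove the partial copy; the source data is untouched
```

Because unBALANCE copies before it deletes, the source copy of the in-progress folder is still intact, so removing the partial target copy loses nothing.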
  6. Is there a way to gracefully terminate an unBALANCE job in progress?
  7. Thank you for this docker, works great thus far.
  8. Dumb question: if I change the Included Disks from All Disks to Disk 20 on a share that is currently spread across Disks 1, 13, & 22, will only new data go to Disk 20, with the old data on Disks 1, 13 & 22 still accessible? It doesn't actually move all the data to Disk 20, does it? Just talking about the settings of a Share. Thank you for this plug-in, it helps me move stuff around.
  9. I couldn't tell you how many times I have struggled trying to figure out which VM I left an open folder/file in, or which SSH session I accidentally left sitting at the pwd of a mount. This would be extremely helpful to implement.
  10. I believe that's a known issue; spin-down probably won't work either. Ok. It wasn't a big deal with my one SAS disk, though I noticed it would refuse to spin down. I will have 10 of them now, so I guess I will have more heat to deal with. Is there a link to it being reported as an issue?
  11. Unrelated - Is there a reason why SAS disks can't show temperature? I've found a few other posts reporting this with no answers.
  12. This setting did the trick. SAS disks require that setting to be set to Automatic to successfully add to the array. I've only had one SAS disk prior to these, so that would explain why.
  13. Oh, this - sorry. I overlooked this. I will try this out when the parity rebuild is done.
  14. Yes, I attached my diagnostics zip above. I am running UnRAID 6.2. I don't know what other information is needed.