
trurl

Moderators
  • Posts: 44,361
  • Joined
  • Last visited
  • Days Won: 137

Everything posted by trurl

  1. All of the moderators on the forum are just users like yourself who are volunteering their time and experience to help others. A good attitude will go a long way in getting these volunteers to help you.
  2. If you've read the guidelines for a defect report, update your post to comply and I will move the thread back to Defect Reports.
  3. johnnie.black already gave you the link but I thought I would comment on this one part of your idea. User shares can be cache-only, but a cache-only disk is not possible and doesn't even make any sense.
  4. "Needs" are in the eye of the beholder! Clearly I NEED anything that I WANT ... the trick is convincing the spouse that this is true. As I've said before, she gets anything she wants, and I get anything she can't talk me out of.
  5. SMART looks OK, and you certainly don't need to preclear, but what makes you think you want to avoid rebuilding? unRAID disables a disk when a write to it fails. That failed write was used to update parity, though, so the failed write is in the array, and so is any write that may have happened after the disk was disabled. The disk should be rebuilt because its contents are now invalid, but the valid contents are in the array and can be restored by rebuilding the disk. The only other way to re-enable a disabled disk is to rebuild parity instead, so you wouldn't save any time, and you would lose any of those writes, some of which may have been critical for maintaining that disk's filesystem. Rebuild. Here is the wiki link: What do I do if I get a red X next to a hard disk?
  7. Before you go posting in Defect Reports again with some random question, please read the Guidelines for Defect Reports stickied in that subforum.
  8. Are you saying that if I were to replace a failed drive with a new one, I don't have to run a preclear? Can I simply "Verify all the Disk" on the preclear plugin to test the new drive and immediately start rebuilding? As I said, the only scenario that actually requires a clear disk is when you add a data disk to a new slot in an array that already has parity on it. This is so parity will remain valid, since a clear disk has no effect on the parity calculation. This has always been the case, going back at least as far as preclear has existed. So rebuilding a disk does not require a clear disk, but people often preclear a disk just to test it. I'm not sure about "Verify" with the plugin, since I haven't used it lately. I would think verify would either check the disk to see if the clear signature has been written to it, or possibly read the whole disk to make sure it is clear. I don't think verify is going to give you a shortcut way to test a disk.
  9. Try plugging the cruzer into another port, preferably USB2 if you have one.
  10. If you have the docker set to use host network mode, then it takes whatever ports it needs from the host. You must set the docker to use bridge network mode if you need to map it to different ports.
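To illustrate the difference, here is a sketch using a hypothetical container image named `myapp` that listens on port 8080 internally (the image name and ports are placeholders, not anything from the post above):

```shell
# Host mode: the container binds its ports directly on the host.
# No remapping is possible; the container takes whatever ports it wants.
docker run -d --network host myapp

# Bridge mode: the container's internal port 8080 is mapped to
# host port 8081, so conflicting host ports can be avoided.
docker run -d --network bridge -p 8081:8080 myapp
```

In bridge mode the host-side number on the left of `-p` can be anything free, which is what makes remapping possible.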
  11. How do you lose 2 drives if one fails? What he meant is that if you have 2 parity drives, you can have 2 disks fail simultaneously and you still lose nothing. If you have a 6TB parity drive then you can use any number of data disks* but no single data drive can be larger than 6TB. Similarly with 8TB parity, any number of data disks but none larger than 8TB. So the size of the parity drive doesn't limit how much storage you can have, it just limits how large any single drive in your storage can be. Of course there are other limits such as number of ports, number of drive bays, and number of connections allowed by your license. *Technically there is some limit to the number of disks you can assign to the array even with a PRO license. (is it still 26?)
  12. Might also be worth mentioning that recent versions of unRAID do NOT take the array offline to clear a disk. If you add a data disk to a new slot in an array that already has parity, it must be clear so parity will remain valid. It used to be that unRAID would take the array offline for this, so preclearing was invented. People still use preclear to test disks, but the clearing part isn't strictly necessary anymore, and in any case, a new data slot in a parity array is the only scenario that requires a clear disk.
  13. Frank, thank you, I'll begin looking into that. I don't mind CLI at all, thankfully, and it's nice to know it's still an option, modifications or not. I have the NerdPack plugin, and have had it, yet upon telnetting and issuing "screen": nada. So I may need to find where it is specifically. I appreciate the response bud. The NerdPack plugin allows you to choose which of its packages to load. Go to its Settings page to enable screen.
  14. A question better asked on the plex forum or google.
  15. This is definitely the correct thread for this question. If you had searched you would have found it had already been answered. See How to Search sticky linked in my sig.
  16. Okay, I have turned on the option for dated backups and to delete them after 3 days. I also deleted the old backups. From my understanding, the initial backup will take a while because I have 1.5TB of data to back up, but if I understand what you're saying correctly... after this initial backup, each dated backup will take less time? Please correct me if I am wrong. No, this is wrong. Having dated backups will make it store a complete backup folder for each date, instead of just storing the changes to the same folder.
  17. Make sure you have Notifications set up if you are going to forget it. Notifications can alert you when you have a problem, so it doesn't go unnoticed until it becomes worse. Assuming you already know how to search (see How to Search in my sig) the forum and wiki, that's about all we have. LimeTech is a very small company, and almost all the support is unpaid volunteers on this forum. Probably not going to change. We're always interested in anyone who will pitch in and edit the wiki.
  18. OK. Post screenshot of that page before you Apply.
  19. Thank you. In real life terms, it's going to take me about 2-3 days for each of these. I only have 4 disks of real data (and one newly added disk that happened to be empty). I estimate in real terms, given my time constraints, it'll be a month to do this, as I usually only have time on weekends. One thing I don't understand is, given it could take days to copy and compare, how do you ensure the "old" drive isn't being overwritten? In other words, I ran rsync yesterday morning (I couldn't check until today). How do I know that a file on the "old" drive hasn't been modified since then? Someone had recommended to run two command prompts in windows and use the dir command, which I did this morning. As of this morning, I know the same number of files and bytes are being used, but I wouldn't catch an updated file on the "old" drive if the updated file contains the same number of bytes. Or am I being too cautious? (I'm about to erase the "old" drive, but I'm just wondering.) See for example reply 12 in this thread.
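One way to catch same-size changes like the one the poster worries about is an rsync dry run that compares checksums rather than sizes and timestamps. A minimal sketch, using throwaway directories to stand in for the old and new disk mounts (in real use these would be paths like /mnt/diskN; everything here is illustrative):

```shell
# Stand-in directories for the old (src) and new (dst) disks.
src=$(mktemp -d)
dst=$(mktemp -d)
printf 'same data' > "$src/file.txt"
cp "$src/file.txt" "$dst/file.txt"

# -r recurse, -n dry run (change nothing), -c compare file contents by
# checksum, -i itemize every difference found. Lines starting with ">f"
# mark files whose contents differ; none in the report means the trees
# hold identical data, even if a file kept the same size and byte count.
rsync -rnci "$src/" "$dst/" > "$src/report.txt"
```

Because `-c` hashes file contents, this catches an updated file even when the byte count is unchanged, which a `dir` comparison cannot.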
  20. Thank you VERY much for this! I was excited to see that previously guaranteed lockup behavior wasn't causing the lockup once I set this parameter. Then all of a sudden, after repeated testing, it locked up again. Looking at the cores from the webgui dashboard page I saw that core 6 was reporting 100% (almost no activity on the other cores), and I had specified Plex to use 0-5... which leads me to the point: I had configured Crashplan to use 6&7 and enabled it after extensive testing. So in reality I'm looking at a Crashplan issue, and would have never arrived there without your assistance. I know I had lockups with Plex being the only active docker previously, but with this --cpuset-cpus variable I'm able to avoid the lockups that plagued me, as long as I don't have Crashplan running at the same time. Plex on cores 0-5 and nothing else: no lockups. Crashplan on 6-7 and Plex on 0-5: 100% on core 6 and lockup. Thanks! Maybe you need to let unRAID have a core.
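For reference, a sketch of what that core pinning looks like on the docker command line, following the suggestion above to leave a core (here core 0) free for unRAID itself. The container names and images are placeholders, not the poster's actual configuration:

```shell
# Pin the Plex container to cores 1-5, leaving core 0 for unRAID.
docker run -d --cpuset-cpus="1-5" --name plex plex-image

# Pin the CrashPlan container to cores 6-7.
docker run -d --cpuset-cpus="6-7" --name crashplan crashplan-image
```

`--cpuset-cpus` accepts ranges ("1-5") and comma-separated lists ("1,3,5"), so the split between containers can be adjusted without rebuilding anything.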
  21. You can add --delete to rsync to delete files from the target that don't exist on the source.