Everything posted by trurl

  1. Parity swap does not require any more devices than you already have connected. The original "failed" disk would be removed and kept as a backup in case anything goes wrong; probably most of its files can still be read if necessary. A disabled disk is no longer in sync with parity. The only options to get it enabled again are to rebuild it, or New Config and rebuild parity instead. Rebuilding the disabled disk is the recommended procedure in most cases (and in your case). When a write to a disk fails, Unraid disables it and won't use it again until rebuilt. But the data for that failed write, and for any subsequent writes to the disabled (and emulated) disk, is still used to update parity, so the failed write and anything written after it can be recovered from the parity calculation. The original, physical disk is out of sync and no longer has valid contents, since it is missing any writes that happened after the failure. The valid contents are in the array and will be rebuilt from the parity calculation.
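     To illustrate why the emulated contents stay valid (this is only a conceptual sketch; single parity is, roughly speaking, an XOR across all the data disks):

        parity        = disk1 XOR disk2 XOR ... XOR diskN    (updated on every write, even writes to a disabled disk)
        missing diskK = parity XOR (XOR of all remaining data disks)    (what emulation and rebuild compute)

     Every write to the emulated disk updates parity, so the second line always reflects the current data, while the physical disk that failed just falls further behind.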
  2. Not sure why it would say array turned good when you still had a disabled disk. You cannot rebuild the disk to a disk larger than parity, but you can do the parity swap procedure. Everything should be recovered when you rebuild. If you want to backup anything before you start you can copy files from the emulated disk to another computer. I never recommend trying to shuffle files to other disks in the array when you are already emulating a disk, since all disks must be read to get the contents of the emulated disk, and moving or copying to the array is just more activity when you are already without protection.
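     If you do want a copy first, something like this from the Unraid console is one way (the disk number, destination host, and paths are just placeholders for whatever applies to you, and it assumes the other machine accepts ssh/rsync; copying over the network from the other computer works just as well):

        rsync -av /mnt/disk3/ user@backup-pc:/backups/disk3/

     Reading the emulated disk still works; it just pulls the data from the parity calculation, which is part of why I'd avoid piling extra copy activity onto the array itself.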
  3. Perhaps this was only a typo, but /mnt/usr/disks/... is not a valid path to any storage.
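     For reference, storage paths on Unraid look like this (the share name is just an example):

        /mnt/user/Media     user share, spans the array (and cache, depending on share settings)
        /mnt/disk1/Media    that share's files as they sit on a specific array disk
        /mnt/cache/Media    that share's files as they sit on the cache pool

     There is no /mnt/usr, and no /mnt/usr/disks.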
  4. Does it work booting Unraid on your other server?
  5. I thought I should go ahead and put this out since you are considering buying a new disk. I think perhaps you are talking about "short stroking" the drive. I remember some discussion about this a very long time ago for some reason, but I don't recommend it and I'm not entirely sure it applies here. There is already a method for replacing a disk with a disk larger than parity. What it actually does is copy parity to the larger disk, then use the original parity disk for rebuilding the failed data disk. This will give you something to read while we wait on diagnostics, the Parity Swap procedure: https://wiki.unraid.net/The_parity_swap_procedure
  6. Unraid only disables a disk for write errors. It is possible for a read error to cause Unraid to get the data for the failed read from the parity calculation and try to write it back to the disk. If that write-back fails then the disk would be disabled. It is possible something else is the cause. In any case it is always safer to rebuild to another disk if you have one or can afford it, and save the original in case there are issues during rebuild. This is confusing, I suspect because you are confused about what "format" does. You must let Unraid format any disk it will use in the array, and you must NEVER format a disk you are rebuilding. And you must NEVER format any disk that has data on it you want to keep. I just don't see how the idea of "format" figures into this at all. See if you can clarify what you are thinking in that part. Don't reboot. Just get us Diagnostics, and we can consider how to proceed from there. Maybe best if you don't do anything else without further advice since some of the other ideas you have may not be on the right track. 👍
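     You can get them from Tools - Diagnostics in the webUI and attach the zip to your next post. If you prefer the command line, running

        diagnostics

     from the console should save the same zip to the logs folder on the flash drive (going from memory on the exact location).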
  7. You can edit your first post in this thread and put (Solved) in the title.
  8. Your cache pool isn't mounting; that is likely the cause of your user shares problem. Where are you seeing this? It is normal for Linux to use all available memory for I/O buffering; probably that is what you are seeing.
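     For example, something like this from the console shows it (the numbers here are made up):

        free -g
                       total   used   free  shared  buff/cache  available
        Mem:              31      4      1       0          26          27

     A large buff/cache figure is normal, and that memory is handed back to applications whenever they ask for it.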
  9. trurl

    Upgrading server

    Filesystem      Size  Used  Avail  Use%  Mounted on
    /dev/md1        1.9T  1.9T    12G  100%  /mnt/disk1
    /dev/md2        1.9T  1.9T    18G  100%  /mnt/disk2
    /dev/md3        1.9T  1.9T   1.1G  100%  /mnt/disk3
    /dev/md4        1.9T  1.9T   1.3G  100%  /mnt/disk4
    /dev/md5        2.8T  2.7T    36G   99%  /mnt/disk5
    /dev/md6        2.8T  2.8T    17G  100%  /mnt/disk6
    /dev/md7        5.5T   40G   5.5T    1%  /mnt/disk7
  10. trurl

    Upgrading server

    Most of your disks are full. As noted, if your split level says that file belongs with similar files, then Unraid will try to put it on the same, possibly full, disk. Also, your diagnostics show 2 shares anonymized as F---s; do you have 2 shares like that?
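    You can check the per-disk situation yourself from the console with:

        df -h /mnt/disk*

    which gives output like the listing quoted above.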
  11. Study this: https://wiki.unraid.net/The_parity_swap_procedure I think simply unassigning parity2 and starting the array will get you back to where you can begin this procedure. Is that correct @johnnie.black or will New Config / Trust Parity be needed?
  12. As mentioned, it is possible to "parity swap". That would get the new larger disk into the parity slot and the original parity disk into the disk2 slot.
  13. If you're using the same flash drive, and using the same disks, then everything is just as it was as far as your Unraid configuration is concerned. No need or benefit in changing anything. No need to restore appdata since it is already on the existing disks.
  14. No, the backup will not be the reason for the appdata share on the array, since the backup is not (or at least should not be) in the appdata share. And Mover ignores cache-only shares, so it won't move them to cache. And Mover can't move open files either. So you would have to disable the docker service, set appdata to cache-prefer, run mover, then enable the docker service again. Unraid does not copy parity except in the case of the "parity swap" procedure, which is not what you were doing. Parity2 is calculated with a totally different algorithm, so it is not interchangeable with parity. If parity and parity2 were the same there wouldn't be any extra redundancy for any of your data drives; the extra parity would only be useful if the other parity failed. And if you notice in that first screenshot you linked, it was reading all the other disks in order to calculate parity2 and write it. Speaking of "parity swap", that might be the way forward if you want to replace parity with that larger disk and disk2 with the old parity.
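     Roughly, from the console, that sequence looks like this (the docker rc script path is from memory; the share setting itself is changed in the webUI under Shares - appdata):

        /etc/rc.d/rc.docker stop     # stop the docker service so appdata files aren't held open
        # change appdata to Use cache: Prefer in the webUI, then:
        mover                        # run mover manually and wait for it to finish
        /etc/rc.d/rc.docker start    # turn the docker service back on

     Doing the same thing from Settings - Docker and the Move button on Main accomplishes the same result.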
  15. Seems to me you don't want to change anything, not even a new docker. Why would you? In what way is your new Unraid configuration going to be different than the old one? Even if you needed to setup docker again the templates on your flash would allow you to simply do Previous Apps. If it is simply a matter of replacing hardware and reusing the drives then nothing changes as far as your configuration is concerned. If it is copying all shares to new drives then nothing changes as far as plex is concerned assuming you haven't specified drive paths in your mappings. If you intend your new setup to be much different, such as share names, etc. then of course your old plex library won't know anything about those changes. Maybe you need to elaborate on this "new setup".
  16. Quick skim over SMART for all disks, didn't notice any issues. SMART for all disks is included in diagnostics, so no need to post any separately. Do any of them have SMART warnings on the Dashboard other than CRC? CRC errors typically indicate connection issues and can be acknowledged by clicking on the warning. Those are read errors on disk2; likely you disturbed a connection when installing drives. You must always double check all connections, power and SATA, including power splitters, whenever you are mucking about in the case. Also noticed your appdata share is cache-only, but it has files on the array. Probably that isn't what you intend; we can discuss how to fix that later. Do you intend to have dual parity? It doesn't sound like that is what you want. Parity and parity2 are not interchangeable, so even if you succeed in your approach you will not have a parity disk, you will have a parity2 disk. That is not necessarily wrong, but maybe not what you had in mind. It would have been much simpler if you had just replaced parity and rebuilt it onto the larger disk, and it is still possible to do it that simpler (and more standard) way if you want to. Stop the array, go to Settings - Disk Settings and disable autostart. Then shutdown, double check all connections, and reboot. Then tell us what you want to do.
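     If you want to eyeball disk2's SMART yourself, from the console (replace sdX with whatever device disk2 shows as on Main):

        smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect|crc'

     Reallocated, pending, and uncorrectable counts are the ones that matter for drive health; the CRC counter only ever goes up, so an old nonzero value there isn't by itself a concern once the connection is fixed.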
  17. This is a very old thread. I don't know why anyone is using it when the Support thread for this plugin can be easily found. Go to the Apps page in the Unraid webUI (assuming you have installed Community Applications), search for OpenVPN. Several results will come up. You can go directly to the support thread for anything on the Apps page by clicking on the Support Thread (?) icon for the app.
  18. There is a whole "sticky" thread pinned near the top of this same subforum about Windows issues. Have you looked there?
  19. Do you mean you can't access any share on your server from your PC? If that is what you mean then the rest of the "context" is unnecessary. Accessing shares on your server from PCs on your network is indeed "remedial" and I'm sure by far the most common usage. Unraid is a NAS after all. What OS is your PC?
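     Assuming the PC is Windows, the quickest test is to point Explorer (or the Run box) straight at the server; the name and IP below are placeholders for your own:

        \\TOWER\sharename
        \\192.168.1.100\sharename

     If the IP form works but the name doesn't, it's a name-resolution problem rather than a share problem, which narrows things down a lot.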
  20. @Squid Is it picking up something from the syslog when it says this? If so maybe I could find it and get a timestamp to see if it correlates to something. Nevermind, I found it.
  21. What did you discover? Maybe someone else will find this thread and want to know.
  22. You still have different sized disks in your btrfs raid1 cache pool. What does Unraid report for the SIZE of cache in Main - Cache Devices? The system doesn't "assign" a size to a share, it just calculates how much space a share has based on its included disks. There is no size allocation. Did you read the links I gave earlier? It is probably best if all the shares you create are left as cache-no (the default) since you don't have much capacity for cache anyway. As I said before
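     You can see how btrfs is actually laying data out across the two mismatched devices with this from the console:

        btrfs filesystem usage /mnt/cache

     In a raid1 pool every chunk is mirrored, so usable space is limited by the smaller device, which is why the reported SIZE matters here.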