
Everything posted by trurl

  1. Built-in Unraid disk encryption doesn't require anything from Nerd Pack. It is possible, though, that something in Nerd Pack replaced one of the built-in Unraid libraries. What happens if you boot in SAFE mode?
  2. And you can just connect the old ReiserFS disks to that new server as Unassigned Devices and copy their data that way.
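     If it helps, a minimal sketch of that copy from the command line, assuming Unassigned Devices has mounted the old disk at /mnt/disks/old_disk and you are copying into a share named media (both names are placeholders, substitute your own):

         rsync -av /mnt/disks/old_disk/ /mnt/user/media/

     The trailing slashes matter: this copies the contents of the old disk into the share rather than creating an extra folder level.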
  3. Everything about your configuration is in the config folder on flash. A couple of things from that configuration you will not want on the new server: the license .key file from the other flash, since each flash must have its own unique key, and the super.dat file, which contains your disk assignments.
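     A minimal sketch of that copy, assuming the old flash is mounted by Unassigned Devices at /mnt/disks/old_flash (the mount point is a placeholder, adjust to your setup):

         rsync -av --exclude='*.key' --exclude='super.dat' /mnt/disks/old_flash/config/ /boot/config/

     This brings over shares, docker templates, and plugin settings while leaving the license key and disk assignments behind, so you can assign disks fresh on the new server.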
  4. The config folder on flash has everything about your configuration.
  5. Go to Settings - Docker and disable Docker. Also, delete docker.img from that same page; there are several other changes to make there later. Don't enable Docker again until instructed.
     Go to Settings - VM Manager and disable VMs. Don't enable VMs again until instructed.
     Go to each of your User Shares and set them to Use cache: Yes.
     Go to Main - Array Operation and click Move. Wait for it to finish, then post new Diagnostics.
  6. The simple thing to do would be to just get everything moved off cache, replace cache, then get everything that belongs on cache (appdata, domains, system) moved back to cache, and along the way recreate docker.img at only 20G. As for getting things moved, Mover will move cache-yes user shares from cache to array, and it will move cache-prefer user shares from array to cache. But Mover can't move open files, so to get things moved you will have to disable the Docker and VM services in Settings. Also, Mover won't move duplicates, so it is possible there will need to be some manual cleanup along the way. Now that I have explained the basic ideas and the reasons for them, I will give some steps to accomplish parts of this. We will take new Diagnostics along the way to see the results and determine what needs to be done, starting with my next post in this thread.
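     When we get to the manual cleanup, a quick way to spot duplicates is to list the same top-level folder on every disk and compare; a sketch, using the system share as an example (substitute each share you are moving):

         ls -l /mnt/cache/system /mnt/disk*/system 2>/dev/null

     Any file that shows up under more than one of those paths is a duplicate, and Mover will skip it until the extra copies are removed by hand.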
  7. Unraid does not automatically move files between array disks. Perhaps you mean it is writing new files to the second HDD. This is part of the user share "media", and this is also part of the user share "media". This is the user share "downloads", and this is part of the user share "downloads". The user shares are simply the aggregated top-level folders on the cache and array disks. For example, any top-level folder named "media" on any cache or array disk is part of the user share "media". But you say: If that is indeed what you have for the host path, I don't know why you would want to do it that way. Post your docker run command for any of these as explained at this very first link in the Docker FAQ:
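     To illustrate the aggregation with a hypothetical layout (not taken from your diagnostics):

         /mnt/cache/media/movies/a.mkv
         /mnt/disk1/media/movies/b.mkv
         /mnt/disk2/media/tv/c.mkv

     all appear together in the one user share:

         /mnt/user/media/movies/a.mkv
         /mnt/user/media/movies/b.mkv
         /mnt/user/media/tv/c.mkv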
  8. And you are running with little RAM, but it looks like you only have basic NAS functionality configured, so that might be OK. Can you add more RAM?
  9. I will not be available until later tomorrow. We can work on it then or maybe someone else will take this up.
  10. Actually, the way you have listed these mappings is unclear. You don't specify the absolute path /downloads for qbtorrent; instead you have the relative path downloads. I think you must not really have it that way, since the docker run command would error out before even getting qbtorrent started. The best way to give us a complete and unambiguous idea of how you have a container configured is to post your docker run command, as explained at this very first link in the Docker FAQ:
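     For reference, a host volume mapping in a docker run command pairs an absolute host path with an absolute container path; a minimal sketch (the image name and host path are placeholders, not your actual configuration):

         docker run -d --name=qbtorrent \
           -v /mnt/user/downloads:/downloads \
           some/qbtorrent-image

     A bare name on the host side, such as -v downloads:/downloads, would be treated by docker as a named volume rather than a folder on your server, and a relative path like ./downloads is rejected outright, so a listing like that can't be what the container is actually running with.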
  11. The applications only know about the container paths and have no idea what the corresponding host paths are. In your case, qbtorrent tells sonarr to look in /downloads, and sonarr doesn't know anything about a folder named /downloads.
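     The usual fix is to give both containers the same host path mapped to the same container path; a sketch, assuming your downloads live at /mnt/user/downloads (substitute your actual share):

         qbtorrent: -v /mnt/user/downloads:/downloads
         sonarr:    -v /mnt/user/downloads:/downloads

     Then when qbtorrent reports a completed download at /downloads/somefile, sonarr resolves /downloads to the same host folder and can actually find it.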
  12. Parity only allows you to recover a missing or disabled disk. There are lots of more common ways to lose data, including user error. Many of us don't try to keep complete backups of everything on our large capacity servers. My Unraid is the backup for other devices on my network, and I have offsite backups of anything important and irreplaceable. Everyone has to decide these things for themselves. Data loss is always possible, even with parity and even when everything is working well. Sounds like you have been able to recover so far, and that is the way things often work unless the user makes some mistake when trying to recover, such as formatting an unmountable disk and then expecting to rebuild it from parity, even though they were warned.
  13. Why do you have 500G allocated to docker.img? 20G should be more than enough and if it grows beyond that you have something misconfigured. Why are your dockers and VMs not configured to stay on cache (appdata, domains, system shares)? Docker/VM performance will be impacted by slower parity updates and will keep array disks spinning since they will have open files. Not sure why you would care about your "easy" way of upgrading cache since it isn't clear you have anything important on cache anyway.
  14. Do you have backups of anything important and irreplaceable? Parity isn't a substitute for a backup plan.
  15. Why have you installed so many (all?) packages from Nerd Tools? I recommend not installing anything that you don't need (and may not even know what it is).
  16. Each of those different numbers preceded by ata is referring to a different port, so the controller is the suspect. I think the idea is that some manufacturers don't update their linux drivers.
  17. The contents of an unmountable disk, whether or not it is emulated, are not accessible, since it is not mounted. Perhaps you meant some files in your user shares were still accessible. Syslog is in RAM and resets on reboot. It is possible to have syslog written somewhere: This may be the root of your issues. You may have viewed your syslog and seen these multiple connection problems:

     Aug 16 09:56:09 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
     Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
     Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
     Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1

     I would still like to see this:
  18. to my questions without having to ask them, possibly requiring multiple posts that might even take a while since I might go to bed soon. Cache pools technically don't have parity disks. What filesystem is your cache now?
  19. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  20. Did you get the parity upgrade completed? Why do you have 50G allocated to docker.img? 20G should be more than enough, and if it grows beyond that you have an application misconfigured. See here for an idea about your macvlan call traces:
  21. Depends on the answer to some questions. The easy way to get the answers is to Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post.
  22. I think it might be worthwhile to explain a few things since your description has some slight misunderstandings.

      Not entirely sure if you are using the word "disabled" correctly. A disabled disk is marked with a red X. Unraid will only disable as many drives as you have parity disks. So with single parity at most 1 drive can be disabled, and with dual parity at most 2. Note that parity drive(s) are included in this count, so with dual parity you could have both parity disks disabled, one parity and one data, or 2 data, but still no more than 2 disks in total. A disabled disk can be rebuilt from the parity calculation by reading all remaining disks to calculate its data. If you unassign a disk in the array it might also be considered disabled, if there aren't already too many disks disabled. Any number of disks can be missing and/or unmountable, but these are not the same as disabled.

      Unraid disables a disk when a write to it fails. It never disables a disk simply due to a read error. A read error can cause Unraid to get the correct data from the parity calculation and attempt to write it back to the disk, and if that write fails the disk is disabled.

      A disabled or missing disk can be emulated from the parity calculation, but this is completely independent of an unrecognized filesystem. A filesystem can be corrupt without the disk being disabled. A disk can be disabled while the filesystem of the emulated disk is fine; in fact, reads and writes of the emulated disk can continue even though the disabled disk itself is never used. All the other disks are read and the data for the emulated disk is calculated, and if a write is involved, parity is updated to emulate the write, but the disabled disk is not touched. And it is possible for a disk to be disabled and its emulated filesystem also be corrupt.

      It looks like parity2 is disabled, and as you say, unassigned. Was parity2 disabled before you unassigned it? I didn't see it being disabled in syslog. From syslog it seems the unassigned disk with serial ending 85G6 was parity2, and SMART for that disk looks OK. Disk9 is unmountable but not disabled; SMART for disk9 is also OK.

      Please post a screenshot of Main - Array Devices to help confirm my understanding of your situation.
  23. If your "appdata" host path is currently /mnt/user/apps, then you could move that "appdata" to /mnt/user/appdata if you wanted to. Then you would have to change the host path to /mnt/user/appdata, but as long as you didn't change the container path (for example, /config), the container will never know the difference. Do you understand docker volume mapping?
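      A sketch of the before and after in docker run terms (the container path /config is just an example, as above):

          before: -v /mnt/user/apps:/config
          after:  -v /mnt/user/appdata:/config

      The application inside the container sees /config either way, so nothing inside the container needs to change.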