Everything posted by trurl

  1. The applications only know about the container paths and have no idea what the corresponding host paths are. In your case, qBittorrent tells Sonarr to look in /downloads, and Sonarr doesn't know anything about a folder named /downloads.
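     A minimal sketch of the kind of mismatch described above; the host path, container paths, and image names are made up for illustration:

        # qBittorrent maps the host download folder to /downloads,
        # but Sonarr maps the same host folder to /data:
        docker run -d --name qbittorrent -v /mnt/user/downloads:/downloads hypothetical/qbittorrent
        docker run -d --name sonarr      -v /mnt/user/downloads:/data      hypothetical/sonarr
        # qBittorrent reports /downloads/some.file to Sonarr, but inside the
        # Sonarr container that same file is /data/some.file, so the import fails.
        # Fix: give both containers the same container path for the shared folder.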
  2. Parity only allows you to recover a missing or disabled disk. There are lots of more common ways to lose data, including user error. Many of us don't try to keep complete backups of everything on our large capacity servers. My Unraid is the backup for other devices on my network, and I have offsite backups of anything important and irreplaceable. Everyone has to decide these things for themselves. Data loss is always possible, even with parity and even when everything is working well. Sounds like you have been able to recover so far, and that is the way things often work unless the user makes some mistake when trying to recover, such as formatting an unmountable disk and then expecting to rebuild it from parity, even though they were warned.
  3. Why do you have 500G allocated to docker.img? 20G should be more than enough and if it grows beyond that you have something misconfigured. Why are your dockers and VMs not configured to stay on cache (appdata, domains, system shares)? Docker/VM performance will be impacted by slower parity updates and will keep array disks spinning since they will have open files. Not sure why you would care about your "easy" way of upgrading cache since it isn't clear you have anything important on cache anyway.
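     If you want to see where the space inside docker.img is actually going, a couple of stock Docker commands help; just a sketch, assuming console access to the server:

        # per-container size of the writable layer (the usual culprit when an
        # app is writing downloads or transcodes inside the image):
        docker ps -s
        # overall usage of images, containers, and volumes:
        docker system df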
  4. Do you have backups of anything important and irreplaceable? Parity isn't a substitute for a backup plan.
  5. Why have you installed so many (all?) packages from Nerd Tools? I recommend not installing anything that you don't need (and may not even know what it is).
  6. Each of those different numbers preceded by ata refers to a different port, so the controller is the suspect. I think the idea is that some manufacturers don't update their Linux drivers.
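     One way to see which actual device an ataN number corresponds to; a sketch that assumes the usual sysfs layout, with ata11 just one example number from this log:

        # the sysfs path of each block device includes its ata port:
        ls -l /sys/block/sd* | grep ata11
        # or search the kernel log for errors on that port:
        dmesg | grep -w ata11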
  7. The contents of an unmountable disk, whether or not it is emulated, are not accessible, since it is not mounted. Perhaps you meant some files in your user shares were still accessible. Syslog is in RAM and resets on reboot. It is possible to have syslog written somewhere: This may be the root of your issues. You may have viewed your syslog and seen these multiple connection problems:

        Aug 16 09:56:09 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
        Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
        Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
        Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
        Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
        Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
        Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
        Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
        Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
        Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
        Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
        Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1

     I would still like to see this:
  8. Posting your diagnostics would give the answers to my questions without having to ask them, which could require multiple posts and might even take a while since I might go to bed soon. Cache pools technically don't have parity disks. What filesystem is your cache now?
  9. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
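     If the webGUI isn't reachable, the same ZIP can be generated from the console; a sketch, assuming the diagnostics command present in current Unraid releases:

        diagnostics      # writes the ZIP to the logs folder on flash
        ls /boot/logs    # confirm it is there before grabbing it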
  10. Did you get the parity upgrade completed? Why do you have 50G allocated to docker.img? 20G should be more than enough, and if it grows beyond that you have an application misconfigured. See here for an idea about your macvlan call traces:
  11. Depends on the answer to some questions. The easy way to get the answers is to go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post.
  12. I think it might be worthwhile to explain a few things since your description has some slight misunderstandings.

      I'm not entirely sure you are using the word "disabled" correctly. A disabled disk is marked with a red X. Unraid will only disable as many drives as you have parity disks. So with single parity at most 1 drive can be disabled, and with dual parity at most 2. Note that the parity drive(s) are included in this count: with dual parity you could have both parity disks disabled, one parity and one data, or 2 data, but still no more than 2 disks total.

      A disabled disk can be rebuilt from the parity calculation by reading all remaining disks to calculate the data for the disabled disk. If you unassign a disk in the array it may also be considered disabled, provided there aren't already too many disks disabled. Any number of disks can be missing and/or unmountable, but these are not the same as disabled.

      Unraid disables a disk when a write to it fails. A read error can cause Unraid to get the correct data from the parity calculation and attempt to write it back to the disk, and if that write fails the disk is disabled. It never disables a disk simply due to a read error.

      A disabled or missing disk can be emulated from the parity calculation, but this is completely independent of an unrecognized filesystem. A filesystem can be corrupt without the disk being disabled. A disk can be disabled while the filesystem of the emulated disk is fine; reads and writes of the emulated disk can continue even though the disabled disk itself is never used. All the other disks are read and the data for the emulated disk is calculated, and if a write is involved, parity is updated to emulate the write. And a disk can be disabled with its emulated filesystem also corrupt. (A toy example of this emulation arithmetic follows this post.)

      It looks like parity2 is disabled and, as you say, unassigned. Was parity2 disabled before you unassigned it? I didn't see it being disabled in syslog. From syslog it seems the unassigned disk with serial ending 85G6 was parity2; SMART for that disk looks OK. Disk9 is unmountable but not disabled, and SMART for disk9 is also OK. Please post a screenshot of Main - Array Devices to help confirm my understanding of your situation.
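     The emulation arithmetic can be shown with a toy example; this is just a sketch of single (XOR) parity with one-byte "disks", not Unraid's actual code:

        # two data "disks" and their parity byte:
        d1=$((0xA5)); d2=$((0x3C))
        p=$((d1 ^ d2))                                # parity = XOR of all data disks
        # if disk1 is disabled, reading all remaining disks recovers its data:
        printf 'emulated d1 = 0x%02X\n' $((p ^ d2))   # prints 0xA5
        # a write to the emulated disk just updates parity the same way.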
  13. If your "appdata" host path is currently /mnt/user/apps, then you could move that "appdata" to /mnt/user/appdata if you wanted to. Then you would have to change the host path to /mnt/user/appdata, but as long as you didn't change the container path, for example, /config, then the container will never know the difference. Do you understand docker volume mapping?
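     A before/after sketch of that move; the share and container names are made up:

        # before: host path /mnt/user/apps, container path /config
        docker run -d --name someapp -v /mnt/user/apps/someapp:/config hypothetical/someapp
        # after moving the share to /mnt/user/appdata, only the host side changes:
        docker run -d --name someapp -v /mnt/user/appdata/someapp:/config hypothetical/someapp
        # inside the container it is /config either way, so the app never notices.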
  14. No point now. Did the correcting parity check find any sync errors?
  15. Should be on the flash drive in the logs folder.
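     For example, the flash drive is mounted at /boot on a running server, so from the console:

        ls /boot/logs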
  16. Still looks like a connection issue. There was probably nothing even wrong with the original drive you replaced. Check all connections, SATA and power, both ends, including any splitters. Make sure you don't bundle your SATA cables. Make sure there is enough slack in the cables so the connector can sit square on the connection with nothing pulling on it.
  17. It's not clear from your description that you need to replace parity at all. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  18. For your dockers, if you move or rename things on the host, then you will have to change the host paths of course, but if you keep the container paths the same then the containers will not know the difference and should continue to work as before.
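     A quick way to confirm a container still sees its files after a host-side move; the container name here is made up:

        # the container path is unchanged, so this should list the same
        # files it did before the host path was changed:
        docker exec someapp ls /config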
  19. Do you mean you can't get to your flash backup?
  20. This is pretty much what happened to me recently. And I had a UPS. My best guess was something with the motherboard, since it would boot to the point where normally the text on the screen would switch to higher resolution, but now it would just hang at that point. And I knew there was nothing wrong with Unraid or the flash drive or even the memory, since that all worked when I moved it to the old mobo/CPU I still had in storage. I took the opportunity to upgrade mobo/CPU/RAM. Here is my upgrade thread if interested: