trurl

Everything posted by trurl

  1. You might want to consider using the latest beta since you are setting up a new SSD. It has a new partition scheme for SSDs that makes them more efficient, as well as support for multiple fast (cache) pools. I have a fast pool consisting of 1 NVMe for my dockers and VMs, and another with 2 mirrored SSDs for caching.
  2. Instead of odd, I would say it was expected.
  3. There are some very nice new features of the latest beta. Multiple fast (cache) pools for SSDs, more efficient partitioning of SSDs. I am running it on my main server and seldom hesitate to run betas because I have a good (enough) backup plan.
  4. It will be faster if you don't install parity until the transfer is complete. Even then it may not be that fast if you have another computer in the middle.
  5. Builtin Unraid disk encryption doesn't require anything from Nerd Pack. It is possible, though, that something in Nerd Pack replaced some builtin Unraid library. What happens if you boot in SAFE mode?
  6. And you can just connect the old ReiserFS disks to that new server as Unassigned Devices and copy their data that way.
  7. Everything about your configuration is in the config folder on flash. A couple of things from that configuration you will not want on the new server are the license .key file from that other flash, since each flash must have its own unique key, and the super.dat file, which contains your disk assignments.
  8. The config folder on flash has everything about your configuration.
  9. Go to Settings - Docker and disable Docker. Also, delete docker.img from that same page; there are several other changes to make here later. Don't enable Docker again until instructed. Go to Settings - VM Manager and disable VMs. Don't enable VMs again until instructed. Go to each of your User Shares and set them to Use cache: Yes. Go to Main - Array Operation and click Move. Wait for it to finish, then post new Diagnostics.
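Once Mover finishes, it is worth confirming the pool is actually empty before replacing it. A minimal sketch, assuming the cache pool is mounted at the usual Unraid location /mnt/cache (CACHE_ROOT is just an illustrative variable):

```shell
# List how many files are still left on the cache pool after Mover has run.
# CACHE_ROOT defaults to /mnt/cache, the usual Unraid mount point.
CACHE_ROOT="${CACHE_ROOT:-/mnt/cache}"
remaining=$(find "$CACHE_ROOT" -type f 2>/dev/null | wc -l | tr -d ' ')
echo "Files still on cache: $remaining"
# Anything left behind is usually open files or duplicates, which Mover
# will not touch -- those need manual cleanup.
```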
  10. The simple thing to do would be to just get everything moved from cache, replace cache, then get everything that belongs on cache (appdata, domains, system) moved back to cache. And along the way recreate docker.img at only 20G. As for getting things moved, Mover will move cache-yes user shares from cache to array. And it will move cache-prefer user shares from array to cache. But, mover can't move open files, so to get things moved, you will have to disable Docker and VM services in Settings. Also, mover won't move duplicates, so it is possible there will need to be some manual cleanup along the way. Now that I have explained the basic ideas and the reasons for them, I will give some steps to accomplish parts of this. We will take new diagnostics along the way to see the results and determine what needs to be done. Starting in my next post in this thread.
  11. Unraid does not automatically move files between array disks. Perhaps you mean it is writing new files to the second HDD. This is part of the user share "media". And this is also part of the user share "media". This is the user share "downloads". And this is part of the user share "downloads". The user shares are simply the aggregated top level folders on cache and array disks. For example, any top level folder named "media" on any cache or array disk is part of the user share "media". But you say: If that is indeed what you have for the host path, I don't know why you would want to do it that way. Post your docker run command for any of these as explained at this very first link in the Docker FAQ:
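The aggregation idea can be simulated with plain directories. This is only a sketch using temp folders, not real Unraid disk mounts; the disk names and files are made up:

```shell
# Simulate disk1, disk2, and cache each holding a top-level "media" folder.
root=$(mktemp -d)
mkdir -p "$root/disk1/media" "$root/disk2/media" "$root/cache/media"
touch "$root/disk1/media/movie1.mkv" \
      "$root/disk2/media/movie2.mkv" \
      "$root/cache/media/movie3.mkv"

# The user share /mnt/user/media would present the union of all three
# same-named top-level folders as a single view:
merged=$(for f in "$root"/*/media/*; do basename "$f"; done | sort)
echo "$merged"
```

The real user-share filesystem also decides which disk receives new writes, but the read view is just this kind of merge.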
  12. And you are running with little RAM. But looks like you only have basic NAS functionality configured so that might be OK. Can you add more RAM?
  13. I will not be available until later tomorrow. We can work on it then or maybe someone else will take this up.
  14. Actually, the way you have listed these mappings is unclear. You don't specify the absolute path /downloads for qbtorrent; instead you have the relative path downloads. I suspect you don't really have it that way, since the docker run command would error out before even getting qbtorrent started. The best way to give us a complete and unambiguous idea of how you have a container configured is to post your docker run command as explained at this very first link in the Docker FAQ:
  15. The applications only know about the container paths and have no idea what the corresponding host paths are. In your case, qbtorrent tells sonarr to look in /downloads, and sonarr doesn't know anything about a folder named /downloads.
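A minimal configuration sketch of mappings that would work, assuming the linuxserver.io images; the container names and host path are illustrative, not taken from the poster's actual setup:

```shell
# Both containers map the SAME host folder to the SAME container path,
# so a path like /downloads/file.mkv reported by one container is also
# valid inside the other.
docker run -d --name qbittorrent \
  -v /mnt/user/downloads:/downloads \
  lscr.io/linuxserver/qbittorrent

docker run -d --name sonarr \
  -v /mnt/user/downloads:/downloads \
  lscr.io/linuxserver/sonarr
```

The key design point is that applications only ever exchange container paths, so those paths must resolve to the same host folder in every container involved.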
  16. Parity only allows you to recover a missing or disabled disk. There are lots of more common ways to lose data, including user error. Many of us don't try to keep complete backups of everything on our large capacity servers. My Unraid is the backup for other devices on my network, and I have offsite backups of anything important and irreplaceable. Everyone has to decide these things for themselves. Data loss is always possible even with parity and even when everything is working well. Sounds like you have been able to recover so far, and that is the way things often work unless the user makes some mistake when trying to recover. Such as formatting an unmountable disk and then expecting to rebuild it from parity. Even though they were warned.
  17. Why do you have 500G allocated to docker.img? 20G should be more than enough and if it grows beyond that you have something misconfigured. Why are your dockers and VMs not configured to stay on cache (appdata, domains, system shares)? Docker/VM performance will be impacted by slower parity updates and will keep array disks spinning since they will have open files. Not sure why you would care about your "easy" way of upgrading cache since it isn't clear you have anything important on cache anyway.
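To see whether 20G really would be enough, actual usage can be checked from a terminal. A hedged sketch; the docker.img path shown is the default Unraid location and may differ if the system share is configured elsewhere:

```shell
# Show the allocated size of the docker image file (default Unraid path;
# yours may differ).
ls -lh /mnt/user/system/docker/docker.img

# Show how much space images, containers, and volumes actually consume
# inside it (requires the Docker service to be running).
docker system df
```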
  18. Do you have backups of anything important and irreplaceable? Parity isn't a substitute for a backup plan.
  19. Why have you installed so many (all?) packages from Nerd Tools? I recommend not installing anything that you don't need (and may not even know what it is).
  20. Each of those different numbers preceded by ata is referring to a different port, so the controller is the suspect. I think the idea is that some manufacturers don't update their linux drivers.
  21. The contents of an unmountable disk, whether or not it is emulated, are not accessible, since it is not mounted. Perhaps you meant some files in your user shares were still accessible. Syslog is in RAM and resets on reboot. It is possible to have syslog written somewhere: This may be the root of your issues. You may have viewed your syslog and seen these multiple connection problems:
     Aug 16 09:56:09 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
     Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
     Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:33 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: cmd error handler
     Aug 16 09:56:33 Tower kernel: sas: ata9: end_device-9:0: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata10: end_device-9:1: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata11: end_device-9:2: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata12: end_device-9:3: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata13: end_device-9:4: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: ata14: end_device-9:5: dev error handler
     Aug 16 09:56:33 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     Aug 16 09:56:35 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: cmd error handler
     Aug 16 09:56:35 Tower kernel: sas: ata15: end_device-10:0: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata16: end_device-10:1: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata17: end_device-10:2: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata18: end_device-10:3: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata19: end_device-10:4: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata20: end_device-10:5: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: ata21: end_device-10:6: dev error handler
     Aug 16 09:56:35 Tower kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
     I would still like to see this:
  22. That would give answers to my questions without my having to ask them, possibly requiring multiple posts that might even take a while since I might go to bed soon. Cache pools technically don't have parity disks. What filesystem is your cache now?
  23. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
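If the webUI can't be reached, the same ZIP can (to my knowledge, on recent Unraid versions) also be generated from a terminal session; it is saved to the logs folder on the flash drive:

```shell
# Generates the diagnostics ZIP and writes it to /boot/logs on the flash
# drive, from where it can be copied off and attached to a forum post.
diagnostics
```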