
itimpi

Moderators
Everything posted by itimpi

  1. Since even a single error is too many, this is probably the source of your issues. You should check that you are running the RAM without any sort of overclocking (e.g. an XMP profile) and within the maximum clock speed your motherboard/CPU combination is able to handle. It can also be worth trying with just one RAM stick.
  2. I think we need more information, such as: are you talking about writing or reading speeds? Are you doing this to a user share or directly to a disk drive? What application are you using? It might be worth looking at this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog.
  3. If the system reboots itself (as opposed to freezing) you almost certainly have a hardware error. Likely suspects tend to be the PSU struggling to handle the load, or the CPU overheating and causing thermal-related shutdowns.
  4. Are you sure you have the schedule set correctly? Perhaps a screenshot of the scheduling page might help as the syslog in the diagnostics suggests it could be wrong. Also the output of the following command will allow us to check the schedule: cat /etc/cron.d/root
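As a sketch of what to look for in that output: the entries Unraid’s scheduler writes to /etc/cron.d/root use standard five-field cron syntax. The line below is a hypothetical example for illustration, not taken from any real system:

```shell
# Hypothetical entry of the kind Unraid's scheduler writes to /etc/cron.d/root;
# the real file on your server will differ.
line='40 3 * * 0 /usr/local/sbin/mdcmd check &> /dev/null'

# Cron fields are: minute hour day-of-month month day-of-week command
set -f        # disable globbing so the literal * fields are not expanded
set -- $line
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
# prints: minute=40 hour=3 day-of-month=* month=* day-of-week=0
```

In this example the check would run at 03:40 every Sunday (day-of-week 0); a schedule that fires more often than you intended usually shows up as a `*` where a specific value was expected.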
  5. I would suggest using the Dynamix File Manager plugin. This is going to be a standard part of the 6.13 release, so it is a good idea to get used to it.
  6. I could not see any reason why the reboot happened.
  7. The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot so we can see what happened prior to the reboot. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server’s address into the Remote Server field.
  8. Yes - ZFS is much faster. You get improved performance at the expense of reduced flexibility.
  9. Those messages show you appear to have corruption on that drive. @JorgeB tends to be best on ways to fix this.
  10. I very much doubt it actually makes much difference. I think that using -P might just slow things down. Having said that, I have no experience of using -P, so it would probably be best to upgrade the RAM first.
  11. You have to do these as two separate steps and let one complete before attempting the other. I would recommend upgrading the parity drive first.
  12. Not according to your diagnostics, which show: /dev/sdl1 932G 209G 723G 23% /mnt/cache. There appear to be two 500GB SSDs configured as raid0. It depends on your use of the system. It is discussed in this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  13. The information is added to the system’s syslog.
  14. You can - but the mover information is not anonymised.
  15. You should activate mover logging and then run mover to see why. Also, it looks like the cache is 1TB, not 2TB. You also have not set a Minimum Free Space value for the cache to set the condition for starting to bypass the cache for new files.
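Once mover logging is enabled (under the Mover settings in the GUI), its activity lands in the system’s syslog and can be filtered from the command line. A minimal sketch, assuming the standard /var/log/syslog location:

```shell
# Filter mover-related lines from the live syslog; the path is the Unraid
# default, adjust it if your setup differs.
SYSLOG=${SYSLOG:-/var/log/syslog}
grep -i mover "$SYSLOG" | tail -n 50
```

The log entries show which files mover considered and why any were skipped, which is usually enough to explain why files stay on the cache.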
  16. You have over 100GB in each of the appdata, domains and system shares set to stay on the cache.
  17. You should probably use the Compute option on the Shares tab to get an idea of what is taking up all the space on the cache.
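A command-line alternative to the Compute option is a quick du over the cache mount. A sketch, assuming the standard /mnt/cache mount point:

```shell
# Per-directory usage on the cache, smallest first; adjust the path if your
# pool is named differently.
CACHE=${CACHE:-/mnt/cache}
du -h -d1 "$CACHE" 2>/dev/null | sort -h
```

The last few lines of the output point straight at the top-level folders consuming the most space.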
  18. appdata      shareUseCache="only"    # Share exists on cache
      d--a         shareUseCache="yes"     # Share exists on cache, disk1, disk2, disk3, disk4
      domains      shareUseCache="no"      # Share exists on cache
      d-------s    shareUseCache="yes"     # Share exists on disk1
      isos         shareUseCache="yes"     # Share exists on disk1, disk2
      n-------d    shareUseCache="no"      # Share exists on disk1
      system       shareUseCache="prefer"  # Share exists on cache
      W---------k  shareUseCache="no"      # Share exists on disk2
      You have the ‘system’ and ‘appdata’ shares set to always be on the cache, so you have to take that into account. In addition the ‘domains’ share has files on the cache but is not set to transfer its contents to the array.
  19. For mover to do anything you have to have the array as secondary storage and the mover direction set appropriately.
  20. It looks like xfs_repair would recover everything. I see it suggests using -P (which I have never used), so it might be worth trying without -n and adding -P.
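For reference, the usual sequence on Unraid (array started in Maintenance mode) looks like the sketch below. The device name here is an assumption: check the correct mdX device for the affected disk slot first (on recent releases it may appear as /dev/md1p1 rather than /dev/md1), and remember that dropping -n makes real changes to the disk.

```shell
# Dry run first: -n reports problems without modifying anything.
xfs_repair -n /dev/md1

# Actual repair (no -n). Only run this once you are happy with the
# dry-run output and have the array in Maintenance mode.
xfs_repair /dev/md1

# If the repair appears to stall, -P disables inode/directory block
# prefetching, which the xfs_repair man page suggests for that situation.
xfs_repair -P /dev/md1
```

Running the repair against the mdX device (rather than the raw sdX device) keeps parity in sync while the filesystem is fixed.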
  21. You will always get bad performance with that SATA card as it uses port multipliers to get all those ports, which severely limits throughput when multiple drives are simultaneously active. Cards with port multipliers are not recommended for use with Unraid for exactly that reason.
  22. This is not necessarily a reason to not lower it further! It just means that if in the future you get large files you will need to copy them directly to a disk share.
  23. I am not sure that we know this is true, as the pricing has not been disclosed yet. The only thing that we know for certain is that the equivalent of the current licences plus lifetime updates is going up in price. You could be correct, but let’s wait and see.
  24. You might want to consider following the procedure documented here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.