trurl

Moderators
  • Posts: 44,361
  • Joined
  • Last visited
  • Days Won: 137

Everything posted by trurl

  1. The webUI command line should work the same. Something isn't right about your results, though. On my server the prompt is root@unserver:~# where root is the user (the only user that can access the command line) and unserver is my server name. I don't know how you could even get it to say sh-5.0#. What do you get with this? ls -lah /mnt/user
  2. I have split your post into its own thread. Don't hijack the support thread of another user. It only causes confusion, and confusion can lead to disaster. Your appdata and system shares have some files on the array instead of all on cache. You want those shares all on cache and to stay on cache so your dockers and VMs will perform better, and so they won't keep array disks spinning. Do you have any VMs?
  3. Have you done anything since you posted your last diagnostics? Is the array actually started? That sh-5.0# doesn't seem right either. Are you actually at the command line of your Unraid server?
  4. Go to Shares - User Shares, click Compute All, wait for the results, then post a screenshot.
  5. Do you have any VMs? Your appdata, domains, and system shares have files on the array, and they are not set to cache-prefer or cache-only. Ideally you want all of these shares completely on cache, and staying on cache, so your dockers and VMs will perform better and won't keep your array disks spinning. Your SSD disk4 could be put in a pool with the NVMe disk that is already cache, but unfortunately both disks would have to be reformatted as btrfs to be in a pool. So you have quite a few things that aren't configured ideally. Personally, I would start over, except for the contents of your other user shares (which should wind up on the array), and maybe preserve your appdata so you can try to reuse it when you add your dockers back. It looks to me like you tried to run before you learned to walk, as the saying goes. Take things a little at a time and get each thing working well before trying to add more.
  6. Your system share still has files on array disks 1 and 2. What do you get from the command line with these?

     ls -lah /mnt/cache/system
     ls -lah /mnt/disk1/system
     ls -lah /mnt/disk2/system
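The same check generalizes to any share: loop over the cache mount and every array disk mount and list the share's directory wherever it exists. A minimal sketch of that loop, run against a throwaway sandbox rather than a live /mnt (disk numbers and paths vary per server, so the sandbox names here are stand-ins):

```shell
#!/bin/sh
# Sketch: find every mount where a share's files live, one base at a time.
# On a real Unraid server the bases would be /mnt/cache plus /mnt/disk1,
# /mnt/disk2, ... A throwaway sandbox stands in for /mnt here so the loop
# is safe to run anywhere.

SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/cache/system" "$SANDBOX/disk1/system" "$SANDBOX/disk2/other"
touch "$SANDBOX/cache/system/docker.img" "$SANDBOX/disk1/system/libvirt.img"

SHARE=system
FOUND=""
for base in "$SANDBOX/cache" "$SANDBOX"/disk*; do
    if [ -d "$base/$SHARE" ]; then
        FOUND="$FOUND $base"
        echo "== $base/$SHARE =="
        ls -lah "$base/$SHARE"
    fi
done

echo "share '$SHARE' has files on:$FOUND"
rm -rf "$SANDBOX"
```

On a real server you would substitute /mnt/cache and /mnt/disk* for the sandbox paths; any base that prints means the share has files on that device.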
  7. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  8. The fact that you have allocated 50G to your docker image makes me suspect you have had issues with your docker applications filling the docker image, so perhaps your problem has nothing to do with wireguard at all. I always recommend 20G for the docker image, and it is extremely unlikely to need even that much. If your docker image is growing beyond 20G then one or more of your applications is writing to a path that is not mapped. Making the docker image larger will not fix that problem; it will just take longer to fill and corrupt. Also, your system share is not on cache and not cache-prefer as it should be. Normally your docker image is in the system share, but I see you have yours at /mnt/user/docker.img. That is not in any user share, and it isn't clear which disk it would be on. I had mine at /mnt/cache/docker.img for a long time and eventually put it in the system share just to get more in line with the standard way of doing things. I'm not entirely sure how a file at the top level of the user shares, and so not actually part of any user share, would be handled. So maybe you should clean up your docker setup and then see if you still have problems.
  9. You have your appdata on cache now where it belongs, but unlike before, your system share has some files on the array. The best way to fix docker now is to go to Settings - Docker, disable and delete the docker image, then enable it again to recreate the docker image on cache where it belongs. Apps - Previous Apps will add your dockers back just as they were. You want your docker applications and VMs to run on cache so they perform better and don't keep array disks spinning. Do you have any VMs?
  10. Reviewing your already posted diagnostics, I think it is more likely that you had the Use cache setting incorrect. Mover never moves cache-no shares, and you had two shares with files on cache but set to cache-no. You also have your appdata as cache-yes instead of cache-prefer or cache-only. To get that moved to cache where it belongs set appdata to cache-prefer, disable the docker service, then run mover. See this FAQ for more details on the nuances of the Use cache setting: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/#comment-537383
  11. SMART for disabled Disk5 looks OK. Yes
  12. Are you sure some other device on your network doesn't have the same IP?
  13. You can edit config/domain.cfg on flash to disable VM service. Similarly for config/docker.cfg
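If you'd rather script the edit than do it by hand, something like the following works. Note that the SERVICE="enable" key name is an assumption for illustration; check the actual key names in your own config/domain.cfg and config/docker.cfg, and keep a backup before changing anything on flash. The sketch edits a throwaway copy, not the real file:

```shell
#!/bin/sh
# Sketch: flip a service flag in an Unraid .cfg file on flash.
# The key name SERVICE="enable" is an ASSUMPTION -- inspect your own
# config/domain.cfg / config/docker.cfg for the real keys first.
# Demonstrated on a throwaway copy rather than the real flash drive.

CFG=$(mktemp)
printf 'SERVICE="enable"\nIMAGE_SIZE="1"\n' > "$CFG"   # stand-in contents

cp "$CFG" "$CFG.bak"                                   # back up before editing
sed -i 's/^SERVICE="enable"/SERVICE="disable"/' "$CFG"

grep '^SERVICE=' "$CFG"                                # prints SERVICE="disable"
```

After editing the real file on flash, the change takes effect at the next array start (or reboot), which is exactly why this is a useful escape hatch when the VM or docker service is preventing the webUI from coming up.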
  14. This is not the current support thread. As I said in your other thread
  15. Probably would have been better to post this in the Unassigned Devices thread. You can go directly to the correct support thread for any of your plugins by clicking on the Support Thread link on the Plugins page. I don't think UD actually supports creating multiple partitions but you might ask on that thread. If not, you would have to go to the command line or create them on another system.
  16. Bad connections are much more common than bad disks. Especially if you have recently been inside the case to install a new disk. Check all connections, power and SATA, both ends, including any power splitters. Then post new diagnostics.
  17. Not the cause of your issue, but you have a share anonymized M----s with some files on cache but it is set to cache-no. Mover never moves cache-no shares. See this FAQ for details on the nuances of the various Use cache settings: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/#comment-537383
  18. SSDs are NOT recommended in the parity array. They can't be trimmed, and there is some question whether some implementations might invalidate parity.
  19. Syslog is already included in diagnostics, but in the event of a crash, you can't get diagnostics. The syslog up to that point will be saved by syslog server.
  20. You should always go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  21. Are you mapping the Unassigned Device r/w slave?
  22. The benefit of having parity larger than any of your data disks is that it allows you to replace a data disk with a larger disk. For example, after you have 4TB parity, you could replace a 3TB data disk with a 4TB one. A 4TB parity will also allow you to add a 4TB disk, whereas a 3TB parity will not allow you to add a 4TB data disk (your current situation). As for speed: every write updates both the data disk being written and the parity disk, since parity is realtime. The speed of a write is limited by the slower of the disks involved. So, for example, if you have 5400rpm parity and 2 data disks, one 5400rpm and the other 7200rpm, the slower parity won't matter to the 5400rpm data disk, but it will limit the write speed of the 7200rpm data disk. So it's best if parity isn't slower than your data disks. Here are some more details about the 2 different ways you can configure parity updates, and the differences between them: https://lime-technology.com/forum/index.php?topic=52122.0
  23. Those diagnostics are without the array started. Can you start the array and post new diagnostics?
  24. There are many ways to use cache. I personally don't bother with using cache to speed up user share writes. Most of my writes are from scheduled backups and queued downloads, so I am not waiting for them to complete anyway. I use cache for my apps (dockers) for better performance, and so the apps won't keep parity and array disks spinning. I don't have any VMs because dockers do everything I want. My Plex docker's DVR is also set up to record to cache for better performance; I either delete recordings after watching or move them to other user shares for long-term storage. And I have a copy of a subset of my photos and music on cache so other devices on the network can access them without spinning up array disks. I have 2x250GB SSDs as cache, and could probably do with much less. Other people like to cache writes to user shares so they are faster. Cache in that case needs to be large enough to hold those writes until they can be moved to the array. The default schedule for mover is daily in the middle of the night, but that can be changed. In any case, you can't move files from cache to array as fast as you can write to cache. Mover is intended for idle time.