
trurl

Moderators
Everything posted by trurl

  1. It isn't necessarily bad to have 40G, but that is not the recommended setting and it is very unlikely necessary. Making the image larger will not fix the problem of it filling; it will just take longer to fill. Only fixing your applications will fix this. The recommended setting is 20G, but you have already used all of that.

     If you have an application configured to write to a path that is not mapped to the host, then that path is inside the docker image. Linux is case-sensitive, so if the path you specify inside the application differs in upper/lower case from the path specified in the mapping, it is a different path and will end up inside the docker image instead of in mapped storage on the host. Also, any path that is not absolute (beginning with /) will be inside the docker image.

     The system share, if it was already all on cache, could be set to cache-only. But mover won't move cache-no or cache-only shares; to get it moved to cache it would have to be cache-prefer. And mover can't move open files, so you would have to disable Docker and VM Manager (not the individual dockers and VMs, the whole thing) in Settings before they could be moved. The same applies to your domains share.

     If you have appdata (yours is OK), domains, or system shares on the array, your docker and VM performance will be impacted by the slower parity writes, and they will keep array disks spinning.
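To see the case-sensitivity point in practice, here is a minimal shell sketch (the directory names and the /tmp-style scratch location are hypothetical; in a real container the mismatched path would end up inside the docker image rather than on the host):

```shell
# Two directories differing only in case are entirely separate paths on Linux.
demo=$(mktemp -d)
mkdir -p "$demo/Downloads" "$demo/downloads"
ls "$demo"
# If the container maps /mnt/user/downloads but the application is told to
# write to /Downloads (or to a relative path like Downloads/), those writes
# land inside the docker image, not in the mapped host storage.
```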
  2. Why do you have 40G allocated to the docker image? 20G is usually much more than enough, but I see you have already used that much. This is often because the user has applications misconfigured so they are writing into the docker image instead of mapped storage. Also, your system share, where your docker image is stored, has files on the array. So does your domains share; in fact, it is set to have its files moved to the array.
  3. The license is associated with the flash drive you register it on. That flash drive can be used on any computer and the license is still in effect. If for some reason the flash drive needs to be replaced, the license can be transferred to the new flash. See the last sentence on this wiki page here: https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device
  4. I assumed that's what you meant. I just didn't know if you knew that you weren't speaking to them and that they would likely not see your post.
  5. Anything you have submitted on the Add Container form is saved as a template on your flash drive. You can easily reuse that saved template by selecting it from the Template dropdown at the top of the Add Container form.
  6. Are you sure your network connection to Unraid hasn't been degraded to 100M instead of 1G? That is the usual way ethernet works if one of the wires doesn't have signal for some reason. You should be able to see what the network speed is on the Dashboard on the left under Interface.
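One way to confirm the negotiated speed from a console or SSH session is with ethtool (the interface name eth0 is an assumption; list yours with `ip -br link`, and the guard keeps the sketch from erroring where ethtool isn't installed):

```shell
# Report the negotiated link speed for the interface.
IFACE="${IFACE:-eth0}"
if command -v ethtool >/dev/null 2>&1; then
    ethtool "$IFACE" | grep -i 'speed'
else
    echo "ethtool not installed"
fi
# A healthy gigabit link reports "Speed: 1000Mb/s"; a cable with a bad
# wire pair commonly negotiates down to "Speed: 100Mb/s".
```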
  7. Just wanted to comment on this bit. Almost everyone you will encounter on this forum is a fellow Unraid user like yourself, so the pronoun "you" here isn't really appropriate. Limetech is the Unraid company, with very few employees. Free support is provided by the user community on this forum. From time to time Limetech may get involved in some things on the forum, but most posts they will not even read.

     Now on to your question. That backup USB flash drive you have should have a license .key file. The required location for that file going forward is in the config folder, but some older versions of Unraid would use it from the top folder of the flash, so look there for it as well. Make a copy of that file to make sure nothing happens to it during the upgrade.

     As long as that flash drive isn't very small and it is working well, I would just use it, since that license file is associated with it. If you use another, you would have to get your license transferred to that other flash. That isn't difficult and can be done in the webUI of the latest versions of Unraid, but working with what you already have will be simpler, at least until you get the upgrade done.
  8. Maybe a lot of plugins won't be available for that old version. You may have to wait and do those checks after you upgrade. This has all gotten a little off-topic for this release thread. Go through the rest of that link I gave and if you need more help with any of it, start a new thread in General Support with your diagnostics.
  9. With multiple cache pools, you can get faster SSD storage that is capable of having redundancy. There are lots of ways to use that. Then just use the much larger, cheaper, and slower HDDs for archiving. Just upsize HDDs instead of adding more. More disks require more ports and other hardware, a higher license tier if you don't already have the max, and each disk is just another point of failure. I've never understood why some people have 20 or more 2TB disks in their array.
  10. Unless you set things up so the multiple writes occur on different data disks, it won't matter at all since the data disk being written will be the deciding factor no matter how fast parity is.
  11. Normally you can reinstall any docker using the exact same settings as before by using the Previous Apps feature of the Apps page. Here is how this all works under the hood: the Add/Edit Container page is just a form for filling out the things that go into the docker run command. The Apps page just uses the docker templates to fill in the Add Container page. The Previous Apps feature uses the templates you have already filled in in the past, which are stored on flash, to fill in the Add Container page. Those templates are named after the --name parameter in the docker run (Add Container page). So, if you haven't overwritten those templates by reusing that name, then you should be able to use the Previous Apps feature to get your docker added again just as it was. If you want to dig into this some more, those templates you have already filled in are on flash at config/plugins/dockerMan/templates-user
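As a sketch of that name-to-template relationship (the container name "binhex-plex" and the my-&lt;name&gt;.xml filename pattern here are illustrative assumptions, not something to rely on verbatim):

```shell
# Extract the --name parameter from a docker run command; the saved user
# template on flash is keyed by that same name.
run_cmd='docker run -d --name=binhex-plex -v /mnt/user/appdata/plex:/config binhex/arch-plex'
name=$(echo "$run_cmd" | sed -n 's/.*--name=\([^ ]*\).*/\1/p')
echo "saved template: /boot/config/plugins/dockerMan/templates-user/my-${name}.xml"
```

Reusing the same --name overwrites that template, which is why Previous Apps can no longer restore the earlier settings once the name has been recycled.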
  12. I don't use this particular plex container, but it seems reasonable that if only plexpass supports HW acceleration, then you would want to use the plexpass version of the binhex plex. On the other hand, I wouldn't think that would be the reason for the crash, since I would expect plex to just fall back to not using HW, and if it runs at all that must be what it is doing. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post. Also, post your docker run command for this container as explained at the very first link in the Docker FAQ: https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=564345
  13. Each disk in the array is an independent filesystem. Each file must fit completely on a single disk.
  14. Your cache is unmountable. You should see that very clearly in Main - Cache Devices.
  15. Did you do this using the Unassigned Devices plugin? That plugin has a support thread. You can go directly to the correct support thread for any of your plugins by using its Support Link on the Plugins page.
  16. Try deleting or renaming config/network.cfg on flash and reboot so it will use default network settings.
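From a console or SSH session, the rename might look like this (CONFIG_DIR defaults to the flash's config folder as mounted on a live server; the existence check is only there to make the sketch safe to run elsewhere):

```shell
# Rename network.cfg so Unraid falls back to default network settings
# on the next reboot.
CONFIG_DIR="${CONFIG_DIR:-/boot/config}"
if [ -f "$CONFIG_DIR/network.cfg" ]; then
    mv "$CONFIG_DIR/network.cfg" "$CONFIG_DIR/network.cfg.bak"
    echo "renamed; reboot to apply default network settings"
else
    echo "no network.cfg at $CONFIG_DIR"
fi
```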
  17. Your last post before this thread was in 2017. Had you upgraded at all since then?
  18. You seem to have some things in your syslog referencing a plugin you don't actually have installed. Possibly that is from having another browser open somewhere to your server. Close all other browsers on all other computers on your network, clear browser cache, and post new diagnostics.

      Mar 20 17:43:02 Tower nginx: 2020/03/20 17:43:02 [error] 2446#2446: *1000 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.60, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.2", referrer: "http://192.168.1.2/Settings/SMB"
      Mar 20 17:43:07 Tower nginx: 2020/03/20 17:43:07 [error] 2446#2446: *1000 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.60, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.2", referrer: "http://192.168.1.2/Settings/SMB"
      Mar 20 17:43:08 Tower nginx: 2020/03/20 17:43:08 [error] 2446#2446: *1000 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.60, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.2", referrer: "http://192.168.1.2/Settings/SMB"
      Mar 20 17:43:14 Tower nginx: 2020/03/20 17:43:14 [error] 2446#2446: *1264 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.60, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.2", referrer: "http://192.168.1.2/Settings/SMB"
  19. Maybe something in these links in case you haven't seen them: and https://forums.unraid.net/topic/51703-vm-faq/?tab=comments#comment-557627