Leaderboard

Popular Content

Showing content with the highest reputation on 01/12/19 in all areas

  1. There should be a new update available. This has quite a number of changes to how ffmpeg is executed. It should resolve some issues with inotify and library scanning. For those who created issues on GitHub, the ones that "should" now be fixed are marked as closed. If I get time tomorrow I will make a push for adding some features, perhaps some additional settings for file conversions.
    3 points
  2. I'm taking a look now. Edit: This may require a bit more investigation. I'm leaving some tests running while I head to bed; I'll let you know what I find in the morning.
    2 points
  3. Your screenshot shows appdata all on cache, but the system share has the same amount on cache and disk2, probably a duplicate of your docker image. Maybe I missed it in a previous post, but which 2 config files?
    1 point
  4. Split level overrides Allocation Method when selecting a disk. This can result in a disk being selected even when it does not have much free space. You probably need to either relax your Split Level settings, or move files off disk1 to free up space.
    1 point
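The interaction described above can be illustrated with a toy model (this is not Unraid's actual code, just a sketch of the selection logic: split level acts as a hard filter first, and the allocation method only ranks the disks that survive it, which is how a nearly full disk can still be chosen):

```python
def pick_disk(disks, split_allowed, allocation_key):
    """Pick a disk: split level filters first, allocation method ranks second."""
    candidates = [d for d in disks if split_allowed(d)]
    if not candidates:
        return None
    return min(candidates, key=allocation_key)

disks = [
    {"name": "disk1", "free_gb": 5},
    {"name": "disk2", "free_gb": 500},
]

# Split level only permits disk1 for this path, so disk1 wins even
# though a "most free" allocation method would have preferred disk2.
choice = pick_disk(disks,
                   split_allowed=lambda d: d["name"] == "disk1",
                   allocation_key=lambda d: -d["free_gb"])
print(choice["name"])  # disk1
```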
  5. Check if mover logging is enabled; it might show some more info.
    1 point
  6. I would recommend you back up your cache, since it's in a state where it could become unpredictable or unmountable. Reformat with just the device you want and restore the data; you can use this for help with the backup/restore.
    1 point
  7. Your mounted capacity for cache doesn't really make any sense for either or both of those 2 disks, so something is not right. I don't know if your dockers are working or not, and the diagnostics from your old version don't help me figure out which disks they would be using anyway. I know your system share is configured to move files off cache, so the docker image may not even be on cache. And your appdata share is configured to write new files to cache and not move them, but that doesn't mean some of it wasn't already on the array. Whether your storage configuration problems are the cause of your other problems is unclear. Go to Shares - User Shares and click on Compute All at the bottom of the page, then wait for it to get the results and post a screenshot.
    1 point
  8. Can you currently read files from cache? If not then something is going to need fixing before you can have any hope of getting that data to an Unassigned Device. And just taking that HDD and moving it to Unassigned Devices isn't going to result in the pool data you want to be on Unassigned anyway. So as I said, some detailed work to do to reconfigure things. I question whether moving the HDD to Unassigned for your dockers is the right approach if that is what you had in mind. I would leave the HDD as cache and put the SSD as Unassigned for apps if that is the approach you want to go for. If you don't mind spending some money I would just put that HDD in the parity array and get 1 or 2 larger SSDs for the cache pool and forget about running apps on an Unassigned Device, since it will require some reconfiguration of how you use docker and install containers.
    1 point
  9. 1 point
  10. Not currently, since the SSD was removed from the pool. You can re-add it and then remove the HDD; best to clear the SSD first, though.
    1 point
  11. The SSD was dropped from cache at some point in the past, likely after dropping offline. You can re-add it, but IMO there's not much point in running a hybrid SSD/HDD cache; I would use one or the other, depending on whether you need speed or capacity.
    1 point
  12. You need to post your Diagnostic file. Tools >>> Diagnostics
    1 point
  13. Your cache description isn't entirely clear, but in any case, however you have it configured, it is a pool with no way to specify the disks separately when accessing them. Most people would never consider mixing hdd and ssd in the cache pool. By "removing hdd from the array" I assume you actually mean removing it from the cache pool, not the parity array. People often use an Unassigned Device for dockers, etc. but an SSD is going to perform better with apps. I really don't have enough details about your system to make specific recommendations about how to proceed, and getting your configuration changed will probably take some detailed work, but it can be done. To give us a more complete idea of what you have and what might be happening, go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post.
    1 point
  14. I followed the manual instructions via the link on the first post. It does state you can use the web interface also, but I couldn't find any proof that was the preferred method after reading this entire thread. I did have to re-install all of my apps for some reason, not sure why, but I did follow the directions exactly as provided. Re-adding the apps was very simple, configuration settings relating to those apps appear to have been retained. So I'm now on 15.0.2.
    1 point
  15. I haven't actually used this feature of transmission for a while, maybe not ever with this container. I just tried it, and the only way I can get it to pick up a torrent in the watch folder is to restart the container. Looking inside the container, it doesn't look like it has inotify capability, which I assume is what would trigger it when a new file is added.
    1 point
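Since the container appears to lack inotify support, periodic polling is the usual fallback. A minimal sketch of that idea, assuming nothing about the container itself (the directory and file names below are throwaway examples, not transmission's actual watch mechanism):

```python
import tempfile
from pathlib import Path

def scan_watch_folder(watch_dir, seen):
    """Return names of .torrent files not seen in previous scans.

    A simple polling fallback for when inotify isn't available;
    `seen` is the set of paths already handled.
    """
    new_files = []
    for path in sorted(Path(watch_dir).glob("*.torrent")):
        if str(path) not in seen:
            seen.add(str(path))
            new_files.append(path.name)
    return new_files

# Demo: a throwaway directory stands in for the real watch folder.
demo_dir = tempfile.mkdtemp()
seen = set()
Path(demo_dir, "example.torrent").touch()
print(scan_watch_folder(demo_dir, seen))  # ['example.torrent']
print(scan_watch_folder(demo_dir, seen))  # [] - already handled
```

Running a loop like this on a timer (cron, or a sleep loop) approximates the event-driven behaviour, at the cost of a polling delay.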
  16. Managed to move the config and even the session files over to your docker.
    1 point
  17. Install VirtualBox and create a macOS VM. You can skip any special configuration tweaks as the most basic install will give you the App Store. Be prepared for a laggy low rez experience, but at least you'll be able to download clean OS imgs. This was my entry point. There are guides available that will point you to prebuilt VBox images.
    1 point
  18. Well the plugin is doing exactly what it's supposed to. For some reason you can't connect to the rclone website to download the zip. Seems that you can't resolve the domain. Perhaps try increasing the timeout value and try the command again.
    1 point
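Before raising the timeout, it can help to confirm whether the box can resolve the domain at all. A quick stdlib check (the hostname below is only a stand-in; substitute the host the plugin downloads from):

```python
import socket

def can_resolve(hostname):
    """Return True if the hostname resolves to an address, False on DNS failure."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# "localhost" is just an example target; replace it with the real
# download host to see whether DNS is the actual failure.
print(can_resolve("localhost"))  # True on a working resolver
```

If the real host returns False here, the problem is name resolution (DNS settings, gateway), and no timeout value will fix it.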
  19. I sometimes have flaky connection to my server and often lose the terminal connection and have to open a new one. Would be nice if there was a way of logging off or exiting all existing connections somehow.
    1 point
  20. So I got it to work, and it is very nice.
    Install the traefik plugin from: https://github.com/yaskor/unraid-docker-templates
    Then download this file: traefik.toml <- click to download. Replace <your-email> with your email and <your-domain> with your domain (duckdns), then copy it to /mnt/user/appdata/traefik/
    Now (re)start the traefik container via Unraid.
    Next, go to the docker image you want to access from outside and put the following as an extra argument (Unraid - Advanced View):
    --label="traefik.enable=true" --label="traefik.port=<port>" --label="traefik.frontend.rule=Host:<container-name>.<your-domain>.duckdns.org"
    Replace <container-name> with a name of your choosing (the name of the container), <your-domain> with your domain, and <port> with the internal port of the container. !!!Attention: not the port which is mapped!!!
    Restart the container. Now it should be working.
    1 point
  21. I can confirm that after rebooting and dozens of accesses via the web terminal... typing exit allows repeat functioning. @limetech, can the web terminal send an 'exit' command upon closure to help dummy-proof it (as I look in the mirror)?
    1 point
  22. How do I close some of them without restarting unraid?
    1 point
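On the Linux side, each open web-terminal session shows up as a numbered pseudo-terminal under /dev/pts, and hanging up the shell attached to one closes that session without a reboot. A small stdlib sketch of the listing side (the pkill example in the comment uses an illustrative pts number):

```python
import os

def list_terminal_sessions(pts_dir="/dev/pts"):
    """List open pseudo-terminal entries; each one is a terminal session.

    'ptmx' is the multiplexer device, not a session, so it is filtered out.
    """
    if not os.path.isdir(pts_dir):
        return []
    return sorted(e for e in os.listdir(pts_dir) if e != "ptmx")

print(list_terminal_sessions())
# To hang up a stale session without rebooting, signal the shell on
# that terminal from another session, e.g.:
#   pkill -HUP -t pts/2   (pts/2 is illustrative; pick the idle one)
```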