primeval_god

Everything posted by primeval_god

  1. Yeah, that's part of what I need to figure out how to express in the plugin interface. It looks like a simple subfolder, but when we are talking about BTRFS it has to be a subvolume, which is created differently than a normal subfolder. That is why it is important to let the plugin create the subfolder when the underlying filesystem is BTRFS. That said, creating the subvolume manually is possible, just a pain.
  2. No, I think you have it correct: if using a BTRFS file system it MUST be a single device (rather than a BTRFS RAID pool). In this case you are hitting one of the limitations of the UI/help that I haven't figured out how to address (mostly, how to inform the user of it) yet. For a swapfile to be used on BTRFS, it must be placed in a non-COW subvolume of a BTRFS file system. I assume you are trying to do something like this: /mnt/working/swapfile, where swapfile is in the file name field and /mnt/working is in the path field. Instead you should have something like this: /mnt/working/swapfile_dir/swap
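     Roughly what that layout looks like from the command line, as a sketch only (the paths are examples, and the plugin normally creates the subvolume for you):
       # create a dedicated subvolume for the swapfile
       btrfs subvolume create /mnt/working/swapfile_dir
       # disable copy-on-write on the still-empty subvolume so files created in it are NOCOW
       chattr +C /mnt/working/swapfile_dir
       # the plugin's path field would then be /mnt/working/swapfile_dir
       # and the file name field something like "swap"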
  3. @JibbsIsMe @Adriano Frare I have made an update to the plugin that I believe will address your issues. It is live now; try it out when you get a chance.
  4. The plugin logs to the main system log.
  5. I suggest uninstalling both the new and old swap plugins, rebooting, and then installing the new plugin. That should clear things out.
  6. Probably not, as the swap file is only used to swap out applications. Actually, I believe that when RAM space is low, any pages in RAM (except maybe some kernel stuff) are eligible to be written to swap. Theoretically, if you have a swapfile active and are writing data to tmpfs, those pages could make their way into the swapfile. That said, I don't imagine performance would be that great; my experience with very large swapfiles (as big as or bigger than system RAM) hasn't been great. Might be worth trying though.
  7. Swapfile (Unraid 6.9.0 and up)
     This plugin adds swapfile creation and management to unRAID. This is a fork of the original "Swap File Plugin for unRAID" by @theone. The goal of this fork is to revamp the plugin for unRAID 6.9.0 and add support for swapfile creation on BTRFS drives.
     Notes: Currently in beta. You must remove the original Swap File Plugin before installing this plugin. This plugin no longer contains the feature to auto-update on startup.
     Limitations: Swapfiles should only be placed on single-disk BTRFS drives. BTRFS drive p
  8. So the dev pack plugin won't help you here. The thing you need to understand is that a docker container essentially has a separate OS within it, which is different from the OS on the host. To get a compiler within the container you would need to exec into the container and install a compiler there, using whatever method is correct for the OS within the container (assuming the container creator hasn't stripped out package management functionality). Offhand I don't know what OS the netdata container is based on. Your best bet is likely to go to https://learn.netdata.cloud/ and see if they
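     As a rough illustration only (the container name and package manager here are assumptions; the netdata image may differ):
       # open a shell inside the running container
       docker exec -it netdata /bin/sh
       # then, inside the container, install a compiler with whatever package
       # manager that OS actually uses, e.g. on a Debian-based image:
       apt-get update && apt-get install -y gcc
       # or on an Alpine-based image:
       apk add --no-cache gcc
     Keep in mind that anything installed this way disappears when the container is recreated.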
  9. Basically the same as you would using it client side. Open the interface, select an encrypted container, and enter the password to mount it. Using docker bind mounts with the "shared" flag you can make the VeraCrypt mounts accessible from the host and thus shareable via SMB.
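     A minimal sketch of the shared bind mount idea (the image name and host path are placeholders, not a tested setup):
       # FUSE mounts inside a container generally need the fuse device and extra privileges;
       # :shared propagation lets mounts made inside the container appear on the host
       docker run -d --name veracrypt \
         --cap-add SYS_ADMIN --device /dev/fuse \
         -v /mnt/user/vaults:/vaults:shared \
         your-veracrypt-image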
  10. You can run VeraCrypt in a docker container.
  11. This is what I was referring to: https://hub.docker.com/_/python
  12. Ah yes, you have found one of the classic stumbling blocks for new docker developers. This isn't an unRAID issue, rather a non-intuitive specific of the Docker volume/bind mount system. You understood correctly; however, there are caveats that are not obvious. The VOLUME directive in a Dockerfile tells docker that a volume needs to be mounted at the provided path. When the container is run, if no volume flag (-v) is specified, docker will automatically attach an anonymous volume to that path; alternatively you can use the -v flag to attach a named volume to your container. These volumes are m
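     To make that concrete (the image and paths here are hypothetical):
       # suppose the image's Dockerfile contains:  VOLUME /data
       # run with no -v flag: docker attaches an anonymous volume at /data
       docker run -d --name example1 myimage
       # run with -v: a named volume (or a host path) is mounted there instead
       docker run -d --name example2 -v mydata:/data myimage
       # anonymous volumes show up with long hashed names here
       docker volume ls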
  13. It looks like the emby/embyserver container may be based on a base image with an s6-overlay. If that is the case you could potentially put your script launch command watch -n30 "/system-share/transcoding-temp-fix.sh" > /transcode/transcoding-temp-fix.log & into a launch script and bind mount that into the container under the /etc/cont-init.d directory. If I understand s6 correctly, it would run your script when the container starts. https://github.com/just-containers/s6-overlay#init-stages
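     A sketch of that approach (script name and host path are examples, and whether the emby image actually uses s6 is an assumption):
       # on the host, create a small init script containing the launch command
       echo '#!/bin/bash' > /mnt/user/appdata/emby/transcode-fix
       echo 'watch -n30 "/system-share/transcoding-temp-fix.sh" > /transcode/transcoding-temp-fix.log &' >> /mnt/user/appdata/emby/transcode-fix
       chmod +x /mnt/user/appdata/emby/transcode-fix
       # then bind mount it into the container's s6 init directory, e.g. add to the run command:
       #   -v /mnt/user/appdata/emby/transcode-fix:/etc/cont-init.d/99-transcode-fix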
  14. I am fairly certain that is not how the Post Arguments field is meant to be used. Its purpose is to add extra arguments to the end of the docker run command, not to append a completely new docker command. The problem you are facing stems from the fact that the Post Arguments become part of the docker run command, and thus only execute when the container is recreated, not when it starts or stops. Unfortunately I am not really certain how to achieve what you are trying to do, aside from forking the official Emby container and integrating your script.
  15. They are not meant to be required; I believe this plugin just hasn't been updated to account for swap file support in BTRFS.
  16. No, you definitely need to run mkswap on the file before using it. I didn't notice that the steps you had posted skipped that. Your missing steps are (note 1G for a 1GB swapfile):
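     The command listing itself isn't preserved in this capture; a typical sequence (the path is only an example) would be roughly:
       fallocate -l 1G /mnt/cache/swap/swapfile   # allocate the file (1G for a 1GB swapfile)
       chmod 600 /mnt/cache/swap/swapfile         # swap files must not be world-readable
       mkswap /mnt/cache/swap/swapfile            # write the swap signature
       swapon /mnt/cache/swap/swapfile            # activate it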
  17. I have had the same issue with the plugin before, particularly when using large swapfiles. Run swapon /mnt/cache-nvme/swap/swapfile
  18. First step would probably be to go to the Nerdpack thread and request the addition of pixz. A less popular option, but one I always recommend because I think it should be more popular, would be to attempt to run it in docker. Normally I would suggest looking on Docker Hub for a pixz container, but I only see one available and it doesn't have a Dockerfile listed. Then your next best bet would be building your own docker image with pixz. There are plenty of tutorials online on how to do it properly with a Dockerfile; an easy start, however, would be to pull an alpine base image, run a console con
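     As a rough sketch of that docker route (the host path and file name are placeholders, and it assumes pixz is available in the Alpine package repositories):
       # one-shot example: pull alpine, install pixz, compress a file from a mounted share
       docker run --rm -v /mnt/user/archives:/work alpine \
         sh -c "apk add --no-cache pixz && pixz /work/somefile.tar"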
  19. I have actually been toying around with a very similar idea myself for some time. In my case specifically, however, I was looking to have dockerman add the label net.unraid.dockerman.managed, along with ones for the webui and icon, to every docker container. The idea would be to have an option for unRAID to display only containers labeled with the net.unraid.dockerman.managed label on the dashboard, and an option on the docker page to have tabs for managed and unmanaged containers. This would allow me to hide any containers created by other sources (such as ephemeral containers) from the m
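     For reference, that kind of label is just ordinary Docker metadata; something along these lines (the image name is a placeholder, the label name is the one proposed above) is what filtering on it would look like:
       # attach the label when the container is created
       docker run -d --label net.unraid.dockerman.managed=true myimage
       # list only containers carrying that label
       docker ps --filter "label=net.unraid.dockerman.managed=true"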
  20. A docker container can be connected to multiple docker networks, just not, if I remember correctly, anywhere in the unRAID web interface. You would use the command docker network connect from the terminal to connect a container to a second network. The annoying bit, again if I remember correctly, is that you would have to rerun that command every time the container is re-created, such as when you use the update button in the web GUI.
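     For example (network and container names are placeholders):
       # attach an existing container to a second network
       docker network connect my_second_network my_container
       # confirm both networks are now attached
       docker inspect my_container --format '{{json .NetworkSettings.Networks}}'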
  21. So I am seeing something oddly similar. I just installed 6.9.1 yesterday and now I see this at the bottom of the "Array Operation" tab on the "Main" page. The thing is, I don't have the s3 sleep plugin installed, and I haven't for quite some time.
  22. A user script that runs on a schedule and looks in the Nextcloud share for specified file extensions, maybe?
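     A bare-bones version of that idea (the share path, extensions, and time window are only examples):
       #!/bin/bash
       # list files with certain extensions added to the share in the last hour
       find /mnt/user/nextcloud -type f \( -iname '*.mp4' -o -iname '*.mkv' \) -mmin -60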
  23. @ogi It's not really an answer to why the above happens, but you might want to look at using ionice on your command line copy operations: https://linux.die.net/man/1/ionice
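     For example, to run a copy at idle I/O priority (paths are placeholders):
       ionice -c3 cp -r /mnt/user/source /mnt/user/destination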