primeval_god

Community Developer
Everything posted by primeval_god

  1. Having Docker containers and VMs that access the internet (but are not accessible from the internet) is generally fine; they are isolated from the host OS to varying degrees. As for remote access, the recommended way for unRAID is a VPN into your home network, as that is the most secure option. Some people expose services in VMs and containers using a secured reverse proxy, which can be OK if you know what you are doing and are not exposing the unRAID host OS itself.
  2. Personally I achieve this with the user.scripts plugin. I have scripts that run on a schedule and execute docker pause and unpause commands.
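     As a rough sketch (the container names and times are placeholders, not my actual setup), the paired scripts can be as simple as:

         #!/bin/bash
         # "pause" script, scheduled for e.g. 01:00 via the user.scripts plugin
         for c in plex sabnzbd; do
             docker pause "$c"
         done

         #!/bin/bash
         # "resume" script, scheduled for e.g. 07:00
         for c in plex sabnzbd; do
             docker unpause "$c"
         done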
  3. Keep in mind that the "Apps Page" is actually a third-party plugin called Community Applications (though it is developed more closely with Limetech than most). It acts as a convenient way to discover and install containers and plugins, but it is not the only way to do so. Also, speaking from personal experience, in the past I have typically not run the latest version of unRAID (I usually stay a version back). The lack of security updates for older versions (which was how it used to work before the licensing change) was rarely something that concerned me. unRAID is designed to run from within a home network, with minimal exposure to the outside world.
  4. This is a NAS appliance OS; security has never been a priority. OK, that's not entirely true (said mostly for effect), but the fact is that the unRAID OS has always been slow to get security updates (except for the occasional critical issue), has a fairly lax security model, and is intended to run on a home network with no exposure to the outside internet. unRAID is a consumer product only, with no enterprise tier.
  5. The term "Array" is old nomenclature that refers to the singular required drive pool that uses unRAID's proprietary parity scheme. The "Array" pool can contain mixed drive sizes and types (though mixing SSDs and HDDs is not recommended), as well as up to 2 parity drives to maintain redundancy for the pool. unRAID also supports optional secondary drive pools (previously called cache pools, though they weren't a cache in the typical sense of the word). These secondary pools can be single disks or have redundancy via ZFS or BTRFS software RAID levels. These secondary pools are separate from the "Array" pool in terms of redundancy. In terms of file access, unRAID has user shares which present a combined view of folders from all the various drive pools. There are options to set which pool data gets written to when writing to a user share, and when and if data gets moved between pools. One common usage of this is to direct all writes to an SSD pool and then have unRAID move the files onto the Array pool later. Finally, to the point above, the general recommendation is to have an SSD-based pool to store appdata on, separate from the Array. Whether or not the pool used to store appdata has redundancy is up to you, but it does not affect the configuration of the "Array" pool.
  6. Yes, send and receive work on subvolumes only (snapshots are just a type of subvolume). Yes, the other filesystem has to be BTRFS for send and receive to work; the sent subvolume becomes a subvolume on the receiving filesystem. I am not entirely sure about the capabilities of this plugin with regards to scheduling. BTRFS send and receive does a sort of differential send when the subvolume/snapshot being sent is based on a subvolume/snapshot that is present in both filesystems (assuming you use the option to specify the parent). This reduces the amount of data sent for subsequent snapshots of the same subvolume. I am not sure if this plugin actually makes that option available though, as I do my snapshot sending via the command line.
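     From the command line that incremental workflow looks roughly like the sketch below (the pool and snapshot paths are just examples, not anything the plugin creates for you):

         # initial full send of a read-only snapshot to another BTRFS pool
         btrfs send /mnt/cache/snaps/appdata_2024-01-01 | btrfs receive /mnt/backup/snaps

         # later snapshots of the same subvolume can be sent differentially
         # by naming the common parent with -p
         btrfs send -p /mnt/cache/snaps/appdata_2024-01-01 \
             /mnt/cache/snaps/appdata_2024-02-01 | btrfs receive /mnt/backup/snaps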
  7. You will want to stop any Docker containers that have a mapping to or within the share you are operating on while you make changes. Aside from that though, you don't need to make any changes, since you are creating the subvolume with the original path. Likewise you shouldn't need to make any changes to share settings, since as far as unRAID is concerned the new subvolume (which has the same path as the original user share) is the existing user share. Where you store snapshots doesn't really matter; they can be anywhere within the same pool (they don't have to be within a subvolume). Snapshots themselves are just subvolumes anyway. This is not entirely true, but it requires some explanation. When you snapshot a subvolume, the snapshot must be made somewhere on the same filesystem (pool), as it is a CoW copy of the subvolume (and a new subvolume itself). You can however send subvolumes from one BTRFS filesystem to another using btrfs send and receive (which are available in this plugin). Doing this copies the subvolume to the other filesystem, and thus it is no longer a CoW copy but a full copy taking up space on the other filesystem. Once a subvolume is sent to the other filesystem, there is a way to send subsequent snapshots of that subvolume between the two filesystems in a way that maintains the CoW relationship between the subvolume and its snapshots.
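     For anyone reading along, one common way to do the share-to-subvolume conversion and then take a local snapshot looks roughly like this from the shell. This is a generic sketch, not necessarily the exact procedure from earlier in the thread; the paths and names are hypothetical, and again, stop any containers using the share first.

         # move the existing share folder aside, create a subvolume at the original
         # path, copy the data back (reflinks keep this fast on BTRFS), then clean up
         mv /mnt/cache/appdata /mnt/cache/appdata_old
         btrfs subvolume create /mnt/cache/appdata
         cp -a --reflink=always /mnt/cache/appdata_old/. /mnt/cache/appdata/
         rm -r /mnt/cache/appdata_old

         # take a read-only snapshot of the subvolume somewhere on the same pool
         mkdir -p /mnt/cache/snaps
         btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata_2024-02-01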
  8. The issue is with unRAID, not this plugin. Last I was aware, there was an open PR fixing the issue.
  9. Is there a question here? I don't really understand the post.
  10. The changes you are making are likely not taking effect. There is a long-standing issue with how dockerman handles the webui and icon labels. Basically it caches them the first time a container is seen, and subsequent changes don't take effect. Try restarting your server; that may fix the webui label issue.
  11. That is a Dockerman question. The compose plugin only assigns the labels to the containers. unRAID's built-in Dockerman is what actually reads the label and caches/displays the icon.
  12. Still, having flash drives fail with that frequency is not normal. It suggests some sort of hardware or configuration problem. Do you notice continual writes to your flash drive?
  13. Is this a container run via unRAID's built-in Dockerman interface or by some other method?
  14. Other options for installing things are via virtual machines or LXC containers (using the LXC plugin).
  15. If this is the case then you are doing something very wrong. I have been using the same flash drive for 5+ years without issue. On a normal unRAID system the flash drive incurs minimal reads and writes, as the OS runs completely from RAM. The only writes should be on the occasions that settings are changed or plugins or the OS are updated.
  16. Logs (specifically anything that mentions swapfile) and the specifics of your settings would be helpful. Also the actual unRAID version, as I am not certain what the last 2 minor releases were.
  17. This thread is not really the right place to ask about issues with specific compose stacks, unless the problem is with the compose manager plugin rather than the stack itself. Having lots of people post whole compose files in this thread makes it harder for others to find info about the plugin itself. I am not really an expert on compose files in general and I don't use that specific application, so I don't know how much help I can be. One general observation is that you should remove the version: "3.8" at the top of the file. Compose no longer requires it, and actually recommends against calling out a specific compose version in compose files, to better support mixing and matching syntax from various versions of the compose spec.
  18. I think what you are looking for is the label "net.unraid.docker.webui". You can apply this label to a container created by Portainer (or any other container creation method) and set its value to the URL of your container, and unRAID will show a WebUI button for the container. There is also a "net.unraid.docker.icon" label for setting an icon (a URL to the icon) and "net.unraid.docker.shell" for setting the shell option (bash for example). It is important to note that there is a bit of buggy behavior with these labels, in that unRAID caches the value and subsequent changes are not reflected in the UI. So essentially you get one shot at setting these things; make sure they are correct before spinning up a container with one (it is possible to wipe the cached value but it's not as simple as a page refresh).
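     For illustration, if you were creating the container straight from the docker CLI instead of Portainer, the labels would look something like this (the image name, port, and URLs are placeholders):

         docker run -d --name myapp \
             --label net.unraid.docker.webui='http://192.168.1.10:8080' \
             --label net.unraid.docker.icon='https://example.com/icons/myapp.png' \
             --label net.unraid.docker.shell='bash' \
             -p 8080:8080 \
             example/myapp:latest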
  19. Several plugins that include editors are based on https://ace.c9.io/
  20. unRAID has always been a purely home solution and I would never recommend it for business purposes.
  21. I don't know about GOHardDrive, but since you mention ServerPartDeals I will say that I have had great success with their "Manufacturer Recertified" drives. In my experience, and from what I have read, those "manufacturer recertified" drives are basically good as new. I use them in my main array without any additional worry.
  22. This. Files do not span between disks.
  23. Why not? Using containers to run scripts is actually quite convenient. See my comments in this thread for more info on how to do it.
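     As a generic illustration of the idea (not necessarily the exact setup from my earlier comments; the image, mount path, and script name are placeholders), a script can be run in a throwaway container like so:

         docker run --rm \
             -v /mnt/user/scripts:/scripts:ro \
             python:3.12-slim \
             python /scripts/nightly_cleanup.py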
  24. No. I am not a fan of auto-updates for anything, but especially not container stacks. If you are looking to auto-update containers I would suggest looking into Watchtower.
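     For reference, Watchtower typically runs as a container itself with access to the Docker socket, something along these lines (check its documentation for the exact options you want; the schedule here is just an example):

         docker run -d --name watchtower \
             -v /var/run/docker.sock:/var/run/docker.sock \
             containrrr/watchtower --schedule "0 0 4 * * *"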