primeval_god

Community Developer
Posts: 852 · Days Won: 2

Everything posted by primeval_god

  1. I asked to have it marked completed based on its 2014 creation date. I read this as solved by the presence of dual parity.
  2. Traefik can be used for free without signing up for anything. I am sure they sell some sort of service, but I wouldn't even know what it is.
  3. @JonathanM @SpencerJ Bit of a semantics question here: what should I do with "Feature Requests" that are solved by functionality available in a plugin? My thought would be to move them to completed, unless the request is specifically to integrate the functionality (or plugin) into unRAID itself. Also, what about requests that weren't really completed so much as they already existed and the user just needed to be pointed to the functionality? P.S. Happy Thanksgiving
  4. I wasn't suggesting a reverse proxy to the internet. I use one (Traefik) purely locally so that all my services are on different named paths instead of different ports. You can go a step further and put the reverse proxy container on an ipvlan-type network so it has its own IP address and serves on the common ports 80 and 443.
  5. You would change the host port. Using the Linux user id sounds kind of complicated to me. Would that be the user id from unRAID or from another machine? How will the users know their id? Personally I prefer using a reverse proxy like Traefik for something like this. Then each user can have a nice address to use: http://Server_IP:Traefik_Port/username/
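A minimal sketch of what the per-user path routing could look like, assuming Traefik v2 label syntax and the Compose manager plugin; the service name, image, and internal port here are hypothetical, not from the post:

```yaml
# Hypothetical compose fragment: route http://Server_IP:Traefik_Port/alice/
# to one user's container. "alice-files", the image, and port 8080 are made up.
services:
  alice-files:
    image: some/fileserver
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.alice.rule=PathPrefix(`/alice`)"
      - "traefik.http.routers.alice.middlewares=alice-strip"
      - "traefik.http.middlewares.alice-strip.stripprefix.prefixes=/alice"
      - "traefik.http.services.alice.loadbalancer.server.port=8080"
```

The stripprefix middleware removes /alice before forwarding, so the backend container does not need to know which path it is served under.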
  6. @JonathanM Just to be clear, when you say report, do you mean using the forum report function or pinging a mod from the thread?
  7. Considering posts like the above, maybe at some point the Feature Requests forum deserves a cleanup. It would make it easier for people, especially new users and those coming in from searches, to know which feature requests are still valid. Can moderators mark a topic as Solved, or is it only the original poster? Or maybe there should be another subforum called Completed?
  8. The only labels currently supported are for icon, shell, and webui.
  9. Yeah, that is what should work; make sure the containers were redeployed after you added that line. You can do a "docker inspect 'container_name'" on the containers to make sure the labels are set. You may also need to reboot unRAID: one downside of using the container labels is that currently the webui caches the icon, and changing the icon url label will not change the image until that cache is invalidated.
  10. @Kilrah was talking about the Compose manager unRAID plugin, not docker compose itself. I don't really know that much about the template system itself, but if you can get your master container to add labels to the containers it spawns, that would be the easiest way to get icons. All you need is to add a label "net.unraid.docker.icon=url-to-icon" to each container. Alternatively, maybe look into the FolderView plugin.
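In compose terms, the label line above might look like this (the service name, image, and icon URL are placeholders, not anything from the post):

```yaml
# Hypothetical compose fragment; image name and icon URL are placeholders.
services:
  worker:
    image: my/spawned-worker
    labels:
      - "net.unraid.docker.icon=https://example.com/icon.png"
```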
  11. Creating your own containers with all the needed scripts and applications is an option, but there are other ways to go about it as well that might not require creating your own containers. For instance, in your first post you mention needing to use 7zip in a script. Personally I use 7zip with an ephemeral container, on the command line and in scripts launched from the User Scripts plugin. Basically, instead of calling a 7z binary I do this: docker run --rm --workdir /data -it -v $PWD:/data crazymax/7zip 7z {args}, where {args} is whatever I would pass to 7z. It launches a crazymax/7zip container with $PWD bind mounted to the /data directory in the container, and the 7z command started with my specified arguments in the /data directory. The container deletes itself when it finishes. I do something similar with ffmpeg when I occasionally need to use it, though since I rarely have need of ffmpeg (I transcode in dedicated handbrake containers) I don't keep an ffmpeg image on hand (there are plenty of them available) and I just use another instance of a jellyfin container instead. In the case of the ffmpeg command I have a bash alias assigned such that bash replaces 'ffmpeg' with the quoted code above.
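The ephemeral-container pattern above can be wrapped in a small shell function. This is a sketch, assuming (per the post) that the crazymax/7zip image provides a 7z binary; the DOCKER variable and the function name d7z are my additions, there so you can preview the command with DOCKER=echo before running it for real:

```shell
#!/bin/sh
# Sketch of the ephemeral 7zip container pattern.
# DOCKER is an override hook: set DOCKER=echo to preview the command.
DOCKER="${DOCKER:-docker}"

d7z() {
  # --rm removes the container on exit; the current directory is
  # bind-mounted at /data, which is also the working directory, so
  # relative paths behave as if 7z ran locally. (-it is omitted so the
  # wrapper also works from non-interactive scripts.)
  "$DOCKER" run --rm --workdir /data -v "$PWD":/data crazymax/7zip 7z "$@"
}

# Example: extract an archive from the current directory.
# d7z x archive.7z
```

A bash alias pointing at the same docker run line works too; a function just handles arguments with spaces more predictably.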
  12. Yes. From a Linux standpoint, unRAID is really a single-user system. The "users" created in the webui exist for the purpose of controlling share access only. Not at all. Docker containers are the preferred way to add user apps/programs in unRAID. They are lightweight, and provide an easy way to isolate user programs from the core system. It makes it much easier to create and manage environments for your customizations to run in that won't risk destabilizing the unRAID OS. The general rule of thumb is anything that can be done in docker should be done in docker. If it can't be done in docker, a VM is the next best, and plugins/native services should be reserved for things that need to integrate with the webui or OS. There is also the option of LXC containers, which would sit between docker and VMs in the hierarchy, but are currently provided by a third-party plugin rather than as a core unRAID feature (though they work quite well).
  13. Right, but the things it reports aren't things that the user can do anything about (unless they are willing to build their own docker images, that is). Its primary audience would be the developers who build the images. The majority of unRAID users aren't building their own images, or even getting them from Docker Hub I would wager; they are using what is available in CA.
  14. This looks like a tool for docker image developers. I am not sure how much utility it would have for unRAID users.
  15. The answer is in the comment directly after the one you linked. It has nothing to do with the link itself. The webui isn't using the updated label.
  16. It is not symlinks; it's lower level than that, more like mount points. It's a fundamental property of BTRFS filesystems. If it helps to think of it that way then sure; I am not entirely sure what the lower levels of BTRFS look like. The important point is that no matter how they are structured, subvolumes can only be snapshotted separately.
  17. That is the expected behavior per my previous comment. It is the way btrfs snapshots work for nested subvolumes.
  18. It won't matter if you do. If you have nested subvolumes and snapshot the outermost one, the snapshot will not contain the contents of the nested subvolumes, because BTRFS snapshots are non-recursive with respect to subvolumes. And of course the key piece of info is that a snapshot is a type of subvolume. Likewise, if you were to create subvolumes on the disk and then snapshot the root volume of the disk, the resulting snapshot would not contain a copy of the contents of the subvolumes. You would have to snapshot the root volume and the subvolume separately. To snapshot a snapshot you would have to specifically target the snapshot in the command.
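A quick command sketch of that non-recursive behavior. The paths here are hypothetical, and these commands need root on an actual btrfs filesystem, so treat this as illustration rather than something to paste in:

```
# /mnt/disk1 is btrfs; "data" is a subvolume with a nested subvolume inside.
btrfs subvolume create /mnt/disk1/data
btrfs subvolume create /mnt/disk1/data/nested
echo hello > /mnt/disk1/data/nested/file.txt

# Snapshot the outer subvolume...
btrfs subvolume snapshot -r /mnt/disk1/data /mnt/disk1/data_snap

# ...and the nested subvolume appears only as an empty directory:
ls /mnt/disk1/data_snap/nested    # empty - snapshots do not recurse

# To capture everything, snapshot each subvolume separately:
btrfs subvolume snapshot -r /mnt/disk1/data/nested /mnt/disk1/nested_snap
```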
  19. I thought technically the root volume of a btrfs filesystem was also a subvolume (with id 5), and could be snapshotted.
  20. I don't think that the Snapshot plugin GUI allows you to take snapshots of the root volume of the drive (the entire drive). At least that's what your screenshot shows, and I have never done it myself. Personally, I have replaced my share folders (every top-level folder on the drive) with subvolumes (which look like folders but can be snapshotted), and then I take snapshots of each share on different schedules. It looks like what you have done is create a subvolume called "snapshots" and then snapshotted that empty subvolume a bunch of times, saving the snapshots into the subvolume. Since the subvolume called snapshots is empty, each snapshot of it will be empty as well (note that snapshots do not recurse through subvolumes, so the snapshots in the base subvolume will not appear in subsequent snapshots of the base subvolume).
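The folder-to-subvolume replacement mentioned above can be done roughly like this. The share name is hypothetical and the commands need root, so again this is just a sketch; stop anything writing to the share first:

```
# Turn an existing top-level folder into a subvolume of the same name.
mv /mnt/disk1/media /mnt/disk1/media_old
btrfs subvolume create /mnt/disk1/media

# --reflink=always makes this a cheap CoW copy since both paths
# are on the same btrfs filesystem.
cp -a --reflink=always /mnt/disk1/media_old/. /mnt/disk1/media/
rm -rf /mnt/disk1/media_old

# Now the "folder" can be snapshotted on a schedule:
btrfs subvolume snapshot -r /mnt/disk1/media /mnt/disk1/snaps/media_2024-01-01
```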
  21. I am confused as to what you are doing / trying to do here. From the Snapshot plugin it looks like you have a subvolume called "snapshots" on /mnt/cache, and you have several snapshots of that subvolume, which are located within that subvolume at /mnt/cache/snapshots/2023-*
  22. It only applies to "incremental sends". I believe its purpose is to define the "Master Snapshot" (not a btrfs term) against which the snapshot will be compared to do an incremental send. This is a feature of the plugin rather than something to do with the underlying fs. P.S. If you expand the help text (either globally or by clicking on labels like Master Snap) you will find some useful information.
  23. Snapshots are not "based on a previous snap"; they are a Copy-on-Write copy of a subvolume. For the purpose of restoration there are no dependencies between them (all of the data-sharing stuff is handled by the CoW nature of the FS). You can delete any of them without affecting the others. The only time the relationship of one snapshot to another really matters is when sending them between filesystems using btrfs send. With send, if the snapshot to be transferred has an ancestor snapshot at both the source and destination, then the amount of data to transfer is reduced (highly simplified explanation). There is not really a simple gui way to handle rolling back. Snapshots appear as just folders on the filesystem. The simplest way of restoring is to delete the live file or folder and then copy it from a snapshot directory back into place. If you are restoring entire subvolumes (the whole snapshot), there are fancier ways of doing it involving deleting the subvolume and then creating a writable snapshot of the snapshot you want to restore, but copying is the easiest to understand. Since snapshotting only involves data disks and not the OS, there is no need to bring the server down when restoring something. At most you might have to stop some VMs or Docker containers that are using data from the subvolume to be restored.
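Both the incremental send and the subvolume-rollback trick described above look roughly like this on the command line. Paths and snapshot names are hypothetical, and the commands need root, so this is a sketch of the shape rather than a recipe:

```
# Incremental send: snap1 already exists on both sides, so sending
# snap2 with -p (parent) transfers only the differences.
btrfs send -p /mnt/disk1/snaps/snap1 /mnt/disk1/snaps/snap2 | \
    btrfs receive /mnt/backup/snaps/

# Rolling back a whole subvolume: delete the live subvolume, then
# create a writable snapshot of the read-only snapshot to restore.
btrfs subvolume delete /mnt/disk1/media
btrfs subvolume snapshot /mnt/disk1/snaps/media_2024-01-01 /mnt/disk1/media
```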