primeval_god

Community Developer
  • Posts: 836
  • Days Won: 2
Everything posted by primeval_god

  1. To an extent you do. What you don't have is the ability to use parity data to recover a file corruption found by BTRFS, or to use filesystem checksums to determine where a parity invalidation is located (par1, par2, or a data drive). Personally I have no more interest in SnapRAID than I do in ZFS or a pure BTRFS solution. I am very happy with unRAID (the disk pooling system, not just the OS, though I am happy with the OS too) and would not trade any of its great features, like realtime parity and independent disk filesystems, for any other solution. That said, bitrot resistance (whether it addresses a realistic problem or not) would be the cherry on top, and the underlying data to make it happen is already there, just waiting for someone more clever than myself to figure out how to reach across the layers of the storage stack and bring it together.
  2. Combining BTRFS integrity checking with the unRAID parity system is a feature I have long wished for, but I am well aware of how dauntingly complex the implementation would be. That said, seeing this brought up again, and your comment specifically, made me wonder how much of a hurdle that first part really is. After some digging I think there may be ioctls for that functionality: GETFSMAP for XFS and maybe BTRFS_IOC_LOGICAL_INO for BTRFS. It's only the first of many hurdles, but I have spent enough time down this particular rabbit hole for today.
  3. Despite what this article implies, the debate about what bitrot is and how much of a threat it poses is by no means settled. If you search this forum you will find well-informed arguments on either side. For my money I will take an unRAID array over ZFS any day of the week. In the end, though, the most important point to make is that RAID is not backup, checksums are not backup, and snapshotting is not backup (well, technically it is a kind of backup, but that is part of a much larger discussion). Backup requires having a copy of the data on independent media, i.e. a copy on something/somewhere else that is likely to survive destruction of the original.
  4. Interesting, so this isn't going to play nice with the Compose Manager plugin without some modification. The problem is likely that Compose Manager assumes all .yml files in its directories are compose files. I would suggest the following changes: create a folder for diaspora under appdata and place database.yml and diaspora.yml there. Create a new stack with Compose Manager and copy the contents of docker-compose.yml into the compose file. Replace the ./diaspora.yml and ./database.yml references with the paths to those files in appdata. Create subfolders in the appdata folder for each of the volumes listed in the compose file. Remove the volumes section at the bottom of the compose file and replace all volume references in the file with the associated appdata path.
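     A sketch of what those edits might look like, assuming an appdata path of /mnt/user/appdata/diaspora (the service name and container-side paths here are illustrative, not taken from the actual diaspora compose file):

     ```yaml
     services:
       diaspora:
         volumes:
           # was ./diaspora.yml and ./database.yml; now the copies stored in appdata
           - /mnt/user/appdata/diaspora/diaspora.yml:/diaspora/config/diaspora.yml
           - /mnt/user/appdata/diaspora/database.yml:/diaspora/config/database.yml
           # was a named volume reference like "diaspora-data:/diaspora/data"
           - /mnt/user/appdata/diaspora/data:/diaspora/data

     # the top-level "volumes:" section that declared the named volumes is removed
     ```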
  5. In 6.10.1+ the dockerman icon caching mechanism does not work correctly for icons specified by the net.unraid.docker.icon container label on containers without a dockerman template. For containers created with this label outside of dockerman, the caching mechanism initially downloads and displays the correct icon. If the icon URL is later changed, however, the new icon is never downloaded, because the old icon remains cached and there is no mechanism for invalidating it in the absence of a dockerman template. This issue was discovered by others as well, including users of the compose plugin. Since I have not seen this raised in the Bug Reports section yet, I am posting it as a bug report in addition to my proposed fix here: https://github.com/limetech/webgui/pull/1146.
  6. The webui portion of this plugin is designed only to show compose stacks created and launched via the webui. I think your issue specifically is that you are launching the stack differently than the plugin does. The plugin uses something like docker compose -f docker-compose.yml -f <possibly other compose files> -p <projectname> up, where the project name is a sanitized version of the stack name assigned in the webui. The project name is what the plugin matches against to display the status of stacks.
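     As an illustration of that matching (the stack and file names here are made up, and the exact flags the plugin passes may differ):

     ```shell
     # launch a stack under an explicit project name, roughly as the plugin does
     docker compose -f docker-compose.yml -p mystack up -d

     # containers started this way carry the project name as a label,
     # so the stack can later be queried by that name
     docker compose -p mystack ps
     docker compose ls
     ```

     A stack launched without the matching -p value ends up under a different project name, which is why the plugin's webui cannot see it.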
  7. @hasown @NAS A new version is available with an updated version of compose.
  8. Reason: I would prefer to restrict access to the unRAID webUI to only one of my Ethernet adapters and use ports 80 and 443 on the other adapters for hosting docker applications. Previously I was using the hidden BIND_MGT setting to restrict the webUI to eth0. However, it appears that functionality has been removed in 6.10, per the latest in the discussion here. I would like to request a new mechanism, preferably not hidden this time, to support this use case.
  9. So just to clarify, are you saying this is a bug, or that the ability to bind the webUI to a single IP address with this mechanism is no longer a feature?
  10. I love the idea of having a locked release announcement thread! I would like to see this paradigm extended to the threads in the Announcements category of the forum as well, though maybe with a link to a discussion thread in that case (see here). This makes it much easier to get notified when something is announced or updated, rather than whenever someone posts a comment about the announcement.
  11. Yeah, it's been a while since I updated the opening post (not sure many people read it anyway). If there are any security issues or important bug fixes in the packages I include, feel free to bring them to my attention and I will try to patch them ASAP. Otherwise the answer is probably sporadically, or whenever I remember to check my dependencies. For this last release I forgot; I will try to get an update out soon.
  12. A new version of the plugin is available. This version allows choosing alternate project directories for storing more complex docker stacks. This version also implements functionality for managing webui and icon labels for unRAID webui integration.
  13. What directory is it writing the tmp files to? I am not sure it is the correct way to fix it, but you can mount a tmpfs directory into your container so that the files are not written into the docker filesystem. In the Extra Parameters section you can add something like --mount type=tmpfs,destination=/tmp
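      For example, outside of the unRAID template it would look like this (the container and image names are hypothetical; the tmpfs-size option is optional but keeps a runaway app from filling RAM):

      ```shell
      # /tmp inside the container now lives in RAM, not in the writeable image layer
      docker run -d --name myapp \
        --mount type=tmpfs,destination=/tmp,tmpfs-size=256m \
        myimage:latest
      ```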
  14. With that you should be able to identify which container is growing in size. From there it's a matter of determining why the application in the container is writing data to a directory inside the container rather than to a bind-mounted host directory.
  15. Run the following command in your terminal: docker ps --size It should list all of your containers along with their sizes (both the read-only image size and the writeable layer size). See https://docs.docker.com/engine/reference/commandline/ps/ for specifics.
  16. I don't really know all that much about macvlan networks, but this sounds like the same issue @Kilrah has.
  17. This is normal. The unRAID webui will not properly show the update status of containers created via compose; for any container created with compose, the update status displayed in the webui is meaningless. The compose plugin runs compose up on startup and compose stop on shutdown. What do the network configurations in your compose files look like?
  18. Indeed, the compose file you attached uses named volumes to store data instead of bind mounts (host directories). You will want to remove the 'volumes' section at the bottom and then replace each volume name in the rest of the file with a host path. This

      ```yaml
      volumes:
        - mosquitto-conf:/mosquitto/config
        - mosquitto-data:/mosquitto/data
      ```

      becomes something like

      ```yaml
      volumes:
        - /mnt/user/appdata/mosquitto/config:/mosquitto/config
        - /mnt/cache/appdata/mosquitto/data:/mosquitto/data
      ```
  19. See here: https://docs.docker.com/storage/bind-mounts/#use-a-bind-mount-with-compose

      ```yaml
      volumes:
        - type: bind
          source: /mnt/user/appdata/myapp
          target: /folder/in/container
      ```
  20. If the containers are on the same custom bridge network (not the default bridge) then you should be able to use the container name as the database host. Custom bridge networks have an internal DNS mapping for each container so containers can reference each other by name.
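      A minimal compose sketch of this (the image names and the DB_HOST variable are hypothetical, just to show the name-based lookup):

      ```yaml
      services:
        app:
          image: myapp:latest
          environment:
            DB_HOST: db        # resolved by the custom network's built-in DNS
          networks: [backend]
        db:
          image: mariadb:10.6
          networks: [backend]

      networks:
        backend:               # a user-defined bridge, not the default "bridge"
          driver: bridge
      ```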
  21. Many containers available in Community Applications (and many found in the wild) have environment variables like PUID and PGID that allow you to specify the user and group that the app in the container runs under. Match those to nobody:users on the unRAID host. It will not survive a reboot; you would have to use a user script or a go file modification to make that change on every boot. unRAID is not really designed with security at the forefront. Its inclusion of docker is aimed more at providing simple-to-install apps than at hosting internet-facing services. If security of your hosted apps is your primary concern, I would recommend running a VM with a more general-purpose OS and running your secure docker services on that.
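      As a sketch (the image name is hypothetical, and PUID/PGID only work if the image implements them, as the linuxserver.io images do; on a stock unRAID install nobody:users is uid 99, gid 100, which you can confirm with id nobody):

      ```shell
      # run the app as nobody:users on the unRAID host,
      # with access only to its own appdata folder
      docker run -d --name myapp \
        -e PUID=99 -e PGID=100 \
        -v /mnt/user/appdata/myapp:/config \
        somerepo/someimage:latest
      ```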
  22. You should not install things directly on the unRAID host, and this is one of the reasons why. Use a docker container instead https://hub.docker.com/r/linuxserver/ffmpeg. Your script can launch the ffmpeg container just as easily as it could launch a native app.
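      For example, assuming the input file sits in a share like /mnt/user/media (the linuxserver/ffmpeg image uses ffmpeg itself as the entrypoint, so everything after the image name is passed straight to ffmpeg):

      ```shell
      # one-shot transcode using the containerized ffmpeg instead of a host install
      docker run --rm \
        -v /mnt/user/media:/workdir -w /workdir \
        lscr.io/linuxserver/ffmpeg \
        -i input.mkv -c:v libx264 -crf 20 output.mp4
      ```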
  23. Try docker compose instead of docker-compose.
  24. The point is that unRAID doesn't use file permissions the way general-purpose linux does. The users you create in unRAID only exist from the perspective of share-level network security. Under the hood, linux users and groups are not really used to control file access. Applications that run directly on unRAID run as root; applications in containers are expected to have their in-container permissions mapped to nobody:users on the unRAID host and to be given bind mounts only to the specific directories they need, as a means of controlling what they can access.
  25. Another possible piece of the puzzle: I am running netdata/netdata:v1.28.0