primeval_god

Members
  • Posts: 487
  • Days Won: 1

primeval_god last won the day on October 4, 2021

primeval_god had the most liked content!


primeval_god's Achievements
  • Enthusiast (6/14)
  • 104 Reputation

Community Answers

  1. I have always been a big advocate of this stance, and of doing as much in docker as possible. That said, these plugins do provide a few essentials that I would argue should be part of the base system but are not (iotop, screen, tmux, powertop). They also give us the ability to update some minor things out of band from the base OS releases.
  2. I think I may see the issue. Rather than specifying container-name:my_port, just specify container-name to the application; it looks like it will add the default port number on its own. When working with containers that communicate on an internal network there is no need to use port mappings (the -p option on the CLI or the port field in dockerman templates). When communicating over the same custom docker network, containers have access to all of each other's ports. Port mappings are just for making ports accessible externally on the docker host.
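     A minimal sketch of the idea on the CLI (the container names, image names, and the DB_HOST setting here are all hypothetical):

         # user-defined bridge network; container-name DNS only works on these
         docker network create mynet
         # no -p mappings anywhere: the app reaches the database at hostname "db";
         # DB_HOST is an assumed app setting, and with no port given the app
         # falls back to its default
         docker run -d --name db --network mynet postgres:16
         docker run -d --name app --network mynet -e DB_HOST=db my-app:latest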
  3. I am not certain I understand the question. It sounds like you are trying to allow one container to talk to another using the container name as a hostname. If so, you will need to have both containers on the same custom bridge network; the default bridge network will not work, as it does not support using container names as internal hostnames. If they are on the same custom bridge network, then specifying container-name:port in the settings of one container's application should allow you to reach the other (no need to make a port mapping in this case either, so long as the port is only for internal communication).
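     For reference, a quick way to set this up and test it from the CLI (container names are placeholders):

         # the default bridge has no name-based DNS; attach both containers
         # to a user-defined bridge instead
         docker network create mybridge
         docker network connect mybridge container-a
         docker network connect mybridge container-b
         # container-a can now reach container-b by name, e.g. container-b:8080,
         # without any -p mapping for this internal traffic; a quick check
         # (if getent is available in the image):
         docker exec container-a getent hosts container-b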
  4. To an extent you do. What you don't have is the ability to use parity data to recover from a file corruption found by BTRFS, or to use filesystem checksums to determine where a parity invalidation is located (par1, par2, or a data drive). Personally I have no more interest in SnapRAID than I do in ZFS or a pure BTRFS solution. I am very happy with unRAID (the disk pooling system, not just the OS, though I am also happy with the OS) and would not trade any of its great features, like realtime parity and independent disk file systems, for any other solution. That said, bitrot resistance (whether it addresses a realistic problem or not) would be the cherry on top, and the underlying data to make it happen is already there, just waiting for someone more clever than myself to figure out how to reach across the layers of the storage stack and bring it together.
  5. Combining BTRFS integrity checking with the unRAID parity system is a feature I have long wished for, but I am well aware of how dauntingly complex the implementation would be. That said, seeing this brought up again, and your comment specifically, made me wonder how much of a hurdle that first part really is. After some digging I think there may be ioctls for that functionality: GETFSMAP for XFS and maybe BTRFS_IOC_LOGICAL_INO for BTRFS. It's only the first of many hurdles, but I have spent enough time down this particular rabbit hole for today.
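     For anyone who wants to poke at the same data without writing ioctl code, both interfaces already have CLI frontends (the mount points and the logical address below are just placeholders):

         # GETFSMAP: dump the physical-extent-to-owner map of an XFS filesystem
         xfs_io -c 'fsmap -v' /mnt/disk1
         # BTRFS_IOC_LOGICAL_INO (via the logical-resolve subcommand): map a
         # logical byte address back to the file(s) that reference it
         btrfs inspect-internal logical-resolve 4931584 /mnt/cache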
  6. Despite what this article implies, the debate about what bitrot is and how much of a threat it poses is by no means settled. If you search this forum you will find well-informed arguments on both sides. For my money I will take an unRAID array over ZFS any day of the week. In the end, though, the most important point to make is that RAID is not backup, checksums are not backup, and snapshotting is not backup (well, technically it is a kind of backup, but that is part of a much larger discussion); backup requires having a copy of the data on independent media (i.e. a copy on something/somewhere else that is likely to survive destruction of the original).
  7. Interesting, so this isn't going to play nice with the Compose Manager plugin without some modification. The problem is likely that Compose Manager assumes all .yml files in its directories are compose files. I would suggest the following changes. Create a folder for diaspora under appdata and place database.yml and diaspora.yml there. Create a new stack with Compose Manager and copy the contents of the docker-compose.yml into the compose file. Change the ./diaspora.yml and ./database.yml references to the paths of those files in appdata. Create subfolders in the appdata folder for each of the volumes listed in the compose file. Remove the volumes section at the bottom of the compose file and replace all volume references in the file with the associated appdata path.
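     A sketch of what the reworked stack might look like (the appdata location is the usual unRAID default, and the container-side paths are illustrative; adjust to match the actual diaspora compose file):

         /mnt/user/appdata/diaspora/
             diaspora.yml
             database.yml
             assets/          # one subfolder per named volume from the compose file

         # in the compose file, a reference like
         #     - ./diaspora.yml:/diaspora/config/diaspora.yml
         # becomes
         #     - /mnt/user/appdata/diaspora/diaspora.yml:/diaspora/config/diaspora.yml
         # and the top-level volumes: section at the bottom is removed entirely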
  8. In 6.10.1+ the dockerman icon caching mechanism does not work correctly for icons specified by the net.unraid.docker.icon container label on containers without a dockerman template. For containers created with this label outside of dockerman, the caching mechanism initially downloads and displays the correct icon. If the icon URL is later changed, however, the new icon is never downloaded, because the old icon remains cached and there is no mechanism for invalidating it in the absence of a dockerman template. This issue was discovered by others as well, including users of the compose plugin. Since I have not seen this raised in the Bug Reports section yet, I am posting it as a bug report, along with my proposed fix here: https://github.com/limetech/webgui/pull/1146.
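     For reference, the label in question looks like this when set outside of dockerman (the container name, image, and URL are placeholders):

         docker run -d --name myapp \
           --label net.unraid.docker.icon=https://example.com/icons/myapp.png \
           some/image:latest
         # dockerman downloads and caches the icon on first sight; change the URL
         # later and the stale cached copy is what keeps being displayed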
  9. The webui portion of this plugin is designed only to show compose stacks created and launched via the webui. I think your issue specifically is that you are launching the stack differently than the plugin does. The plugin uses something like docker compose -f docker-compose.yml [-f <possibly other compose files>] -p <projectname> up, where the project name is a sanitized version of the stack name assigned in the webui. The project name is what the plugin uses to match against and display the status of stacks.
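      So a stack launched by hand should only show up if it is started under the same project name, e.g. (the stack name and path here are hypothetical):

          docker compose -f /path/to/mystack/docker-compose.yml -p mystack up -d
          # 'docker compose ls' lists running stacks by that same project name
          docker compose ls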
  10. @hasown @NAS A new version is available with an updated version of compose.
  11. Reason: I would prefer to restrict access to the unRAID webUI to only one of my Ethernet adapters, and to use ports 80 and 443 on the other adapters for hosting docker applications. Previously I was using the hidden BIND_MGT setting to restrict the webUI to eth0. However, it appears that functionality has been removed in 6.10, per the latest in the discussion here. I would like to request a new mechanism, preferably not hidden this time, to support this use case.
  12. So just to clarify: are you saying this is a bug, or that the ability to bind the webui to a single IP address with this mechanism is no longer a feature?
  13. I love the idea of having a locked release announcement thread! I would like to see this paradigm extended to the threads in the Announcements category of the forum as well, though maybe with a link to a discussion thread in that case (see here). This makes it much easier to get notifications when something is announced or updated rather than whenever someone posts a comment about the announcement.
  14. Yeah, it's been a while since I updated the opening post (not sure many people read it anyway). If there are any security issues or important bug fixes in the packages I include, feel free to bring them to my attention and I will try to patch them ASAP. Otherwise the answer is probably sporadically, or whenever I remember to check my dependencies. For this last release I forgot; I will try to get an update out soon.