Ryonez


Posts posted by Ryonez

  1. I've been experiencing what appears to be this issue.

    I've been shifting almost all of my docker management over to Portainer. My best guess so far is that CA added fields back into the templates, which started screwing with the docker image and broke things somehow. I've tried pretty much everything: deleting the docker img, even replacing the cache drives and reformatting the OS drive. The only things kept that entire time were the config files.

    There are still some dockers to shift over, but most are on Portainer now, and things seem to be steady.
    This issue first appeared when updating some images; otherwise, I'd been running what I had for over a year.
     

  2. Second week in a row:

     

    Apr 25 02:10:37 Atlantis CA Backup/Restore: docker stop -t 60 atlantis-yoko
    Apr 25 02:10:37 Atlantis CA Backup/Restore: Backing up USB Flash drive config folder to 
    Apr 25 02:10:37 Atlantis CA Backup/Restore: Using command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/Backups/machine_specific_backups/atlantis_flash/" > /dev/null 2>&1
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Changing permissions on backup
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Backing up libvirt.img to /mnt/user/Backups/machine_specific_backups/atlantis_libvirt/
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Using Command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/Backups/machine_specific_backups/atlantis_libvirt/" > /dev/null 2>&1
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Changing permissions on backup
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]/CA_backup.tar'  --exclude "alteria-plex/Library/Application Support/Plex Media Server/Metadata" --exclude "0-pihole" --exclude "alteria-postgres" --exclude "alteria-hydrus-server"  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Backup Complete
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Verifying backup
    Apr 25 02:10:42 Atlantis CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]/CA_backup.tar' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
    Apr 25 02:10:42 Atlantis kernel: br-55ef392db062: port 1(veth391173c) entered blocking state
    Apr 25 02:11:43 Atlantis kernel: br-55ef392db062: port 70(vetha9a6916) entered forwarding state
    Apr 25 02:11:44 Atlantis CA Backup/Restore: #######################
    Apr 25 02:11:44 Atlantis CA Backup/Restore: appData Backup complete
    Apr 25 02:11:44 Atlantis CA Backup/Restore: #######################
    Apr 25 02:11:44 Atlantis CA Backup/Restore: Deleting /mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]
    Apr 25 02:11:45 Atlantis CA Backup/Restore: Backup / Restore Completed


    That's two weeks of missed backups; this is a big issue.

  3. Just had the plugin fail to back up.

    Looking at the logs, everything worked fine and the verification passed, but the backup file just does not exist.

     

    Apr 18 02:10:41 Atlantis CA Backup/Restore: docker stop -t 60 atlantis-yoko
    Apr 18 02:10:41 Atlantis CA Backup/Restore: Backing up USB Flash drive config folder to 
    Apr 18 02:10:41 Atlantis CA Backup/Restore: Using command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/Backups/machine_specific_backups/atlantis_flash/" > /dev/null 2>&1
    Apr 18 02:10:44 Atlantis CA Backup/Restore: Changing permissions on backup
    Apr 18 02:10:44 Atlantis CA Backup/Restore: Backing up libvirt.img to /mnt/user/Backups/machine_specific_backups/atlantis_libvirt/
    Apr 18 02:10:44 Atlantis CA Backup/Restore: Using Command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/Backups/machine_specific_backups/atlantis_libvirt/" > /dev/null 2>&1
    Apr 18 02:10:44 Atlantis CA Backup/Restore: Changing permissions on backup
    Apr 18 02:10:44 Atlantis CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]
    Apr 18 02:10:45 Atlantis CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]/CA_backup.tar'  --exclude "alteria-plex/Library/Application Support/Plex Media Server/Metadata" --exclude "0-pihole" --exclude "alteria-postgres" --exclude "alteria-hydrus-server"  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
    Apr 18 02:10:45 Atlantis CA Backup/Restore: Backup Complete
    Apr 18 02:10:45 Atlantis CA Backup/Restore: Verifying backup
    Apr 18 02:10:45 Atlantis CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/user/Backups/server_backups/atlantis_docker_appdata/[email protected]/CA_backup.tar' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
    Apr 18 02:10:45 Atlantis kernel: br-55ef392db062: port 1(veth2d7642e) entered blocking state
    Apr 18 02:11:46 Atlantis kernel: br-55ef392db062: port 70(veth7449b16) entered forwarding state
    Apr 18 02:11:46 Atlantis CA Backup/Restore: #######################
    Apr 18 02:11:46 Atlantis CA Backup/Restore: appData Backup complete
    Apr 18 02:11:46 Atlantis CA Backup/Restore: #######################
    Apr 18 02:11:46 Atlantis CA Backup/Restore: Backup / Restore Completed


    Looking at the folder the backup should be in, it's empty.
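
    For what it's worth, this is roughly how I'd sanity-check a run by hand (the date-stamped folder name is a placeholder; substitute the one from the log):

    BK='/mnt/user/Backups/server_backups/atlantis_docker_appdata/<date>/CA_backup.tar'
    # Confirm the archive exists and that tar can actually read it end to end.
    ls -lh "$BK" && tar -tf "$BK" > /dev/null && echo "archive present and readable"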


     

    Any idea what happened?

  4. I'm currently unable to fill a var from a command via scripts. Going back, I can see it stopped working correctly on 14/12/21 (NZT).
    The command is this:
     

    #!/bin/bash -x
    DBLIST="$(docker exec -t alteria-postgres psql -U postgres -d postgres -q -t -A -c 'SELECT datname from pg_database')"
    echo "${DBLIST}"


    I'm at a loss atm. I can't think of what would stop this, and the command can be run manually in the terminal, returning the list of databases correctly.
    The failure was very wonky as well. It saved all the databases fully, then I got zero-byte files, then some had data but not all, then the script started running multiple times before outright stopping.
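
    One thought worth testing (an assumption on my part, not a confirmed cause): `docker exec -t` allocates a pseudo-TTY, and a TTY rewrites each newline in the output as a carriage return plus newline. If an update changed how that output gets captured, stray `\r` characters could corrupt the variable even though the same command looks fine when run by hand. A minimal sketch to check:

    #!/bin/bash
    # Detect CRs introduced by the pseudo-TTY that `docker exec -t` allocates.
    DBLIST="$(docker exec -t alteria-postgres psql -U postgres -d postgres -q -t -A -c 'SELECT datname FROM pg_database')"
    if printf '%s' "${DBLIST}" | grep -q $'\r'; then
        echo "output contains carriage returns; try dropping -t"
    fi
    # Without -t no TTY is allocated, so the output comes through untouched:
    DBLIST="$(docker exec alteria-postgres psql -U postgres -d postgres -q -t -A -c 'SELECT datname FROM pg_database')"
    echo "${DBLIST}"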

    Overview of the base folder: [screenshot]

    Last Good Backup: [screenshot]

    When it started messing up: [screenshot]

    Would anyone have any ideas what on earth happened and how to get this working?

  5. 5 hours ago, Squid said:

    Adding the tag now.  Thanks.


    To be fair, this container can run without mysql. It's just that with the template forcibly yanking the `DB_HOST` field back in, it'll keep breaking the container. Honestly, that's something that needs to be exposed to users: a way to tell CA not to change/update the template(s).

    As it stands, the container is working fine with sqlite, showing mysql is not needed.

    While it has been an annoying experience for the user, seeing a requirement for something that's not actually needed would likely turn them away completely. If the `requires` tag is to be used, it should be used only for things that are actually required; otherwise it loses its purpose.

  6. 2 minutes ago, wambo said:

    Is there some kind of documentation/guide to look into how the Unraid templates work? (As in, is there a file I can read to understand, for example, that DB_HOST causes the search for a DB?)


    The detection doesn't have anything to do with the template. The template is basically a form someone has stuck together, telling unRaid where to pull the container from, what env vars the user is expected to provide, and some other information, like support links. I found out `DB_HOST` was what caused it to look for a mysql server by reading the files used to make the container image, finding the relevant check here.

    Here is where you can find information on how templates are made and used: https://wiki.unraid.net/DockerTemplateSchema
    Also, templates gotten from CA will replace removed template values, so `DB_HOST` will return at some point. I have this issue with templates exposing ports that just should not be exposed, and it drives me freaking crazy. This is something managed by the CA plugin.
    The way to "fix" this is to remove the `TemplateURL` value from the template. You won't get updates for the template anymore, but it will stop the template from reverting to values that can be undesired or even unsafe for your environment.

    1. You'll have to manually edit the template file on the USB, so open the unRaid console (or SSH in).
    2. Navigate to `/boot/config/plugins/dockerMan/templates-user`.
    3. Look for the xml file whose name matches the name of the template you want to edit (this is the "Name" field at the top of the update container page for the container). `nano <name>.xml` should work; Ctrl+X and confirm the save to close once the changes are made.
    4. Find the template URL field; it'll look like `<TemplateURL>https://raw.githubusercontent.com/selfhosters/unRAID-CA-templates/master/templates/openldap.xml</TemplateURL>`
    5. Change it to the empty `<TemplateURL></TemplateURL>`
    6. Save the file (or use the one-liner sketched below).
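
    If you're comfortable in the console, a minimal sketch of the same edit (the xml file name here is hypothetical; use your template's actual name):

    cd /boot/config/plugins/dockerMan/templates-user
    # Blank the TemplateURL element so CA stops pulling template changes back in.
    sed -i 's#<TemplateURL>[^<]*</TemplateURL>#<TemplateURL></TemplateURL>#' my-container.xml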


    It might help to think of the CA templates as just shortcuts. They are great for quickly setting things up, but they are bad about forcing values you might not want back onto you.

    And don't worry too much about my time, I have plenty to spare >.<

  7. 7 hours ago, wambo said:

    Weirdly it's still waiting for the MySQL service before it reads the config? (At least the json mistakes were brought up only later.)


    Right, hammered at this a little and got it kinda working.

    1. Delete the `config.json` file. It's set up incorrectly anyway (example config is here), but we'll have the container remake it.
    2. Remove `DB_HOST` from the template. This is what triggers the search for a mysql database.
    3. Add a new variable to the template. Call it something like "DB_URL". The key: "CMD_DB_URL". The value: "sqlite:///config/hedgedoc.sqlite". Don't include the quotes.
    4. Start the container and navigate to the exposed port. It might look kinda broken; that's because the config isn't fully set up yet. (A rough docker-run equivalent is sketched below.)
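
    For reference, here's roughly what steps 2 and 3 amount to as a plain docker run (a sketch only; the container name, appdata path, and image tag are assumptions, and on unRaid you'd make these changes through the template instead):

    docker run -d --name hedgedoc \
      -e CMD_DB_URL='sqlite:///config/hedgedoc.sqlite' \
      -v /mnt/user/appdata/hedgedoc:/config \
      linuxserver/hedgedoc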

    Google from this point should be able to help you fill out the config as needed, like why the page looks broken. I won't be too much help here; my set-up uses a database and a domain with a reverse proxy, and it's been a while since I've done it.

    Hope this helps.

  8. 16 minutes ago, JonathanM said:

    It's highly recommended to keep docker containers and their operating appdata on SSD based pools, for many reasons, including this.


    I should clarify: I am using SSD cache pools; I used "hdd" incorrectly in this instance. However, I'd still prefer not to have several database systems having a go at them. It feels much better having one system managing the reads/writes and queuing.

  9. 52 minutes ago, wambo said:

    Does the docker layering also count

    I'm not sure what you mean by docker layering, sorry. But there's no issue using the template from CA (Community Apps); it's what I started with.

     

     

    53 minutes ago, wambo said:

    I've also found the Support/Readmefirst link, https://github.com/linuxserver/docker-hedgedoc#readme but I really can't see any explicit mention that something is needed in addition, that I have to provide the database or such.


    Personally, I feel that's more for the people who make the application to tell you. LinuxServer typically makes containers for applications that might not have one already, or to make a container that's better than the official one that might be provided.

    LinuxServer have provided a link to the application configuration details in the Application Setup section of the doc, and state afterwards that the example they provide uses mysql. In HedgeDoc's configuration doc is a basics section which links to the different databases you can use here. One of the options is SQLite, which is a single-file database.

    Looking at the manual setup here, you could drop the need for a database container by popping this into your config file for the db:

    "db": {
          "dialect": "sqlite",
          "storage": "/config/db.hedgedoc.sqlite"
    },

    That should work with the CA template. You can probably ignore the "DB_" sections of the template; with the config using sqlite, it should ignore those values.

  10. 59 minutes ago, JonathanM said:

    For less experienced users, I recommend multiple database containers each with their own app connected. Since the docker engine shares image layers, the extra storage is extremely minimal, a few KB for the config files.

     

    The advantages of being able to blow away a single database without affecting the others while not needing to know database management commands is helpful.


    I'm not too worried about the storage, more about having several database systems potentially thrashing the HDDs. It might not be a big issue at the start, but it never hurts to learn and understand the systems you use and try to use them effectively. Plus, if you need to work on them, you can get at them from one point of entry.

    To be fair here, I'm 100% sure I don't do things effectively. There are so many different components; getting it all right sucks.

  11. 1 hour ago, wambo said:


    In the Docker Notes I saw that mariadb was used, and I wondered whether I might have to install that separately? (But since this is based on docker, why?)
    I also skimmed through a few pages of this thread; there was talk about a config file, but I have yet to find mention of that in any of the docs/info I found related to Unraid...



    It's considered bad practice to lump major components from outside of your project into your container, as far as I know. That's not to say there aren't cases where it's a good idea, for example building upon the base images, or maybe using the nginx image if your thing needs a web service to function. Databases are one of those things where it's better to host one database container, then have other containers talk to that, each using their own database inside it. For example, one database container can host the databases for hedgedoc, keycloak, kanboard, etc. You have one container managing them all.

    Next to the install button (for hedgedoc on the Community Apps page) is a support button. Clicking that shows a Read Me First link at the top. This links to what's used to generate the docs you linked above, it seems.

    Hedgedoc does require a db. I'm using postgres as my database container, which my hedgedoc talks to.

    I suggest spending some time learning about databases. Don't just make a root user and password and give that to hedgedoc. You'll want to use this for other things down the line, so make a user and database for each service to use, as sketched below.
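
    For example, a minimal sketch of setting up one service (using my postgres container's name from above; the hedgedoc user, database, and password are placeholders to change):

    # One dedicated role and database per service, owned by that role.
    docker exec -i alteria-postgres psql -U postgres \
      -c "CREATE USER hedgedoc WITH PASSWORD 'change-me';" \
      -c "CREATE DATABASE hedgedoc OWNER hedgedoc;"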

    I'd recommend postgres for the database, and pgadmin4 as a web tool you can use to work on the databases.

    If you are serving things to others (I consider it a thing to just do even if you're the only one using things), I really recommend securing things behind a reverse proxy with a valid https cert and a domain. I have my containers on a user-defined bridge network, and only expose ports where needed, like for the reverse proxy. You don't need to expose postgres's port (like 5432); exposing lets things on the LAN access that port, and containers on the same network don't need it. If you do remove exposed ports, be aware anything with a templateurl will add them back later, which is frankly frustrating as hell.
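    To illustrate (a sketch; the network name is made up and the postgres tag is just an example):

    # Containers on the same user-defined bridge reach each other by container
    # name, so postgres never needs -p 5432:5432 published to the LAN.
    docker network create alteria-net
    docker run -d --name alteria-postgres --network alteria-net \
      -e POSTGRES_PASSWORD='change-me' postgres:14
    # Only the reverse proxy container gets published ports, e.g. -p 443:443.
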
    I do take a more "I want to secure things as much as I can" approach. It's a lot of work, and not everyone will need it or be bothered by it. The docs and LinuxServer's Swag container would be a good place to start for the last little bit I dumped on ya: https://docs.linuxserver.io/general/swag

    Server administration is a handful. Good luck.

  12. @Frank Ruhl
    Please calm down with the caps; it's considered shouting and thus rude.

    Your reply here seems to be about two things. One is the timeframe for getting a replacement key manually. There is a system in place where you can replace a key yourself, but it has a one-year cooldown between uses. You've made no mention of whether or not you've even attempted this. If you have, there are suggestions above for using a trial key, which did work for me until my key was replaced by the unRaid team.
    The second thing seems to be some sort of issue? I recommend making a post in the general support forum. That is where you'd get support for problems; this is a feature request thread and isn't related to the problem.

    Good luck, I hope you are able to get help sorting it all out.

  13. 17 hours ago, Squid said:

    USB Flash Drive backup is now deprecated in favour of one of the features (automatic backup of the flash device) present within the Unraid.net plugin when running on Unraid 6.9+

     

    This feature will not actually get removed, because there are still use cases for it, but no coding improvements etc will ever happen to this feature.

     

    https://forums.unraid.net/topic/104018-my-servers-early-access-plugin/

     

     
    The USB Flash Drive backup lets us keep more control over our backups. Deprecating it in favor of a method that only backs up to the cloud is a mistake, in my opinion. Would you do the same with appdata if unRaid added an option to back up appdata to their servers? This isn't even accounting for the fact that the backups made by unRaid aren't currently encrypted.

    I ask you to please reconsider this stance, maintaining the ability to create controlled backups of flash is important.

  14. 28 minutes ago, x88dually said:

    Sorry, an error (403) occurred registering USB Flash GUID 0930-6544-377D-C42159580FD9

    The error is: GUID '0951-1666-CFA5-E331292FCD97' is blacklisted

     

     

    Brand new,   WTH  ??

     

    7 minutes ago, ChatNoir said:

    My bad, I misspoke, I meant what kind of usb flash drive ?


    Maybe this would be better as its own support thread; this one is a feature request thread.

  15.

    28 minutes ago, remotevisitor said:

    As a licensed version of Unraid has to be able to work without an internet connection the blacklist has to be built in to each release.

     

    The trial version specifically has a requirement that it has an internet connection so it can use a more dynamic blacklist mechanism.

     

    The trial version is deliberately set up the way it is to prevent people just continually getting a trial version and never actually buying a license.

    Hmm... Does the trial key have to be online all the time, or just for registration?

    And could some sort of "emergency key" system be set up? Have it work like a trial, except it requires an old key to be present/given and only lasts 7-15 days.
    Then the old key can be added to the "live" blacklist (preventing multiple emergency keys from being issued), while giving a licence holder the ability to get a server back up asap while they wait for a proper one to be issued. The keys should have their valid dates baked in, so it would work for an offline system too.

     

     

    8 minutes ago, itimpi said:

    I think you CAN use a trial key as long as you know all the drive assignments. If you delete the config/super.dat file off the flash drive and replace it with the default, you will no longer get the warning but will have to redo all your drive assignments to get back to where you were.


    It would work, if I wanted to manually go through and set everything back up the way it was. Drives, docker, plugins, vms... It's a bit much.

  16. Interesting, I wasn't aware the blacklist was baked into the system.

    I wonder if there's a way to still alleviate the issue I'm having without too much disruption to the system in place. Maybe allowing trial keys for existing unRaid configurations? I believe the system has to ask the unRaid servers for a key to be issued, so it shouldn't be too hard to do a check there. Is there a reason for the current prevention?
    I'll agree it's outside of a trial scenario, but it would allow us to bring the server back up for a while. I wouldn't even care if it was for a smaller time frame, as long as it'd hold until a replacement key is issued.

  17. 15 minutes ago, tjb_altf4 said:

    I've never had a key die, but if I did I would pop in the replacement, copy backup config and run in trial mode, which should net you up to 60 days for a replacement key to arrive which should be more than adequate.

    That's a brilliant idea, didn't even think of that!

    I tried, but it's throwing this: `It is not possible to use a Trial key with an existing Unraid OS installation.`

    So, can't do that.

     

  18. I've just had a USB key die on me for the second time in the space of a year. The registration system allows for automatic key replacement once a year; if you need more, you have to contact support.

    I personally am finding this frustrating right now. I've sent an email, and the form said it could take days. That's days my plex server is down. Days my automation system is down. Days my email server, the discord bot that serves several guilds, database systems, wiki and notekeeping services, git services, everything, are all down. And not because I'm waiting on hardware, but because I need to wait for another key for software I've paid for, software that has an automatic key replacement system. And sure, I'm not some big company, and this doesn't affect others too badly (bar the bot that's down), but this is important to me and a few others.

    Why is there such a limit? The old GUIDs are blacklisted, and I'm not seeing the benefit of the once-a-year key replacement.
