ElectricBadger

Members
  • Posts: 98

  1. Thanks — I had forgotten about Previous Apps. That would probably have saved a lot of time 🤦‍♂️. For some reason I didn't get the Reinstall from Previous option. I don't have a different version of the GitLab container in Previous Apps, so I'm guessing I did click on the same one that I was using previously, but it's probably too late to find out what happened there… I'm not even sure why that environment variable was added via Extra Parameters (I suspect I was following a set of install instructions that were incorrect), but I'll try adding it in the correct way when I have a bit more time. Not having to deal with the escaping is reason enough to make the switch 🙂
  2. After reinstalling the container, it seemed to zero the databases and start afresh, even though it was pointed at the same appdata locations. It's a good job I had a backup, even though it took two tries to restore the user database and I still don't have the projects restored! Mismatching quotes in an environment variable absolutely should not cause this level of data loss.
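     For the record, the restore itself is roughly the following. This is a sketch only, assuming the omnibus image; the container name GitLab-CE and the backup timestamp are placeholders for whatever your setup uses:

     ```bash
     # Stop the services that talk to the database, restore from a
     # named backup archive, then bring everything back up.
     docker exec -it GitLab-CE gitlab-ctl stop puma
     docker exec -it GitLab-CE gitlab-ctl stop sidekiq
     docker exec -it GitLab-CE gitlab-backup restore BACKUP=<timestamp>
     docker exec -it GitLab-CE gitlab-ctl restart
     docker exec -it GitLab-CE gitlab-rake gitlab:check SANITIZE=true
     ```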
  3. If an error is made in the Extra Parameters setting when configuring a Docker container, the container is not added to the list of containers. If an error is made when editing this setting for an existing container, the container is removed from the list of installed containers and the user configuration is lost. There is no warning or confirmation before this happens.

     To reproduce:

     1. Install the GitLab CE container and start it.
     2. Edit the container settings.
     3. Add the following to Extra Parameters:

     ```
     --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n";prometheus_monitoring['enable']=false;"
     ```

     Note that the code pasted from https://docs.gitlab.com/ee/administration/integration/plantuml.html uses double quotes, which should have been escaped, as it's already inside double quotes. An easy error to miss, but one with a hefty penalty, as it causes the container creation to fail. This causes the container to vanish; when reinstalled, any existing configuration changes are lost and the container has its default settings once more.

     Expected behaviour: the container remains installed, in a stopped state, with its existing config. The user is free to edit that config until they come up with one that works.

     I'm marking this as Urgent as it is, technically, a data-loss bug: while I didn't lose any data from the container's volumes, I did have to recreate all the config options. If I didn't have a copy of that Extra Parameters value, a sanitised version of which is pasted below, it would have taken hours to get the container up and running again.

     ```
     --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';registry_external_url 'https://registry.example.com';gitlab_rails['usage_ping_enabled']=false;gitlab_rails['lfs_enabled']=true;gitlab_rails['gitlab_ssh_host']='git.example.com';gitlab_rails['gitlab_email_from']='gitlab@example.com';gitlab_rails['smtp_enable']=true;gitlab_rails['smtp_address']='mail.example.com';gitlab_rails['smtp_port']=25;gitlab_rails['smtp_authentication']=false;gitlab_rails['smtp_tls']=false;gitlab_rails['smtp_openssl_verify_mode']='none';gitlab_rails['smtp_enable_starttls_auto']=false;gitlab_rails['smtp_ssl']=false;gitlab_rails['smtp_force_ssl']=false;gitlab_rails['manage_backup_path']=false;gitlab_rails['backup_path']='/backups';gitlab_rails['backup_archive_permissions']=0644;gitlab_rails['backup_pg_schema']='public';gitlab_rails['backup_keep_time']=604800;nginx['listen_port']=9080;nginx['listen_https']=false;nginx['hsts_max_age']=0;nginx['client_max_body_size']='0';nginx['custom_gitlab_server_config']='location /-/plantuml/ {\nproxy_cache off;proxy_pass http://plantuml:14480/;\n}\n';registry_nginx['listen_port']=9381;registry_nginx['listen_https']=false;registry_nginx['client_max_body_size']='0';registry_nginx['enable']=true;registry['enable']=true;gitlab_rails['pipeline_schedule_worker_cron'] = '7,37 * * * *';prometheus_monitoring['enable']=false;postgresql['auto_restart_on_version_change']=false"
     ```

     (attached: celestia-diagnostics-20220801-0905.zip)
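     For comparison, a sketch of the same PlantUML snippet with the inner quoting fixed by switching the Ruby string to single quotes, as in the working config above. This is one way to avoid the clash with the outer double quotes, not necessarily the only one (the 8080 port is just the docs' example):

     ```bash
     # Sketch: inner double quotes replaced with single quotes so the
     # outer shell quoting survives intact.
     --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';nginx['custom_gitlab_server_config'] = 'location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n';prometheus_monitoring['enable']=false;"
     ```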
  4. If I stop the array and then restart it, all Docker containers that have autostart set to ON are restarted. The same does not apply to VMs: they only autostart after a full reboot, and must be restarted manually in this case. This is really annoying, as I usually forget to do this and don't notice until I try to use something that's running on a VM and find it's not working. Would it be possible for autostart to be consistent between Docker and VMs, and for both to autostart things after an array stop/start? (attached: celestia-diagnostics-20220219-1402.zip)
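     In the meantime, a workaround sketch; it assumes something like the User Scripts plugin to run it after the array comes up, and that starting the VMs via virsh is acceptable:

     ```bash
     #!/bin/bash
     # Start every libvirt VM that is flagged for autostart but is
     # currently shut off. Assumes the libvirt service is running.
     virsh list --all --name | while IFS= read -r vm; do
         [ -n "$vm" ] || continue
         if virsh dominfo "$vm" | grep -q 'Autostart:.*enable' &&
            [ "$(virsh domstate "$vm")" = "shut off" ]; then
             virsh start "$vm"
         fi
     done
     ```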
  5. Yes, the plugin autoupdates, so it's current. Looks like Firefox doesn't support the `list` attribute for `input type='color'`, so not a lot you can do here. I'll use Vivaldi for when I need to edit colours. Thanks!
  6. Everything I'm reading suggests that I should get a dropdown when I click on a custom colour control, which includes an option to reset the colour to its default. However, in Firefox on macOS, I just get the standard system colour picker. The posts I read were from a while ago — has the behaviour changed since then? There no longer seems to be a way to reset a colour to default without resetting them all to default…
  7. I am using binhex-delugevpn, and occasionally the endpoint goes down, requiring the container to be restarted. If, while the container is restarting, I click on its icon and choose Logs, the logs open, but the container is left stopped. Once this happens, attempts to stop the container produce "Execution error — container already started", even though `docker ps` shows that it is not. Turning the entire Docker service off and on again does not clear this and allow Unraid to get back in sync — only a full reboot (which takes about 5–10 minutes with a Dell R720) will clear things.

     Is there a way to force the frontend to throw away what it thinks the Docker state is, and set it based on `docker ps`? And are there any plans to allow the frontend to multitask properly, so that users can restart a container and then bring its logs up without having to wait for the restart to complete? (This also occurs to an extent when starting multiple containers — if you click Start on one, then click Start on a second before the first has finished starting, it gets confused, though not usually to the point of requiring a full reboot…)

     EDIT: Actually, in this particular case, it looks like the container wasn't starting anyway, because I'd messed up a config file, so that may not have helped. But I have had Unraid get out of sync before when doing this. It's possible that restarting the whole Docker service might have avoided the reboot if I hadn't messed up the config files, though!
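     For reference, a sketch of asking the Docker daemon directly what it believes, bypassing whatever the web UI has cached. The container name is just the example from above, and whether the UI then resyncs is not guaranteed:

     ```bash
     # Report the container's actual state as Docker sees it:
     docker inspect --format '{{.State.Status}}' binhex-delugevpn

     # If it reports "exited", starting it from the CLI and then
     # refreshing the web UI is worth trying before a full reboot:
     docker start binhex-delugevpn
     ```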
  8. After some digging, I found that the problem is due to Deluge setting a cookie whose value is longer (1819 bytes) than noVNC can cope with, hence the 403 Request Entity Too Large. Quite why it needs that much to list which columns should be shown, and in what order, I don't know! I don't suppose this is that easy to fix — I assume the limit is in upstream code somewhere? As a workaround I've changed the Deluge Docker container to provide http://«servername»:8112 as its web interface, so that the browser sees it as being a separate webserver from http://«ip», which is what the VNC Remote button uses, but if any Docker container that's been accessed via the server IP and a port can break browser VNC, it's not good…
  9. I've cleared my browser cache, but still cannot connect via noVNC using Firefox. I can connect using a third-party VNC client, so the problem seems to be in noVNC. Does anybody have any idea what the problem could be? It's a pain to have to fire up TigerVNC and find the correct port to connect to when you just want to verify something…
  10. Is there a list of banned share names anywhere? I'd like to check mine are OK before I hit the update button 😁
  11. When I have symlinks in a share, and mount that share over CIFS to my Mac, the symlinks show as copies of the file, rather than symlinks. For example, if I have a share at /mnt/user/test mounted on the Mac at /Volumes/test, and the share contains a file called "test", then I see this:

      ```
      $ ls -l /Volumes/test
      -rwx------ 1 user staff 10 4 Feb 10:00 test
      $ ssh unraid
      # cd /mnt/user/test
      # ln -s ./test othertest
      # ls -l
      -rwxrwxrwx 1 root root 1024 4 Feb 10:00 test
      lrwxrwxrwx 1 root root 21 4 Feb 10:01 othertest -> ./test
      # exit
      $ ls -l /Volumes/test
      -rwx------ 1 user staff 1024 4 Feb 10:00 test
      -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
      ```

      The file "othertest" appears on the Mac as a plain file which is an exact copy of "test". Changes made to one show in the other. If I create a symlink on the Mac, however, it shows up properly:

      ```
      $ cd /Volumes/test
      $ ln -s ./test yetanothertest
      $ ls -l
      -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
      -rwx------ 1 user staff 1024 4 Feb 10:00 test
      lrwx------ 1 user staff 21 4 Feb 10:00 yetanothertest -> ./test
      ```

      Unfortunately, I need to do stuff on the Mac side where the symlinks actually appear as symlinks. Is there a way I can configure it so symlinks work as expected, or do I have to deal with the horrors of NFS for this?
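      If NFS does turn out to be the only route, a sketch of what that looks like. It assumes NFS export is enabled for the share in Unraid's settings and that "unraid" resolves; macOS usually needs the resvport option:

      ```bash
      # Mount the share over NFS, where symlinks are reported to the
      # client as symlinks rather than being resolved server-side.
      mkdir -p ~/nfs-test
      sudo mount -t nfs -o resvport unraid:/mnt/user/test ~/nfs-test
      ls -l ~/nfs-test    # othertest should now appear as a symlink
      ```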
  12. Please could we have the RSS feed? It shouldn't be too difficult to implement, and it makes life much easier for those of us who do use RSS. I get far too much email as it is, and I don't want to know about new videos and popular forum posts — just blog articles. IMHO, without an RSS feed, it shouldn't even be called a "blog".
  13. When I run the Docker container, the log says:

      ```
      XML template file for the vm was placed in Unraid system files. This file assumes your vm path is /mnt/user/domains if it isnt you will need to manually edit the template changing the locations accordingly
      ```

      Would it be possible for it to output the path to that XML file at that point, instead of just saying "Unraid system files"? I've looked all through /boot and I can't find it — and I need to change /mnt/user/domains to /mnt/disks/vms…

      EDIT: d'oh — I can just edit the XML after restarting the server and before starting the VM, of course. Might be more helpful if the message mentioned that, for the decaffeinated 😁
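      Once the file is found, a sketch of the edit itself; /path/to/template.xml is hypothetical, since the log doesn't say where the file actually lands:

      ```bash
      # Rewrite the assumed VM path to the real one in the template.
      # /path/to/template.xml is a placeholder for wherever the
      # container actually wrote the file.
      sed -i 's|/mnt/user/domains|/mnt/disks/vms|g' /path/to/template.xml
      ```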
  14. Do we need to change anything in our unRAID configs in order to continue getting updates to this container, if it's being renamed? Or will the rename get picked up automatically?