ElectricBadger

Members
  • Posts: 98
Everything posted by ElectricBadger

  1. Thanks — I had forgotten about Previous Apps. That would probably have saved a lot of time 🤦‍♂️ For some reason I didn't get the Reinstall from Previous option. I don't have a different version of the GitLab container in Previous Apps, so I'm guessing I did click on the same one that I was using previously, but it's probably too late to find out what happened there… I'm not even sure why that environment variable is added via Extra Parameters (I suspect I was following a set of install instructions that were incorrect) but I'll try adding it in the correct way when I have a bit more time. Not having to deal with the escaping is reason enough to make the switch 🙂
  2. After reinstalling the container, it seemed to zero the databases and start afresh, even though it was pointed at the same appdata locations. It's a good job I had a backup, even though it took two tries to restore the user database and I still don't have the projects restored! Mismatching quotes in an environment variable absolutely should not cause this level of data loss.
  3. If an error is made in the Extra Parameters setting when configuring a Docker container, the container is not added to the list of containers. If an error is made when editing this setting for an existing container, the container is removed from the list of installed containers and the user's configuration is lost. There is no warning or confirmation before this happens.
     To reproduce:
       1. Install the GitLab CE container and start it.
       2. Edit the container settings.
       3. Add the following to Extra Parameters: --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n";prometheus_monitoring['enable']=false;"
     Note that the code pasted from https://docs.gitlab.com/ee/administration/integration/plantuml.html uses double quotes, which should have been escaped, as the snippet is already inside double quotes. An easy error to miss, but one with a hefty penalty: it causes the container creation to fail, which makes the container vanish. When it is reinstalled, any existing configuration changes are lost and the container has its default settings once more.
     Expected behaviour: the container remains installed, in a stopped state, with its existing config. The user is free to edit that config until they come up with one that works.
     I'm marking this as Urgent as it is, technically, a data loss bug: while I didn't lose any data from the container's volumes, I did have to recreate all the config options. If I didn't have a copy of the Extra Parameters value (a sanitised version of which I've pasted below), it would have taken hours to get the container up and running again.
--env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';registry_external_url 'https://registry.example.com';gitlab_rails['usage_ping_enabled']=false;gitlab_rails['lfs_enabled']=true;gitlab_rails['gitlab_ssh_host']='git.example.com';gitlab_rails['gitlab_email_from']='gitlab@example.com';gitlab_rails['smtp_enable']=true;gitlab_rails['smtp_address']='mail.example.com';gitlab_rails['smtp_port']=25;gitlab_rails['smtp_authentication']=false;gitlab_rails['smtp_tls']=false;gitlab_rails['smtp_openssl_verify_mode']='none';gitlab_rails['smtp_enable_starttls_auto']=false;gitlab_rails['smtp_ssl']=false;gitlab_rails['smtp_force_ssl']=false;gitlab_rails['manage_backup_path']=false;gitlab_rails['backup_path']='/backups';gitlab_rails['backup_archive_permissions']=0644;gitlab_rails['backup_pg_schema']='public';gitlab_rails['backup_keep_time']=604800;nginx['listen_port']=9080;nginx['listen_https']=false;nginx['hsts_max_age']=0;nginx['client_max_body_size']='0';nginx['custom_gitlab_server_config']='location /-/plantuml/ {\nproxy_cache off;proxy_pass http://plantuml:14480/;\n}\n';registry_nginx['listen_port']=9381;registry_nginx['listen_https']=false;registry_nginx['client_max_body_size']='0';registry_nginx['enable']=true;registry['enable']=true;gitlab_rails['pipeline_schedule_worker_cron'] = '7,37 * * * *';prometheus_monitoring['enable']=false;postgresql['auto_restart_on_version_change']=false"
celestia-diagnostics-20220801-0905.zip
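For anyone hitting the same trap, here's a minimal sketch of the corrected quoting, simplified from the docs example (I've dropped the \n sequences for brevity, and I'm assuming the Extra Parameters field is passed through a shell, which the vanishing-container behaviour suggests). The echo is just to verify the value survives the shell:

```shell
# The whole value is already wrapped in double quotes, so any literal
# double quote inside it must be written as \" . Without the backslashes,
# the shell closes the string early and container creation fails.
GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';nginx['custom_gitlab_server_config'] = \"location /-/plantuml/ { proxy_cache off; proxy_pass http://plantuml:8080/; }\";prometheus_monitoring['enable']=false;"

# Print the value back to confirm the inner quotes survived intact:
echo "$GITLAB_OMNIBUS_CONFIG"
```

The same escaped form should be usable directly after --env in Extra Parameters, avoiding the single-quote/double-quote mismatch that triggered the bug above.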
  4. If I stop the array and then restart it, all Docker containers that have autostart set to ON are restarted. The same does not apply to VMs: they only autostart after a full reboot, and must be restarted manually in this case. This is really annoying, as I usually forget to do this and don't notice until I try to use something that's running on a VM and find it's not working. Would it be possible for autostart to be consistent between Docker and VMs, and for both to autostart things after an array stop/start? celestia-diagnostics-20220219-1402.zip
  5. Yes, the plugin autoupdates, so it's current. Looks like Firefox doesn't support the `list` attribute for `input type='color'`, so not a lot you can do here. I'll use Vivaldi for when I need to edit colours. Thanks!
  6. Everything I'm reading suggests that I should get a dropdown when I click on a custom colour control, which includes an option to reset the colour to its default. However, in Firefox on macOS, I just get the standard system colour picker. The posts I read were from a while ago — has the behaviour changed since then? There no longer seems to be a way to reset a colour to default without resetting them all to default…
  7. I am using binhex-delugevpn, and occasionally the endpoint goes down, requiring the container to be restarted. If, while the container is restarting, I click on its icon and choose Logs, the logs open, but the container is left stopped. Once this happens, attempts to stop the container produce "Execution error — container already started", even though `docker ps` shows that it is not. Turning the entire Docker service off and on again does not clear this and allow Unraid to get back in sync — only a full reboot (which takes about 5–10 minutes with a Dell R720) will clear things. Is there a way to force the frontend to throw away what it thinks the Docker state is, and set it based on `docker ps`? And are there any plans to allow the frontend to multitask properly, so that users can restart a container and then bring its logs up without having to wait for the restart to complete? (This also occurs to an extent when starting multiple containers — if you click Start on one, then click Start on a second before the first has finished starting, it gets confused, though not usually to the point of requiring a full reboot…) (EDIT: Actually, in this particular case, it looks like the container wasn't starting anyway, because I'd messed up a config file, so that may not have helped. But I have had Unraid get out of sync before when doing this. It's possible that restarting the whole Docker service might have avoided the reboot if I hadn't messed up the config files, though!)
  8. After some digging, I found that the problem is due to Deluge setting a cookie whose value is longer (1819 bytes) than noVNC can cope with, hence the 403 Request Entity Too Large. Quite why it needs that much to list which columns should be shown, and in what order, I don't know! I don't suppose this is that easy to fix — I assume the limit is in upstream code somewhere? As a workaround I've changed the Deluge Docker container to provide http://«servername»:8112 as its web interface, so that the browser sees it as being a separate webserver from http://«ip», which is what the VNC Remote button uses, but if any Docker container that's been accessed via the server IP and a port can break browser VNC, it's not good…
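To illustrate the scale of the problem, here's a small sketch that builds a Cookie header the same size as the one Deluge was sending (the cookie name `deluge_columns` is made up for illustration; I haven't checked what Deluge actually calls it):

```shell
# Browsers scope cookies by host, not host+port, so a cookie set by the
# Deluge web UI on http://server:8112 is also sent to noVNC on any other
# port of the same host. Simulate an 1819-byte cookie value:
COOKIE_VALUE=$(head -c 1819 /dev/zero | tr '\0' 'x')
HEADER="Cookie: deluge_columns=${COOKIE_VALUE}"

# Print the total header size, which lands well above the limit some
# embedded HTTP servers enforce on a single request header:
echo "${#HEADER}"
```

Serving Deluge under a hostname instead of the bare IP works precisely because it changes the cookie's host scope, so the browser no longer attaches the oversized cookie to noVNC requests.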
  9. I've cleared my browser cache, but still cannot connect via noVNC using Firefox. I can connect using a third-party VNC client, so the problem seems to be in noVNC. Does anybody have any idea what the problem could be? It's a pain to have to fire up TigerVNC and find the correct port to connect to when you just want to verify something…
  10. Is there a list of banned share names anywhere? I'd like to check mine are OK before I hit the update button 😁
  11. When I have symlinks in a share, and mount that share over CIFS on my Mac, the symlinks show as copies of the file, rather than as symlinks. For example, if I have a share at /mnt/user/test mounted on the Mac at /Volumes/test, and the share contains a file called "test", then I see this:
      $ ls -l /Volumes/test
      -rwx------ 1 user staff 10 4 Feb 10:00 test
      $ ssh unraid
      # cd /mnt/user/test
      # ln -s ./test othertest
      # ls -l
      -rwxrwxrwx 1 root root 1024 4 Feb 10:00 test
      lrwxrwxrwx 1 root root 21 4 Feb 10:01 othertest -> ./test
      # exit
      $ ls -l /Volumes/test
      -rwx------ 1 user staff 1024 4 Feb 10:00 test
      -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
      The file "othertest" appears on the Mac as a plain file which is an exact copy of "test": changes made to one show in the other. If I create a symlink on the Mac, however, it shows up properly:
      $ cd /Volumes/test
      $ ln -s ./test yetanothertest
      $ ls -l
      -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
      -rwx------ 1 user staff 1024 4 Feb 10:00 test
      lrwx------ 1 user staff 21 4 Feb 10:00 yetanothertest -> ./test
      Unfortunately, I need to do stuff on the Mac side where the symlinks actually appear as symlinks. Is there a way I can configure things so symlinks work as expected, or do I have to deal with the horrors of NFS for this?
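In case it helps anyone searching later: Samba does have options aimed at this, which on Unraid would go in Settings → SMB → SMB Extras. This is only a sketch, and I haven't confirmed that the macOS SMB client honours Minshall-French symlinks, so treat the whole fragment as an assumption to test:

```
[test]
  # Let the server follow symlinks within the share, where permitted.
  follow symlinks = yes
  # Minshall-French symlinks: store symlinks in a form that SMB clients
  # can recognise without SMB1 unix extensions (client support varies).
  mfsymlinks = yes
```

If the client doesn't support mfsymlinks, the fallback really is NFS, which preserves symlinks natively.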
  12. Please could we have the RSS feed? It shouldn't be too difficult to implement, and it makes life much easier for those of us who do use RSS. I get far too much email as it is, and I don't want to know about new videos and popular forum posts — just blog articles. IMHO, without an RSS feed, it shouldn't even be called a "blog".
  13. When I run the Docker container, the log says: "XML template file for the vm was placed in Unraid system files. This file assumes your vm path is /mnt/user/domains if it isnt you will need to manually edit the template changing the locations accordingly". Would it be possible for it to output the path to that XML file at that point, instead of just saying "Unraid system files"? I've looked all through /boot and I can't find it — and I need to change /mnt/user/domains to /mnt/disks/vms… EDIT: d'oh — I can just edit the XML after restarting the server and before starting the VM, of course. Might be more helpful if the message mentioned that, for the decaffeinated 😁
  14. Do we need to change anything in our unRAID configs in order to continue getting updates to this container, if it's being renamed? Or will the rename get picked up automatically?
  15. Ah — I was connecting to port 7806 with the VNC client. That explains a lot. I don't have Chrome installed, but Vivaldi is based on Chromium (it loses a bunch of Google-specific stuff I don't want, and adds several very useful configuration options). As I said, it worked fine there, too. I cleared the Firefox cache and it still didn't work without using private mode — but clearing all cookies and localStorage set by the unRAID machine did. Not sure what had got stored that was causing the confusion, but it's all resolved now. Thanks for your help!
  16. It works inside a private window, and if I use Vivaldi rather than Firefox Developer Edition (but that's a pain as I have to copy/paste the link rather than just clicking, obviously). All adblockers are disabled for the unRAID server's address in Firefox, but not in Vivaldi — but they all report nothing is blocked, anyway. I've tried disabling each extension individually, with no effect. Most of them were active in Vivaldi anyway. The connection doesn't get established when I use TigerVNC. console.log.txt It's very mysterious… ¿ⓧ_ⓧﮌ
  17. Thanks — no joy with a cache clear, or with enabling privileged mode. It doesn't work in Vivaldi or Firefox (where it used to work). This is what I get in the browser console:
      Msg: Starting VNC handshake util.js:218:50
      Msg: Sent ProtocolVersion: 003.008 util.js:218:50
      Source map error: Error: request failed with status 404
      Resource URL: http://192.168.69.99:7806/css/bootstrap.min.css?v=be005ac911
      Source Map URL: bootstrap.min.css.map
      WebSocket on-close event util.js:218:50
      Msg: Server disconnected (code: 1006)
      I'm not sure the source map error is relevant; it sometimes occurs after the server disconnect.
  18. I can't get the Web UI to work — I just get the toolbar at the top with a red X next to the MakeMKV logo. When I mouse over the X, I see a tooltip with the message "Server disconnected (code: 1006)". I've tried uninstalling and reinstalling the container. There is a "s6-svwait: fatal: timed out" in the log (attached) but I don't know if it's significant. Does anybody have any idea what's going wrong? I've just bought a new DVD box set that I'd like to rip… log.txt
  19. Slightly confused — are you saying that I can just define this disk as a separate cache pool and then use that for the VM storage? I'm talking about the disk images for the VMs, not libvirt.img…
  20. It's turning red for me as well. It's not changing to a state where "it's working now". I can change each VM individually, and they all work, but newly created VMs still try to use /mnt/user/domains. The only way I was able to change the default was by editing /boot/config/domains.cfg and rebooting (which is no quick task with a Dell R720 — it takes over five minutes to restart!) Why is there this restriction? And why does it just turn the text red rather than displaying a helpful error message? It's really poor UX… Just to make me feel like I've really got value for the $49 I've just paid to upgrade my licence, I've tried to rename a stopped VM. The button changes from "Update" to "Updating…" — and then stays like that (update: it's now been over ten minutes since I clicked the button. TEN MINUTES TO RENAME A VM!) You would think that changing the name of a VM would be pretty near instant, but apparently not. What exactly is unRAID trying to do that takes so long — and why isn't there any sort of feedback? (A window with estimated time remaining would be useful. Doing the update in the background and letting me use the UI for other things would be even more useful, because navigating away from the page stops the rename!)
  21. I've been having a similar problem — I'm using a Dell R720 with a four-port NIC, but eth0 and eth1 are SFP fibre ports. I'm using eth3, but unRAID lost the DNS config once I switched. The network settings page has the DNS settings under eth0, rather than either per-port or separate from all the ports. unRAID had decided to bridge eth3 to eth0, even though I'm not using eth0. When I turned bridging off on eth0 and enabled it on eth3 with no other interfaces on the bridge (and then had to change the bridge on every VM — a "bulk edit" mode would be greatly appreciated), DNS stopped working after a reboot. When I looked at /boot/config/network.cfg, these lines were missing:
      DHCP_KEEPRESOLV="yes"
      DNS_SERVER1="192.168.1.1"
      DNS_SERVER2="1.1.1.1"
      DHCP6_KEEPRESOLV="no"
      Adding them back in and rebooting seemed to confuse unRAID, because the console then said this after boot (presumably the code that generates it assumes eth0 will always be used):
      unRAID Server OS version: 6.8.3
      IPv4 address: not set
      IPv6 address: not set
      I could still ssh in, but the web interface was down. I eventually resolved this by reverting to the original network.cfg that used eth0 for everything, and editing /boot/config/network-rules.cfg to change the port assignments, so the port in use is now treated as eth0. (That might be an easier fix for anybody else who has a port failure.)
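For reference, the network-rules.cfg edit amounts to swapping the interface names in the rules. On my system the file contains udev-style lines, so the change looks roughly like this (a sketch with placeholder MAC addresses; check your own file for the real ones and the exact attribute list):

```
# /boot/config/network-rules.cfg (sketch; MACs are placeholders)
# Swap the NAME values so the working port is treated as eth0:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:00", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:03", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

A reboot is needed for the renaming to take effect, since the rules are applied when the interfaces are brought up.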
  22. The PERC H810 is Dell's official recommendation for the MD1200; it's flashed to IT mode, meaning it's showing as an LSI card (I can't tell which one without rebooting, as it doesn't show in IDRAC without Dell firmware). The output of `dmesg` contains a load of lines like this: [832578.039795] mpt2sas_cm1: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000) which I suspect is related. I can't believe I forgot you could configure the warnings per-drive. More coffee needed, I think…
  23. I'm running unRAID on a Dell R720 with an additional MD1200 attached for another 12 drive bays, connected through a PERC H810 in IT mode. The drives in the R720 itself are fine, but the drives in the MD1200 keep racking up UDMA CRC errors — I'm getting three or four alerts per day, per drive. I've tried these steps:
      • disassembling and reassembling the MD1200, just in case the backplane was slightly out of place
      • using the other controller in the MD1200
      • using other channels on the PERC H810
      • replacing the cable
      • swapping the drives to different bays in the MD1200
      • swapping the drives from the MD1200 with drives in the R720 — the errors stay with the MD1200 rather than following the drives (as expected)
      Nothing seems to make any difference. Does anybody have any idea how I could fix this? (Or, is there a way to disable UDMA CRC error warnings for those drives only? At least I wouldn't have to clear a ton of notifications every morning 🙂 )