ElectricBadger

Everything posted by ElectricBadger

  1. Ah yes, the subheadings look the same as the headings. Got it now. Unfortunately, the refresh button didn't do anything, so I guess I'll have to reboot at some point after the parity rebuild. Thanks! Update: it seems to have noticed overnight that the drive had gone, so no need for a reboot. Except there's an unRAID update anyway 😁
  2. No, it's not in Unassigned Devices — that's not showing anything. This is what I'm seeing — but Dev 2 (sdq) is no longer physically connected to the server.
  3. I'm using a Dell MD1200, which allows hot-plugging. A failing disk has been removed from the array and is now showing in Disk Devices while the parity rebuilds without it. I've removed the physical drive from its bay, but it's still showing in Disk Devices on the Main screen, with a Format button. How do I get unRAID to refresh its disk list and notice that this drive is no longer attached and should be moved to Historical Devices, without stopping the array or rebooting? There doesn't seem to be a "refresh drive list" button, and clicking on the device name doesn't offer a Remove Device button. Surely, given the amount of hardware that supports hot plugging/unplugging, there must be a way to have unRAID handle this? It's been 30 minutes since I disconnected the disk, and it hasn't even noticed the change yet.
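     (A minimal sketch of the kind of nudge I was hoping the GUI would do for me, in case it helps anyone searching later. It assumes the stale device node really is /dev/sdq as above and that nothing is still using it; I haven't verified how quickly the webgui notices afterwards.)
        # tell the kernel the hot-unplugged disk is gone, so it stops appearing as a block device
        echo 1 > /sys/block/sdq/device/delete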
  4. Thanks — yes, it did finish the parity check successfully; it was about 50% through (and paused during the daytime) when I ran the upgrade — so in progress but paused automatically. UPDATE: While I'm sure I had a notification saying it had finished a couple of days after the reboot, after removing the parity.tuning.restart file I got a notification about half an hour later saying the parity check had finished after 9 days, so I guess that gave it a bit of a kick 😁
  5. I ran an upgrade to Unraid 6.12.3 while a parity check was running. It paused, and resumed correctly after the upgrade when the resume time came around, and completed successfully after a couple of days. But now, every day at 07:00 (the pause time) I get a Pushover notification saying "Paused. No array operation in progress (0.0% completed)". Is there a way to get rid of this? Would editing the progress.save file be enough to fix it? (Turning "send notifications for pause or resume of increments" off did stop it, but I want those to be on when a check is actually running…)
     Here's the parity.check.tuning.progress.save file:
     type|date|time|sbSynced|sbSynced2|sbSyncErrs|sbSyncExit|mdState|mdResync|mdResyncPos|mdResyncSize|mdResyncCorr|mdResyncAction|Description
     MANUAL|2023 Jul 26 06:34:22|1690349662|1690348691|0|0|0|STARTED|11718885324|129515364|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE|2023 Jul 26 07:00:06|1690351206|1690348691|0|0|0|STARTED|11718885324|334602096|11718885324|1|check P|Manual Correcting Parity-Check|
     RESUME (MANUAL)|2023 Jul 26 10:24:39|1690363479|1690363219|0|0|0|STARTED|11718885324|369333984|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE|2023 Jul 27 01:00:54|1690416054|1690363219|0|0|0|STARTED|11718885324|6269949784|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Jul 27 01:30:18|1690417818|1690363219|1690416054|0|-4|STARTED|0|6270038284|11718885324|1|check P|Manual Correcting Parity-Check|
     RESUME (MANUAL)|2023 Jul 27 06:24:22|1690435462|1690435247|0|0|0|STARTED|11718885324|6294077764|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE|2023 Jul 27 07:00:08|1690437608|1690435247|0|0|0|STARTED|11718885324|6526254048|11718885324|1|check P|Manual Correcting Parity-Check|
     RESUME (MANUAL)|2023 Jul 27 12:30:39|1690457439|1690457240|0|0|0|STARTED|11718885324|6545790780|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE|2023 Jul 28 01:00:48|1690502448|1690457240|0|275|0|STARTED|11718885324|11664698612|11718885324|1|check P|Manual Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Jul 28 01:12:46|1690503166|1690457240|1690502448|275|-4|STARTED|0|11664770296|11718885324|1|check P|Manual Correcting Parity-Check|
     RESUME (MANUAL)|2023 Jul 28 06:18:21|1690521501|1690521397|0|275|0|STARTED|11718885324|11674986128|11718885324|1|check P|Manual Correcting Parity-Check|
     COMPLETED|2023 Jul 28 06:30:16|1690522216|1690521397|1690521948|275|0|STARTED|0|0|11718885324|1|check P|No array operation in progress|
     And here's the parity.check.tuning.progress:
     type|date|time|sbSynced|sbSynced2|sbSyncErrs|sbSyncExit|mdState|mdResync|mdResyncPos|mdResyncSize|mdResyncCorr|mdResyncAction|Description
     SCHEDULED|2023 Aug 21 22:00:07|1692651607|1690521397|1690521948|275|0|STARTED|0|0|11718885324|1|check P|No array operation in progress|
     PAUSE|2023 Aug 22 07:00:06|1692684006|1692651607|0|0|0|STARTED|11718885324|1975074812|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME|2023 Aug 22 22:30:08|1692739808|1692651607|1692684006|0|-4|STARTED|0|1975112588|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 23 01:00:27|1692748827|1692739809|0|0|0|STARTED|11718885324|2939569160|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Aug 23 01:12:19|1692749539|1692739809|1692748827|0|-4|STARTED|0|2939681292|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME|2023 Aug 23 22:30:08|1692826208|1692739809|1692748827|0|-4|STARTED|0|2939681292|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 24 01:00:35|1692835235|1692826208|0|0|0|STARTED|11718885324|3717129116|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Aug 24 01:30:19|1692837019|1692826208|1692835236|0|-4|STARTED|0|3717246472|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME|2023 Aug 24 22:30:08|1692912608|1692826208|1692835236|0|-4|STARTED|0|3717246472|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 25 01:00:27|1692921627|1692912609|0|0|0|STARTED|11718885324|4750530368|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Aug 25 01:18:23|1692922703|1692912609|1692921628|0|-4|STARTED|0|4750665580|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME|2023 Aug 25 22:30:07|1692999007|1692912609|1692921628|0|-4|STARTED|0|4750665580|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 26 01:00:25|1693008025|1692999008|0|0|0|STARTED|11718885324|5863042816|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE (MANUAL)|2023 Aug 26 01:12:17|1693008737|1692999008|1693008026|0|-4|STARTED|0|5863140880|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     STOPPING|2023 Aug 26 08:46:08|1693035968|1692999008|1693008026|0|-4|STARTED|0|5863140880|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE (RESTART)|2023 Aug 26 08:46:09|1693035969|1692999008|1693008026|0|-4|STARTED|0|5863140880|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME (RESTART)|2023 Aug 26 08:57:18|1693036638|1693036628|0|0|0|STARTED|11718885324|5863827540|11718885324|0|check P|No array operation in progress|
     RESUME|2023 Aug 26 22:30:07|1693085407|1693036628|1693036640|0|-4|STARTED|0|5864121344|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 27 07:00:06|1693116006|1693085407|0|0|0|STARTED|11718885324|9213726048|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     RESUME|2023 Aug 27 22:30:08|1693171808|1693085407|1693116007|0|-4|STARTED|0|9213817272|11718885324|0|check P|Scheduled Non-Correcting Parity-Check|
     PAUSE|2023 Aug 28 07:00:07|1693202407|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Aug 29 07:00:08|1693288808|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Aug 30 07:00:06|1693375206|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Aug 31 07:00:07|1693461607|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Sep 01 07:00:06|1693548006|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Sep 02 07:00:07|1693634407|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Sep 03 07:00:07|1693720807|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Sep 04 07:00:07|1693807207|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     PAUSE|2023 Sep 04 13:50:11|1693831811|1693171809|1693194257|0|0|STARTED|0|0|11718885324|0|check P|No array operation in progress|
     This is with the 2023.09.03 version of the plugin. I've attached diagnostics, and a syslog with logging set to Testing. Thanks!
     parity.txt celestia-diagnostics-20230904-1400.zip
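     (For anyone comparing against their own logs, a quick way to see what the plugin last recorded; the path is an assumption based on where unRAID plugins normally keep state on the flash drive.)
        # show the type, date and description of the final record in the progress file
        tail -1 /boot/config/plugins/parity.check.tuning/parity.check.tuning.progress | cut -d'|' -f1,2,14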
  6. Ah, there's the problem. I was still running unRAID 6.9.2. Although 6.12 is still only at the RC stage, at least I'm now on 6.11.5. Not sure why unRAID stopped telling me there were updates available, but thanks for reminding me 😁
  7. Is there a way to specify the order in which containers should be backed up? It takes quite a while for the GitLab backup to run, and I'd like to specify that it run first so that it's finished when I need it in the morning. I also have a couple of questions about the settings page: Is it possible to have a "don't show this again" option on the warning popup saying that Docker containers will be stopped before the backup runs and restarted afterwards? I've seen it about twenty times now. I know. It's annoying to have to keep dismissing the alert every time — it reminds me of Windows Vista! The support link at the bottom of the page still goes to the old thread. That means clicking it, then clicking "go to post" on the pinned new thread announcement, then clicking the link to the new thread. It would be much easier if the link could go straight to the new thread, possibly with a note added saying "an older, read-only, thread is available here"… (Ooh, just noticed one more thing. I have /mnt/user/appdata on an SSD, but I also have /mnt/user/appdata_hd for things that take up too much space for an SSD and don't need the speed. Is there a way to get it to look at both of these? It seems like it can only do one or the other…)
  8. I am getting this problem as well — it sits at "checking issues" for an unacceptably long time, with no indication that it will ever complete. Would it be possible for the issue checker to output a percentage progress, just so users know whether they should hang around waiting for it or come back tomorrow?
  9. I had to upgrade the base OS on the Pi-hole that provides DHCP, and now it won't resolve my unRAID server in DNS because the server hasn't asked for a new DHCP lease. Forcing unRAID to renew its lease proved more difficult than I'd hoped. It would be great to have a "Renew DHCP lease" button next to each interface in Settings > Network Settings, but there isn't one. Online tutorials say to use `dhclient`, but it's not installed. Eventually I realised you can do it with `killall -HUP dhcpcd`, so I wanted to leave this here in case anybody else is having trouble (but also to ask for a Renew Lease button in the UI).
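     (For completeness, the full sequence. br0 is just the bridge interface on my box, so adjust to taste; SIGHUP is what prompted dhcpcd to go and renew here.)
        # confirm dhcpcd is the client actually managing the lease
        pgrep -a dhcpcd
        # ask it to renew/rebind
        killall -HUP dhcpcd
        # check the interface still has (or has re-acquired) an address
        ip -4 addr show br0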
  10. Thanks — I had forgotten about Previous Apps. That would probably have saved a lot of time 🤦‍♂️ For some reason I didn't get the Reinstall from Previous option. I don't have a different version of the GitLab container in Previous Apps, so I'm guessing I did click on the same one that I was using previously, but it's probably too late to find out what happened there… I'm not even sure why that environment variable is added via Extra Parameters (I suspect I was following a set of install instructions that were incorrect) but I'll try adding it in the correct way when I have a bit more time. Not having to deal with the escaping is reason enough to make the switch 🙂
  11. After reinstalling the container, it seemed to zero the databases and start afresh, even though it was pointed at the same appdata locations. It's a good job I had a backup, even though it took two tries to restore the user database and I still don't have the projects restored! Mismatching quotes in an environment variable absolutely should not cause this level of data loss.
  12. If an error is made in the Extra Parameters setting when configuring a Docker container, the container is not added to the list of containers. If an error is made when editing this setting for an existing container, the container is removed from the list of installed containers and the user configuration is lost. There is no warning or confirmation before this happens.
     To reproduce:
     1. Install the GitLab CE container and start it.
     2. Edit the container settings.
     3. Add the following to Extra Parameters:
     --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n";prometheus_monitoring['enable']=false;"
     Note that the code pasted from https://docs.gitlab.com/ee/administration/integration/plantuml.html uses double quotes which should have been escaped, as it's already inside double quotes. An easy error to miss, but one with a hefty penalty, as it causes the container creation to fail. This causes the container to vanish; when reinstalled, any existing configuration changes are lost and the container has its default settings once more.
     Expected behaviour: the container remains installed, in a stopped state, with its existing config. The user is free to edit that config until they come up with one that works.
     I'm marking this as Urgent as it is, technically, a data loss bug (while I didn't lose any data from the container's volumes, I did have to recreate all the config options. If I didn't have a copy of that Extra Parameters, which I've pasted a sanitised version of below, it would have taken hours to get the container up and running again.)
     --env GITLAB_OMNIBUS_CONFIG="external_url 'https://git.example.com';registry_external_url 'https://registry.example.com';gitlab_rails['usage_ping_enabled']=false;gitlab_rails['lfs_enabled']=true;gitlab_rails['gitlab_ssh_host']='git.example.com';gitlab_rails['gitlab_email_from']='[email protected]';gitlab_rails['smtp_enable']=true;gitlab_rails['smtp_address']='mail.example.com';gitlab_rails['smtp_port']=25;gitlab_rails['smtp_authentication']=false;gitlab_rails['smtp_tls']=false;gitlab_rails['smtp_openssl_verify_mode']='none';gitlab_rails['smtp_enable_starttls_auto']=false;gitlab_rails['smtp_ssl']=false;gitlab_rails['smtp_force_ssl']=false;gitlab_rails['manage_backup_path']=false;gitlab_rails['backup_path']='/backups';gitlab_rails['backup_archive_permissions']=0644;gitlab_rails['backup_pg_schema']='public';gitlab_rails['backup_keep_time']=604800;nginx['listen_port']=9080;nginx['listen_https']=false;nginx['hsts_max_age']=0;nginx['client_max_body_size']='0';nginx['custom_gitlab_server_config']='location /-/plantuml/ {\nproxy_cache off;proxy_pass http://plantuml:14480/;\n}\n';registry_nginx['listen_port']=9381;registry_nginx['listen_https']=false;registry_nginx['client_max_body_size']='0';registry_nginx['enable']=true;registry['enable']=true;gitlab_rails['pipeline_schedule_worker_cron'] = '7,37 * * * *';prometheus_monitoring['enable']=false;postgresql['auto_restart_on_version_change']=false"
     celestia-diagnostics-20220801-0905.zip
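     (For reference, the quoting fix itself is just not nesting unescaped double quotes: switching the inner string to single quotes, as in the sanitised config above, keeps the --env value intact. Roughly:)
        # broken: the inner double quote after "= " terminates the outer value early
        --env GITLAB_OMNIBUS_CONFIG="…;nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { … }";…"
        # working form: single-quote the nested string instead (or escape the inner quotes)
        --env GITLAB_OMNIBUS_CONFIG="…;nginx['custom_gitlab_server_config'] = 'location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n';…"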
  13. If I stop the array and then restart it, all Docker containers that have autostart set to ON are restarted. The same does not apply to VMs: they only autostart after a full reboot, and must be restarted manually in this case. This is really annoying, as I usually forget to do this and don't notice until I try to use something that's running on a VM and find it's not working. Would it be possible for autostart to be consistent between Docker and VMs, and for both to autostart things after an array stop/start? celestia-diagnostics-20220219-1402.zip
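     (In the meantime, a rough sketch of a stopgap. It's untested as an event hook, ignores the per-VM autostart flag, and would presumably need wiring up to something like the User Scripts plugin to run after an array start:)
        #!/bin/bash
        # start every defined VM that isn't currently running
        virsh list --all --name | while read -r vm; do
            [ -z "$vm" ] && continue
            if [ "$(virsh domstate "$vm")" != "running" ]; then
                virsh start "$vm"
            fi
        done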
  14. Yes, the plugin autoupdates, so it's current. Looks like Firefox doesn't support the `list` attribute for `input type='color'`, so not a lot you can do here. I'll use Vivaldi for when I need to edit colours. Thanks!
  15. Everything I'm reading suggests that I should get a dropdown when I click on a custom colour control, which includes an option to reset the colour to its default. However, in Firefox on macOS, I just get the standard system colour picker. The posts I read were from a while ago — has the behaviour changed since then? There no longer seems to be a way to reset a colour to default without resetting them all to default…
  16. I am using binhex-delugevpn, and occasionally the endpoint goes down, requiring the container to be restarted. If, while the container is restarting, I click on its icon and choose Logs, the logs open, but the container is left stopped. Once this happens, attempts to stop the container produce "Execution error — container already started", even though `docker ps` shows that it is not. Turning the entire Docker service off and on again does not clear this and allow Unraid to get back in sync — only a full reboot (which takes about 5–10 minutes with a Dell R720) will clear things. Is there a way to force the frontend to throw away what it thinks the Docker state is, and set it based on `docker ps`? And are there any plans to allow the frontend to multitask properly, so that users can restart a container and then bring its logs up without having to wait for the restart to complete? (This also occurs to an extent when starting multiple containers — if you click Start on one, then click Start on a second before the first has finished starting, it gets confused, though not usually to the point of requiring a full reboot…) (EDIT: Actually, in this particular case, it looks like the container wasn't starting anyway, because I'd messed up a config file, so that may not have helped. But I have had Unraid get out of sync before when doing this. It's possible that restarting the whole Docker service might have avoided the reboot if I hadn't messed up the config files, though!)
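     (To illustrate the "set it based on docker ps" part, this is the sort of check that can be done from the CLI; binhex-delugevpn is the container in question here:)
        # what does the Docker daemon itself think the state is?
        docker inspect --format '{{.State.Status}} since {{.State.StartedAt}}' binhex-delugevpn
        # if it really is stopped, a plain CLI start at least gets the container running,
        # even if the webgui may stay out of sync until it refreshes
        docker start binhex-delugevpn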
  17. After some digging, I found that the problem is due to Deluge setting a cookie whose value is longer (1819 bytes) than noVNC can cope with, hence the 403 Request Entity Too Large. Quite why it needs that much to list which columns should be shown, and in what order, I don't know! I don't suppose this is that easy to fix — I assume the limit is in upstream code somewhere? As a workaround I've changed the Deluge Docker container to provide http://«servername»:8112 as its web interface, so that the browser sees it as being a separate webserver from http://«ip», which is what the VNC Remote button uses, but if any Docker container that's been accessed via the server IP and a port can break browser VNC, it's not good…
  18. I've cleared my browser cache, but still cannot connect via noVNC using Firefox. I can connect using a third-party VNC client, so the problem seems to be in noVNC. Does anybody have any idea what the problem could be? It's a pain to have to fire up TigerVNC and find the correct port to connect to when you just want to verify something…
  19. Is there a list of banned share names anywhere? I'd like to check mine are OK before I hit the update button 😁
  20. When I have symlinks in a share, and mount that share over CIFS to my Mac, the symlinks show as copies of the file, rather than symlinks. For example, if I have a share at /mnt/user/test mounted on the Mac at /Volumes/test, and the share contains a file called "test", then I see this:
     $ ls -l /Volumes/test
     -rwx------ 1 user staff 10 4 Feb 10:00 test
     $ ssh unraid
     # cd /mnt/user/test
     # ln -s ./test othertest
     # ls -l
     -rwxrwxrwx 1 root root 1024 4 Feb 10:00 test
     lrwxrwxrwx 1 root root 21 4 Feb 10:01 othertest -> ./test
     # exit
     $ ls -l /Volumes/test
     -rwx------ 1 user staff 1024 4 Feb 10:00 test
     -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
     The file "othertest" appears on the Mac as a plain file which is an exact copy of "test". Changes made to one show in the other. If I create a symlink on the Mac, however, it shows up properly:
     $ cd /Volumes/test
     $ ln -s ./test yetanothertest
     $ ls -l
     -rwx------ 1 user staff 1024 4 Feb 10:01 othertest
     -rwx------ 1 user staff 1024 4 Feb 10:00 test
     lrwx------ 1 user staff 21 4 Feb 10:00 yetanothertest -> ./test
     Unfortunately, I need to do stuff on the Mac side where the symlinks actually appear as symlinks. Is there a way I can configure it so symlinks work as expected, or do I have to deal with the horrors of NFS for this?
  21. Please could we have the RSS feed? It shouldn't be too difficult to implement, and it makes life much easier for those of us who do use RSS. I get far too much email as it is, and I don't want to know about new videos and popular forum posts — just blog articles. IMHO, without an RSS feed, it shouldn't even be called a "blog"
  22. When I run the Docker container, the log says "XML template file for the vm was placed in Unraid system files. This file assumes your vm path is /mnt/user/domains if it isnt you will need to manually edit the template changing the locations accordingly". Would it be possible for it to output the path to that XML file at that point, instead of just saying "Unraid system files"? I've looked all through /boot and I can't find it — and I need to change /mnt/user/domains to /mnt/disks/vms… EDIT: d'oh — I can just edit the XML after restarting the server and before starting the VM, of course. Might be more helpful if the message mentioned that, for the decaffeinated 😁
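     (In case it helps anyone else doing the same dance: once the VM definition exists in libvirt, the path can also be fixed there instead of hunting for the template file. "MyMacVM" is a placeholder for whatever the VM ends up being called:)
        # see where the definition currently points
        virsh dumpxml "MyMacVM" | grep '/mnt/user/domains'
        # edit the definition in place, changing /mnt/user/domains/… to /mnt/disks/vms/…
        virsh edit "MyMacVM"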
  23. Do we need to change anything in our unRAID configs in order to continue getting updates to this container, if it's being renamed? Or will the rename get picked up automatically?
  24. Ah — I was connecting to port 7806 with the VNC client. That explains a lot. I don't have Chrome installed, but Vivaldi is based on Chromium (it loses a bunch of Google-specific stuff I don't want, and adds several very useful configuration options). As I said, it worked fine there, too. I cleared the Firefox cache and it still didn't work without using private mode — but clearing all cookies and localStorage set by the unRAID machine did. Not sure what had got stored that was causing the confusion, but it's all resolved now. Thanks for your help!