sonofdbn
I got this warning/message, as well as an email, while backing up my plugins:

Event: Tailscale State
Subject: Tailscale state backed up to Unraid connect.
Description: The Tailscale state file has been backed up to Unraid connect. This is a potential security risk. From the Management Settings page, deactivate flash backup and delete cloud backups, then reactivate flash backup.
Importance: alert

I haven't done anything different with Tailscale for at least a few months. Should I take the recommended actions? I can see where to deactivate flash backup, but have no idea how to delete cloud backups. Also, I'm not sure whether this helps: if I delete the cloud backups, then I've lost the backups, so it sounds like I should back up manually first. But if I simply deactivate, delete the backups and then reactivate, how does this prevent the problem from happening again?
-
EDIT: PLEASE IGNORE - ALL SEEMS TO BE WORKING NOW!

I'm trying to get this to work on a couple of Win 11 VMs that I have, but have been unable to wake the VMs from my Win 10 PC. I use a WOL utility that works fine with other physical PCs, and I can "see" the VMs in it; I just can't wake them. Is there anything I need to set up on the VM side? If it's relevant, for the VMs the Network Source is br0 and the Network Model is e1000. When I look inside the VMs at the network adapter properties (it comes up as Intel PRO/1000 MT), there doesn't seem to be an option for Wake on Magic Packet, and the "Allow this device to wake the computer" option is greyed out. Perhaps I'm totally misunderstanding how WOL works for VMs. I read about a libvirt WOL option, but it looks like that was a plug-in that is no longer available in the app store. Diagnostics are attached in case they're helpful. t2-diagnostics-20240709-0925.zip
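For anyone debugging the same thing: the magic packet itself is simple enough to build by hand, which can help rule out the WOL utility as the problem. This is just a minimal sketch in Python's standard library (the MAC address below is a made-up example; substitute your VM's MAC from the Unraid VM settings page):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WOL magic packet is 6 x 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes, e.g. 52:54:00:12:34:56")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the packet as a UDP broadcast (port 9 is the conventional discard port)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Example (hypothetical MAC): send_magic_packet("52:54:00:12:34:56")
```

Note that even a correctly formed packet only works if the (virtual) NIC is configured to listen for it while powered down, which is exactly the part that seems to be missing with the e1000 model here.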
-
[Support] Linuxserver.io - Nextcloud
sonofdbn replied to linuxserver.io's topic in Docker Containers
I'm aware of NC updating via the container vs the old way of doing updates via the webui. The repo I'm using is lscr.io/linuxserver/nextcloud:latest, and I have been using that for a long time. What is the "docker tag" I should be changing or aligning with the NC version? I'm on "Nextcloud Hub 8 (29.0.3)". If I had been many versions behind, I think I would have had to update a few major versions to get to the current version, and that certainly didn't happen. I should add that after I changed that parameter in the config file, there was no lengthy download, verification, etc. procedure, as there usually is with an update. I'm not totally convinced that an update was actually done after the change. I got into the WebUI immediately, and apart from some message about an app (Recognize - can't remember what the message was, unfortunately - maybe updating or restarting) everything was normal.
-
[Support] Linuxserver.io - Nextcloud
sonofdbn replied to linuxserver.io's topic in Docker Containers
I managed to fix this by editing the config.php file to allow updating via browser, which in retrospect seems quite obvious. (Just changed 'upgrade.disable-web' from true to false.) Has the way of updating the Nextcloud version changed? Maybe I missed an announcement. Anyway, it's working again for me now.
-
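In case it helps anyone scripting the same change: here's a rough sketch of flipping that flag programmatically rather than hand-editing config.php. The config fragment below is a minimal made-up example (a real config.php has many more keys), and the regex approach assumes the key is written in the usual `'key' => value,` style:

```python
import re

# Abridged, hypothetical example of a Nextcloud config.php
SAMPLE_CONFIG = """<?php
$CONFIG = array (
  'instanceid' => 'abc123',
  'upgrade.disable-web' => true,
);
"""

def set_disable_web(config_text: str, disabled: bool) -> str:
    """Rewrite the 'upgrade.disable-web' value in config.php text.

    disabled=False re-enables the browser-based updater.
    """
    value = "true" if disabled else "false"
    return re.sub(
        r"('upgrade\.disable-web'\s*=>\s*)(true|false)",
        r"\g<1>" + value,
        config_text,
    )
```

Obviously back up config.php first, and note that on the linuxserver.io container the file lives under the appdata path you mapped for /config.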
[Support] Linuxserver.io - Nextcloud
sonofdbn replied to linuxserver.io's topic in Docker Containers
I just updated the docker container; I'm on 29.0.3-ls328 (and unRAID 6.12.6). For a few days at least prior to the update I was unable to sync to Nextcloud on my server. I couldn't get into the Web GUI because the home page had a message saying: "Update needed. Please use the command line updater because updating via browser is disabled in your config.php." I recalled that a while back the docker container was changed so that Nextcloud updates were done by updating the container (vs doing it "manually" from within the container). So I thought/hoped that updating the container would fix this problem, but after updating today, I'm still getting the same "Update needed" message. What do I need to do to fix this?
-
I'm struggling with the Czkawka GUI, or don't understand what it's meant to be doing. I tried a small test and it showed a list of all duplicated files. Let's say all the duplicates are in two folders, All or Sorted, and I want to delete all the duplicates in All. Is there a way to do this quickly? (Of course I can select each file individually, but that could take a while.) I thought that by clicking on the Sort button in the lower right and then sorting by folder I would at least be able to group all the All duplicates and delete those. But the Sort button doesn't seem to do anything. I tried selecting all the files and then clicking Sort but that also didn't work. What should I be doing?
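While waiting for an answer on the GUI, here's the logic I'm effectively after, as a sketch: given Czkawka-style duplicate groups (each group is a list of paths to identical files), pick every copy under one folder for deletion, while never deleting all copies in a group. The paths and the `files_to_delete` helper are my own illustration, not part of Czkawka:

```python
from pathlib import PurePosixPath

def files_to_delete(groups: list[list[str]], delete_root: str) -> list[str]:
    """Select duplicates living under delete_root, but always keep
    at least one copy of each group somewhere."""
    root = PurePosixPath(delete_root)
    doomed = []
    for group in groups:
        inside = [p for p in group if root in PurePosixPath(p).parents]
        outside = [p for p in group if p not in inside]
        if outside:
            # A copy survives elsewhere, so everything under root can go.
            doomed.extend(inside)
        else:
            # Every copy is under root: spare the first one.
            doomed.extend(inside[1:])
    return doomed

# Example with made-up paths:
# files_to_delete([["/mnt/user/All/a.jpg", "/mnt/user/Sorted/a.jpg"]], "/mnt/user/All")
# selects only /mnt/user/All/a.jpg for deletion.
```

The actual deletion would then just be a loop of `os.remove` over the returned list, ideally after printing it for review.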
-
I have the same problem (6.12.6), no array devices showing in the Main tab of the GUI. This happens regardless of how I access the GUI. I've used my usual http://tower/Main as well as the IP address and it happens regardless of browser. Same problem if I try to access from my Android phone. Other tabs and the rest of the GUI seem fine. I've tried stopping and starting the array (NOT rebooting), but that doesn't work. tower-diagnostics-20240603-2025.zip
-
I run a Win11 VM with an nVidia GPU passthrough. I'd like to create another Win11 VM on the same server and pass through the same GPU to it. I realise I can't run both VMs simultaneously, but I assume that by shutting down the active VM and then booting up the other one, I should be OK. Is that correct? And if by mistake I boot up the other VM while the first one is active, what will happen?
-
Getting helium warning on non-helium HGST drive
sonofdbn replied to sonofdbn's topic in Storage Devices and Controllers
The drive is still in the array, so I can't see what the top looks like. But here's where I saw the model number in some HGST document. It's a Deskstar NAS and there's no mention of helium anywhere. Casual Googling shows that most (all?) HGST helium drive model numbers start with HUH and are Ultrastars. Or perhaps I'm so far behind the tech that most modern drives are helium drives so they don't bother mentioning it?
-
I have an HGST Deskstar NAS drive in my array. The model is HDN728080ALE604, and from what I can find, it's not a helium drive (and I don't recall ever buying a helium drive). But I'm getting this in my notification email from the server: Subject: Warning [TOWER] - helium level (failing now) is 22. I ran a short SMART self-test and no error was reported. This is the third time I've seen the message (the number is going up: 7, 16, 22) but I have so far ignored it. The first was in February this year. Should I be concerned?
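For reference, the warning comes from a SMART attribute, so it can be inspected directly with smartctl's JSON output (`smartctl -A -j /dev/sdX`). Below is a sketch of checking whether the reported attribute has crossed its threshold; the sample JSON is abridged and the numbers in it are hypothetical, and the attribute ID/name for helium level can vary by vendor:

```python
import json

# Abridged, hypothetical `smartctl -A -j` output for illustration only
SAMPLE = json.loads("""
{
  "ata_smart_attributes": {
    "table": [
      {"id": 22, "name": "Helium_Level", "value": 22, "thresh": 25,
       "raw": {"value": 22}}
    ]
  }
}
""")

def attribute(report: dict, attr_id: int):
    """Find one row of the SMART attribute table by ID."""
    for row in report["ata_smart_attributes"]["table"]:
        if row["id"] == attr_id:
            return row
    return None

def helium_failing(report: dict, attr_id: int = 22) -> bool:
    """SMART flags 'failing now' when the normalized value is at/below threshold."""
    row = attribute(report, attr_id)
    return bool(row) and row["value"] <= row["thresh"]
```

The key point the sketch illustrates: the "22" in the email is the normalized attribute value, not a raw helium percentage, and "failing now" means value <= threshold.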
-
Can't connect to custom network after unclean shutdown
sonofdbn replied to sonofdbn's topic in General Support
I tried deleting and then recreating proxynet, but that didn't help. Then I tried creating a new network, proxynet2. I assigned my Swag and Nextcloud containers to it and they seemed to be OK. Then I reassigned the containers back to proxynet and lo and behold, they're now working. To tidy up, I deleted proxynet2. So far, so good. If things run fine for a few days, I'll mark this as solved.
-
I'm on 6.12.6. I ran into some problems that led to an unclean shutdown. I got some advice here to fix the problems, and followed those steps (switched from macvlan to ipvlan, recreated the custom docker network (same name "proxynet") and recreated the docker image). But now my docker containers (Swag and Nextcloud) aren't connecting to proxynet. Host access to custom networks is enabled, as is Preserve user defined networks.

Searching a bit on the forum, there did seem to be some cases of custom docker networks not working after an unclean shutdown, and possibly of the Host access to custom networks setting being shown as enabled when in fact it had not been enabled. The suggested solution was to disable and re-enable this setting, which seemed to work in some cases. I've tried that, also rebooted and tried it again, and I still have the same problem. Here's what I have:

root@Tower:~# docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
bd5116d783dc   br0        ipvlan    local
a3b3aaa3ce56   bridge     bridge    local
83cf0e1b1ef5   host       host      local
9e74c89874cb   none       null      local
69ed37939ece   proxynet   bridge    local
root@Tower:~#

So it looks like proxynet is running? There was also a suggestion that the problem was caused by a race condition, where the docker container tried to connect to the custom network before the network was up. I tested that as well by restarting Swag: same problem. Then I also tested by setting Swag autostart to off, disabling the docker service, re-enabling the docker service, waiting a few minutes and then starting Swag. Still the same problem. Any suggestions on how to fix this? tower-diagnostics-20240331-2100.zip
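In case the race-condition theory turns out to be right for someone else, here's a rough sketch of what "wait for the network before starting the container" could look like. The `wait_for_network` helper is my own illustration (not an Unraid feature); it just polls `docker network ls` and parses the NAME column:

```python
import subprocess
import time

def network_names(ls_output: str) -> list[str]:
    """Parse the NAME column out of `docker network ls` output."""
    lines = ls_output.strip().splitlines()
    # Skip the header row; NAME is the second whitespace-separated column.
    return [line.split()[1] for line in lines[1:] if len(line.split()) >= 2]

def wait_for_network(name: str, timeout: float = 60.0) -> bool:
    """Poll docker until the named network exists, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        out = subprocess.run(
            ["docker", "network", "ls"], capture_output=True, text=True
        ).stdout
        if name in network_names(out):
            return True
        time.sleep(2)
    return False

# Example: only start the container once the network is visible.
# if wait_for_network("proxynet"):
#     subprocess.run(["docker", "start", "swag"])
```

As noted in the post, though, delaying the container start by hand didn't fix it here, which suggests the race condition wasn't the actual cause in this case.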
-
I've switched to ipvlan, recreated the custom docker network (same name "proxynet") and recreated the docker image. But now my docker containers (Swag and Nextcloud) aren't connecting to the custom network. Host access to custom networks is enabled, as is Preserve user defined networks. I'm sure I've done this before, so I think I'm missing some obvious step. Here's the result of docker network ls:

root@Tower:~# docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
16453c5dced9   br0        ipvlan    local
7c5d56aee35f   bridge     bridge    local
83cf0e1b1ef5   host       host      local
9e74c89874cb   none       null      local
69ed37939ece   proxynet   bridge    local

tower-diagnostics-20240331-1848.zip