Ulrar


Everything posted by Ulrar

  1. Oh perfect, then I can just do the exact same procedure I just did, but with parity 1 instead. Thank you very much!
  2. Hi forum, I've got two 4TB parity drives that I want to replace with 20TB drives. Nothing is broken; I'm just planning to start buying 20TB drives in the future, so I'm planning ahead. I just did:
     - Stop the array
     - Remove parity 2 (4TB)
     - Start the array
     - Stop the array
     - Add the new drive as parity 2 (20TB)
     - Start the array
     That's now rebuilding; the ETA is 5 days with my services running (ouch). In 5 days I'll want to do the same with parity 1, but thinking about it now, I'm not sure you can have parity 2 filled while parity 1 is empty. What would be my best bet? I believe a "parity swap" procedure would keep the array offline for days, so that's out of the question. Should I have removed both parity drives now and rebuilt 1 instead of 2, leaving 2 out? Thanks
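If you want to keep an eye on the rebuild from a shell instead of the web UI, Unraid's md driver reports its state as key=value lines in /proc/mdstat. A minimal sketch of reading it, assuming the fields `mdResync` (total sectors to sync) and `mdResyncPos` (current position) — check your own /proc/mdstat output for the exact names:

```python
from pathlib import Path
from typing import Optional

def parse_mdstat(text: str) -> dict:
    """Parse key=value lines into a dict, ignoring everything else."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

def rebuild_percent(fields: dict) -> Optional[float]:
    """Sync progress as a percentage, or None if no sync is running."""
    try:
        # Field names are an assumption; verify on your own system.
        total = int(fields.get("mdResync", "0"))
        pos = int(fields.get("mdResyncPos", "0"))
    except ValueError:
        return None
    if total == 0:
        return None  # no rebuild or parity check in progress
    return 100.0 * pos / total

if __name__ == "__main__":
    mdstat = Path("/proc/mdstat")
    if mdstat.exists():
        pct = rebuild_percent(parse_mdstat(mdstat.read_text()))
        print(f"rebuild: {pct:.1f}%" if pct is not None else "no sync running")
```

Handy in a cron job or a user script if the 5-day ETA makes you want progress without leaving a browser tab open.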
  3. Same issue as @sersh this morning for Intel: the intel_gpu_top command works fine, but the PHP file says `Vendor data valid, but not enough received.` and `Vendor command returned unparseable data.`, then just returns N/A for everything. I did see the plugin auto-update just before it broke, so presumably it's a bug in the latest version? I'm fairly sure it was working yesterday, but I could be wrong.
  4. Hi, I've been using unraid since the previous stable version (6.9), where this issue was already present; I can't speak for earlier versions. Regularly, unraid will tell me there's an update available for some containers, but upon clicking `apply update` it won't actually pull anything and just re-creates the container. Usually it then shows 'up to date' afterwards, until I click the check for updates button again. It doesn't do it on every container and not every time; it seems a bit random, but nextcloud, for example, does it a lot. Less important, and possibly unrelated, but it's also doing it on containers started from the CLI (not through unraid). I use linuxserver/ffmpeg a lot and it's _always_ showing as update available in the list, but of course clicking on it refuses to update it since it was started outside of unraid. It shows as update available even when it's running the latest image and I click check for updates, for some reason. I've seen an old thread saying DNS issues can cause this behavior, so I made sure my local pihole isn't blocking the Docker Hub and isn't rate limiting unraid, and the dashboard doesn't show anything blocked while unraid is checking for updates. Thanks
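For anyone debugging the same thing: roughly speaking, an update check boils down to comparing the digest the local image was pulled as against the digest the registry currently serves. A minimal sketch of that comparison, assuming you've already captured the local side with `docker image inspect --format '{{json .RepoDigests}}' IMAGE` and the remote digest with something like `docker manifest inspect` (the image name below is illustrative):

```python
import json

def local_digests(repo_digests_json: str) -> set:
    """Parse the JSON array from `docker image inspect --format
    '{{json .RepoDigests}}' IMAGE`, whose entries look like
    "repo/name@sha256:<hex>", into the set of bare digests."""
    digests = set()
    for entry in json.loads(repo_digests_json):
        _, _, digest = entry.partition("@")
        if digest:
            digests.add(digest)
    return digests

def update_available(repo_digests_json: str, remote_digest: str) -> bool:
    """True if the registry's manifest digest doesn't match any digest
    the local image was pulled as."""
    return remote_digest not in local_digests(repo_digests_json)
```

If the digests already match but unraid still flags an update, that points at the check itself (caching, rate limiting, DNS) rather than a genuinely newer image.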
  5. Hi, I'm seeing the same issue when updating bigger containers, home assistant for example. Is there a way to increase the fastcgi timeout setting? I've got my HAProxy in front of unraid set to a 15-minute timeout, which is a bit more reasonable for that use case; it'd be nice to get nginx to the same. It doesn't impact anything, the update still completes in the background, it's just not a great user experience. Thanks
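For reference, the directive involved on the nginx side is `fastcgi_read_timeout` (default 60s). I don't know whether unraid preserves manual changes to its bundled nginx config across reboots or upgrades, so this is only a sketch of what the setting would look like, not a supported tweak:

```nginx
# Inside the location block that passes PHP requests to php-fpm:
location ~ \.php$ {
    # ... existing fastcgi_pass / include fastcgi_params lines ...
    fastcgi_read_timeout 15m;  # default is 60s; matches the HAProxy timeout
}
```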
  6. Hadn't thought of that, but no, they are not. EDIT: they're gone from the auto-update plugin view as well this morning, never mind. I had a power cut yesterday, so this all got reset. I'll report back in a few days once some more containers have run.
  7. Hi, I have a script creating temporary containers (docker run --rm), and as expected they show in unraid while they run, then they go away. Somehow the plugin remembers them: they keep showing in the auto-update tab for containers even though they're not in the Docker tab, and not in the output of docker ps -a. Is there a way to clean that up? Without rebooting, of course, I don't want to reboot every day. It's not causing any issues, but I imagine this means they stay stored somewhere indefinitely, and over the years I'll be creating thousands of those automatically, so I'd like to have this working cleanly. Thanks
  8. Hi, Is there a way to specify the IP to bind a port forward to when creating a container? They seem to always use 0.0.0.0. I have an issue with a container generating a bunch of errors when my router scans its ports; I'd like to just bind it to 127.0.0.1 since the only thing using that port is running in another container in host mode anyway, so it doesn't need to be exposed to the network. I've tried putting 127.0.0.1 in the Port setting, but it looks like . and : are not valid characters there. I've used the extra parameters field to specify the -p myself for now, which works fine; I'm just wondering if there's a way using the normal Port config option. Sorry if I'm missing something obvious, I'm new to unraid. Thanks
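For anyone landing here later, the docker syntax this maps to is `-p ip:hostPort:containerPort`; the unraid extra parameters field just passes it through to docker run. Container name, image, and ports below are made up:

```
# Publish host port 8080 on loopback only, mapped to port 80 in the
# container; nothing outside the machine can reach it.
docker run -d --name myapp -p 127.0.0.1:8080:80 nginx:alpine
```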