Kilrah · 2001 posts
Report Comments posted by Kilrah
-
You can use rsync or freefilesync that will list both sides and copy only what's missing.
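The "copy only what's missing" behavior (rsync's `--ignore-existing`, or FreeFileSync's mirror comparison) can be sketched in Python. This is an illustration of the idea, not a replacement for either tool — real rsync also compares timestamps/checksums and preserves metadata; the helper name is made up.

```python
import shutil
from pathlib import Path

def copy_missing(src: Path, dst: Path) -> list[Path]:
    """Copy files that exist under src but are absent under dst.

    Hypothetical helper for illustration only; it never overwrites
    anything already present on the destination side."""
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists():
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy with metadata
                copied.append(target)
    return copied
```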
-
There's a setting to enable mover logging; enable it when needed for troubleshooting, then disable it afterwards. The point is not to have thousands of lines of log spam when nothing's wrong.
-
Quote
Unraid is affected.
Nope, the liblzma version used in unraid is safe.
-
The downgrade option uses the "last version you were running", so if you upgraded from .5 straight to .9, that's what it'll offer.
-
A docker folder on a ZFS drive is known to cause these issues. Best to go back to a docker image if you have a ZFS cache.
-
Updated, went to browse my remote mount from Unraid to check, and had such messages as well. Downgraded as a precaution, because my server backs up 20TB worth of another machine's data using rsnapshot twice a day via SMB, and it'd wreak complete havoc on the retention history if a bunch of files suddenly weren't seen.
-
I "update all" pretty much every evening, but it only occurs maybe once a month. What's very annoying is that, of course, it'll happen when one container is running a migration that takes a couple of minutes, and the loop will cause the container to be stopped in the middle of it, leaving things in shambles.
-
Compose is a third-party plugin, not part of Unraid, so you should post in its support thread. And yeah, integration isn't perfect outside of trivial stacks.
-
1 minute ago, JorgeB said:
That's irrelevant, currently that share also exists on disk1, please read what I wrote:
OP said this isn't the share that was involved:
45 minutes ago, sabertooth said:
No, this share has 2TB+ data and is from zfs on 6.11.5.
But which one was it then?
-
One of your shares is set to cache only but has data on disk1:
Quote
p------t shareUseCache="only" # Share exists on zdata, disk1
Was that one involved?
Exclusive access requires primary storage only, and the share must not exist on any other disk/pool.
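The exclusive-access rule above can be sketched as a small check. The function and parameter names are hypothetical (this is not Unraid's actual code); it only encodes the two conditions stated: primary storage set to "only", and the share existing nowhere but that pool.

```python
def exclusive_eligible(use_cache: str, locations: set[str], pool: str) -> bool:
    """Sketch of the exclusive-access rule: the share's shareUseCache
    setting must be "only", and the share must exist solely on its
    primary pool — not on any array disk or other pool."""
    return use_cache == "only" and locations == {pool}
```

For the share quoted above, `shareUseCache="only"` holds but the data lives on both zdata and disk1, so it would not qualify.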
-
This bar shows docker image file usage, not docker RAM usage.
-
I don't have before/after graphs but I have 32GB RAM and 60 containers running and am at ~50% RAM usage which is about the same as before.
-
1 hour ago, goezzkim said:
After investigating more, it looks like it's an expected behavior(!!) of docker using zfs driver. I certainly didn't expect this, I think I should revert it back to btrfs.
It's also caused super high disk usage for some. docker.img on ZFS is fine.
-
Yeah, and I haven't seen it, but as mentioned, in my case on 6.11 I'd only get it once every few weeks anyway (I update daily), so that doesn't mean much.
-
2 hours ago, NickI said:
Hi Everyone!
I've just decided to move to 6.12.0 rc6 to start catching up with all the latest developments of Unraid, and I had a nice surprise I wanted to share. When the update completed I navigated to the Docker tab and checked all my dockers for updates. I had 4 to update, so I tested the update all button again and all worked as they should, no loops 🎉
One try doesn't say much since it only happens occasionally.
-
It is indeed stored in the browser, do you have any browser extension that might prevent saving/automatically clear cookies?
-
Something that might be an issue: if you manually unmount a disk (like in the "clear an array drive" scenario), subsequently stopping the array will loop forever when it tries to unmount disks, treating "not mounted" as something it should retry instead of ignore.
May 26 09:28:39 Unraid2 emhttpd: shcmd (504): umount /mnt/disk2
May 26 09:28:39 Unraid2 root: umount: /mnt/disk2: not mounted.
May 26 09:28:39 Unraid2 emhttpd: shcmd (504): exit status: 32
May 26 09:28:39 Unraid2 emhttpd: Retry unmounting disk share(s)...
May 26 09:28:44 Unraid2 emhttpd: Unmounting disks...
May 26 09:28:44 Unraid2 emhttpd: shcmd (505): umount /mnt/disk2
May 26 09:28:44 Unraid2 root: umount: /mnt/disk2: not mounted.
May 26 09:28:44 Unraid2 emhttpd: shcmd (505): exit status: 32
May 26 09:28:44 Unraid2 emhttpd: Retry unmounting disk share(s)...
May 26 09:28:49 Unraid2 emhttpd: Unmounting disks...
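The log shows umount exiting with status 32 ("not mounted") and the stop routine retrying anyway. The distinction it should be making can be sketched like this — a hypothetical helper, not Unraid's emhttpd code:

```python
def should_retry_unmount(exit_status: int, stderr: str) -> bool:
    """Decide whether an umount failure is worth retrying.

    umount(8) exits 0 on success and nonzero otherwise; a "not mounted"
    message means there is nothing left to do, so retrying it (as the
    log above shows emhttpd doing) just loops forever."""
    if exit_status == 0:
        return False  # unmounted successfully
    if "not mounted" in stderr:
        return False  # already unmounted — treat as done, don't loop
    return True       # e.g. target is busy — retry later
```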
-
What's the reason for changing exclusive mounts from bind mount to symlink? Seems a number of services aren't happy with the symlink...
-
How's the share accessed? SMB share?
AFAIK both | and \ are illegal characters in SMB paths.
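Windows/SMB reserve a small set of characters that can't appear in file or directory names, and both `|` and `\` are in it. A rough check for a single path component (the helper name is made up; real servers also reject control characters and reserved names like CON or NUL):

```python
# Characters Windows/SMB disallow in file and directory names.
SMB_ILLEGAL = set('<>:"/\\|?*')

def smb_name_ok(name: str) -> bool:
    """True if the name contains none of the reserved characters.
    Simplified sketch — checks one path component only."""
    return not (set(name) & SMB_ILLEGAL)
```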
-
Did you reboot the Windows client? Windows tends not to like when something changes on the server while it's still "connected" to it.
-
To clarify, pools support trim, just not the array.
-
That's because it's built into 6.12, so it's something else.
-
2 hours ago, foux said:
Indeed, the 88443 might be the issue!
Port number is 16 bits, so anything above 65535 is invalid.
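The 16-bit limit means the valid range is 1–65535 (2^16 − 1), so 88443 can't be a port at all. A minimal range check:

```python
def valid_port(port: int) -> bool:
    """TCP/UDP port numbers are unsigned 16-bit values.
    0 is reserved, so the usable range is 1-65535."""
    return 0 < port <= 65535
```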
Starting a drive preclear during a format function will pause the format
in Stable Releases
Yep, at the end of a format, sync is used to make sure any pending writes are committed to the drive, but that's system-wide, and preclear does a constant write that won't end until it's done.
But you can pause preclear and it should go through, then resume.