Posts posted by -Daedalus
-
I recently bought, among other things, an 8500G (AM5 upgrade) for a server with a dead motherboard.
Found out the hard way that the CPU's IGP causes a kernel panic on boot.
I've got a small window of time before I have to return it.
Any chance we'll get a 6.13 RC with a newer kernel this week?
-
Have a look at AMP. There's a docker image for it on Community Applications.
-
+1
It's been asked before under different names, but some kind of "mirror" or "both" setting for mover would be great.
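For what that could look like: the requested mode is essentially "copy, don't delete". A toy sketch with stand-in paths under /tmp (a real mover works on /mnt/cache and /mnt/disk*, and would likely use rsync -a rather than cp):

```shell
# Sketch of a "mirror"/"both" mover mode: data lands on the array
# while also staying on the cache. All paths here are demo stand-ins.
CACHE=/tmp/demo_cache/share
ARRAY=/tmp/demo_array/share
mkdir -p "$CACHE" "$ARRAY"
echo "file contents" > "$CACHE/file.txt"

# A move deletes the source; a mirror just copies it:
cp -a "$CACHE/." "$ARRAY/"
ls "$CACHE" "$ARRAY"
```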
-
Hurray thread necromancy!
For what it's worth, I'm 100% with @mgutt on this one. I'm really not sure why USB was even brought up; I can think of very little that would write less.
If you really wanted, you could write it to RAM and copy it to flash once an hour or something, but even then the writes would be minuscule.
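A minimal sketch of that buffer-in-RAM idea, with made-up paths (/tmp stands in for a tmpfs mount, and a plain folder stands in for /boot on the flash drive):

```shell
# Sketch: buffer log writes in RAM, flush to flash periodically.
# Paths are stand-ins for a tmpfs mount and the USB flash drive.
RAMLOG=/tmp/demo_ramlog
FLASH=/tmp/demo_flash/logs
mkdir -p "$RAMLOG" "$FLASH"

# The logger writes to RAM-backed storage all day...
echo "something happened" >> "$RAMLOG/syslog"

# ...and this copy is what an hourly cron job would run:
cp "$RAMLOG/syslog" "$FLASH/syslog"
```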
For what it's worth: +1, I like the idea. Probably wouldn't use it too much myself, but seems like a nice little QoL add.
-
I can't add to this, except to put a question 4 in there: Why was there not a big red flashing alert in the UI for this?
-
Not sure of the technical limitations for this, but figured I'd flag it anyway given similar work is probably happening with the 6.13 train.
Kind of like split levels in shares now.
When a share is created on a ZFS pool, another setting would appear with a dropdown.
This would allow the user to create child datasets x levels deep in the share.
eg. Create share 'appdata'
Setting: Create folders as child datasets:
- Only create datasets at the top level
- Only create datasets at the top two levels
- Only create datasets at the top three levels
- Automatically create datasets at any level
- Manual - Do not automatically create datasets
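As a rough sketch of what the top-level option might do under the hood (pool, share, and folder names are all invented; the commands are printed to a plan file rather than executed, since there's no pool here):

```shell
# Dry-run sketch of "Only create datasets at the top level":
# each first-level folder of the share would become its own dataset.
POOL=cache
SHARE=appdata
: > /tmp/zfs_plan.txt
for child in plex sonarr radarr; do
    echo "zfs create -p $POOL/$SHARE/$child" >> /tmp/zfs_plan.txt
done
cat /tmp/zfs_plan.txt
```

Each child being its own dataset is what makes per-container snapshots possible later.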
The obvious use case for this is the ability to snapshot individual VMs or containers, as people are more likely to be using ZFS for cache pools than for the array.
And I know, Spaceinvader has a video series for this, but I'd like it native.
-
I actually did this because I found another one of your answers to a similar issue. It was, I think, this:
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE        DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware   0     512
/dev/loop0         0      0         1  1 /boot/bzmodules    0     512
Or, at least, these were the two files listed. The values may have differed. fwiw
-
For anyone looking at this in future:
I ended up forcing it down. The only thing I could think of still running is the syslog server. It was writing to a dataset on the 'ssd' pool.
When the server came back up, that share was on the array.
-
This has happened both times I've tried to restart the array since upgrading to 6.12.4, and changing my two pools from BTRFS to ZFS.
Syslog:
Sep 25 08:44:09 server emhttpd: shcmd (2559484): /usr/sbin/zpool export ssd
Sep 25 08:44:09 server root: cannot unmount '/mnt/ssd': pool or dataset is busy
Sep 25 08:44:09 server emhttpd: shcmd (2559484): exit status: 1
Sep 25 08:44:09 server emhttpd: Retry unmounting disk share(s)...
Sep 25 08:44:14 server emhttpd: Unmounting disks...
Sep 25 08:44:14 server emhttpd: shcmd (2559485): /usr/sbin/zpool export ssd
Sep 25 08:44:14 server root: cannot unmount '/mnt/ssd': pool or dataset is busy
Sep 25 08:44:14 server emhttpd: shcmd (2559485): exit status: 1
Sep 25 08:44:14 server emhttpd: Retry unmounting disk share(s)...
Sep 25 08:44:19 server emhttpd: Unmounting disks...
I tried to unmount /mnt/ssd normally and then forcibly, but no luck. I'd post diags, but they seem to be stuck collecting (10+ minutes now).
Any output I can get on this before I force it down?
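lsof or fuser against /mnt/ssd would be the usual way to see what's holding it; since diagnostics were stuck, here's a dependency-free sketch of the same idea, scanning /proc for open files under a path (a /tmp demo directory is used so the snippet is self-contained; on the server it would be /mnt/ssd):

```shell
# Find processes holding files open under a directory by scanning /proc.
MNT=/tmp/demo_busy
mkdir -p "$MNT"
sleep 5 > "$MNT/held.log" &   # background process keeping a file open
HOLDER=$!

FOUND=""
for fd in /proc/[0-9]*/fd/*; do
    tgt=$(readlink "$fd" 2>/dev/null) || continue
    case "$tgt" in
        "$MNT"/*) FOUND="$FOUND ${fd%%/fd/*}" ;;   # keep /proc/<pid>
    esac
done
echo "holders:$FOUND"
kill "$HOLDER" 2>/dev/null
```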
-
That goes some way towards explaining it. Cache is still useful if it's NVMe/SSD versus spinners, but certainly less so, alright.
-
I wasn't the one above quoting disk speeds, that was @PassTheSalt. It is weird that he's getting such high speeds to the array though. I only get around 90 or so sustained.
-
On 8/12/2022 at 4:37 PM, trurl said:
It is impossible to move from fast cache to slow array as fast as you can write to cache. You are doing it wrong.
I have to say, this leaves a bit of a bad taste in my mouth. It's a shit comment. Constant writes were never mentioned, but you assumed so for some reason, then decided to go full Apple with a "You're holding it wrong" variant.
Regardless of its feasibility, usefulness, or anything else, it is a legitimate feature request, and one I've suggested before.
+1 from me. The Mover Tuning plugin can do this, but it would be a nice option to have native.
-
I had moved away from an image to prevent it filling up, or allocating too much space to it. Image it is I guess!
Is there no workaround to prevent this behaviour? I'd got used to not tracking space for it.
-
I could use some help here.
Sample output:
root@server:~# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
nvme  492G  1.28T  483G  /mnt/nvme
nvme/021c0d8e3c51fa2043a923ae8827453a8368e01d44f109e517a0ccb29fa110e3  110M  1.28T  116M  legacy
nvme/033e003d210f53465a73b12264a92ec7620eeaa1f466233d48636eb0041364ac  228K  1.28T  794M  legacy
nvme/03cd86d9fadbaa84de181842cd3398a29e9d7ca5b3af7c00dd3075fb8f24f3e0  65.1M  1.28T  1.03G  legacy
nvme/03dd3c2c139853025a8cf253bb181fa781b184c5da1fa59c2834c9c6cf392bc6  208K  1.28T  694M  legacy
nvme/03e24096d6226b6ada14c0770faef6b17f9957e4df067c78ae7805ca0c7b395d  17.9M  1.28T  32.2M  legacy
nvme/03fd3b4ba6d133e7799eafe3c455dbea737db4693381612cd712b64c7990d38c  16.4M  1.28T  588M  legacy
nvme/0599aa2763c0aa6c5d36710bab682d78186f4b21259ef3e9b74fe941f9ab7570  100K  1.28T  15.1M  legacy
nvme/06ba9a9cca2e8c1ce12c9f0907f4eff9537afaf98d806a0d319c3dcd6e1a6747  276K  1.28T  986M  legacy
nvme/07883f0e01ef4020154f9d9cd77961d6bfbf0a850f460592bb7660ab40f309fc  12.9M  1.28T  128M  legacy
nvme/09840a34ce6780b9bbda5bb473e3379f373d53e8edd00617efca44b9fd7d4731  1.39M  1.28T  812M  legacy
nvme/09840a34ce6780b9bbda5bb473e3379f373d53e8edd00617efca44b9fd7d4731-init  224K  1.28T  811M  legacy
nvme/0a90b9fbdb52e1bbe5aa3d6e22866ad8965b81d8986f7316df7716a9b2d3dc49  176K  1.28T  250M  legacy
nvme/0cd905b9df7f949ce3502e3915b05e517bf38a1c804363605de49a940b0a598f  141M  1.28T  1.21G  legacy
nvme/0cd905b9df7f949ce3502e3915b05e517bf38a1c804363605de49a940b0a598f-init  256K  1.28T  1.08G  legacy
nvme/0d3dea73b93807f65b3ab60e197ad8f8ace3cd25f9ac2607567a319458aa71c5  160K  1.28T  360M  legacy
nvme/0fd6e752e00f3b9e056b8f6bdf8ab3354aff2c37296cda586496fd07ac5d8ea3  14.9M  1.28T  366M  legacy
nvme/10651987709cf54cf33ae0c171631fa00a812f55dd4afb9f8881d343e5004b85  360K  1.28T  360K  legacy
nvme/113ead63506a17cc2bdd4f7a0933d0347cea62a6ef732b94979414ca4c3192d0  116K  1.28T  6.64M  legacy
nvme/14eb062140a9e49382c36a7f9ba7f91ed67072ef2d512a8a5913a4dd4fb10e8c  436M  1.28T  468M  legacy
nvme/15821de8faf7118665d9258d6dd4da7653b10e80e3b848a43375ac3c7656c40b  54.1M  1.28T  1.08G  legacy
nvme/1839c100fafb9797b2499dfe297006ff78ab8c2b99d2bce14852a08b40c9544d  140K  1.28T  96.5M  legacy
nvme/191974c0b556409cb8cf587cfc4e7b45696b708e3b6e010d96e3c016d72c5315  96.4M  1.28T  96.4M  legacy
nvme/19ed814063c99a1a639ca806a75beabfd694e0f5601f52047962028a68e86542  131M  1.28T  250M  legacy
nvme/1a0e9188edae81deda39532926c3f0211b0d6675726063ec7015aa41f37698e8  591M  1.28T  623M  legacy
nvme/1a5b1c75b7ad68b229187bd83b60c5b8632e82eca39b98540b705c08afb8bf79  45.1M  1.28T  258M  legacy
nvme/1b1dba293258c26b8a3b52149c3ec99b20d5c13f1958f307807754703903f2fc  7.14M  1.28T  231M  legacy
That's a small sampling. There are around 200 datasets, according to ZFS Master.
So, this went:
Move everything to array via mover.
Erase pool, format as ZFS.
Create directories (not datasets) via CLI: appdata, docker, domains, system.
Recreate docker folder (it was taking a very long time to move, so I deleted it)
Move data back to pool from array.
Reinstall containers from CA.
I had ZFS Master installed in preparation for the move, but my first interaction with it was clicking "Show Datasets" and finding all of these.
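For what it's worth, hash-named datasets with "legacy" mountpoints are the pattern Docker's ZFS storage driver creates (one dataset per image layer), which would line up with the docker folder being recreated on a ZFS pool. A quick way to isolate them from a zfs list dump (sample, truncated lines are used here so the snippet is self-contained):

```shell
# Filter a 'zfs list -H -o name,mountpoint'-style dump down to the
# hash-named "legacy" datasets. Sample data stands in for real output.
cat > /tmp/zfs_dump.txt <<'EOF'
nvme /mnt/nvme
nvme/021c0d8e3c51 legacy
nvme/033e003d210f legacy
nvme/appdata /mnt/nvme/appdata
EOF
awk '$2 == "legacy" {print $1}' /tmp/zfs_dump.txt
```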
Can anyone shed any light on what happened here?
-
Apologies if this has been asked before, but is there an option to exclude certain directories from the backup?
With the old plugin, I was backing up my Plex container, but I was excluding the thumbnails directory, which meant I was backing up 30GB instead of 500GB. That option appears to not exist here, unless I missed it.
Edit: Found it! You have to click the container name to get a bunch of extra settings, fantastic!
Edit2: Small feature requests as it's the first time I'm using this. Maybe not have the log output force the scrollbar to the bottom every few seconds. Makes reviewing the log while a backup is running pretty obnoxious.
Also, is there a reason single-core compression is the default? Or at least it is when importing settings from the 2.5 app. Makes backups horrendously slow (as in, a 33 minute backup gets reduced to 3 minutes with multi-core compression)
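For anyone wanting the same exclusion outside the plugin, the idea is just an exclude pattern on the archiver. A self-contained sketch with invented demo paths (on a real system this would be something like /mnt/cache/appdata/plex, and the excluded directory would be the thumbnail cache):

```shell
# Back up an appdata folder while skipping a cache/thumbnail subtree.
SRC=/tmp/demo_appdata
mkdir -p "$SRC/plex/Media" "$SRC/plex/Cache"
echo "metadata" > "$SRC/plex/Media/db.txt"
echo "thumb"    > "$SRC/plex/Cache/t1.jpg"

# --exclude drops the whole Cache directory from the archive:
tar -czf /tmp/plex_backup.tgz --exclude='plex/Cache' -C "$SRC" plex
tar -tzf /tmp/plex_backup.tgz
```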
-
It sounds like you might have had an issue unrelated to parity. Adding a second parity drive would not destroy the filesystem or anything like that. In fact parity doesn't know about filesystems; it's literally just 0s and 1s. If you were able to browse to a parity drive, that's all you would see - junk.
My guess is that adding the second drive exposed an underlying problem, which caused the drives to start getting disabled and then the corruption. Usually this comes from a bad cable or SATA/SAS controller.
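To make the "0s and 1s" point concrete: single parity is just an XOR across the data drives, with no idea what the bytes mean, and any one drive can be rebuilt from the rest. A toy one-byte demo:

```shell
# Toy single-parity demo: parity is the XOR of the data drives' bytes.
d1=$(( 0xA5 ))                # byte from data drive 1
d2=$(( 0x3C ))                # byte from data drive 2
parity=$(( d1 ^ d2 ))         # what the parity drive stores
printf 'parity byte: 0x%02X\n' "$parity"

# "Drive 1 fails": rebuild its byte from parity and the survivor.
rebuilt=$(( parity ^ d2 ))
printf 'rebuilt d1:  0x%02X\n' "$rebuilt"   # matches the original 0xA5
```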
To try and answer your questions:
1) XFS/ZFS won't help you with disabled drives. That sounds like a hardware problem that you'll need to look into. It's like asking "are blue cards or red cards better to build a tall house of cards?" If the table is wobbly, it'll be the same either way. To your snapshot question though: No, not that I'm aware. There's a plugin (ZFS Master) that can do snapshots with ZFS, but XFS doesn't have this functionality, though there might be something hacky with rsync.
2) You can simply set primary storage to your RAIDZ2 pool, and secondary storage to "none". That'll keep files on the pool without moving to the array. There is (currently) no way to move files between two pools though.
3) I'm not familiar enough with ZFS Master to answer this - I don't know if an array drive formatted with ZFS would appear there - but you can certainly work with snapshots from the command line; 'zfs send' and 'zfs receive', for example, will sync data between two places. That's my plan as it happens, though I haven't implemented it yet: convert my dedicated backup drive on the array to ZFS, and snapshot my cache pool datasets to it, instead of my current rsync implementation. You cannot span multiple drives, however, as the array doesn't support multi-drive ZFS; each drive is its own standalone filesystem.
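The send/receive flow mentioned there, sketched as a command sequence. Dataset names are invented, and the commands are printed rather than executed since there's no pool in this example:

```shell
# Sketch of a snapshot-to-backup-drive flow with zfs send/receive.
SRC=cache/appdata
DST=backup/appdata
{
    echo "zfs snapshot $SRC@nightly"
    echo "zfs send $SRC@nightly | zfs receive -F $DST"
    echo "# later nights: send only the delta between snapshots"
    echo "zfs send -i $SRC@nightly $SRC@tonight | zfs receive $DST"
} > /tmp/zfs_backup_plan.txt
cat /tmp/zfs_backup_plan.txt
```

The incremental form (-i) is what makes this cheaper than rsync for large datasets: only changed blocks cross the wire.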
-
Just to chime in here with my two cents:
I can see both points of view here. You seem to not fully understand some things, so asking for advice first is probably a good idea in cases like this in future.
That said, I've requested better drive management features in unRAID before. All too often the advice is "replace it and let parity rebuild", which to me (working in enterprise storage) is a terrible default option. Yes, homelab is different from enterprise, but leaving user data in a zero-redundancy state shouldn't be the default. unRAID should have native evac/replace-type functionality, not rely on a parity rebuild to save the day and fall back on "well, you should have a backup" if something goes wrong.
-
Yes. That's one of the most common use-cases actually.
-
If this is the case by default that seriously needs changing.
Limetech's always said the one thing they care most about with unRAID is the data it holds.
-
+1 for this. An "Apply at next reboot" button would be great, and maybe a little badge on the icon on the dashboard to indicate pending changes.
-
I did, they came back ok. Have been running fine since. I have replacement SSDs on the way anyway (yay Prime Day...). Will keep an eye on it.
-
Hi all,
Server crashed at some point after sleep last night. Didn't notice until I got back this evening.
I came back to a server online, but both my M.2 SSDs were missing. These were recently installed in an M.2 PCIE adapter to cool them a little better. They also have a fair amount of medium errors racked up from when they were overheating when first installed.
All of this is to say there are absolutely hardware causes for this. I'm just curious if someone spots something obvious in the logs that I've overlooked that could also be a culprit.
The plan is to go to 6.12, but I'm going to let it mature a little first.
-
Nice!
Scheduling of snapshots?
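In case it's a useful reference point, under the hood this usually amounts to no more than a cron entry; dataset name and timing here are made up:

```shell
# Hypothetical crontab entry: snapshot the appdata dataset nightly at 02:00.
# (The % must be escaped in crontab command fields.)
# 0 2 * * * zfs snapshot cache/appdata@auto-$(date +\%F)
```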
-
Or an option with Mover Tuning to ignore settings and move certain shares on a schedule or something.
I have the same issue; I have my backups going to an SSD for write speed, but I don't want them sitting there taking up space that could otherwise be used for frequently accessed files that I'd like to keep on the SSDs for a longer period.
The answer is probably going to be "Soon", but just for the sake of it...
-
Thanks for the responses guys.
Is this as simple as just replacing the bz* files, or am I missing something?