FirbyKirby

Members

  • Posts: 24
  • Joined
  • Last visited

About FirbyKirby

  • Birthday 09/17/1984


FirbyKirby's Achievements

  • Noob (1/14)
  • Reputation: 8
  • Community Answers: 2

  1. @calmasacow Ditto over here, with the exact same devices. Maybe it's a vendor ID, and not a device ID? My Home Assistant VM really doesn't like it, though. Did you, by chance, resolve this?
  2. Oh! Thank you. That explains a ton. I haven't set up pass-through yet, so the GPU is just sitting idle in Unraid. I'll be very curious to see how that impacts the power usage.
  3. I feel like that's a premonition. 😄 Thanks for all your help @Renegade605!
  4. One quick follow-up question: So, I understand that docker is off making ZFS datasets of its own on my system, and apparently snapshots too. I've recently learned docker is smart enough to detect the underlying FS and then manipulate it to its own ends. So excluding these docker-generated datasets and snapshots certainly cleans up my GUI, but am I safe to ignore them and just let docker do what docker is going to do? I'm vaguely nervous about letting docker create (and, hopefully, destroy) datasets and snapshots on its own all willy-nilly. But then again, I assume everyone running docker on Unraid is doing the same thing here. (A way to peek at what docker has created is sketched after this list.)
  5. Ahh. That makes sense. Essentially, there is a "hole" in my datasets between the Unraid share dataset (/system) and the docker files further down the directory tree (which are also, I guess, created as datasets?) Yep. My docker folder is just a folder, not a dataset. And thanks for the explanation of what a dataset is and your recommendation of making docker a dataset. In my server upgrade, I moved /system, /appdata, and /domains over to a single ZFS pool with exclusive access for the best performance. So I am vaguely nervous about the amount of space these directories will consume as they dynamically grow. A reservation and quota for docker therefore makes a ton of sense for me as well. I guess I wouldn't really need to make a dataset for any directories below /docker, based on your description of the ZFS design philosophy being focused on one dataset per task (everything under /docker is the same "docker" task, so to speak.) OK, off to figure out how to create a new dataset with the ZFS Master plugin... (See the quota/reservation sketch after this list.)
  6. Thanks @Renegade605! I suspect you're absolutely right. As it is, my blindly plugging in the example "/dockerfiles/.*" would not have worked, since my directory structure is "/system/docker/docker/". But I'm still having a tough time coming up with a Lua exclusion pattern that works. I've reviewed the tutorial, and if I'm understanding this correctly, "/docker/.*" or "/docker/.+" should work for me (the former matching /docker/ as well as all subfolders and files, and the latter matching only the subfolders and files of /docker/.) I don't think I need to add the full path if I don't include the start-of-string anchor (the ^) as well. But, despite all these permutations, I still can't exclude these files. Any advice? Here's a look at my directory structure for docker. I did try "/docker/+", which didn't work. And if I understand Lua patterns correctly, the "." is necessary as the "any character" class before the * or + quantifier. (See the pattern-testing sketch after this list.)
  7. Quick question: what are these cryptic entries with a legacy mount point under my system share/dataset in the ZFS Master plugin entries on the Main tab? I'll be the first to admit, ZFS is new to me, and I'm no expert. I recently did a server upgrade, and in the process, I added new ZFS pools and moved my appdata and system shares over to them. These all popped up under my system share after a reboot (it might have been a dirty reboot) after the server was initially booted up and my files were moved to the new shares on the ZFS pool. I see that some of them have snapshots and some of them don't. Generally, there are a lot of them (hundreds), and I'm bothered because I didn't make them and I don't know what they are. On a more superficial level, they're really mucking up my GUI since they always open expanded, and I've got to really scroll to get down to my unassigned devices now. So, besides "what are these?", my follow-up questions are "how can I get rid of them?" or at least "how can I hide them?" My system share has standard stuff in it (docker and libvirt.) Docker is set as a directory (not an image.) I tried setting the dockerfile filter in the ZFS Master settings (/dockerfiles/.*), but that didn't work. I didn't have high hopes, since these entries didn't seem to match any of those file strings (like the files in the /zfs/graph/ subdirectory.) (See the legacy-mountpoint sketch after this list.)
  8. I would like to use Dynamic UPnP for remote access on my server. I've set it up following the documentation without any issue, and it works fine on most networks, but my employer's corporate firewall is picky about which ports devices on the network can connect through. So far, every port the Dynamic UPnP setting of the Connect plugin uses gets blocked by my corporate firewall overlords. I've previously had a static port forwarded without any issue whatsoever for remote management, so I know there are acceptable ports. I assume that if I cycle remote access off and on again, leasing and releasing ports from my router for long enough, I'll stumble upon one my corporate firewall accepts, but I'd much rather define a specific range to use in a setting somewhere. I respect that this may not be something the Connect plugin can control (it may be specific to my router's UPnP implementation), but I thought I'd see what this forum thinks. (See the upnpc sketch after this list.)
  9. Just to close the loop on this if anyone finds it in the future: I never solved this, but I had good backups, so I took a risk, removed the old cache, set up shares on the new pools, and restored from backup. That actually had similar problems, but they were in non-critical files that I was able to manually remove and delete, and I successfully completed the restore. FWIW, I think the mover might have been choking on my Kasm container. Kasm uses docker-in-docker and requires bypassing the FUSE FS, so I think the files in question are related to the docker-in-docker containers of some of the workspaces I set up in Kasm. Certainly, when I had issues restoring from backup, it was these container files that caused the issues. I ultimately deleted my Kasm appdata folder from the backup (it didn't have anything important in it since the config was stored in the template on the flash drive) and re-installed Kasm on the new drive without any loss of data or functionality. So, all in all, it's "fixed," though I don't really understand why it failed in the first place, and the way I got back to normal operation does bother me a bit for the future.
  10. Can anyone help me remove Dynamix System Autofan? At least, I think that's my issue. I'm trying to remove this dashboard widget. After some research, I believe it is related to Dynamix System Autofan, though that plugin was not installed on my server (I may have installed it in the past.) I tried installing it and then uninstalling it, but this widget remains. Is there a manual method to remove it, by chance? (See the manual-cleanup sketch after this list.)
  11. Good point, I do indeed: 3x M.2 NVMe, 1x NVIDIA RTX 4060, and 1x Mellanox 2-port QSFP+ (with DAC for 10 GbE). Additionally, though they're not spun up, I've got 5x SATA drives (3x HDD and 2x SSD).
  12. @Neo78 and @preepe, here's a look at my power consumption reported in Unraid after boot. The array is down, and thus docker and VM services are down as well. Nothing else significant is running. And here's system info for reference.
  13. I'll join you in the "unraid drive won't boot" boat. I just completed a major server upgrade using this motherboard, and this is the only remaining hardware issue I have. My experience is similar to yours @firstTimer, but I'll add that it's more intermittent, and it's getting worse. When I first booted after assembly, the drive booted without any issue. But as I've been adjusting BIOS settings (based on previous posts in this thread, for example) and updating firmware, the issue seems to have gotten worse. It's been intermittent and gradual enough that I can't pinpoint what I did to break things. At this point, after a cold boot, it almost never boots the Unraid drive and instead falls back to the BIOS. I then need to cycle anywhere between roughly 1 and 10 times before the drive will properly boot. I think soft resets are better than hard resets (CTRL+ALT+DEL vs. the reset switch), but again, I can't be sure. "Discard and Exit" actions from the BIOS almost never work. One trick that often works for me is to try and catch the boot menu, rather than let it fall into the BIOS on a cold start or power cycle. This took me a little while to figure out, but F8 works on the logo screen to pull up the boot menu. Sometimes the Unraid disk will be in the menu and you can bypass the BIOS. In terms of "things I've tried," I can say that I've fiddled quite a bit with the CSM (Compatibility Support Module). Turning this on gets Unraid to boot pretty reliably (maybe 100% of the time), but always in legacy mode, and I need UEFI for GPU pass-through, so it's a no-go for me (see the UEFI flash check after this list). I've also fiddled with plenty of boot settings in the BIOS, such as extending the delay on the logo screen, turning USB Legacy Support on/off, and enabling/disabling Fast Boot (thanks @Daniel15 for your advice here; I know others in the thread have suggested this to make Unraid boot, but it's not working for me, unfortunately). I've also tried every USB port on the motherboard without any change in behavior. Let's compare notes a bit more on setups. For example, what USB drive are you using @firstTimer? Maybe it's specific to the drive. I'm using a SanDisk Cruzer Blade 64GB (I started with an 8GB, but purchased a replacement and migrated as a troubleshooting step because I was worried the drive was failing.) What BIOS version are you on? I've updated to the latest (3101).
  14. I should add that my first attempt to manually run mover failed miserably. Pressing "the button" in the GUI did nothing. The log never even showed a mover event. Upon research, I found this post. I have the same boot error, and I've had the mover tuning plugin for years. So, I removed it and re-installed it from the app store, which didn't fix the issue (I still have the bad softlink). Since I don't need the tuning right now, I just removed the mover tuning plugin altogether. That got mover working again, and that's what I've been using since this hang started. I don't think it's relevant, but thought I would share (and I'd love to find a way to get that link fixed, since I DO need mover tuning once the server is back up and running from the upgrade.) (See the mover-symlink check after this list.)
  15. I'm in the middle of a big server HW upgrade (don't worry, I have backups of everything.) I'm near the end of the process without any major hardware issues. My last major step was to migrate some old btrfs cache drives to new ZFS NVMe drives. I'm following the new-ish SIO video that demonstrates how to migrate a drive to ZFS by using the mover to move everything to the array first, and then back to the newly formatted (or in my case, just new) ZFS pool. However, after a long time, when I returned to the machine, mover was still running. I turned on mover logging before the operation, and I can see that hours before I returned to the server, mover's last log entry was an unknown btrfs file on the cache drive. At this point, I attempted to stop mover manually using the "mover stop" command at the CLI, which seemed to work. However, when I went to stop the array for a reboot (seemed like a good first step), the drives would not unmount. Manually calling "umount -l /mnt/docker_cache/" again seemed to work (returned no errors), but I was still unable to stop the array. I ended up doing a hard shutdown. After a night of parity check (no errors), I did a btrfs scrub on the 2 cache drives I have (no errors.) I can also confirm that docker and VM services were both stopped before manually starting the mover (per the SIO video), so I don't think there were in-use file issues (see the open-files check after this list). Could it be a permissions thing? Attached are my diagnostics after completing the parity check and having the mover run again and get stuck again (stopped at what I think is about the same place; the web-browser-based log doesn't show the exact place the mover stopped last night, but the diagnostics seem like they're at the same place.) I'll admit, I don't fully understand what these files are (something to do with the btrfs FS?) Barring y'all's advice, my next best guess is to give up, remove the old cache drives from the server, adjust my share mappings to use the new ZFS drives (as planned), and restore my shares from backup (I backed up all the shares on the cache drives for just this type of contingency.) But I'd really like to know why this happened so I can prevent it in the future. Maybe a permissions thing? wondermutt-diagnostics-20240107-0936.zip
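
For the docker-created datasets in #4: a minimal sketch of how to see what docker's ZFS storage driver has actually made. The pool and dataset names below ("cache", "system/docker") are placeholders for whatever your own layout uses.

```bash
# Placeholder names -- substitute the pool and path that hold your docker data root.
POOL=cache
DOCKER_DS="$POOL/system/docker"

# List every dataset and snapshot docker's ZFS storage driver has created
# under the docker data root, with size and mountpoint for context.
zfs list -r -t filesystem,snapshot -o name,used,mountpoint "$DOCKER_DS"
```

In general these layer datasets and snapshots are docker's to manage; excluding them from the GUI and otherwise leaving them alone is the usual approach.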
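
For the dedicated docker dataset discussed in #5, a hedged sketch of the quota/reservation idea using plain zfs commands (ZFS Master can create datasets from its GUI as well). "cache/system/docker" and the sizes are made-up examples, and the existing directory would need to be moved aside before a dataset can take its place.

```bash
POOL=cache                       # placeholder pool name
DS="$POOL/system/docker"         # placeholder dataset for docker's data root

# Create the dataset (with parents), then bound its growth:
zfs create -p "$DS"
zfs set quota=100G "$DS"         # hard cap so docker can't swallow the pool
zfs set reservation=20G "$DS"    # guaranteed minimum so other shares can't starve it

# Verify the settings took effect:
zfs get quota,reservation "$DS"
```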
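
For the exclusion pattern in #6: one quick way to sanity-check a Lua pattern before pasting it into ZFS Master is to run it through the Lua interpreter directly. This assumes a lua binary is available and that the plugin matches the pattern anywhere in the dataset name (no implicit ^ anchor); the dataset names below are illustrative.

```bash
# Prints "true" if the Lua pattern matches anywhere in the given name.
test_pattern() {
  lua -e "print(string.match('$1', '$2') ~= nil)"
}

test_pattern 'cache/system/docker/docker/zfs/graph/0a1b2c' '/docker/.+'   # true
test_pattern 'cache/system/docker'                         '/docker/.+'   # false: nothing after /docker/
test_pattern 'cache/system/docker'                         '/docker.*'    # true
```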
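
For the cryptic legacy-mountpoint entries in #7, a read-only sketch that lists just those datasets so you can confirm they line up with docker's image layers. "cache/system" is a placeholder for your own pool/system dataset.

```bash
# Show every child dataset whose mountpoint is "legacy"; the origin column
# helps reveal the clone/snapshot relationships docker's ZFS driver creates.
zfs list -r -o name,mountpoint,origin cache/system | awk '$2 == "legacy"'
```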
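
For the UPnP port question in #8, a diagnostic sketch using upnpc from miniupnpc (not part of stock Unraid, and not something the Connect plugin exposes) to see which mappings the router has actually granted, and to confirm that a specific external port can at least be requested by hand:

```bash
# List the current UPnP port mappings on the gateway.
upnpc -l

# Request a specific external port by hand (illustrative values only):
# map WAN port 33443 to port 443 on this server over TCP.
upnpc -a 192.168.1.50 443 33443 TCP
```

Whether Connect would actually use such a mapping is another matter; this mostly helps identify which external ports the router (and the corporate firewall) will tolerate.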
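
For the stubborn Autofan widget in #10, a heavily hedged manual-cleanup sketch. It assumes the plugin's leftovers follow the standard Unraid layout (files under /boot/config/plugins and /usr/local/emhttp/plugins) and that the plugin name is dynamix.system.autofan; check what actually exists before deleting anything.

```bash
# See what, if anything, is still on disk for the autofan plugin.
ls -d /boot/config/plugins/dynamix.system.autofan* \
      /usr/local/emhttp/plugins/dynamix.system.autofan 2>/dev/null

# If the widget persists after uninstalling, removing the leftovers and
# rebooting should clear it (the /usr/local copy lives in RAM anyway).
rm -rf /boot/config/plugins/dynamix.system.autofan*
rm -rf /usr/local/emhttp/plugins/dynamix.system.autofan
```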
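
For the UEFI requirement in #13, a quick flash check worth doing alongside the BIOS fiddling: Unraid's flash drive only boots UEFI if its EFI folder is named "EFI" rather than "EFI-" (the trailing dash is how UEFI boot gets disabled). A sketch:

```bash
# The flash ships with "EFI-"; the trailing dash disables UEFI booting.
ls -d /boot/EFI*

# If only /boot/EFI- exists, enabling UEFI boot is just a rename
# (also toggleable from the flash device settings in the GUI):
mv /boot/EFI- /boot/EFI
```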
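
For the "bad softlink" in #14, a read-only mover-symlink check: whether the mover script itself is the dangling link (the Mover Tuning plugin is reported to wrap or replace the stock script at /usr/local/sbin/mover; treat that path as an assumption and only look, don't touch).

```bash
# A dangling link shows up as a "broken symbolic link" here.
ls -l /usr/local/sbin/mover*
file /usr/local/sbin/mover
```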
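
For the stuck mover and un-unmountable pool in #15, an open-files check to run before stopping the array next time: it shows whether anything still holds files open on the old cache and whether mover's helper processes are still alive ("docker_cache" matches the pool named in the post; lsof and fuser are assumed to be present).

```bash
# Anything with open files under the old cache pool will show up here.
lsof +D /mnt/docker_cache 2>/dev/null
fuser -vm /mnt/docker_cache

# See whether mover (and the move/rsync helpers it spawns) is still running
# and what it is currently working on.
ps aux | grep -E '[m]ove|[r]sync'
```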