wildfire305

Members
  • Posts: 140
  • Joined
  • Last visited

Recent Profile Visitors

772 profile views

wildfire305's Achievements

Apprentice (3/14)

Reputation: 15

  1. There is something about this pre-existing zpool that makes Unraid unable to create new shares on it from the GUI. I can easily create new datasets from the command line, and Unraid then creates a share with the dataset name (it incorrectly assigns that dataset-generated share's primary storage to the array). In another bug report a user mentioned that behavior (wrong storage assignment) when manually creating datasets from the CLI, so that part is expected, since the CLI isn't the recommended method. The ZFS attributes and the syslog error from attempting to create a share are pasted below. I did notice that a few attributes do not match my other zpool, newly created post-6.12, on which adding shares works fine.
     root@CVG02:~# zfs get all snapshot
     NAME      PROPERTY              VALUE                  SOURCE
     snapshot  type                  filesystem             -
     snapshot  creation              Fri Nov 11 2:47 2022   -
     snapshot  used                  12.8T                  -
     snapshot  available             1.18T                  -
     snapshot  referenced            41.5K                  -
     snapshot  compressratio         1.02x                  -
     snapshot  mounted               yes                    -
     snapshot  quota                 none                   default
     snapshot  reservation           none                   default
     snapshot  recordsize            128K                   default
     snapshot  mountpoint            /mnt/snapshot          local
     snapshot  sharenfs              off                    default
     snapshot  checksum              on                     default
     snapshot  compression           off                    local
     snapshot  atime                 off                    local
     snapshot  devices               on                     default
     snapshot  exec                  on                     default
     snapshot  setuid                on                     default
     snapshot  readonly              off                    default
     snapshot  zoned                 off                    default
     snapshot  snapdir               hidden                 default
     snapshot  aclmode               discard                default
     snapshot  aclinherit            restricted             default
     snapshot  createtxg             1                      -
     snapshot  canmount              on                     default
     snapshot  xattr                 on                     default
     snapshot  copies                1                      default
     snapshot  version               5                      -
     snapshot  utf8only              off                    -
     snapshot  normalization         none                   -
     snapshot  casesensitivity       sensitive              -
     snapshot  vscan                 off                    default
     snapshot  nbmand                off                    default
     snapshot  sharesmb              off                    default
     snapshot  refquota              none                   default
     snapshot  refreservation        none                   default
     snapshot  guid                  8916430419615625548    -
     snapshot  primarycache          all                    default
     snapshot  secondarycache        all                    default
     snapshot  usedbysnapshots       0B                     -
     snapshot  usedbydataset         41.5K                  -
     snapshot  usedbychildren        12.8T                  -
     snapshot  usedbyrefreservation  0B                     -
     snapshot  logbias               latency                default
     snapshot  objsetid              54                     -
     snapshot  dedup                 off                    local
     snapshot  mlslabel              none                   default
     snapshot  sync                  standard               default
     snapshot  dnodesize             legacy                 default
     snapshot  refcompressratio      1.00x                  -
     snapshot  written               41.5K                  -
     snapshot  logicalused           13.1T                  -
     snapshot  logicalreferenced     13.5K                  -
     snapshot  volmode               default                default
     snapshot  filesystem_limit      none                   default
     snapshot  snapshot_limit        none                   default
     snapshot  filesystem_count      none                   default
     snapshot  snapshot_count        none                   default
     snapshot  snapdev               hidden                 default
     snapshot  acltype               off                    default
     snapshot  context               none                   default
     snapshot  fscontext             none                   default
     snapshot  defcontext            none                   default
     snapshot  rootcontext           none                   default
     snapshot  relatime              off                    default
     snapshot  redundant_metadata    all                    default
     snapshot  overlay               on                     default
     snapshot  encryption            off                    default
     snapshot  keylocation           none                   default
     snapshot  keyformat             none                   default
     snapshot  pbkdf2iters           0                      default
     snapshot  special_small_blocks  0                      default
     root@CVG02:~# zpool status snapshot
       pool: snapshot
      state: ONLINE
       scan: scrub canceled on Sat Jul 1 10:11:14 2023
     config:

             NAME        STATE     READ WRITE CKSUM
             snapshot    ONLINE       0     0     0
               raidz1-0  ONLINE       0     0     0
                 sdk     ONLINE       0     0     0
                 sdi     ONLINE       0     0     0
                 sdj     ONLINE       0     0     0
                 sdg     ONLINE       0     0     0
                 sdl     ONLINE       0     0     0
                 sdf     ONLINE       0     0     0

     errors: No known data errors
     Yes, the scrub was cancelled on purpose: I didn't realize I still had it as a scheduled cron task AND Unraid can now run it on ZFS pools on a schedule. It had just run a few days prior.
     I pasted that so you could see what kind of pool it is - made out of 10-year-old 3TB Hitachis. Here is the syslog error when trying to create a share on the zpool named "snapshot":
     Jul 3 08:27:24 CVG02 shfs: share cache full
     Jul 3 08:27:24 CVG02 emhttpd: error: shfs_mk_share, 6451: No space left on device (28): ioctl: /newshare
     Jul 3 08:27:24 CVG02 emhttpd: shcmd (405): rm '/boot/config/shares/newshare.cfg'
     Jul 3 08:27:24 CVG02 emhttpd: Starting services...
     Jul 3 08:27:25 CVG02 emhttpd: shcmd (408): /etc/rc.d/rc.samba restart
     This error makes it sound like it cannot identify the filesystem properly, since it runs shfs_mk_share rather than zfs create. For comparison, here is a successful share creation on the other, newly created zpool:
     Jul 3 08:33:56 CVG02 shfs: /usr/sbin/zfs create 'cache/newsharenewpool'
     Jul 3 08:33:56 CVG02 emhttpd: Starting services...
     Jul 3 08:33:56 CVG02 emhttpd: shcmd (421): chmod 0777 '/mnt/user/newsharenewpool'
     Jul 3 08:33:56 CVG02 emhttpd: shcmd (422): chown 'nobody':'users' '/mnt/user/newsharenewpool'
     Jul 3 08:33:56 CVG02 emhttpd: shcmd (423): /etc/rc.d/rc.samba restart
     Diagnostics attached for posterity. In summary, I'm trying to figure out why Unraid can't create datasets on this particular zpool, and whether it can be resolved. cvg02-diagnostics-20230703-0829.zip
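     If it helps with comparing the two pools, something like this should dump and diff their properties side by side (just a sketch: "snapshot" is the problem pool and "cache" is the working post-6.12 pool from the log above; the /tmp paths are arbitrary):
     # Dump the full property list for each pool, then compare them
     zfs get -o property,value,source all snapshot > /tmp/snapshot_props.txt
     zfs get -o property,value,source all cache > /tmp/cache_props.txt
     diff /tmp/snapshot_props.txt /tmp/cache_props.txt
     Lines whose source shows as "local" on only one of the pools are the manually set attributes, which would be my first place to look.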
  2. I got it!!! That share was incompatible because I had the sharenfs property turned on. After turning off that property AND rebooting, I now have access to the files.
     root@CVG02:/mnt/snapshot/rsnapshot# ls
     alpha.0/  alpha.2/  alpha.4/  beta.0/  beta.2/  beta.4/  beta.6/
     alpha.1/  alpha.3/  alpha.5/  beta.1/  beta.3/  beta.5/
     root@CVG02:/mnt/snapshot/rsnapshot#
     So officially the "BUG" is that Unraid 6.12 is not compatible with the ZFS sharenfs property. That probably isn't something that needs to be fixed; just use NFS on the share created by Unraid instead. Thanks again for integrated ZFS support and for continuing to make what I consider the most flexible of all the home server operating systems, one that caters to all tech levels.
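     For anyone hitting the same thing, the fix boils down to something like this (a sketch; the dataset name is from my pool, substitute your own):
     # See whether NFS sharing is set directly on the dataset
     zfs get sharenfs snapshot/rsnapshot
     # Turn it off and let Unraid manage the NFS export through its own share settings
     zfs set sharenfs=off snapshot/rsnapshot
     Then reboot - in my case turning the property off alone wasn't enough; it took the reboot as well.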
  3. root@CVG02:~# zfs mount
     snapshot/rsnapshot    /mnt/snapshot/rsnapshot
     snapshot              /mnt/snapshot
     snapshot/NVR          /mnt/snapshot/NVR
     snapshot/cachemirror  /mnt/snapshot/cachemirror
     root@CVG02:~#
     This is where I originally had them mounted before the upgrade to 6.12. They have now also been mounted in /mnt/user/* by 6.12, although that doesn't show up in zfs mount or regular mount; those are the shares created by Unraid 6.12. For example, when I go into /mnt/snapshot/cachemirror or /mnt/user/cachemirror, the same files appear in both. It's the rsnapshot one that is blank.
  4. Wizardry performed:
     zfs get all snapshot/rsnapshot > rsnapshot_info.txt
     That file has all the details about the dataset. It mounts as an empty folder now, but zfs list shows it still holds the 9TB of data. rsnapshot_info.txt
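     For anyone digging into the same "zfs list shows data but the folder is empty" symptom, these are the sorts of checks that apply (a sketch only; nothing here changes data):
     # Is the dataset actually mounted, and where does ZFS think it belongs?
     zfs get mounted,mountpoint,canmount snapshot/rsnapshot
     # If it is not mounted, try mounting it by hand and note any error
     zfs mount snapshot/rsnapshot
     # A leftover non-empty directory at the mountpoint can block the mount (unless overlay is on)
     ls -la /mnt/snapshot/rsnapshot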
  5. Following Spaceinvader One's absolutely fantastic video on the (at the time) upcoming 6.12 release, I added my existing ZFS pool to the pools after upgrading. After adding it and rebooting, one of the three datasets did not come in properly. I noticed that shares were created to manage the datasets; the one that didn't add properly, "rsnapshot", came in empty. I checked the mountpoints, both where I had it mounted and where the share put it, and they were both empty, so I removed the share. (That was a mistake... don't do that! The array can't stop properly because the share can't be unmounted.)
     zfs list shows the dataset still exists, but I'm struggling to figure out how to get it imported properly or access it. Don't worry about the data; it's just 9TB of backup data from the main array, and I have an additional server keeping another copy of everything. I'm not worried about losing it, but I would like to figure out why it isn't being added properly.
     After a forced reboot the share was added back and remounted, but the directory is empty. The server is doing a parity check since it couldn't unmount the removed share. I have attached diagnostics. I'm sure there is a wizardly ZFS command that can provide more information, but I'm a newb on that - help me help you with that info. If I recall correctly there were some unusual things about the dataset, but I can't remember the specifics. cvg02-diagnostics-20230616-1917.zip
  6. I'm going to mark this as solved. I never would have suspected a wake-on-LAN plugin to have that much influence on system stability. I believe the plugin should carry a caution label. It didn't cause problems immediately, but removing it has resolved the issues I was having. I could imagine folks who like to fire parts cannons at problems being extremely upset at replacing hardware over a silly plugin. I'm not using "server grade" hardware, but I think it's close enough when you look at the base chips, and it is standardized enough that everything so far has "just worked".
  7. Why then would it not be pulled from the app store, or at least carry an incompatibility warning? It will have wasted a lot of my time if it ends up being the cause. So far the server has been stable as a rock today, and I've been running it at about 400 watts' worth of processes.
  8. The last one I installed, about a week ago, was the WOL plugin - which appeared to be partially broken. I removed it, performed the same tests, and the server did not crash. I have a hard time trusting that as the "fix" though; I would assume that plugin does nothing until you ask it to wake a computer.
  9. I was able to reliably get the server to crash when writing to the cache SSD: in 4 out of 4 tries of dd'ing 100-200GB to the cache drive, it locked up and rebooted every time. This was performed while running parity checks on the main array and the ZFS array. The cache drive (and three of the hard drives) is not connected to the HBA. I rebooted and checked the RAM with 4 passes of MemTest86 v10 - the 64GB of ECC DDR4 passed. I then rebooted into Unraid safe mode (selected from the thumb drive) and have written 500GB to the cache SSD with no hiccups, while simultaneously scrubbing the cache drive to hammer that disk as hard as I could. No lockups. SMART attributes are clean on that SSD, BTRFS device stats are all 0, and the scrub is clean. So then I started recreating the same load in safe mode: started a scrub of the Unraid array, imported my ZFS pool, started a scrub on it, and continued to hammer everything. No lockups whatsoever. All the Dockers that normally run are running fine (I didn't test the others - irrelevant). So, are plugins the primary difference between safe mode and regular mode? If so, I may have a rogue plugin.
  10. Maybe that was my fault - I changed the command to "dd if=/dev/random of=test.img bs=1M count=1000000 status=progress" and it has completed almost a terabyte of writing so far, while also performing a full ZFS scrub. I think my previous command ran me out of RAM.
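      If anyone wants to reproduce this, the only difference between the two runs is the dd block size; watching memory alongside the test would confirm or rule out the RAM theory (just a sketch - the test file path is wherever the pool is mounted, /mnt/snapshot in my case):
      # Kick off the write test on the ZFS pool in the background
      dd if=/dev/random of=/mnt/snapshot/test.img bs=1M count=1000000 status=progress &
      # Watch free memory and dirty page-cache totals every few seconds while it runs
      watch -n 5 'free -h; grep -E "Dirty|Writeback" /proc/meminfo'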
  11. Well... that dd command crashed it... looks like maybe I've got a clue.
  12. Started this command on the ZFS array to try to rule out write issues with the HBA: "dd if=/dev/random of=test.img bs=1G count=500 status=progress" ...while running a ZFS scrub - this oughta tax it.
  13. The server seems to be crashing nearly every day after running mostly solid for a couple of years. Where can I start to look? Syslog is mirrored to flash and available if desired. The only events leading up to the crashes are the flash backup plugin running every 30 minutes - which seems excessive to me. Sometimes the crash reboots the server; sometimes I have to reboot it manually. Connecting a monitor displays a black screen.
      The only recent hardware change was a slightly different HBA card (external connectors vs. internal). It ran for a couple of weeks after that before this crashing started, though, so I doubt that's it. How can I start to look for clues? I would like to rule out the HBA quickly because it is still returnable. I let the parity checks run after the crashes (4 data + 1 parity + 1 cache on the primary array, plus a 6-disk ZFS array), so I think that rules out read issues. Write issues might be ruled out by the nightly backups - the main array and cache disk back up to the ZFS array.
      The only real new addition: I added a second server that pulls a backup from this server over an NFS share on the ZFS filesystem. I switched from a BTRFS pool to the ZFS pool a couple of months ago. The new backup puts a heavy read load on that ZFS share, but it still completed last night with no errors; then 30 minutes later the primary server locked and rebooted at 4:30am, and again at 6:30am. The only scheduled task during that window is a remote server outside my local network backing up to this server through an rsync Docker that has a static IP. I recently found a forum post about switching from macvlan to ipvlan when running custom-IP Dockers and made that change this morning. cvg02-diagnostics-20221213-0834.zip
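      In case it's useful, something like this against the mirrored syslog should surface any kernel call traces (the macvlan problem in particular tends to leave them). This is only a sketch, and the path is an assumption - point it at wherever your syslog mirror actually writes:
      # Look for kernel call traces and macvlan complaints in the mirrored syslog
      grep -iE "call trace|macvlan|kernel BUG" /boot/logs/syslog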
  14. The new VirtioFS integration is certainly exciting. I upgraded to 6.11.1 with no problems and got VirtioFS working on a Windows 10 VM. I get an unexpected result, though: I can read all the data I want from the share, I can delete anything I want, and I can create folders... but I can't create files. I tried this on both a BTRFS and an XFS filesystem. The VM user is one of the approved users for those shares (although that is likely irrelevant, because this would bypass all user permissions). Where would I look to start diagnosing this?
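      In case anyone has pointers, these are the places I'd assume are worth checking first (a sketch; "Windows 10" is just a placeholder for the VM name, and the log path is the standard libvirt location, so it may differ):
      # Show how the virtiofs export is defined for the VM (the <filesystem> section of the domain XML)
      virsh dumpxml "Windows 10" | grep -A 6 "<filesystem"
      # See what virtiofsd/QEMU logged on the host while the failing file create was attempted
      grep -i virtiofs "/var/log/libvirt/qemu/Windows 10.log"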