About tr0910
Community Reputation: 38 Good
Rank: Advanced Member

  1. Follow-up on the 6.9.0 b22 with Win10 ZFS problems. If a Win10 VM is started and stopped, the server will shut down successfully. The attempt to restart a shut-down Win10 VM will fail, and this is what brings ZFS and unRaid to a bad place where unRaid will not shut down.
  2. Follow-up on the 6.9.0 b22 with Win10 ZFS install problems. When the VM is shut down and an attempt is then made to restart it, the VM tab on the server will lock up. Other than this, the server continues to operate normally. When the server is shut down from the GUI to attempt to bring the VM back online, unRaid reports that shutdown is successful with the system powered off. An open PuTTY telnet session gets the expected "The system is going down for system halt NOW!" message, but after unRaid reports the system is powered off, a press of the Enter key shows PuTTY still connected with an active telnet session. Even the unRaid array remains mounted. I don't know if this is unRaid's fault or ZFS's. I look forward to somebody else attempting to replicate this issue with 6.9.0 b22 and a Win10 VM. Syslog entries after initiating shutdown from the GUI (it should be off, but it's still live):
    Jul 2 09:33:08 Cara shutdown[140908]: shutting down for system halt
    Jul 2 09:33:08 Cara init: Switching to runlevel: 0
    Jul 2 09:33:08 Cara init: Trying to re-exec init
    Jul 2 09:33:12 Cara kernel: mdcmd (44): nocheck cancel
    Jul 2 09:33:13 Cara emhttpd: Spinning up all drives...
    Jul 2 09:33:13 Cara kernel: mdcmd (45): spinup 1
    Jul 2 09:33:14 Cara emhttpd: Stopping services...
    Jul 2 09:33:14 Cara emhttpd: shcmd (791): /etc/rc.d/rc.libvirt stop
    Jul 2 09:34:42 Cara root: Status of all loop devices
    Jul 2 09:34:42 Cara root: /dev/loop1: [2049]:4 (/boot/bzfirmware)
    Jul 2 09:34:42 Cara root: /dev/loop2: [0044]:264 (/mnt/disk1/system/libvirt/libvirt.img)
    Jul 2 09:34:42 Cara root: /dev/loop0: [0044]:260 (/mnt/disk1/system/docker/docker.img)
    Jul 2 09:34:42 Cara root: Active pids left on /mnt/*
    Jul 2 09:34:42 Cara root: USER PID ACCESS COMMAND
    Jul 2 09:34:42 Cara root: /mnt/MFS: root kernel mount /mnt/MFS
    Jul 2 09:34:42 Cara root: /mnt/disk1: root kernel mount /mnt/disk1
    Jul 2 09:34:42 Cara root: /mnt/disks: root kernel mount /mnt/disks
    Jul 2 09:34:42 Cara root: /mnt/user: root kernel mount /mnt/user
    Jul 2 09:34:42 Cara root: root 54064 f.c.. smbd
    Jul 2 09:34:42 Cara root: /mnt/user0: root kernel mount /mnt/user0
    Jul 2 09:34:42 Cara root: Active pids left on /dev/md*
    Jul 2 09:34:42 Cara root: Generating diagnostics...
    Jul 2 09:34:47 Cara avahi-daemon[4244]: Interface docker0.IPv4 no longer relevant for mDNS.
    Jul 2 09:34:47 Cara avahi-daemon[4244]: Leaving mDNS multicast group on interface docker0.IPv4 with address
    Jul 2 09:34:47 Cara avahi-daemon[4244]: Withdrawing address record for on docker0.
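When a VM hangs between states like this, libvirt's own CLI can sometimes show where it is stuck. A minimal diagnostic sketch, assuming the domain is named Win10ZFS (the actual name on this server is not stated in the post):

```shell
# List every defined domain and its state (running / shut off / paused)
virsh list --all

# Inspect the stuck domain; "Win10ZFS" is an assumed name
virsh dominfo Win10ZFS

# If it is wedged, force it off and try a clean start
virsh destroy Win10ZFS
virsh start Win10ZFS
```

If `virsh destroy` itself hangs, that would point at the underlying storage (the ZFS dataset holding the vdisk) rather than libvirt.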
  3. At 10:26 there are a few zfs entries, but it is now 13:30 and the start VM icon is still spinning away. About 12:00 would be the time that the VM was shut down. Other than this, the server continues to work normally. zfs list commands return expected results; nothing seems broken, except that the VM doesn't start. On shutting down the server, unRaid claims the server is down, but a PuTTY window shows the server is still live and will respond to zfs list etc. What it will not do is zpool export -a, which comes back with:
    cannot unmount '/mnt/MFS/VM/Win10ZFS': umount failed
    cara-diagnostics-20200701-1327.zip
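The "umount failed" above usually means some process still holds the dataset's mountpoint open. A hedged sketch of how one might hunt for the holder (the path is from the post; the pool name MFS is inferred from /mnt/MFS):

```shell
# Show which processes hold files open under the dataset's mountpoint
fuser -vm /mnt/MFS/VM/Win10ZFS

# After the holders exit (or are killed), retry the export;
# -f forces unmount of busy datasets as a last resort
zpool export -f MFS
```

In this case the stuck qemu process for the Win10 VM would be the prime suspect.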
  4. I have been testing on 6.9.0 b22 and have a ZFS mirror created on 2 spinners. On this mirror I have a Win10 VM, and now twice unRaid has refused to start the VM after a VM shutdown. The VM start command in unRaid just sits with the VM stopped and the red arrows spinning in a circle attempting to restart the VM. unRaid is still responsive, as you can create another tab and work with unRaid normally. The VM had been running successfully after a fresh unRaid boot, but shutting down a Win10 VM and restarting it causes this issue.
  5. Hey, thanks for the report. ZFS is very cool, but @steini84 maybe we should update post 1 with "Where not to use ZFS". It would appear that using ZFS to mount USB devices is not a good use case (or should only be done in cases where you are aware that ZFS is not plug and play with USB). For normal disk maintenance, unRaid leads us to believe that we can safely do this with the array stopped. ZFS is a totally different animal. Would best practice be to put a "zpool export" command into Squid's awesome User Scripts plugin and set it to happen on array stop, and "zpool import" on array start? On first startup of the array you should not need this, as ZFS will automatically do the zpool import. It would seem that User Scripts supports all this and could make ZFS behave like the unRaid array. Would this make sense?
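The array-stop/array-start idea above could be sketched as two tiny User Scripts entries. This is only an untested outline (the schedule labels are approximate); the -a flag discovers pool names automatically:

```shell
#!/bin/bash
# User Scripts, scheduled at array stop:
# flush and cleanly detach all imported ZFS pools before the array goes down
zpool export -a
```

```shell
#!/bin/bash
# User Scripts, scheduled at array start:
# re-import every pool found on attached devices
zpool import -a
```

With the pools exported at array stop, disks could then be pulled or serviced without leaving ZFS holding stale device handles.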
  6. So the conclusion is that ZFS is extremely sensitive to disconnections and does not recover on reconnection. Have you ever seen data corruption as a result of these issues?
  7. ZFS lockup on array stop and restart. I have a 6.8.3 production server that recently had ZFS installed. I took a single disk and set it up with ZFS for testing as follows:
    root@Tower:/# zpool create -m /mnt/ZFS1 ZFS1 sde
    The server behaved as expected and the ZFS1 pool was active. I then took the unRaid array offline and performed some disk maintenance. I didn't do anything to unmount ZFS1. By mistake, the ZFS disk was also pulled from the server and reinserted into the hotbay. Bringing the server back online resulted in ZFS being locked up. The zfs and zpool commands would execute, but a zfs list command would result in a hung terminal window. A zpool import also locked the terminal window. The unRaid server was working fine, but to recover ZFS, the only thing I could do was reboot the server. The ZFS pool came back online after the reboot. I expect that I needed to perform some ZFS commands on array stop and restart? What should I have done? Is there any way to recover a locked-up ZFS without a reboot?
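For the maintenance scenario above, one plausible routine (an untested sketch using the ZFS1 pool from the post) is to export the pool before stopping the array or touching disks, and import it again afterwards:

```shell
# Before maintenance: flush writes and cleanly detach the pool
zpool export ZFS1

# After the disks are back in place: re-import by name
# (a bare `zpool import` first lists what is importable)
zpool import ZFS1

# Confirm the pool and its datasets returned healthy
zpool status ZFS1
zfs list -r ZFS1
```

An exported pool has no open device handles, so pulling and reinserting its disk should no longer wedge the zfs/zpool commands.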
  8. I'm not really disappointed in the performance with btrfs. I may just reformat and run this drive as zfs and let that be it.
  9. Awesome. My apologies for missing this. Just to confirm, this only works with typed passwords, not a keyfile, correct?
  10. Thanks @dlandon. I love UD and had used it to mount VMs on a UD SSD. Having multiple cache pools now, I can leave UD for my temporary disks rather than using it for mission-critical VM use. Still, UD never broke on me. Thanks for your support, and especially for adding encryption support to UD. Some day it would be nice to have multiple encryption key support, so we could mount a disk with UD that had a different encryption key than the current array.
  11. Just learning ZFS, so pardon the dumb questions about L2ARC. Can you describe your device setup for this? And how are you using it? Presently I have my VMs on a single SATA SSD cache drive using btrfs. I don't have NVMe slots on this Intel 2600cp2 MB, so I won't be able to go full speed, but I have 64 GB of ECC RAM installed. How should I configure ZFS for best VM performance, assuming that I move my VMs from the btrfs cache drive to some kind of ZFS pool? Will ZFS provide a burst of speed to my aging hardware?
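For the VM-tuning question above, a commonly cited starting point is 4K sector alignment plus a dataset tuned for virtual-disk I/O. Everything here is an assumption for illustration: the pool name vmpool, the device sdX, and the exact property values all depend on the hardware:

```shell
# ashift=12 aligns the pool to 4K sectors, typical for SSDs
zpool create -o ashift=12 vmpool sdX

# Dataset for VM images: a smaller recordsize suits random VM I/O,
# and lz4 compression is usually cheap enough to leave on
zfs create -o recordsize=64K -o compression=lz4 vmpool/vms
```

With 64 GB of ECC RAM, the ARC alone may deliver the asked-about burst of speed, since hot VM blocks get served from memory; a separate L2ARC device matters less on a RAM-rich box with a single SATA SSD.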
  12. I'm playing with it on a test server on 6.9.0 b22. It installed and created a simple test pool so far.
  13. Is there a version of the ZFS plugin that works with 6.9.0 beta22? Installing from Community Apps gets the one that works with 6.8.3:
    Unsupported kernel detected! ZFS not installed! - Please follow this post https://forums.unraid.net/topic/41333-zfs-plugin-for-unraid/ and reinstall the plugin when this version of unRAID is supported
  14. So the new multi-cache pool nibbles away at UD territory. Is the plan to continue this direction rather than include UD as part of native unRaid as was originally mentioned several years ago? Loving the possibility of ZFS being included in base unRaid, as it seems to be getting more and more love.
  15. Tom just hinted at a roadmap for storage that might include ZFS as a native filesystem in the 6.9.0 beta 22 release notes: "A future release will include support for multiple 'unRAID array' pools. We are also considering zfs support." I have been playing around with the ZFS plugin and have been fascinated with the possibilities. The multiple cache pools included in beta22 and the multiple unRaid array pools promised for a future release are all linked to keeping unRaid relevant. We have a few more years before SSD will be cheaper per TB than spinning disks. I would expect both SSDs and mechanical drives to continue to decrease in price per TB for the foreseeable future. What it would take to eliminate mechanical drives is a disruptive technology, or a new "Dark Age" to be thrust upon us. If that happens, more than hard drives will become paperweights.