limetech Posted June 16, 2023 Author Share Posted June 16, 2023 3 hours ago, Goldmaster said: What's going to be in zfs phase 2? Hybrid pools. 2 Quote Link to comment
GeorgeJetson20 Posted June 16, 2023 Share Posted June 16, 2023 I upgraded from RC6 to the release without any issue, except (and this might be my own fault): I backed up flash, updated plugins, updated dockers, stopped dockers, did the update, waited for it to finish, then rebooted as instructed, but for some reason it's doing a parity check even though I didn't have a bad shutdown. I have 14TB drives, so it's going to take 24 hours to run. That's the only negative; otherwise it's fine. I have 2 other servers, both on the previous build, but I will wait to make sure no one has any issues. Quote Link to comment
JustOverride Posted June 16, 2023 Share Posted June 16, 2023 (edited) Upgraded from 6.11.5 to 6.12 without problems. Everything seems to be working fine... ... but how do I reset the dashboard layout? I know I read about it some time before, but I forgot, and it doesn't seem to be something that is clearly visible. Found it: it's the blue wrench on the main panel. Edited June 16, 2023 by JustOverride Quote Link to comment
jonathanselye Posted June 16, 2023 Share Posted June 16, 2023 In this release, is it possible to start a pool without a device in the main array? Quote Link to comment
gacpac Posted June 16, 2023 Share Posted June 16, 2023 can someone walk me through this ? To restore VM autostart, examine '/var/log/vfio-pci-errors' and remove offending PCI IDs from 'config/vfio-pci.cfg' file and reboot. in simpler terms please Quote Link to comment
itimpi Posted June 16, 2023 Share Posted June 16, 2023 1 hour ago, jonathanselye said: in this release is it possible to start pool without a device on main array? No, but it is likely to be a possibility in a future release. 1 Quote Link to comment
SimonF Posted June 16, 2023 Share Posted June 16, 2023 42 minutes ago, gacpac said: can someone walk me through this ? To restore VM autostart, examine '/var/log/vfio-pci-errors' and remove offending PCI IDs from 'config/vfio-pci.cfg' file and reboot. in simpler terms please This file lists the PCI devices to be bound to vfio: root@computenode:~# cat /boot/config/vfio-pci.cfg BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a 0000:05:01.0|8086:56a0 root@computenode:~# On my system 05:01.0 does not exist, so my VMs do not auto start, and I would need to remove this entry. The vendor:product ID on the second entry is also incorrect. Take a note of your devices in Tools -> System Devices. Making a change to the checked devices and saving will update the file in /boot, so check the devices the way you want and save. For example, I unchecked group 20, saved, then re-checked group 20, and now my file looks like this: root@computenode:~# cat /boot/config/vfio-pci.cfg BIND=0000:03:00.0|8086:56a0 root@computenode:~# Changes do not take effect until a reboot. Quote Link to comment
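If you prefer to clean the file by hand rather than via Tools -> System Devices, a hedged sketch of the edit looks like this. It works on a throwaway copy here; the PCI IDs mirror SimonF's example and are placeholders for whatever '/var/log/vfio-pci-errors' flags on your system (the real file lives at /boot/config/vfio-pci.cfg):

```shell
# Demonstration on a temp copy; substitute /boot/config/vfio-pci.cfg on a real server.
cfg=$(mktemp)
printf 'BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a0 0000:05:01.0|8086:56a0\n' > "$cfg"

# Remove the 0000:05:01.0 entry (the device that no longer exists),
# including the space before it so no double space is left behind.
sed -i 's/ 0000:05:01\.0|[0-9a-f:]*//' "$cfg"

cat "$cfg"
# BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a0
```

As SimonF notes, a reboot is still required before the change takes effect.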
Masterwishx Posted June 16, 2023 Share Posted June 16, 2023 Thanks to the whole Limetech team and all contributors for the amazing work. Updated to 6.12 from 6.11.5 without issues. Looking forward to the next cool features. 2 Quote Link to comment
isvein Posted June 16, 2023 Share Posted June 16, 2023 6 hours ago, limetech said: Hybrid pools. Will the functionality the "ZFS Master" plugin provides become part of Unraid at some point, without the need for the plugin? Quote Link to comment
Dr. Bogenbroom Posted June 16, 2023 Share Posted June 16, 2023 (edited) From the Release Notes: "If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate." Does anybody know why that is? I think I created a zpool on RC5, am currently on RC8, and everything seems to be running fine. Edited June 16, 2023 by Dr. Bogenbroom Quote Link to comment
chis34 Posted June 16, 2023 Share Posted June 16, 2023 Just did the migration. It works and I have no problems 😋 Thanks for the hard work done. 1 Quote Link to comment
Kilrah Posted June 16, 2023 Share Posted June 16, 2023 (edited) 25 minutes ago, Dr. Bogenbroom said: From the Release Notes: "If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate." Does anybody know why that is? I think i created a zpool on RC5, currently on RC8 and everything seems to be running fine. beta5 is not RC5, beta is the private phase before public RCs. Edited June 16, 2023 by Kilrah Quote Link to comment
Ruato Posted June 16, 2023 Share Posted June 16, 2023 Long time Unraid user here. Thank you for your hard work on this new release! ZFS is a major change, and I am considering whether I should convert my individual array drives to ZFS or start utilizing ZFS pools. I am sure it would be greatly appreciated by many if the Unraid release notes or general documentation included migration instructions from XFS array disks to ZFS pools or to individual ZFS-formatted array disks. In addition, it would be great if one could find the pros and cons of the different ZFS options, and maybe also the preconditions for ZFS usage (for example, RAM requirements). I am pretty sure that many Unraid users (that are not adept with ZFS) are considering what to do with ZFS. 2 2 Quote Link to comment
dopeytree Posted June 16, 2023 Share Posted June 16, 2023 (edited) I'm thinking of removing one of the two drives from my cache pool mirror, then wiping it and formatting it as ZFS. Copy the data over to the new single-drive ZFS cache pool and confirm the data. Then wipe the remaining btrfs cache drive and add it to the ZFS pool (which requires formatting the btrfs drive). Does this sound like the right steps for converting a btrfs cache mirror to a ZFS mirror? Edited June 16, 2023 by dopeytree Quote Link to comment
ELP1 Posted June 16, 2023 Share Posted June 16, 2023 (edited) I have a question regarding the GUI in the Dashboard view. There are new icons for cog, wrench and info. Is the square background behind them intentional? In my opinion it somehow does not fit the general view; it catches my eye a little on every login. Maybe it is a matter of taste, or of time getting used to it, I do not know. What do you guys think? I know it is just a small thing, but... Please see the attached example. Edited June 16, 2023 by ELP1 1 Quote Link to comment
itimpi Posted June 16, 2023 Share Posted June 16, 2023 1 hour ago, dopeytree said: I'm thinking to remove one of the cache pool mirrors (total 2x drives) Then wipe the other one format as ZFS. Copy data back to NEW ZFS cache pool single. Confirm data. Then wipe the other BTrFS cache, then add it to the ZFS pool (requiring a format of the BTrFS drive). Does this sound like the right steps for converting a cache pool (mirror) to ZFS mirror? I am not sure that would get you to the type of ZFS pool you want to end up with. ZFS is far more restrictive than btrfs in the options for adding extra drives after the initial setup of a ZFS pool. 1 Quote Link to comment
dopeytree Posted June 16, 2023 Share Posted June 16, 2023 (edited) Ah, OK, so it's best to back up to another disk (perhaps in the array) or another pool, then wipe the cache pool, then set it up as a new pool with all drives present and ready to go, then copy the data back? Out of interest, has anyone done any performance tests of a btrfs mirror pool vs a ZFS mirror pool on 2x NVMe SSDs? https://www.raidz-calculator.com/default.aspx Eventually I will migrate some of my array disks to a 6-disk ZFS array of video files so they can be quickly edited. Edited June 16, 2023 by dopeytree 1 Quote Link to comment
gyto6 Posted June 16, 2023 Share Posted June 16, 2023 (edited) 15 minutes ago, dopeytree said: Ah ok so best to backup to another disk (perhaps in array) or other pool. Then wipe the cache pool. Then set up as a new pool with all drives present & ready to go? Then copy data back Out of interest has anyone done any performance tests of btrfs mirror pool vs zfs mirror pool. 2x nvme ssd's. In this situation, you can simply wipe one disk and set it up as a single-drive ZFS pool. Then copy your data back onto it, wipe your second disk, and add it as a mirror. Your zpool will then be a two-device mirror. As for performance, don't expect ZFS to speed up your NVMe drives. Since you intend to store videos on the pool, the "simplest" tweaks to use are the "ashift" and "recordsize" parameters, which apply to the zpool and the dataset respectively. ZFS's main performance benefit is its ARC read cache mechanism, so if you're editing your videos there is no gain. Edited June 16, 2023 by gyto6 1 Quote Link to comment
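For reference, a hedged command-line sketch of gyto6's single-then-attach approach (on Unraid the GUI normally creates and manages pools, so treat this purely as an illustration of the underlying ZFS commands; the pool name "cache" and the device paths are placeholders for your own disks):

```shell
# Illustration only: pool name and /dev/sdx, /dev/sdy are placeholders.
# 1. Create a single-device pool on the wiped first disk.
#    ashift=12 assumes 4K-sector (Advanced Format / NVMe) media.
zpool create -o ashift=12 cache /dev/sdx

# 2. Larger records suit big sequential video files (a per-dataset property).
zfs set recordsize=1M cache

# 3. ...copy your data onto the pool and verify it...

# 4. Wipe the second disk, then attach it to mirror the first.
zpool attach cache /dev/sdx /dev/sdy

# 5. Watch the resilver until the mirror is fully in sync.
zpool status cache
```

The key point is `zpool attach` (grow an existing vdev into a mirror) rather than `zpool add` (which would add a second, non-redundant vdev).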
Revan335 Posted June 16, 2023 Share Posted June 16, 2023 Is the Share Floor plugin included and removed by the upgrade? Or does the entry in the release notes mean something else? Quote Link to comment
JorgeB Posted June 16, 2023 Share Posted June 16, 2023 33 minutes ago, dopeytree said: Out of interest has anyone done any performance tests of btrfs mirror pool vs zfs mirror pool. 2x nvme ssd's. Write speed should be very similar, read speed should be faster with zfs since it stripes both members, unlike btrfs. 1 Quote Link to comment
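One way to check JorgeB's point on your own hardware is a quick fio sequential-read run against each pool. The mount points below are hypothetical, and fio is not part of stock Unraid, so you would need to install it first; note also that ZFS's ARC can serve re-reads from RAM, so use a test size larger than the ARC for a fair on-disk comparison:

```shell
# Hypothetical mount points; adjust to match your btrfs and ZFS pools.
# Sequential 1M reads from two jobs, roughly matching a video-editing workload.
for pool in /mnt/cache_btrfs /mnt/cache_zfs; do
  echo "=== $pool ==="
  fio --name=seqread --directory="$pool" \
      --rw=read --bs=1M --size=4g --numjobs=2 \
      --ioengine=libaio --group_reporting
done
```

If ZFS is striping reads across both mirror members as described, the aggregate read bandwidth on the ZFS pool should come out noticeably higher.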
craigr Posted June 16, 2023 Share Posted June 16, 2023 5 hours ago, Ruato said: Long time Unraid user here. Thank you for your hard work for this new release! ZFS is a major change and I am considering whether I should convert my individual array drives to ZFS or whether I should start utilizing ZFS pools. I am sure it would be greatly appreciated by many if part of the Unraid release notes or general documentation there were migrate instructions from XFS array disks to ZFS pool/pools or to individual ZFS formatted array disks. In addition, it would be great, if one could find pros and cons listed for different ZFS options and maybe also preconditions for ZFS usage (for example, RAM requirements). I am pretty sure that many Unraid users (that are not adept with ZFS) are considering what to do with ZFS. 3 1 Quote Link to comment
itimpi Posted June 16, 2023 Share Posted June 16, 2023 1 hour ago, Revan335 said: Is the Share Floor Plugin included and removed by Upgrade? Or mean the Entry in Release Notes another thing? It was not removed when I did the upgrade. I think that: webgui: DeviceInfo: added automatic floor calculation simply means that this plugin will no longer be required, as changing the value to a sensible default rather than 0 is now built in, so you can remove the plugin if you have it installed. Quote Link to comment
IonelChila Posted June 16, 2023 Share Posted June 16, 2023 On 6/15/2023 at 8:38 AM, JorgeB said: Didn't you notice the edit? You can use SimonFair's updated releases. LOL who reads the fine print Thanks 3 Quote Link to comment
chis34 Posted June 16, 2023 Share Posted June 16, 2023 Since the upgrade, my NVMe cache disk is getting very hot: 55 degrees, while it was 45 degrees max before. Any idea where the problem could come from? 1 Quote Link to comment
gacpac Posted June 16, 2023 Share Posted June 16, 2023 10 hours ago, SimonF said: In this file has the PCI devices to be bound to vfio. root@computenode:~# cat /boot/config/vfio-pci.cfg BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a 0000:05:01.0|8086:56a0 root@computenode:~# On my system 05:01.0 does not exist so my VMs do not auto start. So I would need to remove this. Also the vendor:product is in correct. Take a note of your devices in tools->system devices. Make a change to the checked devices save and then set how you want this will update the file in /boot. So I unchecked group 20 saved, added in group 20 and now my file looks like this root@computenode:~# cat /boot/config/vfio-pci.cfg BIND=0000:03:00.0|8086:56a0root@computenode:~# Changes do not take effect until a reboot. Thanks, I'll review and report back if I have issues. Quote Link to comment