Unraid OS version 6.12.0 available


Recommended Posts

I upgraded from RC6 to the release without any issue except one (and this might be my own fault). I backed up flash, updated plugins, updated dockers, stopped dockers, ran the update, waited for it to finish, then rebooted as instructed - but for some reason it's doing a parity check, even though I didn't have an unclean shutdown. With 14TB drives that check will take about 24 hours to run; that's the only negative, otherwise it's fine. I have 2 other servers, both on the previous build, but I will wait to make sure no one has any issues before upgrading them.

Link to comment

Upgraded from 6.11.5 to 6.12 without problems. Everything seems to be working fine...

 

 

... but how do I reset the dashboard layout? I know I read it somewhere before, but I forgot, and it doesn't seem to be something that is clearly visible. Found it: it's the blue wrench on the main panel.

Edited by JustOverride
Link to comment
42 minutes ago, gacpac said:

can someone walk me through this ?

 

To restore VM autostart, examine '/var/log/vfio-pci-errors' and remove offending PCI IDs from 'config/vfio-pci.cfg' file and reboot.

 

in simpler terms please

This file contains the PCI devices to be bound to vfio.

 

root@computenode:~# cat /boot/config/vfio-pci.cfg
BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a 0000:05:01.0|8086:56a0
root@computenode:~# 

 

On my system 05:01.0 does not exist, so my VMs do not autostart; I would need to remove that entry. Also, the vendor:product ID is incorrect.

 

Take note of your devices in Tools -> System Devices.
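If you want to cross-check from the command line instead, something like this works (the 05:01.0 address is just the example from my file):

# list all PCI devices with their vendor:product IDs
lspci -nn

# check whether a specific address from vfio-pci.cfg actually exists
lspci -nn -s 05:01.0

# see which bindings failed at boot, as mentioned in the release notes
cat /var/log/vfio-pci-errors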

 

Make a change to the checked devices and save, then set them back the way you want and save again; this will update the file in /boot.

 

So I unchecked group 20 and saved, then added group 20 back in, and now my file looks like this:

 

root@computenode:~# cat /boot/config/vfio-pci.cfg
BIND=0000:03:00.0|8086:56a0root@computenode:~# 

 

Changes do not take effect until a reboot.

 


 

Link to comment
25 minutes ago, Dr. Bogenbroom said:

From the Release Notes:

 

"If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate."

 

Does anybody know why that is? I think i created a zpool on RC5, currently on RC8 and everything seems to be running fine.

beta5 is not RC5; the betas were the private phase before the public RCs.
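If you're not sure when (and therefore on which version) a pool was created, the pool's own history shows it; for example, assuming a pool named cache:

# the first entry shows the original 'zpool create' command with its timestamp
zpool history cache | head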

Edited by Kilrah
Link to comment

Long time Unraid user here. Thank you for your hard work for this new release!

ZFS is a major change, and I am considering whether I should convert my individual array drives to ZFS or start using ZFS pools. I am sure many would appreciate it if the Unraid release notes or general documentation included migration instructions from XFS array disks to ZFS pools or to individual ZFS-formatted array disks. It would also be great to see the pros and cons of the different ZFS options, and maybe the preconditions for ZFS usage (for example, RAM requirements). I am pretty sure that many Unraid users who are not adept with ZFS are wondering what to do.

  • Like 2
  • Upvote 2
Link to comment

I'm thinking of removing one of the cache pool mirror drives (it's a 2-drive pool).

Then wipe the other one and format it as ZFS.

Copy the data back to the NEW single-drive ZFS cache pool.

Confirm the data.

Then wipe the remaining btrfs cache drive and add it to the ZFS pool (which requires formatting the btrfs drive).

 

Does this sound like the right steps for converting a cache pool (mirror) to a ZFS mirror?

Edited by dopeytree
Link to comment

I have a question regarding the GUI in the Dashboard view. There are new icons for the cog, wrench, and info buttons. Is the square background behind them intentional? In my opinion it somehow does not fit the overall look, and it catches my eye a little every time I log in. Maybe it is a matter of taste, or something I'll get used to in time? I don't know. What do you guys think? I know it is a minor thing, but...

Please see attached example.

Unraid.png

Edited by ELP1
  • Like 1
Link to comment
1 hour ago, dopeytree said:

I'm thinking of removing one of the cache pool mirror drives (it's a 2-drive pool).

Then wipe the other one and format it as ZFS.

Copy the data back to the NEW single-drive ZFS cache pool.

Confirm the data.

Then wipe the remaining btrfs cache drive and add it to the ZFS pool (which requires formatting the btrfs drive).

 

Does this sound like the right steps for converting a cache pool (mirror) to a ZFS mirror?

I am not sure that would get you to the type of ZFS pool you want to end up with.   ZFS is far more restrictive than btrfs in the options for adding extra drives after the initial setup of a ZFS pool.

  • Upvote 1
Link to comment

Ah OK, so it's best to back up to another disk (perhaps in the array) or another pool, then wipe the cache pool, then set it up as a new pool with all drives present and ready to go, and then copy the data back?
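For the backup and copy-back steps, something along these lines from the console should do it (the paths are just examples for my setup):

# back up the cache contents to an array disk
rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/

# ...recreate the pool as ZFS, then copy everything back
rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/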

 

Out of interest, has anyone done any performance tests of a btrfs mirror pool vs a ZFS mirror pool, on 2x NVMe SSDs?

 


 

https://www.raidz-calculator.com/default.aspx

 

Eventually I will migrate some of my array disks to a 6-disk ZFS pool for video files so they can be edited quickly.

 


Edited by dopeytree
  • Like 1
Link to comment
15 minutes ago, dopeytree said:

Ah OK, so it's best to back up to another disk (perhaps in the array) or another pool, then wipe the cache pool, then set it up as a new pool with all drives present and ready to go, and then copy the data back?

Out of interest, has anyone done any performance tests of a btrfs mirror pool vs a ZFS mirror pool, on 2x NVMe SSDs?

In this situation, you can simply wipe one disk and set it up as a single-drive ZFS pool. Then copy your data back onto it, wipe your second disk, and add it as a mirror. Your zpool will then be a two-device mirror.
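I'm not sure exactly how the GUI exposes this last step, but the underlying ZFS operation is roughly the following, with the pool name and device paths as placeholders:

# attach a second device to mirror the existing single-device pool
zpool attach cache /dev/nvme0n1p1 /dev/nvme1n1p1

# watch the resilver progress
zpool status cache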

 

As for performance, don't expect ZFS to speed up your NVMe drives. Since you intend to store videos on it, the simplest tweaks to use are the "ashift" and "recordsize" parameters, which apply to the zpool and the dataset respectively.

The main ZFS benefit is its ARC read cache mechanism, so if you're editing your videos there is no real gain.
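For example, you would set those at pool creation and on the dataset holding the videos (the names here are placeholders; ashift=12 assumes 4K-sector drives, and a 1M recordsize suits large sequential video files):

# create the pool with 4K sector alignment
zpool create -o ashift=12 cache mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# use a large record size on the dataset that holds the video files
zfs create -o recordsize=1M cache/videos

# or change it on an existing dataset (only affects newly written data)
zfs set recordsize=1M cache/videos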

 

Edited by gyto6
  • Like 1
Link to comment
33 minutes ago, dopeytree said:

Out of interest, has anyone done any performance tests of a btrfs mirror pool vs a ZFS mirror pool, on 2x NVMe SSDs?

Write speed should be very similar; read speed should be faster with ZFS, since it stripes reads across both mirror members, unlike btrfs.
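If anyone wants to measure it themselves, a simple sequential-read run with fio on each pool gives a rough comparison (the mount point is just an example; bear in mind ZFS's ARC can serve repeat runs from RAM and skew the numbers):

# sequential 1M reads against a test file on the pool
fio --name=seqread --filename=/mnt/cache/fio.test --rw=read --bs=1M --size=8G

# clean up afterwards
rm /mnt/cache/fio.test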

  • Like 1
Link to comment
5 hours ago, Ruato said:

Long time Unraid user here. Thank you for your hard work for this new release!

ZFS is a major change, and I am considering whether I should convert my individual array drives to ZFS or start using ZFS pools. I am sure many would appreciate it if the Unraid release notes or general documentation included migration instructions from XFS array disks to ZFS pools or to individual ZFS-formatted array disks. It would also be great to see the pros and cons of the different ZFS options, and maybe the preconditions for ZFS usage (for example, RAM requirements). I am pretty sure that many Unraid users who are not adept with ZFS are wondering what to do.

 

 

  • Thanks 3
  • Upvote 1
Link to comment
1 hour ago, Revan335 said:

Is the Share Floor plugin included and then removed by the upgrade?

Or does the entry in the release notes mean something else?

It was not removed when I did the upgrade.

 

I think that:

webgui: DeviceInfo: added automatic floor calculation

simply means that the plugin will no longer be required, since setting the value to a sensible default rather than 0 is now built in, so you can remove the plugin if you have it installed.

Link to comment
10 hours ago, SimonF said:

This file contains the PCI devices to be bound to vfio.

 

root@computenode:~# cat /boot/config/vfio-pci.cfg
BIND=0000:03:00.0|8086:56a0 0000:05:00.0|8086:56a 0000:05:01.0|8086:56a0
root@computenode:~# 

 

On my system 05:01.0 does not exist, so my VMs do not autostart; I would need to remove that entry. Also, the vendor:product ID is incorrect.

 

Take note of your devices in Tools -> System Devices.

 

Make a change to the checked devices and save, then set them back the way you want and save again; this will update the file in /boot.

 

So I unchecked group 20 and saved, then added group 20 back in, and now my file looks like this:

 

root@computenode:~# cat /boot/config/vfio-pci.cfg
BIND=0000:03:00.0|8086:56a0root@computenode:~# 

 

Changes do not take effect until a reboot.

 


 

 

 

Thanks, I'll review and report back if I have issues.

Link to comment
