Unraid OS version 6.12.0 available



Drives won't mount. I should have known not to upgrade. Updates have been a hot mess for some time now. Very disappointed. Fortunately this is on a home server that is not critical. I have a prod server that won't be getting upgraded any time soon. The upgrade process needs to be made much more bulletproof, with an automatic rollback if anything is found to be problematic. The server in question is, from my recollection, fairly vanilla, yet it still blew up: btrfs cache drive, xfs data drives.

Edit to add. Rebooted again, and everything came up ok. Fishy at best.

6 minutes ago, lbkNhubert said:

Drives won't mount. I should have known not to upgrade. Updates have been a hot mess for some time now. Very disappointed. Fortunately this is on a home server that is not critical. I have a prod server that won't be getting upgraded any time soon. The upgrade process needs to be made much more bulletproof, with an automatic rollback if anything is found to be problematic. The server in question is, from my recollection, fairly vanilla, yet it still blew up: btrfs cache drive, xfs data drives.

Edit to add. Rebooted again, and everything came up ok. Fishy at best.

 

Sorry to hear about the trouble.  I'd suggest waiting to update in the future until at least a few days have passed or many other people have updated first.  The update is less than 12 hours old.  If you don't want to possibly have to tinker with issues, let others be your guinea pigs first.  All Limetech can do is build stable release candidates that a limited number of people test, and then do a full release when it seems ready.  Heck, Windows has loads of issues with updates and their user base is huge...  as does every Linux distribution I've ever run.

 

I usually wait, but I had free time last night.  I must say my unRAID updates almost always go flawlessly.  On the rare occasion I have had issues, I just restore my flash drive to the prior config.  As stated in every update release note, back up your flash drive before updating.  As long as you have the backup, you can't ever be down more than 15 minutes if you just want to roll back.
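
For anyone who prefers taking that backup from a shell rather than the GUI, here's a minimal sketch - not the official procedure, and it assumes the flash drive is mounted at /boot and that a share exists at /mnt/user/backups:

    # Back up the flash drive before updating (paths are assumptions).
    BACKUP_DIR=/mnt/user/backups/flash-$(date +%Y%m%d)
    mkdir -p "$BACKUP_DIR"
    rsync -a /boot/ "$BACKUP_DIR"/
    # To roll back after a bad update, copy the backed-up bz* files and
    # the config/ directory back onto the flash drive and reboot.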


Upgraded from 6.11.5 to 6.12 without issues.

 

One small inconvenience though. I'm using Firefox. When opening the console, the bottom part of the text seems to be cut off on each line. Most importantly, I don't see my underscores.

 

Example Firefox:

[screenshot: Firefox console with the bottom of each line cut off]

 

While on Edge it looks fine:

[screenshot: the same console rendered correctly in Edge]


I've not been following the ZFS developments for a while now. Can anyone confirm if it's possible to import existing TrueNAS ZFS pools into a fresh unraid install? Or is that more of a "phase 2" piece of work?

 

What I am really asking is whether it's possible to migrate my current TrueNAS Scale server over. Docker + apps I can move easily; I'm more curious whether unRAID is in a position where I can just mount the existing pools directly as primary and secondary storage and go from there, or if that's not "ready" yet. I've read the docs and I'm not entirely sure.

Can anyone confirm?

6 minutes ago, Kushan said:

Can anyone confirm if it's possible to import existing TrueNAS ZFS pools into a fresh unraid install?

If the pools were created with TrueNAS using the default settings they won't import, because they use partition #2 for zfs; if you manually disabled the "create swap on pool" option in TrueNAS (IIRC Core has a GUI option for that, Scale only via the CLI), then zfs will be on partition #1 and they should import. Importing with zfs on partition #2 is planned for phase 2, or at least that's the last info I have.
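
For reference, a rough sketch of how one might check which partition holds zfs before trying an import (run from any Linux shell; /dev/sdX is a placeholder for the TrueNAS disk):

    # Show the partitions and filesystem types on the disk; the partition
    # marked zfs_member is the one holding the pool data.
    lsblk -o NAME,SIZE,FSTYPE /dev/sdX
    # With no arguments, zpool import scans attached disks and lists any
    # pools that could be imported, without actually importing them.
    zpool import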

 

 

15 minutes ago, JorgeB said:

If the pools were created with TrueNAS using the default settings they won't import, because they use partition #2 for zfs; if you manually disabled the "create swap on pool" option in TrueNAS (IIRC Core has a GUI option for that, Scale only via the CLI), then zfs will be on partition #1 and they should import. Importing with zfs on partition #2 is planned for phase 2, or at least that's the last info I have.

 

 

 

Thanks, that makes sense actually! I never considered the swap partition. Oh well, guess I'm waiting until 6.13 then!


I created a new ZFS pool with 4 disks (2 groups of 2 devices) - a 2-vdev mirror.  All went very well... added data and moved some stuff over from my XFS pools. I then put my docker containers on it and migrated the appdata over... all was working.

 

I stopped the array to make another change... and it couldn't unmount my ZFS pool. I tried a lazy unmount, then I got:

[screenshot of the repeated unmount error]

 

(the pool name is thunder) over and over.

 

Had to perform an unclean shutdown afterwards.

Anything I did wrong?
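
For reference, a sketch of what one could try before resorting to a lazy unmount, assuming the pool is mounted at /mnt/thunder:

    # List processes that still have files open on the mount point.
    fuser -vm /mnt/thunder
    # lsof gives a similar view when pointed at the mount point.
    lsof /mnt/thunder
    # Once nothing is using it, the pool can be exported cleanly.
    zpool export thunder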


@JorgeB, unfortunately I did not think to take diagnostics prior to rebooting a second time. That was my mistake. @craigr, I did have a backup of the flash drive, so I wasn't hugely worried about losing information or being stuck; I was (and am) frustrated that the upgrades for me have been pretty bumpy. I appreciate all of the work that the team puts into the system; I would just like a more robust rollback process, ideally automatic, but if manual, a "one-step" process rather than multiple steps.

That makes me wonder if something akin to timeshift exists - I'll have to look, and if not, see if I can come up with anything, or get those who are sharper than I (a low bar, to be sure) to help create it.

10 hours ago, craigr said:

Thank you.  Successfully edited and removed the device from the config file.  VMs are auto-starting again.

Another way to accomplish this would be to click some other box which is unchecked, then click it again to uncheck it.  This will enable the "Bind selected to vfio at boot" button.  Click that and it will update the config/vfio-pci.cfg file (copying the original to config/vfio-pci.cfg.bak).
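
For reference, the manual equivalent from a shell is just backing up and editing that file - a sketch, assuming the flash drive is mounted at /boot as usual:

    # Keep a copy of the current config, then remove the stale device entry.
    cp /boot/config/vfio-pci.cfg /boot/config/vfio-pci.cfg.bak
    nano /boot/config/vfio-pci.cfg   # or vi; delete the old device line, save
    # Reboot for the change to take effect.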

 

10 hours ago, craigr said:

Love the new unRAID logo in place of the TianoCore boot screen 😁.

That's @ich777 magic!

3 minutes ago, limetech said:

Another way to accomplish this would be to click some other box which is unchecked, then click it again to uncheck it.  This will enable the "Bind selected to vfio at boot" button.  Click that and it will update the config/vfio-pci.cfg file (copying the original to config/vfio-pci.cfg.bak).

 

Oh of course : )  Sorry for making you do it the hard way @craigr

1 minute ago, limetech said:

Another way to accomplish this would be to click some other box which is unchecked, then click it again to uncheck it.  This will enable the "Bind selected to vfio at boot" button.  Click that and it will update the config/vfio-pci.cfg file (copying the original to config/vfio-pci.cfg.bak).

 

Doh!  That would have only been super simple.

 

However, now that I think about it, are you sure it would have worked?  I ask because I did bind a new NIC after pulling the old one, and the old NIC remained in the file (I never did a bind after pulling the old NIC, just after I added the new one).  I liked that because I could put the old NIC back in and it would already be (would have remained) set up to pass through to the VMs.


Total zfs newbie here, but would there be any reason not to convert my btrfs/xfs pools to zfs? From what I understand zfs has a better feature set, but I am very unfamiliar with the pros and cons. I have multiple pools, ranging from small and fast cache pools to large media pools in btrfs raid1, and I am debating converting to raidz1 or raidz2. I'm currently doing some research on the topic, but any personal anecdotes are welcome!


Thanks for the update; however, I'm also a bit confused, as I don't know much about what ZFS brings, and I don't have the need or desire to move to ZFS for the time being.

I have a very basic, standard unraid setup consisting of 1 parity drive, 1 cache drive, and 2 data drives, all XFS formatted.

What I'd like to know is:

- I assume I can continue with my existing setup without having to make any changes, correct?

- In a ZFS setup, is there no parity drive anymore? Does that mean I need a raid setup and have to mirror disks for data security?

- What about the cache drive/pool? Do we no longer need it, since without parity the write speeds are better, correct?


The release notes here explain it: https://docs.unraid.net/unraid-os/release-notes/6.12.0

 

ZFS is essentially another filesystem, like XFS & BTRFS.

The core benefit of ZFS is data integrity & repair built in.

 

The array still exists & you can use ZFS single drives in the array (for data integrity benefits). You can also use all the old formats too.

 

ZFS speed benefits only exist in a pool, much like the current BTRFS raid options available in pools.

 

More detailed ZFS article here: https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
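
As a quick illustration of that built-in integrity checking, a minimal example (the pool name "mypool" is a placeholder):

    # Read every block in the pool and verify it against its checksum,
    # repairing from redundancy (mirror/raidz copies) where possible.
    zpool scrub mypool
    # Show scrub progress and any checksum errors that were found.
    zpool status -v mypool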


I upgraded one server with no issue. The other, however, will not boot: it crashes right after the auto start and the bzroot line comes up, a small red line appears horizontally across the screen, then the server reboots, and it keeps doing that over and over again. But if I replace the flash drive files with the backup, it boots right into 6.11.5.


Would it be possible to get a clarification in either the release notes or the linked topic that the "Tailscale setup instructions" only apply to the Docker version of Tailscale, and not the plugin version? The plugin version handles the needed configuration automatically, and the instructions are not right for the plugin anyway (wrong interface name, and it attaches the service restart to docker starting, when one of the key benefits of the plugin is that it's not tied to the array/docker running).

11 hours ago, trurl said:

No

 

How large is your flash drive? How much RAM?

Hey

The flash drive is only a 4GB Imation drive.  RAM, I've got 2 x 8GB, so 16GB total.  The RC updates have all been great for me.

My main server also had issues with 2 of the earlier RCs, so I rolled them back because some buttons would not work and were unselectable - specifically the individual start buttons for any VM or docker... but I could click start all and then stop individual ones, which was not ideal... I'll probably try the new stable release to see if that's been ironed out too, as I didn't see anyone else with those issues, and it's a much more powerful server, that one.

