Unraid OS version 6.12.6 available


ljm42


Asrock N100DC-ITX.

Realtek 8111H Network

 

Deep package C-states are no longer possible; I'm stuck at C3. With 6.12.4 I reached package state C8. With 6.12.5 and .6 I see that ASPM is disabled:

[screenshot attached]

 

Installing the RTL8xxx app package is no help either. With it, too, only C3 can be reached (ASPM is shown as active, but the GitHub page states that ASPM is not active with this driver package). Power consumption increased by more than 25%.
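
For anyone who wants to check the same things on their own box, ASPM state and package C-states can be read from the console (generic Linux commands, run as root; powertop needs to be installed separately and output details vary by kernel):

    lspci -vvv | grep -i aspm    # the LnkCap/LnkCtl lines show whether ASPM is supported and enabled per device
    powertop                     # the "Idle stats" tab shows which package C-states are actually reached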

 

Needed to revert to 6.12.4.

 

Hope for a fix.

 

 

With kind regards,

Joerg

 

Edited by MPC561
22 hours ago, nekromantik said:

My server broke completely after the upgrade.

I can no longer boot it.

It won't go past BIOS or get a network connection.

I don't use Realtek or Nvidia on mine.

If you're not booting beyond the BIOS you have something else going on. You should create a separate support thread, since I don't think this is related to the release.

 

My guess is your BIOS settings have reset so you're no longer booting from the USB key, the key itself has died, or the motherboard has a hardware issue such as the USB port the key is plugged into going bad. Could just be bad luck/timing paired with the update.


I've got some really weird sh*t going on with movies being in the wrong folders. It's only on 1 disk.

I have no idea what's going on.

 

For the moment I'm going to disable the mover plugin. But I think something else is going on. These files were fine before the .5 & .6 updates.

 

The most annoying thing is having to go back and rejig the Plex posters for these movies.

 

Possibly they got some kind of ZFS corruption and .6 fixed it but put them back in their original location. I moved them from there ages ago.

Edited by dopeytree

  

30 minutes ago, dopeytree said:

What makes me think it is some kind of ZFS bug is that when I move them with 'File Manager' they move instantly... from /data/ to /data/media/movies/ on disk6

 

 

Not a bug: each ZFS dataset is essentially a separate filesystem, so it will be a physical move.
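
A quick way to check whether two paths are on the same dataset (paths here are just examples matching the ones above):

    df --output=source /mnt/disk6/data /mnt/disk6/data/media/movies
    # different sources (e.g. disk6/data vs disk6/data/media) mean different datasets,
    # so a move between them has to physically copy the data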

 

 

Edited by Kilrah

It was moving files within a share. So initially sometimes Radarr sets films to download to /data/ rather than /data/media/movies.

 

/data is the dataset, right? It's not the root name of a drive; it's the hard-link style of working. Anyway, so then I manually moved the files to /data/media/movies; they all appeared in Plex and I chose custom posters for them all.

 

I'm just trying to work out why they've moved back. It's definitely something to do with these 2 updates.

 

At first I thought I'd been hacked, but no, all the data is here and it's all good.

 

I'll double-check and read up on ZFS.

 

I am also getting some whinging about file names being too long, which stops an SMB transfer.

 

It wouldn't transfer file 41 out of 54. As you can see from the folder list on the server, there are many longer names already in place.
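
If anyone wants to hunt for the offending names, something like this should work from the console (the path is just an example; 255 characters is the usual per-name limit on Linux filesystems, and SMB clients can choke earlier on very long full paths):

    find /mnt/user/data -regextype posix-extended -regex '.*/[^/]{256,}'
    # prints any file or folder whose own name component is longer than 255 characters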

 

No need for support. Diagnostics are included in case anyone wants to explore.

 

[two screenshots attached]

 

moulin-rouge-diagnostics-20231203-2105.zip

Edited by dopeytree
12 hours ago, dopeytree said:

It was moving files within a share. So initially sometimes Radarr sets films to download to /data/ rather than /data/media/movies.

 

/data is the dataset, right? It's not the root name of a drive; it's the hard-link style of working.

 

 

Zpools in Unraid can get their names in two different ways:
1. When you create an independent zpool in Unraid, you name the pool and add the disks. The pool gets the name you give it.
2. However, if you format a drive that is part of your array as ZFS, then that drive, although part of the array, is also its own zpool. When done this way, the zpool name is taken from the disk number.

So assuming your disk6 is ZFS formatted, it is therefore a zpool, with the pool name "disk6".
A zpool can contain regular folders and/or datasets, so the /data in disk6 could be either a dataset or just a regular folder.
But if it is a dataset, then yes, the dataset name would be /data
(a dataset path in ZFS is poolname/dataset, so your ZFS path would be disk6/data).

To see what datasets are in your disk6 (or any other zpool), install the ZFS Master plugin. Then you can see the datasets clearly on the Main tab.
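
You can also check from the command line without the plugin (disk6 here is just the pool name from this example):

    zfs list -r -o name,mountpoint disk6
    # lists every dataset in the disk6 pool together with where it is mounted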

So if I understand correctly, you say you are using hard links. Hard links in ZFS do work, but with some limitations: hard links can only be created within a single dataset, not across datasets.
For example, within your disk6/data dataset, hard links can be made between files, functioning just like they would in a traditional filesystem. However, hard links cannot span different datasets in ZFS. This means that you cannot create a hard link between a file in disk6/data and another in disk6/media.
This limitation is part of the ZFS design, which emphasizes data integrity and clear boundaries between datasets. Each dataset is basically an isolated filesystem in itself, which has advantages for management, snapshots, and data integrity. But a downside is that traditional filesystem features like hard links have these constraints. I hope this helps.
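
A quick way to see that limitation in practice (the file names below are made up, and the paths assume the datasets are mounted under /mnt/disk6):

    # same dataset: this succeeds
    ln /mnt/disk6/data/file.mkv /mnt/disk6/data/file-link.mkv

    # different datasets: this fails with "Invalid cross-device link" (EXDEV)
    ln /mnt/disk6/data/file.mkv /mnt/disk6/media/file-link.mkv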


Trying to upgrade from 6.12.4 and stuck on "trying to unmount disk share: target is busy".

The Docker daemon is stopped; all VMs are stopped.

No files are open on the disk share.

The share is on a 2-SSD btrfs pool.

fuser tells me there is no process using the disk share.

godzilla-diagnostics-20231204-1101.zip

 

edit: libvirt.img is still mounted

 

edit2: umount /etc/libvirt solved it
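
For anyone hitting the same "target is busy" message, these generic commands show whether the libvirt loop image is still mounted before you unmount it:

    findmnt | grep -i libvirt      # shows anything still mounted under /etc/libvirt
    losetup -a | grep -i libvirt   # shows the loop device backed by libvirt.img
    umount /etc/libvirt            # then detach it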

Edited by caplam
10 hours ago, SpaceInvaderOne said:

Zpools in Unraid can get their names in two different ways:
1. When you create an independent zpool in Unraid, you name the pool and add the disks. The pool gets the name you give it.
2. However, if you format a drive that is part of your array as ZFS, then that drive, although part of the array, is also its own zpool. When done this way, the zpool name is taken from the disk number.

So assuming your disk6 is ZFS formatted, it is therefore a zpool, with the pool name "disk6".
A zpool can contain regular folders and/or datasets, so the /data in disk6 could be either a dataset or just a regular folder.
But if it is a dataset, then yes, the dataset name would be /data
(a dataset path in ZFS is poolname/dataset, so your ZFS path would be disk6/data).
To see what datasets are in your disk6 (or any other zpool), install the ZFS Master plugin. Then you can see the datasets clearly on the Main tab.

So if I understand correctly, you say you are using hard links. Hard links in ZFS do work, but with some limitations: hard links can only be created within a single dataset, not across datasets.
For example, within your disk6/data dataset, hard links can be made between files, functioning just like they would in a traditional filesystem. However, hard links cannot span different datasets in ZFS. This means that you cannot create a hard link between a file in disk6/data and another in disk6/media.
This limitation is part of the ZFS design, which emphasizes data integrity and clear boundaries between datasets. Each dataset is basically an isolated filesystem in itself, which has advantages for management, snapshots, and data integrity. But a downside is that traditional filesystem features like hard links have these constraints. I hope this helps.

 

Thanks -

 

OK, I looked into it a bit more. The data share is split across normal XFS array drives & a ZFS cache pool of 4x 2TB SATA SSDs.

It looks like a few weeks ago I tried to manually move some files from the zpool to the array drive to free up space on the ZFS cache for more downloads.

 

None of my array drives are ZFS.

 

The ZFS format is only used in separate pools.

 

When I look at ZFS Master there are only a few ZFS datasets.

 

Will be re-watching some of your videos for sure.

Edited by dopeytree

I seem to be having strange problems going from 6.12.4 to 6.12.6. Everything comes up initially. The first issue I ran into on this version was when I started updating primary/secondary storage for my shares: the whole GUI went unresponsive. I used SSH to initiate a shutdown, and it responded with the initiating-shutdown message but never actually shut down, so I had to force a shutdown. After that, I rolled back to 6.12.4 and everything was fine again. I upgraded back to 6.12.6 and kicked off a parity check, and things started to hang again. The console/logs start throwing nginx gateway errors. I'm no longer able to stop the array. I can no longer SSH in, and am unable to finish generating diagnostics, which seem to freeze at the ip command.

 

I've been using Unraid for a couple of years, but it's my first time running into these kinds of issues after an upgrade. I'll stick to 6.12.4 for now.


I have also noticed the server never properly shuts down.

It unmounts everything safely but you have to hold the power button for the final power-off.

As it unmounted safely there is no parity check on restart, so it's a minor issue.

 

There is also a lockup if you change DNS servers.

 

My server has an Intel I225-V 2.5Gbps LAN controller.

 

It normally stays on 24/7, but I was fiddling with getting it to use the local pfSense firewall for DNS to enforce encryption.

 

Otherwise all good. It seems to reach CPU idle more quickly.

Edited by dopeytree
11 hours ago, JorgeB said:

Do you have a Realtek 8125 NIC? Post the diags from 6.12.4.

I'm using a PRIME Z790-P WIFI which does have a Realtek NIC, but I'm not sure if it's an 8125. Not sure if it makes a difference, but it's not currently hooked up at the moment; I'm actively using the Mellanox ConnectX-2. It has been running a parity check for the past day, so I can't reproduce the issue and generate new diags yet.

12 hours ago, cpxazn said:

not sure if it's an 8125. Not sure if it makes a difference, but it's not currently hooked up at the moment

It should be. Not sure if it can still be a problem when not in use, but if it's using jumbo frames, set the MTU to 1500; the problem only occurs when using jumbo frames.
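
The persistent MTU setting normally lives under Settings > Network Settings; to check or change it temporarily from the console (eth0 is just an example interface name):

    ip link show eth0                 # the current MTU is printed on the first line
    ip link set dev eth0 mtu 1500     # back to the standard MTU until the next reboot or settings change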

