Unraid OS version 6.9.2 available



14 hours ago, wgstarks said:

Same here. Tested Chrome, Firefox and Safari on macOS Big Sur. They all just went to the top of the page.

 

This is expected behavior when the GUI is displayed in non-tabbed mode. While in tabbed mode (see display settings) it will select the appropriate tab.

 

This behavior hasn't changed since the feature was introduced; I can't remember exactly when that was, but it was a long time ago.

 

1 hour ago, bonienl said:

 

This is expected behavior when the GUI is displayed in non-tabbed mode. While in tabbed mode (see display settings) it will select the appropriate tab.

 

This behavior hasn't changed since the feature was introduced; I can't remember exactly when that was, but it was a long time ago.

 

Ok, it works in Tabbed mode.

Maybe it's because English isn't my native language, but this option wasn't clear to me without GUI help.

Once I activated it and could compare the page before and after, I understood what it does.

On 4/24/2021 at 10:35 AM, ken-ji said:

OK, now that I had time to try upgrading again, I did, and the problem has gone away.

controller-benchmark-6.9.2.png

The one thing I did beforehand, though, was finally flash my controller to P40, so maybe that was it.

 

So there really is something wrong with 6.9.2 and some LSI controllers.

I had a parity check start and was wondering why it said it was running at 90 MB/s at the start. So I stopped it, ran some benchmarks, and saw this:

 

controller-benchmark-6.9.2-2.png

 

Disk 4 actually read 60 MB/s at one point, but restarting the benchmark painted the usual 244 MB/s value.

I don't see any other issues in my logs.

 

I'm currently allowing a parity check to complete before I try anything else.

I'll probably try things like switching ports on the controller, or switching controllers to an LSI 9200-8e (I think), when I get back home to my server.

 

mediastore-diagnostics-20210424-1011.zip

3 hours ago, ken-ji said:

It might be, but I've only heard from @Zonediver, so I have no other points of reference, and I'm not having any luck finding anything like this in the wild (or my ability to Google has failed me).

 

In any case I'll try some hardware tweaks when I am able and report back.

I have an LSI Broadcom SAS 9300-8I with bios = P14 Version 16.00.00.00 and firmware = P16 Version 17.00.00.00 installed. 

I've never encountered any issues with it since unRAID version 6.8.2. I'm currently running 6.9.2. 

 

The only things I can think of checking are reflashing the LSI card, checking for a new motherboard BIOS, doing a CMOS reset and reconfiguring, and then checking your cables.

Screen Shot 2021-04-26 at 11.33.44 AM.png

Screen Shot 2021-04-26 at 11.34.39 AM.png

Screen Shot 2021-04-26 at 1.21.48 PM.png

Edited by FQs19
added controller screenshot

It's an intermittent problem.

Every time the disks are asleep and I start the test, some weird things happen (see screenshot).

But when I wake up all the disks and *then* start the test, everything is normal... strange, but OK.

I've only been observing this behavior since v6.9.2, but I'm not sure if this is an Unraid problem, a driver, or something else...

 

SAS2308 - pic1.jpg

Edited by Zonediver
17 hours ago, Zonediver said:

It's an intermittent problem.

Every time the disks are asleep and I start the test, some weird things happen (see screenshot).

But when I wake up all the disks and *then* start the test, everything is normal... strange, but OK.

I've only been observing this behavior since v6.9.2, but I'm not sure if this is an Unraid problem, a driver, or something else...

 

SAS2308 - pic1.jpg

 

Holy ---! You're correct about the spin-up thing... Hmm, I think I'll look into this angle, and probably create a cron script to wake up the drives prior to major activities.

controller-benchmark.png
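A wake-up script like that can be sketched in a few lines: read one block from each disk so every drive has to spin up before a parity check or benchmark starts. This is only a sketch, not anything Unraid or DiskSpeed ships; the device list and block size are assumptions you would adjust for your own array.

```python
import subprocess

def spin_up(devices, block_size=1024 * 1024, direct=True):
    """Force each device to spin up by reading one block from it.

    Returns {device: dd exit code} so failures are visible to the caller.
    """
    results = {}
    for dev in devices:
        cmd = ["dd", f"if={dev}", "of=/dev/null",
               f"bs={block_size}", "count=1"]
        if direct:
            # O_DIRECT bypasses the page cache, so the read really hits
            # the platter instead of being served from RAM
            cmd.append("iflag=direct")
        proc = subprocess.run(cmd, capture_output=True, check=False)
        results[dev] = proc.returncode
    return results
```

You could call this from a cron job (or a User Scripts schedule) a minute before a scheduled parity check, passing something like `["/dev/sdb", "/dev/sdc", ...]` for your array members.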

 


Something is very wrong here...

If I start a film via Plex and the HDD the film is on is asleep, it takes up to 2 (!) minutes until the film starts - that is very unpleasant...

There is also an error message that says the disk is too slow...

The problem is new - I've only been observing this since yesterday ...

I hope it is a client problem on plex and not on unraid.

 

EDIT: While the film is starting, the CPU load goes up to 27% - that's not normal...

It seems the system doesn't know which HDD the film is on, or the disk is terribly slow - but that's just a guess...

 

EDIT2: OK, scratch that - it's the Kodi client... damn...

Edited by Zonediver

Cross-posting here for greater user awareness since this was a major issue: on 6.9.2 I was unable to perform a dual-drive data rebuild and had to roll back to 6.8.3.

 

I know a dual-drive rebuild is pretty rare, and I don't know if it gets sufficiently tested in pre-release stages. I wanted to make sure users know that, at least on my hardware config, this is borked on 6.9.2.

 

Also, it seems the infamous Seagate Ironwolf drive disablement issue may have affected my server, as both of my 8TB Ironwolf drives were disabled by Unraid 6.9.2.

 

I got incredibly lucky that I only had two Ironwolfs, so data rebuild was an option.  If I had 3 of those, recent data loss would likely have resulted.

 

Paul

Edited by Pauven
On 4/26/2021 at 6:21 PM, Zonediver said:

It's an intermittent problem.

Every time the disks are asleep and I start the test, some weird things happen (see screenshot).

But when I wake up all the disks and *then* start the test, everything is normal... strange, but OK.

I've only been observing this behavior since v6.9.2, but I'm not sure if this is an Unraid problem, a driver, or something else...

 

BUMP

Is there any news on this problem?

Edited by Zonediver
8 hours ago, elcapitano said:

Attempted an update from 6.8.3, but I had the webgui issue. No containers or VMs started.

Reverted to 6.8.3. All good.

It will be a while before I make another attempt...

 

Not sure what you mean by "the webgui issue", but if you try again please capture diagnostics. 

 

If the webgui doesn't load you won't be able to go to Tools -> Diagnostics to download the zip file. But you can SSH to the server or use a local keyboard/monitor to login and then type "diagnostics". This will place a zip file in the logs folder on your flash drive which you can then upload here.


I'm unable to update from 6.8.3 to 6.9.2 via the GUI.  I consistently get

Quote

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.2-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.2-x86_64.md5 ... done
wrong md5
plugin: run failed: /bin/bash retval: 1

What's the problem here, and how can I fix it?

I'm not sure if this is the proper forum or thread; kindly direct me if I should post this elsewhere.

On 4/26/2021 at 9:21 AM, Zonediver said:

It's an intermittent problem.

Every time the disks are asleep and I start the test, some weird things happen (see screenshot).

But when I wake up all the disks and *then* start the test, everything is normal... strange, but OK.

I've only been observing this behavior since v6.9.2, but I'm not sure if this is an Unraid problem, a driver, or something else...

 

SAS2308 - pic1.jpg

 

This issue isn't due to Unraid OS.

 

For anyone getting this issue, spin up your drives before running a controller benchmark. The cause is that my spin-up logic isn't working. The first second of the 15 seconds of data read is discarded to account for any drive head relocation, but the spin-up time causes at least one second of 0 MB/sec reads, which hurts the average. The issue isn't evident in the individual drive benchmarks because they perform seek & latency tests first, so the drive is spun up before it gets to the benchmark.


I'd like to add my two bits here: the issue also seems to be present when doing parity checks, so it might not be a problem with the DiskSpeed docker but with the mpt3sas driver interacting with the LSI SAS2 controllers.

When I don't spin up my disks, my parity check gets capped at about 77 MB/s average vs. the usual 140 MB/s.

 

I'm still looking into making tweaks to work around this as I don't see anything anywhere about this spinup issue.

 

15 hours ago, Squid said:

I just tried it and the md5 is working properly for me. I'd suggest rebooting and trying again; after that, run a memtest, if only to rule it out.

Thank you, Squid.  I don't understand why, but the reboot fixed the problem.

7 hours ago, ken-ji said:

I'd like to add my two bits here: the issue also seems to be present when doing parity checks, so it might not be a problem with the DiskSpeed docker but with the mpt3sas driver interacting with the LSI SAS2 controllers.

When I don't spin up my disks, my parity check gets capped at about 77 MB/s average vs. the usual 140 MB/s.

 

I'm still looking into making tweaks to work around this as I don't see anything anywhere about this spinup issue.

 

This is likely two different issues that present in the same way. I spun down the drives connected to the MB controller, performed a DiskSpeed controller benchmark, and was able to duplicate it.


I just discovered the reason behind DiskSpeed's low controller benchmark scores. The dd command's progress MB/sec indicator is an overall average, not the speed over the last second as I had assumed, so dd includes the spin-up delay in its speed calculation. The following command against my parity drive shows top speed right away if the drive is spun up, but much slower, slowly climbing speeds if it was spun down. Take note if you use dd to test your workaround.

 

dd if=/dev/sdh of=/dev/null bs=1310720 skip=0 iflag=direct status=progress conv=noerror
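To see how much a spin-up delay skews dd's cumulative average, a quick back-of-the-envelope model helps. The figures here are illustrative (a hypothetical 5-second spin-up on a 244 MB/s drive), not measurements:

```python
def cumulative_avg(spinup_s, speed_mbs, elapsed_s):
    """MB/s that dd's status=progress would show after elapsed_s seconds,
    assuming the first spinup_s seconds transfer nothing."""
    read_mb = max(0, elapsed_s - spinup_s) * speed_mbs
    return read_mb / elapsed_s

# A 5 s spin-up drags a 15 s sample far below the drive's real speed...
print(round(cumulative_avg(5, 244, 15), 1))    # 162.7
# ...while over a long transfer the effect washes out
print(round(cumulative_avg(5, 244, 3600), 1))  # 243.7
```

This matches the behavior above: a short 15-second benchmark window is dominated by the spin-up, while a full parity check barely notices it.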


I'm very excited to see a larger push for security. Thank you. Security is very important to me, and I applaud the efforts! I'm very pleased to see this!

 

However, is there a way to adjust the GUI's failed-login lock threshold or cool-down timer? I personally believe that a max of 3 attempts is... extremely aggressive, to the point of being counter-productive. This leaves very little room to allow humans to be humans: fallible, and prone to mistakes. These values punish those cursed with fat fingers rather than serving as an effective protection measure, especially with long and complex passwords. It would be brilliant to give us a drop-down to adjust these values ourselves, or at least to share the config value names in the config files, if they exist.

 

 

 

Please allow me to articulate an argument to change these default values away from 3 and 15 minutes.

 

Yes, a tight value like this is technically more secure, but doing so renders the extra layer of protection impractical. It's not sensible or realistic. I would suggest a default of at least 5 attempts, better 10. Actually, you could flip these values: 15 attempts with a 3-minute cool-down would still be incredibly effective at protecting a moderately complex password while minimizing the friction of getting an authorized user logged in. That would result in only 300 password attempts per hour. You are not cracking an 8-character password at that rate within a single year. An attacker would be better served profiling and exploiting a vulnerability.

 

Honestly, a 15-minute cooldown after 3 typos is a "go sit in the corner, and wear this dunce cap" punishment. These are not sensible values for a home NAS. I suggest taking the minimum password length allowed, calculating the worst-case keyspace (a user choosing a dumb password with minimal complexity), and working from there to establish an acceptable "attempts per minute" to prevent brute forcing.
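The arithmetic behind the "not within a single year" claim is easy to sanity-check. Assuming a worst-case all-lowercase 8-character password (an assumption for illustration; Unraid's actual password rules may differ):

```python
# Worst-case keyspace: 8 characters, lowercase letters only
keyspace = 26 ** 8                                # 208,827,064,576 candidates

# The suggested flipped limits: 15 attempts per 3-minute window
attempts_per_hour = 15 * (60 // 3)                # 300 attempts/hour
attempts_per_year = attempts_per_hour * 24 * 365  # 2,628,000 attempts/year

# Expected time to search half the keyspace
years_to_crack = (keyspace / 2) / attempts_per_year
print(round(years_to_crack))  # roughly 40,000 years
```

So even the relaxed 15-attempts/3-minute policy keeps online brute force hopeless against the weakest plausible password; the tighter 3/15 defaults buy essentially nothing extra.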

Edited by bitcore
15 hours ago, bitcore said:

I'm very excited to see a larger push for security. Thank you. Security is very important to me, and I applaud the efforts! I'm very pleased to see this!

 

However, is there a way to adjust the GUI's failed-login lock threshold or cool-down timer? I personally believe that a max of 3 attempts is... extremely aggressive, to the point of being counter-productive. This leaves very little room to allow humans to be humans: fallible, and prone to mistakes. These values punish those cursed with fat fingers rather than serving as an effective protection measure, especially with long and complex passwords. It would be brilliant to give us a drop-down to adjust these values ourselves, or at least to share the config value names in the config files, if they exist.

 

 

 

Please allow me to articulate an argument to change these default values away from 3 and 15 minutes.

 

Yes, a tight value like this is technically more secure, but doing so renders the extra layer of protection impractical. It's not sensible or realistic. I would suggest a default of at least 5 attempts, better 10. Actually, you could flip these values: 15 attempts with a 3-minute cool-down would still be incredibly effective at protecting a moderately complex password while minimizing the friction of getting an authorized user logged in. That would result in only 300 password attempts per hour. You are not cracking an 8-character password at that rate within a single year. An attacker would be better served profiling and exploiting a vulnerability.

 

Honestly, a 15-minute cooldown after 3 typos is a "go sit in the corner, and wear this dunce cap" punishment. These are not sensible values for a home NAS. I suggest taking the minimum password length allowed, calculating the worst-case keyspace (a user choosing a dumb password with minimal complexity), and working from there to establish an acceptable "attempts per minute" to prevent brute forcing.

 

For those who use password managers, one failed login is too many. Anyway, fail2ban should be part of Unraid, and it can be configured to meet each user's needs; Unraid just needs to log failures in a sensible way that fail2ban can parse.
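For reference, a fail2ban jail for the webgui might look roughly like the fragment below. This is a hypothetical sketch: Unraid does not ship a fail2ban filter, and the filter name and log path are placeholders you would have to supply yourself once failures are logged in a parseable format.

```ini
; /etc/fail2ban/jail.d/unraid-webgui.local  (hypothetical names/paths)
[unraid-webgui]
enabled  = true
port     = http,https
filter   = unraid-webgui   ; a filter definition you would have to write
logpath  = /var/log/syslog ; wherever Unraid would log auth failures
maxretry = 10              ; more forgiving than the built-in 3
findtime = 600
bantime  = 180             ; 3-minute cool-down, per the flipped values suggested above
```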

