shire Posted September 3, 2016

Any movement on the NFS share issues?

In the absence of more info, this is going to have to wait until 6.2 'stable' is released.

I can send you a player if it will help?

That's alright. What I need to do is publish a series of test releases that generate more debugging output and/or downgrade certain components, in an attempt to bisect the release and find where the problem was introduced. Normally, if this affected all NFS clients, it would be something I'd jump on right away (and in that case we could probably reproduce it easily). But we cannot delay the 6.2 'stable' release any further, and this will have to be a "known issue" until we can get it sorted.

I had the same issues with Kodi running on an Amazon Fire TV stick. At least this would be a cheap device to purchase to reproduce the problem.
bonienl Posted September 3, 2016

I have noticed a difference in behaviour between 6.1.9 and 6.2 RC4. I am not reporting it as a bug, as I am not sure it is one. Perhaps it is expected behaviour that 6.2 only shows bridges configured in unRAID and not via the CLI. For context, my setup requires me to add 3 more custom bridges to my network config. These bridges are directly associated with their own eth interface. I achieve this by using the brctl command. In 6.1.9 I assigned each of these bridges to a VM using the VM GUI, allowing the VM to use the bridges. The problem in 6.2 RC4 (and I didn't test in previous RCs or betas) is that once the bridges are added to the OS via brctl, the unRAID VM GUI drop-down does not contain them for me to assign to a VM. ... Has anyone else come across this before, or know of an explanation or fix? Unfortunately a forum search has not helped me.

With unRAID 6.2 it is no longer possible to use arbitrary names for bridges. All available interfaces are configured via the GUI: when an ethernet interface has its bridge function enabled, the GUI (system) automatically generates the corresponding bridge and makes the ethernet interface a member of it. This bridge may also include any VLAN interfaces associated with that ethernet interface. In short, unRAID 6.2 does the complete management of all interfaces, including creation and deletion of bridges and/or VLANs. It uses fixed names (br0, br1, etc.) to be able to track assignments. Unfortunately, that means users who have created custom bridge names in the past can no longer use them. Best to delete them and use the GUI instead to create the necessary interfaces.
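For readers unfamiliar with the legacy approach being discussed here, a manually created custom bridge of the kind 6.1.9 users typically put in the go file looks roughly like this. This is a sketch; the bridge and interface names are hypothetical examples, not unRAID defaults:

```shell
# Hypothetical example of a legacy, manually created custom bridge of
# the kind unRAID 6.2 no longer recognizes; names are illustrative only.
brctl addbr vmbr1          # create a bridge with an arbitrary name
brctl addif vmbr1 eth1     # make a dedicated ethernet port a member
ip link set vmbr1 up       # bring the bridge up
```

Under 6.2, a bridge created this way still exists at the OS level but will not appear in the VM GUI drop-down, which is exactly the behaviour described above; the supported equivalent is enabling bridging on the interface in the network settings, which creates a fixed-name bridge (br0, br1, ...) automatically.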
danioj Posted September 3, 2016

With unRAID 6.2 it is no longer possible to use arbitrary names for bridges. ... It uses fixed names (br0, br1, etc.) to be able to track assignments. Unfortunately, that means users who have created custom bridge names in the past can no longer use them. Best to delete them and use the GUI instead to create the necessary interfaces.

Gotcha. I guess that's not a big deal. The important thing is the ability to do what I need. The names are no big deal; it was just something I did for my ease on the command line and in the go file.

EDIT: Now I've had a look at the GUI, it actually looks nice and simple. Plus, now I think about it, by managing it in the GUI the config will be persistent (hopefully) without me having to manage the content of the go file. Thanks for the quick reply.

EDIT: The GUI worked just fine. Actually really happy. Don't have to drop to the command line any more. Thanks.
Frank1940 Posted September 3, 2016

I finally upgraded from 6.1.9 to 6.2rc4, and had some issues. <<<snip >>> * System I/O seems much slower now. It's too soon to blame 6.2rc4 without reloading 6.1.9 and retesting, but so far I'm seeing considerably slower I/O. I did a parity check, and it finished in 21 hours 10 minutes. Previously, they generally ran between 14 and 15 hours. Another operation was very slow too. I need to do more research and tweak the tunables, but so far I can't account for the slowdown. There are no SMART changes or new errors. Memory usage is fine, lots of unused memory. CPU usage seems very low now.

What are the hardware specs for this server? You might want to peruse this thread: http://lime-technology.com/forum/index.php?topic=51036.45 There is also work being done on a new tunables tester, which is detailed here: http://lime-technology.com/forum/index.php?topic=29009.0 But in the end, I don't know if there is anything which will 'fix' older (and slower) hardware. Version 6.2 is apparently going to raise the hardware bar for folks who expect the same performance as with older versions.
landS Posted September 3, 2016

Jumping from v5 to v6 made my D525 Atom (Supermicro) barely usable. Write speed to cache has gone way down, GUI response is much slower, and parity checks are slower... before adding any plugins. But the features brought by v6 are worth the tradeoff. Am I SOL for using v6.2 on this hardware? If so, I'll have no choice but to swap hardware due to security patches...
bonienl Posted September 3, 2016

... EDIT: The GUI worked just fine. Actually really happy. Don't have to drop to the command line any more. Thanks.

Good to hear you like it! Quite some effort went into making this as easy as possible for the end user (this includes creation of VLANs as well). Under the hood, additional coding needed to be done, because Slackware doesn't support VLAN configuration out of the box. All changes made through the GUI will be persistent, and there is no longer any need to do special configurations in the go file. Your unRAID life just became a lot easier.
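To illustrate the kind of under-the-hood work this replaces, creating a VLAN sub-interface and its bridge by hand on a generic Linux box looks roughly like the sketch below. The interface names and VLAN id are hypothetical; unRAID's actual scripts may differ:

```shell
# Sketch of the manual VLAN + bridge setup the 6.2 GUI automates.
# Interface names and VLAN id are hypothetical examples.
modprobe 8021q                                    # VLAN tagging support
ip link add link eth0 name eth0.10 type vlan id 10
ip link set eth0.10 up

brctl addbr br0.10                                # bridge for the VLAN
brctl addif br0.10 eth0.10
ip link set br0.10 up
```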
Frank1940 Posted September 3, 2016

Am I SOL for using v6.2 on this hardware? If so, I'll have no choice but to swap hardware due to security patches...

There is one way to find out. Stop the array, then back up your flash drive by copying its contents to a safe location. Now upgrade to the latest rc (6.2rc4). Be sure to read the upgrade notes in the release thread! Test it, as it is highly likely this will become version 6.2.0, and evaluate its performance. If you are unhappy, it will be easy to roll back to your previous version by copying back your backup files and rebooting.
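The backup-and-rollback procedure described above can be sketched as follows, assuming the flash is mounted at /boot (the unRAID default); the backup destination is illustrative, not prescribed:

```shell
# Back up the flash drive before upgrading; the destination path is
# an illustrative choice, not an unRAID convention.
BACKUP=/mnt/disk1/backups/flash-$(date +%Y%m%d)
mkdir -p "$BACKUP"
cp -r /boot/. "$BACKUP"

# To roll back after testing, restore the saved files and reboot:
#   cp -r "$BACKUP"/. /boot && reboot
```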
landS Posted September 3, 2016

Cheers. Thanks, Frank.
Smitty2k1 Posted September 3, 2016

Jumping from v5 to v6 made my D525 Atom (Supermicro) barely usable. ... Am I SOL for using v6.2 on this hardware?

In the same boat with my D525. unRAID 6 is so slow writing to the array, and CPU utilization never exceeds 40% or so. I love my little Atom and don't want to spend $400 on newer-generation hardware.
Frank1940 Posted September 3, 2016

In the same boat with my D525. unRAID 6 is so slow writing to the array, and CPU utilization never exceeds 40% or so. I love my little Atom and don't want to spend $400 on newer-generation hardware.

The tunables tester may be able to help some in these cases. I would tend to wait until the next release (as it will be optimized to shorten the testing time) before using it: http://lime-technology.com/forum/index.php?topic=29009.0
mr-hexen Posted September 3, 2016

How can the "min requirements" bar be so much higher going from 6.1.9 to 6.2? It seems like something is chewing through the CPU, and LT should really investigate this quite a bit after 6.2.0 stable.
BRiT Posted September 3, 2016

How can the "min requirements" bar be so much higher going from 6.1.9 to 6.2? It seems like something is chewing through the CPU, and LT should really investigate this quite a bit after 6.2.0 stable.

Before or after they fix the known bugs in 6.2.0 to release 6.2.1?
mr-hexen Posted September 3, 2016

Before or after they fix the known bugs in 6.2.0 to release 6.2.1?

I guess my position is irrelevant, since I have more than enough CPU for 6.2; but since they've publicly stated they will no longer support 6.1.x once 6.2 final is released, they might be alienating a large customer base.
BRiT Posted September 3, 2016

... since they've publicly stated they will no longer support 6.1.x once 6.2 final is released, they might be alienating a large customer base.

They don't support 6.1.x now.
aptalca Posted September 3, 2016

They don't support 6.1.x now.

6.1.9 is the latest stable and is currently supported until 6.2.0 stable is released, which will be very soon.
BRiT Posted September 3, 2016

6.1.9 is the latest stable and is currently supported until 6.2.0 stable is released, which will be very soon.

No, it's not supported. It's been over six months with no security updates. There have been security issues and other bugs reported against it, to which the official response has been that they won't be fixed or patched and to run the 6.2 series. That sure isn't "supported" by any definition of the word.
spl147 Posted September 4, 2016

6.1.9 is the latest stable and is currently supported until 6.2.0 stable is released, which will be very soon.

But the issues in 6.2 that will not be fixed for the final force many users to stick with 6.1.9, so I think 6.1.x should be supported until all the issues in 6.2 are fixed. Not to mention the fact that 6.2 is crazy resource hungry... Many people run low-power machines with Atom processors; apparently that is not possible with 6.2.
aptalca Posted September 4, 2016

No, it's not supported. It's been over six months with no security updates. ... That sure isn't "supported" by any definition of the word.

Supported means that if you're having issues, you can ask them, and they will look into it and address it. If you're on 4.7 (not supported) and you ask them about an issue, they will tell you to update to the latest stable. That's what "not supported" means. Support doesn't necessarily mean updates. I have a bunch of Docker containers that I still support but no longer update; I'll still help out and fix bugs if necessary (e.g. zoneminder).
BRiT Posted September 4, 2016

That's a very different definition of "supported" from what the rest of the software industry uses.
mr-hexen Posted September 4, 2016

6.1.9 is supported, but not maintained. Does that make sense?
BRiT Posted September 4, 2016

No, that is End of Life. Though feel free to use your own non-standard definitions.
limetech Posted September 4, 2016

How can the "min requirements" bar be so much higher going from 6.1.9 to 6.2? It seems like something is chewing through the CPU, and LT should really investigate this quite a bit after 6.2.0 stable.

It's not intended that min requirements change unless you're using dual parity. We will look into transfer performance after 6.2 'stable' is released. One of our backup servers uses a Supermicro X7SPA-HF motherboard sporting an Atom D510 @ 1.66GHz with 4GB RAM. I don't see any dramatic transfer slowdown unless a parity check is running. It's a 10-drive dual-parity array, and I do see CPU spikes to 100% with a slowdown in parity-check speed, but that's expected with dual parity on this CPU, since the Q calculation can't make use of specialized instruction sets that don't exist on the Atom.
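As a quick, generic Linux check (not an unRAID-specific tool) of which SIMD extensions a given CPU offers for calculations like the dual-parity Q computation, the feature flags can be read from /proc/cpuinfo:

```shell
# Generic Linux check (not unRAID-specific): list common SIMD
# extensions reported by the first CPU's feature flags.
grep -m1 '^flags' /proc/cpuinfo | grep -o -w -e sse2 -e ssse3 -e avx2

# The kernel also logs which RAID-6 algorithm it selected at boot;
# searching the boot log shows the choice:
#   dmesg | grep -i raid6
```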
limetech Posted September 4, 2016

... That sure isn't "supported" by any definition of the word.

You all please quit griping about the release process in this topic.
RobJ Posted September 4, 2016

What are the hardware specs for this server?

I've attached my diagnostics, if anyone is interested. My CPU is an Athlon 64 X2 4600+, 2.4GHz, with a PassMark score of 1365, not great but OK for a NAS with no VMs. RAM is 5GB (DDR2). Looking it up, I realized I didn't know how old it all is. While I do find the speed disappointing, I'm expecting Tom to improve it when he finds the time, and I have considerable tunables testing to do. I do find the 'reconstruct write' feature to be an awesome boost, and have decided for now to keep it on at all times. I believe many users should consider doing that.

Found a small bug in the Parity History feature. I've been testing removing and adding a small drive (250GB), and found that when I add the drive, unRAID starts the array and clears the drive, logging it almost like a parity check, close enough that it's included in the parity-check history. Examining the syslog, both result in the very same final lines, and both begin with "mdcmd (##): check CORRECT". But the next line is different, the only difference: the parity check has "md: recovery thread: check P ...", while the clear has "md: recovery thread: clear ...". I suppose the parity check could also have "check Q", or both P and Q.

Another New Config "Retain current configuration" attempt failed for me, and I believe there's a bug in it. This time I was testing, so I could record the exact keystrokes and mouse clicks to use. I had selected the 'All' option and closed, and then wanted to see it again. I had NOT clicked the 'Apply' button. I used the browser Back button, reselected what I wanted, then clicked the Apply button and then the Done button. But when I went to the Main screen it was empty, no assignments at all. I then checked the config folder and examined super.dat and super.old, and found that BOTH were empty. I suspect that the New Config tool had *prematurely* renamed the super.dat file to super.old, even though Apply had not been clicked, which meant that when I next selected a Retain option and clicked Apply, super.dat was renamed again, losing the super.old file that actually had my assignments.

I really like this feature, but so far it has not once worked correctly. I realize it's brand new, we're not final, and my systems like to be different!

jacoback-diagnostics-20160904-1436.zip
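Until a New Config bug like the one described here is resolved, a manual safety copy of the disk-assignment file guards against both super.dat and super.old being clobbered. This sketch assumes the standard /boot/config location; the backup filename is arbitrary:

```shell
# Safety copy before using Tools > New Config; assumes the standard
# /boot/config location, with an arbitrary backup filename.
cp /boot/config/super.dat /boot/config/super.dat.manual-bak

# If assignments are lost, restore the copy (with the array stopped):
#   cp /boot/config/super.dat.manual-bak /boot/config/super.dat
```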
limetech Posted September 4, 2016

I've attached my diagnostics, if anyone is interested. ... While I do find the speed disappointing, I'm expecting Tom to improve it when he finds the time, and I have considerable tunables testing to do. ...

Hi RobJ, thanks for your detailed report and logs. We will definitely be taking a look at the speed issues.