tjb_altf4 Posted October 27, 2018 Updated the BIOS and made the jump from 6.5.3 to 6.6.3. The new interface is nice, some issues I had with VNC have gone away, and VM performance seems to have improved.
Gico Posted October 27, 2018 (edited) The GUI is partially unresponsive. Had this yesterday and rebooted; it has happened again now. I can get the Main tab, but without the array actions, and I can't get the Dashboard and Docker tabs. Tools --> Diagnostics has been sitting on "Please wait" for 15 minutes now. Array shares are available; Dockers are not. Uninstalled NerdPack and rebooted. Update: It happened again. The syslog shows nothing meaningful. Any ideas? I can SSH in but can't download diagnostics. Update 2: Downgraded to 6.6.2. Edited October 28, 2018 by Gico
stewartwb Posted October 27, 2018 On 10/25/2018 at 7:44 PM, garycase said: . . . I do recall outlining the issues in an early v6 release thread (probably ~6.1) and I also created an "UnRAID on Atom D525" thread … 😊 Gary - I've not scoured the forums looking for your system config, but I would guess that your performance issue is related to your Atom D525 and its supporting chipset interacting badly with a newer kernel or driver version introduced in unRAID 6.xx. That said, one of my servers is still running an old AMD Athlon 64 quad-core CPU, and its performance improved with the 6.xx releases. Two quick questions: Are your data drives formatted XFS? Are you using any sort of disk controller card to attach your drives? -- stewartwb
garycase Posted October 27, 2018 2 hours ago, stewartwb said: I've not scoured the forums looking for your system config, but I would guess that your performance issue is related to your Atom D525 and its supporting chipset interacting badly with a newer kernel or driver version introduced in unRAID 6.xx. I agree -- that was obvious 3 years ago when I first tried v6 on this system. I spent a fair amount of time "fiddling" with various settings to try to tweak the performance, and although it was possible to improve it some, it never came close to the performance with v5. I simply reverted that system to v5 until about a year ago, when I decided to just live with the lower performance and longer parity check times so all of my servers would be on the same version. But clearly something else has changed with 6.6.3, which adds yet another 3 hours to the parity check times. Not a big deal … I reverted the system to 6.5.3, and it will simply remain on that until I eventually decide to get rid of this server. 2 hours ago, stewartwb said: Are your data drives formatted XFS? Yes. 2 hours ago, stewartwb said: Are you using any sort of disk controller card to attach your drives? No. The SuperMicro X7SPA-HF motherboard has 6 SATA ports on board.
cyberspectre Posted October 27, 2018 On 10/23/2018 at 12:01 AM, Hoopster said: When upgrading to 6.6.x, the upgrade process automatically creates a "previous" folder on the flash drive and copies the necessary files there to facilitate a roll back, if desired. If you want to roll back, you just go to Tools-->Update OS-->Restore. Thanks. Updated it from my phone while lying on my couch, drinking a beer. All seems well so far.
alturismo Posted October 28, 2018 Hi, this may just be some weird behavior here, but my personal feeling is that since the latest updates (6.6.x) my drive spin-ups are way more frequent than before, and it's hard to reproduce what is causing them. I usually access cache files only (the cache is an NVMe), but my disks also start to spin up when accessing cache media. I tried some plugins (Active Streams, Open Files, File Activity) to catch what is causing this, but I'm really stuck, because nothing shown there could explain this odd behavior. I also tried the Cache Directories plugin; still no better. What I see (sometimes) is smbd processes opening with the mounts, but there are zero entries in File Activity that could explain it. Also, I remember that before this started, when I opened a movie which was physically on disk 3 of the array, for example, only disk 3 spun up and ran; now when I access something from the array, ALL disks are always up. Maybe someone here has an idea what to look for, or experiences the same behavior. Sadly I can't really tell when exactly this all started.
hawihoney Posted October 28, 2018 6 minutes ago, alturismo said: Hi, this may just be some weird behavior here, but my personal feeling is that since the latest updates (6.6.x) my drive spin-ups are way more frequent than before ... +1. Today I saw small blinks (activity LEDs) on all disks when writing to one single disk. I never saw that before. The activity LED of the disk being written to (and the parity disks) stayed constantly lit; the other disks showed minimal blinks, hard to see. LSI 9300-8i controller and Supermicro 5-in-3 cages.
bonienl Posted October 28, 2018 25 minutes ago, alturismo said: Hi, this may just be some weird behavior here, but my personal feeling is that since the latest updates (6.6.x) my drive spin-ups are way more frequent than before ... If you haven't done so, make a bug report under stable releases. This will keep the "issue" visible.
Frank1940 Posted October 28, 2018 (edited) 3 hours ago, alturismo said: Hi, this may just be some weird behavior here, but my personal feeling is that since the latest updates (6.6.x) my drive spin-ups are way more frequent than before ... I really suspect that the Cache Directories plugin has some serious issues with the latest releases. First, many folks are finding that it does not start running automatically. Second (in my case), even after it is cycled and the status says it is running, it does not seem to be actually caching any directories or file names: I can't see any read activity going on (looking at the Main tab) during the time I would expect it to be doing its thing. I have already posted about it here but have yet to get any response... Edited October 28, 2018 by Frank1940
bastl Posted October 28, 2018 Switching the vdisk bus to SCSI for a Q35 machine is not possible. I could do that on older Unraid versions.
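For reference, switching a vdisk to the SCSI bus in libvirt normally involves XML along these lines. This is only a sketch of the relevant elements — the image path, target device name, and controller placement are illustrative, not taken from the poster's config:

```xml
<!-- vdisk attached via the SCSI bus, backed by a virtio-scsi controller -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/user/domains/example/vdisk1.img'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

If the GUI refuses the bus change, editing the domain XML directly (Unraid's XML view) is the usual workaround, assuming the underlying QEMU machine type supports the controller.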
NewDisplayName Posted October 28, 2018 (edited) On 10/25/2018 at 2:58 PM, trurl said: Seems to me it is a good thing to make people think twice and perhaps consult the release thread before upgrading. Is it really a lot of extra effort for you? Why is it so hard to add a good thing? What do you lose by adding it? It's my decision what I want to do and when. I always want to have the latest Unraid, so why not? Edited October 28, 2018 by nuhll
alturismo Posted October 28, 2018 3 hours ago, Frank1940 said: I really suspect that the Cache Directories plugin has some serious issues with the latest releases. First, many folks are finding that it does not start running automatically. Second (in my case), even after it is cycled and the status says it is running, it does not seem to be actually caching any directories or file names: I can't see any read activity going on (looking at the Main tab) during the time I would expect it to be doing its thing. I have already posted about it here but have yet to get any response... OK, but I tested this plugin after I experienced the spin-ups ... just as a "solution", which didn't really work out. Thanks for the hint, though.
Frank1940 Posted October 28, 2018 1 hour ago, alturismo said: OK, but I tested this plugin after I experienced the spin-ups ... just as a "solution", which didn't really work out. Thanks for the hint, though. Did you try this? https://forums.unraid.net/topic/34889-dynamix-v6-plugins/?page=88&tab=comments#comment-692895 I'm not really sure this is 'the' answer, but it seems to have worked in my case. By the way, there is a script that does the actual work behind the scenes; the plugin you install is a GUI wrapper for that script. You can read about the working script in this thread: https://forums.unraid.net/topic/4351-cache_dirs-an-attempt-to-keep-directory-entries-in-ram-to-prevent-disk-spin-up/ You might want to start reading at about page 36...
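For anyone curious, the core idea of the cache_dirs script is simply to walk the share directories on a timer so their entries stay in the kernel's dentry cache, letting later browsing avoid touching (and spinning up) the disks. A minimal sketch — the helper name, share path, and interval here are made up for illustration:

```shell
# One pass of the cache_dirs idea: walk a directory tree so its entries
# land in (and stay in) the kernel's dentry cache.
cache_one_pass() {
    find "$1" -noleaf >/dev/null 2>&1
}

# The real script runs this in a loop, e.g.:
#   while true; do cache_one_pass /mnt/user/Movies; sleep 10; done
```

The actual plugin is considerably smarter (adaptive scan depth, memory pressure checks), so treat this only as the shape of the mechanism.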
alturismo Posted October 28, 2018 1 hour ago, Frank1940 said: Did you try this? https://forums.unraid.net/topic/34889-dynamix-v6-plugins/?page=88&tab=comments#comment-692895 I'm not really sure this is 'the' answer, but it seems to have worked in my case. Thank you for the info. I can reproduce the Windows Explorer test; as for the script, I'll wait for the update before testing further, but I'm pretty sure something is wrong anyway ... I never needed that kind of script before; it was just to calm the disks down for now ...
ryoko227 Posted October 29, 2018 Updated both servers from 6.6.2 with no issues to note o/
hawihoney Posted October 29, 2018 Regarding spin-ups: This morning I looked at my server; all disks were spun down. Then I tried to look at the result of a User Script that runs every night. The dialog for the log opened but stayed empty. I saw that an update for a plugin was announced (Community Applications); after clicking the Update button, the dialog opened but stayed empty. Then I looked at the syslog and found: "Oct 29 06:48:58 Tower emhttpd: error: send_file, 139: Broken pipe (32): sendfile: /usr/local/emhttp/logging.htm" I switched to the Main window: all drives were spun up.
Mat1926 Posted October 29, 2018 On 10/23/2018 at 8:01 AM, Hoopster said: If you want to roll back, you just go to Tools-->Update OS-->Restore. How do you do that manually? I mean, if I had to take the USB drive out and connect it to my Windows desktop, should I just move some files from one folder to another and then reboot? If so, which files/folders? Thnx
wgstarks Posted October 29, 2018 3 minutes ago, Mat1926 said: How do you do that manually? I mean, if I had to take the USB drive out and connect it to my Windows desktop, should I just move some files from one folder to another and then reboot? If so, which files/folders? Thnx Just move the contents of the Previous folder to the root of the flash drive, overwriting the matching existing files.
Mat1926 Posted October 29, 2018 Just now, wgstarks said: Just move the contents of the Previous folder to the root of the flash drive, overwriting the matching existing files. Thnx, is "Previous" the name of the folder also?
Mat1926 Posted October 29, 2018 Just now, wgstarks said: Yes Thnx 👍
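To make the manual rollback described above concrete, here is a sketch of the operation against a mock flash layout. The temp-dir paths and file contents are purely for demonstration — on a real server the flash root is typically mounted at /boot (or shows up as a drive letter on Windows):

```shell
# Build a mock flash layout: a root with the current files and a
# "previous" folder holding the files saved by the upgrade.
FLASH=$(mktemp -d)
mkdir -p "$FLASH/previous"
echo "6.6.3 kernel" > "$FLASH/bzimage"          # current (new) version
echo "6.5.3 kernel" > "$FLASH/previous/bzimage" # saved (old) version

# The rollback itself: copy the contents of "previous" over the
# matching files in the flash root.
cp -r "$FLASH/previous/." "$FLASH/"

cat "$FLASH/bzimage"   # prints: 6.5.3 kernel
```

After copying, a reboot of the server boots the restored version.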
Pauven Posted October 29, 2018 On 10/24/2018 at 11:47 PM, garycase said: Another annoyance … I just finished running a parity check with 6.6.3 and … The new version has added almost 3 HOURS to the parity check times !! This system used to do parity checks in just under 8 hours with v5. When I upgraded it to v6, the time for a parity check jumped to almost 14 hours !! Data transfers were also notably slower. I reverted it to v5 for a long time, but last year decided to just put up with the slower transfer speeds and longer parity checks and upgraded it to v6 so all of my servers were on the same version. This is purely a storage server – no add-ons, Dockers, or VMs – so the slower performance wasn’t really a big deal. The times never got better as new v6 versions were released – the original time of 13:50 jumped up to 16:08, then decreased to 15:18 with 6.5, and has been steady in that range throughout 2018 – my last few checks were all in the 15:10 to 15:14 range. But the upgrade from 6.5.3 to 6.6.3 has now jumped these times to nearly 18 hours (17:52) !! That’s 10 HOURS longer than they took with v5 !! I’ve never understood why v6 was so much slower for these checks … these should be the exact same set of calculations. This IS a low-end CPU … a SuperMicro Atom D525 board … but as I noted v5 did the exact same parity check in under 8 hours. The drives are all 3TB WD Reds, and it’s a single parity system. Somehow I feel you were secretly calling out to me to update the Unraid Tunables Tester. 😎 Maybe it is time.
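As a back-of-the-envelope check on the numbers quoted above, the parity-check times translate into average per-drive throughput as follows (assuming the full 3 TB of each drive is scanned; the helper name is made up and awk just does the arithmetic):

```shell
# Average MB/s implied by scanning `tb` terabytes in `hours` hours.
avg_mb_s() {  # args: terabytes, hours
    awk -v tb="$1" -v h="$2" 'BEGIN { printf "%.1f\n", tb*1e12/(h*3600)/1e6 }'
}

avg_mb_s 3 8       # v5, just under 8 h   -> prints 104.2
avg_mb_s 3 17.87   # 6.6.3, 17:52         -> prints 46.6
```

So the regression being described is a drop from roughly 104 MB/s (a plausible sequential rate for 3TB Reds) to under 50 MB/s, which is why it reads as a software-side bottleneck rather than a drive limit.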
landS Posted October 29, 2018 1 hour ago, Pauven said: Somehow I feel you were secretly calling out to me to update the Unraid Tunables Tester. 😎 Maybe it is time. Pauven - that tool was wonderful!
garycase Posted October 29, 2018 4 hours ago, Pauven said: Somehow I feel you were secretly calling out to me to update the Unraid Tunables Tester. 😎 Maybe it is time. Agree with landS => that was a wonderful tool. It'd be really cool to get an updated version 😀
jlruss9777 Posted October 30, 2018 Upgraded from 6.6.1 to 6.6.3. Everything but my VM was fine after the update. The single VM wouldn't start and gave the following execution error: "internal error: Did not find USB device 093a:2510" At the same time the VM log read: "2018-10-30 17:51:06.614+0000: shutting down, reason=failed" And the system log showed: "Oct 30 13:51:06 LargeServer kernel: vfio-pci 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none Oct 30 13:51:06 LargeServer kernel: vfio-pci 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none" I have reverted to 6.6.1 and all is working again. Any thoughts on why the problem with 6.6.3?
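That "Did not find USB device" error means libvirt could not see the passed-through device (093a:2510, the vendor:product ID from the message above) on the host at VM start. A quick sanity check is to look for the ID in the host's lsusb output before starting the VM. The helper below is a sketch — the function name and sample listing are made up; in real use you would feed it `"$(lsusb)"`:

```shell
# usb_present LISTING ID — succeeds if the vendor:product ID appears in
# an lsusb-style listing. Real use: usb_present "$(lsusb)" "093a:2510"
usb_present() {
    printf '%s\n' "$1" | grep -q "ID $2"
}
```

If the device ID is missing, the VM start will fail exactly as described; if it is present under 6.6.3 and the VM still fails, that points at the hypervisor stack rather than the device.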