olympia

Members
  • Posts: 458
  • Joined
  • Last visited

Everything posted by olympia

  1. I tried this and I can confirm that flashing the BIOS is not needed (I only checked with bare metal unRAID, not in an ESXi environment). Boot is indeed much faster (not that this matters a lot though).
  2. Right. My first finding is the most worrying; 2, 3, and 4 are just things that might be connected to 1 and point us in the right direction. I'm glad you can reproduce it, so now there's some chance that I am not going insane. Did you actually see that I also confirmed this right after your post? Sent from my HTC Desire using Tapatalk 2
  3. Just tried and I can confirm that. Reads and Writes are both increasing.
  4. I can't get around this issue. I really turned off everything I can and limited the number of directories to cache (I tried using include and exclude as well), but this just continues to occur. Joe L., do you have a clue how I should track down this issue?
  5. So you don't think it's a problem due to running out of memory? I ran the command and it correctly returns the number of top-level directories (shares). Do you mean I should run it at the exact moment cache_dirs reschedules itself? I don't know how I could manage that timing-wise. Couldn't we print this to the log file?
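Since the question above is how to capture the directory count at the moment cache_dirs reschedules itself, one option is to log it from a small script rather than trying to time it by hand. A minimal sketch, assuming a stock unRAID layout; the paths, the log file name, and the helper itself are illustrative and not part of cache_dirs:

```shell
#!/bin/sh
# Hypothetical debug helper (not part of cache_dirs): count the top-level
# share directories and append the count, with a timestamp, to a log file.
# SHARE_ROOT and LOG_FILE defaults are assumptions for a stock unRAID box.
SHARE_ROOT="${SHARE_ROOT:-/mnt/user}"
LOG_FILE="${LOG_FILE:-/tmp/cache_dirs_debug.log}"

count_shares() {
    # one directory per line, top level only
    find "$1" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l
}

echo "$(date '+%b %d %H:%M:%S') cache_dirs debug: $(count_shares "$SHARE_ROOT") top-level shares" >> "$LOG_FILE"
```

Run periodically from cron, this leaves a timestamped trail in the log, so the count around each cache_dirs restart can be checked after the fact.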
  6. Hi Joe L., I am trying to troubleshoot an issue that is happening to a friend of mine with unRAID and cache_dirs. cache_dirs is restarting, without any log entry, every ~1.5-2 hours. My bet was that the process gets killed due to insufficient memory, but this occurs even if I exclude all the larger user shares with the '-e' option. Would you have a hint for me on where to look? I uploaded the system log (it is not visible from this log, but please note that I also tried excluding a lot more shares, keeping only two very small ones in for caching). Thank you for your help in advance! syslog-2013-05-22.txt
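One way to test the out-of-memory theory for restarts like this: the kernel's OOM killer normally leaves telltale lines in the log when it terminates a process. A hedged sketch (the message patterns are the usual Linux kernel ones; the syslog path is an assumption):

```shell
#!/bin/sh
# Look for OOM-killer entries; if cache_dirs were being killed for memory,
# lines like "Out of memory: Kill process ... (cache_dirs)" should appear.
SYSLOG="${SYSLOG:-/var/log/syslog}"
grep -iE 'out of memory|oom-?kill' "$SYSLOG" 2>/dev/null \
    || echo "no OOM-killer entries found in $SYSLOG"
```

If nothing turns up there (or in the output of dmesg), the memory theory is probably wrong and the restarts have some other cause.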
  7. Yes, sure; however, there will be some tuning on my rig next week, so I am not sure I will have the necessary uptime either. Anyhow, this is hit and miss for me: I have the impression that sometimes apcupsd survives log rotation and sometimes not.
  8. Hi PeterB, Do you have any update on this issue?
  9. Hi Joe L., I have a 96GB SSD mounted as an app drive and had the HPA warning on it. I added 100GB=93778744 to myMain.conf, so it's not an issue for me, but it might be useful for others. Of course, I leave it up to you whether or not to add it to the mainline. ...and... thank you for unMENU 1.6 and its improvements.
  10. It is announced in the very first sentence of this thread...
  11. I don't have this issue, but it might be similar to this report: http://lime-technology.com/forum/index.php?topic=25250.msg222665#msg222665
  12. Thanks Joe L., I finally figured out that it was the good old smarthistory that caused the weird behaviour with the spin downs.
  13. Are the colored status balls in the webGui consistent with the disk spinning state? Also, I'd like to see the full system log.

      Hi Tom, yes, the colored status balls correctly reflect the spinning state. However, if I look at unmenu, interestingly it shows the disk as spun down (but unmenu is wrong, probably due to a different way of checking the spinning state). I am now removing SimpleFeatures (which I had been planning for a long time) and will come back with the full system log if the issue persists.

      Hi Tom, OK, I removed SF yesterday evening, but some customization is still running (VBox, CrashPlan and TVHeadend). Please see the system log attached. FYI, at 22:28:17 it was me who manually spun up all the drives. This morning I see all the drives spinning except the cache drive. Let me know if you want me to test with fully vanilla unRAID (by removing every customization), but those apps never caused any issue like this before.

      Yes, please test with vanilla. The end of the syslog is "curious"; my analysis:

      Jan 24 22:08:33 Tower in.telnetd[27827]: connect from 192.168.9.145 (192.168.9.145)
      Jan 24 22:08:36 Tower login[27828]: ROOT LOGIN on '/dev/pts/0' from 'BUD8288N'
      Jan 24 22:12:43 Tower in.telnetd[32070]: connect from 192.168.9.145 (192.168.9.145)
      Jan 24 22:12:45 Tower login[32071]: ROOT LOGIN on '/dev/pts/0' from 'BUD8288N'
      Jan 24 22:14:02 Tower kernel: mdcmd (51): spindown 0
      Jan 24 22:14:02 Tower emhttp: shcmd (78): /usr/sbin/hdparm -y /dev/sdd &> /dev/null
      Jan 24 22:14:34 Tower kernel: mdcmd (52): spindown 1
      Jan 24 22:14:45 Tower kernel: mdcmd (53): spindown 2
      Jan 24 22:14:55 Tower kernel: mdcmd (54): spindown 3
      Jan 24 22:15:06 Tower kernel: mdcmd (55): spindown 4
      Jan 24 22:15:16 Tower kernel: mdcmd (56): spindown 5
      Jan 24 22:15:27 Tower kernel: mdcmd (57): spindown 6
      Jan 24 22:15:27 Tower kernel: mdcmd (58): spindown 8
      Jan 24 22:17:00 Tower in.telnetd[4199]: connect from 192.168.9.145 (192.168.9.145)
      Jan 24 22:17:02 Tower login[4202]: ROOT LOGIN on '/dev/pts/0' from 'BUD8288N'
      Jan 24 22:21:18 Tower kernel: mdcmd (59): spindown 7
      Jan 24 22:21:39 Tower kernel: mdcmd (60): spindown 9
      Jan 24 22:28:17 Tower emhttp: Spinning up all drives...
      Jan 24 22:28:17 Tower emhttp: shcmd (79): /usr/sbin/hdparm -S0 /dev/sdd &> /dev/null
      Jan 24 22:28:17 Tower kernel: mdcmd (61): spinup 0
      Jan 24 22:28:17 Tower kernel: mdcmd (62): spinup 1
      Jan 24 22:28:17 Tower kernel: mdcmd (63): spinup 2
      Jan 24 22:28:17 Tower kernel: mdcmd (64): spinup 3
      Jan 24 22:28:17 Tower kernel: mdcmd (65): spinup 4
      Jan 24 22:28:17 Tower kernel: mdcmd (66): spinup 5
      Jan 24 22:28:17 Tower kernel: mdcmd (67): spinup 6
      Jan 24 22:28:17 Tower kernel: mdcmd (68): spinup 7
      Jan 24 22:28:17 Tower kernel: mdcmd (69): spinup 8
      Jan 24 22:28:17 Tower kernel: mdcmd (70): spinup 9
      Jan 24 22:28:17 Tower kernel: mdcmd (71): spinup 10
      Jan 24 22:58:24 Tower kernel: mdcmd (72): spindown 0
      Jan 24 22:58:24 Tower kernel: mdcmd (73): spindown 1
      Jan 24 22:58:24 Tower kernel: mdcmd (74): spindown 2
      Jan 24 22:58:25 Tower kernel: mdcmd (75): spindown 3
      Jan 24 22:58:25 Tower kernel: mdcmd (76): spindown 4
      Jan 24 22:58:26 Tower kernel: mdcmd (77): spindown 5
      Jan 24 22:58:26 Tower kernel: mdcmd (78): spindown 6
      Jan 24 22:58:27 Tower kernel: mdcmd (79): spindown 7
      Jan 24 22:58:27 Tower kernel: mdcmd (80): spindown 8
      Jan 24 22:58:27 Tower kernel: mdcmd (81): spindown 9
      Jan 24 22:58:28 Tower emhttp: shcmd (80): /usr/sbin/hdparm -y /dev/sdd &> /dev/null
      Jan 24 22:58:50 Tower kernel: mdcmd (82): spindown 10
      Jan 24 23:54:25 Tower kernel: mdcmd (83): spindown 10
      Jan 25 01:01:26 Tower tvheadend[3986]: htsp: 192.168.9.4 [ XBMC Media Center ]: Disconnected
      Jan 25 05:10:32 Tower emhttp: shcmd (81): /usr/sbin/hdparm -y /dev/sdd &> /dev/null

      At 22:08:33 you connected, and it took you 3 seconds to type the password and log in. I guess you logged right out. At 22:12:43 you connected and logged in again. About a minute and a half later all the disks were spun down, but I don't see an entry saying this happened via clicking the 'Spin Down' button (I should see a "Spinning down all drives..." message if so). Besides, they start spinning down more or less 10 seconds apart. Why are they spinning down here? Because they hit the inactivity time-out? At 22:17:02 you log in again, and at 22:28:17 you manually click the 'Spin Up' button, OK. At 22:58:25 they all spin down again; this is consistent with a spin-down delay set to 30 minutes. But disk10 does not spin down until 22:58:50, so something is accessing it. Then it spins down again at 23:54:25, so something must have been accessing it again earlier. Something also accesses the cache drive, because it doesn't spin down again until the next day at 5:10:32. I think a plugin or a host-side app accessing the server must be causing this.

      Hi Tom, just to report back on this issue. After a lot of testing and much more grey hair, I tracked this down to the good old smarthistory application, which had always been running just fine and very reliably in the background. I had it scheduled daily (with wake-ON options), running at 4:40 and spinning up all disks to read and record some SMART parameters. However, it appears that smarthistory spins up the disks in a way that leaves unRAID thinking the disks are still spun down, so it does not command them to spin down again until there is an "official" wake-up on those drives that makes unRAID aware they are spinning. Disabling smarthistory solved the issue. Apologies for the false "alert"!
  14. Hello Joe L., I am trying to troubleshoot a spin down issue, detailed here: http://lime-technology.com/forum/index.php?topic=25250.msg222150#msg222150 I have 6GB of memory. Would it be possible, by any chance, that cache_dirs goes mad due to the high memory issue on rc10 (I don't seem to have the slow-down issue though) and could somehow cause such an issue? It also occurred once that cache_dirs was, I believe, crashing overnight, because it had disappeared from the running processes when I checked in the morning, without anything in the log.
  15. Are the colored status balls in the webGui consistent with the disk spinning state? Also, I'd like to see the full system log.

      Hi Tom, yes, the colored status balls correctly reflect the spinning state. However, if I look at unmenu, interestingly it shows the disk as spun down (but unmenu is wrong, probably due to a different way of checking the spinning state). I am now removing SimpleFeatures (which I had been planning for a long time) and will come back with the full system log if the issue persists.

      Hi Tom, OK, I removed SF yesterday evening, but some customization is still running (VBox, CrashPlan and TVHeadend). Please see the system log attached. FYI, at 22:28:17 it was me who manually spun up all the drives. This morning I see all the drives spinning except the cache drive. Let me know if you want me to test with fully vanilla unRAID (by removing every customization), but those apps never caused any issue like this before. systemlog.txt
  16. Are the colored status balls in the webGui consistent with the disk spinning state? Also, I'd like to see the full system log.

      Hi Tom, yes, the colored status balls correctly reflect the spinning state. However, if I look at unmenu, interestingly it shows the disk as spun down (but unmenu is wrong, probably due to a different way of checking the spinning state). I am now removing SimpleFeatures (which I had been planning for a long time) and will come back with the full system log if the issue persists.
  17. I upgraded from 4.7 to 5.0 rc8a and then quickly upgraded to rc10. I just noticed an issue with spin down on rc10 (I don't know whether it was there on rc8a or not). It appears that spin down of the disks does not always happen (it is kind of hit and miss), even though I see the spin down command executed in the log file:

      Jan 24 16:55:03 Tower kernel: mdcmd (84): spindown 10
      Jan 24 16:55:02 Tower kernel: mdcmd (83): spindown 7
      Jan 24 16:55:02 Tower kernel: mdcmd (82): spindown 6
      Jan 24 16:55:01 Tower kernel: mdcmd (81): spindown 5
      Jan 24 16:55:01 Tower kernel: mdcmd (80): spindown 4
      Jan 24 16:55:00 Tower kernel: mdcmd (79): spindown 3
      Jan 24 16:55:00 Tower kernel: mdcmd (78): spindown 2
      Jan 24 16:55:00 Tower kernel: mdcmd (77): spindown 1

      After the above, all disks spin down but one; in this case disk 10, which keeps spinning. In other cases another disk is affected. Does anyone have an idea what can cause this? I saw this reported earlier in this thread by someone whose unRAID box didn't go to sleep due to the same issue. Following that, because unRAID thinks the disk has spun down, it never tries to spin it down again. If I use the command '/root/mdcmd spindown 10', it spins down immediately and stays spun down. Is this a potential bug, or is there something I can look at on my side?
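To check whether unRAID's idea of the spin state matches reality, the drives' actual power state can be read directly with hdparm, whose -C option prints a "drive state is: ..." line. A minimal sketch; the device name pattern is an example and may need adjusting for a given box:

```shell
#!/bin/sh
# Report the real power state of each disk so it can be compared with what
# the webGui shows: "standby" means spun down, "active/idle" means spinning.
parse_state() {
    # pull the value out of hdparm -C's "drive state is:  <state>" line
    awk -F: '/drive state is/ {gsub(/^[ \t]+/, "", $2); print $2}'
}

for dev in /dev/sd[a-z]; do
    [ -b "$dev" ] || continue            # skip if the device doesn't exist
    state=$(hdparm -C "$dev" 2>/dev/null | parse_state)
    printf '%s: %s\n' "$dev" "${state:-unknown}"
done
```

A disk that hdparm reports as active/idle while the webGui shows it spun down would confirm the mismatch described above.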
  18. @lainie, could you please clarify what versions of the packages you are using to compile? Are you using the packages from the 13.1 repo as described on this wiki? http://lime-technology.com/wiki/index.php/Installing_VirtualBox_in_unRAID Because of DVB support I have a custom compiled kernel. I used a newer compiler than what is described above, and now your VirtualBox packages don't fit. I am just guessing that this is the reason, but it would be a big help to know what packages you are using. Thank you!
  19. Ah, OK, just a little correction: the fix is not for getting USB devices to show up in running VMs, but for getting USB devices working at all (without using the trick of plugging in another USB device). I still need to restart the VM to recognize a USB device assigned to it while it was running. Not sure about this one. This will end up in an error message if the dir has already been created. Not that this causes any harm though...
  20. Great work, lainie! Did you already include the USB fixes in doinst.sh?
  21. I can confirm this. It is even worse for me, because I only get ~850KB/sec regardless of whether USB 2.0 is enabled or disabled. With regards to the USB fix, I don't mind having it in the go file either, but since this belongs to the installation of VBox, IMO it belongs more in the installation script than in the go script. Cheers again for the solution!
  22. You, Sir, are awesome! THANK YOU! @lainie, if you allow me, I would suggest injecting the above lines into 'doinst.sh' for the next release. This would prevent the clutter in the go script, and it belongs there anyway.
  23. I could buy these HBAs at the moment, but it is not clear to me whether unRAID supports them:
      - I know the LSI SAS3801E has external connectors.
      - I understand that the LSI SAS9220-8i is supported, but the HBA that is widely mentioned here has a different FRU number (FRU 46M0861); I am not sure if this makes any difference.
      Can you please help me out? Thank you!