Neo_x

Everything posted by Neo_x

  1. Noticed that Justin has created an XML for Moviegrabber - link: http://lime-technology.com/forum/index.php?topic=34168.0 Hope he doesn't mind if you include it.
  2. *doh* *note to self: Monday mornings always require a double dose of coffee*
  3. Hi Needo / guys, firstly - thank you for your dedicated effort on this. If a point-and-click solution becomes available for unRAID, I am quite sure that many customers will be smiling all the way to the bank. *Hint for Tom - please buy everybody a beer* Can I bug you to add one more app/Docker which I use on a daily basis? It's called Moviegrabber. I ended up trusting it much more than CouchPotato, and it has performed very solidly for me over the past few years. Basically it's an RSS scanner for both NZB and torrent feeds, looking for new movie releases based on your criteria, which it then releases to black-hole folders of your choice (or queues them for you to pick from and release manually) - see the rough sketch below. Link to the forum / app / download: https://forums.sabnzbd.org/viewtopic.php?t=8569 Binhex is active on the unRAID forums too, and should be able to help where needed. Thx again, Neo_x
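For anyone new to the black-hole idea, here is a very rough shell sketch of the concept only (the feed URL, filter and folder are made-up examples; Moviegrabber itself is a full Python app and does far more than this):

     # poll an indexer RSS feed and drop matching .nzb links into a watched
     # "black hole" folder that the download client picks up automatically
     FEED="http://indexer.example/rss?cat=movies"   # hypothetical feed URL
     FILTER="1080p"                                 # hypothetical release filter
     BLACKHOLE="/mnt/cache/blackhole/nzb"           # folder the downloader watches

     curl -s "$FEED" | grep -o 'http[^"<]*\.nzb[^"<]*' | grep "$FILTER" |
     while read -r url; do
         wget -q -P "$BLACKHOLE" "$url"
     done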
  4. This seems to be the newest package. In the info it shows 64-bit, so it might work, but since it is not in a repository I am kind of at a loss on how to implement this one. Confirmed to be working - thx Thornwood / Joe.L (the usual go-file install approach is sketched below).
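For anyone wondering how to hook a loose package like that in, the usual approach on unRAID is to drop it on the flash drive and install it from the go file; a minimal sketch (the package filename is just an example):

     # in /boot/config/go - install a manually downloaded Slackware package at boot
     installpkg /boot/packages/example-package-x86_64-1.txz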
  5. Neo_X commendable work. As WeeboTech points out, you need to look a bit deeper to get the info you actually need. Look at this post for some useful relevant debug commands: http://lime-technology.com/forum/index.php?topic=4500.msg286103#msg286103 Hi NAS / guys, as promised, some more data below which hopefully can assist. Some observations - strangely, I am not seeing major memory leakage (compared 4 hours and 10 hours), but will keep monitoring. The big difference is that on the same hardware, memory usage on v5 was 1116 MB versus 2777 MB on v6 (no Xen) and 2392 MB on v6 (Xen), which is odd at best - unless 64-bit has more overhead to store the same data? All testing was performed with stock unRAID, capturing before cache_dirs and again after running cache_dirs for about 4 hours (roughly the snapshot sketch below). The v6 cache_dirs was modified to comment out the ulimit line. I know some data was repeated unnecessarily (e.g. file counts and sizes), but I would rather repeat it to make sure nothing slips through. Character limit - had to attach the captures instead (2 posts). EDIT - as recommended by NAS, pastebin was utilized:
     v5.0.5 - no cache_dirs -> http://pastebin.com/cHRDuEy8
     v5.0.5 - cache_dirs running (4 hours) -> http://pastebin.com/GPCB9tuB
     v6b4 (no Xen) - no cache_dirs -> http://pastebin.com/UP3TQ36w
     v6b4 (no Xen) - cache_dirs running (4 hours) -> http://pastebin.com/4RTYZrAW
     v6b4 (Xen) - no cache_dirs -> http://pastebin.com/6LXjv40P
     v6b4 (Xen) - cache_dirs running (10 hours) -> http://pastebin.com/Zy2esyEZ
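For anyone wanting to repeat the comparison, snapshots along these lines are enough to get comparable numbers (the output path is just an example, not necessarily what was used for the pastebin captures):

     # capture memory state before starting cache_dirs, and again a few hours later
     OUT=/boot/logs/mem_$(date +%Y%m%d_%H%M).txt    # example output path
     free -m                >  "$OUT"
     cat /proc/meminfo      >> "$OUT"
     slabtop -o | head -20  >> "$OUT"               # top slab consumers (dentry/inode caches)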
  6. I did run cache-dirs via the go script; I found it consumed all resources on my server and basically killed dom0 and all domUs after approx 2 days. What issue were you seeing when starting it via the go script - high memory consumption, or something else? I have of course stayed away from running cache-dirs for the time being after that experience. Can anybody confirm they have this running stable on v6, and if so, what flags are they using? Cheers, binhex. P.S. nice to see ya on another board :-) Hi Binhex, same here - glad to see you. I have one domain running, and upon starting cache_dirs via the go script and checking top after about 30 minutes to an hour, I saw 100% CPU utilization (most of which was taken up by the cache_dirs script). Luckily on my end I have dedicated one core to dom0, which stabilized the system drastically (thus nothing crashed). My syslinux:
     label Xen/unRAID OS
       kernel /syslinux/mboot.c32
       append /xen dom0_max_vcpus=1 dom0_vcpus_pin --- /bzimage --- /bzroot
Strange part - upon trying "cache_dirs -q" it reported that it is not running, so I had to resort to manually killing the process ID. I tried adding a sleep 120 in the go script before the cache_dirs line, but it didn't help; the only option so far has been to manually telnet into the box after a few minutes and run cache_dirs from the telnet prompt (a backgrounded delayed start like the sketch below may also work). Busy running the stock tests; will check if the same occurs when Xen is not running. Regards, Neo_x
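If anybody else wants to try the delayed start without blocking the rest of the go script, something like the following should do it; a sketch only (the path and delay are examples - add your usual cache_dirs options):

     # in /boot/config/go - start cache_dirs a few minutes after boot, backgrounded
     # so the rest of the go script (and the array start) is not held up
     ( sleep 300 && /boot/cache_dirs ) &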
  7. Neo_X commendable work. As WeeboTech points out, you need to look a bit deeper to get the info you actually need. Look at this post for some useful relevant debug commands: http://lime-technology.com/forum/index.php?topic=4500.msg286103#msg286103 Hi NAS, I can make the debug captures, no problem (even a comparison after a few hours between v5 / v6 again), but the very strange part is that I am seeing the reverse - i.e. v5 is managing the cache with less memory than v6 (64-bit) is. Hmmm, just for fun I think I will make a capture without Xen as well, just in case. Regards, Neo_X
  8. Sigh - figured as much. Really love b4 though - the auto-start of domains makes life much easier, but I am missing network statistics. But yes - keeping up with GUI changes during a beta phase will be a pain. Hopefully there will be an easier way soon. Thx for the good work BonieL!
  9. Some findings on cache_dirs (a little bit troubling). On v5, via top, I am caching both my movies and series media (yes, there is a huge amount (11TB) of that..). On v5, top reports: Full top capture (v5) -> http://imgbin.org/images/17191.JPG On v6, top reports (note about 2GB of the total is used by a domain/VM, but I think dom0 still remains separate): Full top capture (v6) -> http://imgbin.org/images/17192.JPG Thus v5 caching uses about 1.58GB of memory in total on the v5 unRAID box, versus a huge 3.62GB on v6 (and v5 has MySQL (+-300MB) running as well) - that is over 1.5GB more to cache the same data.... I know cache_dirs is only performing a simple find loop (roughly the sketch below), and thus the issue is most probably not with the script, but does anybody else maybe have an idea as to why this could be? I really don't like leaving the server running with only about 50MB of memory free:
     root@Storage:~# free -m
                  total       used       free     shared    buffers     cached
     Mem:          3593       3539         53          0        767       1696
     -/+ buffers/cache:       1076       2517
     Swap:            0          0          0
Hopefully a linux guru can maybe assist? :) - *free beers are available* PS: v6 cache_dirs also gave an issue when started via the go script - not sure if related. I had to kill the process ID and restart it manually.
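For context, the core of what cache_dirs does is just a repeated find over the selected folders so their directory entries stay in RAM; a very simplified sketch (not the actual script, and the share names are examples):

     # simplified illustration of the cache_dirs idea - repeatedly walk the
     # chosen folders so their dentry/inode entries stay in the kernel's cache
     while true; do
         for dir in /mnt/disk*/Movies /mnt/disk*/Series; do   # example share folders
             find "$dir" -noleaf >/dev/null 2>&1
         done
         sleep 10
     done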
  10. Definitely sounds like a bargain. Still, please let us know if any stock is still available?
  11. Guys...... /girls? May I recommend that we stay on topic... hence the title: For those using NFS.. AFP was touched on briefly, and I am very sure Tom will start a new thread, For those using AFP.., when the time is right (his hands will be more than full with all the current work, and reading through a whole AFP discussion 3 pages long in an NFS thread will just be too much). Reading through the history of the NFS problems and the points listed in this thread, it's a no-brainer: move to NFSv4 only with the new unRAID. If he keeps it on disk shares, users still have a workaround if they must use it (most media players will still build a single movie database even if you give them disk1/movies, disk2/movies, etc. - see the hypothetical export sketch below). The ESXi part, though, is a bit troubling (if somebody has a working ESXi system using unRAID...). Maybe get a poll going to get a better range of feedback (ESXi can also point to the disk share holding the datastore, although that is more cumbersome). Keep up the good work Tom!
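Purely for illustration, a hypothetical set of per-disk NFS export entries (the network range, options and fsid values are made-up examples, not unRAID's actual config):

     # hypothetical /etc/exports entries for per-disk shares
     /mnt/disk1  192.168.0.0/24(ro,async,no_subtree_check,fsid=101)
     /mnt/disk2  192.168.0.0/24(ro,async,no_subtree_check,fsid=102)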
  12. Hi guys, I guess the whole idea of a beta is that feedback is provided when anything strange is noted. I was extremely excited to see unRAID merge with a VM-type server, although I have to admit this is the first time I have exposed myself to Xen. Trusting the unRAID community, though, I decided to jump into the testing pool with my only server (thus it is my production and test baby). I unsuccessfully tried beta3 twice this week, having weird issues both times. The first time, dom0 failed on me completely during a GPLPV install on a VM (I lost connectivity to the dom0/unRAID IP - it would ping for 15 seconds, then drop for 5 minutes - just not enough to putty in and try to grab the syslog; the system is headless, so I was stuck in that case). Today I had the array almost completely fail on me (it was reporting read-only when trying to move files on my cache drive, I lost connectivity to my Windows 7 VM, and upon stopping the array it reported 10 of my 12 drives missing - that alone gave me a few more grey hairs). Upon falling back to v5 everything was restored - BEEEEG sigh of relief. (Note to self - purchase a second key and a testing server next time.) The system was up for 4 hours before it crashed. The syslog is in the link below: http://pastelink.me/dl/c408c5 I formatted my USB and copied beta3 onto it, and also restored my config folder with the exception of config/plugins and the go file, thus keeping it as "stock" as possible. Started the system; nothing strange was noted (e.g. all shares restored, and all drives green). Enabled the Xen bridge as per the instructions and added a Windows 7 VM as per these instructions: http://lime-technology.com/forum/index.php?topic=31674.msg288613#msg288613 The machine installed fine, and I enabled RDC towards it. I didn't get the chance yet to install the GPLPV drivers (I'm not sure if this could be a cause, but then again the GPLPV drivers caused a fatal dom0 crash when I tried them earlier this week, so I decided to try without them). After this I just left the VM running (I was planning to install MySQL and a few downloading tools later on, but a nap was more important). Roughly 3 hours later: I could still ping the systems (host + VM), but the VM was not responding to RDC (RDC disconnected) and VNC was also non-responsive. Trying to rename a file on my cache drive was also not successful; it was reporting read-only (typical of a ReiserFS problem, usually...). Using "xl destroy windows7" I managed to force a shutdown of my VM (it seems that without the GPLPV drivers it didn't want to listen to restart or shutdown - see the sketch below). I then went ahead to stop the array (planning maintenance mode so that I could do a reiserfs check). Surprise surprise - a big list of missing drives. The only drives showing were, I think, disk 1 and my cache drive... Not wanting to push my luck starting the array in such a state, I decided to abort. I managed to copy the syslog to the boot disk before initiating a shutdown. I'm not sure what could have caused this issue; I was hoping you guys could assist, so that a better bug report can be made available for Tom / Limetech. The system is semi-old hardware (quad-core 2.4GHz Q6600) with about 6GB of RAM (2 for host, 2 for VM).
The drives are rather new though (mostly 3TB Western Digitals, and the cache a 2TB Seagate). The system was and still is completely rock solid under v5 - parity checks are clean (monthly for about 4/5 months now), plugins stable (MySQL / Moviegrabber / Cache_dirs / Fancontrol / unMENU). If any recommendations are possible as to what could have caused this (maybe a VM setting I missed?), I would try to assist and retest (or run a few more commands if the failure occurs again?). I'm not exactly sure what went wrong - the syslog is massive!! The first sign of trouble was: Storage kernel: sd 11:0:4:0: [sdg] command ffff880024053c00 timed out, which is usually a controller/cable/drive problem. I will monitor that drive (sdg is my cache drive) for a bit in order to see if it is still an issue on v5, but I highly doubt it. If any other information is needed, please ask. Thank you, Neo_X *Edit* A ReiserFS check was performed on the drive in question - no issues there.... I also noticed my VM settings file had HVM enabled. Since this is possibly only supported on newer hardware, could this maybe be a cause?
     kernel = '/usr/lib/xen/boot/hvmloader'
     builder = 'hvm'
     /// and ///
     vif = [ 'type=ioemu, bridge=xenbr0' ]
Ideas anyone? (I just don't want to leave this bone alone, it seems.) windows.cfg
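For reference, the force-off sequence boils down to something like this; a rough sketch only (the domain name "windows7" is just my own example):

     # try a clean shutdown first, then force-destroy if the domain ignores it
     xl shutdown windows7
     sleep 60
     xl list | grep -q windows7 && xl destroy windows7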
  13. Just an update - although not an unMENU package, the author of the software (thx Binhex!!) has updated package files available. Instructions at: http://sourceforge.net/projects/moviegrabber/files/unraid/plugin/
  14. Hi guys, I am having a peculiar problem where my server doesn't go into S3 sleep once I initiate it (I am currently testing with the latest version 5, and also as barebones as possible - only the Simple Features base and S3 sleep packages added). I initially thought it was my cache drive and other plugins causing the issue, but after eliminating each, the issue still remains. The server goes missing on the network for about 2-3 minutes, but doesn't physically power down. Looking at the syslog, it seems to be complaining about some of the drives. Some of the entries:
     ata9.00: failed command: STANDBY IMMEDIATE
     PM: Device 10:0:0:0 failed to suspend async: error 134217730
     sd 10:0:0:0: [sdc] START_STOP FAILED
     sd 9:0:0:0: [sdb] Stopping disk
     Sep 1 13:27:24 Storage kernel: PM: Some devices failed to suspend
Now some entries refer to ata9, some to sd 9 / sd 10, and some to sdb/sdc; the confusing part is that sdb is disk 11 in the GUI, and sdc is disk 10... Would somebody please be able to assist in pointing out how I could determine which drives are a possible cause, so that I can try moving / swapping them around in order to determine if it is a drive and/or controller issue (something like the mapping sketch below)? Regards, Neo_X 20130901_syslog.txt.txt
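If it helps with the mapping question, something like the following should tie the sdX names to their ata hosts and serial numbers, which can then be matched against the assignments shown in the GUI (a rough sketch, not a polished script):

     # list each disk's sysfs path (shows which controller/ata link it hangs off)
     # and its serial number, to match against the web GUI assignments
     for d in /dev/sd[a-z]; do
         echo "== $d =="
         ls -l /sys/block/$(basename $d)
         hdparm -i "$d" | grep -i serial
     done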
  15. A preclear is never a bad idea, no matter when. A rebuild will force it to restore all the possible blocks on the drive, thus I don't think a time difference will be noted. Preclearing just adds that peace of mind.
  16. Just thought I should post an update: I tried the alternating-drive recommendation by Garycase above and had some interesting results. The configuration was split into two 11-drive setups (i.e. removing 11 drives from the setup each time). The first set had 4 drives on each of the MV8 controllers (testing the breakout panel for the Norco case on the first 4 slots); the last two breakout panels required SAS-to-SATA breakout cables towards the motherboard SATA ports. For the first pass I had the parity + data on slot 6, and one more drive on slot 5. Building and checking parity twice revealed a huge reduction in parity errors (still not 0) - but it was less than 20. Repeating the same setup with all the other drives (leaving the parity on slot 6, 3 data drives on slot 5 and 8 drives on the other slots) suddenly resulted in a huge jump in errors (I think it was in the region of 600). Since I still encountered errors in both setups, it confirmed that the power supplies are not a possible cause. Due to the higher number of errors with more drives on slot 5, I started to think there is a problem specific to that slot. I then swapped the breakout cable on slot 5 with one I had spare, and moved the parity to that slot (i.e. bypassing slot 6 as well). So far so good - running parity check 2, and no errors have been noted yet. *HAPPY FACE* Not sure why the syslog won't show drive communication issues related to problematic breakout cables, though - but it seems that I have found a possible culprit (or my fiddling around in the case has chased the gremlin somewhere else). Will integrate the drives on slot 6 next, and should hopefully have a working setup again. Needless to say - this was a massive monster to troubleshoot - luckily I haven't made any additional expenses yet with regards to motherboards etc. (although I'm still tempted to - extra CPU power is always welcome :P)
  17. Not sure if power will prove to be the fault; I used the same power supply (and thus the same power connectors) with the same drives (24 drives + 1 cache) without issue on my previous SATA controller (Adaptec 52445), which is a 28-port card. The only issue was that I couldn't get spin-down functionality working with it, so I recently swapped out the card for the two MV8s and utilized the remaining ports on the MB - and that's when the fun started. The breakout cables towards the motherboard ports had to be replaced as well, since I discovered the hard way that breakout cables are directional (i.e. from controller to drive, or from breakout panel towards the SATA ports on the MB). But Murphy can always teach us something new - struggling for two weeks already - almost willing to give up and replace the MB / CPU etc. (the board is semi-old (the model evades my memory now) with a Q6600 quad-core CPU).
  18. Wow - excellent plan. Will probably repeat for both sets - should give an indication if any connectivity issues exist as well. Thx for the idea :-) Sent from my Nexus 10 using Tapatalk 2
  19. If memory serves I went with a Corsair 750 watt (100 watts higher than the recommended Corsair model for the Beast builds...)
  20. Just wanted to leave an update - hopefully someone can recommend something else I can check.... Hi guys, some updates.... For the sake of simplification: I have two Supermicro SASLP-MV8 controllers, and then the MB which offers 6 ports. I ran various tests - clearing the configuration, building the parity and then checking it:
     1 x MV8 controller : pass
     2 x MV8 controllers : pass
     2 x MV8 + motherboard : fail
     Motherboard only : pass
     Motherboard + 1 x MV8 : fail (although only 100 errors versus the 100,000 when using the full setup)
The syslog doesn't show any issues during parity generation or checks - thus I don't think it is SATA / power or drive related (keep in mind I performed a simultaneous DD + MD5 check for the 2 x MV8 + motherboard setup and didn't pick up any issues - roughly the sketch below). I don't want to keep stressing the system with data-carrying hardware, as a failure somewhere along the line will leave me very sad... Considering maybe replacing the MB / CPU / RAM - although it's a huge $400 expense which I don't want to make unless it's the only option.. Any other recommendations are welcome...
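For completeness, the simultaneous DD + MD5 check was along these lines - read every drive in parallel and compare the checksums between runs (the device names and log path are examples):

     # read each drive end-to-end in parallel and record an md5 of the raw data;
     # identical sums across two runs suggest the controllers are reading consistently
     for d in /dev/sdb /dev/sdc /dev/sdd; do
         ( dd if="$d" bs=1M 2>/dev/null | md5sum > /boot/logs/$(basename "$d").md5 ) &
     done
     wait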
  21. I know about the overheating issue - that was in a completely different case, and didn't cause any issues on my previous controller (I had an Adaptec 24-port handling all the drives - the only negative was that it didn't support spin-down and I was required to do JBOD to get the drives visible in unRAID). In order to rule out hardware, I have run a long SMART test and a memory test - both seem clear (see attached, and the commands sketched below). Next up I will repeat the parity test on the three different sections of the machine (both MV8 controllers and also the 6 ports I use on the MB). Will give feedback once I have located a possible cause for the issue. smart_long_test.zip
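The long SMART test itself is nothing exotic; per drive it amounts to something like this (the device name and output path are examples):

     # kick off the extended self-test, then pull the full report once it
     # finishes (this can take several hours on a 3TB drive)
     smartctl -t long /dev/sdb
     smartctl -a /dev/sdb > /boot/logs/smart_sdb.txt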
  22. My first post included SMART reports, if memory serves. I will post updated reports if needed once I am back - currently away on a business trip :-) Sent from my Nexus 10 using Tapatalk 2
  23. Yes, I am sure. Will initiate a long SMART test and submit a report on completion. PS: memory test seems fine... Sent from my awesome Galaxy S2 using Tapatalk 2