
Neo_x

Members - Posts: 116
Everything posted by Neo_x

  1. Noticed that Justin has created an XML for MovieGrabber. Link: http://lime-technology.com/forum/index.php?topic=34168.0 I hope he doesn't mind if you include it.
  2. *doh* *Note to self: Monday mornings always require a double dose of coffee.*
  3. Hi Needo / guys,

     Firstly, thank you for your dedicated effort on this. If a point-and-click solution becomes available for unRAID, I am quite sure that many customers will be smiling all the way to the bank. *Hint for Tom: please buy everybody a beer.*

     Can I bug you to add one more app/Docker which I use on a daily basis? It's called MovieGrabber. I ended up trusting it much more than CouchPotato, and it has performed very solidly for me over the past few years. Basically it's an RSS scanner for both NZB and torrent feeds, looking for new movie releases based on your criteria, which it then releases to blackhole folders of your choice (or queues for you to pick from and release manually).

     Link to the forum / app / download: https://forums.sabnzbd.org/viewtopic.php?t=8569

     Binhex is active on the unRAID forums too, and should be able to help where needed.

     Thanks again,
     Neo_x

  4. Quote (NAS): "Neo_X, commendable work. As WeeboTech points out, you need to look a bit deeper to get the info you actually need. Look at this post for some useful relevant debug commands: http://lime-technology.com/forum/index.php?topic=4500.msg286103#msg286103"

     Hi NAS / guys,

     As promised, some more data below which hopefully can assist. Some observations: strangely, I am not seeing major memory leakage (compared 4 hours and 10 hours); I will keep monitoring. The big difference is that on the same hardware, memory usage on v5 was 1116 MB versus 2777 MB on v6 (no Xen) and 2392 MB on v6 (Xen), which is odd at best. Unless 64-bit has more overhead to store the same data?

     All testing was performed with stock unRAIDs, capturing before cache_dirs and after running cache_dirs for about 4 hours. The v6 cache_dirs was modified to comment out the ulimit line. I know some data was repeated unnecessarily (e.g. file counts and sizes), but I would rather repeat it to make sure nothing slips through.

     Character limit - I had to attach the captures instead (2 posts). EDIT: As recommended by NAS, pastebin was used.

     v5.0.5 - no cache_dirs -> http://pastebin.com/cHRDuEy8
     v5.0.5 - cache_dirs running (4 hours) -> http://pastebin.com/GPCB9tuB
     v6b4 (no Xen) - no cache_dirs -> http://pastebin.com/UP3TQ36w
     v6b4 (no Xen) - cache_dirs running (4 hours) -> http://pastebin.com/4RTYZrAW
     v6b4 (Xen) - no cache_dirs -> http://pastebin.com/6LXjv40P
     v6b4 (Xen) - cache_dirs running (10 hours) -> http://pastebin.com/Zy2esyEZ

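     A minimal sketch of the kind of before/after memory capture referenced above, assuming standard Linux tools available on a stock unRAID shell (free, /proc/meminfo, slabtop, ps); the exact commands in the linked post may differ, and the output path is just an example.

        # Capture a memory snapshot before starting cache_dirs, and again a few hours later.
        DATE=$(date +%Y%m%d_%H%M%S)
        OUT=/boot/memcap_${DATE}.txt

        {
          echo "=== free -m ==="
          free -m
          echo "=== /proc/meminfo ==="
          cat /proc/meminfo
          echo "=== slab usage (dentry/inode caches grow while cache_dirs runs) ==="
          slabtop -o | head -n 20
          echo "=== top memory consumers ==="
          ps aux --sort=-rss | head -n 15
        } > "$OUT"
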
  5. Quote (binhex): "I did run cache-dirs via the go script; I found it consumed all resources on my server and basically killed dom0 and all domUs after approx 2 days. What issue were you seeing when starting it via the go script - high memory consumption, or something else? I have of course stayed away from running cache-dirs for the time being after that experience. Can anybody confirm they have this running stable on v6, and if so, what flags they are using? Cheers, binhex. P.S. Nice to see ya on another board :-)"

     Hi binhex, same here - glad to see you.

     I have one domain running. Upon starting cache_dirs via the go script and checking top after about 30 minutes to an hour, I saw 100% CPU utilization (most of which was taken up by the cache_dirs script). Luckily, on my end I have dedicated one core to dom0, which stabilized the system drastically (thus nothing crashed). My syslinux entry:

        label Xen/unRAID OS
          kernel /syslinux/mboot.c32
          append /xen dom0_max_vcpus=1 dom0_vcpus_pin --- /bzimage --- /bzroot

     The strange part: upon trying "cache_dirs -q" it reported that it is not running, so I had to resort to manually killing the process ID. I tried adding a "sleep 120" in the go script before the cache_dirs line, but it didn't help. The only option so far was to manually telnet into the box after a few minutes and run cache_dirs from the telnet prompt.

     Busy running the stock tests; I will check whether the same occurs when Xen is not running.

     Regards,
     Neo_x

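     A minimal sketch of one way to defer the cache_dirs start from the go script so it does not compete with array and dom0 startup. The delay, log path, and cache_dirs location (/boot/cache_dirs) are illustrative assumptions, not the poster's actual configuration; add whatever cache_dirs flags you normally use.

        # /boot/config/go (excerpt) - hypothetical: start cache_dirs in the background
        # after a delay so dom0 and the array have time to settle first.
        (
          sleep 300              # illustrative delay; the post tried "sleep 120" inline without success
          /boot/cache_dirs       # assumed script location; add your usual flags here
        ) >> /var/log/cache_dirs_start.log 2>&1 &
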
  6. Quote (NAS): "Neo_X, commendable work. As WeeboTech points out, you need to look a bit deeper to get the info you actually need. Look at this post for some useful relevant debug commands: http://lime-technology.com/forum/index.php?topic=4500.msg286103#msg286103"

     Hi NAS,

     I can make the debug captures, no problem (even a comparison after a few hours between v5 and v6 again), but the very strange part is that I am seeing the reverse - i.e. v5 is managing the cache with less memory than v6 (64-bit) is.

     Hmmm, just for fun, I think I will make a capture without Xen as well, just in case.

     Regards,
     Neo_X

  7. Sigh - figured as much. I really love b4 though - the auto-start of domains makes life much easier, but I am missing the network statistics. But yes, keeping up with GUI changes during a beta phase will be a pain. Hopefully there will be an easier way soon.

     Thanks for the good work, BonieL!

  8. Some findings on cache_dirs (a little bit troubling). On both v5 and v6 I am caching my movie and series media (yes, there is a huge amount (11 TB) of it).

     Full top capture (v5) -> http://imgbin.org/images/17191.JPG
     Full top capture (v6) -> http://imgbin.org/images/17192.JPG
     (Note: on v6, about 2 GB of the total is used by a domain/VM, but I think dom0 still remains separate.)

     Thus caching uses about 1.58 GB of memory in total on the v5 unRAID box, versus a huge 3.62 GB on v6 (and v5 has MySQL (+-300 MB) running as well) - that is over 1.5 GB more to cache the same data.

     I know cache_dirs is only performing a simple find, so the issue is most probably not with the script, but does anybody else maybe have an idea as to why this could be? I really don't like leaving the server running with only about 50 MB of memory free:

        root@Storage:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          3593       3539         53          0        767       1696
        -/+ buffers/cache:       1076       2517
        Swap:            0          0          0

     Hopefully a Linux guru can assist? :) *Free beers are available*

     PS: v6 cache_dirs also gave an issue when started via the go script - not sure if related. I had to kill the process ID and restart it manually.

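     A minimal sketch, assuming the older procps free output shown above and a standard /proc/slabinfo, of how to separate reclaimable page/dentry cache from memory that is actually committed, which is usually the more meaningful number when cache_dirs is running.

        # The "free" column on the Mem: line counts cached pages as used;
        # the "-/+ buffers/cache" line shows what processes actually hold.
        free -m | awk '/buffers\/cache/ {print "really used (MB): "$3", really free (MB): "$4}'

        # cache_dirs works by keeping directory entries (dentries) and inodes hot,
        # so their slab caches are where its footprint shows up (run as root):
        grep -E '^(dentry|.*inode_cache)' /proc/slabinfo | awk '{printf "%s  %.1f MB\n", $1, $2*$4/1024/1024}'
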
  9. Definitely sounds like a bargain. Still - please let us know if any stock is still available?
  10. Guys... /girls? May I recommend that we stay on topic - hence the title: "For those using NFS..". AFP was touched on briefly, and I am very sure Tom will start a new "For those using AFP.." thread when the time is right (his hands will be more than full with all the current work, and reading through a three-page AFP discussion in an NFS thread would just be too much).

     Reading through the history of the NFS problems and the points listed in this thread, it's a no-brainer: move to NFSv4 only with the new unRAID. If he keeps it on disk shares, users still have a workaround if they must use it (most media players will still build a single movie database even if you give them disk1/movies, disk2/movies, etc.); see the sketch below.

     The ESXi part, though, is a bit troubling (if somebody has a working ESXi system using unRAID...). Maybe get a poll going to get a better range of feedback (ESXi can also point to the disk share that holds the datastore, although that is more cumbersome).

     Keep up the good work, Tom!

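     A minimal sketch of the disk-share workaround mentioned above, assuming NFSv4-only exports of the individual disks; the hostname "tower" and the mount paths are placeholders, not a confirmed unRAID export layout.

        # Hypothetical client-side mounts of per-disk exports over NFSv4;
        # a media player pointed at both movie folders still builds one library.
        mkdir -p /mnt/unraid/disk1 /mnt/unraid/disk2
        mount -t nfs4 tower:/mnt/disk1 /mnt/unraid/disk1
        mount -t nfs4 tower:/mnt/disk2 /mnt/unraid/disk2
        # then add /mnt/unraid/disk1/movies and /mnt/unraid/disk2/movies as library sources
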
  11. Hi guys,

     I guess the whole idea of a beta is that feedback is provided when anything strange is noted. I was extremely excited to see unRAID merge with a VM-type server, although I have to admit this is the first time I have exposed myself to Xen. Trusting the unRAID community, I decided to jump into the testing pool with my only server (thus it is my production and test baby).

     I unsuccessfully tried beta 3 twice this week, having weird issues both times. The first time, dom0 failed on me completely during a GPLPV install on a VM (I lost connectivity to the dom0/unRAID IP; it would ping for 15 seconds, then drop for 5 minutes - just not enough to PuTTY in and try to grab the syslog. The system is headless, so I was stuck in that case). Today I had the array almost completely fail on me (it was reporting read-only when trying to move files on my cache drive, I lost connectivity to my Windows 7 VM, and upon stopping the array it was reporting 10 of my 12 drives missing - that alone gave me a few more grey hairs). Upon falling back to v5 everything was restored - BEEEEG sigh of relief. (Note to self: purchase a second key and a testing server next time.)

     The system was up for 4 hours before it crashed. The syslog is in the link below:
     http://pastelink.me/dl/c408c5

     What I did:
     - Formatted my USB and copied beta 3 onto it. I also restored my config folder, with the exception of config/plugins and the go file, thus keeping it as "stock" as possible.
     - Started the system; nothing strange was noted (all shares restored, and all drives were green).
     - Enabled the Xen bridge as per the instructions.
     - Added a Windows 7 VM as per these instructions: http://lime-technology.com/forum/index.php?topic=31674.msg288613#msg288613
     - The machine installed fine, and I enabled RDC towards it. I didn't get the chance yet to install the GPLPV drivers (I'm not sure if this could be a cause, but then again the GPLPV drivers caused a fatal dom0 crash when I tried them earlier this week, so I decided to try without them).
     - After this I just left the VM running (I was planning to install MySQL and a few downloading tools later on, but a nap was more important).

     Roughly 3 hours later:
     - I could still ping the systems (host + VM).
     - The VM was not responding to RDC (RDC disconnected), and VNC was also non-responsive.
     - Trying to rename a file on my cache drive was also not successful; it was reporting read-only (typical of a ReiserFS problem, usually...).
     - Using "xl destroy windows7" I managed to force a shutdown of my VM (it seems that without the GPLPV drivers it didn't want to listen to restart or shutdown).
     - I then went ahead to stop the array (planning maintenance mode so that I could do a ReiserFS check). Surprise, surprise - a big list of missing drives. The only drives showing were, I think, disk 1 and my cache drive. Not wanting to push my luck starting the array in such a state, I decided to abort. I managed to copy the syslog to the boot disk before initiating a shutdown.

     I'm not sure what could have caused this issue and was hoping you guys could assist, so that a better bug report can be made available for Tom / Limetech. The system is semi-old hardware (quad-core 2.4 GHz Q6600) with about 6 GB of RAM (2 for the host, 2 for the VM). The drives are rather new, though (mostly 3 TB Western Digitals, and the cache is a 2 TB Seagate). The system was and still is completely rock solid under v5 - parity checks are clean (monthly for about 4-5 months now) and the plugins are stable (MySQL / MovieGrabber / cache_dirs / fan control / unMENU).

     If any recommendations are possible as to what could have caused this (maybe a VM setting I missed?), I will try to assist and retest (or run a few more commands if the failure occurs again). I'm not exactly sure what went wrong - the syslog is massive! The first sign of trouble was:

        Storage kernel: sd 11:0:4:0: [sdg] command ffff880024053c00 timed out

     which is usually a controller/cable/drive problem. I will monitor that drive (sdg is my cache drive) for a bit to see if it is still an issue on v5, but I highly doubt it. If any other information is needed, please ask.

     Thank you,
     Neo_X

     *Edit* A ReiserFS check was performed on the drive in question - no issues there. I also noticed my VM settings file had HVM enabled. Since this is possibly only supported on newer hardware, could this maybe be a cause?

        kernel = '/usr/lib/xen/boot/hvmloader'
        builder = 'hvm'
        # ...and...
        vif = [ 'type=ioemu, bridge=xenbr0' ]

     Ideas, anyone? (I just don't want to leave this bone alone, it seems.)

     windows.cfg

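     For context, a minimal sketch of what a Xen HVM domain config such as the windows.cfg above typically looks like; the name, memory, vcpus, disk path, and VNC settings are illustrative assumptions, not the poster's actual file (only the kernel, builder, and vif lines are quoted in the post).

        # hypothetical windows.cfg - minimal HVM guest definition for xm/xl
        kernel    = '/usr/lib/xen/boot/hvmloader'
        builder   = 'hvm'
        name      = 'windows7'
        memory    = 2048                                  # MB; the post mentions ~2 GB for the VM
        vcpus     = 2
        disk      = [ 'file:/mnt/cache/vm/windows7.img,hda,w' ]   # placeholder image path
        vif       = [ 'type=ioemu, bridge=xenbr0' ]
        boot      = 'c'
        vnc       = 1                                     # VNC console, as used in the post
        vnclisten = '0.0.0.0'
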
  12. Quote (Joe L.): "I did consider cases like this when I added that '-d type' option. You are one of the first to report they were able to use it on their hardware. ... /dev/sdb and /dev/sdg will never be the same disk. You can type: ls -l /dev/disk/by-id/* to see a listing of all your disks and disk partitions by model and serial number. The preclear script was written to not allow you to clear a disk that is assigned to the array, or mounted and in use. Have fun, Joe L."

     Thanks for an informative post, Joe.

     Yes, the controller cost me a pretty penny (more in the region of penny wise, pound foolish). About 2-3 years ago I was a strong follower of RAID 5 / RAID 6, and thus decided I needed a good controller that could give me the biggest RAID 6 possible relative to my case capacity at that stage - 16 drives max. The problem is, as with any data, I outgrew the server recently and thus needed to upgrade. And although it is a very nice controller, it follows the same rules of RAID: all drives need to be the same capacity, and it seems it had another limitation - 16 drives max per cluster. So I wasn't willing to shell out additional cash upgrading the 16 x 1.5 TB drives to 16 x 3 TB drives, as this would just be pointless (converting the RAID would have taken days if not weeks - in the region of 30 hours per drive), and then I would be stuck in the same position a few years down the line. So yes, now I am moving over to unRAID, which definitely seems to be the way to go.

     Otherwise, no, I'm using a "normal" desktop PC (a tri-SLI board with one of the first quad cores (Q6600) and I think in the region of 4 GB RAM - nothing serious, home user). The controller has a dual-core CPU and 512 MB RAM onboard, so I think I am safe performance-wise. I am seemingly having some issues with spin-up / spin-down on the controller; I will investigate a bit more once I have finished migrating data over from backups onto the server. Will keep you guys updated.

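     A minimal sketch of the identity check Joe L. describes, assuming the usual /dev/disk/by-id layout; the serial number used is simply the one from the smartctl output in the next post, as an example.

        # List every disk by model and serial so /dev/sdX letters can be matched
        # to physical drives before running preclear on one of them.
        ls -l /dev/disk/by-id/* | grep -v part

        # Confirm a specific serial resolves to the device you expect
        # (example serial taken from the smartctl output below):
        ls -l /dev/disk/by-id/ | grep 6YD1RLL6
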
  13. Hi guys,

     nvm - Solved. I discovered, after studying the usage text of the script, that the following command is possible:

        preclear_disk.sh -d sat /dev/sda

     This instructs preclear to use alternate commands when running smartctl.

     Hope someone can assist - I am running unRAID via an Adaptec controller (model 52445). It is rather overkill for unRAID, since it is meant for high levels of RAID, but I didn't want to shell out additional $$$ to get another controller. Currently it is performing admirably, with roughly 64 MB/s on the post-read while clearing 12 drives at the same time.

     Everything seems fine - i.e. I set all the disks up as JBOD, which seems to simulate pass-through (not sure of the correct terms). Anyhow, so far so good: unRAID picks up the first set of 12 drives I connected (having power issues with connecting more). The problem I am having: it seems that smartctl doesn't give correct stats on the drive. See sample output below.

        root@Storage:~# smartctl -a /dev/sdb
        smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Device: ST2000DL003-9VT1  Version: CC3C
        Serial number: 6YD1RLL6
        Device type: disk
        Transport protocol: SAS
        Local Time is: Sat Sep 1 08:46:46 2012 SAST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK

        Error Counter logging not supported
        Device does not support Self Test logging
        root@Storage:~#

     As can be expected, this messes up preclear a bit, since it is unable to read SMART results before, during and after. (Otherwise it doesn't crash or halt the preclear in any way - GREAT SCRIPT, JOE L.)

     I managed to find a smartctl command that does give the required output for the drive (thank you, Google):

        root@Storage:~# smartctl -d sat --all /dev/sg1
        smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Device Model:     ST2000DL003-9VT166
        Serial Number:    6YD1RLL6
        Firmware Version: CC3C
        User Capacity:    2,000,398,934,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  ATA-8-ACS revision 4
        Local Time is:    Sat Sep 1 08:47:26 2012 SAST
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED
        See vendor-specific Attribute list for marginal Attributes.

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity
                                                was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed
                                                without error or no self-test has ever
                                                been run.
        Total time to complete Offline
        data collection:                 ( 612) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   1) minutes.
        Extended self-test routine
        recommended polling time:        ( 255) minutes.
        Conveyance self-test routine
        recommended polling time:        (   2) minutes.
        SCT capabilities:              (0x30b7) SCT Status supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   109   099   006    Pre-fail  Always       -       24419656
          3 Spin_Up_Time            0x0003   090   090   000    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       286
          5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
          7 Seek_Error_Rate         0x000f   073   060   030    Pre-fail  Always       -       4318984658
          9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       3906
         10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       286
        183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
        184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
        187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
        188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
        189 High_Fly_Writes         0x003a   099   099   000    Old_age   Always       -       1
        190 Airflow_Temperature_Cel 0x0022   059   023   045    Old_age   Always   In_the_past 41 (75 200 42 25)
        191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
        192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       285
        193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       286
        194 Temperature_Celsius     0x0022   041   077   000    Old_age   Always       -       41 (0 14 0 0)
        195 Hardware_ECC_Recovered  0x001a   036   015   000    Old_age   Always       -       24419656
        197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
        240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       66043712114499
        241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       2825939409
        242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3158544765

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        No self-tests have been logged.  [To run self-tests, use: smartctl -t]

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

     So I guess my question then is: how do I go about trusting a drive after a preclear? Reading through some of the posts, it seems I need to look for the following:

     - FAILING NOW attributes
     - 5 Reallocated_Sector_Ct (this should preferably be zero, or else stay a very low number)
     - 197 Current_Pending_Sector (this should preferably be zero, or else stay a very low number)

     Also, should I be worried about which "device" I am clearing? (Since I gather that sdb and sdg are possibly the same thing...)

     The clear should be completed in about 6 hours - I will report on any results I don't understand.

     Thank you,
     Neo_x

     PS: syslog attached just in case.
     syslog.zip

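     A minimal sketch, separate from the preclear script itself, of how the post-clear check described above could be done from the shell; the device name and the -d sat option follow the Adaptec setup discussed in this post, and the script is only an illustration.

        #!/bin/bash
        # Hypothetical post-preclear sanity check: flag any attribute marked FAILING_NOW
        # and report the two counters the post singles out. Adjust DEV to suit.
        DEV=/dev/sg1

        smartctl -d sat --all "$DEV" | awk '
          /FAILING_NOW/                  { fail=1; print "FAILING NOW: " $0 }
          $2 == "Reallocated_Sector_Ct"  { print "Reallocated sectors:     " $NF }
          $2 == "Current_Pending_Sector" { print "Current pending sectors: " $NF }
          END { if (!fail) print "No attributes currently failing." }
        '
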