Everything posted by dheg

  1. I'm having issues with the parity check. I have seven 2TB drives (5 data + parity + cache), all of them WD Green drives. In 4.7 my parity checks lasted about 6h. They increased to 9h on 5rc8a, but since some people were having issues, I didn't pay much attention. However, in 5rc10 (I didn't upgrade to rc9) the duration has increased to 13h. I have run parity checks two days in a row and the duration has been over 13h both times. This is my syslog: http://tny.cz/2ca38aac Any hints?
  2. (Follow-up to my post below.) Quoting the reply: "Were the drives spun down when you shut down? I would check it myself, but my wife is watching movies on the server while I watch football." They were up; I restarted the system before each shutdown, so they were spinning up. Could it be because the system was "fresh"? A lot of cache_dirs activity, I guess.
  3. I shut down the system 3 times through the vSphere client on a freshly booted system. It unmounted and shut down cleanly, with no parity check on start-up, and I didn't get a single error/warning message. It looks very good! On the down side, I'm afraid there is a 'but': it took unRAID more than 5 minutes to shut down (one of the times it took 6 whole minutes). I was very tempted to click on the power button twice but fought the urge. I can live with this, but I will have to configure the UPS shutdown procedure accordingly. Thanks Zeron! PS: My system is: unRAID 5 rc10, 7 drives (5 data + parity + cache), no ifup line in the go file.
  4. And how do I set up the system? Just drop them in /boot/packages? What about the configuration? I checked my 4.7 installation and have the following files:
       powerdown_ctlaltdel-unmenu-package.conf
       powerdown-1.02-noarch-unRAID.tgz
       powerdown-1.02-noarch-unRAID.tgz.auto_install
       powerdown-overtemp-unmenu-package.conf
       unraid-overtemp-shutdown.auto_install
     I also have in the go file:
       CTRLALTDEL=YES
       SYSLOG=YES
       STATUS=YES
       START=YES
       LOGSAVE=30
       installpkg /boot/packages/powerdown-1.02-noarch-unRAID.tgz
       sysctl -w kernel.poweroff_cmd="/sbin/powerdown"
     BTW, how will this play with VMware tools? (I should've said I'm running an ESXi server.)
  5. Two pretty handy add-ons I run on my 4.7 version are unRAID Power-Down on Disk Overtemp and Clean Powerdown. Do these exist for v5-rc8a?
  6. Hi Roancea, I have an ESXi host with several guests (Windows, several Linux servers, and unRAID). I'm planning on implementing a new guest to run a ZFS server (OpenIndiana is very well positioned). To do this, I'll fit a SAS card which will be passed through to this guest. This way everything will be on the same VMXNET3 network @ 10Gbps.
  7. Not really, I am just very risk-averse. Did you mean that with ZFS you have a better chance of staying lucky, or the other way around? I thought ZFS was built with data security in mind.
     So I could create an unRAID VM in a new datastore with the unRAID USB and fire the system up, and then I could restore the ZFS system. Did I get this right? That's very good advice and I plan to follow it.
  8. @Johnm, siamsquare, SidebandSamurai, et al.: you seem to prefer OI over NexentaStor. May I ask why? As SidebandSamurai wrote in an earlier post, they both perform the same, but the NexentaStor interface is nicer? Am I missing something?
  9. Exactly! I only need a 1U case though; four 1TB drives are more than enough for me. My estimates are:
       • 500-750GB for cache. I don't really need more; I think my highest throughput per day has been around 250GB.
       • 500GB for newznab.
       • That leaves me, assuming I go with raidZ2 (4 x 1TB in raidZ2 gives roughly 2TB usable), 1TB (maybe 600GB, to leave some headroom) for datastores.
     In the short term this is more than enough, and if I run out of space I can always upgrade my drives.
  10. Thanks! Great work! Are you in the newznab forums? Would you mind if I post this there?
  11. Fair enough. The mod @ newznab IRC couldn't confirm whether this happens often; he only said, "Sometimes it may clash, especially if using MP3 samples".
  12. @SidebandSamurai: wow, thanks a lot, I have some more reading to do. RAM won't be a problem; I have 16GB and just ordered two 8GB sticks. My intention is to dedicate at least 8GB, could be more if needed, for four 1TB HDDs. After Johnm's post I decided to go with this, so 8GB for a 4TB array should be plenty. Napp-it seems easy, Nexenta nicer... maybe I should try it. Has anyone tried this case? The 'Add to Cart' button is calling my name.
  13. This is a short summary of my findings:
       • Update binaries searches usenet for headers.
       • Update releases consolidates the headers into releases.
       • Post-processing looks up the releases (NFOs) on the internet and fills in the information on the web interface.
     By default, newznab works sequentially (assuming you use the screen or init scripts):
       • updating binaries
       • updating releases
       • post-processing 100 releases, regardless of how many you actually have
     However, if you follow these instructions you will make post-processing run continuously, a kind of multitasking: it will post-process old releases while it's grabbing headers or sorting releases (see the sketch below). Benefits:
       • Post-processing will run continuously and will, potentially, post-process more releases per day.
       • You will not spam Amazon's API, assuming of course you change the postprocess.php file in www/lib to process just one release at a time.
     However, as the postprocess.php script will also be called during standard operations, there is a chance (I don't know how big) that the two will clash and get killed, so you'll have to restart it. I hope it's clear!
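     For anyone who wants to try it, this is a minimal sketch of running post-processing in its own loop, next to the main update loop. The install path and the idea of a standalone postprocess.php entry point are my assumptions (installs expose post-processing differently), so adjust to your own setup:

       #!/bin/bash
       # Sketch: run newznab post-processing continuously, independent of
       # the binaries/releases loop. NEWZNAB_PATH is an assumption.
       NEWZNAB_PATH="/var/www/newznab"
       while true; do
           # postprocess.php as a standalone script is an assumption;
           # point this at however your install exposes post-processing.
           /usr/bin/php "$NEWZNAB_PATH/misc/update_scripts/postprocess.php"
           sleep 60   # brief pause so a clash with the main loop can recover
       done

     Run it under screen or from another init script, alongside the main loop.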
  14. To boot from a USB you would still need a datastore to define the guest: ESXi boots the VM, which then boots FreeNAS from the USB, kind of like unRAID. I'm trying a different approach: booting FreeNAS from a CF card that will also be the datastore. I haven't received the parts yet, so it's too early to say whether this will work.
  15. Regarding post-processing, I had a short chat with one of the mods on the newznab IRC. I'll write a quick summary once I get home. Sent from my GT-I9100 using Tapatalk 2
  16. Thanks! And thanks for the link to your post; that was the one that first gave me the idea of using FreeNAS, although I couldn't find it again. This thread has become HUGE! If you were to start again, would you still recommend OpenIndiana? Considering I'm a newbie, FreeNAS seems to have a larger community for support.
     I was actually thinking of raid-z2 (which I believe is equivalent to RAID 6). I may seem too paranoid, but right when I started using unRAID I had two drives fail in just 3 days, so I lost some data. You can never be too sure. The thing about RAID 10 is that if two drives within the same vdev fail, you are done, right? Any other alternatives?
     You are fully right; I just needed someone other than my wife to tell me about the cost of SSD drives. Besides, once the server is up, I don't really need SSD speeds. I didn't mention it, but my goal is to use FreeNAS for the cache drive as well as the datastores. I should say that one of my datastores is a newznab server and it uses a lot of space... I've found these cases so far:
       https://ri-vier.eu/1u-server-case-w-4-hotswappable-satasas-drive-bay-rpc1204-p-3.html?cPath=1_3_4
       https://ri-vier.eu/1u-server-case-w-4-hotswappable-satasas-drive-bay-rpc1004-p-1.html?cPath=1_3_4
     And then suddenly today I found a SATA III one, which is nice thinking ahead:
       http://www.xcase.co.uk/X-Case-RM-140-1u-4-drive-hotswap-case-with-rails-p/case-xcase-140.htm
     I already thought of that: I'll make the FreeNAS a guest on the ESXi server to use the vmxnet3 LAN. I'm thinking of using a CF card as the FreeNAS datastore; I ordered this and I'm waiting for it to arrive to test it. I want it to be a standalone case so I can use the 24 drives of the Norco for unRAID, but it'll be daisy-chained to the server.
     Thanks a lot for that, you saved me some serious money and maybe also my wife. I'll go with the spinners. However, this brings a new issue: cooling. One of the nice features of SSDs, apart from speed and consumption, is that they don't need much cooling. I'll have to give this some thought. I'll keep you posted.
  17. I’m thinking of implementing FreeNAS as a fileserver for the datastores. The main reason is redundancy: right now my server has only one SSD as a datastore, so if it fails I’m sc***! However, although my main purpose is redundancy, I’m also very happy with the speed of my setup. I’m using a 256GB Plextor M3 SSD connected to one of the SATA III ports of the X9SCM-F-O; last time I checked, read speeds were in excess of 500MB/s. So ideally I’d like to get RAID protection without sacrificing performance (and even improve it if possible). My intention is to dedicate an M1015 to the FreeNAS fileserver with only 4 drives (1 dedicated 1U case), so the M1015 will be underused. And here is where I start having questions:
       • Since I’ll be using only 4 drives, can I plug the M1015 into one of the PCIe x4 ports without sacrificing performance? (See the rough numbers below.)
       • I know close to nothing about FreeNAS, but I believe it uses striped RAID. What kind of write/read performance improvement should I expect?
       • Are SSDs really worth it for FreeNAS, or should I go with spinners? If using SSDs, what kind of speed should I expect? What about spinners? Please feel free to recommend parts!
       • When looking around I’ve seen many reasonably priced 1U SATA II enclosures for 4 drives. However, when looking for SATA III, prices really go up, especially here. I’m Spanish, so apart from the spelling mistakes (sorry!), my options to shop around aren’t many. If I go with SSDs, going with SATA III is a no-brainer, but if using spinners, could I settle for SATA II?
       • Anything else I’m missing?
     I know Johnm has a similar setup, but does anyone else have, or is planning, something like this? I wouldn’t like to overload Johnm with so many questions; he has already contributed so much to this thread!
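     A rough sanity check on the x4 question, using my own back-of-the-envelope numbers (assuming the slot runs PCIe 2.0 at roughly 500MB/s per lane): an x4 slot gives about 4 x 500MB/s = 2GB/s, while four SSDs at ~500MB/s each can also push ~2GB/s combined, so x4 would be borderline for SSDs. Four spinners at ~150MB/s each (~600MB/s total) would fit comfortably.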
  18. Thanks, BLKMGK and graywolf!
  19. Where should it go? After the update_releases.php line?
  20. Quoting the earlier replies: "There are a couple of other scripts that can help. First you need to set up unrar, then in the setup specify where unrar is, and also set up deep-dive password checking. Then supposedly the ./misc/testing/update_parsing.php script should help look into the NFOs and the RAR files to try to make out the garbled ones; just add it to your newznab_local.sh script to run each time. There are also other scripts in the ./misc/testing directory that might help. I need to do the setup myself, but supposedly, from the IRC chat, that is the only/best way to try to figure out the garbled releases." And: "There are a lot of scripts in the testing folder. It would be nice if there were a list of what they do so we can choose which to run. I'm already using the importmodified from there, which is great, but that's only because I read about it on a blog."
     What does it do? Do you have the link to that blog? Sent from my GT-I9100 using Tapatalk 2
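     If I understood the advice right, running it on each pass is just one extra line in newznab_local.sh (the path variable is my assumption; adjust to your install):

       # Assumption: NEWZNAB_PATH points at the newznab root.
       NEWZNAB_PATH="/var/www/newznab"
       # Run the garbled-release parser after each update pass:
       /usr/bin/php "$NEWZNAB_PATH/misc/testing/update_parsing.php"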
  21. For those who are a bit lost: http://aceshome.com/newznab/ Sent from my GT-P7500 using Tapatalk 2
  22. You should try the init script... Totally headless now. Sent from my GT-I9100 using Tapatalk 2
  23. Well, FWIW, I'm a Linux noob. I know I should create a script, copy it to init.d, make it executable, and update rc. The question is: what goes in the script?
  24. To be honest, I don't know if there are any actual benefits. It runs, in a row:
       update_binaries.php
       update_releases.php
       update_predb.php true
       optimise_db.php
       update_tvschedule.php
       update_theaters.php
     It then waits 10 minutes (you can modify this value) and starts all over again; a sketch of the loop is below. I don't know about the cron jobs some are using, because I haven't used them. The default screen script, newznab_screen.sh, does pretty much the same, although you have to actually run it. I use the init script because it runs automatically; that's the only reason. I use a dedicated server that reboots automatically on security updates, and I never have to worry whether newznab is running, because it is. I can close the console and the PuTTY terminal without worrying whether I connected or disconnected screen. Basically, it's a self-running server. The only things I'm missing are a low-HDD-space mail warning (almost there) and how to start sphinx automatically on reboot. I can then forget about it, which is the actual purpose of running a server, isn't it?
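     For anyone else wondering what goes in the script, this is roughly the loop mine runs (the init.d wrapper just launches it at boot). The newznab path and PHP binary location are my assumptions, so adjust them to your install:

       #!/bin/bash
       # Sketch of the update loop described above; paths are assumptions.
       NEWZNAB_PATH="/var/www/newznab/misc/update_scripts"
       cd "$NEWZNAB_PATH" || exit 1
       while true; do
           /usr/bin/php update_binaries.php
           /usr/bin/php update_releases.php
           /usr/bin/php update_predb.php true   # 'true' argument as listed above
           /usr/bin/php optimise_db.php
           /usr/bin/php update_tvschedule.php
           /usr/bin/php update_theaters.php
           sleep 600   # wait 10 minutes, then start over
       done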
  25. I'm almost there: running quite stable for two days, indexing a few groups (7), and running the init newznab.sh script, so everything is pretty much automated without screen. I'm only missing how to autostart the sphinx daemon (./nnindexer.php daemon). Any hints? PS: running a VM in ESXi with a 50GB HDD, 4GB RAM and 4 cores. I know this is too much, but I don't want to run into issues because of a lack of resources :-)
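     Following up on my own question, in case it helps: one approach would be an @reboot cron entry (the nnindexer.php location is my assumption; the daemon command is the one above):

       # crontab -e, then add a line like this to start sphinx at boot
       # (the path to nnindexer.php is an assumption; adjust to your install):
       @reboot cd /var/www/newznab/misc/sphinx && /usr/bin/php nnindexer.php daemon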