
tucansam

Members
  • Posts

    1,110
  • Joined

  • Last visited

Everything posted by tucansam

  1. When you say the host side and the container side, I'm not clear on what that means. Is the host my media server, where the media lives? And is the container side the paths inside the container (Docker -> Edit)? What does the slave option do? I am eventually going to consolidate servers, and I wonder if the slave option will still be needed when everything eventually resides on the same physical machine.
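To sketch my current understanding of the terminology (the image name and the /config mapping are guesses on my part, not anything I've confirmed; the media path is from my setup):

```shell
# Left of each colon is the host side (real path on the server),
# right is the container side (what the app sees inside Docker).
# The ":slave" suffix sets the mount-propagation mode, which I gather
# matters for remote mounts under /mnt/disks that can (re)connect
# after the container has started.
docker run -d --name sonarr \
  -v '/mnt/disks/192.168.0.4_tv_shows':'/tv':slave \
  -v /mnt/user/appdata/sonarr:/config \
  linuxserver/sonarr
```

Is that roughly the right picture?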
  2. I need some clarification, guys, sorry. I'm new to Sonarr; I used SickBeard for years and years, then SickRage for a few months, and now Sonarr. I am having issues: Sonarr is not moving files, SAB is downloading two or three different versions of the same show (720p, 1080p, and obfuscated), filenames aren't being renamed, and SAB is downloading the same file multiple times (I think Sonarr is telling it to -- when I manually move a file from the downloads folder to my TV shows share, I think Sonarr decides it's missing and tells SAB to download it again). Actually, four files did get moved last night, but they were moved into the root of the TV shows folder, not into the folders corresponding to the shows themselves (i.e., "/tv_shows/show_s1_ep1" instead of "/tv_shows/show/s1_ep1"). Plus the shows that were moved didn't get deleted from the downloads directory.
Anyway, there are more options in Sonarr than there were in SickBeard, and I don't understand many of them. For starters, I think my paths are messed up, so that's where I need the initial clarification; then I will move on to the next issue that persists. I have "/mnt/disks/192.168.0.4_tv_shows/" as the directory where all TV media lives; it's mounted via CIFS from another server. "/mnt/disks/wd500/appdata/binhex-sabnzbdvpn/Downloads" is the directory where I have told SAB to do its work, and there is of course an "/incomplete" sub-directory. That's it. When I ran SickBeard (initially on Windows, which was ridiculously easy, and later on Linux, which required some work), everything ran 100%: SickBeard scanned the tv_shows share, found a missing episode, and told SAB about it; SAB downloaded and unpacked it; and the episode was moved into the appropriate directory on the tv_shows share, then deleted from the Downloads working directory. Presently I am doing everything by hand, and it's very time consuming. One thing I was never clear on, even after all these years of using SickBeard, is who moves the files -- SickBeard or SAB?
When a show is done downloading, I see SAB change its status to "moving..." but then nothing happens. I have read threads about post-processing; some say "have SickBeard handle it" and some say "have SAB handle it." I configured both back in the day, and it worked, so I was never sure which one was correct and doing the work. Does Sonarr handle moving (and hopefully renaming, as these obfuscated filenames are insane), or does SAB? What does "import" mean with regard to Sonarr? What is a "proper," and is it why I am getting multiple versions of the same show? How do I auto-rename obfuscated filenames? Etc., etc. (like I said, sorry). I'll start by asking for clarification on the paths, to make sure I am at least telling SAB and Sonarr to look and work in the correct places. Thanks.
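To make the paths question concrete, here is how I *think* the volume mappings would need to line up so Sonarr can see SAB's completed downloads -- the container-side names (/downloads, /tv) are just example names I made up, not anything I've configured:

```shell
# Same host folder mapped into BOTH containers, so the path SAB reports
# for a finished download is a path Sonarr can also reach:
#
#   sab:    -v /mnt/disks/wd500/appdata/binhex-sabnzbdvpn/Downloads:/downloads
#   sonarr: -v /mnt/disks/wd500/appdata/binhex-sabnzbdvpn/Downloads:/downloads
#           -v '/mnt/disks/192.168.0.4_tv_shows':/tv
#
# Sonarr's root folder would then be /tv, and SAB's completed-download
# folder /downloads -- identical names inside both containers.
```

Is that the right idea, or do I need Sonarr's "Remote Path Mapping" instead?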
  3. How does the community feel about a WD Red 8TB for parity, with the rest of the array composed of 8TB Seagate Archives?
  4. Well, as I suspected, my LSI M1115 won't fit my DS380 without some case modding. I see a bunch of non-Marvell cards in the wiki that *appear* as though they may work with 6.x...
  5. What cables are you guys using for the Dell and/or IBM cards? My DS380 has both SATA and SAS connectors on the backplane.
  6. Is it possible to (easily) determine if the VPN is working? SAB is downloading... In the past, when I ran a VPN on my firewall, I would get slower download speeds through the VPN. Running sab+vpn in a Docker container, I am getting speeds similar to what I would get without the VPN running on the firewall. How do I confirm SAB is using the VPN?
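The check I'm thinking of trying is comparing external IPs -- the container name here is taken from my appdata path, and ifconfig.io is just one of several "what's my IP" services, so adjust to taste:

```shell
# External IP as seen from INSIDE the sab container:
docker exec binhex-sabnzbdvpn curl -s ifconfig.io

# External IP as seen from the host, for comparison:
curl -s ifconfig.io

# If the two addresses differ (and the first one belongs to the VPN
# provider), sab's traffic is leaving through the tunnel.
```

Would that be conclusive, or is there a better way?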
  7. Can you explain why this is? I am considering consolidating two servers into one using these drives, but suspending all writes to the array during the monthly parity check would be a problem, especially given how long an 8x8TB array would probably take to do the check.... Also, what if two users are writing to the array at the same time, and happen to be hitting the same disk? Thanks.
  8. Tried installing Community Applications on two different systems, one 6.1.3 and one 6.0-rc5, and am getting this:

plugin: installing: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
plugin: downloading https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg ... done
Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
plugin: xml parse error

What am I doing wrong?
  9. Gary, this is why I am replacing the 320GB Maxtor with the 2TB Red, and the cache drive is just there to fill up space and generate heat; it's pretty much optional in this particular build. I have all drive bays filled, including one bay with a dead 80GB drive, just to create a worst-case scenario from an airflow standpoint. I was doing some benchmarking with and without the cache drive, so if it fails, it's not a big deal. In fairness to the Maxtor, it came with a system I bought in 2000, and it's still running every bit as fast (and hot!) as it was back then!
Does anyone have any experience with the Gentle Typhoon vs the SilenX Red? http://www.newegg.com/Product/Product.aspx?Item=N82E16835226042&cm_re=silenx-_-35-226-042-_-Product I have a couple in my main unraid server and they've been in service for years. Wondering how the Typhoons compare in noise and CFM. Nidec lists their stats in cubic meters per hour, but Google tells me that the rating Nidec lists is equivalent to 50CFM, which is a far cry from the 74CFM claimed by SilenX, yet every review I read says the Typhoons move a ton of air. Curious if anyone has direct experience comparing both. Also found a very favorable-looking alternative at a price that is impossible to beat: http://www.newegg.com/Product/Product.aspx?Item=N82E16835553009&cm_re=cougar_turbine-_-35-553-009-_-Product May have to order a set just for giggles, based on reviews and price-per-fan. As always, comments welcome.
  10. Sorry guys, I have been remiss in getting back to this thread; I've been working like a dog. The front server door is still (!) at the machinist; he's totally inundated with real work, and side projects get the back burner. I put the whole system back together, sans front server door, which semi-simulates my having a vented door anyway (only a REALLY WELL vented door...). Spent the last few days playing with various fans and fan configs. Here's what I've come up with.
Stock Silverstone fans: most horrible fans ever. But they are darned quiet, so I have kept one as an exhaust in the rear of the case. This is common to all the different fan setups I've tried -- the stock Silverstone fan for exhaust in the rear never changes. And I drilled 1/4" holes all along the length of that ridiculous plastic bracket that blocks the drive bay for whatever pogue decides he needs a video card in this box (Silverstone's design team needs to go back to school). With the case completely sealed up and seven 3.5" drives in the system...
- Side two Silverstone fans (stock), intake -- drives were 35-42°C during large copies
- Side two Silverstone fans (stock), exhaust -- drives were 32-38°C during large copies
- Side two Corsair fans (the super loud ones with the lame color-change rings) on slow speed, intake -- drives were 34-39°C during large copies
- Side two Corsair fans, exhaust -- drives are 28-32°C during large copies!
(One of my drives is a decades-old Maxtor (Seagate) 320GB 7200rpm DiamondMax which runs super hot to begin with, and may be acting like a space heater for the rest of the array. I have a 2TB WD Red coming this week to replace it.) I also tried some Lian Li 120mms (too loud) and some unmarked beater 120s I had in a pile (they rattled). The cache drive is still pushing 42-44°C right now, which is irritating, although it is also an elderly WD 7200rpm 160GB model, and probably runs hot.
It's currently a 3.5" drive, and will eventually get swapped for a 2.5" 1TB drive mounted at the rear of the case. Then all rear vent holes except one row above and one row below the cache drive will be blocked, so hopefully this will generate some good airflow around the cache drive with the all-exhaust setup. I am debating trying the Nidec/Scythe Gentle Typhoons. I am also debating putting a 140mm fan at the rear for exhaust, either with a 120mm-to-140mm adapter, or a Xigmatek 140mm which can also be mounted in a 120mm space (and then use all holes more effectively than a 120mm would) and which claims 80-90CFM. I am also thinking about taking two or four old broken 120mm fans, removing all of the center motor parts and motor support arms so only the outer frame is left, and then using them as spacers to get the two side fans as close to the drive bay frame as possible, and trying intake and exhaust with that setup, with the empty 120mm frames acting as tunnels to channel the air directly to the drives.
Looking at the stock setup more closely, with the utterly anemic stock Silverstones sitting *inches* away from the drives, it's no wonder a side-intake/rear-exhaust config is so horrible. Even the Corsairs, with high static pressure, couldn't get enough air over the drives when used as side intake (unless I turned them up to afterburner on the fan controller). With the 3.5" bays populated with standard-height drives, there is literally 1/2mm of space or less between the top of the drive and the top of the cutout in the metal drive bay side wall -- hardly enough room for air from high-pressure fans, much less the gentle breeze the stock fans put out (and virtually no space for forced air under the drives to cool the circuit board). This case design is horrible on so many fronts; I discover more each time I take a close look at it!
I have also been opening the side access panel a few inches at a time to simulate venting it instead of the server door (as I believe garycase mentioned, to have air come straight across the drives if using the side fans as exhaust). So far, if it makes a difference, I can't tell; however, I am going to start a parity check in the next couple of days so I can really see what helps. I may end up keeping the front door solid and venting the opposite side of the case instead, or venting both the opposite wall AND the front door. So far, with the front door removed, the Corsairs (side-exhaust config) are showing the most promise; however, they are insanely loud, and I am barely using their potential by putting them on a fan controller. The Gentle Typhoons are $22-25 a pop, and spending another $75 on this case doesn't seem like the most interesting option at this point. I may try a third Corsair at the rear ($), or at the very least the Lian Li (free, but loud unless I drop the voltage), or the 140mm option, to see if I can get more air flowing across the drives.
As an aside, where (physically) is the temp sensor located on a hard disk? Even the cache drive, showing 44°C (111°F) in unraid, is cool to the touch. I'm guessing this is internal platter temperature? Or the temp of the ICs on the circuit board on the bottom of the drive (most of which are pointed up into the drive these days, so all I can put my fingers on are the circuit traces, and even they don't feel super hot)? Just curious why all of my drives feel cool to the touch despite their reported temps.
  11. Yep. Try six or eight drives and a parity check, and let us know your temps. They're going to be uncomfortably high.
  12. You are right about right-side vent placement being ideal; however, I wire my PSU fan to run 100% of the time, so I am hoping that between the rear 120mm fan and the 120mm PSU fan, I can get enough rearward suction across the drives to cool their surfaces completely. Given the proximity of the side 120mm fans to the drives, I expect the majority of the intake air to be driven across the surface of the drives for about 1/2 to 3/4 of the width of those 120mm side fans. Hopefully two rear fans will help coax some more air across the rears of the drives.
  13. 100% exhaust is how I run every case I own, and have for years. I seal up the entire case, and then open up only those areas where I want air to come in and flow across other components. In my primary server, a 9-bay tower, I have three iStarUSA 5-in-3 cages. Five 120mm fans -- two side, two top, and one back -- pull air out, and the only place it can enter is the 5-in-3 cages, hence the drives get 100% of the airflow. The CPU is an Athlon X2u, which barely requires a heatsink (although it has a heatsink/fan), and nothing inside the system ever gets hot. Mid-30s on parity check day, and that's during the hot summer months. I used tape to seal the area between the 5-in-3 cages and the metal case walls, and more tape to seal the area between the metal case walls and the plastic front bezel trim. Hence, again, literally *100%* of the airflow goes across the drives. There is no wasted or inefficient air, and dust isn't a problem, as I just vacuum the front of the cages whenever I vacuum the floor in that room.
With the Silverstone, two 120mm intakes and only one 120mm exhaust means that, unless you run the exhaust faster than the intake fans, more air will blow across the drives than can escape, presumably warming up, and that air will get "stuck" inside the case, as there are half as many fans pulling it out. With the stock fan config, I was seeing drives in the high 30s C during normal operation, inching into the low to mid 40s during a parity check. No thanks; I know I can improve on that. With three exhaust fans and every other hole in the case sealed, the only place cool outside air will come in is across the drives. And vented areas that consist of grills of tiny, tiny holes are easily kept clean; they trap dust mightily and vacuum out very well.
The metal cage that the drives sit in looks like it was designed by a 3rd-grade science class, in terms of areas for the air to get across the drives, and the big fat piece of plastic for video cards means one drive essentially gets no air at all. Extremely poor design. Like I said, I know I can improve upon it, and when I get the vented door back, I'll post my results. Three fans pulling 100% of the air across all of the drives is going to be better than two (mostly blocked) fans trying to push air across them, with one in the back trying to remove all the heat on its own. On all of the systems I am speaking of, the fans cannot be heard from 5+ feet away, so it's not like I'm running Sunon or Delta industrial jet-engine blowers in there for airflow. Efficient airflow design is much more important than fan speed and the amount of air the fans can move; hence quieter fans can be used with great success. Reference the early-1990s Silicon Graphics Indigo R4400 with Elan graphics: massive heat producers in that system, just one fan. The case was designed so that air flowed over specific components in a specific order. Don't knock it til you've tried it.
And garycase, I considered venting the right side, but that would limit how I can orient the server (i.e., I can't place it right up against another system/wall/etc. without blocking the vents). With the vents in the front, I can place the server right up against my desktop's mini-tower and not have to worry about restricting airflow. Plus there is more room between the plastic drive caddies for air to flow across, vs venting on the right side and dealing with that abysmal cage design.
  14. I still love my Silverstone, although I sent the front metal door to my machinist to have a million holes drilled in it, so I can run all three case fans as exhaust. This, combined with sealing the rest of the case, will result in all air flowing directly over the drives. Otherwise, I have temperature problems (not problems per se; the drives just run hotter than I prefer). Silverstone screwed up the airflow design of this case. Otherwise it's a great case. I'll post pics/results when I get the door back from the shop.
  15. I have two (with a third on order) of these five-drive units, and one single-drive unit. Love them; they're all I'll use. I take the fans off of them. My tower case has five 120mm fans all pulling air out, and the entire case is sealed, so incoming air only flows over the drives. This makes the system very quiet, and my drives never go above 30°C during parity checks. I wanted a trayless design for ease, although I rarely swap drives, so in retrospect either would have worked. I know it's trivial, but they are (in my opinion) the nicest-looking units available.
  16. http://www.newegg.com/Product/Product.aspx?Item=N82E16816124064 I am adding this to another one of my servers after reading numerous reviews.
  17. I have been using one iStarUSA 5-in-3 for about two years in a Coolermaster Centurion 590. I removed the fan and back plastic trim before I even mounted it in the case. I have five 120mm fans (SilenX yellows and reds with the little temperature probe) and a Silverstone 120mm on the CPU heatsink (super low power Athlon X2). The system is virtually silent from a foot away, and all of the fans are drawing air out of the case. I sealed up every mfg hole, screw hole, fancy vent, etc., in the case except where the fans are mounted. 100% of air being drawn out by the fans is coming in from the front of the 5-in-3 enclosure, and I've never had a disk go above 34C, on a warm summer day, during a parity check. Most of my disks run between 28-30C when they are in use. There is enough suction to easily hold a 8.5x11 sheet of paper flush against the front of the drive enclosure. I am using 5900rpm disks and a 3.5" 7200rpm cache disk mounted on a PCI-slot drive adapter, directly above the power supply fan, which is of course also drawing air out (still can't understand why anyone would mount a power supply so the fan is not drawing out internal case air). I will be adding a second iStar 5-in-3 shortly, and will remove the fan from the rear also. All of the case fans are running at their lowest RPM setting. If I ever run into temp problems with a second (and eventually third) 5-in-3, I will either up the RPMs or switch to higher output fans. Dust does collect. The front of the drive bay enclosure gets vacuumed every time the floors do (weekly), and once a year the case side panel gets popped off and the internals get blasted with air and vacuumed. This setup has been 100% stable and 100% cool (literally speaking). I am curious to see what happens with a second 5-in-3 installed.
  18. How do you like this board? Any performance issues, as reported elsewhere?
  19. Don't we have rules in this forum??? PICS OR GTFO!!!
  20. Yep. Works fantastic.
  21. My $0.02: I decided that, while a major PITA, I can theoretically re-encode all of my DVDs and Blu-rays should my server blow up. Plus they take up, by far, the most room on the server (and in boxes in the garage, ha!). So my backups are only of the family's critical files: homework, legal papers, taxes, family photos, etc. Stuff that is literally not replaceable. I have two unraid servers. One is full of 4TB drives, a major investment for me. My second server is full of older disks, and when I upgrade a drive in my main server, the second server gets the old one (i.e., I started out with 120 and 320GB drives and now have a few 3TB drives to add when the time comes). I use rsync to copy the critical stuff over to the second server. It has just one share, simply "backup." I can bring it online in a pinch if the main server is dead and I need data *right now*, or I can use it to copy stuff back.
rsync was a major PITA for me, and I have been unable to automate it; it requires near-constant babysitting and likes to give me errors and trouble. I don't have a lot of free time anymore, so maybe some day I'll get it all working. For now, once a week (or sooner if something major is added to the main server, like a gig of wedding photos or something), I run a bunch of rsync commands to get the data moved. I like the idea of moving data with Windows, except that, in my experience, Windows copies everything to itself first and then to the second machine, acting like a conduit for the data. So, if I copy/paste from server1 to server2 using Explorer, the process slows, because my Windows system is currently on a powerline Ethernet adapter. rsync lets the two servers, sitting side-by-side, copy at gig speeds. Once I get my house wired, this won't be as much of an issue, but for now... As for off-site, I would love to be able to do that.
I have toyed with the idea of setting up a server at my folks' house, so they can have the benefits of unraid and each household can back up to the other's server. But even with an initial rsync copy, continuous weekly copying of even just the differential data is going to eat up the "allowed bandwidth" our ridiculous ISPs give us. I'd be getting over-quota emails constantly by the time the kids have watched a few Netflix streams and I've downloaded a bunch of new ISOs. So off-site remains elusive for me, but it's my eventual goal. My final idea, as has been mentioned, is to simply carry the server to another location (my locker at work, for example). That was the idea when I built the second server in an ammo can. But I've yet to actually do it. Yep, my fault.
  22. I was looking at the iStarUSA 7-bay NAS case (a 9-bay is on the horizon), and it's larger than the Silverstone, and adding 10/11 hot-swap bays would be another $200-250. I am using a 5-bay iStarUSA cage in my current unraid server, and I like the product, but for the money, the Silverstone kills it hands down, even if you lose a couple of bays. I am planning on each of my DS380s having a 500GB WD Black 2.5" cache drive mounted internally, and I figure a 4TB Seagate NAS drive mounted elsewhere in the case (floor, etc.), since that drive will rarely move... giving me 8 bays for data drives. That's four more bays than I have now in my largest unraid server (my 5-bay iStar currently holds my parity disk along with 4 member disks), a far smaller footprint than the mega tower I had built with expansion in mind, and cooling that is more directed at the drives. Seems win/win. It seems disks are growing in capacity faster than I am filling my array, so needing more physical disks is not as important as being able to easily swap a 2/3TB drive for a 4/5/6TB drive down the road. Good that Amazon has them... at $38/ea shipping (eBay and elsewhere), I can get an Amazon Prime account, pay for it with what shipping a pair of DS380s would have cost, and get free shipping for the rest of the year. Hopefully we will start seeing reviews and more builds soon. I am excited about this case.
  23. Good info here: http://forums.servethehome.com/chassis-enclosures/3067-silverstone-ds380-itx-8x-sas-sata-hotswap.html And here:
  24. Saw the mention in the Lian Li thread and figured I'd make a thread just for this case... I expect it will be a success. I initially bought a 9-bay tower, and have never gone beyond five data drives (I just keep buying bigger disks). I am now consolidating that and my backup server into two smaller cases...
  25. Can I start one? The only place I've found them so far (US) is eBay, ~$200/ea with shipping. I am in the middle of moving, but when I'm done, I will be picking up a pair, along with two of ASRock's new quad-core 12-SATA-port motherboards. Anyone else using them? Curious how the stock fans are.