Everything posted by tucansam

  1. This is a somewhat rhetorical question, but I have to ask anyway: popping the drives out and using them in an array voids the warranty, correct? On the drives themselves, I mean. Thanks.
  2. Can you explain why this is? I am considering consolidating two servers into one using these drives, but suspending all writes to the array during the monthly parity check would be a problem, especially given how long an 8x8TB array would probably take to complete the check... Also, what if two users are writing to the array at the same time and happen to be hitting the same disk? Thanks.
  3. Tried installing the Community Applications on two different systems, one 6.1.3 and one 6.0-rc5, and am getting this:

     plugin: installing: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
     plugin: downloading https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
     plugin: downloading: https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg ... done
     Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     Warning: simplexml_load_file(): /tmp/plugins/community.applications.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193
     plugin: xml parse error

     What am I doing wrong?
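     In case it helps narrow this down: the "Document is empty" part makes me suspect the downloaded .plg file is coming back blank, so my next step is to fetch it by hand from the console and look at what actually arrives. A rough sketch only, assuming wget is available on the box:

     # fetch the plugin file manually, bypassing the plugin manager
     wget -O /tmp/community.applications.plg https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg

     # a zero-byte file (or an HTML error page) here would point at a network/DNS/SSL
     # problem reaching raw.githubusercontent.com rather than at the plugin itself
     ls -l /tmp/community.applications.plg
     head -n 5 /tmp/community.applications.plg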
  4. Here's a dumb question that shows my inexperience with Linux. How do I configure unraid as a syslog server? I have a few machines running on SSDs and I want to limit writes to those drives, so I'd like their logs and such written to an unraid share instead. Thanks.
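     For reference, this is the kind of thing I'm picturing. A rough sketch only, assuming rsyslog on both ends; the share name and IP address are made up:

     # on the unraid box, in rsyslog.conf: listen for remote syslog over UDP 514
     # and write each client's messages into a file on a share
     $ModLoad imudp
     $UDPServerRun 514
     $template RemoteLogs,"/mnt/user/syslog/%HOSTNAME%.log"
     *.* ?RemoteLogs

     # on each SSD-based client, in its rsyslog.conf: forward everything to the unraid box
     *.* @192.168.1.10:514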
  5. I pulled the server apart last month to dispose of dust bunnies, so I probably bumped something. Good call. Thanks to all.
  6. Literally the entire rest of the syslog was spindown entries.
  7. 6.0-rc5, have been running it for a while now. The automatic parity check on 1 Aug turned up 5 parity errors. I know this is minimal, but it's the first time I've had errors on this system. Syslog entries that piqued my interest:

     Jul 27 12:32:27 ffs2 kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Jul 27 12:32:27 ffs2 kernel: ata7.00: failed command: IDENTIFY DEVICE
     Jul 27 12:32:27 ffs2 kernel: ata7.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 19 pio 512 in
     Jul 27 12:32:27 ffs2 kernel: res 40/00:00:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
     Jul 27 12:32:27 ffs2 kernel: ata7.00: status: { DRDY }
     Jul 27 12:32:27 ffs2 kernel: ata7: hard resetting link
     Jul 27 12:32:27 ffs2 kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Jul 27 12:32:27 ffs2 kernel: ata7.00: configured for UDMA/133
     Jul 27 12:32:27 ffs2 kernel: ata7: EH complete
     Jul 27 12:49:52 ffs2 kernel: mdcmd (446): spindown 3
     Jul 27 13:44:23 ffs2 kernel: mdcmd (447): spindown 3
     Jul 27 14:06:51 ffs2 kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Jul 27 14:06:51 ffs2 kernel: ata7.00: failed command: IDENTIFY DEVICE
     Jul 27 14:06:51 ffs2 kernel: ata7.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 24 pio 512 in
     Jul 27 14:06:51 ffs2 kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
     Jul 27 14:06:51 ffs2 kernel: ata7.00: status: { DRDY }
     Jul 27 14:06:51 ffs2 kernel: ata7: hard resetting link
     Jul 27 14:06:51 ffs2 kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Jul 27 14:06:51 ffs2 kernel: ata7.00: configured for UDMA/133
     Jul 27 14:06:51 ffs2 kernel: ata7: EH complete
     Aug 1 00:00:01 ffs2 kernel: mdcmd (578): check NOCORRECT
     Aug 1 00:00:01 ffs2 kernel:
     Aug 1 00:00:01 ffs2 kernel: md: recovery thread woken up ...
     Aug 1 00:00:01 ffs2 kernel: md: recovery thread checking parity...
     Aug 1 00:00:01 ffs2 kernel: md: using 2048k window, over a total of 2930266532 blocks.
     Aug 1 02:11:41 ffs2 kernel: md: parity incorrect, sector=1565565768
     Aug 1 02:11:41 ffs2 kernel: md: parity incorrect, sector=1565565776
     Aug 1 02:11:41 ffs2 kernel: md: parity incorrect, sector=1565565784
     Aug 1 02:11:41 ffs2 kernel: md: parity incorrect, sector=1565565792
     Aug 1 02:11:41 ffs2 kernel: md: parity incorrect, sector=1565565800
     Aug 1 05:00:48 ffs2 kernel: ata9.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Aug 1 05:00:48 ffs2 kernel: ata9.00: failed command: IDENTIFY DEVICE
     Aug 1 05:00:48 ffs2 kernel: ata9.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 22 pio 512 in
     Aug 1 05:00:48 ffs2 kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
     Aug 1 05:00:48 ffs2 kernel: ata9.00: status: { DRDY }
     Aug 1 05:00:48 ffs2 kernel: ata9: hard resetting link
     Aug 1 05:00:48 ffs2 kernel: ata9: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Aug 1 05:00:48 ffs2 kernel: ata9.00: configured for UDMA/133
     Aug 1 05:00:48 ffs2 kernel: ata9: EH complete
     Aug 1 06:52:14 ffs2 kernel: mdcmd (579): spindown 2
     Aug 1 06:52:15 ffs2 kernel: mdcmd (580): spindown 4
     Aug 1 06:52:15 ffs2 kernel: mdcmd (581): spindown 5
     Aug 1 06:52:15 ffs2 kernel: mdcmd (582): spindown 7
     Aug 1 06:58:37 ffs2 kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Aug 1 06:58:37 ffs2 kernel: ata7.00: failed command: IDENTIFY DEVICE
     Aug 1 06:58:37 ffs2 kernel: ata7.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 6 pio 512 in
     Aug 1 06:58:37 ffs2 kernel: res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
     Aug 1 06:58:37 ffs2 kernel: ata7.00: status: { DRDY }
     Aug 1 06:58:37 ffs2 kernel: ata7: hard resetting link
     Aug 1 06:58:37 ffs2 kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Aug 1 06:58:37 ffs2 kernel: ata7.00: configured for UDMA/133
     Aug 1 06:58:37 ffs2 kernel: ata7: EH complete
     Aug 1 10:04:54 ffs2 kernel: md: sync done. time=36292sec
     Aug 1 10:04:54 ffs2 kernel: md: recovery thread sync completion status: 0
     Aug 2 11:35:26 ffs2 kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Aug 2 11:35:26 ffs2 kernel: ata7.00: failed command: SMART
     Aug 2 11:35:26 ffs2 kernel: ata7.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 1 pio 512 in
     Aug 2 11:35:26 ffs2 kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
     Aug 2 11:35:26 ffs2 kernel: ata7.00: status: { DRDY }
     Aug 2 11:35:26 ffs2 kernel: ata7: hard resetting link
     Aug 2 11:35:26 ffs2 kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
     Aug 2 11:35:26 ffs2 kernel: ata7.00: configured for UDMA/133
     Aug 2 11:35:26 ffs2 kernel: ata7: EH complete

     Not sure what to make of this. Advice welcome. Thanks.
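     For my own troubleshooting I plan to work out which disk is sitting on ata7 and pull its SMART data before the next check. A rough sketch, assuming smartmontools is present; /dev/sdX stands in for whichever disk it turns out to be:

     # the sysfs path for each disk includes the ataN port it hangs off,
     # so this shows which /dev/sdX corresponds to ata7
     ls -l /sys/block/sd*

     # then dump the SMART attributes for that disk and look for reallocated
     # sectors, pending sectors, or CRC/interface errors
     smartctl -a /dev/sdX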
  8. Hello all. I am presently running two unraid servers, both 5.x. I am also running a separate ESXi system with a handful of VMs, including Windows 7 (Plex/SAB/Sickbeard), several Linux guests, pfSense, Untangle, and a handful of test VMs that get run periodically (WinXP, FreeBSD, etc.). I am contemplating consolidating all of these machines into one large system running unraid 6.x with Xen. I haven't yet experimented with Xen or 6.x, but will have a test system up on my next days off so I can start learning.

     I would specifically like to run: pfSense, Untangle, SAB, Sickbeard, Plex (transcoding on the fly to iPads, Rokus, etc.), ZoneMinder (or Motion, or Blue Iris; either way, an IP surveillance app for 8-10 cameras at 1080p minimum), several small Linux VMs, and of course unraid.

     I have an i7-2600 (3.4GHz, 4 cores, 8 threads) and up to 32GB of RAM to add to this equation. I'd like to know if you guys think this CPU and memory combo would be enough for all of these apps and VMs (dockers?), or whether I should start looking at one- or two-generation-old Xeons in a 6-8 core (12-16 thread) and/or dual-CPU configuration. The LGA 2011 Xeons from a gen or two ago can be had for next to nothing on fleabay, and dual-socket server motherboards aren't too terribly expensive given the amount of horsepower they can bring to bear on my design. I am also saving for 8TB drives, so money saved on the CPU/MB combo can be used elsewhere. Hoping the meager i7-2600 can handle the load. Thanks.
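     Before I commit to new hardware, the first thing I plan to confirm on the i7 box is that it will actually do hardware virtualization and device passthrough. A quick sanity check (nothing unraid-specific, just stock Linux tools) I'll run on the test system:

     # Intel VT-x shows up as the "vmx" CPU flag (AMD would be "svm")
     grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

     # how many logical CPUs the host actually sees
     nproc

     # whether the IOMMU (VT-d) came up, which matters for passing devices to guests
     dmesg | grep -iE 'dmar|iommu'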
  9. You sir are the man. Thank you!
  10. Pulling my PC apart to plug in a new WD Blue to run wdidle would mean shutting down and losing work on a few projects that are currently underway.... I have downloaded idle3 but now need to compile it under unraid to run it on my test unraid server and configure the disk. Is there a way to get make to install and run under unraid, and/or has anyone got a precompiled version of idle3 for unraid that they can post? Failing that, I have available a laptop with a usb->SATA adapter, but I am not sure if I want to try running wdidle under DOS with said adapter. Thanks.
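     For what it's worth, this is roughly what I'm hoping to end up with once I have a box with a compiler (a sketch only, not yet tested; /dev/sdX would be the new Blue):

     # in the unpacked idle3-tools source directory, on a machine that has a toolchain
     # (stock unraid doesn't ship one, hence my question about make)
     make

     # read the current idle3 (head-parking) timer on the drive
     ./idle3ctl -g /dev/sdX

     # disable it; the change only takes effect after the drive is power-cycled
     ./idle3ctl -d /dev/sdX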
  11. I have only ever used greens for data drives, since the parity drive gets such a thrashing. Any harm in using the new blues for parity? Anyone running blues with success in their 24/7 servers?
  12. I am about to drop the hammer on a 6TB parity drive so I can buy 6TB drives in the future for upgrades. Had been looking at the Seagate NAS and WD Red, now the egg hits my inbox with a $199 WD Blue 6TB. Any comments on running (modern) Blues in unraid? I've got a few Greens left with the head park disabled, otherwise all my unraid disks are Seagate NAS or WD Reds. Wondering about the 24/7 longevity of the Blues (mine all spin down but you know what I mean). Thanks
  13. This happens on both my 6.x servers, and has happened since 5.x: the web GUI is slow to load because it seems to wait for some (not all) disks to spin up before it renders. It has been this way for years, across every install I've ever done, even plain vanilla (which is what I am running now: no dockers, no virtualization, no extra programs running, just unraid).
  14. I appreciate the replies, thanks. There is no way to get gig-e upstairs (rental, can't run wires). The AirWire setup works but was a quick fix. I am thinking about upgrading my Wi-Fi AP to 802.11ac and putting a PCI AC adapter in my desktop, then moving my secondary unraid server downstairs to hang off the managed switch. I'm hoping this will eliminate the network slowness. Comments on that plan? Unraid is running fine otherwise. Thanks again.
  15. Downstairs: managed gig-e switch, primary unraid server, some ESXi stuff, HTPC stuff, my pfSense firewall, a Ubiquiti UniFi AP, and one end of a Ubiquiti AirWire. Upstairs: the other end of the AirWire, a gig-e switch, the secondary unraid server, and my workstation. Copying from my workstation to the unraid server downstairs: 7-10MB/s. Copying from the upstairs unraid server to the downstairs unraid server: 7-10MB/s. Copying from my workstation to the upstairs unraid server: 40MB/s writes (no cache disk) and 65-70MB/s reads. But if the secondary unraid server is doing something with any of the systems downstairs, then accessing it from my workstation, plugged into the same gig-e switch, is where the bottleneck presents.
  16. Hi guys. I have an unraid server sitting next to my desktop workstation (unraid 6.x, Windows 7), connected via gig-e; both NICs are gigabit. Right now I am pulling data off the unraid server at 69.9MB/s according to Windows. Cool. The pair of systems is connected to the rest of my network with what is, essentially, a 100Mbit link. Copying data from the unraid server to a machine downstairs usually gives me 10MB/s, and sometimes if I am doing a pair of copies at a time, I can get one stream to 9MB/s and the other to 4MB/s.

     The problem is, when I am copying data from the unraid server to another machine over the 100Mbit link, any file browsing or copying done from my gig-e connected workstation drops to 1-2MB/s. So I can be doing a 10MB/s copy from unraid to a machine downstairs, and a copy from unraid to my workstation is doing 1-2MB/s. In addition, browsing the shares with Windows Explorer shows similar slowness and often times out. I've confirmed that this happens whether I'm pulling off the same drive or different drives.

     It seems that any time unraid is involved in file operations over the 100Mbit link, it forgets it's connected via gig-e to my desktop. At least that's my theory: something is limiting it to the slower speed, no matter where the client is. Any advice on the matter would be appreciated.
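     My plan for narrowing this down is to measure raw TCP throughput between the workstation and the server while a downstairs copy is running, to see whether the link itself slows down or just SMB. A rough sketch, assuming iperf can be run on both ends (the IP is made up):

     # on the unraid server
     iperf -s

     # on the workstation, while a copy over the 100Mbit link is in progress
     iperf -c 192.168.1.10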
  17. Thank you, I appreciate it. Since this RAM is just lying around, I'm going to install it. But what should my expectations be, realistically? Going from 4GB to 8GB of RAM on a 15TB array and from 4GB to 12GB on a 40TB array... I'm not expecting to have my socks knocked off, but will I likely see any faster response times in terms of writing to, reading from, and accessing shares? Or will the benefit be essentially imperceptible? Thanks again.
  18. I am running 4GB in each of two bone-stock, plain-vanilla unraid servers (v6). No virtualization, no extras like SAB or Sickbeard, just unraid. I've read that certain kernel parameters can be tweaked to take advantage of more memory for things like writes, caching, etc. I no longer use cache drives in either system, and have noticed a small drop in write performance (as expected). I also had my primary unraid server completely stop responding to SMB traffic during a parity check; I could telnet in, but the load average was over 7.0 and all RAM was in use. I have some extra RAM lying around from various builds... I can put 8GB in one and 12GB in the other. Wondering what kernel tweaks to make to take advantage of the extra memory; it seems a waste to have this RAM just sitting in a drawer if I can get a little performance bump out of it. Thanks.
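     For the record, the kind of tweaks I've seen mentioned are the dirty-page sysctls, which control how much write data the kernel will hold in RAM before flushing it to disk. A sketch only, with placeholder values rather than a recommendation:

     # see what the current values are
     sysctl vm.dirty_background_ratio vm.dirty_ratio

     # let more dirty data accumulate in RAM before background/forced writeback kicks in
     sysctl -w vm.dirty_background_ratio=10
     sysctl -w vm.dirty_ratio=20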
  19. Gary, this is why I am replacing the 320GB Maxtor with the 2TB Red; the cache drive is just there to fill up space and generate heat, and it's pretty much optional in this particular build. I have all drive bays filled, including one bay with a dead 80GB drive, just to create a worst-case scenario from an airflow standpoint. I was doing some benchmarking with and without the cache drive, so if it fails, it's not a big deal. In fairness to the Maxtor, it came with a system I bought in 2000, and it's still running every bit as fast (and hot!) as it was back then.

     Does anyone have any experience with the Gentle Typhoon vs. the SilenX Red? http://www.newegg.com/Product/Product.aspx?Item=N82E16835226042&cm_re=silenx-_-35-226-042-_-Product I have a couple in my main unraid server and they've been in service for years. Wondering how the Typhoons compare in noise and CFM. Nidec lists their specs in cubic meters per hour, but Google tells me the rating Nidec lists is equivalent to 50CFM, which is a far cry from the 74CFM claimed by SilenX, yet every review I read says the Typhoons move a ton of air. Curious if anyone has direct experience comparing both.

     Also found a very favorable-looking alternative at a price that is impossible to beat: http://www.newegg.com/Product/Product.aspx?Item=N82E16835553009&cm_re=cougar_turbine-_-35-553-009-_-Product May have to order a set just for giggles based on reviews and price-per-fan. As always, comments welcome.
  20. Sorry guys, I have been remiss in getting back to this thread; I've been working like a dog. The front server door is still (!) at the machinist; he's totally inundated with real work, and side projects get the back burner. I put the whole system back together, sans front server door, which semi-simulates having a vented door anyway (only a REALLY well-vented door...). Spent the last few days playing with various fans and fan configs. Here's what I've come up with.

     Stock Silverstone fans: the most horrible fans ever, but they are darned quiet. So I have kept one as an exhaust in the rear of the case; this is common to every fan setup I've tried, and it never changes. I also drilled 1/4" holes all along the length of that ridiculous plastic bracket that blocks the drive bay for whoever decides they need a video card in this box (Silverstone's design team needs to go back to school).

     With the case completely sealed up and seven 3.5" drives in the system:
     - Side two Silverstone fans (stock), intake: drives were 35-42*C during large copies
     - Side two Silverstone fans (stock), exhaust: drives were 32-38*C during large copies
     - Side two Corsair fans (the super loud ones with the lame color-change rings) on slow speed, intake: drives were 34-39*C during large copies
     - Side two Corsair fans, exhaust: drives are 28-32*C during large copies!

     (One of my drives is a decades-old Maxtor (Seagate) 320GB 7200rpm DiamondMax which runs super hot to begin with, and may be acting like a space heater for the rest of the array. I have a 2TB WD Red coming this week to replace it.) I also tried some Lian Li 120mm fans (too loud) and some unmarked beater 120s I had in a pile (they rattled).

     The cache drive is still pushing 42-44*C right now, which is irritating, although it is also an elderly WD 7200rpm 160GB model and probably runs hot. It's currently a 3.5" drive and will eventually get swapped for a 2.5" 1TB drive mounted at the rear of the case. Then all rear vent holes except one row above and one row below the cache drive will be blocked, so hopefully that will generate some good airflow around the cache drive with the all-exhaust setup.

     I am debating trying the Nidec/Scythe Gentle Typhoons. I am also debating putting a 140mm fan at the rear for exhaust, either with a 120mm-to-140mm adapter or a Xigmatek 140mm that can also be mounted in a 120mm space (and therefore uses all the holes more effectively than a 120mm would), which claims 80-90CFM. I am also thinking about taking two or four old broken 120mm fans, removing the center motor parts and support arms so only the outer frames are left, and using them as spacers to get the two side fans as close to the drive bay frame as possible, then trying intake and exhaust with that setup, with the empty 120mm frames acting as tunnels to channel the air directly to the drives.

     Looking at the stock setup more closely, with the utterly anemic stock Silverstones sitting *inches* away from the drives, it's no wonder a side-intake/rear-exhaust config is so horrible. Even the Corsairs, with high static pressure, couldn't get enough air over the drives when used as side intake (unless I turned them up to afterburner on the fan controller). With the 3.5" bays populated with standard-height drives, there is literally 1/2mm of space or less between the top of the drive and the top of the cutout in the metal drive bay side wall -- hardly enough room for air from high-pressure fans, much less the gentle breeze the stock fans put out (and virtually no space for forced air under the drives to cool the circuit board). This case design is horrible on so many fronts; I discover more each time I take a close look at it!

     I have also been opening the side access panel a few inches at a time to simulate venting it instead of the server door (as I believe garycase mentioned, to have air come straight across the drives when using the side fans as exhaust). So far, if it makes a difference I can't tell, but I am going to start a parity check in the next couple of days so I can really see what helps. I may end up keeping the front door solid and venting the opposite side of the case instead, or venting both the opposite wall AND the front door.

     So far, with the front door removed, the Corsairs (side exhaust config) are showing the most promise, but they are insanely loud and I am barely using their potential by putting them on a fan controller. The Gentle Typhoons are $22-25 a pop, and spending another $75 on this case doesn't seem like the most interesting option at this point. I may try a third Corsair at the rear ($), or at the very least the Lian Li (free, but loud unless I drop the voltage), or the 140mm option, to see if I can get more air flowing across the drives.

     As an aside, where (physically) is the temp sensor located on a hard disk? Even the cache drive, showing 44*C (111*F) in unraid, is cool to the touch. I'm guessing this is internal platter temperature? Or the temperature of the ICs on the circuit board on the bottom of the drive (most of which are pointed up into the drive these days, so all I can put my fingers on are the circuit traces, and even they don't feel super hot)? Just curious why all of my drives feel cool to the touch despite their reported temps.
  21. Yep. Try six or eight drives and a parity check and let us know your temps. They're gonna be uncomfortably high.
  22. You are right about right-side vent placement being ideal; however, I wire my power supply fan to run 100% of the time, so I am hoping that between the rear 120mm fan and the 120mm PSU fan I can get enough suction rearward across the drives to cool their surfaces completely. Given the proximity of the side 120mm fans to the drives, I expect the majority of the intake air to be driven across the surface of the drives for roughly half to three-quarters of the width of those 120mm side fans. Hopefully two rear fans will help coax some more air across the rears of the drives.
  23. 100% exhaust is how I run every case I own, and have for years. I seal up the entire case and then open up only those areas where I want air to come in and flow across other components. In my primary server, a 9-bay tower, I have three iStarUSA 5-in-3 cages. Five 120mm fans (two side, two top, and one back) pull air out, and the only place it can enter is the 5-in-3 cages, so the drives get 100% of the airflow. The CPU is an Athlon X2u, which barely requires a heatsink (although it has a heatsink/fan), and nothing inside the system ever gets hot: mid 30s on parity check day, and that's during the hot summer months. I used tape to seal the area between the 5-in-3 cages and the metal case walls, and more tape to seal the area between the metal case walls and the plastic front bezel trim. So, again, literally 100% of the airflow goes across the drives. There is no wasted or inefficient air, and dust isn't a problem; I just vacuum the front of the cages whenever I vacuum the floor in that room.

     With the Silverstone, two 120mm intake fans and only one 120mm exhaust means that, unless you run the exhaust faster than the intakes, more air will blow across the drives (presumably warming it) than can be pulled out, and that air gets "stuck" inside the case because there are half as many fans removing it. With the stock fan config I was seeing drives in the high 30s C during normal operation, inching into the low to mid 40s during a parity check. No thanks; I know I can improve on that. With three exhaust fans and every other hole in the case sealed, the only place cool outside air can come in is across the drives. And vented areas that consist of grills of tiny holes are easily kept clean: they trap dust mightily and vacuum out very well. The metal cage the drives sit in looks like it was designed by a 3rd-grade science class in terms of letting air get across the drives, and the big fat piece of plastic for video cards means one drive gets essentially no air at all. Extremely poor design. Like I said, I know I can improve upon it, and when I get the vented door back I'll post my results. Three fans pulling 100% of the air across all of the drives is going to be better than two (mostly blocked) fans trying to push air across them, with one in the back trying to remove all the heat on its own.

     On all of the systems I am speaking of, the fans cannot be heard from 5+ feet away, so it's not like I'm running Sunon or Delta industrial jet-engine blowers in there. Efficient airflow design is much more important than fan speed and the amount of air the fans can move, so quieter fans can be used with great success. Reference the early-1990s Silicon Graphics Indigo R4400 with Elan graphics: massive heat producers in that system, just one fan, and the case was designed so that air flowed over specific components in a specific order. Don't knock it til you've tried it.

     And garycase, I considered venting the right side, but that would limit how I can orient the server (i.e., I couldn't place it right up against another system, wall, etc. without blocking the vents). With the vents in the front, I can place the server right up against my desktop's mini-tower and not have to worry about restricting airflow. Plus there is more room between the plastic drive caddies for air to flow through, vs. venting on the right side and dealing with that abysmal cage design.
  24. I still love my Silverstone, although I sent the front metal door to my machinist to have a million holes drilled in it so I can run all three case fans as exhaust. This, combined with sealing the rest of the case, will result in all air flowing directly over the drives. Otherwise I have temperature problems (not problems per se; the drives just run hotter than I prefer). Silverstone screwed up the airflow design of this case; otherwise it's a great case. I'll post pics/results when I get the door back from the shop.
  25. I have two (with a third on order) of these five-drive units, and one single-drive unit. Love them; they're all I'll use. I take the fans off of them. My tower case has five 120mm fans all pulling air out, and the entire case is sealed, so incoming air only flows over the drives. This makes the system very quiet, and my drives never go above 30C during parity checks. I wanted a trayless design for ease, although I rarely swap drives, so in retrospect either would have worked. I know it's trivial, but they are (in my opinion) the nicest-looking units available.