Juniper

Everything posted by Juniper

  1. I bought a Fractal Meshify 2 XL, transferred my existing hardware into it, and added 2 new parity drives. The build was straightforward; the case is very easy to work with. I put the case fans it came with on the top, added 4 120mm Noctua NF-A12x25 PWM fans in the front, and an existing 140mm Arctic fan in the back. The front fans are configured for intake, the back and top fans for exhaust. The hard drives are all mounted from the top with no space in between, directly behind the front fans. The case has dust filters on the top, front, and bottom for easy dust maintenance. Cable management is easy: cables can all be routed behind the motherboard, with cable straps provided, and a panel to hide the biggest mess at the bottom. The bottom area has 2 hard drive cages in case you run out of space to mount the hard drives top to bottom. I now have 9 drives in there, and there is still room for 3 more before I need to start using the hard drive cages.

Now in winter, with around 62 F ambient temperature in the room the server is in, the front Noctua fans are running on their low-noise adapter (voltage throttler) at 1700 RPM, and the server runs quietly. Since ambient temps here reach around 79 F in summer, I will most likely need to run the fans at their full 2000 RPM then. I have been running a parity check for 7.5 hours now, and the temps are staying in the 20s C.

This case is well built and offers room for 16 hard drives at the front: 12 mounted top to bottom right behind the front fans, plus 4 in the hard drive cages at the bottom. If you need to, you can probably squeeze even more drive room out of the case; YouTube has videos where folks do that. I found I can easily check the 7-segment BIOS error display through the case's glass door just by peeking into the room the server is in. You see exactly the stages the server goes through when booting, without having to open the case. That has helped with troubleshooting. The glass door also gives the server a neat, put-together look.

Here are some pics of the server, plus the temps 7.5 hours into the parity check. Thank you to everybody who has posted insights and pics. Your work is much appreciated! I'll post an update in summer when the ambient temperature is higher.
  2. Thank you so much for posting info on your build with the Define XL case. Those temperatures are exactly what I am aiming for. Thank you again for taking the time to reply!
  3. Thank you so much, Geck0, for responding and for your pictures! I am sorry for not responding earlier; somehow "life got in the way" and I just logged in again. My apologies. I see temps in the 40s and 50s C for your disks. The pics look like the disks sit next to a radiator whose fans are probably set to blow air out. The warm air from the case gets blown over the disks and is then sucked out by the fans across the radiator. That could make the disks hotter than in a configuration where the disks sit right next to intake fans. Does anybody reading this have a Fractal Meshify 2 config where the drives sit right next to the fans and take in air from outside to cool the disks? I would love to hear what temps you are getting. Right now I have something like that in a "rigged" case: temps in the high 30s C during a parity check in summer with ambient temp around 78 F (25.5 C).
  4. Thank you so much for responding, MadMatt337 and Geck0! Those are great temps, MadMatt337, exactly what I had hoped for. Great airflow design through your case, and nice cable management; it looks very neat and organized. I plan to use the same air cooler for the CPU. Very valid warning about watercooling, Geck0, even though yours looks amazing and gives great temps. It looks very tempting. Thank you both for the pictures! First I will just move the existing build to the new case and see how it works, then after a while get a new motherboard / CPU etc. Right now I have 7 drives (10TB Iron Wolf parity, 6 mixed 8TB Seagate/WD data), but that won't stay that way for long once I have a new case. 14 drives will probably be my max; once I am at that number I'll probably start replacing older drives instead of just adding new ones. Great idea with the 120mm fans to have one at the bottom push into the drive cage, Geck0. I will definitely try to do that as well. Yes, I'd love to see pics of your build when you put in the hard drives, Geck0. The RGB looks lovely, btw. Great idea for the server. Mine just looks black and bland (Rosewill Thor case). Even my PC case (Coolermaster HAF X) has the RGB of the graphics card and the power supply shining through its window.
  5. The YouTube video from Linus Tech Tips about building a 20-disk (Seagate Iron Wolf 16 TB) Unraid server in a Fractal Design Define 7 XL shows 12 drives in the front, 6 in the bays, 3 on the top, and 1 on the back of the case. While they're copying data, Linus calls out the temps as 39-41 C for the front drives and 43-48 C for the 4 drives on top and back. It looks like he didn't add any fans beyond the 3 140mm intake fans in front and the 1 140mm exhaust fan in back, though. I would love to hear what temps you guys get in those cases. (If you're interested in the video: on YouTube, search for "ltt fractal design server"; it is labeled "320 Terabytes in a normal case".)
  6. Is anybody using the Fractal Design Meshify 2 XL or the Define 7 XL as a server? I would love to know your experience with hard drive temperatures: How many hard drives do you have, and where did you mount them (trays, bay, in the back, on top, etc.)? What fan configuration cools the drives: how many fans, and where? What temps are you getting for your hard drives? The cases can hold drives on mountable trays where the front fans blow on them, in a bay on the bottom where air from a front fan can reach, and in a second bay behind that one where, as far as I can see from the design, a lot less air might arrive. It also looks like you can somehow mount drives on the panel in the back and on top of the case, but airflow could be a problem there. It looks like not much air can get in through the front of the Define 7 XL, highly likely leading to higher hard drive temps than the same configuration in the Meshify 2 XL. I'm planning to give my server a new case this year and would be grateful for your experience with these cases. If you have found other large cases with enough airflow for low hard drive temps, I would love to know about them as well. Thank you much for reading
  7. Thank you both very much for your help 🙂
  8. I have now uninstalled the deprecated rutorrent docker app and deleted the directories it had created under /mnt/user. After rebooting the server, FixCommonProblems no longer complained about the "user" share. But the question remains: when I install a different bittorrent docker app, I will also have to reference the directory the shares are located in, and that is /mnt/user. Will I then get the same problem again with having a "user" share? Should I rather make a new directory in /mnt that points to the same place "user" and "user0" point to? If yes, how would I do that? Thank you so much.
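The rule of thumb behind this can be sketched in a few lines. This is a hypothetical helper of my own, not an Unraid tool, and the /mnt/user/downloads path is just an example: the idea is to map container paths to a specific share under /mnt/user, never to /mnt/user or /mnt/user0 themselves, because anything a container creates at that top level shows up as a new share.

```shell
#!/bin/bash
# Hypothetical helper (not an Unraid tool): flag Docker host paths that map
# the share root itself. Anything a container creates directly under
# /mnt/user becomes a top-level folder, which Unraid then treats as a share.
check_host_path() {
  local p="${1%/}"   # drop a trailing slash for comparison
  if [ "$p" = "/mnt/user" ] || [ "$p" = "/mnt/user0" ]; then
    echo "BAD: $1 maps the share root; use a specific share, e.g. /mnt/user/downloads"
  else
    echo "OK: $1"
  fi
}

check_host_path /mnt/user/            # the old rutorrent-style mapping
check_host_path /mnt/user/downloads   # a per-share mapping
```

With mappings like /downloads -> /mnt/user/downloads there is no need to create anything new in /mnt; the container simply cannot create top-level folders anymore.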
  9. FixCommonProblems tells me I have a share named "user". I checked, and deleted the directory named "user" in /mnt/user, where all the share directories are located. That "user" directory was created by a docker application at one point. But after deletion the app still tells me I have a share named "user". I checked the directory tree and found: in /mnt there are directories for all the disks (disk1 to disk6), then the following directories: disks, remotes, user, and user0. The disk1..6 directories point to each of my 6 data disks, the directories "disks" and "remotes" are empty, and both "user" and "user0" point to the directory tree that contains the data from all the shares. "user" and "user0" point to exactly the same stuff. I have a docker application with these path mappings: /downloads -> /mnt/user/ and /config -> /mnt/user/appdata/rutorrent. I tried to change the references from "user" to "user0", but I still get the same error message about having a share named "user", plus a message that the bittorrent docker app is deprecated. I am going to uninstall it, but that still leaves the problem of the "user" share. Should I rename the directory "user" in /mnt? Or should I create a new directory in /mnt, map it to the same content as user and user0, and then use it for apps? Thank you so much for reading this.
  10. I figured it out, and made a test installation with altogether 12 drives (all the drives I possess). The array itself has 6 drives in 2 5.25" drive cages from my old Antec 900 case. The Thor case has 6 slots in its bottom drive bay, which I filled with my other drives (a mix of 3TB and 2TB Seagate and WD drives, black and green). They all had Windows partitions and data on them... but it was all backed up to the server. The hottest of the extra drives, 2 WD Blacks and the 3TB Seagate, I put in the middle to keep them warm once they warm up. I mounted the 6 extra drives as unassigned drives, and to simulate load I copied a large amount of data to and from them. After a while, the extra drives' temperatures stabilized between 19 C and 27 C, 7 C hotter than the array drives in the well-ventilated Antec drive cages. That was much lower than before the case modification: back when I used the case as a Win10 PC, I tried adding drives in the drive bay. They quickly went up to 40 C and higher, and I had to take them out again and use the drive cages instead.

Of course it's the cold season now. But even during the summer the 6 drives in the Antec cages stayed in the 20s C; they only went up to 34 C during parity check. The drives in the drive bay will highly likely stay about 7 C hotter under load than the drives in the Antec cages, which means they might go up to 41 C during a parity check. But keep in mind that for this test install I put the hottest drives in the middle of the drive bay, and those drives were about 5 to 10 years old. In a real-life scenario I would buy new, modern drives for the array to put in the drive bay. Also, the fans I used in the test install were existing, old stuff from my closet. In a real-life build I would buy new, high-airflow fans.

So how did I do this: The Thor case has a large, useless fan in its case door, and one in front of the drive bay. The one in front you cannot replace; there are no screw holes to put in smaller, higher-airflow fans, and there would not be much room to blow that air anywhere anyway due to the design and placement of the drive bay. But the fan in the case door can be replaced with 4 120mm fans; the case door has both holes and grommets for that. I took the large fan out of the case door and put in 4 existing 120mm fans from my closet. But the fans would blow to... the graphics card area. So... I turned the door 90 degrees counterclockwise. That aligned the 2 bottom fans with the hard drive bay. This is the key to good temps: without the turn, the drives will get cooked in that drive bay. Of course, the door can now no longer be screwed onto the case. To mitigate that I added Velcro pads, enough in key locations to secure the door. This modification will work for me until my finances allow me to buy a Fractal Design Define 7 XL case with the additional hard drive holders and fans for more disks. I don't recommend buying the Rosewill Thor case as a server case, but folks who have one can use it with this method. The bottom drive bay fits 6 drives, plus the case has 2 5.25" front-accessible slots for additional drive cages.

Stats of my server now:
Case: Rosewill Thor V2, large fan on door replaced with 4 120mm fans, plus 2 5.25" drive cages (from my old Antec 900)
Board/CPU: ASUS P8Z68 Deluxe with Intel i7-2600 CPU, 8 GB RAM, and MSI graphics card
PSU: Rosewill RBR1000M, 80Plus Bronze (offers 4 12V connectors, 2 with 20A, 2 with 30A)
Array: Unraid 6.9.2
Parity: 1 Seagate Ironwolf 10 TB (applied changes to run on 6.9.2)
Data: 5 8TB drives, Seagate Barracuda and Exos
Flash: SanDisk Cruzer Glide 16GB

Pics: temps, fans, the 12 drives' locations and cabling front and back (not meant to be pretty, but the cables hang free ;-)), and pics of how the door is put back on with Velcro. The SMART errors in the temps pic are just CRC errors; the drives are all fine. I ran both SMART tests Unraid 6.9.2 offers on each drive during the test install.
  11. Small server here: 40TB with 11TB unused, soon to be 48TB with 19TB unused (1 10TB parity drive and 6 8TB data drives) in a Rosewill Thor V2 case with old Antec drive cages and an old motherboard. Using existing stuff for now. It's inspiring to read about all the large server setups. For now I have a 1080p (3D) TV and all my content is 1080p, but once I switch to 4K I will need a new server.
  12. None of the above. I would prefer to pay with the existing options for now.
  13. Awesome! Thank you much for your response, itimpi. That helps a lot. Thank you much again for clearing this up.
  14. I ran into the Seagate Ironwolf problem on my server after upgrading to 6.9.2. To make sure there is really nothing wrong with the hard drive, I put it in my Win10 PC and ran all the tests in Seatools, including the long generic test. All of them passed without problems. I disabled EPC and low-current spinup, returned the drive to my server, rebuilt the parity under 6.8.3, and then was able to successfully upgrade to 6.9.2. The server has been running without a hitch for several days now. If I look at the SMART info for my drives, however, I see all of them have non-zero values for "Current Pending Sectors" in the fields "Value" and "Worst". "Threshold" is 0 and "Failed" is Never. My question is: how should I interpret these SMART results? Are all my drives bad and in need of replacing? I bought the Ironwolf last year as a parity drive when I started out with Unraid. Sorry if this is a stupid question; I have looked at threads about "pending sectors" but still can't make sense of it. Thank you much for reading 🙂 Wishing you all a Happy Halloween! schiethucken-diagnostics-20211031-1626.zip
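To illustrate how those columns relate (a sketch using a sample smartctl line, not output from my real drives): the normalized Value/Worst columns start high (often 100 or 200) and count down toward Threshold as health degrades, so non-zero numbers there are normal. For Current_Pending_Sector the number to watch is the raw value at the end of the line, where 0 means no sectors are waiting to be reallocated.

```shell
#!/bin/bash
# Sketch: read the raw Current_Pending_Sector count from smartctl-style
# output. The line below is a hard-coded sample; on the server you would run
#   smartctl -A /dev/sdb | grep Current_Pending_Sector
# (the device name is a placeholder).
smart_line='197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0'

# Normalized value (column 4) and raw value (last column):
value=$(echo "$smart_line" | awk '{print $4}')
raw=$(echo "$smart_line" | awk '{print $NF}')

echo "normalized value: $value (counts down toward the threshold; high is good)"
echo "raw pending sectors: $raw (0 means no pending sectors)"
```

So a drive showing Value/Worst of 100 with a raw count of 0 is healthy; it is a rising raw count that would point to a failing drive.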
  15. Config: 5 8TB Seagate non-Ironwolfs as data drives, 1 10TB Seagate Ironwolf ST10000VN0008 as parity drive. Updated to Unraid 6.9.x from 6.8.3 about a week ago (around Oct 17). Last night my server flagged the Ironwolf parity drive as having failed a SMART test and having too many bad sectors, then marked the drive with a red X. Today I put the Ironwolf into a Win10 PC and ran Seatools, all tests except the long generic. Seatools found no problem with it. I read around on the forums, found this thread, downloaded SeaChest, and disabled EPC (on a Win10 PC). After adding the Ironwolf back to the array there were again SMART problems, and the drive was marked as failed. I wasn't even able to get it listed as unassigned. After reverting to 6.8.3 the drive came up as unassigned and I was able to set it up as the parity drive. But when the array came back up to rebuild the parity, it again complained about SMART errors and flagged the parity drive with a red X. Now I don't know what to do. Should I enable EPC again and keep trying with 6.8.3 (which had worked fine before), or should I forget about the Ironwolf for now and buy another parity drive? Attached are the last diagnostics from when I was back on 6.8.3. I also have diagnostics available from 6.9.x (last stable version). schiethucken-diagnostics-20211025-1925.zip
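For anyone else hitting this: the EPC and low-current-spinup changes can also be applied from Linux instead of moving the drive to a Win10 PC. A sketch using Seagate's SeaChest utilities; the device node /dev/sg2 is a placeholder, and the flag spellings may vary between SeaChest releases, so check each tool's --help output first.

```shell
#!/bin/bash
# Sketch of the Ironwolf fix using Seagate's SeaChest utilities on Linux.
# /dev/sg2 is a placeholder; list drives first (e.g. SeaChest_PowerControl --scan).
# Flag names may differ between SeaChest releases -- verify with --help.
DEV=/dev/sg2
if command -v SeaChest_PowerControl >/dev/null 2>&1; then
  SeaChest_PowerControl -d "$DEV" --EPCfeature disable
  SeaChest_Configure    -d "$DEV" --lowCurrentSpinup disable
else
  echo "SeaChest tools not found in PATH; nothing done"
fi
```

After the change, power-cycle the drive and re-check its SMART status before rebuilding parity onto it.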
  16. Is anybody here using a Rosewill Thor case as a server? It would be awesome if you could share how you solved the airflow problem of the case's 3.5" drive bay. The problem: for now I have put 2 drive cages from an Antec 900 case in the 5.25" openings of my Rosewill Thor case. Those 6 hard drives run temperatures in the 20s C, low 30s on hot summer days, even through a parity check. First I tried using the 3.5" drive bay at the bottom of the Rosewill Thor case, but there is no airflow: the case has a 230mm intake fan at the front, and the drive bay sits perpendicular to it. What little air the front fan provides needs to go through slots on the side of the bay, so almost no air arrives at the drives. Even with just 1 drive in there, it got too hot within a short time. The case has a side door with another large fan, with holes to exchange that fan for 4 120mm fans, but that air wouldn't reach the drive bay, only the graphics card. I then tried turning the door with the fan upside down and mounting it on the other side of the case. The fan would actually reach part of the drive bay from the back, but there is not enough space to close the door. The drive bay is mounted with rivets, with no way to take it out, so I cannot turn it to allow more air to get to the drives. Even if I could drill out the rivets and take out the 3.5" drive bay, I didn't find any screw holes or other provisions to put in another drive bay with fans. The drive bay has no way, or even space, to add fans in front of or behind it, and I can't modify the stainless steel case door to add openings for fans.
  17. Sorry to hear your reproduction efforts have not been fruitful so far. I have finished copying everything to the array. My disk 5 has very little data on it, and I can take it out to recreate the error-producing config. Let me know when you need me to run more tests. Thank you for all your efforts.
  18. Great. I'll keep the disks from the original configuration around and make sure not to copy too much to the array so I can go back to the original config in case more tests are needed.
  19. Awesome: the Parity Swap resulted in the same errors! Did a Parity Swap and then a Parity Check, without rebooting. Attached are: diagnostics before starting the Parity Swap, diagnostics after the Parity Swap and Parity Check, and a screenie at the end of the Parity Check with the result message. Same number of errors as before. Please let me know what needs to be done now. I'll keep the array running. schiethucken-diagnostics-20200701-0923.zip schiethucken-diagnostics-20200703-1230.zip
  20. Last time I ran a pre-clear on the 10 TB drive before starting the parity swap. The whole drive was 0s when the procedure started. That is the situation I'll try to reestablish if the issue does not occur this time.
  21. Just thought about this: I didn't change anything on the array after I took the 10 TB drive out. It still has the parity from last time on it. If the issue does not happen this time, it might be because the parity on the 10 TB drive could still be valid for the parity check. In that case I'll redo it, and this time pre-clear the 10 TB drive first.
  22. My array is running Unraid 6.8.3. Should I update to a newer version first, or just do it the same as last time with the same Unraid version?
  23. Parity rebuild finished. Config back to how it was before I did the parity swap. Ready to start the test. Please let me know what I should do now. schiethucken-diagnostics-20200701-0034.zip
  24. I'll stop the array now and add the drive. It's empty, but the parity will be invalid once it's in and being rebuilt anyway, so I can save time by just adding it now. The 3TB drive is added, after a couple of stop/starts and adding the disks step by step. It's now rebuilding parity with all drives present.