tjb_altf4 Posted March 8, 2020
Build is still going strong nearly 2 years later, running 24/7, and it is still serving me well... a testament to both the OS and the hardware. A couple of recent updates:
- move from 4TB to 10TB array drives underway
- 10GbE network upgrade (direct connection)
- network upgrade to Unifi gear, hosting the controller on Unraid
Hoping to upgrade the case soon for more space, either going down the rack mount path (i.e. a Rosewill 4U) or the new Fractal Define 7 XL.
tjb_altf4 Posted March 10, 2020
Nearly forgot: I also upgraded the host GPU from an AMD 5450 to an Nvidia GT710, which has made GUI mode much nicer. There was some weirdness with the older AMD card on my 4K monitor that resulted in microscopic text at normal GUI size... no such issues with the GT710. This only affects the basic display driver; I never had any issues in VMs with the appropriate drivers.
tjb_altf4 Posted July 21, 2020
As I'm quite looking forward to the 6.9 RC release, I've upgraded my licence to Unraid Pro. I've done this for two reasons: I need the increased overall device capacity, and I'd like to continue supporting the good work the Limetech team is doing. What else is happening? Well, I've fallen onto the shuck train, and as it turns out either the 10TB drives I'm upgrading with don't utilize PWDIS (the SATA 3.3V power-disable pin), or my PSU is compliant... either way, happy days! The main array should hit 44TB once I'm done, with the replaced array disks going into service as pool devices once I've upgraded to 6.9. I'll also need to do some hardware upgrades to support that, as I'm at capacity on both physical case storage and SATA ports, so that will probably be my next update!
tjb_altf4 Posted October 18, 2020
Managed to snag a good deal (free shipping) on a pair of Rosewill RSV-L4500s from Newegg, a rare opportunity for an Aussie customer, especially given these have been out of stock for most of the year. Looking forward to migrating my Unraid server and workstation into a server rack in the coming weeks.
Vr2Io Posted October 18, 2020
4 hours ago, tjb_altf4 said: Managed to snag a good deal (free shipping) on a pair of Rosewill RSV-L4500s...
Can the existing cooler, a Noctua U14S TR4-SP3 (changed to chromax black with an additional U14 fan), fit in a 4U case? Or will you change to a U9? I have a 1st gen TR too, but due to the limited air cooler choices for 3U I haven't put it in there. I had an idea to combine a 3U and a 2U case into one, giving me a 5U for the TR, but that case mod couldn't be cleanly reversed. In the end I changed to X299 with an Intel Core-X CPU, and am waiting for the last parts to arrive...
tjb_altf4 Posted October 18, 2020
Just now, Vr2Io said: Can the existing cooler install in a 4U case? Change to a U9?
Will be picking a U9 most likely, but I'll do a test fit first... it might just squeeze in with different fans.
tjb_altf4 Posted October 19, 2020
18 hours ago, Vr2Io said: I had an idea to combine a 3U and 2U case into one...
Hotrodding, i.e. cutting a hole and mounting a scoop (3D printed or the like) over the top, is another thought I had for keeping the existing cooler.
Vr2Io Posted October 19, 2020
11 hours ago, tjb_altf4 said: hotrodding i.e. cutting a hole and mounting a scoop...
Yes, it seems your top case cover doesn't slide into the main case, so the hole can just match the footprint of the cooler (mine is the slide-in type, so the hole would need to be larger). Another method would be to print a large frame adaptor that fits between the cover and the case, but you'd need a very large printer.
tjb_altf4 Posted December 21, 2020
Based on recommendations, and now that I've found stock, I've ordered a pair of iStarUSA TC-RAIL-26 rails and a StarTech Open Rack 12U. That should be enough to finish migrating to a rackmount setup. Transferred my workstation to the RSV-L4500; that one has an AIO + 7700K, so no issues with cooler height. Hope to transfer the 1950X server over the Xmas break when I've got time off.
tjb_altf4 Posted January 2, 2021
OK, lots of movement in the last few days:
- 12U rack built, and rails attached to rack + cases
- Unraid server migrated over to the RSV-L4500 case
- CPU cooler changed to the previously mentioned U9 to fit the 4U form factor... it's comically smaller than the U14S, but seems to keep up for now
With only 8 of 15 bays populated, I can already see the advantages of a backplane lol... maybe the next project. Have some more additions for the server coming, but for today I wanted to stay at the same baseline as much as possible in case I needed to troubleshoot problems.
tjb_altf4 Posted February 9, 2021
Bunked in with the in-laws for a few weeks while we are between houses. They have zero internet connectivity through the house other than wifi, and I'm set up at the opposite end of the house, so a cable run is not an option. After some head scratching and some trial and error, I came up with a suitable solution: one AP is connected to Unraid, the other AP is connected to my router's WAN port, which is then connected to a LAN port on the in-laws' existing router. This way I get wireless connectivity to my Unraid server while preserving both our existing networks. Other than some minor issues that come from the extra network overhead, it's actually working quite well.
tjb_altf4 Posted April 14, 2021
Found some time to work on the server this afternoon and added some more storage:
- Asus Hyper M.2 card
- 2x 2TB Samsung 970 Evo Plus
Before this upgrade I had 2x 960 Pros in RAID0: great for performance of VMs and dockers that I can replace, not so great for long term data that I want to be fast AND protected. Enter the upgrade: the 970s go into a RAID1 pool for actual data, and the old pool will be recommissioned for applications (docker, VMs etc) only. I had planned to replace my remaining 4TB drive with a 10TB, but it suffers from the 3.3V affliction, so I'll have to do that another time. Currently in my final semester of uni, working full time and getting ready to move houses again, so you could say things are busy... but it was nice to take some time out and work on my Unraid server.
tjb_altf4 Posted April 24, 2021
Did some more work on the server yesterday, clearly procrastinating instead of doing uni work 🤣 I nuked and rebuilt my Guacamole setup, and using dmacias's WOL & VM WOL plugins I've added wake-on-LAN to the machines' Guacamole profiles. This is a nice time saver and handy for getting access to different machines on my network through a central portal. Also decided to try Dynamix Auto Fan, and for my setup it works really well: the fan walls that cool the HDDs now adjust their speed based on HDD temps. The only config tweaks were to exclude the NVMe drives and drop the thresholds to 30/40C to increase the baseline airflow through the case. For the most part this means less power draw and less noise, while keeping the HDDs in their happy zone.
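For anyone curious how wake-on-LAN works under the hood: the "magic packet" is just 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, broadcast over UDP (port 9 is conventional). A minimal Python sketch of the idea, not the plugin's actual code, with a placeholder MAC:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    # 6 bytes of 0xFF, then the MAC address repeated 16 times (102 bytes total)
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Broadcast the magic packet; the target NIC wakes the machine if WOL is enabled
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC for illustration
```

The plugin wraps this sort of thing up per-VM; the sketch is just to show there's no magic in the magic packet.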
Vr2Io Posted April 24, 2021
1 hour ago, tjb_altf4 said: instead of doing uni
Oops. I'm also rearranging my current disk setup as planned; the mass data transfer between main/backup is almost complete. I got a strange unmountable disks issue, which goes away if I unplug some disks. Anyway, no issues now.
Vaultboy_Gary Posted May 7, 2021
Nice build. Can you confirm that the board works well with sleep mode (S3) and WoL in Unraid 6.9? Can you test this please? Thx!
tjb_altf4 Posted May 8, 2021
2 hours ago, Vaultboy_Gary said: Can you confirm that the board works well with sleep mode (S3) and WoL in Unraid 6.9?
My server is busy 24/7, so I don't use S3 and can't confirm either way.
tjb_altf4 Posted May 24, 2021
Finally got around to adding my LSI 9211-8i (H310) to the system, as I've been at capacity on onboard SATA ports since last year. Luckily no issues with the HBA; I know some had issues when X399 was first released, but I assume later BIOS releases fixed that. Once the HBA was in, I added a couple more 10TB drives for good measure, which are currently preclearing; once done it will take the array (alone) up to 70TB. I hope to push past 100TB in the next few months, although if multi-array support is not released soon, I might build a separate storage-only machine with my spare Unraid licence.
tjb_altf4 Posted May 24, 2021
Also in the new house now, but the network needs some work, so sticking with the wifi bridge for now. The Unifi NanoHDs are still doing a great job with coverage and reliability. The new house is on HFC (hybrid fibre-coax), so I was able to get on a 1Gbps plan, and while I'm not currently able to utilize it fully over the wifi bridge, I'm still getting 400+ Mbps.
tjb_altf4 Posted May 25, 2021
Drives successfully precleared and absorbed into the array, just in time!
tjb_altf4 Posted June 1, 2021
Some more friends arrived
tjb_altf4 Posted July 6, 2021
Part 1. For about 5 years I've had a 2TB WD Blue (xfs) that has been hammered with torrents all day, every day. About 6 months ago I had a random disconnect from it, but it came good after a restart. Well, it's happened again, twice in the last few days in fact, so I took that as a warning of impending doom! I had been putting off migrating my downloads from an unassigned device to its own pool (now that 6.9 has unleashed multi-pools upon us). I put it off as I knew it would mean remapping about 10 dockers and a bunch of data migration... I guess no time like when the missus is on night shift and the kiddies are sleeping. So the solution, you ask? Well, I upgraded my array from 4TB drives to 10TB drives this time last year and hadn't really found a use for the old disks so far, so I've grabbed 2 of those 4TB Reds for a RAID0 btrfs pool. This will be a scratch pool: a dumping ground for all sorts of in-progress stuff that is highly replaceable (downloads, video conversion jobs, temp folders etc).
tjb_altf4 Posted July 6, 2021
Part 2. At this point my RSV-L4500 is at max capacity, but I still have drives I want to add (multi-array when??). I dusted off my 3D printer from my recent house move, jumped into CAD and designed a simple prototype to gauge how I could expand drive capacity while maximising airflow to the drives. Being conservative with airflow gaps, I should be able to jump from 15 HDDs to 24, with room in the case to expand to 36. Hoping to finalize the design this weekend, do some testing and get it into the server, pending the cables I need.
tjb_altf4 Posted July 6, 2021
Part 3. Last year I felt it was time to upgrade my workstation/gaming rig... well, covid and its supply chain shocks ruined that. Things are better now, and I've recently picked up a 5950X and a 3090, which I will be watercooling with some nice EKWB bits and some more 3D printed parts I've designed. This will be replacing my aging 7700K build in my second RSV-L4500, while the Titan X(p) currently in there will be rehomed to the server, hopefully for use as a "daily" development VM. At this stage, the workstation/game-rig build will be an Unraid build centered around virtualisation and some supporting dockers, with only a minimal storage footprint, with "fortytwo" remaining the primary NAS and appserver. As I've been sending walls of text, here's a pic of the (slow) progress.
tjb_altf4 Posted July 13, 2021
Currently waiting on EKWB support to provide replacement mounting hardware. When I torqued the mounting hardware in to the correct spec it turned to cheese, and EK don't seem to provide spares with their blocks anymore (even their crazy priced ones). I'm pretty sure I'm stuck waiting a few weeks before the build can progress. I've done a test run at low mounting pressure and there is potential for some decent performance there; can't wait until I can mount it properly and use it in anger! Currently the RGB colour puke is linked to CPU temp thresholds, but I might have to switch to orange for the final iteration...
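The temp-linked lighting is just a threshold mapping; a rough sketch of the idea (the thresholds and colours here are illustrative rather than my actual config, and pushing values to the controller depends on your RGB software):

```python
def temp_to_rgb(temp_c: float) -> tuple:
    # Map a CPU temperature to an (R, G, B) colour via simple thresholds
    if temp_c < 50:
        return (0, 128, 255)   # idle/cool: blue
    if temp_c < 70:
        return (0, 255, 64)    # normal load: green
    if temp_c < 85:
        return (255, 128, 0)   # warm: orange
    return (255, 0, 0)         # hot: red

# On Linux the temperature itself can be read from a hwmon sensor, e.g.:
# with open("/sys/class/hwmon/hwmon0/temp1_input") as f:
#     temp_c = int(f.read()) / 1000  # hwmon reports millidegrees C
```

The hwmon path varies by board and driver, so treat it as a placeholder.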
starbetrayer Posted July 14, 2021
On 7/13/2021 at 2:42 AM, tjb_altf4 said: Currently waiting on EKWB support to provide replacement hardware...
Nice build @tjb_altf4