
Posts posted by WETAFROMAN

  1. Redundancy is always a good idea. If you can afford it, do it.

     

    As for the size, there are more factors to consider here. Some OSes take more space than others; a bare Linux install, for example, is much smaller than Windows 10 or macOS. Same with Docker: some containers are larger than others. It is really a case-by-case basis. You also have to consider whether there is anything else you will want to run off those NVMe drives in the future, and whether you will need to assign space to a share or make your VM drive pool larger to accommodate that.

     

    Personally, I would say buy a 1TB unit. They are super cheap now, especially if you buy PCIe 3.0 instead of 4.0.

  2. It depends on the type of editing you are going to do. If you are doing anything longer than 30 seconds at 4K or higher and/or in RAW, you will struggle to play the timeline back live and will probably spend hours rendering the final project.

     

    I would buy a better CPU, at least 32GB of RAM, and add a dedicated GPU.

     

    But of course, do what you can afford. If it is going to cost too much to go that much higher, then get what you can. Something is better than nothing.

  3. I can't seem to find anyone else who has had this issue, so I'm seeing if anyone has any advice.

     

    In my new build I have an ASUS ROG Strix X570-F Gaming ATX motherboard (AMD AM4), G.SKILL Trident Z Neo 64GB (2 x 32GB) RAM, and a MasterLiquid ML360 Mirror ARGB closed-loop AIO CPU liquid cooler.

     

    All three have RGB. I am able to turn off the motherboard RGB in the BIOS, but I cannot figure out how to turn off the RAM and AIO cooler RGB.

     

    I have tried P3R-OpenRGB, but it only seems to recognize the motherboard and my mouse and keyboard. Am I just missing some passthrough setting to get the Docker container to recognize the other parts? Or is there some other way?
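
    For reference, my understanding is that RAM RGB is normally driven over the motherboard's SMBus (i2c) and a lot of AIO pumps over USB, so the container presumably needs those host devices passed through before it can see them. A rough sketch of what I think that would look like from the command line (the i2c device numbers are guesses on my part, and IMAGE is a placeholder for whatever repository the P3R-OpenRGB template points at; the same paths can be added as extra devices in the Unraid Docker template):

    # Minimal sketch, not a tested recipe.
    # Check what actually exists on the host first with: ls /dev/i2c-*
    modprobe i2c-dev                            # exposes SMBus controllers as /dev/i2c-*
    docker run -d --name openrgb \
      --device /dev/i2c-0 --device /dev/i2c-1 \
      -v /dev/bus/usb:/dev/bus/usb \
      IMAGE
    # Blunt fallback if devices still are not detected: add --privileged so the
    # container can see every host device node.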

  4. On 10/31/2021 at 9:13 PM, jakeslominski said:

    Currently maxing out my tower chassis with 11 drives. Looking for a rackmount chassis to transplant into that won't break the bank. Need the future expandability for more drives. Budget is $300-400. Doesn't need to be new, but needs to be easily stocked because my husband is buying it for me for Christmas and doesn't have the skillset to scour eBay auctions.

    Wishlist:

    • 4U with 7 full-height expansion slots (GPUs and NICs currently full height)
    • Accepts standard ATX motherboard (EATX expandability would be cool too)
    • Accepts standard ATX PSU (just want to transplant my current system and can't afford a new/different form-factor PSU at the moment)
    • 24-36 drive bays (can be front hot-swap or top loaded, not picky on that)
    • Backplane (using a mix of SAS and SATA drives currently; prefer 12G but can work with 6G)

     

    In my experience this is a really hard ask... Getting everything on your list for $300-$400 is next to impossible. Really, eBay is your only option, and you will still have to make certain mods and compromises to get what you want within that budget. So you might want to scour eBay and buy it yourself, then have your husband just hide it from you until Christmas.

     

    Maybe it is time to upgrade the size of a few of your drives instead of doing a chassis transplant.

     

    Best of Luck!

  5. On 11/1/2021 at 9:25 AM, wgstarks said:

    Also, Google is showing a huge range of prices for the nvidia 1050Ti. Walmart has one for $100 US while Newegg has the same card for $400 US. It’s obvious from the pics that these aren’t really the same card, they don’t look the same at all and I’m a little doubtful of the quality of the cheaper versions. Could use some advice on which is the best price vs quality!!!

     

    Walmart is super sketchy online... They have so many random third-party sellers that you could be looking at some wish.com-level garbage card. Sure, the 1050 Ti is an older card, but with the chip shortage the way it is, everything is hard to find, and its being a cheap knockoff is the only reason I can think of that you would find a 1050 Ti for only $100 brand new.

  6. On 10/23/2021 at 8:18 PM, greyday said:

    Have you tried running the second Unraid license as a VM? You'd still probably need a separate card and/or DAS for the array, but you wouldn't need the other physical server...

    I love and hate everything about this suggestion. Unraid inside Unraid... can you say Unraid-ception!!!

     

    This may be a dumb thought on my part... but I wonder how well this would work. My bigger concern would be hardware allocation: how would it react to seeing a processor that it won't have full allocation of? Other VMs handle that just fine, but would the Unraid OS know how to deal with it?
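
    The CPU-sharing side should behave like any other guest, since the nested Unraid would just see however many vCPUs it is given. The part I would actually check is whether that nested Unraid could still run VMs of its own, which needs nested virtualization exposed by the host. A quick check worth trying from the Unraid host's terminal (standard KVM module parameters, AMD and Intel variants shown):

    # "1" or "Y" means nested virtualization is enabled on the host.
    cat /sys/module/kvm_amd/parameters/nested      # AMD hosts
    cat /sys/module/kvm_intel/parameters/nested    # Intel hosts
    # Inside the nested Unraid VM, confirm the virtual CPU still advertises
    # the virtualization flag (svm on AMD, vmx on Intel):
    grep -E -c 'svm|vmx' /proc/cpuinfo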

  7. On 10/24/2021 at 3:28 AM, JorgeB said:

    We usually recommend LSI HBAs, see here for a list:

     

    As for the backplane, if you can afford it, it's a good option.

     

    JorgeB, I appreciate the response. It looks like a 9305-24i is what I am looking for. Sadly, that card is much more expensive than the Adaptec ASR 72405 I was looking at before. Are there any posts that discuss the issues with using a RAID card like that and just setting it to present each disk as JBOD? That was my initial idea.

     

    Maybe I go with the 9305-24i and just not use a backplane. I lose hot swapping, but that's not the end of the world.

     

    Or maybe I get the backplane and start off smaller with something like an 8-port LSI card. It's not like I have the money to populate all 24 drive bays right at the start. I could upgrade the LSI card in a year or so, when I am ready to add more drives than it supports.

     

    What are your thoughts?

    Jonathan-pas, again it comes down to speed and time. I also work in the film and video industry. If we are talking about working with and backing up large amounts of REDCODE footage, then the way Unraid works, you will either have to keep the server on at all times or increase the speed of the array. It won't take very long to get the footage from the working SSD to the cache drives, but at 5400 RPM it will take an eternity to move the footage from the cache to the array and then sync up parity. Then again, if you are primarily a photographer running something like an a7S III and only occasionally doing video, or even if your video is REDCODE but small-form stuff for social media, it won't be as bad.

     

    My rig has multiple SSDs for cache and 7200 RPM HDDs in the array. I work off the cache during the day, and at night everything that is not on a cache-only share gets safely pushed to the array.
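
    In case it helps, that day/night flow is just the standard Unraid mover running on a schedule: shares set to use the cache get written to the SSDs first and flushed to the array when the mover fires (mine runs nightly), while my active-project share is cache-only and never leaves the SSDs. You can also kick the mover off by hand from the terminal instead of waiting for the schedule; the path below is from memory, so double-check it on your box:

    # Manually start the mover (normally triggered by the schedule set under
    # Settings > Scheduler):
    /usr/local/sbin/mover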

  9. I believe you will be able to reuse the REDs with no problem. However... they are only 5400 RPM... not what you ever want to use for video, especially if you want to offload large video files in any kind of reasonable time, and that won't happen if you are planning on turning the server off when not in use.

  10. 1. I have been doing some testing with this recently, and the answer should be yes. The only issue I have seen was my own stupidity in forgetting to change the BIOS settings of the new motherboard to match. I had a small panic attack when I forgot to enable CPU virtualization, and another when I forgot to enable IOMMU, but then everything worked out fine (see the quick check sketched at the end of this post).

     

    2. I do not want to speak too much on this topic because everyone's power requirements are different. But it seems to me that 1K is realistic for a system that can handle everything you want to throw at it when you are going to need a new mobo, CPU, RAM, and GPU, at least if the plan is all new.
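
    For anyone doing the same swap, a quick sanity check you can run from the Unraid terminal after changing boards, before touching any VMs (standard Linux commands, nothing Unraid-specific):

    # Confirm the BIOS actually enabled hardware virtualization:
    grep -E -c 'svm|vmx' /proc/cpuinfo          # non-zero means SVM/VT-x is on
    # Confirm the IOMMU came up; empty output means it is still disabled:
    ls /sys/kernel/iommu_groups/
    dmesg | grep -i -e DMAR -e IOMMU | head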

  11. Hello World,

     

    Long-time follower, first-time poster... Excited to be here!

     

    I have dabbled in the past with some smaller desktop builds, but I am currently working on my first large Unraid server build. It will have multiple VMs, primarily (1) a Windows VM for gaming and video editing and (1) an Ubuntu VM for Plex, Pi-hole, and other security applications. Recently I was able to take an older Supermicro 4U 24-bay server home from the office. The original plan was to scrap the motherboard and processor for newer ones but recycle the SAS card and drive backplane.

     

    That was until I found out the SAS card was an Adaptec 52445, which only supports SATA II / 3Gb/s. Ain't nobody got time for those kinds of speeds in this modern society!!!

     

    So my research for a replacement has led me to the Adaptec ASR 72405. It seems like I can find them used on eBay for around $129. I was going to pair that with a Supermicro BPN-SAS2-846EL2. Those are pricey on eBay at around $675.

     

    So now the real questions. I am not opposed to paying the money for the SAS card and backplane, but I am 100% a noob when it comes to server-grade parts. Are these two parts even compatible with each other? Is this the most cost-effective way to utilize a 24-bay server chassis that will be populated with SATA III / 6Gb/s drives? If not, what would you suggest?

     

    Like I said, I am new to this, so I am sorry if I did not provide enough details. Let me know if you need more information to provide recommendations!

  12. The 4 things that stand out to me on this build are:

     

    1. Why go with 32GB of RAM if all you will be doing is running Unraid and a Plex Docker container?

    2. IDK if you have network security planned further down the line, but you may end up wanting to run some of that on the server, which will take up system resources.

    3. You might want to get the i5. Sure, you are probably only transcoding 2-3 streams max, but what you are starting with makes a big difference. If it is DVD/480p SD quality, no big deal. But if you are transcoding Blu-ray/1080p HD or 4K files... that is going to be much more taxing.

    4. One thing you did not mention is your CPU cooler. Neither the i3 nor the i5 comes with one, and even if they did, I would never recommend a stock cooler for transcoding. Just make sure whatever you get is good quality.
