hellasus0001


Everything posted by hellasus0001

  1. What did you end up doing? I have an X9SCM-F with a Xeon E3-1240v2 and 24gb of ddr3 ecc ram, and my idea was to use one x8 gen3 slot for an Asus Hyper M.2 card (4 nvme drives, but I'm planning to use it to add just two drives and give them full gen3 x4 speeds - since they're actually gen4 drives and would be capped at half their speed on gen3, I wouldn't want to add any more drives per card). I also need to add an HBA card in one of the gen2 x4 slots to get sata3 speeds for my sata drives. So I could potentially add two Asus Hyper M.2 cards for a total of 4 nvme drives if I wanted, or perhaps a graphics card in one of them to deal with Plex transcoding (this is the most likely for me), assuming x4 gen2 is enough bandwidth for 8-10 mechanical drives. Of course there's a 4th slot which is also x4 gen2, so if it became too slow with just one HBA card there's space for another while still keeping a GPU and two nvme drives... This is all assuming the board allows bifurcation, so that's why I am asking how you managed it! I appreciate any help and recommendations, thank you. A rough bandwidth check for the HBA slot is sketched below.
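Since this math is easy to get wrong, here is a minimal back-of-envelope sketch. The round numbers are assumptions, not measurements: ~500 MB/s of usable throughput per PCIe 2.0 lane (after 8b/10b encoding overhead) and a generous 250 MB/s peak per mechanical drive.

```python
# Rough feasibility check for running 8-10 HDDs behind an x4 gen2 HBA.
# Assumptions: PCIe 2.0 ~= 500 MB/s usable per lane (8b/10b overhead),
# 250 MB/s generous sequential peak per 3.5" drive.
PCIE2_PER_LANE = 500          # MB/s, assumed usable throughput per lane
HDD_PEAK = 250                # MB/s, assumed per-drive peak

slot_bw = 4 * PCIE2_PER_LANE  # an x4 gen2 slot ~= 2000 MB/s

for drives in (8, 10):
    demand = drives * HDD_PEAK
    verdict = "fits" if demand <= slot_bw else "exceeds the slot"
    print(f"{drives} drives at peak need ~{demand} MB/s -> {verdict} ({slot_bw} MB/s)")
```

So 8 drives sit right at the x4 gen2 ceiling in the absolute worst case and 10 would oversubscribe it, but since an array rarely hits every drive at full speed simultaneously, that is usually fine in practice.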
  2. Hey, no problem! Happy that my research and insight helped ^^ That sounds like a really solid build, and yeah, that is imo one of the best priced z690 motherboards for expansion out of the gate, plus enough m.2 slots (imo 2 is enough for most people, assuming they run a 1:1 mirrored cache pool and use a regular sata ssd for the dirty work if m.2 slots are scarce - though the cost is about equal, and m.2 drives tend to have a longer warranty / more TB written before they're likely to die than sata SSDs). I have actually been looking into getting that specific board myself these last few days, because a) I found the 12th gen Celeron and Pentium selling for really low prices, which seems perfect for Plex transcoding and a basic unraid host on a nice board filled to the brim with the best storage options you can expect from a consumer board, and b) I have started considering a second server, so I can actually trust my important data being stored locally once it's in two separate units - right now I only use my server for replaceable content, really.

Just today I saw the new Z790 boards, which are actually priced similarly to the Z690 Steel Legend here (the z690 boards have gone up a lot since the 13th gen release: the Steel Legend is now 340 euros here, while the ASRock Z790 Pro RS/D4 is 325). Atm the Z790 Steel Legend adds a 5th nvme slot at gen5 speeds, which is nice, but relative to the Pro RS it also adds wifi and bluetooth... which I don't really value much in an unraid server. Ram costs also play into the options: the Steel Legend is DDR5, which is expensive vs the ddr4 I already have 16gb spare of. And DDR5's "on-die ECC" corrects nothing outside the chip and never reports a problem to the system - so is it actually worth double the cost for worse ECC compatibility / functionality than on AMD consumer cpus and motherboards? I don't think so, and you can get ddr4 used really cheap, I'm sure.

Z690 vs Z790 is almost identical in terms of these ASRock boards. You get, pretty much at minimum: 4 nvme slots, 8 sata ports, the regular x16 gen5 slot, plus on most boards a second x16 slot (and possibly an x1 on some). But that second x16 slot usually runs at gen3 x4 on z690 and gen4 x4 on z790, with those lanes routed through the chipset rather than straight to the cpu - worth noting if anyone reading this plans on 5 nvmes plus a full x16 gen5 slot. It'll be a monster, but if you want a GPU with that, check for a board with bifurcation between the first and second x16 slots, i.e. x8/x8 when both are used. Right now you need to spend at least 450 euros here in Sweden for such a board in z690 ddr5 (from what I've gathered, any board with that bifurcation skips ddr4 anyway), up to 650e for the "better" z690 boards that allow the split. If you want Z790 and bifurcation, expect to buy the most expensive board from either vendor, somewhere between 800-1500e. In short, you aren't missing anything of relevance using a z690 this generation, especially if you opt for the ddr4 models. The only reason I am looking at the z790 boards is that the z690 ones offering the same expansion as yours are all either out of stock or cost as much or more than the Z790 version of the Pro RS, at least... So of course I would choose the Z790 to get the extra m.2 gen4 slot - not that I necessarily have a strong need for it, but it's nice to have.
The biggest upgrade with your 12th gen build vs my 11th gen is 4 times more bandwidth between the CPU and chipset - which is crucial for your expansion out of the gate - and I am feeling the limit of my current motherboard; I am literally out of options if I want to add more drives. These boards cost between 200-350 retail (atm z690 is pricier than it was a little while ago for the cheapest two, but it's still 250-350 so far). Regardless, these ASRock boards are virtually identical in terms of expansion in their z690 versions, with minor differences: Steel Legend, Pro RS, PG Riptide, Extreme. They are also basically the same boards in their Z790 versions; the actual difference between Z690 and Z790 is 4 more gen4 lanes, allowing the 4th nvme slot to be gen4 instead of gen3. They also come in both ddr4 and ddr5 versions, otherwise identical so far, and they cost the same with either type of ram, too. I believe Z790 allows higher ddr5 overclocking, but I am not interested in that for a server anyway; it'll run XMP if I even get ddr5, which is unlikely. To summarize, they all have 8 sata ports, 4 nvme slots, a 2,5gbit nic and one gen5 x16 slot - plus one mechanically x4 slot, which becomes gen4 on the new boards. However, I am relatively certain that either one m.2 slot and that pcie slot will each drop to x2, or one is shut off when the other is used.

So... what if you want more than 8 sata ports? There's a decent solution that doesn't involve a controller card: turn one of the m.2 slots into 5 sata ports! It's still all routed through the 16000 MB/s chipset link along with everything else, but how often would all of the drives and nvmes be used at once? It's unlikely to become an issue, since you're really removing one m.2 drive that could do 5000-7000 MB/s peak for 60-180 seconds (after which most of them drop to 1000-2000 MB/s under sustained load), while each sata drive will use at most 250 MB/s - it's basically a 1:1 trade in possible write speeds if you were to move large amounts of data at once (see the back-of-envelope sketch below). Realistically all 5 drives would never hit max speed at once anyway; the reason HBA controllers work and can support hundreds of drives through an x8 port is exactly that not all of them are used at once. So a few extra sata drives in unraid should definitely not slow anything down, imo. That makes it a reasonable option to expand into a silly number of sata ports with these ASRock boards while still keeping two nvme drives for a mirrored cache pool: convert the other two slots into 2x 5 sata ports, and the total possible sata drives becomes a whopping 18. Since there are 4 slots, it's absolutely acceptable to lose one or two for 5-10 sata ports, since I consider 2 nvmes the necessary amount anyway - well, technically this applies to any board with 4 nvme slots. With the Z790 Steel Legend and its 5th (gen5) slot there are even more options: it would technically be possible to add 3 of the 5-port sata controllers and still have 2 nvmes - 23 sata ports! Yeah, the gpu would then run at x8 lanes, but any GPU below a 3090 is not going to be affected whatsoever. Even my 2070 Super, which uses pcie 3.0, runs at full performance on x8 3.0; only a 3090 would suffer a 1-2% loss at x8 3.0 - however the 3000 series uses gen4 pci express, so it would be running at x8 4.0 with no reduction in performance. I would assume the 4090 either runs gen5 already or won't cap out x8 4.0 either. Basically any card is going to deliver its complete performance at x8.
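For anyone who wants to sanity check the m.2-to-5-sata idea, here is a minimal sketch. The per-lane figure and the assumption that such an adapter runs at PCIe 3.0 x2 are mine, not from any spec sheet; the 250 MB/s per drive is the post's own figure.

```python
# Worst-case demand of five SATA HDDs vs an m.2 adapter's uplink.
# Assumptions: PCIe 3.0 ~= 985 MB/s usable per lane, adapter runs
# at only x2, 250 MB/s peak per SATA drive.
PCIE3_PER_LANE = 985
uplink = 2 * PCIE3_PER_LANE      # ~1970 MB/s even at a narrow x2 link
demand = 5 * 250                 # all five drives flat out at once

print(f"demand ~{demand} MB/s vs uplink ~{uplink} MB/s "
      f"-> headroom of ~{uplink - demand} MB/s")
```

Even in the unrealistic case where all five drives stream at peak simultaneously, the link has headroom - which matches the HBA oversubscription argument above.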
Just wanted to share this insane but fantastic idea, which granted is most effective on itx and matx sized boards with few sata ports to start with. I actually just found out about the m.2 sata controllers this weekend... Really opened my eyes to the possibilities of itx builds with 4+5 sata ports while still keeping 2 m.2 slots for nvmes, if you use the itx boards with 3 slots (two on the back, so the sata adapter sits on the front - easy!).

Regarding the question of video editing and which GPU you should get: I did some research, and I tried to keep it short anyway! I looked into which GPUs are ideal for video editing, and whether the one you mentioned was included anywhere in those discussions. I did find "the magic combo", or at least what works best and roughly which cards are 100% smooth vs maybe smooth / maybe good enough - I would just go for the minimum or better, btw. About the CPU and rendering: if you still haven't bought the cpu, I would choose the 13th gen i5. Even if you don't overclock, it's a solid increase in overall performance; from the reviews I've looked at it's about 15-19% faster at pretty much anything between the generations. I think ddr5 contributes close to zero of that bump - 1-2% over ddr4 if at all, I'd guesstimate - maybe a tiny bit more if you OC the memory to high speeds, but that's unstable and not good for our use case, unraid; we want a stable cpu and ram. I would use an XMP profile at most and zero cpu overclocking, really... maybe unlock the power limits, but idk if they're even locked on the K models - probably not?

Anyway, I found some posts about picking drivers for video editing, and wouldn't you know: Nvidia releases 2 different drivers, one specifically for productivity and one for gaming, and the "studio version" of the drivers is a key part of nvidia cards being really good at productivity. Supposedly the studio driver is more refined & stable before it's released, and is used for 3d software and basically anything creative on the PC, I suppose, including video editing. Seems interesting, but not really what you were asking... So I looked further and found this post, which mentions a specific card and how it performs at his bitrate and everything (source https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=164568 ). He points out that nvme drives are really good for improving editing; what he didn't say until later in that thread is that he's using an RTX 2060 Super to do this, with studio drivers of course. I also found an interesting thread on reddit which reinforced that RTX cards are superior for editing. They even mention a GTX 1050 being acceptable, and a lot of people seem to point to the GTX 1660 Ti as the performance/cost alternative - of course better is better, up to a point, but clearly there's a performance ceiling somewhere. And then I found a post from a guy doing 4k editing with a 3080 Ti who for some reason changed to an RTX 2060 (the non-Super version) - and it made zero difference to him when scrubbing and editing 4k video. In conclusion, it seems it's all about having some CUDA cores + nvidia nvenc & the studio drivers, and whatever budget you opt for, the more vram the better.
And there's definitely a dividing point somewhere between the GTX 1050/1660 Ti and the RTX 2060 - which is confirmed to work just as well as one of last year's top GPUs (the 3080 Ti anecdote above). Of course rendering time improves the higher you go above a 2060, but the requirement you have should be met by getting at least an RTX 2060 or better. I think it's very possible to find 3070 cards for 400 or less these days; if you look around, maybe even a 3080, maybe a Ti too, for a hundred more - probably around 500-600? Not completely sure. I am personally waiting a few months, but those are the cards I'll aim for for my gaming VM, since my 2070 Super underperforms a bit at 3440x1440 (the screen does 180hz, but Parsec only allows 60 fps; it's still a great card for 1080p or 1440p in many games, but the added pixel area of an ultrawide 1440p made it struggle - not that surprising, of course). How much lesser of a GPU can you get before 4k scrubbing stops being completely smooth? Impossible to determine without looking around and asking people who edit until you find where the limit is actually drawn. I would recommend just buying at least a 2060; they're not very expensive - I think the 2070 Super I have would sell for like 200-250 at most, tbh. The post below is worth reading; he mentions very solid nvme drives & names a few RTX card options if you're having doubts or budget concerns. But at least from what I gathered, it's strongly advised not to use AMD cards for this - "they are made for gaming" was one reply on reddit regarding them.

Considering this info, and the fact that you got a motherboard with 4 nvme slots available, all with heatsinks already in place (saves you 10-20 bucks each): you could easily get 2 drives and mirror them as a cache pool, move all the important things there - the VM and system files, perhaps everything non-array in essence - and you would likely not see much of a difference from actually passing through an nvme to the VM. I don't know if it's possible to use a dedicated drive as the host OS drive for a VM - essentially running it off that nvme as if bare metal - maybe? It might be worth looking into. Another way is of course to add an nvme drive as a second hdd in the VM, but I don't know how performance would be affected when the host drive is on the cache pool. Those are things you'd have to figure out. I would probably try running it straight on a cache pool of two gen4 nvmes first. How large you need, I am not certain; I picked 1tb x2 for my cache pool - a mirrored 1TB keeps the files reasonably safe, and you can do regular backups to the array as a precaution, too. Performance-wise, when I used a single 1TB Kingston Fury Renegade as the cache pool & VM storage, I didn't feel like any background systems were slowing anything down. I also pinned 5 out of 6 cores, so unraid had 1 core and 1 thread to run with, iirc - but I bet 4 cores for the VM would suffice, since my GPU is the weak point atm. It felt very smooth and worked really well, and using wifi on the client I still only had a few ms of delay. It's clear as day that using nvme drives is going to be a big improvement overall; if possible I'd get two of the better gen4 ones that you find on sale, around 1tb / 100-120e each. Pricing scales pretty much linearly up to 2TB; larger than that definitely costs more per gb. Personally I have a Samsung 980 Pro and a Kingston Fury Renegade, both 1TB. I haven't installed the Samsung yet because my current motherboard is so limited anyway, and I'll likely upgrade this week...
But I got the 980 Pro with heatsink for only a hundred on Amazon prime day, so you can definitely save money on those nice nvmes (the 980 Pro is one of the best; the Fury is not far behind, but Samsung is better at prolonged heavy workloads - I think that's what you'd prefer for the biggest improvement while editing. The 980 Pro is pretty much a top 5 nvme, especially for productivity tasks afaik, maybe the best). I would probably look for Samsung drives on sale, or buy whatever is good value vs performance within budget. I would get at least 2 for a 1:1 mirrored cache pool. Idk if it's possible to pass an m.2 directly through to the windows VM and use it as the host OS drive rather than a virtual disk in the VM folders, or whether adding it as a second hard drive in the VM would improve editing - you'd have to look around to find out what's possible and the benefits and negatives of each alternative. IMO I would not care about TBW as much as price per GB; all gen4 drives that aim for the top tier are around the 600 TBW range per 1TB, and all last longer the bigger they are - usually 2tb is rated for 1200 TBW with a price/gb like the 1TB drives, while 4TB = expensive AF (source https://www.reddit.com/r/buildapc/comments/tc72pw/comment/i0c5vub/?utm_source=share&utm_medium=web2x&context=3 ). A quick endurance calculation is sketched below.

As for me, I think my current setup is at a dead end. Being on 11th gen, I am getting 1/4th of the bandwidth between the chipset and the CPU... It's horrendous. And I have a b560 - downright the worst of all the asus boards: no 2,5gbit nic, one x4 gen3 slot and 6 sata ports. That's it. The only upside is that the vrm heatsinks are good enough to allow unlocking the power limiter, which is mainly good for gaming. I initially thought I could add enough drives with the x4 slot, but it's a struggle: only one gen4 nvme slot goes straight to the cpu, and then 4000 MB/s is shared between the gen3 nvme slot, the 6 sata port drives and the x4 pcie slot - which I was originally planning to use to add at least 4 more drives. A large reason I want to replace the entire build is to get my two gen4 nvmes working at their best, on a good foundation like the asrock boards provide; in fact it's pretty much spot on what I need. So for now I'm very likely ordering a Z790 atx board, with storage and expansion being spot on for my 8-bay ATX case - and if I ever did decide to expand, I have a case with twice as many bays, although that feels unlikely to happen within several years. However, since I am on a budget, the second build will have to wait a little, and for now I plan on using either a Celeron G6900 (2c2t), which has the UHD 710, or a Pentium G7400 (2c4t). Not sure if plex, sonarr, qbittorrent, deluge and maybe radarr would need the extra 2 threads to function. The UHD 710 has QuickSync, but with 16 execution units vs the 32 execution units of the 12th gen i5's UHD 770 (or the 750). The assumption I am making is that I can transcode about half the number of 4k streams, as it's literally half the gpu. If it doesn't work, I guess I'll deal with it; it's very rare for me to transcode things unless I'm travelling anyway.
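To put those TBW numbers in perspective, here is a small sketch using the ~600 TBW per 1 TB figure from above; the daily write volume is a made-up assumption for a busy cache pool, so adjust it to your own workload.

```python
# How long a 600 TBW drive lasts under an assumed daily write load.
TBW_RATING_TB = 600            # from the post: ~600 TBW per 1 TB drive
DAILY_WRITES_GB = 100          # assumption: fairly heavy cache-pool churn

years = (TBW_RATING_TB * 1000) / (DAILY_WRITES_GB * 365)
print(f"~{years:.1f} years to exhaust the rating")   # ~16.4 years
```

Which is why price per GB matters more than TBW for most home setups: the rating tends to outlast the drive's useful life by a wide margin.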
My dream build would, to be fair, require an mATX board, but I have to look past the ideal and use the things I have to cut costs as much as possible - which means picking an ATX board with a lot of expansion slots and enough sata ports to begin with. Honestly, I don't think I would need to buy a controller card of any kind and could still make it a gaming VM host when I have the funds, while also looking for a server grade itx board for the backup box with either 6 sata ports or one m.2 slot I can turn into 5 ports... worst case I'd just put an 8i controller card in it and be done with it; that's likely the easiest way. I do feel like remote control of that box is going to be necessary, or it will have to stay in my home - better than no backup, at least. But if I had the money and found a suitable board to build a powerful gaming VM unraid server in the larger yet compact and very well planned Fractal Design 804 matx case - it's probably my ideal NAS if I am going to have a gaming VM. It allows so much:

- 30cm GPUs, 28cm if you have a fan in front of it as intake.
- A 280 AIO slot that doesn't eat GPU length, mounted in the bay behind the motherboard, with the GPU venting out the area where the drives are located.
- 8x 3,5" drives fit in the compartment behind the motherboard, together with a full length 170mm PSU.
- 2 more 3,5" drives fit in front of the motherboard, mounted on the bottom.
- 2x 2,5" sata drives also have mounts if you really need any in there.
- mATX, so while it's difficult to find boards with more than 6 sata ports, there are boards with one x16 and one x8 slot; either they split x8/x8 or you add an HBA controller. Consider: if there are only 6 ports, you need another 6 to max the case out with two ssds. You could technically add more ssds, but that would probably be excessive unless it's one or two for installing games on the VM that you don't want to waste nvme space on.
- Space for 3x 120mm intake fans in front, 4 if the gpu is 28cm or shorter.
- A 140mm exhaust behind the motherboard.
- A 120mm exhaust in the drives/psu compartment.
- The 280 AIO exhaust in the top of that same compartment - 2x 140mm fans there, too.
- Space for two 140mm fans to exhaust the MB/GPU compartment at the top; there's just not enough room for AIO + fan thickness there, from what I gathered.
- CPU coolers up to 15cm or so, so basically any good air cooler is an option over a water AIO.

Personally I would get an AIO simply because there's a window on the GPU/MB side, and I would place my server so that anyone who comes to my home cannot miss it - cause why not! I do have the option of building on an AMD matx board which fits the description, but I don't know if it would work without problems; I've read about many people having trouble with it, ecc memory being iffy on consumer cpus and so on... I am not very confident that build would hold up to expectations if I went with it. Also it would pretty much force me to keep both servers here, but I guess it's a bit over the top to relocate one server just for data backups... With all this building, I would throw my gaming stuff into a nice case and hope people like the visual presentation - it's still a really good gaming pc for 1080p and 1440p: an i5 11500 with a basic b560 board, perfect for gaming even with an 11500, where the power limit is unlocked and there's enough cooling to let it run free, so the cpu goes up to 150w peak turbo instead of being locked at 65w max. For me it's just too limiting - and the GPU is too weak after the screen upgrade.
Basically it's not worth keeping even as a backup server; I'd rather it be ecc server grade itx, or a cheap lga 1700 board, since I'll have a spare cpu once I upgrade the first build. The most cost effective move is to get the Z790 board now and use it with the cheap celeron; that lets me get it up and running with pretty much zero downtime, if I wait to rebuild until the board arrives before friday. That would be ideal for me. The gpu is likely last on the priority list, below getting both boxes working tbh, but eventually I will most likely upgrade the CPU & add a GPU when or if I feel like gaming again. Anyway, I hope my small novel was worth reading, and that I replied in time before you might've gotten the AMD gpu - or if you did, how is it working out for you? Hopefully I gave you, or anyone who finds this post later, at least some new info of value ^^ Good luck with the build if it's not yet done! You can always DM me your discord and I'll add you if you want input a bit faster - I don't really visit this forum frequently unless it's for troubleshooting.
  3. A bit late, but hopefully this will help... For question 3: afaik the recommendation is to add a 2nd parity drive once you have 3-4 storage drives and are going to add more - after that it's just piling on drives and pool devices to make certain things faster - maybe even passing an nvme through to the gaming VM, just like passing through the GPU. The performance loss is like 3-5% on the GPU; I would assume the same for nvme ssds? Not that bad, imo. The CPU obviously won't be as powerful as bare metal, but given a decent number of cores, the CPU shouldn't bottleneck your gaming VM... which is where my current issue is; I think my i5 12500 would struggle if I purchased a 3080, for example. So I am in the boat of maybe getting a setup similar to yours, and then my question becomes: what about plex transcoding, hmm? Well... I think it's certainly possible, given the correct motherboard layout, to use a gaming GPU, a SAS card for expanding beyond the 4-8 onboard sata ports, and an intel or nvidia GPU that's compatible with plex transcoding. The new option is of course the AM5 build, which does actually have an igpu - but it's very new and not supported by plex for transcoding, and who knows if it ever will be. So if you want hardware accelerated transcoding you will need a compatible 2nd GPU for that: either an nvidia 1660 or better, a p400 or p2000 etc (there's a list somewhere), or - let's hope - plex adds compatibility with intel's cheapest dedicated GPU (I can find it for 240 euros here in sweden, even). That's at least if you have 2 physical x16 slots open, 3 if you need a sas card further down the line (preferably an x8 slot for the sas card, since the cheap ones use pcie 2.0 or 3.0 - you don't wanna buy a gen4 sas card, trust me). A 2nd dedicated GPU is likely far cheaper than having a second entire computer just to transcode for you. A 13th gen build would give you an igpu (uhd 770) that's pretty much already supported by plex for transcoding, but that's obviously a lot to upgrade to from your AM4 setup, just like AM5 - and you would lose the possibility of ECC memory unless you found one of the elusive W680 motherboards in the wild. I have also heard that you should be able to software transcode using a VM dedicated to plex media server - something that will demand a lot from the cpu, and I don't know how noticeable it would be on the gaming VM if both ran at the same time... Seeing your motherboard, I would suggest looking at adding larger drives, like 18TB enterprise ones; they cost around 350-400 euros and are a cheap way of avoiding any sas or sata expansion cards in the pcie x1 slots etc, if you were to use the 2nd x16 slot for a transcoding GPU. That would require adding the first one as parity, though; then I assume it'd be just as well to remove the original 8tb parity drive, and once you want to add more, you might want to think about a 2nd parity drive, as I mentioned.
  4. I've personally been running a Samsung BAR Plus 32gb usb 3.1 stick through the usb 3.0 headers to usb 3.0 sockets on two separate motherboards these past few years - I had a bunch of options, such as a few kingston 16gb sticks and several of this samsung stick - and it's been working without issues for about 2-3 years. Good to know, in my opinion, because some newer motherboards don't even have usb 2.0 headers anymore. Not that you need to use usb 3 sticks in the usb 3 sockets; it's just nice to know they work well enough, like usb 2.0 sticks do.
  5. At the very least, Unraid will not hog the gaming GPU you install (if you plan on passing it through to a VM), which is nice. I also wonder if it would be good enough for transcoding even a single 8k plex stream down to something low like 720p - but as you mentioned, plex to my knowledge has zero support for AMD igpus. However, I am relatively certain that if you just ran the plex server in a VM, the amd cpu would software transcode anything with ease. That's just speculation, though; I don't know how software transcoding in dockers works out on an am5 cpu. Would be very good to know, if anyone has the new AMD cpu!
  6. Wow, so I don't need an extra GPU if I pick a powerful AM4 cpu?! I am stoked! I was looking around and found the ASRock Rack X570D4U-2L2T, as someone already mentioned I think, which has that ARM-based BMC chip with a vga controller in it - I didn't think that was enough to let unraid boot, though! Fascinating. I need to chase down the best X570 server board now for my Fractal Design XL (v1 - yes, it's that old, haha). I can fit up to 15 drives and plenty of cooling in it - and if there's not enough, just dremel some more holes.
  7. It's up to the motherboard whether ECC is supported - so pick the right motherboard and you're good. DDR5 ECC memory is quite expensive, but I would def. buy it if I was getting that gen - it will last for a while... Maybe I'd start with just 2x 16gb though, because of the limited slots & the fact that you'd want 32gb dimms to reach 128gb, something which is barely on the market yet.
  8. I have an i5 11500, and I unfortunately can't see any possible way to continue using it with the B560 board I have - I am essentially capped out of pcie lanes. I was just looking at upgrading only the board to a Z590, like the top models - they all split the x16 down to 8/8/4/4 etc, and granted they have more m.2 gen4 slots, but that's all I would get... I don't know how well a z590 could get at least 15-16 drives connected via sata/sas plus the nvmes... So I'm either looking to sell my entire build - in a nice fancy case with some noctua fans I had laying around since I merged unraid and my gaming pc (which is the base of the current system; I just never expected to re-purpose it for unraid...), basically an entire PC incl. my 750w seasonic gold psu with 8 years of warranty left and an rtx 2070 super - and be content with no GPU until I can find a (granted, better) one for my VM on ebay or such. To be fair I am kind of underpowered in gpu performance vs my screen anyway; I would like to run it at more than low settings at 3440x1440, even if it's through the local network, and at more than a 60 fps stream. I don't know, I feel like I'm in such a bind; it feels risky to sell essentially everything but my drives + my gen4 nvme ssd and the server case - and hope I get enough to cover most of an entirely new base, which could be either AM5 or 13th gen intel... I do kind of want to go the real ECC route with AMD, or via the W680 chipset, aka ECC on intel's 12th gen core cpus (and the new 13th gen i9 only - I can't find compatibility listed for any other 13th gen cpu on intel's site, so it's unknown for now). W680 would give you many more lanes for expansion through the board (w680 is like a z690 without the fancy bling) with 13th gen, but I think 12th gen should give enough anyway... The problem for me is that it's difficult to find here in europe; the german sites, the only ones that actually have a few in stock, refuse to ship outside germany... So the W680 boards do exist: if you're american they can be bought on newegg, if not in the perfect configuration. Also available to germans - but they do cost around 500 dollars/euros. It will likely be Supermicro that's in stock, but even their ATX boards (usually 3 gen4 nvme slots + decent lanes for expansion; ignore the PCI slot) are difficult to find in stock. The best model is in my opinion still the Gigabyte MW34-SP0 - https://www.gigabyte.com/Enterprise/Server-Motherboard/MW34-SP0-rev-10 . That board basically has everything I would ever need: two x16 slots that actually work, 4 gen4 x4 nvme slots in total, and cheap ECC memory support via DDR4. Try finding DDR5 ECC memory above 16gb... The biggest stick currently sold is 32gb, and they're pricy as fuck. If I was going to build an AM5 system with their best CPU, which costs 1000 dollars in my country, a good enough board is about 400-600, so around 1500 dollars for cpu+mb. If I could find a W680 for around 500-600 and pair it with the i9 for comparison, which is going to sell for 850 dollars here, it would basically come out the same; though at US prices it would only cost around 650 vs the 699 dollars for the ryzen 9 7950x - not much of a difference, yet. There is also a locked i9 that is not yet listed on sites; presumably it will drop without much notice after the big release day, but it's perfect for an unraid build, and from what I can gather it's actually pretty powerful - same number of cores, just lower clocks. Or you could pair the W680 with a decent i7 12700/k or i5 12600k...
Maybe find a used CPU selling cheap because someone wants the new gen asap and already has a board for it...? It's def. a good route if you want some extra oomph without paying much for it. The expansion W680 offers is roughly equal to the AM5 top chips, and it allows ECC DDR4 with your intel core cpu - which is much cheaper, so the total build cost is still a lot less than the equivalent of any tier of AMD's latest gen. I am still fairly uncertain what I will do. I truly wish a z590 would solve my problems, but if I'm upgrading I'm def. not going to spend 350+ dollars on a Z690 board when I can add a couple hundred more and get real ECC memory support - and not the expensive DDR5 kind that AMD mostly has on their good boards... By the way, AMD does have an iGPU on all their zen4 CPUs now, which handles everything up to AV1 - which it can decode but not encode. It's more a question of how quickly the graphics cores on the AM5 processors will be supported in Plex for transcoding. It's also only 2 compute units, so you could probably assume intel's graphics will transcode more 4k streams at once..? And support for AMD igpu transcoding? Maaaybe in a year? I don't know if the 13th gen igpu is supported already either, tho; maybe not. Decisions... Good luck!
  9. Installed the i5 11600kf, no power limits, 3000mhz ram, 2x nvme ssd, 4x ironwolf 4tb, one 2,5" 1tb WD blue - and the RTX 2070 super. I installed 7 fans total - overkill? Maybe two could be removed. And I had to: the PC could only boot at all after I swapped the GPU for a GT710, disconnected one nvme on a PCIe card and unplugged several fans. Wtf? I thought a Seasonic Focus GX 750W 80+ Gold semi-passive (bought 2020-03-23) would handle my current hardware and at least more than 5 drives?! I have space for 10 (!!) more 3,5" drives and was planning to gradually fill up the array; as a student I can't spend too much, so the idea was to pick up almost-new drives at good prices over time... Now I don't know what to do. Am I going to be forced to sell my psu for one with better 12v rails for the 15 eventual drives I could fit, plus enough for an RTX 2070 super and an i5 drawing 65-120w? (I bought an 11400 with UHD 730 for transcoding; it's rated 65w tdp, but the MB unlocks it to use around 100-120w at most, slightly less than the 11600kf. But that's probably not the issue, because the computer literally does not start without me disconnecting several fans and removing the GPU completely in favour of the GT710??? And even then it's pretty much on the edge of booting or not!) What the hell am I supposed to do? Please help me 🙏 I need someone's knowledge on what psu will handle more drives at boot, plus the hardware occasionally being used close to max capacity (gaming and rendering VM). Are 15 drives so unrealistic? What's your advice? I can't afford to downgrade from the XL big tower with the ATX board and keep my massive GPU - it's simply too expensive to sell my things and buy bigger drives. I am already spending beyond my budget if I replace the PSU, but I will if it enables me to run at least 10-15 drives. The only other option is to keep two computers: one with the i7 7700k, RTX 2070 super and 32gb ram at 2400mhz on a Corsair CX500 psu with one 512gb 660p nvme, and one with the i5 11400, a 1tb gen4 pool nvme and the drives only. It would be possible with the hardware I currently own, but I wouldn't get 170e for selling the i7+ram+mb+cooler, or 50e for that case - though I would of course not need to sell the good psu yet (?), having only spent 130e on the 1tb pool nvme. Advice? Keep two computers, or one with a better psu? A rough 12v budget is sketched below.
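For anyone hitting the same wall, here is a rough boot-time power budget sketch; the per-component wattages are assumptions based on typical figures (spin-up surge per 3.5" drive, board power for the cards above), not measurements of this build.

```python
# Rough boot-time power budget for the build described above.
# Assumptions: ~30 W spin-up surge and ~8 W idle per 3.5" HDD,
# 2070 Super ~215 W, unlocked i5 ~120 W, ~75 W for fans/board/SSDs.
SPINUP_W, HDD_IDLE_W = 30, 8
GPU_W, CPU_W, MISC_W = 215, 120, 75

for drives in (5, 10, 15):
    boot = drives * SPINUP_W + CPU_W + MISC_W        # GPU is idle at boot
    gaming = drives * HDD_IDLE_W + GPU_W + CPU_W + MISC_W
    print(f"{drives} drives: ~{boot} W at boot, ~{gaming} W while gaming")
```

On paper even 15 drives spinning up together (~645 W) fits inside a healthy 750 W unit, which suggests the symptoms point to a fault or an overloaded peripheral/molex rail rather than total wattage; staggered spin-up (which many HBAs and some backplanes support) would also cut the surge dramatically.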
  10. Trying to build the usb-creator tool following https://github.com/limetech/usb-creator to a T. These files don't exist on the server anymore:

```
wget https://download.qt.io/official_releases/qt/6.2/6.3.1/submodules/qtbase-everywhere-src-6.3.1.tar.xz
wget https://download.qt.io/official_releases/qt/6.2/6.3.1/submodules/qt5compat-everywhere-src-6.3.1.tar.xz
wget https://download.qt.io/official_releases/qt/6.2/6.3.1/submodules/qttools-everywhere-src-6.3.1.tar.xz
```

As seen from my terminal window when attempting to wget them (this is for 1 file, but it's the same for all 3; I quickly translated my output to english just to show you):

```
https://download.qt.io/official_releases/qt/6.2/6.3.1/submodules/qttools-everywhere-src-6.3.1.tar.xz
Looking up download.qt.io (download.qt.io)... 77.86.162.2
Connecting to download.qt.io (download.qt.io)|77.86.162.2|:443 ... connected
HTTP request sent, awaiting answer... 404 Not Found
2022-09-25 07:40:11 ERROR 404: Not Found.
```

I have xcode 13, cmake, wget installed through brew etc; it doesn't really matter when the files are no longer on the server, though... EDIT: I just realised this is not necessary for me to do - I completely missed the finished product available on the unraid.net/download page as a dmg file, haha. But maybe this is relevant for someone, so I'll keep the topic here.
  11. What you mean by "easiest" could be interpreted in many ways. If you like to use ssh and do things by hand: https://command-not-found.com/mktorrent (see the sketch below). If you'd rather create them with a GUI, probably rtorrent with the ruTorrent webui - I think binhex has a version with included vpn options & proxy stuff, which might come in handy if you were to be the initial distributor.
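As an illustration of the ssh route, here is a minimal sketch that shells out to mktorrent from Python; the tracker URL and both paths are placeholders, and it assumes mktorrent is already installed and on the PATH.

```python
# Create a .torrent by invoking mktorrent (must be installed, on PATH).
import subprocess

subprocess.run(
    [
        "mktorrent",
        "-a", "http://tracker.example.com/announce",  # placeholder announce URL
        "-l", "21",                       # piece length exponent: 2^21 = 2 MiB
        "-p",                             # flag the torrent as private
        "-o", "my_share.torrent",         # output file
        "/mnt/user/downloads/my_folder",  # placeholder path to the payload
    ],
    check=True,  # raise if mktorrent exits non-zero
)
```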
  12. It seems that a VM + a docker using the same card at once is impossible. A GTX 1660 is the cheapest alternative, possibly a Quadro P620, both of which cost almost as much as a new CPU. The i5 11600K with igpu has Intel UHD Graphics 750; I don't know what my i5 11600kf would be worth 2nd hand, but the difference is basically so low that I might as well sell it and buy a new 11600K with the UHD 750 - which is enough for any plex transcoding. Still curious what you guys think about 32gb of ram: how much could I give the VM for the best performance possible while I'm working? (See the sketch below for one way to slice it.) Would you buy a gen4 ssd, or just keep the 600p + 660p (both 512gb), or get one gen4 drive for a high performance pool disk? Would you dedicate one m.2 to the VM, or keep it unassigned and use it as storage for the VM? I am not happy that a 20 dollar mistake a year ago will cost me a lot to upgrade today, but it is how it is.
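On the 32gb question, here is one way to think about the split; the reservations for the host and the dockers are pure assumptions to adjust to your own stack.

```python
# Carving 32 GB between Unraid, dockers and the work VM.
TOTAL_GB = 32
HOST_GB = 2        # assumption: Unraid itself stays lightweight
DOCKER_GB = 6      # assumption: plex + download stack headroom

vm_gb = TOTAL_GB - HOST_GB - DOCKER_GB
print(f"reserve {HOST_GB + DOCKER_GB} GB for the host side, "
      f"give the VM {vm_gb} GB")     # 24 GB for the VM
```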
  13. I basically have 2 ATX size computers; one is already an unraid server, but I want to add two GPUs for passthrough to VMs (I don't currently run any), and I have two sets of hardware. I want to have 1 server and 1 client (my macbook m1, accessing VMs remotely and over LAN for 3dsmax work, and sometimes playing games and using discord with friends). The rest will be sold, since it's a waste to have 2 computers if 1 can do the job. I would prefer to upgrade, since it's a fair upgrade in all ways except for 1 thing I need to know: what happens when a VM is using 100% of the passed-through GPU, say rendering some 3dsmax file, and plex starts transcoding a 4k 80mb/s bitrate movie, where the only hardware able to transcode it is that RTX card? What happens when there is no igpu or secondary gpu good enough for the transcode, and both a docker and the VM try to use the gpu? This is the main question when deciding between keeping the i7 or switching to the otherwise improved kit with the i5 below.

Storage situation: 4x ironwolf 4tb disks, 1 of which is parity. No pool/cache disk today. Slots for 5+ more 3,5" disks in the case. Would you waste a slot on a 1tb 2,5" WD blue drive for long term storage? I might not want it in the array - it will be significantly slower and more prone to dying early than any other drive. Rather, I would mount it alone and use it for storing more valuable data, then sync it to a onedrive 1tb cloud account. Is that a good use of it, and is it worth keeping as a single share where I drop important things I want uploaded to the cloud? It's barely worth selling; nobody really wants it, and it's brand new - return shipping was too expensive, so I kept it, and this is the only use I can think of. If it dies, all the data will be synced with the cloud, and it's unlikely to die mid-sync, at least? There are also 2x m.2 slots for an ssd pool and/or VM drive. If I change to the i5 kit, I would likely buy a Kingston 1TB at 7,300/6,000 MB/s with up to 900K/1M IOPS. Is it worth using that as a pool drive, with my second m.2, an intel 660p 512gb at 1500/1000 MB/s with 90K/220K IOPS, for the Win11 VM? Or is the difference negligible if I had the VM on the much faster gen4 pool drive? OOOR should I use the gen4 for Win11 and the 512gb for the pool/dockers etc? Will I actually notice a difference in any of these setups? Should I use both as a pool and keep less important things on the 512gb drive, with VMs and dockers on the faster disk? Appdata too? Temp downloads on the 512gb drive, or idk - what would you do?

Dedicated GPU & items being installed with either setup: a GeForce RTX 2070 SUPER GAMING X TRIO - I want it passed through to a win11 VM for max performance, along with CPU passthrough. At least 16gb ram, maybe more - how much would you give it if you had 32gb? The server will generally be used less intensely while I'm running the VM, so how much would you need to reserve for unraid and the dockers to not crash and burn? And a GT 710 (I thought it could be useful for playing around with osx VMs, but will its 1gb support the latest macos 12? If not, I might not install it, unless there are other reasons to have it in).

**Current unraid hardware:** i7 7700k (igpu intel 630, H.265 10-bit 4:2:0 codec supported), 4 cores, 8 threads, 256kb L2 cache per core, shared 8mb L3 cache.
- 4,2ghz base, 4,5ghz turbo
- 32gb mixed ram at 2133mhz or so (max 2400mhz)
- Realtek RTL8111H ethernet
- 4 sata ports (needs a sata pcie card to add any more 3,5" drives)
- No dedicated GPU

**Newer hardware I want to use if possible:** i5 11600kf - no igpu. 6 cores, 12 threads, 512kb of L2 cache per core, 12mb shared L3 cache, 3.9ghz with turbo to 4.9ghz.
- Intel AVX-512 - literally twice as good as the i7's AVX2?
- Supports 128gb ddr4; I currently own 32gb at 3200mhz
- One m.2 slot at pcie gen4 x4, a second at gen3
- 6 sata ports, 2 available before needing a pcie card
- Intel I219-V ethernet

Are there any compatibility issues with the newer hardware, or with the ethernet controller? Do unraid, docker, VMs etc utilize AVX-512? It seems pretty good on paper, anyway (a quick way to check what the cpu exposes is sketched below). Is it worth getting more than 32gb ram for the intended use? (I'd have to sell the current sticks and buy higher capacity ones to upgrade, but if it's worth it for running the VM better + the dockers, I would consider 64gb.) Is it very bad to lose the igpu HD 630? Maybe. Hopefully not? What if there are issues when my win11 VM is using 100% of the gpu & a docker app wants 10-25% of its power for a transcoding job? If that's not a big deal, and the i5 hardware is compatible overall, then it's obvious to me that I'll change it. What's your advice on any or all of my choices? Thanks in advance
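On the AVX-512 question, one quick way to see what the cpu actually exposes to the OS (and therefore to dockers and VMs) is to read the feature flags from /proc/cpuinfo - a minimal, Linux-only sketch:

```python
# List the AVX-512 feature flags the kernel reports (Linux only).
def avx512_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return sorted(x for x in flags if x.startswith("avx512"))
    return []

print(avx512_flags() or "no AVX-512 flags reported")
```

An empty list doesn't always mean the silicon lacks it (firmware can disable it), but it is what the software stack will actually see.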