VisualHudson

Members

  • Content Count: 5
  • Community Reputation: 0 Neutral

About VisualHudson

  • Rank: Newbie
  1. Because the disk will have already been zeroed by the preclear plugin, is that when you choose the "parity is still valid" option, or something like that, which I've read about online? I haven't tried this yet or seen the option myself. So if I stop the preclear session during the pre-read or post-read step and then create the array using those disks, I'm guessing they won't have the preclear signature, whatever that is? What are the benefits / drawbacks of doing that, and of the preclear signature itself? All 4 of the drives are now in the post-read stage, two at about 85% and two at about 55%, so if I'm going to follow your advice I need to end this process within the next day or so, before it starts zeroing the drives again.
  2. I have been looking around and I can't find a definitive answer, so hopefully someone can help me here. I'm currently in the process of preclearing 2x 14TB & 2x 12TB drives via USB 3.0 before shucking them. I had scheduled them to do two cycles including pre-read and post-read, but each step is taking 20-24 hours, so a full cycle takes about 3 days. I'm currently on the first cycle for all drives and they are all at step 2 (zeroing), approximately 60-80% done so far. I don't have nearly a week to wait just for these four drives, and on top of that I've got 8 more drives to preclear. At this rate it would be three weeks before all the drives were ready for the array (see the rough timing sketch below). I do plan on setting up and starting the array with the first four drives once they are ready and adding the rest as they complete the preclear process, but I don't want to be waiting weeks on end before this is all set up and the array is complete. So I was wondering: is there a way to abort / end the current preclear process after the first cycle completes and still retain the preclear signature (I think it's called)? I know there's no way to stop it exactly on completion, as when the first cycle finishes it will automatically begin the second, but at what point am I safe to cancel the process and still have the drives classed as precleared and ready to be formatted?
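A rough sanity check on those cycle times (a sketch only; the ~170 MB/s sustained USB 3.0 throughput is an assumed round figure, not a measurement):

```python
# Rough preclear timing estimate. The ~170 MB/s sustained rate is an
# assumed figure for a shucked drive over USB 3.0, not a measured value.
TB = 1e12  # drive makers use decimal terabytes

def pass_hours(capacity_tb, mb_per_s=170):
    """Hours for one full read or write pass over the whole disk."""
    return capacity_tb * TB / (mb_per_s * 1e6) / 3600

for cap in (12, 14):
    p = pass_hours(cap)
    # one preclear cycle = pre-read + zero + post-read = 3 full passes
    print(f"{cap} TB: {p:.0f} h per pass, {3 * p / 24:.1f} days per cycle, "
          f"{2 * 3 * p / 24:.1f} days for two cycles")
```

Under those assumptions a 14TB drive works out to roughly 23 hours per pass and nearly 3 days per cycle, which lines up with the 20-24 hours per step reported above; the limiting factor is simply the drive's capacity, not the USB bridge.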
  3. See, the RES2SV240 is now more like £250 on eBay from a UK seller, whereas from a US or Chinese seller it's about £80-100. Are they all just selling the exact same thing? Should I just consider the cheaper sellers and wait the weeks it might take to get here? Why would you stay away from the Marvell controller on this board? I've been using it for years and it's never caused any problems. Why might that be any different with Unraid?
     Thanks for taking the time to write out a long, thought-out response! You actually bring up some very good points. I've been spending all this time thinking about the SAS expander and, for the time being at least, I probably don't even really need it. As I asked the guy above, why would you recommend avoiding the Marvell controller? If I was to follow your suggestion, how would you feel about me using the Marvell SATA ports for my Samsung 860 EVO SSD (which I plan to use as a cache drive) and only using the SAS card / Intel SATA ports for all of the HDDs in the array? I wouldn't be expecting the 10GbE to give me any benefit for streaming from Plex; it would literally just be to have the fastest transfer speeds possible between my new PC and the Unraid server. As I have 32GB of RAM and I know Plex won't need all of that, and I have 128GB of RAM on my new PC, I might do RAM-disk to RAM-disk transfers, or at the very least go directly to the SSD cache (rough transfer-time numbers below). I may also add a second SSD down the road as an unassigned drive, just to have a faster bit of storage on the server than the main array. Your idea of mounting drives as unassigned devices is actually something I've recently been considering as a much quicker way to get the 30TB of content I currently have back into Plex within the new Unraid environment. Whilst I've never used Unraid, cutting and pasting from within the same system must be quicker than transferring it over a network, even if it is 10GbE. I mean, for the last few years I have been getting by using my GTX 680 and my CPU, so I suppose there's no reason why I couldn't continue with that until I eventually hit the limit of whatever the two can do together within Unraid. But I don't know if I would revert back to the GTX 680 or just stick with the Quadro. Using a VM isn't the utmost priority of this build, but I'm not sure I understand what you mean by "need a display output or intend to 'stream' a game from the server"? The more I think about it, maybe I will use what I've got now / planned for now for Plex, then instead of running a VM on this machine I'll purchase a new CPU, mobo and RAM sometime down the line and transfer the server over to that new hardware. I could then use my current hardware either for another Unraid server solely for a VM, or simply for a traditional Windows install, seeing as my current specs still run everything perfectly fine for the most part. That would also allow me to put the 680 back to use. But that's an idea for a distant time; I can't really afford to buy essentially an entire new computer right now.
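For a ballpark on that 30TB migration (a sketch; every throughput figure here is an assumed round number, and real transfers will be slower due to protocol overhead):

```python
# How long would moving 30 TB take at various sustained speeds?
# All throughput figures below are assumed round numbers, not benchmarks.
DATA_TB = 30

scenarios = {
    "1 GbE, ~110 MB/s": 110e6,
    "10 GbE line rate, ~1.25 GB/s": 1.25e9,
    "single-HDD local copy, ~180 MB/s": 180e6,
}

for name, bytes_per_s in scenarios.items():
    hours = DATA_TB * 1e12 / bytes_per_s / 3600
    print(f"{name}: {hours:.1f} h ({hours / 24:.1f} days)")
```

In practice both the local unassigned-devices copy and a 10GbE transfer are likely to be limited by the drives (and Unraid's parity writes) rather than the link, so the disk-limited last line is the realistic floor either way; the 10GbE line-rate figure is a best case that spinning disks won't sustain.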
  4. Thanks for the response! I had a quick Google and read-through on VT-d now you've mentioned it, but it's not something I've come across before and I don't really understand what it is. Could you give a brief explanation of what it is or why it's important? I had seen the RES2SV240 recommended before, but it looks to be incredibly expensive on eBay unless you're willing to order one in from China, in which case they're hundreds cheaper. Should I be dubious of doing that? This was the main reason I was leaning more towards the IBM 46M0997, as it can be found at a much more reasonable price locally (on eBay). But let's say I did get the RES2SV240 and stayed with my 3770K; would I be correct in thinking I'd have a few options (there's a slot-bandwidth sketch after this list)?
     Option 1: Put the Quadro P2000 and the 9207-8i in, both at PCIe 3.0 x8. I then have the option to power the RES2SV240 from the third PCIe slot and forget about using the GTX 680 at all in this build. However, this would still not give me a 10GbE NIC.
     Option 2: Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4 (or would they both run at x8 again?), power the RES2SV240 using Molex, then put the GTX 680 in the last slot, where it would run at PCIe 2.0 x4 (or would it be x16? Either way it wouldn't be a massive concern, as it would rarely be used, and when it is, it won't be for "critical" gaming that requires the world's best framerates). However, this would also still not give me a 10GbE NIC.
     Option 3: Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4, power the RES2SV240 using Molex, and put a 10GbE NIC into the third slot, where it would run at PCIe 2.0 x4 (or would it be x16?). Would it be better swapping the order of these? Either way, this would also mean forgetting about the GTX 680 in this build.
     Option 4: Use the Quadro in the top slot, use the GTX 680 in my new rig until I can finally get my hands on an RTX 3080 (which I'm planning on doing anyway), then attempt to sell the Quadro and put the GTX 680 back into the top slot. As with Option 3, the SAS card would go in the 2nd slot, the expander would be powered by Molex, and the 10GbE NIC would go in the third. The drawback being that the GTX 680 isn't as good for Plex, and I would lose the ability to run a VM with a dedicated GPU.
     Option 5: Sell the Quadro straight away, keep the GTX 680 in the top slot and just lose out on having a GPU in my new rig until I can get an RTX 3080. Everything else would be the same as Options 3 / 4.
     At this very moment, if I'm honest, I'm leaning towards either Option 3 or 4, but it all depends on whether I can source a RES2SV240 without waiting months on end. Also, as per SpaceInvaderOne's demonstrations on YouTube, I was looking at getting a couple of Mellanox ConnectX-2s, but are there any 10GbE NICs you'd recommend?
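On the slot-speed worries in those options, a quick back-of-the-envelope (a sketch; the per-lane rates are the standard PCIe figures, and the 10GbE requirement is line rate with no protocol overhead):

```python
# Approximate usable bandwidth per PCIe lane after encoding overhead:
# PCIe 2.0 runs 5 GT/s with 8b/10b  -> ~500 MB/s per lane;
# PCIe 3.0 runs 8 GT/s with 128b/130b -> ~985 MB/s per lane.
LANE_MB_S = {"2.0": 500, "3.0": 985}

ten_gbe = 10e9 / 8 / 1e6  # 10 Gb/s line rate ~= 1250 MB/s

for gen, lanes in [("2.0", 1), ("2.0", 4), ("3.0", 4), ("3.0", 8)]:
    bw = LANE_MB_S[gen] * lanes
    verdict = "enough" if bw >= ten_gbe else "NOT enough"
    print(f"PCIe {gen} x{lanes}: ~{bw:.0f} MB/s -> {verdict} for 10GbE")
```

So a 10GbE NIC in a PCIe 2.0 x4 slot has comfortable headroom, while a 2.0 x1 slot would cap it at roughly 4 Gb/s, which answers the "x1 slot" idea in the next post's questions.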
  5. In 2012 I built a rig that, at the time, was about as good as you could get, and I'm now looking to repurpose as much of it as possible in a new Unraid build. I will mainly use the new build as a Plex server, as well as separate backup storage (i.e. a NAS) for computers around the house and my camera SD cards. I have been using the rig as a Windows 10 machine to host the Plex server, but I've recently built a new main rig, so I'm now looking to finally make the switch to Unraid after watching many people on YouTube recommend it so highly over the last few years. My current rig has the following specs:
     • CPU - Intel i7-3770K
     • RAM - 32GB Corsair Dominator DDR3 1866MHz
     • GPU - ASUS GTX 680 2GB
     • Mobo - ASUS P8Z68-V PRO/GEN3
     • PSU - Corsair AX850 (80 Plus Gold)
     • SSD - Samsung 860 Evo 2TB SATA3 (I think I plan to use this as a cache drive)
     • HDD - various WD Reds, Blacks and White Label shucked Reds totalling about 30TB (I have half a dozen more 12TB White Label shucked Reds ready and waiting for the new Unraid server to be built)
     I have been using a CoolerMaster HAF-X case, but have bought a Fractal Design 7 XL for the new Unraid build. I have today purchased an Nvidia Quadro P2000 5GB off eBay. I am also looking at buying a 9207-8i HBA card flashed to IT mode. I was actually going to buy two of the cards, but then I realised that I only need one plus a SAS expander (there's a rough bandwidth sketch after this post). I mentioned this to the eBay seller and he recommended I purchase an IBM 46M0997 SAS expander card, although he doesn't sell them himself and couldn't vouch for it, as his enclosure has a built-in expander backplane. As I'm going to need to transfer the 30TB back onto the new Unraid server, and for the future benefits to backups and transfer speeds, I'm looking to add 10GbE adapters to both my rig and the new Unraid server. This is currently where I'm a bit stuck. I see people recommend Mellanox ConnectX cards, but there seem to be so many different ConnectX cards, not to mention all the other manufacturers / brands, that I'm really lost as to which card/s I should be trying to purchase. I was hoping to use SFP+ given the speed and latency benefits, but I'm happy to listen to recommendations. I was planning on taking the GTX 680 out to use in my new rig for the time being, given the ridiculous difficulty of getting hold of an RTX 3080 right now, but I was hoping to be able to put it back in down the road so that I can use it for a VM or something like that. However, this brings me to my next problem: I don't think this motherboard has enough PCI Express slots for all these cards, and I'm not sure which order I should be installing them in. [The manual's expansion-slot table and a photo of the motherboard were attached here.] So, excluding the GTX 680, my plan was to install the cards as follows:
     • Install the Quadro P2000 into the top / blue PCI Express 3.0 slot
     • Install the 9207-8i into the middle / white PCI Express 3.0 slot
     • Install the IBM 46M0997 SAS expander into the bottom / black PCI Express slot
     However, it looks like the 10GbE cards all need PCI Express too, and at that point I would have run out of slots. I also would not be able to use my GTX 680 down the road as a separate GPU for a VM. So, a few questions:
     1. Can anyone suggest a better way to order my cards in the various slots? Maybe to get better use of PCI Express lanes and speed, or to free up a slot for the 10GbE NIC and/or the GTX 680.
     2. Are there any other SAS expanders I should look at getting besides the IBM 46M0997? I understand the IBM expander only uses the PCI Express slot for power, so could that power come from elsewhere, maybe one of the PCI Express 2.0 x1 slots if I took the risk and Dremelled out the right-hand end of the slot? Or, on the less risky side of things, is there a different SAS expander powered by SATA or Molex that I could use instead?
     3. If the Quadro, the 9207-8i and the SAS expander will all require the PCI Express 3.0 / 2.0 x16 slots (i.e. the blue, white and black slots), are there any 10GbE NICs that would only require a PCI Express 2.0 x1 slot or, and I believe this is a long shot, maybe even one of the basic PCI slots?
     4. Am I correct in thinking that the 9207-8i and the SAS expander should both basically be plug and play, as long as the HBA card is flashed to IT mode, or will there be extra work I need to do to get them working and my drives showing up?
     I'm sure I will have many more questions as I progress through this new build, but I think that about covers my uncertainties for the moment. Any help would be greatly appreciated!
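On the one-HBA-plus-expander question, a rough bandwidth sketch (assumed figures throughout: 6 Gb/s SAS2 lanes, a single x4 cable between the HBA and expander, and ~180 MB/s sustained per HDD; none of these are measured values):

```python
# How much bandwidth does each drive get behind a SAS2 expander when
# every drive is busy at once (e.g. during an Unraid parity check)?
# All figures are assumed round numbers.
SAS2_LANE_MB_S = 600        # 6 Gb/s per lane, ~600 MB/s usable
LANES_TO_EXPANDER = 4       # one SFF-8087 cable from the 9207-8i
HDD_MB_S = 180              # assumed sustained speed of one shucked Red

uplink = SAS2_LANE_MB_S * LANES_TO_EXPANDER  # ~2400 MB/s, shared

for drives in (8, 12, 16, 20):
    share = uplink / drives
    verdict = "link-limited" if share < HDD_MB_S else "disk-limited"
    print(f"{drives} drives: ~{share:.0f} MB/s each ({verdict})")
```

Under these assumptions, up to roughly a dozen spinning drives the single x4 uplink isn't the bottleneck; beyond that, parity checks slow somewhat, and running a second cable from the HBA to a dual-link-capable expander would double the shared uplink.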