VisualHudson

Members
  • Posts: 23

Posts posted by VisualHudson

  1. As posted in my thread here, I've had basically this exact problem, but now I'm panicking because I want to get back all of my old libraries, settings, etc.

     

    I have downloaded and installed the new nvidia driver from Community Applications. 

    I have removed the old Unraid Nvidia plugin.

    I think I now need to re-download and install a new Plex docker, but how can I go about that whilst also bringing back all of my old libraries, settings, etc?

    Also I see that there are TWO binhex Plex dockers, one with and one without "plexpass" in the title. I do have Plex Pass, but I'm not sure that my previous docker had Plex Pass in the title. Does that matter? 

     

    Edit: I think I've fixed the problem...

     

    One of the comments in this reddit thread suggested going to Apps > Previously Installed and installing Plex from there, saying it should bring back all the same mappings. It did!

     

    I then used the instructions in the 2nd post of this thread on the Unraid forums to add a variable with Key 'NVIDIA_DRIVER_CAPABILITIES' and Value 'all' before re-installing that Plex docker. 

     

    I've checked that Hardware Transcoding is turned on in Plex. I've checked in Plex that it is successfully doing hardware transcoding, and I've also checked through "watch nvidia-smi". All seems to be working and exactly as it was! 
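    For anyone else wanting to check theirs, this is roughly what I ran from the Unraid terminal (the extra query command is my addition and, I believe, needs a reasonably recent driver):

    # Refresh nvidia-smi every 2 seconds; a Plex "Transcoder" process should
    # show in the process list while a hardware transcode is running
    watch -n 2 nvidia-smi

    # Or poll just the encoder / decoder utilisation figures
    nvidia-smi --query-gpu=utilization.encoder,utilization.decoder --format=csv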

     

    The Plex Web interface does look a little weird and different to what I'm used to. It's now got a lot more of a blue background, and seems to have merged On Deck with Continue Watching? But I'm guessing this is just another new Plex layout and nothing to do with anything else; not that I'm particularly bothered, as I use PMP on my PC & laptop. 

     

    Still not sure whether I got the docker with or without "plexpass" in the title, but it doesn't seem to be causing any problems either way...

  2. As the title says, I've just updated Unraid for the first time in a couple of years from 6.9.something to 6.10.0-rc4.

     

    As I was going through Unraid, I noticed that the Plex docker was not running. I tried manually starting it, but got an "execution error - bad parameter" message popping up. I saw there was an option to Force Update, even though it said it was already up to date, so I tried that. The update failed and then completely removed Plex from my list of dockers. 

     

    I'm not sure if this error was because I still had / have the Unraid Nvidia plugin installed, which I know is now deprecated; however, I didn't want to get rid of it until I figured out how to do so and how to make sure that Plex was still using the GPU. 

     

    So now I'm panicking because I've completely lost Plex and have no idea what to do. I really hope that I will not have lost all of my settings and everything else I've spent the last few years getting just how I want it!

     

    Please can someone help, what can I do?

     

    Edit: So this thread seems to be very similar to the situation I have just had.

     

    I have downloaded and installed the new nvidia driver from Community Applications. 

    I have removed the old Unraid Nvidia plugin.

    I think I now need to re-download and install a new Plex docker, but how can I go about that whilst also bringing back all of my old libraries, settings, etc?

    Also I see that there are TWO binhex Plex dockers, one with and one without "plexpass" in the title. I do have Plex Pass, but I'm not sure that my previous docker had Plex Pass in the title. Does that matter? 

     

    Edit 2: I think I've fixed the problem...

     

    One of the comments in this reddit thread suggested going to Apps > Previously Installed and installing Plex from there, saying it should bring back all the same mappings. It did!

     

    I then used the instructions in the 2nd post of this thread on the Unraid forums to add a variable with Key 'NVIDIA_DRIVER_CAPABILITIES' and Value 'all' before re-installing that Plex docker. 
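    For anyone curious what that template change actually amounts to, I believe it boils down to something like the docker run below; the volume path and image tag here are illustrative guesses rather than copied from my template:

    # Rough equivalent of the Unraid template with the Nvidia variables set;
    # the appdata path and image tag are hypothetical
    docker run -d --name=plex \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -v /mnt/user/appdata/binhex-plex:/config \
      binhex/arch-plexpass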

     

    I've checked that Hardware Transcoding is turned on in Plex. I've checked in Plex that it is successfully doing hardware transcoding, and I've also checked through "watch nvidia-smi". All seems to be working and exactly as it was! 

     

    The Plex Web interface does look a little weird and different to what I'm used to. It's now got a lot more of a blue background, and seems to have merged On Deck with Continue Watching? But I'm guessing this is just another new Plex layout and nothing to do with anything else; not that I'm particularly bothered, as I use PMP on my PC & laptop. 

     

    Still not sure whether I got the docker with or without "plexpass" in the title, but it doesn't seem to be causing any problems either way...

  3. 25 minutes ago, dlandon said:

    You won't be able to do anything until you clear the 'Array' status.  Reboot and it should clear up.

     

    Those disks were probably not unmounted before being removed, or they were part of the array and fell out of the array.

    I have updated Unraid and rebooted, and the 'Array' status has cleared; it is now allowing me to mount the drives! 

     

     

    They were neither removed nor part of the array at any time; they have been sitting as unassigned drives since I set up the server, so I'm not sure why that status was showing up. 

     

    EDIT: Ignore me, I've got it working! Thank you

     

     

  4. I have been out of the Unraid game for a little while; I've had everything up and running for the last year or so, and now it's coming time for me to decide what to do with the four unassigned drives I have in my server. However, I cannot for the life of me figure out how to mount them and access them via Windows.

     

    When I look at tutorials on YouTube / Google / Reddit / etc. they show that there should be an Auto-Mount and Share toggle switch on the right-hand side, but it's not there. I can't see anything in the Settings when I click on the little gear icon. If I click on the three-letter device codes on the left-hand side, I can't see anything there either. 

     

    I'm sure this is a very simple fix, so please can someone point me in the right direction?

     

    [screenshot attached]

  5. 4 minutes ago, Hoopster said:

    Yes, that's how you edit the go file.  The lines above in the go file are because my CPU has an integrated GPU and I am using that for hardware transcoding in Plex and this sets up the drivers for doing that.  If you do not have an Intel CPU with iGPU, those lines are of no use to you.

     

    Your cache drive is showing up as a share because you have disk shares enabled and you are exporting the cache disk.  I am doing this intentionally because I want direct access to the cache share as I store some temporary downloads there.

     


    I do have an Intel CPU with an iGPU, but as it's only a 3770K I'm using a P2000 for hardware transcoding / encoding in Plex, so I suppose I can disregard that first paragraph. 

     

    I don't know why that disk share suddenly became a thing; it definitely wasn't there earlier. But okay, cool. I've found it and set it to "Private" and "Yes (Hidden)" now, so other people won't be able to access it at all and I won't see it, but if need be I can still access it.  

     

    Thanks for all of your help today man, very much appreciated!!

  6. 2 minutes ago, Hoopster said:

    [screenshot attached]

     

    I have the remove commercials setting enabled in the Plex DVR as well, so after recording it post-processes and removes commercials. The above setting also creates smaller files than the raw .m2ts from an OTA recording.

     

    The 'go' file is on your unRAID flash drive in the /config folder.

     

    Here is mine for reference (the SSH stuff is for the rsync backup between unRAID servers):

     

    
    #!/bin/bash
    # Start the Management Utility
    /usr/local/sbin/emhttp &
    
    #Setup drivers for hardware transcoding in Plex
    modprobe i915
    sleep 4
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri
    
    mkdir /tmp/PlexRamScratch
    chmod -R 777 /tmp/PlexRamScratch
    mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch
    
    # Copy SSH files back to /root/.ssh folder and set permissions for files
    mkdir -p /root/.ssh
    cp /boot/config/ssh/medianas_key /root/.ssh/id_rsa
    cp /boot/config/ssh/known_hosts /root/.ssh/known_hosts
    cat /boot/config/ssh/backupnas_key.pub > /root/.ssh/authorized_keys
    chmod g-rwx,o-rwx -R /root/.ssh

     

    We are way off topic now but you started the thread and no one else is participating so take it where you want to.

    Okay, yeah, I see that you mean within the DVR settings in Plex itself. I'm a little bit hesitant about trying to get Plex to cut out commercials; it never did a very good job whenever I tried it in the past on my previous Windows setup. 

     

    It is certainly a very interesting idea that I think I'd like to try. 

     

    To edit the 'go' file, I assume I'd just have to open the flash drive as a share in Windows, go to config, then open and edit the file in Notepad and save it. Right? Would I need to include:

    #Setup drivers for hardware transcoding in Plex
    modprobe i915
    sleep 4
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri

    I assume that part is for something else, as it's not what you previously told me to do?
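    If editing over SMB turns out to be fiddly, I'd guess the same lines could also be appended straight from the Unraid terminal; a minimal sketch, using the 16g size from your file:

    # Append the RAM disk setup to the go file on the flash drive
    cat >> /boot/config/go <<'EOF'
    mkdir /tmp/PlexRamScratch
    chmod -R 777 /tmp/PlexRamScratch
    mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch
    EOF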

    You wouldn't have any idea why that cache drive is showing up as a share, would you?

     

    And yeah, I'm aware that we're very, very off topic now, but I do very much appreciate having you here to help me today, and so quick with your responses too. If I could show you my gratitude I would! It's very nice to have someone a lot more knowledgeable than me helping me get along and learn this whole new world of Unraid haha

  7. 3 minutes ago, Hoopster said:

    Yes, enter them at the command line in terminal for the initial setup, and enter those lines in the 'go' file so it is recreated at reboot. Since it is in RAM, it disappears on a reboot.

     

    If you don't have the Plex DVR set up and recording, a 4GB RAM disk is probably plenty.

     

    I quite frequently have several transcodes going on simultaneously, as well as Plex DVR recordings, which also use the RAM disk. If you have the Plex DVR set up, you will want to set "Convert Video While Recording" to Transcode.

     

    Fortunately, I have never hit the RAM limit and experienced a server crash, but others have, so better safe than sorry.

     

    I have a recording going on right now and 5% of the RAMdisk is being used:

     

    image.png.465792612919221f44331a24b1e62ebd.png

    I do have Plex DVR set up through an HDHomeRun Quatro that's connected elsewhere on the network. 

     

    I do not understand what you mean by "enter those lines in the 'go' file" (what 'go' file, and how?), and where would I then set "Convert Video While Recording" to Transcode? Although, to be fair, it's not very often we use Plex DVR to actually record; we mainly just use it for Live TV. It's usually easier to source content from elsewhere than to record it, cut the commercials, and then re-encode down to a decent file size, etc. 

     

    Also, I was going to open a separate thread, but maybe you can help me with this: since doing all of what we've discussed today, I'm now getting a share showing up in Windows called "cache", which looks to me as though the cache drive has for some reason become its own share. All of the folders / shares on the cache drive (i.e. domains, appdata, system) are set to "Yes (Hidden)", so none of them should be visible, and indeed they're not when you open the server's IP in Explorer and view the available shares; but if you open the cache folder, there they all are, if you see what I mean? This definitely wasn't there earlier. How do I hide it again? 

  8. 16 minutes ago, Hoopster said:

     

    
    mkdir /tmp/PlexRamScratch
    chmod -R 777 /tmp/PlexRamScratch
    mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch

     

    So that is what I would have to copy and paste into the terminal to achieve the same result as you?

     

    You do mention it could become an issue with multiple simultaneous transcodes. Do you not ever have more than one transcode going at a time? Have you never experienced what happens if you reach your limit?
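    In the meantime, I'd guess the usage can be kept an eye on with something like:

    # Show the size and current usage of the tmpfs RAM disk
    df -h /tmp/PlexRamScratch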

  9. 1 minute ago, Hoopster said:

    As JorgeB mentioned, the speed advantage with 5-10 GbE cards is really only realized when transferring between SSDs.  Anytime an HDD is involved it will be slower and caching is more noticeable.  Of course, if writing to the HDD array directly and with parity, it will be even slower.

    I should maybe point out that whenever I previously said "array", I meant a share that is using an SSD as a cache. 

    But okay, I think I understand what's going on now.

     

    Whilst I've got you guys here, is it possible to get your assistance in setting up some of the 32GB of RAM on my server as a RAM disk? And this might be totally wrong, but could that RAM disk maybe be pooled with the SSD cache?

  10. Just now, JorgeB said:

    File is cached to RAM during first transfer, then the next transfer(s) are done from RAM, so faster.

     

    Okay I can understand that. 

     

    But it seems almost like my 5GbE is wasted if it's only ever going to transfer at such slow speeds; it's not often that I'm going to be transferring the same file more than once. 

    So I've just done some further testing.

     

    I had an MKV I ripped a week ago on my NVMe boot drive in Windows. I copied that across using 5GbE to the array and it went across first time at approximately 335MB/s. I deleted it from the array. I then copied it to the HDD in Windows, then copied it across to the array, and it went at approximately 335MB/s again.

     

    But then, as before, I transferred another movie that was saved on the HDD to the array, and it again transferred at the slow speeds. I then copied 2 different movies from the HDD to the NVMe boot drive and copied them across to the array for the first time, and they both went over at the much higher speed. 

     

    So basically, any time I copy something off the HDD to the array, at least for the first time, it's going to transfer at a much slower rate. Now, I understand the HDD is the bottleneck, and being a WD Black 2TB (and an old one at that) it has read and write speeds around 150MB/s; but on average we seem to be getting speeds quite a bit below that approximate 150MB/s figure. I guess that can be attributed to the fact it's an "up to" figure, and because the drive is quite full it's naturally going to be slower? 
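    One more thought: to take the disks out of the equation entirely, I gather something like iperf3 would show the raw 5GbE link speed (I believe it can be installed on Unraid via the NerdPack plugin in Community Applications; the address below is my server's 5GbE IP):

    # On the Unraid server: start an iperf3 server
    iperf3 -s

    # On the Windows PC: run a throughput test against the server's 5GbE IP
    iperf3 -c 192.168.2.13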

  11. Just now, JorgeB said:

    Then this is most likely the bottleneck; you should get better speeds from SSD to SSD (and note that, contrary to popular belief, not all SSDs are capable of 500MB/s+, especially for sustained writes).

    I understand the HDD would be a bottleneck. But it wouldn't explain why the first time I transfer a file it goes across slowly, yet the 2nd time I transfer the exact same file it goes across at the kind of speed you would expect. At the slower speed it's not much, if any, better than just using 1GbE.

  12. Just now, JorgeB said:

    This suggests the RAM cache is interfering. What are the devices in use, both source and destination, e.g. SSD to SSD, disk to array, other?

    When transferring from Windows to the Unraid share, the source was an HDD and the destination was the HDD array, which has an SSD cache, if that makes sense.

    I would be interested in setting up RAM disk to RAM disk transfers, as the Windows PC has 128GB of RAM and the Unraid server has 32GB, but I'm not sure how to set that up on the Unraid side of things, and I thought that might be something to do after I've got all this 5GbE business working as it should.
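    From what I can tell, the Unraid side would just be a tmpfs mount; a minimal sketch, with the size and path being my guesses:

    # Create an 8GB RAM-backed scratch area; the contents vanish on reboot
    mkdir -p /tmp/ramdisk
    mount -t tmpfs -o size=8g tmpfs /tmp/ramdisk

    Presumably it would then still need exporting as a share before Windows could see it.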

  13. 20 minutes ago, Hoopster said:

    You can never have two devices with the same IP address in the same subnet.  How could things be properly routed between them or from other devices if they have the same address?  Which one does the router pick as the destination for the traffic?

     

    It is possible for devices to share an IP address in certain link aggregation or hot failover configurations, but neither is a case of the two devices needing to communicate with each other, or with separate devices, independently.

     

    6 minutes ago, JorgeB said:

    Curious where you saw that...

     

    Okay so this is a good example of where, as I said, I might have got something wrong.

     

    So I've just gone back to SpaceInvaderOne's video, and at 10:30 he puts the Unraid server at 192.168.11.199. Then at 16:49 he sets the Windows side to 192.168.11.197.

     

    Clearly I was not paying enough attention, either aurally or visually, as I thought he put .199 for both. 

     

    So I've just gone into the 5GbE adapter properties on Windows and changed that to 192.168.2.12 and, you know what, that appears to have worked! haha
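    For reference, the combination that works is simply two different IPs in the same new subnet, kept separate from the main 192.168.1.x LAN (the netmask is what I'd assume; the tutorials leave the gateway blank):

    # Unraid eth1 (10GbE):  192.168.2.13, netmask 255.255.255.0, gateway blank
    # Windows 5GbE NIC:     192.168.2.12, netmask 255.255.255.0, gateway blank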

     

    At least to an extent. I can add a network share, it asks me to log in, and I can then map the share to a drive letter and access my files as you would expect. However, some quick tests I've just done, copying a movie from Unraid to Windows via 1GbE and then via 5GbE, then copying a movie from Windows to Unraid via 5GbE and then via 1GbE, all seem to get very similar results of approximately high-80s to 110-ish MB/s. 

     

    But then it gets weirder. If I copy, say, Movie A from Windows to Unraid using 5GbE, it'll transfer at the above approximate speed. I then delete Movie A. If I then copy Movie A again, the 2nd time it will transfer at 335MB/s. If I delete it and copy it across again, Movie A will again transfer at about 335MB/s. But then if I go to copy across Movie B, the transfer speed drops back down to approximately 90 - 110MB/s. What gives with that?

  14. 31 minutes ago, JorgeB said:

    You can't have the same IP on both 5GbE NICs.

     

    Sorry, what? All of the stuff I've seen says that you need to set the same IP on both of the 10/5GbE NICs.

     

    So you're saying I need to change one of them; say, Windows to 192.168.2.13 and Unraid to 192.168.2.14? Either I've totally misunderstood the tutorials, which may very well be true, or that can't be right...

  15. I originally followed this video by SpaceInvaderOne.

     

    I then tried to follow the two tutorials in a post found elsewhere on this forum.

     

    But I just cannot seem to get it up and running correctly.

     

    The Windows 10 PC has an Asus Maximus XII Hero with a built-in 5GbE ethernet port on the back. The Unraid server has an ASUS XG-C100C 10GbE PCI-E card (& is running version 6.9.0-beta30). They are connected via a Cat 7 ethernet cable. Both Windows and Unraid can see they're connected at 5GbE, in the Ethernet Status window on Windows and on the Dashboard in Unraid. 

     

    I am able to access the shares on Windows via 192.168.1.13 but not via 192.168.2.13, which should be connecting to the Unraid server over the 5GbE network. Windows just hangs if I try to Add Network Location, before eventually saying "The folder that you entered does not appear to be valid. Please choose another." 

     

    If I ping from Windows, it works fine for 192.168.1.13 but times out and fails for 192.168.2.13. If I ping from Unraid using the 10GbE NIC (eth1, as detailed in the written tutorial in the 2nd link), I get a single line of output saying "PING 192.168.1.83 (192.168.1.83) from 192.168.1.13 eth1: 56(84) bytes of data." To be honest, I'm not sure if this is all that should show up or if there should be anything more. 
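    For reference, these were the exact tests (same addresses as above; -I forces the interface on the Linux side):

    # From Windows (Command Prompt): the 1GbE address replies, the 5GbE one times out
    ping 192.168.1.13
    ping 192.168.2.13

    # From the Unraid terminal, forcing the ping out of the 10GbE NIC (eth1)
    ping -I eth1 -c 4 192.168.1.83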

     

    I am totally new to Unraid and am now totally lost and confused as to what on earth I have to do to get this working correctly. These tutorials make it look oh so simple, but I cannot for the life of me figure out what is going wrong. Please can somebody help?

    [screenshots 1-8 attached]

  16. So, to give some backstory: I had used Unbalance to move some data between drives. After it finished, I noticed that it had still left empty directories on the old drives, so I googled how to get rid of them and found this reddit post, which suggested using "find /mnt/disk1 -empty -type d" and "find /mnt/disk1 -empty -type d -delete".

     

    So I ran the first one to list all the empty directories on Disk 3, which was fine. But then, when I typed the command in again, I accidentally did exactly what the post said and included Disk 1, which is where my appdata, isos, system, domains, etc. folders are stored. I do plan on moving appdata to a separate SSD, but as I've just spent the last couple of weeks getting things up and running, preclearing, moving data across, etc., I've not yet done that. 

     

    But I'm concerned that I've totally deleted the shares / folders for ".trash-99", "Domains" and "ISOs", as well as any other empty directories that might have been within the appdata or system folders. Should I be worried? Is there anything I can do? (No, I don't yet have a backup of any of these folders or shares.)
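    For anyone who finds this later, these were the commands from the reddit post; listing first and only deleting after checking the output (and double-checking the disk number!) is the safe order:

    # List empty directories on a given disk, then delete them;
    # change disk3 to whichever disk you actually intend to clean
    find /mnt/disk3 -empty -type d
    find /mnt/disk3 -empty -type d -delete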

    Edit: Turns out they were automatically re-created sometime later, possibly the next time the array was shut down and restarted, so they have returned and it does not appear as though there is anything to worry about.

  17. 5 hours ago, gfjardim said:

    If it's reading the disk (pre- or post-read), it's safe to stop the current preclear session and add the disk to the array. If it's zeroing the disk and you stop the session and add it to the array, the disk will be cleared by Unraid again.

    Because the disk will have already been zeroed by the preclear plugin, is that when you choose the "parity is still valid" option (or something like that) I've read about online? Although I haven't tried to do this yet or seen the option myself.

     

    So if I stop the preclear session in the pre- or post-read step and then create the array using those disks, I'm guessing they won't have the preclear signature? Whatever that is? What are the benefits / drawbacks of doing this, and of the preclear signature?

     

    All 4 of the drives are now in the post-read stage, two at about 85% and two at about 55%, so if I'm going to follow your advice I need to end this process within the next day or so, before it starts zeroing the drives again. 

  18. I have been looking around and I can't find a definitive answer so hopefully someone can help me here.

     

    I'm currently in the process of preclearing 2x 14TB & 2x 12TB drives via USB 3.0 before shucking them. I had scheduled them to do two cycles including pre- and post-reads, but each step is taking 20-24 hours, so a full cycle takes about three days. I'm currently on the first cycle for all drives, and they are all at step 2 (zeroing), approximately 60 - 80% done. I don't have nearly a week to wait just for these four drives, and then to top it off I've got 8 more drives to preclear. At this rate it would be three weeks before all the drives were ready for the array.

     

    I mean, I do plan on setting up and starting the array with the first four drives once they are ready, and adding the rest as they complete the preclear process. However, I don't want to be waiting weeks on end before this is all set up and the array is complete. 

     

    So I was wondering: is there a way to abort / end the current preclear process after the first cycle completes and still retain the preclear signature (I think it's called)? I know there's no way to do it exactly at that point, as when it completes the first cycle it will automatically begin the second, but at what point am I safe to cancel the process and still have the drives classed as precleared and ready to be formatted?

  19. 9 hours ago, Michael_P said:

    Yeah, they have gotten expensive, I paid $85 new back in 2018!

     

    In a nutshell, VT-d allows for directed I/O, which allows hardware to be assigned to VMs

     

    The Mellanox cards work fine, but you'll need at least an x4 PCIe slot free

     

    Depending on the number of drives, you may need to keep the controller on an x8 lane or your parity checks will suffer - and I'd stay away from that Marvell controller on the board.

     

     

    I see the RES2SV240 is now more like £250 on eBay from a UK seller, whereas if you get one from a US or Chinese seller it's about £80 - 100. Are they all just selling the exact same thing? Should I just consider the cheaper sellers and wait the weeks it might take to get here?

     

    Why would you stay away from the Marvell controller on this board? I've been using those ports for years and they've never caused any problems. Why might that be any different with Unraid?

    5 hours ago, Decto said:

    A couple of thoughts.

     

    6 of the 8 SATA ports on the motherboard are good; I wouldn't use the 2 Marvell-based ones.

    With the 8 ports on the SAS card you should be good for hard drives for a while, at which point you may be thinking of hardware upgrades.

    ASMedia 2-port controllers also work fine in a PCI-E x1 slot, so you have 14-16 drives to get you started.

     

    The SAS card would be fine with 8 drives in a PCI-E 2.0 x4 slot

     

    10G isn't that useful in a streaming / low-user-count space, other than for quick flash-to-flash transfers. 

    With how Unraid is designed, the read / write speed is limited to that of a single disk, and slower for writes due to parity calculations; typically you're capped around Gbit speeds during writes anyway with the parity calcs. Teamed Gbit is usually fine for reads, which run at the normal disk speed. A dual Intel LAN card in a PCI-E x1 slot would be an option. If slots are at a premium, I'd compromise on network transfer rate, since most of the traffic / downloads etc. are internal to the server.

    You can always mount drives using 'unassigned devices' outside of the array via SATA or USB for much quicker transfer.

     

    The encoder in the iGPU of old chips isn't great, as it doesn't support many of the modern formats. If you pass the P2000 through to Plex, it can serve double duty: Unraid display and Plex encode / decode (if you have Plex Pass). Same issue with reverting to the GTX 680: you will need to be picky about formats or do quite a bit of offline CPU transcoding for your library. 

     

    You don't need a second GPU for a VM unless you either need a display output or intend to 'stream' a game from the server. With the older, core-limited CPU, streaming options will be fairly limited. 

     

    With the board you have, IO options are limited.

     

    Either now or in the future you could sell off the Quadro and CPU/mainboard/memory and replace them with a B365 board and a quad-core or 6-core CPU, or whatever the modern equivalent is. The modern iGPU would be fine for Plex etc. and you would have reasonable IO for expansion. The B365 boards expose more PCI-E lanes.

    Probably only a bit of beer money in it.

     

    Good luck

     

     

    Thanks for taking the time to write out a long, thought-out response!

     

    You actually bring up some very good points. I've been spending all this time thinking about the SAS expander, and for the time being I probably don't even really need it. 

     

    As I asked the guy above, why would you recommend avoiding the Marvell controller? If I were to follow your suggestion, how would you feel about me using the Marvell SATA ports for the Samsung 860 EVO SSD that I plan to use as a cache drive, and only using the SAS card / Intel SATA ports for all of the HDDs in the array?

     

    I wouldn't be expecting the 10GbE to give me any benefit for streaming from Plex; it would literally just be to have the fastest transfer speeds possible between my new PC and the Unraid server. As I have 32GB of RAM and know I won't need all of it for Plex, and I have 128GB of RAM on my new PC, I might do RAM disk to RAM disk transfers, or at the very least write directly to the SSD cache. I may also add a second SSD down the road as an unassigned drive, just to have a faster bit of storage on the server than the main array. 

     

    Your idea of mounting drives as unassigned devices is something I've recently been thinking about as a much quicker way to get the 30TB of content I currently have back into Plex within the new Unraid environment. Whilst I've not yet used Unraid, copying the data from within the same system must be quicker than transferring it over a network, even a 10GbE one. 

     

    I mean, for the last few years I have been getting by using my GTX 680 and my CPU, so I suppose there's no reason why I couldn't continue with that until I eventually hit the limit of whatever the two can do together within Unraid. But I don't know if I would revert back to the GTX 680 or just stick with the Quadro. 

     

    Using a VM isn't the utmost priority of this build. But I'm not sure I understand what you mean by "need a display output or intend to 'stream' a game from the server"?

     

    The more I think about it, maybe I'll use what I've got now / have planned for Plex, then instead of running a VM on this machine I'll purchase a new CPU, mobo and RAM sometime down the line and transfer the server over to that new hardware. I could then use my current hardware either for another Unraid server solely for a VM, or simply for a traditional Windows install, seeing as my current specs still run everything perfectly fine for the most part. That would also allow me to put the 680 back to use. But that's an idea for a distant time; I can't really afford to be buying essentially an entire new computer right now.

     

     

  20. Thanks for the response!

     

    I had a quick Google and read-through on VT-d now you've mentioned it, but it's not something I've come across before and I don't really understand it. Could you give a brief explanation of what it is or why it's important?

     

    I had seen the RES2SV240 recommended before, but it looks to be incredibly expensive on eBay unless you're willing to order one in from China, in which case they're hundreds cheaper. Should I be dubious about doing that? This was the main reason I was leaning more towards the IBM 46M0997, as it can be found at a much more reasonable price locally (on eBay).  

     

    But let's say I did get the RES2SV240 and stayed with my 3770K; would I be correct in thinking that I would have a few options?

    1. Put the Quadro P2000 and the 9207-8i in, both at PCIe 3.0 x8. I then have the option to power the RES2SV240 using the third PCIe port and forget about using the GTX 680 at all in this build. However, this would still not give me a 10GbE NIC. 
    2. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4 (or would they both run at x8 again??), power the RES2SV240 using molex, and then put the GTX 680 in the last slot, where it would run at PCIe 2.0 x4 (or would it be x16? Either way it wouldn't be a massive concern, as it would rarely be used, and when it is used it's not going to be for "critical" game playing that requires the world's best framerates). However, this would also still not give me a 10GbE NIC. 
    3. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4, power the RES2SV240 using molex, and put a 10GbE NIC into the third slot, where it would run at PCIe 2.0 x4 (or would it be x16?). Would it be better swapping the order of these? Either way, this would also mean forgetting about using the GTX 680 at all in this build. 
    4. Use the Quadro in the top slot, use the GTX 680 in my new rig until I can finally get my hands on an RTX 3080 (which I'm planning on doing anyway), and then attempt to sell the Quadro and put the GTX 680 back into the top slot. Then, as with option 3, the SAS card would go in the 2nd slot, the expander would be powered by molex, and the 10GbE NIC would go in the third. The drawback being that the GTX 680 isn't as good for Plex, and that I would lose the ability to run a VM with a dedicated GPU. 
    5. Sell the Quadro straight away, keep the GTX 680 in the top slot, and just lose out on having a GPU in my new rig until I can get an RTX 3080. Everything else would be the same as options 3 / 4. 

    At this present moment, if I'm honest, I'm kind of leaning towards either option 3 or 4, but it all depends on whether I can source a RES2SV240 without waiting months on end.

     

    Also, as per SpaceInvaderOne's demonstrations on YouTube, I was looking at getting a couple of Mellanox ConnectX-2s, but are there any 10GbE NICs you'd recommend?

  21. In 2012 I built a rig that at the time was about as good as you could get, and I'm now looking to repurpose as much of it as possible in a new Unraid build. 

     

    I will mainly be using the new Unraid build as a Plex server, as well as separate backup storage (i.e. a NAS) for computers around the house and my camera SD cards. Currently I have been using the rig as a Windows 10 machine to host the Plex server, but I've recently built a new main rig, so I'm now looking to finally make the switch over to Unraid after watching many people on YouTube recommend it so highly over the last few years. 

     

    My current rig has the following specs:

    CPU - Intel i7-3770k 

    RAM - 32GB Corsair Dominator DDR3 1866Mhz

    GPU - ASUS GTX 680 2GB

    Mobo - ASUS P8Z68-V PRO/GEN3

    PSU - Corsair 850AX (80Plus Gold) 

    SSD - Samsung 860 Evo 2TB SATA3 (I think I plan to use this as a cache drive) 

    HDD - various WD Reds, Blacks, White Label shucked Reds totalling about 30TB (I have half a dozen more 12TB White Label shucked Reds ready and waiting for the new Unraid server to be built)

     

    I have been using a CoolerMaster HAF-X case, but have bought a Fractal Design Define 7 XL for the new Unraid build. 

     

    I have today purchased an Nvidia Quadro P2000 5GB off eBay.

     

    I am also looking at buying a 9207-8i HBA card flashed to IT mode. I was actually going to buy two of the cards, but then I realised that I only need one plus a SAS expander. I mentioned this to the eBay seller and he recommended I purchase an IBM 46M0997 SAS expander card, although he doesn't sell them himself and couldn't vouch for it, as his enclosure has a built-in expander backplane. 

     

    As I'm going to need to transfer the 30TB back onto the new Unraid server, and for the future benefits of backups and transfer speed, I'm looking to add 10GbE adapters to both my rig and the new Unraid server. This is currently where I'm a bit stuck. I see people recommend Mellanox ConnectX cards, but there seem to be so many different ConnectX cards, not to mention all of the other manufacturers / brands, that I'm really lost as to which card(s) I should be trying to purchase. I was hoping to use SFP+ given the speed and latency benefits, but I'm happy to listen to recommendations. 

     

    I was planning on taking the GTX 680 out to use in my new rig for the time being, due to the ridiculous difficulty of trying to get hold of an RTX 3080 right now. But I was hoping to be able to put it back in down the road, so that I can use that GPU for a VM or something like that. 

     

    However, this also brings me on to my next problem: I don't think this motherboard has enough PCI-Express slots for all these cards, or at least I'm not sure which order I should be installing them in...

     

    The motherboard manual lists the expansion slots as:

    Quote

    2 x PCI Express 3.0 / 2.0 x16 slots (single at x16 or dual at x8 / x8 mode)

    1 x PCI Express 2.0 x16 slot [black] (max. at x4 mode, compatible with PCIe x1 and x4 devices)

    2 x PCI Express 2.0 x1 slots

    2 x PCI slots

    * The PCIe x16_3 slot shares bandwidth with PCIe x1_1 slot, PCIe x1_2 slot, USB34 and eSATA. The PCIe x16_3 default setting is in x1 mode. 

    ** Actual PCIe speed depends on installed CPU type. 

     

    My motherboard looks like this.

     

    So excluding the GTX 680, my plan was to install the cards as follows:

    • Install the Quadro P2000 into the top / blue PCI Express 3.0 slot
    • Install the 9207-8i into the middle / white PCI Express 3.0 slot
    • Install the IBM 46M0997 SAS Expander into the bottom / black PCI Express slot. 

     

    However, it's looking like the 10GbE cards all need PCI Express too, which by that point I would have run out of. I also would not be able to use my GTX 680 down the road as a separate GPU for a VM.

     

    So a few questions:

    1. Can anyone suggest a better way to order my cards in the various slots? Maybe to make better use of PCI-Express lanes and speed, or to free up a slot for the 10GbE NIC &/or the GTX 680.
    2. Are there any other SAS expanders I should look at getting besides the IBM 46M0997? 
    3. I understand the IBM SAS expander only uses the PCI-Express slot for power, so could that not come from elsewhere; maybe one of the PCI-Express 2.0 x1 slots, if I took the risk and Dremelled out the right-hand end of the slot? Or, on the less risky side of things, is there a different SAS expander powered by SATA or molex that I could use instead?
    4. If the Quadro, the 9207-8i and the SAS expander will all require the PCI-Express 3.0 / 2.0 x16 slots (i.e. the blue, white and black slots), are there any 10GbE NICs that would only require a PCI Express 2.0 x1 slot or, and I believe this is a longshot, maybe even one of the basic PCI slots?
    5. Am I correct in thinking that the 9207-8i and the SAS expander should both basically be plug and play, as long as the HBA card is flashed to IT mode, or will there be extra work needed to get them working and for my drives to show up?

     

    I'm sure that I will have many more questions as I progress through this new build, but I think that about covers my uncertainties at the moment.

     

    Any help would be greatly appreciated!