Everything posted by VisualHudson

  1. I do have an Intel CPU with an iGPU but as it's only a 3770K I'm using a P2000 for hardware transcoding / encoding in Plex, so I suppose I can forgo that first paragraph. I don't know why that disk share had suddenly become a thing. It definitely wasn't there earlier. But okay cool, I've found it and set it to "Private" and "Yes (Hidden)" now so other people won't be able to access it at all and I won't be able to see it, but if need be I can still access it. Thanks for all of your help today man, very much appreciated!!
  2. Okay yeah, I see that you mean within the DVR settings in Plex itself. See, I'm a little bit hesitant about trying to get Plex to cut out commercials. It never did a very good job whenever I tried it in the past on my previous Windows setup. It is certainly a very interesting idea that I think I'd like to try. To edit the 'go' file I assume I'd just have to view the flash drive as a share in Windows, go to config, then open and edit the file using Notepad and save it. Right? Would I need to include:
     #Setup drivers for hardware transcoding in Plex
     modprobe i915
     sleep 4
     chown -R nobody:users /dev/dri
     chmod -R 777 /dev/dri
I assume that is for something else, as it's not what you have previously told me to do? You wouldn't have an idea about why that cache drive is showing up as a share, would you? And yeah, I'm aware that we're very, very off topic now, but I do very much appreciate having you here to help me today, and so quick with your responses too. If I could show you my gratitude I would!! It's very nice to have someone a lot more knowledgeable than I am helping me get along and learn this whole new world of Unraid haha
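For reference, the 'go' file being discussed here lives at config/go on the flash drive (i.e. /boot/config/go on a running server). A minimal sketch of what it might look like with the hardware-transcoding lines from this thread appended; the emhttp line is the stock content Unraid ships with:

```shell
#!/bin/bash
# /boot/config/go -- runs once at boot, before the webGui starts

# Set up the Intel iGPU driver for hardware transcoding in Plex
modprobe i915
sleep 4
chown -R nobody:users /dev/dri
chmod -R 777 /dev/dri

# Stock line: start the Unraid management utility (webGui)
/usr/local/sbin/emhttp &
```

Because the flash drive is FAT32, the file can be edited from the Windows share with Notepad as described, as long as it is saved as plain text.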
  3. I do have Plex DVR set up through an HD HomeRun Quatro that's connected elsewhere within the network. I do not understand what you mean by "enter those lines in the 'go' file" (what "go" file, and how?), and where would I then set "Convert Video While Recording" to Transcode? Although, to be fair, it's not very often we use Plex DVR to actually record; we mainly just use it for Live TV. It's usually easier to source from elsewhere than to record, cut the commercials, and then re-encode down to a decent file size, etc. Also, I was going to open a separate thread, but maybe you can help me with this: since doing all of what we've discussed today, I'm now getting a share showing up in Windows called "cache", which appears to me as though the cache drive has for some reason now become its own share? All of the folders/shares within the cache drive (i.e. domain, appdata, system) are all set to "Yes (Hidden)", so none of them should be visible, and they're not when you open the IP in Explorer and view the possible shares, but if you open the cache folder, there they are. If you can understand what I mean? This definitely wasn't there earlier. How do I hide it again?
  4. Is what I would have to copy and paste into terminal to achieve the same result as you? You do mention it could become an issue with multiple simultaneous transcodes. Do you not ever have more than one transcode at a time? Have you never experienced what happens if you reach your limit?
  5. Are you talking about having the transcode set to /tmp? That's currently how I've got my Plex Docker set up.
  6. I should maybe point out that whenever I previously said array, I did mean a share that is using an SSD for a cache. But okay, I think I understand what's going on now. Whilst I've got you guys here, is it possible to get your assistance in setting up some of the 32GB of RAM on my server as a RAM disk? And this might be totally wrong, but could that RAM disk maybe be pooled with the SSD cache?
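Following up on the RAM disk question: on Unraid, /tmp is itself a tmpfs, i.e. backed by RAM, which is why pointing Plex's transcode directory at /tmp already behaves like a RAM disk. If you wanted a dedicated, size-capped one, a minimal sketch (the mount point and 8G size are example values only, and the mount won't persist across reboots unless re-created at boot, e.g. from the 'go' file):

```shell
# Create a dedicated, size-capped RAM disk (example values)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
```

Note that a tmpfs cannot be added to the array or pooled with a physical SSD cache device; it is a separate, volatile filesystem.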
  7. Okay, I can understand that. But it seems almost like my 5GbE is wasted if it's only going to transfer at such slow speeds; it's not often I'm going to be transferring the same file more than once. So I've just done some further testing. I had an MKV I ripped a week ago on my NVMe boot drive on Windows. I copied that across using 5GbE to the array and it went across first time at approximately 335MB/s. I deleted it from the array. I copied it across to the HDD on Windows, then copied it across to the array, and it went at approximately 335MB/s again. But then, as before, I transferred another movie that was saved on the HDD to the array, and it would again transfer at the slow speeds. I then copied 2 different movies from the HDD to the NVMe boot drive and copied them across to the array for the first time, and they both went over at the much higher speed. So basically, any time I copy something off the HDD to the array, at least for the first time, it's going to transfer at a much slower rate. Now, I understand the HDD is the bottleneck, and being a WD Black 2TB (and an old one at that) it has read and write speeds around 150MB/s, but on average we seem to be getting speeds quite a bit below that approximate 150MB/s figure. But I guess that can be attributed to the fact it's an "up to" figure, and because the drive is quite full it's naturally going to be slower?
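One way to sanity-check whether the source HDD really is the bottleneck is to time a raw read of the file twice, once cold and once after the OS has cached it in RAM; a cached run can easily hit several hundred MB/s, which would match the fast repeat transfers seen here. A rough sketch (the file path is hypothetical, and dropping the page cache requires root on Linux; the same idea works on the Windows side by copying the file to NUL):

```shell
# Cold read: drop the Linux page cache first (run as root)
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/disk1/movies/example.mkv of=/dev/null bs=1M

# Run the same dd again WITHOUT dropping caches: if it is now
# much faster, the speedup on repeat copies was RAM caching,
# not the network or the destination disk
dd if=/mnt/disk1/movies/example.mkv of=/dev/null bs=1M
```

dd reports the throughput on stderr when it finishes, so the two figures can be compared directly.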
  8. Okay, so... Is that normal? What can I do about that? How do I get around it?
  9. I understand the HDD would be a bottleneck. But that wouldn't explain why the first time I transfer a file it goes across slowly, yet the 2nd time I transfer the exact same file it goes across at the kind of speed you would expect. At the slower speed it's not much, if any, better than just using 1GbE.
  10. When transferring from Windows to the Unraid share, the source was a HDD and the destination is a HDD array which has an SSD cache. If that makes sense? I would be interested in setting up RAM disk to RAM disk transfers, as the Windows PC has 128GB of RAM and the Unraid server has 32GB, but I'm not sure how to set that up on the Unraid side of things, and thought that might be something to do after I've got all this 5GbE business working as it should.
  11. Okay, so this is a good example of where, as I said, I might have got something wrong. I've just gone back to SpaceInvaderOne's video, and at 10:30 he puts the Unraid server as one address, then at 16:49 he sets the setting in Windows to another. Clearly I was not paying enough attention, both aurally and visually, as I thought he put .199 for both. So I've just gone into the 5GbE adapter properties on Windows and changed that, and you know what... that appears to have worked! haha At least to an extent. I can add a network share, it asks me to log in, and I can then map the share to a drive letter and access my files as you would expect. However, some quick tests I've just done (copying a movie from Unraid to Windows via 1GbE and then via 5GbE, then copying a movie from Windows to Unraid via 5GbE and then via 1GbE) all seem to get very similar results of approximately high-80s to 110ish MB/s. But then it gets weirder. If I copy, say, Movie A from Windows to Unraid using 5GbE, it'll transfer at the above approximate speed. I then delete Movie A. I then copy Movie A again; the 2nd time it will transfer at 335MB/s. If I delete it and copy it across again, Movie A will again transfer at about 335MB/s. But then if I go to copy across Movie B, the transfer speed drops back down to approximately 90 - 110MB/s. What gives with that?
  12. Sorry, what? All of the stuff I've seen says that you need to set the same IP on both of the 10/5GbE NICs. So you're saying I need to change one of them, so that, say, Windows is on one address and Unraid is on another? Either I've totally misunderstood the tutorials, which very well might be true, or that can't be right...
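For what it's worth, a sketch of what "different addresses on the same subnet" looks like for a direct PC-to-server link. The 10.10.10.x addresses and the adapter/interface names here are purely illustrative, not the ones from the video:

```shell
# Windows side, in an elevated prompt (the adapter name varies;
# check "netsh interface show interface" to find the 5GbE port):
netsh interface ipv4 set address name="Ethernet 2" static 10.10.10.2 255.255.255.0

# Unraid side: eth1 would normally be given 10.10.10.1/24 under
# Settings > Network Settings; the temporary console equivalent:
ip addr add 10.10.10.1/24 dev eth1
```

Each machine then connects to the OTHER machine's address, e.g. \\10.10.10.1\sharename from Windows; giving both NICs the same IP prevents the link from working at all.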
  13. I originally followed this video by SpaceInvaderOne. I then tried to follow the two tutorials in a post found elsewhere on this forum. But I just cannot seem to get it up and running correctly.
     The Windows 10 PC has an Asus Maximus XII Hero with a built-in 5GbE Ethernet port on the back. The Unraid server has an ASUS XG-C100C 10GbE PCI-E card (and is running version 6.9.0-beta30). They are connected via a Cat 7 Ethernet cable. Both Windows and Unraid can see they're connected at 5GbE, in the Ethernet Status window on Windows and on the Dashboard in Unraid.
     I am able to access the shares on Windows via the usual address, but not via the address which should be connecting to the Unraid server using the 5GbE network. Windows will just hang if I try to Add Network Location, before eventually saying "The folder that you entered does not appear to be valid. Please choose another." If I try to ping from Windows, it works fine for the former but will time out and fail for the latter. If I try to ping from Unraid using the 10GbE NIC (eth1, as detailed in the written tutorial in the 2nd link), I get a single-line result saying "PING ( from eth1: 56(84) bytes of data." To be honest, I'm not sure if this is all that should show up or if there's anything more.
     I am totally new to Unraid, and I am now totally lost and confused as to what on earth I have to do to get this working correctly. These tutorials make out like it's oh so simple, but I cannot for the life of me figure out what is going wrong. Please can somebody help?
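As a side note on reading that output: the "PING ... 56(84) bytes of data." line is only ping's header. A working link prints one reply line per packet underneath it, so a lone header means no replies are coming back. A couple of diagnostics from the Unraid console (the address is a placeholder for whatever the Windows NIC was assigned):

```shell
# Force the ping out of the 10GbE interface specifically
ping -c 4 -I eth1 10.10.10.2
# A healthy link prints reply lines such as:
#   64 bytes from 10.10.10.2: icmp_seq=1 ttl=128 time=0.20 ms

# Confirm eth1 actually carries the address you configured
ip addr show eth1
```

Note that Windows also blocks inbound ping by default on some network profiles, so a failed ping to the PC does not always mean the link itself is broken.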
  14. To give some backstory: I had used Unbalance to move some data between drives. After it had finished, I noticed that it had still left empty directories on the old drives, so I googled how to get rid of them and found a Reddit post which suggested using "find /mnt/disk1 -empty -type d" and "find /mnt/disk1 -empty -type d -delete". So I ran the first one to list all the empty directories on Disk 3, which was fine. But then, when I typed the command in again, I accidentally just did what the post said and included Disk 1, which is where my appdata, isos, system, domains, etc. folders are stored. I do plan on moving appdata to a separate SSD, but as I've just been spending the last couple of weeks getting things up and running, preclearing, moving data across, etc., I've not yet done that. But I'm concerned about the fact I've totally deleted the shares/folders for ".trash-99", "Domains", and "ISOs", as well as any other empty directories that might have been within the appdata or system folders. Should I be worried? Is there anything I can do? (No, I don't yet have a backup of any of these folders or shares.) Edit: Turns out they were automatically re-created sometime later, possibly the next time the array was shut down and restarted, so they have returned and it does not appear as though there is anything to worry about.
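The two find commands in that post differ only in the trailing -delete, which is exactly why the list-first pattern is the safe one. A small self-contained demo of the same expressions against a throwaway directory (the /tmp/emptydemo path is just for illustration):

```shell
# Set up a throwaway tree with one empty and one non-empty dir
mkdir -p /tmp/emptydemo/full /tmp/emptydemo/empty
touch /tmp/emptydemo/full/file.txt

# Dry run: list empty directories only, nothing is removed
find /tmp/emptydemo -mindepth 1 -type d -empty

# Destructive run: the same expression plus -delete
find /tmp/emptydemo -mindepth 1 -type d -empty -delete

# 'empty' is now gone; 'full' survives because it wasn't empty
ls /tmp/emptydemo
```

-mindepth 1 keeps the starting directory itself out of the results; note that -delete acts immediately on everything the preceding tests match, so reviewing the dry-run output first is the whole safety net.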
  15. Because the disk will have already been zeroed by the preclear plugin, is that when you choose the "parity is still valid" option, or something like that, that I've read about online? Although I haven't tried to do this yet or seen this option myself. So if I stop the preclear session in the pre- or post-read step, and then create the array using those disks, I'm guessing it won't have the preclear signature? Whatever that is? What are the benefits/drawbacks of doing this, and of the preclear signature? All 4 of the drives are now in the post-read stage, two at about 85% and two at about 55%, so if I'm going to follow your advice I need to end this process within the next day or so, before it starts zeroing the drives again.
  16. I have been looking around and I can't find a definitive answer, so hopefully someone can help me here. I'm currently in the process of preclearing 2x 14TB and 2x 12TB drives via USB 3.0 before shucking them. I had scheduled them to do two cycles including pre- and post-read, but each step is taking 20-24 hours, so a full cycle takes about 3 days. I'm currently on the first cycle for all drives, and they are all at step 2 (zeroing), approximately 60 - 80% so far. I don't have nearly a week to wait just for these four drives, and then to top it off I've got 8 more drives to preclear. That would be three weeks before all the drives were ready for the array at this rate. I do plan on setting up and starting the array with the first four drives once they are ready and adding the rest as they complete the preclear process, but I don't want to be waiting weeks on end before this is all set up and the array is complete. So I was wondering: is there a way to abort/end the current preclear process after the first cycle completes and still retain the preclear signature (I think it's called)? I know that there's no clean break point, as when it completes the first cycle it will automatically begin the second, but at what point am I safe to cancel the process and still have the drives classed as precleared and ready to be formatted?
  17. See, the RES2SV240 now is more like £250 on eBay from a UK seller, whereas if you get one from a US or China seller it's about £80 - 100. Are they all just selling the exact same thing, and should I just consider the cheaper sellers and wait the weeks it might take to get here? Why would you stay away from the Marvell controller on this board? I've been using those ports for years and they've not ever caused any problems. Why might that be any different with Unraid? Thanks for taking the time to write out a long, thought-out response! You actually bring up some very good points. I've been spending all this time thinking about the SAS expander, and especially for the time being I probably don't even really need it. As I asked the guy above, why would you recommend avoiding the Marvell controller? If I were to follow your suggestion, how would you feel about me using the Marvell SATA ports for my Samsung 860 EVO SSD, which I plan to use as a cache drive, and only using the SAS card/Intel SATA ports for all of the HDDs in the array? I wouldn't be expecting the 10GbE to give me any benefit for streaming from Plex; it would literally just be to have the fastest transfer speeds possible between my new PC and the Unraid server. As I have 32GB of RAM, and I know I won't need all of that for Plex, and I have 128GB of RAM on my new PC, I might do RAM disk to RAM disk transfers, or at the very least go directly to the SSD cache. I may also add a second SSD down the road as an unassigned drive, just to have a faster bit of storage on the server than the main array. Your idea of mounting drives as unassigned devices is actually something I've recently been thinking about as a much quicker way to get the 30TB of content I currently have back into Plex within the new Unraid environment. Whilst I've not ever used Unraid yet, copying and pasting it from within the same system must be quicker than transferring it over a network, even if it is 10GbE.
I mean, so far, for the last few years, I have been getting by using my GTX 680 and my CPU, so I suppose there's no reason why I couldn't continue with that, outside of eventually hitting the limit of whatever the two can do together within Unraid. But I don't know if I would revert back to the GTX 680 or just stick with the Quadro. Using a VM isn't the utmost priority of this build. But I'm not sure I understand what you mean by "need a display output or intend to 'stream' a game from the server"? The more I think about it, maybe I will use what I've got now / planned for now for Plex, then instead of running a VM on this machine I'll purchase a new CPU, mobo and RAM sometime down the line and transfer the server over to that new hardware. I can then use my current hardware either for another Unraid server solely for a VM, or simply for a traditional Windows install, seeing as my current specs still run everything perfectly fine for the most part. That would also allow me to put the 680 back to use too. But that's an idea for a distant time. I can't really afford to buy essentially an entire new computer right now.
  18. Thanks for the response! I had a quick Google and read-through on VT-d now you've mentioned it, but it's not something I've come across before, and I don't really understand what it is. Could you give a brief explanation of what it is or why it's important? I had seen the RES2SV240 recommended before, but it looks to be incredibly expensive on eBay unless you're willing to order one in from China, in which case they're hundreds cheaper. Should I be dubious of doing that? This was the main reason why I was leaning more towards the IBM 46M0997, as it can be found at a much more reasonable price locally (on eBay). But let's say I did get the RES2SV240 and I stayed with my 3770K, would I be correct in thinking that I would have a few options?
     1. Put the Quadro P2000 and the 9207-8i in, both at PCIe 3.0 x8. I then have the option to power the RES2SV240 using the third PCIe slot and forget about using the GTX 680 at all in this build. However, this would still not give me a 10GbE NIC.
     2. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4 (or would they both run at x8 again??), power the RES2SV240 using Molex, and then put the GTX 680 in the last slot, where it would run at PCIe 2.0 x4 (or would it be x16? Either way it wouldn't be a massive concern, as it would rarely be used, and when it is, it's not going to be for highly "critical" game playing that requires the world's best framerates). However, this would also still not give me a 10GbE NIC.
     3. Put the Quadro P2000 in at PCIe 3.0 x8 and the 9207-8i in at PCIe 3.0 x4, power the RES2SV240 using Molex, and put a 10GbE NIC into the third slot, where it would run at PCIe 2.0 x4 (or would it be x16?). Would it be better swapping the order of these? Either way, this would also mean forgetting about using the GTX 680 at all in this build.
     4. Use the Quadro in the top slot, use the GTX 680 in my new rig until I can finally get my hands on an RTX 3080 (which I'm planning on doing anyway), and then attempt to sell the Quadro and put the GTX 680 back into the top slot. Then, as with Option 3, the SAS card would go in the 2nd slot, the expander would be powered by Molex, and the 10GbE NIC would go in the third. The drawback of this being that the GTX 680 isn't as good for Plex, and that I would lose the ability to run a VM with a dedicated GPU.
     5. Sell the Quadro straight away, keep the GTX 680 in the top slot, and just lose out on having a GPU in my new rig until I can get an RTX 3080. Everything else would be the same as Options 3/4.
At this very present moment, if I'm honest, I'm kind of leaning towards either Option 3 or 4, but it would all be dependent on whether I can source a RES2SV240 without waiting months on end. Also, as per SpaceInvaderOne's demonstrations on YouTube, I was looking at getting a couple of Mellanox ConnectX-2s, but are there any 10GbE NICs you'd recommend?
  19. In 2012 I built a rig that, at the time, was relatively about as good as you could get, and I'm now looking to repurpose as much of it as possible in a new Unraid build. I will mainly use the new build as a Plex server, as well as separate backup storage (i.e. a NAS) for computers around the house and my camera SD cards. I have been using the rig as a Windows 10 machine to host the Plex server, but I've recently built a new main rig, so I'm now looking to finally make the switch over to Unraid after watching many people on YouTube recommend it so highly over the last few years. My current rig has the following specs:
     CPU - Intel i7-3770K
     RAM - 32GB Corsair Dominator DDR3 1866MHz
     GPU - ASUS GTX 680 2GB
     Mobo - ASUS P8Z68-V PRO/GEN3
     PSU - Corsair 850AX (80Plus Gold)
     SSD - Samsung 860 Evo 2TB SATA3 (I think I plan to use this as a cache drive)
     HDD - various WD Reds, Blacks, and White Label shucked Reds totalling about 30TB (I have half a dozen more 12TB White Label shucked Reds ready and waiting for the new Unraid server to be built)
I have been using a CoolerMaster HAF-X case, but have bought a Fractal Design 7 XL for the new Unraid build. I have today purchased an Nvidia Quadro P2000 5GB off eBay. I am also looking at buying a 9207-8i HBA card flashed to IT mode. I was actually going to buy two of the cards, but then I realised that I only need one plus a SAS expander. I mentioned this to the eBay seller and he recommended I purchase an IBM 46M0997 SAS Expander card, although he doesn't sell them himself and couldn't vouch for it, as his enclosure has a built-in expander backplane. As I'm going to need to transfer the 30TB back into the new Unraid server, and for the future benefits of backups and transfer speed, I'm looking to add 10GbE adapters to both my rig and the new Unraid server. This is currently where I'm a bit stuck.
I see people recommend Mellanox ConnectX cards, but there seem to be so many different ConnectX cards, not to mention all of the other manufacturers/brands, that I'm really lost as to which card(s) I should be trying to purchase. I was hoping to use SFP+ given the speed and latency benefits, but I'm happy to listen to recommendations. I was planning on taking out the GTX 680 to use in my new rig for the time being, due to the ridiculous difficulty of trying to get hold of an RTX 3080 right now. But I was hoping to be able to put it back in down the road, so that I can use that GPU for a VM or something like that. However, this also brings me to my next problem: I don't think this motherboard has enough PCI Express slots for all these cards, or I'm not sure which order I should be installing them in. The motherboard manual lists the expansion slots as: My motherboard looks like this. So, excluding the GTX 680, my plan was to install the cards as follows:
     Install the Quadro P2000 into the top/blue PCI Express 3.0 slot
     Install the 9207-8i into the middle/white PCI Express 3.0 slot
     Install the IBM 46M0997 SAS Expander into the bottom/black PCI Express slot
However, it's looking like the 10GbE cards all seem to need PCI Express too, which at that point I would have run out of. I also would not be able to use my GTX 680 down the road as a separate GPU for a VM. So, a few questions:
     1. Can anyone suggest a better way to order my cards in the various slots? Maybe to get better use of the PCI Express lanes and speed, or to free up a slot for the 10GbE NIC and/or the GTX 680.
     2. Are there any other SAS expanders I should look at getting besides the IBM 46M0997? I understand the IBM SAS Expander only uses the PCI Express slot for power, so is it not possible that power could come from elsewhere, maybe one of the PCI Express 2.0 x1 slots, if I took the risk and Dremelled out the right-hand end of the slot? Or, on the less risky side of things, maybe there is a different SAS expander that is powered by SATA or Molex that I could use instead?
     3. If the Quadro, the 9207-8i and the SAS expander will all require the PCI Express 3.0/2.0 x16 slots (i.e. the blue, white and black slots), are there any 10GbE NICs that would only require a PCI Express 2.0 x1 slot or, and I believe this would be a longshot, maybe even one of the basic PCI slots?
     4. Am I correct in thinking that the 9207-8i and the SAS expander should both basically be plug and play, as long as the HBA card is flashed to IT mode, or will there be extra work I need to do to get them working and for my drives to show up?
I'm sure that I will have many more questions as I progress through this new build, but I think that about covers my uncertainties at the moment. Any help would be greatly appreciated!