Zoroeyes

Everything posted by Zoroeyes

  1. Hi JorgeB, I tried virtio and this did increase the speed a little, with Windows Explorer now showing around 300MB/s. However, this is still at least half the speed I'd expect given the hardware in use. I also noted that Task Manager was showing roughly 2.5Gbps network usage, so it was better than the previous 1.5Gbps, but nothing like what I'd expect. Am I better off using a physical 10Gbps connection out of unRaid and back into the VM, via a 10Gbps switch, using a dual-port 10Gbps NIC with only one port passed to the VM? I don't understand why I should have to take this route, but I have £1000 worth of NVMe drives sat on either side of this virtual connection, all running on the same mobo, and I'm struggling to transfer 50GB files at anything more than 300MB/s. That seems a terrible waste of potential to me. I've logged a support ticket regarding this too (some time ago now), but haven't had a reply as of yet.
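One way to narrow this down, assuming iperf3 is available on both ends (it is not bundled with unRaid, and the IP address below is only a placeholder), is to measure the raw virtio link speed with no disks involved:
    iperf3 -s                          # on the unRaid host
    iperf3 -c 192.168.1.10 -P 4 -t 30  # on the Windows VM, pointing at the host's IP
If this tops out near the same 2.5Gbps, the virtual NIC path is the bottleneck; if it runs much faster, the limit is somewhere in the storage or SMB stack.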
  2. Hi JorgeB, it's virtio-net at the moment. Cheers
  3. Thanks Vr2Io. I've tried your suggestion and it didn't seem to improve things at all. Very frustrating. I wonder if anyone else has seen this issue?
  4. Hi, sorry for the delay in coming back. I've changed the RSS queue count to 2 but this didn't improve throughput. Not sure if I also need to touch Receive Side Scaling (enabled at the moment) or TSO (maximal at the moment), any suggestions? Max throughput I've seen so far is 200MB/s, and this is NVMe to NVMe.
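For reference, a hedged sketch of where the host-side multiqueue setting lives; the VM name and the queue count of 8 are assumptions, and the RSS queue count inside the Windows virtio-net driver would need to match:
    # edit the VM's libvirt XML (the VM name is a placeholder)
    virsh edit "Windows 10"
    # inside the <interface> block, alongside <model type='virtio'/>, add:
    #   <driver name='vhost' queues='8'/>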
  5. Thanks for coming back Vr2Io, could you elaborate a little please? I have 32 logical cores in my server, with 16 provisioned to the VM and 16 for unRaid. Can you give a little more detail on the settings you changed please? Thanks
  6. Hi. My unRaid server doubles as a workstation (a bare-metal workstation which sometimes gets booted as a VM within unRaid) and has a number of NVMe drives in it for use on both sides (3x 1TB Samsung 970 Pro for the bare-metal workstation, 4x 1TB Adata SX8200 Pro in BTRFS RAID 0 for use as the cache drive in unRaid). My intention is that I back up my UHD Blu-rays on the bare-metal machine, then copy them across to unRaid once it's running (via the VM). However, I'm not seeing the copy speeds I'd expect (only about 150MB/s). I have the unRaid share set to use the cache and everything is on the same machine. So, considering we're dealing purely with very fast NVMe drives here, can anyone suggest what else could be preventing the 500+MB/s copy speeds I'd expect when copying between machines? Could it be the virtual NIC? It is showing a usage of about 1.5Gbps and it's only a 1-gig NIC, but I didn't think this would limit anything as it's not really using the network in this instance (is it?). I do also have a 10-gig card in the server that I could pass through, but it's not plugged into anything yet, so would this even work, and if so, should I see improved performance when copying from the VM to the unRaid share? Cheers
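A quick way to separate the storage from the virtual network, assuming the cache pool is mounted at /mnt/cache (the path and test size are placeholders), is to write a large file directly from the unRaid console:
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=10240 oflag=direct
    rm /mnt/cache/ddtest.bin
If this writes far faster than 150MB/s, the pool itself is fine and the copy path (virtual NIC/SMB) is the more likely limit.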
  7. Hi, bit of an odd question but here goes. I have a high-end UHD player (Panasonic UD-UB9000) that can play discs but also does an amazing job with movie and music files over the network. Its only drawback is its horrendous interface (no poster wall etc.) for browsing and playing media from storage (unRaid in my case). However, what it can do is act as a UPnP renderer, so I believe it can have media sent or 'pushed' to it and it will play that media (as opposed to browsing for the media from the device's own interface and pulling it from the server). My question is: are there any solutions that can run from unRaid that will allow me to use a mobile device to browse my media collection on unRaid and, upon selecting an item and a suitable UPnP renderer (which should show up on the network), send that media to the renderer? What I'm not looking for here is a way to send the media to the device via AirPlay or similar; I want the server to push the media to the endpoint, and I just want to use the mobile device to select and control it. I remember I used to be able to do something like 'Play To Device' on my Windows PC and it would send the stream to another device on the network. This would be similar in functionality to that. I know it's a wacky question, but I'd love to know about any solutions that other unRaid users are using. Cheers
  8. That looks excellent steini84, exactly what I was looking for (and hopefully what others were after too). Thanks for taking the time to put it together.
  9. That sounds great steini84, appreciate it and will look forward to it.
  10. Ok, so I've been reading like crazy about ZFS and I've been creating, destroying and recreating zpools to try to understand how it all works. However, like many, I'm not quite piecing it all together. I can create a zpool (say zfspool), I can see what drives it uses and that its status is all good. I can set the mountpoint to, say, /mnt/zfs (which I'm afraid is confusing me a little), but then I run out of steam. Basically all I want to do is group 4 devices into what is effectively a RAID set and use it for a VM (if I learn more about ZFS then I might be brave enough to use it for more things like Dockers etc., but a single VM would be a start). I wonder if someone with ZFS knowledge would mind writing a generic, idiot's guide to the most common setups so people like me could make use of this awesome plugin/technology. These might include: configuring a zpool for a VM on unRaid, start to finish; and configuring a zpool for a general Samba share on unRaid, start to finish. If all we've got to do is swap out our device names and pool names, then I think this would really help guys like me understand how the more general use cases work in the unRaid environment. I know basic guides do exist (like Level1Techs), but they all seem to stop at the point where the pool is created, so I can't progress past that point. I'd like to see someone take that pool, give it a real mount point and create a VM (on unRaid) using that same pool. Please understand I have tried to read as much as possible about the subject, but not being a Linux guy, even the main concepts are quite alien to me (and I'm sure others). Any help would be much appreciated. Thanks in advance
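For what it's worth, a minimal sketch of the flow being asked about, assuming a 4-drive stripe; the device, pool and dataset names are all placeholders (real device paths are best taken from /dev/disk/by-id/), and the VM's vdisk would then simply be a file stored under the dataset's mount point:
    zpool create -o ashift=12 zfspool /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    zfs set mountpoint=/mnt/zfs zfspool
    zfs create zfspool/vms
    zpool status zfspool    # sanity check
    # in the unRaid VM settings, point the primary vdisk at a file under /mnt/zfs/vms/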
  11. Brilliant, I certainly will. I look forward to giving this a try and seeing what kind of performance I can get out of those NVMe drives!
  12. Excellent, thanks steini84, I didn't realise that had been implemented yet. I had read about it being available in some implementations, but didn't for a second think it'd be in the one I actually wanted to use! So with autotrim, is it 'set and forget'? No periodic commands etc.?
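For reference, a short sketch of the relevant commands, assuming a pool named zfspool (the name is a placeholder) on a ZFS build recent enough to support autotrim:
    zpool set autotrim=on zfspool   # continuous TRIM, effectively set and forget
    zpool trim zfspool              # optional occasional manual pass (e.g. from cron)
    zpool status -t zfspool         # shows per-device trim state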
  13. Apologies if this is covered elsewhere, but if I wanted to use this to create a 4x NVMe vdev with one disk of parity (like a RAID 5), how would I deal with TRIM, and would this become an issue?
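A hedged sketch of that layout: one disk of parity maps to raidz1, the pool and device names are placeholders, and TRIM is then handled exactly as in the autotrim snippet above:
    zpool create -o ashift=12 nvmepool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    zpool set autotrim=on nvmepool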
  14. Thanks for the great feedback guys, really appreciate you taking the time to comment. It's good to hear that I don't need UEFI boot to run UEFI on the VM, so that will hopefully fix one problem. I may also give BTRFS a try (although ZFS is still tempting). Either way, the idea would be to regularly snapshot the VM to array storage. I'd love to pass through the NVMe drives to the VM for 'bare metal' performance, but I understand the type of drive I have has issues with hardware passthrough on unRaid, which is a shame. As for the workload, I'm a developer who also does some 3D modelling/rendering and CAM (computer-aided manufacturing) in my spare time. I've spent so many years spending huge amounts of money on super-fast CPUs, RAM, GPUs etc., only to see them snoozing for most of the day while they sit around waiting to be supplied the file or the data they asked for from the slow storage. So when I built this rig I decided that storage was going to get as much investment as the rest, hence the NVMe overkill! In truth, no, I won't get any payback for having such fast storage, but then, as an enthusiast, I honestly don't care about return on investment. I mean, is there ever a use case for overclocking a CPU to 6GHz on liquid N2? Nope, but we do it because it's fun! All I want to know is, given the potential performance of my kit, what's the best approach to extract the most of that potential.
  15. My ultimate intention was to maintain the ability to configure AMD RAID volumes in BIOS, with a future view to supporting those volumes in a Windows VM. To access the AMD RaidXpert settings you have to have UEFI enabled. However, the specific NVMe drives that I'm using (I've since found) have an issue passing their controllers through to VMs in unRaid, so this may render that approach impossible anyway. I have another post where I'm looking at the best alternatives I might use in unRaid, which will make this post redundant if something is recommended that gives me acceptable performance from the NVMe drives without horrible overheads, but right now I'm struggling with which approach will give me the best performance.
  16. Ok, so I've been around and around trying to decide on the best route to take to make the best use of my new unRaid server, and each avenue I've taken so far has hit some kind of wall. The setup consists of an AMD Threadripper 2950x (16-core) CPU, 32GB RAM, 24 spinning disks (LSI 9305-24i HBA to a 24-port backplane) and 5 NVMe 1TB drives (Adata SX8200 Pros). 4 of those NVMe drives are on an x16 PCIe adaptor card so you can effectively stripe them if you want. The 5th NVMe drive is on the mobo.
When I initially specced the machine in my head, it was all sounding great. AMD was providing the capability (with Threadripper) to do NVMe RAID out of the box, so I was going to configure the drives on the PCIe adaptor as a super-fast RAID 0 for hosting VMs and to act as a cache drive for unRaid. However, I suppose I should have done some more reading before getting to this point, but excitement got the better of me. The reality is, the AMD RAID solution is (mostly) Windows only, with no native support in unRaid (not sure what I was expecting to find, tbh). Ok, no worries, I'll use UEFI boot, configure the AMD RaidXpert2 volumes in BIOS, pass through the raw drive controllers to the VM and just employ AMD RAID at a VM level, with the Windows drivers. Ah, well, with UEFI enabled, my unRaid GUI won't fire up (the old blinking cursor issue, already reported) and there is a problem with the controllers on my NVMe drives that prevents them being passed through to VMs. Excellent. So I've hit two walls so far.
After some further research, other forum members helpfully suggested I create an NVMe (RAID 0) cache pool in unRaid, format it as BTRFS and use that for my VMs and cache. Great, sounds like we're getting somewhere, but wait: a bit of reading later and it seems BTRFS performance can be very poor compared to most other file systems and certainly wouldn't do my investment in NVMe drives the justice it deserves. AMD RAID 0 over 4 of these drives can achieve in the region of 4GB/s write speed, whereas similar online tests have reported BTRFS maxing out at around 1.5GB/s. Obviously the proof is in the pudding, but I'd rather not invest a bunch of time setting something up just to confirm my disappointment. I even looked into doing dual-boot Windows/unRaid, so I could get all the performance out of the hardware when on Windows and benefit from unRaid when I needed those features, but then I lose the ability to copy from the Windows machine to unRaid, which is a massive requirement for me. So there I am, pulling my hair out, not knowing which direction to take.
A final approach I've looked into is to use the ZFS plugin to create a striped ZFS pool from the 4x NVMe drives to host my VM, in the hope that ZFS performance will be better than BTRFS. I'd then use the 5th NVMe drive as a dedicated cache (XFS formatted) so I can fast-copy from the Windows VM to the underlying unRaid. But am I looking at this right: is ZFS a good choice for the striped RAID to host the VM, and is it faster than BTRFS? If configured in a striped ZFS RAID, will the passthrough issue with my specific NVMe drives be mitigated, as the VM is effectively using an existing volume? Also, if I'm not going to use AMD RAID (because I'm using ZFS instead), can I go back to a normal (non-UEFI) BIOS and still host Windows VMs? Is the UEFI requirement of Windows 10 dealt with by unRaid's virtual BIOS settings in the VM?
I've talked about a lot of stuff up there, and I'm most likely misinformed on most of it, so please don't flame me if I've offended anyone who prefers a certain technology over another. I'm just trying to make sense of what my options are and understand the best way to extract maximum performance out of a considerable investment in (what should be) really fast hardware. Thanks in advance for any useful info you can provide.
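Rather than relying on other people's numbers, a quick fio run against each candidate pool would give a like-for-like comparison before committing; this is only a sketch, assuming fio is installed and the pool under test is mounted at /mnt/cache (path and file size are placeholders):
    fio --name=seqwrite --directory=/mnt/cache --rw=write --bs=1M --size=20G \
        --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 --group_reporting
Running the same job on the BTRFS pool and the ZFS pool would show whether the reported 1.5GB/s ceiling actually applies to this hardware.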
  17. Fair point, guess that's not going to work then. Back to unRaid and VM approach then. Thanks for your comments.
  18. Ok, just looking for suggestions here. Basically, given all the problems I'm having with UEFI boot not launching the unRaid GUI, coupled with the fact that there seems to be a problem with certain NVMe drive controllers and hardware passthrough (those found on the Adata XPG SX8200 Pro NVMe, of which I have 5!), I'm toying with doing a dual boot (unRaid and Windows 10). This would let me get the most out of my hardware under Windows (AMD RAID etc.) as a workstation when required, but the default boot would be into unRaid so it could act as a server on demand (via WOL). However, the one main issue I see with this approach is: the 'workstation' element would be my main media-ripping/encoding machine, but the unRaid element (with its own cache and array drives) would be where I'd ultimately want to store that media. My question is, when unRaid isn't running, how do I move my files from the Windows instance onto the unRaid cache so that they can be 'moved' when unRaid next boots? I did wonder whether the unRaid cache could be made visible from both boot modes, but obviously reading/writing to XFS from Windows isn't easy, and if this was possible, would it even work the way I'm thinking? Any suggestions would be great, thanks.
  19. Hi I believe I have all the legacy options enabled in BIOS (UEFI in this case) and it's not helping. I can go back to a non-UEFI setup if I absolutely have to but I'd really just prefer to solve the problem so that I can properly use the hardware as the manufacturer intended. Some people are successfully seeing the GUI under UEFI boot, so I'm sure it's something specific about my mobo, which I'm hoping the previously attached diagnostics file might help with. However, if no-one on the forum can help then I may need to pass it to support so they can give it some consideration.
  20. As requested, my diagnostics file. Hopefully this will give some clues as to why I can't get into the GUI under UEFI on the server itself (I can obviously get to the GUI from another machine, hence the diagnostics file). Finally, from some reading that I've done, it sounds like BTRFS RAID is quite slow (even RAID 0); some examples were showing 4x NVMe drives in a BTRFS RAID 0 performing considerably slower than the write speed of just one of the drives under Windows. This has now got me worried that using that approach for my cache/VM drives will waste the potential of my NVMe drives. Has anyone had any experience with BTRFS RAID 0 on unRaid, either positive or negative? (Sorry, this should probably be the basis of another post; this one should stay focussed on the GUI/UEFI issue.) unraidserver-diagnostics-20191118-2016.zip
  21. Thanks jonathanm. What I was trying to say is: given that the cache pool will be a RAID 0 and appear as a single volume with (hopefully) some performance benefit through striping, if I provide a portion of that volume to the VM, will the VM also be using (a portion of) all 4 drives and benefiting from the striping too? So, I think, by creating the vdisk image file on the RAID 0 cache pool, I'll have access to potentially 4TB in size, and any interaction with the vdisk will benefit from the slightly faster speeds provided by the underlying RAID 0. Thanks
  22. Thanks for coming back to me jonathanm, I have to admit that I thought that might be the case, given that even Windows requires additional drivers to make use of the RAID despite the initial setup in BIOS. I can see all 4 NVMe devices in the unRaid GUI, so I think the BTRFS option is the way to go. My questions with that approach would be: 1) If I create a RAID 0 array and use it to host a VM, will the striped array be presented to the VM as a single volume or as 4 devices that then need raiding in Windows? 2) What is the performance of BTRFS RAID like in comparison to traditional hardware RAID solutions? I'm all about speed here, not bothered about resilience, as I'll only delete stuff being copied to the unRaid box once it's complete and moved, and I also plan to back up the VM to the main array. I'll try to get a diagnostics file when I'm next in front of the server, but I will say that if I disable UEFI in BIOS, the GUI loads just fine. My exact problem is mentioned in another thread, but the solution there was simply to turn off UEFI. If I can achieve what I want in terms of RAID on unRaid and the planned Windows VM via the BTRFS approach, then this may actually be an option, but it does seem odd that the GUI doesn't load with UEFI enabled?
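On the 'single volume or 4 devices' question: a multi-device BTRFS cache pool in unRaid mounts as one filesystem (normally /mnt/cache), so a VM only ever sees a vdisk file sitting on it; the data profile can be confirmed from the console (the path below is an assumption):
    btrfs filesystem df /mnt/cache     # shows whether data is RAID0, single, etc.
    btrfs filesystem show /mnt/cache   # lists the member devices and usage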
  23. Hi, I've just put together a new, fairly high-end system with the intention of having a computer that can provide a good workstation experience when required and a good media server when the workstation is not in use. Therefore I bought into the unRaid ideal of running the unRaid OS and hosting a Windows VM on top of it for when it was required. However, I have hit my first problem. My hardware is as follows:
CPU: AMD Threadripper 2950x (16 core)
RAM: 32GB Team Group Pro Dark 3200MHz
Mobo: MSI MEG Creation X399
GPU: Gigabyte GTX 960
HBA: LSI 9305-24i (direct support for 24 HDDs)
PCIe storage: MSI XPander 2 PCIe card with 4x Adata XPG SX8200 Pro 1TB NVMe drives
NIC: onboard dual Gigabit LAN + HP dual-port 10GbE PCIe card (SFP+)
Case: 4U rack server case with 24 hot-swap bays and a 6Gb/s backplane for my 24 WD Red (4TB) spinners (main media storage)
PSU: Corsair HX1200i
My intention is to use the 4x 1TB NVMe drives in a striped RAID (using AMD's built-in RAID capabilities) as an unRaid write cache and also as the drive that hosts my VM. Now to the fun part. I've no idea how unRaid will see the RAID volume that you create at a BIOS level, as I understand it then needs additional AMD software to put it all together in Windows. But that's a problem for later. My problem right now is that, to access the RAID capabilities that Threadripper provides, you must use a UEFI-enabled BIOS. However, when I have UEFI enabled, unRaid will not boot into the UI on the host machine. Instead it starts the server successfully but sits with a blinking cursor on the screen rather than the usual login prompt. I know the server is running because I can access the web GUI from another machine, just not on the host. I've seen other people with this issue, but they've resolved it by disabling UEFI, which I don't see as an option because it is required for the NVMe RAID functions (I'm happy to be told otherwise if this is not the case). To test this I'm using the latest stable build on an evaluation license (I have 4 unRaid licenses on other servers but am reluctant to use one of these until I'm further towards a working solution). Any help would be much appreciated as I'm really keen to see unRaid in action on this new build. Thanks in advance
  24. Brilliant, thanks for all of the help. I'll get on with upgrading my other two servers now! Cheers, Adrian
  25. Well that certainly seems faster, not sure by how much as I can’t interpret the log file very well. mediaserver1-diagnostics-20180109-1448.zip