Posts posted by mmmeee15

  1. When transferring files over SMB from one Unraid machine to another on the same network, speeds differ between MC (which achieves full gigabit) and a Windows copy in a VM on one of the Unraid servers (around 20MB/s). 

    It does not matter whether I copy from/to the array, cache, or an unassigned disk; the speed is always the same: MC gives gigabit, the VM gives 20MB/s. 

  2. A bit of a long story, but hopefully it's just an easy fix that I'm missing. 

    I have 2 Unraid servers; let's call them A and B. A is mostly for VMs, while B is mostly for storage. I have a Windows VM on A that I use to move things around, and as such it has quite a few shares mapped the traditional Windows way (map drive, \\192.168.1.A\share or \\192.168.1.B\share). The shares map just fine, with no real issues disconnecting or anything. The issue is that the speed when moving/copying files using the VM is not what it should be: it hovers around 60MB/s (direction does not matter; the problem persists A to B or B to A). It is not an issue with the array/cache arrangement, as you will see below. 

    I have tried numerous things to test this, and I think the clearest example is between 2 Unassigned SSDs, which for the purposes of this test were not being used for anything else. To be extra clear, I even put a different NIC in each server to make sure nothing was bottlenecking the transfers. Using MC to transfer directly between the SSDs, the transfer maxes out the gigabit link, as it should. Using the VM to transfer the same file between the same disks, it's the same 60MB/s. 

    When using a different computer to do the transfer, there is no issue: all transfers go full gigabit, regardless of cache, array, etc.

    To make things stranger still, a VM on server B copying the same file to the same disks has no issue; it works just fine. A different VM on server A has the same issue and hovers around 60MB/s. And here's the best part: if I copy the VM from server B to server A, copy the settings so they're all the same, and test again, the issue is still there.

    So it must be something to do with server A? I did recently change server A from a custom server to an HP Gen9. I don't recall this being an issue before then, but I may just not have been paying attention. 

    Any ideas would be very helpful. 
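
    For reference, a bare TCP throughput test between the two boxes would take SMB, the VM's copy path, and the disks out of the picture entirely. Below is a minimal sketch in Python (the port number and the 1 GiB transfer size are arbitrary choices, nothing Unraid-specific): run it in server mode on one machine and in client mode on the other, or from inside the VM if Python is available there. Roughly 110 MB/s in both directions means the wire and the virtual NIC are fine and the bottleneck is somewhere in the SMB copy path.

        #!/usr/bin/env python3
        """Bare TCP throughput test: isolates raw network speed from SMB."""
        import socket
        import sys
        import time

        PORT = 5001        # arbitrary unused port
        CHUNK = 1 << 20    # 1 MiB per send/recv
        TOTAL = 1 << 30    # client sends 1 GiB in total

        def server():
            # Accept one connection, discard the bytes, report the rate.
            with socket.create_server(("", PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    received, start = 0, time.time()
                    while True:
                        data = conn.recv(CHUNK)
                        if not data:
                            break
                        received += len(data)
                    print(f"{received / (time.time() - start) / 1e6:.0f} MB/s")

        def client(host):
            # Push TOTAL bytes of zeros as fast as the socket allows.
            payload = b"\0" * CHUNK
            with socket.create_connection((host, PORT)) as conn:
                start, sent = time.time(), 0
                while sent < TOTAL:
                    conn.sendall(payload)
                    sent += len(payload)
                print(f"{sent / (time.time() - start) / 1e6:.0f} MB/s")

        if __name__ == "__main__":
            server() if sys.argv[1:] == ["server"] else client(sys.argv[1])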

  3. 27 minutes ago, JorgeB said:

    Unlikely; AFAIK the only thing that changed is the device detection during boot. That looks more like a driver/initialization issue. I would recommend starting a new thread in the general support forum, and please don't forget the diagnostics.

    Will do. I'm getting a new cable shortly and will post if needed. 

    Thank you

  4. Could you provide more information on the Mellanox NIC bug fix? 

    I have 2 ConnectX-2 cards flashed to Ethernet mode (with the Unraid plugin). The cards were correctly detected and showed up in the plugin, but the link appeared down even though it was properly connected. Could this be related, or is my QSFP cable bad? There were also a few strange issues with the network configuration of the other NICs while the Mellanox card was plugged in.

    It wasn't a critical issue at the time, so I dropped the matter, but if it is fixed now, it would be great to have 10Gbit. 
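
    In the meantime, before blaming the cable, it may be worth checking what the kernel itself thinks of the port. A minimal sketch using standard Linux sysfs paths ("eth2" is a placeholder for whatever name the Mellanox port gets on the box):

        #!/usr/bin/env python3
        """Report kernel-level link state for one NIC via sysfs."""
        from pathlib import Path

        IFACE = "eth2"  # placeholder: substitute the Mellanox port's name
        base = Path("/sys/class/net") / IFACE

        def read(attr):
            try:
                return (base / attr).read_text().strip()
            except OSError:
                return "n/a (interface may be admin-down)"

        print("operstate:", read("operstate"))  # up / down
        print("carrier:  ", read("carrier"))    # 1 = link detected
        print("speed:    ", read("speed"), "Mb/s")
        print("driver:   ", (base / "device/driver").resolve().name)

    If carrier stays at 0 with the cable seated at both ends, a bad QSFP cable (or a port that silently fell back to IB mode) becomes the more likely suspect.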

  5. Hello,

     

    Hopefully this topic has not been repeated, but I am facing a very strange issue that affects only one of my VMs (Windows 10). 

     

    Setup is as follows:

    Ryzen 2700

    Asus B450 Plus (latest BIOS as of yesterday)

    Random Nvidia GPU for Unraid, not passed through to any VM

    LSI 9211-8i connected to the 8 HDDs (2 parity, the rest data)

    2 SATA SSDs connected through the motherboard SATA ports

     

    What happens is that, after what appears to be a random amount of time, one of my Windows 10 VMs appears to lose its connection to the drives when doing a transfer from the local machine (i.e. the one the VM runs on) to another machine. In practical terms, the transfer stops (drops to 0 KB/s or stays at whatever value it last had) but neither continues nor gives any errors. I have left it running to see if it times out, but that doesn't happen either. When I attempt to restart the VM, or to shut it down, it stays at the Restart or Shutdown screen with the spinning circle forever, forcing a hard restart of the server, which is the strange part. 

    The VM itself is still responsive, so it's not a Windows lockup, which leads me to believe it is still accessing the cache disk (where the VM is located) just fine. The other VMs do not have this problem and can be shut down without issue while the first VM is 'stuck'. So what is wrong with this particular VM that is causing the issue? Is it something known, or will I have to create another VM from scratch to see if that resolves things? 
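
    In case it helps anyone hitting the same wall: when a guest wedges like this, it may be possible to force off just the stuck VM through libvirt instead of hard-restarting the whole server. A minimal sketch using the libvirt Python bindings ("Windows10" is a placeholder for the VM's name as it appears in the VMs tab); the same thing can be done from the console with virsh destroy:

        #!/usr/bin/env python3
        """Force off a single wedged guest instead of rebooting the host."""
        import libvirt

        VM_NAME = "Windows10"  # placeholder: the stuck VM's libvirt name

        conn = libvirt.open("qemu:///system")  # local QEMU/KVM hypervisor
        dom = conn.lookupByName(VM_NAME)
        state, _ = dom.state()
        if state == libvirt.VIR_DOMAIN_RUNNING:
            dom.destroy()  # hard power-off of this guest only
        conn.close()

    Whether that works here depends on what exactly is wedged, of course; if even virsh destroy hangs, the problem is likely below libvirt.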

     

    If there is anything I can provide or explain further, please let me know. 

     

    Thank you

  6. Hello,

    Hopefully this is a silly issue that is easy to fix, but for the life of me, I cannot figure it out. 

    I have a fairly normal Unraid server (2 cache drives, 2 parity, 6 data disks) on the most current Unraid build (non-beta). 

    I have my mover, SSD TRIM, and parity check set up in the scheduler, but none of them actually run (I have to run them manually). 

    Initially I thought it was due to a VM running, but I have since stopped it, tried changing the times in the scheduler, rebooted, etc.; no luck. 

    The syslog shows nothing (i.e. at the time the mover should be running, nothing is listed in the syslog), and I have definitely confirmed that none of the three actually run from the scheduler. 
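
    As far as I understand it, the scheduler is just cron underneath, so one thing worth checking is whether the entries were ever written and whether crond is alive. A minimal sketch (the /etc/cron.d/root location is my assumption about where Unraid keeps these entries; adjust the path if yours differs):

        #!/usr/bin/env python3
        """Check that scheduler cron entries exist and crond is running."""
        import subprocess
        from pathlib import Path

        cron_file = Path("/etc/cron.d/root")  # assumed scheduler cron file
        if cron_file.exists():
            print(cron_file.read_text())  # should list mover/TRIM/parity jobs
        else:
            print("no cron file found -- scheduler entries were never written")

        # None of the jobs can fire if the cron daemon itself is not running.
        out = subprocess.run(["pgrep", "-a", "crond"],
                             capture_output=True, text=True).stdout
        print(out or "crond is not running")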

    Any help would be greatly appreciated, and let me know what other information I can provide. 

    Thank you

  7. Thank you for your reply. 

     

    Why are SSDs not recommended? I know they will technically suffer in the long term due to their limited write endurance, but besides that? I want to stay away from HDDs in my main rig, since they are rather loud (I have a bit of a problem with noise in my setup, haha), and I already have a large HDD server in another room. Basically I'm just looking for speed and silence in my main rig. 

     

    Yes, I would only ever have either Windows or Ubuntu running at any given time. 

     

    Sorry, I should have clarified that point: I meant that I would have 2 separate VMs, as you mentioned. 

     

    And yes, I was planning on using it for cache/VMs in some sort of split. 

     

    Also, I meant to ask: I have a Mellanox 40-gigabit NIC between my main rig and server, model number MHQH19B-XTR. It was a bit of a pain to get it set up in Windows, but I've heard it has native support in Linux environments. Since Unraid is technically Linux, it should work without any issues, right?
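
    Once Unraid is up, here is how I would plan to sanity-check the card: confirm the kernel sees it on the PCI bus and that a driver claimed it. A minimal sketch using standard sysfs paths (0x15b3 is Mellanox's PCI vendor ID; I would expect a ConnectX-2 generation card to bind to mlx4_core, though that is an assumption to verify):

        #!/usr/bin/env python3
        """List Mellanox PCI devices and the kernel driver bound to each."""
        from pathlib import Path

        MELLANOX = "0x15b3"  # Mellanox Technologies PCI vendor ID

        for dev in Path("/sys/bus/pci/devices").iterdir():
            if (dev / "vendor").read_text().strip() == MELLANOX:
                drv = dev / "driver"
                bound = drv.resolve().name if drv.exists() else "no driver bound"
                print(dev.name, (dev / "device").read_text().strip(), "->", bound)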

  8. Thanks for getting back to me. 

     

    So basically, to sum things up, it would be best to start fresh with everything, including Windows and all the respective drives?

     

    To give a few more details, my PCIe drive is currently the boot drive, with 3 more SSDs in JBOD. Clearly the PCIe drive will become the cache and OS drive since it's much faster, but is it currently possible to have an SSD array (I will likely set up just 1 disk for parity, or maybe just JBOD again, since everything is backed up to a different server regularly)?

     

    Regarding the dual boot, I know it is technically possible, but the "leading to corrupt data" part is an issue I clearly want to avoid. If the other drives (shares) do not have any OS-related folders, but are simply data storage for photos, video, games, etc., will this still be a problem?

  9. Hello everyone,

     

    I've read up a lot on Unraid, watched every video from LTT with you guys, and I'm ready to take the plunge. I have a few potential applications, but to start I would basically like to move my existing main desktop over to Unraid. 

    Specs that matter:

    Currently Windows-based

    6850K platform

    Intel 750 PCIe boot drive

    A few other SSD storage drives

     

    My main questions are:

    1. How easy will it be to get everything back up and running with my existing hardware, to the point where I had it previously?

    2. I want to create shares (I will put all the SSDs, minus the PCIe drive, into the parity-protected array if possible) to store all my various files, so will that require deleting everything off the drives?

    3. I plan on dual booting Ubuntu, so how will this affect things like file sharing, etc.?

    4. I am not planning on having both Windows and Ubuntu running at the same time, so it should be fine to pass the same GPU to both?

     

    I will try to provide any other important information.

     

    Thank you for your help