Posts posted by sonic_reaction

  1. 12 minutes ago, JorgeB said:

    Not sure I follow, I asked to test transferring directly to the array with turbo write enabled.

    Sorry, when you said "try writing to one or both and see if performance is better", I assumed you meant the individual disks, which is why I explained that I can't write to the individual disks, since they are configured as a pool.

     

    I have tried again writing via SMB with turbo write enabled and I still get around 70 MB/s.
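    For anyone following along, a minimal sketch of how turbo write can be toggled and verified from the console before re-running the test; this assumes Unraid's mdcmd helper and its md_write_method tunable, and the share path is only an example:

    # Enable turbo write (reconstruct write): 0 = read/modify/write,
    # 1 = reconstruct write, which is faster when all disks are quick.
    /usr/local/sbin/mdcmd set md_write_method 1

    # Confirm the driver picked the setting up.
    /usr/local/sbin/mdcmd status | grep md_write_method

    # Re-test a write to an array share (path is an example).
    dd if=/dev/zero of=/mnt/user/array-share/test.img bs=1M count=1024 oflag=direct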

  2. 20 hours ago, JorgeB said:

    You have an NVMe-based pool, and you also have fast enough disks that they should write at 100 MB/s+ with turbo write enabled; try writing to one or both and see if performance is better.

     

    Because it's a pool I only have the one mount point, so I have to write to both drives. Since switching to RAID 0, if I dd to the pool the speed is great, at around 600 MB/s:

     

    root@Cobra:~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.56801 s, 685 MB/s

     

    I think it must be something to do with SMB, as the drive speeds are fine and I have tested network throughput with iperf, which was fine too.
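    One caveat worth flagging: a dd of a single 1 GiB block with oflag=dsync measures something quite different from a sustained SMB copy. A sketch of a variant that is closer to a real transfer, assuming the same share path; as noted further down the thread these are QLC drives, so a longer write like this may also slow down once any SLC cache fills:

    # Write 4 GiB in 1 MiB blocks, bypassing the page cache, for a
    # figure closer to sustained throughput than one huge block.
    dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1M count=4096 oflag=direct status=progress

    # Remove the test file afterwards.
    rm /mnt/user/Games-Windows/test.img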

  3. 14 hours ago, JorgeB said:

    Post the results of a single stream iperf test in both directions.

     

    Looks fine to me for 1 Gbps?

     

    Desktop to Unraid server

     

    C:\iperf3>iperf3.exe -c 10.5.0.5
    Connecting to host 10.5.0.5, port 5201
    [  4] local 10.5.0.128 port 57907 connected to 10.5.0.5 port 5201
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-1.00   sec   113 MBytes   951 Mbits/sec
    [  4]   1.00-2.00   sec   113 MBytes   949 Mbits/sec
    [  4]   2.00-3.00   sec   113 MBytes   949 Mbits/sec
    [  4]   3.00-4.00   sec   113 MBytes   948 Mbits/sec
    [  4]   4.00-5.00   sec   113 MBytes   949 Mbits/sec
    [  4]   5.00-6.00   sec   113 MBytes   949 Mbits/sec
    [  4]   6.00-7.00   sec   113 MBytes   949 Mbits/sec
    [  4]   7.00-8.00   sec   113 MBytes   948 Mbits/sec
    [  4]   8.00-9.00   sec   113 MBytes   949 Mbits/sec
    [  4]   9.00-10.00  sec   113 MBytes   949 Mbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec                  sender
    [  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec                  receiver
    
    iperf Done.

     

    Unraid server to desktop
     

    root@Cobra:/# iperf3 -c 10.5.0.128
    Connecting to host 10.5.0.128, port 5201
    [  5] local 10.5.0.5 port 57730 connected to 10.5.0.128 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec  1214    259 KBytes       
    [  5]   1.00-2.00   sec   113 MBytes   950 Mbits/sec   12    254 KBytes       
    [  5]   2.00-3.00   sec  89.1 MBytes   748 Mbits/sec  328    259 KBytes       
    [  5]   3.00-4.00   sec   113 MBytes   950 Mbits/sec   18    254 KBytes       
    [  5]   4.00-5.00   sec   113 MBytes   949 Mbits/sec   12    257 KBytes       
    [  5]   5.00-6.00   sec   113 MBytes   949 Mbits/sec   19    257 KBytes       
    [  5]   6.00-7.00   sec   113 MBytes   949 Mbits/sec    6    257 KBytes       
    [  5]   7.00-8.00   sec   113 MBytes   949 Mbits/sec    6    259 KBytes       
    [  5]   8.00-9.00   sec   113 MBytes   949 Mbits/sec    0    257 KBytes       
    [  5]   9.00-10.00  sec   113 MBytes   949 Mbits/sec    0    254 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  1.08 GBytes   928 Mbits/sec  1615             sender
    [  5]   0.00-10.00  sec  1.08 GBytes   926 Mbits/sec                  receiver
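    The server-to-desktop run does show one dip (748 Mbits/sec with 328 retransmits in the 2-3 s interval). If that needs chasing, both directions can be driven from a single end with iperf3's reverse mode; a sketch, assuming the desktop at 10.5.0.128 is running the iperf3 server:

    # Single stream, Unraid -> desktop (client sends).
    iperf3 -c 10.5.0.128

    # Same hosts, but -R makes the server send and the client
    # receive, testing the opposite direction without switching ends.
    iperf3 -c 10.5.0.128 -R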

     

  4. On 9/28/2023 at 9:18 AM, JorgeB said:

    Assuming you have gigabit, read speeds are normal but writes are low; try transferring an actual file, a large one, using Windows Explorer. Also note that those SSDs are QLC, so write speed will slow down a lot after the small SLC cache is full, to around 80 MB/s IIRC.

     

    I know they are not the best, but being SSDs I would still expect them to perform better than 30-40 MB/s writes...

     

    I've tried copying a 10 GB file and I get around the same speeds.

     

    On 9/28/2023 at 9:28 AM, itimpi said:

    If you are using an Unraid 6.12.x release, have you enabled the Exclusive Share option for that share to bypass the overheads of the FUSE layer?

     

    I have enabled this but it hasn't improved the speeds, unfortunately.
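    One way to confirm whether the FUSE layer matters here is to write the same test file through the user-share path and through the pool's direct mount, then compare; a sketch, where the direct pool mount name is an assumption and must match the pool's actual name:

    # Through the FUSE user-share layer.
    dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1M count=1024 oflag=direct

    # Directly on the pool, bypassing FUSE (pool name is an example).
    dd if=/dev/zero of=/mnt/games-pool/Games-Windows/test.img bs=1M count=1024 oflag=direct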

  5. Hi,

     

    I've created a separate pool of two 2 TB SSDs to store my Steam library on, using the default BTRFS file system.

     

    drive-pool.jpg

     

    However, after making sure the share is only using this cache pool and not the main array, it has really slow speeds of around 36 MB/s write and 100 MB/s read.

     

    lan-speed-test.jpg 

     

    I've checked my SMB settings and enabled multichannel, but there has been no improvement.

     

    I'm fairly certain the problem is SMB, as running dd shows write speeds of 186 MB/s:

     

    ~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.77645 s, 186 MB/s

     

    Is there anything I can do to improve these speeds over SMB? 
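    In case it helps anyone reading later: enabling multichannel in the Unraid GUI should end up as a Samba option, and it is worth verifying Samba actually loaded it; a hedged sketch, assuming the option was added via Settings > SMB (which writes to /boot/config/smb-extra.conf):

    # The stanza that enables multichannel on the Samba side is:
    #   server multi channel support = yes
    # Check that the running configuration actually contains it.
    testparm -s 2>/dev/null | grep -i "multi channel"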

     

  6. On 12/19/2020 at 8:13 PM, Imperioous said:

    Well, I found a quick fix for the broken easy-rsa download, if anyone wants to use it. It worked for my system, but I don't know if it will work for yours (although it should).

     

    1) Go to the scripts directory in your plugin installation directory; it should be located at:

     /usr/local/emhttp/plugins/openvpnserver/scripts

     

    2) Use nano (or your favorite text editor) to edit the rc.openvpnserver script.

     

    3) Using your text editor, scroll down until you hit a function called "openvpnserver_get_easy". In this function you will want to make the following changes:

    Delete or comment out the line:

    wget $EASYRSA_DL_VERSION

    This command is causing problems, since an earlier curl does not work properly. Replace it with this command:

    wget "https://github.com/OpenVPN/easy-rsa/archive/master.zip"

    And that's it!

     

    After you save the script, you can run the following command to download easy-rsa, or you can click the button in the GUI. Your choice:

    ./rc.openvpnserver download_easy-rsa

     

     

    It is worth noting that this is a more or less patchwork solution; while it works now, I do not know how long it will remain that way. If you have any suggestions or questions, leave a response.

     

    Good luck!

     

    This worked great for me. It looks like the sed used to extract the file name no longer works in the script:

     

    curl  --fail --silent https://github.com/OpenVPN/easy-rsa/ | grep zip | grep archive |  cut -d\" -f16 | cut -d\" -f1 | sed 's#^#https://github.com#g' | grep -v "sig"
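    For what it's worth, scraping the releases page and cutting on quote characters breaks whenever GitHub changes its markup. A sketch of a more robust lookup via the GitHub releases API; the endpoint is standard, but treat this as an untested suggestion rather than the plugin's own method:

    # Ask the GitHub API for the latest easy-rsa release and pull out
    # the .tgz download URL instead of scraping the HTML page.
    curl --fail --silent https://api.github.com/repos/OpenVPN/easy-rsa/releases/latest \
      | grep '"browser_download_url"' \
      | grep '\.tgz"' \
      | cut -d'"' -f4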

     

    Is this plugin still being maintained?

     

     

  7. I'm getting a strange issue now and again where my Docker containers all stop. When I try to start one I get the following errors:

    time="2020-10-13T21:34:05.702852526+01:00" level=error msg="b8a40a15e23d642d30d9ffb729a12bc7071b8f73860a15d8ac5364887135eba3 cleanup: failed to delete container from containerd: no such container"
    time="2020-10-13T21:34:05.702884787+01:00" level=error msg="Handler for POST /v1.37/containers/b8a40a15e23d/start returned error: error while creating mount source path '/mnt/user/appdata/ApacheGuacamole': mkdir /mnt/user: file exists"

     

    I have the Appdata Backup plugin installed, which stops the containers every night, backs them up, and then starts them again. My parity check also started last night, so I think maybe it's related.

     

    I have attached my diagnostics file. When I reboot, the containers seem to start fine again.

    cobra-diagnostics-20201013-2135.zip
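    That "mkdir /mnt/user: file exists" error often means the user-share FUSE mount has dropped out from under Docker, leaving a plain directory behind. A few hedged commands to check the state of /mnt/user the next time it happens, before rebooting:

    # Is /mnt/user still an active mount point, or just a leftover directory?
    mountpoint /mnt/user

    # Is the shfs user-share process still running?
    ps aux | grep [s]hfs

    # Can the path actually be listed, or does it error out?
    ls /mnt/user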

  8. I don't know if anyone else is having this issue. I'm running a Windows 10 VM, version 2004 build 19041.488, on an NVMe drive which I boot using the Clover image. I cannot install update KB4571756 at all; it fails with 0x800f0922. I've had a Google and done all the recommended stuff, like installing .NET and enabling the App Readiness service, and nothing is working. I don't know if it might be to do with me running it via the Clover boot image, because I have my NVMe drive passed through to the VM?

  9. Hi,

     

    I'm having some issues with my 5700 XT. I've been trying to update the GPU drivers: I get to the first screen, then the VM locks up. I've checked the logs and they have these lines:

     

    2020-07-04T20:19:15.321825Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
    2020-07-04T20:19:15.551449Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3

     

    Does anyone know why this is happening?

     

    cobra-diagnostics-20200704-2144.zip
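    Not a diagnosis, but the "stuck in D3" message is commonly associated with the Navi reset bug on 5700-series cards, where the GPU cannot be power-cycled cleanly between VM runs. A hedged sketch of the usual remove-and-rescan workaround from the console; the PCI address below is a placeholder and must match the actual GPU:

    # Remove the wedged GPU from the PCI tree (address is a placeholder).
    echo 1 > /sys/bus/pci/devices/0000:0b:00.0/remove

    # Rescan the bus so the card re-enumerates in a fresh power state.
    echo 1 > /sys/bus/pci/rescan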
