Trynn

Members
  • Posts: 24
  • Joined
  • Last visited




  1. Switched from virtio-win-0.1.126_stable to virtio-win-0.1.185.iso and installed it with the now-included "virtio-win-gt-x64.msi" installer. Network throughput improved to VM-to-Host: 9.x GBit/s and Host-to-VM: 7.x GBit/s, so the issue was solved by the driver update, and with a really good improvement on top. Solved for me.
  2. Thank you very much. <3 That should pretty much prove that I'm on the right track and this is not normal behaviour.
  3. Could anyone test those iperf benchmarks on their own machine? I couldn't find any reverse-test results so far, and it would be good to know whether the speeds should be faster.
  4. Hi guys, I think I need a little help to trace my current network issue further.

     Details
     Unraid OS version: 6.8.3
     Network settings on the Unraid host:
     - Bonding: no
     - Bridging: yes (bridge member: eth0)
     - IPv4 only; all IPs/gateways are static; local DNS server on the router
     - MTU: 1500
     - VLANs: no

     Where I'm coming from: in summary, I host a database as a Docker container on my Unraid machine, with its storage on the NVMe cache drive. I have a set of operations that should take around 20 seconds to execute. Accessing this container from a Win10 VM running on the same Unraid host, they took about 15 minutes. Accessing the same database from an external workstation takes the expected 20 seconds. So from my point of view, it is not:
     - the storage / hardware resources
     - the Docker container
     - the database system

     After some searching, I think it is network-related, so I tested the connection speeds with iPerf.
     Unraid host: 192.168.1.10
     VM: 192.168.1.200

     .\iperf3.exe -c 192.168.1.10
     Connecting to host 192.168.1.10, port 5201
     [  4] local 192.168.1.200 port 60584 connected to 192.168.1.10 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-1.00   sec   433 MBytes  3.63 Gbits/sec
     [  4]   1.00-2.00   sec   422 MBytes  3.54 Gbits/sec
     [  4]   2.00-3.00   sec   444 MBytes  3.72 Gbits/sec
     [  4]   3.00-4.00   sec   436 MBytes  3.66 Gbits/sec
     [  4]   4.00-5.00   sec   426 MBytes  3.57 Gbits/sec
     [  4]   5.00-6.00   sec   440 MBytes  3.69 Gbits/sec
     [  4]   6.00-7.00   sec   439 MBytes  3.68 Gbits/sec
     [  4]   7.00-8.00   sec   445 MBytes  3.74 Gbits/sec
     [  4]   8.00-9.00   sec   428 MBytes  3.59 Gbits/sec
     [  4]   9.00-10.00  sec   449 MBytes  3.76 Gbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-10.00  sec  4.26 GBytes  3.66 Gbits/sec  sender
     [  4]   0.00-10.00  sec  4.26 GBytes  3.66 Gbits/sec  receiver
     iperf Done.

     .\iperf3.exe -c 192.168.1.10 -R
     Connecting to host 192.168.1.10, port 5201
     Reverse mode, remote host 192.168.1.10 is sending
     [  4] local 192.168.1.200 port 60587 connected to 192.168.1.10 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-1.00   sec  22.8 KBytes   187 Kbits/sec
     [  4]   1.00-2.00   sec  25.7 KBytes   210 Kbits/sec
     [  4]   2.00-3.00   sec  20.0 KBytes   164 Kbits/sec
     [  4]   3.00-4.00   sec  21.4 KBytes   175 Kbits/sec
     [  4]   4.00-5.00   sec  20.0 KBytes   164 Kbits/sec
     [  4]   5.00-6.00   sec  20.0 KBytes   164 Kbits/sec
     [  4]   6.00-7.00   sec  21.4 KBytes   175 Kbits/sec
     [  4]   7.00-8.00   sec  20.0 KBytes   163 Kbits/sec
     [  4]   8.00-9.00   sec  20.0 KBytes   164 Kbits/sec
     [  4]   9.00-10.00  sec  20.0 KBytes   164 Kbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bandwidth       Retr
     [  4]   0.00-10.00  sec   328 KBytes   269 Kbits/sec  50    sender
     [  4]   0.00-10.00  sec   211 KBytes   173 Kbits/sec        receiver
     iperf Done.

     As you can see, the network speed from VM => Host is totally fine at 3.66 Gbit/s, but the reverse connection, Host => VM, is utterly garbage. I'm stuck now and out of ideas on how to trace this further or what the issue could be. Any advice is warmly welcome.

     PS: Just let me know if you need more information; I'll provide whatever I can.
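For reference, a few additional standard iperf3 invocations can help narrow down an asymmetric result like the one above (a sketch only, run from the Windows VM against the same host; these are stock iperf3 flags, not a fix):

```shell
# Sketch: extra iperf3 runs against the Unraid host from the post above
# (192.168.1.10). Assumes iperf3 -s is already running on the host.
.\iperf3.exe -c 192.168.1.10 -R -P 4      # reverse mode with 4 parallel TCP streams
.\iperf3.exe -c 192.168.1.10 -R -u -b 1G  # reverse UDP at 1 Gbit/s, bypasses TCP retransmit behaviour
.\iperf3.exe -c 192.168.1.10 -R -w 1M     # reverse with a larger TCP window
```

If UDP is fast while TCP stays slow, that points at retransmits/offload problems (note the 50 retries in the summary above) rather than raw link capacity.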
  5. You nailed it, thanks! I did a manual TRIM first with: fstrim -v /mnt/cache. Transfer speed is now steady at max network speed. I just installed the Dynamix TRIM plugin as advised (https://lime-technology.com/forum/index.php?topic=36543.0), but I'm not sure how to schedule it to run daily; I can't find any options in the web GUI :-)
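For the scheduling part, a minimal sketch of what a daily trim entry looks like (assumptions: a standard cron daemon is available and fstrim lives at /sbin/fstrim; the Dynamix TRIM plugin normally installs an equivalent entry for you once its schedule is configured in the GUI):

```shell
#!/bin/sh
# Minimal sketch: build a daily 03:00 cron entry that trims a mount point.
# /sbin/fstrim path and cron availability are assumptions, not Unraid specifics.
trim_cron_line() {
  mountpoint="$1"
  printf '0 3 * * * /sbin/fstrim -v %s\n' "$mountpoint"
}

# Print the entry for the cache pool; on the server you might append it
# to a cron file, e.g.:  trim_cron_line /mnt/cache >> /etc/cron.d/fstrim
trim_cron_line /mnt/cache
```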
  6. Hi, currently I'm not pleased with the performance I get from my Unraid server when writing to the SSD cache drive.

     Unraid: 6.2
     SSD: Samsung 840 Pro, 256GB
     Disk usage: 34% (1 VM)
     Format: btrfs
     Network: 1 Gbit/s

     A normal file transfer to an SMB share currently starts at maximum speed (limited by the network) and drops to 20-50 MB/s after ~600-1000 MB of data.

     Model family:     Samsung based SSDs
     Device model:     Samsung SSD 840 PRO Series
     Serial number:    S1ATNEAD560077H
     LU WWN device id: 5 002538 55033e322
     Firmware version: DXM05B0Q
     User capacity:    256,060,514,304 bytes [256 GB]
     Sector size:      512 bytes logical/physical
     Rotation rate:    Solid State Device
     Device:           In smartctl database [for details use: -P show]
     ATA version:      ACS-2, ATA8-ACS T13/1699-D revision 4c
     SATA version:     SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
     Local time:       Fri Oct 28 11:34:55 2016 CEST
     SMART support:    Available - device has SMART capability.
     SMART support:    Enabled
     SMART overall-health: Passed

     SMART seems to be fine:

     ID# Attribute                  Flags   Value Worst Thresh Type     Updated Failed Raw
       5 Reallocated sector count   0x0033  100   100   010    Pre-fail Always  Never  0
       9 Power on hours             0x0032  098   098   000    Old age  Always  Never  5542 (7m, 16d, 22h)
      12 Power cycle count          0x0032  097   097   000    Old age  Always  Never  2302
     177 Wear leveling count        0x0013  096   096   000    Pre-fail Always  Never  141
     179 Used rsvd block count tot  0x0013  100   100   010    Pre-fail Always  Never  0
     181 Program fail count total   0x0032  100   100   010    Old age  Always  Never  0
     182 Erase fail count total     0x0032  100   100   010    Old age  Always  Never  0
     183 Runtime bad block          0x0013  100   100   010    Pre-fail Always  Never  0
     187 Uncorrectable error count  0x0032  100   100   000    Old age  Always  Never  0
     190 Airflow temperature cel    0x0032  067   055   000    Old age  Always  Never  33
     195 ECC error rate             0x001a  200   200   000    Old age  Always  Never  0
     199 CRC error count            0x003e  100   100   000    Old age  Always  Never  0
     235 POR recovery count         0x0012  099   099   000    Old age  Always  Never  224
     241 Total lbas written         0x0032  099   099   000    Old age  Always  Never  25772364270

     And some disk log information from the last reboot:

     Oct 27 20:01:13 UNRAID kernel: ata6: SATA max UDMA/133 abar m2048@0xdf12c000 port 0xdf12c380 irq 125
     Oct 27 20:01:13 UNRAID kernel: ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Oct 27 20:01:13 UNRAID kernel: ata6.00: READ LOG DMA EXT failed, trying unqueued
     Oct 27 20:01:13 UNRAID kernel: ata6.00: failed to get NCQ Send/Recv Log Emask 0x1
     Oct 27 20:01:13 UNRAID kernel: ata6.00: ATA-9: Samsung SSD 840 PRO Series, S1ATNEAD560077H, DXM05B0Q, max UDMA/133
     Oct 27 20:01:13 UNRAID kernel: ata6.00: 500118192 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
     Oct 27 20:01:13 UNRAID kernel: ata6.00: failed to get NCQ Send/Recv Log Emask 0x1
     Oct 27 20:01:13 UNRAID kernel: ata6.00: configured for UDMA/133
     Oct 27 20:01:13 UNRAID kernel: ata6.00: Enabling discard_zeroes_data
     Oct 27 20:01:13 UNRAID kernel: sd 6:0:0:0: [sdg] 500118192 512-byte logical blocks: (256 GB/238 GiB)
     Oct 27 20:01:13 UNRAID kernel: sd 6:0:0:0: [sdg] Write Protect is off
     Oct 27 20:01:13 UNRAID kernel: sd 6:0:0:0: [sdg] Mode Sense: 00 3a 00 00
     Oct 27 20:01:13 UNRAID kernel: sd 6:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Oct 27 20:01:13 UNRAID kernel: ata6.00: Enabling discard_zeroes_data
     Oct 27 20:01:13 UNRAID kernel: sdg: sdg1
     Oct 27 20:01:13 UNRAID kernel: ata6.00: Enabling discard_zeroes_data
     Oct 27 20:01:13 UNRAID kernel: sd 6:0:0:0: [sdg] Attached SCSI disk
     Oct 27 20:01:13 UNRAID kernel: BTRFS: device fsid a1ce48b5-34ee-4745-8a40-67ca67d04a29 devid 1 transid 32809 /dev/sdg1
     Oct 27 20:01:20 UNRAID emhttp: Samsung_SSD_840_PRO_Series_S1ATNEAD560077H (sdg) 250059064
     Oct 27 20:01:20 UNRAID emhttp: import 30 cache device: sdg
     Oct 27 20:01:22 UNRAID kernel: BTRFS info (device sdg1): disk space caching is enabled

     I'm not sure whether the device itself can't handle the speed. Any help analysing the problem is appreciated.

     Regards, Trynn
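A quick first check for this kind of SSD write slowdown is whether the device exposes discard (TRIM) support, which the manual fstrim fix in post 5 relies on. A sketch, assuming the cache device is /dev/sdg as in the kernel log above (requires root):

```shell
# Sketch: verify TRIM/discard support for the cache SSD (/dev/sdg per the
# log above) and trim the mounted btrfs cache manually. Paths are assumptions.
lsblk --discard /dev/sdg   # nonzero DISC-GRAN / DISC-MAX => discard is supported
fstrim -v /mnt/cache       # manually trim the mounted cache filesystem
```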
  7. Hey, for comparison, here is my own disk config (screenshot). But please note, I'm also a newbie on Unraid. I'm not sure why you don't use your SSD as a cache drive; you only use 80GB of the current drive. I also have no idea how you used/mounted the unassigned disk :-) What I would do: back up all data on the SSD and the current cache drive; use any clone tool to clone your cache drive over to the SSD (outside Unraid); then assign the SSD as your cache drive. After that you CAN (if you want) add your 2TB drive as additional storage space. For the VM: create a normal user share (private, "Use cache only"), and this share will store everything on the SSD. In the VM config, use this user share as the default VM storage path.
  8. One additional question: when I rebuild a disk (I just recently replaced one), my system runs at 140-150 MB/s. But when I move files between disks with MC, I merely get 30-40 MB/s overall; monitoring shows it sometimes reaches 80+ MB/s, but only for short periods. Any idea where my bottleneck could be? Hardware: MSI C236M Workstation (7972-018R), Intel Core i3-6100, 8GB DDR4-2133 ECC, 1x Samsung 840 Pro cache drive, 5x WD Red 3TB.
  9. Used mc on the machine; worked out great. Thanks, and thanks trurl for the important note.
  10. Hey, I made the mistake of configuring my user shares after data was already on them. Now I've read this guide: https://lime-technology.com/setting-up-your-file-structure-and-user-shares-on-unraid/ What is the best way to move existing files over to the assigned disks? For example, I have /movies/ on disks 1/2/3/4, and now I've set /movies/ to use only disk1. How do I move the subfolders from disks 2/3/4 to disk1? It seems like a pretty obvious, normal task, but I wasn't able to figure out the easiest way to accomplish it. Thanks in advance.
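One way to do this consolidation from the console, sketched under the assumption that disks 2-4 hold the folders to merge; this is not an official Unraid procedure (rsync --remove-source-files is a common alternative). When moving, always stay on the disk shares (/mnt/diskN) and never mix /mnt/user and /mnt/diskN paths in one copy, or files can be lost:

```shell
#!/bin/sh
# Hypothetical sketch: merge a share's folders from disks 2-4 onto disk1,
# working only on the disk shares under the given base (e.g. /mnt).
# cp -a then rm keeps it dependency-free; rsync -av --remove-source-files
# is a safer choice if you prefer it.
consolidate_share() {
  base="$1"   # e.g. /mnt
  share="$2"  # e.g. movies
  mkdir -p "$base/disk1/$share"
  for d in 2 3 4; do
    src="$base/disk$d/$share"
    [ -d "$src" ] || continue                    # skip disks without this share
    cp -a "$src/." "$base/disk1/$share/" \
      && rm -rf "$src"                           # delete source only if copy succeeded
  done
}

# On the server you would run:
# consolidate_share /mnt movies
```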
  11. OK, running stable on USB2 now. The disconnection issues seem to be related to the USB3 port. The other behaviour is just a follow-up, because Unraid can't handle the lost configs - at least I think so.
  12. I'm on USB2 now; let's see what happens today. Another effect I noticed last night: even though I set my appdata share to CACHE ONLY and manually moved all appdata files to the cache disk (twice already), the system (Plex Media Server, the only app) still recreates the appdata folder on Disk1 and writes to it. WTF? I could imagine this also happens when Unraid loses its config, so it stops knowing about the cache-only setting?