Yousty

Members

  • Posts: 65
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


Yousty's Achievements

Rookie (2/14)

3 Reputation

  1. I actually figured out how to do it on my own and it's super easy. Figured I'd post the solution here in case anyone else finds this thread by googling like I did. I'm assuming everyone is following the guide on OmerTu's GitHub and you're at step B.3. Just create a new docker container and add the following port and variables to it (a generic sketch of the container setup is included after this list).
  2. Did you ever get this figured out, sonofdbn? I would like to set it up as well but I'm not sure where to begin.
  3. Thank you!!! I updated my AMC input and output directories using storage as shown in the picture, and move now works!
  4. I'm hoping somebody can help. I have Filebot set up and working perfectly. I download files from my seedbox to a temp folder (/mnt/user/temp) and then have AMC rename and move the files to where they belong (/mnt/user/Movies or TV Shows), because I don't need the files in the temp directory after they're renamed/moved. However, I've noticed the move command requires reading and writing the entire file, which seems odd to me since it's staying on the cache drive (until Unraid's mover runs and moves the files to the array). Usually it's not a big deal, but a lot of the time I'm downloading 60GB files, and that's a lot of unneeded wear and tear on the disk, not to mention time consuming. I've gone into Midnight Commander and manually moved a file from /mnt/user/temp to /mnt/user/Movies and it was instant, so there must be something wrong in the Filebot docker permissions or setup that's causing it to read/write the entire file when it performs a move (see the move-vs-copy sketch after this list). I've attached pictures of my Filebot docker setup.
  5. Aaaand I solved it. Decided to make sure I had the latest NIC software installed on my Windows 10 source machine, and sure enough after installing that I am now transferring at 113MB/s to my Unraid server. Thank you everyone for helping me troubleshoot and leading me down the right path to fix the issue!
  6. Finally had some time to watch the video and run iperf and shockingly it is the network causing the slowdown.

     C:\iperf3>iperf3 -c 192.168.1.3
     Connecting to host 192.168.1.3, port 5201
     [  4] local 192.168.1.2 port 58770 connected to 192.168.1.3 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-1.00   sec  84.0 MBytes  704 Mbits/sec
     [  4]   1.00-2.00   sec  83.9 MBytes  704 Mbits/sec
     [  4]   2.00-3.00   sec  84.0 MBytes  705 Mbits/sec
     [  4]   3.00-4.00   sec  83.9 MBytes  704 Mbits/sec
     [  4]   4.00-5.00   sec  83.8 MBytes  703 Mbits/sec
     [  4]   5.00-6.00   sec  84.0 MBytes  704 Mbits/sec
     [  4]   6.00-7.00   sec  84.0 MBytes  704 Mbits/sec
     [  4]   7.00-8.00   sec  84.0 MBytes  704 Mbits/sec
     [  4]   8.00-9.00   sec  83.9 MBytes  704 Mbits/sec
     [  4]   9.00-10.00  sec  83.5 MBytes  700 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-10.00  sec   839 MBytes  704 Mbits/sec  sender
     [  4]   0.00-10.00  sec   839 MBytes  704 Mbits/sec  receiver
     iperf Done.

     It just makes no sense to me since literally nothing about my network has changed since switching from SATA to NVMe SSD.
  7. I transfer mostly video files, ranging from 1GB to 60GB, and they always max out at 84MB/s now. The screenshot shows a transfer I just did. As you can see, it hits 84MB/s right away and sits there, almost like there's a bottleneck somewhere. I am positive it's going to the cache drive; I monitored the cache drive temp in Unraid during the transfer and it stayed at 88°F the whole time. I have the SSD TRIM app installed and set to run every 4 hours. I'll watch that video and do the tests, but it's highly unlikely it's a network issue when I've been doing hardwired transfers to this server at 113MB/s for over 5 years now, and the ONLY thing that changed was switching out my SSD cache drive from a SATA one to an NVMe one.
  8. I have attached my Diagnostics report. Yes, I ran the DiskSpeed docker, but as far as I can tell it only benchmarks read speed, which it measured at 1,502MB/s for the whole test. I'm not terribly familiar with iperf, so I'm unsure what to run (there's a basic iperf3 example after this list), but I figured my network wasn't the issue since I always maxed out network speed with my previous SSD. nas-diagnostics-20200330-1001.zip
  9. I recently switched my cache drive from an older 250GB Crucial SSD to a 500GB Samsung 960 EVO NVMe SSD using a PCI-e to NVMe adapter. The weird thing is, even though the NVMe drive is faster, I'm seeing slower speeds from it, particularly when I transfer files to it over my gigabit hard-wired network. With the SATA SSD I could saturate my network every time, transferring at 113MB/s for both reads and writes. But with the NVMe SSD the fastest I can write to it over the network is 84MB/s, while I still get 113MB/s reads from it, so I know it's not the network. Here is the hardware I'm using:
     - Asrock 990FX Extreme9 w/ latest firmware - I'm using PCI-e slot #1, which is a PCI-e x16 slot
     - This PCIe to NVMe adapter - which is capable of 1500+MB/s writes
     - Samsung 960 EVO NVMe SSD - which can easily max out the adapter
     I enabled jumbo frames in Unraid's settings but that didn't make a difference. Any suggestions would be highly appreciated. Thank you!
  10. Yes, I'm using Ethernet. Yes, I'm in a cold climate with dry air. What could possibly be causing a bad ground?
  11. This build worked fine for a few months, then started exhibiting this issue. I just unplugged and re-plugged every power connection in the case; I guess we'll see if that resolves the issue.
  12. Yeah, I have some SATA power splitters to be able to power all the hard drives.
  13. Can somebody please try and help solve an issue I've been having for the last several months that's driving me crazy?! For some reason my Unraid server will hang whenever my tower is touched; we're talking placing something on top of it, or even something as simple as the Roomba bumping into it. If that happens, it becomes completely unresponsive even though I can still hear the fans running. There is nothing on the monitor and trying to access the webUI fails. This has happened about 20 times in the past few months, and the only way to fix it is to hard-shutdown with the power button, which forces a parity check on the next startup and puts lots of unnecessary read time on all 7 of my disks. I opened the tower, checked all cable connections to make sure they were secure, and re-seated my 2 sticks of RAM, but it's still happening. Any suggestions are greatly welcomed! Thank you.
  14. Actually, the worst thing that can happen is frying the motherboard leads going to the CPU fan header due to high current draw, but the odds of that with only 3 fans total are pretty slim.
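
For the container setup mentioned in post 1: the actual port and variables come from step B.3 of OmerTu's guide and aren't reproduced in the post, so the following is only a rough command-line equivalent of adding a container in the Unraid GUI. Every name, port, value, and image below is a placeholder, not the real one from the guide.

    # Placeholder image, port, and variables for illustration only;
    # substitute the real ones from step B.3 of OmerTu's guide.
    docker run -d \
      --name=example-container \
      -p 8099:8099 \
      -e EXAMPLE_VARIABLE_1="value1" \
      -e EXAMPLE_VARIABLE_2="value2" \
      example/image:latest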
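
On the move behaviour described in post 4: a likely explanation (not confirmed in the thread) is that a move is only an instant rename when source and destination sit on the same mount point; across different mount points, including separate bind mounts inside a docker container, it falls back to a full copy followed by a delete. The paths below are hypothetical and only illustrate the distinction.

    # Same mount point: a rename, instant regardless of file size.
    mv /mnt/cache/temp/example.mkv /mnt/cache/Movies/example.mkv

    # Different mount points (e.g. /input and /output mapped to different
    # paths inside a container): mv falls back to copy + delete,
    # reading and writing the whole file.
    mv /input/example.mkv /output/example.mkv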
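
For the iperf question in post 8: a minimal iperf3 run looks like the one eventually shown in post 6. Assuming 192.168.1.3 is the Unraid server and the Windows machine is the client (the addresses used in post 6), the basic invocations are:

    # On the Unraid server: start iperf3 in server (listen) mode.
    iperf3 -s

    # On the Windows client: measure client -> server throughput.
    iperf3 -c 192.168.1.3

    # Add -R to measure the reverse direction (server -> client).
    iperf3 -c 192.168.1.3 -R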