sailorbob134280

Members · 5 posts

  1. I want the ability to create a logical volume as a backing store, specify what devices it can be stored on (like cache-only or a specific drive, similar to a share), and present it to the network as an iSCSI target. Basically, I want it to be tgt with a GUI wrapper and some extra Unraid-specific options. Does that answer your question?
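
     For context, this is roughly what I imagine it doing under the hood with plain LVM and tgtadm (the volume group, size, and IQN below are made-up placeholders, and it assumes tgtd is running):

        # Create a logical volume as the backing store,
        # e.g. on a volume group that lives on the cache pool
        lvcreate -L 100G -n iscsi_lun0 vg_cache

        # Define an iSCSI target and attach the LV as LUN 1
        tgtadm --lld iscsi --op new --mode target --tid 1 \
               --targetname iqn.2017-11.local.unraid:lun0
        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
               --backing-store /dev/vg_cache/iscsi_lun0

        # Allow initiators to connect (ACLs would be an Unraid-specific option)
        tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
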
  2. I'd like to add a use case that doesn't seem to have been mentioned: mounting an iSCSI target hosted on a different machine to serve as a cache drive. I have a box running FreeNAS with an array of 10k SAS drives (planning to swap those for SSDs at some point) acting as a target for VMs, a game library, etc., and it seems like it would make more sense to run unRaid's cache off the SAN than to keep a separate set of cache drives just for unRaid. Any thoughts on this? It would let me consolidate a few storage arrays while still taking advantage of unRaid's caching system.
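
     For the initiator side, the standard Linux tooling is open-iscsi; a rough sketch of what mounting the target would look like (the portal address and IQN are placeholders for my FreeNAS box, and I'm not sure Unraid ships these tools out of the box):

        # Discover targets exposed by the SAN
        iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

        # Log in to the target; the LUN then appears as a local
        # block device (e.g. /dev/sdX) that could serve as the cache
        iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:cache \
                 -p 192.168.1.50:3260 --login
        lsblk
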
  3. Thanks for the help; I had suspected it was a bandwidth issue somewhere in the chain. The SSD is indeed a cheap TLC one that I bought the other day, so it looks like I'll be returning it and buying a better one, as well as a NIC.
  4. I've been running into a problem with writes, and after hours of troubleshooting I admit I'm stumped.

     Background info: I'm running a whitebox unRaid server with an old Intel Core i3-540 and 6GB of DDR3 RAM (it gets the job done, or so I thought). Drives are 5x HGST 7.2k 2TB SATA III drives connected through SATA II (a motherboard limitation, which shouldn't affect spinning-disk throughput) running in a single parity config, plus a 120GB SSD as a cache. See the attached diagnostics for full specs.

     The problem is that writes to the server (either from my Windows desktop via SMB and FTP, or from another unRaid box via NFS) start out at gigabit speeds and eventually settle at around 50 MB/s or less. I know this is normal for read/modify/write operations (those actually top out around 65 MB/s), but the same cap applies to reconstruct writes and to writes to the SSD cache. Looking at the drive activity indicators on the server itself, the drives only show activity every second or so during turbo writes. Internal transfers from the SSD to the array with turbo write enabled cap out at around 140 MB/s, which is expected, and the activity LEDs stay lit for the entire transfer. Going from the array to the cache is a similar story.

     Things I've tried:
       • Disabling Docker/VMs and any other service that might be accessing the array. The Open Files plugin shows that only Samba is open, which makes sense because I'm testing the transfer from my Windows desktop.
       • Transferring with FTP from my Windows box and with NFS from another unRaid box: same symptoms.
       • Using Teracopy instead of Windows Explorer: same symptoms.
       • Tinkering with the vm.dirty_ratio settings (rough sketch at the end of this post). I was able to make the turbo writes either fluctuate like crazy or settle at around 40 MB/s, which leads me to believe it might be a memory issue.
       • Running the Disk Speed Test plugin: all drives are capable of much higher transfer speeds than I'm seeing (should be around 140 MB/s if the LAN bottleneck were removed).
       • Enabling jumbo frames on both ends (MTU of 9000), which my switch supports. No change.
       • Defragmenting my data drives. This worked wonders for read speeds off one of the drives (a different issue, now solved) but did not impact write performance.
       • Reverting to previous versions of Samba. No change (unsurprising, since the FTP and NFS transfers were similarly affected in the first place).
       • Moving SATA ports and cables around to try to isolate a bad port or cable. No change.
       • Transferring the test file from my Windows box to the other unRaid box: sustained 112 MB/s.

     Any ideas? I'm not unwilling to spend a little money on parts if I know it'll help (more RAM, a cheap HBA, a new NIC, etc.), but I'd rather get some advice before I resort to shotgun repair. I had initially thought the SATA controller was becoming saturated, since it's an old board and was never top of the line to begin with, but since the internal transfers work fine, I'm beginning to think it's something to do with the memory. Thanks for the help!

     Attached screenshots:
       • Transfer to the cache drive. It starts at 112 MB/s and drops to about 50 MB/s, where it stays for the rest of the transfer.
       • Transfer directly to the array with turbo write enabled. RAM cache settings heavily impact the speed graph; average is around 50 MB/s.
       • Transfer directly to the array with read/modify/write enabled, which is currently the fastest way to transfer files to the server. RAM cache settings don't really make a difference here.

     aurora-diagnostics-20171108-2342.zip
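
     For anyone who wants to reproduce the vm.dirty_ratio tinkering mentioned above, this is roughly how I was adjusting it (the values are just examples, not recommendations):

        # Show the current writeback thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # Lower them so writeback starts sooner; this makes the speed
        # graph reflect disk throughput instead of a RAM-cache burst
        sysctl -w vm.dirty_background_ratio=1
        sysctl -w vm.dirty_ratio=5
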
  5. I'm a super noob when it comes to Linux, but I had an idea for a (possibly) better form of change detection: when Filebot detects a change, get the size of each of the files in the input directory. After the wait duration, get the sizes again and compare; if a file's size has changed, it's still being written, and if it's the same, it's safe to move. This wouldn't stop Filebot from moving a file that is being read, but it would help prevent moves of files that are still being created (rough sketch below). The reason I'm requesting this is that I'm trying to set up an automated workflow for ripping DVDs. It goes MakeMKV autoripper --> Filebot --> Handbrake --> Plex. The problem I'm running into is that Filebot keeps moving files that are still being ripped. I'm not sure how to write a script to detect when a file is done being ripped, and MakeMKV doesn't have move-on-complete functionality (I'll be making a feature request there too). Any other ideas are welcome.
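
     A rough sketch of the size-comparison idea in shell, in case it helps explain what I mean (the watch directory and wait time are placeholders; I don't know how Filebot implements its change detection internally):

        #!/bin/bash
        WATCH_DIR="/input"   # placeholder for Filebot's input directory
        WAIT=30              # placeholder wait duration in seconds

        # Record each file's size, wait, then compare
        declare -A before
        for f in "$WATCH_DIR"/*; do
            [ -f "$f" ] && before["$f"]=$(stat -c %s "$f")
        done

        sleep "$WAIT"

        for f in "${!before[@]}"; do
            after=$(stat -c %s "$f")
            if [ "${before[$f]}" -eq "$after" ]; then
                echo "stable:  $f"   # size unchanged, safe to move
            else
                echo "growing: $f"   # still being ripped, leave it alone
            fi
        done
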