wrenchmonkey

Everything posted by wrenchmonkey

  1. I've been using this docker for the past several weeks, and have identified it as the culprit for filling up my Docker image. I have the storage for /Media and /Config properly pointed to shares outside of the docker image. Not sure what's misconfigured here. Has anybody else seen this?
  2. So, I assume a "small file" would be anything smaller than the "chunk" size, right? I did some reading and it seems chunk size would be 1GB for data, and 256MB for metadata. Is that correct?
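For anyone curious, you can see how btrfs has carved a pool into chunks with `btrfs filesystem df`, which reports Data, Metadata, and System allocation separately. A sketch, where the `/mnt/cache` path is an assumption for a stock Unraid cache pool:

```shell
# Show per-chunk-type allocation (Data, Metadata, System) on a btrfs pool.
# /mnt/cache is an assumed mount point; adjust for your setup.
POOL=/mnt/cache
if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
  btrfs filesystem df "$POOL"
else
  echo "btrfs tools or $POOL not available on this machine"
fi
```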
  3. Thank you! I am doing that now. I moved a VM image to the other cache drive temporarily to clear some space, since I was getting a 'disk full' error when trying to balance. It's running now. Is this something that should be done routinely? If so, I will set it up in the user scripts to automatically perform this task. How often would you recommend running it?
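For anyone setting this up in User Scripts later, here's a minimal sketch of a scheduled balance. The pool path and the 75% usage threshold are assumptions, not gospel; tune to taste:

```shell
#!/bin/bash
# Sketch of a scheduled btrfs balance for a cache pool (not a drop-in script).
# -dusage=75 rewrites only data chunks that are less than 75% full, which
# reclaims slack space without rewriting the whole pool.
POOL=/mnt/cache   # assumed cache mount point
if command -v btrfs >/dev/null 2>&1 && [ -d "$POOL" ]; then
  btrfs balance start -dusage=75 "$POOL"
  # Show the resulting allocation so you can see what the balance reclaimed.
  btrfs filesystem df "$POOL"
else
  echo "skipping: btrfs or $POOL not present"
fi
```

A filtered balance like this is cheap, so running it monthly is a common choice; it mostly matters on pools that run close to full.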
  4. I'm pretty confused. A few days ago, I had an issue with a Plex docker filling my cache drive up with thumbnail/preview content. I deleted that data and set Plex not to store it, freeing up 172 GB. I have a VM that uses about 160 GB of storage on the drive (which is included in the 318 GB total). I've been using that VM to download some large files, then transferring them via SMB to the actual share later. Anyway, last night the VM paused, and when I went to resume it, I saw that the drive shows as 100% full, with 0 bytes of available space and 318 GB in use. I rebooted the server, and it still shows only 318 GB of actual storage, even though it still lists the drive as a 500GB drive. (Screenshot and log file attached.) tower-syslog-20210915-1523.zip
  5. No worries, yes, it's just single disk. I did the write tests in Terminal and speeds were as expected. But under normal use, the performance is horrible on all disks, including cache.
  6. I'm not sure what you mean by using RAID 1. I'm using Unraid, and the SSD is a single cache disk.
  7. Yeah, that's not it either. If it was a trim issue, then it would happen even during those write speed tests done in terminal. It doesn't. I've been running trim nightly on the drive using the Dynamix Trim plugin since I set the server up originally. Running it manually, even with verbose flag, shows no errors.
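For anyone wanting to repeat the manual check, a sketch of the verbose trim; the `/mnt/cache` mount point is an assumption for a stock Unraid setup:

```shell
# Manually TRIM a pool and report how much was discarded.
# /mnt/cache is an assumed mount point; substitute your own.
MOUNT=/mnt/cache
if command -v fstrim >/dev/null 2>&1 && [ -d "$MOUNT" ]; then
  fstrim -v "$MOUNT"   # -v prints how many bytes were trimmed
else
  echo "fstrim or $MOUNT not available here"
fi
```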
  8. Read performance on all disks is great: 130-150 MB/s. I'm quite confident at this point that it's not a hardware problem.
  9. No way of knowing if they're GOING to fix the bug, but it's a bug they're aware of, and I assume they would WANT to fix it at some point. Hopefully...
  10. I've had no problem with mixing SAS and SATA drives. You can even use SAS connectors on SATA drives (but not the other way around). The only downside is that there's a bug in Unraid that prevents SAS drives from spinning down. If you use SAS drives, make sure you set them to not spin down. Just discovered this issue in my own setup, and disabled the spin down timeout on my SAS drives. Another big benefit is that Surplus/Used SAS drives can often be found WAAAAAAY cheaper than SATA drives.
  11. I ran into that issue when I first added my drive, and ultimately just moved everything over using terminal commands. Kinda dangerous if you don't know what you're doing, but sometimes you just gotta get a bigger hammer... You could also try running the Docker Safe New Perms tool before running the mover. It could be a permissions issue, maybe?
  12. Disabled Turbo Write and re-ran the tests, skipping the user share and the cache share since they would be redundant. (Turbo Write Disabled)
      • /mnt/user0: average of 3: 73.46 MB/s
      • /mnt/disk1: average of 3: 73 MB/s
      • /mnt/disk2: average of 3: 76.86 MB/s
      • /mnt/disk3: average of 3: 77.46 MB/s
      • /mnt/disk4: average of 3: 73.7 MB/s
      • /mnt/disk5: average of 3: 73.33 MB/s
  13. Sorry, I'm relatively familiar with the terminal, but I've never used that particular command. It was late and I overlooked the instruction to cd into the directory I wanted to test; I just ran it in the default directory. Trying again. (Turbo Write Enabled)
      • /mnt/user: average of 3: 351.6 MB/s
      • /mnt/user0: average of 3: 98.86 MB/s
      • /mnt/cache: average of 3: 710 MB/s
      • /mnt/disk1: average of 3: 102.3 MB/s
      • /mnt/disk2: average of 3: 99 MB/s
      • /mnt/disk3: average of 3: 101.33 MB/s
      • /mnt/disk4: average of 3: 102.33 MB/s
      • /mnt/disk5: average of 3: 103.6 MB/s
      All looking pretty good. Definitely more like what I'd expect to see from these disks. I'll re-run the tests with Turbo Write disabled and post results. It's kind of a time-consuming process.
  14. Yes, sorry, I overlooked the trim question. It's scheduled to trim nightly at midnight. The testing to the cache was specifically because somebody requested some other screenshots, besides the work going on in Unbalance. I ran that command: dd if=/dev/zero of=file.txt count=5k bs=1024k and it returned: 5120+0 records in, 5120+0 records out, 5368709120 bytes (5.4 GB, 5.0 GiB) copied, 3.33656 s, 2.2 GB/s. I ran it several times in a row and got exactly that same number all but once, and got 1.6 GB/s the other time. Not sure how to run it in other directories. I tried dd if=/mnt/user0 of=file.txt count=5k bs=1024k to run it in the array without cache, but it says the file doesn't exist, so I'm not sure what path to point it to, since I don't fully understand what that command is doing in the first place. Not sure what I need to do differently, but I don't want to get too creative and corrupt something.
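For clarity: dd reads from the file named by if= (here /dev/zero, an endless stream of zero bytes) and writes to the file named by of= in the current directory, so the way to test a given mount is to cd into it first, not to point if= at the mount. A sketch, using a small write so it finishes quickly (bump count back up, e.g. to 5k, for a steadier number):

```shell
# Sequential-write test: cd into the mount you want to test, then write a
# file of zeros there and let dd report the throughput.
TARGET=/tmp              # substitute e.g. /mnt/disk1 or /mnt/user0
cd "$TARGET"
# bs=1024k writes in 1 MiB blocks; count=64 makes a 64 MiB test file.
dd if=/dev/zero of=ddtest.bin bs=1024k count=64
rm -f ddtest.bin         # clean up the test file afterwards
```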
  15. Why not just add the second drive? Or unmount the old drive, mount the new drive in the "array", then mount the old drive as an "unassigned device" and copy the data over to the new drive? If you want some redundancy, add the larger drive as a parity drive, and then add more/larger drives from there. Once you have parity created, you can simply replace drives and it will rebuild the data. It's kind of a hard pill to swallow, buying that larger drive for parity at first, but in the long run you'll be better off, IMHO.
  16. Cache-to-cache is actually significantly better. It bursts up to as much as 400 MB/s for a few seconds, then seems to level off around 120-150 MB/s. I fixed the log-spamming issue, I think. No change in transfer speeds, though.
      The board is a Gigabyte GA-7TESM. All SATA connectors on this board are 3.0. It also has an onboard SAS controller, which I've connected with breakout cables. I have a breakout board for it as well, but currently I'm just plugged directly into the SAS port for most of the drives. It's an older server board, but it's still pretty capable, and I don't see any reason why it wouldn't handle reasonable transfer speeds.
      Then again, maybe I'm taking the wrong approach. Maybe I should've started by asking what other people see as normal operating transfer speeds. Maybe I'm expecting too much from Unraid, but I don't think it's all that unreasonable to expect a measly 50 MB/s from, for example, a drive that benchmarks at write-speed averages in the low-to-mid hundreds, as reported here: https://hdd.userbenchmark.com/Seagate-Barracuda-720014-1TB/Rating/1849 And like I said, unless I'm crazy, I'm pretty sure I was seeing 50 MB/s average on spindle writes, and a minimum of 130 MB/s to the cache. Am I crazy? Does Unraid really have that much overhead, even with "turbo write" turned on? I get that there's a lot of overhead for parity, but this still seems slow, especially seeing zero difference when using turbo write.
      I've tested switching cables between the SAS controller and the SATA ports with the cache drive to rule out cabling or the ports. I've also got an LSI SAS controller card, and even a SATA 3 PCIe card that I could throw in to test, but that seems a little far-fetched, TBH.
      Thanks for clarifying that disk 29 is the second parity drive. It occurred to me that it might be second parity, but I wasn't sure why those drives would want to spin down, since I figured they'd be spun up full time while writes were happening. Attaching the whole latest diagnostics zip file below. Thanks for all the responses, guys. tower-diagnostics-20190611-0238.zip
  17. Really? That seems horrendous even for spindle drives. But yeah, even when transferring from data to cache, or over the network to cache, the speeds are pretty bad. Here's a Krusader screenshot from a test I just did moving to the cache drive. It peaked at 70 MB/s for literally two seconds, dropped down to 5 MB/s, then crawled back up and stayed right around 30-31 after that.
  18. Hi guys, I've been running Unraid for almost a year now. Every issue I've run into so far, I've been able to resolve on my own with the help of Google and this forum. But I'm kinda stumped now.
      I'm running 5 data disks, a mixture of SATA and SAS drives (all 7200 RPM), plus dual parity (both SAS), and a 500GB Samsung Evo 850 SSD as a cache disk, for a total of 8 disks. All of the spindles are on XFS, and the cache is on BTRFS. The disks are assigned sdc through sdj. When I first built the server, I had fewer disks and only single parity, but I could get write speeds to the cache of about 130 MB/s or better, and write speeds to the spindle disks around 50 MB/s, as I recall (not exact, so don't quote me). Not exactly impressive, but I'm not a speed freak either, and while I wasn't impressed, I didn't find those speeds troubling enough to be concerned about. I didn't go with Unraid for speed; I went with it for ease of use, the ability to use mismatched disks, and being able to easily upgrade/expand the data pool. At any rate, I'm not overly concerned with the performance of the spindle disks, as long as the cache does its job and maintains SOMEWHAT reasonable write speeds.
      The problem is, it doesn't anymore. I recently noticed (since upgrading to 6.7.0) that my write speeds are maxed out at about 30 MB/s, no matter what. That includes the cache. I don't know if this was going on before the upgrade, but I didn't notice it before then. To be fair, I did move the drives to a new case and am now running a different board with dual processors. It's POSSIBLE that, since I didn't do any testing at that point, that's when the performance started to suck. I don't think so, but I'm not ruling it out, because I don't have hard data. I've tried turning on reconstruct write, and it had literally zero effect (and it shouldn't matter for writes to the cache drive anyway). I've tried switching out cables on the SSD, and I've tried both SAS and SATA connections to that drive. The performance is the same regardless of whether it's writing to cache or to a disk in the data pool, and it doesn't matter if it's a network transfer or done locally using Krusader in a docker, or Unbalance.
      I pulled the diagnostics, and in the log file (attached) I'm seeing a lot of references to a write error on disk 0 and disk 29. I've looked everywhere I know to look for anything it would consider a disk 29 and can't find anything. I assume disk 0 is one of the parity drives? I googled the error, and from the posts I've found, it's related to a bug with Unraid not being able to spin down SAS drives? Not sure if that's accurate, but it makes sense, since I do have SAS drives in my machine.
      I'm sure it's just something stupid (and probably my fault), but I can't for the life of me figure out what's going on to cause such slow writes. I spent the past few days cleaning things up, making sure only the data I want on the cache drive is on there, and that there are no files stuck in limbo trying to transfer out of the cache due to insufficient space on a share. I feel like there isn't anything glaringly wrong at this point, yet the issue persists. I'm in the process of trying to clear off the smallest disk to reformat it from XFS to BTRFS, because I've read it can be SLIGHTLY faster, but I already know it won't make a meaningful difference (the cache is already BTRFS anyway). Desperate for any ideas at this point. Thanks in advance for any tips or ideas you guys might have! syslog.txt
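If it helps anyone reading later: reconstruct write can reportedly also be toggled from the shell via Unraid's mdcmd. The path and tunable name below are my understanding of stock Unraid, so treat this as a sketch rather than gospel:

```shell
# Toggle Unraid's array write mode (as commonly cited for stock Unraid):
# 0 = read/modify/write (default), 1 = reconstruct write ("turbo write").
MDCMD=/usr/local/sbin/mdcmd
if [ -x "$MDCMD" ]; then
  "$MDCMD" set md_write_method 1
else
  echo "mdcmd not found -- probably not an Unraid box"
fi
```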