Everything posted by Marshalleq

  1. OK. Remember it will only go as fast as your slowest drive. Also, if the drives are nearing capacity, that slows things down a lot, and if you have a faulty SATA cable, that would slow down the whole array too. Those are the main things I can think of off the top of my head.
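     If it helps narrow it down, here's a rough sketch for benchmarking each drive individually so a slow disk (or dodgy cable) stands out - it assumes your array drives show up as /dev/sdb through /dev/sde, so adjust the device names for your system:

         # sequential read test per drive; a failing cable or slow disk is usually obvious
         for d in /dev/sd[b-e]; do
             echo "== $d =="
             hdparm -t "$d"
         done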
  2. OK, I must be reading it wrong. It sounds like you're saying it's gotten slow, that you're currently getting 50MB/s when you used to get 20, which would mean it's actually faster than what you had before. I'm confused.... (which wouldn't be the first time)
  3. I think you mean in the high 120s? Yes, I've noticed it too. I went down the array-tuning path, but really that's not the issue. Linux is supposed to be more performant than other OSes at file transfers, but lately it's been less so. And that damn Plex issue - I've set the mover to minimum everything, but we shouldn't have to - there's clearly some kernel I/O scheduling issue.
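     On the scheduling theory, one thing that can at least be inspected is which I/O scheduler each disk is using. A minimal sketch, assuming sdX-style device names (the mq-deadline line is just one possible scheduler to experiment with, not a recommendation):

         # the active scheduler for each disk is shown in [brackets]
         for q in /sys/block/sd*/queue/scheduler; do
             echo "$q: $(cat "$q")"
         done
         # to experiment on a single disk, e.g.:
         # echo mq-deadline > /sys/block/sdb/queue/scheduler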
  4. I found something that I think is telling about this:
  5. I'm aware a few of us have this issue, where the VM takes a very long time to boot, and longer when you have more memory. I've just seen something that I thought I'd share. Basically, while watching htop and starting a VM that isn't already running, with 32GB of dedicated (not shared / ballooned) RAM, the VIRT RAM is set to 32GB straight away, but the RES RAM slowly counts up to 32GB. Once the RES RAM reaches 32GB, the TianoCore BIOS suddenly appears and then we're good to get booting. While the VM is active, reboots etc. perform normally, i.e. not slowly; it's just the first boot from power on. So my uneducated guess is that it takes time for KVM to find available RAM that may have been virtually assigned to other processes and make it available to the KVM / OVMF / TianoCore BIOS, and further that the BIOS will not start until this is done. However, with ballooned memory it also seems to need to wait to see the whole 32GB before displaying the BIOS. Subsequent reboots continue to behave the same (the BIOS does not come up until the RES RAM in htop is completely assigned), but they perform this step much faster. I assume that if one leaves the system for some time, the RAM previously assigned to the VM gets used by other processes and will again be slow to assign; I have certainly experienced that behaviour. This KVM article goes some way towards explaining when the assignment of memory takes place within the various steps of making hardware available: http://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt This is probably a question we'd need to raise with the KVM developers. Is anyone on that list?
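     For anyone wanting to watch the same RES/RSS climb from a shell rather than htop, here's a rough sketch - the VM name in the pgrep pattern ("MyVM") is just a placeholder for whatever appears on your qemu command line:

         # watch the qemu process's resident memory grow while the BIOS is "stalled"
         PID=$(pgrep -f 'qemu.*MyVM' | head -n1)
         watch -n1 "grep VmRSS /proc/$PID/status"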
  6. Well, technically not 'anything under' /mnt; 'anything at exactly /mnt' is more correct, given that's where all your permanent storage also resides....
  7. Fantastic, thanks for putting in the time to explain it for me. So I think I should work out the number of simultaneous transcodes I'm unlikely to exceed, e.g. 10, figure out how much data each one would produce at the worst resolution for, say, 2 minutes per stream (mostly 1080p for now), and that gives me an idea of how much RAM I would need. I agree the transcode directory on SSD is a concern, and the advantage of /tmp is that it should give the RAM back to the system as needed - maybe not an issue with 128GB though. I have a high-endurance SSD, but I still don't wish to put the transcode directory on it because it seems pointless and wasteful. And finally, monitoring what is going on may be more challenging on /tmp. Time for some experiments. Thanks again. Marshalleq
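     To put rough numbers on that sizing exercise, a back-of-the-envelope sketch - the 60MB-per-minute figure for a 1080p transcode is purely an assumed value to illustrate the arithmetic, so substitute whatever you measure on your own streams:

         # hypothetical sizing: streams x minutes buffered x MB per minute
         STREAMS=10
         MINUTES=2
         MB_PER_MIN=60    # assumed 1080p transcode rate - measure your own
         echo "$(( STREAMS * MINUTES * MB_PER_MIN )) MB of RAM for the transcode buffer"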
  8. Apologies if this is obvious, but is the explanation of this buried somewhere in this thread and I've just missed it, or is it personal experience? I'm probably not likely to be affected by memory issues with the amount of RAM I have; nevertheless, I do like to understand what's going on and why a Plex docker would fill /tmp differently than a RAM disk on some other mount point. Perhaps some other process is getting in the way trying to clean it up or something?
  9. I still don't understand why this is better than /tmp.
  10. Unless I missed something in the 9 pages of comments (I only skim-read the first part and the last part), /tmp is RAM. So you're saying /tmp still has some limit on it? I liked the idea of /tmp because it's pre-existing and I didn't want to faff about with scripts and such.
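     On whether /tmp has a limit: a tmpfs normally defaults to half of physical RAM unless it was mounted with an explicit size= option, and you can check what you've actually got with something like the commands below (on unRAID, /tmp may simply be part of the tmpfs root filesystem rather than its own mount):

         df -h /tmp          # size and current usage of whatever backs /tmp
         findmnt -T /tmp     # which mount /tmp actually lives on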
  11. Thanks everyone for this fine thread. I've been considering doing this for a while due to the series of unfortunate events below, and hadn't clicked that it was so easy. My two issues were:
     1. If any automation or VMs etc. decided to run, Plex couldn't transcode to disk fast enough to keep up with real-time video (even on a single stream), and thus a media item would 'pause' in the middle while watching it. So, an I/O issue.
     2. I had two SSDs die: an Enterprise Samsung SM863 960GB (new, died in under a year, and Samsung's not covering it due to it being OEM) and an Intel SSD 330 60GB. The latter is obviously older, but being MLC-based I figured it would be fine as a transcode device - but no.
     Lessons learned:
     1. Even though SSDs are rated for a certain endurance, I don't really want to have to keep buying them at their current prices, so I should make their lives easy if I can.
     2. Using /tmp reduces I/O on the PCI bus, which helps other activities.
     3. Don't buy OEM drives unless you're 100% sure the warranty will match the original manufacturer's warranty.
     4. At the moment, buy Intel SSDs. The price of an Intel-branded enterprise 1TB drive rated at 1 drive write per day for 5 years (the warranty is 5 years) is about the same as a good Samsung 1TB consumer drive, which doesn't have the IOPS or endurance. I paid about $300 NZD for it (compared to thousands for other brands of enterprise SSDs, like the one that died above). It's not the fastest drive, but you don't need that - in the server space you need reliability and IOPS. And as it is, the Intel still does 95,000 read IOPS / 36,000 write IOPS. If you think the write IOPS are low, have a look at HDDs: a new Seagate Barracuda 12TB does about 215 IOPS. So who's going to complain about 36,000? Not me.
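     For completeness, if anyone prefers a dedicated, size-capped RAM disk over raw /tmp, the underlying idea is just a tmpfs mount. A minimal sketch - the path and the 8G size are only examples, not recommendations:

         # RAM-backed transcode directory with a hard size limit
         mkdir -p /tmp/plextranscode
         mount -t tmpfs -o size=8G tmpfs /tmp/plextranscode
         # point the Plex transcoder directory at it; umount releases the RAM again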
  12. Thanks, sorted that, and yes, it's much cleaner now. I did run your script once previously but somehow overwrote it with the wrong one, lol. Thanks again - I'll do some more testing later, when there's no one on my server to skew the results!
  13. Hmm, that's interesting. OK, I'll have another look at the script - I must have gotten the old one mixed up with the new one somehow.
  14. In the interim I seem to have gotten around the problem by running ln -s /usr/local/sbin/mdcmd /root/mdcmd, which is what I had to do on the old script too if I recall correctly.
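     In case it saves anyone else some digging, the workaround boils down to one line (only creating the link if it doesn't already exist):

         # give the script the /root/mdcmd path it expects
         [ -e /root/mdcmd ] || ln -s /usr/local/sbin/mdcmd /root/mdcmd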
  15. Anything is possible, but it's not designed for that. It's designed to run manually so you can run it and rerun it to get the optimum settings for disk performance on your particular setup.
  16. @Xaero there still seems to be quite a major issue on my hardware - see the results below:

         Completed: 0 Hrs 56 Min 11 Sec.
         Press ENTER To Continue
         unraid-tunables-tester.sh: line 55: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 598: [: : integer expression expected
         Best Bang for the Buck: Test 0 with a speed of 1 MB/s
              Tunable (md_num_stripes): 0
              Tunable (md_write_limit): 0
              Tunable (md_sync_window): 0
         These settings will consume 0MB of RAM on your hardware.
         Unthrottled values for your server came from Test 0 with a speed of MB/s
              Tunable (md_num_stripes): 0
              Tunable (md_write_limit): 0
              Tunable (md_sync_window): 0
         These settings will consume 0MB of RAM on your hardware.
         This is -299MB less than your current utilization of 299MB.
         NOTE: Adding additional drives will increase memory consumption.
         In unRAID, go to Settings > Disk Settings to set your chosen parameter values.
         Full Test Results have been written to the file TunablesReport.txt.
         Show TunablesReport.txt now? (Y to show):

     I was expecting more for 56 minutes of testing. It doesn't look to me like it's autodetecting mdcmd, which I think you said it would earlier....
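     For reference, the kind of detection I was expecting is only a couple of lines - this is just a sketch of the idea, not the actual code from the script:

         # prefer whatever mdcmd is on the PATH, fall back to the usual unRAID location
         MDCMD=$(command -v mdcmd || echo /usr/local/sbin/mdcmd)
         [ -x "$MDCMD" ] || echo "mdcmd not found - tests will report 0 MB/s"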
  17. Hey, I'm in the middle of running this while I cook dinner - thought I'd share the output in case you can shed some light / adjust the script. Sorry to see you've been sick BTW, hopefully you're on the mend!

         unRAID Tunables Tester v2.2 by Pauven
         unraid-tunables-tester.sh: line 80: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 388: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 389: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 390: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 394: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 397: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 400: [: : integer expression expected
         Test 1 - md_sync_window=384 - Test Range Entered - Time Remaining: 1s
         unraid-tunables-tester.sh: line 425: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 429: /root/mdcmd: No such file or directory
         Test 1 - md_sync_window=384 - Completed in 240.717 seconds = 0.0 MB/s
         unraid-tunables-tester.sh: line 388: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 389: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 390: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 394: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 397: /root/mdcmd: No such file or directory
         unraid-tunables-tester.sh: line 400: [: : integer expression expected
         Test 2 - md_sync_window=512 - Test Range Entered - Time Remaining: 1s
         unraid-tunables-tester.sh: line 425: /root/mdcmd: No such file or directory

     It repeats from here.
  18. I actually hadn't edited it. I use vi and nano, so I don't think there's much chance of it coming from my end - are you saying it was published to this site with Windows line endings and I need to change them? In which case I'd probably need to do that with something (I run a Mac), but this would be a first in 20 years!
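     If it does turn out to be Windows line endings, stripping them is a one-liner; the filename here is assumed to be the script as downloaded:

         # remove carriage returns (CRLF -> LF) in place
         sed -i 's/\r$//' unraid-tunables-tester.sh
         # or, without modifying the original: tr -d '\r' < unraid-tunables-tester.sh > fixed.sh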
  19. @Xaero Do you have something installed that we need to add? I'm getting 'command not found', 'bad interpreter' and 'file not found' errors.
  20. Is this still an active solution? I mean, actively maintained? Scratch that, I've now confirmed that it isn't. Thanks.
  21. I assume your Lightroom application is not hosted on the unRAID server. So yes, network speed mostly, then disk, I would say. I run my catalog locally and just store the files on the server. Though over a gigabit connection the performance is noticeably slower, even like this, for larger catalogs. Still, if you're not a heavy user of Lightroom, you won't notice much.
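     If you want to confirm it's the network rather than the disks, an iperf3 run between the workstation and the server is the quickest check - this assumes iperf3 is installed on both ends, and "tower.local" is a placeholder for your server's address:

         # on the server:   iperf3 -s
         # then on the workstation:
         iperf3 -c tower.local -t 10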