tmchow

Everything posted by tmchow

  1. I'm in the middle of contemplating a hardware upgrade to my Unraid system and wrapping my head around it all, since so many things are inter-related. Would appreciate some help.

     I currently have:
     - 6 drives in the array
     - 1 parity
     - 1 cache (512GB SSD)
     - 2 drives precleared but unassigned. These are on standby in case I need to replace a drive, so I can do it more quickly.
     - All drives directly connected to my ASRock E3C224D4I-14S motherboard, which has an LSI 2308 onboard supporting "8 x SAS2 from 2 x mini SAS 8087"
     - Intel Xeon E3-1231 v3 3.4GHz
     - 32GB ECC DDR3 RAM
     - Rosewill RSV-L4412 case. This is a 4U case that fits up to an extended ATX motherboard, so my motherboard options are unrestricted.

     My plan is to upgrade my motherboard and CPU (for another reason), and in the process I may as well look at how I'm connecting my drives. What I want ideally (changes in bold):
     - 6 drives in array
     - 2 parity (add redundancy)
     - 2 cache (add redundancy and change to NVMe)
     - 2 drives on standby
     - New motherboard
     - New CPU (likely switching from a Xeon to a Core i7)
     - 32GB ECC DDR3 RAM (I've read I can continue to use ECC RAM on a non-ECC-supporting CPU)
     - Dedicated SATA card (improve perf, better scalability)

     SATA CARD
     For the SATA card, my research indicates that a good bang for the buck is either the LSI 9207-8i or LSI 9211-8i, and their only difference seems to be PCIe 3.0 vs 2.0 (specific post talking about it). Since I plan to move the cache to NVMe rather than an SSD connected to my SATA card or motherboard, there seems to be no real difference for me, so I should just go for whatever is cheaper, which seems to consistently be the LSI 9211-8i. On eBay they run between $50-$75 from theartofserver (a regularly recommended eBay seller on this forum and on the unraid subreddit). These cards from this seller come already flashed to IT mode (Example LSI 9211-8i, Example LSI 9207-8i), which simplifies things. I realize that whether a card is good for my situation depends on the number of drives and motherboard compatibility. Since I'm shopping for a new motherboard, my guess is that's a non-issue since I can tailor the choice.

     So 2 questions:
     - What do you think of my choice of LSI 9211-8i? Anything I'm missing in my decision-making criteria?
     - How do I get support for 10 drives, since a single LSI 9211-8i card maxes out at 8? I've seen "expanders" for $200+ (the most commonly recommended one is the RES2SV240, which is $250), which makes sense if I'm using RAID functionality natively on the cards. But since these will be in IT mode, I'm using them as simple HBA controllers, so is it OK to just buy 2 x LSI 9211-8i for approx $100 total and connect 8 drives to one card and 2 drives to the other? (That leaves me expansion room for 6 more drives in the future.) Alternatively, I've read I could use an inexpensive HP expander card like this HP 468405-001 HP SAS EXPANDER CARD ($20 renewed), and according to this comment on reddit it doesn't even need to be connected to the motherboard if I'm short on PCIe slots; it only needs power. Note: one thing I don't like about this last approach is that it looks like I'd need external cables outside the case to connect the LSI 9211-8i to this HP 468405 expander? Not the end of the world, but it feels sloppy.

     MOTHERBOARD + CPU
     Mentioned above, but I was thinking about an Intel Core i7 4770 or 4790 for a balance of performance and Quick Sync support for hardware-accelerated transcoding in Plex.
     Question: Good idea, or should I go with a more recent CPU so my motherboard choices open up?
     For Unraid, I like a USB header directly on the motherboard so I can plug my tiny USB key in directly, without it dangling outside the case.
     Question: Do most motherboards have this feature? The last motherboard research I did was about 5 years ago when I got my ASRock, which has this.

     NVMe FOR CACHE DRIVE INSTEAD OF SATA SSD
     Prices seem cheap enough to just go with NVMe now, since they range from $50-$80 for 512GB. So my cost is 2x that, since I want 2 cache drives for redundancy.
     Question: Seems like as long as I get a motherboard with two PCIe x4 slots, I'm good, right? Or are there other considerations?
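     For reference, once whichever card arrives, here's roughly how I plan to sanity-check that it's visible and running IT-mode firmware (just a sketch; it assumes LSI's sas2flash utility is available, which is a separate download and not on Unraid by default):

        lspci | grep -i lsi      # controller should enumerate on the PCIe bus
        sas2flash -list          # firmware details; the product ID should show "IT" for an IT-mode flash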
  2. Yes, thank you! I'll leave it at the current settings then and not mess with it.
  3. My parity checks are currently taking 14 hours, and are reported to run at approx 109-114MB/s. Here was one recent summary:

     Event: Unraid Parity check
     Subject: Notice [TOWER] - Parity check finished (0 errors)
     Description: Duration: 15 hours, 15 minutes, 19 seconds. Average speed: 109.3 MB/s
     Importance: normal

     After running this tunables script last night, it looks like I'm barely going to get any perf improvement, but the stated speeds (both Best Bang for the Buck and Unthrottled) are way faster than what my parity checks actually run at. What's the reason for the disparity?

     Completed: 2 Hrs 9 Min 5 Sec.
     Best Bang for the Buck: Test 1 with a speed of 135.0 MB/s
          Tunable (md_num_stripes): 1408
          Tunable (md_sync_window): 512
     These settings will consume 33MB of RAM on your hardware.

     Unthrottled values for your server came from Test 27 with a speed of 138.6 MB/s
          Tunable (md_num_stripes): 2968
          Tunable (md_sync_window): 1336
     These settings will consume 69MB of RAM on your hardware.
     This is 39MB more than your current utilization of 30MB.
     NOTE: Adding additional drives will increase memory consumption.

     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

     It is estimated that the Best Bang for the Buck values will provide 99% of the performance that the Unthrottled values will deliver, plus they provide much lower memory consumption. The Best Bang for the Buck values may be the smarter choice, especially if you run 3rd party plug-ins and add-ons that compete for memory.
  4. I have this scripted in my go file: it downloads a single tarball of the static build and expands it, which is just a set of binaries.
     Main site: https://johnvansickle.com/ffmpeg/
     Example to get the latest build at a static URL: https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
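     For anyone who wants to do the same, here's roughly what such a go-file addition can look like (a sketch, not my exact script; the install location and cleanup are illustrative):

        # Fetch the latest static ffmpeg build and drop the binaries into /usr/local/bin
        cd /tmp
        wget -q https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
        tar -xf ffmpeg-git-amd64-static.tar.xz
        cp ffmpeg-git-*-static/ffmpeg ffmpeg-git-*-static/ffprobe /usr/local/bin/
        rm -rf /tmp/ffmpeg-git-*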
  5. In addition to rclone, what about including ffmpeg? It’s so commonly used in media server setups.
  6. Forgive me, but how do I find your script? There's no search within this thread and I don't want to just keep going page by page through 32 pages.
  7. Just tried this for the first time after procrastinating... I tried it in "Very Fast" mode just now per the instructions and I'm getting these errors:

     unRAID Tunables Tester v2.2 by Pauven
     ./unraid-tunables-tester.sh: line 80: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 388: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 389: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 390: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 394: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 397: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 400: [: : integer expression expected
     Test 1 - md_sync_window=384 - Test Range Entered - Time Remaining: 1s
     ./unraid-tunables-tester.sh: line 425: /root/mdcmd: No such file or directory
     ./unraid-tunables-tester.sh: line 429: /root/mdcmd: No such file or directory
     Test 1 - md_sync_window=384 - Completed in 4.011 seconds = 0.0 MB/s
  8. That doesn't work well. For example, if you try this search for LFTP, you get 3 results which isn't accurate given 43 pages of this thread and I know it's come up more than 3 times. https://www.google.com/search?q=lftp+site%3Ahttps%3A%2F%2Fforums.unraid.net%2Ftopic%2F35866-unraid-6-nerdpack-cli-tools-iftop-iotop-screen-kbd-etc
  9. Couldn't figure out how to search this thread to find my own answer. I'm surprised that rclone isn't included in Nerd Pack given its popularity. I have it working in a script called from my /boot/config/go file, but would prefer not to hack my own solution if it could be included as part of Nerd Pack for easier install and updating. Has rclone inclusion been considered or planned? If not, why? PS - If anyone has a tip on how to search a specific thread, let me know
  10. I was just about to reply I don't have another server on my LAN to test with, but realized I could just use my macbook over wifi and mount a share to do the same test. I'll try that tonight when I get home.
  11. You're right, it could be about overhead, but I've read that other people get 40-50MB/s with SSHFS. This is obviously dependent on internet connection speeds, but that shouldn't be an issue for me. Not sure why I'm so far off. I haven't tested against another remote server; I don't have one I can easily test with.
  12. I'm copying files from an SSHFS mount (installed via Nerd Pack) on a remote server, and pulling my hair out over why I'm only getting 1.5MB/s. My server has a gigabit NIC and I've confirmed it's at a 1000tx/gigabit link speed. Dashboard also shows this:

     The sshfs mount is at /mnt/cache/foo using this command (altered to hide my real paths and server IP):

     sshfs owner@remote-server.com:/home/owner/downloads /mnt/cache/foo -o IdentityFile=~/.ssh/id_rsa -o StrictHostKeyChecking=no,Compression=no,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,allow_other

     I've done a variety of things to try to diagnose what is going on.

     First, I've done an iperf3 test between my local unraid server and the remote server. iperf is giving me 500Mbit/s.

     Second, I've measured copy speeds between /mnt/cache/foo (the SSHFS mount) and /mnt/cache by running:

     pv /mnt/cache/foo/movie.mkv > /mnt/cache/movie.mkv

     and this only gets 1.5MB/s.

     Third, I've also copied between the array and the cache drive and get super fast speeds of 1.96GiB/s. This was just to verify there's no local issue.

     Fourth, I suspected it was a cipher problem, since I've read that can impact SSHFS copy speeds a lot. I tried using arcfour, but couldn't get that to work even after I added a Ciphers statement in my ~/.ssh/config and /etc/ssh/ssh_config on my local server. I've verified with the following command that arcfour is supported on the remote server:

     sshd -T | grep "\(ciphers\|macs\|kexalgorithms\)"

     Thanks to a tip on reddit, I got a pointer to this article on cipher performance. I then tried aes128-ctr and aes256-ctr, adding a Ciphers option at the end of the SSHFS command:

     sshfs owner@remote-server.com:/home/owner/downloads /mnt/cache/foo -o IdentityFile=~/.ssh/id_rsa -o StrictHostKeyChecking=no,Compression=no,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,allow_other,Ciphers=aes128-ctr

     This mounted successfully, but transfer speeds were exactly the same as when I didn't specify a cipher (so 1.5MB/s).

     Fifth, for fun I mounted an SMB share to compare speeds and got 6-10MB/s (so roughly 4 to 7 times faster). I mounted it like this:

     mount -t cifs -o username=myUser,password=myPassword //remote-server.com/downloads /mnt/cache/foo2

     What the heck is going on? How can I solve this? What am I missing with SSHFS? I've read reports that slower speeds are expected with SSHFS, but also that people get 40-50MB/s copying from remote servers. Obviously this is contingent on internet connection speeds, but recall I've got gigabit fiber, so I don't think that's the issue (and see my first point above where iperf shows 500Mbit/s).
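     One more isolation test I'm thinking of trying, to separate raw SSH throughput from SSHFS/FUSE overhead (a sketch; same host and file as above, and it assumes pv is installed on the Unraid box, which it is in my case via Nerd Pack):

        # Stream the file over plain ssh and watch the throughput; if this is fast
        # while the sshfs copy is slow, the bottleneck is the FUSE/sshfs layer, not ssh
        ssh owner@remote-server.com "cat /home/owner/downloads/movie.mkv" | pv > /dev/null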
  13. I want to mount a drive on a remote server using SSHFS (installed via NerdTools). The problem is several-fold: I want to connect to the server using my ZeroTier network and that server's ZeroTier IP. I am already running ZeroTier successfully in a docker container, so that's not the issue. The issue is that mounted drives are NOT visible to docker containers unless they are mounted BEFORE the docker service starts. Given ZeroTier is in a docker container, I have a circular problem, it seems. My first idea was to use a User Script tied to "At Startup of Array", which should mean it always runs BEFORE the docker service starts (since docker needs the array started). However, if I do it that way, the ZeroTier docker container hasn't started yet, so the remote server won't be accessible via its ZeroTier IP. Any ideas of what I can do here? Is what I want to do impossible? Or maybe I'm just doing something wrong, and a drive actually can be mounted AFTER the docker service has started and still be visible to containers?
  14. Wondering if a better approach is just to try the sshfs mount and if it fails, handle the failure gracefully.
  15. Thanks that makes sense. So you’re saying to put my SSHFS mount command before emhttp but also check for network before that. I guess if network isn’t available I could put a sleep/wait in the script at that point to stall to get more time. Any suggestions on the best/right way to check for network connectivity to my remote server? I’m kinda new to scripting and such.
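     From the bits I've pieced together so far, something like this might be the shape of it in the go file, before emhttp starts (an untested sketch; the hostname, paths, and retry counts are placeholders):

        # Wait up to ~60 seconds for the remote host to respond before mounting
        for i in $(seq 1 12); do
            ping -c1 -W2 remote-server.com >/dev/null 2>&1 && break
            sleep 5
        done

        # Attempt the SSHFS mount; if it fails, log it and carry on booting
        sshfs owner@remote-server.com:/home/owner/downloads /mnt/cache/foo \
            -o IdentityFile=~/.ssh/id_rsa -o reconnect,ServerAliveInterval=15,allow_other \
            || echo "sshfs mount failed, continuing boot" | logger -t go-script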
  16. So just to be clear: does everything in the go file execute BEFORE the docker service is started? I need to run SSHFS to mount a drive, but sshfs is installed via Nerd Pack. Will it be available for execution from the go file?
  17. Instead of adding to the /boot/config/go file, you mean? Any disadvantages to using user scripts for this? E.g. any risk of user scripts not running correctly, whereas the go file always will? If I go the user scripts route, I'm trying to run sshfs before the docker service starts. Is that possible?
  18. This is probably answered already but I can't figure out how to search this thread 🤷‍♂️. I have SSHFS working but want to auto mount something on boot. Should I just add the command in my /boot/config/go file? If I do that, how do I ensure SSHFS is installed before I run the command? Or is it not possible to have NerdPack install stuff on boot automatically?
  19. I want to mount a drive using SSHFS for a remote share on another machine. I had it at /mnt/user/remote to start with, but then found my docker containers couldn't see it. After some digging, I found that containers cannot see any new mount points created after the docker service is started. While I can stop and restart the docker service, I'll encounter problems when I reboot the server unless I can guarantee the SSHFS mount happens BEFORE the docker service starts. So this begs 2 questions:
     - How do I ensure that the SSHFS mount happens before the docker service starts on boot? I couldn't find a good answer despite a bunch of searching.
     - Is it a good idea to mount using SSHFS within /mnt/user? Or should I be putting it somewhere else, like directly on a disk (e.g. /mnt/disk1)?
     Lastly, I noticed that if my remote machine isn't available, the logs threw all sorts of errors and it seemed like the machine was hanging. Not sure if this is just expected or a side effect of me mounting it within /mnt/user.
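     In case it helps frame the second question, this is the direction I'm currently leaning for the go file (an untested sketch; the NerdPack package path/name and the remote details are guesses I'd need to verify against what's actually on my flash drive):

        # Install sshfs from the package NerdPack has already downloaded to the flash drive
        # (exact path and package name are a guess and may differ per Unraid version)
        installpkg /boot/config/plugins/NerdPack/packages/<unraid-version>/sshfs-*.txz

        # Mount outside /mnt/user so the user-share layer isn't involved
        mkdir -p /mnt/disks/remote
        sshfs owner@remote-server.com:/home/owner/downloads /mnt/disks/remote \
            -o IdentityFile=~/.ssh/id_rsa,allow_other,reconnect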
  20. I've tried to get this going by mucking around. I have the container set up with port 2080 for HTTP and 20443 for HTTPS. I've forwarded ports 80 and 443 on my router to those ports. When I try to create an SSL cert through the Nginx reverse proxy dashboard, I get an "Internal error" dialog after a few seconds. In error.log there isn't a 1:1 corresponding line for when this error occurs, other than:

     2019/06/29 18:37:01 [notice] 1037#1037: signal process started

     If I hit "OK" on that error modal and refresh the page, there is a line for the SSL cert. If I then try to use that cert, it fails because it can't find the cert on disk (presumably due to the "internal error"). How do I debug this and get it working?
  21. I've been digging into this, trying to get SSL to work for some of my soon-to-be externally accessible sites. I have a mixture of things I want exposed, and different ways I want to expose them. Some things I only want to expose over my ZeroTier network. Examples are Node-RED and NZBGet, since all the devices I access those from (basically my laptop) can have the ZeroTier client installed, and it works seamlessly. I really don't need SSL on these, but why not? Other containers I want exposed over the regular internet (non-ZeroTier network) since I need them accessible from other internet devices (e.g. my MQTT broker, which needs to be accessible from internet-attached devices not on my LAN). My ZeroTier addresses are in 10.241.0.0/16. When creating proxy hosts in Nginx Proxy Manager, is this just a matter of adding those addresses as aliases? (e.g. 10.241.1.1 and 192.168.1.5 both for the same proxy host?) Or am I just totally confused? Would appreciate help understanding the above.
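     To make the question more concrete: for the ZeroTier-only services, is the idea that I'd drop something like this into the proxy host's custom nginx config (just a sketch of what I'm imagining, not something I've tested)?

        allow 10.241.0.0/16;    # ZeroTier network
        allow 192.168.1.0/24;   # local LAN
        deny all;               # everyone else gets a 403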
  22. Turns out the stick was the “sigma designs” one already in the list. Doh.
  23. I have a "Aeotec Z-Stick Series 5" Z-WAve USB Key plugged into my unraid machine. I've been using it with Home Assistant in a docker for over year without issue. I have recently moved to use Home Assistant with the "HASSIO" variant in a VM. Everything is great with Home Assistant as I get this new setup configured. I'm not at the stage where I want to setup my ZWAve devices, but then I noticed I can't see the USB stick in Home Assistant in the VM. In the VM configuration page, I don't even see the stick listed in the available devices to passthrough, but I don't see other devices: Is the issue something to do specifically with inability for this particular ZWave USB stick to be passed through to the VM? Is it a port issue? SInce I see other USB devices, it seems like it should be possible.
  24. OMFG, that was the issue. I made it 10GB and it works. I wish the installer would be clearer why that disk was disabled. Thanks for your help
  25. I'm trying to install Ubuntu into a VM and having a heck of a time figuring out how. When I get partway through the installer, I'm showing the virtual disk I created of 5GB in size, but it's greyed out and I can't select it. What the heck is going on? What am I doing wrong?