Herdo

Members
  • Posts: 101

Everything posted by Herdo

  1. As I said in the title, I tried to replace my cache drive, as my current one has been showing some SMART errors. I followed the instructions, and it said a "btrfs device replace will begin", but that never happened. I was left with a blank unmounted drive that needed to be formatted. I formatted it, but it still didn't copy over the old cache drive's data. Now I'm stuck, because trying to remount my old cache drive tells me that it will overwrite all data on the disk when I try to start the array. What do I do now? EDIT: Nevermind on the not being able to remount my original cache disk part. I realized what I did wrong and was able to remount the old disk. Now I'm just still not sure how to proceed with replacing the drive, as the instructions given in the FAQ don't seem to work. EDIT 2: Nevermind again. I saw that in 6.9 this feature didn't work automatically, so I followed the instructions to do it through the command line and it worked perfectly! A sketch of the command is below.
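
     For anyone else who hits this, the manual route is a single btrfs command against the mounted pool. A rough sketch with placeholder device names (check yours first with "btrfs filesystem show /mnt/cache"):

       btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/cache   # old device, new device, mount point
       btrfs replace status /mnt/cache                      # watch progress until it reports completed
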
  2. Yes. I'm just saying, limit the scope of exposed ports. If I understood your post correctly, you essentially opened every port on your router from 1 - 65535. Instead, designate one port. So src port 34854 - 34854 and dst port 34854 - 34854, as an example. Then in Deluge do the same: change it from "use random port" to 34854, matching what you set in your router. Again, that is just a random port I'm using as an example. You can set it to whatever you want.
  3. Exposing a docker container to the internet isn't any less safe than simply exposing Deluge to the internet through an open port. That being said, no, that's not correct. You do not want to open every port to the internet. In Deluge, select a port (or a small range of ports if you prefer, maybe 5 - 10 of them) and open those. Then ensure nothing else will use those ports. What you've essentially done is told your router to accept all/any traffic from anywhere and forward it to your unRAID box. This is very bad. You want to fix that immediately. EDIT: Also, in case you weren't aware, ports 0 - 1023 are what are known as "well-known ports" and should be avoided. I'd just pick something in the tens of thousands. There's a quick check below.
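
     A quick way to sanity-check the router change, run from a machine outside your network (the address and port here are placeholders for your own):

       nmap -p 34854 your-wan-ip   # only the one forwarded port should show as open
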
  4. I just had to make a change to crontab because an old script was interfering with some recent changes I had made. Previously "crontab -l" displayed this:

     # If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system. If a script fails, run-parts will
     # mail a notice to root.
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null

     I found the old script located under /etc/cron.d/root, so I used the "replace crontab from file" function with "crontab root". This allowed me to use "crontab -e" to remove the old script and save. However, now when I use "crontab -l", it only displays the "root" file's crontab. It looks like this:

     # Generated docker monitoring schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
     10 03 * * * /boot/config/plugins/cronjobs/medialist.sh >/dev/null 2>&1
     # Generated system monitoring schedule:
     */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     # Generated mover schedule:
     30 0 * * * /usr/local/sbin/mover &> /dev/null
     # Generated parity check schedule:
     0 3 1 * * /usr/local/sbin/mdcmd check &> /dev/null || :
     # Generated plugins version check schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
     # Generated speedtest schedule:
     0 0 * * * /usr/sbin/speedtest-xml &> /dev/null
     # Generated array status check schedule:
     20 0 * * 1 /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null
     # Generated unRAID OS update check schedule:
     11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
     # Generated cron settings for plugin autoupdates
     0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1

     I guess I just want to make sure this is OK, and that this isn't going to mess anything up. Obviously the "root" file's crontab was working even though it wasn't loaded, so I'm guessing the hourly/daily/weekly/monthly scripts will still work, but I don't know. Am I correct in assuming crontab is just used to manage and display cron jobs, and that they will work regardless of which crontab file is loaded? EDIT: "crontab -d" and then a reboot reverted the crontab to the default settings.
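
     If anyone else replaces their crontab the same way and loses the run-parts defaults, a rough recovery sketch (before resorting to the reboot above; back up first, and the paths are the ones from this post) is to merge the loaded crontab with the generated file and install the combined result:

       crontab -l > /tmp/cron.bak                      # save whatever is currently loaded
       cat /tmp/cron.bak /etc/cron.d/root | crontab -  # install the merged crontab
       crontab -l                                      # verify the combined result
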
  5. I just bought an E3-1275v6 for my Supermicro X11SSM-F, upgrading from a G4400. That being said, I'm planning on selling this and upgrading to a Ryzen 9 3900X or 3950X, depending on how much I want to spend when the 3950X launches. I've got two VMs running currently. Both are running Ubuntu Server 18.04; one with WireGuard/Deluge and the other with a highly customized Feed the Beast Minecraft server. Both have 1 CPU and 1 thread (the same pair), as I read I should be keeping the CPU and its threads together. Is this true? I've noticed in the CPU Pinning settings I can designate the same CPU/thread to two different VMs. Is this a good or bad idea? The reason I ask is because WireGuard and Deluge can really hammer those 2 CPUs when they are actively downloading, but that only happens maybe once a day or every other day, for 15 - 20 minutes. I think both the Minecraft server and the WireGuard/Deluge server would greatly benefit from having access to 4 CPUs (2 cores and 2 threads). Like I said, for 95% of the day it would mostly be the Minecraft server utilizing the CPUs, so I don't think they'll be fighting for resources too much. Thanks in advance.
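
     For what it's worth, you can confirm which logical CPUs are hyperthread siblings of the same physical core before pinning them as a pair (run on the unRAID host; the "cpu0" path is just an example):

       lscpu --extended=CPU,CORE                                        # maps each logical CPU to its physical core
       cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list   # e.g. prints "0,4" on a 4-core/8-thread chip
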
  6. Thank you. It's about to finish with the post-read, but I think I'll just do one at a time. I'm not in any huge rush or anything. Thanks again for the help!
  7. I just got myself 2 more 4TB Ultrastar drives and they are currently preclearing. Once this is done, what's the best way to go about adding these? I'm adding a second parity drive and another (5th) data drive. Should I add them both to the array at the same time, or one at a time? If one at a time, in which order makes the most sense? I'm trying to avoid doing 2 parity rebuilds if possible, but I'm not sure if that is an option. I know adding the second parity drive is going to need a parity rebuild, but I believe adding another data drive will as well. Thanks in advance!
  8. The Intel 4xxx series is no joke when it comes to single-core performance. I still have an i7-4790K that I refuse to upgrade because, for my gaming machine, it's hard to beat. This is really going to come down to your use case mostly. The two I'd be deciding between are the 4770K and the Threadripper. Generally, if all you're doing is running some dockers and transcoding through Plex, I'd say go for the 4770K, although it sounds like you're using this for more than just a media server. I'm kinda in the same boat. I literally just bought (like two weeks ago) a new Xeon E3-1275 v6 processor, and I think I'm going to sell it and upgrade to a Threadripper 2950X. Previously I had a G4400 and it worked wonderfully for Sonarr/Radarr/Plex/Syncthing/etc., but I've started to virtualize some stuff and I'm already wanting more than 4 cores. Like you, I've got a Minecraft server running on a VM, as well as a VPN and Deluge running on a second VM, and I'm realizing the need for something beefier. That being said, if you aren't running any of this under a VM, the 4770K is probably plenty.
  9. I know there are plenty of guides on doing this, but I'm just wondering if simply specifying a tag at VM creation, and then mounting that inside the VM, is the proper way to do this. The reason I ask is because generally you never want to have one disk mounted under two separate systems, correct? Doesn't that just guarantee file system corruption? Maybe I'm not fully understanding the process here, but after reading several guides I'm a bit worried to just follow this advice blindly. I'm trying to mount all of my shares, so is the best way to do this to specify each one separately, e.g. /mnt/user/Movies with tag "Movies"? Or can I just do /mnt/user/ with tag "shares"? (The guest-side mount I mean is sketched below.)
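
     For reference, inside the guest a virtio-9p tag mounts like this (the tag and mount point are whatever you chose at VM creation; these names are just examples):

       mount -t 9p -o trans=virtio,version=9p2000.L Movies /mnt/movies

     or persistently via /etc/fstab:

       Movies /mnt/movies 9p trans=virtio,version=9p2000.L,_netdev 0 0
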
  10. I have the official Plex docker container installed and I'm using the Live TV and DVR functionality. It's working great, but I'd like to be able to use the post-processing script functionality to encode the over-the-air recordings into something smaller and more compatible with my devices (h.264, MKV). I need to link a script that will run ffmpeg or HandBrakeCLI. I can install an ffmpeg docker container, but I'm not sure how to communicate between the two. My thinking is that I put the script somewhere accessible by both docker containers (somewhere like /boot/config/plugins/scripts) and then mount that directory in both the Plex Media Server and the ffmpeg docker to something like /scripts/. From there, in Plex Media Server, I would call the script with /scripts/myscript.sh, and then in the script itself I would use some sort of docker command to call ffmpeg within the docker container? For instance:

      docker run dockerhubRepo/ffmpeg -i localfile.mp4 out.webm

      Am I on the right path here, or am I way off? My first thought was to just install ffmpeg onto the PMS docker container, but my understanding of docker containers is that when updated, they are completely wiped and reinstalled, which is why all the configs are saved in /appdata/, because that isn't touched when the image gets nuked. Obviously the script would be more complicated than that, but you get the idea. Any help would be appreciated. EDIT: OK, I figured it out. I've been testing it and I am getting an error 127 (command not found) on my Plex server. In testing I've learned that I couldn't get paths with spaces passed through to the docker container, which is a problem because of Plex's naming convention. There is no way for me not to have spaces or uppercase letters in my folder structures... I guess I'm back to square one here. I really wish I could just install ffmpeg directly onto the unRAID server. FINAL EDIT: I solved this by creating my own docker container. It's just the official plexinc/pms-docker image with ffmpeg installed as well. This is an automated build linked directly to the plexinc/pms-docker image, so you won't be reliant on me to update the container; a rebuild is triggered automatically whenever the official Plex docker is updated. https://hub.docker.com/r/herdo/plexffmpeg/
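
      In case anyone wants to attempt the wrapper approach anyway, a rough sketch of a space-safe version looks like this (the image name and encode flags are placeholders, not my published container):

        #!/bin/bash
        # Plex passes the recording's full path as $1; quoting everything keeps spaces intact.
        in="$1"
        dir="$(dirname "$in")"
        name="$(basename "$in")"
        # Mount only the recording's directory into the ffmpeg container and transcode there.
        # Note: running this from inside the Plex container would also require the docker
        # CLI and /var/run/docker.sock to be available in that container.
        docker run --rm -v "$dir":/work some/ffmpeg-image \
            -i "/work/$name" -c:v libx264 -c:a copy "/work/${name%.*}.mkv"
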
  11. Squid, thank you so much for the reply, and I'm sorry it took so long to get back to you. After looking at my share settings I can see what you're talking about. I'm not sure how I didn't realize this, as I intentionally set it up myself, haha. Thanks again!
  12. Can someone explain to me what exactly this is, and why one of my users has "3" under the "write" column?
  13. Hey thanks for the reply. That makes perfect sense. I've been testing it periodically and it's still all good and my parents can stream without any buffering.
  14. OK, I've solved it. Disabling the "Static" IP setting in unRAID under "Network Settings", and then assigning the IP as static in my router instead, seems to have completely solved the packet loss issue. I just ran a test of 500 fifty-byte packets and had 0 lost packets. The speeds I'm getting to the unRAID server are still lower than expected, anywhere from 1/2 to 1/3 of what I get on my desktop, but at least the Plex stream should work now. The latency is also quite a bit higher. It's also possible the CLI version of speedtest tests differently. I'm not sure why this solved the issue; I just saw it mentioned on a thread where someone was having a similar problem. I'm guessing my router's DHCP server was getting confused? Thanks again for the help, bonienl. EDIT: Nevermind about the latency and speeds being worse. I must have reset the server settings and wasn't comparing the same speedtest server between my desktop and server tests. They are testing about equal now.
  15. I had another NIC on my motherboard so I just tested that. I thought I fixed it because it seemed to work fine at first, but after a longer ping test I can see I am definitely still dropping packets. 24% packet loss as of the latest test. Is it possible the CPU can't keep up? I don't know how much the CPU would affect something like this, but it's definitely the weak link in my server. It's a Pentium G4400.
  16. Thanks for the reply bonienl. I updated the post while you were typing this reply. I've tried swapping the port and ethernet cable with no luck. I just tried pinging my server from my desktop and had 30% packet loss. However, I thought I'd ssh into the unRAID server and try pinging my desktop, and the packet loss was very minimal; 3% after about 3 minutes. I don't think that is considered "acceptable", but it's certainly better than 30%. I'll check out the Tips and Tweaks plugin and reboot my equipment. Thanks again.
  17. I'm not sure if this is a problem with my unRAID server or something else, but I figured I'd ask here as this forum has always been very helpful.

      I've been having trouble streaming Plex to my parents' house (2 miles away). I have an asymmetric gigabit connection (1000/35), and my parents have ADSL (15/1). I've run various speed tests from my desktop and found I should have no problem meeting the 720p 4mbps stream requirements, as I consistently get 30mbps upload, with maybe 20mbps at the slowest. This evening I went to my parents' house to check their connection. They are consistently getting 10mbps+ and have no problem streaming full HD Netflix, Hulu, and other services from the same device running Plex (a Samsung Smart TV). I then decided to run some iperf3 tests. I found that running iperf3 from my desktop to their house, I was getting 8mbps - 10mbps. Again, this should be fine, and it seems to be maxing out their downstream connection. After several hours I left their house puzzled, with no idea what the problem could be.

      When I got home I had the idea of running iperf3 directly on the unRAID server instead of on my desktop. This told a very different story. Now I was only getting about 500kbps to their house, with spikes maxing out around 700kbps. I decided to download the speedtest plugin onto my unRAID server. Again, this showed something very different from a speed test run on my desktop. On my desktop, I consistently get about 800/25. I just ran 5 speed tests in quick succession and they were all under 20ms latency, 750 - 850 down, and 20 - 30 up. I did the same thing on the unRAID server, and here are the results:

      2018-01-14 04:24 Sun  Speedtest.net (Phoenix, AZ)  23.86 km  199.151 ms  226.18 Mbit/s down  4.11 Mbit/s up  http://www.speedtest.net/result/6964850039.png
      2018-01-14 04:12 Sun  Speedtest.net (Phoenix, AZ)  23.86 km  539.298 ms  44 Mbit/s down  3.29 Mbit/s up  http://www.speedtest.net/result/6964828188.png
      2018-01-14 04:11 Sun  Speedtest.net (Phoenix, AZ)  23.86 km  23.062 ms  53.86 Mbit/s down  6.11 Mbit/s up  http://www.speedtest.net/result/6964825532.png
      2018-01-14 04:10 Sun  Speedtest.net (Phoenix, AZ)  23.86 km  61.116 ms  44.67 Mbit/s down  4.43 Mbit/s up  http://www.speedtest.net/result/6964823300.png
      2018-01-14 04:05 Sun  Speedtest.net (Phoenix, AZ)  23.86 km  23.684 ms  274.93 Mbit/s down  6.03 Mbit/s up  http://www.speedtest.net/result/6964815609.png

      As you can see, it's not only very slow, it's wildly inconsistent. This got me thinking, and I realized that almost every time I ssh into the unRAID server I notice "lag". I'll go to type a command and mid-typing the cursor will lock up, and then a few seconds later the text quickly appears on the screen. I assumed it was just a process running in the background occupying the CPU temporarily, but now I realize it may be something more sinister. I have a MikroTik hEX router, which includes a handy little ping tool, so I pinged the unRAID server. As I suspected, there is some serious packet loss from the router to the unRAID server. I've attached an image showing how bad it is. It isn't rhythmic at all, just random packet loss from what I can tell. It's upwards of 40% sometimes.

      I just upgraded to this gigabit connection about a week ago, which is when I noticed this problem, but I'm thinking that's just a coincidence. The modem was upgraded to a DOCSIS 3.1 modem, and then the router was upgraded a few days ago. I first noticed the issue sometime between getting the new modem/gigabit service and getting the new router. I set my parents up on Plex right around the time I got the new modem, but I've noticed inconsistent speeds for months, maybe more. For instance, sometimes syncthing will pull down files from my remote server at 18Mbps, and sometimes at 200kbps, and like I said, I first noticed that months ago. The problem with syncthing would then somehow fix itself the next day, but then I'd usually notice very low download speeds again soon after.

      So I guess my reason for posting this is to get advice on where to start. I'm not sure why only my unRAID server would be dropping packets when everything else on the network is working fine. My desktop and the unRAID server are both plugged into the same switch in my office and are sitting about 8 feet away from each other. I'd really appreciate any ideas at all. Any other tests/diagnostics I could run? Has anyone run into a problem like this before? If more information is needed, please just let me know. EDIT: I just realized I should try swapping the port on my switch. I tried a different port, and a different cat5 cable, with no luck. Still getting packet loss. EDIT 2: I've solved the issue. My comment below explains what I did to fix it.
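
      For anyone who wants to reproduce the tests, their basic shape is below (the addresses are placeholders for your own LAN):

        iperf3 -s                           # on the unRAID server
        iperf3 -c 192.168.1.10              # from the desktop: desktop sends to the server
        iperf3 -c 192.168.1.10 -R           # reverse mode: the server sends to the desktop
        ping -c 100 192.168.1.10 | tail -2  # summarizes packet loss over 100 pings
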
  18. Here's a screenshot. I understand why "unRAID" as well as "UNRAID (File Sharing)" are there, but I don't understand why there are two "UNRAID (File Sharing)" entries. When clicking on them, they seem to resolve to "Windows shares on unraid" and "Windows shares on unraid.local". Is this normal? Which one is the correct one to use? Like I said, this might not be unRAID specific, but I've made samba shares outside of unRAID before and I feel like I've never had this problem. The OS I'm accessing this from is Solus with the Budgie desktop, but I've noticed the same thing on Ubuntu, as well as Windows 10.
  19. Ok great, thanks for the reply. I'll check out the Linuxserver Plex thread. I somehow didn't even think to check there.
  20. Sorry if this is not the appropriate sub-forum for this, but seeing as how it pertains to Plex, Docker, and hardware, I wasn't sure exactly where to put it. The new Plex Pass DVR feature seems pretty neat and I was considering giving it a try. I started to think about the logistics of getting it set up, but then realized it might not be as simple as Plex makes it out to be, considering I'm using unRAID and a docker container. Plex currently supports multiple TV tuners (both USB and PCIe). I know I'll need one that works under Linux, obviously, but I'm wondering if I will face any additional hurdles because I'm using docker for my Plex server instance. The idea is that I should just be able to plug a USB TV tuner into my unRAID server, connect any digital antenna, and then my unRAID server should recognize it. From there I can go into the Plex webgui and activate the TV tuner. Will this work under unRAID at all? Has anyone tried this out yet? I know the feature is pretty new, but I'd love to get some feedback if anyone has messed around with this feature under unRAID yet.
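
      For anyone else looking into this: if the tuner's Linux driver exposes it as /dev/dvb on the host, handing it to the container should be the standard docker device flag. A sketch (everything besides the --device flag is abbreviated; your volumes, ports, and env will differ):

        # Pass the whole DVB adapter tree into the container (driver-dependent).
        docker run -d --name plex \
            --device=/dev/dvb \
            plexinc/pms-docker
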
  21. Oh that makes sense, thank you. I'll run another parity check tonight.
  22. Hey, thanks. The settings you gave me didn't make any difference with the diskspeed script, but maybe they will with the actual parity check. I'll play around with it some more. trurl, yeah, this is why I use key pairs.
  23. Thanks for the replies, you two. trurl, I've included the diagnostics I took last night. I looked them over and didn't see anything interesting. Don't mind the morons trying to brute force my server; I had port 22 open for a few hours to test something and they came pouring in, haha. dikkiedirk, thanks for linking that script. wget and curl failed me (I assume due to the forum requiring login credentials for downloads), so I had to use scp. All the disks seem to be performing just fine, at about 137MB/sec on average. I've included the diskspeed.html graph file if you want to take a look at it. I guess it's possible I was doing something weird during the parity check this week. I think I might shut off everything that could potentially access the disks tonight and run another parity check. So far it looks like this might just be a one-off situation. Thanks again. unraid-diagnostics-20170323-0128.zip diskspeed.html
  24. My parity checks have always clocked in at 129 MB/sec. It's always been surprisingly consistent, but this week's parity check has me a bit worried. It took a bit longer and clocked in at 108 MB/sec. Is this indicative of an imminent drive failure? The only things I've done differently recently are moving Plex Media Server onto the unRAID server (it was on a standalone server before) and adding DDclient, but those are located on my cache drive, so I doubt that is causing this. I don't think I was accessing the drives at the time, but even if I was, I've looked back and the speed has never been anything but 129 MB/sec, even when I know I was accessing the drives. Any ideas?
  25. Doh! Nevermind. I mistyped that command several times but have it working now. I was able to clear out 5+ GB of log files from Couch Potato.