
Herdo

Members
  • Posts

    101
  • Joined

  • Last visited

Everything posted by Herdo

  1. As I said in the title, I tried to replace my cache drive, as my current one has been showing some SMART errors. I followed the instructions, which say a "btrfs device replace will begin", but that never happened. I was left with a blank, unmounted drive that needed to be formatted. I formatted it, but it still didn't copy over the old cache drive's data. Now I'm stuck, because trying to remount my old cache drive tells me it will overwrite all data on the disk when I start the array. What do I do now? EDIT: Never mind the part about not being able to remount my original cache disk. I realized what I did wrong and was able to remount the old disk. I'm still not sure how to proceed with replacing the drive, though, as the instructions in the FAQ don't seem to work. EDIT 2: Never mind again. I saw that in 6.9 this feature doesn't work automatically, so I followed the instructions to do it through the command line and it worked perfectly!
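     Follow-up to EDIT 2, in case anyone else hits this: the command-line replacement boils down to something like the sketch below. The device names and mount point are placeholders, not my actual setup; confirm yours with `btrfs filesystem show` before running anything.

     ```shell
     # Sketch of a manual btrfs cache-drive replacement.
     # /dev/sdx1 = old (failing) cache device, /dev/sdy1 = new blank device.
     # Both are placeholders -- check `btrfs filesystem show` for real names.
     btrfs replace start /dev/sdx1 /dev/sdy1 /mnt/cache

     # Poll until this reports the replace has finished.
     btrfs replace status /mnt/cache

     # If the new drive is larger, grow the filesystem to use the extra space.
     btrfs filesystem resize max /mnt/cache
     ```

     The replace runs in the background, so the status command is worth re-running until it says it completed.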
  2. Yes. I'm just saying, limit the scope of exposed ports. If I understood your post correctly, you essentially opened every port on your router, from 1 - 65535. Instead, designate one port: src port 34854 - 34854 and dst port 34854 - 34854, as an example. Then do the same in Deluge: change it from "use random port" to 34854, matching your router. Again, that's just a random port I'm using as an example; you can set it to whatever you want.
  3. Exposing a Docker container to the internet isn't any less safe than simply exposing Deluge to the internet through an open port. That being said, no, that's not correct. You do not want to open every port to the internet. In Deluge, select a port (or a small range if you prefer, maybe 5 - 10 ports) and open only those. Then ensure nothing else will use those ports. What you've essentially done is tell your router to accept all/any traffic from anywhere and forward it to your unRAID box. This is very bad. You want to fix that immediately. EDIT: Also, in case you weren't aware, ports 0 - 1023 are what are known as "well-known ports", and those should be avoided. I'd just pick something in the tens of thousands.
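     A quick sanity check for picking a forward port, as a sketch (34854 is the same arbitrary example as above): it should sit above the well-known range and still be a valid 16-bit port number.

     ```shell
     #!/bin/sh
     # Sanity-check a chosen forward port: avoid the well-known range
     # (0-1023) and stay within the valid 16-bit range. 34854 is an
     # arbitrary example, as in the post.
     PORT=34854

     if [ "$PORT" -gt 1023 ] && [ "$PORT" -le 65535 ]; then
         echo "port $PORT is fine to use"
     else
         echo "pick a different port"
     fi
     ```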
  4. I just had to make a change to crontab because an old script was interfering with some recent changes I had made. Previously, "crontab -l" displayed this:

     # If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system. If a script fails, run-parts will
     # mail a notice to root.
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null

     I found the old script located under /etc/cron.d/root, so I used the "replace crontab from file" function with "crontab root". This allowed me to use "crontab -e" to remove the old script and save. However, now when I use "crontab -l", it only displays the "root" file's crontab. It looks like this:

     # Generated docker monitoring schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
     10 03 * * * /boot/config/plugins/cronjobs/medialist.sh >/dev/null 2>&1
     # Generated system monitoring schedule:
     */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     # Generated mover schedule:
     30 0 * * * /usr/local/sbin/mover &> /dev/null
     # Generated parity check schedule:
     0 3 1 * * /usr/local/sbin/mdcmd check &> /dev/null || :
     # Generated plugins version check schedule:
     10 */6 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
     # Generated speedtest schedule:
     0 0 * * * /usr/sbin/speedtest-xml &> /dev/null
     # Generated array status check schedule:
     20 0 * * 1 /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null
     # Generated unRAID OS update check schedule:
     11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
     # Generated cron settings for plugin autoupdates
     0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1

     I guess I just want to make sure this is OK and that it isn't going to mess anything up. Obviously the "root" file's crontab was working even though it wasn't loaded, so I'm guessing the hourly/daily/weekly/monthly scripts will still work, but I don't know. Am I correct in assuming crontab is just used to manage and display cron jobs, and that they will run regardless of which crontab file is loaded? EDIT: "crontab -d" and then a reboot reverted the crontab to the default settings.
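     For anyone untangling the same thing later: cron picks up drop-in files like /etc/cron.d/root on its own, separately from the per-user crontab that "crontab -l" shows, which matches what I saw (the "root" file kept running without ever being "loaded"). A rough sketch of the commands involved, assuming standard crontab flags; unRAID's cron flavor may differ slightly:

     ```shell
     # The per-user crontab and /etc/cron.d/ drop-in files are read
     # independently by cron; editing one does not touch the other.

     crontab -l                      # show the currently loaded per-user crontab
     crontab -l > /tmp/crontab.bak   # back it up before experimenting
     crontab /etc/cron.d/root        # replace the loaded crontab from a file
     crontab -e                      # edit the loaded crontab in place
     ls /etc/cron.d/                 # drop-in files cron also runs on its own
     ```

     The backup step is the important one: a bad "replace from file" is much easier to recover from if the old crontab is sitting in a file somewhere.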
  5. I just bought an E3-1275v6 for my Supermicro X11SSM-F, upgrading from a G4400. That being said, I'm planning on selling this and upgrading to a Ryzen 9 3900x or 3950x, depending on how much I want to spend when the 3950x launches. I've got two VMs running currently. Both are running Ubuntu Server 18.04; one with Wireguard/Deluge and the other with a highly customized Feed the Beast Minecraft server. Both have 1 CPU and 1 thread (the same pair), as I read I should keep a CPU and its thread together. Is this true? I've noticed in the CPU pinning settings that I can assign the same CPU/thread to two different VMs. Is this a good or bad idea? The reason I ask is that Wireguard and Deluge can really hammer those 2 CPUs when they are actively downloading, but that only happens maybe once a day or every other day, for 15 - 20 minutes. I think both the Minecraft server and the Wireguard/Deluge server would greatly benefit from having access to 4 CPUs (2 cores and 2 threads). Like I said, for 95% of the day it would mostly be the Minecraft server utilizing the CPUs, so I don't think they'll be fighting over resources too much. Thanks in advance.
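     For context, this sketch is how I've been checking which logical CPUs are core/thread siblings, so the pinned pairs actually land on the same physical core. The sysfs paths are standard on Linux; the pairing printed depends on the machine.

     ```shell
     # List which logical CPUs share a physical core (hyperthread siblings).
     # On a 4-core/8-thread chip like the E3-1275 v6, this typically prints
     # pairs such as "cpu0: 0,4", "cpu1: 1,5", and so on.
     for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
         echo "$(basename "$cpu"): $(cat "$cpu"/topology/thread_siblings_list)"
     done
     ```

     Pinning a VM to one of those printed pairs keeps the vCPU and its sibling on the same physical core.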
  6. Thank you. It's about to finish with the post-read, but I think I'll just do one at a time. I'm not in any huge rush or anything. Thanks again for the help!
  7. I just got myself 2 more 4TB Ultrastar drives and they are currently preclearing. Once this is done, what's the best way to go about adding these? I'm adding a second parity drive and another (5th) data drive. Should I add them both to the array at the same time, or one at a time? If one at a time, in which order makes the most sense? I'm trying to avoid doing 2 parity rebuilds if possible, but I'm not sure if that is an option. I know adding the second parity drive is going to need a parity rebuild, but I believe adding another data drive will as well. Thanks in advance!
  8. The Intel 4xxx series is no joke when it comes to single-core performance. I still have an i7-4790k that I refuse to upgrade, because for my gaming machine it's hard to beat. This is really going to come down to your use case. The two I'd be deciding between are the 4770k and the Threadripper. Generally, if all you're doing is running some dockers and transcoding through Plex, I'd say go for the 4770k, although it sounds like you're using this for more than just a media server. I'm kind of in the same boat. I literally just bought (like two weeks ago) a new Xeon E3-1275 v6 processor, and I think I'm going to sell it and upgrade to a Threadripper 2950x. Previously I had a G4400, and it worked wonderfully for sonarr/radarr/plex/syncthing/etc, but I've started to virtualize some stuff and I'm already wanting more than 4 cores. Like you, I've got a Minecraft server running on a VM, as well as a VPN and Deluge running on a second VM, and I'm realizing the need for something beefier. That being said, if you aren't running any of this under a VM, the 4770k is probably plenty.
  9. I know there are plenty of guides on doing this, but I'm just wondering whether simply specifying a tag at VM creation, and then mounting that inside the VM, is the proper way to do it. The reason I ask is that, generally, you never want one disk mounted under two separate systems, correct? Doesn't that just guarantee filesystem corruption? Maybe I'm not fully understanding the process here, but after reading several guides I'm a bit worried about following this advice blindly. I'm trying to mount all of my shares, so is the best way to specify each one separately, e.g. /mnt/user/Movies with tag "Movies"? Or can I just do /mnt/user/ with tag "shares"?
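     For reference, this is roughly what the guest-side mount looks like for a share exported with a 9p tag; the tag name and mount point below are examples, so match them to whatever tag is set in the VM config. On the corruption worry: this path goes through the host's filesystem layer rather than giving the guest raw access to the block device, which is why it isn't the two-systems-on-one-disk situation.

     ```shell
     # Guest-side mount of a VM share exported with the 9p tag "shares"
     # (tag and mount point are examples -- match the tag in the VM config).
     mkdir -p /mnt/shares
     mount -t 9p -o trans=virtio,version=9p2000.L shares /mnt/shares

     # Or persist it across reboots with an /etc/fstab entry:
     # shares  /mnt/shares  9p  trans=virtio,version=9p2000.L,_netdev  0  0
     ```

     A single tag pointing at /mnt/user exposes every share under one mount, while one tag per share gives finer control over what each VM can see; both are valid.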
  10. I have noticed the problem with my Samsung smart TV (UNJS9000) which has a wired connection. I haven't noticed it yet on my Amazon Fire TV upstairs which is wireless. I'm going to start watching stuff through the Plex desktop client to see if it has the same problem.
  11. Yep, it doesn't seem to happen with your typical 23 minute or 43 minute episodes but I have a few shows that run about 50 minutes and it happens right at about the 45 minute mark. I'm using 6.1.9 by the way.
  12. Hey, thanks for the advice. All of my disks are already set to "Never" for the spin down delay. Should I be doing something different? EDIT: Whoops! Just checked, and apparently I only set the parity drive to "Never"; the others were at "Default". I could have sworn I checked that myself, but I think I only checked "Disk Settings", not the individual disks themselves. Although, if I have "Default Spin Down Delay: Never", shouldn't the individual disks set to "Default" be using "Never"?
  13. I've got PMS running on a separate machine, and I've got all the media stored on my unRAID server. I plan on eventually moving PMS to my unRAID server when I upgrade the CPU. Everything has worked fine so far, except for this one problem. Every so often, Plex will randomly "pause" the media. Nothing actually freezes or crashes; it just pauses. Clicking the "jump back 15 seconds" button immediately starts the media playing again, and it then plays right on through where it stopped without any problem. It seems to happen more often towards the end of longer episodes, like around the 48-minute mark, usually a minute or two before the episode is over. It just happened again, and I checked the logs to see if maybe something was interrupting the playback, but nothing has been logged for a few hours. The reason I'm asking this here is that I've changed nothing with my PMS install other than switching the libraries from local directories to the unRAID user shares. Maybe it's some sort of network issue? Everything is connected via a wired 1Gbps network. EDIT: Oh, and media bitrate/size doesn't seem to have any effect. This happened once while playing a 20Mbps movie, and has happened several times while playing 3Mbps SD television shows, all of which are direct streaming (no transcoding). And to be clear, this isn't stuttering or buffering; it just pauses, as if I pressed the pause button.