Everything posted by volcs0

  1. I have a regular SSD as my cache drive right now. One of the things I've read I can do to speed up Plex is to put its metadata on its own fast SSD, so I'm adding an M.2 to my setup. Should I just make this my new cache drive for everything? Should I set it up as a separate drive with only appdata/plex on it? Or should I put the whole appdata folder on it (and leave the cache drive for its Mover functions)? Thanks for the advice. Diagnostics attached, in case they matter. EDIT: I put an M.2 in there and added it as a single-drive cache pool, formatted as XFS. I didn't realize this was an option in 6.9. I created a new share called plexdata, set it to "preferred" on the new M.2, used rsync to copy the Plex metadata to this new folder, and put the new appdata path in the Plex docker template. This seems to work well - the interface is definitely snappier, both locally and remotely. As a side note, I first used an SFTP program to copy the plex folder over to the M.2 and it broke everything; I had to go back and use rsync to preserve permissions and dates/times. tower-diagnostics-20220126-1147.zip
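      For reference, the copy was along these lines - a sketch only, with placeholder share and container names, so adjust the paths for your own setup:

         # stop Plex first so the metadata isn't changing mid-copy (container name is an example)
         docker stop plex

         # -a preserves permissions, ownership, and timestamps - exactly what the SFTP copy lost
         rsync -avh --progress /mnt/cache/appdata/plex/ /mnt/user/plexdata/plex/

         # then point the container's /config path at the new location and start it again
         docker start plex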
  2. Is there a way to run Invidious as a private instance? I have it set up through a reverse proxy, but I'd like to avoid having any random user using it (and chewing up my bandwidth). Is there a way to restrict it to registered users (and then turn off registration)? Thanks.
  3. I spent many days configuring the binhex-sabnzbdvpn docker container to work with 6 other docker containers. I went into the settings to change one thing, made a small mistake, and the container failed to recreate. Usually when this happens, the container is still in the list - it's just not started. In this case, it is gone from the list entirely. I do not have a record of all of the settings I used - and there were a lot of them. Are these settings saved anywhere? Are they in a log somewhere? Anything I can do short of recreating everything from scratch? I'm posting this in General, since I've had this happen on occasion with other dockers, so I do not think it is specific to this particular container. Thanks.
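      In case it helps anyone else who lands here: the settings from the container's "Edit" screen appear to live in a per-container template XML on the flash drive, so they can usually be recovered from there. A sketch (the filename follows the my-<container name>.xml pattern on my system - check your own listing):

         # Unraid keeps one XML template per container you have configured
         ls /boot/config/plugins/dockerMan/templates-user/

         # the file holds every port, path, and variable from the template
         cat /boot/config/plugins/dockerMan/templates-user/my-binhex-sabnzbdvpn.xml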
  4. I've successfully run appdata backup in the past, but today when I launched a manual backup, it stopped the dockers, ran - but didn't back up anything - and then restarted the dockers. I confirmed the paths and made sure nothing was excluded. My appdata folder is 400GB and a backup usually takes several hours to run. The status gave this message: "Backup/Restore Complete. tar Return Value: 0" My unRAID server is otherwise working great, and I am up to date on both unRAID OS and CA Backup / Restore Appdata. Any thoughts on how to troubleshoot this?
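      One quick sanity check is to look at what, if anything, the run actually wrote to the backup destination - a sketch, with a placeholder path for wherever CA Backup is pointed:

         # newest backup set first
         ls -lht /mnt/user/backups/appdata/ | head

         # list the archive contents; an empty run will show little or nothing
         tar -tvf /mnt/user/backups/appdata/<latest-date>/*.tar* | head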
  5. To follow up on this, the scans took 3 days to complete.
  6. Thank you - this is very helpful. I will investigate further.
  7. Thank you for this. I don't have backups of the files that aren't mission critical - FLACs, for instance, or the .nfo and .jpg art files for movies and TV shows. I have several backups of things like family photos and videos (and none of the dups are among those critical files). I am doing a binary compare with Czkawka, confirming that the path for each file is identical on both disks, and then deleting the copy on the higher-numbered disk. For the most part, it has been whole albums that were duplicated, which makes me think this is some combination of my SFTP client, Picard, and how it moves music files to another share. Maybe I should take a closer look at how the shares are set up, or maybe the mover is doing something wonky with the files. I assume that any time I copy a file to a share - whether from an SFTP client or via SMB - it goes first to the cache drive and is then handled by the mover, right? Thanks for your help with this.
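      The per-file check boils down to something like this - a sketch with an example path; cmp is silent when the two copies are byte-identical, and only then do I remove the disk6 copy:

         cmp "/mnt/disk5/Music/Artist/Album/01 Track.flac" \
             "/mnt/disk6/Music/Artist/Album/01 Track.flac" \
           && rm "/mnt/disk6/Music/Artist/Album/01 Track.flac"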
  8. I ran the CA Fix Common Problems plug-in and it found thousands of duplicate files, almost exclusively on disk5 and disk6 of my 8-disk array. I've been going through and deleting them off of disk6, since I read that the copy on the lowest-numbered disk is the one that will be used (and any others will be ignored). I do not do any work at the disk level, so I'm trying to track down the source of these duplicate files. The vast majority of the duplicates are music files. My normal way of organizing music is to upload the files to a temp share on unRAID using an SFTP client, then run MusicBrainz Picard to tag and move the music files to my Music share. All I can think of is that the files are getting put on separate disks somewhere in this process. Does any of this make sense? My diagnostics are attached, if they are helpful in any way. An old thread is linked below - it wasn't that helpful in finding a source for this problem, but I did just post how I am dealing with the duplicates. tower-diagnostics-20220103-0914.zip
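      To get a full list of the clashing paths beyond what the plug-in reports, something along these lines works from the console - a sketch comparing the relative paths on the two disks:

         # list every file path relative to each disk, then keep the ones present on both
         (cd /mnt/disk5 && find . -type f | sort) > /tmp/disk5.txt
         (cd /mnt/disk6 && find . -type f | sort) > /tmp/disk6.txt
         comm -12 /tmp/disk5.txt /tmp/disk6.txt > /tmp/duplicate-paths.txt
         wc -l /tmp/duplicate-paths.txt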
  9. I know this reply is 3.5 years later, but I was searching for a solution to this problem. CA Fix Common Problems found hundreds of duplicate files on two of my disks. I never work at the disk level - only the share level - so I do not know how this happened. I do a lot with MusicBrainz Picard, and a lot of the dups were FLAC files, so maybe that's an issue? In any case, I needed a way to find and delete these duplicates. I found the app Czkawka, which let me do a binary compare and then delete the duplicates. The program suffers from stack overflow errors if you try to compare too many files at once, but once I figured out the sweet spot, it's been easy to search out these thousands of dups. I'll work on finding the cause, but I thought I would post this workaround so the problem can be fixed without manually going through everything.
  10. How long should a scan take? I'm scanning /mnt/user for the first time, and it's been running for 24 hours. I see it in the process list, and there's nothing interesting in the Docker log except "starting scan." My unRAID array is about 20TB. Thanks.
  11. My unRAID tower has gotten pretty wonky lately - lots of slowdowns, and Plex not working well or sometimes not at all. I looked through the error logs and found a few entries that I do not understand. For example:
      Jan 2 02:18:52 Tower kernel: Plex Media Scan[17833]: segfault at 14d191b3c030 ip 000014d194cbff80 sp 00007ffedcaf45b0 error 4 in ld-musl-x86_64.so.1[14d194c76000+53000]
      Jan 2 02:18:52 Tower kernel: Code: 0f b6 49 06 48 c1 e1 08 48 09 c1 41 0f b6 41 07 48 09 c8 eb 02 48 98 48 8b 6c 24 28 45 85 e4 74 3a 41 0f b6 4a ff 48 8d 0c 49 <0f> b6 54 4d 00 c1 e2 18 0f b6 74 4d 01 48 c1 e6 10 48 63 d2 48 09
      There are also lots of entries about fan temperature and fan speed - those must be related to a plugin I installed. Parity check ran last night and found 400 errors, which is unusual - usually there are zero. My main use is Plex and Emby. I use NGINX as a reverse proxy, and I also have SABnzbd/Radarr/Sonarr set up. I have a few other containers running, but those are the main ones I use, and I don't normally run any VMs. I have not upgraded to 6.9.2 yet - I don't want to break my NVIDIA GPU setup (for Plex transcoding), and I remember getting that going on 6.9.1 was a little painful - maybe it's easier now. Thanks for the help. tower-diagnostics-20220102-1228.zip
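      For anyone digging into the same thing, the segfault entries can be pulled straight out of the syslog from the console - a quick sketch:

         # show every segfault the kernel has logged since boot
         grep -i segfault /var/log/syslog

         # count how often it has happened
         grep -ic segfault /var/log/syslog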
  12. Sorry for not following up - yes, I took out the cache drive, blew the dust off everything, and put it back in. It showed up, I backed everything up this time, and it seems to be working. But I'm wondering if I should replace it (1TB Samsung 870 EVO). The only warning I get is about the cache drive being hot - then the temperature goes back down again. This happens during any heavy use. Thanks for your help.
  13. Found my unRAID with the array not started - after a reboot at some point today when I wasn't home. I started the array and found that Docker services weren't starting. Then I found that appdata is missing - the share and folder are just gone. Diagnostics attached. Any help is appreciated. I have an appdata backup from September 6, so not that current, but not nothing... Thanks for the help. Edit: It seems my cache drive is no longer detected. I need to figure out what happened to it - presumably that's the problem. tower-diagnostics-20211201-1957.zip
  14. I know this is an old thread, but I am having trouble sorting this out. Here is the command that's running: /usr/local/sbin/shfs /mnt/user -disks 511 -o noatime,allow_other -o remember=330 It's using up almost all my CPU, and I don't have anything set up like what's described above. How can I track down the source of this problem?
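      Since shfs is the user-share filesystem itself, high CPU there usually means something is hammering /mnt/user. A rough way to see what, from the console - a sketch only (lsof can take a while on a large tree, and the tools need to be present on your build):

         # count open files under the user shares, grouped by process name
         lsof +D /mnt/user 2>/dev/null | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn | head

         # watch which container or process spikes alongside shfs
         htop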
  15. Just to be clear - you have a backup unRAID without parity and cache? That sounds easy enough...
  16. I have 3-2-1 backups of my very-most-important stuff - family photos, videos, etc. - with multiple off-site redundant copies of those precious items. I would like to also have a backup of my music, TV, and movies. Yes, I know that many don't back up their media, preferring to rely on the *arr software to reconstitute the library after a crash, but my stomach is not that strong - I've spent 20 years curating and giving lots of love to these collections. I have enough external hard drive space to back most of it up right now (~25TB). What is the best way to do this - and to sustain it going forward? These are the options I can think of: 1. Back up to the external hard drives connected via USB to my unRAID. 2. Set up a spare Mac Mini as an endpoint for a continuously running backup on the home network - I'm not sure of the best way to do this, but maybe mount the drives as network shares via SMB and have backup software run daily incremental backups. 3. I don't want to set up a completely separate unRAID server for mirroring, as others have suggested - mainly because I think it is overkill and I don't see the point in the expense of the parity and cache drives. 4. I don't want to use cloud backup - it would take months to seed the first one, and it would be too expensive. Ideally, once a drive is filled, I could take it offsite (e.g., to my office) and leave it there. Other thoughts? I'll sleep much better once this is done. Thanks for the advice.
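      For option 1 or 2, the incremental part can be as simple as a scheduled rsync against whichever destination is mounted - a sketch with placeholder share names and an Unassigned Devices-style mount point for the USB drive (an SMB mount for the Mac Mini would work the same way):

         # only changed files get copied on each run; --delete keeps the mirror exact,
         # so leave it off if deleted files should survive on the backup
         rsync -avh --delete /mnt/user/Music/  /mnt/disks/backup_usb/Music/
         rsync -avh --delete /mnt/user/Movies/ /mnt/disks/backup_usb/Movies/
         rsync -avh --delete /mnt/user/TV/     /mnt/disks/backup_usb/TV/

      Something like the User Scripts plugin could then run this on a schedule once the approach is settled.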
  17. I had to switch from http validation to duckdns validation because my ISP won't allow forwarding port 80. I set up mydomain.duckdns.org. My config screen for Swag is attached. I set up a few proxy-conf files - for calibre-web, plex, and emby. An example is attached (for calibre-web). This does not work. I tried https://calibre.mydomain.duckdns.org and it hangs and times out. I see that port 443 is forwarding. Nothing updates in the Swag log when I try to connect, if that's helpful. I feel like I'm missing something obvious, but I don't see it. I've tried to add other server_name variables to the calibre-web.subdomain.conf file - like www.calibre.* and calibre.mydomain.* etc. but this doesn't help. Any advice is appreciated - I've been at it all afternoon. Thanks.
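      For my own notes, some checks that should narrow down where the request is dying - sketches only; substitute your real hostname and whichever host port is mapped to SWAG's 443 (since nothing shows in the SWAG log, the traffic may not be reaching the container at all):

         # from inside the LAN, force the hostname to resolve to the unRAID box and hit SWAG directly
         curl -vk --resolve calibre.mydomain.duckdns.org:443:<unraid-ip> https://calibre.mydomain.duckdns.org

         # from outside the LAN (phone hotspot, etc.), confirm port 443 is reachable at all
         curl -vk https://calibre.mydomain.duckdns.org

         # confirm the DuckDNS record points at your current public IP
         nslookup calibre.mydomain.duckdns.org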
  18. Edit: I realized that I cannot do this without forwarding port 80. Since I cannot do that, I switched to trying duckdns validation. While I now see that port 443 is open, I am still not able to get my reverse proxy running; I've asked about this in a new thread here:
      --------
      This is a bit of a complex question. I'm unable to get incoming requests forwarded, and I think it has to do with the way that Comcast/Xfinity's modem/router works (or doesn't work). I'm using the default settings for the docker for port 80 (8080-->80) and 443. I have port 443 forwarded to my unRAID box. I do not have port 80 forwarded. Do I need to? I'm getting this error:
      Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
      Domain: XXXXX.duckdns.org
      Type: connection
      Detail: Fetching http://XXXXXX.duckdns.org/.well-known/acme-challenge/0JQsgWcr6OCovXfDLxU8F4m3U3t_jHOqawZJ1DyVI: Timeout during connect (likely firewall problem)
      Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.
      Do I need to set up port 80 to forward to port 8080 on my unRAID? If so, I think I'm out of luck, as Xfinity's xFi gateway does not allow you to map one port to another. Any advice on how to mitigate these errors is appreciated. Thanks.
  19. Perfect - as you indicated, I removed the VPN_OUTPUT_PORTS, and everything still works. Thank you very much for the help.
  20. I'll look at the FAQ, since I clearly am missing the mark here. I had it all working with Radarr and Sonarr not using the VPN provided by sabnzbdvpn; I just wanted to add a little more security, so I changed things to have them leverage the VPN. But then I couldn't get the Radarr and Sonarr interfaces up until I added those VPN input and output ports. I will read up and learn more about this. Thank you for your patience.
  21. Yes, I ran it from within the sabnzbdvpn console. I'm running several other containers through this one as well (Radarr, Sonarr, Firefox), so I just wanted to make sure that the connection was as secure as possible. I had to enter those ports (e.g., 7878, 8989) in the VPN input ports and VPN output ports to get it to work - is this OK from a security standpoint? My settings are attached. Thanks for all your hard work and dedication. It's very much appreciated.
  22. I'm not sure if I'm leaking DNS info. I just checked (I'm using Torguard, and that IP is a Torguard IP; I used this tool) and this was the result:
      Your IP: 45.128.36.XXX [United States of America AS9009 M247 Ltd]
      You use 4 DNS servers:
      108.162.218.141 [United States of America AS13335 CloudFlare Inc]
      172.70.109.93 [United States of America AS13335 CloudFlare Inc]
      172.70.109.125 [United States of America AS13335 CloudFlare Inc]
      172.70.113.21 [United States of America AS13335 CloudFlare Inc]
      Conclusion: DNS may be leaking.
      Thoughts about this? Thanks.
  23. I followed SpaceInvader One's directions (very similar to other guides on the same subject). I have the binhex-sabnzbdvpn container running and working well. I also have binhex-radarr and binhex-sonarr running and working fine. I wanted to route the radarr and sonarr traffic through the VPN container. So, I followed the directions - changed the network on radarr and sonarr to none and put --net=container:binhex-sabnzbdvpn in the extra parameters line. Then I added the sonarr and radarr ports as ports on binhex-sabnzbdvpn. I restarted everything. It seems to work - when I curl my public IP from sonarr or radarr, it is the VPN's IP, so traffic is being routed through the VPN container. But when I try to get to the WebGUI - unraid-ip:7878 or unraid-ip:8989 - it does not work. It just hangs and won't connect. I can see in the logs that the containers are working - pulling NZBs and passing them to binhex-sabnzbdvpn. But I can't get to the WebGUI. Any suggestions? Thanks. OK - I added the Sonarr and Radarr ports to the VPN_INPUT_PORTS variable in the binhex-sabnzbdvpn docker, and now it works. BUT - the documentation has this warning: "configuring this incorrectly with the VPN provider assigned incoming port COULD result in IP leakage" Is this something to worry about in this use case? Thanks.
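      For anyone following along, the moving pieces look roughly like this in plain docker run form - a sketch only (on unRAID these are template fields, the image and container names and ports are the defaults I'm using, and the required VPN credentials, provider settings, and --cap-add=NET_ADMIN are omitted for brevity):

         # the VPN container owns the network stack, so the Radarr/Sonarr web ports are published here
         docker run -d --name binhex-sabnzbdvpn \
           -p 8080:8080 -p 7878:7878 -p 8989:8989 \
           -e VPN_ENABLED=yes \
           -e VPN_INPUT_PORTS=7878,8989 \
           binhex/arch-sabnzbdvpn

         # Radarr and Sonarr get "Network Type: None" plus the extra parameter below,
         # so all of their traffic rides the VPN container's tunnel
         docker run -d --name binhex-radarr --net=container:binhex-sabnzbdvpn binhex/arch-radarr
         docker run -d --name binhex-sonarr --net=container:binhex-sabnzbdvpn binhex/arch-sonarr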
  24. How do I use a different port than 8080? It's already in use by another container (Swag). I tried changing it in the docker config (Host Port 1). I tried deleting that port and adding it back in. I tried editing sabnzbd.ini and changing 8080 to 8787, but when I restart the container, the config file gets overwritten with 8080. No matter what I try, it keeps using 8080 (and the log says "listening on port 8080"). Do I have to use 8080? Am I just not understanding how to set this up? Thanks.
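      In case it helps someone searching later: when the container is on a bridge network, the 8080 inside the container can stay as it is - only the host side of the mapping needs to change, roughly like this (a sketch; the image name is a placeholder, and none of this applies if the container is sharing another container's network, where there is no separate port mapping):

         # keep the container's internal port at 8080 and change only the host side
         # (in the unRAID template this is the "Host Port 1" field)
         docker run -d --name sabnzbd -p 8787:8080 <sabnzbd-image>

         # the web UI is then reached at http://<unraid-ip>:8787, even though sabnzbd.ini still says 8080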