Posts posted by Yivey_unraid

  1. 4 hours ago, mgutt said:

    Set the network to host or bridge and use the local IP of your unRAID server?!

     

     

    I'm sorry, but now you lost me. ELI5... What container should I set to Host or Bridge, and when?

     

    4 hours ago, mgutt said:

    Or do you mean that you can not open your domain through the local IP? Then NPM probably doesn't listen to Port 80 and 443?! This is a requirement (change unRAID to 5000 / 5001) for local DNS rewrite and IPv6.

    No, locally (and remotely) everything works fine!

     

    4 hours ago, Kilrah said:

    You need to direct it to NPM.

    NPM will need to be on ports 80/443.

     

    For pihole either you enter everything manually in local DNS records or you can make a custom conf in dnsmasq.d that directs the whole domain in one go.

     

    [screenshot]

    THIS IS IT! (I think...) Thank you!

     

    I first tried adding a wildcard domain in the PiHole WEB-UI but didn't get that to work. This above seems to be the solution though! :)

     

    I added a "02-wildcard-dns.conf" file to /etc/dnsmasq.d/ (host path for my PiHole container: /mnt/user/appdata/pihole/dnsmasq.d/).

     

    In that conf I added:

    address=/mydomain.com/192.168.1.4

     

    Then restarted PiHole.

     

    Before I set any of this up, I ran this in the Unraid CLI to see where the URL resolves to:

    nslookup mydomain.com

    and that pointed to my public IP. Same result running:

    nslookup servicesubdomain.mydomain.com

     

    After restarting PiHole and running the same commands, they both come back with 192.168.1.4.
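
    For anyone wanting to replicate this, here is a rough sketch of the whole thing as terminal commands (paths, domain and IP are from my setup above; "pihole" as the container name is just an assumption, use whatever yours is called):

    # write the wildcard rule into the dnsmasq.d folder that is mapped into the Pi-hole container
    echo 'address=/mydomain.com/192.168.1.4' > /mnt/user/appdata/pihole/dnsmasq.d/02-wildcard-dns.conf
    # restart Pi-hole so dnsmasq picks up the new conf
    docker restart pihole
    # check that the domain and a subdomain now resolve to the local IP instead of the public one
    nslookup mydomain.com
    nslookup servicesubdomain.mydomain.com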

     

    So I guess it's working. The subdomains I have set up in NPM show up as normal with a valid SSL cert when surfing to them locally.

     

    The only "downside" is that if I surf to just "mydomain.com" I'm routed to the Unraid UI, since that's the server's IP, insecure with no SSL. The same goes if I surf to any subdomain that's not proxied in NPM.

    It's only on the local LAN, so not a major issue. Surfing to the Unraid UI through the normal IP is equally "open", it just feels more hidden. I guess it's just a feeling... I do have a strong root password. :P

     

    If anyone has any suggestions for making this only work for the URLs in NPM, I'm all ears. Perhaps a wildcard wasn't the right choice. :)
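
    Thinking out loud: if the wildcard turns out to be the wrong tool, I guess the alternative Kilrah mentioned (entering hosts manually) could be done the same dnsmasq.d way, with one address= line per NPM-proxied host instead of the whole domain. A sketch, where the second hostname is just made up as an example:

    # one line per proxied host instead of a wildcard
    printf '%s\n' \
      'address=/servicesubdomain.mydomain.com/192.168.1.4' \
      'address=/anotherservice.mydomain.com/192.168.1.4' \
      > /mnt/user/appdata/pihole/dnsmasq.d/02-local-dns.conf
    docker restart pihole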


  2. 3 hours ago, mgutt said:

    Note: This is not allowed for Nextcloud, Plex, etc.

     

    The easiest method is not to use Cloudflare and use your public IP for your domains. As your public IP is the IP of your router, the traffic would not leave your LAN. This is called NAT Loopback or Hairpinning.

     

    Yes. You need a local DNS server which should not be hosted on your unRAID server, else your complete DNS resolution is dead if your DNS server container isn't running (server reboot etc). This has a very low WAF 😉 In Pi-Hole it's called Local DNS Records, in Adguard Home it's called Filter DNS Rewrites.

     

     

    Thank you for the answer!

     

    I’m aware of the ToS prohibiting non-HTML content. I don’t use Nextcloud, and for Plex I don’t see the need.

     

    I’m running my Pihole on the server at the moment, but I’m looking into building/setting up a PFsense or OPNsense router. That would also host the Pihole (or similar service).
     

    But that’s some time away, and right now I only have my ASUS router.

     

    When setting it up in Pihole, how exactly would that be done? My NPM (and all my services) have the same IP as my server, and I don’t see a way to point Local DNS to a specific port, only to an IP.

    EDIT: Right now I do have a public IP, but my ISP is finicky about it and it looks like they might start charging for it. That’s why I wanted to set up the CF tunnel, so I’m not dependent on it.

  3. Hi! Perhaps this question has already been answered, but I can’t find it; maybe I’m not searching with the right words.

     

    Anyway, thank you for this container!

    I’ve set up NPM and Cloudflare Tunnel with my own Cloudflare SSL certificate. This now works perfectly for all my different containers, but it took some time to troubleshoot (mostly because of my lack of knowledge in the area).

     

    Now I was thinking: instead of all traffic having to go outside my network to Cloudflare and back every time I’m on my local LAN and go to https://myservicename.mydomain.com, I’d like to set it up so that when I’m on my LAN that URL points directly to the service’s local IP without leaving the network.
     

    How do I manage this best? 
    Do I use Pihole local DNS and point to NPM somehow? Or can this be handled directly in NPM?

     

    Sure, I can use the IPs when I’m at home, but it would be nice to just use the same URLs everywhere. 👍

  4. On 12/19/2022 at 5:38 PM, kim_sv said:

    Dear Squid!

     

    As usual, thank you for your great work for this community!

     

    I just have a question regarding having appdata split over multiple pools, and backup thereof.

     

    I have all but one appdata-folder on /cache and only my Plex appdata on a separate pool /cache_plex.

     

    FCP flags this as a problem:

     

    So far I've just ignored it, since the update seems to still be functioning correctly.

     

    The question is, can I leave it as is (and solve the issue of "how to back up my Plex pool" this way), or is this bad practice?

     

    Merry Christmas!

    Sorry for reposting, but I fear this question disappeared due to someone posting their whole log file in plain text. :P

  5. No, sorry! This perhaps should've been two different threads; I just put them in the same one because I thought they were (are) related. But I can see how it causes problems following along. 😬

     

    Originally my "only" problem was these weird streaming issues, with glitches/freezes/pixelation/errors and artwork being displayed with artifacts in the Plex UI. Like this:

    [screenshots of the playback glitches and corrupted artwork]

     

    But it was during my troubleshooting of these issues, and the subsequent installs of multiple different Docker containers, that I got the:

    Error: filesystem layer verification failed for digest sha256

     

    So to hopefully make it clear:

    - The streaming issues occur intermittently, no matter what type of device it's played on, no matter what type of media is played, etc.

    - The sha256 error occurs intermittently during some, but not all, installs/updates of Docker containers. Removing unused volumes in Portainer seems to help.

     

    Since sha256 seems (?) to be a checksum problem, could that point to a network issue?

  6. 5 minutes ago, trurl said:

    Since you have appdata backup of plex, you could try deleting your plex appdata so the container will start fresh with a new appdata and see if that works.

    That is what I've done when spinning up new Plex containers from different repositories. I've tried linuxserver, Binhex, official Plex and a clean Hotio. I installed them plain and added a test folder containing a few different videos in various resolutions and dynamic ranges. Same weird streaming issues in all of them.

  7. 13 hours ago, trurl said:

    me too. And I'm not currently having any problems with it so it seems like it must be something with your setup.

     

    Do you have appdata backup?

     

     

    Yes, I've also tried restoring the appdata from backup. Sorry, I've tried so many things over the past weeks that it's hard to keep track of what I've presented as "tried". I also have support threads about this going in a few different forums, since I'm not really sure what the problem is. It could be hardware, it could be Plex, it could be Docker or the created containers, it could be my network, and it could be.... I don't know, karma?! 🙄

     

    Do you by any chance have any insight into the "Error: filesystem layer verification failed for digest sha256" problem?

    I did switch from a Docker image to a directory last spring, should I try reverting? I guess it shouldn't matter...

  8. 11 hours ago, DarphBobo said:

    Lose faith in Plex, not in anything else.

     

    Plex has become a necessary evil in the world and once you find a version of it that actually works (every update seems to break something else), then leave it alone and don't update again.

    Mm, but I have been using Plex for 10+ years and, generally speaking, have been a happy camper. So I’m willing to fight this, as in my opinion neither Emby nor Jellyfin comes close to competing with the user experience and the WAF.

  9. Does nobody have any suggestions on the Plex streaming issue? I’ve tried multiple things to solve it without any success…

    • I’ve nuked the docker image on the server and reinstalled Plex.
    • I’ve tried multiple fresh installs of Plex using different Docker repositories (binhex, linuxserver, official Plex).
    • I’ve run a memtest86 to make sure my RAM is good, multiple passes. No errors.
    • I’ve changed wall outlets, a desperate move I know…

    I’ve also tried installing Jellyfin and streaming the same files through that, and that has actually been working. I haven’t been able to reproduce the same issues with it. So… Plex does seem to be the culprit. Unfortunately.

     

    Are there some dependencies that Plex has that aren’t included in the Docker containers? Since all the Plex containers I've installed have the same problem, I figure it has to be something else?!

     

    This is the Plex app on my Mac. I also get this sort of graphical error from time to time, I’d say exclusively on the artwork, posters and backgrounds like in this screenshot. Since that’s stored on the server in the appdata library, there must either be something corrupting the file when it’s transferred over the network to the Mac (or any other device I have, they all present the same intermittent issues), or some sort of intermittent file corruption on the cache?! I really don’t get it...

    [screenshot of the corrupted artwork in the Plex app on macOS]

    Please help! I’m losing faith here!

  10. Ok, I managed to install Portainer and deleted a bunch of unused volumes. After that I was able to update the container without any problem.
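
    For reference, I believe the same cleanup can be done from the Unraid terminal instead of Portainer. Just a sketch, and note that it removes every volume not used by a container, so use with care:

    # list volumes not referenced by any container
    docker volume ls -f dangling=true
    # remove all unused volumes - roughly what I did in Portainer, as far as I understand
    docker volume prune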

    What I don't understand though is why the size of the /docker directory is so different depending on where I look:

     

    /mnt/user/system/docker is set to cache:prefer and is on a 250 GB SSD.

    [screenshot]

     

    Checking its size in the terminal.

    [screenshot of the terminal output]

     

    Calculating /mnt/cache/system/docker in the Unraid UI.

    [screenshot of the Unraid UI calculation]

     

    Something doesn't add up?!?
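
    For anyone wanting to compare the same numbers from the terminal, this is roughly what I'm looking at (du adds up the files themselves, df shows what the filesystem on the cache pool reports as used):

    # apparent size of the docker directory via the user share and directly on the cache pool
    du -sh /mnt/user/system/docker /mnt/cache/system/docker
    # used/free space as reported by the filesystem on the cache pool
    df -h /mnt/cache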

     

     

    The streaming issues also persist... :( But perhaps I should nuke and rebuild the /docker directory?

  11. Hi!

     

    I just tried to update binhex-qBitVPN but pulling the latest image gives me this:

    Pulling image: binhex/arch-qbittorrentvpn:latest
    IMAGE ID [679001613]: Pulling from binhex/arch-qbittorrentvpn.
    IMAGE ID [e130c81b086a]: Already exists.
    IMAGE ID [f0c2d8550f0e]: Already exists.
    IMAGE ID [e8b87bc620a7]: Already exists.
    IMAGE ID [7b303ad07582]: Already exists.
    IMAGE ID [e6cff58d3325]: Already exists.
    IMAGE ID [ed093e4b8303]: Pulling fs layer.Downloading 100% of 4 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [0a7e7d1b57dc]: Pulling fs layer.Downloading 100% of 12 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [acdab43f922d]: Pulling fs layer.Downloading 100% of 3 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [a873d56991c3]: Pulling fs layer.Downloading 100% of 11 MB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [bf4455564214]: Pulling fs layer.Downloading 100% of 293 B.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [47be8f93f574]: Pulling fs layer.Downloading 100% of 2 KB.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [9a45b4ff2318]: Pulling fs layer.Downloading 100% of 3 KB.Download complete.Extracting.Pull complete.
    IMAGE ID [d77252f0b421]: Pulling fs layer.Downloading 100% of 383 B.Verifying Checksum.Download complete.Extracting.Pull complete.
    IMAGE ID [d1631fdb6136]: Pulling fs layer.Downloading 100% of 181 MB.Verifying Checksum.
    
    TOTAL DATA PULLED: 192 MB
    
    Error: filesystem layer verification failed for digest sha256:d1631fdb61368a5121f94f4a89dcaa24f36e2d1bd9402729aaccbba2a8403a02

    I've never seen this "Error: filesystem layer verification failed for digest sha256" before, and googling it doesn't give much. A few posts point towards checksum errors and doing a Memtest, suggesting it's more of a HW issue than a SW one.

     

    Any ideas?
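
    In the meantime, what I'm thinking of trying is a manual re-pull from the terminal to see if the last layer verifies on a retry, after clearing out dangling image data in case a cached layer is corrupt. Just a sketch:

    # remove dangling image layers that might hold a corrupt cached download
    docker image prune
    # pull the image again by hand and watch whether the sha256 verification passes this time
    docker pull binhex/arch-qbittorrentvpn:latest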

     

    I actually did a Memtest86 run last night, just prior to this reboot of the server and prior to this error presenting itself. It showed no faults at all over 4 passes.

    [photos of the Memtest86 results]

     

    The thing is that I've been troubleshooting my Plex install for the last month now. That is why I did the Memtest.

    I have these intermittent streaming errors/freezes/glitches/pixelations on everything I stream. Doesn't matter if it's direct play or transcoding. Doesn't matter what container I use (Hotio, Binhex, linuxserver) or if I'm HW or SW transcoding. Doesn't matter if it's played locally or remotely. Doesn't matter what device is used, or if it's wireless or wired. Doesn't matter what version of Plex I'm running.
    More comprehensive explanation on Reddit here.

     

    These streaming issues have driven me crazy, and I'm more and more convinced that it's some kind of HW problem.

    If you have any suggestion regarding that issue I'm all ears.

     

    Plan right now for troubleshooting that is:

    - Exchange the router currently acting as a switch in the office for a 2.5 GbE QNAP switch I had already ordered. See if the router for some reason can't handle the traffic, even though it is light.

    - Remove the Shelly Plug S that is used to measure power consumption of the server. If I really stretch my imagination the Plex issues might have started all the way back in October when I started using the plug... Perhaps it's introducing some weird overtones or something.... Yeah I know, but I'm desperate... 🤷‍♂️

     

    Attached Unraid diagnostics and Plex server logs (confirmed playback issues occur from Dec 27, 2022 13:55:16 and forward). 

     

    Best regards

    Attachments: define7-diagnostics-20230102-2210.zip, Plex Media Server.log.zip

  12. Dear Squid!

     

    As usual, thank you for your great work for this community!

     

    I just have a question regarding having appdata split over multiple pools, and backup thereof.

     

    I have all but one appdata-folder on /cache and only my Plex appdata on a separate pool /cache_plex.

     

    FCP flags this as a problem:

    Quote

    Share appdata set to use pool cache, but files / folders exist on the cache_plex pool

     

    So far I've just ignored it, since the update seems to still be functioning correctly.

     

    The question is, can I leave it as is (and solve the issue of "how to back up my Plex pool" this way), or is this bad practice?

     

    Merry Christmas!

  13. 3 hours ago, hugenbdd said:

    Change Disabled to "No"; the way you have it now, it will never run mover unless you hit the button.

    Like I wrote in the post, it is only set to Yes in the screenshot because it wasn't working properly when I tested, so I turned it off.

    3 hours ago, hugenbdd said:

    Also, turn logging on, then we can look at the syslog and see where it might be having issues.  Might also want to turn on the test mode until we get it figured out.  That way no actual files will be moved, but logs will be created.

    I will definitely try this and get back to you when I’m back home again. 👍

    4 hours ago, hugenbdd said:

    Sometimes text files have a weird end of line character (a ^) if it was created on a windows machine.

    I created the file on macOS and added the path lines within the unRAID UI to get proper formatting (I believe). 

  14. @hugenbdd any possibility that you could show a setup of how to ignore a folder, using the text file method? 

     

    I've tried with this setup ("Disable Mover running on a schedule" is set to Yes in this picture only because I want it turned off while it's not working properly):

    [screenshot of the Mover Tuning settings]

     

    The path it points to: /mnt/cache/appdata/plugins/CA_Mover_Tuning/ignore_file_list.txt

     

    This is how the txt-file is written:

    [screenshot of the ignore file contents]

     

    I tried both with and without the second line with the *.
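
    In plain text the file is simply one path per line, roughly like this (the torrents path is just an illustration here, the real folder is the one in the screenshot):

    cat /mnt/cache/appdata/plugins/CA_Mover_Tuning/ignore_file_list.txt
    /mnt/user/data/torrents
    /mnt/user/data/torrents/*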

     

    Please, any suggestion on how to solve this?

     

     

  15. On 9/2/2022 at 1:52 PM, diffz said:

    Hi,

     

    I have a share, data, where all my media and torrents are stored. I want my torrents folder to stay on the cache, so I created a text file to ignore /mnt/user/data/torrents/. When I read the logs I see it's excluding /mnt/user/data/torrents/ and all the files inside, but it's also excluding the whole data share and subfolders not listed in the text file.

     

    Is what I want even possible?


    Creating a separate share for torrents is not an option because I want radarr/sonarr to hardlink/instant-move torrents to my media folder.

    I've done the same thing, and it works for me. I think your problem is the trailing / in your path. Try it like this instead:

    Quote

    /mnt/user/data/torrents

    Never mind. I thought it worked but no...

     

    Looking at the log (I enabled logging and checked Unraid's logs), I see that it skipped one file and then transferred everything else in the directory.
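
    For anyone else digging into this, this is roughly how I've been pulling the mover entries out of the syslog after turning logging on (the grep pattern is just a guess, adjust as needed):

    # show the most recent mover-related lines from Unraid's syslog
    grep -i mover /var/log/syslog | tail -n 50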

  16. 4 hours ago, trurl said:

    This would have nothing to do with duplicates if all you are looking at are the contents of disk1. If you were looking at user shares, duplicates would not show up and so that might make it seem some space was missing.

    I'm afraid I don't really follow you. Shouldn't computing /mnt/disk1 using the File Manager plugin show the exact contents of Disk 1? Or checking /mnt/disk1 in MC?

    Since it only seems to contain 30 GB instead of the 11.6 TB that the unRAID UI is showing, what could the problem be?
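
    If it helps, this is the comparison I'm making from the terminal (du adds up the files I can actually see under /mnt/disk1, df shows what the filesystem on the disk claims is allocated):

    # total size and count of the files visible on disk 1
    du -sh /mnt/disk1
    find /mnt/disk1 -type f | wc -l
    # space the filesystem on disk 1 reports as used
    df -h /mnt/disk1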

     

    EDIT: I've booted into Ubuntu to check there, and I see the same files as I did in unRAID (30 GB). None of the files visible in Ubuntu are important, just an appdata backup and a flash backup. Both of those will get recreated. Could I just format and partition the drive to something like ext4 in Ubuntu and then restart into unRAID? That should make the drive unmountable, and I could let unRAID format it and then rebuild from parity.

     

    Or is there some other way, that won't involve a new parity check? I guess not, but one could hope.. :P

    Maybe just reboot into unRAID, change the file system for the drive to BTRFS, and start the array. That would trigger a format, right? Then maybe change back to XFS since that's what I'd prefer...

  17. 8 hours ago, trurl said:

    Never mix disk shares and user shares when moving or copying files. Linux doesn't know user shares and disks are the same files, and so will allow you to specify a destination that is the same file as the source. When it creates the empty file for the destination copy and it is the same as the source file, the source is lost.

     

    Unraid v6.10 has a file manager plugin that will allow you to work with files directly on the server instead of over the network. Or you can use the builtin Midnight Commander (mc at the command line, google it). That is what I have always used to manage files directly on the server since Unraid v4.7

    Yes, I normally never use disk shares, nor are they normally activated. 99% of the file transfers during my recovery were done through MC, the File Manager plugin (excellent plugin btw) or rsync, so disk shares weren't the problem here. I just needed them for a few things, and that's why they were activated.

     

    Now my current problem is that I (probably) forgot to delete the contents of one of the XFS-formatted 18 TB drives I was using as an intermediate "cache" during the recovery before I put it into the array. The result is a duplicates problem that I haven't been able to solve yet. The drive (Disk 1) shows as 11.6 TB full in the Main tab, but I can only see 30 GB worth of files in /mnt/disk1 when checking in MC, Krusader or the File Manager plugin. I did try running the extended FCP test and dupeGuru without a result. I also tried Czkawka, and that looks more promising. I'll look more into it after the parity build is complete in a few hours, but I will do a restart and post diagnostics first if the original problem in this thread persists.

  18. 6 hours ago, trurl said:

    Reboot and get us diagnostics without making any changes to the OS.

    I've just added a parity drive and it is building parity now, so I'll wait for that to finish and after that I'll restart and post diagnostics.

     

    6 hours ago, trurl said:

    Why are you sharing disks? I recommend only sharing User Shares. Several possible ways to cause yourself problems with disk shares and user shares both.

    Normally I don't, but I've been trying to solve a previous problem with file system corruption and used disk share for part of the file moving. 

     

    6 hours ago, trurl said:

    You have a user share with '$' as its first character. I suspect that is the reason for the problem with the User Shares page.

    Ah, should've seen that. I just left the file structure as UFS Explorer spat it out. Now I've gone through those unnamed folders and the share is deleted.

     

    3 hours ago, bonienl said:

     

    Yes this causes PHP errors when doing disk calculations (share calculations are fine).

    I made a fix for this in the next release.

    Thx.

     

    Perfect, in case someone else leaves the files that UFS Explorer creates as top directory like me... 😅

  19. 3 minutes ago, sephallen said:

     

    No worries, I'm glad someone else was able to find it useful.

     

    The container should persist after a reboot, but if you were to recreate your docker.img, you would need to find and run the command again. If you used the docker-compose template, you could simply run:

    docker-compose up -d

    within the same directory as the template to recreate the container (though you would need the docker-compose CLI tool installed on your unRAID machine).

    Ok, I think I'll just bookmark your forum post instead. :D I don't have the docker-compose CLI tool. 👍
