Ustrombase


Posts posted by Ustrombase

  1. On 7/31/2023 at 5:30 AM, JorgeB said:
    write time tree block corruption detected

     

    A reboot should fix it for now, but yes, this is usually a sign of a RAM issue.

    So are there any further steps to help identify the root cause? I haven't had the issue again, but it's frustrating to have it seemingly unresolved and perhaps just hibernating. To put it another way: is there a way to do a process of elimination?
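    In case it helps frame an answer, here is the rough elimination I had in mind, assuming the affected pool is a BTRFS cache mounted at `/mnt/cache` (the mount point is just my guess for illustration):

    ```bash
    # Re-read and checksum every block on the pool; errors here point at
    # on-disk corruption rather than a one-off in-RAM bit flip.
    btrfs scrub start -B /mnt/cache   # -B runs in the foreground
    btrfs scrub status /mnt/cache

    # Per-device error counters; a rising count implicates a specific
    # disk or cable instead of the RAM.
    btrfs device stats /mnt/cache
    ```

    Beyond that, I figure the only real RAM test is just more memtest passes overnight.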

  2. I have done more research on this, and RAM keeps coming up. I did change my RAM last month, but I ran a memtest recently: one full pass with 0 errors. It's a 64 GB kit, so a single pass takes a very long time to complete.

     

    I could run another memtest and let it go for 2 passes, but I'd rather rule out other causes first, given the recent clean pass.

  3. @itimpi any other thoughts here? I feel I have been able to replicate this using other VMs that are not on the same VLAN as my unRAID server host, yet everything can ping everything else. It's not a routing issue, as other machines can connect to both my unRAID host and the VMs. I'm unsure what is going on, but it feels like something to do with how unRAID handles VLANs.

  4. I have created a small diagram to illustrate my problem (attached below). Basically, I have configured VLANs on my Unraid server under Settings > Network Settings with "Enable VLANs" set to `Yes`, and I added a VLAN for each one I have on my pfSense router. I then have 2 VMs on top of Unraid and some Docker containers on the Unraid host. My problem is that I can't connect to the containers on the host via their ports. I did a `netcat` port scan, and from either VM I can't detect an open port on the host, but the VMs can see each other's ports (a sketch of the scan is below the diagram).

     

    This is weird because I assumed a VM should be able to talk to its host with no problems.

     

    FYI, Unraid is on the default untagged VLAN, hence why I labeled it VLAN 0 in the diagram (maybe it should have been VLAN 1; I can't remember the notation for the untagged VLAN). This situation reminds me of when I used macvlan to give a Docker container its own IP and it couldn't connect back to the host. That was a known issue, though, and I didn't expect the same thing to happen with VMs.

     

     

    [Attached diagram: Untitled-2023-04-16-2135.png]
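    For reference, this is roughly how I ran the scan from each VM (the IPs and the port are made-up placeholders, not my real addresses):

    ```bash
    # -z: scan without sending data, -v: verbose, -w 3: 3-second timeout
    nc -zv -w 3 192.168.1.10 8080    # Unraid host: times out from either VM
    nc -zv -w 3 192.168.20.5 8080    # the other VM: connection succeeds
    ```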

  5. Hi guys

     

    I read all the pages in this thread, and I see one comment that says the SSL is fixed. Was that referring to using the myunraid.net SSL cert that the Limetech guys provide to us?

     

    I have 2 unRAID servers with HTTPS set to strict, which issues a TLS cert. I know that when I visit http://<my server ip> it redirects to a myunraid.net URL, and on my pfSense box I had to add a private domain in the DNS Resolver to make that work. However, with this container, when I visit the Tailscale IP it redirects to the myunraid.net URL, but that leads me nowhere.
     

    The only way it currently works is that my pfSense box is also on the tailnet and I have subnet routing set up. Since pfSense sits on the same VLAN as my first unRAID server, that one works; my 2nd unRAID box doesn't, because pfSense isn't advertising the VLAN that the 2nd box lives on.
     

    Is there a setup that makes this work, or do I have to resort to not using HTTPS strict in the unRAID network settings?
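    From reading the Tailscale docs, my fallback idea is to advertise the 2nd box's VLAN as a subnet route from some machine that can reach it (the subnet below is a placeholder for mine):

    ```bash
    # On a node inside the 2nd server's VLAN (or on pfSense itself):
    tailscale up --advertise-routes=192.168.30.0/24

    # Approve the route in the Tailscale admin console, then on clients:
    tailscale up --accept-routes
    ```

    But I'd still prefer a setup where the container handles this directly.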

  6. 12 hours ago, Bcy said:

    Change your Jellyfin UDP port and try again.

     

    Do you have the NerdPack or Nerd Tools plugin installed? If you don't need python3, you can uninstall it from NerdPack or Nerd Tools.

     

     

    So I had Nerd Tools running, but I deleted the plugin and restarted the server, and python3 is still running. However, when I enter `python3` into the terminal it doesn't run, so maybe some other service ships its own python3? I ran `top | grep PROCESS_ID`, where PROCESS_ID is the PID I got from `lsof -i4 | grep 1900`, and it shows the process running, which is weird. I am unsure where else to look.
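    For completeness, this is the kind of check I've been trying to pin down the process (`<PID>` stands for whatever `lsof` reports):

    ```bash
    # What is bound to port 1900 (SSDP)? -P keeps numeric ports, -n skips DNS.
    lsof -i :1900 -P -n

    # Resolve the PID to its full command line and executable path.
    ps -p <PID> -o pid,comm,args
    readlink /proc/<PID>/exe
    ```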

  7. On 9/22/2022 at 8:58 AM, trurl said:

    Standard way to deal with this is to leave those default ports alone and map ports in the containers.

    Yeah, so the reason I couldn't do this is that I'm trying to give HTTPS to my internal services and use my domain name to access them for convenience. To do that I needed split DNS, and since I use pfSense, that meant host override entries in the DNS Resolver. Those have to point at a reverse proxy listening on ports 80 and 443, because DNS can only map a hostname to an IP, not to an IP and a port, so I had to remap the unRAID ports for that very reason. In the end I solved it: I was just being dumb and didn't realize I had to use the unRAID IP with the new HTTP port, which then automagically redirects to the HTTPS port.
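    For anyone landing here later, this is roughly how I sanity-checked the chain once it worked (the hostname, IP, and port 280 below are placeholders for my own values):

    ```bash
    dig +short service.example.lan         # should return the reverse proxy's LAN IP
    curl -kI https://service.example.lan   # should reach the proxy on 443
    curl -I http://192.168.1.10:280        # remapped unRAID HTTP port; expect a redirect
    ```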

  8. Hi, I changed the default ports for the Unraid webUI from the defaults to 280 (HTTP) and 2443 (HTTPS). I checked, and these ports are not used by any Docker container. After changing the ports I could still access the webUI via the obfuscated myunraid.net URL, but not via https://IP:NEWPORT. I did an nginx restart and this broke everything: the myunraid.net URL became inaccessible. After a reboot I was back to being able to access the webUI via the myunraid.net URL, but again not via https://IP:NEWPORT. I have the SSL/TLS option in the management settings set to strict. This leads me to believe that when I type IP:PORT it doesn't recognize the port correctly and so doesn't redirect to the myunraid.net URL. Maybe the DNS record needs to be updated?
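    In the meantime, here's what I plan to check next; the `ident.cfg` path is where I believe Unraid keeps the webGUI ports, but I'm not 100% sure:

    ```bash
    # Is nginx actually listening on the new ports after the change?
    ss -tlnp | grep -E ':(280|2443)\b'

    # I believe the webGUI ports are stored here as PORT / PORTSSL.
    grep -i port /boot/config/ident.cfg
    ```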

  9. Yeah, so while I wouldn't call myself a noob, I'm no expert, and maybe this is such a low-level thing I should have known, but either way I didn't.
     

    Also, I don't think I explained it well. My issue presented itself as Docker containers being turned off, which at first glance from my research led me to think it was a Docker image issue. Then, with the community's help, it was diagnosed as a RAM issue, at which point I immediately shut down my server and fixed it. However, it never occurred to me that it would corrupt data, which now makes total sense.
     

    Anywho, I have since formatted my cache drive pool and everything is all good now.
     

    thanks for the help!

  10. @trurl for sure! Sorry if I left that out; I didn't know it would be helpful.
     

    Basically, when I ran the mv command, it was from the default pool, which is `/mnt/cache/`, to the new pool, which is `/mnt/appdata_vms`. I moved system, appdata, and domains.
     

    For appdata, the remove step failed for some files after the copy (mv being effectively a copy + remove).
     

    The files that did not move were giving me `rm: cannot remove 'X': Read-only file system`.

     

    Is this helpful?
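    For the record, next time I'd check why the source pool went read-only before formatting, and I'd do the move more defensively (mount points as in my setup above):

    ```bash
    # Why did the pool go read-only? BTRFS logs the remount reason.
    dmesg | grep -iE 'btrfs|read-only' | tail -n 20

    # Per-device error counters for the source pool.
    btrfs device stats /mnt/cache

    # Safer alternative to mv across pools: copy, verify, then delete
    # sources only for files that copied cleanly.
    rsync -a --remove-source-files /mnt/cache/appdata/ /mnt/appdata_vms/appdata/
    ```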