
Posts posted by strike

  1. Did you try what's mentioned in the 6.12.0 release notes?

     

    Crashes related to i915 driver
    We are aware that some 11th gen Intel Rocket Lake systems are experiencing crashes related to the i915 iGPU. If your Rocket Lake system crashes under Unraid 6.12.0, open a web terminal and type this, then reboot:
    
    echo "options i915 enable_dc=0" >> /boot/config/modprobe.d/i915.conf
    
    Setting this option may result in higher power use but it may resolve this issue for these GPUs.

     

  2. 1 hour ago, House Of Cards said:

    So I can't fix the double NAT.  I have cellular internet, so I will have to try some advanced workaround, and I'm hoping you guys/gals can step-by-step me through setting up remote access for my Emby server?

     

    What is the accepted, secure way to do this?  Any help would be greatly appreciated.

     

    Thanks!

    You could use Tailscale or ZeroTier, but personally I would do this instead, just for the ease of use and access: https://www.youtube.com/watch?v=RQ-6dActAr8&t=2s 

      

     

  3. 48 minutes ago, joshsalvi said:


    If I don’t have this setup right now, using top level shares for movies, downloads, tv, etc… is there a straightforward way to convert to hardlinks like this, without breaking my system?


    Sent from my iPhone using Tapatalk

    Sure, it's pretty straightforward. You just have to manually move your files to the new share. If you do it right the move should be instantaneous, since all you're doing is moving the files to a new share, not to a new disk.

     

    If you're familiar with Midnight Commander you can use that; it's included in Unraid, just type mc in the terminal. I think the best way is to enable disk shares and move the files disk by disk, to ensure they don't end up on another disk. Depending on your file structure it shouldn't take too much time. Just set up the new share and all the folders, then move your files to the appropriate folder in the new share. As I said, the move will be instantaneous if you do it right. 

     

    Just make sure you DON'T (!!!) move from a disk share to a user share or vice versa. If you do, you risk data loss. 
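    To illustrate, here's a minimal sketch of the same-filesystem move. It runs in a scratch directory with made-up share/file names; on Unraid you'd do the same thing under /mnt/diskX with disk shares enabled:

    ```shell
    # Demo in a scratch dir; on Unraid you'd work under /mnt/disk1, /mnt/disk2, etc.
    cd "$(mktemp -d)"
    mkdir -p disk1/movies disk1/data/media/movies   # old share and new share layout (hypothetical names)
    touch disk1/movies/film.mkv
    # mv within one disk's filesystem is just a rename, so it's instantaneous
    mv disk1/movies/film.mkv disk1/data/media/movies/
    ls disk1/data/media/movies                       # the file is now in the new share
    ```

    The key point: stay on the same /mnt/diskX path on both sides of the mv, and never mix /mnt/diskX with /mnt/user in one command.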

  4. 8 hours ago, brent3000 said:

    I hope someone can provide some guidance on this issue which came up recently since upgrading to 6.12.2 my deluge client has just stopped loading for some reason.

    The docker loads up and shows everything as all good but unable to load the web UI at all,

     

    What info can I supply or any suggestions on how to fix? 

    I can ping,

    VPN connects and has IP

     

    I have re-downloaded and removed the old appdata folder but but same results, 

     

    Screenshot 2023-07-27 215727.png

    Screenshot 2023-07-27 215417.png

    Screenshot 2023-07-27 215348.png

    Do this: https://github.com/binhex/documentation/blob/master/docker/faq/help.md

     

    Also what network type do you use? And are you using VLAN?

  5. 17 minutes ago, thesr5 said:

    I am a Nord user and have not been able to use this app since the last update. I receive a Auth Failed error in the log file. I have changed my password for nord down to the most basic of passwords 0 special characters letters and numbers only, Ive also checked to make sure the settings and the credentials.conf file are correct and matching... there is zero white space infront, behind, left, right, up, down, in the middle of the username and password. 

     

    How do i downgrade back to the previous versions?

    It has nothing to do with the DelugeVPN version. https://forums.unraid.net/topic/44109-support-binhex-delugevpn/?do=findComment&comment=1277577

     

  6. 4 hours ago, Evidenz said:

    For whatever reason my AirVPN config does not provide a dedicated DNS server? If I connect with the standard ovpn config provided by AirVPN DNS always defaults to the name servers stated in the docker config. Am I missing something?

    I use AirVPN with OpenVPN and have no issues. I always get the dedicated DNS servers when I check https://ipleak.net/

    Edit: But I don't have the name server variable in my template. Never needed it.

  7. 21 minutes ago, Alec.Dalessandro said:

    The 3 folders indicated in the download configuration (incomplete & complete & unzippedTorrents) are created under the directory prior to, fairly standard stuff I think at least.

    That looks good, so I'm not sure what's causing your issue. But you should close the port on your router. When using a VPN you have to open the port on the VPN side, not on your router. Check with your VPN provider whether they allow port forwarding and, if so, set that up. Then put that port as the incoming port in Deluge. 

  8. On 7/2/2023 at 2:45 AM, Alec.Dalessandro said:

    I seem to be having an issues with my deluge docker. I have had it setup and working for a while and it seems to have just stopped downloading. I have been looking through the logs and I don't see any indication or error, similar with my other dockers, they are able to send my requests to deluge and add the torrents.

     

    When launching the delugeVPN docker, it seems to have a connection and established on over the VPN, but all the torrents just sits at 0.00%. Sometimes 2 torrents will get ~5 kb/s then back to 0. Almost like it doens't see the peers or cannot connect to them.

     

    Has anyone see anything like this before? The closest thing I have found something related to the common ports of torrents be blocked via the provider. I have tried to switch some port around in the container and I also have some port worded on my router, to allow access. Any indications would help.

    Can you post a screenshot of your container volume mappings and a screenshot of the downloads settings in deluge?

  9. 2 hours ago, scarecrowboat said:

    I am having a similar issue after updating Unraid OS to the latest version. I tried reverting to the last OS version but still get ERR_CONNECTION_REFUSED when trying to get to the web GUI.

    Tried all the suggested fixes as well and deleted and reinstalled the container and still can't reach the web GUI.

    Do this: https://github.com/binhex/documentation/blob/master/docker/faq/help.md Also post what network type you're using.

  10. 3 hours ago, Shu said:

    I do use binhex-qbittorrentVPN but on bridge mode - using nordvpn. 

     

    My *arrs are routed through the binhex-qbit container with a custom network (--net=container:binhex-qbittorrentVPN)

     

    Didn't have issues prior to the 25th/26th. I even tried previous version of somnar & qbit but to no avail. 

    And have you checked that your incoming/outgoing ports are set up correctly according to the FAQ?

    If you have, I'm out of ideas.

  11. 1 hour ago, Shu said:

    I also tried with LinuxServer's Sonarr image but faced the exact same issue. Neither qBittorrent nor Prowlar indexers will connect.

    Do you by any chance run the qBittorrent VPN container on a custom network? 

     

    I've had a weird issue for some time where Radarr/Sonarr are unable to connect to Deluge unless I connect over Privoxy, which is enabled in DelugeVPN. I'm on a custom network though, which isn't officially supported. 

  12. 3 hours ago, WalkerJ said:

     

    I had a Zerotier container running. I disabled both it and the built-in Wireguard VPN and the container still fails to start with the same error: "possible DNS issues, exiting..."

     

    There must be something different about how the Deluge container starts the VPN vs how the QBT one does because the QBT one works fine, but the VPN setup in init.sh is identical for both. I am not enough of a Docker expert to investigate much further myself. 

    Did you change the DNS servers in the container template? Do this: https://github.com/binhex/documentation/blob/master/docker/faq/help.md

  13. 39 minutes ago, Swirl3208 said:

    This is where I disagree.

    Then we have to agree to disagree. 

     

    I'm talking about how the parity in the unraid array works today. 

     

    If you want to discuss what you want the parity to be then I agree with you. It would be nice to be able to recover a corrupted file from parity. But I don't see this changing for the unraid array in the foreseeable future. 

     

    And to achieve what you want you can just set up a redundant zfs pool. Unraid now has the best of both worlds. 

     

    You can back up your unraid array to a zfs pool if you want. Then all your files are protected from bitrot, even if the unraid array parity can't fix it. But this would just be a backup like any other backup, which is a must for important files anyway. 

     

     

  14. 42 minutes ago, Swirl3208 said:

    When a corrupted file is detected on disk 1, unraid could easily "emulate" the corrupted file from parity + disk 2 - disk 7, then rewrite the file to disk 1 as a correction mechanism (actually in unraids case it doesn't even need to write back to the same disk, it only needs to be written back to the same share). Does that make sense?

    No, it doesn't make sense. Why do you assume the emulated disk holds a healthy file? Where would it magically pull this healthy file from when it doesn't exist anywhere else? In a rebuild Unraid can recover all the data from the emulated disk, and that includes all the bits of a corrupted file as well. So if disk 1 has a corrupted file and you pull that disk, replace it with a new one and rebuild, the new disk will still have the corrupted file.

  15. 29 minutes ago, Swirl3208 said:

    Why can't unraid go to the exact spot the file starts and ends in the parity drive

    I feel like I'm only repeating myself. There is no spot where the file starts or ends on the parity drive. The parity drive only holds the answer to the parity calculation across all the disks, and in order to get that answer you must first ask the question, which is represented by all the other disks. So if one disk dies, we can ask the question with all the remaining drives and use the answer on the parity drive to calculate the correct data to rebuild. But when a file is corrupt, so is the question/answer, and there is nothing to correct it against. I don't think I can explain it in more ways than I already have, so I give up :)

     

    29 minutes ago, Swirl3208 said:

    I know this isn't how unraid works today but is this something that could possibly be implemented in the future?

    What the future holds no one knows, but I doubt it.
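    To make the question/answer picture concrete, here's a toy sketch of single-parity XOR arithmetic with made-up byte values (this is just the math, not Unraid code):

    ```shell
    # Toy single-parity math: three "disks" each hold one byte
    d1=170 d2=85 d3=51
    parity=$(( d1 ^ d2 ^ d3 ))        # the "answer" stored on the parity disk
    # Disk 2 fails: rebuild its byte from parity plus the surviving disks
    rebuilt=$(( parity ^ d1 ^ d3 ))
    echo "$rebuilt"                    # prints 85, the original d2
    # Disk 2 silently corrupts instead (one bit flips): the parity check now
    # mismatches, but it can't say WHICH disk is wrong, so nothing can be healed
    bad_d2=$(( d2 ^ 1 ))
    echo $(( parity ^ d1 ^ bad_d2 ^ d3 ))   # nonzero = mismatch detected, not repairable
    ```

    A missing disk can be rebuilt because every other term is known; a corrupt byte can only be flagged, because parity has no way to tell which term went bad.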

  16. 7 minutes ago, Swirl3208 said:

    The difference is that unraid stores all the parity data on 1 drive, where zfs stripes the data across a row, and stores the parity in different drives.

    I think you answered your own question here. The unraid array consists of multiple disks, but each disk has its own filesystem. The array just pools all the disks together so you're able to access them as one. So the data only exists on that single disk. Disk 1 has no idea about the data on disk 2, because each is a single disk with its own filesystem. And the parity disk has no idea about the data on disk 1 or 2 either; only by reading the corresponding blocks of all the other disks in the array can it recover a drive. And if a file on disk 1 is corrupted, that data does not exist anywhere else. It's not on the parity disk. It could theoretically be on disk 2 as well, if you keep a copy there, but that doesn't matter: disk 1 doesn't know about the data on disk 2, so it can't recover from it other than by you copying the file back to disk 1. So if the data from disk 1 doesn't exist anywhere else, not even on the parity drive, where do you recover from? You can't, other than from a backup. 

     

    But yeah, watch Spaceinvader One's video about the array and parity; he explains it much better than I do.

  17. 4 minutes ago, Swirl3208 said:

    Theoretically, for an array with 1 parity drive, we could recover from bit rot if only 1 drive has corrupted data for that block. Using zfs checksum mechanism, If we know the block on disk1 is corrupted then we can use the parity information from the other drives to rebuild the block. This idea is similar to how zfs currently self heals like you said.

    No. Since you mentioned Spaceinvader One's videos, you should watch the one about parity. Parity is for recovering from a failed disk, not for recovering corrupted data. As I said, the parity drive does not hold any data, so it's not possible for it to recover any corrupt data. IF it in fact DID hold data, your theory would be possible. 

     

    In order to use the self-healing capabilities of zfs you will need a redundant pool (mirror, raidz1/2/3, etc.), or multiple copies of the data on the same disk if using a single-disk pool. But the parity disk would not be able to help you at all, as I already mentioned. The unraid parity disk is not like regular raid/zfs parity.

  18. 2 hours ago, hahler2 said:

    I encountered an odd problem today.  I went to pull up the WebUI for Sonarr today to add a new show I want it to download, and there's no option to click for WebUI.  Just console.  Rebooted the docker image and I restarted my server and no change.  Anyone know how to fix this?

    Did you just update from 6.9.2 to 6.12.x? If so you should be following this thread: https://forums.unraid.net/bug-reports/stable-releases/no-webui-link-on-docker-containers-6121-r2505/

     

    You could probably reach the webui by typing the IP:PORT manually in the web browser tho.

  19. 30 minutes ago, Swirl3208 said:

    When bit rot is detected do any of these 3 options have mechanism to recover from bitrot? Like could Unraid use the parity drive to recover the data on that block?

    No, not for the array. The parity drive does not hold any data. You would be able to detect bitrot, but the only way to recover from it is to restore from a backup (which you should have anyway :)). In a zpool though, it would self-heal automatically when the data is read (or via a scrub) if it detects corruption.  
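    For the zpool case, the check/repair cycle is just the scrub commands (pool name "tank" is a placeholder; self-healing needs a redundant pool or copies):

    ```shell
    # Read every block in the pool, verify checksums, repair from redundancy
    zpool scrub tank
    # Shows scan progress plus any repaired or unrecoverable errors, per file
    zpool status -v tank
    ```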

  20. 3 minutes ago, crazybits said:

    So your post got me thinking. I'm accessing deluge from a different subnetted vlan. I plug my computer into the same vlan/subnet as my unraid server and bam, deluge webui comes right up..

    Yeah, that would do it. You could try changing back to your other VLAN, running the command you see in your log on your Unraid server, and restarting the container.

     

    Quote

    [warn] Unable to load iptable_mangle module, you will not be able to connect to the applications Web UI or Privoxy outside of your LAN
    [info] unRAID/Ubuntu users: Please attempt to load the module by executing the following on your host: '/sbin/modprobe iptable_mangle'
    [info] Synology users: Please attempt to load the module by executing the following on your host: 'insmod /lib/modules/iptable_mangle.ko'

    If it works after that just set up a user script with that command and set it to run at array start.
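    The user script itself only needs the one line from the log; a minimal sketch, assuming the User Scripts plugin with the schedule set to "At Startup of Array":

    ```shell
    #!/bin/bash
    # Load the mangle module so the container's WebUI/Privoxy are reachable
    # from outside the server's own subnet
    /sbin/modprobe iptable_mangle
    ```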

  21. 3 hours ago, WalkerJ said:

    The symptom is the VPN can't connect because of "possible DNS issues"

    Yeah, that's not the same issue the rest are having. Their containers are starting successfully as far as I can see. If your log states that you have possible DNS issues, you should change the name servers in the container template to something else, maybe 1.1.1.1,8.8.8.8. PIA users have had this issue on and off for some time, so if you're a PIA user, changing your name servers is worth a shot. 

     

    For you other guys/future posters reading this, it will probably take some time to debug. So please enable debug logs and post them along with your diagnostics. Also mention whether you're using ZeroTier/Tailscale, as that could be a factor.
