Dwiman89


Everything posted by Dwiman89

  1. I am unable to get any mounts working. These are pictures of the errors I am getting; it says the directories don't exist. There are also pictures of how the directories are set up. I am confused about how they can be missing, since I can cd into them. There's a picture of my conf file showing that they are added to allowed mounts. https://imgur.com/a/Bwnt0xM
  2. I have been trying to get this to work for two days using a domain I purchased and set up through Cloudflare. No matter what I do, I get "Origin is unreachable, Error code 523". For privacy reasons, I will refer to the domain I purchased as "mydomain".

     My setup: I pointed Cloudflare DNS at my public IP and set a CNAME subdomain for Overseerr. I created an Origin SSL certificate through Cloudflare and set SSL mode to Strict. In my router, I forwarded ports 80 and 443 to ports 1880 and 18443 on my Unraid server at 192.168.1.134. I installed this docker with the network set to Bridge, set it up with a user and password, added "overseerr.mydomain.com" as a host, and attached the Origin SSL certificate to it.

     Originally, going to "overseerr.mydomain.com" would strangely lead to the login page of one of my Reolink security cameras. After I disabled UPnP in my router, it would only display the Cloudflare page with "Origin is unreachable, Error code 523".

     Things I have tried: removing and reinstalling NGINX multiple times, deleting all its files in appdata in between; different network settings, such as individual docker networks and br0 (where an individual IP was assigned, and I changed the IP in the router's port forwarding to match); setting up subdomains for other dockers.

     Testing I have done: I can ping "overseerr.mydomain.com" successfully. A traceroute ends at the Cloudflare servers. Scanning "overseerr.mydomain.com" shows ports 80 and 443 open, and using telnet I can successfully reach both ports. A whois for "overseerr.mydomain.com" points at Cloudflare. Running a curl command inside both the NGINX and Overseerr containers to check their IP returns my external IP, indicating that both can reach out to the internet. I do not see anything strange in the logs for NGINX.
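One way to narrow a 523 down is to take Cloudflare out of the path entirely and hit the proxy directly on the LAN. A minimal sketch, assuming the domain, LAN IP, and forwarded HTTPS port described in the post (-k is needed because the Origin certificate is only trusted by Cloudflare, not by curl):

```shell
# Hit Nginx Proxy Manager directly on the LAN, bypassing Cloudflare.
# Values are the ones described in the post; substitute your own.
DOMAIN="overseerr.mydomain.com"
LAN_IP="192.168.1.134"
HTTPS_PORT="18443"

# --resolve forces the hostname to the LAN IP, so the proxy still sees
# the expected Host/SNI name instead of a bare IP.
curl -vk --resolve "${DOMAIN}:${HTTPS_PORT}:${LAN_IP}" \
  "https://${DOMAIN}:${HTTPS_PORT}/"
```

If this reaches the Overseerr host while going through Cloudflare still returns a 523, the break is between Cloudflare and the router (port forward or WAN IP); if this fails too, the problem is local to the proxy.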
  3. I installed that docker, but I'm woefully confused about how to set it up.
  4. I'm working on setting up backups. The script the FAQ points to for automatic backups says it's experimental, and that was 5 years ago... Is this the script you use? Does it copy the entire vdisk over and over? That's a lot of writes for me, with 200GB worth of vdisk.
  5. Okay, it's mounted to the pool. I received notifications that the cache has returned to normal operation. I believe all is well with the cloning, and it is mounted. I can't seem to get the process to work to see which files specifically are corrupted. Under the /mnt/ directory, I have this:

     root@Tower:/mnt# ls
     cache/  disk1/  disk2/  disk3/  disk4/  disk5/  disks/  remotes/  user/  user0/

     Thus, I'm using this to try to get the output with the "unRAID" string that it says indicates which files are corrupted. I followed the instructions, with the first two commands:

     printf "unRAID " >~/fill.txt
     ddrescue -f --fill=- ~/fill.txt /dev/sdY /boot/ddrescue.log

     I then try this command, and this is the result. I still don't see anything that says "unRAID" indicating a corrupt file:

     root@Tower:/# find /mnt/cache/ -type f -exec grep -l "unRAID" '{}' ';'
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Logs/Plex DLNA Server.5.log
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Logs/Plex DLNA Server.4.log
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Logs/Plex DLNA Server.3.log
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Logs/Plex DLNA Server.2.log
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Logs/Plex DLNA Server.1.log
     /mnt/cache/appdata/binhex-plex/Plex Media Server/Plug-in Support/Caches/com.plexapp.system/HTTP.system/CacheInfo
     /mnt/cache/appdata/binhex-krusader/supervisord.log
     /mnt/cache/domains/Hassos/hassos_ova-2.12.qcow2
     /mnt/cache/domains/Windows 10/vdisk1.img
     root@Tower:/#

     I appreciate all of your help with this. I didn't expect to get this far and assumed all was lost. Hopefully, once I know which files are damaged, I can work on replacing them, and then set up something to keep everything on the cache backed up, like the script you mentioned! Thank you.
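For reference, the fill-marker search itself behaves as expected when the marker string is present in a file. A self-contained sketch on a throwaway directory (file names here are invented for the demo, not real pool paths):

```shell
# Demo of the ddrescue --fill marker search on a scratch directory.
demo=$(mktemp -d)
printf 'ordinary, readable data\n'    > "$demo/good.txt"
printf 'data ... unRAID unRAID ...\n' > "$demo/damaged.img"  # simulates a file overlapping filled (bad) sectors

# Same search as on the real pool: list every file containing the fill string.
find "$demo" -type f -exec grep -l "unRAID" '{}' ';'
```

Only damaged.img is listed. One caveat worth keeping in mind when reading real results: log files can legitimately contain the literal word "unRAID" in their text, so hits on .log files are not necessarily fill data from the rescue.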
  6. I think I successfully cloned it; it says 99.99% recovery. Those instructions say there is a way to output which files specifically are damaged. I used the commands, but nothing is output with the string "unRAID":

     find /mnt/cache -type f -exec grep -l "unRAID" '{}' ';'

     This is what I used in place of path/to/disk, since I assume there is no individual mount point when it's in the cache drive pool. I tried to mount the cloned drive as an unassigned device so I could try to get the output that way, but it just hangs indefinitely on "Mounting"; it's been about 20 minutes now.
  7. ddrescue -f /dev/sdX1 /dev/md# /boot/ddrescue.log

     "Replace X with source disk (note the 1 in the source disk identifier), # with destination disk number; recommend enabling turbo write first or it will take much longer."

     With this, the damaged source drive is "sdc", but what is the source disk identifier denoted by the 1? If I look in the /dev/ directory, there exist both an "sdc" and an "sdc1". For the destination disk, where do I get the destination disk number? I see several files under /dev/ that start with "md". The destination SSD is listed as "sdj" under Unassigned Devices on my Main tab.
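Reading the template's placeholders back: /dev/sdc is the whole disk and /dev/sdc1 is its first partition, which is what the "1" in the source identifier refers to; /dev/md# is how Unraid exposes array data disk number # (the disk number shown on the Main tab), not an unassigned-devices name. A hypothetical filled-in version, with device names that are examples only:

```shell
# Template from the quoted instructions with example devices filled in.
# sdc1 = first partition of the failing source SSD; md3 = array disk 3.
# DO NOT run with these values -- verify both devices on the Main tab first,
# since ddrescue overwrites the destination.
ddrescue -f /dev/sdc1 /dev/md3 /boot/ddrescue.log
```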
  8. Also, do I need to preclear and format the replacement drive first before attempting the clone? I was also thinking about manually copying the files using a file explorer beforehand, just in case, even though if they are messed up, that may mean nothing in the end.
  9. Geez, looking at that wiki, it looks like a big margin for error and close to being over my head. Is it safe for me to bring both cache drives, or just one, out of the array before messing with anything, or is it okay to just start the whole array in maintenance mode? I'm worried about making it worse.

     The most important file, I thought, was my vdisk.img, since it's everything on the Windows VM on the cache, and it's 200GB. That's a huge file to keep backed up all the time, and a lot of writes, I would think. To have it backed up on, say, a weekly basis means I have to copy a 200GB file to my array every week?

     When I first set it up, I did pick RAID 0, but initially it was just scratch-drive space, with data periodically moved to the array once a week to reduce wear on the array. The VM was an afterthought and didn't initially have importance to me. It became important, and I didn't think about it being on RAID 0 to begin with; I forgot it was saved there.
  10. I came home this morning to my Windows VM not working. I believe this is due to one of my two cache drives failing. The cache drives are two 500GB SSDs, totaling 1TB. I noticed errors with one of the cache drives. This is the first error in the log for the drive, and it only appears once:

      Jun 21 09:48:04 Tower kernel: print_req_error: I/O error, dev sdc, sector 89165888

      This error repeats over and over when I attempt to start the VM:

      Jun 21 09:48:04 Tower kernel: print_req_error: critical medium error, dev sdc, sector 66458640

      If I am understanding this correctly, the drive has a bad sector, and because of how I mistakenly set it up, that single bad 512 bytes of data on that sector means the well is poisoned and ALL 216GB of data on the cache is ruined? It has my docker stuff and my VMs saved on it. I care VERY much about the data that's on the 200GB vdisk for the Windows VM; there's over a year's worth of a project I've been working on. Is there no way to recover ANY of it? I thought BTRFS meant something could be done. I know I messed up by having critical data on RAID-striped drives, but I just set it up and forgot about that being a potential issue.

      It CAN'T be completely ruined, I would think, because there's another Linux VM that still appears to be working fine, and most of the dockers appear to be working fine. It's just the Windows VM and the Krusader docker that appear to not be working; I get the blue screen of death when booting the Windows VM. Attached are my diagnostics and the SMART report for the drive that has errors.

      tower-diagnostics-20210621-0848.zip
      tower-smart-20210621-0720.zip
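For reference, the byte offset of a reported bad sector follows directly from the 512-byte sector size mentioned in the post. A quick sketch using the sector number from the repeating log line:

```shell
# Byte offset of the failing sector, assuming 512-byte logical sectors
# as described in the post.
sector=66458640
echo $((sector * 512))   # byte offset into /dev/sdc
```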
  11. I've been searching through posts all day on here, and I can't find anything that seems to work. DelugeVPN is working fine, and it's using my VPN. The other dockers are also working fine by themselves. What I can't get working is getting other dockers, such as Sonarr, Radarr, Jackett, and Hydra2, to use its Privoxy. After setting those dockers to use the Privoxy, I've tried "curl https://ipinfo.io/ip" in their consoles, and it still displays my physical IP, not the VPN IP. Using "curl https://ipinfo.io/ip" in the console for Deluge displays the VPN IP; it's just the other dockers that don't. My Unraid box's local IP is 192.168.1.134, so I've set the proxy setting in all of those other dockers to 192.168.1.134:8118. I've even tried using the corresponding Docker bridge IP of 172.17.0.3:8118, with the same results. There are no errors in any logs. In the Unraid terminal, I am able to connect to port 8118 using telnet.
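One detail worth checking here: curl run in a container's console does not pick up the application's own proxy setting, so it reports the direct IP regardless of what Sonarr or Radarr is configured to do. To test Privoxy itself, the proxy has to be passed to curl explicitly. A sketch using the address from the post:

```shell
# Run inside one of the containers (Sonarr, Radarr, ...).
# The proxy address is the one from the post; adjust if yours differs.
curl -s https://ipinfo.io/ip                                 # no proxy: shows the WAN IP
curl -s -x http://192.168.1.134:8118 https://ipinfo.io/ip    # via Privoxy: should show the VPN IP
```

If the second command shows the VPN IP, Privoxy is working and the remaining question is whether each application's own proxy setting is actually being applied.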