josh1014

Members
  • Posts: 9

  1. I have been running the pihole docker container on unraid for years. When I upgraded to 6.12.4, my server began crashing regularly. I isolated the issue to pihole, as the crashes stopped whenever I kept pihole offline. Upon further forum reading, I learned of the call trace issues. Pihole is my only container that was running with a custom IP on br0, so I changed to ipvlan and started pihole with a custom IP on eth0. It has been working fine in general, but periodically throughout the day it becomes unresponsive for 30 seconds to 2 minutes, leaving my devices unable to resolve external addresses or load websites. During the downtime the unraid UI is accessible and other docker containers are accessible, but the pihole webUI is not. From the console inside the pihole container I can view the log, and nothing exciting is happening. I have looked at the pihole logs and diagnostics and see nothing of interest, and the unraid syslog is also silent during these brief periods. Any suggestions on how I can dig deeper into what is causing this one container to become unresponsive periodically throughout the day?
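One way to dig deeper on an intermittent outage like the one described above is to probe the Pi-hole's DNS port on a short interval from another machine and log exactly when queries start failing, so the windows of downtime can be lined up against pihole-FTL.log and the unraid syslog afterward. Below is a minimal sketch of that idea; it assumes Python with the dnspython package, and the IP address is a placeholder rather than anything from the original post.

```python
# Hypothetical probe: log gaps in DNS responsiveness of a Pi-hole instance.
# Assumes the `dnspython` package (pip install dnspython); PIHOLE_IP is a
# placeholder for the container's actual custom IP on eth0.
import time
from datetime import datetime

import dns.resolver
import dns.exception

PIHOLE_IP = "192.168.1.53"   # placeholder, not from the original post
TEST_NAME = "example.com"
INTERVAL_S = 5

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [PIHOLE_IP]
resolver.lifetime = 2.0      # treat anything slower than 2 s as a failure

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    try:
        resolver.resolve(TEST_NAME, "A")
        print(f"{stamp} OK")
    except dns.exception.DNSException as exc:
        # Timestamped failures can be lined up against pihole-FTL.log and
        # the unraid syslog to see what else happened at that moment.
        print(f"{stamp} FAIL {type(exc).__name__}")
    time.sleep(INTERVAL_S)
```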
  2. Hello, wondering if an expert can immediately spot the problem here and save me some time messing with my app subfolder conf. My webapp is accessible via https://mydomain.duckdns.org:444/appname/, which successfully brings you to the login page for the webapp. Once you submit your credentials, you get sent to https://mydomain.duckdns.org/appname/entrance/ instead of https://mydomain.duckdns.org:444/appname/entrance/. If you add the port back into the URL at that point you are fine the rest of the way, but that initial login causes the port to disappear from the URL. Here is the location block:
     location ^~ /appname {
         auth_basic "Restricted";
         auth_basic_user_file /config/nginx/.htpasswd;
         include /config/nginx/proxy.conf;
         proxy_pass http://appname:80;
     }
     Any ideas what I need to add to solve this? Thanks!
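A quick way to see where the port is being lost in the case above is to submit the login request outside the browser and inspect the Location header of the redirect before it is followed; if the port is already gone at that point, the redirect itself is being built without it. Here is a rough sketch using Python's requests library; the URL, form field names, and credentials are placeholders, since the real ones are not in the post.

```python
# Hypothetical check: inspect the redirect issued right after login without
# following it, to see whether the :444 port is already missing at that point.
# The URL, form field names, and credentials below are placeholders.
import requests

LOGIN_URL = "https://mydomain.duckdns.org:444/appname/login"
FORM = {"username": "user", "password": "secret"}  # adjust to the app's form

resp = requests.post(
    LOGIN_URL,
    data=FORM,
    auth=("basicuser", "basicpass"),  # the nginx auth_basic credentials
    allow_redirects=False,            # keep the 3xx response as-is
    timeout=10,
)

print(resp.status_code)
# The Location header shows exactly what URL the client is being sent to;
# if the port is absent here, the redirect was generated without it.
print(resp.headers.get("Location"))
```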
  3. If you follow those instructions and add the second parity disk, won't it only use 4TB of the 10TB when creating parity? And then, when you replace the primary parity disk, will it again match the 4TB and therefore not expand the maximum disk size of the array up to 10TB?
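For context on the sizing question above: unraid requires every parity disk to be at least as large as the largest data disk, so the smallest assigned parity disk is what caps how big a data disk the array will accept. A toy illustration of that rule, using only the 4TB and 10TB sizes discussed here:

```python
# Toy illustration of the unraid sizing rule: every parity disk must be at
# least as large as the largest data disk. Sizes are in TB and are just the
# values discussed in the thread, not read from any real system.
def max_allowed_data_disk(parity_sizes_tb):
    """The largest data disk the array can accept is capped by the
    smallest parity disk currently assigned."""
    return min(parity_sizes_tb)

# With the old 4TB parity still in place alongside a new 10TB parity,
# the array is still capped at 4TB data disks:
print(max_allowed_data_disk([4, 10]))   # -> 4

# Once both parity slots hold 10TB disks, 10TB data disks become valid:
print(max_allowed_data_disk([10, 10]))  # -> 10
```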
  4. No data is backed up outside of the array. I suppose I would like the safest option.
  5. Hi, I am about to make some changes to my array and was wondering if someone could confirm best practice. I currently have a 4TB parity disk, four 2TB data disks and one 4TB data disk. All disks are healthy and parity is valid. I have three new 10TB drives. I am planning to add dual 10TB parity disks, convert the current 4TB parity disk into a data disk, and remove a full 2TB data disk and rebuild its data onto a new 10TB data disk. I was planning on pre-clearing each of the three new 10TB disks before making any changes to the array. Could someone please tell me the best practice order of operations to follow after I am done pre-clearing all the new disks? Thanks!
  6. Thanks for your thoughts. If I ditched plex and the most CPU-intensive activity was unpacking archives, what specs do you think I could get away with?
  7. Hoping for some help with a new Q25B build: I'm upgrading my current unraid box, a Q08B using the SUPERMICRO MBD-X7SPA-HF-O Mini ITX board with an Atom D510 processor. The use case is storage plus probably ~5 dockers, the most intensive being plex (max 2-3 streams). No VMs. I already have the drives from my current array; I need help selecting a cost-effective but ample motherboard/processor/memory combination and an SSD cache drive for the dockers. All the examples of recent builds I've found seem like overkill for my purposes, and I'm looking to not spend more than necessary to cover my bases. Any thoughts would be much appreciated, thanks!
  8. Just to preface, after reading many forum threads I am aware that I did not handle this troubleshooting process in the ideal way initially; hoping for some guidance from my current point. This started when I noticed that my parity disk had thrown thousands of errors per the unraid main page. I had not run a parity check in a few weeks, and all previous parity checks had been error free. None of my 3 data drives was showing any errors on the main page at this point, and everything was still green balled. I unfortunately decided to reboot without grabbing a syslog, then ran a parity check, which progressed at an incredibly slow pace. At that point I cancelled the parity check and bought a new HDD to replace my failing parity drive. After performing the swap, I started the parity-sync, which has now finished with 7 errors, all of them on disk3 (a 2TB drive that is about half full). No other errors on any of the other drives. At this point, I assume there is no real way to determine whether I suffered any data loss. Posted below is the syslog showing the errors during the parity-sync, as well as the SMART report for disk3. My best interpretation is that I probably did not suffer any data loss, but that I should replace disk3 with a new drive and let my new parity disk rebuild it. Please advise, thanks! syslog.txt
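For anyone triaging a similar situation, the SMART attributes most worth checking on a suspect disk like disk3 are the reallocated, pending, and uncorrectable sector counters, since non-zero raw values there point at a failing surface rather than a cabling or controller hiccup. The sketch below shells out to smartctl and prints just those rows; the device path is a placeholder, and the attribute names assume a typical ATA SMART table.

```python
# Rough helper: pull the sector-health counters out of a smartctl report.
# /dev/sdX is a placeholder for the suspect disk's device; attribute names
# assume a standard ATA SMART table as printed by `smartctl -A`.
import subprocess

DEVICE = "/dev/sdX"  # placeholder, replace with the disk's actual device
WATCHED = (
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
    "Reported_Uncorrect",
)

out = subprocess.run(
    ["smartctl", "-A", DEVICE],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # Each attribute row ends with its raw value in the last column.
    if any(name in line for name in WATCHED):
        fields = line.split()
        print(f"{fields[1]:<24} raw={fields[-1]}")
```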