Nasha

Members
  • Posts

    44
  • Joined

  • Last visited

About Nasha

  • Birthday 11/04/1986

Retained

  • Member Title
    Jedi Master

Converted

  • Gender
    Male
  • Location
    Melbourne, Australia


Nasha's Achievements

Rookie (2/14)

Reputation: 0

  1. For future reference: 6.12.4 is said to resolve the regression. I'm not sure what actually broke for the small set of people hitting this particular combination of widely used products (Unraid + Docker + Pi-hole). For a while I thought I'd caused the breakage myself; I now know that was pure coincidence. Given Unraid's somewhat vague advice about upgrading versions, the hints point to something that changed within Unraid itself, close to the core (limited end-user impact, plus the correlation I saw between public reports of issues and the time my problem appeared), but I've never actually traced the root cause, because I trashed most any semblance of change tracking the moment I started manually manipulating the related data. My upgrade path was from the early 6.11 releases to what I believe was the root of the 6.12 tree, with 6.12.3 being my last update-and-reboot. My next reboot happened to coincide with some possible changes to the Docker folder storage location on the physical disk, so I chased it as a problem I had created, tried to patch it by copying files between disks and so on, to no avail. I'd resigned myself to life without Docker on Unraid, until the mrs tried to use Plex and all hell broke loose. So I tried starting again: gut it, disable, re-enable from scratch, load items and compare against predictable results. I struck luck joining the dots with some others, which revealed the "perfect storm" scenario — one I might have avoided entirely if I'd waited one more minor revision... So yeah, just wanted to say I felt it too. It sucked to be among the unlucky few facing rather dire failures, but six months later it seems to have passed by like a gentle breeze.
Not like the catastrophic nightmare of having Docker irreconcilably out of action for months... Thankfully nothing irreplaceable was affected, but it did leave me in a critical DR scenario of single points of failure for an extended period. It's an important reminder that Unraid shouldn't be relied upon to function beyond servicing your data storage requirements, and certainly not as a convenient nexus for third-party plugins, applications, and integrations in general — which is exactly what this issue created, or at least brought to attention. It's also a reminder that removing single-disk failures from your life is by no means equal to a safe and comprehensive means of data storage, full stop. Learning the hard truths about multiple disk failures, and how easily they can occur, is extremely painful. Trust me!
  2. Hi All, I'm sure this has to be something stupid and small, but my head is a bit overloaded in Home Assistant mode with my first deployment. This appears to be a two-stage problem. 1. The Unraid host (192.168.1.20) can't ping from the command line, or perform an nslookup against, the Pi-hole Docker container (192.168.1.24), which is running with Network Type: Custom - br0 (where br0 is eth0+eth1 bonded into bond0, on-board NIC + PCI 10GbE NIC). I believe this issue cropped up when I first started fluffing about with AdGuard and Pi-hole, and the thing that solved it at the time was adding a secondary DNS server. There are no issues with DNS, or with accessing Docker containers set up on br0, from my laptop, which receives DHCP. 2. Possibly related: hostnames are not being picked up by Pi-hole. "Use Conditional Forwarding" is enabled in Pi-hole, and both Pi-hole and the DHCP server share the same domain name (local). My router is a Unifi Dream Machine acting as DHCP server (192.168.0.0/22). I've set the .local domain name in both the UDM and Pi-hole. route, nslookup and ifconfig output can be found in this pastebin. Thanks in advance, and please let me know if there's anything further needed. Nasha
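A note on stage 1 above: with a custom br0 network (typically macvlan-backed), the host often cannot reach its own containers by design, which would explain why the laptop works while the Unraid host fails. To check whether the Pi-hole itself is answering, independent of any resolver configuration, one option is to send a raw DNS query straight to its IP on port 53. This is a stdlib-only sketch; the function names are mine, and the IP and hostname below are just the ones from the post:

```python
import socket
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    # Header: ID, flags (standard query, recursion desired), 1 question.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def query(server_ip: str, hostname: str, timeout: float = 2.0) -> bytes:
    """Send the query over UDP to the given DNS server, return raw reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_dns_query(hostname), (server_ip, 53))
        data, _ = s.recvfrom(512)
        return data
```

Running something like `query("192.168.1.24", "pi.hole")` from the laptop versus from the Unraid console would separate "Pi-hole is broken" from "the host simply can't reach the macvlan network".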
  3. Hi All, I thought this was going to be straightforward, but after hours of fiddling around I'm no closer to a result. Just in case the title isn't clear: I need the Unraid box to think its FQDN is tower.publicdomain.id.au so that I can install an SSL cert. In that case, would I still be able to access it locally (192.168.1.x) without it throwing SSL errors? I'd also like to be able to do this for the few Docker containers that need it. I assume the process would be the same as above, except I need to use host networking with its own IP, not bridged? I know the deprecation of self-signed certs and .local domains was for the betterment of security, but it seems to me you need to be a domain/DNS/CA master just to be able to use SSL internally, and potentially open the network to the outside world for LE/ZeroSSL to work unhindered. Other relevant info: - Unifi Dream Machine router running DHCP, with "Domain Name" set to publicdomain.id.au, which seems to propagate fine for DHCP clients (e.g. my laptop gets it), but Unraid is running a static IP for obvious reasons. - Local DNS is currently AdGuard Home (currently testing whether Pi-hole or AdGuard is "for me"). - I do own a public domain name, and know my way around administering all aspects of that. Thanks in advance if anyone is willing to pick up what seems to be an absolute headache at this point! Cheers, Nasha
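On the "access it locally without SSL errors" question: browsers match the host in the URL against the DNS names in the certificate, so a cert issued for tower.publicdomain.id.au will never validate when you browse to 192.168.1.x by IP (public CAs won't issue certs for private addresses). The usual way around this is to make local DNS resolve the public name to the private IP, so you always connect by name. A simplified sketch of the matching rule itself, to make the behaviour concrete (this is an illustration of RFC 6125-style matching, not the library code browsers actually use):

```python
def hostname_matches(pattern: str, host: str) -> bool:
    """Simplified check of a URL host against a certificate SAN entry.

    Real validators also require IP addresses to match iPAddress SANs,
    which is why browsing by 192.168.1.x fails against a DNS-name cert.
    """
    p_labels = pattern.lower().split(".")
    h_labels = host.lower().split(".")
    # Wildcards only cover a single label, so label counts must match.
    if len(p_labels) != len(h_labels):
        return False
    # A wildcard is only valid as the entire left-most label.
    if p_labels[0] == "*":
        return p_labels[1:] == h_labels[1:]
    return p_labels == h_labels
```

So the answer to the question above is: local access stays warning-free only if the box is reached as tower.publicdomain.id.au (e.g. via a DNS rewrite in AdGuard Home pointing that name at 192.168.1.x), not by raw IP.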
  4. No. Following the relevant threads is a good start; you'll find this is still a work in progress. It's a new feature that hasn't existed previously, so simply follow your existing flash backup routine until such time as it's confirmed issue-free and you'll remain completely unaffected.
  5. I didn't realise there was still work going on; I'll keep an eye on the thread you linked to. I swung the virtual hammer to no success, so the file is attached as requested. TL;DR: it seems like some sort of permissions issue, but I'll leave it in your capable hands. Thanks for the assistance! gitflash
  6. I tried this, as I noticed there had been a few updates since I last tried to get this set up. Unfortunately, I'm still experiencing "failed to sync flash backup" - "not up-to-date". Am I an anomaly, or is this still being worked on? Thanks, Nash
  7. @JoeUnraidUser If you read a few posts up, there are still some issues with this function, and it has yet to be declared fully functional. Keep an eye on this thread for updates.
  8. I don't seem to be able to get a backup loaded. The system is only a few weeks into deployment, and I only discovered earlier that I had to "activate" the flash backup feature (the whole My Servers concept is a new feature since I last used Unraid), so in reality it's only been a few hours since I activated — but yeah, no love in that time. I'll follow this thread for the all clear @ljm42, and then report if I'm still experiencing errors after that.
  9. OK... I figured the speed thing might be a potential issue, but the missing TRIM I didn't see coming... I'll admit I have very little idea what that means, but I know it's not good when it's missing. I was only going the SSD route because I have all these drives here, but maybe I sell them off and buy some ex-server 300/600/900GB 10k SAS drives, which can be had pretty cheap. I just need to make sure their thickness is going to fit my case and mounting bracket. I think that may well be an issue, which may mean back to the drawing board. Ho hum! Thanks for the info.
  10. Hi All, I haven't had a NAS at home for ages now and I finally sold off all my existing server hardware, so I have a bit of a budget to start planning things again. I've ended up with a bunch (6+) of 120GB SSDs, and I wanted something small this time around, so an SSD-based array seemed a logical choice, especially given I don't have massive data requirements (no 4K RAW family home video libraries here). Of course I'd look at getting some larger drives as well, which brought me to the question: do I need to waste money on an SSD parity drive? After some pondering, I assumed this would be not only frowned upon, but might actually be a bad idea... Is it genuinely a bad idea, or just not the best choice? Money is the driving factor, and there's not a lot to spare, so buying a pair of ~500GB SSDs and using a shucked 2TB portable HDD for parity would ideally be disks done for me. So I'd like to know: what nasties am I opening myself up to going down this route? I appreciate any and all feedback, and thanks for reading!
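For context on the parity question above: Unraid-style single parity is conceptually a byte-wise XOR across the data disks, which is why the parity drive must be at least as large as the largest data disk, and why a slower HDD parity drive caps write speed for the whole array. A toy sketch of the idea (the three short byte strings stand in for whole disks):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three "disks" holding one equal-sized block each.
disks = [b"family", b"photos", b"backup"]
parity = xor_blocks(disks)

# Simulate losing disk 1 and rebuilding it from the survivors + parity.
rebuilt = xor_blocks([disks[0], disks[2], parity])
assert rebuilt == disks[1]
```

The same property means single parity only survives one simultaneous disk failure; a second failure during a rebuild loses data, which is worth weighing when mixing drive types on a tight budget.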
  11. I tried... at which point I waited over an hour for diagnostics to be generated and they still weren't done. So I rebooted and ran it immediately. The read check was started prior to this post, so as far as I'm concerned there's nothing more I could have done to please you. Nothing has deteriorated any further since my OP; the diags reflect the current situation. My OP problem still exists, and all I've had is people criticising my actions, unrelated to assisting with the OP. Nothing has changed.
  12. I was using atop at the time to see what was going on, but I didn't initiate an installation of any sort other than updating CA. I installed NerdPack for a certain set of tools. Generally I know what I'm doing, but this issue is out of my league. Does this impact my initial problem in any manner? Is it auto-updating since the last power-up? Should I kill it? EDIT: Uninstalled NerdPack until such time as I find I need it again.
  13. The /var/log tmpfs partition had hit 100%, probably due to the number of parity errors. The system became unresponsive, so I rebooted and grabbed the diags (attached). Hopefully it helps! Appreciate you taking a look. yoda-diagnostics-20220124-0002.zip
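A full /var/log tmpfs like the one above can be caught before the system locks up by polling how full that partition is. A minimal stdlib sketch — the 90% threshold and function names are arbitrary choices of mine, not anything Unraid ships:

```python
import shutil

def usage_percent(path: str) -> float:
    """Return how full the filesystem containing `path` is, in percent."""
    total, used, _free = shutil.disk_usage(path)
    return 100.0 * used / total

def log_partition_alert(path: str = "/var/log", threshold: float = 90.0) -> bool:
    """True when the partition holding `path` is above `threshold`% full."""
    return usage_percent(path) > threshold
```

Something like this could run from cron and fire a notification while the box is still responsive enough to collect diagnostics.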
  14. I was waiting for the parity read check to complete before doing this, but it's two days in with about two days remaining. I've kicked off a diagnostics collection and so far I've been waiting 15 minutes for it to complete. Would this be considered normal for a system in my shape (i.e. degraded array, parity check in progress)? The dashboard shows 8 cores pegged at 100% and it doesn't drop from that, so it's either really busy or stuck running around in circles! Hopefully it finishes sometime soon before my laptop battery dies, otherwise I'll have to start it again on my phone 😣
  15. Currently they don't show under UD, as Unraid sees them as part of the existing array... Do you suggest I remove them from the array in order to have them show up in UD? I'm currently running a parity read check, as I thought it might have some benefit. Should I cancel this to perform the above, or await its completion (ETA 14 hrs)? Thanks for your help! Nasha