Everything posted by Nasha

  1. For all future reference - 6.12.4 is said to undo whatever nerfing was initially introduced... I'm still not sure what actually nerfed this particular set of repeatable circumstances, shared by very few people but all with common ties to widely used products (Unraid, Docker and Pi-hole). At the time I thought I'd triggered the nerf myself; I now know that was pure coincidence.

     Pretty sure, given the Unraid comment vaguely advising an upgrade in imperfect English, that something changed within Unraid, tight to the dev core (limited end-user impact, plus the correlation I know existed between public reports of issues around the time my problem occurred) - but I've not actually tried to find the root cause, because I trashed most any semblance of change tracking the moment I manually manipulated the related data. My upgrade path started from what I believe was the root of the 6.12 tree, reaching back to the early 6.11s, and 6.12.3 was my last update-and-reboot. My next available reboot just happened to coincide with some possible changes to the Docker folder storage location on the physical disk, so I chased it as a problem I had created, tried to patch it by copying files between disks etc. to no avail, and I'd resigned myself to life without Docker on Unraid, until the mrs tried to use Plex and all hell broke loose. So I tried "starting again": gut it, disable, enable from scratch, load items based on predictable results. I struck luck joining the data dots with some others, which revealed the "perfect storm" scenario - one I may have completely avoided had I waited one minor revision longer...

     So yeah, just wanted to say I felt it too. It sucked to be among the unlucky few facing rather dire failures, but it appears that 6 months later it has passed on by like a gentle breeze, not the catastrophic nightmare of having Docker irreconcilably out of action for months... Thankfully, nothing that cannot be replaced was impacted, but it did leave me in a critical DR scenario of single points of failure for an extended period of time. An important reminder that Unraid shouldn't be relied upon to function beyond servicing your data storage requirements and interests - certainly not the chain of third-party plugins feeding a third-party application that becomes a convenient nexus for yet more third-party integrations, which is exactly what this issue created/brought to attention... Including the reminder that removing single-disk failures from your life is by no means equal to a safe and comprehensive means of data storage, full stop. Learning the hard truths about multiple disk failures, and how easily they can occur, is extremely painful - trust me!
  2. Hi All, I'm sure this has to be something stupid and small, but my head is a bit overloaded in Home-Assistant mode with my first deployment. This appears to be a two-stage problem: 1. The Unraid host (192.168.1.20) can't ping, or perform an nslookup against, the Pi-hole Docker container (192.168.1.24), which is running Network Type: Custom - br0 -- Ethernet (where br0 is eth0+eth1 bonded into bond0, on-board NIC + PCI 10GbE NIC). I believe this issue cropped up when I first started fluffing about with AdGuard & Pi-hole, and the workaround at the time was adding a secondary DNS server. There are no issues with DNS, or with accessing the br0 Docker containers, from my laptop, which receives DHCP. 2. Possibly related - hostnames are not being picked up by Pi-hole. "Use Conditional Forwarding" is enabled in Pi-hole, and both Pi-hole and the DHCP server share the same domain name (local). My router is a Unifi Dream Machine & DHCP server (192.168.0.0/22). I've set the .local Domain Name in the UDM & Pi-hole. route, nslookup & ifconfig output can be found in this pastebin. Thanks in advance, and please let me know if there's anything further needed. Nasha
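     For anyone landing on this later: the host being unable to reach a br0 (macvlan) container is expected behaviour, as macvlan deliberately isolates the parent host from its child interfaces. A common workaround is a macvlan "shim" interface on the host. A minimal sketch, assuming br0 as the parent and the addresses from the post (the shim IP 192.168.1.250 is a hypothetical spare address on the same subnet):

         # Create a host-side macvlan interface on the same parent as the containers
         ip link add shim-br0 link br0 type macvlan mode bridge
         ip addr add 192.168.1.250/32 dev shim-br0
         ip link set shim-br0 up
         # Route traffic for the Pi-hole container via the shim instead of br0 itself
         ip route add 192.168.1.24/32 dev shim-br0

     This doesn't persist across reboots; on Unraid it would need to live in the go file or a User Scripts entry.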
  3. Hi All, I thought this was going to be straightforward, but after hours of fiddling around, I'm no closer to a result. Just in case the title isn't clear: I need the Unraid box to think its FQDN is tower.publicdomain.id.au so that I can install an SSL cert. In that case, would I then still be able to access it locally (192.168.1.x) without it throwing SSL errors? I'd also like to be able to do this for the few Docker containers that need it. I assume the process would be the same as above, except I'd need to use host networking with its own IP, not bridged? I know the deprecation of self-signed certs and .local domains was for the betterment of security, but it seems to me you need to be a domain/DNS/CA master just to be able to use SSL internally, and potentially open the network to the outside world for LE/ZeroSSL to work unhindered. Other relevant info: - Unifi Dream Machine router running DHCP, with "Domain Name" set to publicdomain.id.au - which seems to propagate fine for DHCP clients, i.e. my laptop gets it, but Unraid is running a static IP for obvious reasons. - Local DNS is currently AdGuard Home (currently testing whether Pi-hole or AdGuard is "for me"). - I do own a public domain name, and know my way around administering all aspects of that. Thanks in advance if anyone is willing to pick up what seems to be an absolute headache at this point! Cheers, Nasha
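     A side note for anyone with the same goal: a cert for an internal host doesn't require opening the network at all if the CA challenge is DNS-based. A minimal sketch using acme.sh with the DNS-01 challenge (assumes a Cloudflare-managed zone; the token value is a placeholder, and the provider flag differs for other DNS hosts):

         # API token for the DNS provider (value is a placeholder)
         export CF_Token="..."
         # Issue the cert by publishing a TXT record - no inbound ports needed
         acme.sh --issue --dns dns_cf -d tower.publicdomain.id.au

     With local DNS (AdGuard/Pi-hole) resolving tower.publicdomain.id.au to the LAN IP, browsers then see a valid cert with no SSL errors.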
  4. No - following the relevant threads is a good start. You'll find that this is still a work in progress. It's a new feature that hasn't existed previously, so simply follow your existing flash backup routine until such time as it's confirmed to be issue-free, and you'll remain completely unaffected.
  5. I didn't realise there was still work going on; I will keep an eye on the thread you linked to. I swung the virtual hammer, to no success, so file attached as requested - TL;DR it seems like some sort of permissions issue, but I'll leave it in your capable hands. Thanks for the assistance! gitflash
  6. I tried this, as I noticed there'd been a few updates since I last tried to get this set up. Unfortunately, I'm still experiencing "failed to sync flash backup" - "not up-to-date". Am I an anomaly, or is this still being worked on? Thanks, Nash
  7. @JoeUnraidUser if you read a few posts up, there are still some issues with this function, and it has yet to be declared fully functional. Keep an eye on this thread for updates.
  8. I don't seem to be able to get a backup loaded. The system is only a few weeks into deployment, and I only discovered earlier that I had to "activate" the flash backup feature (the whole My Servers concept is a new feature since I last used Unraid), so in reality it's only been a few hours since I activated - but yeah, no love in that time. Will follow this thread for the all clear @ljm42, and then report if I'm still experiencing errors after this.
  9. Ok... I figured that speed thing might be a potential issue... but no TRIM I didn't see coming... I'll admit I have very little idea what that means, but I know it's not good when it's missing. I was only going the SSD route because I have all these drives here, but maybe I sell them off and buy some ex-server 3/6/900GB 10k SAS drives, which can be had pretty cheap. I just need to make sure their thickness is going to fit my case and mounting bracket. I think that may well be an issue, which may mean back to the drawing board. Ho hum! Thanks for the info
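     For anyone wanting to check whether their own drive/controller combination passes TRIM through, a quick non-destructive sketch (sdX is a placeholder):

         # Non-zero DISC-GRAN/DISC-MAX columns mean the kernel sees discard (TRIM) support
         lsblk --discard /dev/sdX

     If those columns read 0, TRIM isn't reaching the drive - commonly the case behind RAID controllers and some HBAs or USB bridges.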
  10. Hi All, Haven't had a NAS at home for ages now, and I finally sold off all my existing server HW, so I have a bit of a budget to start planning things again. I've ended up with a bunch (6+) of 120GB SSDs, and I wanted something small this time around, so an SSD-based array seemed a logical choice, especially given I don't have massive data requirements (no 4K RAW family home video libraries here). Of course I'd look at getting some larger drives as well, which brought me to the "do I need to waste $$'s on an SSD parity drive?" question. After some pondering, I assumed this would be not only frowned upon, but might actually be a bad idea... Is it genuinely a bad idea, or just not the best choice? Money is the driving factor - not a lot to spare - so buying a pair of ~500GB SSDs and using a shucked 2TB portable HDD for parity would ideally be disks done for me. So I'd like to know: what nasties am I opening myself up to going down this route? I appreciate any and all feedback, and thanks for reading!
  11. I tried... at which point I waited over an hour for diagnostics to be generated, and they still weren't done. So I rebooted and ran it immediately. The read check was started prior to this post, so as far as I'm concerned there's nothing more I could have done to please you. Nothing has deteriorated any further since my OP; the diags reflect the current situation. My OP problem still exists, and all I've had is people criticising my actions, unrelated to assisting with the OP. Nothing has changed.
  12. I was using atop at the time to see what was going on... but I didn't initiate an installation of any sort, other than updating CA. I installed NerdPack for a certain set of tools - generally I know what I'm doing, but this issue is out of my league. Does this impact my initial problem in any manner? Is it auto-updating since the last power-up? Should I kill it? EDIT: Uninstalled NerdPack until such time as I find I need it again
  13. The /var/log tmpfs partition had hit 100%, probably due to the number of parity errors. The system became unresponsive, so I rebooted and grabbed the diags (attached); hopefully it helps! Appreciate you taking a look. yoda-diagnostics-20220124-0002.zip
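     For anyone else who finds /var/log full, a sketch of how to confirm and free the space without a reboot (the paths shown are the usual suspects, not specific to this report):

         df -h /var/log                  # confirm the tmpfs is at 100%
         du -sh /var/log/* | sort -h     # find what's eating it
         truncate -s 0 /var/log/syslog   # empty the offender in place

     Truncating in place (rather than deleting) matters: syslog holds the file open, so a deleted file wouldn't actually release the space until the daemon restarts.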
  14. I was waiting for the parity read check to complete before I did this, but it's 2 days in with about 2 days remaining. I've kicked off a diagnostics collection, and so far I've been waiting 15 min for it to complete. Would this be considered normal for a system in my shape (i.e. degraded array, parity check in progress)? The dashboard shows 8 cores pegged at 100%, and it doesn't drop from that. So it's either really busy, or stuck running around in circles! Hopefully it finishes sometime soon before my laptop battery dies, otherwise I'll have to start it again on my phone 😣
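     One way to take the laptop battery out of the equation: kick the collector off over SSH, detached from the session. A sketch, assuming Unraid's built-in diagnostics CLI command:

         # Survives the SSH session (and laptop) dying; the zip should land under /boot/logs/
         nohup diagnostics > /dev/null 2>&1 &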
  15. Currently they don't show under UD, as Unraid sees them as part of the existing array... Do you suggest I remove them from the array in order to have them show up in UD? I'm currently running a parity read check, as I thought this may have some benefit. Should I cancel this to perform the above, or await its completion (ETA 14 hrs)? Thanks for your help! Nasha
  16. Hi all, My Unraid server has been gathering dust in the corner ever since I thought I lost 2 disks to physical failure. Bored one night, I plugged in what I thought was the running config, and my HBA only dropped 1 drive... I saw hope! The array is mounted and everything, and I can access the data on the sole drive that's operational (aside from the parity disk)... hope dwindles... However, there are two disks it's placed into the array but marked "Unmountable: invalid partition layout", and it wants to format them. It states their FS type as XFS in the dashboard. Now, I'm not familiar enough with XFS to tackle this solo... My suspicion is that the HBA (or my stupidity controlling the HBA) may have messed with them, forming virtual disk groups and then going back to "pass through", because the drives I'd purchased 2nd hand to resolve the initial disk issues were DOA, unbeknownst to me at the time. So I stupidly fiddled at the HBA level to try and rectify things (from very foggy memories). I thought I'd kick off a parity read check just now, and interestingly it appears to be reading from these drives despite them not being mounted (based on "Reads" on the "Main" dashboard).

     TL;DR: could anyone suggest how to go about looking into potentially recovering the data on these ("Unmountable: invalid partition layout" according to Unraid) drives? Sure enough, it's all the replaceable media that survived! Unraid 4.19.107. 5 disk array: 1 parity (working), 1 storage (working), 1 storage (failed/disconnected), 2 storage "unmountable". Any help would be greatly appreciated, even if it's just a pointer to something similar, or a tool whose man pages I need to study to dig into things further. Thanks in advance, Nasha
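     For anyone in a similar spot, the safe first moves are strictly read-only. A sketch (sdX is a placeholder; don't let anything write to the disks, and definitely don't format):

         fdisk -l /dev/sdX        # inspect what's left of the partition table
         xfs_repair -n /dev/sdX1  # -n = no-modify dry run; reports damage without touching it

     If the controller clobbered the partition table itself, a tool like testdisk can often rediscover the old partition boundaries before any XFS-level repair is attempted.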
  17. Hi all, I can't seem to locate anyone describing anything similar to my problem/what I want to achieve here, so I'm reaching out in hope of an answer (as I'm tired of wading through the amount of email produced! This is essentially the "problem", the secondary problem being the dashboard alerts). Ok, I understand I could do all sorts of client-side sorting/filtering etc. with the email - which I will do if I'm not able to reach a resolution here... The story: I have 2 fans which generate countless "failures" because they spin too slowly. They actually spin in a range that sees them constantly fluctuating up and down, tripping the low threshold in both directions... Each occurrence generates 5 emails from the resulting alerts (inline below), with normal, warning and alert priorities. As it's the variation of the fan speed which actually triggers the influx of alerts, it can happen once every 24 hrs (rarely), or once every second if it wants to.

     Event: unRAID Server Alert
     Subject: Notice [YODA] - IPMI Event
     Description: localhost *Warning* Fan - Lower Critical - going low ; Sensor Reading = 1024.00 RPM ; Threshold = 576.00 RPM / Importance: warning
     Description: localhost *Nominal* Fan - Lower Non-critical - going low ; Sensor Reading = 1024.00 RPM ; Threshold = 784.00 RPM / Importance: normal
     Description: localhost *Critical* Fan - Lower Non-recoverable - going low ; Sensor Reading = 1024.00 RPM ; Threshold = 400.00 RPM / Importance: alert
     Description: localhost *Warning* Fan - Lower Critical - going low ; Sensor Reading = 1024.00 RPM ; Threshold = 576.00 RPM / Importance: warning
     Description: localhost *Nominal* Fan - Lower Non-critical - going low ; Sensor Reading = 1024.00 RPM ; Threshold = 784.00 RPM / Importance: normal

     Does anyone have any thoughts/suggestions/etc. for addressing this? Thanks, Nash
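     A common fix for slow-spinning fans sitting near the factory thresholds is to lower the thresholds themselves rather than filter the mail. A sketch using ipmitool (the sensor name "FAN2" and the RPM values are placeholders; take yours from the sensor list):

         ipmitool sensor list | grep -i fan
         # Set the lower non-recoverable / critical / non-critical thresholds, in that order
         ipmitool sensor thresh "FAN2" lower 100 200 300

     The values usually need to be multiples of the sensor's step size, and some BMCs revert them on power loss, so the command may need to run at each boot.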
  18. The method in this thread is both outdated and generally not required for NICs. Stubbing, GPUs aside, is only *required* when a device sits in a shared IOMMU group. Where it is required, it's achieved via the vfio-pci.ids=xxxx:XXXX append statement, in lieu of the former pci-stub.ids statement. This binds the HW to the vfio driver at boot, which is the currently accepted method. I also want to pull out one key piece of info from the OP that stands today more than it did 5 yrs ago: direct passthrough of network controllers is NOT a requirement for most use cases under Windows & Unix (I can't comment on the current state of BSD), and in fact loses benefits to be had from a vNIC. See the OP for more. For VMs of PC type (not Q35), here's a quick and simple way to pass through non-GPU devices. Now, with all that out of the way... your question is valid, and as of v6.7 there's a fancy new method for it. I've linked to a thread rather than the changelog post, as there's some good info posted by Limetech, but be sure to also read the changelog. With regards to passthrough, there isn't going to be one all-encompassing or future-proof method, for a variety of reasons. For anyone reading this post in future, or not covered by the above: search for the currently accepted method of performing passthrough (there will be a relevant thread here), as this will give you the best results.
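     To make the append statement concrete, a sketch (the vendor:device ID 8086:10d3 is a placeholder; read yours from lspci):

         # Find the [vendor:device] ID of the device to stub
         lspci -nn | grep -i ethernet
         # Then extend the append line in /boot/syslinux/syslinux.cfg, e.g.:
         #   append vfio-pci.ids=8086:10d3 initrd=/bzroot

     After a reboot, lspci -k should show the device bound to vfio-pci, leaving it free to be passed to a VM.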
  19. Hi Raiders, I recall passing through someone's storage forum and briefly glimpsing a comment advising purchasers away from the much cheaper Chinese-stocked controller cards, HBAs/expanders. At the time it was of no significance to me - it wasn't what I was looking for - so I moved on. Well, now I need a SAS expander, where one can be had from HP or IBM for $20 new... Even the Intel expander is quite affordable. Are there known ongoing issues with these imported, assumed "cloned" cards? Or a handful of cases and a bit of Russian roulette? Would you use one? (*Everyone screams NO, IT'S YOUR DATA HERE*) Unfortunately this is an unforeseen expense, and finances are never good anyway, but I came to purchase a new-to-me (used) IBM x3950 X5 for a very good price, which means those big old disks have got to go somewhere! Enter the external SAS enclosure: - Gen8 Microserver gutted of the majority of internals (everything but the wall) - Replaced drive bays with a 5-in-3 cage - Had some SSD mounts 3D printed. She's coming together beautifully, but the build is nearing completion, and I need to get that SAS card in there so I can enjoy my penta-CPU 80-core beast in action! No suggestions ignored! Thanks, Nash
  20. Might be a stupid question... but can I use a GPU-mining PCIe riser board to power the SAS expander? I don't see any reason why not...
  21. Seems pricey compared to everything else; however, it's also the only one I've seen with an independent power input, which is a BIG plus. Thanks, I assumed as much, but wanted to ask whilst I had the chance! 8088-8088 cables: already sitting in my cart. Well, there you go - the most interesting piece of information I've learnt so far! That's nifty. Ah ok - I might go without; I don't need to turn it off regularly enough to need a power button, and that way it's hidden from prying fingers. If I do find the need, I'll whip up a board of my own. So I know exactly what I'm doing now; I just have the issue of sourcing a local SAS expander, and also the price of the Intel SAS (considering this was not budgeted for). I certainly want to avoid requiring a motherboard etc... so I need to pray I can find one 2nd hand in the forums, a bit cheaper, and local! Thanks again for your time and expertise johnnie, greatly appreciated! I've wandered these forums long enough to know that you're one man who knows his sh*t, so I know I've got trusted information, and probably the best available!
  22. Firstly johnnie, thank you very much for your assistance and your patience. This concept was foreign to me, and I've learnt a lot; you put up with my misunderstanding and nudged me in the right direction. Just a few questions to finish off, and I should be out of your hair. I know the solution design now; I just have questions about parts. Made some notes on the img to ensure I understood what was going on. Ok, so I won't be going the HP route - upgrading my hardware, I don't want to go backwards with the expander. So what affordable SAS expanders should I look at that won't have an issue with Unraid? Just had a quick look on eBay; looks like the only expanders in Aus are HPs! Is the IBM 46M0997 ServeRAID Expansion Adapter 16-Port SAS Expander sufficient? IBM would ensure compatibility with my server... From the image: - What expander is in use in the photo? - Is that Molex power an add-on, or part of the design? - Those PCI boards, what exactly are they called? - The SAS expander is taking input from the HDDs and also outputting; are there designated input/output ports on the card, or is it a feature of an expander that it can detect inputs/outputs? - What's the role of the SuperMicro board, and can they be purchased individually? Name/model? It necessitates a motherboard, correct?
  23. So option 2 I wouldn't consider - as you said, vulnerable, and the 1 m limit will make things difficult or impossible. I have an HBA in the server with external ports, so I assume you're saying I need a SAS expander in place of my M1115? Can you give some examples of a SAS expander that has both internal and external ports? I was unable to find anything on eBay, with the exception of a SuperMicro UIO card.
  24. This is why I am here - I had assumed they could communicate across the PCIe bus. Ok, so that setup does not work. So I need an HBA with internal & external ports in the enclosure (examples of such? I couldn't find anything I'd trust to work on eBay)?
  25. In future, if I expand beyond 8 disks, I would replace the M1115 with a SAS expander and some more breakout cables, right? As in this scenario, the M1115 is playing the role of the "SAS expander", if I understand correctly...