Everything posted by thestraycat

  1. @JorgeB Sweet. Obviously I'll just need to remap my shares to the new explicit disk names, right? If, for example, my share /user was explicitly mapped to disk 19, and disk 19 is now disk 10, I'll need to go into each share and update it to the new disk number? (See the sketch below.)
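A minimal sketch of how I'd sanity-check this after the reassignment, assuming share settings live in /boot/config/shares/*.cfg with shareInclude/shareExclude keys (the slot mapping below is a made-up example; verify the path and key names on your own server first):

```python
# Hypothetical sketch: flag shares whose include/exclude lists still name
# old disk slots after a New Config reassignment. Assumes Unraid keeps
# share settings in /boot/config/shares/<share>.cfg with lines such as
#   shareInclude="disk19,disk20"
# Verify the path and key names on your own server before trusting this.
import glob
import re

OLD_SLOTS = {"disk19": "disk10"}  # hypothetical old -> new slot mapping

for cfg in glob.glob("/boot/config/shares/*.cfg"):
    with open(cfg) as f:
        for line in f:
            m = re.match(r'share(Include|Exclude)="([^"]*)"', line.strip())
            if not m:
                continue
            stale = [d for d in m.group(2).split(",") if d in OLD_SLOTS]
            if stale:
                print(f"{cfg}: share{m.group(1)} still references {stale}")
```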
  2. @trurl @JorgeB I'm just about to remove 10 x 2TB disks from my array, which will leave the disk numbering all over the place... in essence it'll go from this (first screenshot) to this (second screenshot). Is it possible for me to assign my 10 remaining 6TB disks to disk slots 1-10 without losing any data? (Obviously I would update the new disk slot numbers in my shares if they were previously explicitly set, and I'd reassign parity1 and parity2 back to the same slots they were in before.) Would that work? I want it to look like this after reassignment (third screenshot).
  3. Thanks for this - how can I disable parity temporarily?
  4. @trurl @JorgeB I'm following the "remove drives then rebuild parity" method for removing multiple disks. As I can only run unBalance on a single disk at a time and have 10 to do, I initially assumed my parity would be invalid until I'd finished the last disk and rebuilt it. However, after a re-read, I think the only time my parity is at risk is at the end of the process, when I run New Config to finish removing the drives and Unraid rebuilds parity once the disks are finally removed. In regards to the disks, I've tried both moving and copying, and both transfer at around 54 MB/s with unBalance. I wonder whether it's because the 2TB disks are 99% full. The disks are very old (2010), but the SMART reports for all the 2TB disks pass, so no issue with the disks. I have the Disk Speed plugin and it shows no bottlenecks with the disk or controller config. I'm currently copying from disk8 to disk12, and as per the screenshot there's a lot more bandwidth between the two disks than unBalance is using. I know I'll lose some from running dual parity, but it does seem a little slow regardless. (Screenshots: current unBalance copy disk8 > disk12; Disk Speed plugin showing the bandwidth available on disk8 and disk12.) Any idea on how to speed it up?
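One thing that can help bulk array writes is switching to reconstruct write ("turbo write"). A hedged sketch, assuming the md_write_method tunable is settable via mdcmd as discussed on the forums (the same setting lives under Settings > Disk Settings in the web UI; verify the command and values on your release):

```python
# Hypothetical sketch: toggle "turbo write" (reconstruct write) from a script.
# Assumes Unraid exposes the md_write_method tunable via /usr/local/sbin/mdcmd;
# verify the command and accepted values on your own release first.
import subprocess

def set_write_method(reconstruct: bool) -> None:
    # 1 = reconstruct write ("turbo"), 0 = read/modify/write (default)
    value = "1" if reconstruct else "0"
    subprocess.run(["/usr/local/sbin/mdcmd", "set", "md_write_method", value],
                   check=True)

set_write_method(True)   # enable turbo write before the bulk copy
# ... run the copy ...
set_write_method(False)  # restore the default afterwards
```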
  5. Quick one. I have 10 x 6TB disks and 10 x 2TB disks, and I'd like to remove ALL the 2TB disks from my array, which are now surplus and not needed. However, I need to get the data off the 2TB disks and over to the 6TB disks. I'm using unBalance to move the contents of the 2TB disks over to the newer 6TB disks, but each disk looks to be taking around 12 hours and I have 10 to do. It seems unBalance is locked to doing only one disk at a time... is there a faster way for me to get all these disks going simultaneously, to save on the time my array parity is down? I thought about just opening up 10 terminal sessions and manually copying the contents over simultaneously (roughly what the sketch below does), but was wondering if there's a nicer solution? I'd assume I'd probably saturate my parity disks' bandwidth if I had multiple whole-disk copies going at the same time? Would turbo write help in this situation?
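For the "10 terminal sessions" idea, here's a minimal sketch that runs one rsync per source/destination disk pair in parallel. The pairings are hypothetical; map your own disks, never point two copies at the same target, and note that parity writes are the shared bottleneck, so expect diminishing returns past a few concurrent copies:

```python
# Hypothetical sketch: copy several source disks to destination disks in
# parallel, one rsync per pair, instead of one unBalance job at a time.
import subprocess
from concurrent.futures import ThreadPoolExecutor

PAIRS = [  # hypothetical (source, destination) mount points
    ("/mnt/disk11", "/mnt/disk1"),
    ("/mnt/disk12", "/mnt/disk2"),
    ("/mnt/disk13", "/mnt/disk3"),
]

def copy_disk(pair):
    src, dst = pair
    # -a preserves ownership/times; trailing slash copies contents, not the dir
    rc = subprocess.run(["rsync", "-a", f"{src}/", f"{dst}/"]).returncode
    return src, dst, rc

with ThreadPoolExecutor(max_workers=len(PAIRS)) as pool:
    for src, dst, rc in pool.map(copy_disk, PAIRS):
        print(f"{src} -> {dst}: {'ok' if rc == 0 else f'rsync exited {rc}'}")
```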
  6. Any chance we can add ExifTool to NerdTools? NerdPack had it bundled in and I'm now super reliant on it... I've had to roll back for the time being just to get it back. Is there a better way/process to request packages to be re-added?
  7. Thanks, I thought it had been incorporated, but just couldn't find any solid evidence when googling it! Cheers.
  8. Quick one. I'm running 6.10.3 and have finally found the time to move over from my ancient old OpenVPN-AS container to WireGuard. However, all the resources recommend the Dynamix WireGuard plugin, but that hasn't been updated for about 18 months, it seems... So I checked out linuxserver.io's image and found out that it's been marked as deprecated. What are you guys using?
  9. But that guide references the Dynamix WireGuard plugin as a prerequisite... "Prerequisites: You must be running Unraid 6.8+ with the Dynamix WireGuard plugin from Community Apps" Can someone confirm which WireGuard maintainer they are using now?
  10. Hi guys, quick one. I know I can recreate this, but I have a lot of containers, around 50 or so, and am upgrading my cache to a larger disk. Is there any real issue in: turning off Docker, copying the docker.img over to the new cache, then turning Docker back on? (Sketched below.) I don't really want to have to go through the whole add container > "my-templates" > add template process, as I have a lot of containers, some duplicates, and have previously installed a lot of other containers from different owners with identical names etc. Has anyone run into issues in just copying docker.img across?
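A minimal sketch of the stop/copy/start sequence, assuming the pool paths below (they're examples only) and that /etc/rc.d/rc.docker is the Docker service script on your Slackware-based Unraid release; verify both before running anything:

```python
# Hypothetical sketch: stop Docker, copy docker.img to the new cache pool,
# restart Docker. Paths are examples; verify the service script path too.
import shutil
import subprocess

SRC = "/mnt/old_cache/system/docker/docker.img"  # hypothetical old pool
DST = "/mnt/cache/system/docker/docker.img"      # hypothetical new pool

subprocess.run(["/etc/rc.d/rc.docker", "stop"], check=True)
# copy2 preserves timestamps but not sparseness, so the copy may occupy
# the image's full allocated size; make sure the target pool has room
shutil.copy2(SRC, DST)
subprocess.run(["/etc/rc.d/rc.docker", "start"], check=True)
```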
  11. @itimpi A really great way to summarize it. Thanks. @Frank1940 @JonathanM - Thanks also. If anyone has replaced more than one disk at a time, I'd love to hear about your personal experience.
  12. @Frank1940 @JonathanM These are all good points, but I feel they've all been factored into my plan already; I didn't go into detail in this post as I wanted to keep it focused on just the risks and Unraid's ability to handle replacing multiple disks at once. I'm looking for clarification that I can in fact replace 2 disks at once, and what the risks of this are.
      "Do you really need an additional 52TB of storage at this exact point in time?" >> Basically, yes. My array is 91% full, and my long-winded plan (which I didn't post, to save a 'block of text' question that got no replies) was to always hold a disk or two back for emergency replacement.
      "The reason that I mention it is that a purchase of that many HDs at one time from a single source would mean that all of the disks would be coming from a single manufacturing lot." >> The disks are used, SMART tested as 100% healthy, with staggered data written and time powered on between them all, and they're from different batches with different manufacture dates. That's about all the risk I could mitigate when purchasing them.
      "You could make an argument that you could be buying into a lot of superior quality." >> I feel I am, as these are battle-tested with lower usage and with potential warranties to utilize.
      "Frank has a really good point. I would not add all the disks at once, rather a staggered strategy, where you replace the worst of the old disks as you actually need the free space." >> Yup, this has always been my plan: replace both parity disks with 6TB disks, then get my current array usage of about 36TB onto 6TB disks, add 2 more 6TB disks to increase the array's headroom, and then hold 1 to 2 back. Mainly for electricity savings, heat savings (as I have a dense 24-bay 4U rack), avoiding needless mileage on unused hard disks, and also to have the security of future replacement disks on hand. Lastly, the 2TB disks all need removing, as they now need to be sold to cover the costs of the 6TB disks.
  13. @JonathanM - This is what I thought too. I take it I'd be vulnerable to losing data if any additional disk failed during the 2-disk rebuild? (And I'd have to put back the old 2TB disks to recover the data.) In theory it should take the same time to rebuild 2 disks as it would 1 disk, right? Can anyone that's done it vouch for the 2-disk approach working?
  14. I'm pretty sure I know the answer to this question, but thought I'd get confirmation regardless. I have an array of 20 x 2TB disks plus 2 x 6TB parity disks. I plan on swapping out 13 of the 2TB disks for 6TB disks. Can I pull 2 disks at a time, since I'm running dual parity? Would the risk not be the same as someone running single parity swapping out 1 disk at a time? I have an inkling that the answer may be: "if there were an issue during the rebuild, you'd stand to potentially lose 2 disks of data rather than 1," or similar.
  15. @Squid Hey Squid, I'm currently still running the v1 Backup and Restore app, and I really like it: when I need to recover a config file or an individual file, I can simply hop into the backup, hop into the folder, and grab the file without having to restore the whole backup, which I believe is what the v2 app did when I originally moved over to it? You mentioned in the first post: "Due to some fundamental problems with XFS / BTRFS, the original version of Appdata Backup / Restore became unworkable and caused lockups for many users. Development has ceased on the original version, and is now switched over to this replacement." Can you explain what the issues with the v1 app were exactly and how it was causing problems? Did it ever crash Unraid for any users? I'm currently troubleshooting my array, and my only warning from the community Fix Common Problems plugin is that I still run the v1 plugin... I'd love to know more.
  16. @JorgeB I agree it looks unlikely. However, if you look at where the crashes happen in the log, they all happen at the same point: the last tracked log entry is always something communicating with an HBA, or with a disk attached to that particular HBA. Or do you think that's just coincidence?
  17. @JorgeB - I'm leaning towards the same things myself. I have actually been running it with VM Manager off for the last few days and still had a few crashes... Yesterday I turned off Docker overnight to see if the server stayed up; it was down in the morning. I decided to run a memtest just for the hell of it, which ran for 8 hours and passed every test. One thing I did note from the log, though, is that it seems quite often to crash at: MediaServer emhttpd: spinning down /dev/sdf (sdf is my parity disk 2). What do you think, coincidence? I was thinking of doing the following:
      1. Replace disk sdf, let the array rebuild, and then see how it runs (I have a spare).
      2. If that doesn't work, move disk sdf onto a different drive bay controlled by a different HBA. That disk has reported 4 errors on 'UDMA CRC Error count'; might it be the connectivity? (A quick way to watch that counter is sketched below.)
      3. If that doesn't work, replace the HBA sdf was originally connected to (I have a spare).
      4. Last resort: replace the motherboard.
      What do you think?
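A minimal sketch for watching that CRC counter, assuming smartmontools is available and the drive reports the standard UDMA_CRC_Error_Count attribute (attribute names vary by vendor; run as root and adjust the device node for your system):

```python
# Hypothetical sketch: pull the UDMA CRC error count for /dev/sdf with
# smartctl. A climbing CRC count usually implicates cabling/backplane
# rather than the disk itself.
import subprocess

out = subprocess.run(["smartctl", "-A", "/dev/sdf"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "UDMA_CRC_Error_Count" in line:
        # the raw value is the last column of the attribute row
        print("UDMA CRC errors (raw):", line.split()[-1])
```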
  18. @JorgeB - Another crash. It happened between:
      Feb 25 23:21:51 MediaServer autofan: Highest disk temp is 21C, adjusting fan speed from: 84 (32% @ 1062rpm) to: 54 (21% @ 704rpm)
      and
      Feb 26 01:01:23 MediaServer root: Delaying execution of fix common problems scan for 10 minutes
      Syslog attached: syslog-192.168.1.125.log
  19. @JorgeB - Cheers. I'm currently waiting for a new crash, so here's an older log from a few days back... There should be at least 2-3 crashes in there. The server is left on 24/7, so if there are any large jumps (2hrs+) between the date/time stamps, it'll be because it had become unresponsive and was rebooted (the sketch below finds these gaps automatically). syslog-192.168.1.125.log
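A minimal sketch for finding those gaps, assuming the classic "Feb 25 23:21:51 host ..." syslog prefix (syslog omits the year, so it's hard-coded here; the filename matches the attachment above):

```python
# Hypothetical sketch: scan the mirrored syslog for gaps of 2+ hours between
# consecutive timestamps, which on a 24/7 server mark a hang and reboot.
from datetime import datetime, timedelta

GAP = timedelta(hours=2)
YEAR = 2022  # syslog lines omit the year

prev = None
with open("syslog-192.168.1.125.log") as f:
    for line in f:
        try:
            ts = datetime.strptime(f"{YEAR} {line[:15]}", "%Y %b %d %H:%M:%S")
        except ValueError:
            continue  # skip lines without a leading timestamp
        if prev is not None and ts - prev > GAP:
            print(f"gap: {prev} -> {ts} (likely crash/reboot)")
        prev = ts
```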
  20. @JorgeB - Yeah I have: some older ones from a few days ago, capturing the before and after state for 2 or 3 crashes. Is there anything that I need to anonymize from the syslog output?
  21. Hi guys, having some issues for quite a while now with the server staying up; it's been happening for the last few months. The server stays on, all fans running, but shares aren't accessible, SSH doesn't connect, and IPMI KVM doesn't output. A reboot brings it all back up swiftly. My build is a Norco-esque 24-bay server running the following components:
      - Supermicro X9SCM-F (on the most recent BIOS)
      - Xeon E3-1240v2
      - EVGA 1300 G2 PSU (1300W)
      - 32 GiB DDR3 single-bit ECC
      - 3 x Dell H200 (flashed to LSI 9211-8i, firmware 20.00.07.00). HBA PCIe benchmarks: HBA1 = 5 GT/s width x8 (4GB/s max); HBA2 = 5 GT/s width x4 (4GB/s max) <- bottlenecked; HBA3 = 5 GT/s width x2 (4GB/s max) <- bottlenecked
      - 1 x Intel X540-AT2 10Gb NIC (PCIe benchmark: 5 GT/s, 32GB/s)
      - 22 x 2TB disks
      - 1 x Samsung 1TB SSD, 1 x Intel 2TB SSD, 1 x Micron 500GB
      Things I've checked:
      1. RAM is new from Kingston.
      2. PSU swapped out from a Corsair 750W to the EVGA 1300W.
      3. All Docker appdata references moved to cache from user.
      4. VM Manager disabled.
      5. All temps are fine for everything.
      6. No SMART disk failures on any of the array disks.
      7. Diagnostics don't seem to turn up anything.
      8. Syslog server mirrored to flash and to the array.
      9. Checked container sizes; all looks fine, but I have increased docker.img as it was getting a little full.
      10. CA Fix Common Problems finds no errors, just a few warnings about updates for containers that I'm pinning to a certain label.
      Things I believe it could be:
      1. Complications of running 3 x HBAs on this board. As you can see in the attached pic, 2 controllers are bottlenecked, but I'm more concerned that the different bandwidth between identical controllers could be an issue.
      2. Hardware failure, although every component is working and responds, so there's nothing to find.
      3. I have SMART failures on both the cache drive and the backup drive, but they look like early wear-related warning signs; I'm undecided whether they're causing issues.
      4. An Unraid nuance that I haven't yet found. Syslog turns up nothing that I can see.
      5. Sleep/power state related????
      I've included my diagnostics below and a screenshot of my HBAs benchmarked through the handy DiskSpeed container. If anyone can sanity-check the logs I'd be greatly appreciative. Does anyone have a solid working installation on the X9SCM board right now whose BIOS options they'd be happy to share? I understand detailing it all out may be painful, but if there's any chance of saving your BIOS config to a file so I can give your settings a go, that would be awesome; it would help me rule out BIOS options on the X9SCM. If the syslog doesn't contain anything too confidential and isn't included in the diagnostics.zip, I'm happy to upload it separately. mediaserver-diagnostics-20220225-1800.zip
  22. @Matt3ra - Any update on the crashing? Curious to hear your findings...