thestraycat

Everything posted by thestraycat

  1. Any chance we can add EXIFTOOL to Nerdtools? Nerdpack had it bundled in and I'm now super reliant on it... I've had to roll back for the time being just to get it back. Is there a better way/process to request packages to be re-added?
  2. Thanks, I thought it had been incorporated, but I just couldn't find any solid evidence when googling it! Cheers.
  3. Quick one.. I'm running 6.10.3 and have finally found the time to move from my ancient old OpenVPN-AS container over to WireGuard. However, all the resources recommend the Dynamix WireGuard plugin - but it seems that hasn't been updated for about 18 months or so... So I checked out linuxserver.io's image and found out it's been marked as deprecated.. What are you guys using?
  4. But that guide references the Dynamix WireGuard plugin as a prerequisite... "Prerequisites: You must be running Unraid 6.8+ with the Dynamix WireGuard plugin from Community Apps". Can someone confirm which maintainer they are using for WireGuard now? (Quick built-in check sketched below.)
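     A quick way to check whether WireGuard support is already present on the box before hunting for a plugin - a minimal sketch, assuming the kernel module and the wg userspace tool ship with your Unraid build (which may not be the case on older releases):

         modinfo wireguard | head -n 3   # kernel module available?
         wg show                         # wg tool installed / any tunnels configured?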
  5. Anyone got any experience with it?
  6. Hi guys, quick one.. I know I can recreate this, but I have a lot of containers - around 50 or so - and am upgrading my cache to a larger disk. Is there any real issue in: turning off Docker, copying the docker.img over to the new cache, and turning Docker back on? I don't really want to have to go through the whole add container > "my-templates" > add template process, as I have a lot of containers, some duplicates, and have previously installed a lot of other containers from different owners with identical names etc.. Anyone run into issues just copying docker.img across? (Rough sketch of the copy below.)
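     A minimal sketch of the copy itself, assuming the old pool is mounted at /mnt/cache, the new one at /mnt/cache_new, and docker.img lives in the default system share - the paths are placeholders to adjust for your own layout:

         # Stop the Docker service first (Settings > Docker > Enable Docker: No)
         # so nothing writes to the image while it is copied.
         cp --sparse=always /mnt/cache/system/docker/docker.img /mnt/cache_new/system/docker/docker.img
         # If the new pool takes over the same mount point, the image path in
         # Settings > Docker stays valid; otherwise update it there before re-enabling Docker.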
  7. @itimpi A really great way to summarize it. Thanks. @Frank1940 @JonathanM - Thanks also. If anyone has experience of replacing more than one disk at a time, I'd love to hear about it.
  8. @Frank1940 @JonathanM These are all good points, but I feel they've already been factored into my plan; I didn't go into detail in this post as I wanted to keep it focused on the risks and on Unraid's ability to handle replacing multiple disks at once. I'm looking for clarification that I can in fact replace 2 disks at once, and what the risks of doing so are.
     "Do you really need an additional 52TB of storage at this exact point in time?" >> Basically yes. My array is 91% full, and my longer plan (which I didn't post, to avoid a block-of-text question that got no replies) has always been to hold a disk or two back for emergency replacement.
     "The reason that I mention it is that a purchase of that many HDs at one time from a single source would mean that all of the disks would be coming from a single manufacturing lot." >> The disks are used, SMART tested as 100% healthy, with staggered data written and power-on time between them, from different batches and with different manufacture dates. That's about all the risk I could mitigate when purchasing them.
     "You could make an argument that you could be buying into a lot of superior quality." >> I feel I am, as these are battle tested with lower usage and with potential warranties to utilize.
     "Frank has a really good point. I would not add all the disks at once, rather a staggered strategy, where you replace the worst of the old disks as you actually need the free space." >> Yup, this has always been my plan: replace both parity disks with 6TB disks, then get my current array usage of about 36TB onto 6TB disks, add 2 more 6TB disks to increase the array's headroom, and save 1 to 2 back. Mainly for electricity savings, heat savings (I have a dense 24-bay 4U rack), not putting needless mileage on unused hard disks, and to have the security of future replacement disks on hand. Lastly, the 2TB disks all need removing, as they now need to be sold to cover the cost of the 6TB disks.
  9. @JonathanM - This is what I thought too. I take it I'd be vulnerable to losing any additional disk that failed during the 2-disk rebuild (and would have to put the old 2TB disks back to recover the data)? In theory it should take the same time to rebuild 2 disks as it would 1 disk, right? Can anyone that's done it vouch for the 2-disk approach working?
  10. I'm pretty sure I know the answer to this question but thought I'd get confirmation regardless. I have an array of 20 x 2TB data disks and 2 x 6TB parity disks. I plan on swapping out 13 of the 2TB disks for 6TB disks. Can I pull 2 disks at a time, as I'm running 2 disks of parity? Would the risk not be the same as someone running 1 disk of parity swapping out 1 disk at a time? I have an inkling that the answer may be: "if there were an issue during the rebuild you'd stand to potentially lose 2 disks of data rather than 1", or similar.
  11. @Squid Hey Squid, I'm currently still running the v1 backup and restore app and I really like it: when I need to recover a config file or an individual file, I can simply hop into the backup, go to the folder, and grab the file without having to restore the whole backup, which I believe is what the v2 app makes you do - at least it did when I originally moved over to it? You mentioned in the first post: "Due to some fundamental problems with XFS / BTRFS, the original version of Appdata Backup / Restore became unworkable and caused lockups for many users. Development has ceased on the original version, and is now switched over to this replacement." Can you explain what the issues with the v1 app were exactly and how they showed up? Did it ever crash Unraid for any users? I'm currently troubleshooting my array, and my only warning from the community Fix Common Problems plugin is that I still run the v1 plugin... I'd love to know more.
  12. @JorgeB I agree it looks unlikely. However, if you look at where the crashes happen in the log, they all happen at the same point: each time, the last tracked log entry is something communicating with an HBA or with a disk attached to that particular HBA. Or do you think that's just coincidence?
  13. @JorgeB - I'm leaning towards the same things myself. I have actually been running it with the VM Manager off for the last few days and still had a few crashes... Yesterday I turned off Docker overnight to see if it stayed up... it was off in the morning. I decided to run a memtest just for the hell of it, which ran for 8hrs and passed every test. One thing I did note from the log, though, is that it quite often seems to crash at:
     MediaServer emhttpd: spinning down /dev/sdf
     sdf is my parity disk 2 - what do you think? Coincidence? I was thinking of doing the following:
     1. Replace disk sdf, let the array rebuild, and see how it runs (I have a spare).
     2. If that doesn't work, move disk sdf onto a different drive bay controlled by a different HBA. That disk has reported 4 errors on 'UDMA CRC Error count' - might be the connectivity? What do you think? (Quick check sketched below.)
     3. If that doesn't work, replace the HBA sdf was originally connected to (I have a spare).
     4. Last resort: replace the motherboard.
     What do you think?
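     For what it's worth, a minimal way to keep an eye on that CRC counter on sdf while testing (assumes smartctl, which ships with Unraid; attribute 199 climbing usually points at the cable/backplane path rather than the disk itself):

         smartctl -A /dev/sdf | grep -i crc   # 199 UDMA_CRC_Error_Count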
  14. @JorgeB - Another crash. Happened between:
     Feb 25 23:21:51 MediaServer autofan: Highest disk temp is 21C, adjusting fan speed from: 84 (32% @ 1062rpm) to: 54 (21% @ 704rpm)
     and
     Feb 26 01:01:23 MediaServer root: Delaying execution of fix common problems scan for 10 minutes
     Syslog attached: syslog-192.168.1.125.log
  15. @JorgeB - Cheers. I'm currently waiting for a new crash, so here's the older log from a few days back... There should be at least 2-3 crashes in there. The server is left on 24/7, so if there are any large jumps (2hrs+) between the date/time stamps, it'll be because it had become unresponsive and was rebooted. syslog-192.168.1.125.log
  16. @JorgeB - Yeah I have - some older ones from a few days ago, and I captured the before and after state for 2 or 3 crashes. Is there anything I need to anonymize from the syslog output?
  17. Hi guys, I've been having issues for quite a while now with the server staying up. It's been happening for the last few months: the server stays on, all fans running, but shares aren't accessible, SSH doesn't connect and IPMI KVM doesn't output. A reboot brings it all back up swiftly. My build is a Norco-esque 24-bay server running the following components:
     Supermicro X9SCM-F (on the most recent BIOS)
     Xeon E3-1240v2
     EVGA 1300 G2 PSU (1300W)
     32 GiB DDR3 single-bit ECC
     3 x Dell H200 (flashed to LSI 9211-8i, firmware 20.00.07.00)
     HBA PCIe benchmarks: HBA1 = 5GT/s width x8 (4GB/s max); HBA2 = 5GT/s width x4 (4GB/s max) <- bottlenecked; HBA3 = 5GT/s width x2 (4GB/s max) <- bottlenecked
     1 x Intel X540-AT2 10GB NIC (PCIe benchmark: 5GT/s, 32GB/s)
     22 x 2TB disks
     1 x Samsung 1TB SSD
     1 x Intel 2TB SSD
     1 x Micron 500GB
     Things I've checked:
     1. RAM is new from Kingston.
     2. PSU swapped out from a Corsair 750W to the EVGA 1300W.
     3. All docker appdata references moved from user to cache.
     4. VM Manager disabled.
     5. All temps are fine for everything.
     6. No SMART disk failures on any of the array disks.
     7. Diagnostics don't seem to turn up anything.
     8. Syslog server mirrored to flash and to the array.
     9. Checked container sizes - all looks fine, but I have increased docker.img as it was getting a little full.
     10. CA Fix Common Problems finds no errors and just a few warnings about updates for containers that I'm keeping on a certain label.
     Things I believe it could be:
     1. Complications of running 3 x HBAs on this board. As you can see in the attached pic, 2 controllers are bottlenecked, but I'm more concerned about the different bandwidth between identical controllers being an issue.
     2. Hardware failure - although every component is working and responds, so there's nothing to find.
     3. I have SMART warnings on both the cache drive and the backup drive, but they look like early-sign wear-related issues; undecided whether they're causing problems.
     4. An Unraid nuance that I haven't yet found. Syslog turns up nothing that I can see.
     5. Sleep/power state related????
     I've included my diagnostics below and a screenshot of my HBAs benchmarked through the handy speeddisk container. If anyone can sanity check the logs I'd be greatly appreciative. Does anyone have a solid working installation on the X9SCM board right now that they'd be happy to share the BIOS options for? I understand detailing it all out may be painful, but if there's any chance of saving your BIOS config to file so I can give your settings a go, that would be awesome - it just helps me rule out BIOS options for the X9SCM... If the syslog doesn't contain anything too confidential and isn't included in the diagnostics.zip, I'm happy to upload it separately? (A quick lane-width check is sketched below.) mediaserver-diagnostics-20220225-1800.zip
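     Side note on the HBA point - a minimal way to confirm the negotiated lane widths from the CLI, independent of the benchmark container (the PCI address below is a placeholder; list the real ones with the first command):

         lspci | grep -i sas                              # find the HBA addresses
         lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'   # LnkCap = what the card supports, LnkSta = what was actually negotiated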
  18. @Matt3ra - Any update on the crashing? Curious to hear your findings...
  19. Can anyone confirm whether the Seagate 6TB SAS drives (ST6000NM0034) are currently playing nicely (spin up/down correctly) with any of the LSI 2008 controllers? Anyone's confirmation either way would be amazing!
  20. Can anyone help? Can I add a line to my Unraid config to issue the command "PWM FAN = 40" on startup? That would at least be a workaround.... (Rough sketch of what I mean below.)
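     A minimal sketch of that kind of workaround, assuming the fan sits on a standard hwmon PWM channel - the hwmon/pwm numbers are placeholders to confirm against your own sensors output, and if "40" means 40% that's roughly 102 on the kernel's 0-255 scale. Something like this could go in /boot/config/go so it runs at boot:

         ls /sys/class/hwmon/hwmon*/pwm*                # find the right controller/channel first
         echo 1 > /sys/class/hwmon/hwmon2/pwm2_enable   # take manual control of the channel
         echo 102 > /sys/class/hwmon/hwmon2/pwm2        # ~40% duty cycle (0-255 scale)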
  21. I've got everything set up as I want it with autofan and it works well for me on my Supermicro X9SCM-F. For what it matters, I also have the following plugins: IPMI (configured with fan control off) and System Temp (which sees my sensors and controllers). However, every time I reboot, I have to manually change out and put back the 'PWM FAN' value in the plugin to force it to talk to the controller and manage the speed. My BIOS fan control is set to 'FULL', as I've read that the BIOS doesn't fight for PWM control if it's set like this. If I just reboot Unraid and log in, the array fans (which I manage with autofan) will be running at 100% until I do the disable/re-enable/detect process. Is it likely that the plugin needs to be forced to refresh its values after a fresh reboot on the X9 platform so that it takes PWM control from the BIOS? If so, could this be added? Secondly, am I right in thinking the plugin doesn't display the highest disk temp in the Unraid footer? My autofan plugin simply displays CPU (temp) / Mainboard (temp) / Array Fan (%), but the footer doesn't tell me what the disk temp is! As this plugin adjusts fans based on disk temp, would it not make sense to display the disk temp? I'd like to get to the bottom of what I need to do to avoid the re-enable/re-detect on every reboot before the plugin takes control. I've tried uninstalling and reinstalling and manually clearing out all the old .cfg files for the plugin, but nothing has worked. Cheers for the hard work - this plugin was definitely needed. Hope someone can shed some light.
  22. Hi guys - quick one: can anyone confirm that they have this plugin changing fan speeds based on HDD temp on the Supermicro X9SCM-F? I know there have been a lot of questions around the Supermicro X9 boards and I can't find clarification in this 132-page thread! lol. If so, can someone detail what fan header they're using for the CPU and disk array fans, and what fan mode (if any) they are running in the BIOS? It would be greatly appreciated. (Fan-header listing sketch below.)
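     In case it helps anyone compare setups, a minimal way to see which headers the board actually reports and at what speed (assumes ipmitool is available, e.g. via the IPMI plugin; on Supermicro boards the headers typically show up as FAN1-FAN4 and FANA):

         ipmitool sdr type Fan    # list fan sensors and their current RPM per header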
  23. @uldise - I hear what you're saying with the Noctuas, but they seem to be more in the PC/gaming bracket in terms of performance. I was comparing Nidec/Delta fans in the 80/120mm sizes that consume around 10 times as much power but have much larger static pressure figures. I'm wondering whether, aerodynamically, once you get into server-grade fans, 80mm starts becoming a better compromise of back pressure vs noise vs performance. @aburgesser - Can you give me an example of server-grade fans where the 120mm fan outperforms the 80mm in the same price range or product line?
  24. Hi guys, I'm replacing the midplane fans on my 24-bay "X-Case 424S", which is a 24-bay Norco-esque copy.... I've noticed that 80mm fans are generally better at static pressure than 120mm fans, so I was thinking of putting in 80mm fans, as the case runs hot. However, the very top and very bottom disk bays won't be directly in front of the 80mm fans as they're smaller. I've also noticed that newer 24-bay enclosures come with 80mm as standard - do you think it'll be more performant? From what I can see, because 24 drives take up ALL the space at the front of any 24-bay 4U server, in theory it doesn't seem to be any less efficient than buying a new case and running the stock 80mm fans. Opinions?
  25. I've noticed that 80mm fans always seem to have better static pressure than 120mm fans... Might explain why it looks like Supermicro, for example, have moved their most recent 4U 24-bay chassis to a 3 x 80mm midplane + 2 x 80mm rear exhaust setup over a 120mm midplane... Thoughts?