TDD

Everything posted by TDD

  1. Roger that. I'm considering moving my dockers to their own IPs across the board rather than having them all bridged to the Unraid IP and ports. It should be as simple as moving each to br0 and manually specifying an IP that isn't taken anywhere else on my LAN. This works for Plex, which I have moved. But for dockers that rely on inter-docker chatter, like Sonarr, I cannot get it to see any of its targets (qBittorrent, Jackett, etc.), even when they are on the same br0 with their own IPs and Sonarr is correctly pointed at the new dedicated IPs. I have Docker set to allow 'host access to custom networks' just in case. Of note, these targets are dockers with internal VPNs, which I suspect is what's blocking things. What is magical about Docker's default bridge mode (which works) versus a custom IP on br0? Do I need to make another custom bridge (br1) in my subnet and have all the dockers ride that? Docker settings show the subnet as 192.168.0.0/16 and my correct gateway for my LAN (192.168.1.1 on my 192.168.1.x network). Would I then need to ensure the routing table reflects br1 on the 192.168.1.x subnet? This seems like an adventure. Kev.
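     (For reference, a rough sketch of what such a custom network looks like at the Docker CLI level, assuming the 192.168.1.x subnet and br0 parent described above; the network name 'br1' and the example container name and IP are placeholders, and Unraid normally creates this network for you from Settings > Docker rather than by hand:)
        # create a macvlan-style network on top of br0 using the LAN's subnet and gateway
        docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 br1
        # attach a container to it with a fixed LAN address (IP is an example only)
        docker run -d --name=sonarr --network=br1 --ip=192.168.1.50 linuxserver/sonarr
     One macvlan caveat worth keeping in mind: the Unraid host itself cannot reach containers on such a network unless 'host access to custom networks' is enabled, which is a separate matter from container-to-container traffic on the same network.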
  2. Has this been fixed? I have the same issue as I move dockers to their own IPs. Legacy forwards still show. I've moved Sonarr to port 80 internally but cannot purge this display. Note that a 'docker ps' shows it clear as it should. Kev.
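     (For anyone wanting to double-check what Docker itself has actually published versus what the web UI displays, something along these lines works; 'sonarr' is just an example container name:)
        docker ps --format '{{.Names}}: {{.Ports}}'   # published ports per running container
        docker port sonarr                            # mappings for one specific container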
  3. It may - or it may not. Depends on firmware specific to each model. The good thing is the SeaChest modifications are applicable and should be able to handle any quirks. Kev.
  4. The full tweak still allows graceful spin-downs, so you give up nothing. Might even save a watt or two :-). Kev.
  5. I want to add that my solo 8TB drive, my parity, does spin down and up as needed and is not always spinning. This fix does not affect any requests to go idle. Kev.
  6. My 8TB Ironwolf was the sole Seagate, and it was the parity drive that errored out. It all comes down strictly to how idle the drive is and the spin-ups after that. Kev.
  7. There very well could be edge cases with other Ironwolf drives, but it is assuredly an issue with the ST8000VN004. I would not bet on a timely firmware update for the drive itself, if one ever comes. The two changes make the drive more aggressive with its spin-up and readiness, to compensate for the driver timing out while waiting for the drive's ready state. You have nothing to lose by making these changes as they are reversible; the amount of power saving lost is negligible IMHO and the benefits of an upgraded Unraid are worth it. Try and see! Kev.
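     (If the tweaks ever need to be backed out, the counterpart commands should look roughly like this, using the same tool-name and /dev/sgYY placeholders as in post 17; check each tool's --help first, since the accepted values for --lowCurrentSpinup vary between SeaChest releases:)
        SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature enable
        SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup enable   # newer builds may want 'low' here instead of 'enable'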
  8. I only know of the exact 8TB unit in question that requires this tweak. I presented Seagate with all the info on the issue but got meaningless responses back. Was hoping to chat with the hardware/firmware guys. We can only hope the intel makes it to where it needs to be. For note, my testing was done on both my LSI controllers and the same outcome was found prior to the fix:
     [1000:0064] 01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)
     [1000:0072] 02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
     Kev.
  9. Linux guy here. Use the Ubuntu ones. If they don't work for unknown reasons, I have an archive of the older tool set. Kev.
  10. Thank you for the work bringing this together. There is an easy way to target just the disks you want to modify:
      SeaChest_PowerControl_1100_11923_64 -s --onlySeagate
      I believe most of the tools actually allow this -s switch. See screenshot. This allows you to skip the 'map' part and makes this easier :-)! Kev.
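     (Putting the scan together with the actual tweak, the flow is roughly: list the Seagate drives and their handles, then run the power-control change against each handle from that list. The /dev/sgYY handle below is a placeholder:)
        SeaChest_PowerControl_1100_11923_64 -s --onlySeagate                    # list only Seagate drives and their /dev/sg handles
        SeaChest_PowerControl_1100_11923_64 -d /dev/sgYY --EPCfeature disable   # apply the tweak to one of the listed handles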
  11. Try the EPC disable/low-current-spinup disable per my posts. They are reversible if nothing improves after a reboot. I've had no issues since. Kev.
  12. You won't notice much of a difference, if any, from disabling EPC and low-current spin-up. Try just the EPC if you are so inclined. Drives will still spin down. It's not like your power bill will double. All we are doing here is making the drives far less aggressive with their sleep modes so the controller doesn't freak out. I'd rather have this fix than the alternative of drives dropping. I believe it to be an issue in a recent merge into the combined mpt3sas driver in the kernel; it was all fine under 4.19. Disable now and await any non-firmware fixes later. You can then re-enable the aggressive power saving if you wish. I have had zero issues since this fix across all of my LSI-based controllers. Kev.
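     (To see where a drive's EPC timers actually stand before and after the change, the same power-control tool has a read-only option along these lines; the tool name/version and the /dev/sgYY handle are placeholders as before, and the exact flag is worth confirming with --help:)
        SeaChest_PowerControl_xxxx -d /dev/sgYY --showEPCSettings   # dump the current EPC power-condition timers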
  13. See my earlier post on how I fixed this. I have had no issues since. Kev.
  14. My 8TB is a very recent manufacture (a VN004) and has the SC60 firmware. Is yours the slightly older VN0022? That would explain the SC61. I haven't seen any update for the VN004 as of yet.
  15. I did check for revised firmware past my SC60. Nothing available. This particular issue seems to be limited to the older 10TB drives. If it is a bug in the 8TBs, they are certainly dragging their feet on a fix.
  16. Any issues with the Ironwolf/LSI combo are software *only*, since the drive works fine under the old kernel/driver. From my reading and testing, I'm hopeful that disabling some of the Ironwolf's aggressive power-saving modes may cover up whatever faults are currently in the driver until they are addressed. No matter what, kudos to Seagate (all my other drives are WD, BTW!) for having tools and documentation to tweak the settings. WD could learn a lot here - but admittedly my WD drives have always 'just worked' without any tweaks...
  17. Upgraded from last stable as I had a spin-up problem with my Seagate Ironwolf parity drive under RC2. I see the same quirk again under the new kernel - this time I have attached diagnostics. From what I can tell, it appears once the mover kicks in and forces the spin-up of the parity. It tripped up once, as you can see from the logs, but came up and wrote fine. I've done repeated manual spin-downs of the parity, writing into the array via the cache, and forcing a move, hence bringing up the parity again. No errors as of yet. This is a new drive under completely the same hardware setup as 6.8.3, so it is a software/timing issue buried deep. If this continues, I will move the parity off my 2116 controller (16 port) and over to my 2008 (8 port) to see if that relieves any issues. Past that, perhaps over to the motherboard connector to isolate the issue. FYI. Kev.
      Update: I've disabled the cache on all shares to force spin-ups much faster. Just had another reset on spin-up. I'll move to the next controller now.
      Update 2: The drive dropped outright on my 2008 controller and Unraid dropped the parity and invalidated it. I'm going to replace the Seagate with an 8TB WD unit and rebuild. Definitely an issue somewhere with timings across the two controllers.
      Update 3: After some offline testing with the unit and some digging, it looks like the Ironwolf may be set too aggressively at the factory for a lazy spin-up. This behavior must be tripping up some recent changes in the mpt2/3sas driver. Seagate does have advanced tools ("SeaChest") for setting internal parameters. I set the drive to disable EPC (extended power conditions), which has a few stages of timers before various power-down states. For good measure, I also disabled the low-current spin-up to ensure a quick start. Initial tests didn't flag any opcodes in the log. I'm rebuilding with it and testing. For note, the commands are:
        SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
        SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable
      Update 4: I've submitted all the info to Seagate tech support, citing the issue with the 10TB and my suspicions. Deeper testing today once my rebuild is done to see if the fixes clear the issue.
      Update 5: Full parity rebuild with the Ironwolf. No issues. I've done repeated manual drive power-downs and spin-ups to force the kernel errors across both controllers above and I have not seen any errors. Looks like the two tweaks above are holding. Kev.
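     (For anyone wanting to reproduce the manual spin-down/spin-up test described above without waiting on the mover, one way is plain hdparm plus a direct read, then watching the syslog for controller resets. The /dev/sdX device is a placeholder for the drive under test, and this is just one way to force the cycle, not the exact procedure used above:)
        hdparm -y /dev/sdX                                        # put the drive into standby immediately
        hdparm -C /dev/sdX                                        # confirm it reports 'standby'
        dd if=/dev/sdX of=/dev/null bs=1M count=1 iflag=direct    # force a spin-up with a direct read
        tail -n 50 /var/log/syslog | grep -iE 'mpt[23]sas|reset'  # look for resets/timeouts around the spin-up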
  18. I took RC2 for a test run and all was well until the mpt2sas driver had an issue with my Seagate Ironwolf 8TB set up as parity (ST8000VN004-2M2101). No issues for weeks under 6.8.3. I didn't manage to capture the logs (sorry!) for a myriad of reasons. Looks like the driver <-> drive didn't like something and the drive dropped out, leading to read errors. I didn't see any errors on my other drives, all WD 40E* and 80E* units. Not sure if there is a quirk here with the drive. A quick SMART test shows fine, so I'm blaming the driver in 5.10.1. I'm rebuilding the parity now under 6.8.3 with the 4.19 kernel to give a solid test of things. FYI. TY! Kev.
  19. Well - tested it myself by initiating a manual backup. It does not purge old backups until the running one is complete - as expected and safe. Is it a big ask to have an option where the backups are purged according to the plan prior to the backup? TY. Kev.
  20. Wonderful plugin. TY! Question - with any backup about to proceed, are any out of date .tar backups deleted prior to the commencement of the new backup? I ask as my (small!) 1TB drive that holds the current .tar would not hold another in progress and would of course run out of disk space and error out. TY! Kev.
  21. Hi all. I also upgraded without incident. Looks superb... thank you to the whole team! I do want to pass along a GUI issue: I no longer see in the dashboard which dockers are due for an update. I seem to recall they had an indicator stating as much. A right-click on one that is due for an update does show the option, so this is just a cosmetic fix. TY. Kev.
  22. Backups are another story - let's just say that in this instance I will be revisiting my backup strategy once I have everything back together and at least protected via parity. I've never tried bringing an outside disk, say an XFS one, into the array without zeroing it. Is it even possible to make the system accept it as-is? As long as you rebuild parity later, what would it really matter what was/is on the disk? Kev.
  23. Hi all. For a myriad of reasons, I needed to do some file recovery. As I am moving recovered files off a drive which has been removed from the array (hence UR is down) and onto secondary storage, am I OK to then move those files back onto the same array drive? I don't believe there is anything special about the XFS that UR uses. I know this invalidates parity, but I will worry about regenerating that later. I could network copy, but that will be slower. I could also attach the drive and copy via Unassigned Devices, but then it will be a bit slower as the parity drive writes. My thought is, since I am regenerating anyway, why do it twice? I really just want to bring all the XFS disks with their new data back into the array, create a new disk/array config, let the drives settle, then rebuild parity in one shot. Is this workable? Kev.
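     (A rough sketch of that copy-back step with the array stopped, assuming the usual single XFS data partition on an Unraid data disk; the device name and source path are placeholders:)
        mkdir -p /mnt/recovered
        mount -t xfs /dev/sdX1 /mnt/recovered                    # sdX1 = the data partition of the array disk, mounted outside the array
        rsync -avh /path/to/recovered/files/ /mnt/recovered/
        umount /mnt/recovered
     Parity is of course invalid after this, which matches the plan above of doing a new config and a single parity rebuild at the end.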
  24. This makes sense. I have just mapped my two folders into Docker and re-pointed all my TV libraries to the mount points inside the root of Sonarr. I think this was some kind of limitation from way back when I first started Sonarr, and I never bothered to change it or even think of this. This should fix it nicely... TY for making me see the obvious! Fingers crossed it will work now! Kev.
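     (In Docker terms, that mapping amounts to something like the following; the host paths, container paths, and image are examples rather than the exact setup above, and on Unraid these are just the Host Path / Container Path fields in the container template:)
        docker run -d --name=sonarr \
          -v /mnt/user/tv:/tv \
          -v /mnt/user/downloads:/downloads \
          linuxserver/sonarr
     Inside Sonarr, the library root folder is then pointed at /tv and the download client path at /downloads, so both sides agree on the same in-container paths.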