TDD

Everything posted by TDD

  1. I did check for revised firmware past my SC60. Nothing available. This particular issue seems to be limited to the older 10TB drives. If it is a bug in the 8TBs, they are certainly dragging their feet on a fix.
  2. Any issues with the Ironwolf/LSI combo are software *only*, since it works fine under the old kernel/driver. From my reading and testing, I'm hopeful that disabling some of the Ironwolf's aggressive power-saving modes may cover up any faults currently in the driver until they are addressed. No matter what, kudos to Seagate (I have all WD drives, BTW!) for having tools and documentation to tweak the settings. WD could learn a lot here - but admittedly my WD drives have always 'just worked' without any tweaks...
  3. Upgraded from last stable as I had a spin-up problem with my Seagate Ironwolf parity drive under RC2. I see the same quirk again under the new kernel - this time I have attached diagnostics. From what I can tell, it appears once the mover kicks in and forces the spin-up of the parity. It tripped up once, as you can see from the logs, but came up and wrote fine. I've done repeated manual spin-downs of the parity, writing into the array via the cache, and forcing a move, hence bringing up the parity again. No errors as of yet. This is a new drive under exactly the same hardware setup as 6.8.3, so it is a software/timing issue buried deep. If this continues, I will move the parity off my 2116 controller (16-port) and over to my 2008 (8-port) to see if that relieves any issues. Past that, perhaps over to the motherboard connector to isolate the issue. FYI. Kev.

     Update: I've disabled the cache on all shares to force spin-ups much faster. Just had another reset on spin-up. I'll move to the next controller now.

     Update 2: The drive dropped outright on my 2008 controller and Unraid dropped the parity and invalidated it. I'm going to replace the Seagate with an 8TB WD unit and rebuild. Definitely an issue somewhere with timings between the two controllers.

     Update 3: After some testing offline with the unit and digging, it looks like the Ironwolf may be set too aggressively at the factory for a lazy spin-up. This behavior must be tripping up some recent changes in the mpt2/3sas driver. Seagate does have advanced tools for setting internal parameters ("SeaChest"). I set the drive to disable EPC (extended power conditions), which has a few stages of timers before various power-down states. For good measure, I also disabled the low-current spin-up to ensure a quick start. Initial tests didn't flag any opcodes in the log. I'm rebuilding with it and testing. For note, the commands are:

        SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
        SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable

     Update 4: I've submitted all info to Seagate tech support, citing the issue with the 10TB and my suspicions. Deeper testing today once my rebuild is done to see if the fixes clear the issue.

     Update 5: Full parity rebuild with the Ironwolf. No issues. I've done repeated manual drive power-downs and spin-ups to force the kernel errors across both controllers above, and I have not seen any errors. Looks like the two tweaks above are holding (full sequence sketched below). Kev.
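     Edit: for anyone replicating this, the rough end-to-end sequence is below. The _xxxx suffix depends on your SeaChest build, sgYY is your drive's SCSI-generic handle, and the --scan and --showEPCSettings steps are from my reading of the SeaChest docs, so verify them against your version:

        # find the right /dev/sg handle for the Ironwolf
        SeaChest_Basics_xxxx --scan
        # show the current EPC timers before touching anything
        SeaChest_PowerControl_xxxx -d /dev/sgYY --showEPCSettings
        # disable the staged EPC power-down states
        SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
        # ensure a full-current, quick spin-up
        SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable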
  4. I took RC2 for a test run and all was well until the mpt2sas driver had an issue with my Seagate Ironwolf 8TB set up as parity (ST8000VN004-2M2101). No issues for weeks under 6.8.3. I didn't manage to capture the logs (sorry!) for a myriad of reasons. It looks like the driver <-> drive didn't like something and the drive dropped out, leading to read errors. I didn't see any errors on my other drives, all WD 40E* and 80E* units. Not sure if there is a quirk here with the drive. A quick SMART test shows fine, so I'm blaming the driver in kernel 5.10.1. I'm rebuilding the parity now under 6.8.3 with the 4.19 kernel to give things a solid test. FYI. TY! Kev.
  5. Well - tested it myself by initiating a manual backup. It does not purge old backups until the running one is complete - as expected and safe. Is it a big ask to have an option where the backups are purged according to the plan prior to the backup? TY. Kev.
  6. Wonderful plugin. TY! Question - with any backup about to proceed, are any out of date .tar backups deleted prior to the commencement of the new backup? I ask as my (small!) 1TB drive that holds the current .tar would not hold another in progress and would of course run out of disk space and error out. TY! Kev.
  7. Hi All. I also upgraded without incident. Looks superb...thank you to the whole team! I do want to pass along a GUI issue: I no longer see on the dashboard which dockers are due for an update. I seem to recall there was an indicator stating as much. A right-click on one that is due for an update does show the option, so this is just a cosmetic fix. TY. Kev.
  8. Backups are another story - let's just say that I will be revisiting my backup strategy once I have everything back together and at least disk-protected via parity. I've never tried bringing an outside disk, say an XFS one, into the array without zeroing it. Is it even possible to make the system accept it as-is? As long as you rebuild parity later, what would it really matter what is on the disk? Kev.
  9. Hi all. For a myriad of reasons, I needed to do some file recovery. As I am moving recovered files off a drive which has been removed from the array (hence UR is down) and onto secondary storage, am I OK to then move those files back onto the same array drive? I don't believe there is anything special about the XFS that UR uses. I know this invalidates parity, but I will worry about regenerating that later. I could network copy, but that will be slower. I could also attach the drive and copy via Unassigned Devices, but then it will be a bit slower as the parity drive writes. My thought is, since I am regenerating anyway, why do it twice? I really just want to bring all the XFS disks with their new data back into the array, create a new disk/array config, have the drives settle, then rebuild parity in one shot. Is this workable? Kev.
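     Edit: for reference, the copy itself is nothing exotic - just a metadata-preserving rsync from a live Linux session. Devices and paths here are made up for illustration:

        # mount the pulled array disk and a scratch disk (example devices)
        mount -t xfs /dev/sdX1 /mnt/arraydisk
        mount -t xfs /dev/sdY1 /mnt/scratch
        # copy preserving permissions, ownership, and timestamps
        rsync -avh --progress /mnt/arraydisk/recovered/ /mnt/scratch/recovered/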
  10. This makes sense. I have just mapped my two folders into Docker and re-pointed all my TV libraries to the mount points inside the root of Sonarr. I think now that this was some kind of limitation from way back when I first started Sonarr and I never bothered to change it or even think of this. This should fix it nicely...TY for making me see the obvious! Fingers crossed it will work now! Kev.
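      Edit: for note, the mappings boil down to passing the user shares straight into the container - roughly like this (image name and container-side paths are just my illustration; match them to your own setup):

        docker run -d --name=sonarr \
          -v /mnt/user/appdata/sonarr:/config \
          -v /mnt/user/TV:/tv \
          -v "/mnt/user/TV (Kids):/tv-kids" \
          linuxserver/sonarr

      Inside Sonarr, the library root folders then point at /tv and /tv-kids rather than anything host-specific.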
  11. I am. Here are my specific Docker maps: all my internal Sonarr TV paths are then accessed via /mnt/user/TV, /mnt/user/TV (Kids), etc., which are the unRaid shares across my array. Perhaps I am not getting why the rules of fill and disk choice would not be followed in this case? All other accesses to the same shares follow the rules. Copying anything (like with MC) into /mnt/user/TV follows the rules...?! Kev.
  12. I am at a loss to fix this. I have now modified my system so that my main array (disks 1-8) is exclusively for data. The disks are all set to be included by every share and excluded by none. My Docker and various appdata now reside on a cache disk. The cache shares are explicitly set to ONLY use the cache disk. The array shares are explicitly set to NOT use the cache disk. Somehow, when Sonarr imports a file, it transfers it out to the TV share in my array (paths are set as /mnt/user/TV), but it ends up on the cache disk, with the same folder structure, despite the rules. This is the only docker/app that seems to do this. It seems to ignore the cache directives and just follow the share's high-water rule, hence it ends up on my freest disk? Any insight here? Kev.
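      Edit: for reference, my understanding is that these directives land in the share .cfg files on the flash, roughly like this (key names from memory - verify against your own files before editing):

        # /boot/config/shares/TV.cfg - array share, never the cache
        shareUseCache="no"

        # /boot/config/shares/appdata.cfg - cache only
        shareUseCache="only"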
  13. An FYI that this bug is still in play. Somehow my Sonarr still gets past the include/exclude and drops files onto the disk explicitly set as excluded - presumably preferred overall because of the high-water setting?? Kev.
  14. Well, no combination of include/exclude makes it work. This is a bug and somehow unique to this situation. Even shelling into the server and copying via MC respects the rules. This is a Sonarr thing, as mentioned above. My share setup is attached for review. Seems sane to me! Kev.
  15. Yes - I have tried explicitly setting exclude to disk 1 and setting include to all the others to really force the issue. I have moved the disk to another slot too, to ensure it isn't a slot issue (ie: disk 1 vs 8 or something). From my understanding, it is not necessary to explicitly complete both the includes and excludes for the same share. I will still fuss with this, but it is mysterious. All other copies look to respect things just fine. Kev.
  16. Well, this certainly seems to defeat the point of the include/exclude parameters? It still doesn't explain why an internal move/copy via Sonarr ends up on an excluded disk when an external move/copy works fine. I can take the files that should NOT be on disk 1, place them into a different directory on the same disk, shell in, and move them with MC back to the /user/TV share, and they respect the exclude and end up anywhere but disk 1. Is this a unique Sonarr thing that is bypassing the rules? Kev.
  17. I am at a loss other than to believe it is a bug somehow. I tried a new config and moved disk 1 to disk 9, changing my shares accordingly so the TV share excluded disk 9 - just to remove the chance that this was tied to disk 1. The same thing arose - files somehow dropped onto disk 9 in a TV folder. I am manually moving the files being dropped onto disk 1 to another disk, then copying them back to the user share, to work around this issue. I am hopeful that somehow it gets "fixed" with 6.4 coming up.
  18. Hi all. This one I cannot figure out. With 10 disks, I have a "TV" library set to span all the disks *except* disk 1 - hence disk 2 onward is open game. The share in question is appropriately set to exclude disk 1 and include the rest. My understanding is that this alone should force any copies into the share to avoid disk 1. This *does* work when I copy files to the share myself. Where it fails is when Sonarr picks up a show and drops it into the TV library: said show suddenly appears on disk 1. For note, the workflow is that NZBGet fetches a show and drops it into a folder on disk 1. Sonarr sees this and should pick it up and drop it into TV, which would bypass disk 1, but it doesn't. The TV share is set to high-water with a split level of the first two directories (a sketch of the share config is below). FYI, my shows are contained within sub-folders of TV (ie: TV/Family Guy/SXX...). There are no explicit references to "/mnt/disk1" or anything like that; everything is via the "/mnt/user/TV" path. Is this a bug with paths deeper than one directory level? Kev.
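      Edit: for completeness, my understanding of how that share is stored on the flash (key names from memory - verify against your own .cfg before relying on this):

        # /boot/config/shares/TV.cfg
        shareAllocator="highwater"   # high-water fill method
        shareSplitLevel="2"          # split only the first two directory levels
        shareInclude=""              # empty = all disks eligible...
        shareExclude="disk1"         # ...except disk 1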
  19. Hi all...again. When setting up a share and the included-disk option, is there any way (perhaps in the future) for the share to "follow" the disk in the system (say one reorders the drives and starts a new config) rather than being tied to the physical disk in that slot? I imagine for safety reasons the current behavior at least ensures some kind of share would appear, but moving disks around forces me to go through all the shares and vet them again. Kev.
  20. VG - I thought so. I assume there is no logic in place to cut off the rebuild past the old drive's end? Likely more work than it's worth, I suspect. Kev.
  21. Hi All. Just curious and want to know... I pulled an old 1TB drive and popped in a new 4TB to replace it. It is happily rebuilding the drive. My question: my system is currently bound by a 4TB parity drive. All other drives are a mix of 1, 2, and 4TB models. Now that I've pulled the 1TB, just what exactly is being written to the remaining 75% of the new drive? The original 1TB's worth is already written, and the system just parity-stripes across the drives, so is it just zeroing the remainder? It is not like there is new data being placed there, nor is this a 4TB drive replacing another 4TB drive. The process was started on a precleared disk as well. Kev.
  22. Hi all. After some exciting moments with my old flash drive failing and being thankful for a backup (just this morning!) I am back up and running. Having a look at all the files in the drive, I see many legacy entries in the /shares folder, referring to shares I no longer have. Is there a way to delete them all and have the system recreate them? Are these the *only* definitions for the shares (I believe so...). Does one just have to compare the active shares in the UI against this list and delete appropriately? Thx. Kev.
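      Edit: if it helps anyone else, the definitions live as one flat .cfg per share on the flash (path from memory - verify before deleting anything):

        ls /boot/config/shares/
        # compare against the Shares tab in the UI, then remove stale entries, e.g.:
        rm "/boot/config/shares/OldShare.cfg"   # OldShare is a made-up example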
  23. Hi Squid... You jogged something with your comment. I tried all the browsers with ad-blockers on/off on my PC...same thing. Tried from my cell phone and the data appeared. A bit of digging revealed a proxy that had somehow been turned on on my PC. It must have come on as part of a spyware/malware scanner that I have. Disabled it and everything is fine again...:-) Thx! Kev.
  24. Hi all. Suddenly, the following items have stopped working:
      - drive statistics in the dashboard
      - system status in the dashboard
      - array devices on the main page
      - array operation on the main page (in fact, all data that should populate on the main page)
      - confirmations of any options that are changed (they take, but I need to refresh the GUI)
      - all disk and system stats (blank)
      Shares are OK and all disks/shares are accessible. Dockers are working. I can't pull a diagnostics either - the save-file requester never pops up. I have at least copied a syslog pulled from the viewer. It is almost like the subsystem is getting hung up on some kind of data fetch in the background and blocking any lists from populating. I even went back to a backup of my entire flash config from yesterday and from a few days ago...same thing. Even assuming a drive is wonky somehow, why wouldn't the data at least show? Thinking it was perhaps a plugin misbehaving, I uninstalled a few that updated in the last day or so...nothing there either. ?? Kev. syslog.txt
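      Edit: since the browser save prompt never appears, a diagnostics zip can apparently still be pulled from a console/SSH session (command as I understand it - check your build):

        # writes a timestamped zip to /boot/logs on the flash
        diagnostics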