Wolfe


Everything posted by Wolfe

  1. Should I just ignore the fact that appdata and system have unprotected data and assume that everything else that can be moved off the cache is being moved? What can I try next to work on the Time Machine problem?
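     For my own reference, a rough way to see what is actually still sitting on the cache for those shares - this is a sketch that assumes the standard single-pool layout with the cache mounted at /mnt/cache; adjust the paths if yours differ:

     du -sh /mnt/cache/*                                            # space still on the cache, per share
     find /mnt/cache/appdata /mnt/cache/system -type f | wc -l      # file count for the two "unprotected" shares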
  2. I just tried adding a folder of photos to my Media share and Mover properly moved them from the cache to Media. Further evidence that the problem is limited to appdata and system.
  3. The share dropping issue in the OP appears to have been resolved, as noted in my post last night. The two current issues (Mover and Time Machine) may be related to issues with shares.

     There are 2 Time Machine shares, neither of which is working now but both of which were working at the time of the OP, but the duration they would run was getting shorter as the share dropping issue got worse. xfs_repair fixed the share dropping but the Time Machine backups still aren't working at all. The shares involved are the following:

     TimeMachineJWY-UnRaid
     TimeMachineSAC-UnRaid

     Both were set up to use the cache. As an experiment, I changed the TimeMachineJWY share to no longer use the cache. It still can't get past "preparing files", suggesting that the cache and mover aren't involved in the Time Machine problem. The MacBook using the TimeMachineSAC share does write some data to the cache despite also failing at the "preparing files" stage. Running Mover manually does remove the "some or all files unprotected" flag from that share. The "some or all files unprotected" flag remains on the appdata, system and cache shares.

     I started running Parity Check Tuning (fully updated) part way through this forum thread but doubt it's related to the current issues.

     Mover was set to every 3 hours with no logging. I have now enabled logging, rebooted my Unraid computer, tried Time Machine on both machines, run Mover and generated new logs. Same behavior as described above.

     yountnas-diagnostics-20210925-1454.zip
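     Now that Mover logging is on, the plan after the next run is to pull its entries out of the syslog - a sketch, assuming mover messages land in /var/log/syslog (the exact tag may differ between releases):

     grep -i "move" /var/log/syslog | tail -n 40     # recent mover-related syslog lines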
  4. Thanks. I rebooted my Unraid server last night. I just tried Mover, tried a Time Machine backup from my MacBook ("preparing files" for 2-3 minutes and then stopped without an error message) and tried Mover again. Diagnostics are attached. yountnas-diagnostics-20210925-1045.zip
  5. The share dropping appears to have stopped. I have run xfs_repair on my three data disks and it looks to me like it's not finding any problems. I still have some issues, though. Most relevant to this thread, Mover does not appear to be working: when I click it, nothing appears to happen and data remains on my cache disk. My Time Machine backups are also not working; they start "preparing backup" and then stop without an error message. I've spent maybe 5 hours researching that with no luck so far. Is it best to explore these here or start new threads?
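     In case it helps the troubleshooting, here's roughly how I plan to poke at Mover from the command line - a sketch that assumes the mover script lives at its usual /usr/local/sbin/mover location on 6.9.x:

     # in one console: watch the syslog while the mover runs
     tail -f /var/log/syslog
     # in a second console: kick off a move manually
     /usr/local/sbin/mover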
  6. Looks like a parity.check.tuning update I just applied took care of that issue.

     Mover now appears to be working (waiting to see if it completes without errors). Its problem was that I had told my Time Machine backups to stop using the cache as part of my troubleshooting, but there was still Time Machine data on the cache. It's now back to using the cache for those and working fine on one MacBook but not the other. I'll dig into that later.

     It looks like my various issues are close to fully resolved, thanks to you fine folks. I'll post again if more questions arise or if I conclude that everything is all wrapped up.

     Any point in running xfs_repair on my other drives as a precautionary check despite no obvious symptoms? I'm not aware of a downside other than the obvious one that my array would be in maintenance mode while I ran those tests. Any other due diligence/checkups I should work on?
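     If I do run those precautionary checks, something like the loop below is what I have in mind - a sketch that assumes my three data disks map to /dev/md1 through /dev/md3 while the array is in maintenance mode; the -n flag keeps everything read-only:

     for n in 1 2 3; do
       echo "=== disk $n ==="
       xfs_repair -n /dev/md$n    # report-only pass, makes no changes
     done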
  7. 4 of my shares say "some or all files unprotected" and have since before I posted. I just clicked on Move to start the Mover (I have one cache disk). Nothing appeared to happen except that the parity.check.tuning message went away... I suspect that's because the error I quoted above only occurs during parity checks, so I'll run a parity check again. How do I troubleshoot the Mover, or is it normal that I haven't seen anything yet (2 minutes later)?
  8. Done. As near as I can tell xfs_repair was successful. I didn't copy the results of the first repair, but I did run it twice more to be sure. The subsequent runs didn't seem to find any problems.

     The parity check and rebuild of precleared physical disk 2 completed successfully *before* I ran xfs_repair.

     Shares are no longer dropping within 60 seconds. I'll want to see them stay online for a few days to be more certain, but that problem appears to be solved.

     My TimeMachine backups still aren't working, but I'll troubleshoot that next and post here or in a new thread if necessary.

     I was able to write to one of the shares that previously wouldn't let me, despite permissions. No doubt the xfs_repair is responsible for that fix.

     One open issue that I only noticed a day or 2 ago but could have been there from the moment I installed parity.check.tuning is the following fatal error at the bottom of my screen. I assume that's unrelated to my share dropping issue and I'm fine moving that to another thread.

     Array Started • Fatal error: Cannot redeclare _() (previously declared in /usr/local/emhttp/plugins/parity.check.tuning/Legacy.php:6) in /usr/local/emhttp/plugins/dynamix/include/Translations.php on line 19

     Anything else I should do to confirm that my Unraid is now healthy?
  9. Yes, that was clear to me, though my understanding is that it's good to run it with -n first and ask any questions about the results. This "too corrupted" message makes me think it won't run properly even without the -n, but I haven't had time to research it yet: "Inode allocation btrees are too corrupted, skipping phases 6 and 7".
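     For reference, the sequence I understand to be the safe one - a read-only pass first, then the real repair - assuming disk 2 is /dev/md2 with the array in maintenance mode:

     xfs_repair -n /dev/md2    # dry run: report problems, change nothing
     xfs_repair /dev/md2       # actual repair, only after reviewing the -n output
     # if it refuses to run because of a dirty log, -L (zero the log) is the last
     # resort, at the cost of losing the most recent metadata transactions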
  10. 63 hours later the preclear is complete. Disk healthy, as anticipated. I added it back to the array and started the array in maintenance mode. Without starting a parity check I went right to xfs_repair. This time the xfs_repair did start. Two issues to address.

     First, at the bottom of the screen, it says the following:

     Second, the xfs_repair failed within 20 seconds:

     I think this is the problem to address: "Inode allocation btrees are too corrupted, skipping phases 6 and 7". I'll have some time to look into this in a few hours. Here is the full report:

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
     agi_freecount 62, counted 63 in ag 1
     sb_ifree 260, counted 261
         - found root inode chunk
     Phase 3 - for each AG...
         - scan (but don't clear) agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
     imap claims in-use inode 3575280897 is free, would correct imap
         - agno = 2
     imap claims a free inode 5655149344 is in use, would correct imap and clear inode
     imap claims a free inode 5655149345 is in use, would correct imap and clear inode
     imap claims a free inode 5655149346 is in use, would correct imap and clear inode
     imap claims a free inode 5655149347 is in use, would correct imap and clear inode
     imap claims a free inode 5655149348 is in use, would correct imap and clear inode
     imap claims a free inode 5655149349 is in use, would correct imap and clear inode
     imap claims a free inode 5655149350 is in use, would correct imap and clear inode
     imap claims a free inode 5655149351 is in use, would correct imap and clear inode
     imap claims a free inode 5655149352 is in use, would correct imap and clear inode
     imap claims a free inode 5655149353 is in use, would correct imap and clear inode
     imap claims a free inode 5655149354 is in use, would correct imap and clear inode
     imap claims a free inode 5655149355 is in use, would correct imap and clear inode
     imap claims a free inode 5655149356 is in use, would correct imap and clear inode
     imap claims a free inode 5655149357 is in use, would correct imap and clear inode
     imap claims a free inode 5655149358 is in use, would correct imap and clear inode
     imap claims a free inode 5655149359 is in use, would correct imap and clear inode
     imap claims a free inode 5655149360 is in use, would correct imap and clear inode
     imap claims a free inode 5655149361 is in use, would correct imap and clear inode
     imap claims a free inode 5655149362 is in use, would correct imap and clear inode
     imap claims a free inode 5655149363 is in use, would correct imap and clear inode
     imap claims a free inode 5655149364 is in use, would correct imap and clear inode
     imap claims a free inode 5655149365 is in use, would correct imap and clear inode
     imap claims a free inode 5655149366 is in use, would correct imap and clear inode
     imap claims a free inode 5655149367 is in use, would correct imap and clear inode
     imap claims a free inode 5655149368 is in use, would correct imap and clear inode
     imap claims a free inode 5655149369 is in use, would correct imap and clear inode
     imap claims a free inode 5655149370 is in use, would correct imap and clear inode
     imap claims a free inode 5655149371 is in use, would correct imap and clear inode
     imap claims a free inode 5655149372 is in use, would correct imap and clear inode
     imap claims a free inode 5655149373 is in use, would correct imap and clear inode
     imap claims a free inode 5655149374 is in use, would correct imap and clear inode
     imap claims a free inode 5655149375 is in use, would correct imap and clear inode
         - agno = 3
         - agno = 4
     imap claims a free inode 9710099999 is in use, would correct imap and clear inode
         - agno = 5
         - agno = 6
         - agno = 7
         - agno = 8
         - agno = 9
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 1
         - agno = 2
         - agno = 0
         - agno = 3
         - agno = 4
         - agno = 6
         - agno = 7
         - agno = 5
     entry "3751" at block 41 offset 640 in directory inode 5263491317 references free inode 5655149344, would clear inode number in entry at offset 640...
     entry "3752" at block 41 offset 656 in directory inode 5263491317 references free inode 5655149345, would clear inode number in entry at offset 656...
     entry "3753" at block 41 offset 672 in directory inode 5263491317 references free inode 5655149346, would clear inode number in entry at offset 672...
     entry "3754" at block 41 offset 688 in directory inode 5263491317 references free inode 5655149347, would clear inode number in entry at offset 688...
     entry "3755" at block 41 offset 704 in directory inode 5263491317 references free inode 5655149348, would clear inode number in entry at offset 704...
     entry "3756" at block 41 offset 720 in directory inode 5263491317 references free inode 5655149349, would clear inode number in entry at offset 720...
     entry "3757" at block 41 offset 736 in directory inode 5263491317 references free inode 5655149350, would clear inode number in entry at offset 736...
     entry "3758" at block 41 offset 752 in directory inode 5263491317 references free inode 5655149351, would clear inode number in entry at offset 752...
     entry "3759" at block 41 offset 768 in directory inode 5263491317 references free inode 5655149352, would clear inode number in entry at offset 768...
     entry "375a" at block 41 offset 784 in directory inode 5263491317 references free inode 5655149353, would clear inode number in entry at offset 784...
     entry "375b" at block 41 offset 800 in directory inode 5263491317 references free inode 5655149354, would clear inode number in entry at offset 800...
     entry "375c" at block 41 offset 816 in directory inode 5263491317 references free inode 5655149355, would clear inode number in entry at offset 816...
     entry "375d" at block 41 offset 832 in directory inode 5263491317 references free inode 5655149356, would clear inode number in entry at offset 832...
     entry "375e" at block 41 offset 848 in directory inode 5263491317 references free inode 5655149357, would clear inode number in entry at offset 848...
     entry "375f" at block 41 offset 864 in directory inode 5263491317 references free inode 5655149358, would clear inode number in entry at offset 864...
     entry "3760" at block 41 offset 880 in directory inode 5263491317 references free inode 5655149359, would clear inode number in entry at offset 880...
     entry "3761" at block 41 offset 896 in directory inode 5263491317 references free inode 5655149360, would clear inode number in entry at offset 896...
     entry "3762" at block 41 offset 912 in directory inode 5263491317 references free inode 5655149361, would clear inode number in entry at offset 912...
     entry "3763" at block 41 offset 928 in directory inode 5263491317 references free inode 5655149362, would clear inode number in entry at offset 928...
     entry "3764" at block 41 offset 944 in directory inode 5263491317 references free inode 5655149363, would clear inode number in entry at offset 944...
     entry "3765" at block 41 offset 960 in directory inode 5263491317 references free inode 5655149364, would clear inode number in entry at offset 960...
     entry "3766" at block 41 offset 976 in directory inode 5263491317 references free inode 5655149365, would clear inode number in entry at offset 976...
     entry "3767" at block 41 offset 992 in directory inode 5263491317 references free inode 5655149366, would clear inode number in entry at offset 992...
     entry "3768" at block 41 offset 1008 in directory inode 5263491317 references free inode 5655149367, would clear inode number in entry at offset 1008...
     entry "3769" at block 41 offset 1024 in directory inode 5263491317 references free inode 5655149368, would clear inode number in entry at offset 1024...
     entry "376a" at block 41 offset 1040 in directory inode 5263491317 references free inode 5655149369, would clear inode number in entry at offset 1040...
     entry "376b" at block 41 offset 1056 in directory inode 5263491317 references free inode 5655149370, would clear inode number in entry at offset 1056...
     entry "376c" at block 41 offset 1072 in directory inode 5263491317 references free inode 5655149371, would clear inode number in entry at offset 1072...
     entry "376d" at block 41 offset 1088 in directory inode 5263491317 references free inode 5655149372, would clear inode number in entry at offset 1088...
     entry "376e" at block 41 offset 1104 in directory inode 5263491317 references free inode 5655149373, would clear inode number in entry at offset 1104...
     entry "376f" at block 41 offset 1120 in directory inode 5263491317 references free inode 5655149374, would clear inode number in entry at offset 1120...
     entry "3770" at block 41 offset 1136 in directory inode 5263491317 references free inode 5655149375, would clear inode number in entry at offset 1136...
         - agno = 8
     entry "1CE0AA6D-6A65-460A-B9BC-06BDC8EE0C2B-7e97da036096de81d094354d3606ab98.lrprev" in shortform directory 9710099998 references free inode 9710099999
     would have junked entry "1CE0AA6D-6A65-460A-B9BC-06BDC8EE0C2B-7e97da036096de81d094354d3606ab98.lrprev" in directory inode 9710099998
     would have corrected i8 count in directory 9710099998 from 2 to 1
         - agno = 9
     No modify flag set, skipping phase 5
     Inode allocation btrees are too corrupted, skipping phases 6 and 7
     No modify flag set, skipping filesystem flush and exiting.
  11. Even though Unraid doesn't know that the unassigned device used to be an array device? Interesting. I'll wait until the preclear is complete and then try again. Pre-read is at 80% after 14.5 hours, so it'll be a while! Let me know if there's anything I can look into in the meantime; otherwise I'll just wait it out. There's also the parity check happening, but that'll be done first: just 1 hour remaining. Parity.check.tuning has already helped here; after your first reply, it allowed me to restart the preclear after I restarted the array (and the PC) to see if that helped get the xfs_repair check going. Although the reboot didn't fix the xfs_repair, at least I didn't lose hours of preclear progress!
  12. Hi trurl, I removed Disk 2 from the array by disconnecting it (while the computer was powered down) and then restarting the array, because I had some reason to suspect it was the cause of the problem and was ignorant of the check-filesystem (xfs_repair) option (and I knew I had 2 parity disks to rebuild the array with after the test). The physical disk in question is ST10000NM0086-2AA101_ZA28QW6X.

     For the 1st set of diagnostics, disk 2 was still disconnected. For the 2nd set of diagnostics, it was connected, but as an unassigned device undergoing a preclear (which is still underway).

     The manual says "If the drive is marked as disable and being emulated then the check is run against the emulated drive and not the physical drive." When I try to run the check, however, nothing happens (as noted in my reply to itempi).
  13. BTW, thanks for including the parity.check.tuning plugin in your signature. Nice utility. I'm particularly pleased to see "Resume parity checks on next array start". Because of this current issue I'm troubleshooting I haven't been able to complete a parity check since it began, and it's long overdue!
  14. I read through that section of the manual. When I click on "Check", nothing happens. xfs_repair status remains "Not available", even minutes later with a refresh.

     Details: Since the disk is XFS-formatted, I started the array in maintenance mode. I tried stopping the array and starting in maintenance mode a second time, with the same results. Twice while looking at the Main tab I noticed that STOP was greyed out with a message of "Disabled, a BTRFS operation is running". That disappeared after a few seconds. Disk 2 has the red x and is indicated as "not installed". There is nothing that indicates to me that it is being emulated, and no reads or writes are displayed... I assume that's normal because it's in maintenance mode, but I could be wrong. yountnas-diagnostics-20210911-1759.zip
  15. I noticed a few days ago that 4 of my 12 shares were dropping occasionally - always the same 4 shares and always simultaneously. At first it might be hours before they dropped, but today it only takes a minute or less. They always reappear after stopping and starting the array. This is Unraid 6.9.2 with all plugins and dockers updated. I read through a number of threads about shares dropping and tried some of the suggestions.

     SYMPTOMS:
     • Disc 2 says "too many files" when browsed from Main or Shares.
     • Disc 2 is red in Midnight Commander, which reports "cannot read directory contents".
     • Can't write to any of the 12 shares regardless of permissions.
     • Time Machine backups stopped working on 2 different MacBooks, each with its own UnRaid TimeMachine share. I think they stopped around the time when the delay before shares dropped got close to 1 minute. Prior to that I could get them to continue making progress by stopping and restarting the array.
     • Share settings: no common pattern for the 4 shares that go offline. This includes 2 TimeMachine shares, which both use Disc 2 only: one goes offline and the other does not.

     STEPS TAKEN:
     • Rebooting. Didn't help.
     • 8+ hours researching, including this thread: https://forums.unraid.net/topic/61879-unraid-shares-disappear/
     • Stopped all Dockers.
     • Uninstalled Krusader (deprecated anyway, and it wasn't running).
     • Fix Common Problems run. No useful discoveries.
     • SMART short test completed without error (command-line equivalents are sketched after this post).
     • SMART extended self-test "Interrupted (host reset)", presumably because the shares dropped.
     • Removed Disc 2 physically. Still had the same 4 shares drop offline about 1 minute after starting the array. (I did understand that Disc 2 would need to be reformatted and re-added because I started the array while it was offline, but with reason to suspect Disc 2 has issues and with 2 parity discs I wasn't concerned.)
     • Preclear scan of Disc 2 begun but won't be finished for a while.

     I've uploaded my diagnostics, taken shortly after the shares dropped and while Disc 2 is still being precleared. All suggestions are welcome. Thanks in advance!

     yountnas-diagnostics-20210911-1438, complete.zip
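     For completeness, the command-line equivalents of the SMART tests mentioned above; sdX is a placeholder for whatever device letter Disc 2 currently has:

     smartctl -a /dev/sdX          # attributes, error log and self-test history
     smartctl -t short /dev/sdX    # short self-test, a couple of minutes
     smartctl -t long /dev/sdX     # extended self-test, many hours on a 10tb drive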
  16. What I actually did was cancel the parity sync. Thanks for your instructions on how to properly do what I was trying to do in delaying the parity build until I had completed my manual reloading of data. Meanwhile, I'm very happy to report that all the data I'd copied to the replacement disks is still there after the parity sync completed. That makes me wonder why the Main tab showed data only being written to those disks and only being read from the parity disks, when it seems it should have been the reverse. Thanks to all of you who helped me deal with these disks! I'll be sure to pass on the knowledge. There are a few organizations in my community which could really benefit from an Unraid!
  17. My "parity sync / data rebuild" isn't working the way I thought it would. I disabled parity sync before loading data onto my two replacement 10tb data disks and then started the parity sync. I observed that during the sync it has only been reading from the parity disks and only writing to the replacement data disks, both of which have the "device contents emulated" flag. That's the behavior I expected if I'd added the disks and immediately performed a data rebuild, but since I added data to the disks and then clicked the Sync button, I was hoping it would be updating the parity disks, not the data disks. Obviously I was wrong. Was there a way to force it to behave the way I was expecting? The only thing I can think of is that I could perhaps have removed the parity disks from the Unraid configuration so it wouldn't even think a data rebuild was possible. Note that in my specific case the "data rebuild" function is a waste of time because the data from the missing data disks was also not on the parity disks (see earlier posts in this thread if interested in what I did wrong to make that happen). [Looks like I'll end up with both replacement data disks blank, because the parity drives thought the disks they replaced were blank. Fortunately, I can copy the data to the array again once this is complete. Doesn't look like I lost very much yet due to the increasing reallocated sector count.]
  18. Yes, definitely. But if the motherboard doesn't support VT-d (and it looks like it probably doesn't) I won't rush to replace it. I have very little time for gaming these days anyway. NAS was the main point of switching to UnRaid, followed by media serving. Hadn't even booted my gaming desktop for over a year!
  19. Update: I was unable to recover any data from either of the 10tb replacement disks, but it looks like I can get most of the data from the two disks those were replacing. Krusader is about half done transferring the data to the new disks. The reallocated sector count is climbing on disk ZA26ESJ8 while I'm copying from it, but not quickly. It's at 40 so far. I suspended the Parity-Sync during the process of copying the data from the failing disks to increase speed. As soon as that's done, I'll Sync for parity. Assuming that goes well, my next step will be figuring out if my Asus Sabertooth Z77 can support VT-d. I already know my i7 3770k cannot (but HVM works fine). I'll probably only be running one VM: Windows 10, mainly for gaming. 90% sure I need VT-d to do that. After that, it's setting up a mirrored SSD cache array and setting up my data and media serving for our laptops, Roku, etc. I'll update after parity sync is complete. Successfully, I hope!
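     To keep an eye on ZA26ESJ8 while the copy runs, something like the line below is enough - a sketch, with sdX standing in for that disk's current device name:

     watch -n 300 "smartctl -A /dev/sdX | grep -i reallocated"   # re-check the count every 5 minutes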
  20. It's definitely the correct disk. Starting with the two replacement disks I've been keeping track of them with the serial numbers rather than "disk 1" and "disk 2". If xfs_repair fails on ZA27FQQF, can I assume the disk itself is just fine and put it back to use in the array? Begs the question of why the file system got messed up during the parity rebuild, of course. Was it a mistake to replace both disk 1 and disk 2 at the same time before rebuilding parity? Could that have been the cause of the file system errors? (This was a dual parity drive system with two 10tb parity drives reporting no errors.) If it's ok to use disk ZA27FQQF again, how important is it to preclear the disk again, given that it passed the first time? Dunno if Amazon will give me an extension on the Feb 6th return shipping deadline.
  21. Thanks everyone so far. [I'm now mainly using serial numbers to refer to the specific disks, since disk numbers change when drives are swapped out.]

     DISK ZA27FQQF RECOVERY
     I now understand that formatting replacement disk 1 (ZA28QW6X) updated parity, so my two parity drives can't help me with recovery. Disk 2 (ZA27FQQF) was physically removed from the computer when I formatted disk 1 (ZA28QW6X), and I'm following jonnie.black's lead to recover from filesystem corruption. The "file system check" option in the GUI runs this command on disk ZA27FQQF:

     /sbin/xfs_repair -n /dev/sdg1 2>&1

     From that and the xfs_repair instructions I understand that my command-line command should read:

     xfs_repair -v /dev/sdg1

     It's currently in Phase 1 trying to find secondary superblocks since it couldn't find the primary. 15 minutes later, it's still writing only dots on the screen. I don't know how bad a sign that is.

     DISK ZA28QW6X RECOVERY
     This is the new replacement disk that I formatted when, immediately after the parity rebuild, it showed 9.99tb free instead of being nearly full. It was obviously a quick format, so I'm taking it to my local shop to see what equivalent of UFS Explorer they have.

     NEXT STEPS
     If I'm lucky, the original 10tb disc with the CRC error is indeed just fine and only needs a new SATA cable. I'll pick that up today. If I'm successful with both disk recoveries (ZA27FQQF and ZA28QW6X) then I'll have to see what drive I can add to the array temporarily to copy the recovered data to before wiping the 10tb drives to return them to the array.
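     Before pointing xfs_repair at /dev/sdg1 it's worth confirming that sdg really is ZA27FQQF; a quick sanity check along these lines should do it:

     lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT   # match the serial number to the device letter
     blkid /dev/sdg1                        # confirm the partition (should report an xfs filesystem if the superblock is readable)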
  22. *I might have just lost 10TB of data. Or perhaps 20tb.*

     [Setting aside the probably bad SATA cable for now (no drives connected to it)]

     After preclearing both replacement 10tb drives I had the system run a parity rebuild. During the parity rebuild I noticed that one drive was listed as formatted and the other wasn't. I hadn't formatted either, I believe, but hoped that the parity rebuild would take care of any potential issues or - at worst - I'd have to reformat the unformatted one and do the parity rebuild again.

     When the parity rebuild was complete, both replacement 10tb data drives were listed as unmountable because they hadn't been formatted, although both listed "xfs" as a file system. I figured that at worst I'd just wasted the rebuild time and would simply have to format them and repeat the rebuild. To be on the safe side, I powered down and disconnected one of the drives (disc 2), formatted the other (disc 1) and restarted the array, having it remove the missing drive (disc 2) from the array. The array status then said parity is valid and drives 1 and 2 each have 9.99tb free. It looks as if no data was recovered from the parity rebuild.

     I stopped the array, reconnected disc 2, restarted the array and discovered that disc 2 would not mount when I clicked "mount". Occam's razor says it's user error, due to ignorance. Any information on what happened and, more importantly, any steps to recover up to 20tb of data would be appreciated.

     I've attached the system log and diagnostics. Below is part of the most recent log info. Disc 2 is "ZA27FQQF". I assume I should "run xfs_repair (>= v4.3)" on Disc 2, though I'm not at all familiar with that process and hope it can help with that disc's data. Any chance of data recovery for Disc 1 (ZA28QW6X)? No data was written to it since the quick format.

     Feb 4 22:12:22 YountNAS unassigned.devices: Adding disk '/dev/sdg1'...
     Feb 4 22:12:22 YountNAS unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdg1' '/mnt/disks/ST10000NM0086-2AA101_ZA27FQQF'
     Feb 4 22:12:22 YountNAS kernel: XFS (sdg1): Mounting V5 Filesystem
     Feb 4 22:12:22 YountNAS kernel: XFS (sdg1): totally zeroed log
     Feb 4 22:12:22 YountNAS kernel: XFS (sdg1): Corruption warning: Metadata has LSN (1:1041896) ahead of current LSN (1:0). Please unmount and run xfs_repair (>= v4.3) to resolve.
     Feb 4 22:12:22 YountNAS kernel: XFS (sdg1): log mount/recovery failed: error -22
     Feb 4 22:12:22 YountNAS kernel: XFS (sdg1): log mount failed
     Feb 4 22:12:22 YountNAS unassigned.devices: Mount of '/dev/sdg1' failed. Error message: mount: /mnt/disks/ST10000NM0086-2AA101_ZA27FQQF: wrong fs type, bad option, bad superblock on /dev/sdg1, missing codepage or helper program, or other error.
     Feb 4 22:12:22 YountNAS unassigned.devices: Partition 'ST10000NM0086-2AA101_ZA27FQQF' could not be mounted...

     yountnas-diagnostics-20190204-2219.zip yountnas-syslog-20190204-2230.zip
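     Based on that log message, the rough plan for ZA27FQQF - treat this as a sketch, since I haven't run it yet - is to confirm the xfs_repair version and then run it against the unmounted partition:

     xfs_repair -V            # confirm the version is >= 4.3, as the warning asks
     xfs_repair -n /dev/sdg1  # read-only pass against the unmounted partition
     xfs_repair /dev/sdg1     # the actual repair, once the -n output looks sane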
  23. Thanks for that insight! What part of the report hinted at the bad cable? I'm leaving that hot-swap bay empty until my parity rebuild is complete, then I'll replace the cable or just try a different drive and see what the SMART report says then.
  24. Thanks! I haven't used diagnostics yet and am not familiar with what to look for. yountnas-diagnostics-20190202-2156.zip