
Viper359

Members
  • Content Count: 29
  • Joined

  • Last visited

Community Reputation

2 Neutral

About Viper359

  • Rank: Member


  1. Now I need to figure out how to access my Unraid shares from within Synology (a mount sketch follows after this list).
  2. We are just going to have to wait until @dmacias has time to update and finish his plugin.
  3. Yes, I switched and all my problems went away. I left the interposers in. Since all the issues disappeared, I stopped trying to debug what the actual cause was.
  4. I too had this issue for the last few days. I used top to get the process ID, killed it, then typed wsdd, and the Windows network locations showed back up. Either restarting it by typing just wsdd did the trick, or that did nothing and it automagically starts back up on its own. Hopefully it stays normal (a restart sketch follows after this list).
  5. Had a different parity drive fail that night. Switched the HBA out for an LSI one and rebuilt the parity drive. It ran 10 hours with no errors, so I started rebuilding the other dropped drive. 21 drives running now; we shall see if this was the issue. Just waiting for the rebuild to finish on the last drive.
  6. Alright, later tonight I will power down everything, make the switch, and see what happens. I will also triple-check that the cables are seated properly. Those QSFP cables are thick and don't like to move.
  7. Yes, I believe that is the model I am using. I do have an LSI HBA card and a spare QSFP-to-whatever adapter cable. Maybe I should try that first: do a rebuild and see if it stays stable.
  8. I will have to downgrade and give it a go and see. Is it safe to downgrade now that it's kicked a disk out of my array, and then rebuild it and go from there?
  9. This is just the thing: this didn't happen on 6.7. Does this make any sense? It's a disk shelf, and everything seems to be working. The only thing I can think of is the interposers, but you would think I would have had this issue on 6.7 for the couple of weeks I was running it? The other thing I cannot get is why these errors only happen after a rebuild. Like, I rebuilt the parity drive and nothing happened; within an hour of it finishing, bam. Same when disk 11 was causing issues. Rebuilt the disk, nothing, then an hour or so later, boom.
  10. It has done it again. I am attaching my diagnostics. You will notice several disks report a low number of errors, and one drive has been kicked. tower-diagnostics-20191220-1027.zip
  11. Nope. Rebooted and am rebuilding the parity drive that got kicked. When it's done, I think I will try to downgrade to 6.7; I didn't have issues then.
  12. I know; what I was trying to say is that I do it to ensure the disk is fine. Now that several disks are reporting errors that magically stop after a disk is kicked out of the array or a disk is rebuilt, I am certain it's not my disks. Rebooting doesn't seem to stop this issue, and no red flags pop up anywhere either.
  13. Because on the first try it rebuilt fine, but I wanted to be 100 percent sure the second time. Elimination of possible causes, if you will.
  14. Is anyone else experiencing issues with write errors and then a disk being dropped? 6.7 was fine, not a single issue; then I updated to the release client and it started from there. 21 drives. It would report one drive with read errors and kick it. I would run SMART and preclear, not a single issue (the SMART commands are sketched after this list). Rebuild, not a single issue. Then a few hours later, many drives start having errors. Reboot and everything is good, but it kicked the same disk. Pull it and verify it thoroughly, everything is fine. Preclear again and rebuild, all is fine. Then a few hours after it's done, parity disk #2 starts having these read errors. Now rebuilding, as I know the disk is fine. I am using a DS4246 disk shelf connected to my server. This issue wasn't present in 6.7. Any thoughts or ideas on what's going on?
  15. Ah okay. Even from the command line, I am not seeing the results I saw when installing directly like you showed me. Let me know if you need help troubleshooting.
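
On item 1 above: a minimal sketch of mounting an Unraid share from the Synology command line, assuming the share is exported over SMB and that the CIFS mount helper is available on DSM. The server name TOWER, the share media, and the user viper are placeholders, not my actual setup; DSM's File Station also offers a GUI way to mount remote folders, so treat this purely as an illustration.

    # create a mount point and attach the Unraid SMB share (all names are placeholders)
    sudo mkdir -p /mnt/unraid-media
    sudo mount -t cifs //TOWER/media /mnt/unraid-media \
        -o username=viper,vers=3.0,iocharset=utf8

If SMB gives trouble, exporting the share over NFS from Unraid and using mount -t nfs would work just as well.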
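
On item 4: the wsdd restart described there boils down to a couple of commands. A rough sketch, assuming the daemon really is the plain wsdd binary on the box (which is what I typed):

    pgrep -a wsdd    # find the PID of the hung wsdd instance (top works for this too)
    kill <PID>       # stop it, using the PID reported above
    wsdd &           # relaunch it manually if it does not come back up on its own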
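
On item 14: the "run SMART" step is just smartctl against the suspect disk from the Unraid console. A sketch with a placeholder device name (/dev/sdX); preclear is a separate community script and is not shown here.

    smartctl -a /dev/sdX           # full SMART report and attribute table
    smartctl -t long /dev/sdX      # start an extended self-test (takes hours on large disks)
    smartctl -l selftest /dev/sdX  # review the self-test log once it completes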