icemansid

Members · 71 posts

Everything posted by icemansid

  1. I got this error, and @Squid's post says to post here. Diags attached. I haven't noticed any issues, nor have I rebooted yet. 10.1.1.2-diagnostics-20230929-1216.zip
  2. @dlandon Would it be possible to only log deletes, or to have an option to? I have been trying to track down rogue deletes for a long time, and this plugin could greatly assist with that. Or maybe add a notification upon a file delete? Edit/update: I found where the log file was kept and set up a script that runs every minute, greps out the delete events, clears everything else, and rewrites the log file with only the delete events. This allows me to use the tool while keeping only the delete log events. Awesome tool, BTW!
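The log-trimming approach described in that edit can be sketched roughly as below. The log path varies by plugin version, so it is passed in as an argument rather than hardcoded; nothing here is the plugin's own code, just one way to do the grep-and-rewrite step.

```shell
#!/bin/bash
# Sketch of the approach described above: keep only delete events in
# the plugin's log file. The log path is supplied by the caller.
keep_deletes() {
  local log="$1" tmp
  tmp="$(mktemp)"
  # Keep only the lines recording a delete event, drop everything
  # else, then write the filtered result back over the original log.
  grep -i 'delete' "$log" > "$tmp"
  mv "$tmp" "$log"
}
```

A wrapper script calling this could then be scheduled to run every minute from cron (or the User Scripts plugin), pointed at wherever the plugin actually writes its log.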
  3. I just wanted to provide an update to this post: as it turns out, the issue was with SMR drives being used in the system. While we removed the 9206 cards, it wasn't until we moved the SMR drives out of parity that we were able to clear the issue.
  4. Shares disappeared. tower-diagnostics-20200207-2047.zip No idea what happened. They appear in /mnt/user0 but are not showing up on the dashboard.
  5. No issues for me. Been using this for a few years now.
  6. File as requested. file_20200109_123133.tar.gz
  7. A second 2308 gives very similar results at this point. Nothing in the BIOS stands out, but I have an HPE storage engineer looking at it as well. Hopefully he can assist with the PCIe link negotiation.
  8. I am using the HP 468405-002 and a PCIe-to-USB3 adapter to power it. The P420i is the onboard controller running the HP OEM drives in a RAID array acting as a cache drive. I've rebooted and installed a second SAS2308 card for testing as well. I've also attached a fresh diagnostic report. I don't exactly follow what you mean by PCIe switch. unraid-backup-diagnostics-20200108-1436.zip
  9. SAS expander is in and no change. What I did come across is someone's comment that old drives connected to this controller will cause drive speed issues across all of the drives. I think I will do some more testing, as I am using a bunch of very old disks in this setup.
  10. It has 4 ports on it: 2 connect to one chip, 2 connect to the other. Also, on the overheating: it's unlikely, as this card is in a 1U HP enterprise server. It has enough fan power to take flight.
  11. Here is the same benchmark from the H310, technically a slower card, though it has much better/faster drives attached to it. Single-disk reads are nearly identical to multi-disk reads. This is what I would expect the 2308 card to perform like.
  12. Diags are at the top of this post; I haven't rebooted since, but I can pull a new one if you like. There are a ton of drive errors, caused by bad SAS cables and since resolved, but speeds are still way off from the previous SAS card. The H310 shows a max throughput of only 4 GB/s compared to 6 GB/s on the 2308.
  13. No real change in multi-drive reads. It does appear to have increased single-drive read speeds, though. Also, I want to mention: the ONLY thing that I changed was the SAS card and cables. I've added that to the image. Comparing that to my other system with H310 cards installed, this one should be faster.
  14. Update #2: for some reason, single-disk speeds are normal, but when reading from all disks, speeds are slow. This was apparent with a parity check and by benchmarking all disks on the controller with the DiskSpeed container. By all accounts, the new SAS card should be much faster than the old one.
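That single-disk-fast / all-disks-slow pattern can also be reproduced outside a parity check with parallel raw reads, which takes Unraid and DiskSpeed out of the picture. A rough sketch (device names are examples only; on real block devices, adding `iflag=direct` to the `dd` line bypasses the page cache for honest numbers):

```shell
#!/bin/bash
# Hypothetical helper: read each given device (or file) in parallel
# with dd so aggregate throughput can be compared against a single
# read. dd prints per-device throughput on stderr as each finishes.
parallel_read() {
  local dev
  for dev in "$@"; do
    dd if="$dev" of=/dev/null bs=1M count=1024 &
  done
  wait  # block until every background dd has finished
}
# e.g. parallel_read /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

If one device alone is fast but the parallel run collapses, the bottleneck is on the shared path (HBA, PCIe link, expander, or cabling) rather than the individual drives.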
  15. Quick update: the replacement cables are in, and so far the issues seem to have been resolved, even though I am over the 1 m total-length spec. Currently at 1.5 m, but a parity check is running with no errors. I also ordered a SAS expander, so I will be implementing that later this week as well.
  16. Can you expand on which SAS expander you are using, and is there any I/O limitation from accessing 12 disks over a single SAS cable? Also, what HBA are you using for this?
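On the single-cable question, a back-of-envelope check is possible, assuming a SAS2 wide port (4 lanes at 6 Gb/s each with 8b/10b line coding) and ignoring further protocol overhead; these figures are assumptions to adjust for the actual hardware:

```shell
#!/bin/bash
# Rough usable-bandwidth ceiling for 12 disks behind one SAS2
# wide-port cable. Assumptions: 4 lanes x 6 Gb/s, 8b/10b coding
# (8/10 of the raw line rate is usable payload).
lanes=4
gbps=6
disks=12

total=$(( lanes * gbps * 1000 / 8 * 8 / 10 ))  # usable MB/s, ~2400
per_disk=$(( total / disks ))                  # ~200 MB/s per disk
echo "~${total} MB/s total, ~${per_disk} MB/s per disk if all stream at once"
```

Since spinning disks typically top out in the 150-250 MB/s range sequentially, 12 drives streaming simultaneously sit right around the break-even point for a single SAS2 wide link, so some throttling under a full parity check would not be surprising.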
  17. My old setup was similar, with no issues. We'll see how the new cables fare. Right now they are as short as they can be.
  18. Ordered replacement cables. After a bit of digging, I found a number of negative reviews of the specific cables I ordered.
  19. The external cables are 0.5 m and the internal cables are 0.5 m as well, about as short as I can get it. The HBA runs externally to another chassis through a dual mini-SAS 26-pin SFF-8088 to 36-pin SFF-8087 adapter. All of the SFF-8644 cables are new as well.
  20. I've recently upgraded my server to use two 9206-16e cards, flashed with the latest firmware. I've got 2 systems with this setup, and both are experiencing the same issues. One system has large disks, most around 8 TB; the other has only 2 TB disks. When a parity check is run, the parity disk throws read errors. I am really at a loss. Diagnostics are attached. Previously I had the 9201-16e card in without issues. I would have kept that card, but I needed to move to a half-height card because the server has only 1 full-height and 1 half-height PCIe slot. unraid-backup-diagnostics-20200101-0823.zip
  21. I would like to see if there is any possibility of adding a Gluster client in some form to connect to Gluster storage nodes. I would like to pass some storage to VMs. I currently have it connected to a Linux VM using the Linux Gluster client, but I would like the flexibility of passing the storage to a Windows VM, which has proven more difficult than expected.
  22. Just an FYI: I came here looking for answers and didn't find them here, but on the Docker Hub page. The image has been deprecated. https://hub.docker.com/r/linuxserver/mysql/
  23. Squid, thank you, thank you, thank you! You have helped me out more times than you can imagine, a few times directly and many indirectly. A few days in and we are still going strong.