ssb201


  1. It would be nice if Unraid functioned in a similar fashion to Hyper-V during array stops or system restarts. Hyper-V automatically snapshots ("saves") VMs in their current running state, shuts down or restarts, and then rehydrates the VMs when the system starts back up. Currently, stopping the array requires a guest agent to be installed on the VM, and the VM must shut itself down gracefully before the array can stop. I have a Windows VM that never seems to accept the signal and thus hangs, keeping the entire array from shutting down due to file locks. I do not want anything running in the
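     A minimal sketch of that save/restore flow using libvirt's virsh (Unraid manages VMs through libvirt); "WinVM" is a hypothetical domain name, not from the post:

        # Snapshot the running VM's RAM/device state to disk and stop it
        virsh managedsave WinVM
        # ... stop the array / restart the host ...
        # On the next start, libvirt resumes the VM from the saved state
        virsh start WinVM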
  2. I did not think to upload the diagnostics because I assumed they would not be that interesting, since the logs are completely full of the failed spin-down messages. Here are the diagnostics: tower-diagnostics-20191226-2145.zip
  3. Upgraded my server (from 6.6.6) last week and ran into two issues: 1) Lockups and reboots. After upgrading, the server would lock up and/or reboot randomly after a few hours to a day of running. This is the same behavior that happened when I tried to upgrade to 6.7.0 (I ended up going back down to 6.6.6 and was stable for 125 days straight). Nothing obvious appears in the logs. During one of the periods between reboots, I happened to have also installed the Disable Mitigation Settings plugin for some testing. As soon as I turned off all the mitigations, the problem went away. The syst
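     For reference, a quick way to check what state the plugin left the system in, assuming it works by adding the stock kernel parameter mitigations=off (an assumption, not confirmed in the post):

        # List which CPU vulnerability mitigations the running kernel has active
        grep -r . /sys/devices/system/cpu/vulnerabilities/
        # Check whether mitigations were disabled on the kernel command line
        cat /proc/cmdline   # look for "mitigations=off"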
  4. I seem to have had similar issues to others in this forum. Upgraded my server from 6.6.6 to 6.7.2. My server was stable for a few days and then it would either lock up or reboot unexpectedly. Nothing showed in the logs. The last time, it was fine when I went to sleep but was not responding to ping in the morning. After a reboot it came up, ran for an hour, then rebooted on its own again. As soon as I returned home from work I downgraded back to 6.6.6 and it has been stable once again. Any idea how I can collect logs or details of the crash/hang/reboot? I am leery of updating again, but if
  5. I hooked it up to the on-board SATA controller and saw ATA errors:

        [  369.829354] sd 4:0:0:0: [sdc] tag#26 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06
        [  369.829360] sd 4:0:0:0: [sdc] tag#26 CDB: opcode=0x88 88 00 00 00 00 00 00 00 64 00 00 00 06 00 00 00
        [  369.829362] print_req_error: I/O error, dev sdc, sector 25600

     It still seems to work just fine on my Windows machine using a USB-SATA controller. I have given up trying to figure this puzzle out. I am ordering a new drive and will just use the problem child somewhere else. Thanks everyone for the
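     Decoding the CDB above: opcode 0x88 is READ(16), the LBA bytes give 0x6400 = 25600, and the length bytes give 0x600 = 1536 blocks, matching the reported sector. A minimal re-read of that exact region, assuming the drive is still /dev/sdc:

        # Re-read the failing region directly; an error here reproduces the dmesg I/O error
        dd if=/dev/sdc of=/dev/null bs=512 skip=25600 count=1536 iflag=direct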
  6. Yeah, I understand that there will be additional work for the drives until I replace it. I took the drive out and used it with a USB-SATA controller on Windows and it worked just fine. That leads me to suspect a controller problem, despite the firmware update. I am just puzzled, because I have other 512e drives working just fine with the controller and the drive is explicitly listed in the controller support document: https://docs.broadcom.com/docs/IT-SAS-Gen2.5CompatibilityList The two Hitachi drives that are working with the controller are: HDN728080ALE604 - Des
  7. Doh. That makes perfect sense. I am still not clear why I am getting read errors without any SMART issues at all. I will have to pull the drive and try it in some other systems.
  8. New update: The drive shows as Not Installed (missing) from the array. For some reason it is still receiving writes through shares: when I extracted files to a share, they ended up being written to the array. I assume this is due to some weirdness with the union file system. What I do not get is why I was able to go directly to /mnt/disk5 and read and write files without any errors or problems.
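     Presumably this is because /mnt/user is Unraid's shfs FUSE union over the /mnt/disk* mounts, so share writes can still land on a disk the array GUI reports as missing. A quick way to see the layering (share and file names below are hypothetical):

        # /mnt/user is a FUSE (shfs) union of the individual disk mounts
        df -hT /mnt/user /mnt/disk5
        # Compare a file's presence in the union vs. the underlying disk
        ls -l /mnt/user/SomeShare/file.bin /mnt/disk5/SomeShare/file.bin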
  9. After updating the firmware I no longer had any problems at startup, but my read speeds on the array plummeted to almost nothing. I tried removing the disk (since nothing of interest had been written yet) and rerunning preclear on it. As soon as the pre-read started, it was spitting out the same read errors. Still no issues showing in SMART, and I am baffled as to why it is only on reads, never writes.
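     One way to separate media errors from link/controller errors is to compare the drive's own SMART counters against the kernel log; a sketch with smartctl (device name assumed; some HBAs need an extra -d option):

        # Media problems usually raise these counters; cabling/link problems raise UDMA_CRC_Error_Count
        smartctl -A /dev/sdX | grep -E "Raw_Read_Error|Reallocated_Sector|Current_Pending|UDMA_CRC"
        # Errors the drive itself has logged, if any
        smartctl -l error /dev/sdX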
  10. The disk is connected via the same 12-port backplane as all the other drives. The controller is a Dell Perc H200 flashed with IT firmware. The UPS is not plugged in at the moment; I had to disconnect it to reset the memory when replacing the batteries and have not plugged it back in yet. I see a million read errors in dmesg, but not one in SMART (posted above; not sure why it shows on the drive's page but not in the diagnostics). No write errors at all. The only difference I can think of between this and the other disks is that this is an AF 4Kn drive. I could try swapping
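     The 4Kn hypothesis is easy to confirm from the console (device name hypothetical):

        # Logical/physical sector sizes: 4096/4096 = 4Kn, 512/4096 = 512e, 512/512 = legacy
        blockdev --getss --getpbsz /dev/sdX
        # The same information from the drive's identify data
        smartctl -i /dev/sdX | grep -i "sector size"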
  11. Diagnostics attached: tower-diagnostics-20190216-1915.zip
  12. So I just added a new drive to my array and I am getting weird errors. I pre-cleared the drive (admittedly without the pre- or post-read, since this was a drive I had used previously) and it ran overnight successfully. I added the drive to my array and it formatted and joined seemingly normally. But if I review the syslog I see:

        Feb 16 17:35:01 Tower kernel: sd 8:0:6:0: attempting task abort! scmd(00000000f8e68c0b)
        Feb 16 17:35:01 Tower kernel: sd 8:0:6:0: [sdj] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 00 61 c1 b0 00 00 00 08 00 00
        Feb 16 17:35:01 Tower kernel: scsi target8:0:6: handle
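     Decoding that CDB: opcode 0x88 is READ(16); bytes 2-9 are the LBA and bytes 10-13 the transfer length, so the aborted command was a read at LBA 0x61c1b0 for 8 blocks. Worked out with bash arithmetic:

        # LBA bytes 00 00 00 00 00 61 c1 b0, length bytes 00 00 00 08
        printf 'LBA:    %d\n' $((16#0061c1b0))   # -> 6406576
        printf 'blocks: %d\n' $((16#00000008))   # -> 8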
  13. Since it is looking more and more like the RC series is coming to an end, I have a question about moving to 6.4. If I use a 4Kn drive as a second parity drive and then have to downgrade back to an earlier release, what will happen? I assume the array will still work with the single parity drive it can recognize. If I later move back to a 6.4 release, will there be any conflict with the now out-of-date original 4Kn parity drive?
  14. The array was definitely in an unusual state. New Config worked well. The Plex docker was missing and I had to rejoin AD, but everything else looked fine. A few movies are corrupt and will need to be re-ripped, but nothing major. I have a new parity drive building now. So glad I pulled the trigger on an extra drive for Black Friday. Unfortunately, the drive that failed, while under warranty, is flagged as needing to be returned to the system vendor rather than Hitachi, and I have no idea who that is. Thanks everyone for the assistance.
  15. Correct. Apologies for any confusion; the tooltip for the yellow icon says emulated: