Everything posted by ottoguy

  1. I have an external USB drive that I use for storing some of my VM disk images, mounted with the Unassigned Devices plugin. It frequently stops responding and gives an I/O failure.

     What I've tried that doesn't work:
       • Physically unplugging and re-plugging the drive.
       • Replacing the USB drive enclosure with a new one.
       • Unmounting the drive from the CLI — it usually fails because the device is "busy", yet lsof can't find any open files on it.

     The only thing that brings it back is to physically unplug the drive, move it to my Ubuntu desktop machine, and plug it in there. The drive usually auto-mounts and works fine. Then I can move it back to the Unraid server and manually re-mount it from the CLI. Sometimes I have to change the UUID (because Unraid still thinks it's mounted), and sometimes it just works. Any suggestions why the drive gets dropped so frequently? The USB boot drive is never an issue, so it doesn't seem to be a general USB problem. tower-diagnostics-20221025-0856.zip
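For reference, the "busy but nothing in lsof" symptom can sometimes be worked around with a lazy unmount. A minimal sketch, assuming a hypothetical mount point path (substitute the real Unassigned Devices path):

```shell
#!/bin/sh
# Hypothetical mount point; not taken from the diagnostics.
MOUNT="${1:-/mnt/disks/usb_vms}"

# Look for userspace holders first. lsof can miss kernel-side users
# (loop devices, NFS/SMB exports, VM disk images held open by qemu),
# which is one way a mount stays "busy" with no lsof output.
lsof "$MOUNT" 2>/dev/null || true
fuser -vm "$MOUNT" 2>/dev/null || true

# Last resort: a lazy unmount detaches the mount point immediately
# and finishes the cleanup once the final user lets go.
if umount -l "$MOUNT" 2>/dev/null; then
    STATUS=detached
else
    STATUS=not-mounted-or-busy
fi
echo "$MOUNT: $STATUS"
```

A lazy unmount only hides the problem, of course; if the enclosure keeps dropping off the bus, the underlying I/O errors will still show up in the syslog.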
  2. Thanks, I'll try turning on email notifications for Notice-level logs. I saw in the logs that mine had gone up and down, but the entries don't show a log level. I'm guessing my output differs because I'm using the NUT plugin?

     Aug 28 11:41:22 Tower upsmon[499]: UPS [email protected] on battery
     Aug 28 11:41:27 Tower upsmon[499]: UPS [email protected] on line power

     Follow-up question: how do you add the log level to the log output?
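On the log-level question: stock Unraid logs through rsyslog, so a template that emits the severity keyword should work. A sketch in rsyslog's legacy syntax (the template name is an assumption; the change would not survive a reboot unless scripted):

```
# Add the severity keyword (info, notice, warning, ...) to each syslog line
$template WithLevel,"%timegenerated% %hostname% %syslogseverity-text% %syslogtag%%msg%\n"
*.* -/var/log/syslog;WithLevel
```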
  3. I connected my Tripp Lite UPS to Unraid via USB using the NUT plugin. I received an alert from Unraid that the UPS was on battery, but when it went back to line power there was no email alert, just a log entry. Is there an easy way to get an email alert when the system goes back onto line power? And/or an alert when the system is shutting down due to low battery? (I haven't tested this yet, so maybe it just works.) tower-diagnostics-20220829-0908.zip
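For what it's worth, NUT itself can fire a command on the ONLINE event via upsmon.conf; whether the Unraid plugin exposes these settings may vary. A sketch (the script path is a placeholder for whatever actually sends the email):

```
# upsmon.conf — run NOTIFYCMD for any event flagged EXEC
NOTIFYCMD "/usr/local/sbin/ups-mail.sh"   # hypothetical mail-sending script

NOTIFYFLAG ONLINE   SYSLOG+EXEC   # back on line power
NOTIFYFLAG ONBATT   SYSLOG+EXEC   # running on battery
NOTIFYFLAG LOWBATT  SYSLOG+EXEC   # battery low
NOTIFYFLAG SHUTDOWN SYSLOG+EXEC   # system about to shut down
```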
  4. I went to shut down the array: I hit Stop All on both Docker and VMs, then Shutdown on the array. Apparently, since some of the VMs were already paused, they didn't shut down, so the array couldn't unmount the drives. I did a bit of process killing and lsof work to finally get it to stop. However, it now says the config is stale and there is no Start Array option. Any suggestions? tower-diagnostics-20220216-1807.zip
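A paused VM ignores the ACPI shutdown request, which is likely why the unmount hung; resuming paused guests before shutting them down usually lets a clean array stop go through. A sketch using libvirt's virsh (present on stock Unraid when VMs are enabled):

```shell
#!/bin/sh
# Resume any paused VMs, then ask each for a clean guest shutdown,
# so the array stop can unmount the drives afterwards.
if command -v virsh >/dev/null 2>&1; then
    for vm in $(virsh list --state-paused --name); do
        virsh resume "$vm"
        virsh shutdown "$vm"
    done
    FOUND=yes
else
    FOUND=no
fi
echo "virsh available: $FOUND"
```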
  5. Does anyone know if a Cisco UCS C240 LFF M4 is a good option for Unraid? I have an old Dell server I need to replace. I see some people posting that they use the UCS C240, but they seem to have issues with either the NIC or getting drives to work. I just wanted to see if anyone has one in good working order with Unraid before I spend the money on eBay.
  6. Is there a guide that shows upgrade best practices? I'm on 6.6.6 and planning an upgrade. I assume it's as simple as clicking "upgrade", but I didn't know if there was a guide on things to do beforehand: take a screenshot of the setup, back up the config somewhere, or anything else that may be good to have if the upgrade fails for some reason.
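The one pre-upgrade step that covers most failure modes is a copy of the flash drive, since /boot holds the entire Unraid configuration. A minimal sketch (the function name and destination path are placeholders):

```shell
#!/bin/sh
# Archive a directory tree into a dated tarball.
# On the server you would call it as: backup_flash /boot /mnt/user/backups
backup_flash() {
    src="$1"
    dest="$2"
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest"
    # -C keeps the archive paths relative, so a restore lands cleanly.
    tar -czf "$dest/flash-backup-$stamp.tgz" \
        -C "$(dirname "$src")" "$(basename "$src")"
    echo "$dest/flash-backup-$stamp.tgz"
}
```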
  7. Jan 4 19:58:45 Tower kernel: print_req_error: critical medium error, dev sdb, sector 1956768680
     Jan 4 20:05:55 Tower kernel: sd 5:0:0:0: [sdb] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Jan 4 20:05:55 Tower kernel: sd 5:0:0:0: [sdb] tag#0 Sense Key : 0x3 [current]
     Jan 4 20:05:55 Tower kernel: sd 5:0:0:0: [sdb] tag#0 ASC=0x11 ASCQ=0x0
     Jan 4 20:05:55 Tower kernel: sd 5:0:0:0: [sdb] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 74 a2 7e 38 00 00 01 e0 00 00
     Jan 4 20:05:55 Tower kernel: print_req_error: critical medium error, dev sdb, sector 1956806472
     Jan 4 20:07:08 Tower kernel: sd 5:0:0:0: [sdb] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Jan 4 20:07:08 Tower kernel: sd 5:0:0:0: [sdb] tag#0 Sense Key : 0x3 [current]
     Jan 4 20:07:08 Tower kernel: sd 5:0:0:0: [sdb] tag#0 ASC=0x11 ASCQ=0x0
     Jan 4 20:07:08 Tower kernel: sd 5:0:0:0: [sdb] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 74 a2 8d 78 00 00 01 20 00 00
     Jan 4 20:07:08 Tower kernel: print_req_error: critical medium error, dev sdb, sector 1956810232

     I'm getting the above errors on a disk. Is it a disk issue or a slot issue? It's a Dell server with 12 drive bays. Should I move the drive to another slot, or is it a drive problem? Logs attached. Thanks. tower-diagnostics-20200104-2140.zip
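Sense key 0x3 with ASC 0x11 decodes as a medium error (unrecovered read), which points at the disk surface rather than the slot or backplane. The drive's own SMART counters travel with the drive, so they make a quick cross-check. A sketch, assuming smartmontools is available (/dev/sdb is the device from the log):

```shell
#!/bin/sh
# Print overall health plus the sector-defect attributes that indicate
# a failing disk (these follow the drive, not the slot it sits in).
DEV="${1:-/dev/sdb}"
if command -v smartctl >/dev/null 2>&1; then
    smartctl -H -A "$DEV" 2>/dev/null \
        | grep -Ei 'overall-health|Reallocated|Pending|Uncorrect' || true
    RAN=yes
else
    RAN=no
fi
echo "smartctl run: $RAN"
```

If the reallocated or pending sector counts are climbing, moving the drive to another bay won't help.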