Skipdog

Everything posted by Skipdog

  1. UNRAID Ver: 6.9.1
     PreClear ver: 2021.01.03
     Server type: Supermicro CS836, LSI SAS2308 controller

     I noticed this in the fix notes for 2020.12.22: "Fix: script failing during drive zeroing". I've purchased several brand-new external WD 8TB drives. Each of the drives runs through its pre-reads just fine, then toward the very end of zeroing it reports a failure. I've tried different slots and the behavior is the same. I've pulled the drive out of the server and put it back into the original enclosure to run pre-clear on another system. Should I generate a full diag next time, or attach the pre-clear log? I am wondering whether I'm hitting the bug behavior or just a hardware issue triggering the pre-clear failure. Skip
     Edit: attached diag, but the syslog looks like it is missing data. tower-diagnostics-20210316-1218.zip
  2. System is a Supermicro X9DRH with dual E5-2670 v2s and 256 GB of RAM. The server has been super stable on 6.8, with uptimes reaching 150+ days. I could not leave well enough alone and upgraded to 6.9 RC2, in part to get the new graphics plugin working with my Quadro card. I've enabled syslog now and I'm dumping it to disk. Are there any other diagnostics I can turn on to catch another potential crash? The parity check auto-ran and found roughly 8000 errors, which is expected since it shut down dirty. I see no errors on the drives currently. Thanks, Skip
  3. Walter: epic fail on my part! Thank you! I had thought it installed all of the tools by default! Skip
  4. Apologies, I did search for a solution! I've installed and reinstalled the Nerd Pack, but when I attempt to execute iperf3 it can't be found. I've looked in various directories. What rookie mistake am I making? Skip
  5. Hi there. I am trying to use Krusader to copy between a local UNRAID volume and an SMB network share. The SMB share connects fine, but when I go to copy files/directories I get the error "The file or folder smb://user@xx.xx.xx.xx/backup/folder/subfolder/.@_thumb does not exist." Any ideas what it is unhappy about? Skip
  6. Talked to a friend who has the same drives in an enterprise QNAP; he started to have the same problem at the same time. We are wondering whether something in the Linux kernel/driver set was updated that is causing issues with the Seagate Constellation series. I'm also checking firmware to see if that has any impact.
  7. Yes, I moved the Seagates to another Supermicro that effectively uses the onboard SATA; they are producing the errors in the text I pasted. The drives aren't throwing bad sectors, but maybe the problems are mechanical in nature.
  8. I should also add I've got 7x WDC_WD80EMAZ drives that have given me zero problems since they were installed. On the spare Supermicro they were indeed on the onboard SATA; there are no HBAs installed in that server. The primary Supermicro uses a SAS 9207-8i type card.
  9. Still struggling with this, as just the Seagate 8TB EXOS drives continue to have problems. I have another Supermicro, and I'm seeing these Seagate drives also produce problems in it. Completely different chassis, HBA, cables. My system failed to rebuild parity on one of the drives just now, and then after a reboot, the drive that did rebuild correctly decided to drop out. I didn't reboot this time and captured diags. Here is a snippet of the logs from the drive when it happened:

     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] 4096-byte physical blocks
     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] Write Protect is off
     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] Mode Sense: 7f 00 10 08
     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
     Aug 12 10:43:12 Tower kernel: sdk: sdk1
     Aug 12 10:43:12 Tower kernel: sd 7:0:7:0: [sdk] Attached SCSI disk
     Aug 12 10:43:12 Tower kernel: BTRFS: device fsid e2a97d07-5b8c-4f40-9733-01c1060204f8 devid 1 transid 387385 /dev/sdk1
     Aug 12 10:43:38 Tower emhttpd: ST8000NM0055-1RM112_ZA105CRR (sdk) 512 15628053168
     Aug 12 10:43:38 Tower kernel: mdcmd (1): import 0 sdk 64 7814026532 0 ST8000NM0055-1RM112_ZA105CRR
     Aug 12 10:43:38 Tower kernel: md: import disk0: (sdk) ST8000NM0055-1RM112_ZA105CRR size: 7814026532
     Aug 12 10:43:41 Tower emhttpd: shcmd (25): /usr/local/sbin/set_ncq sdk 1
     Aug 12 10:43:41 Tower root: set_ncq: setting sdk queue_depth to 1
     Aug 12 10:43:41 Tower emhttpd: shcmd (26): echo 128 > /sys/block/sdk/queue/nr_requests
     Aug 12 10:46:54 Tower kernel: sd 7:0:7:0: [sdk] tag#7 CDB: opcode=0x8a 8a 00 00 00 00 00 01 51 c2 c0 00 00 00 40 00 00
     Aug 12 10:46:57 Tower kernel: sd 7:0:7:0: [sdk] Synchronizing SCSI cache
     Aug 12 10:46:58 Tower kernel: sd 7:0:7:0: [sdk] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
     Aug 12 10:46:58 Tower kernel: sd 7:0:7:0: [sdk] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 01 51 c2 c0 00 00 00 40 00 00
     Aug 12 10:46:58 Tower kernel: print_req_error: I/O error, dev sdk, sector 22135488
     Aug 12 10:46:58 Tower kernel: sd 7:0:7:0: [sdk] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
     Aug 12 10:46:58 Tower kernel: sd 7:0:7:0: [sdk] tag#0 CDB: opcode=0x8a 8a 00 00 00 00 00 70 27 f9 48 00 00 00 08 00 00
     Aug 12 10:46:58 Tower kernel: print_req_error: I/O error, dev sdk, sector 1881667912
     Aug 12 10:46:58 Tower kernel: sd 7:0:7:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00

     Skip
     tower-diagnostics-20190812-1452.zip
  10. OK, so pre-clear on two drives failed at different times. One looks like it failed during the read phase; the other I'm not sure about. It looks like I/O errors. The drives are in sleds in the Supermicro. Diags uploaded. Skip tower-diagnostics-20190801-1256.zip
  11. OK, so yes, I think I made some mistakes on this one. When something like this happens, should I generate a diag, power down completely, and then see what comes up after a reboot? When this happened I did notice that the disks were spun down. I've got two disks pre-clearing right now. Thanks for the pointer to UD.
  12. Parity finished rebuilding. I'm actually not sure whether I lost data or not. Drive 8 is available but no longer in the array configuration, and UNRAID is prompting me that I can format it. It almost seems like several drives dropped out around the same time. I took one of the three out and have been testing it offline; I can find no SMART errors. Is it a Supermicro backplane issue that caused the disconnects, or are the drives really flaking out? Two of the drives are running pre-clear again; one of them failed, but I was able to restart the job. None of the Western Digital drives experienced any issues during this. Another random question: what is a safe temperature for the drives in the Supermicro? I'm not hitting the warning for the most part -- usually 100-108 F. Attaching diag file. Thank you kindly. Skip
  13. Need some array/disk advice. The system is running 6.7.2: 2 x 8TB parity, 12 x 8TB data drives running btrfs, Supermicro X9DRH-7TF, dual Xeon E5-2670 v2s. The adventure started when one of the two parity drives started throwing errors. Is it normal that a drive spins down when it encounters errors, or does this happen as a result of read errors? I shut the system down and replaced the drive with a spare 8TB I keep for this purpose. I started the parity rebuild, and during the rebuild one of the 8TB data drives ran into 2560 errors and also dismounted (red X, and it says it's spun down). Should I finish the parity rebuild? Should I dismount the array, power off, and remove power in order to try to get Disk 8 to spin back up? The parity rebuild is at 58% with approximately 7 hours remaining. Skip
  14. If I change to bridge mode, the docker won't be able to proxy for my "host" dockers, correct? Connectivity is what I am trying to deal with: I've got SONARR/RADARR/SAB in host network mode, and when I ran in bridge I could no longer connect. Would the solution be to run everything in bridge?
  15. Thank you, Djoss, for the project; it is really nice. So I'm trying to place the docker on the host network, but I'm running into an issue where the Tautulli docker is already on 8181. Is there a way to force Nginx Proxy Manager to use an alternate container port? It doesn't seem like there is, unless I'm totally missing it during installation. Clicking edit on the various lines shows 8181 seems to be hard-coded. Skip
  16. Thank you! I think the botched first install did not include the user/password configuration. Removing the directory fully and installing again fixed the issue. Many thanks! Skip
  17. Sorry for the noob question. I just deployed this docker, using br0 with a static IP address, and I'm able to hit the website. Trying to use the default login/password, I get "no relevant user found". What could I be doing wrong? I've tried to deploy the container twice. Skip
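
The end-of-zeroing failure in post 1 can be cross-checked outside the preclear script. A minimal sketch, using a temp file as a stand-in for the drive; against a real disk you would point it at /dev/sdX, and the zeroing step there is destructive:

```shell
# Returns 0 if the given region of FILE reads back as all zero bytes.
check_zero_region() {   # usage: check_zero_region FILE SKIP_MB COUNT_MB
    dd if="$1" bs=1M skip="$2" count="$3" status=none \
        | od -An -v -tx1 | grep -qv '^\( 00\)*$' && return 1
    return 0
}

# Demo on a temp file standing in for the drive (NOT a real device)
DEV=$(mktemp)
dd if=/dev/urandom of="$DEV" bs=1M count=8 status=none              # old data
dd if=/dev/zero of="$DEV" bs=1M seek=4 count=4 conv=notrunc status=none  # zero the tail
check_zero_region "$DEV" 4 4 && echo "tail is clean" || echo "non-zero bytes found"
rm -f "$DEV"
```

If a drive passes a manual read-back like this but preclear still reports a failure at the same point, that would lean toward the script bug rather than hardware.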
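
For the crash-logging question in post 2, one low-tech option besides the built-in syslog server is to follow the live log into a file on persistent storage. A sketch; the paths in the usage comment are hypothetical examples, not Unraid defaults:

```shell
# Follow SRC (including existing lines) into DEST and print the follower's
# PID so it can be killed later. -F keeps following across log rotation.
mirror_log() {   # usage: mirror_log SRC DEST
    nohup tail -n +1 -F "$1" >> "$2" 2>/dev/null &
    echo $!
}
# e.g. mirror_log /var/log/syslog /mnt/user/system/syslog-tower.txt
# (/mnt/user/system is a hypothetical share name; adjust to your setup)
```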
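
For the missing iperf3 in post 4, a quick sketch to tell "not installed" apart from "installed but not on PATH"; the directory list is the usual Slackware layout and is an assumption here:

```shell
# Report where a binary lives, or whether it exists at all.
find_bin() {   # usage: find_bin NAME
    command -v "$1" && return 0          # already resolvable via PATH
    for d in /usr/bin /usr/sbin /usr/local/bin /usr/local/sbin /bin /sbin; do
        [ -x "$d/$1" ] && { echo "$d/$1 (present but not on PATH)"; return 0; }
    done
    echo "$1: not found on disk" >&2
    return 1
}
# find_bin iperf3
```

If the binary is missing from disk entirely, the package likely never extracted, which points back at the plugin install rather than the shell environment.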
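
On the temperature question in post 12: the reported 100-108 F works out to roughly 38-42 C, which is within the operating range most drive vendors specify. A quick conversion sketch:

```shell
# Fahrenheit to Celsius, one decimal place
f_to_c() { awk -v f="$1" 'BEGIN { printf "%.1f\n", (f - 32) * 5 / 9 }'; }
f_to_c 100   # 37.8
f_to_c 108   # 42.2
```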
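
On the port clash in post 15: in Docker bridge mode the host side of a port mapping is free to change while the container keeps its hard-coded internal 8181. A sketch of just the mapping; 18181 is an arbitrary free host port, and the full docker run line would carry the template's other options:

```shell
HOST_PORT=18181        # any free host port (18181 is an arbitrary choice)
CT_PORT=8181           # NPM's internal web UI port, per the thread above
printf -- '-p %s:%s\n' "$HOST_PORT" "$CT_PORT"
# With this mapping, the UI would be reached on http://SERVER_IP:18181
```

In host network mode no remapping is possible, which is why the conflict with Tautulli appears there.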