DekChi Posted April 1, 2019

Hi, newbie here. I just started using Unraid a couple of days ago. I was able to install a couple of Docker containers through Community Applications without issue. Then something started loading down the system; I noticed it on the dashboard, went to the terminal, and ran top. I believe it is the shfs process. I tried to install another container while that process was taking about 50% of my CPU, and the installation never finished. I figured I would let the process finish first and then install again, but the installation still did not finish. I also tried restarting the NAS, and that did not help. Any suggestions? Any help would be greatly appreciated. I have included the diagnostics; please let me know if you need any other information.

The error is:

/usr/bin/docker: Error response from daemon: Failed to create btrfs snapshot: read-only file system.
See '/usr/bin/docker run --help'.

Thanks in advance!

nas-diagnostics-20190331-2002.zip
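As a quick sanity check on which process is eating the CPU, a non-interactive snapshot of top (or a targeted ps) avoids having to eyeball the live display. This is a generic sketch, not an unRAID-specific tool; the shfs process name comes from the post above.

```shell
# One batch-mode snapshot of the busiest processes (no interactive screen).
top -b -n 1 | head -n 15

# Or report on shfs specifically; prints a fallback line if it is not running.
ps -C shfs -o pid,pcpu,pmem,args 2>/dev/null || echo "shfs not running"
```

If shfs shows sustained high CPU, it is usually a symptom (the user-share filesystem struggling against I/O errors underneath it) rather than the root cause.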
trurl Posted April 1, 2019

You have a corrupt docker image. Disable, delete, and recreate it (Settings - Docker), then you can reinstall your containers using the Previous Apps feature on the Apps page.
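For reference, the delete step can also be done from the terminal. The image path below is the common default and is an assumption here; check the "Docker vDisk location" under Settings - Docker on your own system before running anything.

```shell
# Assumed default location of the Docker image on unRAID -- verify the actual
# path under Settings - Docker before deleting anything.
DOCKER_IMG="/mnt/user/system/docker/docker.img"

# With the Docker service stopped in the webGUI, remove the corrupt image;
# unRAID creates a fresh one when the service is re-enabled.
rm -f "$DOCKER_IMG" && echo "removed (or already absent): $DOCKER_IMG"
```

Nothing of value lives only inside docker.img: container settings are kept in the templates, so Previous Apps restores everything after the image is recreated.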
DekChi Posted April 1, 2019 (Author)

Thanks for the advice. How do I know which one is corrupted? I tried to remove one of my containers and it showed:

Execution error
Server error

I clicked on the log and nothing showed up.
JorgeB Posted April 1, 2019

6 hours ago, DekChi said:
How do i know which one is corrupted?

You delete the image for all dockers: https://forums.unraid.net/topic/57181-real-docker-faq/?do=findComment&comment=564309

Also, the corruption was the result of issues with the cache drive, so start by replacing the cables:

Mar 31 19:37:12 NAS kernel: ata8.00: failed command: WRITE FPDMA QUEUED
Mar 31 19:37:12 NAS kernel: ata8.00: cmd 61/20:f8:c0:48:25/00:00:00:00:00/40 tag 31 ncq dma 16384 out
Mar 31 19:37:12 NAS kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Mar 31 19:37:12 NAS kernel: ata8.00: status: { DRDY }
Mar 31 19:37:12 NAS kernel: ata8: hard resetting link
Mar 31 19:37:15 NAS kernel: ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Mar 31 19:37:15 NAS kernel: ata8.00: configured for UDMA/133
Mar 31 19:37:15 NAS kernel: scsi_io_completion: 4 callbacks suppressed
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#20 CDB: opcode=0x2a 2a 00 00 25 30 a0 00 06 00 00
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2437280
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#19 CDB: opcode=0x2a 2a 00 00 25 26 a0 00 0a 00 00
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2434720
Mar 31 19:37:15 NAS kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 28, rd 0, flush 0, corrupt 0, gen 0
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#18 CDB: opcode=0x2a 2a 00 00 25 20 a0 00 06 00 00
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2433184
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#17 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x06
Mar 31 19:37:15 NAS kernel: sd 8:0:0:0: [sdf] tag#17 CDB: opcode=0x2a 2a 00 00 25 16 a0 00 0a 00 00
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2430624
Mar 31 19:37:15 NAS kernel: BTRFS error (device sdf1): bdev /dev/sdf1 errs: wr 29, rd 0, flush 0, corrupt 0, gen 0
Mar 31 19:37:15 NAS kernel: ata8: EH complete
Mar 31 19:37:39 NAS kernel: print_req_error: I/O error, dev loop2, sector 0
Mar 31 19:37:39 NAS kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 2, corrupt 0, gen 0
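To gauge how widespread the errors are, a syslog excerpt like the one above can be tallied per device with a short script. The sample lines below are copied from the posted log just to keep the sketch self-contained; on the live system you would point awk at /var/log/syslog instead.

```shell
# Sample lines taken from the log above so this sketch runs standalone.
cat > /tmp/syslog_excerpt.txt <<'EOF'
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2437280
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2434720
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2433184
Mar 31 19:37:15 NAS kernel: print_req_error: I/O error, dev sdf, sector 2430624
Mar 31 19:37:39 NAS kernel: print_req_error: I/O error, dev loop2, sector 0
EOF

# Tally I/O errors per device: grab the word after "dev", strip the trailing
# comma, then count occurrences.
awk '/print_req_error: I\/O error/ {
    for (i = 1; i <= NF; i++)
        if ($i == "dev") { d = $(i + 1); gsub(/,/, "", d); print d }
}' /tmp/syslog_excerpt.txt | sort | uniq -c
```

Errors concentrated on the cache device (sdf here, plus loop2, which is the docker image mounted on top of it) rather than spread across drives point at that one drive's cabling or the drive itself. On the live system, `btrfs dev stats /mnt/cache` shows the filesystem's own error counters; the write-error count should stop increasing once the cables are replaced.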