Everything posted by spamalam

  1. Here are the latest diagnostics, can anyone help? The curious lines are below: shfs is reported as "Not tainted", yet many files have disappeared.
{code}
[64557.473458] XFS (md2): Internal error xfs_trans_cancel at line 983 of file fs/xfs/xfs_trans.c. Caller xfs_create+0x3a7/0x467
[64557.478616] CPU: 1 PID: 20269 Comm: shfs Not tainted 4.9.30-unRAID #1
{code}
beyonder-nas-diagnostics-20171215-2202.zip
  2. No dice, the disk went again... Unraid doesn't care and pretends everything is fine:
{code}
root@beyonder-nas:~# ls -l /mnt/disk{1..15}
/bin/ls: cannot access '/mnt/disk2': Input/output error
{code}
Is this really the correct behaviour for a NAS product?
{code}
[64557.473458] XFS (md2): Internal error xfs_trans_cancel at line 983 of file fs/xfs/xfs_trans.c. Caller xfs_create+0x3a7/0x467
[64557.478616] CPU: 1 PID: 20269 Comm: shfs Not tainted 4.9.30-unRAID #1
[64557.478618] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To Be Filled By O.E.M., BIOS P2.80 12/04/2014
[64557.478620] ffffc90003e27b68 ffffffff813a4a1b ffff88080ce5ed98 0000000000000000
[64557.478624] ffffc90003e27b80 ffffffff8129c98d ffffffff812a8f3b ffffc90003e27ba8
[64557.478628] ffffffff812b2f96 ffff88081c6d5000 0000000000000000 ffff8805913a2d00
[64557.478631] Call Trace:
[64557.478639] [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
[64557.478643] [<ffffffff8129c98d>] xfs_error_report+0x32/0x35
[64557.478646] [<ffffffff812a8f3b>] ? xfs_create+0x3a7/0x467
[64557.478649] [<ffffffff812b2f96>] xfs_trans_cancel+0x49/0xbf
[64557.478651] [<ffffffff812a8f3b>] xfs_create+0x3a7/0x467
[64557.478654] [<ffffffff812a6a46>] xfs_generic_create+0xae/0x24b
[64557.478658] [<ffffffff81054110>] ? capable_wrt_inode_uidgid+0x3a/0x47
[64557.478660] [<ffffffff812a6c08>] xfs_vn_mknod+0xf/0x11
[64557.478662] [<ffffffff812a6c2b>] xfs_vn_create+0xe/0x10
[64557.478666] [<ffffffff8112df93>] path_openat+0x7c5/0xca8
[64557.478669] [<ffffffff8112ce40>] ? filename_parentat+0xd4/0xef
[64557.478671] [<ffffffff8112e4be>] do_filp_open+0x48/0x9e
[64557.478675] [<ffffffff811352ca>] ? d_lookup+0x29/0x3d
[64557.478678] [<ffffffff8113b6c4>] ? mntput_no_expire+0x27/0x17a
[64557.478681] [<ffffffff81120a71>] do_sys_open+0x137/0x1c6
[64557.478683] [<ffffffff81120a71>] ? do_sys_open+0x137/0x1c6
[64557.478686] [<ffffffff8112e7d9>] ? SyS_mkdirat+0x3a/0xac
[64557.478688] [<ffffffff81120b19>] SyS_open+0x19/0x1b
[64557.478692] [<ffffffff8167f537>] entry_SYSCALL_64_fastpath+0x1a/0xa9
[64557.478697] XFS (md2): xfs_do_force_shutdown(0x8) called from line 984 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff812b2faf
[64557.757668] XFS (md2): Corruption of in-memory data detected. Shutting down filesystem
[64557.759839] XFS (md2): Please umount the filesystem and rectify the problem(s)
[64557.762566] XFS (md2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
{code}
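In the meantime I'm thinking of something crude so I at least get notified the next time XFS shuts a disk down, since the GUI stays green. Untested sketch; the dynamix notify script path and its flags are what I believe ships with 6.x:
{code}
#!/bin/bash
# Crude watchdog: raise an Unraid alert if the kernel log shows an XFS
# forced shutdown, since the array GUI keeps showing green when this happens.
if dmesg | grep -qE 'XFS \(md[0-9]+\).*(Corruption of in-memory data|xfs_do_force_shutdown)'; then
  /usr/local/emhttp/plugins/dynamix/scripts/notify -i alert \
    -s "XFS shutdown detected" -d "Check dmesg: an XFS filesystem has shut itself down"
fi
{code}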
  3. My hardware NAS has filesystem integrity checks, but surely even a simple read/write test could catch this? Wouldn't it be wise for Unraid to be doing something similar? The I/O errors that shfs was hitting behind the scenes must have been there already, right?
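Something as simple as this, run on a schedule, is roughly what I have in mind (untested sketch; the disk numbers match my layout):
{code}
#!/bin/bash
# Naive per-disk probe: list the root of each mounted array disk and try a
# tiny write/delete; an Input/output error here would have flagged my disk2.
for d in /mnt/disk{1..15}; do
  [ -d "$d" ] || continue
  if ! ls "$d" >/dev/null 2>&1 || ! touch "$d/.probe" 2>/dev/null || ! rm -f "$d/.probe"; then
    echo "WARNING: $d failed the read/write probe"
  fi
done
{code}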
  4. ...and it passed a second check! Thank you very much! I will also use unbalance to shuffle the files around a bit, and start a parity check/rebuild once I know it's all okay. Okay, so XFS corrupted. Let's assume the corruption came from an earlier unclean reboot and stayed hidden until the drive got close to full. I'll be adding filesystem checks into the reboot procedure in case I lose power again (sketch below). I still have a really big concern with Unraid about this whole thing, though: Unraid didn't care that a filesystem had corrupted, and presented all the shares, minus the files. My expectation is that the array should have been taken offline or paused. This actually caused data loss in an application, and I think it is potentially catastrophic. Is this something the Unraid team is aware of? Is there a fix? I don't want this to recur; I was lucky a client was offline at the time.
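For the reboot-procedure check mentioned above, this is roughly what I'm planning (a sketch, assuming the array is started in maintenance mode so the md devices exist but nothing is mounted):
{code}
#!/bin/bash
# Dry-run XFS check of every array device after an unclean shutdown.
# xfs_repair -n only reports problems, it does not modify anything.
for dev in /dev/md*; do
  echo "== checking $dev =="
  xfs_repair -n "$dev" || echo "$dev reported problems (exit status $?)"
done
{code}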
  5. Seems to have gone through:
{code}
xfs_repair status:

Phase 1 - find and verify superblock...
        - block cache size set to 3020352 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2870652 tail block 2870652
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0xe8e05f31/0x200
fllast 118 in agf 2 too large (max = 118)
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Thu Dec 14 09:13:31 2017

Phase           Start           End             Duration
Phase 1:        12/14 09:07:57  12/14 09:07:57
Phase 2:        12/14 09:07:57  12/14 09:08:00  3 seconds
Phase 3:        12/14 09:08:00  12/14 09:10:30  2 minutes, 30 seconds
Phase 4:        12/14 09:10:30  12/14 09:10:31  1 second
Phase 5:        12/14 09:10:31  12/14 09:10:31
Phase 6:        12/14 09:10:31  12/14 09:12:54  2 minutes, 23 seconds
Phase 7:        12/14 09:12:54  12/14 09:12:54

Total run time: 4 minutes, 57 seconds
done
{code}
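Since the repair mentions moving disconnected inodes to lost+found, I'll check what ended up there before re-balancing (assuming disk2 is the repaired disk):
{code}
ls -la /mnt/disk2/lost+found 2>/dev/null | head
{code}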
  6. From the filesystem check it does look like there's something up with XFS:
{code}
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0xe8e05f31/0x200
fllast 118 in agf 2 too large (max = 118)
agf 113 freelist blocks bad, skipping freelist scan
sb_fdblocks 739441, counted 739435
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
{code}
  7. Attaching diagnostics. Stopping the array and running a filesystem check. I lost data in this: because the disk returned input/output errors rather than red-balling, the /mnt/user shares simply saw files disappear. This is really dangerous, no? Surely if a disk is throwing input/output errors it should stop the array? The application Syncthing saw this as deleted files, and then wiped all the clients. I lost files, encryption keys, etc. Luckily I had a client offline and was able to restore, but I think this is rather concerning and might be worth treating as a bug? beyonder-nas-diagnostics-20171214-0851.zip
  8. XFS crashed first; the reboot I forced yesterday came only after files had disappeared. Unraid had been running for three weeks before that, and the previous shutdown was a clean one. Can filesystem corruption hide like that for months? I think the last unclean shutdown was back in September. XFS has crashed again today, after recovering on the last boot. I've just forced another reboot and will attach the diagnostics when it comes back up. Will post ASAP. Thanks
  9. {code}
root@beyonder-nas:/tmp# cd /mnt/disk2
root@beyonder-nas:/mnt/disk2# ls -l
/bin/ls: cannot open directory '.': Input/output error
{code}
And from SMART:
{code}
#   Attribute Name            Flag    Value Worst Threshold Type     Updated Failed Raw Value
1   Raw read error rate       0x002f  200   200   051       Pre-fail Always  Never  0
3   Spin up time              0x0027  184   179   021       Pre-fail Always  Never  7775
4   Start stop count          0x0032  096   096   000       Old age  Always  Never  4944
5   Reallocated sector count  0x0033  200   200   140       Pre-fail Always  Never  0
7   Seek error rate           0x002e  200   200   000       Old age  Always  Never  0
9   Power on hours            0x0032  059   059   000       Old age  Always  Never  30351 (3y, 5m, 15d, 15h)
10  Spin retry count          0x0032  100   100   000       Old age  Always  Never  0
11  Calibration retry count   0x0032  100   253   000       Old age  Always  Never  0
12  Power cycle count         0x0032  100   100   000       Old age  Always  Never  93
192 Power-off retract count   0x0032  200   200   000       Old age  Always  Never  23
193 Load cycle count          0x0032  196   196   000       Old age  Always  Never  12782
194 Temperature celsius       0x0022  123   079   000       Old age  Always  Never  29
196 Reallocated event count   0x0032  200   200   000       Old age  Always  Never  0
197 Current pending sector    0x0032  200   200   000       Old age  Always  Never  0
198 Offline uncorrectable     0x0030  100   253   000       Old age  Offline Never  0
199 UDMA CRC error count      0x0032  200   200   000       Old age  Always  Never  0
200 Multi zone error rate     0x0008  100   253   000       Old age  Offline Never  0
{code}
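For completeness, a short SMART self-test should help rule the drive hardware out (sdt is disk2 in my case; adjust the device as needed):
{code}
smartctl -t short /dev/sdt      # kick off a short self-test (takes a couple of minutes)
smartctl -l selftest /dev/sdt   # read the self-test log once it has finished
{code}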
  10. Hi, I'm having issues with Unraid. My second disk appears to be okay from a SMART perspective, but it is showing an input/output error and a high number of reads. I ran dmesg and got the following, which looks like XFS has crashed:
{code}
[58253.774556] XFS (md2): Internal error xfs_trans_cancel at line 983 of file fs/xfs/xfs_trans.c. Caller xfs_create+0x3a7/0x467
[58253.778728] CPU: 1 PID: 3942 Comm: shfs Not tainted 4.9.30-unRAID #1
[58253.778730] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To Be Filled By O.E.M., BIOS P2.80 12/04/2014
[58253.778732] ffffc900112fbb68 ffffffff813a4a1b ffff880185518828 0000000000000000
[58253.778736] ffffc900112fbb80 ffffffff8129c98d ffffffff812a8f3b ffffc900112fbba8
[58253.778739] ffffffff812b2f96 ffff88085b9cd000 0000000000000000 ffff88059460d640
[58253.778743] Call Trace:
[58253.778751] [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
[58253.778755] [<ffffffff8129c98d>] xfs_error_report+0x32/0x35
[58253.778758] [<ffffffff812a8f3b>] ? xfs_create+0x3a7/0x467
[58253.778760] [<ffffffff812b2f96>] xfs_trans_cancel+0x49/0xbf
[58253.778762] [<ffffffff812a8f3b>] xfs_create+0x3a7/0x467
[58253.778765] [<ffffffff812a6a46>] xfs_generic_create+0xae/0x24b
[58253.778768] [<ffffffff81054110>] ? capable_wrt_inode_uidgid+0x3a/0x47
[58253.778771] [<ffffffff812a6c08>] xfs_vn_mknod+0xf/0x11
[58253.778773] [<ffffffff812a6c2b>] xfs_vn_create+0xe/0x10
[58253.778776] [<ffffffff8112df93>] path_openat+0x7c5/0xca8
[58253.778779] [<ffffffff8112ce40>] ? filename_parentat+0xd4/0xef
[58253.778782] [<ffffffff8112e4be>] do_filp_open+0x48/0x9e
[58253.778785] [<ffffffff811352ca>] ? d_lookup+0x29/0x3d
[58253.778787] [<ffffffff8113b6c4>] ? mntput_no_expire+0x27/0x17a
[58253.778790] [<ffffffff81120a71>] do_sys_open+0x137/0x1c6
[58253.778793] [<ffffffff81120a71>] ? do_sys_open+0x137/0x1c6
[58253.778795] [<ffffffff8112e7d9>] ? SyS_mkdirat+0x3a/0xac
[58253.778798] [<ffffffff81120b19>] SyS_open+0x19/0x1b
[58253.778801] [<ffffffff8167f537>] entry_SYSCALL_64_fastpath+0x1a/0xa9
[58253.778805] XFS (md2): xfs_do_force_shutdown(0x8) called from line 984 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff812b2faf
[58253.889353] XFS (md2): Corruption of in-memory data detected. Shutting down filesystem
[58253.891506] XFS (md2): Please umount the filesystem and rectify the problem(s)
[58253.933206] XFS (md2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
{code}
The filesystem also appears to be full:
{code}
Disk 2  WDC_WD40EFRX-68WT0N0_WD-WCC4E1552391 - 4 TB (sdt)  29 C  743,666  72,958  0  xfs  4 TB  4 TB  36.9 KB
{code}
but I configured it to leave free space, so I'm not sure how it has managed to fill up to 36.9 KB remaining. I've attached the syslog and dmesg. This started yesterday when I left my house and travelled to the UK; I forced a reboot, XFS repaired itself, and now it has recurred. From an array perspective I am concerned because, instead of showing a red ball to say the drive is broken, files have disappeared from my array! logs.tar.gz
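To double-check whether the disk really is full (rather than XFS mis-reporting after the shutdown), I'd compare the filesystem view with actual usage. My understanding is that the minimum-free-space setting only steers where new files start, so a file that is already being written can still fill the disk:
{code}
df -h /mnt/disk2                                   # filesystem-level used/free space
du -sh /mnt/disk2/* 2>/dev/null | sort -h | tail   # largest top-level directories on the disk
{code}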
  11. I stop the array, then from the drop-down increase the slot count from 12 drives to 13. This triggers a page reload, emhttp immediately segfaults and crashes, and most of the time it will not restart:
{code}
Jun 23 21:13:24 aserver-nas kernel: emhttp[10932]: segfault at 0 ip 00002b2bad8b4527 sp 00007fff704a0830 error 4 in libc-2.17.so[2b2bad879000+1bf000]
{code}
syslog.txt
  12. No packages were being installed; it seems like it tried to install that 0-byte package and then bombed out. I could manually install everything else, but that perl package was no good. Maybe on an error the installpkg command exits rather than ignoring it and continuing?
  13. {code}
root@nas:~# unrar
-bash: unrar: command not found
{code}
I did some digging around, and it looks like this failed because the perl package is 0 bytes:
{code}
root@nas:/boot/config/plugins/NerdPack/packages/6.1# ls -l
total 34036
-rwxrwxrwx 1 root root   231684 Nov 19 23:51 apr-1.5.0-x86_64-1.txz*
-rwxrwxrwx 1 root root   124572 Nov 19 23:51 apr-util-1.5.3-x86_64-1.txz*
-rwxrwxrwx 1 root root    41644 Nov 19 23:50 bwm-ng-0.6-x86_64-1_SBo.txz*
-rwxrwxrwx 1 root root   182520 Nov 19 23:50 cpio-2.11-x86_64-2.txz*
-rwxrwxrwx 1 root root  4330056 Nov 19 23:51 git-2.3.5-x86_64-1.txz*
-rwxrwxrwx 1 root root    38604 Nov 19 23:50 iftop-1.0pre2-x86_64-1.txz*
-rwxrwxrwx 1 root root    43908 Nov 19 23:50 inotify-tools-3.14-x86_64-1.txz*
-rwxrwxrwx 1 root root    74188 Nov 19 23:51 iperf-3.0.11-x86_64-1_SBo.txz*
-rwxrwxrwx 1 root root  1147168 Nov 19 23:50 kbd-1.15.3-x86_64-2.txz*
-rwxrwxrwx 1 root root   729564 Nov 19 23:51 lftp-4.6.1-x86_64-1.txz*
-rwxrwxrwx 1 root root  1337912 Nov 19 23:50 lshw-B.02.17-x86_64-1_SBo_LT.txz*
-rwxrwxrwx 1 root root   207400 Nov 19 23:51 neon-0.29.6-x86_64-2.txz*
-rwxrwxrwx 1 root root  1150356 Nov 19 23:51 p7zip-9.38.1-x86_64-1tom.txz*
-rwxrwxrwx 1 root root        0 Jan 30 22:56 perl-5.22.0-x86_64-1.txz*
-rwxrwxrwx 1 root root 14210320 Nov 19 23:51 python-2.7.9-x86_64-1.txz*
-rwxrwxrwx 1 root root   343876 Nov 19 23:51 readline-6.3-x86_64-1.txz*
-rwxrwxrwx 1 root root   534436 Nov 19 23:50 screen-4.2.1-x86_64-1.txz*
-rwxrwxrwx 1 root root    54152 Nov 19 23:51 sshfs-fuse-2.5-x86_64-1sl.txz*
-rwxrwxrwx 1 root root   164496 Nov 19 23:50 strace-4.10-x86_64-1.txz*
-rwxrwxrwx 1 root root  3377472 Nov 19 23:51 subversion-1.7.16-x86_64-1.txz*
-rwxrwxrwx 1 root root   324924 Nov 19 23:50 unrar-5.2.5-x86_64-1_SBo.txz*
-rwxrwxrwx 1 root root    13912 Nov 19 23:50 utempter-1.1.5-x86_64-1.txz*
-rwxrwxrwx 1 root root  6138512 Jan 30 22:57 vim-7.4.898-x86_64-1.txz*
{code}
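A workaround that should get things going again (sketch; I'm assuming NerdPack re-downloads anything that's missing on the next apply):
{code}
# remove any truncated (zero-byte) packages so they are fetched again
find /boot/config/plugins/NerdPack/packages/6.1 -type f -size 0 -print -delete
{code}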
  14. Okay, so it's potentially an application action? This is specifically Kodi; do you know if there's an Unraid-friendly setting? On the RAM point: good call, my new build will have plenty of RAM so I will take a look at that.
  15. Hey, I find that when I am browsing a share, let's say the movies folder via Kodi, it spins up every drive containing the movies folder just to load the index. Is there any reason the file listing is not cached in memory, or at least on the cache disk, to avoid unnecessary disk spin-up? Wouldn't it be better to spin up a disk only when an actual file read is issued? Possibly I'm not understanding how Unraid works, but when I browse a movie share and then check the Unraid GUI, I see a green ball next to all the drives backing that share, which isn't what I was expecting until I perform a read.
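From what I've read, the usual answer is the cache_dirs script / Dynamix cache-directories plugin, which keeps re-walking the share directories so the metadata stays in RAM and browsing never has to touch the disks. A stripped-down illustration of the idea (the share paths are just examples):
{code}
#!/bin/bash
# Keep directory metadata for selected shares warm in the page cache so that
# browsing them (e.g. from Kodi) does not spin up the array disks.
while true; do
  find /mnt/user/Movies /mnt/user/TV -noleaf >/dev/null 2>&1
  sleep 60
done
{code}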
  16. Problems with BTRFS for me in this release, here's the kernel panic: BTRFS info (device loop0): no csum found for inode 1723 start 1994752 BTRFS info (device loop0): no csum found for inode 1723 start 1998848 BTRFS info (device loop0): no csum found for inode 1723 start 2002944 BTRFS info (device loop0): no csum found for inode 1723 start 2007040 BTRFS info (device loop0): no csum found for inode 1723 start 2011136 BTRFS info (device loop0): no csum found for inode 1723 start 2015232 BTRFS info (device loop0): no csum found for inode 1723 start 2019328 BTRFS info (device loop0): no csum found for inode 1723 start 2023424 BTRFS info (device loop0): no csum found for inode 1723 start 2027520 BTRFS info (device loop0): no csum found for inode 1723 start 2031616 BTRFS info (device loop0): no csum found for inode 1723 start 2035712 BTRFS info (device loop0): no csum found for inode 1723 start 2039808 BTRFS info (device loop0): no csum found for inode 1723 start 2043904 BTRFS info (device loop0): no csum found for inode 1723 start 2048000 BTRFS info (device loop0): no csum found for inode 1723 start 2052096 BTRFS info (device loop0): no csum found for inode 1723 start 2056192 BTRFS info (device loop0): no csum found for inode 1723 start 2060288 BTRFS info (device loop0): no csum found for inode 1723 start 2064384 BTRFS info (device loop0): no csum found for inode 1723 start 2068480 BTRFS info (device loop0): no csum found for inode 1723 start 2072576 BTRFS info (device loop0): no csum found for inode 1723 start 2076672 BTRFS info (device loop0): no csum found for inode 1723 start 2080768 BTRFS info (device loop0): no csum found for inode 1723 start 2084864 BTRFS info (device loop0): no csum found for inode 1723 start 2088960 BTRFS info (device loop0): no csum found for inode 1723 start 2093056 BTRFS info (device loop0): no csum found for inode 1723 start 167936 docker0: port 1(veth6a1399b) entered disabled state device veth6a1399b left promiscuous mode docker0: port 1(veth6a1399b) entered disabled state btrfs_dev_stat_print_on_error: 6652 callbacks suppressed BTRFS: bdev /dev/sdk1 errs: wr 7, rd 162176, flush 0, corrupt 0, gen 0 BTRFS: bdev /dev/sdk1 errs: wr 7, rd 162177, flush 0, corrupt 0, gen 0 BTRFS: bdev /dev/sdk1 errs: wr 7, rd 162178, flush 0, corrupt 0, gen 0 BUG: unable to handle kernel NULL pointer dereference at 0000000000000028 IP: [<ffffffff812a2b40>] __btrfs_abort_transaction+0x4d/0xff PGD 103f30067 PUD 103dfc067 PMD 0 Oops: 0000 [#6] SMP Modules linked in: veth xt_nat ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 nf_nat iptable_filter ip_tables md_mod i2c_i801 e1000e ptp ahci pps_core libahci CPU: 3 PID: 566 Comm: umount Tainted: G D W 3.18.5-unRAID #3 Hardware name: Supermicro X7SPA-HF/X7SPA-HF, BIOS 1.2 09/14/11 task: ffff88008d179020 ti: ffff880006e00000 task.ti: ffff880006e00000 RIP: 0010:[<ffffffff812a2b40>] [<ffffffff812a2b40>] __btrfs_abort_transaction+0x4d/0xff RSP: 0018:ffff880006e03c48 EFLAGS: 00010283 RAX: ffff88013961e000 RBX: 00000000fffffffb RCX: 0000000000000a9f RDX: ffffffff8163f1a0 RSI: ffff880006985000 RDI: 0000000000000000 RBP: ffff880006e03c78 R08: 00000000fffffffb R09: ffff8800a6730100 R10: 0000000000000000 R11: ffff880006985000 R12: ffff880006985000 R13: 0000000000000000 R14: ffffffff8163f1a0 R15: 0000000000000a9f FS: 00002b65d1deae00(0000) GS:ffff88013fd80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 0000000000000028 CR3: 0000000103dfe000 CR4: 00000000000007e0 Stack: 
ffff88013961e068 ffff880006985000 0000000000000000 ffff88013961e068 000000000063ef80 000000000063ef80 ffff880006e03cd8 ffffffff812f7bab 0000000000000003 ffff8801391dd000 0000000000000001 0000000000000000 Call Trace: [<ffffffff812f7bab>] free_log_tree+0x57/0xca [<ffffffff812f73a6>] ? join_running_log_trans+0x57/0x57 [<ffffffff812fd63b>] btrfs_free_log+0x17/0x26 [<ffffffff812c2a95>] btrfs_drop_and_free_fs_root+0x62/0x94 [<ffffffff812c2b8b>] btrfs_free_fs_roots+0xc4/0x102 [<ffffffff815fa86c>] ? wait_for_completion+0x18/0x1a [<ffffffff812c43cb>] close_ctree+0x1d0/0x29a [<ffffffff8110ce83>] ? evict_inodes+0xef/0xfe [<ffffffff812a1e9e>] btrfs_put_super+0x14/0x16 [<ffffffff810f91af>] generic_shutdown_super+0x6e/0xea [<ffffffff810f9429>] kill_anon_super+0xe/0x19 [<ffffffff812a1c7a>] btrfs_kill_super+0x13/0x8f [<ffffffff810f971d>] deactivate_locked_super+0x3b/0x50 [<ffffffff810f9bed>] deactivate_super+0x3a/0x3e [<ffffffff8110f704>] cleanup_mnt+0x54/0x74 [<ffffffff8110f75a>] __cleanup_mnt+0xd/0xf [<ffffffff810562f4>] task_work_run+0x7e/0x96 [<ffffffff8100ab4c>] do_notify_resume+0x55/0x66 [<ffffffff815fd420>] int_signal+0x12/0x17 Code: 01 00 00 f0 0f ba a8 28 0d 00 00 02 72 1d 48 c7 c2 8b a3 76 81 44 89 c1 be 04 01 00 00 48 c7 c7 1c a1 76 81 31 c0 e8 e5 f0 d9 ff <49> 83 7d 28 00 66 41 89 5d 50 75 35 83 c3 1e 49 c7 c0 fb dd 74 RIP [<ffffffff812a2b40>] __btrfs_abort_transaction+0x4d/0xff RSP <ffff880006e03c48> CR2: 0000000000000028 ---[ end trace cfa7766fd11243cd ]--- program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO docker0: port 5(veth5653070) entered disabled state device veth5653070 left promiscuous mode docker0: port 5(veth5653070) entered disabled state edit and syslog syslog.tar.gz
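The per-device error counters for the pool can also be dumped with btrfs itself, which should confirm whether sdk is the one racking up the read errors (assuming the pool is mounted at /mnt/cache):
{code}
btrfs device stats /mnt/cache
{code}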
  17. This release has been a disaster: I am getting constant kernel crashes in btrfs, and Docker is pinned at 100% CPU. I know it's a beta, but this is extremely dangerous for a live system. Stick with beta-12! This one is 'alpha' quality at best. Attempting to downgrade now.
  18. Are you using needo37's Docker? https://github.com/needo37/sickrage/blob/master/Dockerfile The Dockerfile tries to wget https://codeload.github.com/SickragePVR/SickRage/tar.gz/release_0.2.1, which does not exist, so it falls back to git for the updates. There looks to be an open bug on the GitHub repo for the Docker image. I also found the GUI would not load when I restarted the container, but I've started it again, the GUI loads and it's currently indexing. I will try restarting and see if I can reproduce it. If so, it might be some issue with the versions it's pulling down, although at the moment I'm not sure how.
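Quick way to confirm the release tarball really is gone (same URL the Dockerfile tries to fetch):
{code}
curl -sI https://codeload.github.com/SickragePVR/SickRage/tar.gz/release_0.2.1 | head -n 1
# a 404 here means the wget inside the container can never succeed
{code}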