doesntaffect

Members
  • Posts: 188
  • Days Won: 1

Everything posted by doesntaffect

  1. -- adjusted the title to better reflect the issue
     Edit: The issue below seems to be caused by the Photoprism app / container.
     Hi folks, for the past few hours my system log has been full of these entries:
     Nov 29 21:18:08 Ryzen kernel: eth0: renamed from veth49cafcb
     Nov 29 21:18:10 Ryzen kernel: veth49cafcb: renamed from eth0
     Nov 29 21:19:11 Ryzen kernel: eth0: renamed from veth0124e07
     Nov 29 21:19:13 Ryzen kernel: veth0124e07: renamed from eth0
     Nov 29 21:20:15 Ryzen kernel: eth0: renamed from vethb387d3f
     Nov 29 21:20:17 Ryzen kernel: vethb387d3f: renamed from eth0
     Nov 29 21:21:18 Ryzen kernel: eth0: renamed from vethb1a6ac2
     Nov 29 21:21:20 Ryzen kernel: vethb1a6ac2: renamed from eth0
     Nov 29 21:22:22 Ryzen kernel: eth0: renamed from vetha2266bd
     Nov 29 21:22:24 Ryzen kernel: vetha2266bd: renamed from eth0
     Nov 29 21:23:25 Ryzen kernel: eth0: renamed from veth9335967
     Nov 29 21:23:28 Ryzen kernel: veth9335967: renamed from eth0
     Nov 29 21:24:05 Ryzen kernel: vethd93d4cd: renamed from eth0
     How can I troubleshoot this? How can I get the full system log from the last 24 hours on Unraid? Not sure if this is related, but I have a Pi-hole instance which I had to stop today since it seemed to be broken; however, the renaming kept going for hours and hours. I stopped the Heimdall container and it seems the renaming has stopped now. Can the renaming have an effect on Pi-hole? Any advice on how to pin this down? Thanks!
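     The rename pairs repeat roughly every 60 seconds, which is the pattern of a container being restarted in a loop (each container start/stop creates and renames a veth pair). A minimal sketch for spotting the cadence and the culprit, assuming a stock Unraid shell with Docker; nothing here is specific to this particular system:

        # count rename events per minute to confirm a fixed restart cadence
        grep 'renamed from eth0' /var/log/syslog | awk '{print $1, $2, substr($3, 1, 5)}' | uniq -c

        # stream container lifecycle events live; a looping container shows repeated die/start pairs
        docker events --filter 'type=container' --format '{{.Time}} {{.Actor.Attributes.name}} {{.Action}}'

     On Unraid the live log is /var/log/syslog (held in RAM), so the last 24 hours are in there as long as the server has not rebooted since.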
  2. I am fairly new to Unraid and want to put Authelia in front of my Nextcloud / Heimdall. What I have understood so far is that https://github.com/ibracorp/authelia.xml/blob/master/authelia.xml is meant as a Docker template; please correct me if that's not the case. My question is: how do I set up an Authelia container based on this template? In the CA "Apps" I only see the official Authelia container for download. Thanks for any advice!
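     For reference, Unraid reads user templates from the flash drive, so one way to use that XML is to drop it into the dockerMan user-templates folder; a sketch (the path is the standard location on current Unraid builds, but verify it on your install):

        # download the template to the user-templates folder on the flash drive
        wget -O /boot/config/plugins/dockerMan/templates-user/authelia.xml \
          https://raw.githubusercontent.com/ibracorp/authelia.xml/master/authelia.xml

     After that, Docker > Add Container should list the authelia template in the template dropdown.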
  3. Thanks guys, I swapped the SATA cable and will keep monitoring it.
  4. Hi folks, I'm still learning how to manage Unraid as I move towards the end of my trial period, and so far I have received 4 CRC errors on one of my disks. Today the 4th error was reported and I am curious whether I should worry. The disks are brand new, all of the same type; the errors occur on my main disk. From the logs below I understand that something is going wrong on the disk bus. I am using WD Red 4TB NAS disks. Any advice? So far I am only acknowledging the errors. I also checked the SATA cables. Would ECC memory help here? (A way to verify a fix is sketched after the log.)
     Nov 27 01:58:39 Ryzen kernel: ata2.00: failed command: WRITE FPDMA QUEUED
     Nov 27 01:58:39 Ryzen kernel: ata2.00: cmd 61/20:a8:10:89:2a/01:00:e9:00:00/40 tag 21 ncq dma 147456 out
     Nov 27 01:58:39 Ryzen kernel: res 40/00:a0:10:89:2a/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 01:58:39 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 01:58:39 Ryzen kernel: ata2: hard resetting link
     Nov 27 01:58:39 Ryzen kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Nov 27 01:58:39 Ryzen kernel: ata2.00: configured for UDMA/133
     Nov 27 01:58:39 Ryzen kernel: ata2: EH complete
     Nov 27 02:01:06 Ryzen kernel: ata2.00: exception Emask 0x11 SAct 0x11f000 SErr 0x680100 action 0x6 frozen
     Nov 27 02:01:06 Ryzen kernel: ata2.00: irq_stat 0x48000008, interface fatal error
     Nov 27 02:01:06 Ryzen kernel: ata2: SError: { UnrecovData 10B8B BadCRC Handshk }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/f8:60:78:74:b1/00:00:e9:00:00/40 tag 12 ncq dma 126976 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:68:38:77:b1/00:00:e9:00:00/40 tag 13 ncq dma 8192 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/20:70:98:77:b1/00:00:e9:00:00/40 tag 14 ncq dma 16384 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:78:d8:78:b1/00:00:e9:00:00/40 tag 15 ncq dma 8192 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/10:80:28:86:3f/00:00:e9:00:00/40 tag 16 ncq dma 8192 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2.00: failed command: READ FPDMA QUEUED
     Nov 27 02:01:06 Ryzen kernel: ata2.00: cmd 60/20:a0:d0:64:1e/00:00:e9:00:00/40 tag 20 ncq dma 16384 in
     Nov 27 02:01:06 Ryzen kernel: res 40/00:a0:d0:64:1e/00:00:e9:00:00/40 Emask 0x10 (ATA bus error)
     Nov 27 02:01:06 Ryzen kernel: ata2.00: status: { DRDY }
     Nov 27 02:01:06 Ryzen kernel: ata2: hard resetting link
     Nov 27 02:01:07 Ryzen kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Nov 27 02:01:07 Ryzen kernel: ata2.00: configured for UDMA/133
     Nov 27 02:01:07 Ryzen kernel: ata2: EH complete
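     A way to tell whether a cable swap actually fixed this is to watch the drive's interface CRC counter, which only ever increases: if it stays flat afterwards, the link (cable, connector, or port) was the problem, not the disk. A sketch, with /dev/sdX standing in for the affected disk:

        # SMART attribute 199 (UDMA_CRC_Error_Count) counts link-level CRC errors
        smartctl -A /dev/sdX | grep -i crc

     Note that UDMA CRC errors occur on the SATA link itself, so ECC memory would not make a difference here.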
  5. So far the system has been stable for a couple of days after I changed my cache config from 3 NVMes (2x 512GB + 1x 250GB; the latter in a PCIe x1 adapter card) to just 2x 512GB. The single NVMe has been turned into a separate cache, obviously without any RAID config. The Win10 VM is behaving fine, even with heavy load on the host and the VM over a couple of days. I'll mark this as solved since I cannot provide a copy of the original diagnostics. Thanks guys!
  6. The plugin does not recognize my 3 cache drives (NVMes). Is that intentional?
  7. I managed to copy parts of the syslog. I moved the VM from the cache to an HDD and so far no issues. I haven't tested with a new VM on the SSD cache yet, as I am afraid this will trouble the parity disk again. Any further advice? (A check worth running is sketched after the log.)
     Nov 20 08:30:31 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 149152 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 153248 op 0x1:(WRITE) flags 0x1800 phys_seg 35 prio class 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 673440 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 677536 op 0x1:(WRITE) flags 0x1800 phys_seg 35 prio class 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 156928 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
     Nov 20 08:30:34 Ryzen kernel: blk_update_request: I/O error, dev loop2, sector 681216 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
     Nov 20 08:30:34 Ryzen kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2327: errno=-5 IO failure (Error while writing out transaction)
     Nov 20 08:30:34 Ryzen kernel: BTRFS info (device loop2): forced readonly
     Nov 20 08:30:34 Ryzen kernel: BTRFS warning (device loop2): Skipping commit of aborted transaction.
     Nov 20 08:30:34 Ryzen kernel: BTRFS: error (device loop2) in cleanup_transaction:1898: errno=-5 IO failure
     Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.654245, 0] ../../source3/smbd/service.c:850(make_connection_snum)
     Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
     Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.655062, 0] ../../source3/smbd/service.c:850(make_connection_snum)
     Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
     Nov 20 08:30:40 Ryzen smbd[24055]: [2020/11/20 08:30:40.655821, 0] ../../source3/smbd/service.c:850(make_connection_snum)
     Nov 20 08:30:40 Ryzen smbd[24055]: make_connection_snum: '/mnt/user/isos' does not exist or permission denied when connecting to [isos] Error was Input/output error
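     Since loop2 (the image file) being forced read-only points at write failures on the pool underneath it, the per-device error counters can show which cache member is failing. A sketch, assuming the pool is mounted at /mnt/cache (the Unraid default):

        # cumulative write/read/flush/corruption error counters per pool member
        btrfs device stats /mnt/cache

        # re-read and verify checksums across the pool; -B keeps it in the foreground
        btrfs scrub start -B /mnt/cache

     If one device's counters keep climbing between runs, that drive (or its slot/cable) is the prime suspect.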
  8. I am still trialing Unraid 6.9b35, and after I managed to get a couple of containers and shares up and running, I created a Win10 VM which seems to kill my parity disk. Symptoms: I spin up the VM and use it to browse the web; suddenly the parity disk turns red, disabled, and the syslog is full of errors. I wonder if my cache disk setup could cause the issues. The VM is stored on the cache disk. When I stop the array, remove the disk and add it again, I can start a parity sync, which ran for 3 hrs with turbo write (a constant 182 MB/s) without issues. Starting the VM again ended with the same result as described above.
     System setup:
     • 3x 4TB WD Red
     • 2x 512GB SSD + 1x 250GB as cache pool (RAID 1)
     Any advice on how I can troubleshoot this? Diagnostics attached. Thanks all! ryzen-diagnostics-20201120-2201.zip
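     Given the mixed-size pool (2x 512GB + 1x 250GB in RAID 1), it may be worth checking how btrfs actually laid the pool out before digging further. A read-only sketch, assuming the pool mounts at /mnt/cache:

        # which devices are in the pool and how much is allocated on each
        btrfs filesystem show /mnt/cache

        # allocation per profile (RAID1 data/metadata) and the free-space estimate
        btrfs filesystem usage /mnt/cache

     Neither command changes anything on disk.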
  9. I plugged the stick into a USB 2.0 port and so far things are running fine. I am new to Unraid, coming from Synology, and still learning. Thanks for the support!
  10. Thanks, should it be that easy? I'll give it a shot. Shutting down and replugging the stick into another port doesn't mess up anything with shares / containers etc.?
  11. Hi folks, I have had two USB sticks fail within a couple of days and wonder if there is a pattern. The server (hardware) is brand new and I am running the latest beta. I tried to create a Win10 VM, which failed, and suddenly Docker and VMs are disabled, the Dashboard looks scrambled, and it shows the following error messages. The registration page says: Error code: ENOFLASH3. I stopped the array. Main doesn't show the 3 cache SSDs anymore; however, the Dashboard does show the cache. Any advice?
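      ENOFLASH3 generally means Unraid can no longer write to the flash device, so a first check is whether the stick dropped off the USB bus or went read-only. A sketch from the console; the flash is often /dev/sda on Unraid, but that device name is an assumption:

         # look for USB resets or the flash device dropping offline
         dmesg | grep -iE 'usb|sda' | tail -n 50

         # the flash mounts at /boot; confirm it is present and writable
         mount | grep /boot
         touch /boot/.write-test && rm /boot/.write-test && echo flash is writable

      If the write test fails with a read-only or I/O error, re-seating the stick (ideally in a USB 2.0 port) or rewriting it with the USB Creator is the usual next step.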