Shantarius

Members
  • Posts

    14
  • Joined

  • Last visited

Shantarius's Achievements

Newbie (1/14)

3

Reputation

  1. Wow, this is great and exactly what I need. Now I can use a Debian VM as an (AirPrint) print server. If the printer is off, the print job waits in the CUPS scheduler until I turn the printer on and it auto-connects to the VM 🙂 Very nice!
  2. Hi, I pass a USB printer through to a Debian VM with this plugin. I checked the box in the VM settings to pass the printer through, and the printer is available in the VM:

root@debian11-103:~# lsusb
Bus 001 Device 004: ID 03f0:132a HP, Inc HP LaserJet 200 color M251n
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Tablet
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
root@debian11-103:~#

If I turn the printer off, it is no longer available in the VM. If I then turn the printer back on, it does not reconnect to the VM automatically; I have to add the printer to the VM manually in the VM tab. How can the printer be passed through to the VM automatically after turning it on? Thx Chris
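One possible way to re-attach the printer automatically (a sketch only, not something the plugin provides): Unraid VMs run under libvirt, so a udev rule on the host can react to the printer's USB add event (ID 03f0:132a from the lsusb output above) and call virsh attach-device with a hostdev definition. The VM name debian11-103 and the file paths here are assumptions:

```shell
# Hostdev definition for the printer (assumed path on the flash drive)
cat > /boot/config/hp-printer.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x03f0'/>
    <product id='0x132a'/>
  </source>
</hostdev>
EOF

# udev rule: when the printer's USB IDs appear, attach it to the running VM
cat > /etc/udev/rules.d/99-printer-vm.rules <<'EOF'
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="03f0", ATTR{idProduct}=="132a", \
  RUN+="/usr/sbin/virsh attach-device debian11-103 /boot/config/hp-printer.xml"
EOF

# reload udev so the new rule takes effect
udevadm control --reload
```

Since /etc lives in RAM on Unraid, the rule would have to be re-created at boot (e.g. from the go file) to survive a restart.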
  3. Hi, thank you for your answer! I removed the LSI controller and now run the SSDs off the mainboard's SATA II ports with brand-new SATA cables. After removing the LSI controller I also swapped an old SSD for a brand-new 870 EVO, and the errors occur again with that drive. The errors come up after a reboot, but running overnight with Dockers and VMs there were no errors. Today, after turning off NCQ, I ran an experiment to document the status:

1. Before reboot:

root@Avalon:/mnt/zpool/Docker/Telegraf# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 00:06:59 with 0 errors on Sun Nov 7 17:05:41 2021
config:
        NAME                                             STATE     READ WRITE CKSUM
        zpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M  ONLINE       0     0     0
errors: No known data errors

SMART values for /dev/sdg (the new 870 EVO):

  #  Attribute Name             Flag    Value Worst Threshold Type      Updated Failed Raw Value
  5  Reallocated sector count   0x0033  100   100   010       Pre-fail  Always  Never  0
  9  Power on hours             0x0032  099   099   000       Old age   Always  Never  52 (2d, 4h)
 12  Power cycle count          0x0032  099   099   000       Old age   Always  Never  10
177  Wear leveling count        0x0013  099   099   000       Pre-fail  Always  Never  2
179  Used rsvd block count tot  0x0013  100   100   010       Pre-fail  Always  Never  0
181  Program fail count total   0x0032  100   100   010       Old age   Always  Never  0
182  Erase fail count total     0x0032  100   100   010       Old age   Always  Never  0
183  Runtime bad block          0x0013  100   100   010       Pre-fail  Always  Never  0
187  Reported uncorrect         0x0032  100   100   000       Old age   Always  Never  0
190  Airflow temperature cel    0x0032  074   062   000       Old age   Always  Never  26
195  Hardware ECC recovered     0x001a  200   200   000       Old age   Always  Never  0
199  UDMA CRC error count       0x003e  100   100   000       Old age   Always  Never  0
235  Unknown attribute          0x0012  099   099   000       Old age   Always  Never  5
241  Total lbas written         0x0032  099   099   000       Old age   Always  Never  347926153

2. After reboot, before starting the array: zpool status is unchanged from step 1. SMART for /dev/sdg is unchanged except attribute 190 Airflow temperature cel (value 078, raw 22) and 241 Total lbas written (raw 347946967).

3. After reboot, a few minutes after the array has started:

root@Avalon:/mnt/zpool# zpool status
  pool: zpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:06:59 with 0 errors on Sun Nov 7 17:05:41 2021
config:
        NAME                                             STATE     READ WRITE CKSUM
        zpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M  ONLINE       0    19     0
errors: No known data errors

SMART for /dev/sdg is unchanged except 241 Total lbas written (raw 348067881).

root@Avalon:/mnt/zpool# cat /var/log/syslog | grep 16:16:37
Nov 8 16:16:37 Avalon kernel: ata5.00: exception Emask 0x0 SAct 0x1e018 SErr 0x0 action 0x6 frozen
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/04:18:06:6a:00/00:00:0c:00:00/40 tag 3 ncq dma 2048 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/36:20:57:6a:00/00:00:0c:00:00/40 tag 4 ncq dma 27648 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/10:68:10:26:70/00:00:74:00:00/40 tag 13 ncq dma 8192 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/10:70:10:24:70/00:00:74:00:00/40 tag 14 ncq dma 8192 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 61/10:78:10:0a:00/00:00:00:00:00/40 tag 15 ncq dma 8192 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5.00: failed command: SEND FPDMA QUEUED
Nov 8 16:16:37 Avalon kernel: ata5.00: cmd 64/01:80:00:00:00/00:00:00:00:00/a0 tag 16 ncq dma 512 out
Nov 8 16:16:37 Avalon kernel:          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 8 16:16:37 Avalon kernel: ata5.00: status: { DRDY }
Nov 8 16:16:37 Avalon kernel: ata5: hard resetting link
Nov 8 16:16:37 Avalon kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Nov 8 16:16:37 Avalon kernel: ata5.00: supports DRM functions and may not be fully accessible
Nov 8 16:16:37 Avalon kernel: ata5.00: supports DRM functions and may not be fully accessible
Nov 8 16:16:37 Avalon kernel: ata5.00: configured for UDMA/133
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 Sense Key : 0x5 [current]
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 ASC=0x21 ASCQ=0x4
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 CDB: opcode=0x2a 2a 00 0c 00 6a 06 00 00 04 00
Nov 8 16:16:37 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 201353734 op 0x1:(WRITE) flags 0x700 phys_seg 2 prio class 0
Nov 8 16:16:37 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=103092063232 size=2048 flags=40080c80
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 Sense Key : 0x5 [current]
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 ASC=0x21 ASCQ=0x4
Nov 8 16:16:37 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 CDB: opcode=0x2a 2a 00 0c 00 6a 57 00 00 36 00
Nov 8 16:16:37 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 201353815 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 8 16:16:37 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=103092104704 size=27648 flags=180880
Nov 8 16:16:37 Avalon kernel: ata5: EH complete
Nov 8 16:16:37 Avalon kernel: ata5.00: Enabling discard_zeroes_data
root@Avalon:/mnt/zpool#

4. After starting the Docker service and the VM service (Dockers and VMs are on the zpool): no new errors.

5. After a zpool scrub: no new errors in the syslog or in zpool status:

root@Avalon:/mnt/zpool# zpool status -v zpool
  pool: zpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:05:29 with 0 errors on Mon Nov 8 16:34:53 2021
config:
        NAME                                             STATE     READ WRITE CKSUM
        zpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M  ONLINE       0    19     0
errors: No known data errors
root@Avalon:/mnt/zpool#

SMART for /dev/sdg is unchanged except 190 Airflow temperature cel (value 073, raw 27) and 241 Total lbas written (raw 349169583).

6. After zpool clear:

root@Avalon:/mnt/zpool# zpool status -v zpool
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 00:05:14 with 0 errors on Mon Nov 8 16:42:34 2021
config:
        NAME                                             STATE     READ WRITE CKSUM
        zpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M  ONLINE       0     0     0
errors: No known data errors
root@Avalon:/mnt/zpool#

What does "ata5: hard resetting link" in the syslog mean? Is it possible that the partition in "vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5" is damaged? Can I take the 870 EVO offline, delete the partitions on this SSD, and replace/add it back to the mirror?
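The offline-and-replace sequence asked about above would look roughly like this (a sketch only; pool and device names are taken from the zpool output and sdg from the syslog; this assumes a verified backup and a healthy other mirror leg, and the commands must be adapted to the actual system):

```shell
# Take the suspect 870 EVO out of the mirror
zpool offline zpool ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M

# Clear its partition table and ZFS labels (sdg assumed from the syslog)
wipefs -a /dev/sdg

# Replace the device with itself: ZFS repartitions the disk and
# resilvers its contents from the healthy 850 EVO mirror leg
zpool replace zpool ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M

# Watch the resilver until it completes
zpool status -v zpool
```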
  4. Hello there, I have a solution for the problem with access to /var/run/docker.sock. For me it works with telegraf:1.18.3 and telegraf:alpine. Just add the following to the Extra Parameters value:

--user telegraf:$(stat -c '%g' /var/run/docker.sock)

Then Telegraf has access to docker.sock, and I get data from all Dockers in Grafana (CPU usage, RAM usage, network usage). It's pretty nice 🙂

For the smartctl problem I have no solution. If I add the following to Post Arguments:

/bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'

the Telegraf Docker doesn't start up, and in the Docker log I find these errors:

ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied

Does anyone have a solution for this? 🙂
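What the Extra Parameter actually computes can be checked on its own. This small sketch uses a stand-in file instead of the real /var/run/docker.sock (a hypothetical /tmp path), but the expansion is the same:

```shell
# Create a stand-in for docker.sock (the real socket is owned by the docker group)
sock=/tmp/fake_docker.sock
touch "$sock"

# Same expansion as in the Extra Parameters: the numeric gid of the file's group
gid=$(stat -c '%g' "$sock")

# This is the flag Docker receives -- it runs the telegraf user with that
# supplementary group, which grants read access to the socket
echo "--user telegraf:$gid"
```

The apk errors in the second half happen for the same reason in reverse: once the container runs as the unprivileged telegraf user, it no longer has permission to write apk's database, so package installation in Post Arguments fails.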
  5. Hello, I have a problem with my zpool. In my pool (a mirror) are one Samsung 850 EVO 1TB and one 870 EVO 1TB. These two SSDs were connected to an LSI controller in IT mode (firmware version 19). A few days ago I had errors for one SSD (the Samsung 870 EVO) in the syslog, plus some UDMA CRC errors and Hardware ECC recovered. I then disconnected both SSDs from the HBA and connected the Samsung 850 EVO 1TB and a brand-new 870 EVO 1TB with new SATA cables to the onboard SATA connectors of my HP ProLiant ML310e Gen8 Version 1 (SATA II, not III). I replaced and resilvered the new 870 EVO 1TB into the zpool and everything was fine. Since yesterday, during or after a reboot and the start of the Docker services and some Dockers, I again get errors for the new 870 EVO 1TB in the syslog, and "zpool status" shows write errors. I then ran a zpool scrub, during which some cksum errors came up in zpool status. After a zpool clear, all errors were gone, and a new scrub produced no new errors (read/write or cksum). The SMART values of the new SSD look okay, no CRC or other errors. The server has now been running for a few hours with no new SSD errors in the syslog. I don't dare restart the server, because new errors could come up.

This is an excerpt of the syslog from a reboot this afternoon; /dev/sdg is the brand-new replacement Samsung 870 EVO 1TB:

Nov 7 16:39:45 Avalon kernel: ata5.00: exception Emask 0x0 SAct 0xfd901f SErr 0x0 action 0x6 frozen
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: SEND FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 64/01:00:00:00:00/00:00:00:00:00/a0 tag 0 ncq dma 512 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: SEND FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 64/01:08:00:00:00/00:00:00:00:00/a0 tag 1 ncq dma 512 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/08:10:4b:ab:0c/00:00:0e:00:00/40 tag 2 ncq dma 4096 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/44:18:5b:83:03/00:00:0e:00:00/40 tag 3 ncq dma 34816 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/44:20:73:bb:03/00:00:0e:00:00/40 tag 4 ncq dma 34816 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/23:60:06:40:00/00:00:1c:00:00/40 tag 12 ncq dma 17920 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/42:78:31:bb:03/00:00:0e:00:00/40 tag 15 ncq dma 33792 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/4d:80:b7:bb:03/00:00:0e:00:00/40 tag 16 ncq dma 39424 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:e0:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/11:90:8f:b2:03/00:00:0e:00:00/40 tag 18 ncq dma 8704 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/08:98:8d:f5:02/00:00:1c:00:00/40 tag 19 ncq dma 4096 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/70:a0:9a:9f:03/00:00:1c:00:00/40 tag 20 ncq dma 57344 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/18:a8:0a:a0:03/00:00:1c:00:00/40 tag 21 ncq dma 12288 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/42:b0:de:90:03/00:00:0e:00:00/40 tag 22 ncq dma 33792 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:09:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Nov 7 16:39:45 Avalon kernel: ata5.00: cmd 61/30:b8:ab:6a:03/00:00:0e:00:00/40 tag 23 ncq dma 24576 out
Nov 7 16:39:45 Avalon kernel:          res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Nov 7 16:39:45 Avalon kernel: ata5.00: status: { DRDY }
Nov 7 16:39:45 Avalon kernel: ata5: hard resetting link
Nov 7 16:39:45 Avalon kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Nov 7 16:39:45 Avalon kernel: ata5.00: supports DRM functions and may not be fully accessible
Nov 7 16:39:45 Avalon kernel: ata5.00: supports DRM functions and may not be fully accessible
Nov 7 16:39:45 Avalon kernel: ata5.00: configured for UDMA/133
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=42s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#2 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#2 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#2 CDB: opcode=0x2a 2a 00 0e 0c ab 4b 00 00 08 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235711307 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120683140608 size=4096 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#3 CDB: opcode=0x2a 2a 00 0e 03 83 5b 00 00 44 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235111259 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120375916032 size=34816 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#4 CDB: opcode=0x2a 2a 00 0e 03 bb 73 00 00 44 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235125619 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120383268352 size=34816 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#12 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=42s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#12 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#12 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#12 CDB: opcode=0x2a 2a 00 1c 00 40 06 00 00 23 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 469778438 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=240525511680 size=17920 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#15 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#15 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#15 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#15 CDB: opcode=0x2a 2a 00 0e 03 bb 31 00 00 42 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235125553 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120383234560 size=33792 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#16 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=30s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#16 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#16 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#16 CDB: opcode=0x2a 2a 00 0e 03 bb b7 00 00 4d 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235125687 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120383303168 size=39424 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=42s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#18 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#18 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#18 CDB: opcode=0x2a 2a 00 0e 03 b2 8f 00 00 11 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 235123343 op 0x1:(WRITE) flags 0x700 phys_seg 2 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120382103040 size=8704 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=35s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#19 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#19 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#19 CDB: opcode=0x2a 2a 00 1c 02 f5 8d 00 00 08 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 469955981 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=240616413696 size=4096 flags=180880
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=35s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#20 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#20 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#20 CDB: opcode=0x2a 2a 00 1c 03 9f 9a 00 00 70 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 469999514 op 0x1:(WRITE) flags 0x700 phys_seg 14 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=240638702592 size=57344 flags=40080c80
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#21 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=35s
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#21 Sense Key : 0x5 [current]
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#21 ASC=0x21 ASCQ=0x4
Nov 7 16:39:45 Avalon kernel: sd 6:0:0:0: [sdg] tag#21 CDB: opcode=0x2a 2a 00 1c 03 a0 0a 00 00 18 00
Nov 7 16:39:45 Avalon kernel: blk_update_request: I/O error, dev sdg, sector 469999626 op 0x1:(WRITE) flags 0x700 phys_seg 1 prio class 0
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=240638759936 size=12288 flags=180880
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120377687040 size=33792 flags=180880
Nov 7 16:39:45 Avalon kernel: zio pool=zpool vdev=/dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M-part1 error=5 type=2 offset=120372680192 size=24576 flags=180880
Nov 7 16:39:45 Avalon kernel: ata5: EH complete
Nov 7 16:39:45 Avalon kernel: ata5.00: Enabling discard_zeroes_data

Current SMART values of the brand-new replacement 870 EVO 1TB after the reboot and the errors in the syslog:

  #  Attribute Name             Flag    Value Worst Threshold Type      Updated Failed Raw Value
  5  Reallocated sector count   0x0033  100   100   010       Pre-fail  Always  Never  0
  9  Power on hours             0x0032  099   099   000       Old age   Always  Never  32 (1d, 8h)
 12  Power cycle count          0x0032  099   099   000       Old age   Always  Never  10
177  Wear leveling count        0x0013  099   099   000       Pre-fail  Always  Never  1
179  Used rsvd block count tot  0x0013  100   100   010       Pre-fail  Always  Never  0
181  Program fail count total   0x0032  100   100   010       Old age   Always  Never  0
182  Erase fail count total     0x0032  100   100   010       Old age   Always  Never  0
183  Runtime bad block          0x0013  100   100   010       Pre-fail  Always  Never  0
187  Reported uncorrect         0x0032  100   100   000       Old age   Always  Never  0
190  Airflow temperature cel    0x0032  075   062   000       Old age   Always  Never  25
195  Hardware ECC recovered     0x001a  200   200   000       Old age   Always  Never  0
199  UDMA CRC error count       0x003e  100   100   000       Old age   Always  Never  0
235  Unknown attribute          0x0012  099   099   000       Old age   Always  Never  5
241  Total lbas written         0x0032  099   099   000       Old age   Always  Never  291524814

Does anyone have an idea what the problem is here? Is the brand-new drive faulty, or do I have a problem with the zpool?

root@Avalon:~# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 00:06:59 with 0 errors on Sun Nov 7 17:05:41 2021
config:
        NAME                                             STATE     READ WRITE CKSUM
        zpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-Samsung_SSD_850_EVO_1TB_S2RFNX0HA28280F  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S626NF0R226283M  ONLINE       0     0     0
errors: No known data errors

Thanks Chris
  6. Hi, now I have tested the SSD on a SATA port on the mainboard, and voilà, no error:

root@Avalon:~# fstrim -v /mnt/cache/
/mnt/cache/: 232.9 GiB (250048569344 bytes) trimmed

For me that means either a) downgrade the firmware of the HBA from 19 to 16, or b) buy one or two SATA III PCIe cards, because the mainboard of my HP ProLiant has only SATA II onboard.
  7. Hello, for a few days I have had the following error in my syslog:

Oct 30 13:53:35 Avalon kernel: blk_update_request: critical target error, dev sdd, sector 486672334 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
Oct 30 13:53:35 Avalon kernel: BTRFS warning (device sdd1): failed to trim 1 device(s), last error -121

Both a manual run of fstrim -v /mnt/cache/ (/mnt/cache/ = sdd1) and the Trim plugin generate this error. In my opinion this is a new problem; I never had this error before. The SSD is connected to an HBA flashed to IT mode, firmware 19. But since I never had this error before, I don't think it can be ascribed to the HBA firmware. How can I check and repair this "blk_update_request: critical target error"? Thanks a lot, Christian
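One way to check the first half of the question, i.e. whether the kernel currently sees TRIM/discard support on the device at all (a sketch; look at the sdd row in the output):

```shell
# List discard (TRIM) capabilities of all block devices.
# Non-zero DISC-GRAN and DISC-MAX for a device mean it advertises discard;
# all zeros mean discard requests through the current controller will fail.
lsblk --discard
```

Error -121 (EREMOTEIO) on a DISCARD op suggests the command died between kernel and device, which fits a controller/firmware limitation rather than a broken filesystem.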
  8. Hi, if I execute the script I get the following error:

root@Avalon:/boot/config/scripts# sh backup_rsync.sh
backup_rsync.sh: line 74: syntax error near unexpected token `>'
backup_rsync.sh: line 74: ` exec &> >(tee "${backup_path}/logs/${new_backup}.log")'
root@Avalon:/boot/config/scripts#

How can I solve this? Thank you!
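The error comes from the shell, not the script: process substitution (`>(...)`) on line 74 is a bash feature that POSIX `sh` cannot parse, so running the script with `sh` fails exactly like this. Invoking it with bash (or `./backup_rsync.sh` with a `#!/bin/bash` shebang and execute permission) avoids it. A minimal sketch reproducing the construct, with hypothetical /tmp paths:

```shell
# A minimal script using the same logging construct as backup_rsync.sh
cat > /tmp/demo_logging.sh <<'EOF'
#!/bin/bash
# duplicate all further stdout/stderr into a log file via process substitution
exec &> >(tee "/tmp/demo.log")
echo "hello from the script"
EOF

# bash understands process substitution; plain `sh` would raise the
# "syntax error near unexpected token `>'" shown above
bash /tmp/demo_logging.sh
sleep 1   # give the background tee a moment to flush the log
grep "hello from the script" /tmp/demo.log
```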
  9. Thank you both for your answers. My question is: how can I (auto)mount two different USB disks (in rotation) to the same mount point, for example /mnt/disks/usb_backup? Can I use the /etc/fstab file? Is this permanent?
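One approach that fits this (a sketch; the label name and ext4 are assumptions): give both backup disks the same filesystem label, then a single fstab entry mounts whichever disk is currently plugged in. Note that Unraid rebuilds /etc in RAM at boot, so as far as I know an fstab edit there is not permanent and would need to be re-applied from the go file or a script:

```shell
# Give both disks the same label (run once per disk; ext4 assumed):
#   e2label /dev/sdX1 USB_BACKUP
#
# /etc/fstab entry -- noauto skips mounting at boot, nofail tolerates an
# absent disk; whichever labeled disk is attached gets mounted:
#   LABEL=USB_BACKUP  /mnt/disks/usb_backup  ext4  noauto,nofail  0  0
#
# After plugging in either disk, the backup script can simply run:
#   mount /mnt/disks/usb_backup
```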
  10. Hi, is it possible to mount two different external USB disks to the same mount point (not at the same time!)? Background: I want to use two USB disks in periodic rotation for a backup script, and it would be easier if I could use a single mount point for both disks in the script. Thank you!
  11. Hello, today I found some messages in the syslog. Can anyone tell me what this means?
Aug 4 12:12:41 Avalon kernel: ------------[ cut here ]------------
Aug 4 12:12:41 Avalon kernel: WARNING: CPU: 1 PID: 61907 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Aug 4 12:12:41 Avalon kernel: Modules linked in: vhost_net tun vhost vhost_iotlb tap kvm_intel kvm zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) md_mod macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding igb i2c_algo_bit tg3 ipmi_ssif x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl intel_cstate intel_uncore ahci libahci mpt3sas acpi_power_meter raid_class i2c_core button acpi_ipmi ipmi_si
Aug 4 12:12:41 Avalon kernel: scsi_transport_sas thermal ie31200_edac [last unloaded: kvm]
Aug 4 12:12:41 Avalon kernel: CPU: 1 PID: 61907 Comm: kworker/1:0 Tainted: P IO 5.10.28-Unraid #1
Aug 4 12:12:41 Avalon kernel: Hardware name: HP ProLiant ML310e Gen8, BIOS J04 05/21/2018
Aug 4 12:12:41 Avalon kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Aug 4 12:12:41 Avalon kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Aug 4 12:12:41 Avalon kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
Aug 4 12:12:41 Avalon kernel: RSP: 0018:ffffc90000298dd8 EFLAGS: 00010202
Aug 4 12:12:41 Avalon kernel: RAX: 0000000000000188 RBX: 00000000000037e3 RCX: 0000000040482715
Aug 4 12:12:41 Avalon kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa01d618c
Aug 4 12:12:41 Avalon kernel: RBP: ffff8881967d2300 R08: 00000000fd65d7cc R09: ffff888160a6be40
Aug 4 12:12:41 Avalon kernel: R10: 0000000000000000 R11: ffff8881022fe500 R12: 0000000000003f51
Aug 4 12:12:41 Avalon kernel: R13: ffffffff8210b440 R14: 00000000000037e3 R15: 0000000000000000
Aug 4 12:12:41 Avalon kernel: FS: 0000000000000000(0000) GS:ffff8885f2a40000(0000) knlGS:0000000000000000
Aug 4 12:12:41 Avalon kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 4 12:12:41 Avalon kernel: CR2: 000014e7af4cf000 CR3: 000000000200a005 CR4: 00000000001706e0
Aug 4 12:12:41 Avalon kernel: Call Trace:
Aug 4 12:12:41 Avalon kernel: <IRQ>
Aug 4 12:12:41 Avalon kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
Aug 4 12:12:41 Avalon kernel: nf_hook_slow+0x39/0x8e
Aug 4 12:12:41 Avalon kernel: nf_hook.constprop.0+0xb1/0xd8
Aug 4 12:12:41 Avalon kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
Aug 4 12:12:41 Avalon kernel: ip_local_deliver+0x49/0x75
Aug 4 12:12:41 Avalon kernel: __netif_receive_skb_one_core+0x74/0x95
Aug 4 12:12:41 Avalon kernel: process_backlog+0xa3/0x13b
Aug 4 12:12:41 Avalon kernel: net_rx_action+0xf4/0x29d
Aug 4 12:12:41 Avalon kernel: __do_softirq+0xc4/0x1c2
Aug 4 12:12:41 Avalon kernel: asm_call_irq_on_stack+0x12/0x20
Aug 4 12:12:41 Avalon kernel: </IRQ>
Aug 4 12:12:41 Avalon kernel: do_softirq_own_stack+0x2c/0x39
Aug 4 12:12:41 Avalon kernel: do_softirq+0x3a/0x44
Aug 4 12:12:41 Avalon kernel: netif_rx_ni+0x1c/0x22
Aug 4 12:12:41 Avalon kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
Aug 4 12:12:41 Avalon kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
Aug 4 12:12:41 Avalon kernel: process_one_work+0x13c/0x1d5
Aug 4 12:12:41 Avalon kernel: worker_thread+0x18b/0x22f
Aug 4 12:12:41 Avalon kernel: ? process_scheduled_works+0x27/0x27
Aug 4 12:12:41 Avalon kernel: kthread+0xe5/0xea
Aug 4 12:12:41 Avalon kernel: ? __kthread_bind_mask+0x57/0x57
Aug 4 12:12:41 Avalon kernel: ret_from_fork+0x22/0x30
Aug 4 12:12:41 Avalon kernel: ---[ end trace 67f601bd1746e199 ]---
Thank you! Chris
  12. Hi, I'm new to virtualisation with Unraid. My HP ProLiant ML310e Gen8 has a 4-port Intel Pro Gbit network card installed. In the VM Manager settings I have set PCIe ACS Override to Downstream and VFIO allow unsafe interrupts to Yes. Now I have a separate IOMMU group for each Ethernet port on the Intel network card:
IOMMU group 17: [8086:10d6] 0d:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 18: [8086:10d6] 0d:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 19: [8086:10d6] 0e:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
IOMMU group 20: [8086:10d6] 0e:00.1 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
Groups 17 and 18 are used for my server's network connection to my switch. Groups 19 and 20 I want to pass through to a virtual machine (Ubuntu). For this I have ticked the checkboxes for IOMMU groups 19 & 20 in Tools/System Devices and applied the setting with BIND SELECTED TO VFIO AT BOOT. After rebooting Unraid I can select the two PCI devices in the settings of the VM. But when I try to start the VM I get an error message:
Execution error: internal error: qemu unexpectedly closed the monitor: 2021-06-27T12:16:08.524836Z qemu-system-x86_64: -device vfio-pci,host=0000:0e:00.0,id=hostdev0,bus=pci.4,addr=0x0: vfio 0000:0e:00.0: failed to setup container for group 19: Failed to set iommu for container: Operation not permitted
The same thing happens if I try to pass through the HP Broadcom network devices. What is wrong here? Thank you!
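A quick way to check whether the VFIO bind actually took effect after the reboot is to look at which kernel driver each port is currently using. A small sketch (`driver_for` is a hypothetical helper; the PCI addresses are the ones from the post above):

```shell
# Print the kernel driver currently bound to a given PCI device address.
driver_for() {
    lspci -nnk -s "$1" | awk -F': ' '/Kernel driver in use/ {print $2}'
}

# After "BIND SELECTED TO VFIO AT BOOT" and a reboot, the ports intended
# for the VM should report vfio-pci, e.g.:
#   driver_for 0e:00.0    # expected: vfio-pci
#   driver_for 0e:00.1    # expected: vfio-pci
# while the host-side ports keep their normal Intel driver:
#   driver_for 0d:00.0    # expected: igb
```

If a device still shows `igb` (or no driver at all) instead of `vfio-pci`, the bind did not take effect and the "failed to set iommu for container" error would be one plausible consequence.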
  13. Hello, Vaultwarden has been running perfectly on my machine for a few weeks. Today I activated email 2FA. When I log in to the web vault in the browser, it asks me for the 2FA code after I type in the master password. The 2FA code is then sent via mail, and with this code I can unlock my web vault. The same happens when I want to synchronize the iPhone app or the browser plugin with the web vault: after typing in the credentials (email & password) I receive an email with the 2FA code, and with this code I unlock the vault and can synchronize the app or browser plugin with the web vault/database. But after every email with the (correct) 2FA code I get this error message in my bitwarden.log:
[2021-06-05 16:03:13.603][error][ERROR] 2FA token not provided
[2021-06-05 16:07:19.638][error][ERROR] 2FA token not provided
[2021-06-05 16:08:21.815][error][ERROR] 2FA token not provided
[2021-06-05 16:13:22.129][error][ERROR] 2FA token not provided
[2021-06-05 16:20:09.245][error][ERROR] 2FA token not provided
Second question: from time to time I have the following message in the bitwarden.log:
###########################################################
'/notifications/hub' should be proxied to the websocket server or notifications won't work. Go to the Wiki for more info, or disable WebSockets setting WEBSOCKET_ENABLED=false.
###########################################################################################
What is meant by that, what is /notifications/hub, and what can I do to solve this? Thank you very much for the help! Christian
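Regarding the second question: Vaultwarden runs a separate WebSocket server (by default on port 3012) that pushes live-sync notifications to clients, and the log message means the reverse proxy is not forwarding /notifications/hub to it. A minimal nginx sketch (the upstream hostname `vaultwarden` and the ports are assumptions; adjust them to your container setup, and note that the container also needs WEBSOCKET_ENABLED=true):

```nginx
# Forward WebSocket notifications to Vaultwarden's websocket port (default 3012)
location /notifications/hub {
    proxy_pass http://vaultwarden:3012;
    proxy_set_header Upgrade $http_upgrade;    # headers required for the
    proxy_set_header Connection "upgrade";     # WebSocket handshake
}

# Everything else (including /notifications/hub/negotiate) goes to the normal HTTP port
location / {
    proxy_pass http://vaultwarden:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

Without this, logins and syncing still work; only the instant push notifications between clients are lost, which is why everything else behaves normally.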
  14. Hi, I have followed SpaceInvader One's installation steps from these videos: https://www.youtube.com/watch?v=I0lhZc25Sro&t=1341s https://www.youtube.com/watch?v=HLcj-p-lcXY&t=442s Vaultwarden is now accessible via the internet under my corresponding DuckDNS subdomain. The only issue is that the /admin page is also reachable under this DuckDNS subdomain. Does anyone have an idea how I can use the tip from Tolete with SWAG (formerly LetsEncrypt)? I use SWAG as reverse proxy instead of NGINX Proxy Manager. Thank you!
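One way to keep /admin off the internet with SWAG is to restrict that location in the nginx proxy config (a sketch; in SWAG the Vaultwarden proxy conf typically lives under /config/nginx/proxy-confs/, the upstream name `vaultwarden` and the LAN subnet are examples to adjust):

```nginx
# Inside the server block of the Vaultwarden proxy conf:
# only allow the admin page from the local network
location /admin {
    allow 192.168.1.0/24;   # example LAN subnet, adjust to yours
    deny all;               # everyone else gets 403 Forbidden
    proxy_pass http://vaultwarden:80;
    proxy_set_header Host $host;
}
```

Alternatively, `return 404;` inside that location hides the page completely; you could then reach /admin only from inside the LAN via the container's local address.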