Posts posted by mrMTB

  1. I'm on 6.12.2 and have seen the following avahi-daemon entries filling up the logs. Any pointers on troubleshooting this would be appreciated.

    Jul 12 15:08:45 tower avahi-daemon[12964]: Record [_smb._tcp.local#011IN#011PTR tower._smb._tcp.local ; ttl=4500] not fitting in legacy unicast packet, dropping.
    Jul 12 15:08:45 tower avahi-daemon[12964]: Record [tower._smb._tcp.local#011IN#011TXT  ; ttl=4500] not fitting in legacy unicast packet, dropping.
    Jul 12 15:08:45 tower avahi-daemon[12964]: Record [tower._smb._tcp.local#011IN#011SRV 0 0 445 tower.local ; ttl=120] not fitting in legacy unicast packet, dropping.
    Jul 12 15:08:45 tower avahi-daemon[12964]: Record [tower.local#011IN#011A 192.168.5.40 ; ttl=120] not fitting in legacy unicast packet, dropping.
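In case it helps anyone else troubleshooting, these warnings appear when a client sends a legacy unicast (plain DNS-style) query and the response records don't fit in the small legacy reply packet, so they're usually noise rather than a fault. A rough way to tally which records trigger the drops is to strip the record names out of the messages. A sketch, shown here on pasted sample lines (on the server, pipe from `/var/log/syslog` instead of the here-doc):

```shell
# Tally record names from avahi "legacy unicast" drop messages.
# Sample lines embedded for illustration; on a live server replace the
# here-doc with: grep 'not fitting in legacy unicast' /var/log/syslog
grep 'not fitting in legacy unicast packet' <<'EOF' | sed 's/.*Record \[//; s/#011.*//' | sort | uniq -c
Jul 12 15:08:45 tower avahi-daemon[12964]: Record [_smb._tcp.local#011IN#011PTR tower._smb._tcp.local ; ttl=4500] not fitting in legacy unicast packet, dropping.
Jul 12 15:08:45 tower avahi-daemon[12964]: Record [tower._smb._tcp.local#011IN#011TXT  ; ttl=4500] not fitting in legacy unicast packet, dropping.
EOF
```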

     

  2. unRAID 6.11.5

    Platform: Asrock Z690 Extreme + 13500k

    Parity: WD140EDGZ

    Array: 2xWD140EDGZ, 3xWD80EFAX, 1xWD80EMAZ

     

I replatformed the machine in April. Parity checks are scheduled quarterly, and the latest one started June 1. During the parity check, I noticed some ATA errors popping up in the syslog.

    Jun  2 13:28:14 wintermute kernel: ata8.00: exception Emask 0x10 SAct 0x20000 SErr 0x840000 action 0x6 frozen
    Jun  2 13:28:14 wintermute kernel: ata8.00: irq_stat 0x08000000, interface fatal error
    Jun  2 13:28:14 wintermute kernel: ata8: SError: { CommWake LinkSeq }
    Jun  2 13:28:14 wintermute kernel: ata8.00: failed command: READ FPDMA QUEUED
    Jun  2 13:28:14 wintermute kernel: ata8.00: cmd 60/00:88:a8:24:04/01:00:10:06:00/40 tag 17 ncq dma 131072 in
    Jun  2 13:28:14 wintermute kernel:         res 40/00:00:a8:23:04/00:00:10:06:00/40 Emask 0x10 (ATA bus error)
    Jun  2 13:28:14 wintermute kernel: ata8.00: status: { DRDY }
    Jun  2 13:28:14 wintermute kernel: ata8: hard resetting link
    Jun  2 13:28:14 wintermute kernel: ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Jun  2 13:28:14 wintermute kernel: ata8.00: supports DRM functions and may not be fully accessible
    Jun  2 13:28:14 wintermute kernel: ata8.00: supports DRM functions and may not be fully accessible
    Jun  2 13:28:14 wintermute kernel: ata8.00: configured for UDMA/133
    Jun  2 13:28:14 wintermute kernel: ata8: EH complete
    Jun  2 15:05:31 wintermute kernel: ata8.00: exception Emask 0x10 SAct 0x8000 SErr 0x840000 action 0x6 frozen
    Jun  2 15:05:31 wintermute kernel: ata8.00: irq_stat 0x08000000, interface fatal error
    Jun  2 15:05:31 wintermute kernel: ata8: SError: { CommWake LinkSeq }
    Jun  2 15:05:31 wintermute kernel: ata8.00: failed command: READ FPDMA QUEUED
    Jun  2 15:05:31 wintermute kernel: ata8.00: cmd 60/00:78:e8:20:81/01:00:08:02:00/40 tag 15 ncq dma 131072 in
    Jun  2 15:05:31 wintermute kernel:         res 40/00:00:e8:1f:81/00:00:08:02:00/40 Emask 0x10 (ATA bus error)
    Jun  2 15:05:31 wintermute kernel: ata8.00: status: { DRDY }
    Jun  2 15:05:31 wintermute kernel: ata8: hard resetting link
    Jun  2 15:05:31 wintermute kernel: ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

     

A small number of corrections (~5) were written to parity. Based on the errors, though, I decided to shut down the server, double-check the SATA and power cables to all of the drives, and start the parity check again. This afternoon the server locked up. After a power cycle the server booted back up, but the parity drive was unresponsive. I swapped the SATA cable and then the SATA port, to no avail.

     

I picked up a new WD140EDGZ, shucked it, and the system came right up with the new drive recognized. I started a parity rebuild and am seeing similar errors on the new drive, too.

    Jun  6 18:26:26 wintermute kernel: ata7.00: exception Emask 0x10 SAct 0x800 SErr 0x840000 action 0x6 frozen
    Jun  6 18:26:26 wintermute kernel: ata7.00: irq_stat 0x08000000, interface fatal error
    Jun  6 18:26:26 wintermute kernel: ata7: SError: { CommWake LinkSeq }
    Jun  6 18:26:26 wintermute kernel: ata7.00: failed command: WRITE FPDMA QUEUED
    Jun  6 18:26:26 wintermute kernel: ata7.00: cmd 61/50:58:00:cb:84/04:00:e7:00:00/40 tag 11 ncq dma 565248 out
    Jun  6 18:26:26 wintermute kernel:         res 40/00:00:a0:c9:84/00:00:e7:00:00/40 Emask 0x10 (ATA bus error)
    Jun  6 18:26:26 wintermute kernel: ata7.00: status: { DRDY }
    Jun  6 18:26:26 wintermute kernel: ata7: hard resetting link
    Jun  6 18:26:26 wintermute kernel: ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Jun  6 18:26:26 wintermute kernel: ata7.00: supports DRM functions and may not be fully accessible
    Jun  6 18:26:26 wintermute kernel: ata7.00: supports DRM functions and may not be fully accessible
    Jun  6 18:26:26 wintermute kernel: ata7.00: configured for UDMA/133
    Jun  6 18:26:26 wintermute kernel: ata7: EH complete

     

    Any thoughts on next steps for isolating the issue would be appreciated.
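One data point that may be worth pulling here: SMART attribute 199 (UDMA_CRC_Error_Count), which counts bad transfers over the SATA link itself (cable, port, backplane) rather than media errors, so a rising value would fit these bus errors. On the live system that's `smartctl -A /dev/sdX | grep -i crc`; the filtering step is sketched below on a pasted sample attribute table (the values are made up for illustration):

```shell
# UDMA CRC errors increment on corrupt transfers over the SATA link, so a
# growing raw value points at cabling/port rather than the disk media.
# On a live server: smartctl -A /dev/sdX | grep -i crc
grep -i 'crc' <<'EOF'
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       12
EOF
```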

     

    wintermute-diagnostics-20230606-1748.zip

  3. 23 hours ago, Johnny Utah said:

    Thanks for this command suggestion!  Worked like a charm for me.  My question, however, is how do I get this to run on every boot?

     

If you have "User Scripts" installed, create a new script with the following contents and set it to run at first start of the array.

     

#!/bin/bash
# Set persistence mode so the driver keeps the GPU initialized
nvidia-smi -pm 1

     

    If you were running any other patches on your card, you could insert them in the same script. :)

  4. 18 minutes ago, Schicksal said:

How do I activate IPv6 for this Plex container?

I thought that with network "host" it would use the same networking as the main unRAID system.

This command will enumerate the networks configured in Docker:

    docker network ls

    Find the entry for your host networking, and use the inspect command on it to see if IPv6 is active on that interface:

    docker inspect <yourIDhere> | grep v6
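For illustration, the field to look for at the network level is "EnableIPv6" — the snippet below is a pasted, abbreviated example of `docker network inspect` output showing what the grep surfaces:

```shell
# What `docker network inspect <id> | grep v6` pulls out of the JSON
# (snippet pasted for illustration; "EnableIPv6" is the key that matters).
grep v6 <<'EOF'
        "EnableIPv6": false,
        "Driver": "bridge",
EOF
```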

     

  5. 6 minutes ago, Schicksal said:

    Hello, I’m new to unraid and I try to solve my problems by reading all this stuff.

    I have one problem:
    How can I find out which IPv6 my Plex Container is using?
Network is set to "host".
    In Plex settings the use of IPv6 is checked.

    Is there a way to show the IPs in the console of the docker?
Is the Docker container using IPv6 at all? (The main unRAID system is configured and has an IPv6 address, and IPv6 is checked in the Plex settings.)

Thank you for your help. I'm still learning.

    Regards

From the Docker tab, toggle advanced view on to get the container ID. It will look like "d61de8b911c5". Go into the terminal of the host machine and use the docker inspect command:

    docker inspect d61de8b911c5 | grep v6

This will extract the IPv6 lines from the running container's config.
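For illustration, these are the kinds of lines the grep surfaces from the container's NetworkSettings (the snippet below is a pasted, abbreviated example; the address shown is made up):

```shell
# Typical IPv6 fields in a container's NetworkSettings JSON
# (keys are standard docker inspect fields; the address is illustrative).
grep v6 <<'EOF'
        "GlobalIPv6Address": "fd00::1234",
        "GlobalIPv6PrefixLen": 64,
        "IPv6Gateway": "",
EOF
```

An empty "GlobalIPv6Address" would mean the container has no IPv6 address assigned.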

  6. On 2/1/2020 at 5:51 AM, Koshy said:

    This is not true, proper driver support would mean that the GPU will be able to enter low power mode even when not assigned to a VM.

    Koshy, have you tried setting persistence mode on for your card?

    nvidia-smi -pm 1

    I did this for my pair of 1660 Supers, and after a few minutes they drop from P0 to P8.

First, let me thank the developer(s) for their work on an excellent container.

     

    My unRAID server is only about a month old, and I'm slowly working through optimizing everything in it. I have a pair of 1660 Supers in my machine, one I've isolated for use in a Win10 gaming VM (primary position) and a second I use for transcoding (currently passed through to Plex and Handbrake, though the latter is generally not running).

     

    Transcoding works like a charm in Plex, and I've scaled it to twenty HEVC>1080p transcodes with resources left to spare on the card. When the transcoding stops, however, the card stays in P0 consuming about 35W compared to my isolated card that sits in P8 consuming about 11W. I have persistence mode enabled on both cards. If I restart the Plex container the card will immediately drop to P8.

     

    Does anyone else see this behavior or have thoughts on how I might get the card to idle properly without having to manually restart the container?
