bb12489

Members
  • Posts: 65
  • Joined
  • Last visited

Posts posted by bb12489

  1. I just upgraded my Unraid server to an entirely new setup. I'm probably part of a very small group who have their drives attached to a small-form-factor system over USB4/Thunderbolt. My 8 data drives reside in an OWC Thunderbay 8 and are connected to my new Minisforum NPB7 (Intel 13700H, 64GB Crucial DDR5-5200, 2TB 980 Pro NVMe). 

     

    Initially I just swapped over my 6.11 install and everything started without any issue. No problems detecting the drives or starting containers. Even QSV transcoding is working between Plex, Tdarr, and Jellyfin. However, there are random crashes that happen without much warning. From what I can tell, it's not heat related, although I am going to try adjusting the TDP of the 13700H from its 90W default. My NVMe drive spikes into the 60s (°C), but it also has active cooling on it. 

     

    I tried a few suggested fixes related to the built-in Iris Xe graphics that could be causing the crashes, but still no luck. I finally decided to take the leap and upgrade to 6.12RC6, since it has a much newer kernel; my thinking was that it would have much more stable support for my 13th-gen CPU and its graphics engine. I even applied a fix of adding "i915.enable_dc=0" to my boot parameters, as suggested in another thread. The system still randomly crashes. 
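    For anyone trying the same workaround: the parameter goes on the append line of syslinux.cfg on the flash drive. A sketch of a typical default Unraid boot entry with the parameter added (your menu labels may differ):

    ```text
    label Unraid OS
      menu default
      kernel /bzimage
      append i915.enable_dc=0 initrd=/bzroot
    ```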

     

    I just set up my syslog settings this morning to mirror the syslog to my flash drive. I've never needed to pull logs before, since I've never had crashes like this in all my years of using Unraid. Once another crash occurs, I will attach diagnostics and the syslog. I'm just at a loss as to what is causing this. 

     

    On a possibly unrelated note... my Docker containers no longer auto-start, but this only started happening after upgrading to 6.12RC6. 

  2. 29 minutes ago, Tetsuo said:

    FWIW, I found that using this command (add your claim token of course)
     

    curl -X POST 'http://127.0.0.1:32400/myplex/claim?token=claim-xxxxxxx'

    directly in the Unraid web terminal works in claiming the server.

    Before that I also tried it via the variable in the Docker settings, but that wouldn't work.

    I have it, sort of, back up and running now. Access via the local web app works. Access remotely via my phone on a mobile network works as well. What's still weird is that access via the official web app (app.plex.tv) doesn't work for now, claiming it can't establish a secure connection.

     

    Can confirm that this fixed my issue as well. I changed my password, then couldn't access any of my libraries and got the "not authorized" message. After running this command with the claim token, everything seems to be back to normal. 
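    For reference, the claim token can usually also be passed when the container is created, via the PLEX_CLAIM environment variable. A sketch, assuming the linuxserver/plex image and host networking (the appdata path and token are placeholders; substitute your own):

    ```shell
    # Hypothetical example: get a fresh token from https://plex.tv/claim
    # (tokens expire after a few minutes)
    docker run -d --name=plex --net=host \
      -e PLEX_CLAIM=claim-xxxxxxx \
      -v /mnt/user/appdata/plex:/config \
      linuxserver/plex
    ```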

  3. After an update to the Sab docker this morning, it appears that there is an error keeping it from starting up. I've tried removing the container completely and re-creating it, but it still ends with the same error as posted below. Is anyone else running into this? My Sab docker was working just fine before I went to bed. 

    2019-02-27 08:52:31,667::ERROR::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Error in 'start' listener <bound method Server.start of <cherrypy._cpserver.Server object at 0x150cfe541b90>>
    Traceback (most recent call last):
      File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 207, in publish
        output.append(listener(*args, **kwargs))
      File "/usr/share/sabnzbdplus/cherrypy/_cpserver.py", line 167, in start
        self.httpserver, self.bind_addr = self.httpserver_from_self()
      File "/usr/share/sabnzbdplus/cherrypy/_cpserver.py", line 158, in httpserver_from_self
        httpserver = _cpwsgi_server.CPWSGIServer(self)
      File "/usr/share/sabnzbdplus/cherrypy/_cpwsgi_server.py", line 64, in __init__
        self.server_adapter.ssl_certificate_chain)
      File "/usr/share/sabnzbdplus/cherrypy/wsgiserver/ssl_builtin.py", line 56, in __init__
        self.context.load_cert_chain(certificate, private_key)
    SSLError: [SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2779)
    
    2019-02-27 08:52:31,769::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Serving on http://0.0.0.0:8080
    2019-02-27 08:52:31,770::ERROR::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Shutting down due to error in start listener:
    Traceback (most recent call last):
      File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 245, in start
        self.publish('start')
      File "/usr/share/sabnzbdplus/cherrypy/process/wspbus.py", line 225, in publish
        raise exc
    ChannelFailures: SSLError(336245134, u'[SSL: CA_MD_TOO_WEAK] ca md too weak (_ssl.c:2779)')
    
    2019-02-27 08:52:31,770::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus STOPPING
    2019-02-27 08:52:31,773::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 8080)) shut down
    2019-02-27 08:52:31,773::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE HTTP Server None already shut down
    2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus STOPPED
    2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus EXITING
    2019-02-27 08:52:31,774::INFO::[_cplogging:219] [27/Feb/2019:08:52:31] ENGINE Bus EXITED
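    The CA_MD_TOO_WEAK error generally means the existing self-signed HTTPS certificate was signed with an outdated digest (e.g. MD5), which newer OpenSSL security levels reject. One possible workaround is to regenerate the pair with SHA-256 and point SABnzbd at the new files; this is only a sketch, and the filenames are assumptions — check your Sab config for the actual certificate paths inside appdata:

    ```shell
    # Regenerate a self-signed cert/key pair using SHA-256
    # (output filenames here are hypothetical; match them to your config)
    openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 3650 \
      -subj "/CN=sabnzbd" \
      -keyout server.key -out server.cert
    ```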

     

  4. 20 minutes ago, johnnie.black said:

    The device was never added to the pool. I don't know why, but you mounted the device with UD just before adding it to the pool, so it was busy and failed when Unraid tried to wipe it.

     

    
    Feb 24 17:19:46 Tower root: wipefs: error: /dev/sdd1: probing initialization failed: Device or resource busy

     

    Unmount the device on UD, unassign it from the pool, start array, stop array, re-add to pool.

     

     

    That did it! Thank you. I must have mounted it for some stupid reason. Everything is working properly now!

  5. I currently have two 500GB Samsung 850 EVO SSDs in a RAID 0 setup for cache drives. I needed space and speed over redundancy. I recently came into possession of a third 500GB EVO drive, and when I added it to the cache pool, it kicked off a re-balance as expected. Once the balance finished, though, the pool size still sits at 1TB instead of the expected 1.5TB. I thought this might be because Unraid switched the RAID level during the automatic balance, but upon checking, it says I'm still running a RAID 0 config. So I tried yet another balance to RAID 0 with the same result, then finally tried balancing to RAID 1 and back to RAID 0, with, again, the same result...

     

    The only thing that stands out to me as odd (aside from the space not increasing) is that the drive shows no read or write activity (shown in the screenshot). Does anyone have any recommendations? 
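    When this happens, it can help to look at what btrfs itself reports rather than the GUI. A couple of read-only commands (assuming the pool is mounted at /mnt/cache, which is the usual Unraid mount point) show whether the new device actually joined the pool and how space is allocated per RAID profile, and a manual balance with explicit convert filters can be kicked off if needed:

    ```shell
    # Read-only: list pool members and per-profile space allocation
    btrfs filesystem show /mnt/cache
    btrfs filesystem usage /mnt/cache

    # A full manual balance converting data to raid0 (metadata raid1)
    # across all members would look like this:
    btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
    ```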

     

     

    cache.PNG

  6. Hey guys, I'm finally getting started with setting up a Gamestream VM to use with my Nvidia Shield TV. I think I've got the CPU pinning set correctly, but I'm hoping someone could give it a second look.

     

    My system is running dual Xeon L5640s (6 cores / 12 threads each), so I have 24 threads to work with. My thought was to isolate the last 3 cores (bolded below), which would give me 6 threads for the VM. Is my thinking correct? The only thing that looks off to me in the XML is the "cputune" section. Shouldn't it be showing 9,21,10,22,11,23?

     

    My thread pairing is as follows....

     

    cpu 0 <===> cpu 12
    cpu 1 <===> cpu 13
    cpu 2 <===> cpu 14
    cpu 3 <===> cpu 15
    cpu 4 <===> cpu 16
    cpu 5 <===> cpu 17
    cpu 6 <===> cpu 18
    cpu 7 <===> cpu 19
    cpu 8 <===> cpu 20
    cpu 9 <===> cpu 21
    cpu 10 <===> cpu 22
    cpu 11 <===> cpu 23
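    The pairing above follows a simple pattern on this 24-thread box: thread n is the HT sibling of thread n+12. A quick sketch (the function name is mine, not from any Unraid tool) that generates the pair-ordered pin list the cputune section would need for the last three cores:

    ```python
    def pair_order(cores, sibling_offset=12):
        """Return host CPU numbers in sibling-pair order: c, c+offset, ..."""
        pins = []
        for c in cores:
            pins.extend([c, c + sibling_offset])
        return pins

    # Last three cores on this layout -> pin order for vcpus 0..5
    print(pair_order([9, 10, 11]))  # [9, 21, 10, 22, 11, 23]
    ```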

     

     

    Quote

    <domain type='kvm' id='6'>
      <name>Windows 10 - Gamestream</name>
      <uuid>0123ac52-2486-8760-f9d3-aafc8578e469</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>6</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='9'/>
        <vcpupin vcpu='1' cpuset='10'/>
        <vcpupin vcpu='2' cpuset='11'/>
        <vcpupin vcpu='3' cpuset='21'/>
        <vcpupin vcpu='4' cpuset='22'/>
        <vcpupin vcpu='5' cpuset='23'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/0123ac52-2486-8760-f9d3-aafc8578e469_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='3' threads='2'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Windows 10 - Gamestream/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/ISOs/Windows/10/16299.15.170928-1534.RS3_RELEASE_CLIENTPRO_OEMRET_X64FRE_EN-US.ISO'/>
          <backingStore/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <alias name='ide0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/ISOs/virtio-win-0.1.126-2.iso'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:87:f2:ca'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target port='0'/>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-Windows 10 - Gamestr/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </memballoon>
      </devices>
      <seclabel type='none' model='none'/>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>


     

  7. Just a question here.....

     

    By any chance have the BTRFS balancing options been updated in the stable build? I remember from a few beta builds ago that there was going to be a fix for switching the BTRFS cache pool from RAID 1 to RAID 0 and having it not rebuild the pool back to RAID 1 every time you added/removed a drive. I know I may be one of the very few users on here running their cache pool in RAID 0, but I do have my reasons (space and speed).

  8. It actually isn't all the clients, but only the clients that had saved the credentials for the read/write user while it was being used. Now that the server is completely wide open again, this shouldn't be needed for any client. Only clients that had never logged in while it was private are able to get in today. Blame Windows 10, or unRaid?

     

    This is a Windows issue. If your shares are set to public, type a random username without any password, e.g. "user", and it should work; if it does, save the credentials.

     

    I had the same issue with all my shares. They are all set to public, but after the upgrade from RC5 to stable, I was no longer able to access the shares from my Windows 10 desktop. It kept prompting for a username and password. I even checked Credential Manager, and there are no saved credentials.

     

    I did end up trying your solution of typing in "user" for the username with no password, and now I'm back in. Very odd.

  9. For some reason the Grafana icon isn't showing on my Unraid dashboard. It just shows as a broken image. I don't believe it's an issue with my server; all other dashboard icons are working.

    I never set up an icon, so that's why it looks like that.

    Oh weird. In community apps it shows an icon, so I thought it had an icon. My bad!

     

    Sent from my XT1575 using Tapatalk