Posts posted by rbm

  1. Hi,  I'm currently using the Unifi docker by pducharme to manage my APs.  I want to change over to this docker.  I currently autobackup the configuration of my controller from within the Unifi controller application itself.  If I do switch, will I be able to restore the working config from my previous docker to this one using the .unf file I download from the controller? The version of Unifi I run is 5.6.37.

  2. Understood.  But I guess I'll have to reinstall, because I don't have a backup of the XML file to see what the original files were called.

     

    If I want to increase the memory size of the VM without risking breakage like this, what is the best way to change the value?  Manual editing of the XML?  The default size is 4GB and I want 8GB.
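    For reference, if manual editing is the way to go, I believe these are the two elements to change via `virsh edit MacinaboxHighSierra` (a sketch based on my template below; values are in KiB, so 8GB would be 8388608):

    ```xml
    <!-- inside the <domain> definition; both elements should agree -->
    <memory unit='KiB'>8388608</memory>
    <currentMemory unit='KiB'>8388608</currentMemory>
    ```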

  3. Hi, I used spaceinvader's Macinabox to install a VM on my Unraid server that runs MacOS 10.11 "El Capitan".  I installed using a Time Machine restoration to the OS vdisk.  That booted properly and ran correctly first time.  I performed a controlled shutdown and, using the Edit feature of the VM, changed memory from 4096MB to 8192MB.  Upon restarting the VM, the system fails to boot.  It flips back and forth between showing the Clover bootloader and the initial Apple logo.  No progress bar is displayed and control returns to Clover after a few seconds.  I attempted to restore the configuration back to 4096MB but the same problem occurs.  Can someone please help me to restore operation of this machine?

     

    <!--
    WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
    OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
      virsh edit MacinaboxHighSierra
    or other application using the libvirt API.
    -->

    <domain type='kvm'>
      <name>MacinaboxHighSierra</name>
      <uuid>9ae51750-1144-42ee-b5ce-3cd2286e59ed</uuid>
      <description>MacOS High Sierra</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="default.png" os="HighSierra"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>2</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/9ae51750-1144-42ee-b5ce-3cd2286e59ed_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='1' threads='2'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxHighSierra/Clover.qcow2'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxHighSierra/HighSierra-install.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/MacinaboxHighSierra/macos_disk.img'/>
          <target dev='hde' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='4'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:9c:78:36'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
  4. Hi,  I'm running an Unraid v6.7.1  server with a 2-disk redundant cache pool, using standard Seagate Barracuda HDDs. I want to replace both disks in the pool with Western Digital SSDs of the same size as the old HDDs.  Will it be possible for me to replace both disks at the same time in one operation, following the FAQ procedure for cache disk replacement?  Or must I do one at a time?

  5. Hi

     

    This is more of a presales question rather than a support question.  I want a backup sync solution that would copy files on the array to an unassigned device when inserted into the Unraid server.  I have only a little data (6TB) so a 6TB - 10TB hard drive would contain the backup.  I envision inserting the drive, mounting it, having Seafile copy all the required files, and then unmounting and removing the backup disk.  Would Seafile be a good choice for such a backup solution?

  6. So, what I did was:

    - uninstalled the Plex docker

    - stopped the array

    - moved the old plex directory out of the way

    - restarted the array

    - reinstalled the plex docker and included at least one media directory, pictures.  Also made sure /config was mapped to /mnt/cache/appdata/plex as suggested

    Opening the Webgui link resulted in success.  Plex is back up.
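    For anyone following along, the move-aside step was essentially this (a sketch; the path assumes the stock appdata location on the cache drive, so adjust to taste):

    ```shell
    # Move the old Plex config aside rather than deleting it, so a fresh
    # install starts clean but the old data stays recoverable.
    # APPDATA default assumes the stock cache-drive layout.
    APPDATA="${APPDATA:-/mnt/cache/appdata}"
    if [ -d "$APPDATA/plex" ]; then
        mv "$APPDATA/plex" "$APPDATA/plex.old.$(date +%Y%m%d)"
    fi
    ```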

     

    Thanks for the help.  I am much wiser about operating Plex now.

     

  7. Thanks Squid.  I did read above, but I interpreted the discussion as affecting a minority of users that didn't include me.  It was a coincidence, then, that my work on my Unraid server happened at exactly the same time as this app-server problem.  I'm new to Unraid and am still in the "getting familiar with all this" mode.

  8. Hi,

     

    I'm suddenly having connection problems with my CA plugin on my Unraid server.  When I click on the Apps menu button, the system hangs for a minute or so before producing the following error:

    Download of appfeed failed.
    
    Community Applications requires your server to have internet access. The most common cause of this failure is a failure to resolve DNS addresses. You can try and reset your modem and router to fix this issue, or set static DNS addresses (Settings - Network Settings) of 8.8.8.8 and 8.8.4.4 and try again.
    
    Alternatively, there is also a chance that the server handling the application feed is temporarily down. Switching CA to operate in Legacy Mode might temporarily allow you to still utilize CA.
    Last JSON error Recorded: JSON Error: Syntax error
    Docker FAQ
    
    Community Applications Version: 2018.03.30a

    The server is connected to the Internet and can access its resources, so the first suggestion is not the cause of the problem.  It might be that access to the application servers is down, but I can't imagine that being the case.  I've already tried an uninstall/reinstall, but the same symptoms are occurring.  How should I go about debugging this problem?
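    A couple of offline checks I can run first (a sketch; `getent` exercises the same system resolver the server would use, without needing a network lookup):

    ```shell
    # Which nameservers is the box actually configured with?
    cat /etc/resolv.conf
    # Does the system resolver work at all? localhost resolves locally,
    # so this succeeds even with no Internet access.
    getent hosts localhost
    ```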

  9. 1 hour ago, aptalca said:

     

    Plex is crashing on you. Check the plex media server log in the config folder. Likely a corrupted database, in which case you can restore from a backup (plex does automatic db backups). Or it could be permissions related. Either way, the log will tell you

    Appears to be the case:

    Mar 31, 2018 23:45:55.758 [0x146c44bff700] INFO - Plex Media Server v1.12.1.4885-1046ba85f - ubuntu docker x86_64 - build: linux-ubuntu-x86_64 ubuntu - GMT -04:00
    Mar 31, 2018 23:45:55.759 [0x146c44bff700] INFO - Linux version: 4.14.26-unRAID (#1 SMP PREEMPT Mon Mar 12 16:21:20 PDT 2018), language: en-US
    Mar 31, 2018 23:45:55.759 [0x146c44bff700] INFO - Processor AMD A8-5500 APU with Radeon(tm) HD Graphics    
    Mar 31, 2018 23:45:55.759 [0x146c44bff700] INFO - /usr/lib/plexmediaserver/Plex Media Server
    Mar 31, 2018 23:45:55.756 [0x146c50607800] DEBUG - BPQ: [Idle] -> [Starting]
    Mar 31, 2018 23:45:55.759 [0x146c50607800] DEBUG - Opening 20 database sessions to library (com.plexapp.plugins.library), SQLite 3.13.0, threadsafe=1
    Mar 31, 2018 23:45:55.864 [0x146c50607800] DEBUG - Running migrations.
    Mar 31, 2018 23:45:55.871 [0x146c50607800] ERROR - SQLITE3:0x10, 11, database corruption at line 59739 of [fc49f556e4]
    Mar 31, 2018 23:45:55.871 [0x146c50607800] ERROR - SQLITE3:0x10, 11, statement aborts at 10: [select max(max(metadata_items.changed_at),max(metadata_items.resources_changed_at)) from metadata_items] database disk image is malformed
    Mar 31, 2018 23:45:55.871 [0x146c50607800] ERROR - Database corruption: sqlite3_statement_backend::loadOne: database disk image is malformed
     

    I can start from scratch if necessary instead of attempting a DB repair/restore. Can I just completely uninstall the Plex instance and then delete all files left over in appdata after the uninstall?  That seems to me to be the most expedient solution but I don't know if I'm getting myself into more hot water if I try.
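    Before wiping everything, I understand SQLite can check the database file itself (a sketch; the path assumes the stock linuxserver/plex appdata layout, so override PLEX_DB if yours differs):

    ```shell
    # Let SQLite's own integrity check report on the Plex library DB
    # before deciding between a repair attempt and a clean start.
    PLEX_DB="${PLEX_DB:-/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db}"
    if [ -f "$PLEX_DB" ]; then
        # A healthy database prints just "ok"
        sqlite3 "$PLEX_DB" "PRAGMA integrity_check;"
    else
        echo "no database found at $PLEX_DB"
    fi
    ```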

  10. 3 hours ago, CHBMB said:

     

    Post your docker run command.  Link in my sig

     

    Please find the Run command below:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="unRAID" -e 'PUID'='99' -e 'PGID'='100' -e 'VERSION'='latest' -v '/mnt/user/appdata/plex':'/config':'rw' 'linuxserver/plex'
    c46d54c36af23e16f030061a03ac3d6419fc6fe6ec4a9124c57ebac80e2aa074
    
    The command finished successfully!

     

  11. Hi,

     

    I'm trying to debug a startup problem with Linuxserver.io's Plex docker on my Unraid v6.5.0 server.  I've installed the app and configured it on the Docker page.  When I launch it from the Docker page, it repeatedly outputs the message "Starting Plex Media Server.", never launching a listener on port 32400 as it should.  The log extract below shows the startup and shutdown of the app.

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...
    usermod: no changes
    
    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donations/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 30-dbus: executing...
    [cont-init.d] 30-dbus: exited 0.
    [cont-init.d] 40-chown-files: executing...
    [cont-init.d] 40-chown-files: exited 0.
    [cont-init.d] 50-plex-update: executing...
    
    
    
    #####################################################
    # Login via the webui at http://<ip>:32400/web #
    # and restart the docker, because there was no #
    # preference file found, possibly first startup. #
    #####################################################
    
    
    [cont-init.d] 50-plex-update: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    Starting Plex Media Server.
    Starting dbus-daemon
    [services.d] done.
    dbus[271]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
    
    Starting Plex Media Server.
    Starting Avahi daemon
    Found user 'avahi' (UID 106) and group 'avahi' (GID 107).
    Successfully dropped root privileges.
    avahi-daemon 0.6.32-rc starting up.
    No service file found in /etc/avahi/services.
    *** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
    
    *** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
    
    Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    New relevant interface docker0.IPv4 for mDNS.
    Joining mDNS multicast group on interface br0.IPv6 with address fe80::4f3:beff:fe49:2b09.
    New relevant interface br0.IPv6 for mDNS.
    Joining mDNS multicast group on interface br0.IPv4 with address 192.168.2.22.
    New relevant interface br0.IPv4 for mDNS.
    Joining mDNS multicast group on interface bond0.IPv6 with address fe80::6a05:caff:fe2a:1182.
    New relevant interface bond0.IPv6 for mDNS.
    Network interface enumeration completed.
    Registering new address record for 172.17.0.1 on docker0.IPv4.
    Registering new address record for fe80::4f3:beff:fe49:2b09 on br0.*.
    Registering new address record for 192.168.2.22 on br0.IPv4.
    Registering new address record for fe80::6a05:caff:fe2a:1182 on bond0.*.
    Server startup complete. Host name is NAS.local. Local service cookie is 4093093207.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    .
    .
    .
    .
    Starting Plex Media Server.
    Starting Plex Media Server.
    Starting Plex Media Server.
    Got SIGTERM, quitting.
    Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Leaving mDNS multicast group on interface br0.IPv6 with address fe80::4f3:beff:fe49:2b09.
    Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.2.22.
    Leaving mDNS multicast group on interface bond0.IPv6 with address fe80::6a05:caff:fe2a:1182.
    avahi-daemon 0.6.32-rc exiting.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

    I believe this situation is related to some servicing that I did on the Unraid server.  Plex had been installed and operating for a few days before.  I was working on the cache drive, and things went a bit wrong when I tried to remove it.  I had to follow the instructions on how to change the cache drive, and in the meantime I uninstalled the Plex docker from the system.  I've since corrected the cache problem, and now, after a clean reinstall, I'm getting this problem.
