rickybello

Members
  • Posts: 18
  • Joined

  • Last visited

Converted

  • Gender: Undisclosed

rickybello's Achievements

Noob (1/14)

Reputation: 0

  1. Hi all. I hope you can help me, because this is driving me crazy. I can't seem to mount disk images stored on my Unraid server. I have around three different ones, and I have been able to mount them in the past; my photo library is stored in them. My Finder just freezes and I don't get an error, although sometimes I get "The operation couldn't be completed. Operation not supported by the device." This is what I have tried so far: the sparse bundle images are stored on different drives, so it is not one particular drive. I transferred one of the images to an Extreme SSD and was able to mount it over USB with absolutely no problems, which tells me it is not image corruption and the files are fine. I was able to create a new image, save it, and so far use it to store something, but I'm still not able to mount my old ones. I've been using my Unraid for years with ups and downs, but this one is driving me nuts. (Mounting from the command line usually shows the real error instead of a frozen Finder window; a small sketch follows this list of posts.)
Device: Unraid OS version 6.10.3; MacBook Pro 2021 (M1) running the Ventura beta, but I was using Monterey before and still no dice.
Update: the issue only happens with Unraid 6.10. The moment I reverted to 6.9, I can mount the images again and the network doesn't crash. Any suggestions as to why the OS is doing this?
  2. Normally I use a sparse bundle disk image to save my photos, and I have been using it for months. I recently upgraded to version 6.10.3 and now I cannot mount the image. The issue is that I don't even get an error; it just spins. M1 MacBook Pro. I can read and write to the drive and can access the sparse bundle images from Krusader. Can anyone tell me if there is anything I should be doing? Sent from my iPhone using Tapatalk
  3. Can someone please let me know what is going on with my Unraid? Thanks. I changed two bad drives, but I am still getting the errors. medianas-diagnostics-20201211-0936.zip
  4. I'm having a problem: every day around 3 PM my cache drive (drives) appears to grow and grow for no apparent reason. The system cores also go all crazy, and this is now driving me crazy. I tried stopping containers and nothing changed. Here are some logs; if anyone can help, please do. (A small log-scanning sketch to confirm the daily pattern follows this list of posts.)
Oct 17 14:56:16 MediaNas nginx: 2019/10/17 14:56:16 [error] 6440#6440: *21243 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.100.1, server: , request: "GET /webGui/include/ShareList.php?compute=yes&path=Shares&scale=-1&number=.%2C&fill=ssz HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.43", referrer: "http://192.168.1.43/Shares"
Oct 17 14:59:41 MediaNas nginx: 2019/10/17 14:59:41 [error] 6440#6440: *21835 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.100.1, server: , request: "GET /webGui/include/ShareList.php?compute=yes&path=Shares&scale=-1&number=.%2C&fill=ssz HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.43", referrer: "http://192.168.1.43/Shares"
Oct 17 15:10:53 MediaNas nginx: 2019/10/17 15:10:53 [error] 6440#6440: *24229 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.100.1, server: , request: "POST /plugins/dynamix.docker.manager/include/ContainerManager.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.43", referrer: "http://192.168.1.43/Docker"
Oct 17 15:17:28 MediaNas nginx: 2019/10/17 15:17:28 [error] 6440#6440: *25943 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.100.1, server: , request: "GET /update.htm?cmd=/webGui/scripts/share_size&arg1=appdata&arg2=ssz1&csrf_token=10B35601F8CC2461 HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm?cmd=/webGui/scripts/share_size&arg1=appdata&arg2=ssz1&csrf_token=10B35601F8CC2461", host: "192.168.1.43", referrer: "http://192.168.1.43/Shares"
Oct 17 15:19:05 MediaNas nginx: 2019/10/17 15:19:05 [error] 6440#6440: *26504 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.100.1, server: , request: "POST /webGui/include/Boot.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.43", referrer: "http://192.168.1.43/Dashboard"
  5. Hello, I hope someone could take the time to help, and thanks in advance. I've been using Unraid for years without any problem whatsoever. Recently I upgraded the server to a new board and CPU (ASRock X399 Taichi | AMD Ryzen Threadripper 2920X) and also upgraded the PSU for the more demanding hardware. I already lost a drive, and now today I lost one of my cache devices (it now shows "CRC error count is 2"). I attached my system diagnostics; I hope someone could lend a hand. Thanks. medianas-diagnostics-20190929-2245.zip
  6. Hello, and thanks for reading. I have a problem when passing a GPU through to my Windows 10 VM: the audio has a glitchy sound, sometimes it even skips, and it is getting annoying. Before you help, here is what I've done: I tested two cards (MSI GT 710 and MSI GTX 1050); I created a new Windows VM and tested both, and it's still the same issue; I tried msifix.exe and also tried the registry tweak, and the device did show in Device Manager and in Unraid as being on the non-negative side. (A hedged sketch of the registry side of that tweak follows this list of posts.) Any help will be appreciated. Here is the XML: Untitled.pages
  7. Hello, and thanks in advance. I have always set up Windows 10 VMs with the default settings and it has always worked fine, but I deleted my old Windows VM and created a new one with default settings and set the install disk to the cache, and now disk utilization is always 99%. I followed the YouTube video to enable the best performance on the disk side, but still nothing. Any input or suggestions? XML, diagnostics and some pictures:
Model: Custom
M/B: Gigabyte Technology Co., Ltd. - H170N-WIFI-CF
CPU: Intel® Core™ i7-6700K CPU @ 4.00GHz
HVM: Enabled
IOMMU: Enabled
Cache: 256 kB, 1024 kB, 8192 kB
Memory: 16 GB (max. installable capacity 64 GB)
<domain type='kvm' id='3'>
<name>Windows 10</name>
<uuid>816b5966-73e7-0bbb-6285-14b47ba3e13e</uuid>
<metadata>
<vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
</metadata>
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<memoryBacking>
<nosharepages/>
</memoryBacking>
<vcpu placement='static'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='4'/>
<vcpupin vcpu='3' cpuset='5'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
<nvram>/etc/libvirt/qemu/nvram/816b5966-73e7-0bbb-6285-14b47ba3e13e_VARS-pure-efi.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vendor_id state='on' value='none'/>
</hyperv>
</features>
<cpu mode='host-passthrough'>
<topology sockets='1' cores='2' threads='2'/>
</cpu>
<clock offset='localtime'>
<timer name='hypervclock' present='yes'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/local/sbin/qemu</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='writeback'/>
<source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
<backingStore/>
<target dev='hdc' bus='virtio'/>
<boot order='1'/>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/mnt/user/isos/Windows/Windows 10.iso'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<readonly/>
<boot order='2'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/mnt/user/isos/virtio-win-0.1.126-2.iso'/>
<backingStore/>
<target dev='hdb' bus='ide'/>
<readonly/>
<alias name='ide0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:21:9e:6b'/>
<source bridge='br0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/0'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/0'>
<source path='/dev/pts/0'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Windows 10/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
</source>
<alias name='hostdev1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x00' slot='0x1f' function='0x6'/>
</source>
<alias name='hostdev2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='no'>
<source>
<vendor id='0x093a'/>
<product id='0x2521'/>
<address bus='1' device='3'/>
</source>
<alias name='hostdev3'/>
<address type='usb' bus='0' port='1'/>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='no'>
<source>
<vendor id='0x1b1c'/>
<product id='0x1b36'/>
<address bus='1' device='4'/>
</source>
<alias name='hostdev4'/>
<address type='usb' bus='0' port='2'/>
</hostdev>
<hostdev mode='subsystem' type='usb' managed='no'>
<source>
<vendor id='0x20a0'/>
<product id='0x0006'/>
<address bus='1' device='8'/>
</source>
<alias name='hostdev5'/>
<address type='usb' bus='0' port='3'/>
</hostdev>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</memballoon>
</devices>
<seclabel type='none' model='none'/>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+100</label>
<imagelabel>+0:+100</imagelabel>
</seclabel>
</domain>
medianas-diagnostics-20170422-2054.zip
  8. The principle remains the same; it still doesn't work. Question: the file downloads and gets renamed with no problem. By pointing it to Books, will it move the file as well? Could you provide me with a configuration example? Sent from my iPhone using Tapatalk
  9. I have read your title; however, I have CouchPotato, Sonarr and Headphones all post-processing to their corresponding files.
  10. I also have a problem: the books download to the correct folder in SABnzbd, but the Docker container doesn't pick the books up automatically.
  11. Retune (https://play.google.com/store/apps/details?id=com.squallydoc.retune&hl=en) works fine... when the container is working. I had a working setup I used occasionally several weeks ago; it seems to be broken at the moment. I don't know if it's related to the current support case or not. I downloaded Retune and admittedly didn't spend too long with it, but could only get as far as getting a PIN on my Android screen and a key in my log. What do I need to do from there? I use a text editor app, check for the Retune name to appear in the log and the PIN on the screen, and save the file as a .remote file. (A small sketch of this pairing step follows this list of posts.)
  12. Here is the config page. I also reinstalled the app, and this is the initial log. The first time after installation I used Retune for one minute and it worked, with only one album; after that I ran Remote on the iPhone and it all went down again. First install log:
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/index.php/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-dbus: executing...
[cont-init.d] 30-dbus: exited 0.
[cont-init.d] 40-config: executing...
[cont-init.d] 40-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
dbus[245]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
[ LOG] main: Forked Media Server Version 24.1 taking off
[ LOG] main: mDNS init
[ LOG] mdns: Avahi state change: Client connecting
[ LOG] db: Could not check database version, trying DB init
[ LOG] db: Database OK with 0 active files and 6 active playlists
[ LOG] mdns: Failed to create service browser: Bad state
[ LOG] raop: Could not add mDNS browser for AirPlay devices
[ LOG] mdns: Failed to create service browser: Bad state
[ LOG] cast: Could not add mDNS browser for Chromecast devices
[ LOG] mdns: Failed to create service browser: Bad state
[FATAL] remote: Could not browse for Remote services
[FATAL] main: Remote pairing service failed to start
[ LOG] main: MPD deinit
[ LOG] scan: Bulk library scan completed in 0 sec
[ LOG] main: HTTPd deinit
[ LOG] main: Player deinit
[ LOG] main: File scanner deinit
[ LOG] main: Cache deinit
[ LOG] main: Worker deinit
[ LOG] main: Database deinit
[ LOG] main: mDNS deinit
[ LOG] main: Exiting.
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.6.32 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Loading service file /etc/avahi/services/sftp-ssh.service.
Loading service file /etc/avahi/services/ssh.service.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
socket() failed: Address family not supported by protocol
Failed to create IPv6 socket, proceeding in IPv4 only mode
socket() failed: Address family not supported by protocol
Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
New relevant interface virbr0.IPv4 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.43.
New relevant interface br0.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for 192.168.122.1 on virbr0.IPv4.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.43 on br0.IPv4.
[ LOG] main: Forked Media Server Version 24.1 taking off
[ LOG] main: mDNS init
[ LOG] mdns: Avahi state change: Client registering
[ LOG] db: Now vacuuming database, this may take some time...
[ LOG] db: Database OK with 0 active files and 6 active playlists
[ LOG] scan: Bulk library scan completed in 0 sec
Server startup complete.
Host name is MediaNas.local.
Local service cookie is 3090305525.
[ LOG] mdns: Avahi state change: Client running
Service "MediaNas" (/etc/avahi/services/ssh.service) successfully established.
Service "MediaNas" (/etc/avahi/services/sftp-ssh.service) successfully established.
Second log; those last four lines just keep repeating and repeating. I hope someone can help; I really want to make this work.
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/index.php/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-dbus: executing...
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
-------------------------------------
[linuxserver.io ASCII-art banner]
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/index.php/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-dbus: executing...
[cont-init.d] 30-dbus: exited 0.
[cont-init.d] 40-config: executing...
[cont-init.d] 40-config: exited 0.
[cont-init.d] done.
[services.d] starting services
Process 244 died: No such process; trying to remove PID file. (/var/run/avahi-daemon//pid)
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.6.32 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
dbus[243]: [system] org.freedesktop.DBus.Error.AccessDenied: Failed to set fd limit to 65536: Operation not permitted
[services.d] done.
Loading service file /etc/avahi/services/sftp-ssh.service.
Loading service file /etc/avahi/services/ssh.service.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
socket() failed: Address family not supported by protocol
Failed to create IPv6 socket, proceeding in IPv4 only mode
socket() failed: Address family not supported by protocol
Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
New relevant interface virbr0.IPv4 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface br0.IPv4 with address 192.168.1.43.
New relevant interface br0.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for 192.168.122.1 on virbr0.IPv4.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.43 on br0.IPv4.
[ LOG] main: Forked Media Server Version 24.1 taking off
[ LOG] main: mDNS init
[ LOG] mdns: Avahi state change: Client registering
[ LOG] db: Now vacuuming database, this may take some time...
[ LOG] db: Database OK with 11 active files and 6 active playlists
[ LOG] scan: Bulk library scan completed in 0 sec
Server startup complete.
Host name is MediaNas.local.
Local service cookie is 3618831226.
[ LOG] mdns: Avahi state change: Client running
Server startup complete.
Host name is MediaNas.local. Local service cookie is 3618831226. [ LOG] mdns: Avahi state change: Client running Service "MediaNas" (/etc/avahi/services/ssh.service) successfully established. Service "MediaNas" (/etc/avahi/services/sftp-ssh.service) successfully established. [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... 
[ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... 
[ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off [ LOG] main: mDNS init [ LOG] mdns: Avahi state change: Client running [ LOG] db: Now vacuuming database, this may take some time... [ LOG] db: Database OK with 11 active files and 6 active playlists [ LOG] scan: Bulk library scan completed in 0 sec [ LOG] main: Forked Media Server Version 24.1 taking off
  13. Is there any specific installation procedure you followed? Like I said, I'm able to link the Remote app, but I don't see my files (in my case I have only one song, as I am testing).
  14. I also have an Android device and it doesn't work. Sent from my iPhone using Tapatalk
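
Command-line mount sketch referenced in post 1. This is a minimal sketch, assuming the Unraid share is already mounted over SMB under /Volumes; the share name and image name below are placeholders. Finder hides the underlying error when it hangs, while macOS's hdiutil usually prints it.

#!/usr/bin/env python3
"""Try mounting the sparse bundle from the command line instead of Finder."""
import subprocess

# Placeholder path: an SMB-mounted Unraid share and one of the sparse bundle images on it.
IMAGE = "/Volumes/photos/library.sparsebundle"

# hdiutil reports the actual mount error instead of freezing like Finder does.
result = subprocess.run(["hdiutil", "attach", IMAGE], capture_output=True, text=True)
print("exit code:", result.returncode)
print(result.stdout)
print(result.stderr)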
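Log-scanning sketch referenced in post 4. A rough sketch, assuming you have saved a copy of the server's syslog (e.g. from the diagnostics zip) as syslog.txt, which is a placeholder name. It counts the nginx "upstream timed out" entries per hour so you can check whether they really cluster around 15:00 each day.

#!/usr/bin/env python3
"""Count nginx 'upstream timed out' syslog entries per hour."""
from collections import Counter

hits = Counter()
with open("syslog.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "upstream timed out" in line:
            # Syslog lines start like: 'Oct 17 14:56:16 MediaNas nginx: ...'
            parts = line.split()
            if len(parts) >= 3:
                hour = parts[2].split(":")[0]   # '14' from '14:56:16'
                hits[f"{parts[0]} {parts[1]} {hour}:00"] += 1

for bucket, count in sorted(hits.items()):
    print(bucket, count)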
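Registry sketch referenced in post 6. This is only a sketch of the registry side of the MSI interrupt tweak (the MSISupported value that MSI utilities typically toggle), run inside the Windows guest as Administrator. The PCI device instance path is a placeholder; copy the real one for the GPU's HDMI audio function from Device Manager (Details > Device instance path). Note that enabling MSI for a device whose driver does not support it can stop the device from starting, so keep a record of the original state.

#!/usr/bin/env python3
"""Set MSISupported=1 for one PCI device (run as Administrator in the Windows guest)."""
import winreg

# Placeholder: the device instance path of the GPU's HDMI audio function.
DEVICE_INSTANCE = r"PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XXXX"

KEY_PATH = (
    r"SYSTEM\CurrentControlSet\Enum" + "\\" + DEVICE_INSTANCE +
    r"\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties"
)

# MSISupported = 1 switches the device from line-based interrupts to MSI.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MSISupported", 0, winreg.REG_DWORD, 1)

print("MSISupported set to 1 -- reboot the VM for the change to take effect.")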
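Pairing sketch referenced in post 11. A minimal sketch of the .remote-file step described there, assuming the usual forked-daapd pairing format of the remote's name on the first line and the 4-digit PIN on the second (newer versions may only need the PIN; check the docs for your version). The library path, remote name and PIN below are placeholders.

#!/usr/bin/env python3
"""Write a pairing file into the library so forked-daapd can complete Remote/Retune pairing."""
from pathlib import Path

LIBRARY = Path("/mnt/user/music")      # placeholder: the library path mapped into the container
REMOTE_NAME = "rickybello's phone"     # placeholder: the remote name reported in the container log
PAIRING_PIN = "1234"                   # placeholder: the PIN shown by Retune/Remote

pair_file = LIBRARY / "pair.remote"
pair_file.write_text(f"{REMOTE_NAME}\n{PAIRING_PIN}\n", encoding="utf-8")
print(f"Wrote {pair_file} -- watch the container log for the pairing result.")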