About Vynce

  • Rank
    Advanced Member


  • Location
    Midwest US
  1. That sounds exactly like the issue I saw. Try moving all albums that include jpg artwork to another folder temporarily and see if that fixes the Remote app issue.
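     One way to locate those albums before moving them, as a minimal sketch (assumes GNU find; /path/to/music is a placeholder for the actual library root, and this only catches separate artwork files, not artwork embedded in the audio files):

     ```shell
     # Print each album folder that contains a separate artwork.jpg file,
     # deduplicated, so those folders can be moved aside temporarily.
     find /path/to/music -iname 'artwork.jpg' -printf '%h\n' | sort -u
     ```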
  2. The default username/password for forked-daapd seems to be admin/unused, but there's no actual web interface there in 25.0. A lot of the commits since 25.0 look like they're related to adding a web interface, so the web-interface instructions are probably for that upcoming interface, or perhaps left over from an earlier one that was removed from this fork (?).
  3. Just as a heads up for anyone else trying to get this working: forked-daapd 25.0 seems to crash repeatedly when the iOS Remote app requests album artwork and the artwork is stored in jpg format -- artwork in png format works fine. It doesn't matter whether the artwork is embedded in the audio files or stored as a separate artwork.jpg file in each album folder. I don't see any obvious issues in the logs after turning on debug logging, and I haven't been able to find any crash logs anywhere in the docker container. I tried rolling back to forked-daapd 24.2 by pulling the 115 tag, but I couldn't get remote pairing to work there.
     The workaround I eventually came up with was to export each album's artwork and save it as artwork.png in each album folder. This works because forked-daapd looks for an artwork.png/jpg file in the folder before trying to extract any embedded artwork from the audio files, so the audio files can still contain embedded jpg artwork.
     There have been a few artwork-related changes in forked-daapd since 25.0, so I was wondering if it's possible to build the latest source inside the container to test it out? I didn't want to open a new issue in the forked-daapd project without first checking whether the issue is still present at the tip of master.
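     The export step can be scripted. A sketch, assuming ImageMagick's convert is available and /path/to/music is a placeholder; it only prints the commands (a dry run), so remove the echo to actually convert:

     ```shell
     # For every album folder with an artwork.jpg, print the ImageMagick command
     # that would write an artwork.png alongside it.
     find /path/to/music -iname 'artwork.jpg' | while IFS= read -r jpg; do
       echo convert "$jpg" "${jpg%.*}.png"
     done
     ```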
  4. Looks like 5.4.9 was pulled due to some upgrade issues:
     https://community.ubnt.com/t5/UniFi-Updates-Blog/UniFi-5-4-9-Stable-has-been-released/ba-p/1800599
     https://community.ubnt.com/t5/UniFi-Wireless/5-4-9-get-pulled/m-p/1818020
  5. Any idea why the latest build downgraded from 5.4.9 to 5.3.11?
  6. What I was seeing is that the main CrashPlan port was getting set to 0 and the service port set to 1 by 01_config.sh whenever the container was started. This would not occur if you set the TCP_PORT_4242 and TCP_PORT_4243 variables in the docker config; I did not have those variables set.
     grep -nHE '(location|servicePort)' /mnt/cache/appdata/crashplan/conf/my.service.xml
     /mnt/cache/appdata/crashplan/conf/my.service.xml:9: <location></location>
     /mnt/cache/appdata/crashplan/conf/my.service.xml:18: <servicePort>1</servicePort>
     I launched a bash shell in the container (docker exec -it CrashPlan bash) and made the edits to 01_config.sh described in my previous post. This allows CrashPlan to start with the correct default ports, and I'm able to connect to it again now.
  7. There are a couple of errors in the sed commands in 01_config.sh that cause CrashPlan to be configured with bad ports, making it unreachable: ${TCP_PORT_4242} on line 46 needs to be replaced with ${SERVICE_PORT}, and ${TCP_PORT_4243} on line 47 needs to be replaced with ${BACKUP_PORT}.
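     The mechanism can be sketched minimally (the exact sed lines in 01_config.sh may differ; SERVICE_PORT=4242 and the servicePort element here are illustrative): TCP_PORT_4242 is only set when the user maps the port in the docker config, so substituting it while unset writes an empty port into my.service.xml.

     ```shell
     #!/bin/sh
     # Sketch of the bug: SERVICE_PORT is populated, TCP_PORT_4242 is not.
     SERVICE_PORT=4242              # assumption: always set by the init scripts
     # TCP_PORT_4242 intentionally left unset, as in the report

     tmp=$(mktemp)
     echo '<servicePort>4242</servicePort>' > "$tmp"

     # Buggy substitution: the unset variable expands to an empty string
     sed -i "s|<servicePort>.*</servicePort>|<servicePort>${TCP_PORT_4242}</servicePort>|" "$tmp"
     cat "$tmp"    # <servicePort></servicePort>

     # Fixed substitution: use the variable that actually holds the port
     sed -i "s|<servicePort>.*</servicePort>|<servicePort>${SERVICE_PORT}</servicePort>|" "$tmp"
     cat "$tmp"    # <servicePort>4242</servicePort>
     ```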
  8. VNC appears to be broken with the most recent update:
     # docker exec -it CrashPlan bash
     root@unRAID:/# /etc/service/tigervnc/run
     [...]
     Sat Oct 1 17:34:27 2016
     vncext: VNC extension running!
     vncext: Listening for VNC connections on all interface(s), port 4239
     vncext: created VNC server for screen 0
     XKB: Failed to compile keymap
     Keyboard initialization failed. This could be a missing or incorrect setup of xkeyboard-config.
     Fatal server error: Failed to activate core devices.
     root@unRAID:/# /etc/service/novnc/run
     Must have netstat installed
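     The novnc run script refuses to start without netstat. A hedged sketch for checking inside the container; it only prints the install command (installing assumes a Debian/Ubuntu base image, which the post does not confirm):

     ```shell
     #!/bin/sh
     # Check whether a command exists on PATH.
     has_cmd() { command -v "$1" >/dev/null 2>&1; }

     if has_cmd netstat; then
       echo "netstat present"
     else
       # On Debian/Ubuntu-based images netstat ships in the net-tools package.
       echo "netstat missing; on Debian/Ubuntu: apt-get update && apt-get install -y net-tools"
     fi
     ```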
  9. Thanks guys! Back up and running again
  10. Running unRAID 6.1.6. Six disks total: 2TB parity, a couple more 2TB data drives, a couple of 500GB data drives, and a cache/app/docker drive.
      Some write failures occurred on disk1 (2TB), so unRAID took it offline/redballed the drive. I bought a 4TB replacement and started preclearing it. Some data was written to the emulated disk1 after it was taken offline. Data was also written to disk2.
      At some point the USB flash drive glitched and became unmounted (I may have bumped it). Lots of FAT read errors like these in the syslog:
      FAT-fs (sda): Directory bread(block 520) failed
      FAT-fs (sda): Directory bread(block 521) failed
      FAT-fs (sda): Directory bread(block 522) failed
      FAT-fs (sda): Directory bread(block 523) failed
      FAT-fs (sda): Directory bread(block 524) failed
      FAT-fs (sda): Directory bread(block 525) failed
      FAT-fs (sda): Directory bread(block 526) failed
      I rebooted the server and it came up without any assigned disks. super.dat is 0 bytes. There are super.old and super.prev files, but their last modification times are back in 2013. unRAID thinks it's creating an initial configuration. The syslog concurs:
      md: unRAID driver 2.5.3 installed
      md: could not read superblock from /boot/config/super.dat
      md: initializing superblock
      mdcmd (1): import 0 0,0
      mdcmd (2): import 1 0,0
      mdcmd (3): import 2 0,0
      mdcmd (4): import 3 0,0
      mdcmd (5): import 4 0,0
      I've reassigned all the disks to their previous slots. I think the parity disk is assigned correctly, but I'm not 100% sure. Is there an easy way to tell which disk doesn't have a filesystem?
      I don't think I want to start the array now, because it looks like that would rebuild parity based on the incomplete contents of disk1. Telling it to trust parity doesn't seem quite right either, since it would also trust the contents of disk1 (which it shouldn't). How do I tell unRAID that disk1 is bad and should be rebuilt (preferably onto the new drive)?
      Current syslog is attached. I was also able to save off all the dmesg output from my terminal that shows the flash drive errors before I rebooted; I can attach that as well if it could be helpful. syslog.txt
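      On the "which disk doesn't have a filesystem" question: running blkid against each data partition would answer it directly. A dependency-free sketch that checks for the XFS and ReiserFS magic bytes (two of the filesystems unRAID 6.x supports for the array); the /dev/sdX1 names are placeholders for the actual array members:

      ```shell
      #!/bin/sh
      # Report whether a partition (or image file) carries an XFS or ReiserFS
      # signature. XFS puts "XFSB" at offset 0; the ReiserFS superblock sits at
      # byte 65536 with its "ReIsEr..." magic at offset 52 within it (65588).
      check_fs() {
        dev="$1"
        if dd if="$dev" bs=1 count=4 2>/dev/null | grep -q 'XFSB'; then
          echo "$dev: xfs signature"
        elif dd if="$dev" bs=1 skip=65588 count=10 2>/dev/null | grep -q 'ReIsEr'; then
          echo "$dev: reiserfs signature"
        else
          echo "$dev: no filesystem signature found"
        fi
      }

      # Placeholder device names; substitute the real array partitions.
      for d in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
        check_fs "$d"
      done
      ```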
  11. Ah, good to know. It's just a bit confusing since all the documentation mentions '/logs' (with the trailing 's').
  12. It looks like there's a typo in https://github.com/gfjardim/docker-containers/blob/templates/needo/plexWatch.xml#L43 : <ContainerDir>/log</ContainerDir> is missing the trailing 's' in '/logs'.