xthursdayx

Community Developer
Everything posted by xthursdayx

  1. I'm running into the same problem while trying to transfer files from my MacBook Pro to my Unraid server via SMB, either directly through Finder or using rsync or Carbon Copy Cloner. I am also using 6.10.0-rc2. When I try to make transfers they eventually slow and then freeze, and afterwards my Unraid SMB shares are no longer viewable/connectable from my MacBook. My server remains connectable via SSH and the WebGUI.
  2. Yeah, this is kind of the wall I've run into unfortunately. As you noted, I'm not an expert, and while I was able to get this container working with Matrix for video calls in the past, troubleshooting other use cases is beyond the scope of what I have time to dig into. Moreover, the development of Coturn in general is pretty specialized and slow, mostly undertaken by one dev, and dockerized versions in particular have been difficult to develop and troubleshoot. I may try to dig into this again in the future and create my own docker image (and a new Unraid template), but for now it's on a bit of an indefinite hold.
  3. Did you ever figure out this issue? I'm having the same problem, but not sure why. @lnxd have you run into this issue before?
  4. I've had this with a variety of my docker containers in Unraid. I'm not sure why, but sometimes the Dockerman interface just seems to lose track of the icon. Anyway, you can try deleting your container and reinstalling it with a new template (inputting the same settings) and see if that re-pulls the icon. That may help, but honestly, I've found sometimes they disappear and then reappear without a whole lot of rhyme or reason.
  5. Hi, I used to have the iOS app working well, but for some reason the interface stopped updating my server. I deleted the server from the app but am now unable to re-add my server, either by manual address entry or QR code. I don't get any error code, just a perpetually spinning wheel. My firewall is set up to allow traffic from my phone to the server. Any idea what else might be preventing the connection, or a way to access a debug log?
  6. I created a version with a dark theme installed. Change your container tag to xthursdayx/gpodder-docker:dark in your template and pull the image (see the example below). You should then have the dark theme installed.
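For reference, pulling the dark-themed tag manually looks like this (on Unraid you would normally just change the Repository field in the template to the same tag and hit Apply):

```bash
# Pull the dark-theme build of the gPodder image named above.
docker pull xthursdayx/gpodder-docker:dark
```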
  7. The Ferdi-server container parameters and Community Apps template have been updated. Please replace the ENV variable EXTERNAL_DOMAIN with APP_URL. APP_URL can be set either to a local HTTP URL if your Ferdi-server is not available externally (e.g. http://192.168.2.1:3333) or to an external URL (e.g. https://ferdi.my.domain); see the example below.
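As an illustration, the change is just a swap of one environment variable in your template or run command (the image name below is an assumption; everything else about your container stays the same):

```bash
# Recreate the container with APP_URL in place of the old EXTERNAL_DOMAIN.
# Was: -e EXTERNAL_DOMAIN=ferdi.my.domain
docker run -d --name ferdi-server \
  -e APP_URL=https://ferdi.my.domain \
  getferdi/ferdi-server
```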
  8. Sorry for the difficulties! gPodder should recognize the existing podcasts in your /downloads directory once you add the podcast URLs back into gPodder. Sorry for the extra step of having to re-add the URLs, but you shouldn't have to re-download your existing episodes at least.
  9. @Ryonez @musicking and @stefan marton since the updated and rebased container was just pushed to Docker Hub, please try running Ferdi-server with the updated parameters described on Page 1 of this thread. The new Ferdi-server Unraid template with updated defaults should be parsed and available in Community Apps soon. If you are migrating an existing Ferdi-server container please see the migration instructions above. However, to be honest, I've found it smoother to just roll a new container from scratch after this update. Please let me know if the new container fixes the issues you're having, or if you're still having problems. Cheers!
  10. The new version of the Ferdi-server docker image has been pushed to Docker Hub.

Existing users, please note: the latest updates to Ferdi-server and the Ferdi-server Docker image introduce changes to the default SQLite database name and location, as well as the internal container port. The new container port is 3333. If you would like to keep your existing SQLite database, you will need to add the DATA_DIR variable and set it to /app/database, to match your existing data volume. You will also need to change the DB_DATABASE variable to development to match your existing database. Please see the parameters in the Migration section below.

Migrating from an existing Ferdi-server: if you are an existing Ferdi-server user using the built-in `SQLite` database, you should include the following variables:

| Parameter | Description |
| --- | --- |
| `-p 3333:3333` | existing Ferdi-server users will need to update their container port mappings from `80:3333` to `3333:3333` |
| `-e DB_DATABASE=development` | existing Ferdi-server users who use the built-in SQLite database should use the database name `development` |
| `-e DATA_DIR=/app/database` | existing Ferdi-server users who use the built-in SQLite database should add this environment variable to ensure data persistence |
| `-v <path to data on host>:/app/database` | existing Ferdi-server users who use the built-in SQLite database should use the volume name `/app/database` |

If you are an existing Ferdi-server user who uses an external database or different variables for the built-in `SQLite` database, you should update your parameters accordingly. For example, if you are using an external MariaDB or MySQL database your unique parameters might look like this:

| Parameter | Description |
| --- | --- |
| `-e DB_CONNECTION=mysql` | for specifying the database being used |
| `-e DB_HOST=192.168.10.1` | for specifying the database host machine IP |
| `-e DB_PORT=3306` | for specifying the database port |
| `-e DB_USER=ferdi` | for specifying the database user |
| `-e DB_PASSWORD=ferdipw` | for specifying the database password |
| `-e DB_DATABASE=adonis` | for specifying the database to be used |
| `-v <path to database>:/app/database` | this will store Ferdi-server's database on the docker host for persistence |
| `-v <path to recipes>:/app/recipes` | this will store Ferdi-server's recipes on the docker host for persistence |

**In either case, please be sure to pass the correct variables to the new Ferdi-server container in order to maintain access to your existing database** (see the illustrative run command after this post). For more information please check out the Docker README.md on GitHub: https://github.com/getferdi/server/tree/master/docker
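To make the migration concrete, here is a minimal sketch of a run command for an existing built-in SQLite setup; the image name and host paths are placeholders, so substitute your own appdata locations:

```bash
# Migrated Ferdi-server container keeping the existing built-in SQLite database.
# Image name and host paths are illustrative; adjust to your own setup.
docker run -d --name ferdi-server \
  -p 3333:3333 \
  -e DB_DATABASE=development \
  -e DATA_DIR=/app/database \
  -v /mnt/user/appdata/ferdi-server:/app/database \
  -v /mnt/user/appdata/ferdi-server/recipes:/app/recipes \
  getferdi/ferdi-server
```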
  11. You shouldn't see any ports (or question marks) associated with your RoonServer container, since no ports should be manually assigned to it and the container should be running in host mode. Based on your screenshot, you do have your container network running as Host, which is correct.

As I mentioned previously, Roon's developers have stated that Roon runs on ports 9003/UDP and 9100-9200/TCP; however, they've also mentioned that Roon may randomly assign TCP ports, and other troubleshooting has indicated that Roon conflicts with Emby due to RoonServer using port 1900, despite the devs never mentioning this. The way Roon manages port allotment is very annoying, and it's frustrating that Roon's devs can't (or won't) be more clear about what ports RoonServer uses. I looked into changing RoonServer's ports when I was considering rolling my own RoonServer image (to replace Steef's), but it is not possible to change the ports RoonServer uses. You can map your RoonServer container ports to different host ports if you run the container in bridge mode; however, there is no way to tell your various Roon endpoints and apps that your CORE server is running on a different port, so they will not be able to find your Roon CORE server if you do this.

I don't know why your container shows three question marks for the port. In my experience, there should not be any ports listed at all, since the container is running in host mode. See my configuration; you'll notice no ports or local container IP listed.

The only way to make sure that your RoonServer ports do not conflict with any other running container on your UNRAID server is to make sure not to map any other container's ports to 1900, 9003, or 9100-9200 (if you want to double-check for conflicts, see the quick check after this post). My suggestion is to map your Emby DLNA port to another port, or just remove it as you already did. If you choose to stick with removing it, I'd also turn off DLNA and automatic port mapping within Emby. (In Emby, under Devices > DLNA, turn off "Enable DLNA Play To" and "Enable DLNA server". Under Advanced, turn off "Enable automatic port mapping".) Roon plays well with Plex, which also uses port 1900 for UPnP discovery, so the problem seems to be with Emby rather than Roon. Apparently this is a known issue with Roon and Emby. You can read more about it here. Your safest bet for now is to keep port 1900 open, along with 9003 and 9100-9200.

You should have been able to see which devices needed to be updated when this update nag appeared. I assume it probably included your Roon CORE server, and if so, you would have been able to update it via your iPad Roon Remote app. You can't check whether this was the case retroactively though.

The data path contains Roon's database, logs, and cache files (including those for RoonServer, RAATServer, and RoonGoer). Deleting this directory alone will only cause you more problems. If you want to reinstall RoonServer, my suggestion is that you delete your entire appdata folder, delete your RoonServer container (and the RoonServer image), and reinstall using the template in Community Apps.

Fix Common Problems will not recognize or fix parity issues. Your 836 parity errors are likely the result of an unclean shutdown/power loss (or more than one), or of impending drive failure, though this should have been apparent when you ran your SMART tests. You could try running extended SMART tests on your drives, but you should also run another parity check to see if it completes without errors.

Either way, I'm not sure if this relates to any issues you're having with RoonServer, since it seems that port 1900 may be the culprit. However, if you still have issues with Roon data loss, it might be that one of your drives is failing and you're having data corruption, or that your server is rebooting without a clean shutdown, which is causing data corruption. As I noted previously, though, I don't think this is an issue with UNRAID, but more likely an issue with your server itself (either the hardware or the motherboard BIOS). I hope this helps you get RoonServer running smoothly.
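If you want to verify nothing else is sitting on those ports, one quick way on the Unraid host is to list the current listeners (a sketch; it assumes the ss utility, which ships with Unraid):

```bash
# Show anything listening on the ports RoonServer is known to use:
# 1900 (UPnP discovery), 9003/UDP, and 9100-9200/TCP.
ss -tulpn | grep -E ':(1900|9003|91[0-9][0-9]|9200)\b'
```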
  12. This container doesn't have a web interface because it is just for the Roon CORE server, which can only be accessed and controlled by Roon Remote or Roon (their terminology is a bit confusing and imprecise) running on another computer or as an app on a phone or tablet, like you're doing. When you say that you updated Roon via your iPad, do you mean that you updated the Roon CORE (running on Unraid), or the Roon Remote app on your iPad? Or both?

If your install is broken then the only thing to do is probably to delete your Roon Server appdata folder, reinstall this Roon Server container and create the library from scratch. Unfortunately I'm not sure that we'll be able to help you figure out what is going wrong. Updating works fine for me using this container and my Unraid template, and it doesn't seem like anyone else has been having trouble lately. One thing I'd note is that the trouble you're having is similar to the trouble I had early in my attempts to get this container running on Unraid, before I properly separated the /app and /data directories within the Roon Server appdata folder. If you do decide to delete your instance and start from scratch, make sure to install Roon from Community Apps using the current template, and make sure to follow the template directory structure. If you've already done this and have your /app and /data directories properly set up, then the updates should be working. If they aren't, then I wonder if the problem is either related to your local network or if there is some issue with persistence on your Unraid server. I can't think what else would be the problem.

The fact that you had updating problems before (as you mentioned here) but were able to successfully update (as mentioned here and here) and previously got things working again by uninstalling and reinstalling the Roon software on a separate remote PC is confusing to me, and it seems to indicate that the problem is with your network rather than the RoonServer container itself, but I'm not sure.

I haven't personally seen evidence to support your concerns about Unraid's "robustness", but I also haven't run into the problem of losing my music files before (or any other files) and haven't seen anyone else have this issue with Unraid (or Roon) either. I've had a drive fail before, but Unraid's parity system was able to reconstruct the data without any issues when I replaced the drive. My gut feeling is that you may still have an issue with your container settings, or are having more drive problems. Are you using a cache drive (preferably an SSD) to store your appdata directory? Outside of that it could be your local network settings (as I mentioned above), but it's hard to know. You can post your RoonServer container settings here if you'd like to confirm that they're correct.

My best advice, other than that, is to check the SMART readings of your drive(s), use an SSD cache for your appdata directory, and check on your network traffic, making sure that your router/firewall is allowing traffic between your remote PC(s) and Roon apps and your RoonServer container/Unraid server on the ports I mentioned previously. Best of luck getting to the root of your problems.
  13. Hey, thanks for that @psycho_asylum. I've found the same result as you — no problem downloading (or adding to seed) files or directories with non-ASCII characters. My main concern is about maintaining proper file and directory names for items that are long-term seeding but which have those non-ASCII characters.
  14. @binhex Is there any way to get this docker container to work with non-ASCII characters in things like directory names, for example? I have a number of torrents seeding that have East Asian characters in their file names as well as other non-ASCII characters (ü, å, etc.), and currently when I add them to ruTorrent those characters are just skipped in the resulting filenames and/or directories. I previously ran ruTorrent using the Linuxserver docker image and didn't have a problem with these characters, so I assume that the issue isn't with ruTorrent (or rTorrent) but rather has to do with the locale encoding of your Arch-based image? Since your base image has LANG set to en_GB.UTF-8 those characters should work, but perhaps some of the locale files are missing? Or it could be some problem with my container settings, but my setup is pretty simple. Any ideas? Thanks!
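For anyone debugging the same thing, a quick way to confirm what locale the container is actually using (a sketch; the container name here is an assumption, so substitute your own):

```bash
# Check the active locale inside the running container and whether the
# UTF-8 locale it references has actually been generated in the image.
docker exec binhex-rtorrentvpn locale
docker exec binhex-rtorrentvpn locale -a | grep -i utf
```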
  15. I'll contact Steef about it to make sure he's planning to update the core image. If not, I'll roll my own and update here. I'll report back here once I know though. Edit: the necessary libraries are already installed in this image, so no changes are necessary. This has already been tested by someone running the beta update. Please see this Github issue for more info. Cheers!
  16. Yep, the issue is due to there being two approved templates for the same image in CA. Squid will have to decide which one to remove, and then you should no longer see this error in the Fix Common Problems plugin. In the meantime, you shouldn't worry about this error, as it shouldn't impact your ability to use your Whoogle-search container. Apologies if you added your template first @FoxxMD. I added mine because I didn't see one in CA and was using Whoogle myself, so I figured I'd share my UNRAID template, but it's possible that you'd already added yours by the time that mine updated in CA.
  17. As @Squid mentioned above, this is because both FoxxMD and I created templates for the same image. I think he's removed mine now, but I'm not sure as I'm out of town and can't check my UNRAID server at the moment. My best suggestion is to check CA and see which Whoogle template is listed, and then use that one.
  18. Yeah, you're right, I should have included that. I checked /var and it was not full. See:

df -h /var
Filesystem      Size  Used Avail Use% Mounted on
rootfs           32G  7.0G   25G  23% /

However, /var/log does show as 100% full:

❯ df -h /var/log
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           128M  128M     0 100% /var/log

I deleted syslog.1 and syslog.2, restarted nginx and tailed my syslog (roughly the commands sketched below). No more nchan out of memory errors, but I'm getting this error constantly:

nginx: 2021/09/17 16:53:47 [alert] 2815#2815: worker process 11259 exited on signal 6
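For anyone else who hits a full /var/log, the cleanup described above amounts to something like this (a sketch, not an exact transcript; rc.nginx is Unraid's own init script, and clearing nginx's log is an optional extra step):

```bash
# Remove the rotated syslogs that were filling the tmpfs-backed /var/log,
# optionally empty nginx's own log, then restart the Unraid web server.
rm /var/log/syslog.1 /var/log/syslog.2
truncate -s 0 /var/log/nginx/error.log
/etc/rc.d/rc.nginx restart
```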
  19. I've just started running into this issue as well, on a machine running unRAID 6.10.0-rc1 with 64GB of memory, which has always been plenty, though I too am guilty of leaving unRAID WebUI tabs open (as well as mosh/ssh sessions running). Interestingly, I'm able to access the pages of the WebUI and see the header and nav buttons, as well as the major sections of pages like the Dashboard and Main; however, they're empty, not showing my disks, docker containers, etc. The only way I've been able to (temporarily) "fix" the issue is by restarting my server. I tried restarting /etc/rc.d/rc.nginx, but it didn't make any difference, as you can see in these logs:

Sep 17 02:25:29 vulfTower nginx: 2021/09/17 02:25:29 [alert] 25209#25209: worker process 32623 exited on signal 6
Sep 17 02:25:29 vulfTower nginx: 2021/09/17 02:25:29 [alert] 25209#25209: worker process 32645 exited on signal 6
Sep 17 02:25:31 vulfTower nginx: 2021/09/17 02:25:31 [alert] 25209#25209: worker process 32647 exited on signal 6
Sep 17 02:25:31 vulfTower nginx: 2021/09/17 02:25:31 [alert] 25209#25209: worker process 334 exited on signal 6
Sep 17 02:25:31 vulfTower nginx: 2021/09/17 02:25:31 [alert] 25209#25209: worker process 339 exited on signal 6
Sep 17 02:25:32 vulfTower nginx: 2021/09/17 02:25:32 [alert] 25209#25209: worker process 350 exited on signal 6
Sep 17 02:25:33 vulfTower nginx: 2021/09/17 02:25:33 [alert] 25209#25209: worker process 405 exited on signal 6
Sep 17 02:25:33 vulfTower nginx: 2021/09/17 02:25:33 [alert] 25209#25209: worker process 565 exited on signal 6
Sep 17 02:25:33 vulfTower nginx: 2021/09/17 02:25:33 [alert] 25209#25209: worker process 591 exited on signal 6
Sep 17 02:25:34 vulfTower nginx: 2021/09/17 02:25:34 [alert] 25209#25209: worker process 594 exited on signal 6
Sep 17 02:25:34 vulfTower rsyslogd: file '/var/log/syslog'[2] write error - see https://www.rsyslog.com/solving-rsyslog-write-errors/ for help OS error: No space left on device [v8.2102.0 try https://www.rsyslog.com/e/2027 ]
Sep 17 02:25:34 vulfTower rsyslogd: action 'action-0-builtin:omfile' (module 'builtin:omfile') message lost, could not be processed. Check for additional error messages before this one. [v8.2102.0 try https://www.rsyslog.com/e/2027 ]
Sep 17 02:25:34 vulfTower rsyslogd: rsyslogd[internal_messages]: 561 messages lost due to rate-limiting (500 allowed within 5 seconds)
Sep 17 02:25:34 vulfTower rsyslogd: file '/var/log/syslog'[2] write error - see https://www.rsyslog.com/solving-rsyslog-write-errors/ for help OS error: No space left on device [v8.2102.0 try https://www.rsyslog.com/e/2027 ]
Sep 17 02:25:34 vulfTower rsyslogd: action 'action-0-builtin:omfile' (module 'builtin:omfile') message lost, could not be processed. Check for additional error messages before this one. [v8.2102.0 try https://www.rsyslog.com/e/2027 ]

...ad infinitum. I'm also getting these errors in my logs:

Sep 17 02:32:54 vulfTower nginx: 2021/09/17 02:32:54 [alert] 25209#25209: worker process 4936 exited on signal 6

Anyway, I tried running du to check my log sizes, as well as df to check how full my cache drives and boot flash drive are, to see if I could identify the issue. You can see the results below. While my nginx and syslog logs are large, neither seems to be large enough to disable access to the WebUI, and neither of my cache drives (where I store my syslog backup) is anywhere near full.

❯ du -sh /var/log/*
4.0K    /var/log/apcupsd.events
0       /var/log/btmp
0       /var/log/cron
0       /var/log/debug
88K     /var/log/dmesg
1012K   /var/log/docker.log
0       /var/log/faillog
2.8M    /var/log/file.activity.log
16K     /var/log/gitflash
4.0K    /var/log/lastlog
4.0K    /var/log/libvirt
4.0K    /var/log/maillog
0       /var/log/messages
0       /var/log/nfsd
52M     /var/log/nginx
0       /var/log/packages
0       /var/log/pkgtools
0       /var/log/plugins
0       /var/log/preclear.disk.log
0       /var/log/pwfail
0       /var/log/removed_packages
0       /var/log/removed_scripts
0       /var/log/removed_uninstall_scripts
20K     /var/log/samba
0       /var/log/scripts
0       /var/log/secure
0       /var/log/setup
0       /var/log/spooler
0       /var/log/swtpm
69M     /var/log/syslog
3.6M    /var/log/syslog.1
0       /var/log/vfio-pci
8.0K    /var/log/wtmp

❯ df -h /mnt/cache
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdf1       466G  306G  157G  67% /mnt/cache

❯ df -h /mnt/cache_io
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdk1       932G  244G  688G  27% /mnt/cache_io

❯ df -h /boot
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.5G  893M  6.6G  12% /boot

Any ideas? For some reason, I'm currently unable to download my diagnostics, but any advice or ideas would be much appreciated. Cheers!
  20. That's very alarming @xrqp! Glad you were able to resurrect your lost data last time and hope you'll be able to again. My guess would be that the problem is your hard drives or something about how your unRAID shares are set up. I've never had this problem or heard of anyone having it in relation to Roon. Sorry not to be of more help, but best of luck. And definitely let us know if you do end up finding some connection to Roon. I've already been working on either making a new base image or adding a PR to Steef's, so it's a good time if some work needs to be done.
  21. Interesting, I've been having some instability and temp issues recently and now I'm wondering if it has to do with the flashed Dell PERC H310 I have in my server right now. In terms of the PIKE 2308 card, did you just follow the same directions cited above for the 2008 card? Or did you run into any differences? BTW, thanks for replying despite the necro-thread!
  22. I realize this is raising a post from the depths of death, but I was wondering if you were ever able to get one of those PIKE 2308 cards flashed and working @Dextros?
  23. Also, thanks a lot for this. I'm using it (with the appropriate link, of course) as part of an existing script to install neofetch at array startup, since the NerdPack package doesn't seem to work (roughly as sketched below).
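For context, the array-start script in question boils down to something like this (a minimal sketch, assuming it is run at array start by something like the User Scripts plugin; the download URL is illustrative and stands in for "the appropriate link" mentioned above):

```bash
#!/bin/bash
# Fetch the neofetch script at array start and make it available on the host.
# The URL below is a placeholder for the link referenced in the post.
curl -fsSL -o /usr/local/bin/neofetch \
  https://raw.githubusercontent.com/dylanaraps/neofetch/master/neofetch
chmod +x /usr/local/bin/neofetch
```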