Loch

Everything posted by Loch

  1. A bit old now, but for anyone else: to set up a MariaDB container I referred back to @SpaceInvaderOne's tutorial on setting up Nextcloud here. Very quick and simple. After reading through a Reddit debate on multiple databases per install, I went with a separate MariaDB instance designated for Photoview. I've been looking for a good photo gallery for myself and for sharing with family/friends. I wish I could get rid of the Info sidebar (supposedly coming) and I wish it had some internal photo management features (moving/tagging), but so far not too bad.
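     For anyone who wants something concrete, here's a rough sketch of a dedicated MariaDB container from the command line (in practice I used the Unraid template; the name, port, passwords and appdata path below are just placeholders):

        # Dedicated MariaDB instance for Photoview -- placeholders throughout
        docker run -d \
          --name=mariadb-photoview \
          -e MYSQL_ROOT_PASSWORD='change-me-root' \
          -e MYSQL_DATABASE=photoview \
          -e MYSQL_USER=photoview \
          -e MYSQL_PASSWORD='change-me' \
          -p 3307:3306 \
          -v /mnt/user/appdata/mariadb-photoview:/var/lib/mysql \
          mariadb:10.6

     Photoview then gets pointed at <unraid-ip>:3307 with that user/database, keeping it completely separate from the Nextcloud database.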
  2. Just for clarification (perhaps I'm just a bit dense today): the paths are what messed me up. The container path /config maps to the appdata folder /binhex-crafty/, so the server path is /config/crafty/servers/CoolServerName. Put your jar in that folder, start it, and it creates the files/directories you need, then errors out telling you that you have to agree to the EULA. Click the FILES button, click eula.txt to open it, change "false" to "true" (no quotes), save, and restart the server. Notes: 1) My Docker container started filling up rapidly; it turns out it was making server backups every minute and the default backup location was inside the container, so you may want to turn this off under TASKS or change it to /config/crafty/Backup (or wherever you want) via the BACKUP button. 2) I'm getting "Unable to Connect" errors on some servers (older versions), so I can't see the number of players, etc., but the terminal and commands seem to work fine.
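     For reference, the same steps can be done from the host side; this is just a sketch, with the appdata path and server folder name taken from my setup:

        # Host path that the container sees as /config
        APPDATA=/mnt/user/appdata/binhex-crafty

        # Make the server folder and drop your jar in it (CoolServerName is an example)
        mkdir -p "$APPDATA/crafty/servers/CoolServerName"
        cp ~/paper.jar "$APPDATA/crafty/servers/CoolServerName/"

        # After the first failed start, accept the EULA from the shell
        # instead of the FILES button
        sed -i 's/eula=false/eula=true/' "$APPDATA/crafty/servers/CoolServerName/eula.txt"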
  3. These seem like quite different projects. Correct me if I am wrong, but eCloud seems to be a way to replace "Google" in the Android world: instead of "Google", you would run your own version of "Google" for syncing cloud/email/calendar/etc. I actually just found Yunohost this week and tested it out in a VM. It's pretty slick, running a fortified Debian host with scripts to install a multitude of "apps" (software/servers). The webUI is functional, but it would still be wise to have a decent amount of knowledge in self-hosting, as I've had to do a bunch of googling and try to learn. While it provides an easy way to install those "apps", it still seems to require the experience to maintain them in a Linux setup and to manage the network settings. I got a few apps up and running, but it seems to be a complete system with its own reverse proxy and firewall. It appears to be meant to be fully exposed to the Net rather than having Dockers behind a reverse proxy in unRaid. I think I like the Dockerized approach a bit more, so when I break something, the damage is limited. You could spin it up as a VM on unRaid and run things that way. Again, I've only been testing for a few days and I am by no means a Linux or networking guru, but it did provide a very simple way to set up Pixelfed, Pleroma, Mastodon, etc. if you are looking into those types of social apps. But you will need to do all the management of those apps once installed.
  4. I played around with those instructions but had no luck (perhaps it's just me). The other option is to just start fresh. I found https://dev.to/joenas/pixelfed-beta-with-docker-and-traefik-35hm which is old but seems to have instructions using docker-compose that may work. Hopefully someone with more experience than me might be able to help.
  5. +1 Found this https://blog.pixelfed.de/2020/05/29/pixelfed-in-docker/ which has the docker-compose file. I guess I just need to spend some time figuring out how to use that in unRaid.
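     In case it saves someone a step, here is roughly how that compose file could be used from the Unraid command line -- this assumes docker-compose is available (e.g. via the Compose Manager plugin) and the paths are only examples:

        # Put the docker-compose.yml (and .env) from the blog post here
        mkdir -p /mnt/user/appdata/pixelfed
        cd /mnt/user/appdata/pixelfed

        # Bring the stack up and check that the containers started
        docker-compose up -d
        docker-compose ps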
  6. See my post above. Good luck trying to find the cause. I just nuked the Docker image, deleted the app directory and reinstalled from the Unraid Docker template. I was up and running in <15 min (although I spent an hour or two trying to "fix" it before that). My setup isn't fancy, so I don't really know what all you may lose. I feel like updating Nextcloud is like putting it all on red at the roulette table.
  7. @BrunoVic @jmial I've been running NC since about version 12, and about every other time I upgrade it bonks totally. When I upgraded NC this time I ran into the same "Internal Server Error". According to the NC forums that's pretty much the generic error message. Instead of going through the logs I decided to just nuke the Docker image and delete the appdata directory. I was on version 18 and was only really using it for file serving/backup, which all goes to a directory outside the appdata directory and Docker image, so I figured those files were safe anyway. I tried deleting and starting from my backup NC appdata folder, but it gave me the same error. After you delete the NC image and directory (I actually moved it instead as a backup, although it doesn't work), you can just load your NC template and start from scratch. It will automatically install the newest version (v20) and recreate the appdata directory. You will need to connect it to the old database, so NOTE: if you want to keep your users/passwords, you will need to know the user/database/password for your Nextcloud database on MariaDB (if you followed the SpaceInvader video, maybe you used the same ones? Fortunately I wrote mine down). Otherwise you will need to start fresh with a new database, users, etc. The issue I ran into here was that it would not let me use the same admin account I used before, so I had to create a new admin username/password and then point it at the old database. With that, it set up and everything was still there, including my upload history and such for other users. This is what worked for me. Good luck, but you may want to make backups of everything just in case. I like Nextcloud and it has gotten much better over time, but it is still very temperamental! If it's not something easily googled, I just wipe it and start over.
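     A rough outline of the "nuke and reinstall" route from the command line; the container name and appdata path assume the usual linuxserver-style layout, so adjust for your setup:

        # 1) Stop and remove the old container (your template stays saved in Unraid)
        docker stop nextcloud && docker rm nextcloud

        # 2) Move the old appdata aside as a just-in-case backup
        mv /mnt/user/appdata/nextcloud /mnt/user/appdata/nextcloud.bak

        # 3) Reinstall from the saved Nextcloud template in the Unraid GUI.
        #    On first run, point it at the existing MariaDB database (same
        #    database name/user/password as before) and create a NEW admin
        #    username -- reusing the old one failed for me.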
  8. I migrated to the cloud (Google Play Music) a while ago, but any Subsonic client should work with Airsonic. DSub used to be the best, but it hasn't been updated in a long time and it sounds like it is losing functionality. Ultrasonic sounds like it may be a viable option. If you go the Booksonic route, the Booksonic app gets good reviews. Plexamp looks nice, and it sounds like there is some active development on similar apps for Jellyfin on Reddit. Now with Google Play Music shutting down I may have to start looking at self-hosting again.
  9. Airsonic works great for all remote audio streaming. I believe there is a Booksonic as well, which is a slimmed-down version of Airsonic primarily for audiobooks, if that is all you currently want. And yes, I would recommend using some kind of reverse proxy. Try out the NginxProxyManager Docker (I just tried it out and find it easier to set up than the Letsencrypt Docker). You should check your client too, as I don't know that all clients support bookmarking the same way (which can be especially painful with large audiobooks).
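     A minimal sketch of the NginxProxyManager container in case it helps; the ports and volume paths here are the image defaults mapped into a typical appdata folder, so adjust as needed:

        docker run -d \
          --name=nginx-proxy-manager \
          -p 80:80 \
          -p 443:443 \
          -p 81:81 \
          -v /mnt/user/appdata/npm/data:/data \
          -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
          jc21/nginx-proxy-manager:latest

     The admin UI lives on port 81; from there you add a proxy host pointing at your Airsonic container's IP:port and request a Let's Encrypt certificate.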
  10. A quick Google search suggests this has been around for many years: https://github.com/gpodder/gpodder/issues/508 https://bugs.gpodder.org/show_bug.cgi?id=369 They suggest manually editing the config (up to 1000) and then restarting. Don't know if this will help you or not.
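     Untested sketch, going off those threads: the cap appears to be the limit.episodes value in gPodder's advanced config (I haven't verified the key name or file location, so treat both as assumptions):

        # With gPodder stopped, find the setting in its config file
        # (Settings.json under the gPodder data dir; location varies by install)
        grep -n "episodes" /path/to/gPodder/Settings.json

        # Bump the number there to 1000, save, and restart gPodder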
  11. For ebook management: Calibre. For transferring to Kindle: COPS (runs a web server that you can access using the web browser on the Kindle). No emailing.
  12. Sorry, late to the party, but I had this same issue. The website doesn't work in Chrome but works fine in Firefox, in case this helps others.
  13. Just adding my experience. I'm not a huge Nextcloud user and I've had trouble with every update. From past experience I decided to go 16->17->18. 16->17 via the GUI: it borks several times, but with some retries it finally finishes, and all I get is maintenance mode. I go to the manual guide, disable it, and I get the upgrade screen, which again borks, but it's still using CPU, so I give it time, and once the CPU goes to 0 it lets me log in and I'm at 17. 17->18: no GUI option, so I do the manual route below (just use the click->console from the unRaid GUI). Way smoother. Upon restart it takes a long time to finish executing (7-10 min, but it's not the fastest processor) but it finally finishes and I'm at 18. Haven't tried any of the new Office stuff, but everything seems to be working fine so far.
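     For reference, the manual (console) route boiled down to something like this; the container name, web root and abc user are from the linuxserver image on my box, so double-check yours first:

        # Run the updater, finish the upgrade, then turn maintenance mode off
        docker exec -it -u abc nextcloud php /config/www/nextcloud/updater/updater.phar
        docker exec -it -u abc nextcloud php /config/www/nextcloud/occ upgrade
        docker exec -it -u abc nextcloud php /config/www/nextcloud/occ maintenance:mode --off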
  14. Just a quick reply on my experience with the adoption loop. The Docker installed fine and started fine in bridge mode (I had to change the communication port to 8083 since I was already using 8080 for something else). Tried to adopt APs, but they just kept looping. Tried to change to host mode, but it would always revert to using port 8080 for communication (even when the run command listed 8083 or whatever I put in the config??). So I tried setting a different IP for the Unifi-Controller (same subnet). Worked like a breeze. Didn't have to override the Hostname/IP on the controller or log into the APs to force an inform. This may be a possible solution for those running into problems. Anyone see any issues with running it this way?
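     For anyone curious, this is roughly the equivalent of what I did from the command line (in the Unraid template it's just Network Type: Custom: br0 plus a fixed IP); the IP and image tag here are examples only:

        docker run -d \
          --name=unifi-controller \
          --network=br0 \
          --ip=192.168.1.50 \
          -v /mnt/user/appdata/unifi-controller:/config \
          lscr.io/linuxserver/unifi-controller:latest

     With its own IP the controller keeps the stock ports (8080 for device inform, 8443 for the web UI), so no port remapping or set-inform on the APs was needed.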
  15. @xthursdayx Just updated. Works great. Just tick the extension for autoconversion and once downloaded any mp4 podcasts will automatically convert over to mp3 and delete the original. Thanks!
  16. There is an extension in GPodder which will auto-convert to mp3 on download, but you need to have ffmpeg installed on the system alongside it. I thought if you added the ffmpeg install to the container it might work.
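     If you want to test the idea before it's in the image, something like this might work; the container name is a guess, it assumes an Alpine base (use apt-get on Debian/Ubuntu bases), and the package won't survive a container rebuild:

        docker exec -it gpodder apk add --no-cache ffmpeg

     A more permanent option would be a tiny derived image that just does RUN apk add --no-cache ffmpeg on top of the existing one.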
  17. Thanks for this. Less of a resource hog than Airsonic now that I only need the podcatcher component. Would you be able to update it to include ffmpeg or avconv? My Sansa mp3 player seems to have problems with m4a files, so it would be nice to auto-convert them on download. If not, I'll have to script something up. Thanks
  18. Not an expert by any means, but it sounds like any data that needs to traverse from a computer on Switch A to a computer on Switch B has to go through that single cable connection (assuming 1 Gbps). That will limit your transfer rates depending on how much data is traveling and where it is going. I.e., say you have computers A-E on the 5-port switch and F-M on the 8-port switch. If you are sending from A->F and from B->G at the same time, then they are sharing the bandwidth of that connecting port and cable (presumably 1 Gbps). Alternately, if they were all on the same switch, A->F and B->G would each be able to utilize their own max bandwidth, as long as the switch is fast enough to handle it (which should realistically be the case with modern switches, I would imagine). Depends on whether that is worth the extra $$ for you. Oh yeah, and you don't lose two ports to the connecting cable, and it tends to be less cluttered, if that matters.
  19. Ran into the “Updates between major versions and downgrades are unsupported” error when trying to upgrade from Nextcloud version 14 to the new 16. This solved my issue and made the upgrade fairly straightforward: https://mergy.org/2018/10/nextcloud-14-upgrade-error-updates-between-major-versions-and-downgrades-are-unsupported-fix/ Simply comment out three lines in Updater.php and you prevent the halt from the version check. Went straight from v14 to v16 without issue. Hopefully this may help some folks.
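     Not the exact patch (the linked post has it), but you can find the check yourself from inside the container; the web root below is the linuxserver layout, so adjust as needed:

        # Locate the version check that throws the error
        grep -rn "Updates between major versions" /config/www/nextcloud/lib/

        # Comment out the few lines around that check in Updater.php, run the
        # upgrade, then put the original file back afterwards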
  20. I.e., in case your cache fills and it needs to write something to it?
  21. @rbmatt1s Under the Shares tab in the web GUI, you can change the setting for specific shares. Just set whatever share you want kept only on the cache disk to "Use Cache Disk: Only" and it will leave anything in that share/folder on the cache drive only. It will thus NOT be backed up to the array, it will show as orange in the GUI, and if the drive dies it will all be gone; but if you use a cache pool, then you have your RAID-level redundancy. And all those programs are well done with Docker (soo much easier than a few years ago). Use the wonderful CA Backup/Restore plugin and it can back up all the appdata (the important parts) to the array, so if you lose the cache drive (or in your case, lose both drives), all you have to do is re-download the containers and restore the appdata and you are back in business.
  22. The question is whether you want redundancy for your cache drive. You can run a cache pool (RAID1) to give you that, but it will use up two SATA ports for only 250 GB of space. If it's true "cache" then it will be moved to the array every 24 hours, so there's only a short window for loss. I just use my cache drive for all my Docker stuff and the unRaid mover ignores it. For me redundancy isn't that big of a deal, as I back up all the app data so it can be rebuilt fairly easily, and I don't worry about the cache data. It all depends on how you want to allocate your SATA ports and drive space. As you said, maybe one for cache and one unassigned, and then when you need more array space, just swap the unassigned drive for an array disk. And welcome to the forums. Beware, Docker is addicting! And you can do so much more than with Amahi.
  23. UnRaid uses the notify command /usr/local/emhttp/webGui/scripts/notify, which I actually found easier. Once you set up notifications in the GUI, this just tacks onto the system notifications. You can specify the specifics from the command, including severity. No expert here, but it wasn't hard to use in a script.
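     An example of how I call it from a user script; run the script with no arguments to see the full usage:

        /usr/local/emhttp/webGui/scripts/notify \
          -e "Backup script" \
          -s "Nightly backup finished" \
          -d "rsync completed with no errors" \
          -i "normal"

     The -i flag takes normal, warning or alert, and the notification goes out through whatever agents (email, etc.) you enabled in Settings.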
  24. Nevermind. Deleted the entire JAVA_OPTS entry and it works now. Seems rather slow but functional for my needs.