Leaderboard

Popular Content

Showing content with the highest reputation on 04/09/19 in all areas

  1. Application Name: szurubooru-api
     Application Site: https://github.com/rr-/szurubooru
     Docker Hub: https://hub.docker.com/r/sgsunder/szurubooru-api
     Template Repo: https://github.com/FoxxMD/unraid-docker-templates

     Application Name: szurubooru-client
     Application Site: https://github.com/rr-/szurubooru
     Docker Hub: https://hub.docker.com/r/sgsunder/szurubooru-client
     Template Repo: https://github.com/FoxxMD/unraid-docker-templates

     Overview

     Szurubooru is an image board engine inspired by services such as Danbooru, Gelbooru, and Moebooru, dedicated to small and medium communities. It is pronounced "shoorubooru". Some features:
     - Post content: images (JPG, PNG, GIF, animated GIF), videos (MP4, WEBM), Flash animations
     - Ability to retrieve web video content using youtube-dl
     - Post comments
     - Post notes / annotations, including arbitrary polygons
     - Rich JSON REST API
     - Token-based authentication for clients
     - Rich search and privilege system
     - Autocomplete in search and while editing tags
     - Tag categories, tag suggestions, tag implications (adding a tag automatically adds another), and tag aliases
     - Duplicate detection
     - Post rating and favoriting; comment rating

     Prerequisites

     Postgres 11 is required for szurubooru to run. It is accessed over IP, so any existing instance you have can be used. Otherwise, search Community Applications for "postgres" to find available options.

     How To Use

     Start by installing szurubooru-api. The template provides defaults for volumes and the API port. You will need to:
     - Fill in the IP endpoint for Postgres as well as the user/pass/db.
     - Optionally, if you have your own server config, override the default production config by creating another Path entry with the container dir set to /opt/app/config.yaml and the host dir pointing to a valid file, e.g. /mnt/user/appdata/szurubooru/config.yaml.

     After the API is running, install szurubooru-client. The template provides defaults for everything but the API endpoint. Visit your Docker page or check "show docker allocations" to find the internal IP szurubooru-api is bound to, and use that as the BACKEND_HOST value. The first user created will have admin privileges.

     NOTE: If you are upgrading from v2.2, Elasticsearch is no longer necessary. It is recommended you back up your Postgres DB and data folders before upgrading. Once the new (upgraded) container has started, do not stop or update it until post data has been rehashed -- progress can be checked by viewing the logs. After hashing is complete, you can safely remove the Elasticsearch Host variable from your Unraid template and restart the container.
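     The steps above boil down to two docker run commands. A rough sketch follows; every value, and every env/volume name except BACKEND_HOST (which the post names), is a placeholder -- use the names the FoxxMD templates actually expose:

     ```shell
     # Sketch only -- env/volume names and all values are placeholders;
     # fill them in from the actual Unraid templates.
     docker run -d --name szurubooru-api \
       -e POSTGRES_HOST=192.168.1.10 -e POSTGRES_USER=szuru \
       -e POSTGRES_PASSWORD=changeme -e POSTGRES_DB=szuru \
       -v /mnt/user/appdata/szurubooru/data:/data \
       sgsunder/szurubooru-api

     # Find the api container's internal IP on the Docker page, then:
     docker run -d --name szurubooru-client \
       -e BACKEND_HOST=172.17.0.2 -p 8080:80 \
       sgsunder/szurubooru-client
     ```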
    1 point
  2. You will need to remember it yourself. Of course, Fix Common Problems (FCP) will catch it for you if you don't (I always forget to enable the setting too).
    1 point
  3. I'm not sure if this is the answer you need, but I do this all the time. Make a file with the contents you want at /boot/config/mybashprofile (or whatever name you want, but that's the general location). Then in your go file at /boot/config/go, add a new line that reads: cat /boot/config/mybashprofile >> /root/.bash_profile. Reboot to test.
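     You can exercise the same append step anywhere using temp files as stand-ins for the real paths (on Unraid you would use /boot/config/mybashprofile and /root/.bash_profile as in the post):

     ```shell
     # Stand-ins for /boot/config/mybashprofile and /root/.bash_profile,
     # so the go-file trick can be tried without touching /boot or /root.
     src=$(mktemp)
     dst=$(mktemp)
     printf 'alias ll="ls -la"\n' > "$src"

     # This mirrors the single line added to /boot/config/go
     # (with the real paths substituted back in):
     cat "$src" >> "$dst"

     grep 'alias ll' "$dst"   # prints: alias ll="ls -la"
     ```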
    1 point
  4. The default for those shares should already be cache: Prefer. Your Docker paths should currently be set to /mnt/user-type paths, and if so the move is completely transparent, as the cache is included in User Shares. The main thing is not to have any shares you do NOT want on the cache set to cache: Prefer, as this would cause mover to attempt to move their contents to the cache. If you have any shares that you want initially written to the cache and then transparently moved to array disks when mover runs overnight, set them to cache: Yes. It is not uncommon for people to get confused by the difference between cache: Prefer and cache: Yes (although the GUI Help does give a good explanation of the differences if you make use of it).
    1 point
  5. Hi @GoodOlCap, glad it worked for you! The mapping you listed is for the build directory in the intermediate layer. Instead, overwrite the copied favicon in the final layer like this: Container Path: /var/www/img/favicon.png, Host Path: /mnt/user/Docker/appdata/szurubooru/img/favicon.png. You may then need to flush your browser's favicon cache if restarting the browser doesn't work: https://www.stirtingale.com/guides/2018/01/how-to-favicon-flush
    1 point
  6. Hey Jorgen, below is my docker run. It already has NET_ADMIN in it, so I'm at a loss. docker run -d --name='openvpn-as' --net='bridge' -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e 'PGID'='100' -e 'PUID'='99' -p '943:943/tcp' -p '9443:9443/tcp' -p '1194:1194/udp' -v '/mnt/user/appdata/openvpn-as':'/config':'rw' --cap-add=NET_ADMIN 'linuxserver/openvpn-as' This is where it gets really bizarre. If I terminal into my Unraid box and run an nmap against the container, the port is open. I can even connect to localhost, the IP of my Unraid box, and the IP of the container, all successfully. But as soon as I try from another device it shows as blocked. Anyone else have any ideas? ***EDIT*** Still scratching my head, so I decided to set up the OpenVPN appliance. I changed my port forward to the new appliance and it worked first time. Could it be my Unraid server blocking the connection? **FINAL EDIT** So I'm a muppet. Something funky must have been going on in my Unraid box. Restarted it and it came good straight away... I forgot rule 1 of tech support: have you tried turning it off and on again?
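     For future reference, the local-vs-remote check described above can be done without nmap using bash's built-in /dev/tcp redirection (the port below is from the docker run; the behavior is bash-specific):

     ```shell
     # Probe a TCP port via bash's /dev/tcp pseudo-device.
     # Run it on the server and again from another machine: "open" locally
     # but "closed" remotely points at a firewall or stale network state
     # (a reboot fixed it in this case), not at the container itself.
     check_port() {
       (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
     }
     check_port 127.0.0.1 943   # on the host: open if openvpn-as is listening
     ```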
    1 point
  7. You wouldn't reverse proxy a database connection via an HTTP proxy. Just point your Windows application at the Firebird SQL port number and away you go.
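     For example, with Firebird's command-line isql client, the connection string names the host, port, and database path directly (3050 is Firebird's default port; the host, database path, and password here are placeholders):

     ```shell
     # Connect straight to the Firebird port -- no HTTP proxy involved.
     # 192.168.1.50, the database path, and the password are placeholders.
     isql '192.168.1.50/3050:/data/mydb.fdb' -user SYSDBA -password masterkey
     ```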
    1 point
  8. As long as you don't use any path that specifies a specific disk, and instead only use paths that refer to user shares, then you are most of the way to getting things setup to run on cache. There are a few steps you would need to perform to get the files actually moved to cache, but you shouldn't have to change any paths used by individual dockers.
    1 point
  9. I just went through a migration myself. I don't think it's worth the hassle to use green drives as cache; unless you are creating a number of temp files that would go away before mover runs, it's not really saving a ton of wear and tear on the array drives, and it certainly won't be any faster than just writing to the array. As for moving files later, there is a function in the "Fix Common Problems" plugin to move the files. I did the same thing, as I did not initially set up the cache when I spun up the system.
    1 point
  10. No reason not to go with the Release Candidate. It is thought to be very close to going Stable anyway. The RCs tend to be stable as long as you do not have any hardware specific issues and even then it can be a good idea so such issues can be reported and fixed in time for the next Stable release. You can always revert to the Stable release if necessary.
    1 point
  11. Assuming Disk5 is the unassigned 1TB Samsung, it looks to be past its best days; I would replace it.
    1 point
  12. What appears to be happening is that since disk5 went 'MIA', Unraid is now proposing to rebuild whatever disk you assign to the disk5 position using its contents reconstructed from parity. (Whatever happens at this point, do NOT authorize a format of this disk! Formatting also updates parity, so you would end up with an empty drive in place of both the 'real' drive and the 'emulated' one that parity could have reconstructed.) To maximize your chances of not losing any data, this is what I would do. Remove the present disk5 from the server, tag it with a large post-it note (with an instruction not to use it), and set it aside until everything is resolved. Get another disk of equal or larger size and install it into the server. Double-check that all of the SATA connectors -- both power and data -- on every drive are firmly pushed in. Assign the new disk to the disk5 position in the array. Unraid should now offer to rebuild the disk. (Whatever happens, never allow a format!) After the rebuild is completed, check out its contents.
    1 point
  13. Unfortunately, yes. With the current version of the cache, it can take several downloads for it to cache fully. A new version of the cache is being worked on. @mlebjerg, how goes your work on the new version?
    1 point
  14. This worked perfectly for me. I tried to do the initial installs of the APs and got the adoption loop when I had the docker set to bridge. Once I changed the above settings in the controller, everything connected properly. No need to change the docker to host with this change.
    1 point
  15. Yeah, remove your existing image, then click Add Container on the Docker page. Click on the template dropdown and go down to your user templates, then click on the openvpn one. Then go to the repository line and change it to linuxserver/openvpn-as:2.6.1-ls11. That is literally all I had to do.
    1 point
  16. Looks similar to this one: https://forums.unraid.net/topic/75436-guide-how-to-use-rclone-to-mount-cloud-drives-and-play-files/?do=findComment&comment=729731
    1 point
  17. As this isn't really a defect report, but rather, a workaround to solving a current limitation, I'm going to move this thread to the KVM Hypervisor forum (and maybe sticky thread it) so folks see it there. This isn't really a bulletproof solution either, as reboots can cause the bus/device addresses to change sometimes.
    1 point