Leaderboard

Popular Content

Showing content with the highest reputation on 09/01/19 in all areas

  1. It appears that the docker images --digests --no-trunc command is showing, for whatever reason, the digest of the manifest list rather than the manifest itself for containers pushed as part of a manifest list (https://docs.docker.com/engine/reference/commandline/manifest/#create-and-push-a-manifest-list). I'm not sure whether that has always been the case or is the result of some recent change to the Docker Hub API. Also not sure if it's intentional or a bug.

     This causes an issue because in DockerClient.php (/usr/local/emhttp/plugins/dynamix.docker.manager/include), the request made to get the comparison digest is:

     /**
      * Step 4: Get Docker-Content-Digest header from manifest file
      */
     $ch = getCurlHandle($manifestURL, 'HEAD');
     curl_setopt($ch, CURLOPT_HTTPHEADER, [
         'Accept: application/vnd.docker.distribution.manifest.v2+json',
         'Authorization: Bearer ' . $token
     ]);

     which retrieves information about the manifest itself, not the manifest list. So it ends up comparing the list digest, as reported by the local docker commands, to the individual manifest digest retrieved from Docker Hub, and of course the two never match.

     Changing the Accept header to the list MIME type, 'application/vnd.docker.distribution.manifest.list.v2+json', causes it to no longer constantly report updates available for these containers. Doing this, however, reports updates for all containers that do not use manifest lists, since the call now falls back to a v1 manifest when the list is not available, and the digest for the v1 manifest doesn't match the digest for the v2 manifest.

     If the Accept header is instead changed to 'application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json', Docker Hub will fall back correctly to the v2 manifest, and the digests now match the local output both for containers using plain manifests and for those using manifest lists. Until Docker Hub inevitably makes another change.
     /**
      * Step 4: Get Docker-Content-Digest header from manifest file
      */
     $ch = getCurlHandle($manifestURL, 'HEAD');
     curl_setopt($ch, CURLOPT_HTTPHEADER, [
         'Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json',
         'Authorization: Bearer ' . $token
     ]);
    10 points
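     [Editor's note] The digest check described above can be sketched outside PHP as a small shell helper. This is an illustrative sketch, not the DockerClient.php code: "library/alpine" is an assumed example repository, and the real registry calls are left commented out because they need network access and a bearer token.

     ```shell
     #!/bin/sh
     # Build the manifest URL and the combined Accept header that asks for a
     # manifest list first, falling back to a plain v2 manifest.
     REPO="library/alpine"   # assumed example repository
     TAG="latest"
     MANIFEST_URL="https://registry-1.docker.io/v2/${REPO}/manifests/${TAG}"
     ACCEPT="application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json"

     # TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" \
     #     | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
     # curl -sI -H "Accept: ${ACCEPT}" -H "Authorization: Bearer ${TOKEN}" "${MANIFEST_URL}" \
     #     | grep -i docker-content-digest

     echo "HEAD ${MANIFEST_URL}"
     echo "Accept: ${ACCEPT}"
     ```

     The Docker-Content-Digest header returned by that HEAD request is what gets compared against the local image digest.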
  2. Thanks for the fix @bluemonster ! Here is a bash file that will automatically implement the fix in 6.7.2 (and probably earlier, although I'm not sure how much earlier): https://gist.github.com/ljm42/74800562e59639f0fe1b8d9c317e07ab It is meant to be run using the User Scripts plugin, although that isn't required. Note that you need to re-run the script after every reboot, and remember to uninstall it after you upgrade to Unraid 6.8. More details in the script comments.
    2 points
  3. So I appear to be having a problem with dockers, specifically Linuxserver ones, but they said to me it is an unRAID issue and it is "not just us." I chatted with someone from Linuxserver in private and they said it is an issue with "Update all containers." The dockers will say there is an update ready, but updating them does not do anything. Tried manually updating a docker; same result. gibson-diagnostics-20190829-1841.zip
    1 point
  4. It's not written to any permanent storage; it's written to a file in the root file system, which is in RAM. Your "secret password" is only used to decrypt the drives - even if the password were somehow leaked, someone would still need the physical drives to make use of it. You'll first have to teach her to hack your root login password in order to get into your server. As @bonienl mentioned in the post above, we have made some changes so that a casual user doesn't accidentally leave a keyfile lying around. These will appear in the Unraid OS 6.8 release. In the meantime, click that 'Delete' button after starting the array. And I'm done talking with you.
    1 point
  5. The files are created at that point, but the folder isn't. Those errors only appear when /tmp/plugins doesn't exist. I would surmise that, due to a misconfiguration of the Plex template or of Plex itself, Plex deleted the plugins folder. And the diagnostics show that /tmp/plugins does not exist. Change Plex to not autostart in the docker tab and reboot. This will confirm that it is Plex that is causing the problem.
    1 point
  6. The files under /tmp/plugins are created each time an update check is performed. This sounds like you ran out of space in /tmp and file creation wasn't possible. What is the output of: df -h /tmp
    1 point
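     [Editor's note] The suggestion above is easy to script; a minimal sketch assuming a POSIX df, with a 90% warning threshold picked purely for illustration:

     ```shell
     #!/bin/sh
     # Warn when the filesystem backing /tmp is nearly full. Plugin update
     # checks write under /tmp/plugins and fail when no space is left.
     THRESHOLD=90   # percent; an assumed example value
     USED=$(df -P /tmp | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
     if [ "$USED" -ge "$THRESHOLD" ]; then
         echo "/tmp is ${USED}% full - plugin update checks may fail"
     else
         echo "/tmp usage looks fine (${USED}%)"
     fi
     ```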
  7. I made the change suggested above and my containers are now updating as expected. Thanks
    1 point
  8. Because on Fridays at 2300 GMT all LSIO containers are issued updates, rain or shine.
    1 point
  9. If you install the Fix Common Problems App does it say anything about Write Caching?
    1 point
  10. It's not a problem with the OS, but rather how the manufacturers choose to configure their hard drives. Which is why Fix Common Problems is telling you about it. Not sure if I'd want unRaid (or any OS) automatically doing this for me. On the other hand though, a setting in Disk Settings would make life easier.
    1 point
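     [Editor's note] For reference, a drive's volatile write cache can be inspected from the command line; a minimal sketch assuming hdparm is installed, with /dev/sdX standing in for a real device node:

     ```shell
     #!/bin/sh
     # Query a drive's write-cache flag with hdparm. /dev/sdX is a
     # placeholder; the commands only run where the tool and device exist.
     DEV="/dev/sdX"
     if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
         hdparm -W "$DEV"       # reports write-caching = 0 (off) or 1 (on)
         # hdparm -W1 "$DEV"    # would enable it; commented out since it changes drive state
     else
         echo "hdparm or ${DEV} not available - nothing to check"
     fi
     ```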
  11. You cannot add both drives at the same time to the parity protected array. If you want to avoid a parity build followed by a clear of the new disk, then you could do Tools->New Config and assign all drives as you want them. Existing data drives will keep their data intact. This would invalidate the current parity, but when you start the array Unraid will accept the new data drive and start building both parity drives in parallel. This is faster than adding in 2 steps, but it means that until the parity is built you are not protected against an existing data drive failing. You have to decide whether safety or speed is more important to you.
    1 point
  12. I am from Europe. It seems to be more reliable today, the timeout now happens once I reach AWS in Seattle, which I presume is their firewall.
    1 point
  13. I have found (and you, apparently) that Google is the best way to search this forum. Using unraid or unraid.net as one of the search parameters will target the forum. I just searched for unraid fstab nfs and there were several 'hits'. Not sure if any will necessarily help you... If not, please don't be afraid to start your own topic to see if someone has the answer. (Remember that Linux is much more case sensitive than Windows is...) While there aren't a lot of true Linux gurus on the Forum (from my observations), there are a couple who really seem to have a good grasp of it.
    1 point
  14. Not the way it's set up on unraid.
    1 point
  15. You formatted the emulated disk; formatting deletes all data and updates parity accordingly, so you then rebuilt an empty disk.
    1 point
  16. Looks like somehow you deleted /tmp or /tmp/plugins. Sent from my NSA monitored device
    1 point
  17. Strange - nothing stood out in the diagnostics indicating a problem. Are you sure these diagnostics covered the period when the problem was occurring? The error message in the screen shots suggest that the plugin files that are loaded into RAM during system boot have somehow disappeared. I would be tempted to just reboot the server to see if the problem goes away as that would reload all plugins as part of the booting process.
    1 point
  18. This has been working fine for me from the first release. A few re-installs or rebuilds when some of these things happened, but each time it's pretty straightforward. What's important is that you run the first library scan from a PC that has a Kodi client installed that also uses the MySQL backend. In fact, after creating the DB, it might be easier to configure everything on a PC and get the client working there. Then you know your settings, paths, etc. work. The headless Kodi is just another Kodi client, but running on a server. The config to use MySQL is the same. It's worth the effort of getting it working.
    1 point
  19. <FacePalm> You totally baited me. I feel like I got rick rolled.
    1 point
  20. Basically, the Nvidia Linux drivers are designed such that they aren't fully independent of the display server. Parts of the driver aren't active until a display server like X11 or Wayland hooks the driver resources. Because there is no active display in the unraid environment, nvidia-settings can't be called to change driver settings. In the example above, ":0" is shorthand for "localhost:0.0", which is the first screen, on the first display server, on the local machine. That display doesn't exist, so the nvidia-settings application just tells you it can't do what you asked.

      Normally, in Linux land, we have "sysfs" nodes for driver and hardware settings. Sysfs is the /sys/ folder on Linux systems. All of the driver flags, power states, temperature sensors, etc. live in this folder as files. Nvidia, for whatever reason, has avoided embracing both KMS (kernel mode setting, which lets the kernel make decisions about what the display should be doing during boot) and sysfs nodes. This honestly is something Nvidia should be fixing.

      We might be able to band-aid it by running a display server in a docker with the nvidia-settings application and using it to manage the power state. It also might fail horribly. It might be better to create a fake second X server on unraid itself using a userscript as the temporary solution. EDIT2: Just tested my above theory with a fake X server on unraid and it works perfectly. Let me write a userscript to create one of these fake environments on the fly.
    1 point
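     [Editor's note] The fake-X-server idea in the post could look roughly like the following; a sketch under the assumption that an X server binary and nvidia-settings are present (the commands are guarded so nothing runs where they are missing), and the PowerMizer attribute is just an example setting:

     ```shell
     #!/bin/sh
     # Point DISPLAY at the first screen of the first local display server,
     # start a bare X server there, then drive nvidia-settings against it.
     export DISPLAY=:0
     if command -v X >/dev/null 2>&1 && command -v nvidia-settings >/dev/null 2>&1; then
         X "$DISPLAY" &          # fake display server so the driver is fully hooked
         sleep 2
         nvidia-settings -a '[gpu:0]/GPUPowerMizerMode=1' || echo "nvidia-settings failed"
     else
         echo "X or nvidia-settings not installed - skipping"
     fi
     ```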
  21. Alright, got Mayan-EDMS working, DEV complete:

      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
          --name='Mayan-EDMS' \
          --net='bridge' \
          -e TZ="America/Chicago" \
          -e HOST_OS="Unraid" \
          -e 'MAYAN_DATABASE_ENGINE'='django.db.backends.postgresql' \
          -e 'MAYAN_DATABASE_HOST'='192.168.128.90' \
          -e 'MAYAN_DATABASE_PORT'='5432' \
          -e 'MAYAN_DATABASE_NAME'='mayan' \
          -e 'MAYAN_DATABASE_PASSWORD'='PASSWORD' \
          -e 'MAYAN_DATABASE_USER'='mayan' \
          -e 'MAYAN_DATABASE_CONN_MAX_AGE'='60' \
          -p '8000:8000/tcp' \
          -v '/mnt/disks/Samsung_SSD_860_EVO_500GB_S3Z1NB0KB48195D/Mayan-EDMS/':'/var/lib/mayan/
    1 point
  22. OTOH, with zero testing, etc. - basically following the Docker Hub directions for it. See here: https://hub.docker.com/r/mayanedms/mayanedms/

      Install PostgreSQL (available in Apps) and add the following environment variables:

      POSTGRES_USER=mayan
      POSTGRES_DB=mayan
      POSTGRES_PASSWORD=mayanuserpass

      Enable Docker Hub searching in CA's settings, search for mayanedms, and add the following environment variables:

      MAYAN_DATABASE_ENGINE=django.db.backends.postgresql
      MAYAN_DATABASE_HOST=unRaid's IP address
      MAYAN_DATABASE_PORT=the port for postgres
      MAYAN_DATABASE_NAME=mayan
      MAYAN_DATABASE_PASSWORD=mayanuserpass
      MAYAN_DATABASE_USER=mayan
      MAYAN_DATABASE_CONN_MAX_AGE=60
    1 point
  23. Can someone help with a docker install script for Mayan-EDMS? This is such a fantastic document management system that would be so helpful for many. It seems like most of the work is done and just needs a template so we can set document volumes, etc. It's above my abilities but maybe someone here can do it?

      #!/bin/sh
      set -e

      # This script is meant for quick & easy install via:
      #   $ curl -fsSL get.mayan-edms.com -o get-mayan-edms.sh
      #   $ sh get-mayan-edms.sh
      #
      # NOTE: Make sure to verify the contents of the script
      # you downloaded matches the contents of docker.sh
      # located at https://gitlab.com/mayan-edms/mayan-edms/blob/master/contrib/scripts/install/docker.sh
      # before executing.

      : ${VERBOSE:=true}
      : ${INSTALL_DOCKER:=false}
      : ${DELETE_VOLUMES:=false}
      : ${DATABASE_USER:=mayan}
      : ${DATABASE_NAME:=mayan}
      : ${DATABASE_PASSWORD:=mayanuserpass}
      : ${DOCKER_POSTGRES_IMAGE:=postgres:9.5}
      : ${DOCKER_POSTGRES_CONTAINER:=mayan-edms-postgres}
      : ${DOCKER_POSTGRES_VOLUME:=/docker-volumes/mayan-edms/postgres}
      : ${DOCKER_POSTGRES_PORT:=5432}
      : ${DOCKER_MAYAN_IMAGE:=mayanedms/mayanedms:latest}
      : ${DOCKER_MAYAN_CONTAINER:=mayan-edms}
      : ${DOCKER_MAYAN_VOLUME:=/docker-volumes/mayan-edms/media}

      cat << EOF
      ███╗   ███╗ █████╗ ██╗   ██╗ █████╗ ███╗   ██╗
      ████╗ ████║██╔══██╗╚██╗ ██╔╝██╔══██╗████╗  ██║
      ██╔████╔██║███████║ ╚████╔╝ ███████║██╔██╗ ██║
      ██║╚██╔╝██║██╔══██║  ╚██╔╝  ██╔══██║██║╚██╗██║
      ██║ ╚═╝ ██║██║  ██║   ██║   ██║  ██║██║ ╚████║
      ╚═╝     ╚═╝╚═╝  ╚═╝   ╚═╝   ╚═╝  ╚═╝╚═╝  ╚═══╝

      Docker deploy script

      NOTE: Make sure to verify the contents of this script
      matches the contents of docker.sh located at
      https://gitlab.com/mayan-edms/mayan-edms/blob/master/contrib/scripts/install/docker.sh
      before executing.
      EOF

      if [ "$VERBOSE" = true ]; then
          echo "Variable values to be used:"
          echo "---------------------------"
          echo "INSTALL_DOCKER: $INSTALL_DOCKER"
          echo "DELETE_VOLUMES: $DELETE_VOLUMES"
          echo "DATABASE_USER: $DATABASE_USER"
          echo "DATABASE_NAME: $DATABASE_NAME"
          echo "DATABASE_PASSWORD: $DATABASE_PASSWORD"
          echo "DOCKER_POSTGRES_IMAGE: $DOCKER_POSTGRES_IMAGE"
          echo "DOCKER_POSTGRES_CONTAINER: $DOCKER_POSTGRES_CONTAINER"
          echo "DOCKER_POSTGRES_VOLUME: $DOCKER_POSTGRES_VOLUME"
          echo "DOCKER_POSTGRES_PORT: $DOCKER_POSTGRES_PORT"
          echo "DOCKER_MAYAN_IMAGE: $DOCKER_MAYAN_IMAGE"
          echo "DOCKER_MAYAN_CONTAINER: $DOCKER_MAYAN_CONTAINER"
          echo "DOCKER_MAYAN_VOLUME: $DOCKER_MAYAN_VOLUME"
          echo "\nStarting in 10 seconds."
          sleep 10
      fi

      if [ "$INSTALL_DOCKER" = true ]; then
          echo -n "* Installing Docker..."
          curl -fsSL get.docker.com -o get-docker.sh >/dev/null
          sh get-docker.sh >/dev/null 2>&1
          rm get-docker.sh
          echo "Done"
      fi

      if [ -z `which docker` ]; then
          echo "Docker is not installed. Rerun this script with the variable INSTALL_DOCKER set to true."
          exit 1
      fi

      echo -n "* Removing existing Mayan EDMS and PostgreSQL containers (no data will be lost)..."
      docker stop $DOCKER_MAYAN_CONTAINER >/dev/null 2>&1 || true
      docker rm $DOCKER_MAYAN_CONTAINER >/dev/null 2>&1 || true
      docker stop $DOCKER_POSTGRES_CONTAINER >/dev/null 2>&1 || true
      docker rm $DOCKER_POSTGRES_CONTAINER >/dev/null 2>&1 || true
      echo "Done"

      if [ "$DELETE_VOLUMES" = true ]; then
          echo -n "* Deleting Docker volumes in 5 seconds (warning: this deletes all document data)..."
          sleep 5
          rm -Rf $DOCKER_MAYAN_VOLUME || true
          rm -Rf $DOCKER_POSTGRES_VOLUME || true
          echo "Done"
      fi

      echo -n "* Pulling (downloading) the Mayan EDMS Docker image..."
      docker pull $DOCKER_MAYAN_IMAGE >/dev/null
      echo "Done"

      echo -n "* Pulling (downloading) the PostgreSQL Docker image..."
      docker pull $DOCKER_POSTGRES_IMAGE >/dev/null
      echo "Done"

      echo -n "* Deploying the PostgreSQL container..."
      docker run -d \
          --name $DOCKER_POSTGRES_CONTAINER \
          --restart=always \
          -p $DOCKER_POSTGRES_PORT:5432 \
          -e POSTGRES_USER=$DATABASE_USER \
          -e POSTGRES_DB=$DATABASE_NAME \
          -e POSTGRES_PASSWORD=$DATABASE_PASSWORD \
          -v $DOCKER_POSTGRES_VOLUME:/var/lib/postgresql/data \
          $DOCKER_POSTGRES_IMAGE >/dev/null
      echo "Done"

      echo -n "* Waiting for the PostgreSQL container to be ready (10 seconds)..."
      sleep 10
      echo "Done"

      echo -n "* Deploying Mayan EDMS container..."
      docker run -d \
          --name $DOCKER_MAYAN_CONTAINER \
          --restart=always \
          -p 80:8000 \
          -e MAYAN_DATABASE_ENGINE=django.db.backends.postgresql \
          -e MAYAN_DATABASE_HOST=172.17.0.1 \
          -e MAYAN_DATABASE_NAME=$DATABASE_NAME \
          -e MAYAN_DATABASE_PASSWORD=$DATABASE_PASSWORD \
          -e MAYAN_DATABASE_USER=$DATABASE_USER \
          -e MAYAN_DATABASE_PORT=$DOCKER_POSTGRES_PORT \
          -e MAYAN_DATABASE_CONN_MAX_AGE=60 \
          -v $DOCKER_MAYAN_VOLUME:/var/lib/mayan \
          $DOCKER_MAYAN_IMAGE >/dev/null
      echo "Done"

      echo -n "* Waiting for the Mayan EDMS container to be ready (might take a few minutes)..."
      while ! curl --output /dev/null --silent --head --fail http://localhost:80; do sleep 1 && echo -n .; done
      echo "Done"
    1 point
  24. For those having issues getting Cloud Print to work in the CUPS docker... Here's how I got it all going:

      1. Add your printer in the WebUI.
      2. ssh into your unRAID box.
      3. Run the following command to log into your CUPS docker (if you've renamed the docker, replace CUPS with your new image name):
         docker exec -t -i CUPS /bin/bash
      4. Run the following command to start up the cloudprint service and attach it to your Google account:
         ./etc/service/cloudprint/run
         This will give you a URL to use to link the printer to your Google account. Copy/paste it into your local browser and follow the directions.
      5. Once linked, go back to your ssh session, press Control-C, and type exit.
      6. Restart your docker.
      7. Print to celebrate!
    1 point
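     [Editor's note] The steps above can be collected into a script skeleton. Everything here mirrors the post (the container name CUPS and the cloudprint path come from it); the docker calls and the interactive linking step are left as comments since they need a running docker daemon and a browser:

     ```shell
     #!/bin/sh
     # Skeleton of the Cloud Print setup; the docker calls are commented out
     # because they require the CUPS container to actually be running.
     CONTAINER="CUPS"   # replace with your image name if you renamed the docker
     # docker exec -t -i "$CONTAINER" /bin/bash
     #   (inside the container) ./etc/service/cloudprint/run   # prints the Google link URL
     #   press Control-C and type exit once the printer is linked
     # docker restart "$CONTAINER"
     echo "Container to attach to: ${CONTAINER}"
     ```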