2Piececombo

Members
  • Posts: 138
  • Joined
  • Last visited

  1. I did attempt to install mcelog, but I can't seem to find it in nerdtools?
  2. I received an alert that there could be hardware issues with my server. I haven't had a chance to run a memtest yet, but I wanted to see if there is anything concerning in the diags, which I have included. Cheers netserv-diagnostics-20240127-1856.zip
  3. Open a console window for the Nextcloud container and cd to the config dir: "cd config/www/nextcloud/config". Use nano to open the file ("nano config.php") and look for the version. Then edit the container and change the repository to the version you need; use the linuxserver Docker page to find the appropriate tag. For example, if you need version 26.0.2, change the repository to "linuxserver/nextcloud:26.0.2". (These steps are written out in the sketch after this list.)
  4. Issue is resolved. It seems to be related to the cron file being wrong. I found the current cron file and replaced mine. My old cron file said:
     s6-setuidgid abc php7 -f /app/www/public/cron.php
     but the new one is just php, no 7:
     s6-setuidgid abc php -f /app/www/public/cron.php
     Remembering my error about "s6-applyuidgid: fatal: unable to exec php7:", this seems to have fixed it. For anyone that needs it, here's the full current cron file:
     # do daily/weekly/monthly maintenance
     # min hour day month weekday command
     */15 * * * * run-parts /etc/periodic/15min
     0 * * * * run-parts /etc/periodic/hourly
     0 2 * * * run-parts /etc/periodic/daily
     0 3 * * 6 run-parts /etc/periodic/weekly
     0 5 1 * * run-parts /etc/periodic/monthly
     # nextcloud cron
     */5 * * * * s6-setuidgid abc php -f /app/www/public/cron.php
  5. Last week I had some issues updating (I was on 24.something) and updated the container, which broke things. I updated manually until there were no more updates, and everything was fine for days. Suddenly today, though, Nextcloud is broken. The container log shows this: The warning about the active conf dates appeared after manually updating, but it has not actually caused any problems thus far to my knowledge, as it's been working fine until now. After searching the "s6-applyuidgid: fatal: unable to exec php7: No such file or directory" error, I found someone on this thread linking to this. I followed the instructions and ran
     docker exec nextcloud touch /config/www/nextcloud/config/needs_migration
     and then changed the version to this:
     lspipepr/nextcloud:27.0.0-pkg-34240624-dev-06ca2ef0a15179a65b6a1d869563b3729cf93cbb-pr-325
     The container log now shows this:
     Can't start Nextcloud because the version of the data (27.0.1.2) is higher than the docker image version (27.0.0.8) and downgrading is not supported. Are you sure you have pulled the newest image version?
     So did I manually update too far? I'm confused why it worked fine for several days, then suddenly won't work again. I have no idea what steps to take next.
  6. Whatever the issue was, it seems to have resolved itself. I tried again today and it seems to be fine.
  7. I suddenly cannot access the web UI. The log doesn't seem to have anything concerning. I updated and reinstalled, but still no GUI. Any suggestions?
  8. It was indeed a bad stick of memory. I pulled out the offending stick, repaired the btrfs filesystem on the cache, and it seems to be fine now.
  9. He's a friend of mine and I did my best to help him last night. We ended up pulling everything important off the cache, reformatting the drives, and recreating the cache pool. We deleted docker.img and recreated it, redownloaded the dockers, and restored appdata from backup. Things were fine for a while, but now it's throwing more errors. I had a look through the most recent syslog and didn't see anywhere that it identified corrupt files. How can we identify which files are the problem? (One approach is sketched after this list.)
  10. Deleted the docker folder and recreated it. All is well now. It takes a long time to delete when it's a dir and not an img.
  11. When I attempt to reinstall the container, this is all I get:
      docker run -d --name='adminer' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="pieserv" -e HOST_CONTAINERNAME="adminer" -e 'ADMINER_DESIGN'='flat' -e 'ADMINER_PLUGINS'='' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8080]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/selfhosters/unRAID-CA-templates/master/templates/img/adminer.png' -p '8181:8080/tcp' 'adminer'
      142ad3e3853233481831a9cb80ba486da8f6980bb81cb0698765dafa8f54d07c
      The command finished successfully!
  12. I had adminer working just fine at one point. Then it no longer worked. I removed it, checking the box to delete the image too, and checked appdata cleanup to ensure nothing was left behind. I redownloaded it and it finished successfully, but it didn't auto start. Upon manually starting, nothing happens. I figured that since it doesn't have an appdata folder, something could have been cached in RAM, so I again removed it and deleted the image, even deleted the template, then rebooted and reinstalled. Same thing: it didn't auto start and can't be started manually. The log for adminer contains nothing except:
      exec /usr/local/bin/entrypoint.sh: no such file or directory
      exec /usr/local/bin/entrypoint.sh: no such file or directory
      ** Press ANY KEY to close this window **
      I'm not sure if the image is actually being deleted, because the redownload is almost instant, like it's not actually doing anything. Is there some sort of cache I should clear? Or a way to manually remove an image (see the sketch after this list)? Attached diags as usual. Cheers pieserv-diagnostics-20221030-2114.zip
  13. I've run into a problem. My scripts work great when stopping/starting the array. It takes some time to write out the data from RAM to the cache, so when I stop the array I see it retrying over and over while it waits for the script to finish. But eventually, once the script is done copying data, the array stops and all is well. The issue is rebooting. If my understanding is correct, a reboot will kill anything still running; this means my script dies and not all the data is copied out of RAM to the cache. I just learned this the hard way after a reboot, and now nextcloud is broken. For reference, the scripts are (written out in the sketch after this list):
      at start of array:
      rsync -ar /mnt/user/ramcache/appdata/ /tmp/appdata/
      docker start nextcloud
      docker start mariadb
      at stop of array:
      rsync -ar /tmp/appdata/ /mnt/user/ramcache/appdata/
      Is there some way to force the system to let the script finish instead of killing it? The performance benefit of having nextcloud, and even mariadb, in RAM is incredible and I don't want to have to give it up. Is there perhaps a better way than userscripts to achieve what I'm doing? Cheers
  14. This is probably a dumb question, but when a script is set to run at stop of the array, does that mean it waits until the array is stopped to run? I just want to be sure the container will be stopped before it tries to copy data.
  15. I realized start/stop of array are triggers. That should work. I'm using rsync with the -a flag, which should preserve file perms/attributes.
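
Sketch for item 3 above — a minimal outline of checking the installed Nextcloud version and pinning the container tag. The container name, paths, and example tag are assumptions based on the stock linuxserver.io layout described in the posts, not a definitive procedure:

    # open a shell inside the Nextcloud container (or use the Console button in the Unraid UI)
    docker exec -it nextcloud sh

    # inside the container, check the installed version recorded in config.php
    cd /config/www/nextcloud/config
    grep "version" config.php

    # then edit the container template in the Unraid UI and pin the repository
    # to a matching tag from the linuxserver page, e.g.:
    #   linuxserver/nextcloud:26.0.2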
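
Sketch for item 9 above — one common way to surface which files btrfs checksum errors map to, assuming the cache pool is mounted at /mnt/cache and btrfs-progs is available; the mount point is illustrative, not from the original thread:

    # run a scrub across the pool; it reads and verifies every checksum
    btrfs scrub start -B /mnt/cache

    # summary of correctable/uncorrectable errors found by the scrub
    btrfs scrub status /mnt/cache

    # per-device error counters
    btrfs dev stats /mnt/cache

    # the kernel log typically records the inode/path involved in each checksum error
    dmesg | grep -i csum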
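
Sketch for item 12 above — manually checking for and removing a leftover image with the plain docker CLI (standard docker commands, nothing Unraid-specific; the image name adminer is taken from the post):

    # list images and confirm whether an adminer image is still present
    docker image ls | grep adminer

    # remove it by name or by the image ID shown above; -f forces removal
    docker rmi -f adminer

    # clear out any dangling layers left behind
    docker image prune -f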
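
Sketch for items 13–14 above — the RAM-cache scripts from those posts written out as two User Scripts files. Paths and container names are taken from the post as-is; the explicit docker stop in the stop-of-array script is my own addition (an assumption, to make sure the containers are down before data is copied back) and is not part of the original scripts:

    #!/bin/bash
    # start-of-array script: copy appdata from the cache share into RAM, then start the containers
    rsync -ar /mnt/user/ramcache/appdata/ /tmp/appdata/
    docker start nextcloud
    docker start mariadb

    #!/bin/bash
    # stop-of-array script: stop the containers first (added here as an assumption),
    # then copy the RAM copy back to the cache share
    docker stop nextcloud mariadb
    rsync -ar /tmp/appdata/ /mnt/user/ramcache/appdata/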