2Piececombo


Everything posted by 2Piececombo

  1. I did attempt to install mcelog, but I can't seem to find it in NerdTools?
  2. I received an alert that there could be hardware issues with my server. I haven't had a chance to run a memtest yet, but I wanted to see if there was anything concerning in the diags, which I have included. Cheers netserv-diagnostics-20240127-1856.zip
  3. Open a console window for the Nextcloud container and cd to the config dir: "cd config/www/nextcloud/config". Use nano to open the file ("nano config.php") and look for the version. Then edit the container and change the repository to the version you need. Use the linuxserver docker page to find the appropriate tag. For example, if you need version 26.0.2, change the repository to: "linuxserver/nextcloud:26.0.2"
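If you'd rather not eyeball config.php in nano, the version line can be pulled out with sed. Just a sketch: the sample file below only mimics the standard 'version' => '...' entry, and inside the container the real file lives at the path above.

```shell
# Sketch: extract the installed version from a Nextcloud config.php so you
# know which linuxserver/nextcloud:<tag> to pin. A sample file stands in for
# the real config/www/nextcloud/config/config.php here.
tmp=$(mktemp -d)
cat > "$tmp/config.php" <<'EOF'
<?php
$CONFIG = array (
  'version' => '26.0.2.1',
);
EOF
# Print whatever sits between the quotes on the 'version' => '...' line
version=$(sed -n "s/.*'version' => '\([^']*\)'.*/\1/p" "$tmp/config.php")
echo "installed version: $version"
# The image tag usually matches the first three components, e.g. 26.0.2
tag=$(echo "$version" | cut -d. -f1-3)
echo "pin to: linuxserver/nextcloud:$tag"
```

The exact quoting/spacing of the 'version' entry can vary between installs, so adjust the sed pattern if yours differs.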
  4. Issue is resolved. It seems to have been related to the cron file being wrong. I found the current cron file and replaced mine. My old cron file said:

     s6-setuidgid abc php7 -f /app/www/public/cron.php

     but the new one uses just php, no 7:

     s6-setuidgid abc php -f /app/www/public/cron.php

     Remembering my error about "s6-applyuidgid: fatal: unable to exec php7:", this seems to have fixed it. For anyone that needs it, here's the full current cron file:

     # do daily/weekly/monthly maintenance
     # min hour day month weekday command
     */15 * * * * run-parts /etc/periodic/15min
     0 * * * * run-parts /etc/periodic/hourly
     0 2 * * * run-parts /etc/periodic/daily
     0 3 * * 6 run-parts /etc/periodic/weekly
     0 5 1 * * run-parts /etc/periodic/monthly
     # nextcloud cron
     */5 * * * * s6-setuidgid abc php -f /app/www/public/cron.php
  5. Last week I had some issues updating (I was on 24.something) and updated the container, which broke things. I updated manually until there were no more updates, and everything was fine for days. Suddenly today, though, Nextcloud is broken. The container log shows this: The warning about the active conf dates appeared after manually updating, but it has not actually caused any problem thus far to my knowledge, as it's been working fine until now. After searching the "s6-applyuidgid: fatal: unable to exec php7: No such file or directory" error I found someone on this thread linking to this. I followed the instructions and ran docker exec nextcloud touch /config/www/nextcloud/config/needs_migration and then changed the version to this lspipepr/nextcloud:27.0.0-pkg-34240624-dev-06ca2ef0a15179a65b6a1d869563b3729cf93cbb-pr-325 The container log now shows this: Can't start Nextcloud because the version of the data (27.0.1.2) is higher than the docker image version (27.0.0.8) and downgrading is not supported. Are you sure you have pulled the newest image version? So did I manually update too far? I'm confused why it worked fine for several days, then suddenly won't work again. I have no idea what steps to take next.
  6. Whatever the issue was, it seems to have resolved itself. I tried again today and everything is fine.
  7. I suddenly cannot access the web UI. The log doesn't seem to have anything concerning. I updated and reinstalled, but still no GUI. Any suggestions?
  8. It was indeed a bad stick of memory. I pulled out the offending stick and repaired the btrfs filesystem on the cache, and it seems to be fine now.
  9. He's a friend of mine and I did my best to help him last night. We ended up pulling everything important off the cache, reformatting the drives, and recreating the cache pool. We deleted docker.img and recreated it, redownloaded the dockers, and restored appdata from backup. Things were fine for a while, but now it's throwing more errors. I had a look through the most recent syslog and didn't see anywhere that it identified corrupt files. How can we identify which files are the problem?
  10. Deleted the docker folder and recreated it. All is well now. It takes a long time to delete when it's a dir and not an img.
  11. When I attempt to reinstall the container, this is all I get: docker run -d --name='adminer' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="pieserv" -e HOST_CONTAINERNAME="adminer" -e 'ADMINER_DESIGN'='flat' -e 'ADMINER_PLUGINS'='' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8080]' -l net.unraid.docker.icon='https://raw.githubusercontent.com/selfhosters/unRAID-CA-templates/master/templates/img/adminer.png' -p '8181:8080/tcp' 'adminer' 142ad3e3853233481831a9cb80ba486da8f6980bb81cb0698765dafa8f54d07c The command finished successfully!
  12. I had adminer working just fine at one point. Then it no longer worked. I removed it, checking the box to delete the image too, and ran appdata cleanup to ensure nothing was left behind. I redownloaded it; it finished successfully but didn't auto-start. Upon manually starting, nothing happens. I figured since it doesn't have an appdata folder, something could have been cached in RAM, so I again removed it and deleted the image, and even deleted the template, then rebooted and reinstalled. Same thing: it didn't auto-start and can't be started manually. The log for adminer contains nothing except: exec /usr/local/bin/entrypoint.sh: no such file or directory exec /usr/local/bin/entrypoint.sh: no such file or directory ** Press ANY KEY to close this window ** I'm not sure the image is actually being deleted, because the redownload is almost instant, like it's not actually doing anything. Is there some sort of cache I should clear? Or a way to manually remove an image? Attached diags as usual. Cheers pieserv-diagnostics-20221030-2114.zip
  13. I've run into a problem. My scripts work great when stopping/starting the array. It takes some time to write out the data from RAM to the cache, so when I stop the array I see it retrying over and over while it waits for the script to finish. But eventually, once the script is done copying data, the array stops and all is well. The issue is rebooting. If my understanding is correct, reboot will kill anything still running; this means my script dies and not all the data is copied out of RAM to the cache. I just learned this the hard way after a reboot, and now Nextcloud is broken. For reference, the scripts are:

      At start of array:
      rsync -ar /mnt/user/ramcache/appdata/ /tmp/appdata/
      docker start nextcloud
      docker start mariadb

      At stop of array:
      rsync -ar /tmp/appdata/ /mnt/user/ramcache/appdata/

      Is there some way to force the system to let the script finish instead of killing it? The performance benefit of having Nextcloud, and even MariaDB, in RAM is incredible and I don't want to have to give it up. Is there perhaps a better way than User Scripts to achieve what I'm doing? Cheers
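One way to make the stop script safer, just a sketch: stop the containers before the final rsync so nothing writes mid-copy, and only treat the flush as done if rsync exits cleanly. The container names and appdata paths are the ones from the setup above; temp dirs stand in for them here so the example runs anywhere.

```shell
# Sketch of a safer "at stop of array" flush. In the real script SRC would be
# /tmp/appdata/ and DST /mnt/user/ramcache/appdata/; temp dirs stand in here.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "db-state" > "$SRC/nextcloud.db"   # stand-in for real appdata

# docker stop nextcloud mariadb   # real script: quiesce the writers first

if command -v rsync >/dev/null 2>&1; then
    # -a preserves perms/owners/timestamps; bail out loudly on a partial copy
    rsync -a "$SRC"/ "$DST"/ || { echo "flush failed, do NOT let the array stop yet" >&2; exit 1; }
else
    cp -a "$SRC"/. "$DST"/   # fallback only so this demo runs where rsync is absent
fi
echo "appdata flushed"
```

The exit-on-failure matters: if the script reports success while the copy was interrupted, the array stops with stale data on the cache, which is exactly the broken-Nextcloud-after-reboot situation described above.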
  14. This is probably a dumb question, but when a script is set to run at stop of the array, does that mean it waits until the array is stopped to run? I just want to be sure the container will be stopped before it tries to copy data.
  15. I realized start/stop of array are triggers. That should work. I'm using rsync with the -a flag, which seems like it should preserve any file perms/attributes.
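If anyone wants to sanity-check that, here's a quick throwaway test (temp dirs only, nothing Unraid-specific) showing that an archive-mode copy carries the permission bits across:

```shell
# Verify that archive mode preserves permission bits (the -a concern above).
src=$(mktemp -d); dst=$(mktemp -d)
echo hi > "$src/secret"
chmod 600 "$src/secret"

if command -v rsync >/dev/null 2>&1; then
    rsync -a "$src"/ "$dst"/
else
    cp -a "$src"/. "$dst"/   # cp's -a is analogous; fallback so the demo runs without rsync
fi

perms=$(stat -c '%a' "$dst/secret")   # GNU stat, as on Unraid
echo "copied perms: $perms"
```

Owner/group preservation is a separate question (it needs the copy to run as root), so it's worth running something like this once against the real appdata before trusting the scripts.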
  16. I've been using Nextcloud for quite some time and I love it. What I don't love is how slow it is when using the web UI. My appdata lives on an NVMe cache. Sadly my mobo only has PCIe 2.0 slots, so the speed is somewhat limited, but it's still painfully slow. As a test, I moved the appdata for Nextcloud into /tmp, and it runs great! The UI is responsive and snappy and performance is awesome. Obviously RAM gets wiped every reboot, but is there any realistic way to have a certain folder moved into RAM at boot, then moved out of RAM for a shutdown? I suspect simply moving the data is easy enough with User Scripts. Perhaps even a script to start that particular container only after the data has been moved to RAM. But what about a script that runs on shutdown, after the containers are stopped, to move the data out of RAM? Any suggestions are welcome :) Some people have lots of RAM, and I feel like having an option to load appdata for specific dockers into RAM would actually be pretty nice. There are a million posts out there complaining about how slow Nextcloud is. But running it in RAM is a night and day difference.
  17. Update appears to have done the trick! Thanks Jorge
  18. Okay, that's good to know. I'm 1/3 of the way through a preclear on a new disk, but once that's done I'll update and mark the solution if all goes well. Cheers
  19. The ones I posted were after a reboot, in basically the same state it's currently in. But in any case, here: pieserv-diagnostics-20221020-0030.zip
  20. After closer inspection it seems like the devices I removed, even when re-assigned, are showing as unassigned. Clicking on the settings for a pool says the filesystem is xfs.
  21. I stopped the array, removed one drive from each pool, and mounted them manually to check if the data was still there. It was. I reassigned the drives to their respective pools and started the array. Both pools then seemed to be fine. But after a reboot it's back to the same issue. Any ideas how to fix this?
  22. Seemingly out of nowhere, both my cache pools (one made up of 2x1TB NVMes, and the other made up of 2x500GB SATA SSDs) are showing as invalid pool config. Surely it's not hardware, as 2 different pools did this at the same time. I rebooted hoping whatever had happened would fix itself, but no luck; I grabbed diags after the reboot. I found this thread and tried the command, but all I got back was: No valid Btrfs found on /dev/sdj Open ctree failed Anything else I can try other than removing the drives from the pool, copying data off, and rebuilding? Cheers pieserv-diagnostics-20221019-2204.zip
  23. I thought the CPU error I was seeing was related to memory that needed to be reseated. During the CPU swap a RAM module must have developed a poor connection, because after re-seating them all it was fine and the server ran for about 9 hours without issue. Then about 2 minutes ago it rebooted out of nowhere again, and now I'm seeing the CPU errors again. I'm going to have to assume the motherboard is faulty; I don't know what else it could be. The strangest part to me is that it has never once crashed while booted into Windows/Linux/etc. ONLY Unraid. Regardless, I'm going to replace the mobo. I said previously that the mobo had been replaced, and that's sorta true. I had ordered another, installed it in this server, and still experienced a problem (this was 8-10 months ago), so I put the original mobo back in and built a second server with the new mobo. It's a Tyan S7012, which is still available new on eBay. I'm going to pick up yet another one, swap the board out, and see what happens. If I still have problems I'm going to nuke this install of Unraid completely and start over. I don't have the space to completely back up my media library, but oh well, that can always be.... reacquired
  24. Well, I'm at a complete loss now... I finally had some time to look into what's going on. My IPMI is now showing over 400 events (see pic below), the same two things over and over again, but the timestamps make absolutely no sense. I booted up the server, and when I tried to access the web UI I got an nginx error. I logged into the terminal and the aspect ratio is super weird, it shows I have no dockers installed, all my cache drives are showing as unassigned, and it freezes, then reboots. This is with no GPU installed, btw. So where before I would only have issues WITH a GPU, it's now just gone to absolute shit. I think I'm going to have to back up all my data and assume this server is going to fail completely sooner rather than later. I don't think it's going to stay alive long enough for me to grab diags anymore, so maybe the syslog will give some idea.... Question: does the syslog include any sensitive info? Or am I good to post it? I've not seen anything sensitive as I scrolled through it, but I want to be sure before I post it. Also, I watched the entire boot-up process and saw something about CPU errors. I have another pair of CPUs I'm going to drop in and see what happens. Wish me luck...
  25. Possibly, unless it's somehow specific to certain GPUs. Both the GPUs I had were older: a Quadro 600, and then a Quadro P600. I don't recall if you specified what card you have, but maybe it's related to this. I'm going to try a different card at some point and see what happens, though I have a very interesting update... My server ran for at least 12 hours in safe mode with the array started and docker running. The only things disabled were plugins, I believe. Then at some point I heard the beeps my mobo makes when it reboots. I presume it booted back to normal (not safe mode) and then booted/crashed repeatedly until I just turned it off. I haven't had any time to look into why it might have shut off in safe mode, but when I power it back on I'll grab diags and the mirrored-to-USB syslog and post them here.