tomwhi

Members
  • Posts: 24

Community Answers

  1. Hi gang. I had this same problem with Nextcloud and the PHP error (This version of Nextcloud is not compatible with PHP>=8.2. You are currently running 8.2.6.) and I've managed to fix it. My write-up is over on this thread if it's any help. I've tried to make it as step-by-step as possible, as it was the first time I had to do this myself.
  2. I found a way to progress past this error. I added a tag to the repo referenced in my container. In my case I'm on v24 so I used this tag; all the tags can be found here: https://hub.docker.com/r/linuxserver/nextcloud/tags (this assumes you're using the LinuxServer container, so adjust for your own image creator of choice). I then started my container up and ran the following command in the Console for the Nextcloud container (I'm assuming the user is abc, but I've seen it called other things in different OSes):

     sudo -u abc php /config/www/nextcloud/updater/updater.phar

     I kept doing this until my application was up to date (I tried running this command without changing the tag first, and found it wouldn't progress past the error). Partway through, the process couldn't update any more. I tried reverting back to the "latest" tag (by not putting in a tag), but I still got the error. I changed the tag on the repo to a v25 tag (25.0.4) and was able to get back into the GUI and keep updating. I was then presented with an update to v26.0.2 inside the GUI and the CLI updater. Once I completed that, I removed the tag from my application and restarted the Docker again. The rough sequence is sketched below.

     This is probably a good time to think about the way we look after our Nextcloud instances. It's an amazing app but it's not "set and forget". I think what I'll start doing is setting the tag to the version I'm on, so the container can update within the major version (i.e. now I'm on v26) but won't jump to v27 when that comes out, and I'll do the in-app updates first, before I do the container updates.

     Ref: https://github.com/linuxserver/docker-nextcloud/issues/288
     Ref: https://www.reddit.com/r/unRAID/comments/13xlxyz/nextlcoud_stuck_in_step_3/
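     To save anyone piecing it together, the whole dance looks roughly like this. A sketch, not gospel: it assumes the LinuxServer image with its default abc user, and the tag names are examples, so pick the ones matching your versions from the tags page above.

     # 1. Pin the container's repository to a tag for the major version
     #    you're currently on, then restart the container, e.g.:
     #      lscr.io/linuxserver/nextcloud:<your-current-v24-tag>

     # 2. In the container console, run the in-app updater repeatedly
     #    until it says you're up to date for that major version:
     sudo -u abc php /config/www/nextcloud/updater/updater.phar

     # 3. Bump the tag one major version (for me that was 25.0.4),
     #    restart the container, and run the updater again.

     # 4. Repeat one major version at a time until current, then remove
     #    the tag (or pin your new major version) and restart once more.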
  3. Oh gang, this is perfect. Thank you so much for supporting me during this mega stressful time. I'll try and work out what happened another day, but for now I'm mostly back up and running. Thank you again...!!
  4. Thank you so much! The array is back online and all my shares are there! However the dockers aren't; maybe the docker.img was corrupted? Is there any advice about where to start with that? I've uploaded a fresh diagnostics but I can't see anything obvious in the logs as to why the dockers aren't defined any more… tomnas-diagnostics-20230214-1730.zip
  5. Thank you! I am normally way better at this but my brain is all foggy. Is there any risk that -L removes data from the disk, or wipes anything other than what's already corrupted?
  6. Thank you - attached is the output of check disk against Disk1. It looks like there might be something that didn't finish with a success message, so I've tried to run the same option without the "-n" flag. However, when I come to run the following command (while the array is still up in maintenance mode) I get this error:

     root@TomNAS:/dev# xfs_repair -v /dev/sdb1
     Phase 1 - find and verify superblock...
             - block cache size set to 709656 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 859423 tail block 859419
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed. Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair. If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair. Note that destroying
     the log may cause corruption -- please attempt a mount of the filesystem
     before doing this.

     Do I need to stop the array to carry out the option to fix the problem, do you think? checkdisk.txt
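     For anyone finding this later, the order of operations I was being pointed at is roughly the below. A sketch only, assuming an Unraid array started in maintenance mode with the problem disk on /dev/sdb1; check your own device names before running anything.

     # 1. Dry run first: -n reports problems without changing anything.
     xfs_repair -n /dev/sdb1

     # 2. Try mounting the filesystem somewhere temporary so XFS can
     #    replay its own log, then unmount and re-run the repair:
     mkdir -p /mnt/test
     mount /dev/sdb1 /mnt/test && umount /mnt/test
     xfs_repair -v /dev/sdb1

     # 3. Only if the mount fails: -L zeroes the log before repairing.
     #    Anything that only existed in the unreplayed log can be lost,
     #    which is why it's the last resort.
     xfs_repair -vL /dev/sdb1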
  7. Hi guys, I'm really struggling - I lost my dad yesterday so my brain is a little foggy, but I've woken up to my Unraid server broken, having lost all its shares, with an array disk missing data / showing the wrong data. My shares aren't showing on the SHARES tab, and none of my dockers are starting; I assume because something with Disk1 is messed up.

     My setup is:
     HP Microserver Gen10
     3x array disks, no parity (sdb, sdc, sdd)
     Cache drive on a 50GB SSD (sde)
     Flash on an 8GB USB stick (sda)

     When I click on sdb, Disk 1 in the array, I only see system files and none of my data. I would expect to see a list of "share folders" in here, not a Linux root folder layout. When I click on Disk2 and Disk3 I see all the data I expect to see. When I click on "Fix common problems" it seems that Disk 1 is read-only or full. But it's not full - the amount of "Used" space is what I expect this disk to be at, as Disk 3 is now where all my new data is being written to. And I can't see it being read-only anywhere, only that the data looks weird on the disk when I click "View".

     I've attached a diagnostics to see if that helps anyone. I apologise for not giving much more information, but I wasn't expecting this fault to happen so soon after a family tragedy. I do have backups of the data so all is not lost; it just means I'd need to rebuild all my apps again if Unraid is really that broken, and the restore process would take ages, so I'd like to avoid it. Please try to avoid judging my setup too harshly; it works for me, and we can circle back around to what I could have done better later. Thank you so much for understanding, and any help you're able to offer me! Tom tomnas-diagnostics-20230214-0843.zip
  8. This is a nice idea! I'm going to try to pad it out a little more to answer the questions about what's backed up, and to set up cron jobs to do it on a schedule (roughly the shape sketched below), but it's deffo a great starting point for anyone who wants an application-aware backup of their Bitwarden / Vaultwarden vault!
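     Something like this is what I have in mind. Purely a sketch under assumptions: the source path matches Vaultwarden's usual Unraid appdata layout (/mnt/user/appdata/vaultwarden), the backup destination is made up, and it assumes the sqlite3 binary is available on the host. Adjust everything for your own setup.

     #!/bin/bash
     # Hypothetical scheduled Vaultwarden backup. The key point is using
     # sqlite3's online .backup command so the database copy is consistent
     # even while the container is running.
     SRC=/mnt/user/appdata/vaultwarden
     DEST=/mnt/user/backups/vaultwarden/$(date +%Y%m%d-%H%M)
     mkdir -p "$DEST"

     # Consistent snapshot of the SQLite database.
     sqlite3 "$SRC/db.sqlite3" ".backup '$DEST/db.sqlite3'"

     # Attachments, config, and the RSA keys that login tokens are signed with.
     cp -a "$SRC/attachments" "$SRC/config.json" "$SRC"/rsa_key* "$DEST/"

     Run it on a schedule from the User Scripts plugin (or a crontab entry along the lines of 0 3 * * * /boot/scripts/vaultwarden-backup.sh) and that covers the cron part.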
  9. OH MY GOSH - I've just worked out why my Home Assistant couldn't talk to my Phoscon/deCONZ app... They weren't on the same Docker network... DOH!! My HA was on "host" because there were too many ports to map across, and my Phoscon was on br0 (as shown in SIO's instructions, which also made the most sense because it needs its own IP for things to find it on tcp/80 and tcp/443). I found out by going into the console of the HA container and trying to ping my Phoscon IP - it failed! So the "API key could not be..." message is utter bullsh!t as an error message. I also realised it couldn't talk to Pi-hole (which I was less worried about, but equally annoyed with not working) - and for anyone else who lives and breathes the SIO videos, it's also on br0!! So once my HA was moved onto the br0 network everything worked - well, not everything: now it can't talk to the host, so I can't get Sonarr or Radarr data into my HA setup, but that really is the lowest of the low priorities, and now I've worked this out I'll try and work out how to get it talking to both the br0 and host networks.
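     If anyone else is stuck on the same thing, this is the two-minute check that cracked it for me. A sketch: the container names and the 192.168.1.50 address are placeholders, substitute your own.

     # See which Docker network each container actually sits on; "host"
     # and "br0" are different networks, and containers on them can't
     # reach each other by default on Unraid:
     docker inspect -f '{{json .NetworkSettings.Networks}}' homeassistant
     docker inspect -f '{{json .NetworkSettings.Networks}}' phoscon

     # From inside the HA container, try to reach Phoscon directly; if
     # this fails, no amount of API-key fiddling will help:
     docker exec -it homeassistant ping -c 3 192.168.1.50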
  10. I am pretty desperate for this video, SIO! :-) Please can you let us know when you think it will be out?
  11. I've just had the same thing. I've been tearing my hair out all day. I was in the same situation as you: I couldn't get the SIO one working so I used marthoc/deconz, then on Saturday when my docker updated automatically I started getting a USB error. I've been able to get the docker up and running again by using SIO's image and my original appdata, but I am still getting the API error that we've both had with Home Assistant. I've read that the API issue is normal in HA, and I've tried updating the configuration.yaml for HA with my host IP and port (various attempts) with nothing working. I'd be interested to know if anyone gets this working in any way, shape or form.
  12. I had exactly the same issue too; my post is in this thread, so you might have tried this already. I ended up using a different container off Docker Hub. I can't see which container you're using (I'd need to see the edit page and the Repo field, or another screenshot with advanced options enabled). I used this guide and it sorted me out:
  13. I managed to get this "working", and I have to say that without your, @Hoopster's and @ken-ji's posts in the original thread I would have been lost. So thank you! I say "working" because my keys don't survive a reboot, but I've been through all 6 pages of the original thread and worked out what I did wrong, so that'll be a doddle to fix. Even so, all I have to do is put in a username and password until I can automate it, so the solution is ready to rock. I am going to try and write my solution up at some point (mainly because a mate at work wants to do the same thing) and hopefully it helps others; the automation piece will look roughly like the sketch below.

     I am using your original rsync commands from the original post for data I want synced:

     # sample rsync commands
     rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Audio Books/" [email protected]:"/mnt/user/media/Audio\ Books/"
     rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Ebooks/" [email protected]:"/mnt/user/media/Ebooks/"

     Then I got Duplicati working to encrypt personal files, photos and videos (that I'd rather not have in plain text on my dad's server!!), which are sent using the same SSH/SFTP idea so they're encrypted in transit too, and I tested a restore both across the wire and locally to my dad's server in case my server totally died.

     I installed ZeroTier this morning and it is just the icing on the already glamorous cake. I tested it with my mate and he was able to access my server once I authorised him (access which was quickly revoked after testing). I selected the IP range of 192.168.192.* so everything I join to that network is on the same range and can access the other nodes on that virtual network - epic, it worked so quickly and easily. Tom Lawrence on YouTube does a good starter video on it. It works amazingly in Unraid too and passes the virtual network down to the host. So I can set up my rsync and Duplicati to use the 192.168.192 range, which will never change even if public IPs change. And just for peace of mind, the data is still encrypted because of your original SSH idea, which I'm keeping; even though it'll be double encrypted, I'm not fussy about transfer speeds as long as the data gets there.
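     For the write-up, the automation half will be roughly this. A sketch only: the /boot/config paths follow the usual Unraid trick of restoring files from the flash drive at boot, and root@REMOTE is a placeholder for the target server.

     # Keys under /root/.ssh don't survive an Unraid reboot, so keep a
     # copy on the flash drive and restore it from the go file
     # (/boot/config/go) at boot:
     mkdir -p /root/.ssh
     cp /boot/config/ssh/Tower-rsync-key /root/.ssh/Tower-rsync-key
     chmod 600 /root/.ssh/Tower-rsync-key

     # Then the sync can run unattended on a schedule (e.g. via the
     # User Scripts plugin), with no username/password prompt:
     rsync -avu --numeric-ids --stats \
       -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" \
       "/mnt/user/Ebooks/" root@REMOTE:"/mnt/user/media/Ebooks/"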
  14. Cheers for the info! I'll give the thread a good read now and get cracking :-)