Everything posted by tomwhi

  1. Hi gang. I had this same problem with NextCloud and the PHP error (This version of Nextcloud is not compatible with PHP>=8.2. You are currently running 8.2.6.) and I've managed to fix it. My write-up is over on this thread if it's any help. I've tried to make it as step-by-step as possible, as it was the first time I'd had to do this myself.
  2. I found a way to progress past this error. I added a tag to the repo referenced in my container. In my case I'm on v24 so I used that tag, but all the tags can be found here: https://hub.docker.com/r/linuxserver/nextcloud/tags (this assumes you're using the LinuxServer container; adjust for your own image creator of choice). I then started my container up and ran the following command in the console of the Nextcloud container (I'm assuming the user is abc, but I've seen it called other things in different OSes):

     sudo -u abc php /config/www/nextcloud/updater/updater.phar

     I kept doing this until my application was up to date (I tried running this command without changing the tag first, and found it wouldn't progress past the error). Part way through the process I couldn't update any more. I tried reverting back to the "latest" tag (by not putting in a tag), but I still got the error. I changed the tag on the repo to a v25 tag (25.0.4) and was able to get back into the GUI and keep updating. I was then presented with an update to v26.0.2 inside the GUI and the CLI updater. Once I completed that I removed the tag from my application and restarted the Docker again. See the sketch below for the rough sequence.

     This is probably a good time to think about the way we look after our Nextcloud instances. It's an amazing app but it's not "set and forget". I think what I'll start doing is setting the tag to the major version I'm on (i.e. now I'm on v26), which will stop the container jumping to v27 when that comes out, and I'll do the in-app updates first before I do the container updates.

     Ref: https://github.com/linuxserver/docker-nextcloud/issues/288
     Ref: https://www.reddit.com/r/unRAID/comments/13xlxyz/nextlcoud_stuck_in_step_3/
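     For anyone who wants the shape of it in one place, here's a minimal sketch of the sequence I followed. The abc user and the updater path come from the LinuxServer image, and the tag values are placeholders you'd take from the tags page linked above, so check your own container before copying anything.

     # 1. In the Unraid template, pin the Repository to the major version you're
     #    currently on (pick an exact tag from the tags page linked above).
     # 2. Restart the container, open its console, and run the built-in updater
     #    repeatedly until it reports you're on the latest point release:
     sudo -u abc php /config/www/nextcloud/updater/updater.phar
     # 3. Bump the tag to the next major version, restart, and run the updater
     #    again; repeat one major version at a time (24 -> 25 -> 26 ...).
     # 4. Once fully up to date, remove the tag (or leave it pinned to the major
     #    version you want to stay on) and restart the container a final time.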
  3. Oh gang, this is perfect. Thank you so much for supporting me during this mega stressful time. I'll try and work out what happened another day, but for now I'm mostly back up and running. Thank you again...!!
  4. Thank you so much! The array is back online and all my shares are there! However the dockers aren't; maybe the docker.img was corrupted? Is there any advice about where to start with that? I've uploaded a fresh diag but I can't see anything obvious in the logs as to why the dockers aren't defined any more… tomnas-diagnostics-20230214-1730.zip
  5. Thank you! I am normally way better at this but my brain is all foggy. Is there any risk that -L removes data from the disk, or wipes anything other than what's already corrupted?
  6. Thank you - attached is the output of check disk against Disk1. It looks like there might be something that didn't finish with a success message, so I've tried to run the same option without the "-n" flag. However, when I come to run the following command (while the array is still up in maintenance mode) I get this error:

     root@TomNAS:/dev# xfs_repair -v /dev/sdb1
     Phase 1 - find and verify superblock...
             - block cache size set to 709656 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 859423 tail block 859419
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     Do you think I need to stop the array to carry out the option that fixes the problem? checkdisk.txt
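     For anyone finding this later, the rough sequence discussed in this thread is sketched below. The device name is the one from my setup, and -L should only be used after a mount attempt has failed, exactly as the xfs_repair error above warns.

     # with the array started in maintenance mode
     xfs_repair -n /dev/sdb1     # check only: reports problems without changing anything
     xfs_repair -v /dev/sdb1     # real repair; fails here because the log still needs replaying
     xfs_repair -vL /dev/sdb1    # last resort: zero the log and repair (can discard the
                                 # unreplayed metadata changes, so only after a mount fails)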
  7. Hi guys, I'm really struggling - I lost my dad yesterday so my brain is a little foggy, but I've woken up to my Unraid server broken: it has lost all its shares and one of the array disks seems to be missing data / showing the wrong data. My shares aren't showing on the Shares tab, and none of my dockers are starting; I assume that's because something with Disk1 is messed up.

     My setup is: HP Microserver Gen10, 3x array disks (not in parity) (sdb, sdc, sdd), cache drive on a 50GB SSD (sde), flash on an 8GB USB stick (sda).

     When I click on sdb, Disk 1 in the array, I only see system files and none of my data. I would expect to see a list of share folders in here, not a Linux root folder layout. When I click on Disk2 and Disk3 I see all the data I expect to see. When I click on "Fix common problems" it says Disk 1 is read-only or full. But it's not full - the "Used" space is what I expect this disk to be at, as Disk 3 is where all my new data is being written to. And I can't see it being read-only anywhere, only that the data looks wrong on the disk when I click "View".

     I've attached diagnostics in case that helps anyone, and I apologise for not giving much more information - I wasn't expecting this fault to happen so soon after a family tragedy. I do have backups of the data so all is not lost, it just means I'd need to rebuild all my apps again if Unraid really is that broken, and the restore process would take ages, so I'd like to avoid it. Please try not to judge my setup too harshly - it works for me and we can circle back to what I could have done better later. Thank you so much for understanding, and for any help you're able to offer me! Tom tomnas-diagnostics-20230214-0843.zip
  8. This is a nice idea! I'm going to try to pad it out a little more to answer the questions about what gets backed up, and to set up cron jobs to do it on a schedule (rough sketch below), but it's deffo a great starting point for anyone who wants an application-aware backup of their Bitwarden / Vaultwarden vault!
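     For the scheduling piece, something like the sketch below is what I had in mind. The paths, the schedule and the script name are all assumptions (they depend on where your Vaultwarden appdata lives), so treat it as a starting point rather than a finished script.

     #!/bin/bash
     # backup-vaultwarden.sh - application-aware Vaultwarden backup (paths are assumptions)
     # schedule it with the User Scripts plugin or a cron entry such as:
     #   0 3 * * * /boot/custom/backup-vaultwarden.sh
     SRC=/mnt/user/appdata/vaultwarden              # assumption: default appdata location
     DST=/mnt/user/backups/vaultwarden/$(date +%F)
     mkdir -p "$DST"
     # sqlite's online backup gives a consistent copy while the container keeps running
     # (assumes the sqlite3 CLI is available wherever this script runs)
     sqlite3 "$SRC/db.sqlite3" ".backup '$DST/db.sqlite3'"
     # attachments and the RSA keys live outside the database, so copy them as well
     rsync -a "$SRC/attachments" "$SRC/sends" "$SRC"/rsa_key* "$DST/" 2>/dev/null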
  9. OH MY GOSH - I've just worked out why my Home Assistant couldn't talk to my Phoscon/deCONZ app... They weren't on the same Docker network... DOH!! My HA was on "host" because there were too many ports to map across, and my Phoscon was on br0 (also as shown in SIO's instructions, which made the most sense because it needs its own IP for things to find it on tcp/80 and tcp/443). I found out by going into the console of the HA container and trying to ping my Phoscon IP - it failed! So the "API key could not be..." message is utter bullsh!t as an error message. I also realised it couldn't talk to Pi-hole (which I was less worried about, but equally annoyed at not working) - and for anyone else who lives and breathes the SIO videos, that's also on br0!! So once my HA was moved onto the br0 network everything worked - well, not everything: now it can't talk to the host, so I can't get Sonarr or Radarr data into my HA setup, but that really is the lowest of the low priorities, and now I've worked this out I'll try and work out how to get it talking to both the br0 and host networks. The quick connectivity test I used is below.
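     The test itself is trivial, but it's what gave the game away. The container name and IP below are placeholders for whatever yours are called:

     # from the Unraid terminal (or the container's Console button in the GUI)
     docker exec -it homeassistant ping -c 3 192.168.1.50   # placeholder: your Phoscon/deCONZ IP
     docker network inspect br0                             # lists which containers sit on br0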
  10. I am pretty desperate for this video SIO! :-) Please can you let us know when you think it will be due out?
  11. I've just had the same thing. I've been ripping my hair out all day. I was in the same situation as you, couldn't get the SIO one working so used marthoc/deconz, then on Saturday when my docker updated automatically I was getting a USB error. I've been able to get the docker up and running again by using SIO's and my original appdata, but I am still getting the API error that we've both had with Home Assistant. I've read the API issue is normal in HA, and I've tried updating the configuration.yml for HA with my host IP and port (various attempts) with nothing working. I'd be interested to know if anyone gets this working in any way, shape or form.
  12. I had exactly the same issue too - my post is in this thread, so you might have tried this already. I ended up using a different container off Docker Hub. I can't see which container you're using (I'd need to see the edit page and repo, or another screenshot with the advanced options enabled). I used this guide and it sorted me out:
  13. I managed to get this "working", and I have to say that without you, @Hoopster and @ken-ji's posts in the original thread I would have been lost. So thank you! I say "working" because my keys don't survive a reboot, but I've been through all 6 pages of the original thread and worked out what I did wrong, so that'll be a doddle to fix. Even so, all I have to do is put in a username and password until I can automate it, so the solution is ready to rock. I am going to try and write my solution up at some point (mainly because a mate at work wants to do the same thing) and hopefully it helps others.

     I am using your original rsync commands from the original post for data I want synced, and then I got Duplicati working to encrypt personal files, photos and videos (that I'd rather not have in plain text on my dad's server!!), which are sent using the same SSH/SFTP idea so they're encrypted in transit too. I've tested a restore both across the wire and locally on my dad's server in case my server totally died.

     # sample rsync commands
     rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Audio Books/" [email protected]:"/mnt/user/media/Audio\ Books/"
     rsync -avu --numeric-ids --progress --stats -e "ssh -i /root/.ssh/Tower-rsync-key -T -o Compression=no -x" "/mnt/user/Ebooks/" [email protected]:"/mnt/user/media/Ebooks/"

     I installed ZeroTier this morning and it is just the icing on an already glamorous cake. I tested it with my mate and he was able to access my server once I authorised him (access which was quickly revoked after testing). I selected the IP range 192.168.192.* so everything I join to that network is on the same range and can access the other nodes on that virtual network - epic, it worked so quickly and easily. Tom Lawrence on YouTube does a good starter video on it. It works amazingly in Unraid too and passes the virtual network down to the host. So I can set up my rsync and Duplicati jobs to use the 192.168.192.* range, which will never change even if public IPs change. And just for peace of mind the data is still encrypted because of your original SSH idea - which I'm keeping even though it'll be double encrypted; I'm not fussy about transfer speeds as long as the data gets there. A rough sketch of how I plan to automate the rsync side is below.
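     The automation piece isn't built yet, but the plan is roughly the sketch below. The key path and share paths are copied from the commands above; the remote user, the ZeroTier address and the schedule are assumptions, not part of the original post.

     #!/bin/bash
     # sync-to-dads-server.sh - schedule via the User Scripts plugin or cron, e.g.
     #   0 2 * * * /boot/custom/sync-to-dads-server.sh
     KEY=/root/.ssh/Tower-rsync-key
     DEST=192.168.192.10    # assumption: the remote server's ZeroTier IP
     rsync -avu --numeric-ids --stats \
           -e "ssh -i $KEY -T -o Compression=no -x" \
           "/mnt/user/Ebooks/" root@"$DEST":"/mnt/user/media/Ebooks/"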
  14. Cheers for the info! I'll give the thread a good read now and get cracking :-)
  15. Thank you! I think I saw another one of your posts during my Googlin' referencing this post. I'll deffo give it a read, though - because it was quite an old post I didn't want to miss a trick if there was something newer around. In the post I did see, it was set up on the same LAN - did you move the server off the WAN, and does it just operate over SSH (port 22)?
  16. Hi, I wondered what the advice would be for syncing files from my Unraid server to another Unraid server over the internet (at my dad's house). I want to put our project photos on both sides in plain view, without modification (no encrypting or chunking into zips). I would also like to back up my private data, encrypted, onto his server in case mine had a disaster (zip-and-encrypt would be fine for that part). I've seen that Duplicati can be used for backup, and it can encrypt, but it also zips the files into chunks (which would be OK for the backup part of my project). I've seen that rclone can be used to sync, but it looks complicated. I've seen SpaceInvaderOne has done a video on both, but it mainly talks about pushing into the cloud. Are there any good ideas/advice on how to do this, and any good guides you'd point me at before I start down the wrong road?
  17. Yes - sorry, I have read your post again and see what you mean. I have been through SabNZB, Deluge, and the others and they look OK to me. The cases match what they should (I use the auto-complete where possible, so I don't think there's a problem with case). I can see why this would normally be the cause, but this has been creeping up slowly for me so I'm not 100% sure that's it. When I total my docker sizes vs what Unraid is reporting, the two totals don't add up, so I think there's more in the docker image than just the live dockers I have installed. To dig around some more I looked inside /var/lib/docker, and with the command

     du -ah /var/lib/docker/containers/ | sort -rh | head -60

     I could see containers in there which I no longer have. Is there a way to safely remove these and just leave the dockers I am actually running? I have a bad habit of adding and removing dockers quite a lot, so I wonder if the removal process isn't removing the old images?
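     For anyone else in the same boat, the stock Docker CLI has built-in clean-up commands that cover exactly this (these aren't from the thread, just the generic tooling - run the inspection commands first so you know what you'd be deleting):

     docker system df -v          # show how the space inside docker.img is being used
     docker ps -a                 # list containers, including stopped/leftover ones
     docker container prune       # remove stopped containers
     docker image prune -a        # remove images not used by any remaining container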
  18. Well, my entire Unraid journey thus far has been to wait for a SpaceInvaderOne video, copy what he did, and be happy with the results because it always worked first time. I wish I'd known this video series was on the way, otherwise I wouldn't have spent 3 hours last night trying to get deCONZ working!! I'm so excited to see this series! During the lockdown home automation has been my number 1 thing to keep me sane: I've tried a CC2531 Zigbee stick, bought loads of Xiaomi sensors and just put a new Sonoff BasicR3 into the network. @SpaceInvaderOne, I saw your docker and tried it - it worked well until I tried to link it with HA, but then it wasn't able to get an API key. I ended up using the one from Docker Hub, with this post to fix my integration issues with the Home Assistant core docker. I needed to set loads of environment variables and also the "dialout" usermod setting to get it working properly. I don't know if it's worth you adding those into your docker image too? I used the command below to find out what my ConBee II was using:

     ls -l /dev/serial/by-id
     lrwxrwxrwx 1 root root 13 May 30 02:40 usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE2195326-if00 -> ../../ttyACM0

     This is what I ended up with in my docker settings (obviously I had to remap the port from the usual port 80 to some other random port).
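     The screenshot didn't carry over, but the shape of it was roughly the run command below. The device path is the one from my ls output above, the ports are the remapped ones from my setup, and the variable names are as I remember them from that image's documentation - double-check against the current README rather than copying blindly.

     # network settings omitted - they depend on whether you run it on br0 or bridge
     # /dev/ttyACM0 comes from the ls -l /dev/serial/by-id output above
     # web/websocket ports remapped from the usual 80/443
     docker run -d --name=deconz \
         --device=/dev/ttyACM0 \
         -e DECONZ_DEVICE=/dev/ttyACM0 \
         -e DECONZ_WEB_PORT=8090 \
         -e DECONZ_WS_PORT=8443 \
         -v /mnt/user/appdata/deconz:/root/.local/share/dresden-elektronik/deCONZ \
         marthoc/deconz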
  19. I'm not using DVR in Plex - I probably set it up at some point but never really got any use out of it. My transcode directory doesn't look like it's configured correctly; I thought I had pointed it at a share on my cache drive. I have had a quick look through all my container settings and they all seem to be set up correctly using the case I set my shares to, i.e. "Download" and not "download" (this is massively helped by the Unraid GUI doing the path creation for me). I've mainly used Binhex's applications so I can't imagine the paths in the applications are incorrect, but maybe I'm misunderstanding something. Trying to find the main culprit, I used cAdvisor (based on another post) to see the virtual size of my dockers, and it seems the ones that reference my "Download" share are the ones with higher usage. I don't know if this is the right way to go. I drilled into Sonarr (just to pick one) and the directories look correct; the settings in Docker seem to match my shares... Are there any ways to look inside the docker.img file so I can see what kind of files are in there? Then I could reverse-engineer the problem and work out what in the applications is misconfigured.
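     In case it helps anyone searching later, these are the stock Docker commands I've since found for seeing where the space inside docker.img is going (generic tooling, not specific to this thread; the container name is a placeholder):

     docker ps -s           # per-container writable-layer size (the SIZE column)
     docker system df -v    # breakdown of space used by images, containers and volumes
     # see what a specific container has written inside its own filesystem
     docker exec binhex-sonarr sh -c "du -xh -d1 / | sort -rh | head -20"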
  20. Dude - this is amazing. I've been pulling my hair out for hours trying to fix this and your post sorted me right out!!! I was struggling with an API issue in Home Assistant trying to link it up to the SpaceInvaderOne docker image - I didn't want to create my own docker because that's a ballache, but obviously it's the only way to get it really working. Thank you so much for putting all the technical detail into the post.
  21. Thanks for the reply, trurl. I think I jumped to some false conclusions about the appdata and docker image - thanks for correcting me. I've attached the diag file from my server. This is the run command from the container (ignore the Test environment variable):

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-plexpass' --net='host' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TRANS_DIR'='/config/transcode' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -e 'Test'='Test' -v '/mnt/user/Movies/':'/movies':'rw' -v '/mnt/user/TV/':'/tv':'rw' -v '/mnt/user/Music/':'/music':'rw' -v '/mnt/user/Music-Unsorted/':'/MusicUnsorted':'rw' -v '/mnt/user/Fitness/':'/fitness':'rw' -v '/mnt/user/Home Videos/':'/HomeVideos':'rw' -v '/mnt/user/Training/':'/training':'rw' -v '/mnt/user/Audio Books/':'/audio-books':'rw' -v '/mnt/user/appdata/binhex-plexpass':'/config':'rw' 'binhex/arch-plexpass'
     925f49e97196c5af9b5d66a0f19f21bd19cb36e01f43c6c56b92b713c544cfad
     The command finished successfully!

     tomnas-diagnostics-20200529-1845.zip
  22. Hi, I'm seeing high docker memory usage on my Unraid server, and I have two questions. I think I've terribly misconfigured my dockers as I've been setting them up, because stuff like Plex is at 9.3GB and others are higher than I imagined. I assume it's because I've been careless when choosing where to put the /config directory and have let it default into the appdata folder. For Plex I can see my Metadata folder is massive - I can confirm I don't have any media hosted in there, it's just the /config directory that is huge. This looks to be the case for all my other dockers where I've not put any thought into where the config dir should live. Should I move my /config directory out of appdata, or is it OK in there? And is there any issue with increasing the size of docker.img to accommodate these large folders? Cheers, Tom