REllU

Everything posted by REllU

  1. EDIT: Seems that this is my issue, or at least part of it: https://github.com/RocketChat/Rocket.Chat/issues/29382 Original message below:
     Here to see if anyone else is experiencing similar oddities. I seem to have a lot of issues with files currently. I'm using File System as my storage type, which has worked just fine for the past few months. However, now I'm getting "ENOENT: no such file or directory, stat '/app/uploads//647df273c68d8a46579c7e20'" Uploading files seems to work fine; the issue comes when someone tries to view or delete a file.
     A bit of backstory: it started when I created a new Team inside Rocket.Chat, uploaded an avatar to said Team, and then removed the avatar in order to replace it with something else. I noticed that I wasn't able to upload another avatar anymore, so I figured I'd just delete the Team and start again. The Team was then "gone", but I wasn't able to create another one, since Rocket.Chat insisted there was already a Team with the same name. I couldn't access it anywhere except in the settings menu, where the Team still showed up, and trying to remove it there only resulted in more errors.
     Now, however, everything related to files uploaded to Rocket.Chat is behaving.. oddly. I haven't been able to really pinpoint the issue yet, but basically only some of my files load, and only in some client applications. This is very weird, as a file can load in one client app (such as the Windows app, web app, or mobile app) but not another; for example, a file that was uploaded through the Android app was only displayed on the web version. I recently updated our Rocket.Chat to 6.2, which could be the culprit, but I'm very unsure. I've now been googling and banging my head against the wall for half of today, and I haven't really found anything. If anyone's interested, here's the whole error from trying to delete a file:
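     For anyone chasing the same ENOENT errors, here's roughly how you can check whether the file on disk and the upload record in MongoDB still agree. The container names "rocketchat" and "mongodb", and the default "rocketchat" database name, are assumptions; swap in whatever your setup uses:

       # does the file the error points at actually exist inside the container?
       docker exec rocketchat ls -l /app/uploads/647df273c68d8a46579c7e20

       # does Rocket.Chat still hold an upload record for that same id?
       docker exec mongodb mongosh rocketchat --eval \
         'db.rocketchat_uploads.findOne({ _id: "647df273c68d8a46579c7e20" })'

     If one side exists without the other, that mismatch would explain why uploading works while viewing and deleting fail.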
  2. Just dropping a comment here to say thank you! I've had plenty of issues with VMs not wanting to restart properly unless I restart the entire server. The symptoms (for whoever is currently googling them and pulling their hair out) are that once the VM is restarted (or shut down and booted back up, for example when trying to update Windows), the first CPU core allocated to the VM is stuck at 100%, there's no signal through the monitor, and the VM can't be pinged.
  3. Thank you so much! You just saved my hair, and my weekend! I'm not quite sure what happened here, as our backup server went through the same update just a week ago, and FileBrowser seemed to work just fine there. Whatever the case may be, chown seemed to be the answer. Thanks for the quick reply!
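     For whoever finds this later, the fix was along these lines. The appdata path is from my setup and nobody:users is just the usual UnRaid default, so adjust both to match yours:

       # hand ownership of the database file back to the user the container runs as
       chown nobody:users /mnt/user/appdata/filebrowser/database.db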
  4. After updating my UnRaid server from 6.9.2 to 6.11.0, the FileBrowser docker seems to have stopped working. The issue is that FileBrowser cannot access the database file: To confirm this, I changed the path to the database.db file in the container settings, which caused a new database file to be created and made the FileBrowser docker fully functional again, though without any of my settings, user data, etc. Is there a way to grant FileBrowser permission to its original database.db file? Worst case, I can always set the docker up again manually, but that's a bit of a pain in the butt.
  5. Just dropping a message here in case someone needs to see it. I recently lost access to FileBrowser over the net because of an expired SSL certificate, and I wasn't able to renew the certificate within Nginx Proxy Manager either. To fix this, I changed the scheme within Nginx Proxy Manager to HTTP instead of HTTPS (which had previously worked fine, for whatever reason). Maybe this'll save someone a few hours of hair-pulling.
  6. Opening the container's ip:port in Firefox doesn't show https in front of the address (which it does for containers like UniFi), and from the testing I've done now, it's probably safe to say that FileBrowser uses HTTP. As for the nonsense, good to know! Definitely a possibility. However, access to my FileBrowser was disabled due to an expired certificate? It makes sense that Nginx wasn't able to renew the certificate, because it couldn't access the FileBrowser container, but that _should've_ disabled access to FileBrowser (through the net) altogether? 🤔 Just tested this before I read your message; for whatever reason, renewing the certificate now seemed to work just fine? Port 80 isn't disabled altogether, by the way. What I meant was that I disabled the port-forward rule for port 80, as that was previously pointing to FileBrowser. From really quick testing, everything _seems_ to be working just fine right now. It looks like all of this was simply caused by the wrong protocol. 🥴
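     In case it helps anyone else's testing: a quick way to check which protocol a container actually speaks is to hit it with curl (the ip and port placeholders below are whatever your container uses):

       # prints response headers if the container serves plain HTTP
       curl -I http://[FileBrowser_ip]:[FileBrowser_port]

       # fails with an SSL error instead, if the container doesn't actually serve HTTPS
       curl -kI https://[FileBrowser_ip]:[FileBrowser_port]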
  7. Edit 3: I've now tried changing the protocol to HTTP instead of HTTPS within Nginx, as I'm not really sure which protocol FileBrowser wants to use. Turns out this seems to work, from quick on/off testing. I'm a bit confused as to why HTTPS worked just fine with the last Nginx container I had, but not here? Going to the certificates tab within Nginx and testing the reachability of the server still gives me the same error as before, so I'm guessing renewing the certificates will still be an issue. Edit 4: Disabling the port-forward rule for port 80 within my router still seems to work. Doing this does, however, give me a different result in the reachability test, which now states that there is no server.
  8. Rightyo! Let's see.. 4.) Does NPM reach your target container? Nope. The Nginx container is on br0 and wasn't able to connect to FileBrowser, since that was in Bridge mode. Changing FileBrowser to br0 as well allows Nginx to connect to it successfully (the check itself is sketched below). I've then changed the port-forward rule to reflect this change, which seems to work fine. However, the situation is still very much the same:
     ✔️ http://[my-public-ip] (skipping Nginx entirely)
     ✔️ [FileBrowser_ip]:[FileBrowser_port] (skipping Nginx entirely)
     ✔️ Connection between Nginx and FileBrowser (on the br0 network)
     ✔️ http:// domain . com
     ❌ https:// domain . com (results in bad gateway)
     ❌ Server reachability test (within Nginx)
     I've only now jumped to the official docker image of Nginx Proxy Manager; previously I was rocking jlesage's docker image, which had served me well up until yesterday's issue with renewing certificates. It was also using Bridge network mode. I read somewhere about potential issues with that particular docker image, and figured I'd try this one out, just in case 🤷‍♂️ I feel like I'm just missing something obvious here.
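     For reference, checking whether NPM reaches the target container can be done along these lines from the UnRaid shell (the NPM container name is whatever yours shows in the Docker tab, and this assumes curl is available inside the image; ping works as a cruder fallback):

       # from inside the NPM container, can we reach FileBrowser directly?
       docker exec Nginx-Proxy-Manager curl -I http://[FileBrowser_ip]:[FileBrowser_port]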
  9. Here's what I've got: However, I'm not exactly in the same spot anymore as I was with the previous message. I got the certificate to renew itself by changing the port-forward rules so that port 80, instead of being directed to FileBrowser, was directed at the Nginx docker. This, however, created a "Bad gateway" error, which I'm now struggling with. I've since changed the port-forward rules back to what they were, so port 80 is now directed to FileBrowser again, but the result is the same. I'm also getting the same result with the reachability test. Thank you for the quick response! Appreciate it
  10. Hey there, I set up an Nginx proxy to a FileBrowser docker (and previously NextCloud) about a year ago, and so far everything has been working like a dream. Until yesterday >.< Not sure what happened, or when, but Nginx wasn't able to auto-renew the SSL certificate for my domain. I only noticed this yesterday, when I wasn't able to connect to FileBrowser from the net anymore. I've tried to "manually" (as in, from the Nginx GUI) renew the certificate, but it keeps failing. On certain browsers (like Samsung's own web browser, which doesn't seem to give two hoots about secure connections) I can connect to the FileBrowser docker just fine. I also created a new domain and a new Nginx proxy host to my FileBrowser docker, without the SSL certificates, just to test whether the connection is good. It is. This is what I'm getting from the Docker log (inside UnRaid) when I try to renew the certificate:

      Internal Error
      Error: Command failed: certbot certonly --config "/etc/letsencrypt.ini" --cert-name "npm-6" --agree-tos --authenticator webroot --email "[EMAIL HERE]" --preferred-challenges "dns,http" --domains "[DOMAIN HERE]"
      Saving debug log to /var/log/letsencrypt/letsencrypt.log
      Some challenges have failed.
      Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
          at ChildProcess.exithandler (node:child_process:397:12)
          at ChildProcess.emit (node:events:390:28)
          at maybeClose (node:internal/child_process:1064:16)
          at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)

      The external log files I can't seem to find anywhere 🤔 Trying to test the server reachability, I get: Any help would be appreciated!
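      In case someone else gets stuck at this exact point: the http challenge Let's Encrypt runs just fetches a file under /.well-known/acme-challenge/ on port 80 of your domain, so you can test from outside your network whether that path even lands on NPM ([your-domain] is a placeholder):

        # a 404 from NPM means the proxy is answering; a timeout, or a FileBrowser page,
        # means port 80 is being routed somewhere else
        curl -v http://[your-domain]/.well-known/acme-challenge/test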
  11. I got the same issue, and asked for help quite a while ago. Long story short, I decided to move to a docker called "FileBrowser". It was rather easy to set up, with a little help from a Chinese video, and it's been working flawlessly ever since I set it up! As an added bonus, downloading files bigger than 1 GB actually works straight out of the box with FileBrowser (I never got that to work with NextCloud).
  12. This seems to have done the trick for me, thanks!
  13. I don't know what the script is, though, but it probably shouldn't be too hard to create one on your own 🤷‍♂️
  14. Hah, that would actually be good enough for our use, as 95% of the time we're using the files through SMB, and only 5% of the time (whenever someone is working from home and creating new folders) FileBrowser gets used. Mind sharing the script you made?
  15. Did you find a way to get this working?
  16. Either that, or he's just not too concerned about a free piece of software he created a long time ago. I tried the "refresh" button just now, and unfortunately, no dice. First, I ran the job through User Scripts and opened the "manage backups" window to see if the issue was still there (just in case). It was, so I pressed the "refresh" button. That didn't work, and the backup wasn't updated. Then I ran the cron-job again, and this time clicked "refresh" before opening "manage backups" (since this is how it works with changing the profiles as well), and nothing. Just to check I wasn't crazy, I tried it my way: running the cron-job, then switching between two profiles, and checking "manage backups". That worked just fine.
  17. It does sound like it, since it would refresh the profiles 🤔 Would there be a command that could be run after the profile, to refresh the profiles automatically? 🤔
  18. I don't have an answer for your issue, but I'm using an app called "AutoSync" on our household phones. It costs like 7 EUR to get the license to use it with an SMB share, and you can set it to automatically sync any folders you want when you're connected to a specific wifi. It has been working nicely for our needs, at least.
  19. (Old issue, before updating) EDIT: (Internal server error, fixed in the next edit) EDIT2: (Fixing the internal server error, and back to square one) EDIT3: NextCloud is amazing for what it's trying to achieve, when it works. But it has given me way too many headaches, and it's pretty much overkill for what I want to do with it (accessing files through the internet), so I've now decided to move to the "FileBrowser" docker instead. Setting that up took me 10 minutes without really knowing what I was doing, and it's been working nicely (knock on wood); even issues we previously had (not being able to download files over 1 GB) are now gone. Good luck to all of you UnRaiders who are battling with NextCloud!
  20. Right, in that case I could just have a restart every day through User Scripts, after the backup work is done :thinking: Thanks!
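      A minimal User Scripts sketch of that daily restart, assuming the container shows up as "luckyBackup" in the Docker tab (rename to match yours), scheduled to run after the backup window:

        #!/bin/bash
        # restart the luckyBackup container so its GUI picks up the fresh snapshot files
        docker restart luckyBackup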
  21. I'm in the middle of a backup right now (doing a server upgrade), but yeah. I realized after posting that there's an option to reboot LuckyBackup after it's done with its cron-job, which should have the same effect. I'll try that later.
  22. Hey, again! I think I stumbled upon a solution by accident for the (LuckyBackup) cron-job not working with the snapshot files! I created a new profile that I wanted to run every hour or so (for our security cameras, to back up their video footage remotely). While I was doing this, I noticed that the snapshot files were updated successfully! Here's basically the step-by-step on how to get it to work:
      1. You need another profile.
      2. Set up your cron-jobs the way you want (either within LuckyBackup itself, or with User Scripts).
      3. Let the cron-job run as it should.
      4. Before checking the "manage backup" button within LB, change the profile, and then go back to the profile you're using.
      5. Check the "manage backup" button, and ta-dah! The snapshot should be updated correctly!
      Somewhat useless rambling.
  23. This one slipped through the cracks, just bumping it up. EDIT: I think I found the solution, but I'm unsure how to apply it. I'm using Nginx Proxy Manager on UnRaid, if that makes any difference. The solution I was able to find is here: https://autoize.com/nextcloud-performance-troubleshooting/ If someone could point me in the right direction, that'd be great!
  24. I did try deleting some tasks and creating new ones, resulting in the same behavior. I don't know if it's related, and I couldn't really test it, but with previous versions creating new tasks wasn't an issue. Could you try creating a 16th task, and see whether that works for you?
  25. I originally had only one profile, with 7 tasks on it. I tried to add a new one, and the app crashed. I then duplicated the default profile to try this on another profile, and the crash happened again. On this duplicate profile, I then tried removing one of the tasks and adding a new one. Adding the 7th task went OK (though once I saved the profile, the app crashed). After this, I tried to add an 8th task to the new profile, and the crash happened again.