REllU

Everything posted by REllU

  1. Either that, or he's just not too concerned about a free piece of software he created a long time ago. I tried the "refresh" button just now, and unfortunately, no dice. First, I ran the job through User Scripts and opened the "manage backups" window, to see if the issue was still there (just in case). It was, so I then pressed the "refresh" button. That didn't work, and the backup wasn't updated. Then I ran the cron job again, and this time clicked "refresh" before opening "manage backups" (since this is how it works with changing the profiles as well): nothing. Just to check I wasn't crazy, I tried it my way, running the cron job, switching between two profiles, and then checking "manage backups", and it worked just fine.
  2. It does sound like it, since it would refresh the profiles 🤔 Would there be a command that could be run after the profile finishes, to refresh the profiles automatically? 🤔
  3. I don't have the answer for your issue, but I'm using an app called "AutoSync" on our household phones. Costs like 7 EUR to get the license to use it with an SMB share, and you can set it to automatically sync any folders you want when you're connected to a specific wifi. Has been working nicely for our needs, at least.
  4. (Old issue, before updating) EDIT: (Internal server error, fixed in the next edit) EDIT2: (Fixing the internal server error, and back to square one) EDIT3: NextCloud is amazing for what it's trying to achieve, when it works. But it has just given me way too many headaches, and it's pretty much overkill for what I want to do with it (accessing files through the internet), so I've now decided to move to the "FileBrowser" docker instead. Setting that up took me 10 minutes without really knowing what I was doing, it's been working nicely (knock on wood), and even issues that we previously had (not being able to download files over 1 GB) are now gone. Good luck to all of you Unraiders who're battling with NextCloud!
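     For anyone wanting to try it, the setup really is just one container. A minimal sketch, assuming the filebrowser/filebrowser image; the paths and port here are examples, not necessarily my exact setup:

     ```bash
     # The database file must exist before bind-mounting it as a file:
     touch /mnt/user/appdata/filebrowser/filebrowser.db

     # Minimal FileBrowser container; adjust paths/port to your own shares.
     docker run -d \
       --name=filebrowser \
       --restart=unless-stopped \
       -p 8080:80 \
       -v /mnt/user:/srv \
       -v /mnt/user/appdata/filebrowser/filebrowser.db:/database.db \
       filebrowser/filebrowser
     ```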
  5. Right, in that case, I could just have a restart every day through User Scripts, after the backup work is done (something like the sketch below) :thinking: Thanks!
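     A minimal User Scripts sketch for that daily restart; the container name "luckybackup" is an assumption, so match it to whatever `docker ps` shows on your box:

     ```bash
     #!/bin/bash
     # User Scripts: schedule with a custom cron (e.g. 0 6 * * *) so it runs
     # after the backup window has passed. "luckybackup" is an assumed name.
     docker restart luckybackup
     ```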
  6. I'm in the middle of a backup right now (doing a server upgrade), but yeah. I realized after posting that there's an option to reboot LuckyBackup after it's done its cron job, which should have the same effect. I'll try that later.
  7. Hey, again! I think I stumbled upon a solution by accident for the (LuckyBackup) cron job not working with the snapshot files! I created a new profile that I wanted to run every hour or so (for our security cameras, to back up their video footage remotely). While I was doing this, I noticed that the snapshot files were updated successfully! Here's basically the step-by-step on how to get it to work:
     1. You need another profile.
     2. Set up your cron jobs the way you want (either within LuckyBackup itself, or with User Scripts; a sketch of the latter below).
     3. Let the cron job run as it should.
     4. Before checking the "manage backup" button within LB, change the profile, and then go back to the profile you're using.
     5. Check the "manage backup" button, and ta-dah! The snapshot should be updated correctly!
     Somewhat useless rambling.
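     For step 2, the User Scripts route can be a one-liner that runs the profile non-interactively. A sketch only: the container name, the profile path, and the exact in-container command are all assumptions and may differ depending on which LuckyBackup image you run:

     ```bash
     #!/bin/bash
     # Run a luckyBackup profile from the User Scripts plugin on a cron schedule.
     # Container name ("luckybackup") and profile path are assumed -- check yours.
     docker exec luckybackup \
       luckybackup --no-questions /luckybackup/.luckyBackup/profiles/default.profile
     ```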
  8. This one slipped through the cracks, just bumping it up. EDIT: I think I found the solution, but I'm unsure how to apply it. I'm using Nginx Proxy Manager on UnRaid, if that makes any difference. The solution I was able to find is here: https://autoize.com/nextcloud-performance-troubleshooting/ If someone could point me in the right direction, that'd be great!
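     In case it helps anyone later: Nginx Proxy Manager has an Advanced tab on each proxy host where custom Nginx directives can be pasted. A sketch of the kind of buffering/upload tweaks commonly suggested for Nextcloud behind a proxy; the values are examples, not necessarily what the linked article recommends:

     ```nginx
     # NPM -> proxy host -> Advanced tab. Example values only.
     client_max_body_size 0;         # don't cap upload sizes
     proxy_request_buffering off;    # stream large uploads instead of buffering them
     proxy_read_timeout 3600s;       # allow long transfers to finish
     ```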
  9. I did try deleting some tasks and creating new ones, resulting in the same behavior. I don't know if it's related, and I couldn't really test it, but with previous versions creating new tasks wasn't an issue. Could you try creating a 16th task, and see if that works for you or not?
  10. I originally had only one profile, with 7 tasks on it. I tried to add a new one, and the app crashed. I then duplicated the default profile, to try this on another profile, and the crash happened again. On this duplicate profile, I then tried removing one of the tasks and adding a new one. Adding the 7th task went OK (though once I saved the profile, the app crashed). After this, I tried to add an 8th task to the new profile, and a crash happened again.
  11. Hey, it seems that LuckyBackup crashes when I'm adding a new task to a profile after 7 tasks. I'm trying to add a remote destination task, not sure if that has anything to do with it or not. Both the validate and okay buttons crash the application.
  12. Right, in that case, I might just swap the SSDs entirely and start the cache from scratch (as again, nothing important was lost with this; I really just wanted to learn how to handle issues like this, should they occur on the main server. But it seems this is a rather rare situation). Just tossing out a (probably stupid) idea: since the pool is constantly complaining about the missing device 2, would it change anything if we tried to add the failing SSD back in, so that the configuration was as it was originally, and then tried the recovery steps? (Though that's what we did in the first place, I guess..) Since both of the SSDs were already used, and pretty low-end consumer-grade ones, my guess is that both of them were failing, just to different degrees. 🤷‍♂️
  13. Hey again, took me a while. Moved to a new place after our chat, so things have been a bit hectic. Anyhow! I went through the steps in the thread you showed, and here are the results: I have both of my servers on the same network right now. I created a new share on the main server (called "DataStriver"), mounted it as a remote share on the backup server, and then tried to do the backup from the SSD to the remote share. I'll throw the diagnostics in for you. datastriver-diagnostics-20210901-1528.zip
  14. Interesting.. I'll give these a go tomorrow. In the meantime, would you have any idea as to what actually happened here? And if I manage to recover the data from the SSD, should I replace this SSD as well, or would I be able to keep using it after a format? I am planning to replace both SSDs in the near-ish future with WD Red 500 GB ones that are coming off the main server, once I've replaced those with 1 TB ones. Thanks for all the help so far, and hats off to you for your reply times!
  15. So we are getting somewhere, I suppose; that's good. Running the command in the terminal threw an error, in attachments. syslog.txt
  16. The mount button pretty much just flashes in UD, and it's throwing errors in the log. syslog.txt
  17. Still no dice. datastriver-diagnostics-20210823-1858.zip
  18. Still unmountable. Diagnostics as attachment. Just to double-check, after I took the diagnostics I did:
     - Stop the array
     - Un-assign the SSD
     - Start the array
     - Stop the array
     - Assign the working SSD
     - Start the array once more
     Results were the same. datastriver-diagnostics-20210823-1847.zip
  19. Sorry, first time doing this. Here's the (hopefully) correct file(s): datastriver-diagnostics-20210823-1841.zip
  20. - Stopped the array
     - While the SSDs were un-assigned, I ran the command above on both SSDs
     - Only the known-good SSD seemed to be fine with it; the dying one threw some errors (I'll throw a pic of the terminal in attachments)
     - Rebooted the system (with auto-start disabled), grabbed the diags
     Both SSDs look like they can be mounted with the Unassigned Devices plugin. So that's a good sign.
  21. Hey again, so I did what you asked here, and here's the log file. The dying SSD is on Cache 2, the sdd one (Kingston_SV). (EDIT: Removed the log file, in case there was anything sensitive.)
  22. In that case, the only thing I can think of is that the new SSD was already formatted and filled with (old) data. Like I said in the original post, UnRaid seemed to be OK after I replaced the SSD. Everything was green, and the new SSD was part of the cache pool just fine. Is there anything else you can think of as to why this would've happened?
  23. Ah, sorry, I missed a step while I was writing the message. I did un-assign the dying SSD, started the array, and then shut down the server. I guess the oopsie here was that I didn't change the cache pool to a size of 1 when I started the array again? If so, that might be worth adding to the workaround message, or maybe making an "official" workaround post somewhere?
  24. I saw a workaround here, which I wanted to test. This system is our backup server, and there's not a lot of data on the cache drives, so it's not a huge issue if the data is gone. But as stated in the original message, I just want to learn how to deal with situations like this. I'm also planning to upgrade our main server with a bigger SSD cache (from 500 GB to 1 TB), so I want to test and see what works and what doesn't on the backup server, before I do anything that cannot be fixed. The bug also seems to have been out there for quite a while, despite how serious it seems. I feel like I've read this exact message somewhere already haha. Anyway, I'll do this once I get back home; that'll be in 6-8 hours or so. Appreciate the help. EDIT: Oh also, out of curiosity: what _should_ I do if a disk / SSD dies in a pool? I have 2 SSDs in both of my servers, as well as a parity drive for the HDDs. Has there been any word from LimeTech about this whole issue? It just seems rather weird to me.
  25. EDIT: As you probably noticed from my original message, I am aware that 6.9.x has issues with pool device replacements, and I am willing to post diagnostics as soon as I know what to do before downloading the diagnostics log. As of right now, the server is shut down and the cache pool doesn't exist, so downloading any diagnostics right now wouldn't really help much, I don't think.