
JonathanM

Moderators
  • Posts: 16,325
  • Joined
  • Last visited
  • Days Won: 65

Everything posted by JonathanM

  1. BTW, the generally broken nature of Nextcloud's update process (needing multiple manual steps to successfully upgrade, followed by working through all the security and setup action items on the administration overview page after the upgrade is complete) is exactly WHY lsio does NOT automatically upgrade the Nextcloud app in their container updates. Nextcloud is a complicated beast with many inter-operating parts; it needs care and feeding to stay healthy, and you can't just fire and forget. The closest you will get to that is setting a fixed tag for the container, then only updating when you are willing to take the time and step through the process manually.
  2. Go to this page: https://download.nextcloud.com/server/releases/ and find the release zip file you need, probably https://download.nextcloud.com/server/releases/nextcloud-25.0.7.zip. Open the file, extract shipped.json from the core folder, copy it to appdata/nextcloud/www/nextcloud/core, then rerun updater.phar.
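     The steps above can be sketched as shell commands. This is only a sketch: the paths assume the lsio container's default appdata layout, the version number is the example from above, and `abc` is the web user inside lsio containers; adjust all three to match your install.

     ```shell
     # Sketch only: paths and version are assumptions, adjust for your setup.
     cd /mnt/user/appdata/nextcloud/www/nextcloud
     wget https://download.nextcloud.com/server/releases/nextcloud-25.0.7.zip
     # -j junks the zip's internal folders so shipped.json lands directly in core/
     unzip -j nextcloud-25.0.7.zip nextcloud/core/shipped.json -d core/
     # rerun the updater as the container's web user (abc on lsio images)
     sudo -u abc php updater/updater.phar
     ```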
  3. If you don't want to update on lsio's schedule, use a fixed repository tag. That way you can update when you want. Using the latest tag sometimes requires extra work to stay running.
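     For example, in the container template the Repository line would change from the floating tag to a pinned one. The version tag below is illustrative only; check which tags lsio actually publishes for the image you run.

     ```shell
     # Floating tag: pulls whatever is newest on every container update
     #   Repository: lscr.io/linuxserver/nextcloud:latest
     # Pinned tag: stays put until you change it yourself (example version)
     #   Repository: lscr.io/linuxserver/nextcloud:25.0.7
     ```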
  4. If you edit the nextcloud container, what is the exact content of your Repository: line?
  5. Possibly you could use Nextcloud, but the standard setup would expose your IP. I think a cloudflared url could hide the IP, but I've not personally tried that. Almost any file sharing setup will expose the IP of the source; that's kind of how the internet works.
  6. Unfortunately licenses cannot be transferred, so you will need to get your friend to contact support.
  7. Shouldn't be necessary; it's just a troubleshooting step to see if the array stops in a reasonable period of time. There is likely something keeping the array from stopping, causing the shutdown to kill the array prematurely and triggering a parity check on startup. If that's the case, you need to figure out what is keeping the array from stopping.
  8. Try stopping the array before hitting the shutdown GUI button.
  9. Assuming you assessed the situation correctly, parity swap is what you need. If any of your other drives isn't able to be read during the procedure you will lose data, so you really want to be sure of what you are doing. If you're at all hesitant, attach your diagnostics to the next post in this thread so someone can look things over for you.
  10. Have you tried deleting the network.cfg and letting Unraid recreate it?
  11. Not sure what you mean, I don't see anything out of the ordinary. Maybe this is throwing you off? https://docs.unraid.net/legacy/FAQ/understanding-smart-reports#1-raw_read_error_rate
  12. Exactly. Never update the container without first checking the Nextcloud version inside the app. The two events will seldom if ever overlap; the app will typically update much less often than the container, so many times you will still be up to date with Nextcloud when the container updates. The issue comes when people ignore the app updates for months and automatically update the container.
  13. When using this container for Nextcloud, YOU are responsible for keeping the Nextcloud app up to date. The updates for this specific container NEVER upgrade the application, only the surrounding support files. If you don't want to be surprised by this, set the docker tag to a specific version instead of latest, so things don't get updated until you update both the application and the container version manually. There are other Nextcloud container implementations that may approach this differently, you may wish to investigate how they handle updates and switch if it fits your needs better.
  14. Port mapping only applies to bridge mode. When a container has a different IP address than the host, mapping is not used or needed, all ports are open and the listening port(s) are directly controlled by the application(s) in the container.
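     A hypothetical sketch of the difference (image name, network name, and addresses here are examples, not anything from your setup):

     ```shell
     # Bridge mode: the -p mapping decides which host port reaches the container
     docker run -d --network bridge -p 8080:80 lscr.io/linuxserver/nginx

     # Container with its own IP (e.g. a br0 custom network on Unraid):
     # no -p needed; whatever ports the apps inside listen on are reachable
     # directly at that address
     docker run -d --network br0 --ip 192.168.1.50 lscr.io/linuxserver/nginx
     ```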
  15. Frequently discussed in the nextcloud support threads. Caused by failure to update the application inside the container before updating the container; the container OS gets too far ahead of the Nextcloud program version.
  16. Take this as a reminder to keep your flash drive backup up to date.
  17. I believe you are correct; maintenance mode from step 2 until step 7 would be a good idea. In step 3 you need to assign it to the array so Unraid detects the step 6 removal and starts the emulation in step 7. @trurl @JorgeB, anything I missed?

     I personally don't have any Unraid encrypted file systems in place, so I may have missed something there. Possibly you may need to set the file system to encrypted instead of auto on each of the data slots?

     It wouldn't hurt to have a test drive or two, even old ones, to help validate the trial is working well before you commit to recovery and start attaching the real data disks. Starting completely over with new everything means memtest and at least some burn-in hours before you should trust it. It would really suck to find out the new system mangles data while trying to recover a failed disk. I would really work over the trial system in step 1 to uncover any gremlins. After you are satisfied the new system is stable, you will need to set a "new config" to remove the test disks and prepare for step 2 and on.

     Don't rush through any of this process. I fully expect you to pound on the trial for several days before moving on, preferably starting off with 24 hours of memtest; I'd download the latest free version from https://www.memtest86.com/ and run it.
  18. Pretty much everywhere in the Unraid GUI.
  19. Yes, any paths not mapped to specific host locations are stored in the docker image. Maybe plex did an update?
  20. Probably plex writing into an unmapped path.
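     For illustration, a hypothetical Plex mapping (host paths are examples): anything the app writes to a path that is NOT volume-mapped to the host ends up inside docker.img and grows it.

     ```shell
     # /config holds the Plex database and metadata on the array;
     # /transcode keeps transcode scratch out of the docker image.
     docker run -d \
       -v /mnt/user/appdata/plex:/config \
       -v /mnt/user/transcodes:/transcode \
       lscr.io/linuxserver/plex
     ```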
  21. Click on the VM icon and select remove.
  22. Wrong metric IMHO. Find the actual draw of your setup during a parity check, then time how long shutdown takes. Multiply the time by at least 2, so you will be shut down before the UPS drops much below 50%. Take your two numbers and look up the runtime charts of various UPS models to find the best fit; you should be able to find charts showing minutes of runtime at different power draws. Be careful not to mix up watts vs volt-amps.
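     The sizing math above with made-up numbers; measure your own draw and shutdown time rather than using these.

     ```shell
     # Made-up example figures, not measurements:
     DRAW_WATTS=350        # wall draw during a parity check
     SHUTDOWN_MIN=6        # timed clean-shutdown duration
     MARGIN=2              # at least 2x, so the UPS stays above ~50%
     NEEDED_MIN=$(( SHUTDOWN_MIN * MARGIN ))
     echo "Look for a UPS whose runtime chart shows >= ${NEEDED_MIN} min at ${DRAW_WATTS} W"
     ```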
  23. Did you have alerts set up? The daily health report has that info. As would diagnostics collected before the incident.
  24. If you know which drives were in which logical disk slots, and have a replacement drive for the burnt one the same size or larger (but not larger than parity), it's fairly straightforward. Glossing over the details, some of which may be very important, you would...
     • Set up a trial version of Unraid, get it booting and able to access the GUI on a system capable of connecting all the drives
     • Connect all the drives, including the replacement for the burnt drive, and assign them to the correct disk slots
     • Start the array, being VERY sure to select the "parity is valid" option. If you don't do this, you will lose the data on the failed drive
     • Stop the array and remove the replacement drive so parity will emulate the data
     • Start the array and verify all your drives are mounting with the correct content
     • Retrieve your backup and put the content of the config folder on your trial USB, overwriting what's currently there
     • Start the array and see if things are working the way they should
     • Assign the replacement drive to the emulated disk slot and rebuild it
     There be dragons; don't move forward unless you either a. know exactly what you are doing and why, or b. get help here on the forum with explicit details on each step. You will lose data if things aren't done correctly, or if parity wasn't in sync when things went boom. The failed drive recovery hinges on parity being perfect at the point the drive died.