
JonathanM

Moderators · 16,319 posts · 65 days won
Everything posted by JonathanM

  1. Unfortunately licenses cannot be transferred, so you will need to get your friend to contact support.
  2. Shouldn't be necessary, it's just a troubleshooting step to see if the array stops in a reasonable period of time. There is likely something keeping the array from stopping, causing the shutdown to kill the array prematurely, triggering a parity check on startup. If that's the case, you need to figure out what is keeping the array from stopping.
  3. Try stopping the array before hitting the shutdown GUI button.
  4. Assuming you assessed the situation correctly, parity swap is what you need. If any of your other drives can't be read during the procedure you will lose data, so you really want to be sure of what you are doing. If you're at all hesitant, attach your diagnostics to the next post in this thread so someone can look things over for you.
  5. Have you tried deleting the network.cfg and letting Unraid recreate it?
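     A minimal sketch of that troubleshooting step, assuming console or SSH access to the server (Unraid mounts the flash drive at /boot, and network.cfg lives in its config folder):

     ```shell
     # Back up the current network config from the flash drive, then remove it
     # so Unraid regenerates a default one on the next boot.
     cp /boot/config/network.cfg /boot/config/network.cfg.bak
     rm /boot/config/network.cfg
     reboot
     ```

     The backup copy lets you restore the old settings if the defaults don't help.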
  6. Not sure what you mean, I don't see anything out of the ordinary. Maybe this is throwing you off? https://docs.unraid.net/legacy/FAQ/understanding-smart-reports#1-raw_read_error_rate
  7. Exactly. Never update the container without first checking the Nextcloud version inside the app. The two events will seldom if ever overlap: the app typically updates much less often than the container, so many times you will still be up to date with Nextcloud when the container updates. The trouble comes when people ignore the app updates for months and automatically update the container.
  8. When using this container for Nextcloud, YOU are responsible for keeping the Nextcloud app up to date. The updates for this specific container NEVER upgrade the application, only the surrounding support files. If you don't want to be surprised by this, set the docker tag to a specific version instead of latest, so things don't get updated until you update both the application and the container version manually. There are other Nextcloud container implementations that may approach this differently, you may wish to investigate how they handle updates and switch if it fits your needs better.
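     As an illustration of pinning, assuming the linuxserver.io Nextcloud image (the version tag shown is hypothetical; check the registry for the tags actually published):

     ```shell
     # Pin to a specific image tag instead of :latest, so the container only
     # changes when you deliberately edit the tag yourself.
     docker pull lscr.io/linuxserver/nextcloud:27.1.3
     ```

     In the Unraid GUI the same idea applies: put the repository:tag string in the container's Repository field instead of leaving it at latest.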
  9. Port mapping only applies to bridge mode. When a container has a different IP address than the host, mapping is not used or needed, all ports are open and the listening port(s) are directly controlled by the application(s) in the container.
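     A sketch of the difference, using hypothetical subnet, interface, and network names:

     ```shell
     # Bridge mode: the host port must be mapped explicitly with -p.
     docker run -d --network bridge -p 8080:80 nginx

     # Dedicated container IP (e.g. a macvlan network): no -p needed; the
     # container's listening ports are reachable directly at that address.
     docker network create -d macvlan --subnet=192.168.1.0/24 -o parent=eth0 br0net
     docker run -d --network br0net --ip 192.168.1.50 nginx
     ```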
  10. Frequently discussed in the Nextcloud support threads. It's caused by updating the container before updating the application inside it, so the container OS gets too far ahead of the Nextcloud program version.
  11. Take this as a reminder to keep your flash drive backup up to date.
  12. I believe you are correct, maintenance mode from step 2 until step 7 would be a good idea. In step 3 you need to assign it to the array so Unraid detects the step 6 removal and starts the emulation in step 7. @trurl @JorgeB, anything I missed? I personally don't have any Unraid encrypted file systems in place, so I may have missed something there. You may need to set the file system to encrypted instead of auto on each of the data slots. It wouldn't hurt to have a test drive or two, even old ones, to help validate the trial is working well before you commit to recovery and start attaching the real data disks. Starting completely over with new everything means memtest and at least some burn-in hours before you should trust it. It would really suck to find out the new system mangles data while trying to recover a failed disk. I would really work over the trial system in step 1 to uncover any gremlins. After you are satisfied the new system is stable, you will need to set a "new config" to remove the test disks and prepare for step 2 and on. Don't rush through any of this process. I fully expect you to pound on the trial for several days before moving on, preferably starting off with 24 hours of memtest; I'd download the latest free version from https://www.memtest86.com/ and run it.
  13. Pretty much everywhere in the Unraid GUI.
  14. Yes, any container paths not mapped to host locations are stored in the docker image. Maybe Plex did an update?
  15. Probably Plex writing into an unmapped path.
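     One way to check, assuming the container is named plex: docker diff lists files written to the container's writable layer, which excludes mapped volumes, so anything it shows is growth inside the docker image.

     ```shell
     # Show files the container has changed outside its mapped paths;
     # a long list under e.g. a transcode directory points at the culprit.
     docker diff plex | head -n 40
     ```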
  16. Click on the VM icon and select remove.
  17. Wrong metric IMHO. Find the actual draw of your setup during a parity check, then time how long shutdown takes. Multiply the time by at least 2, so you will be shut down hopefully before the UPS drops much below 50%. Take your two numbers and look up the runtime charts of various UPS models and find the best fit. You should be able to find the charts with minutes of runtime at different power draws. Be careful not to mix up watts vs volt-amps (VA).
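     A worked sketch of that arithmetic, with made-up numbers (substitute your own measured draw and shutdown time):

     ```shell
     # Hypothetical measurements: 450 W draw during a parity check,
     # and a clean shutdown that takes 4 minutes.
     draw_watts=450
     shutdown_minutes=4

     # Double the shutdown time so the UPS should still be well above
     # 50% charge when the server finishes powering off.
     runtime_needed=$(( shutdown_minutes * 2 ))

     echo "Need a UPS rated for at least ${runtime_needed} min of runtime at ${draw_watts} W"
     ```

     Then compare those two numbers (minutes and watts) against each candidate UPS model's runtime chart.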
  18. Did you have alerts set up? The daily health report has that info. As would diagnostics collected before the incident.
  19. If you know which drives were in which logical disk slots, and have a replacement drive for the burnt one the same size or larger but not larger than parity, it's fairly straightforward. Glossing over the details, some of which may be very important, you would...
      1. Set up a trial version of Unraid, get it booting and able to access the GUI on a system capable of connecting all the drives
      2. Connect all the drives including the replacement for the burnt drive, assign them to the correct disk slots
      3. Start the array being VERY sure to select the "parity is valid" option. If you don't do this, you will lose the data on the failed drive
      4. Stop the array and remove the replacement drive so parity will emulate the data
      5. Start the array and verify all your drives are mounting with the correct content
      6. Retrieve your backup and put the content of the config folder on your trial USB, overwriting what's currently there
      7. Start the array and see if things are working the way they should
      8. Assign the replacement drive to the emulated disk slot and rebuild it
      There be dragons, don't move forward unless you either a. know exactly what you are doing and why or b. get help here on the forum with explicit details on each step. You will lose data if things aren't done correctly, or parity wasn't in sync when things went boom. The failed drive recovery hinges on parity being perfect at the point the drive died.
  20. Set up a new VM, assign the damaged disk as the main disk, put your favorite bootable recovery image as the installation CD image, and start up the VM. Or, my favorite, set up a VM specifically for recovery and maintenance tasks, and add the damaged image as a secondary drive.
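     If the VM was created outside the Unraid GUI, the second approach can also be done from the command line with libvirt; a sketch, assuming a hypothetical recovery VM named recovery-vm and a hypothetical image path:

     ```shell
     # Attach the damaged vdisk to an existing recovery VM as a second
     # drive (adjust the domain name, image path, and target device).
     virsh attach-disk recovery-vm /mnt/user/domains/broken/vdisk1.img vdb --persistent
     ```

     In the Unraid GUI the equivalent is simply adding the damaged image as a second vdisk in the VM's settings.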
  21. You would likely need to redo your share setup from scratch in order to accomplish this. The files must stay within a single share for both download and consumption. https://trash-guides.info/Hardlinks/How-to-setup-for/Unraid/ There are plenty of opinions about this subject, many that conflict. I personally like to keep separate shares, but that doesn't work for what you want.
  22. The concept you are looking for is docker container path mapping. What you put in the host side (Unraid's view) will appear in the container side at the mapped location. You show the container path as /data1, so that's where the contents of /mnt/disk2 will appear from the container's view. General support is not normally the correct place to ask container specific questions, you should be able to find the correct spot to ask questions if you click on the container in the Unraid dashboard and select support.
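     The same mapping expressed as a plain docker command (the image name is a placeholder; in the Unraid GUI this is the Host Path / Container Path pair on the container's edit page):

     ```shell
     # Host path /mnt/disk2 (Unraid's view) appears inside the container
     # at /data1 (the container's view).
     docker run -d -v /mnt/disk2:/data1 some/container
     ```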
  23. VMs really need an SSD for reasonable performance.