JonathanM · Moderators · Posts: 16,325 · Days Won: 65

Everything posted by JonathanM

  1. Set up a new VM, assign the damaged disk as the main disk, put your favorite bootable recovery image as the installation CD image, and start up the VM. Or, my favorite, set up a VM specifically for recovery and maintenance tasks, and add the damaged image as a secondary drive.
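     A minimal sketch of the second approach, assuming virsh is available and the damaged image lives at /mnt/user/domains/Damaged/vdisk1.img (the VM name, path, and qcow2 format are assumptions, adjust to match your setup):
       # attach the damaged vdisk to an existing recovery VM as a second virtio disk
       virsh attach-disk RecoveryVM /mnt/user/domains/Damaged/vdisk1.img vdb --driver qemu --subdriver qcow2 --persistent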
  2. You would likely need to redo your share setup from scratch to accomplish this. The files must stay within a single share for both download and consumption. https://trash-guides.info/Hardlinks/How-to-setup-for/Unraid/ There are plenty of opinions about this subject, many of which conflict. I personally like to keep separate shares, but that doesn't work for what you want.
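     As a rough illustration of why the single share matters, assuming a parent share named data (the share name and file names are placeholders): a hardlink can only be created within one filesystem, which is what keeping the download and library folders under the same share is meant to guarantee.
       mkdir -p /mnt/user/data/torrents /mnt/user/data/media/movies
       # instant "move" with no extra disk space used; fails if the two paths end up on separate filesystems
       ln /mnt/user/data/torrents/Example.mkv /mnt/user/data/media/movies/Example.mkv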
  3. The concept you are looking for is Docker container path mapping. What you put on the host side (Unraid's view) will appear on the container side at the mapped location. You show the container path as /data1, so that's where the contents of /mnt/disk2 will appear from the container's view. General support is not normally the correct place to ask container-specific questions; you should be able to find the correct spot to ask if you click on the container in the Unraid dashboard and select Support.
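     In plain docker terms the mapping is just a -v host:container flag; a hypothetical sketch (the container and image names are made up):
       # /mnt/disk2 on the Unraid host appears as /data1 inside the container
       docker run -d --name=example-app -v /mnt/disk2:/data1 examplerepo/example-app
     The "Host Path" and "Container Path" fields in the Unraid container template generate the same kind of mapping for you.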
  4. VMs really need an SSD for reasonable performance.
  5. My speculation is that this looping happens when the list of containers to be updated is not current. I have been able to induce a loop by NOT rechecking for updates, updating one of the containers that had an update available from the automatic check, then clicking Update All. It tries to pull updates for all the containers that were listed before I updated the single container, including the one I just manually updated, pulls zero bytes, and starts looping. If that is the case, you can work around the issue by first checking for updates, then hitting Update All. That way you should be guaranteed to have the latest list.
  6. 6.12 is still in RC phase, so you need to select the next branch instead of stable.
  7. I use UrBackup for my local workstations, and a handful of remote machines as well. It handles Windows and Linux clients.
  8. No. There is no standard pinout for the PSU end of modular cables, and it's fairly common for the wrong cables to fry any drives or equipment connected. Putting +12V on the 5V pins of equipment generally ends badly.
  9. If the disk slot shows free and used space in Unraid's GUI, it takes forever, with write speeds of a few hundred KB/s. If the GUI shows unmountable, the dd runs at full speed. Every time I umount the disk with the array running, the GUI still shows something in that slot. If I stop the array and start it again, it shows unmountable and runs fine. The script depends on being able to read the disk to find the text, so by definition it's mountable at the start of the script. If you umount a disk, does the GUI immediately show unmountable? Mine doesn't.
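     For context, a sketch of the kind of commands involved, assuming disk slot 2 (the slot number is an assumption, and the md device may be /dev/md2 or /dev/md2p1 depending on Unraid version):
       umount /mnt/disk2                                   # the GUI may not show unmountable until the array is restarted
       dd if=/dev/zero of=/dev/md2 bs=1M status=progress   # zero the drive through the parity-protected device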
  10. In my experience, several days. It would be much quicker to simply remove the drive and rebuild parity with the New Config tool.
  11. I've done multiple restores, and I remember something quirky in the process, but can't remember exactly what I tweaked. Do you have all three ports forwarded through? The server communicates through those upper ports IIRC; I don't think the web interface is used at all with a bare metal restore.
  12. Did you remove ALL the cables that went with the Thermaltake and replace them with the Corsair cables? Just because the modular ends fit doesn't mean they are compatible. The forum has MANY examples of people ruining hard drives by switching PSUs and leaving the old cables in place.
  13. Set your temp alert thresholds high enough that normal operation doesn't trigger them, or if your temps are higher than the manufacturer's recommended max, fix your cooling. Temp alerts should be configured so that an alert means an actual issue, like a fan failure or environmental problem.
  14. Turn that off and see if the situation improves. Cache Dirs should be set to operate on the bare minimum of folders; it can easily become counterproductive and keep the array needlessly spinning if your RAM can't contain the full list.
  15. Yes, yes, yes, no. That's all you need to get the array running; make sure you tick the "parity is already valid" box. Since the final moments of the server running were a little chaotic, you probably need to do a parity check, but that can wait until you recover your backup.
  16. https://wiki.unraid.net/Manual/Troubleshooting#Lost_boot_drive_and_do_not_know_which_are_the_parity_drives
  17. If you aren't being completely hyperbolic here, you need to change your temp alert thresholds, or fix your cooling. A fail should really indicate a full failure, like a drive going near the manufacturer's recommended high temp limit, meaning a fan or environment failure that really does need your attention. Normal drive temps should be WELL below those maximums. (60C is the manufacturer's max for a Seagate Exos 16TB)
  18. How large is the pool you have assigned for domains?
  19. Temporarily set a fixed IP on the workstation ethernet connection you are using to connect to Unraid. 192.168.100.11 should work.
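      A hedged example, assuming a Linux workstation with the wired interface named eth0 and a /24 subnet (on Windows, set the same address in the adapter's IPv4 properties instead):
        sudo ip addr add 192.168.100.11/24 dev eth0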
  20. Why so complicated? https://wiki.unraid.net/Manual/Storage_Management#Replacing_disks
  21. Are you sharing or bridging the wifi with the ethernet on the workstation? Does the workstation ethernet have a fixed IP? What is it? What IP is Unraid showing?
  22. That won't work. If you format a drive, the format will be written to parity, and the rebuilt drive will be blank; all files will be gone.