Everything posted by JonathanM

  1. If you sync everything, that's not a good backup, because deletions and corruption are propagated instantly. Backup implies being able to retrieve previous versions of files. (A minimal versioned-backup sketch follows after this list.)
  2. Now you should do a parity check to ensure everything can be read accurately.
  3. If you are not fully stopping the rclone and mergerfs processes BEFORE trying to stop the array, you probably have mounts outside Unraid's control holding the /mnt/user tree open. (A sketch for finding processes holding /mnt/user open follows after this list.)
  4. As an aside, I recommend updating your signature: either keep it up to date or remove the system description. It currently reads: 6.1.2 Pro | Antec 1200 v3 | Gigabyte Z77-UD3H | 16GB | Core i5-3470s | Corsair AX750 | SuperMicro, Norco, Icy Dock and iStarUSA 5x3 Cages | 1 x SASLP-MV8 | 2 x RocketRaid 2300. I'm assuming you are no longer on 6.1.2 or running Marvell-based controllers, which have not been recommended for a few years now.
  5. You keep saying "installer", and I suppose that's technically somewhat accurate, but keep in mind that it "installs" into RAM, so it has to "install" on every boot. It's not something you boot from once to complete a process; the USB stick stays connected whenever Unraid is running. It's more like a "Live" install that saves its settings to the USB flash drive.
  6. On Unraid, cache isn't cache in the traditional sense; it's tiered storage. Unraid doesn't have the option to use a disk as a read cache. Files are read from wherever they are stored, whether that's the main parity array or one of the defined pools, and existing files are updated in place. Only new files have the option of being written to one location and then moved to another on a schedule. If a file needs to be written and read at NVMe speed, the file will need to stay there. You could manually set up a backup routine to make copies to a different location if you wish (a sketch follows after this list).
  7. If you had read the link Squid posted, you would have found the answer was there all along; the issue was addressed and solved a couple of weeks ago. https://forums.unraid.net/topic/108643-all-docker-containers-lists-version-“not-available”-under-update/?tab=comments#comment-993588
  8. If you click on one of the devices in the pool, it will take you to a page with tools to run a BTRFS scrub and list some stats (a scripted equivalent is sketched after this list). Here is a link for extended monitoring options. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-700582
  9. No experience with that exact scenario, but my gut feeling is that Unraid will happily set up whatever you tell it to; the only issue is that the motherboard may try to manipulate the disks and undo what Unraid has done. Since it's a new setup with zero data to lose, now is the ideal time to perform the experiment if you want. Just be prepared to redo it in Unraid after you reboot if removing the disks from the BIOS RAID does mess it up. If I were you, I'd take the opportunity to learn: set up the pool, confirm it's working, check the BTRFS stats and info on it, reboot and remove the disks from the BIOS RAID, then check the filesystem after you boot back into Unraid (a health-check sketch follows after this list). It's good to know how to keep up with the health of a BTRFS RAID volume on Unraid anyway; currently the stock setup isn't as thorough with BTRFS health checks as it should be.
  10. Unraid doesn't support BIOS software-based RAID, but you can create a pool and add both devices to it; they will be set up as BTRFS RAID1.
  11. If parity is valid when the drive is replaced, the rebuilt drive will be identical. How long ago was your last parity check with zero errors? Why did you replace the drive? Go to Tools > Diagnostics, download the zip file, and attach it to your next post in this thread.
  12. Ahh, that's not what I read. It was unclear; I assumed "kill" meant dead. So was that enabled during an event? I'm guessing not, or you would have already posted those results.
  13. No, just run an extended SMART test (a scripted version is sketched after this list); after it completes, download the diagnostics zip and attach it to your next post in this thread. Marginal power supply? Any time the server crashes to a completely powered-off state, it points to a power issue somewhere, either the board or the PSU.
  14. Attach the diagnostics zip file to your next post in this thread. As a general rule we won't access files not attached here.
  15. Start the array without the parity drive assigned, stop the array, assign the parity drive and let it build.
  16. Most boards have a BIOS setting that determines which errors to skip (booting anyway) and which errors to halt on.
  17. Is the issue ongoing, or are you just trying to analyze this specific instance?
  18. https://forums.unraid.net/topic/57181-docker-faq/#comment-564306
  19. An unclean shutdown means Unraid doesn't know for sure that all writes were completed, so a parity check is automatically started in case there was data in flight when the shutdown occurred. During a clean shutdown, as soon as the array is stopped and all the drives are unmounted, a status file is updated; on array start, that status file is checked and changed back to a running status. (The general pattern is sketched after this list.)
  20. You weren't trying to access the same folder: it's SH-GAME-W10-01, not sh-game-w10-01. Linux is case sensitive (see the short example after this list).
  21. It will not, however, add the space to the partition inside the image; for that, you will need to expand the partition. The easiest way I know of is to download GParted Live or another utility ISO that can manipulate partitions, then set up a new VM with the utility ISO as the install medium and the vdisk.img file you need to operate on as the primary disk. (A scriptable alternative is sketched after this list.)
  22. I asked because running that script is normally a precursor to removing a drive, and the best way forward depended on a whole bunch of factors, especially in light of what you said. So, given that you just wanted to go back to using the drive in the array, you did exactly what was needed.
  23. Links to parts you plan to use? I'm curious how you are planning to accomplish that. Keystone AF / AF with AM / AM cable connected to the server?
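
Some rough sketches of the ideas above follow, all in Python and all illustrative rather than definitive. First, the versioning point from item 1: a minimal sketch, assuming hypothetical source and backup paths, that writes each run into a new timestamped snapshot so older versions stay retrievable.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical paths for illustration only.
SOURCE = Path("/mnt/user/documents")
BACKUP_ROOT = Path("/mnt/user/backups/documents")

def versioned_backup():
    """Copy the source into a fresh timestamped folder instead of
    syncing in place, so a deletion or corrupted file at the source
    never destroys the only remaining copy."""
    snapshot = BACKUP_ROOT / datetime.now().strftime("%Y-%m-%d_%H%M%S")
    shutil.copytree(SOURCE, snapshot)
    print(f"Snapshot written to {snapshot}")

if __name__ == "__main__":
    versioned_backup()
```

A real backup tool would deduplicate unchanged files between snapshots; this full copy is only meant to show the versioning idea.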
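For item 3, a sketch of finding what's holding /mnt/user open, assuming it runs as root on the server (reading other processes' /proc entries requires root); it scans /proc for working directories and open file descriptors under the mount.

```python
import os
from pathlib import Path

MOUNT = "/mnt/user"  # the tree Unraid needs to release

def holders(mount=MOUNT):
    """Return {pid (name): [paths]} for processes whose cwd or open
    file descriptors point under the given mount."""
    found = {}
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        paths = []
        try:
            cwd = os.readlink(proc / "cwd")
            if cwd.startswith(mount):
                paths.append(f"cwd: {cwd}")
            for fd in (proc / "fd").iterdir():
                target = os.readlink(fd)
                if target.startswith(mount):
                    paths.append(f"fd: {target}")
        except (PermissionError, FileNotFoundError):
            continue  # process exited, or not running as root
        if paths:
            name = (proc / "comm").read_text().strip()
            found[f"{proc.name} ({name})"] = paths
    return found

if __name__ == "__main__":
    for proc, paths in holders().items():
        print(proc, *paths, sep="\n  ")
```

If rclone or mergerfs shows up in the output, stop those processes before trying to stop the array.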
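For item 6's manual backup routine, a minimal one-way copy sketch, assuming hypothetical paths for a share pinned to an NVMe pool and a destination on the parity-protected array; it copies new or changed files and never deletes anything at the destination.

```python
import shutil
from pathlib import Path

# Hypothetical locations, adjust to your own shares.
FAST = Path("/mnt/nvme/appdata")           # share held on the NVMe pool
SAFE = Path("/mnt/disk1/backups/appdata")  # copy on the parity array

def copy_new_and_changed():
    """One-way copy: anything newer on the fast pool is copied to the
    array; nothing is ever deleted from the destination."""
    for src in FAST.rglob("*"):
        if not src.is_file():
            continue
        dst = SAFE / src.relative_to(FAST)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)

if __name__ == "__main__":
    copy_new_and_changed()
```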
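For item 8, a sketch of running the same scrub from a script, assuming the pool is mounted at /mnt/cache (substitute your pool's mount point); it shells out to the standard btrfs tools.

```python
import subprocess

POOL = "/mnt/cache"  # assumed pool mount point

def scrub_and_report(pool=POOL):
    """Run a foreground BTRFS scrub, then print the per-device error
    counters that the GUI stats page summarizes."""
    # -B keeps the scrub in the foreground so the script waits for it.
    subprocess.run(["btrfs", "scrub", "start", "-B", pool], check=True)
    stats = subprocess.run(["btrfs", "device", "stats", pool],
                           capture_output=True, text=True, check=True)
    print(stats.stdout)

if __name__ == "__main__":
    scrub_and_report()
```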
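For the health check in item 9, a small sketch that flags any nonzero BTRFS device error counter after the reboot, again assuming the pool mounts at /mnt/cache.

```python
import subprocess

POOL = "/mnt/cache"  # assumed pool mount point

def nonzero_error_counters(pool=POOL):
    """Return the `btrfs device stats` lines whose counter isn't zero;
    an empty result is what you want to see after the experiment."""
    out = subprocess.run(["btrfs", "device", "stats", pool],
                         capture_output=True, text=True,
                         check=True).stdout
    return [line for line in out.splitlines()
            if line.split() and line.split()[-1] != "0"]

if __name__ == "__main__":
    problems = nonzero_error_counters()
    print("\n".join(problems) if problems else "All counters zero.")
```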
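For item 13, a sketch of driving the extended SMART test with smartctl, assuming /dev/sdb is the suspect disk (substitute the real device).

```python
import subprocess

DEVICE = "/dev/sdb"  # hypothetical device, substitute the disk to test

def start_extended_test(device=DEVICE):
    """Start the extended (long) self-test; it runs inside the drive
    firmware, so this call returns immediately."""
    subprocess.run(["smartctl", "-t", "long", device], check=True)

def show_selftest_log(device=DEVICE):
    """Print the self-test log to see whether the test has finished
    and what the result was."""
    out = subprocess.run(["smartctl", "-l", "selftest", device],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

if __name__ == "__main__":
    start_extended_test()
```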
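Item 19's clean/unclean shutdown logic follows a common status-file pattern. The sketch below is purely illustrative of that pattern, with a made-up file location; it is not Unraid's actual implementation.

```python
from pathlib import Path

STATUS = Path("/tmp/array_status")  # made-up location, illustration only

def on_array_start():
    """If the last recorded state isn't a clean stop, assume writes may
    have been in flight and schedule a parity check."""
    if not STATUS.exists() or STATUS.read_text() != "stopped_clean":
        print("Unclean shutdown detected: starting parity check")
    STATUS.write_text("running")

def on_clean_stop():
    """Only reached after the array is stopped and every drive has
    unmounted successfully."""
    STATUS.write_text("stopped_clean")
```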
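Item 20 in a few lines: on Linux, paths that differ only in case are different paths.

```python
from pathlib import Path

# Hypothetical share paths: on Linux these name two different folders.
upper = Path("/mnt/user/SH-GAME-W10-01")
lower = Path("/mnt/user/sh-game-w10-01")

print(upper == lower)  # False: POSIX paths compare case-sensitively
```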
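For item 21, the GParted Live VM is the route described in the post; as an alternative, libguestfs's virt-resize tool can do the same expansion from a script. This sketch swaps in that tool and assumes it is installed, that the partition to grow is /dev/sda1 inside the image, and that out.img has already been created at the larger size (e.g. with truncate or qemu-img).

```python
import subprocess

SRC, DST = "vdisk.img", "out.img"  # DST must exist, larger than SRC

# Grow /dev/sda1 inside the image to fill the extra space in DST.
subprocess.run(["virt-resize", "--expand", "/dev/sda1", SRC, DST],
               check=True)
```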