Everything posted by itimpi

  1. Parity Swap is not appropriate here. To do what you want, first replace the parity drive, and then, only when parity has been rebuilt, use the standard procedure for adding a new drive to the array to add the old parity drive as a new data drive. You may find this section of the online documentation to be of use.
  2. That is off the Main page in the Unraid GUI.
  3. You would need to run the 6.9.0 beta release (which includes a much newer Linux kernel than the 6.8.3 release) to have drivers for your NIC.
  4. It means the drive is being emulated using the combination of the parity drive PLUS all the other data drives. The data is therefore available just as if the drive were really present, and you can do anything you could normally do - the sketch below shows the idea.
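     As a minimal illustration (hypothetical byte values, single parity only - the parity2 calculation is different), from any Linux shell:

        # Suppose three data disks hold the bytes 0xA5, 0x3C and 0x7E at the same offset.
        # Single parity is simply the XOR of all the data disks at that offset:
        printf 'parity   = %02X\n' $(( 0xA5 ^ 0x3C ^ 0x7E ))   # -> E7
        # If the disk holding 0x3C fails, its contents are emulated on the fly from
        # parity XOR the surviving data disks:
        printf 'emulated = %02X\n' $(( 0xE7 ^ 0xA5 ^ 0x7E ))   # -> 3C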
  5. After the rebuild of parity2 completes you can stop the array, unassign parity1, and the array will then start without it. As you are then down to a single parity drive again (albeit parity2), your array is protected against any single drive failing. At that point you can re-use the old parity1 drive for whatever you want. You need to leave the 8TB drive as parity2 because the calculations for parity1 and parity2 are different, so they are not interchangeable.
  6. Unfortunately those diagnostics are from after a reboot, so they do not tell us much other than confirming that disk1 has been disabled. I was hoping you had diagnostics from before the reboot, as those would show what led up to the disk being disabled. Do the contents of the ‘emulated’ disk1 look OK? A rebuild will result in exactly what you can currently see on the emulated disk, so that is worth checking as the first step - if things do not look right then recovery options other than a rebuild might be better.
  7. Your syslog shows continual errors on disk1. I would suggest that you carefully check the power/SATA cables as they are the most likely cause of the errors.
  8. I would have thought that running HexChat in a docker container may well give you what you want? Not tried it myself though. Looking in the Apps tab there appear to be several IRC clients available for running on Unraid, so if the HexChat container is not good enough one of the others may be.
  9. The SMART report looks fine. We might be able to tell you more if you post the system's diagnostics zip file (obtained via Tools->Diagnostics) covering the period when the drive got disabled. All the information on handling disabled drives is here in the online documentation.
  10. That should not happen as long as the following are true: parity is valid before the rebuild; and the ‘emulated’ drive mounts with no problems and shows all your files.
  11. The read-only Mount option is for Unassigned Devices - not array drives. The other approach is:
      - Assign ALL drives as data drives.
      - Start the array. A parity drive will always show as unmountable as it has no file system - make a note of its serial number. All data drives should mount OK; if this is not the case then more work will be needed to identify the parity drive.
      - Go to Tools -> New Config and use the option to retain current drive settings (not mandatory, but it makes things easier).
      - Return to the Main tab and, now that you know which drive is parity, correct the assignments to reflect this.
      - Start the array to commit the changes and build parity.
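     If you want a quick command line check before doing any of that, something like the following (a sketch - device names and output will vary) lists the filesystem signature on every partition; the array drive whose data partition shows an empty FSTYPE column is the parity drive, since parity has no file system:

        lsblk -o NAME,SIZE,FSTYPE,LABEL
        # Data drive partitions normally show xfs (or btrfs/reiserfs);
        # the parity drive's partition shows no FSTYPE at all.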
  12. That is up to you. The rebuild process involves re-writing every sector on the drive so in that sense it is superfluous. However, you may want to prove this to yourself.
  13. Do you have any docker containers that, if you look at the Docker tab and switch on the Advanced view, show as ‘healthy’? It has been noticed that such containers have been set up by their authors to write to the docker image every few seconds as part of a health check, and although the writes are small, the write amplification inherent in using BTRFS means this can add up. I believe an issue has been raised against Docker in case this is a bug in the Docker engine rather than the container authors simply mis-using the health-check capability. In addition, there are other options available in the Docker settings that can reduce this load, such as using an XFS formatted image instead of BTRFS, or not using an image at all and storing directly into the file system (the array has to be stopped to see these options). Have you tried any of these?
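     As a rough sketch of how to check this from the command line (the container name is just an example), you can inspect whether a container ships a health check at all, and suppress it when creating the container if you decide it is doing more harm than good:

        # Show the health-check definition a container was built with, and its current state:
        docker inspect --format '{{json .Config.Healthcheck}}' my-container
        docker inspect --format '{{json .State.Health}}' my-container
        # When creating the container you can disable the built-in health check entirely:
        docker run --no-healthcheck <other options> <image>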
  14. Have you implemented any of the several options mentioned in the Release notes to cut down on excessive writes? Simply updating without taking any further action does not solve the issue, so if you have not done so you will still be getting excessive writes.
  15. Unraid does not support RAID1 in the array, although you can have the BTRFS variant of RAID1 in a cache pool. Which drives are you intending to keep in the array, and which ones as cache pools? Note that there has to be at least 1 drive in the array (although it is possible to use something like a USB pen drive, which is never going to store actual data, to satisfy this requirement).
  16. You can simulate any disk failing by doing the following:
      - Stop the array.
      - Unassign the disk.
      - Start the array to commit the change and make Unraid ‘forget’ the old assignment. If the disk was a data drive, Unraid will show it as being emulated (i.e. its contents are still available) using the combination of parity plus the other data drives. If it was a parity drive then Unraid will now simply show the array as unprotected.
      - Stop the array.
      - Assign the disk to be rebuilt (data or parity, it does not matter).
      - Start the array to commit the change and start the rebuild.
  17. When you assign disks to the array after a New Config, data disks are only over-written if they are not already in Unraid format (hence the importance of knowing which is the parity drive). If you are not sure which drive is the parity drive then you can try mounting them in read-only mode from Unassigned Devices. The parity drive does not have a file system so will not mount, and hopefully all the others mount fine. With single parity the order of the data disks does not affect parity, so after assigning all the disks you can tick the 'parity is valid' option (although I would still run a parity check to make sure).
  18. The instructions you posted left out an important step. After doing the New Config and making any desired changes you need to start the array at least once to commit the changes. Your data will still be intact. The question to ask at this point is whether you know which drive(s) are parity drives and whether you had 1 or 2.
  19. Not sure why you expect these options to show up under Tools - that has never been where they are located. The Docker and VM tabs will show when you have enabled the respective services under Settings.
  20. It can be frustrating to get this working as WireGuard gives minimal feedback, but I expect it is going to be something simple once you track down what is wrong. Exactly what are you trying to access via the tunnel (the Unraid server, LAN connected machines, the internet)? For instance, are you trying to access your server by IP or by name? Name will not work, as the DNS server specified is not the one local to the Unraid server, so name->IP resolution will fail. If using an IP address you should use 10.253.0.1, since that is the address of the server end of the tunnel.
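     A quick sketch of testing by IP from the remote end with the tunnel active (Linux/macOS syntax shown - adjust for Windows):

        # The Unraid end of the tunnel answers on the tunnel address itself:
        ping -c 3 10.253.0.1
        # If that responds, the GUI should also load by IP, e.g. http://10.253.0.1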
  21. Not clear what your problem might be. It might be a good idea to post a screenshot of the WireGuard settings page from Unraid. The other thing that occurs to me is whether port forwarding is set up correctly on your router? The fact that you mention there does not even appear to be any attempt to handshake suggests the inbound connection is just not reaching your Unraid server.
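     To see whether anything is arriving at all, a sketch run from the Unraid console (wg0 is the usual interface name for the first tunnel - check yours):

        # Shows the listening UDP port, the configured peers and the latest handshake time:
        wg show wg0
        # No 'latest handshake' line for your peer normally means the inbound UDP packets
        # never reach the server - re-check that the router port forward points at the
        # Unraid IP and at the same UDP port WireGuard is listening on.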
  22. Are you booting in legacy or UEFI mode? If UEFI mode, have you made sure that the EFI folder on the flash drive does not have a trailing ~ character (if it does, this needs removing to enable UEFI boot)?
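     A sketch of how to check and fix this from the console (the flash drive is mounted at /boot; the exact trailing character can vary, so match whatever the listing shows):

        ls -d /boot/EFI*            # see exactly what the folder is currently called
        mv '/boot/EFI~' /boot/EFI   # rename it, adjusting the source name to match the listing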
  23. I thought that the unrar cli command is included as standard with Unraid?
  24. It does not normally make sense to be using the cache for a share that is going to have enough written to it to fill up the cache before mover runs. Mover will not normally be able to move things off the cache fast enough in such scenarios. It is probably better to bypass the cache for such shares and write directly to the array.
  25. The check report indicates that running the repair will fix the unmountable problem. Basically do it from the GUI, making sure there is no -n option (which forces a check-only run).
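     If you would rather run it from the console instead, a rough sketch (assuming the drive is XFS-formatted disk1 and the array has been started in Maintenance mode; the md device name may differ on your release):

        # Check-only pass first (-n reports problems but changes nothing):
        xfs_repair -n /dev/md1
        # Then the actual repair, with -n removed:
        xfs_repair /dev/md1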