tstor

Members

  • Content Count: 104
  • Joined
  • Last visited

Community Reputation

9 Neutral

About tstor

  • Rank: Member

Converted

  • Gender: Male

  1. No, the data you write is encoded for two reasons: to guarantee a minimum number of transitions for clock recovery (see RLL codes, https://en.wikipedia.org/wiki/Run-length_limited) and to enable modern error-recovery algorithms (https://en.wikipedia.org/wiki/Low-density_parity-check_code, https://web.archive.org/web/20161213104211/http://www.marvell.com/storage/assets/Marvell_88i9422_Soleil_pb_FINAL.pdf). In other words, there will always be a lot of flux reversals / polarity changes regardless of what exactly you write. Yes, but assume you have an address line defect in a higher address
  2. Your assumptions about magnetic media are wrong. The information on a disk is not coded into magnetised / demagnetised spots; it is coded into transitions between regions of opposite magnetic polarity. The read heads detect magnetic flux changes, not the magnetisation itself. Also, what is written is not the bits that the disk driver hands over to the drive. The data coming from the driver gets re-encoded in a way that optimises several parameters, e.g. the number of transitions for clock recovery, the influence on neighbouring spots, and error correction. How it is done on a given drive is the secret sauce of i
  3. Excellent, I'll use it with the next swap. And with the current one I have learned a few new things about encrypted drives.
  4. blkid did not show any duplicate UUIDs. I changed the duplicate UUID of the encrypted partition with "cryptsetup luksUUID...", but I didn't think about the XFS file system becoming visible once the encrypted partition is unlocked. So here is a write-up of what I did ultimately to get access, in case someone else gets into the same situation (a command-level sketch follows after this list). Goal: Access an encrypted drive taken out of the array in UD on the same system. Issue: If this was due to a swap and the drive taken out has been reconstructed, there will be UUID conflicts that prevent mounting. This is because the reconstru
  5. I did; the diagnostics in my previous post to which you replied were the ones taken after changing the UUID manually and rebooting. https://forums.unraid.net/topic/92462-unassigned-devices-managing-disk-drives-and-remote-shares-outside-of-the-unraid-array/?do=findComment&comment=935227 But here is a fresh set: tower-diagnostics-20210116-1333.zip
  6. Thanks. Thanks again, I really appreciate the effort you put into the UD plugins. Now, even though I first changed the UUID via the CLI and then rebooted the server, UD does not mount the encrypted disk. Any idea?
  7. Here they are. Please note that in the meantime I have changed the conflicting UUID manually (/dev/sdt). However, UD still does not show any disk under "Change Disk UUID". By the way, and completely unrelated: when searching this thread for information regarding LUKS and UD, I got a bit confused regarding LUKS and SSDs (see the discard check sketched after this list). In your first post it is stated first that "SSD disks formatted with xfs, btrfs, or ext4 will be mounted with 'discard'. This includes encrypted disks." Then further down in the same post it is said that "Discard is disabled on an encrypted SSD because of potential
  8. Makes sense and is correct. Partition 1, however, has a different GUID on the rebuilt drive (see the sgdisk sketch after this list). Is this Unraid's doing when it resizes the partition after the rebuild? For some reason it didn't. The list of available drives for changing the GUID in UD is empty. Any idea?
  9. I have a question regarding UD / encrypted disks. I replaced one of the encrypted array data drives with a larger one. After the rebuild had successfully finished, I plugged the previous data drive into an empty slot and wanted to mount it. However it does not mount (a manual unlock/mount sketch for diagnosing this follows after this list). It obviously has the same password as the array, and the array is mounted. UD displays "luks" as the file system for both the drive and partition 1. The mount button for the drive is clickable; for partition 1 it is greyed out. If I click on mount, UD spends some seconds doing something, the disk log adds one line of "/usr/sbin/cryptsetu
  10. Thanks, the array is now rebuilding the missing drive (disk12). I am also watching the head load count because in my opinion it is excessive. For the busy array drives it currently remains stable, but for the idling UA drives (not mounted) it continues to increase (S.M.A.R.T. attributes 192 & 193 increase by about 3 per hour; see the monitoring sketch after this list). It is known that WD Green drives aggressively park their heads, but these are HGST data center drives. Looking at the high values in the counters, all drives seem to do this when inactive. Is this normal? tower-diagnostics-20200421-0920.zip
  11. Thanks, I will. I'd like to do that in maintenance mode so that I can be sure there are no writes to the array during the rebuild. But I would like to have read access. Is there a way to do that? If not, can I mount individual drives with mount -o ro /dev/sdX1 /x, or does that interfere with the rebuild process? (A read-only mount sketch follows after this list.)
  12. There were no errors during the parity sync. I ran the parity check in non-correcting mode and it terminated with zero errors.
  13. I have started to use two 16 TB drives from Seagate, one of each for the parity drives: - Seagate Exos X16 ST16000NM001G 16 TB SATA - Seagate Exos X16 ST16000NM002G 16 TB SAS I chose Seagate simply because they are currently the only ones from which it is easy to get 16 TB drives on the open market. The competitors seem to sell everything above 14 TB almost exclusively to data center operators and storage vendors. Interestingly, the SAS drives at that size are sometimes cheaper than the SATA ones; it seems that they sell more of those. I chose one of each, hoping to minimis
  14. While I originally wanted to avoid increasing the array size and therefore was interested in the array swap procedure, I ultimately decided to add another controller and increase the number of drives. Therefore I just put in larger parity drives and recalculated parity. For that I mounted the array in maintenance mode, so that if a drive failed during the recalculation, I would still have the old parity drives for reconstruction. Now two things are worrying me. 1. Parity does not match between the old and new drives (see the comparison sketch after this list). In order to learn more about Unraid as well as a
  15. Hello, I need to upgrade my two parity drives with larger disks in order to be able to use larger data drives in the future. The current parity drives shall then replace the smallest data drives. Is the parity swap procedure described here (https://wiki.unraid.net/The_parity_swap_procedure) still supported with Unraid 6.8.2 and a dual parity array, and if yes, can I do both swaps at the same time? It would obviously be faster to just copy the current parity drives to the new ones and fill the remaining area on the parity disks with the correct bits using the swap procedure than
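
Sketches referenced in the items above. All of them are illustrative only; device and mapper names such as /dev/sdX1 or old_data are placeholders, not the actual devices from the posts.

Item 4 (UUID conflict after a drive swap): a minimal command-level outline of the kind of steps involved, assuming the old, replaced encrypted drive is the one being fixed.

    cryptsetup luksUUID /dev/sdX1                       # show the LUKS header UUID that clashes with the array
    cryptsetup luksUUID --uuid "$(uuidgen)" /dev/sdX1   # give the LUKS container a new random UUID (asks for confirmation)
    cryptsetup luksOpen /dev/sdX1 old_data              # unlock it; prompts for the array passphrase
    xfs_admin -U generate /dev/mapper/old_data          # the XFS file system inside also needs a new UUID (must be unmounted, log clean)
    cryptsetup luksClose old_data                       # close again before letting UD try the mount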
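
Item 7 (discard on an encrypted SSD): a quick check of both whether the SSD itself advertises TRIM and whether the opened LUKS mapping passes it through; "my_ssd" is a placeholder mapping name.

    lsblk --discard /dev/sdX                  # non-zero DISC-GRAN / DISC-MAX means the SSD supports TRIM
    cryptsetup status my_ssd | grep -i flags  # "discards" listed here means the mapping was opened with --allow-discards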
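
Item 8 (partition GUIDs): sgdisk can show how the rebuilt drive's partition 1 differs from the old one; /dev/sdX (rebuilt) and /dev/sdY (old) are placeholders.

    sgdisk --info=1 /dev/sdX               # "Partition unique GUID" of partition 1 on the rebuilt drive
    sgdisk --info=1 /dev/sdY               # the same for the old drive, for comparison
    sgdisk --partition-guid=1:R /dev/sdX   # only if a fresh random GUID for partition 1 is really wanted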
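
Item 9 (seeing why the old encrypted drive does not mount): unlocking and mounting it by hand makes the real error visible instead of the truncated line in the UD log.

    mkdir -p /mnt/test
    cryptsetup luksOpen /dev/sdX1 old_array_disk      # prompts for the array passphrase
    mount -o ro /dev/mapper/old_array_disk /mnt/test  # try a read-only mount
    dmesg | tail                                      # if the mount fails, the kernel log names the reason
    # (with a rebuilt copy of this disk already in the array, XFS typically reports a duplicate UUID here)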
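
Item 10 (watching the head parking): the two S.M.A.R.T. attributes can be sampled over time, and on drives that support APM the power-management level can be read and relaxed. SAS drives and some data-center models will not accept the hdparm call at all, in which case the idle behaviour is purely firmware-controlled.

    smartctl -A /dev/sdX | grep -E 'Power-Off_Retract_Count|Load_Cycle_Count'   # attributes 192 and 193
    hdparm -B /dev/sdX        # read the current APM level, if supported
    hdparm -B 254 /dev/sdX    # highest level with APM still enabled; 255 disables APM entirely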
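
Item 11 (a read-only look at a single data disk): this only illustrates what such a mount could look like for an unencrypted XFS data partition; whether doing it while the array is rebuilding in maintenance mode is safe is exactly the open question in that post.

    mkdir -p /x
    mount -o ro,norecovery /dev/sdX1 /x   # norecovery skips XFS log replay, so the mount itself writes nothing
    umount /x                             # when done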
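
Item 14 (checking the parity mismatch): a crude way to see whether and where two parity disks differ is a byte-wise compare over the region both of them cover, assuming parity lives on partition 1 of each drive as on the data disks; the 1 GiB limit is an arbitrary sample size.

    cmp -l -n $((1024*1024*1024)) /dev/sdX1 /dev/sdY1 | head   # list the first differing byte offsets, if any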