
DiscDuck

Members
  • Posts

    46
  • Joined

  • Last visited

Posts posted by DiscDuck

  1. On 7/20/2021 at 8:21 PM, jbrodriguez said:

    I've been completely swamped with the ControlR app rewrite :)

    I checked the unbalance code and it's filtering cache drives by name.

    Would you mind following the instructions in https://github.com/jbrodriguez/controlr-support ?

    This would allow me to check out how the multiple pool drives are named/defined.

    Hi :)

     

    Honestly, having more than one cache pool drive is not a one-in-a-million exotic server case; it has been very commonplace since the release of Unraid 6.9.x many moons ago.

     

    If you find the time to look into this, simply add another cache pool drive (it doesn't even need to be an SSD) to your server and you will have all the information you could possibly need.

     

    If you don't have the time or the drive to bring unbalance up to date, that is of course totally up to you. You will still have my appreciation for the great tool that has helped me move many, many bytes.

  2. On 6/17/2021 at 2:42 PM, jbrodriguez said:

    Hi, I haven't looked at it yet, still caught up with work and other matters. I'll probably check it out in a week or two, if not I'll post back here

     

    Hi :) Did the week or two allow you to look into the missing pool drives? With the 6.10 release around the corner, it would be nice if unbalance supported 6.9.x. Again, if we can help by pinging Limetech, looking for APIs and such, let us know. Thanks

  3. On 4/22/2021 at 8:39 PM, jbrodriguez said:

    I see, probably there's a new 'api' to get the other cache pools.

    I'll take a look, but it might take a bit of time.

     

    Hi :) Just wondering if you had any luck. It seems more and more users are enjoying the multiple cache pools feature. If we can help by pinging Limetech, looking for APIs and such, let us know. Thanks

  4. I would love to find out how the choices for the GUI icons came about. What made you choose a paw for boot devices? A snowflake for array operations? I'd love to hear more about this and the early decisions.

  5. 6 minutes ago, Squid said:

    I did open up a issue a couple of days ago on an internal bug tracker

     

    Thanks for the quick response.

     

    I guess I'm used to companies that charge money for their products (and even ones that don't) at least giving an acknowledgment after several days. Let's see when a response and a fix are published.

  6. After testing 6.9.0 on two other Unraid servers for a couple of days without any problems, I updated the next Unraid server, my storage server, from 6.9.0-rc2 to 6.9.0.

     

    Now I see drives with a red lock icon, claiming "WRONG ENCRYPTION KEY".

     

    But every time I try to start the array, it is a different set of drives. I even managed to get the array started with all drives unlocked once.

     

    The two other Unraid servers also have their drives encrypted the same way, with the same key. No problems there.

     

    Example array start:

    Quote

    Mar 5 10:58:52 Storage emhttpd: Opening encrypted volumes...
    Mar 5 10:58:52 Storage emhttpd: shcmd (1639): /usr/sbin/cryptsetup luksOpen /dev/md1 md1 --allow-discards --key-file=/root/keyfile
    Mar 5 10:58:55 Storage emhttpd: shcmd (1641): /usr/sbin/cryptsetup luksOpen /dev/md2 md2 --allow-discards --key-file=/root/keyfile
    Mar 5 10:58:57 Storage emhttpd: shcmd (1643): /usr/sbin/cryptsetup luksOpen /dev/md3 md3 --allow-discards --key-file=/root/keyfile
    Mar 5 10:58:59 Storage emhttpd: shcmd (1645): /usr/sbin/cryptsetup luksOpen /dev/md4 md4 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:01 Storage emhttpd: shcmd (1647): /usr/sbin/cryptsetup luksOpen /dev/md5 md5 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:03 Storage root: No key available with this passphrase.
    Mar 5 10:59:03 Storage emhttpd: shcmd (1647): exit status: 2
    Mar 5 10:59:03 Storage emhttpd: shcmd (1649): /usr/sbin/cryptsetup luksOpen /dev/md6 md6 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:05 Storage emhttpd: shcmd (1651): /usr/sbin/cryptsetup luksOpen /dev/md7 md7 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:07 Storage emhttpd: shcmd (1653): /usr/sbin/cryptsetup luksOpen /dev/nvme0n1p1 nvme0n1p1 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:09 Storage emhttpd: shcmd (1655): /usr/sbin/cryptsetup luksOpen /dev/sdb1 sdb1 --allow-discards --key-file=/root/keyfile
    Mar 5 10:59:11 Storage emhttpd: shcmd (1656): touch /boot/config/forcesync
    Mar 5 10:59:11 Storage root: Starting diskload
    Mar 5 10:59:11 Storage emhttpd: Mounting disks...

    Quote

    Mar 5 10:59:12 Storage emhttpd: shcmd (1668): mkdir -p /mnt/disk4
    Mar 5 10:59:12 Storage emhttpd: shcmd (1669): mount -t xfs -o noatime /dev/mapper/md4 /mnt/disk4
    Mar 5 10:59:12 Storage kernel: XFS (dm-3): Mounting V5 Filesystem
    Mar 5 10:59:12 Storage kernel: XFS (dm-3): Ending clean mount
    Mar 5 10:59:13 Storage kernel: xfs filesystem being mounted at /mnt/disk4 supports timestamps until 2038 (0x7fffffff)
    Mar 5 10:59:13 Storage emhttpd: shcmd (1670): xfs_growfs /mnt/disk4
    Mar 5 10:59:13 Storage root: meta-data=/dev/mapper/md4 isize=512 agcount=11, agsize=268435455 blks
    Mar 5 10:59:13 Storage root: = sectsz=512 attr=2, projid32bit=1
    Mar 5 10:59:13 Storage root: = crc=1 finobt=1, sparse=1, rmapbt=0
    Mar 5 10:59:13 Storage root: = reflink=1
    Mar 5 10:59:13 Storage root: data = bsize=4096 blocks=2929717235, imaxpct=5
    Mar 5 10:59:13 Storage root: = sunit=0 swidth=0 blks
    Mar 5 10:59:13 Storage root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
    Mar 5 10:59:13 Storage root: log =internal log bsize=4096 blocks=521728, version=2
    Mar 5 10:59:13 Storage root: = sectsz=512 sunit=0 blks, lazy-count=1
    Mar 5 10:59:13 Storage root: realtime =none extsz=4096 blocks=0, rtextents=0
    Mar 5 10:59:13 Storage emhttpd: shcmd (1671): mkdir -p /mnt/disk5
    Mar 5 10:59:13 Storage emhttpd: /mnt/disk5 mount error: Wrong encryption key
    Mar 5 10:59:13 Storage emhttpd: shcmd (1672): umount /mnt/disk5
    Mar 5 10:59:13 Storage root: umount: /mnt/disk5: not mounted.
    Mar 5 10:59:13 Storage emhttpd: shcmd (1672): exit status: 32
    Mar 5 10:59:13 Storage emhttpd: shcmd (1673): rmdir /mnt/disk5
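
    If it helps with debugging, the key can be re-tested by hand against one of the devices from the log, using the same keyfile emhttpd uses (md5 is just the device that failed above; adjust as needed):

    # check the keyfile against the LUKS header without creating a mapping
    cryptsetup luksOpen --test-passphrase /dev/md5 --key-file=/root/keyfile ; echo $?
    # exit status 0 = key accepted, 2 = bad passphrase (matching the "exit status: 2" in the log)

    Since the same key sometimes works and sometimes doesn't on the very same disks, I suspect the update rather than the keyfile itself.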

     

  7. In preparation, I rented an external virtual server with a public IP address. Locally, I configured a VM with pfSense. pfSense is up and working with standard NAT, and test clients can access the Internet.

     

    How do I proceed from here? I'm looking for help, tutorials, or documentation here.

     

    Can pfSense connect to the external server with SSH, and could I forward all ports from the external server to the WAN interface of pfSense?
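
    A rough sketch of the direction I'm thinking of for a single port, using an SSH reverse tunnel (the user, addresses and port below are just placeholders; for forwarding all ports, a VPN tunnel such as WireGuard between the VPS and pfSense would probably be the cleaner route):

    # on the external server: allow remote forwards to bind on the public interface
    #   /etc/ssh/sshd_config -> GatewayPorts yes   (then restart sshd)
    # from a machine behind pfSense: publish local port 443 of 192.168.1.10 on the VPS
    ssh -R 0.0.0.0:443:192.168.1.10:443 user@vps.example.com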

  8. I managed to regain access to the lost data. I hope this little workflow helps the next person this happens to.

     

    So, after a system lockup and a hard reset, one of my data drives in the array showed up without any partition or data. The drive was part of an array with XFS-encrypted (LUKS) drives. There was no parity drive. (No risk for me, as I have plenty of backups.)

     

    Use the following method at your own risk. Always have backups; there is no substitute.

     

    The drive in question needs to be outside the array, or the array needs to be stopped. My drive was outside the array and only visible in the GUI thanks to the great Unassigned Devices plugin. At the end of the identification (Unassigned Devices table on the Main page), you find the device name sdX (where X typically is a, b, c, etc.). Let us say it is sdg.
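
    If in doubt, the device letter can also be confirmed from a terminal:

    lsblk -o NAME,SIZE,MODEL,SERIAL   # find the drive by size/model and note its NAME (e.g. sdg)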

     

    Open the terminal. First, we need to find the LUKS header; its magic bytes are "LUKS" followed by 0xBA 0xBE, which is exactly the hex pattern we search for below.

    hexdump -C /dev/sdg | grep "4c 55 4b 53 ba be"

    If you get a result, use CTRL-C to stop the dump.

     

    Note the first hex number (the offset in the leftmost column); it should look something like “00008000”. Now translate it into decimal. I used https://www.rapidtables.com/convert/number/hex-to-decimal.html

     

    00008000 translates to decimal 32768.
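
    The conversion can also be done directly in the terminal, if you prefer:

    printf '%d\n' 0x00008000   # prints 32768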

     

    Let us find the next free loop device.

    df -h

    If you see entries like /dev/loop3, use the next one up that is not listed. Now we combine the device, the decimal offset and the next free loop device in the next step.

    losetup -o 32768 /dev/loop4 /dev/sdg
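
    (If you don't want to pick the loop device by hand, losetup can find a free one for you and print the name it used:)

    losetup --find --show -o 32768 /dev/sdg   # e.g. prints /dev/loop4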

    Now we open the device. If we are asked for the passphrase, we hit the right spot to access the data.

    cryptsetup luksOpen /dev/loop4 lukstemp

    Enter your passphrase. If you get back to the prompt without any message, it worked!
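
    To be sure the mapping is really there before mounting, you can ask cryptsetup for its status:

    cryptsetup status lukstemp   # should report the mapping as active and show /dev/loop4 as the underlying device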

     

    Finally, we create a mount point and mount the filesystem.

    mkdir /mnt/decrypted_drive
    mount /dev/mapper/lukstemp /mnt/decrypted_drive

    If all went as planned, we can now access the data under /mnt/decrypted_drive.
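
    And when you are done copying the data off, everything can be taken down again in reverse order (same names as above):

    umount /mnt/decrypted_drive
    cryptsetup luksClose lukstemp
    losetup -d /dev/loop4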

     
