Posts posted by jonnyczi

  1. Oh, I just went back and yes, you are right: sde isn't cache1 but my unassigned parity1. I just wiped it, and I don't have any more errors except for one in the syslog. My array started automatically when rebooting and didn't ask me for the keyfile.

     

    Feb 9 20:57:53 Tower emhttpd: /mnt/disk2 mount error: Volume not encrypted

     

    I will do more research on that.

     

    Sorry about that. I wasn't very vigilant.

     

    Thank you so much @johnnie.black
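
    For anyone else who lands here: wiping the old parity drive just means clearing its leftover btrfs signature so it no longer gets detected as part of a pool. I didn't keep the exact command, but I believe it amounts to something like this (destructive, and the device name is only an example from my system, so verify it with blkid first):

    # clear all leftover filesystem signatures from the old parity partition
    # WARNING: destructive; double-check the device name before running
    wipefs -a /dev/sde1
    # afterwards blkid should no longer report a btrfs UUID for it
    blkid /dev/sde1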

  2. /dev/loop0: TYPE="squashfs"
    /dev/loop1: TYPE="squashfs"
    /dev/sda1: LABEL="UNRAID" UUID="5249-83B7" TYPE="vfat"
    /dev/sdb1: UUID="0b1c567a-f0dc-4b7e-a313-8d098ed56c16" UUID_SUB="bdaf68b2-1077-4d73-9f79-66a978ae6a90" TYPE="btrfs"
    /dev/sdc1: UUID="0b1c567a-f0dc-4b7e-a313-8d098ed56c16" UUID_SUB="37b13129-7539-43b1-9c32-5367273a1b98" TYPE="btrfs"
    /dev/sdd1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs" PARTUUID="adb9d840-8b94-4b72-b27d-225193802d5d"
    /dev/sde1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs" PARTUUID="b7cc4bde-8906-4d69-ad30-185e5f291241"
    /dev/sdg1: UUID="45704a3d-73a8-4d3c-8fff-63c304e424f8" TYPE="ext4" PARTLABEL="primary" PARTUUID="ab7b71f6-036c-4096-94db-3dd7353e333a"
    /dev/md1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs"
    /dev/loop2: UUID="288d3b2e-7fe7-4604-b4c0-999207f03d35" UUID_SUB="93c6a399-6006-42c6-b5e7-8a508268d2a0" TYPE="btrfs"
    /dev/loop3: UUID="cc89a9fc-a9c2-45e9-85d4-0a961199eb31" UUID_SUB="6ff694a5-9b3f-4513-8592-a090097b1236" TYPE="btrfs"
    /dev/sdf1: PARTUUID="bf3964db-c06a-4c5f-a011-37e0b6e56d82"

    syslog.txt
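
    Looking at that output, /dev/sdd1 and /dev/sde1 report the same UUID and even the same UUID_SUB, which I assume is what confuses the pool detection. A quick sketch of how to spot duplicates like this (the device glob is just an example that fits my layout):

    # print only the filesystem UUIDs and flag any value that shows up more than once
    blkid -s UUID -o value /dev/sd?1 | sort | uniq -d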

  3. Thank you so much johnnie.black! That fixed it!

    Quote

    It basically is, but parity will have the same one if used with a single array device.

    Is this in the documentation somewhere? Because that really got me.

     

    On a side note, I have been providing a keyfile (a photograph) on array start but disk2 states "Unmountable: Volume not encrypted". Unraid gave the format option but that didn't change anything. Something doesn't look quite right.
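
    For my own notes, one way to confirm whether a device actually ended up LUKS-encrypted is to ask cryptsetup directly (the md device name here is just my assumption based on the disk numbering):

    # exits 0 if the device carries a LUKS header, non-zero otherwise
    cryptsetup isLuks /dev/md2 && echo "encrypted" || echo "not encrypted"
    # blkid should also report TYPE="crypto_LUKS" for an encrypted volume
    blkid /dev/md2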

     

    Feb  9 20:08:23 Tower root: Starting diskload
    Feb  9 20:08:23 Tower emhttpd: Mounting disks...
    Feb  9 20:08:23 Tower emhttpd: shcmd (11025): /sbin/btrfs device scan
    Feb  9 20:08:23 Tower root: WARNING: adding device /dev/sde1 gen 27591023 but found an existing device /dev/sdd1 gen 27591501
    Feb  9 20:08:23 Tower root: ERROR: cannot scan /dev/sde1: File exists
    Feb  9 20:08:23 Tower root: Scanning for Btrfs filesystems
    Feb  9 20:08:23 Tower emhttpd: shcmd (11026): mkdir -p /mnt/disk1
    Feb  9 20:08:23 Tower emhttpd: shcmd (11027): mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1
    Feb  9 20:08:23 Tower kernel: BTRFS info (device md1): disk space caching is enabled
    Feb  9 20:08:23 Tower kernel: BTRFS info (device md1): has skinny extents
    Feb  9 20:08:37 Tower emhttpd: shcmd (11028): btrfs filesystem resize max /mnt/disk1
    Feb  9 20:08:37 Tower root: Resize '/mnt/disk1' of 'max'
    Feb  9 20:08:37 Tower emhttpd: shcmd (11029): mkdir -p /mnt/disk2
    Feb  9 20:08:37 Tower kernel: BTRFS info (device md1): new size for /dev/md1 is 4000786976768
    Feb  9 20:08:37 Tower emhttpd: /mnt/disk2 mount error: Volume not encrypted
    Feb  9 20:08:37 Tower emhttpd: shcmd (11030): umount /mnt/disk2
    Feb  9 20:08:37 Tower root: umount: /mnt/disk2: not mounted.
    Feb  9 20:08:37 Tower emhttpd: shcmd (11030): exit status: 32
    Feb  9 20:08:37 Tower emhttpd: shcmd (11031): rmdir /mnt/disk2
    Feb  9 20:08:37 Tower emhttpd: shcmd (11032): mkdir -p /mnt/cache
    Feb  9 20:08:38 Tower emhttpd: mount_pool: ERROR: cannot scan /dev/sde1: File exists
    Feb  9 20:08:38 Tower emhttpd: cache uuid: 0b1c567a-f0dc-4b7e-a313-8d098ed56c16
    Feb  9 20:08:38 Tower emhttpd: cache TotDevices: 2
    Feb  9 20:08:38 Tower emhttpd: cache NumDevices: 2
    Feb  9 20:08:38 Tower emhttpd: cache NumFound: 2
    Feb  9 20:08:38 Tower emhttpd: cache NumMissing: 0
    Feb  9 20:08:38 Tower emhttpd: cache NumMisplaced: 1
    Feb  9 20:08:38 Tower emhttpd: cache NumExtra: 0
    Feb  9 20:08:38 Tower emhttpd: cache LuksState: 0
    Feb  9 20:08:38 Tower emhttpd: shcmd (11033): mount -t btrfs -o noatime,nodiratime,degraded -U 0b1c567a-f0dc-4b7e-a313-8d098ed56c16 /mnt/cache
    Feb  9 20:08:38 Tower kernel: BTRFS info (device sdb1): allowing degraded mounts
    Feb  9 20:08:38 Tower kernel: BTRFS info (device sdb1): disk space caching is enabled
    Feb  9 20:08:38 Tower kernel: BTRFS info (device sdb1): has skinny extents
    Feb  9 20:08:38 Tower kernel: BTRFS info (device sdb1): enabling ssd optimizations

  4. Sure I will get that sorted and post back.

     

    Thank you so much for your help!

    All of the disks have been in the system for many months now. sde is actually my parity1 drive that I moved off the array after running New Config, so that I could move data faster with unBalance and then encrypt disk1. The only drive that changed was parity2, which is being moved and reformatted as btrfs encrypted. When the array started with disk2, Unraid asked me to format it, so I did. It was upon restarting the array that I got the cache drive problem.

    I didn't know whether UUIDs ever change on hard drives, but there are supposed to be so many possible values that it's basically impossible to duplicate them. What are the chances? 🤣

  5. Hey guys, long-time Unraid user here. I ran into something that doesn't make sense. I haven't added or removed any drives in a very long time.

     

    Cache 1 states "too many missing/misplaced devices" and wants to be formatted, even though its file system still states btrfs.

     

    Feb 9 16:18:38 Tower emhttpd: cache uuid: 0b1c567a-f0dc-4b7e-a313-8d098ed56c16
    Feb 9 16:18:38 Tower emhttpd: cache TotDevices: 2
    Feb 9 16:18:38 Tower emhttpd: cache NumDevices: 2
    Feb 9 16:18:38 Tower emhttpd: cache NumFound: 2
    Feb 9 16:18:38 Tower emhttpd: cache NumMissing: 0
    Feb 9 16:18:38 Tower emhttpd: cache NumMisplaced: 3
    Feb 9 16:18:38 Tower emhttpd: cache NumExtra: 0
    Feb 9 16:18:38 Tower emhttpd: cache LuksState: 0
    Feb 9 16:18:38 Tower emhttpd: /mnt/cache mount error: Too many missing/misplaced devices
    Feb 9 16:18:38 Tower emhttpd: shcmd (1310): umount /mnt/cache
    Feb 9 16:18:38 Tower root: umount: /mnt/cache: not mounted.
    Feb 9 16:18:38 Tower emhttpd: shcmd (1310): exit status: 32
    Feb 9 16:18:38 Tower emhttpd: shcmd (1311): rmdir /mnt/cache
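
    To make sense of the misplaced count, it helps to list which devices the kernel actually associates with each btrfs UUID (just a sketch; the output naturally varies by system):

    # show every detected btrfs filesystem, its UUID, and the member devices
    btrfs filesystem show
    # or limit it to the cache pool's UUID from the log above
    btrfs filesystem show 0b1c567a-f0dc-4b7e-a313-8d098ed56c16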

     

    So I have 2x250GB cache drives. I also have 3x4TB drives: two as parity and one as data. I needed more space and wanted to encrypt my data, so I decided to repurpose my parity 2 drive as a data drive.

     

    I stopped the array, went to New Config, chose to preserve cache assignments, and clicked Apply.

     

    I changed the default disk format from btrfs to btrfs encrypted.

     

    Then I reassigned my parity 1 and disk 1 the way they were before and added the old parity 2 drive as disk 2. Of course, my cache drives are still in their respective slots.

     

    So then I chose a keyfile and clicked to start the array. Disk 2 didn't have a file system and required a format, so I went to the bottom of the Main page and saw the option to format disk 2. No other disk was listed there, so I formatted it.

     

    All was good, so I went to disable the auto-start of all my Docker containers (the docker image file and appdata are all on the cache drive). Then I stopped and restarted the array, which is where I got my cache drive issue.

     

    I thought that if one of the cache drives wasn't working I would still be able to see the data, because the drives are mirrored, but I don't see the data.

     

    Also, disk 2 was formatted, but now it has an unlocked yellow icon and the tooltip says "Device to be formatted", even though the file system stated on the drive is btrfs.

     

    I really need to fix my cache drive issue more than anything else. I didn't even touch the cache drives, and they were working after using the New Config function and formatting disk 2; I know this because my dockers were all working. I mounted my cache drive in read-only mode and my data is there, but I still need to get it back into the array, hopefully without moving data around.
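
    For reference, the read-only mount I used was along these lines (the device and mount point are examples from my system; my exact invocation may have differed slightly):

    # mount one member of the btrfs cache pool read-only, allowing a degraded pool
    mkdir -p /mnt/cachero
    mount -o ro,degraded /dev/sdb1 /mnt/cachero
    # inspect the data, then unmount again
    umount /mnt/cachero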

     

    PS: I have the license for 12 drives, so I'm in the clear there, although I have plugged in many drives over the years, if that makes a difference.

     

    Thanks a bunch guys!

  6. 4 hours ago, subivoodoo said:

    For all with AMD Navi and Error 127... I've compiled my first linux kernel with the updated NAVI patch (from 27-11-2019) for unraid 6.8.0 kernel version 4.19.88. I do not need other patches like pci-reset-quirk or HW support of the 5.x kernels.

     

    It works for me (B450 board with Ryzen 3600 + Sapphire 5700 XT Pulse) except the Navi audio. Sometimes I get the following error (after "Force Stop") and still need to reboot the rig:

    
    qemu-system-x86_64: vfio: Cannot reset device 0000:09:00.1, no available reset mechanism.

    But within Win10 every reboot works like a charm... without the Navi patch I always get the D3 issue on every reboot.

     

    Use at your own risk!!! My compile is based on the Unraid-DVB Kernel build script but without Highpoint/Rocketraid drivers (got errors there).

    boot-4.19.88_navi_patch_20191214.zip 16.34 MB · 2 downloads

     

    Thank you for applying this patch. I was getting D3 on reboot of a Windows VM on a reference Asus 5700 XT but now it works!
    It appears to work even on Force Stop but I didn't do enough testing to confirm this.

     

    I also get the error for the audio. I think it's because this patch is focused on the GPU part.

    qemu-system-x86_64: vfio: Cannot reset device 0000:25:00.1, no available reset mechanism.
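
    One thing I looked at to understand the audio function's error: a PCI device only exposes a reset node in sysfs when the kernel has some reset method for it (the address below is the audio function from my error message):

    # if this file exists, the kernel has a reset mechanism for the device;
    # for my Navi audio function it is missing, which matches the qemu error
    ls -l /sys/bus/pci/devices/0000:25:00.1/reset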


    Am I right to assume that future Unraid updates will eventually overwrite the changes?
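
    My current understanding, which may be wrong, is that the patched kernel simply replaces the bz* boot files on the flash drive, so any OS upgrade that rewrites those files would undo the patch. Roughly (file names assumed from a standard Unraid /boot layout):

    # back up the stock kernel files on the flash drive before replacing them
    cp /boot/bzimage /boot/bzimage.stock
    cp /boot/bzmodules /boot/bzmodules.stock
    # copy in the patched files from the zip, then reboot to use them
    cp /path/to/patched/bzimage /path/to/patched/bzmodules /boot/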

    My system specs are:
    MSI x570 Edge Wifi
    3900x
    Reference Asus 5700 XT

  7. Hi everyone!

     

    I'm new to the forum.

     

    I have three Unraid servers and remote access has become a primary goal for me recently. I can't for the life of me find the commands that I can run via SSH to start and stop the array.

     

    For example, I upgraded my server to 6.2 remotely and rebooted. Now I'm stuck with only SSH access, and in order to reach the web GUI I need to start Docker, which needs the array to be up. One of my containers runs a proxy server that lets me access the web GUI. I'm guessing that on this server I forgot to enable "Auto Start Array".
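
    From what I can tell, the auto-start flag lives in disk.cfg on the flash drive (the key name here is my assumption, so please verify before editing), which would at least let me fix the auto-start part over SSH and reboot:

    # check the current auto-start setting (key name assumed)
    grep -i startarray /boot/config/disk.cfg
    # keep a backup, then enable auto-start for the next boot
    cp /boot/config/disk.cfg /boot/config/disk.cfg.bak
    sed -i 's/^startArray="no"/startArray="yes"/' /boot/config/disk.cfg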

     

    Thank you for your help!
