randall526 Posted May 29, 2016

I had this working before and don't know what happened. I decided to do a little experimenting with the raw-device-passthrough disk I use as a cache drive, on my backup unRAID virtual machine first. The cache was directly attached to the VM at one point; I moved it off a SAS card attached to the VM and onto the motherboard to free up a SAS I/O card connection for a data drive. I RDM-mapped it and all was well. I lose temperature sensors and such this way, but for an SSD cache disk I was good with that.

Then I thought about making that cache disk a VMware datastore disk instead and passing a traditional vmdk disk as my caching disk, just to see if it can be done. There was a lot of wasted space on the disk the way I was using it, which made this option attractive. I rsynced my RDM cache disk into the RAID array and removed it. When assigning a typical vmdk, it failed to show up by disk ID, and when assigning a device as a cache disk, the unRAID GUI just showed sda and the disk size with no other identifying characteristics. Assigning the cache disk wouldn't stick, and the web page kept showing the cache drive as unassigned right after I assigned it. Hmmm, not desirable I say, so I backed it all out. I removed the SSD drive from VMware's storage pool and set up an RDM disk again, and the same thing happened, even though it was working before.

I manually formatted, mounted, and restored my cache disk data, which got my Dockers up; however, unRAID does not see it as a cache disk via the GUI. See below: /dev/sda is there and mounted manually, but with no identifiers. Any ideas? Is it possible to manually populate this data with a /proc filesystem hack?
root@unRAIDbackup:/dev# ls -al /dev/disk/by-id | grep sda
root@unRAIDbackup:/dev# ls -al /dev/disk/by-uuid | grep sda
root@unRAIDbackup:/dev# cat /etc/mtab | grep sda
/dev/sda1 /mnt/cache btrfs rw 0 0
root@unRAIDbackup:/dev# btrfs fi show /mnt/cache
Label: none  uuid: c3a77508-ff82-4fc5-a147-e0eb90812706
	Total devices 1 FS bytes used 61.95GiB
	devid    1 size 238.47GiB used 88.04GiB path /dev/sda1

btrfs-progs v4.1.2
root@unRAIDbackup:/dev# fdisk -l /dev/sda

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sda: 256.1 GB, 256060514304 bytes
168 heads, 63 sectors/track, 47252 cylinders, total 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   500118191   250058072   83  Linux

root@unRAIDbackup:/dev# hdparm -I /dev/sda

/dev/sda:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

ATA device, with non-removable media
	Serial Number:      <@???
Standards:
	Likely used: 1
Configuration:
	Logical		max	current
	cylinders	0	0
	heads		0	0
	sectors/track	510	0
	--
	Logical/Physical Sector size:           512 bytes
	device size with M = 1024*1024:           0 MBytes
	device size with M = 1000*1000:           0 MBytes
	cache/buffer size  = unknown
Capabilities:
	IORDY not likely
	Cannot perform double-word IO
	R/W multiple sector transfer: not supported
	DMA: not supported
	PIO: pio0
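For anyone else chasing missing IDs: the symptom above (empty by-id and by-uuid listings for sda) can be double-checked with a small helper that walks the symlink directory and reports which entries resolve to a given device node. This is a hypothetical diagnostic sketch, not anything unRAID ships; the function name and the optional second argument (an alternate directory, handy for testing) are my own additions.

```shell
# List every entry in a symlink directory (default /dev/disk/by-id)
# that resolves to the given device node. Prints nothing when udev
# has created no persistent names for the device, as in this thread.
find_ids() {
  local dev byid link
  dev="$(readlink -f "$1")"          # canonical path of the device node
  byid="${2:-/dev/disk/by-id}"       # directory of candidate symlinks
  for link in "$byid"/*; do
    [ -L "$link" ] || continue       # skip anything that isn't a symlink
    [ "$(readlink -f "$link")" = "$dev" ] && echo "$link"
  done
}
```

On this box, `find_ids /dev/sda` would print nothing, matching the empty `ls -al /dev/disk/by-id | grep sda` above.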
randall526 Posted June 2, 2016 (Author)

So I thought about what Linux was doing here, and the device IDs are nothing more than symbolic links to the /dev/sda file. The sda file is getting picked up every time, since the cache drive is manually mountable that way. I looked at how ESXi was listing the disk at its Linux device level, looked at the naming format in unRAID, and created my own symlinks via the go file:

ln -s /dev/sda1 /dev/disk/by-id/ata-LITEONIT_LCS2D256M6S_2.5_7mm_256GB_TW0XFJWX5508534T1145-part1
ln -s /dev/sda /dev/disk/by-id/ata-LITEONIT_LCS2D256M6S_2.5_7mm_256GB_TW0XFJWX5508534T1145

After doing this, unRAID picked up my manually populated disk IDs and I was able to assign the cache drive. The drive is now being used as a cache and is sharing the cache share properly without having to be mounted manually. Dirty workaround, but it works. I may try my original idea of assigning vmdk disks sourced from different VMware-datastore SSD drives and see if that's feasible; I was going to create a cache pool this way. I don't know if anyone else out there is set up this way via ESXi, but I shall see how it goes.
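If you go the go-file route, a slightly more defensive version of those two ln lines might look like the sketch below. The make_cache_links name and the optional directory argument are invented for illustration, and the device/ID strings are specific to this box; the substantive change is ln -sf, which makes the links idempotent so a reboot (or re-running the go file) doesn't fail on "File exists".

```shell
# Recreate the manual by-id links for the RDM cache disk.
# The optional argument overrides the target directory (useful for testing);
# by default it writes into /dev/disk/by-id like the original workaround.
make_cache_links() {
  local byid="${1:-/dev/disk/by-id}"
  local id="ata-LITEONIT_LCS2D256M6S_2.5_7mm_256GB_TW0XFJWX5508534T1145"
  mkdir -p "$byid"                       # in case udev never populated it
  ln -sf /dev/sda  "$byid/$id"           # -f: replace any stale link
  ln -sf /dev/sda1 "$byid/$id-part1"
}
```

In the go file this would simply be called as `make_cache_links` with no arguments.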
JonathanM Posted June 2, 2016

Quote: So I thought about what Linux was doing here, and the device IDs are nothing more than symbolic links to the /dev/sda file. The sda file is getting picked up every time, since the cache drive is manually mountable that way.

/dev/sd? assignments are not static and can change if there are hardware changes, or even if a drive happens to be slow to respond. The by-id name is static.
randall526 Posted June 2, 2016 (Author)

Hmmm, I rebooted a few times to see how /dev/sda was getting assigned, and the RDM-assigned cache drive comes up as /dev/sda every time. Since it's being passed through as a raw device mapping, it seems to be seen first in the boot sequence, before all the drives attached to the passed-through I/O card, since the I/O card's driver loads a little later in the boot sequence. I'll have to keep this in mind.
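Given JonathanM's warning that /dev/sda could one day be a different disk, the go-file symlink could be guarded by checking the filesystem UUID first, using the btrfs UUID from the earlier output as the stable anchor. This is only a sketch: check_and_link is a hypothetical helper, while blkid is the standard util-linux tool for reading a filesystem's UUID.

```shell
# UUID reported by `btrfs fi show /mnt/cache` earlier in the thread.
EXPECTED_UUID="c3a77508-ff82-4fc5-a147-e0eb90812706"

# Create the symlink only if the device really carries the cache
# filesystem; otherwise refuse, so a shuffled /dev/sd? assignment
# can't silently link the wrong disk as the cache.
check_and_link() {
  local dev="$1" link="$2" uuid
  uuid="$(blkid -s UUID -o value "$dev" 2>/dev/null)"
  if [ "$uuid" = "$EXPECTED_UUID" ]; then
    ln -sf "$dev" "$link"
    return 0
  fi
  echo "refusing to link: $dev has UUID '$uuid'" >&2
  return 1
}
```

In the go file this would replace the bare ln line, e.g. `check_and_link /dev/sda1 /dev/disk/by-id/<id>-part1`, with the actual ID path filled in.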