Having trouble mounting SSD as cache


pacload


Hello, I would appreciate it if someone could offer me some input to resolve this little issue.

 

Basically, I was trying to replace my existing cache drive with a larger one. The new drive was previously used as the main drive of a Windows server box running Hyper-V VMs, which I'm guessing is causing the issue.

 

Here is what I did in order:

 

-Stopped Array

 

-Added 3TB drive and SSD

 

-Precleared HDD

 

-Started Array and started to format both drives

 

-HDD completed successfully; SSD showing as unmounted

 

And here is some extra info:

 

On the SSD's page it shows: "File system status: Unmountable - No file system (32)"

 

 

Log:

Sep 11 08:49:01 NAS emhttp: shcmd (484): set -o pipefail ; mkfs.btrfs -f -K -dsingle -msingle /dev/sdar1 |& logger

Sep 11 08:49:01 NAS logger: ERROR: /dev/sdar1 is mounted

Sep 11 08:49:01 NAS emhttp: shcmd: shcmd (484): exit status: 1

Sep 11 08:49:01 NAS emhttp: shcmd (485): mkdir -p /mnt/cache

Sep 11 08:49:01 NAS emhttp: shcmd (486): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/sdar1 /mnt/cache |& logger

Sep 11 08:49:01 NAS logger: mount: /dev/sdar1 already mounted or /mnt/cache busy

Sep 11 08:49:01 NAS logger: mount: according to mtab, /dev/sdar1 is mounted on /mnt/disks/Recovery

Sep 11 08:49:01 NAS emhttp: shcmd: shcmd (486): exit status: 32

Sep 11 08:49:01 NAS emhttp: mount error: No file system (32)

Sep 11 08:49:01 NAS emhttp: shcmd (487): rmdir /mnt/cache

Sep 11 08:49:01 NAS emhttp: shcmd (488): :>/etc/samba/smb-shares.conf

Sep 11 08:49:01 NAS emhttp: shcmd (489): cp /etc/netatalk/afp.conf- /etc/netatalk/afp.conf

Sep 11 08:49:01 NAS avahi-daemon[2411]: Files changed, reloading.

Sep 11 08:49:01 NAS avahi-daemon[2411]: Files changed, reloading.

Sep 11 08:49:01 NAS avahi-daemon[2411]: Files changed, reloading.

Sep 11 08:49:01 NAS emhttp: Restart SMB...

Sep 11 08:49:01 NAS emhttp: shcmd (490): killall -HUP smbd

Sep 11 08:49:01 NAS emhttp: shcmd (491): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service

Sep 11 08:49:01 NAS avahi-daemon[2411]: Files changed, reloading.

Sep 11 08:49:01 NAS avahi-daemon[2411]: Service group file /services/smb.service changed, reloading.

Sep 11 08:49:01 NAS emhttp: shcmd (492): pidof rpc.mountd &> /dev/null

Sep 11 08:49:01 NAS emhttp: shcmd (493): /etc/rc.d/rc.atalk status

Sep 11 08:49:01 NAS emhttp: Restart AFP...

Sep 11 08:49:01 NAS emhttp: shcmd (494): /etc/rc.d/rc.atalk reload |& logger

Sep 11 08:49:01 NAS emhttp: shcmd (495): cp /etc/avahi/services/afp.service- /etc/avahi/services/afp.service

Sep 11 08:49:01 NAS avahi-daemon[2411]: Files changed, reloading.

Sep 11 08:49:01 NAS avahi-daemon[2411]: Service group file /services/afp.service changed, reloading.

Sep 11 08:49:02 NAS avahi-daemon[2411]: Service "NAS" (/services/smb.service) successfully established.

Sep 11 08:49:02 NAS avahi-daemon[2411]: Service "NAS-AFP" (/services/afp.service) successfully established.

Sep 11 08:49:06 NAS emhttp: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
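The telling lines in the log are "ERROR: /dev/sdar1 is mounted" and "according to mtab, /dev/sdar1 is mounted on /mnt/disks/Recovery" — mkfs.btrfs refuses to format a device that is still mounted. As a minimal sketch (device name taken from the log above; the mount point /mnt/disks/Recovery is also from the log — adjust both for your system), this is how you could confirm and clear the mount from a console before retrying:

```shell
# Device name taken from the log above; substitute your own.
DEV=/dev/sdar1

# List any mount entries backed by this device (no match = not mounted).
grep "^$DEV " /proc/mounts || echo "$DEV is not mounted"

# If it does show up (here it had been auto-mounted at /mnt/disks/Recovery),
# unmount it before unRAID tries to format it:
# umount /mnt/disks/Recovery
```

Once nothing is mounted from the device, the format step in the log should no longer exit with status 1.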

 

Thank You

 


thanks itimpi,

 

Now that I think of it, I might have accidentally mounted it with the Unassigned Devices plugin during the preclear. So I stopped the array, made sure everything was unmounted, re-added the drive, and now it does work! However, I only see 315MB of the 250GB when I add it to the cache. I do remember seeing the disk split into 4 parts when the array was stopped. What should I do to get rid of all these parts and reclaim the rest of the space?

 

Thanks

 

K3ia3JM.png


The screenshot does not show it mounted as cache. Are you sure you have added it as cache?


I think your problem is that you have multiple partitions on the SSD, and on an already-partitioned drive unRAID only uses the first one! You need to delete the existing partitions and try again.

 

One way to 'zap' the current partitions is to stop the array, unassign the cache drive, and from a console/telnet session run a command of the form:

dd if=/dev/zero of=/dev/sd? count=100

where sd? is the device that will become the cache drive. Make sure it is the right device, as this will effectively destroy the drive's existing partitioning. Once the command completes you can reassign the drive, and since it is now effectively unpartitioned, unRAID will create a partition covering the whole disk.
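The 'zap' above can be rehearsed safely on an image file first. This sketch wipes a freshly made partition table the same way: dd's default block size is 512 bytes, so count=100 zeroes the first 100 sectors, taking the MBR with it (conv=notrunc is only needed here because the target is a regular file rather than a block device; the file name wipe.img is invented for the demo):

```shell
# Scratch image standing in for the SSD -- never run the dd line against
# a real device unless you are certain of its name!
truncate -s 64M wipe.img
sfdisk wipe.img <<'EOF'
label: dos
size=10M
EOF
sfdisk -d wipe.img            # dump succeeds: a partition table exists

# Zero the first 100 sectors (100 * 512 bytes), destroying the MBR:
dd if=/dev/zero of=wipe.img bs=512 count=100 conv=notrunc

sfdisk -d wipe.img || echo "partition table gone"
```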



Or pull the SSD and clear the partitions on another machine, so you don't accidentally break your unRAID shares. Then put it back in and re-set it as the cache.

 

Plenty of bootable images etc. out there; heck, even Windows Disk Management will easily let you wipe the partitions out.

 

Just another idea: be EXTRA careful and double-check you get the right drive name in the console. The more drives you have, the more you need to double-check it.


Archived

This topic is now archived and is closed to further replies.
