
Configuring Test Server to Emulate Production Server (unRAID 6.0+)


stchas

Recommended Posts

I'm trying to configure a single-drive test server to emulate my multi-drive production server for testing the new 64-bit unRAID and plugins. The idea is that I can copy the install and data directories for my applications to the same logical locations on the test server, without having to reconfigure each application's config files to point at different drives and folders. But when I try to start the plugins from the webGui, I get the message: "WARNING: Your data is not persistent and WILL NOT survive a reboot. Please locate data on persistent storage, eg. cache/array disk." Any pointers on how I might go about this correctly?

 

Here's how I set up the folder structure on my single-drive test server disk1:

mkdir -m 777 -p /mnt/disk1/cache/custom
mkdir -m 777 -p /mnt/disk1/disk7/HDVideo

 

Here's what I put in the /boot/config/go script to mount the folders to emulate my production server:

logger "Testbed60: mounting /mnt/disk1/cache to /mnt/cache"
mkdir -m 777 -p /mnt/cache
mount --bind /mnt/disk1/cache /mnt/cache
logger "Testbed60: mounting /mnt/disk1/disk7 to /mnt/disk7"
mkdir -m 777 -p /mnt/disk7
mount --bind /mnt/disk1/disk7 /mnt/disk7
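
To confirm the bind mounts actually took after boot, you can check /proc/mounts. Here's a small helper of my own (not part of the go script; `mountpoint -q` from util-linux does the same job if you prefer it):

```shell
# is_mounted: succeed if the given path is listed as a mount point in
# /proc/mounts (equivalent to `mountpoint -q`)
is_mounted() {
  awk -v p="$1" '$2 == p { found = 1 } END { exit !found }' /proc/mounts
}

# after the go script has run, both bind mounts should show up:
is_mounted /mnt/cache && echo "/mnt/cache is bind-mounted"
is_mounted /mnt/disk7 && echo "/mnt/disk7 is bind-mounted"
```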

 

Here's what I put in the file I created at /etc/rc.d/rc.local_shutdown to unmount the connections for shutdown:

umount -l /mnt/cache
umount -l /mnt/disk7

 

I copied all my install/data folders (e.g., SABnzbd, Couchpotato_V2, SickBeard) from /mnt/cache/custom on my production server to the "same" location on the test server (physically at /mnt/disk1/cache/custom, but appearing as /mnt/cache/custom in Windows Explorer). When SAB starts, its webGui goes to the configuration wizard. Can't get Couch to run. Sick looks like it's running, but its configuration settings are reset to the defaults.

 

All this suggests that the apps are not running off the physical disk but are running off the ramFS. I'm wondering whether trying to mount the "simulated" drives in the /boot/config/go script is too soon in the boot sequence to make reliable connections. 

 

Thoughts?


go runs after the plugins are installed. The sequence on bootup is:

1. Use installpkg to install all slackware packages which exist in /boot/extra.

2. Use installplg to install all plugins which exist in /boot/plugins.

3. Use installplg to install all plugins which exist in /boot/config/plugins.

4. Invoke the /boot/go script.

as seen in README.md

 

This is the way it has worked since plugins were invented during the v5 betas.


Thanks for the clarification, trurl.

 

After considerably more digging, I recognized I needed to use the emhttp_event facilities to control mounting and unmounting folders to emulate my multi-drive production server on a single-drive test server.  Obviously, this approach is a little more challenging than just plunking drive mount commands into the go script.

 

Take a look at /usr/local/sbin/emhttp_event for a description of the events available and how to hook into them. I created two (2) event handlers, one to mount the emulated disks and the other to unmount them. For purposes of the emhttp_event facility I named this application "diskemu", and placed the event handlers in the /usr/local/emhttp/plugins/diskemu/event folder [more details later on how to do this].
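
For anyone curious how the dispatch works: as I understand it, emhttp_event simply runs every executable named after the event under each plugin's event/ folder, passing the event name as an argument. A rough sketch of that idea, demonstrated against a throwaway directory rather than the real /usr/local/emhttp tree (all names here are my own illustration):

```shell
#!/bin/bash
# build a throwaway plugins tree with one diskemu handler in it
ROOT=$(mktemp -d)
mkdir -p "$ROOT/plugins/diskemu/event"
cat > "$ROOT/plugins/diskemu/event/disks_mounted" <<'EOF'
#!/bin/bash
echo "diskemu handled $1"
EOF
chmod +x "$ROOT/plugins/diskemu/event/disks_mounted"

# dispatch an event the way emhttp_event does: run each plugin's
# matching handler, passing the event name along
event=disks_mounted
out=$(
  for handler in "$ROOT"/plugins/*/event/"$event"
  do
    [ -x "$handler" ] && "$handler" "$event"
  done
)
echo "$out"
rm -rf "$ROOT"
```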

 

Handler: /usr/local/emhttp/plugins/diskemu/event/disks_mounted

#!/bin/bash
#
# wait up to 10 seconds for /mnt/disk1 to come online
for i in 0 1 2 3 4 5 6 7 8 9
do
  if [ -d /mnt/disk1 ]
  then
    break
  fi
  sleep 1
done
#
# notify if mount failure
if [ ! -d /mnt/disk1 ]
then
  logger "diskemu: failed to mount /mnt/disk1 within 10 seconds"
else
  #
  # mount emulated disks
  logger "diskemu: mounting /mnt/disk1/cache to /mnt/cache"
  mkdir -m 777 -p /mnt/cache
  mount --bind /mnt/disk1/cache /mnt/cache
  logger "diskemu: mounting /mnt/disk1/disk7 to /mnt/disk7"
  mkdir -m 777 -p /mnt/disk7
  mount --bind /mnt/disk1/disk7 /mnt/disk7
fi

I keep this script in /boot/custom/bzroot/diskemu/disks_mounted.

 

Handler: /usr/local/emhttp/plugins/diskemu/event/unmounting_disks

#!/bin/bash
#
# unmount the emulated disks, retrying for up to 10 seconds
# (testing [ -d /mnt/cache ] is no use here: the mount point directory
# still exists after umount, so use mountpoint to check whether anything
# is actually mounted there)
for i in 0 1 2 3 4 5 6 7 8 9
do
  if mountpoint -q /mnt/cache
  then
    umount -l /mnt/cache
    umount -l /mnt/disk7
    sleep 1
  else
    break
  fi
done
#
# notify if unmount failure
if mountpoint -q /mnt/cache
then
  logger "diskemu: failed to unmount /mnt/cache within 10 seconds"
else
  logger "diskemu: emulated disks unmounted"
fi

I keep this script in /boot/custom/bzroot/diskemu/unmounting_disks.

 

After reviewing bubbaQ's excellent Permanently Adding Packages to unRAID 4.1 tutorial, making allowances for installing the Slackware 14.1 64-bit version of cpio (downloaded to /boot/packages), and spotting Spectrum's LZMA bzroot Compression post for the current compression/decompression commands, I put together a script that would allow me to build a customized bzroot image that contains my diskemu event handlers. The modified bzroot image is called /boot/bztest:

#!/bin/bash
#
# initialize names
BZTEMPDISK=/mnt/disk1  # disk used for temporary work folder
BZTEMPFOLDER=bz-mod  # temporary work folder name
BZSOURCE=/boot/bzroot  # name of source bzroot image
BZDEST=/boot/bztest   # name of modified bzroot image
BZSCRIPTS=/boot/custom/bzroot  # folder containing custom scripts to load in modified bzroot
CPIOPKG=/boot/packages/cpio-2.11-x86_64-2.txz  # source for Slackware 14.1 64-bit version of cpio

# verify work drive online (i.e., array running)
if [ ! -d $BZTEMPDISK ]
then
  echo "Work drive at $BZTEMPDISK offline...exiting"
  exit 1
fi

# verify temporary work folder does not exist
if [ -d $BZTEMPDISK/$BZTEMPFOLDER ]
then
  echo "Temporary work folder at $BZTEMPDISK/$BZTEMPFOLDER already exists...exiting"
  exit 1
fi

# make a temporary work folder to decompress bzroot into
echo "Creating temporary work folder at $BZTEMPDISK/$BZTEMPFOLDER..."
mkdir $BZTEMPDISK/$BZTEMPFOLDER

# install Slackware 14.1 64-bit version of cpio
echo "Installing $CPIOPKG for support..."
installpkg $CPIOPKG

# decompress source bzroot image
cd $BZTEMPDISK/$BZTEMPFOLDER
echo "Decompressing $BZSOURCE..."
xzcat $BZSOURCE | cpio -m -i -d -H newc --no-absolute-filenames

# add cpio to our modified bzroot image
echo "Adding package(s) to $BZDEST..."
ROOT=$BZTEMPDISK/$BZTEMPFOLDER installpkg $CPIOPKG

# add any other packages you want to make permanent using the format ...
#   ROOT=$BZTEMPDISK/$BZTEMPFOLDER installpkg /path/to/package

# copy custom scripts from $BZSCRIPTS to modified bzroot image
echo "Adding custom script(s) to $BZDEST..."
mkdir -m 755 -p $BZTEMPDISK/$BZTEMPFOLDER/usr/local/emhttp/plugins/diskemu/event
cd $BZTEMPDISK/$BZTEMPFOLDER/usr/local/emhttp/plugins/diskemu/event
cp $BZSCRIPTS/diskemu/disks_mounted .
cp $BZSCRIPTS/diskemu/unmounting_disks .

# compress modified bzroot
cd $BZTEMPDISK/$BZTEMPFOLDER
echo "Compressing $BZDEST..."
find . | cpio -o -H newc | xz --format=lzma > $BZDEST
sync

# clean up the temporary folder
echo "Removing temporary work folder..."
cd $BZTEMPDISK
rm -Rf $BZTEMPDISK/$BZTEMPFOLDER

# done
echo "Done!"

 

If you add the following block of code to your /boot/syslinux/syslinux.cfg file, you'll have the option of booting unRAID from your new /boot/bztest version of bzroot:

label unRAID OS TEST
  kernel /bzimage
  append initrd=/bztest

 

Here's a copy of my complete /boot/syslinux/syslinux.cfg file so you can see how it all fits together:

default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label unRAID OS TEST
  kernel /bzimage
  append initrd=/bztest
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest
label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen dom0_mem=2097152 --- /bzimage --- /bzroot
label Xen/unRAID OS Safe Mode (no plugins)
  kernel /syslinux/mboot.c32
  append /xen dom0_mem=2097152 --- /bzimage --- /bzroot unraidsafemode

 

Enjoy!

Kevin

 


Archived

This topic is now archived and is closed to further replies.
