unRAID Server Release 6.0-beta3-x86_64 Available


limetech


http://wiki.xen.org/wiki/Access_a_LVM-based_DomU_disk_outside_of_the_domU

 

It seems to me that block-attach gives the dom0 access to the VM disk but not the other way around (which is what we want).
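For reference, the dom0-side recipe from that wiki page boils down to something like this (a rough sketch; the volume group and logical volume names here are made up):

# Hypothetical example: expose a domU's LVM-backed disk to dom0 for inspection.
xl block-attach 0 phy:/dev/vg_guests/archvm-disk,xvdb,w
# ...mount and examine it from dom0, then detach it again:
xl block-detach 0 xvdb

That direction (dom0 reading a guest's disk) is exactly the reverse of mounting the unRAID array inside the guest.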

 

I'm still, sadly, sure it can't be done.

 

 

IF, and this is a big decision, the unRAID array were an LVM container, then we could mount that inside a VM, though I'm not sure whether multiple VMs could share access.

 

Sent from my Nexus 5 using Tapatalk

 

Link to comment

I've just updated. Everything works fine until I try to execute my scripts (sab/sickbeard etc.), and then I run into an error about not being able to execute Python. Has anyone else had this issue?

 

I've done a format and installed fresh, then copied over my config file.

Link to comment

I've just updated. Everything works fine until I try to execute my scripts (sab/sickbeard etc.), and then I run into an error about not being able to execute Python. Has anyone else had this issue?

 

I've done a format and installed fresh, then copied over my config file.

You aren't trying to run 32-bit plugins on 64-bit unRAID, are you?
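A quick way to check, assuming the file utility is present on your install (the path below is just an example; point it at whatever binary your plugin installed):

# "ELF 32-bit" here means the plugin's binaries need 32-bit libraries,
# which stock 64-bit unRAID doesn't ship.
file /usr/bin/python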
Link to comment

XEN caused me to use a cache drive (again). I'm not used to it anymore, so I do hope this is the right part of the forum for my question.

 

I'm testing two VMs from Ironicbadger: ArchVM and Debcloud, both installed on the cache drive. The WebGUI is showing me red indicators next to these two shares: "some or all files on cache". I think that is all right, and I see the following files:

 

mnt/cache/ArchVM: only arch.img
mnt/cache/Debian: debcloud.cfg & debcloud.img

mnt/user/ArchVM: arch.cfg, arc.img, test.cfg
mnt/user/Debian: debcloud.cfg, debcloud.img

mnt/user0/ArchVM: arch.cfg, test.cfg
mnt/user0/Debian: no files at all

 

Is that OK behavior? The reason I'm asking is that I lost the Debian VM yesterday after a complete shutdown, using

xl shutdown <ID>

for all VMs and Dom0. I do believe that I also ran the mover manually, but I'm not sure.

Link to comment

XEN caused me to use a cache drive (again). I'm not used to it anymore, so I do hope this is the right part of the forum for my question.

 

I'm testing two VMs from Ironicbadger: ArchVM and Debcloud, both installed on the cache drive. The WebGUI is showing me red indicators next to these two shares: "some or all files on cache". I think that is all right, and I see the following files:

Are you sure it is Red rather than Orange?    Red indicates a write error has occurred, while Orange is the correct colour for a share with files on the cache drive.

Link to comment

Well, OK, you are right, it seems to be more orange (see the comment that appears when moving the mouse cursor over the indicator).

 

However, I'm still not sure why the different views are showing different files  ???

Not sure I understand. Do you intend for these to remain on cache? If so, then make their shares cache-only. Anything at the root of a disk is automatically a share.

Is this what you currently have?

mnt/cache/ArchVM: only arch.img
mnt/cache/Debian: debcloud.cfg & debcloud.img

mnt/user/ArchVM: arch.cfg, arc.img, test.cfg
mnt/user/Debian: debcloud.cfg, debcloud.img

mnt/user0/ArchVM: arch.cfg, test.cfg
mnt/user0/Debian: no files at all

This indicates that the ArchVM share and the Debian share are not cache-only, so mover has moved some of the files to other disks.
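If you want to see exactly where each physical copy lives, something like this (run from the console) lists the share contents on the cache drive and on every array disk:

# List the physical copies of both VM shares; errors for disks that
# don't hold a copy are discarded.
ls -lR /mnt/cache/ArchVM /mnt/cache/Debian /mnt/disk*/ArchVM /mnt/disk*/Debian 2>/dev/null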

 

Link to comment

I was not aware that there is something like a "Cache Only" setting; I have now flagged both shares (ArchVM & Debian) accordingly.

 

In addition to the file summary below I have two files from the ArchVM directory on disk5 as a result of the move activity - correct? No other disk contains any files from the VM shares.

 

What is the recommendation now? Where should the files sit? Should I move the shares along with their files back to

  • /mnt/cache/ArchVM & /mnt/cache/Debian or
  • /mnt/user/ArchVM & /mnt/user/Debian or
  • /mnt/user0/ArchVM & /mnt/user0/Debian

 

I think I can simply remove the files on disk5.

 

With the XEN concept coming to unRAID, I believe some users are new to creating a cache drive, so a description of the "cache only" concept might be worthwhile to mention in Tom's first guide. But I might be the only noob  ::)

Link to comment

I was not aware that there is something like a "Cache Only" setting; I have now flagged both shares (ArchVM & Debian) accordingly.

 

In addition to the file summary below I have two files from the ArchVM directory on disk5 as a result of the move activity - correct? No other disk contains any files from the VM shares.

 

What is the recommendation now? Where should the files sit? Should I move the shares along with their files back to

  • /mnt/cache/ArchVM & /mnt/cache/Debian or
  • /mnt/user/ArchVM & /mnt/user/Debian or
  • /mnt/user0/ArchVM & /mnt/user0/Debian

 

I think I can simply remove the files on disk5.

 

With the XEN concept coming to unRAID, I believe some users are new to creating a cache drive, so a description of the "cache only" concept might be worthwhile to mention in Tom's first guide. But I might be the only noob  ::)

 

It's in my video guides for sure, and also in the name of the share in the cfg file...
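If you'd rather check it outside the webgui: the share's settings live in a cfg file on the flash drive, and (if I remember the key name right on this beta, so treat this as an assumption) a cache-only share should show something like:

# /boot/config/shares/ArchVM.cfg -- assumed path and key name, verify against your own file
shareUseCache="only"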

Link to comment

I might have overlooked it, but I can't see what files need to be in /mnt/cache, /mnt/user and /mnt/user0. I just need a clarification of the cache mechanics.....

 

 

Sent from my iPad using Tapatalk

/mnt/cache shows the files that are on the cache drive.

/mnt/user0 shows the user shares excluding any files that are still on cache.

/mnt/user shows the user shares including any files that are still on cache but are destined for the array.

The mover script essentially does an rsync from /mnt/cache to /mnt/user0, so the files that showed under /mnt/user while still on the cache end up physically in /mnt/user0 (on the array).

Any shares set to cache-only are skipped by the mover script.
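In other words, conceptually the mover does something like this for each share that is not cache-only (a rough sketch only, not the actual mover script):

# Push a cached share's files into the array view, then drop the cache copies.
rsync -avX /mnt/cache/Debian/ /mnt/user0/Debian/
rm -rf /mnt/cache/Debian/*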

 

Long story short, move the files back to cache and set those shares to cache-only.
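Assuming the stray copies really did land on disk5 as reported earlier, a one-off cleanup could look like this (double-check the paths before deleting anything):

# Hypothetical cleanup: copy the array-side ArchVM files back onto the cache drive,
# then remove the array copies so the VM lives only on cache.
rsync -avX /mnt/disk5/ArchVM/ /mnt/cache/ArchVM/
rm -rf /mnt/disk5/ArchVM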

Link to comment

I might have overlooked it, but I can't see what files need to be in /mnt/cache, /mnt/user and /mnt/user0. I just need a clarification of the cache mechanics.....

 

 

Sent from my iPad using Tapatalk

/mnt/cache shows the files that are on the cache drive.

/mnt/user0 shows the user shares excluding any files that are still on cache.

/mnt/user shows the user shares including any files that are still on cache but are destined for the array.

The mover script essentially does an rsync from /mnt/cache to /mnt/user0, so the files that showed under /mnt/user while still on the cache end up physically in /mnt/user0 (on the array).

Any shares set to cache-only are skipped by the mover script.

 

Long story short, move the files back to cache and set those shares to cache-only.

 

Alternatively, you can create a folder in /mnt/cache beginning with a period.

 

For example you can store your VMs in /mnt/cache/.VMs and the mover should ignore them.  Someone correct me if I'm mistaken.
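Roughly like this, assuming the mover really does skip dot-prefixed folders as described:

# If the mover skips dot-prefixed folders (as suggested above), the VM image stays put on cache.
mkdir -p /mnt/cache/.VMs
mv /mnt/cache/ArchVM/arch.img /mnt/cache/.VMs/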

Link to comment

I've been using v6 beta 3 for about a week on my test server and it seems rock solid at this point. I don't really use plugins except for health monitoring and haven't had a single issue.

 

I'm using the free version at this point and will be purchasing Pro to test AD integration. Hopefully that will go as well as everything else.

 

I'm glad I finally settled on using UnRAID. Fantastic software.

Link to comment

Rather a minor issue, but has anyone been able to get FTP to work?

 

Tom,

 

It seems that the latest version of vsftpd sets the 'listen' parameter to 'YES'.

 

If I put this line into the vsftpd.conf file:

listen=NO

 

my ftp is working again.

 

You should probably add this to the default vsftpd.conf file you distribute.

 

EDIT: Just found it and some other changes in the provided script.  I looked it over and missed it the first time.
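For anyone else hitting this, the kind of check/change I mean (the config path is assumed; adjust to wherever your vsftpd.conf actually lives):

# See how 'listen' is currently set, then switch standalone mode off.
grep -i '^listen' /etc/vsftpd.conf
sed -i 's/^listen=YES/listen=NO/' /etc/vsftpd.conf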

Link to comment

Tom,

 

Stopping the array and shutting down or rebooting from the webgui is very messy with VMs running.  The array won't stop if the VMs are not shut down first, and trying to then stop the VMs makes things worse.  I have my VM on my cache drive and not on the array.

 

I would suggest that, as part of the "Stop Array" process, you shut down all VMs.

 

You should also add it to the rc.local_shutdown script so anyone executing "shutdown -r now" will not experience the hang-up that happens when the VMs are not stopped first.  If the webgui is not responsive, "shutdown -r now" or the power button is the only recourse.

 

EDIT: I have some Samba shares mounted in the VM.  This is probably why the array won't stop.
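Something along these lines, run before the array is stopped, would avoid most of the mess (just a sketch, and it assumes every guest responds to a clean shutdown request):

# Ask every running domU (everything except Domain-0) to shut down cleanly.
for dom in $(xl list | awk 'NR>1 && $1!="Domain-0" {print $1}'); do
    xl shutdown "$dom"
done
sleep 60   # crude wait; a proper script would poll xl list until only Domain-0 remains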

Link to comment

Tom,

 

Stopping the array and shutting down or rebooting from the webgui is very messy with VMs running.  The array won't stop if the VMs are not shut down first, and trying to then stop the VMs makes things worse.  I have my VM on my cache drive and not on the array.

 

I would suggest that, as part of the "Stop Array" process, you shut down all VMs.

 

You should also add it to the rc.local_shutdown script so anyone executing "shutdown -r now" will not experience the hang-up that happens when the VMs are not stopped first.  If the webgui is not responsive, "shutdown -r now" or the power button is the only recourse.

 

Not so. http://lime-technology.com/wiki/index.php/Console#To_cleanly_Stop_the_array_from_the_command_line

Link to comment

Tom,

 

Stopping the array and shutting down or rebooting from the webgui is very messy with VMs running.  The array won't stop if the VMs are not shut down first, and trying to then stop the VMs makes things worse.  I have my VM on my cache drive and not on the array.

 

I would suggest that, as part of the "Stop Array" process, you shut down all VMs.

 

You should also add it to the rc.local_shutdown script so anyone executing "shutdown -r now" will not experience the hang-up that happens when the VMs are not stopped first.  If the webgui is not responsive, "shutdown -r now" or the power button is the only recourse.

 

Not so. http://lime-technology.com/wiki/index.php/Console#To_cleanly_Stop_the_array_from_the_command_line

 

That'll also work, though a bit tedious.

Link to comment

Tom,

 

Stopping the array and shutting down or rebooting from the webgui is very messy with VMs running.  The array won't stop if the VMs are not shut down first, and trying to then stop the VMs makes things worse.  I have my VM on my cache drive and not on the array.

 

I would suggest that, as part of the "Stop Array" process, you shut down all VMs.

 

You should also add it to the rc.local_shutdown script so anyone executing "shutdown -r now" will not experience the hang-up that happens when the VMs are not stopped first.  If the webgui is not responsive, "shutdown -r now" or the power button is the only recourse.

 

EDIT: I have some Samba shares mounted in the VM.  This is probably why the array won't stop.

 

Yes, this is a big issue that needs to be solved in B4 or B5; maybe it's time to start debugging B4 right now.

 

But I would like Tom to adopt the powerdown add-on into the build. Why do we still need to use it? It would be better for Tom to add a proper shutdown function to unRAID.

 

 

//Peter

Link to comment

Maybe a simple test when hitting Stop, something like this...

 

Part of the stop-array process would be to run the xl list command and parse its output.

If xl list shows anything other than dom0, the stop process fails and returns an error message via the webgui: 'please stop your VMs'.

 

Not all VMs will respond to xl shutdown gracefully, so this is the best option I can think of.
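Concretely, the check would be something like this (a sketch only; wiring the message into the webgui is Tom's side of things):

# Refuse to stop the array if anything other than Domain-0 is still listed.
if xl list | awk 'NR>1 && $1!="Domain-0"' | grep -q .; then
    echo "Please stop your VMs before stopping the array."
    exit 1
fi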

 

Sent from my Nexus 5 using Tapatalk

 

 

Link to comment
