unRAID Server Release 6.0-beta2-x86_64 Available


limetech


I have run into an issue with AFP running Beta 2. I'm not sure if it's specific to Beta 2, as I hit the same issue on my main server on 5.0.4 last night as well. During a huge ~300 GB transfer, I get about a third of the way through and the transfer fails with this error:

I don't have any issues transferring with SMB, but it typically takes twice as long.

 

Jan 25 15:13:49 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:49 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:49 test kernel: REISERFS error (device md1): reiserfs-2025 reiserfs_cache_bitmap_metadata: bitmap block 17 is corrupted: first bit must be 1
Jan 25 15:13:49 test kernel: REISERFS (device md1): Remounting filesystem read-only
Jan 25 15:13:49 test kernel: REISERFS warning (device md1): clm-6006 reiserfs_dirty_inode: writing inode 47642 on readonly FS
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:50 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:50 test afpd[2665]: afp_write(R.I.P.D., 3D.[2013] (PG-13).mkv): ad_write: Cannot allocate memory
Jan 25 15:13:51 test shfs/user: shfs_utimens: utimes: /mnt/disk1/Testing/Movies/R.I.P.D., 3D.[2013] (PG-13).mkv (30) Read-only file system
Jan 25 15:13:51 test shfs/user: shfs_write: write: (12) Cannot allocate memory
Jan 25 15:13:51 test shfs/user: shfs_utimens: utimes: /mnt/disk1/Testing (30) Read-only file system
Jan 25 15:13:51 test shfs/user: shfs_open: open: /mnt/disk1/Testing/Movies/.AppleDouble/R.I.P.D., 3D.[2013] (PG-13).mkv (30) Read-only file system
Jan 25 15:13:51 test kernel: REISERFS warning (device md1): clm-6006 reiserfs_dirty_inode: writing inode 47642 on readonly FS
Jan 25 15:13:51 test shfs/user: shfs_open: open: /mnt/disk1/Testing/Movies/.AppleDouble/Life of Pi, 3D [2012] (PG).mkv (30) Read-only file system
Jan 25 15:13:51 test shfs/user: shfs_open: open: /mnt/disk1/Testing/Movies/.AppleDouble/Escape From Planet Earth, 3D [2013] (PG).mkv (30) Read-o

syslog.zip

Link to comment

Installed on my production box, and I haven't seen any issues thus far. I don't run much in the way of plugins on my server (none at the moment, but I did have unmenu & the directory cache plugins running before). I have 6.0b1 up and running on a full Slackware 14.1 x86_64 distro; guess I need to upgrade it to reflect the changes in 6.0b2 now.

 

Link to comment

(quoting the AFP / ReiserFS error report above)

Two independent subsystems are simultaneously reporting trouble: AFP reporting memory problems (out of memory, a corrupted memory allocator, or a bug), and ReiserFS reporting file system corruption.  I can think of 3 possibilities: (1) AFP has crashed and is somehow corrupting ReiserFS structures, (2) ReiserFS has crashed and is somehow corrupting AFP's memory, or (3) some other third subsystem is corrupting both of them (and maybe others too).  All of these seem extremely remote, but one of them can be easily checked: the ReiserFS file system.  See the Check Disk File systems wiki page and check the drive.  If it turns up any problems, follow its instructions to fix them, then test again.
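For reference, checking the file system per that wiki page looks roughly like this. This is only a sketch: /dev/md1 is an assumption (use the md device for the disk that reported errors), the array must be started in maintenance mode so the disk isn't mounted, and the commands are printed for review rather than executed.

```shell
# Print (rather than run) the ReiserFS check commands so they can be
# reviewed first. /dev/md1 is an assumed device name.
reiserfs_check_cmds() {
  local dev=${1:-/dev/md1}
  # --check is the read-only pass; --fix-fixable is only for problems
  # that --check reports as fixable.
  printf '%s\n' \
    "reiserfsck --check $dev" \
    "# only if fixable problems are reported:" \
    "reiserfsck --fix-fixable $dev"
}
reiserfs_check_cmds
```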

Link to comment

(quoting the AFP / ReiserFS error report and Rob's reply above)

Or ... "afp" has a memory leak and you've run out of free RAM after transferring 100 GB of files.  That affects lots of programs.
Link to comment

Sooo is it just me or do parity checks CRUSH array reads?  On 5.0.4 I used to be able to watch a movie via Plex (I know, beta ... add-on ... just bear with me) with no trouble even during a parity check.  I just tried and it was impossible via my Roku; it gave me a "server unavailable" error.  Then PlexWeb timed out just opening my library listing, and finally I tried playing the movie directly from the SMB share via VLC on my PC; it just sat there doing nothing until I cancelled the parity check, at which point it immediately started.

 

I tried copying a file from the array to my PC and got:

 

No Parity Check: 52MB/s

W/ Parity Check: <1MB/s

 

For anyone testing: if you do the No Parity Check transfer first and then repeat it With Parity Check using the same file, you might actually see full speed because (my guess) the file is still sitting in the cache.  If I then choose a different file for the Parity Check test, it slows to a crawl.  In fact I can start, stop, and restart the parity check in the middle of a large transfer and see the transfer speed slow down, speed up, and slow down correspondingly.

 

Anyone else seeing horrible array read speeds during parity checks?
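One way to keep the comparison fair, per the caching caveat above, is to drop the page cache before each timed read. A sketch (the array file path is an assumption, and the drop_caches write requires root; it is skipped quietly otherwise):

```shell
# Time a sequential read of a file, dropping the page cache first so a
# repeat of the same file doesn't report cached speed.
measure_read() {
  sync
  # 2>/dev/null comes first so a permission error on the redirection
  # is silenced when not running as root.
  echo 3 2>/dev/null > /proc/sys/vm/drop_caches || true
  # dd prints its throughput summary on stderr; keep the last line.
  dd if="$1" of=/dev/null bs=1M 2>&1 | tail -n 1
}

# Hypothetical usage against a file on the array:
# measure_read "/mnt/disk1/Testing/Movies/somefile.mkv"
```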

Link to comment

(quoting the error report and the replies above)

 

Thanks Rob, the first thing I did was check the file system; there was one error found and fixed. I was surprised to see the filesystem error, as these are new disks.

Joe, it wouldn't surprise me if there is a memory leak in AFP, since large transfers pose a problem on both 5.0.4 & 6 ... although the two systems gave different errors when the transfer failed.

Link to comment

(quoting the parity check post above)

I wonder if this is a consequence of the debate / discussion several months ago where people were calling for the absolute fastest parity checks possible? In any case, maybe try adjusting the tunables; there is a script on here somewhere that was written to try different values automatically in an attempt to maximize parity check speed. Perhaps if more memory is allocated for reading, it can multitask better?

 

I'm really just shooting in the dark here, so maybe someone else will have a better idea.
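For what it's worth, that tester script drives the md driver settings with commands along these lines. A sketch only: the /root/mdcmd path and the tunable names are assumptions recalled from 5.x-era forum scripts, and the values are arbitrary examples, not recommendations, so verify everything against your release first. The commands are printed for review rather than executed.

```shell
# Print (rather than run) example tunable-setting commands.
# mdcmd path, tunable names, and values are all assumptions.
tunable_cmds() {
  local stripes=${1:-4096} window=${2:-1024}
  printf '%s\n' \
    "/root/mdcmd set md_num_stripes $stripes" \
    "/root/mdcmd set md_sync_window $window"
}
tunable_cmds
```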

Link to comment

(quoting the parity check post above)

 

I just tested, and using iostat, my parity check is running at about 99 MB/s across all the disks, and I'm seeing about 5-10 MB/s from the meta volume while streaming a Blu-ray through Plex...

Link to comment

There are a lot more libs available out of the box now--awesome! But I've noticed an odd problem: some packages don't seem to have all of their files present on the system.

 

Here you can see glibc includes `usr/include/string.h`, but it seems to be missing from the system.

 

root@Tower:~# cat /var/log/packages/glibc-2.17-x86_64-7 | grep string.h
usr/include/string.h
usr/include/bits/string.h
root@Tower:~# find / -name string.h
root@Tower:~#

 

Thoughts?

Link to comment

(quoting the missing string.h post above)

Yes: after building "unRaid OS" from packages we then go through a "pruning" process that deletes close to 500MB of unneeded stuff (unneeded for a non-development system).  For example, all the documentation, user-space header files, unneeded terminfo, zoneinfo, etc. etc. etc.

Link to comment

(quoting the missing string.h post and Tom's reply above)

 

Ahh, I see. Is there any way to know what's missing? I'm checking whether a dependency is installed by looking for it in /var/log/packages, but that gives false positives if a package's files have been pruned out.
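One workaround for those false positives is to verify the manifest against the filesystem rather than trusting the package log alone. A sketch, assuming the standard Slackware manifest layout (files listed after a "FILE LIST:" header, directories ending in "/"); pkg_intact is a hypothetical helper, not part of unRAID:

```shell
# Check that every regular file a Slackware package manifest lists
# still exists on disk. Prints missing paths; returns non-zero if any.
# Usage: pkg_intact MANIFEST [ROOT]
pkg_intact() {
  local manifest=$1 root=${2:-/} missing
  missing=$(sed -n '/^FILE LIST:/,$p' "$manifest" | tail -n +2 |
    while IFS= read -r path; do
      case $path in
        ''|*/|install/*) continue ;;   # skip blanks, dirs, metadata
      esac
      [ -e "$root/$path" ] || echo "missing: $path"
    done)
  [ -z "$missing" ] || { echo "$missing"; return 1; }
}

# Hypothetical usage:
# pkg_intact /var/log/packages/glibc-2.17-x86_64-7 /
```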

Link to comment

(quoting the string.h discussion above)

It depends on what you're trying to do.  If you are configuring a 'development' system, or a full Slackware install, use the package list in /var/log/packages as a start.  If you are just installing additional packages to provide extra functionality, there shouldn't be any dependencies on something like a header file.

 

The other reason for including /var/log/packages is to prevent plugins from installing a duplicate of, or an older package than, what's already included.  At some point I'll include a "prune" script that will delete unneeded files from add-on packages.  For most packages the savings would be minimal, but for some it could be multiple MBs that needlessly take up RAM.
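To give a rough sense of what such a prune script might do, here is a minimal sketch; the directory list is an assumption modeled on the categories named earlier (docs, headers, locale data), and prune_pkg is a hypothetical name:

```shell
# Sketch of a package prune: remove docs, man pages, headers, and
# locale data from an unpacked package tree before it's loaded into RAM.
prune_pkg() {
  local root=$1
  rm -rf "$root/usr/doc" "$root/usr/man" "$root/usr/info" \
         "$root/usr/include" "$root/usr/share/locale"
}

# Hypothetical usage on an extracted add-on package:
# prune_pkg /tmp/extracted-package
```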

Link to comment

(quoting the string.h discussion and Tom's replies above)

 

I'm not configuring a development system per se. I found the string.h issue because it's required by the json gem[1], required by thor, required by my application.

 

[1] A JSON implementation as a Ruby extension in C

Link to comment

(quoting the string.h discussion above)

Who would have thunk it possible?

Link to comment

I get permission issues with some .exe and .cmd files when attempting to access them over the network. What's odd is that I can actually rename and edit these files, but when I attempt to launch them I get a permission error. I tried restarting all machines, setting shares to public, and even running New Permissions. The permissions are right. Downgrading back to 32-bit fixes it.

[attached screenshots: aaa.png, aa.png]

Link to comment

In case anyone needs context on Tom adding virtualization features to unRAID, this post is a good entry point into the huge thread where it was discussed: http://lime-technology.com/forum/index.php?topic=30777.msg277044#msg277044

 

BTW, I had to trawl through Tom's last month's worth of posts to find the thread. In doing so I was really struck by how even-tempered he is in dealing with criticism and hotheads on the forums.

Link to comment

(quoting the virtualization-context post above)

 

crazy, isn't it?! considering all this is outside the scope BUT it has seemed to promote greater/wider discussion, so perhaps a necessary evil  ???

 

 

On Topic:

 

I have this now running on my test server and there is nothing in my log to worry about.

Link to comment

Is there a reason you're going with Xen instead of KVM or OpenVZ?  I'm very curious.

 

KVM and Xen Host and Guest support are both enabled in unRAID 6.0.

 

Xen, KVM or OpenVZ... You still need a network bridge.

 

Back to Xen (like OpenVZ)...

 

Xen has its own "kernel" and combines all the various tools, packages, libraries, software, etc. into one package, whereas KVM is updated via the Linux kernel, as are all of its tools, libraries, and packages.

 

From a management standpoint, there are fewer moving parts with Xen. However, once we get KVM "stable", it's not often that you need to touch or upgrade KVM or the supporting software. What we are talking about here is simple VMs, not 10-node servers with petabytes of data on a distributed file system running 500+ VMs.
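For reference, the bridge that both hypervisors need looks roughly like this in iproute2 syntax. A sketch only: the interface names (eth0, br0) are assumptions, and the commands are printed for review rather than executed, since unRAID builds of this era typically set up br0 through their own init scripts.

```shell
# Print (rather than run) the iproute2 commands that would enslave a
# NIC to a bridge. eth0/br0 are assumed interface names.
bridge_cmds() {
  local nic=${1:-eth0} br=${2:-br0}
  printf '%s\n' \
    "ip link add name $br type bridge" \
    "ip link set $nic master $br" \
    "ip link set $nic up" \
    "ip link set $br up"
  # The host's IP address then moves to the bridge, e.g. "dhclient br0".
}
bridge_cmds
```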

Link to comment

(quoting the Xen vs. KVM discussion above)

What I found with Xen is that after half a day trying to get things up and running from the command line, I broke down and tried about 3 other things; two days later I broke down again and installed XenServer, plus Windows on its own hardware, so that I could manage the thing.  I've had more luck with Proxmox and OpenVZ/KVM, but I don't know how much of that is down to how confusing Xen is vs. the good tools included with Proxmox.  This is probably the wrong place to discuss it, but I think the success or failure of any virtualization used for one-off servers/services comes down to the availability of good front-end tools.  The closest I found with Xen was OpenXenManager, but even there I ran into lots of problems.

 

I hope that there's something that can be included with/plugged into unRAID to make VM management fairly easy.

 

Sorry if I vented.

Link to comment
