
Windows SBS 2011 VM


SliMat


Hi All

 

I can't see a better location for this query - but please move it if this isn't the right place :)

 

Anyway - I have UnRAID 6.3.5 with a few VMs including my Ubuntu Webserver and more recently a Windows SBS 2011 installation.

 

At the moment the SBS 2011 VM is not being used, as I am still making sure I am happy with it before I migrate my production server across. Since setting it up, though, I have noticed that it was hammering the hard disks in my UnRAID server. When I power the SBS server off the disks settle down. I have disabled WSUS, as I noticed it was causing SQL to use high disk utilisation, and I have also run a defrag... this has made things settle down and it's not as noisy now... but I have noticed that 3 of my 4 disks seem to be permanently active.

 

Does anyone have any thoughts on this, and whether I need to make any changes to the install or configuration before I put this VM into production?

 

The disks I am using are 3 x WD Red 4TB and 1 x WD Red 3TB... but I am concerned that the disks are being overworked... any thoughts? See screenshot attached!

 

Thanks

 

[Screenshot attached: Screen Shot 2018-04-08 at 09.56.29.png]


Any install of Windows as a VM without a cache drive is going to nail your drives. Windows isn't exactly known for not reading and, more importantly, writing constantly. Beyond that, for any mapped drives or libraries you have assigned to an unRAID share, you'd want to make sure that Windows isn't indexing them.


Also, you are using ReiserFS, which is an old and no longer supported file system that is known to cause issues on disks that are running out of space, such as yours. I would highly suggest you consider moving to a file system such as XFS. Also, consider buying an SSD for your VMs; they will run a whole lot faster and not impact your unRAID drives.
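To confirm which filesystem each disk is actually using before planning a conversion, the mount table tells you directly. On unRAID the array disks sit at /mnt/disk1, /mnt/disk2, ... so `df -T /mnt/disk*` shows reiserfs or xfs in the Type column; the sketch below runs the same commands against `/` so it can be tried safely anywhere:

```shell
# On the real server you would run: df -T /mnt/disk*
# Demonstrated against / here so it works on any Linux box:
df -T /                  # "Type" column shows the filesystem (xfs, reiserfs, ...)
# or read it straight from the kernel's mount table:
grep ' / ' /proc/mounts  # third field is the filesystem type
```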


Thanks Squid/Ashman... as it happens I have a couple of 128GB SSDs which are 'spares' at the moment. The hardware I am using is an HP MicroServer G8, so it only has space for 4 disks natively... but I can squeeze 1, or maybe 2, SSDs in where the DVD drive is.

 

So would you recommend one of the 2 following ideas (if possible) to minimise the hard disk activity on the main NAS disks...

 

1. Fit a 128GB SSD and use this as a cache disk

2. Fit a 128GB SSD and move the SBS VM to sit on here (not sure if this can be done)

 

I am not too worried about having this VM in a protected array, as I plan to take a snapshot of the VM and its XML data periodically as a backup and store this backup on the protected array.
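That periodic backup has two pieces on an unRAID host: the VM definition (saved with `virsh dumpxml SBS2011 > SBS2011.xml`, assuming the VM is named SBS2011 - adjust to your actual VM name) and the vdisk image itself, copied while the VM is shut down so it is consistent. A minimal sketch of the image-copy step, using stand-in /tmp paths so it can be tried safely (substitute /mnt/user/domains/... and your backup share on the real server):

```shell
# Stand-in paths; SBS2011 and vdisk1.img are assumed names.
VDISK=/tmp/domains/SBS2011/vdisk1.img
BACKUP=/tmp/backups
mkdir -p "$(dirname "$VDISK")" "$BACKUP"
truncate -s 10M "$VDISK"      # stand-in for the real (sparse) disk image

# --sparse=always keeps the copy sparse, so a mostly-empty 80GB image
# doesn't balloon to 80GB of real space on the backup share.
cp --sparse=always "$VDISK" "$BACKUP/SBS2011-vdisk1.img"
cmp "$VDISK" "$BACKUP/SBS2011-vdisk1.img"   # verify the copy
```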

 

Also, I realise that ReiserFS is a very old file system, but I don't know much about XFS... I had a quick Google and it says that UnRAID 6 uses XFS by default. I am guessing that as I have had UnRAID for at least 10 years, mine is still ReiserFS as it has been upgraded over time... so is there a way to migrate to XFS, or would I need to dump all my data somewhere, rebuild the system as a fresh setup and then move all my data back? Would I benefit from upgrading to XFS?

 

Any thoughts would be appreciated

 

Thanks

2 hours ago, SliMat said:

Also, I realise that ReiserFS is a very old file system, but I don't know much about XFS... I had a quick Google and it says that UnRAID 6 uses XFS by default. I am guessing that as I have had UnRAID for at least 10 years, mine is still ReiserFS as it has been upgraded over time... so is there a way to migrate to XFS, or would I need to dump all my data somewhere, rebuild the system as a fresh setup and then move all my data back? Would I benefit from upgrading to XFS?

 

No need to guess - it says so right there in your screenshot! You don't need to dump all your data somewhere safe but you do need to free up one disk. Sometimes that can be done by moving files from one disk to another. Yours are pretty full though. There's a lot of information about the process here.


Thanks John - after I read the replies above I searched on the website and did find this tutorial... I was wondering whether I could wipe one of the data disks (or the parity drive) to create a free disk and then do a data/parity rebuild afterwards? Alternatively I'll shuffle some data onto a 3TB USB drive as a temporary store :)

 

Thanks for the pointers


You could certainly get rid of the parity disk for a few days and use the freed up drive and slot to do the conversion but, of course, you would be without the protection against disk failure during that time. If I was going to take that route I would check that my backups are up to date and that my disks are healthy. You would need to plan carefully or risk ending up with your data spread across the three 4 TB drives and the empty 3 TB drive that is too small to be used as the new parity disk! Alternatively, you might want to do that deliberately and take the opportunity to upgrade the capacity of your parity disk, which would give you better options for future expansion.

 

Your best course of action really depends on your plans for the long term.


Thanks John

 

The 4TB disks are all only 3 months old... so like you say, I could upgrade the 3TB to 4TB initially and start by using the new disk as the first XFS one... that might be the safer route, as I would still have the parity disk and, if need be, the 3TB disk with the original data on it.

 

I will deliberate and make a call on it. The urgency is around sorting out the SBS Server VM issue, as the physical server is located at a site which I am losing imminently, so I need to get this unit's role back here ASAP.

 

Thanks


Hi again all...

 

OK, I started trying to make some changes to get this server working without hammering the disks, but I am thinking I have made some sort of novice error...

 

I fitted a 128GB SSD as a cache drive yesterday. However, in the evening when I looked at the dashboard I noticed 85GB was used... which surprised me. Anyway, I set mover to run every couple of hours and manually started it, expecting to see the disk clear down... but all that happened was it filled up almost completely and then, when mover finished, it was back at about 85GB full... below is a screenshot of what I am seeing at the moment while mover is running. Any thoughts on where to start?
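Before blaming mover, it is worth seeing what is actually occupying the cache drive. On the real server that would be `du -sh /mnt/cache/* | sort -h`; the sketch below uses a stand-in /tmp/cache directory so it can be tried safely:

```shell
# Stand-in for /mnt/cache with a sparse file playing the part of a vdisk:
mkdir -p /tmp/cache/domains /tmp/cache/appdata
truncate -s 5M /tmp/cache/domains/vdisk1.img

# --apparent-size reports the file's logical size rather than the blocks
# a sparse image actually occupies; sort -h orders by human-readable size.
du -sh --apparent-size /tmp/cache/* | sort -h
```

If one share (e.g. domains) dominates the total, that share's cache setting is where to look next.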

 

[Screenshot: 40664900135_68f526e54b_b.jpg]

 

Many thanks in advance :)

 


As soon as I posted the above, mover finished. I started looking about and I think I know the problem... but I don't know how to fix it.

 

I have a number of live VMs stored in /mnt/domains, and I noticed that my live Ubuntu Webserver (80GB image) is on the cache drive;

 

[Screenshot: 27688191878_c35dc4c499_b.jpg]

 

Obviously I don't want 90GB images being stored on the cache whenever one of the servers is on... so can anyone point me in the right direction as to what settings should be changed?

 

Thanks

 

 


Your domains user share is set cache:prefer. If you only want some of the contents of the domains share on the cache, and others on the array, you need to set it to cache:only, and move the files you don't want on the cache back to the disk you want them on. Obviously the files can't be open when you move them.
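The manual move can be sketched as below. On the unRAID host you would shut the VM down first (e.g. `virsh shutdown Ubuntu`, assuming the VM is named Ubuntu - adjust to your actual VM and vdisk names), then move the image from the cache to an array disk. Stand-in /tmp paths are used so the commands can be tried safely; substitute /mnt/cache/domains/... and /mnt/diskN/domains/... for real use:

```shell
# Stand-in for the cache and an array disk; "Ubuntu" and "vdisk1.img"
# are assumed names:
mkdir -p /tmp/cache/domains/Ubuntu /tmp/disk1/domains/Ubuntu
truncate -s 5M /tmp/cache/domains/Ubuntu/vdisk1.img   # stand-in image

# mv handles the cross-filesystem copy-and-delete for you:
mv /tmp/cache/domains/Ubuntu/vdisk1.img /tmp/disk1/domains/Ubuntu/
```

The VM's XML can keep pointing at the /mnt/user/domains/... path, since the user share resolves to whichever disk actually holds the file.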

 

Read the help descriptions on the cache mover settings on the share configuration page for more explanation.


FWIW, I have an SBS 2008 system running as a VM on Unraid. The operating system sits on a pair of 500GB SSDs on the cache; I think I have allocated about 250GB for SBS. I also run a 1TB WD RE4 drive as the first drive in the array, which is used for user data and WSUS. RE4 drives are built to take a hammering, so I'm not worried about what SBS might do to them. Boot-up time is very fast on the SSDs. The same system also runs a Win 10 VM on the same SSDs.

 

I think if you were to move some of SBS onto the 127GB cache, you might encounter space problems.

 

 

 


Archived

This topic is now archived and is closed to further replies.
