techie.trumpet


  1. If I set up two datastore drives they will be independent of each other. One drive would be a standard 7200 RPM drive for VMs where disk I/O isn't a concern (secondary domain controller / DNS). I would also use this disk to store any ISOs needed to install VMs, along with some local VM backups. The other drive would be an SSD (with background garbage collection). I doubt I would need the SSD initially, as I am already running an ESXi server; virtualizing my Unraid machine would be about leveraging the fact that my hardware is capable of much more than operating solely as an Unraid server.
  2. Thanks mrow, that information helps a lot. For the immediate future I know I can get by with the hardware I have; however, seeing what the M1015 cards are selling for makes me want to watch for a deal so I don't get pinched when I reach the point where I need more than 16 drives. The next hard drive I add to the system is going to be a larger parity drive so I can start adding larger disks to the system. Once that is out of the way I can watch for a deal on a card like the M1015. I am not sure how much the MV8s fetch used, but I could always sell the second card once I upgrade to a card like the M1015 and add an expander.
  3. At this point in time the only additional PCI/PCI-E device I would want an open slot for is an extra NIC, so one can stay dedicated to Unraid. Since I would primarily be running secondary service VMs like a secondary domain controller, I imagine I can get by giving Unraid complete access to one NIC and using the second NIC for ESXi and the couple of VMs. If performance starts to become an issue with my existing ESXi server I could end up moving a couple of VMs over to the server running the Unraid VM, and it is at that point that I would likely want to add a third NIC to the system. If I opted for the 4-port SATA controller I would still have a single PCI slot available that I could use for the NIC. From my reading, though, the maximum throughput of the PCI bus is roughly equal to the maximum throughput of a Gigabit NIC (a 32-bit/33 MHz PCI slot tops out around 133 MB/s versus roughly 125 MB/s for a Gigabit link), so creating a virtual switch on that device could introduce a bottleneck.

     If I were to go with the IBM M1015, what else would I need in order to consolidate down to using only the two PCI-E 8x slots? I have seen references to the M1015 cards selling for $60-80, and I would imagine an expander card runs in the $200-300 range?
  4. I am considering converting my Unraid server into an ESXi host and moving Unraid into a VM. I have been reading through everything I can find on the forum about converting to ESXi, and it looks like my hardware should convert pretty easily, especially since the SASLP-MV8 passthrough configuration has been sorted out. Today, if I only pass through my two MV8 cards I will have access to 16 drives in Unraid (14 data drives, parity and cache), and given that I am only running 7 drives that leaves a fair bit of room for expansion. In my Norco case, 4 of the backplanes are connected to the MV8 cards and the 5th is connected to the motherboard SATA ports with a breakout cable. My plan for converting to ESXi is to add a couple of 2.5" SSD or 7200 RPM drives for my datastore; from what I understand I can mount them above the backplane inside the case, which would leave all of my hot-swap bays free for Unraid. All of this is a long-winded way of getting around to my actual question.

     My question: after I hit the 16-drive limit of the two MV8 cards, what direction should I take in order to add the last 4 drives my case is capable of holding? From my reading I could Raw Device Map (RDM) the drives, but there are drawbacks in maintaining the RDM mappings. I could replace the MV8 cards with a single 20+ port expander setup, but that might not be the most cost-efficient way to add the 4 drives.

     What I would like to achieve by virtualizing: I recently built an ESXi server that is running all of my primary service VMs. Virtualizing Unraid would free up my system resources to allow me to run secondary service VMs (such as a backup domain controller / secondary DNS) and a few VMs that would be used occasionally for development, testing, etc.

     My hardware:
     • Mainboard: SUPERMICRO MBD-X8SIL-F-O
     • CPU: i3 540 3.07 GHz (upgrading to Intel Xeon X3440 2.53 GHz)
     • Memory: Kingston KVR1333D3E9SK2/8G, Kingston KVR1333D3E9SK2/4G
     • Case: NORCO RPC-4220
     • Power supply: Ultra X4 750-Watt Modular, 80 Plus Bronze
     • SATA controllers: 2x Supermicro AOC-SASLP-MV8
     • Drives: 3x Hitachi Deskstar 2TB 7200 RPM, 2x WD WD20EARS 2TB, 1x WD WD20EARX 2TB, 1x WD7500AAKS 750GB cache drive
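
     From what I have read, the RDM route on ESXi looks roughly like the sketch below. I have not tested this on my build, and the device identifier and datastore/VM paths are only placeholders, so treat it as my understanding rather than a recipe.

        # List the local disks ESXi can see (the name used below is a placeholder)
        ls /vmfs/devices/disks/
        # Create a physical-compatibility RDM pointer file on an existing datastore
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
            /vmfs/volumes/datastore1/unraid/disk17-rdm.vmdk
        # The resulting .vmdk is then attached to the Unraid VM as an existing disk.
        # These pointer files are the mappings that have to be maintained whenever
        # a drive is moved or replaced, which is the maintenance drawback above.
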
  5. I recently did a clean install of beta 13 coming from 4.7 and am now having trouble with the Active Directory integration. I backed up my 4.7 configuration, reformatted my USB key, and copied over beta 13. After successfully booting I executed the Utilities / New Permissions script to reset ownership of the content on the drives to nobody:users.

     According to the SMB page under Settings the system successfully joined the domain; however, when I execute "getent passwd" or "getent group" I do not see any of the AD users/groups like I did in unRAID 4.7. Executing "wbinfo -u" or "wbinfo -g" does return the correct list of AD users and groups.

     Initially, when I set up AD with unRAID 4.7, I had to edit the domain security policy to change the authentication method:
     • Network security: LAN Manager authentication level -> Send LM & NTLM responses
     • Minimum session security for NTLM SSP -> disable "Require 128-bit encryption"
     Changing these settings allowed Windows 7 clients to properly connect to the network shares.

     Additionally, can someone help clarify whether it is still necessary to change the owner/group of the shares to an Active Directory user/group to enforce domain security? In 4.7 I was able to limit access by issuing:
     chown -R "mydomain\myuser"."mydomain\mygroup" /mnt/user/myshare

     Domain server: Windows Server 2008 R2 Core
     System specs: SUPERMICRO MBD-X8SIL-F-O, Intel i3 540, Kingston 4GB (2 x 2GB) 240-pin DDR3 1333 ECC Unbuffered, 2x Supermicro AOC-SASLP-MV8
     syslog.txt
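
     One thing I plan to check (this is just a guess on my part, based on how Samba and winbind normally fit together, not something confirmed for beta 13) is whether winbind is actually wired into NSS, since wbinfo talks to winbindd directly while getent goes through nsswitch:

        # getent only lists AD users/groups if the winbind NSS module is configured
        grep -E '^(passwd|group):' /etc/nsswitch.conf
        # expected to show something like:
        #   passwd: files winbind
        #   group:  files winbind
        # the NSS module itself also has to be present on the system
        ls /lib/libnss_winbind.so*
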
  6. I recently installed SABnzbd, Sick Beard, and CouchPotato and also had an issue with the applications setting the owner/group of newly downloaded files to root:root. I am running unRAID 4.6 connected to Active Directory, and I wanted the owner/group to be assigned to a domain user and group.

     My solution was to write a custom shell script that is executed when the applications finish processing the downloaded files. Sick Beard supports the execution of custom scripts via the extra_scripts property in the config.ini file. After reviewing the Sick Beard / SABnzbd documentation I learned that the first argument passed to the custom script is the newly downloaded file, and therefore my custom script is:

        #!/bin/bash
        chown "DOMAIN\user"."DOMAIN\media users" "$1"

     I saved this script as /mnt/cache/.custom/sickbeard/autoProcess/media-setowner.sh so it could also be used by SABnzbd. In SABnzbd I created a couple of custom categories for media files that are not downloaded by Sick Beard or CouchPotato and assigned my custom script via the script drop-down. I am still testing it out, but so far it appears to be working.
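
     For anyone wiring this up the same way, this is roughly how it ties together on my end (I believe extra_scripts sits under the [General] section of config.ini, but double-check against your own install; the SABnzbd side just needs its scripts folder pointed at the same directory):

        # make the script executable so Sick Beard / SABnzbd can run it
        chmod +x /mnt/cache/.custom/sickbeard/autoProcess/media-setowner.sh
        # Sick Beard: config.ini
        #   [General]
        #   extra_scripts = /mnt/cache/.custom/sickbeard/autoProcess/media-setowner.sh
        # SABnzbd: point the post-processing scripts folder at the same directory,
        # then pick media-setowner.sh from the script drop-down on each category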