Jim Beam

Members
  • Posts: 29
  • Gender: Undisclosed


Jim Beam's Achievements

  • Rank: Noob (1/14)
  • Reputation: 0

Recent Posts

  1. Actually... this: https://smallformfactor.net/forum/threads/nverse-a-highly-versatile-steam-box-design.34/page-11 "Is there any way to install a fan over any of the 3.5" drive areas?" "Thanks! You can install 120mm fans where you would normally install the AIO cooler radiators, which overlaps with the area for some hard drives. There is also space for 120mm fans over the GPU area, which is also a hard drive area if you're not using a GPU." Seems there is hope yet. I like this case: a dense storage node with mini-ITX. It will be interesting to see it come to life.
  2. Still, it's an interesting concept. If nothing else, 10 points for originality and for actually trying to get something out there and done. It will be interesting to follow this and see where it goes.
  3. It's OK, I've got it under control. I have someone working on it for me. I would do it myself, but I am on a different continent from the server. There is nothing difficult about setting up a server with an OS if one knows what one is doing; this is pretty standard fare for folks who do it daily for a living - nothing special here. Anyway, it's moving forward. Thanks for the input.
  4. Came across this case - very innovative indeed. It takes a mini-ITX board and has 4 possible configs; the storage config would suit us Unraiders, I would think: http://www.rationalbananas.com/specifications (video: https://www.youtube.com/watch?v=9_xlfJj87hY). And this is a VERY nice mini-ITX board: http://www.asrock.com/mb/Intel/Fatal1ty%20Z270%20Gaming-ITXac/index.us.asp It has Thunderbolt 3 onboard plus 1 x PCIe 3.0 x4 M.2, 6 x SATA III, 1 x PCIe 3.0 x16 slot, 32GB RAM, and i7-7700K compatibility on the Z270 chipset. All of these parts together would make a nice build. Thoughts?
  5. Thanks StevenD. I contacted Limetech to ask them to do this work for me - I had one reply from them, I answered the question they posed, and never heard back. Seems they don't want to touch it either; not sure why. So where do I find an UNRAID expert who can set this up for me? I am remote from the server (it's in the US), and when I send it to a DC (in the EU) to retrieve my many TBs of data, it then comes to me. I need it to arrive loaded with data, ready to go. Willing to pay, but who do I pay to do this?
  6. What are the pros and cons of this approach? If one went this way, UNRAID would be a guest on an ESXi host - I get that this can be done - and a pfSense setup can also run as a guest under ESXi. Does that mean the UNRAID VM is perfectly safe in that setup? If I went that way, it seems I could run anything I want as a VM under ESXi: UNRAID as a VM with the front 24 HDDs given to it as JBOD per the UNRAID method, plus another guest VM running some other distro with the back 12 HDDs passed through to it, which lets me import my files from the Internet onto the VM controlling the rear 12 drives.

     I was trying to understand this a few weeks ago - is it better to run UNRAID as the host OS on bare metal and then run guest VMs under KVM from UNRAID itself, *OR* run ESXi as the host on bare metal and UNRAID as an ESXi guest, *OR* run KVM on bare metal and UNRAID as a KVM guest with the front 24 HDDs passed to it and another VM running some other distro controlling the back 12 HDDs? (A rough sketch of the disk-passthrough idea is at the end of this post.) This is probably all really easy stuff to some, but to us newbs it's a bit complicated trying to grasp the pitfalls of one way over another.

     The end goal is to get my server home once I have transferred all my files to it. I will be using UNRAID as a repository for the 4K video footage I will be shooting (underwater footage, but that's neither here nor there for this discussion). I'm just not sure how to set up the server so it's UNRAID-ready to roll when it gets home, given that I have to move a huge amount of footage onto the server in a DC before it leaves the UK and heads out to me on location. I don't want to have to move 36 HDDs' worth of footage from a RAID 6 or similar under some other system onto an UNRAID system when the server finally arrives on location - that would be too much to cope with. I need the server to arrive with UNRAID running and all my files onboard its HDDs.
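     A minimal sketch of the KVM disk-passthrough idea, assuming UNRAID (or any libvirt-based host) is the bare-metal OS; the guest name 'rear12-vm' and the by-id path are hypothetical placeholders, not anything UNRAID sets up for you:

        # Hand one whole drive to the guest that "owns" the rear 12 disks.
        # Requires the libvirt-python bindings on the host.
        import libvirt

        DISK_XML = """
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <!-- use the stable /dev/disk/by-id path, not /dev/sdX, which can change -->
          <source dev='/dev/disk/by-id/ata-HGST_HUS726060ALE610_EXAMPLE'/>
          <target dev='vdb' bus='virtio'/>
        </disk>
        """

        conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
        dom = conn.lookupByName('rear12-vm')    # hypothetical guest name
        # Persist the device in the guest's config (applies at next boot);
        # OR the flag with VIR_DOMAIN_AFFECT_LIVE to hot-attach to a running guest.
        dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        conn.close()

     Repeated per drive, that gives one VM raw control of the rear 12 disks while the host keeps the front 24.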
  7. Apologies in advance - please excuse the wall of text below. I wanted to go in-depth here to make sure it's clear what I want to do.

     The idea is that the SuperMicro server will sit on a public IP in the DC so I can move my files from the Internet onto the rear 12 drives under a VM running a different distro, and then move those files as needed from the back 12 HDDs to the front 24 HDDs, where they will be stored as cold storage under UNRAID in a JBOD setup. I really love that idea. To do that, I have to run UNRAID as the host on bare metal and use KVM as the... er... hypervisor (not sure how you describe it, but my meaning is: VMs will be run under KVM from WITHIN UNRAID). In that scenario the rear 12 drives can be passed through to a VM that "owns" those 12 drives - right? - and UNRAID will control the front 24 as JBOD drives. At least that is my basic understanding of what can be done.

     Then, as I wanted to do, I'd spin up a VM running pfSense to block access to UNRAID from ANYONE except my home public IP address, with only ME, from my public IP at home, allowed to move files (and to interact with UNRAID in general) from the VMs running on the back 12 HDDs to the front 24 JBOD HDDs under UNRAID. I do not require UNRAID to do anything while at the DC other than sit there and collect files on the front 24 HDDs as a JBOD system. The VM controlling the back 12 HDDs will do all the work moving my files onto my SuperMicro system from the Internet. That VM WILL be exposed to the Internet, but that's OK: if I got hacked, I could wipe the VM and download my files again, as I am copying files from another set of servers onto this one.

     There are major advantages to this approach over a traditional RAID 6 or 10 setup on the front 24 HDDs under something like Ubuntu. I don't want 24 x 6TB 7.2K drives spinning 24/7 because of power usage (about 220W there alone, as each drive is 9-10W; a rough sketch of this math is at the end of this post). I will not be accessing these front 24 HDDs while the server is at the DC other than to transfer files to one HDD at a time as needed: when drive 3 is full, it can be spun down and drive 4 spun up and filled, and so on. So why have 24 HDDs in a traditional RAID setup where they are spinning all the time? This is my reasoning for having UNRAID in place from the get-go. The server will be an UNRAID server when she comes home one day, so why not have UNRAID handling all those drives from the day the server goes into service? I am told the reason I should not do it like this is that UNRAID will be exposed to the Internet, and it is an OS with no security, as it was never designed for that. That's why I have posted this question in such detail - I need to get this right.

     The server itself uses this mobo: http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRi-T.cfm I am buying the server used - it had been in service for about 200 days before being taken out of its server room (the business closed down). The config is:
       • SuperMicro 4U 36 x 3.5" chassis
       • SuperMicro X10DRI-T motherboard
       • 1 x Intel E5-2620v3 CPU
       • 4 x Samsung 16GB PC4-2133P RAM modules (64GB total)
       • 1 x LSI SAS9300-8i SATA/SAS adapter
       • 2 x Intel S3510 120GB SSDs
       • 36 x HGST 6TB 7.2K 6Gbps HUS726060ALE610 SATA hard drives
       • 2 x 1280W power supplies
     I have added an extra 128GB of RAM to give 192GB in total to CPU 1. I will change the CPU to a V4 when a 22-core V4 is cheap (3 to 4 years from now?).

     Should I upgrade those Intel S3510 120GB SSDs in the rear of the server? I plan to use them for the KVM system to run VMs off. Should I replace them with 850 EVOs to get more write performance, as the S3510s have pretty poor write performance? Or will write performance not be an issue in this scenario? (I think more is always better on a server - right?) http://www.anandtech.com/show/9226/intel-releases-ssd-dc-s3510 (read performance is reasonable - not great, but good enough, surely more than a spinner; write performance is dismal for an SSD.)

     How many NICs do I need for all this? There are 2 x Intel X540 dual-port 10GBase-T NICs onboard. Do I need more? And if it all works out as described above (if not, please tell me which part I have got wrong), then I don't really understand why a VM running pfSense can't block access to the UNRAID OS from anyone except my home public IP. The VMs (pfSense and the other distro) will be exposed to the Internet full-time on a very fast uplink, but pfSense should guard those VMs - right? So I am not sure why the IT people looking at this for me are saying it isn't doable because of security. Can anyone provide an overview of how this should be set up to be secure? PM me if you want to discuss this professionally.
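     A back-of-the-envelope sketch of the spin-down argument above, using this post's figures (9-10W per spinning drive, 24 front drives) and the EUR 0.17/kWh tariff quoted in a later post here; the 0.8W standby draw is an assumed typical figure, not a measured one:

        # All 24 front drives spinning 24/7 vs the UNRAID-style workflow
        # where one drive is active and the rest are spun down.
        DRIVES = 24
        SPIN_W = 9.5        # per-drive draw while spinning (post says 9-10 W)
        STANDBY_W = 0.8     # assumed per-drive draw while spun down
        TARIFF = 0.17       # EUR per kWh
        HOURS_PER_YEAR = 24 * 365

        always_on = DRIVES * SPIN_W                      # 228 W (the post's ~220 W)
        one_active = SPIN_W + (DRIVES - 1) * STANDBY_W   # ~27.9 W

        kwh_saved = (always_on - one_active) * HOURS_PER_YEAR / 1000
        print(f"always spinning: {always_on:.0f} W vs one active: {one_active:.1f} W")
        print(f"~{kwh_saved:.0f} kWh/year saved, ~EUR {kwh_saved * TARIFF:.0f}/year")

     Roughly 1,750 kWh (about EUR 300) a year saved under those assumptions - the core of the case for UNRAID-style spin-down over an always-spinning RAID.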
  8. Is there any way at all to have UNRAID on a server that will sit in a data center? I don't want to access files from the UNRAID box over the Internet; I just want to store files there as I FTP them to my server. I will be using a SuperMicro 36-bay SuperServer: front 24 bays for UNRAID and rear 12 for another distro. Thing is, I can't go 1U more, so I can't run a 1U server as a firewall, and I am told a firewall running in a VM from within UNRAID won't secure the UNRAID server. Basically, I have been told to forget it - it's too insecure to run UNRAID like that. I read an old thread here about Internet access, but that was from 2009. Has anything changed at all?

     Why do I want to do this? Because I want to run JBOD and not lose HDD space to a RAID 6 or RAID 10 setup. UNRAID just has so many good things about it - I am preaching to the converted on this forum. The idea is to use UNRAID as cold storage while I am getting my files onto the 24 HDDs. Files will be transferred to UNRAID from files loaded onto the rear 12 HDDs running another distro. There has to be some way to do this without exposing UNRAID to the Internet nasties. Any ideas at all, or is this just not doable?
  9. Good question - out of stock, with no price given as a result. In the US they are quite expensive, so they come nowhere near being suitable. 10TB vs 8TB over 22 HDDs does give 44TB more capacity, but at huge cost given what I see in the US. Thank GOD I am not an accountant - I hate counting pennies. But it has to be looked at, at least approximately. I'm done for the day - tired of reading spec sheets, looking up websites and crunching numbers on the calculator :-) Thanks for your input, Garycase.
  10. OK, thanks for that. With UNRAID, when a drive is not reading or writing, it spins down - right? So essentially no power is used, or at least so little that it's insignificant. I just want to make sure I have all my facts right. I still have to crunch the numbers on power cost, but with the lower power requirements of the 10TB IronWolf and its lower weight (the drives will be shipped to me one day, and air freight is expensive), there might not be that big a difference in cost over the life of the drives between the 8TB and 10TB IronWolf. I am going to pay for it all eventually, so I have to be aware of these things. The 10TB IronWolf is 6.8W average / 4.42W idle / 0.8W standby / 650 grams.
  11. What are "load/unload" cycles? The IronWolf drive has 1M hours MTBF, 600,000 load/unload cycles, 9.0W avg / 7.2W idle, max temp 70C, at € 265,-. The Archive drive has 800,000 hours MTBF, 300,000 load/unload cycles, 7.5W avg / 5.0W idle, max temp 60C, at € 259,-. It's a wash. Do I go for 800,000 hours MTBF, or is 600,000 load/unload cycles more important? The roughly 2.5W per-drive difference is about 50W over 22 drives (20 data, 2 parity, plus 2 x SSDs for cache) in the front of the server; 12 more HDDs go in the back, but I'm not sure on those 12 yet. 50W x 24 hours a day x 365 days a year is about 438 kWh a year, which at €0.17 per kWh is roughly €74 a year (worked out below). I guess it all depends on the final config and usage, and on whether the HDDs sit in idle mode or actually power off when not used under UNRAID. I'm not sure about that last part because I have never used UNRAID.
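     A quick sketch of that arithmetic, using the figures above (the ~2.5W delta is this post's approximation of the IronWolf-vs-Archive difference; whether idle or standby draw applies under UNRAID is the open question):

        # Annual cost of the per-drive wattage delta across the 22 front drives.
        DRIVES = 22
        DELTA_W = 2.5             # approximate per-drive difference from the post
        TARIFF = 0.17             # EUR per kWh

        extra_watts = DRIVES * DELTA_W                # 55 W (the post rounds to ~50 W)
        kwh_per_year = extra_watts * 24 * 365 / 1000  # watts -> kWh over a year
        print(f"{extra_watts:.0f} W extra -> {kwh_per_year:.0f} kWh/year "
              f"-> EUR {kwh_per_year * TARIFF:.2f}/year")

     About 482 kWh and EUR 82 a year at the exact 55W figure - small next to the price gap between the drives, which is why it feels like a wash.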
  12. Hey Garycase, again, thank you. Maybe you can see where I am going with all this: a layered approach. First, how do I run the VMs; then the HDD subsystem has to be figured out. Lots more questions are coming as I put the pieces in place. You are right about power usage. I want to put 36 drives in a server in a data center in Europe, where electricity is charged as consumed and is expensive (at least compared to where I am physically located). So the power draw per drive is important to me - one of the reasons I want to use UNRAID in the server: drives can be spun down and idle when not in use, yet still get the advantage of parity protection, and I can ask "smart hands" to easily pull a drive if it dies and replace it without taking down the whole array. If a drive sits essentially idle in UNRAID, does it use any significant power at all? Heat is also an issue - the less the better, it has to be said - but in the DC it's pretty cool all the time.
  13. I have to buy my drives in Europe because that makes more sense than buying in the US and shipping them to the DC to go into the server. These are the prices I have found in Europe: Seagate IronWolf 8TB 3.5" HDD (SATA-600) @ 210MB/s: € 265,-; Seagate Archive HDD v2 ST8000AS0002 8TB 3.5" HDD (SATA-600) @ 150MB/s: € 259,-. Both are rated at 180TB a year and 1M hours MTBF, so no real difference there on duty cycle. The intended use is write once, read many, for very large media files averaging 20+GB. There will be MANY reads from these drives. Which would you go for, and why? Thanks for your opinion.
  14. Thanks Gary - a goldmine of information as always. Right, so off to the UNRAID hypervisor subforum I go. If you don't mind, what are the pros/cons of either way? Will the built-in hypervisor give better performance to the other VMs, or just make UNRAID perform better? I need to get this basic concept right before I go much further. I want to spec and purchase hardware, so knowing how I should be setting things up helps.
  15. It's been a while since I looked at UNRAID, but now that 6.2 is here with dual parity and 30 drives etc., it's time to get on with this and get 'er done. I need to do several things on one server setup - OK, so VMs. Do I install UNRAID on bare metal and then run the other VMs on top of UNRAID, OR could I set up a hypervisor on bare metal and run UNRAID in a VM? I am pretty sure it's "install UNRAID as the base OS and run VMs from there," but I just want to make sure I have this right as I figure out how to do what I want to do.