Jim Beam

Members • 29 posts
Everything posted by Jim Beam

  1. Actually... this: https://smallformfactor.net/forum/threads/nverse-a-highly-versatile-steam-box-design.34/page-11 "Is there any way to install a fan over any of the 3.5" drive areas?" "Thanks! You can install 120mm fans where you would normally install the AIO cooler radiators, which is overlapped with the area for some hard drives. There is also space for 120mm fans over the GPU area, which is also a hard drive area if you're not using GPU." Seems there is hope yet. I like this case: a dense storage node with mini-ITX. Will be interesting to see it come to life.
  2. Still, it's an interesting concept. If nothing else, 10 points for originality and for actually trying to get something out there and done. Will be interesting to follow this and see where it goes.
  3. It's OK, got it under control. I have someone working on it for me. I would do it myself, but I am on another continent from where the server is. There is nothing difficult about setting up a server with an OS if one knows what one is doing. This is pretty standard fare for folks who do this daily for a living - nothing special here. Anyway, it's moving forward. Thanks for the input.
  4. Came across this case - very innovative indeed. Takes a mini-ITX board and has 4 possible configs - the storage config would suit us Unraiders, I would think: http://www.rationalbananas.com/specifications https://www.youtube.com/watch?v=9_xlfJj87hY And this is a VERY nice mini-ITX board: http://www.asrock.com/mb/Intel/Fatal1ty%20Z270%20Gaming-ITXac/index.us.asp It has Thunderbolt 3 onboard plus 1 x PCIe 3.0 x4 M.2, 6 x SATA III, 1 x PCIe 3.0 x16 slot, 32GB RAM, and is i7-7700K compatible. Z270 chipset. All of these parts together would make a nice build-out. Thoughts?
  5. Thanks StevenD. I contacted Limetech to ask them to do this work for me - I had one reply from them, I answered the question they posed, and never heard back from them. Seems they don't want to touch it either. Not sure why. So where do I find an UNRAID expert who can set this up for me? I am remote from the server (it's in the US); the plan is to send it to a DC (in the EU) to retrieve my many TBs of data, and then it comes to me. I need it to arrive loaded with data, ready to go. Willing to pay, but who do I pay to do this?
  6. What are the pros and cons of this approach? So if one was to go this way, UNRAID is now a guest on an ESXi host - I get that this can be done and, further, a pfSense setup can also be run as a guest under ESXi - but does that mean that in that setup the UNRAID VM is perfectly safe? If I went that way, it would seem that I can run anything I want as a VM under ESXi: UNRAID as a VM with the front 24 HDDs given to it as JBOD per the UNRAID method, plus another guest VM running some other distro with the back 12 HDDs passed through to it, which allows me to import my files from the Internet to the VM controlling those rear 12 drives.
I was trying to understand this a few weeks ago - is it better to run UNRAID as the host OS on bare metal and then run guest VMs under KVM from UNRAID itself... *OR*... run ESXi as the host on bare metal and then run UNRAID as an ESXi guest? OR run KVM on bare metal and then run UNRAID as a KVM guest with the front 24 HDDs passed to it and another VM running some other distro controlling the back 12 HDDs? This is probably all really easy stuff to some - but to us newbs, well, it's a bit complicated trying to grasp the pitfalls of one way over another.
The end goal is to get my server home once I have transferred all my files to it. I will be using UNRAID as a repository for the 4K video footage I will be shooting (underwater footage, but that's neither here nor there for this discussion). I'm just not sure how to set up the server so it's UNRAID-ready to roll when it gets home - given I have to move a huge amount of footage onto the server in a DC before it leaves the UK and heads out to me on location. I don't want 36 HDDs of footage to have to move from a RAID 6 or something under some other system onto an UNRAID system when the server finally gets on location. That would be too much to cope with. I need the server to arrive with UNRAID running and all my files onboard the HDDs in the server.
  7. Apologies in advance - please excuse the wall of writing below. I wanted to go in-depth here to make sure it's clear what I am wanting to do.
The idea is that the SuperMicro server will sit on a public IP in the DC so I can move my files from the Internet onto the rear 12 drives under a VM running a different distro, and then move those files as needed from the back 12 HDDs to the front 24 HDDs, where they will be stored as cold storage under UNRAID in a JBOD setup. I really love that idea. So to do that I have to run UNRAID as the host on bare metal and use KVM as the VM... er... controller, or director, or...? (Not sure how you describe it - but my meaning is, VMs will be run under KVM from WITHIN UNRAID.) In that scenario the rear 12 drives can be passed through to a VM that "owns" those 12 drives - right? ... and UNRAID will control the front 24 as JBOD drives. At least that is my basic understanding of what can be done.
And then, as I wanted to do, spin up a VM to run pfSense on, in order to block access to UNRAID from ANYONE except my home public IP address - only ME, from my public IP at home, allowed to move files (and to interact with UNRAID in general) from the VMs running on the back 12 HDDs under other distros to the front 24 JBOD HDDs under UNRAID. I do not require UNRAID to be doing anything while at the DC other than for the UNRAID OS to just sit there and collect files on the front 24 HDDs as a JBOD system. The VM controlling the back 12 HDDs will be doing all the work, moving my files onto my SuperMicro system from the Internet. That VM controlling the back 12 HDDs WILL be exposed to the Internet. But that's OK, because if I got hacked I can wipe the VM and download my files again, as I am copying files from another set of servers onto this one.
There are major advantages to me using this approach rather than a traditional RAID 6 or 10 setup on the front 24 HDDs under something like Ubuntu - I don't want 24 x 6TB 7.2K drives spinning 24/7 because of power usage (about 220 W there alone, as each drive is 9-10 W). I will not be accessing these front 24 HDDs while the server is at the DC other than to transfer files to one HDD at a time as needed. When drive 3 is full it can be spun down and drive 4 spun up and filled, etc. So why have 24 HDDs in a traditional RAID setup where they are spinning all the time? This is my reasoning for having UNRAID in place from the get-go. The server will be an UNRAID server when she comes home one day. So why not just have UNRAID in place to handle all those drives from the day the server goes into service?
I am told that the reason I should not do it like this is because UNRAID will be exposed to the Internet, and it's an OS that has no security, as it was never designed for that. So that's why I have posted this question and gone into such detail - I need to get this right.
The server itself is this mobo: http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRi-T.cfm I am buying the server used - it had been in service for about 200 days before being taken out of the server room it was in (business closed down). The config is:
SuperMicro 4U 36x3.5" Chassis
SuperMicro X10DRi-T Motherboard
1 x Intel E5-2620v3 CPU
4 x Samsung 16GB PC4-2133P RAM Module (64GB Total)
1 x LSI SAS9300-8i SATA/SAS Adapter
2 x Intel S3510 120GB SSD
36 x HGST 6TB 7.2K 6Gbps HUS726060ALE610 SATA Hard Drive
2 x 1280W Power Supply
I have added an extra 128GB of RAM to give 192GB in total to CPU 1. I will change the CPU to a v4 CPU when the cost of a 22-core v4 is cheap (3 to 4 years from now?). Should I upgrade those Intel S3510 120GB SSDs in the rear of the server? I plan to use these for the KVM system to run VMs off.
Should I replace those SSDs with 850 EVOs to get more write performance, as the S3510s have pretty poor write performance? Or will write performance not be an issue in this scenario? (I think more is always better on a server - right?) http://www.anandtech.com/show/9226/intel-releases-ssd-dc-s3510 (Read performance is reasonable - not great but good enough, surely more than a spinner; write performance is dismal for an SSD.)
How many NICs do I need for all this? There are 2 x Intel® X540 dual-port 10GBase-T NICs onboard. Do I need more NICs?
So if it all works out as described above (if not, please tell me what part I have got wrong), then I don't really understand why a VM running pfSense can't block access to the UNRAID OS for anyone except my home public IP - the VMs (pfSense and the other distro) will be exposed to the Internet full time and on a very fast uplink, but then pfSense should guard those VMs - right? So I am not sure why the IT people who are looking at this for me are saying it's not doable because of security. Can anyone provide an overview of how this should be set up to be secure? PM me if you want to discuss this professionally.
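The access rule being described boils down to a single allow-list entry. Here is a minimal Python sketch of that logic, purely illustrative (the IPs are made-up documentation addresses, not anything from the post, and a real pfSense rule would of course be configured in its own GUI/config, not in Python):

```python
import ipaddress

# Made-up example addresses (RFC 5737 documentation ranges).
HOME_IP = ipaddress.ip_address("203.0.113.10")  # hypothetical home public IP

def allow_to_unraid(source_ip: str) -> bool:
    """Pass traffic to the UNRAID management interface only if it
    originates from the single whitelisted home IP; drop all else."""
    return ipaddress.ip_address(source_ip) == HOME_IP

print(allow_to_unraid("203.0.113.10"))  # home IP: allowed
print(allow_to_unraid("198.51.100.7"))  # anyone else: blocked
```

The usual caveat the IT people may be raising: a rule like this protects the management interface, but only as long as the firewall VM itself (and the hypervisor under it) stays uncompromised.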
  8. Is there any way at all to have UNRAID on a server that will sit in a data center? I don't want to access files from the UNRAID server over the Internet; I just want to store files there as I FTP them to my server. I will be using a SuperMicro 36-bay SuperServer: the front 24 bays for UNRAID and the rear 12 for another distro. Thing is, I can't go 1U more, so I can't run a 1U server as a firewall. I am told a firewall running from within UNRAID on a VM won't secure the UNRAID server. Basically I have been told to forget it, it's too insecure (to run UNRAID like that). I read an old thread here about Internet access, but that was from 2009. Has anything changed at all?
Why do I want to do this? Because I want to run as JBOD and not lose HDD space to a RAID 6 or RAID 10 setup. UNRAID just has so many good things about it - I am preaching to the converted on this forum. The idea of UNRAID is to use it as cold storage while I am getting my files onto the 24 HDDs. Files will be transferred to UNRAID from files loaded onto the rear 12 HDDs running another distro. There has to be some way to do this without exposing UNRAID to the Internet nasties. Any ideas at all, or is this just not doable?
  9. Good question - out of stock, with no price given as a result. In the US they are quite expensive, so they come nowhere near being suitable. 10TB vs 8TB over 22 HDDs does give 44TB more capacity - but at huge cost, given what I see in the US. Thank GOD I am not an accountant - I hate counting pennies. But it has to be looked at, at least approximately. I'm done for the day - tired of reading spec sheets, looking up websites and crunching numbers on the calculator :-) Thanks for your input, Garycase.
  10. OK, thanks for that. With UNRAID, when a drive is not reading or writing it spins down - right? So essentially no power used, or at least so minuscule that it's insignificant. I just want to make sure I have all my facts right. Still got to crunch numbers on power usage cost, but with the lower power requirements of the 10TB IronWolf and its lower weight (they will be shipped to me one day, and air freight is expensive), it might not be that big a difference in cost over the life of the drives between the 8TB and 10TB IronWolf. I am going to pay for it all eventually, so I have to be aware of these things. The 10TB IronWolf is 6.8W avg / 4.42W idle / 0.8W standby / 650 grams.
  11. What are "load/unload cycles"? The IronWolf drive has 1M hours MTBF, 600,000 load/unload cycles, 9.0W avg / 7.2W idle / max temp 70C / 265.00 Euro. The Archive drive has 800,000 hours MTBF, 300,000 load/unload cycles, 7.5W avg / 5.0W idle / max temp 60C / 259.00 Euro. It's a wash. Do I go for 800,000 hours MTBF? ... or is 600,000 load/unload cycles more important? The roughly 2.5W per drive difference in power usage is about 50W over 22 drives (20 data, 2 parity, plus 2 x SSDs for cache) in the front of the server. There are 12 HDDs in the back of the server too, but I am not sure on the back 12 yet. 50W x 24 hours a day x 365 days a year = ? kWh, at €0.17 per kWh. I guess it all depends on the final config and usage, and on whether the HDDs sit in idle mode or actually power off when not used under UNRAID. Not sure about the last part because I have never used UNRAID.
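The "= ? kWh" above is easy to fill in. A quick sketch using only the figures from the post (2.5W delta per drive, 22 front drives, €0.17/kWh); note the post rounds 22 x 2.5W = 55W down to 50W, and this uses the unrounded value:

```python
# Annual energy cost of the ~2.5 W per-drive power difference
# across the 22 front drives, at the quoted electricity price.
drives = 22
watts_per_drive = 2.5       # avg power delta between the two drive models
price_per_kwh = 0.17        # EUR, figure from the post

total_watts = drives * watts_per_drive          # 55 W continuous
kwh_per_year = total_watts * 24 * 365 / 1000    # W -> kWh over one year
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.0f} kWh/year, ~EUR {cost_per_year:.2f}/year")
```

So the difference is on the order of 480 kWh and roughly €80 a year if the drives really did draw that delta continuously; actual spin-down behaviour under UNRAID would reduce it further.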
  12. Hey Garycase, again, thank you. Maybe you can see where I am going with all this: a layered approach. First figure out how I run the VMs, then the HDD subsystem has to be figured out. Lots more questions coming as I put the pieces in place. You are right about power usage. I want to put 36 drives in a server in a data center in Europe, where electricity is charged out as consumed and is expensive (at least compared to where I am physically located). So the power draw per drive is important to me - one of the reasons I want to use UNRAID in the server: drives can be spun down and idle when not in use, while still getting the advantage of parity protection, and I can ask "smart" hands to easily pull a drive if it dies and replace it without taking down a whole array. If a drive sits essentially idle in UNRAID, does it use any significant power at all? Heat is also an issue - the less the better, it has to be said. But in the DC it's pretty cool all the time.
  13. I have to buy my drives in Europe because it makes more sense than buying in the US and shipping to the DC to go into a server. So these are the prices I have found in Europe: Seagate IronWolf 8TB 3.5" HDD (SATA-600) @ 210 MB/s: €265. Seagate Archive HDD v2 ST8000AS0002, 8TB 3.5" HDD (SATA-600) @ 150 MB/s: €259. Both rated at 180TB a year, 1M hours MTBF, so no real difference there on duty cycle. Intended use is write once, read many, for very large media files - average size 20+ GB. There will be MANY reads from these drives. Which would you go for, and why? Thanks for your opinion.
  14. Thanks Gary. A goldmine of information, as always. Right, so off to the UNRAID hypervisor sub-forum I go. If you don't mind, what are the pros/cons of either way? Will the built-in hypervisor give better performance to the other VMs, or just make UNRAID perform better? I need to get this basic concept right before I go too much further. I want to spec and purchase hardware, so knowing how I should be setting up helps.
  15. It's been a while since I looked at UNRAID, but now 6.2 is here with 2 parities and 30 drives etc., so it's time to get on with this and get 'er done. I need to do several things on one server setup - OK, so VMs. Do I install UNRAID on bare metal and then run the other VMs on top of UNRAID, OR could I set up a hypervisor on bare metal and run UNRAID in a VM? I am pretty sure it's "install UNRAID as the base OS and run VMs from there", but I just want to make sure I have this right as I try to figure out how to do what I want to do.
  16. Great, thanks! Thought as much, but best to check first with those who know.
  17. Will these drives hold up to LOTS of reads? I want to use them in a torrent server - I expect LOTS of reads from them over time. The files will only ever be written to the HDD ONCE. I collect BD-25s and 50s, so big files. I typically leave them seeding for 6+ months at a time, where they are being read from quite a bit, but once they are written to the drive they never get deleted or moved. I expect to keep the files for the life of the drives. So the only real wear is from being read. Given this use case, how will they hold up?
  18. Albian, did you ever find an answer to this? On your R710, what HBA card do you use?
  19. tdallen... excellent post! This is very clear and concise information. It explains it all perfectly for newbs... like me. V81... thank you for asking this question. I was also struggling to get a grip on it all. I am so glad you asked this because, as I said above, a very clear overview was given by tdallen. Thank you all!
  20. OK, so looking further into this... would a Dell R510 12-bay server be OK for this application (co-lo unRAID)? Looking on eBay for a used server. I want to populate the server with the Archive drives and a few SSDs - I have read that one can use Dell servers without using Dell HDDs. Any thoughts about that idea? If you don't like the R510 (why?), what would you suggest? (Again - why?)
  21. Brit - yes, I hear you - I had not even thought of security. What about if the server is behind a pfSense firewall? The firewall can be a 1U server on its own (looks like I have to take a minimum of 5U of rack space, so I have room for such things). garycase - lots of very good info, as always, in your post. Lots of food for thought there for sure. Helmonder - "binge watching"... hahahahaha... yeah, I guess it looks like that from how I described it.
  22. Volume is multiple TBs of data. I would only want to ship once a quarter or so - as many TBs as 4-6 drives can hold. Using 8TB drives with around 7.5TB of usable space, that would be somewhere around 45TB over 6 drives. So if it did become a shipment once every 3 months, then it's about 15TB a month, about 3 to 4 TB a week of collected data. But no hard and fast rule - however it works out, and I guess it depends on the server setup.
It's an international shipment - so DHL etc. to me. There are customs and duty on the incoming HDDs, but that's fine. Every shipment gets hit with around $300 of fees, so I only want to do a shipment when enough data has built up that I could take 4, 5 or 6 drives out and have them shipped altogether as one incoming shipment. Drives would be well packed for shipment - I don't need any fault tolerance while in transit. If I lost any data, it could be downloaded again, worst-case scenario. It's a pain to have to do it all again if it came to that, but if I had to, it's no train smash.
One of the issues that got me thinking along these lines is that the Seagate Archive drive is ideal for this application - you guys have me convinced of this after reading the threads here on them and the answers I got to some questions I asked about them. However, they do seem to have high failure rates in some circumstances. So it makes sense to buy them in the same country as the DC is in, fit them to a server, and run them through the testing sequence you guys put your drives through. If any drive fails its testing, it can be returned to the seller in the same country before any shipping fees etc. are incurred for that drive. So having an unRAID server to mount them in and test on seems the absolute ideal way to make sure all is well with them in an unRAID environment.
Other thoughts are that on v6 of unRAID, virtual servers and different OSes can be run in various ways - so a control VM to remotely run and control all the functions of the server should be doable. Not really worried about all the fancy stuff like Plex in the cloud etc., as I don't have the bandwidth to enjoy all that from here behind the banana curtain.
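The shipping economics sketched in the post can be sanity-checked with a couple of lines. This uses only the figures given above (8TB drives at ~7.5TB usable, 6 drives per batch, ~$300 flat fees per shipment); it is a back-of-envelope illustration, not a quote:

```python
# Amortise the flat per-shipment fee over the data a batch carries.
usable_tb_per_drive = 7.5    # ~7.5 TB usable on an 8 TB drive, per the post
drives_per_shipment = 6
fee_per_shipment = 300       # USD, approximate customs/courier fees

tb_per_shipment = usable_tb_per_drive * drives_per_shipment
fee_per_tb = fee_per_shipment / tb_per_shipment

print(f"{tb_per_shipment:.0f} TB per shipment, ~${fee_per_tb:.2f} in fees per TB")
```

Waiting for the full 6-drive batch keeps the fee overhead under $7 per TB; shipping 2 drives at a time would triple it.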
  23. Would it make sense to co-lo an unRAID server? I am considering how to move large amounts of data from a download point in a data center to a remote home location with extremely poor Internet speeds. It's just impossible to move TBs of data from the DC to home with my Internet connection, and there is no way to improve the connection. The normal way to go about this, for people who live in the 1st world with REAL connections (100+ Mbps), is to download files to a server in the DC, then FTP the files from the DC to the home location. But because of my very, very low bandwidth (think dial-up), this does not work for me.
So given that particular scenario, I am wondering if it would make sense to have an unRAID server built up for the purpose of downloading my files to it in a co-lo, then having the HDDs pulled from the server when full and sent to me to install in an unRAID server at home. Is it possible to move HDDs from one unRAID to another like that? If not that, then move files on an unRAID server to particular drive(s) on the server and have that/those HDD(s) sent to me to put into an unRAID server at home. I don't know how practical it would be to try to use unRAID this way - I've not used unRAID before, and I am reading and reading at this time trying to figure it all out. So given my basic understanding of unRAID at this time, is what I am considering practical? Thought I would throw the question out there for those who know what they are doing with unRAID while I keep chugging through the reading, trying to understand all the ins and outs of unRAID. Thanks for any thoughts on this.
  24. Guys, thanks for the replies. Yes, damage in shipping - I hadn't even thought of that! Lots to think through here. Hmmmm...