Jim Beam

Members
  • Content Count: 29
  • Joined
  • Last visited

Community Reputation

0 Neutral

About Jim Beam

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed


  1. Actually... see this: https://smallformfactor.net/forum/threads/nverse-a-highly-versatile-steam-box-design.34/page-11 "Is there any way to install a fan over any of the 3.5" drive areas?" "Thanks! You can install 120mm fans where you would normally install the AIO cooler radiators, which overlaps with the area for some hard drives. There is also space for 120mm fans over the GPU area, which is also a hard drive area if you're not using a GPU." Seems there is hope yet. I like this case: a dense storage node with mini-ITX. Will be interesting to see it come to life.
  2. Still, it's an interesting concept. If nothing else, 10 points for originality and for actually trying to get something out there and done. Will be interesting to follow this and see where it goes.
  3. It's OK, I've got it under control. I have someone working on it for me. I would do it myself, but I am on another continent from where the server is. There is nothing difficult about setting up a server with an OS if one knows what one is doing. This is pretty standard fare for folks who do this daily for a living - nothing special here. Anyway, it's moving forward. Thanks for the input.
  4. Came across this case - very innovative indeed. It takes a mini-ITX board and has 4 possible configs - the storage config would suit us Unraiders, I would think: http://www.rationalbananas.com/specifications https://www.youtube.com/watch?v=9_xlfJj87hY This is a VERY nice mini-ITX board - http://www.asrock.com/mb/Intel/Fatal1ty%20Z270%20Gaming-ITXac/index.us.asp It has Thunderbolt 3 onboard plus 1x PCIe 3.0 x4 M.2, 6x SATA III, 1x PCIe 3.0 x16 slot, up to 32GB RAM, and is i7-7700K compatible (Z270 chipset). All of these parts together would make a nice build. Thoughts?
  5. Thanks StevenD. I contacted Limetech to ask them to do this work for me - I had one reply from them, I answered the question they posed, and never heard back. Seems they don't want to touch it either; not sure why. So where do I find an UNRAID expert who can set this up for me? I am remote from the server (it's in the US), and when I send it to a DC (in the EU) to retrieve my many TBs of data, it then comes to me. I need it to arrive loaded with data, ready to go. Willing to pay, but who do I pay to do this?
  6. What are the pros and cons of this approach? So if one was to go this way, UNRAID is now a guest on an ESXi host - I get that that can be done and, further, that a pfSense setup can also run as a guest under ESXi - does that mean that in that setup the UNRAID VM is perfectly safe? If I went that way, it would seem that I can run anything I want as a VM under ESXi: UNRAID as a VM, giving it the front 24 HDDs as JBOD per the UNRAID method, and another guest VM running some other distro that has the back 12 HDDs passed through to it and which allows me to import my files from the Internet to t
  7. Apologies in advance - please excuse the wall of writing below; I wanted to go in-depth here to make sure it's clear what I am trying to do. The idea is that the SuperMicro server will sit on a public IP in the DC so I can move my files from the Internet onto the rear 12 drives under a VM running a different distro, and then move those files as needed from the back 12 HDDs to the front 24 HDDs, where they will be stored as cold storage under UNRAID in a JBOD setup. I really love that idea. So that means to do that I have to run UNRAID as the host on bare metal and use KVM as the V
  8. Is there any way at all to have UNRAID on a server that will sit in a data center? I don't want to access files from the UNRAID box over the Internet; I just want to store files there as I FTP them to my server. I will be using a Supermicro 36-bay SuperServer: the front 24 bays for UNRAID and the rear 12 for another distro. Thing is, I can't go 1U more, so I can't run a 1U server as a firewall. I am told a firewall running from within UNRAID in a VM won't secure the UNRAID server. Basically I have been told to forget it, it's too insecure (to run UNRAID like that). I read an old thread he
  9. Good question - out of stock, with no price given as a result. In the US they are quite expensive, so they come nowhere near being suitable. 10TB vs 8TB over 22 HDDs does give 44TB more capacity, but at huge cost given what I see in the US. Thank GOD I am not an accountant - I hate counting pennies. But it has to be looked at, at least approximately. I'm done for the day - tired of reading spec sheets, looking up websites and crunching numbers on the calculator :-) Thanks for your input, Garycase
  10. OK, thanks for that. With UNRAID, when a drive is not reading or writing it spins down - right? So essentially no power is used, or at least so little that it's insignificant. I just want to make sure I have all my facts right. Still got to crunch numbers on power usage cost, but with the lower power requirements of the 10TB IronWolf and its lower weight (they will be shipped to me one day, and air freight is expensive), it might not be that big a difference in cost over the life of the drives between the 8TB and 10TB IronWolf. I am going to pay for it all eventually, so I have to be aware o
  11. What are "load/unload cycles"? The IronWolf drive has 1M hours MTBF, 600,000 load/unload cycles, 9.0W avg / 7.2W idle, max temp 70C, 265.00 Euro. The Archive drive has 800,000 hours MTBF, 300,000 load/unload cycles, 7.5W avg / 5.0W idle, max temp 60C, 259.00 Euro. It's a wash. Do I go for 800,000 hours MTBF? ...or is 600,000 load/unload cycles more important? The roughly 2.5W-per-drive difference in power usage is about 50W over 22 drives (20 data, 2 parity, plus 2x SSDs for cache) in the front of the server. There are 12 HDDs in the back of the server, but I'm not sure on the back 12 yet. 50W x 24 hours a day x 7
  12. Hey Garycase, again, thank you. Maybe you can see where I am going with all this: a layered approach. First, how do I run the VMs; then the HDD subsystem has to be figured out. Lots more questions coming as I put the pieces in place. You are right about power usage. I want to put 36 drives in a server in a data center in Europe, where electricity is charged as consumed and it's expensive (at least compared to where I am physically located). So the power draw per drive is important to me - one of the reasons I want to use UNRAID in the server - drives can be spun down and idle
  13. I have to buy my drives in Europe because it makes more sense than buying in the US and shipping them to the DC to go into the server. So these are the prices I have found in Europe: Seagate IronWolf 8TB 3.5" HDD (SATA-600) @ 210MB/s: €265,-. Seagate Archive HDD v2 ST8000AS0002 8TB 3.5" HDD (SATA-600) @ 150MB/s: €259,-. Both are rated at 180TB/year workload and 1M hours MTBF, so no real difference there on duty cycle. Intended use is write once, read many, for very large media files, average size 20+GB. There will be MANY reads from these drives. Which would
  14. Thanks Gary. A goldmine of information, as always. Right, so off to the UNRAID hypervisor sub-forum I go. If you don't mind, what are the pros/cons of either way? Will the built-in hypervisor give better performance to the other VMs, or just make UNRAID itself perform better? I need to get this basic concept right before I go too much further. I want to spec and purchase hardware, so knowing how I should be setting up helps.
  15. It's been a while since I looked at UNRAID, but now 6.2 is here with 2 parities, 30 drives, etc., so it's time to get on with this and get 'er done. I need to do several things on one server setup - OK, so VMs. Do I install UNRAID on bare metal and then run the other VMs on top of UNRAID, OR could I set up a hypervisor on bare metal and run UNRAID in a VM? I am pretty sure it's "install UNRAID as the base OS and run VMs from there", but I just want to make sure I have this right as I try to figure out how to do what I want to do.
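The bare-metal-UNRAID-plus-KVM layout described in post 7 can be sketched with libvirt, which is what unRAID 6.2's VM manager uses under the hood. This is illustrative only, not the poster's actual setup: the guest name `importer` and the `/dev/disk/by-id/...` paths are placeholders for the rear drives, and by-id paths are used because they survive device reordering across reboots.

```shell
# Illustrative config fragment: attach two of the rear drives to a
# libvirt guest named "importer" as raw virtio block devices.
# Names and paths are placeholders, not from the original posts.
virsh attach-disk importer \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 vdb \
  --targetbus virtio --persistent

virsh attach-disk importer \
  /dev/disk/by-id/ata-EXAMPLE_SERIAL_2 vdc \
  --targetbus virtio --persistent
```

With `--persistent`, the attachment is written into the domain XML so the guest keeps the drives after a restart; repeating the command per drive would cover all 12 rear bays.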
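The drive-power arithmetic in posts 10 and 11 can be sketched in a few lines. The idle figures (7.2W for the IronWolf, 5.0W for the Archive) and the 22-drive front array are from those posts; the €0.30/kWh electricity price is an assumed placeholder, since the actual data-center tariff is never given.

```python
# Rough annual electricity-cost sketch for the 22 front drives.
# Idle wattages are from the spec comparison in post 11; the
# electricity price is an assumption, not from the posts.

DRIVES = 22
IDLE_IRONWOLF_W = 7.2
IDLE_ARCHIVE_W = 5.0
PRICE_EUR_PER_KWH = 0.30      # assumed placeholder tariff
HOURS_PER_YEAR = 24 * 365

def annual_cost_eur(watts_per_drive: float) -> float:
    """Yearly cost if all drives sit at this draw around the clock."""
    kwh = watts_per_drive * DRIVES * HOURS_PER_YEAR / 1000
    return kwh * PRICE_EUR_PER_KWH

ironwolf = annual_cost_eur(IDLE_IRONWOLF_W)
archive = annual_cost_eur(IDLE_ARCHIVE_W)
print(f"IronWolf: EUR {ironwolf:.0f}/yr, Archive: EUR {archive:.0f}/yr, "
      f"difference: EUR {ironwolf - archive:.0f}/yr")
```

This treats idle draw as the worst case; with unRAID spinning drives down when not in use, the real gap would be smaller still, which supports the "might not be that big a difference" hunch in post 10.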
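The per-TB arithmetic behind the two European quotes in post 13 is quick to check, using only the prices and capacities given there:

```python
# Price-per-TB comparison using the two European quotes from post 13.
drives = {
    "Seagate IronWolf 8TB": 265.0,    # EUR
    "Seagate Archive v2 8TB": 259.0,  # EUR
}
CAPACITY_TB = 8

for name, price in drives.items():
    print(f"{name}: EUR {price / CAPACITY_TB:.2f}/TB")

# Across the 22-drive front array the purchase-price gap is small:
delta = (265.0 - 259.0) * 22
print(f"Total price difference over 22 drives: EUR {delta:.0f}")
```

At roughly €33/TB either way, and only €132 apart across the whole front array, the purchase price is close to a wash; the deciding factors end up being the workload rating and power draw discussed in the neighbouring posts.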