morti

Members · Converted
Posts: 74
Gender: Undisclosed

morti's Achievements: Rookie (2/14) · Reputation: 0
  1. I was off the news regarding Unraid for quite a while... but if you want to have drives in a VM and do more with them than just store some unimportant stuff, it might be a better idea not to pass a drive directly to a VM, but to pass an entire controller to the VM, like a SATA controller on a PCIe card, or one of the PCIe-connected controllers off the motherboard itself. The same goes for USB devices versus USB controllers. One thing that would be interesting for me is the Synology Active Backup feature. I would set up an Unraid box, put an Xpenology VM on it, and use that to back up ESXi VMs either to Unraid itself or to storage on the Synology VM. I am not certain yet whether that would work... I only just found out about Synology's Active Backup capabilities on free ESXi servers myself.
  2. If it's all command line, you could go the VM route and SSH into the machine to run the commands from the VM itself. Lots of people here use MC or other tools to have the work done by the Unraid machine itself. You can also set your array up as network storage and let stuff download directly to it.
  3. Dang it, I was one of the first to find the drives online for that price while researching Seagate's new series, but they were not delivered in Germany. Where did you buy? Oh, £177.59 just gave me a hint.
  4. Hyper-V can be turned off in Unraid if you use the button to display all VM options. Hyper-V is known to cause issues with Nvidia cards; my 580 worked fine after turning it off. A newer Nvidia card of mine works with it on as well. There is info about this on this forum and more via Google.
  5. Sorry for not reading it all, but did you turn off Hyper-V already?
  6. What you are requesting is probably GPU passthrough: giving a VM a GPU that performs like one that is not in a VM. That way you could run another OS in parallel to your Windows gaming machine. If you want to run VMware stuff, and your PC is powerful enough to host multiple machines at once, I'd recommend considering VMware ESXi. There is a free version of it available, and there are folks on this forum who run Unraid and ESXi on the same machine. There are some things you will need though: 1. your CPU needs to support VT-d; 2. your motherboard needs to support VT-d (you often will not even find info about it in the manual); 3. the GPU you want to give to the gaming VM needs to be in a separate IOMMU group, as only then can you pass it to a VM. Nice to have: 4. your GPU should have UEFI boot capability; 5. your motherboard should be able to UEFI-boot from USB. You might also want to drop the VMware part altogether, as Unraid can do most of the stuff you would get on a free VMware license anyway. The server-grade stuff like failover is not available, but for most people that is out of the question anyway. I dropped the use of VMware completely, and all the PC operating systems in my home are virtualized except for one laptop; that is multiple gaming VMs running on the same server. To get better help, you do need to clarify what you want to do and what hardware you have. Maybe check out the Unraid videos about what is possible.
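Requirement 3 (separate IOMMU groups) can be checked from sysfs on any running Linux box. A minimal sketch, assuming the standard `/sys/kernel/iommu_groups` kernel layout; the helper name is my own:

```python
from pathlib import Path

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses inside it.

    An empty result usually means VT-d/AMD-Vi is disabled in the BIOS
    or missing from the kernel command line.
    """
    groups = {}
    for devices_dir in sorted(Path(root).glob("*/devices")):
        group = devices_dir.parent.name
        groups[group] = sorted(p.name for p in devices_dir.iterdir())
    return groups

# Usage: a GPU that shares its group with other devices (besides its own
# audio function) generally cannot be passed through cleanly.
# for num, devs in iommu_groups().items():
#     print(f"group {num}: {', '.join(devs)}")
```

If the GPU's group contains unrelated devices, the usual fixes are a different slot or a board with better IOMMU grouping.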
  7. First: I take it you have enough storage on servers one and two to do a complete copy without swapping out disks? If so, turn off parity on the destination, copy all the stuff over, then add parity (or double parity), and once all data is transferred and parity checks out, retire the older system. If you have parity while filling the disks, it will be rewritten several times, as parity changes every time one of the array drives changes. While the CPU is rarely the limiting factor here, write/read confirmation on two disks takes longer than on just one. You also spare your parity drive the heavy load of all that writing. Second: I assume the limiting factor is your network and not the write speed of your disks. In that case, especially without parity, you could add data to the new system faster by using USB on top of it. You might have one or two USB storage drives available that you could fill locally on your old server and then attach to the new one. Be sure to pick, as the source for the USB transfer, data or a drive that is not currently being read for the Ethernet transfer; if you copy movies over Ethernet, for instance, why not copy music (on another disk) to USB. Not really serious: ultimately, pass a USB controller (for the Unraid USB flash drive) and a SATA controller card (for your new HDDs/SSDs) through to a VM on your old server, and start your new Unraid server inside your old one. That should achieve 10 Gbit speeds at best, if it works out at all...
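The parity point above can be made concrete with a toy count of physical writes. This is a sketch with made-up numbers, not Unraid's actual I/O path:

```python
def physical_writes(data_blocks, parity_drives):
    """Every data block written also forces one write per parity drive,
    so filling an array with parity enabled costs extra physical writes."""
    return data_blocks * (1 + parity_drives)

# Filling 4 data drives of 1000 blocks each:
blocks = 4 * 1000
no_parity = physical_writes(blocks, 0)      # parity disabled during the copy
double_parity = physical_writes(blocks, 2)  # parity maintained the whole time
```

Filling first and building parity once afterwards does the parity writes a single time at the end instead of on every block, which is the whole argument for disabling parity during the initial copy.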
  8. I bought two 2683 v3 (SR1XH) for the server I set up at work. The "odd" core count is bad for some tasks, as it isn't 2^x, and some workloads do not work well with those core counts (so I have been told). I have not run a lot of stuff on these chips yet, but I have had the machine for a week or two and am pleased with the result. We ordered at about 400 € per piece, coming with at least some short warranty. Do you mind telling what you had to pay for the one CPU? The MHz count on them isn't high, but the Passmark score is nice, especially if you pack multiple CPUs; you can definitely run a bunch of VMs on "56 cores". I prefer that system over the 2670 v1 most people have bought (including me), as the heat output and power draw are significantly lower. The motherboard seems clean to me, but you should have gotten more RAM DIMMs, as you can only make use of the full bandwidth if you go for at least 4 DIMMs, filling all the blue channels. You might want to reconsider the purchase, as you probably do not need 32 GB, and you definitely should go for 4 DIMMs. IPMI would have been a nice feature to have, but probably something you can do fine without?
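The DIMM advice comes down to memory channels. A back-of-the-envelope sketch, assuming DDR4-2133 for the v3 platform (adjust the transfer rate for your actual DIMMs):

```python
def peak_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Theoretical peak memory bandwidth: transfer rate (MT/s) times the
    8-byte-wide bus, times the number of populated channels."""
    return mt_per_s * bus_bytes * channels / 1000  # GB/s

one_dimm = peak_bandwidth_gbs(2133, 1)   # only one channel populated
four_dimms = peak_bandwidth_gbs(2133, 4)  # all four channels populated
```

Real throughput is lower than the theoretical peak, but the 4x gap between one populated channel and four is why a single big DIMM is a bad buy on a quad-channel board.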
  9. http://www.asrock.com/mb/spec/card.asp?Model=USB%203.1/A%2BC No power connector. 10 Gb USB, with USB-C. If you use a hub anyway, the single old-style Type-A port is no problem. It has jumpers to set the behaviour for sleep modes, allowing power draw or not (the spec says it can draw a lot of power from PCIe, which I did not test). I can confirm multiple of these working in my server, passed to multiple VMs at once. Windows 10 has drivers for it out of the box. I paid less for it than I would have for USB 3. It is small, so mounting full-size stuff around it probably won't mess up fan intakes. The link is only there to identify the device.
  10. I have set up 8 cores per VM; 4 would be well enough. I would not mind the clock speed too much; instructions per clock, or per-clock performance in general, keeps increasing as well. You wouldn't go and buy a Pentium 4 for its clock. Take Passmark scores into consideration. I would have to do some testing to figure out which CPU to put behind which VM, not only because of the CPU pairs and which socket such a pair is on, but also which PCIe lanes the GPU attached to the VM is on, and which CPU is behind those. It would not make much sense to have the GPU connected to CPU 1 while the machine is running with 8 cores on CPU 2. I do not know exactly if it will make a difference, but it might; there is a lot of stuff I haven't figured out yet. In the end I would not spread one VM over several sockets, so that there is no need to use the interconnect between the CPUs if the scheduler decides to move a task from one CPU to another. The CPU I have is more than enough to host two Windows 10 gaming VMs, so I do have another 16 threads Unraid can play with. It isn't easy at all to put all that processing power to good use, and I already split it between multiple people. When UHD/4K wasn't all over the media yet in my country, I imported a UHD monitor and did some emulating on PCSX2, which is a PS2 emulator. Basically the goal was to show (old) games in the best possible light for demonstration purposes. It was hard to run a game in UHD and capture it at once, as the capture cards could not do UHD at 60 Hz, nor were Nvidia cards able to handle that resolution with ShadowPlay. In the end I was forced to merge the gaming part and the data-storing part into one machine, as we are talking about several gigabytes per second over longer periods. While some games will not run below a certain number of cores, I guess you will be just fine with either of these CPUs.
There are some games that only use a single core, but you would not be the only one struggling with those applications if they are hungry ones too. While it is incredible what you get per $/€ or whatever currency you use on some CPUs, keep in mind what you actually want to do with the machine, what you might want to add in the future, and how long you might want to keep the system running. I made a really good deal on pretty much every single part of my build and came out way cheaper than I should have. In the end though, I would have spent double if I had known what I might come up with in the future, and how important this one machine is to me. What do you actually want to do with that system? How many VMs should it run? How many drives will be connected? And what odd hardware, expansion cards, or other quirks does it have to be compatible with? Lots of manufacturers struggle with useful/correct implementations of VT-d; knowing whether you are going to need it (or want it) is a big deal in deciding what hardware to go for. All of these CPUs are so much more than most people can actually make use of at a time. You will most likely not keep it running for more than a couple of years, as there will be more efficient ways to do the same down the line.
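The "which CPU is behind the GPU's PCIe lanes" question can be answered from sysfs on a dual-socket Linux host. A minimal sketch, assuming the standard `numa_node` attribute; the PCI address is a placeholder for your own card:

```python
from pathlib import Path

def pci_numa_node(pci_addr, root="/sys/bus/pci/devices"):
    """Return the NUMA node (roughly: the CPU socket) a PCI device hangs off.

    A result of -1 means the kernel has no NUMA info for the device
    (e.g. a single-socket box), in which case pinning by socket is moot.
    """
    node_file = Path(root) / pci_addr / "numa_node"
    return int(node_file.read_text())

# Usage: pin the VM's cores to the socket this GPU reports.
# pci_numa_node("0000:01:00.0")  # PCI address is a placeholder
```

If the GPU reports node 0, pinning the VM's cores to socket 0 keeps GPU traffic off the inter-CPU link, which is exactly the cross-socket penalty described above.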
  11. I should make a signature some day. I use two Xeon 2680 v1. Lots of people got the 2670 v1 for 50-100 dollars; I was just super lucky to get the 2680, which is normally several times more expensive, for as cheap, barely noticeable difference though. Both CPUs have 40 lanes, so it gets hard to use up the 80 PCIe lanes. Also 8 cores plus HT, so a total of 32 threads on two CPUs. The motherboards might cost some, but DDR3 registered ECC RAM is dirt cheap. I use a GTX 770 and a USB 3.1 controller per VM, and I have two of each. My girlfriend uses the PC simultaneously with me, sometimes gaming on a Minecraft server which runs on the machine in a Docker container as well. I cannot see a performance decrease in comparison to bare metal at all. I have set up port forwarding to be able to connect to Minecraft from outside the network, and other ports as well for other stuff like FTP access etc. Turning on the VMs works via smartphone and Wake-on-LAN, so I can easily track which ones are on and turn them on without accessing the web GUI; also easier for my non-techy gf to work with. I used a 4790K before to test whether what I want works with Unraid. It did, so I went for the more powerful hardware. Not a good idea to have more stuff going on than you have cores. What budget and location are we talking about?
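The Wake-on-LAN part needs nothing exotic; the magic packet format is standard (6 bytes of 0xFF followed by the target MAC repeated 16 times). A self-contained sketch; the MAC address shown is a placeholder:

```python
import socket

def magic_packet(mac):
    """Build a standard WOL magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet as a UDP broadcast on the usual WOL port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake_on_lan("aa:bb:cc:dd:ee:ff")  # placeholder MAC; use your host NIC's
```

The NIC and BIOS have to have WOL enabled for the packet to actually power the machine on; smartphone WOL apps send exactly this packet.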
  12. It might be an idea to do the editing right there on the NAS if there is a lot of access to the storage. If, however, most data is produced and then handed to the NAS for mostly reads only, it will not help much. The thing is, it takes time to move the data from one PC to the other, and Gigabit is getting a bit too slow for frequent transfers of multiple hundreds of gigs. You should think about bonds, or setting up a 10 Gig interface, or just working on the NAS itself; 10 Gig interfaces can be had cheap. If you do not have an offsite backup solution up, I would recommend double parity as well, as a failure can happen while you rebuild your array, especially with larger drives, since that takes longer. Make sure you are aware of shingled drives, as you probably want high write speeds, which these do not deliver.
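To put numbers on the Gigabit-versus-10-Gig point, a rough transfer-time sketch (the 90% line-rate efficiency is an assumption for protocol overhead, and disks are assumed not to be the bottleneck):

```python
def transfer_hours(size_gb, link_gbit, efficiency=0.9):
    """Rough hours to move size_gb of data over a link of link_gbit,
    assuming a fraction `efficiency` of line rate is usable."""
    seconds = (size_gb * 8) / (link_gbit * efficiency)
    return seconds / 3600

gigabit = transfer_hours(500, 1)    # ~1.2 hours for 500 GB
ten_gig = transfer_hours(500, 10)   # roughly a tenth of that
```

For a one-off migration an overnight copy is fine; for transfers that happen every editing session, the 10x difference is what makes a cheap 10 GbE card (or working on the NAS directly) worth it.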
  13. The non-server CPUs struggle with PCIe lanes, not only through the motherboard choices but the lane count on the CPU as well. You get 28 lanes total on that 5820. Your GPU is probably fine with x8, but that would still account for 32 lanes together with the SATA controllers, of which you have 3? Xeon E5 would be the way to go in my recommendation: 40 lanes, or double that with dual CPUs. The lanes from the chipset can help, but I would want to have some headroom if I go for a new server in the first place. Speaking about used hardware, did you check the used market for older Xeon E5? I spent less than 1000 € on a 32-thread, 128 GB machine, and even came out with a plus selling the hardware I did not need anymore due to virtualization.
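The lane arithmetic above can be sketched as a simple budget. The device list is hypothetical (GPU at x8 plus three x8 SATA/HBA cards, matching the post's rough count), not the poster's exact build:

```python
def lane_budget(cpu_lanes, devices):
    """Subtract each device's desired lane width from the CPU's total;
    a negative result means not everything can run at full width."""
    return cpu_lanes - sum(devices.values())

build = {"gpu": 8, "hba1": 8, "hba2": 8, "hba3": 8}  # hypothetical build
headroom_5820 = lane_budget(28, build)  # i7-5820K class: 28 CPU lanes
headroom_e5 = lane_budget(40, build)    # single Xeon E5: 40 CPU lanes
```

In practice boards also route some slots through the chipset, but those lanes share one uplink to the CPU, which is why CPU lane count is the number that matters for headroom.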
  14. VT-d is often not supported; heck, even on non-ES CPUs there are hiccups (or worse) with certain steppings. If you want to go with hardware passthrough for VMs, I would not recommend ES CPUs at all. If you do not need that feature, you should be fine if you do the necessary research. You probably also do not want to game on the machine, which sometimes is a problem if only a few cores turbo up, or the turbo is lower in general; not really that big of a deal for an Unraid NAS box though. I saved a lot of money selling all but one of my other PCs and virtualizing everything from NAS to gaming. The stuff I learned resulted in me taking over all the infrastructure jobs at work as well.
  15. That would be a good marketing strategy, and some people might even make use of it.