jonp Posted September 5, 2015
http://lime-technology.com/gaming-on-a-nas-you-better-believe-it/
spencers Posted September 5, 2015
Sweet post! Thanks for taking the time to compare.
freddiestoddart000 Posted September 8, 2015
http://lime-technology.com/gaming-on-a-nas-you-better-believe-it/
I have two questions about this configuration. How much does giving 4, 6, or 8 cores to Windows affect the performance of unRAID? What effect does this have on Docker containers that are running, e.g. Plex, TeamSpeak, a LAMP server?
jonp Posted September 8, 2015
"How much does giving 4, 6, or 8 cores to Windows affect the performance of unRAID?"
NAS functionality is by its very nature not CPU intensive but IO intensive. Gaming is not very IO intensive but GPU and CPU intensive. It's the perfect marriage.
"What effect does this have on Docker containers that are running, e.g. Plex, TeamSpeak, a LAMP server?"
It is recommended to use the --cpuset-cpus parameter in Docker to pin apps to specific CPU cores that your gaming VM is not running on. Isolating the CPUs is best, as it eliminates potential context switching. That said, the most CPU-intensive container is Plex when it's transcoding media. Most other containers are fairly lightweight in CPU usage. I'll be doing a follow-up blog showing the same tests while the other pCPUs are being taxed. There is an impact as CPU stress levels increase (temperature can rise too), but again, it's not so bad and the cost is still well worth it.
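For illustration, a minimal sketch of that flag. The container name, image, and core numbers here are hypothetical examples (assume an 8-core box where cores 2-7 belong to the gaming VM), not something from the blog:

```shell
# Hypothetical: pin a Plex container to cores 0-1, leaving
# cores 2-7 free for the gaming VM.
docker run -d \
  --name plex \
  --cpuset-cpus="0,1" \
  linuxserver/plex

# Confirm the pinning took effect:
docker inspect --format '{{.HostConfig.CpusetCpus}}' plex   # prints 0,1
```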
NAS Posted September 8, 2015
Something I have never been clear on, and perhaps this is a good place to ask. I game, but not every day. Can a user like me install a GPU, map it to a dedicated VM, and when that VM is powered down, do the same to the GPU, i.e. reduce my standby energy consumption?
jonp Posted September 8, 2015
"Can a user like me install a GPU, map it to a dedicated VM, and when that VM is powered down, do the same to the GPU, i.e. reduce my standby energy consumption?"
The cost savings from GPU power-saving mode are very minimal, but I believe that will work when we upgrade the kernel in the future. I'll have to look up the exact kernel version where this feature works, but I remember reading about it from Alex Williamson, the maintainer of VFIO. If I can find the link, I'll post it here later.
NAS Posted September 8, 2015
I think this would be an interesting clarification. Certainly it could bias the GPU purchasing decisions people make for an unRAID Steam machine. At the very least it would be good to be able to prove that an unRAID machine with a GPU installed that gets used for gaming a few hours a week does not cost more than a dedicated machine that is turned off when not in use.
jonp Posted September 8, 2015
"Certainly it could bias the GPU purchasing decisions people make for an unRAID Steam machine."
Found that it's actually already in our current builds of unRAID! This was added in kernel 4.1. Here's the relevant quote from Alex Williamson's blog:
"A couple other bonuses for v4.1 and newer kernels is that by binding devices statically to vfio-pci, they will be placed into a low power state when not in use. Before you get your hopes too high, this generally only saves a few watts and does not stop the fan."
Source: http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-3-host.html
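For anyone wondering what "binding statically" means in practice, a rough sketch. The PCI ID shown is an example for a GTX 750 Ti, and the exact config location is an assumption; check your own card's ID with lspci:

```
# Find the discrete GPU's vendor:device ID:
lspci -nn | grep -i vga
# e.g. "01:00.0 VGA compatible controller: NVIDIA GM107 [GeForce GTX 750 Ti] [10de:1380]"

# Then bind it to vfio-pci at boot by adding the ID to the kernel
# append line in /boot/syslinux/syslinux.cfg, for example:
#   append vfio-pci.ids=10de:1380 initrd=/bzroot
```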
WeeboTech Posted September 8, 2015
"The cost savings from GPU power-saving mode are very minimal."
One point that people rarely take into account is the heat produced by powered devices even while idle. While it may seem trivial, even a few 'idle', small, low-powered micro servers in my bedroom create an uncomfortable amount of heat. This forces me to use fans for exhaust or turn on the air conditioner every now and then. I'm even considering removing the GPUs from a Linux workstation and using the CPU's integrated GPU to save the precious watts and heat. It's more about heat for me. Excess heat ends up requiring cooling.
jonp Posted September 8, 2015
"I'm even considering removing the GPUs from a Linux workstation and using the CPU's integrated GPU to save the precious watts and heat."
I'm not sure that's a true apples-to-apples comparison with what we're talking about here. There are essentially four states to think of: standby, idle, active 2D, and active 3D (not the names of the actual power states). To clarify your sentence which I quoted above: it seems you are talking about removing a discrete (PCIe) GPU used to generate desktop graphics in exchange for the on-board, CPU-driven graphics processor. In that case, we're talking about the difference between on-board and discrete graphics and how much heat they generate / power they consume, which is really comparing both options as they pertain to active 2D graphics generation. However, in the scenario with unRAID and a discrete GPU being utilized for a VM, the proper way to do this is to configure the BIOS to output graphics only through the on-board CPU-based graphics device and reserve the discrete (PCIe) GPU for VM usage. With no VM started and vfio-pci bound to the discrete GPU, the GPU is essentially in a standby state and generating hardly any heat at all (as it's not even being used to generate 2D desktop graphics).
When a VM is started with the device assigned, we obviously go from a standby state to an active 2D state, which increases power draw and heat generation, but nowhere near the power/heat draw that occurs when a 3D application (a game) is started. Now you're pushing the GPU to do lots of work, which will increase power draw, heat generation, and noise output from the GPU fans. What I'm trying to get at is that I don't think a GPU installed for VM usage will generate much additional heat, if any at all, while the VM is not running, because it isn't even participating in generating host-based graphics (e.g. the local console).
There is another aspect of installing a discrete GPU that folks should be aware of, and that's the impact on overall airflow and cooling. Discrete GPUs (especially high-performance gaming-class graphics devices) can be large, taking up a lot of space inside the chassis and impeding airflow. Generally speaking, most cases should be designed to handle this, but let's not fool ourselves; the more space taken up inside a case, the less room heat has to move/flow out of it, and the hotter it can get in there. That is why proper cooling should be considered when designing a system for gaming. That said, for the use case we're describing (combining a gaming rig with a NAS), proper cooling would be required anyway, and I don't think folks will need to do much additional cooling above and beyond what they were already planning. I certainly wouldn't recommend a small form factor case for a beefy GPU without proper airflow, and placement of the unit is also an important consideration. Putting a small form factor unit with a high-performance GPU into an enclosed piece of home theater furniture is NOT a good idea. That's like putting a microwave inside an oven ;-).
NAS Posted September 9, 2015
So as I understand it then: if you add a GPU to a dedicated VM in unRAID and then shut down that VM, the power consumption of that GPU should drop to the card's idle rate or below? In the case of, say, the Nvidia GTX 750 Ti, this would be ~6 W at idle? Can we enhance this blog analysis to actually prove this under unRAID? Given we are essentially promoting a 24x7, always-on GPU, I think it is an important concern people "may" have.
jonp Posted September 9, 2015
"Can we enhance this blog analysis to actually prove this under unRAID?"
How about we save that for a follow-up blog post?
NAS Posted September 9, 2015
"How about we save that for a follow-up blog post?"
More than fine by me. Probably worthy of its own blog anyway. I know it's the first thing I thought about.
jonp Posted September 9, 2015
"Probably worthy of its own blog anyway. I know it's the first thing I thought about."
Honestly, I think that's more of a European/international thing. I don't think many US customers (especially gamers) tend to weigh the costs of GPU power consumption like that. That said, it is still a valid value proposition; if I can find a tool to measure actual power usage, it'd be worth covering in its own blog post.
NAS Posted September 9, 2015
I'm not too sure it's an EU-only thing; power consumption of GPUs is front and centre of every review I read nowadays, and it's one of Nvidia's three taglines for the entire GTX 750 series ("the 750 delivers twice the performance at half the power consumption of previous generation cards, all at a great value"). All you need is a Kill A Watt, a test rig, and some patience: http://www.amazon.com/P3-P4400-Electricity-Usage-Monitor/dp/B00009MDBU
WeeboTech Posted September 9, 2015
"That said, it is still a valid value proposition; if I can find a tool to measure actual power usage, it'd be worth covering in its own blog post."
I've always been concerned with power. I've had the Kill A Watt on my computer's power line for years to see how each piece affects power. A benefit of unRAID is that drives spin down into a low power state; that's a key point that brought me to unRAID years ago. There are plenty of posts about how little power people's machines sip while in the low power state. There's a reason some people will pull the video cards out and go headless. In my case, it's about heat and noise, heat being the primary concern. Every watt consumed has an effect on heat, from the source and all the devices in the chain, i.e. power supply and UPS. Every BTU can be a concern in an enclosed room, as there will then be cooling costs. I think it's really more about a machine that is on 24x7x365 vs. one that is turned on and off when gaming. People will still pay for their entertainment; it's just nice to know those costs. I look forward to a future post on how the power consumption subject pans out. As far as the Kill A Watt goes, I might suggest the power strip version so it's easier to read. In my case I have that version plus the UPS to use as a wattage benchmark.
NAS Posted September 9, 2015
"People will still pay for their entertainment. It's just nice to know those costs."
That's my prediction. If the technology works correctly, the extra annual power cost will likely be a fraction of the cost of a single component of a separate dedicated game machine. The key point is we need to prove it, so we can then add that as a plus point for the technology.
archedraft Posted September 9, 2015
"The key point is we need to prove it, so we can then add that as a plus point for the technology."
I would also be very interested in what adding an extra GPU actually adds. Another user reported not very much, but I would still love to see a breakdown like:
- unRAID system with drives - no added GPU
- unRAID system with drives - added GPU - VMs not running
- unRAID system with drives - added GPU - VMs running
or similar, to get an idea.
NAS Posted September 9, 2015
Some basic starting figures people can play with. The GTX 750 Ti idles at 6 W:
1 W x 8,760 hours (24 x 365 hours in a year) = 8,760 Wh = 8.76 kWh (1 watt for 1 year)
6 W = 52.56 kWh
My price is approx £0.13 per kWh.
Total cost of a GTX 750 Ti idling for a year: £6.83 = trivial.
However, if the idle technology doesn't work, this could mean a tenfold increase, which quickly becomes serious money. Also important is the choice of GPU: for instance, a GTX 980 idles at 73 W, which equates to a more serious £83.13 per annum just idling. So the key is the right choice of GPU and proof that the technology works.
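Those figures can be reproduced with a one-liner; here's a small sketch using the wattage and tariff above (adjust both for your own card and price):

```shell
# Annual energy and cost of a GPU idling 24/7:
# kWh = watts * 8760 hours / 1000; cost = kWh * price per kWh.
idle_watts=6          # GTX 750 Ti idle draw (from above)
price_per_kwh=0.13    # GBP per kWh (from above)

awk -v w="$idle_watts" -v p="$price_per_kwh" \
    'BEGIN { kwh = w * 8760 / 1000; printf "%.2f kWh/year, %.2f per year\n", kwh, kwh * p }'
```

Swapping in 73 W for a GTX 980 gives the £83-per-annum figure the same way.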
JonathanM Posted September 9, 2015
"Total cost of a GTX 750 Ti idling for a year: £6.83 = trivial... So the key is the right choice of GPU and proof that the technology works."
Always-on heaters (which is what computers are) make for tough math. During some parts of the year, you can use the heat to reduce your other energy usage, offsetting virtually all the cost. Other parts of the year force you to pay money to pump that heat out of the house with A/C, doubling or tripling the cost for that specific interval. The bottom-line cost is very difficult to calculate accurately for any individual circumstance, and impossible to generalize for all.
NAS Posted September 9, 2015
I don't deny that, but we need to inform users of the cost variables that are in play so they can make an informed choice.
nia Posted September 9, 2015
I would be very interested in an analysis like the one mentioned below. With prices here in Denmark of £0.22 or $0.34 per kWh, power consumption in a 24/365 scenario is most certainly a very important parameter. I have been on the fence with this for a while, postponing the replacement of Junior's rig (standing right next to the unRAID server) with either a new mobo/GPU or a virtual setup while pondering considerations like these. And as the server is standing right here in our home office, noise and heat are important too. So I will stay tuned for any analysis, particularly on power consumption, as a "wrong" decision could very well end up becoming quite expensive (almost doubling NAS's numbers could very quickly make the case for a separate, power-off-able rig).
jonp Posted September 10, 2015
Looking to order one of those power monitoring devices tonight! Once received, I will put unRAID through its paces to measure power consumption on the GPU.
GHunter Posted September 10, 2015
"I would also be very interested in what adding an extra GPU actually adds... unRAID system with drives - no added GPU; added GPU - VMs not running; added GPU - VMs running."
I think this would be interesting to know. Compare this to a dedicated gaming rig and the power it consumes. Then calculate a month's power consumption with 40 hours of gaming per month on unRAID with a gaming VM vs. a dedicated gaming rig that only runs 40 hours per month. I'd be willing to bet that the additional power unRAID would consume for the month would still be less than that of a dedicated gaming rig.
Gary
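A back-of-the-envelope version of that bet. All wattages here are purely illustrative assumptions (a 250 W dedicated rig under load, a GPU drawing 60 W under load and 6 W at idle), not measurements:

```shell
# Compare a dedicated rig running 40 h/month against the *extra*
# draw of leaving a passed-through GPU in the NAS 24/7.
# All wattages below are illustrative assumptions, not measurements.
awk 'BEGIN {
  hours = 730; gaming = 40                    # approx hours/month; gaming hours
  dedicated = 250 * gaming / 1000             # whole rig, gaming hours only (kWh)
  extra_gpu = (6 * (hours - gaming) + 60 * gaming) / 1000   # GPU idle + load (kWh)
  printf "dedicated rig: %.2f kWh/month, extra GPU in NAS: %.2f kWh/month\n", dedicated, extra_gpu
}'
```

Under these assumed numbers the always-on GPU does come out ahead, but only the real Kill A Watt measurements will settle it.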
bardsleyb Posted September 10, 2015
"It is recommended to use the --cpuset-cpus parameter in Docker to pin apps to specific CPU cores that your gaming VM is not running on. Isolating the CPUs is best, as it eliminates potential context switching."
I had never heard of this before and really wanted to know if there was a way to isolate which cores a certain Docker container can use. In my case, SABnzbd likes to work really hard, and my Intel i5 just cannot keep up sometimes, it seems. When SABnzbd is unpacking certain things, my Plex, which normally works fine even under minimal transcoding, will start to stutter and skip until the file is unpacked. When I look at the unRAID dashboard, I see my CPU sitting at 100% until that file is unpacked. I don't care how long it takes to unpack certain files, so I would rather give SABnzbd just one core (two at most) and let it go. This would solve so many of my issues. I will be searching for how to do this tonight, so this nuisance issue with SABnzbd will be solved. Of course, if anybody has a different way they would do it, please let me know as well. I am certainly open to suggestions. If this goes on much longer, my wife might throw me out! (Just kidding, of course... I hope.)
BTW, unRAID 6 is starting to be the box I always dreamed of having: one to rule them all! I am very thankful for the hard work that LT has been doing to roll out such a great product. I have even turned several of the people I work with on to your product, because they see what I am doing. I converted a gamer this past weekend. I think he will be building a new system and buying unRAID in the coming weeks.
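If it helps, a hypothetical sketch of what that could look like. Container and image names here are just examples, and depending on your unRAID version the flag would typically go in the container's extra parameters rather than a raw docker run:

```shell
# Restrict a SABnzbd container to a single core (core 3 here)
# so unpacking can't starve Plex. Names and core numbers are
# hypothetical examples.
docker run -d \
  --name sabnzbd \
  --cpuset-cpus="3" \
  linuxserver/sabnzbd
```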