Low-power T processor vs. Unthrottled CPU for Plex Encoding



I've already done quite extensive Plex transcoding benchmarking, and the end result was that it requires around 2000 Passmarks for each 1080p/10Mbps stream and 1500 Passmarks for each 720p/4Mbps stream. These were all with AMD CPUs, but I'm very confident that tests with Intel will give the same result.

 

The i3-4130T has a Passmark score of 3938, so I would expect it to be able to transcode either one 1080p and one 720p stream, or two 720p streams, at the same time.
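The capacity math above can be sketched in a few lines. This is only a back-of-envelope check using the thread's own ballpark per-stream Passmark costs, not official figures:

```python
# Rough stream-capacity estimate from the ballpark figures in this thread.
# The per-stream Passmark costs are estimates from my benchmarking, not
# official numbers, and will vary with source/destination quality.
COST_1080P = 2000  # Passmark per 1080p/10Mbps transcode (estimate)
COST_720P = 1500   # Passmark per 720p/4Mbps transcode (estimate)

def max_streams(cpu_passmark: int, cost_per_stream: int) -> int:
    """How many simultaneous transcodes of one quality a CPU can roughly sustain."""
    return cpu_passmark // cost_per_stream

# i3-4130T (Passmark 3938): one 1080p + one 720p (2000 + 1500 = 3500 <= 3938),
# or two 720p streams (2 * 1500 = 3000 <= 3938).
print(max_streams(3938, COST_1080P))  # -> 1
print(max_streams(3938, COST_720P))   # -> 2
```

Mixed-quality combinations just need the per-stream costs to sum to less than the CPU's score.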

 

Check your CPU's Passmark score at http://cpubenchmark.net/cpu_list.php

 

Since most unRAID setups are designed to run 24/7, I would recommend using the low-TDP variants, the T-models on the Intel side. They do cost a bit more and are not as powerful, but you can go as far as an i7-4770T and still get a 45W TDP with an 8800 Passmark score. The standard i7-4770 scores 9975 and the i7-4770K 10151.

Link to comment

Henris' ballpark Passmark scores per transcode sound about right based on my experience as well. That said, power per transcode depends not only on the source quality, but also the destination quality. In other words, what you are transcoding the stream down to (quality wise) will also determine how much horsepower it takes. In your tests, what were you transcoding the source down to?

 

As for the T-models, I'm inclined to disagree. While my server does in fact have an i3-3220T, I chose it for one reason only: to limit the maximum amount of heat that could be generated. I was worried about temps in a case as small as the Q25B. My fears were unfounded though, as the case has great ventilation and temps have never been an issue. A T-model doesn't idle any cooler than any other model; it's only throttled on its max clock speed to limit the amount of heat it can generate at 100% utilization. Since an unRAID server spends the vast majority of its life at idle, and idle temps are no different between models, I am of the opinion that the T-models aren't worth the cost differential. In addition, the greater horsepower of the non-T model is nice to have when it's needed for faster transcodes, etc.

 

IMO, I could have put an i7-4770 in my build and would have the same temps I do now. Of course, you'd need a good low-profile cooler like a Noctua, as the stock cooler would be too tall. Sure, you'd have more dissipated heat during high CPU loads, but the CPU would be at high load for much shorter periods of time.

Link to comment

Definitely agree with dirtysanchez's comments regarding the low-power CPU versions.

 

I did the same thing with one of my secondary HTPCs a couple of years ago => I used an i5-2400S to keep the power demands low. It worked great, but was slower than I liked for a lot of the video re-rendering I was doing, so I replaced it with an unthrottled i7. There was NO difference in typical (i.e. idle) power consumption after the upgrade ... yes, it did let the CPU get a couple of degrees warmer during high utilization, but the improvements in computational power were VERY nice. As long as you've got reasonable ventilation, there is no reason to use the low-power versions these days.

 

Link to comment

Agreed, Gary. Also, I forgot to mention power consumption, but of course heat and power consumption are effectively one and the same. In an unRAID build, an i7 should have effectively the same power consumption over the course of, say, a month as an i3 T-model. It will draw more power than the i3 during high-load times, but will also be at high load for a shorter period of time. Averaged out over any appreciable time frame, the power consumption of the two should be virtually identical.

Link to comment

Henris' ballpark Passmark scores per transcode sound about right based on my experience as well. That said, power per transcode depends not only on the source quality, but also the destination quality. In other words, what you are transcoding the stream down to (quality wise) will also determine how much horsepower it takes. In your tests, what were you transcoding the source down to?

The source in these cases was always the same 1080p/10Mbps video, which was transcoded to different destination qualities, two of which I mentioned in my previous post. The destination resolution (1080p/720p) matters the most; the difference between bitrates within the same resolution was surprisingly small. I also did a few tests with 720p source material, and it further reduced the CPU requirements, but I don't have exact figures available.

 

As for the T-models, I'm inclined to disagree. While my server does in fact have an i3-3220T, I chose it for one reason only: to limit the maximum amount of heat that could be generated. I was worried about temps in a case as small as the Q25B. My fears were unfounded though, as the case has great ventilation and temps have never been an issue. A T-model doesn't idle any cooler than any other model; it's only throttled on its max clock speed to limit the amount of heat it can generate at 100% utilization. Since an unRAID server spends the vast majority of its life at idle, and idle temps are no different between models, I am of the opinion that the T-models aren't worth the cost differential. In addition, the greater horsepower of the non-T model is nice to have when it's needed for faster transcodes, etc.

 

IMO, I could have put an i7-4770 in my build and would have the same temps I do now. Of course, you'd need a good low-profile cooler like a Noctua, as the stock cooler would be too tall. Sure, you'd have more dissipated heat during high CPU loads, but the CPU would be at high load for much shorter periods of time.

I stand corrected on this one; I had somehow understood that a low TDP would also mean lower idle consumption, but that's not the case. However, Haswell overall is said to have much lower idle consumption than Ivy Bridge, so Haswell should be the choice for a 24/7 system.

 

If you compare i3-4130 and i3-4130T you have:

- i3-4130: 54W TDP, 4962 Passmark, $122 -> 92 Passmark/W, 41 Passmark/$

- i3-4130T: 35W TDP, 3938 Passmark, $140 -> 113 Passmark/W, 28 Passmark/$

 

Depending on your transcoding or other CPU-intensive usage profile, you could then calculate how long it takes for the T-model to pay back the price difference through lower power consumption under load. If you take the additional processing power of the 4130 into the equation it kinda becomes advanced mathematics, and at least my brain starts to hurt. But you will get more processing power per watt out of the T-models, and eventually a T-model will become cheaper than a standard model.
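The simple (no-extra-horsepower) version of that payback calculation can be sketched as follows. Two loud assumptions here: each CPU is taken to draw roughly its TDP at full load, and the electricity price is made up; it also ignores that the slower T-model runs longer per job, which is exactly the point debated later in this thread:

```python
# Break-even sketch for the T-model price premium, under the (debatable)
# assumption that each CPU draws roughly its TDP at full load.
# Prices/TDPs are the figures quoted above; the electricity price is assumed.
# NOTE: this ignores that the T-model takes longer per job, which may cancel
# out much of the saving.
PRICE_KWH = 0.15  # $/kWh, an assumed electricity price

def payback_hours(price_delta: float, tdp_std: float, tdp_t: float,
                  price_kwh: float = PRICE_KWH) -> float:
    """Hours of full load needed before the T-model's lower draw repays its premium."""
    watts_saved = tdp_std - tdp_t
    return price_delta / (watts_saved / 1000 * price_kwh)

# i3-4130 ($122, 54W) vs i3-4130T ($140, 35W): $18 premium, 19W saved at load.
hours = payback_hours(140 - 122, 54, 35)
print(f"{hours:.0f} hours of full load")  # roughly 6300 hours
```

At that rate the premium only pays back after years of heavy, sustained load, which supports the skepticism voiced below.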

 

This thread on silentpcreview had some interesting points: http://www.silentpcreview.com/forums/viewtopic.php?t=66662p=578729

And a 7.5W idling G3220-based system... I'll have to measure my i3-4130T without any disks spinning...

Link to comment

I stand corrected on this one; I had somehow understood that a low TDP would also mean lower idle consumption, but that's not the case. However, Haswell overall is said to have much lower idle consumption than Ivy Bridge, so Haswell should be the choice for a 24/7 system.

 

If you compare i3-4130 and i3-4130T you have:

- i3-4130: 54W TDP, 4962 Passmark, $122 -> 92 Passmark/W, 41 Passmark/$

- i3-4130T: 35W TDP, 3938 Passmark, $140 -> 113 Passmark/W, 28 Passmark/$

 

Depending on your transcoding or other CPU-intensive usage profile, you could then calculate how long it takes for the T-model to pay back the price difference through lower power consumption under load. If you take the additional processing power of the 4130 into the equation it kinda becomes advanced mathematics, and at least my brain starts to hurt. But you will get more processing power per watt out of the T-models, and eventually a T-model will become cheaper than a standard model.

 

This thread on silentpcreview had some interesting points: http://www.silentpcreview.com/forums/viewtopic.php?t=66662p=578729

And a 7.5W idling G3220-based system... I'll have to measure my i3-4130T without any disks spinning...

 

Agreed that Haswell should be the choice for a 24x7 system, especially if you are concerned with power consumption.

 

I'm not sure your "Passmark/W" is a valid measurement. Don't forget that TDP is not a measure of the power consumption of the CPU, but rather a measure of the maximum amount of heat the cooling system must dissipate, measured in watts. Of course, in general, the more power consumed the more heat that must be dissipated all other things being equal, but that doesn't mean a chip with a higher TDP uses power less efficiently. That said, your "Passmark/$" is certainly a good measure of value for a CPU.

 

I don't believe a lower-TDP chip would pay back the price difference over time through lower power consumption under load. For example, with a given transcoding job the 4130T will consume less power but for a longer duration. On the other hand, the 4130 will consume more power for a shorter duration. In the end, the amount of power consumed by both CPUs should be roughly equal. Of course I have no proof of this that I can point to; this is just logical reasoning on my part. I may very well be wrong.
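That "same energy, different duration" reasoning is easy to illustrate numerically. The package power draws below are hypothetical, chosen only to show the shape of the argument (they are not measurements of either chip):

```python
# Illustration of the argument above: if the work done scales with Passmark,
# the faster CPU finishes sooner, so total energy (Wh) can come out nearly equal.
# The 50W/40W load figures are HYPOTHETICAL, purely for illustration.

def job_energy_wh(load_watts: float, hours_at_load: float) -> float:
    """Total energy in watt-hours for a job run at a constant load power."""
    return load_watts * hours_at_load

# Same transcode job: the i3-4130T (Passmark 3938) runs ~26% longer than the
# i3-4130 (Passmark 4962), since 4962 / 3938 ~= 1.26.
t_hours = 1.0 * (4962 / 3938)      # T-model takes proportionally longer
print(job_energy_wh(50, 1.0))      # standard model at a hypothetical 50W
print(job_energy_wh(40, t_hours))  # T-model at a hypothetical 40W: ~50.4 Wh
```

With these numbers both chips burn roughly 50 Wh for the job; only a genuine efficiency (performance-per-watt) gap would separate them.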

 

As a point of comparison, my i3-3220T system consumes 33W at idle with all drives spun down, all fans are running and nothing has been disabled in the BIOS in an attempt to get it lower.

Link to comment

I'm not sure your "Passmark/W" is a valid measurement. Don't forget that TDP is not a measure of the power consumption of the CPU, but rather a measure of the maximum amount of heat the cooling system must dissipate, measured in watts. Of course, in general, the more power consumed the more heat that must be dissipated all other things being equal, but that doesn't mean a chip with a higher TDP uses power less efficiently. That said, your "Passmark/$" is certainly a good measure of value for a CPU.

 

I don't believe a lower-TDP chip would pay back the price difference over time through lower power consumption under load. For example, with a given transcoding job the 4130T will consume less power but for a longer duration. On the other hand, the 4130 will consume more power for a shorter duration. In the end, the amount of power consumed by both CPUs should be roughly equal. Of course I have no proof of this that I can point to; this is just logical reasoning on my part. I may very well be wrong.

 

As a point of comparison, my i3-3220T system consumes 33W at idle with all drives spun down, all fans are running and nothing has been disabled in the BIOS in an attempt to get it lower.

Well, this is getting interesting. I also did some more measurements under various loads. The idle state was like yours: all drives spun down, fans running, and no special BIOS settings:

- idle: 27W
- 25% load: 41W
- 50% load: 48W
- 75% load: 49W
- 100% load: 51W

The load was generated with a Python script putting 100% load on each core, so that in the 25% scenario one core was fully loaded. The system is an Asus H87I-Plus, Intel i3-4130T (35W TDP), Silverstone PSU, and Noctua fans. The figures are quite interesting: basically, after 50% load the power consumption grows by only a few watts, and the max consumption is 51W.
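For reference, a minimal load generator in the spirit of the script described above (this is a sketch, not the poster's actual script) could look like this:

```python
# Minimal CPU load generator: spin N worker processes, each keeping one
# core at 100% with busy arithmetic, for a fixed duration.
# This is a sketch in the spirit of the script described above, not the original.
import multiprocessing
import time

def burn(seconds: float) -> None:
    """Busy-loop on one core for roughly the given number of seconds."""
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 31 + 7) % 1_000_003  # pointless math to keep the core busy

def load_cores(n_cores: int, seconds: float = 60.0) -> None:
    """Fully load n_cores cores, e.g. 1 of 4 threads for the '25% load' case."""
    workers = [multiprocessing.Process(target=burn, args=(seconds,))
               for _ in range(n_cores)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    load_cores(1, seconds=1.0)  # e.g. the "25% load" case on a 4-thread CPU
```

Run it with 1, 2, 3, and 4 workers while watching a power meter to reproduce the measurements above.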

 

I'm quite certain that you actually get more computing power per watt from these low-TDP processors. To prove this you would have to test a 54W or 90W TDP processor at 100% load and measure the consumption. My bet is that it would be pretty close to the TDP value. This would mean that the rest of my system consumes ~16W when idling and that the idle consumption of the CPU alone is 11W.
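The subtraction behind those figures, spelled out (it rests on the poster's speculation that the CPU draws roughly its 35W TDP at 100% load, which is not a measurement):

```python
# Back-of-envelope check of the figures above, assuming the i3-4130T draws
# roughly its 35W TDP at 100% load (speculation, not a measurement).
measured_max = 51    # W, whole system at 100% load
measured_idle = 27   # W, whole system idle
cpu_tdp = 35         # W, i3-4130T

rest_of_system = measured_max - cpu_tdp    # board, RAM, PSU losses, fans
cpu_idle = measured_idle - rest_of_system  # what's left for the CPU at idle
print(rest_of_system, cpu_idle)  # -> 16 11
```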

 

CPUBenchmark actually has a best-bang-per-watt graph already available (yeah, I know it's TDP-based...):

http://www.cpubenchmark.net/power_performance.html

Link to comment

Well, this is getting interesting. I also did some more measurements under various loads. The idle state was like yours: all drives spun down, fans running, and no special BIOS settings:

- idle: 27W
- 25% load: 41W
- 50% load: 48W
- 75% load: 49W
- 100% load: 51W

The load was generated with a Python script putting 100% load on each core, so that in the 25% scenario one core was fully loaded. The system is an Asus H87I-Plus, Intel i3-4130T (35W TDP), Silverstone PSU, and Noctua fans. The figures are quite interesting: basically, after 50% load the power consumption grows by only a few watts, and the max consumption is 51W.

 

I'm quite certain that you actually get more computing power per watt from these low-TDP processors. To prove this you would have to test a 54W or 90W TDP processor at 100% load and measure the consumption. My bet is that it would be pretty close to the TDP value. This would mean that the rest of my system consumes ~16W when idling and that the idle consumption of the CPU alone is 11W.

 

CPUBenchmark actually has a best-bang-per-watt graph already available (yeah, I know it's TDP-based...):

http://www.cpubenchmark.net/power_performance.html

 

As for your tests, I can only assume power consumption tends to level off after 50% load (2 threads) because that CPU is a dual-core with HT.  At two threads (50%) both cores will be almost fully utilized and at max clock speed.  The other two threads (75% and 100%) are simply using Hyperthreading on an already full clock speed core, hence the very small increase in power consumption.

 

As for T-series CPUs being more energy efficient, I'm still not convinced. One problem with your argument is that while 35W and 45W TDP processors routinely hit the top of their TDP envelope, many 77W and 95W processors don't, which is why I don't put a lot of faith in a Passmark-per-TDP-watt metric. If it were based on actual measured watts it would be valid, IMHO.

 

There's not a lot out there on the subject currently, but I did find a Tom's Hardware article.  It was performed using Ivy Bridge CPU's, but the results should be just as relevant to Haswell.  The conclusions of the article support what my logical reasoning already told me, namely that low TDP processors are no more energy efficient than high TDP processors.  Give a T-model and a non T-model processor the same workload and the T-model will take longer to complete the workload.  Granted the T-model has a lower max power draw, but it has to stay at max power draw longer to complete the workload, while the non T-model has already finished and returned to idle at low power draw.  As I stated in my previous post, total power consumed over time should be roughly equal.

 

In the article, the i5-3570K (77w TDP) actually used a bit LESS total energy than the i5-3570T (45W TDP) to complete the same workload, 60.7 Wh vs 64.6 Wh.  The article concludes that low TDP processors are good for pretty much one reason only, to cram the performance you do get into a smaller form factor.  In other words, managing heat dissipation, the sole reason I went for the i3-3220T in the first place, which in hindsight was an unfounded worry.  It also concludes "If you’re not looking at a strict thermal limit, skip those parts altogether in favor of Intel’s Core i5-3570K. It only costs $20 bucks more and does a number of things better. The days when you could buy a lower-TDP Core 2 Quad that didn’t compromise performance are over. Today, Intel’s –S and –T SKUs are all about dipping in under power limits at the expense of speed."

Link to comment

Agree with just about everything dirtysanchez just said => as I noted earlier, my experience with an i5-2400S vs. the i7-2600 I replaced it with showed almost no difference in operating temperatures, but nearly double the computing power. My original goal of limiting the power consumption was, as with dirtysanchez's experience, an unfounded worry, as there was plenty of heat dissipation in the HTPC case I used.

 

He's definitely correct in assessing why your power consumption didn't go up after 2 threads -- your CPU only has 2 cores, so once you maxed both cores, you didn't have any more "CPU" to use. It's interesting that Hyper-Threading increased the draw at all at that point.

 

The only thing I don't agree with is the K-series suggestion: I never use K-series CPUs. First, I'm not an overclocker, so I don't care about the unlocked multiplier. But more significantly, I do occasionally like to "play" a bit with virtualization, and the K-series doesn't support VT-d, whereas the non-K versions do.

 

Link to comment

The only thing I don't agree with is the K-series suggestion: I never use K-series CPUs. First, I'm not an overclocker, so I don't care about the unlocked multiplier. But more significantly, I do occasionally like to "play" a bit with virtualization, and the K-series doesn't support VT-d, whereas the non-K versions do.

 

Agreed, that's there simply because that's what was in the article.  I'd also never use a K-series for an unRaid build.  OTOH, I do have a K-series in my desktop gaming machine, but that's a whole different use case.

Link to comment

As for your tests, I can only assume power consumption tends to level off after 50% load (2 threads) because that CPU is a dual-core with HT.  At two threads (50%) both cores will be almost fully utilized and at max clock speed.  The other two threads (75% and 100%) are simply using Hyperthreading on an already full clock speed core, hence the very small increase in power consumption.

:-[ I'm getting educated here. Well, I've been an AMD guy for the past 25 years, so I guess I could use that as an excuse for not knowing the details of Intel CPUs ;) And for sure that is the reason the consumption doesn't go any higher. As Gary also mentioned, it is a bit surprising that it still goes up a few watts.

 

As for T-series CPUs being more energy efficient, I'm still not convinced. One problem with your argument is that while 35W and 45W TDP processors routinely hit the top of their TDP envelope, many 77W and 95W processors don't, which is why I don't put a lot of faith in a Passmark-per-TDP-watt metric. If it were based on actual measured watts it would be valid, IMHO.

I read the Tom's Hardware review, and the difference between the average power consumption of the 45W i5-3570T and the 77W i5-3570 in the benchmark used was surprisingly low, less than 10 watts. But if you look at the power-draw graphs you immediately see that the draw fluctuates heavily, so the load is not constant but dependent on the benchmark. I have no idea what this does to the comparability of the test results. I don't understand why they didn't use a synthetic CPU test as one of the tests; I guess they were aiming for average desktop usage (including the GPU), which is of course of no interest to us.

 

I'm not convinced either yet ;) I think the only way to settle this is to get a higher-TDP Haswell and test it. An Intel Xeon E3-1230 v3 would fit this perfectly, as it has an 80W TDP and a 9533 Passmark score. I'll have to see whether I get an opportunity to do this in one of the upcoming builds.

 

Reading this white paper from Intel, the real power draw might even exceed the TDP, especially if an artificially constant load is used. The wording to me indicates that the maximum power draw is normally a bit lower than the TDP, but not considerably.

http://www.intel.com/content/www/us/en/benchmarks/resources-xeon-measuring-processor-power-paper.html

Link to comment

From my viewpoint, it's irrelevant whether or not a higher-TDP CPU uses more or less total power than a throttled version. The only reason I see for using a throttled version is to limit the heat generated by the CPU, to reduce the dissipation requirement. That, for example, is why mobile CPUs are all much lower power -- a laptop simply doesn't have the airflow to dissipate the heat from a desktop-class CPU.

 

One good reason to use a throttled CPU is noise => if, for example, you're going to locate your system where you want it to be as absolutely quiet as possible, you may want to use a passive heatsink (e.g. no fan) ... and in that instance, even though the case may have plenty of room for adequate airflow, with no fan on the CPU it's a good idea to limit the heat generation.

 

But if you're using an adequate heatsink in a case with good airflow, then for most purposes I'd prefer to have the higher headroom that an unthrottled CPU provides.

 

 

Link to comment

:-[ I'm getting educated here. Well, I've been an AMD guy for the past 25 years, so I guess I could use that as an excuse for not knowing the details of Intel CPUs ;) And for sure that is the reason the consumption doesn't go any higher. As Gary also mentioned, it is a bit surprising that it still goes up a few watts.

 

Totally understand, I haven't touched an AMD CPU in well over a decade and hence know very little about their current offerings. As for the increase of a few watts, perhaps there's a part of the chipset that draws more power when HT is active? I have no idea.

 

I read the Tom's Hardware review, and the difference between the average power consumption of the 45W i5-3570T and the 77W i5-3570 in the benchmark used was surprisingly low, less than 10 watts. But if you look at the power-draw graphs you immediately see that the draw fluctuates heavily, so the load is not constant but dependent on the benchmark. I have no idea what this does to the comparability of the test results. I don't understand why they didn't use a synthetic CPU test as one of the tests; I guess they were aiming for average desktop usage (including the GPU), which is of course of no interest to us.

 

Yes, the average power draw was relatively close, within 10W, but that is the AVERAGE power consumed by each CPU across the entire suite of benchmarks. While the 3570K drew on average 10W more, it also finished the benchmark suite roughly 7 minutes faster than the 3570T. When you convert the results to total energy consumed for the benchmark-suite run (in watt-hours, Wh), the 3570K actually used slightly less total energy to complete the same amount of work. The load fluctuates across the run because a number of different benchmarks are being run back-to-back; some benchmarks stress the CPU more than others. The fluctuations do nothing to the comparability of the results; it's simply a matter of "do all these benchmarks back-to-back and how much energy did you use to get it done". The K-series chip uses more power at load (whether at any given point in time, or as an average), but this is completely negated by its ability to drop back to idle (and thus low power draw) sooner.

 

Yes, the benchmarks were aiming for average desktop use and did include the GPU, which as you said is of no interest to us. Best I can tell, the gaming benchmarks were only to determine FPS and would have been run for a specified period of time, not a specific number of frames, which wouldn't skew the results, but only the author knows for sure. Also, some synthetic benchmarks were part of the suite, such as PCMark and SiSoft Sandra.

 

I'm not convinced either yet ;) I think the only way to settle this is to get a higher-TDP Haswell and test it. An Intel Xeon E3-1230 v3 would fit this perfectly, as it has an 80W TDP and a 9533 Passmark score. I'll have to see whether I get an opportunity to do this in one of the upcoming builds.

 

If you can get your hands on one I'd very much be interested in seeing the results, although a high TDP Haswell Core i5/i7 would probably be a more representative comparison.  Either that or a comparison between high and low TDP Haswell Xeons.

 

Reading this white paper from Intel, the real power draw might even exceed the TDP, especially if an artificially constant load is used. The wording to me indicates that the maximum power draw is normally a bit lower than the TDP, but not considerably.

http://www.intel.com/content/www/us/en/benchmarks/resources-xeon-measuring-processor-power-paper.html

 

Yes, Intel states "The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software".  It can sometimes be exceeded by things like synthetic benchmarks, video encoding, etc.

Link to comment

From my viewpoint, it's irrelevant whether or not a higher-TDP CPU uses more or less total power than a throttled version. The only reason I see for using a throttled version is to limit the heat generated by the CPU, to reduce the dissipation requirement. That, for example, is why mobile CPUs are all much lower power -- a laptop simply doesn't have the airflow to dissipate the heat from a desktop-class CPU.

 

One good reason to use a throttled CPU is noise => if, for example, you're going to locate your system where you want it to be as absolutely quiet as possible, you may want to use a passive heatsink (e.g. no fan) ... and in that instance, even though the case may have plenty of room for adequate airflow, with no fan on the CPU it's a good idea to limit the heat generation.

 

But if you're using an adequate heatsink in a case with good airflow, then for most purposes I'd prefer to have the higher headroom that an unthrottled CPU provides.

 

Absolutely agree. I used an i3-3220T as I was worried about heat in such a small case. As I've already stated a few times, the worry was unfounded. I could have used an i7-3770 in my build with a good low-profile cooler and had no noticeable difference in temps OR power consumption, while also gaining the benefits of increased processing power.

 

Edit: For that matter, I could have used the regular i3-3220 and gained roughly 14% more processing power while saving $5 - $10.

Link to comment

First of all, such great info in this thread! Thanks all :D

 

I want to do a similar setup to all of your builds using this case. I want to put together a powerful ECC-memory system so that I have the flexibility of switching to ZFS if I decide to (I am going to build the machine and then try out unRAID / Ubuntu with ZFS on Linux / FreeNAS). I am leaning toward unRAID, but I want to try everything out, and I think I will feel great having an unRAID setup with ECC anyway.

 

My usage will be unraid as a media server w/ plex/sab/sb/cp

 

Could any of you provide your input on this motherboard / processor combo:

 

motherboard: Asrock E3C226D2I

  http://www.asrock.com/server/overview.asp?Model=E3C226D2I

 

processor: Intel-Xeon-Processor-E3-1220-v3

  http://ark.intel.com/products/75052/Intel-Xeon-Processor-E3-1220-v3-8M-Cache-3_10-GHz

 

Compared to an i3-4130 am I running my wattage up a lot? What about compared to an i5 Haswell?

 

Am I gaining much processing horsepower with the Haswell Xeon?

 

Are there compatibility issues I am missing?

 

Thanks a million for the input!  ;D

 

Link to comment

First of all, such great info in this thread! Thanks all :D

 

I want to do a similar setup to all of your builds using this case. I want to put together a powerful ECC-memory system so that I have the flexibility of switching to ZFS if I decide to (I am going to build the machine and then try out unRAID / Ubuntu with ZFS on Linux / FreeNAS). I am leaning toward unRAID, but I want to try everything out, and I think I will feel great having an unRAID setup with ECC anyway.

 

My usage will be unraid as a media server w/ plex/sab/sb/cp

 

Could any of you provide your input on this motherboard / processor combo:

 

motherboard: Asrock E3C226D2I

  http://www.asrock.com/server/overview.asp?Model=E3C226D2I

 

processor: Intel-Xeon-Processor-E3-1220-v3

  http://ark.intel.com/products/75052/Intel-Xeon-Processor-E3-1220-v3-8M-Cache-3_10-GHz

 

Compared to an i3-4130 am I running my wattage up a lot? What about compared to an i5 Haswell?

 

Am I gaining much processing horsepower with the Haswell Xeon?

 

Are there compatibility issues I am missing?

 

Thanks a million for the input!  ;D

 

Looks like a solid build to me.  The energy consumption at idle should be roughly the same as any other Haswell CPU.  At load it should also be roughly equal over time.  The long conversation above should apply equally to Xeons as well.  That Xeon has roughly 45% more processing power than the i3-4130. 

 

I don't see any compatibility issues, but I'm no ESXi expert.

Link to comment

Excellent choice. The power consumption isn't an issue -- as we've discussed earlier, all Haswell CPUs are VERY power-efficient. The higher TDP simply means it CAN ramp up to provide some serious horsepower if you need it ... but most of the time it'll be idling and using very little power.

 

The board is excellent -- IPMI, 6 SATA ports, ECC support, etc.  and the CPU fully supports just about anything you'd want to do.

 

Link to comment

From my viewpoint, it's irrelevant whether or not a higher-TDP CPU uses more or less total power than a throttled version. The only reason I see for using a throttled version is to limit the heat generated by the CPU, to reduce the dissipation requirement. That, for example, is why mobile CPUs are all much lower power -- a laptop simply doesn't have the airflow to dissipate the heat from a desktop-class CPU.

My viewpoint on this matter is the potential efficiency difference between the standard and T-models. In other words, will the T-model use less energy (watt-hours) than the standard model to complete the same task, e.g. transcode a two-hour movie? Since the Lian Li PC-Q25B as a case can, from a heat-dissipation point of view, handle any Haswell CPU, it wouldn't make sense to use anything other than the best-bang-for-buck CPU if there were no efficiency differences. The higher-TDP CPUs probably wouldn't even be noisier when transcoding video, since over a given time frame they would be consuming exactly the same amount of energy, and hence producing the same amount of heat on average, as a T-model. The situation would be different if you had a task where you could benefit from the extra processing power.

 

 

 

Link to comment

The situation would be different if you had a task where you could benefit from the extra processing power.

 

Isn't it a "benefit" if a transcoding operation can be done in less time??

 

 

 

The higher-TDP CPUs probably wouldn't even be noisier when transcoding video, since over a given time frame they would be consuming exactly the same amount of energy, and hence producing the same amount of heat on average, as a T-model.

 

Not at all true. The more powerful CPU would be consuming a good bit more power, and thus generating notably more heat, for the duration of the task. This may or may not result in more noise; it depends on whether the heatsink fan needs to ramp up to dissipate the heat. It may be true that over the total task it generates about the same amount of heat as a T-model => but the higher-power unit would generate appreciably more during its peak utilization.

 

Link to comment

Is the transcoding done on the fly, or is it more like converting the file to a different format, saved or buffered on a hard drive? If it's on the fly, then both types of processor would be transcoding for two hours, just at different utilization percentages. That would be important if, say, the T system was using 50 watts max vs. 100 or whatever watts the higher-TDP system used.

Link to comment

Is the transcoding done on the fly, or is it more like converting the file to a different format, saved or buffered on a hard drive? If it's on the fly, then both types of processor would be transcoding for two hours, just at different utilization percentages. That would be important if, say, the T system was using 50 watts max vs. 100 or whatever watts the higher-TDP system used.

 

With Plex it can be both.  Transcoding on-the-fly for content that you are currently watching is AFAIK buffered in RAM.  Even so, the transcoding is very bursty in nature.  Plex will transcode up to a certain amount then go idle until another chunk needs to be transcoded. 

 

Plex can also go to 100% utilization (or close to it) and transcode an entire file as fast as it can get through it when you are using the Plex Sync functionality (pre-transcodes content to the destination format and saves the entire file for off-line viewing on a client). In these cases the transcoded file is saved to the app's temp directory, which is on a cache drive (if you are using one) or an array drive.

 

Either way, it isn't really important what the T-system's max wattage is.  It would still spend a longer duration of time at max clock speed than the higher-TDP system would.

Link to comment

Is the transcoding done on the fly, or is it more like converting to a different format that gets saved or buffered on a hard drive? If it's on the fly, then both types of processors would be transcoding for 2 hours, just at different utilization percentages. That would be important if, say, the T system was using 50 watts max vs. 100 (or whatever) watts the higher-TDP system used.

Well, this was partly the point I was trying to make. Below is a collection of graphs I made a while back for unRAID dashboard experimentation. The top-left graph represents CPU usage for given process groups. The time between 20.45 and 21.45 shows three different Plex transcoding sessions (with different destination qualities). You can see a short, higher spike in usage and then slightly oscillating lower usage. The spike is caused by the Plex Media Server's initial buffering, until it throttles back to keep the same amount buffered at all times. The load stays quite constant until playback is stopped. On the bottom-right graph you can actually see the CPU fan (yellow) going to higher revs while the buffering takes place.

 

Since the load is continuous and fairly constant the whole time, this would cause the same kind of power usage and heat dissipation on the T-model and the standard model if there were no efficiency difference. CPU usage would of course be lower on the standard model. This was the only thing I tried to explain in my previous post: the standard model should not cause any more noise than the T-model when transcoding for Plex, since the load is evenly spread.

 

I do, however, still find it odd if there were no efficiency difference between the two models. That argument would mean the TDP values have no relationship to actual maximum power consumption.

 

Let's break down the math again:

- i3-4130T has a 35W TDP and a 3989 Passmark score

- i3-4130 has a 54W TDP and a 4970 Passmark score

-> The T-model has 65% of the TDP and 80% of the Passmark score of the standard model

-> For the standard model to be as efficient as the T-model, it would have to have a max power consumption of roughly 43.6W (35W divided by the ~80% Passmark ratio), assuming the 35W-TDP model has a max power consumption of 35W.

 

I find the above rather unlikely; why would Intel rate the i3-4130 at 54W TDP if it actually maxed out around 43.6W? I cannot explain my logic any further; testing would be the only additional step.
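The same efficiency argument, restated in code (TDP and Passmark figures from above; note that using TDP as a stand-in for actual max power draw is itself an approximation):

```python
# Efficiency comparison using the figures quoted in this thread.
t_tdp, t_pm = 35.0, 3989.0  # i3-4130T: TDP (W), Passmark score
s_tdp, s_pm = 54.0, 4970.0  # i3-4130:  TDP (W), Passmark score

print("TDP ratio     : %.0f%%" % (100 * t_tdp / s_tdp))  # ~65%
print("Passmark ratio: %.0f%%" % (100 * t_pm / s_pm))    # ~80%

# For the standard chip to match the T-model's Passmark-per-watt,
# its real max draw would have to be:
equal_eff_draw = t_tdp * (s_pm / t_pm)
print("break-even draw: %.1f W (vs. its 54 W TDP)" % equal_eff_draw)  # ~43.6 W
```

The gap between the 43.6 W break-even figure and the rated 54 W TDP is what the argument above hinges on.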

 

tsStats_2013-09-16b.png

Link to comment

The higher-TDP CPUs probably wouldn't even be noisier during video transcoding, since over a given time frame they would be consuming exactly the same amount of energy, and hence producing the same amount of heat on average, as a T-model.

 

Not at all true.  The more powerful CPU would be consuming a good bit more power, and thus generating notably more heat, for the duration of the task.  This may or may not result in more noise -- it depends on whether the heatsink fan needs to ramp up to dissipate the heat.  It may be true that over the total task it generates about the same amount of heat as a T-model, but the higher-power unit would generate appreciably more during its peak utilization.

 

Right, I've tried to explain this a few different ways.  garycase understands what I'm getting at, but I'm not sure others do.  Maybe an analogy would help.  Not sure it's the best analogy, but it's what I could come up with at the moment.  The numbers are made up, but they should convey the point.

 

Let's say you want to boil a gallon of water.  All other things being equal (same pot, same burner, etc.), say you turn the burner to 50% and it takes the water 10 minutes to boil.  But if you turn the burner to 100%, it takes the water 5 minutes to boil.  Once the water has boiled, you have used the same amount of energy either way; in the second instance you just got it done twice as fast by applying twice the heat per unit time.

 

In the second instance you also made the kitchen hotter during those first 5 minutes, but as the work is now done it will cool off over the following 5 minutes.  At the end of 10 minutes the kitchen should be roughly the same temperature regardless of which method you used to boil the water.  You pumped the same amount of heat into the kitchen that needs to be dissipated either way.

 

A CPU is no different.  Every bit of electricity used must eventually be dissipated as heat.  Doing the work slowly requires less electricity per unit time and hence less heat to be dissipated per unit time.  Doing the work more quickly requires more electricity per unit time and hence more heat to be dissipated per unit time.  Given the same workload both CPUs will use the same amount of electricity and generate the same amount of heat, it's just a matter of how it's distributed over time.
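The analogy above reduces to energy = power x time; with made-up burner powers, the two paths to a boiled gallon cost the same energy:

```python
# The boiling-water analogy in numbers (hypothetical burner powers).
slow = 1000.0 * (10 * 60)  # 1 kW for 10 min -> joules delivered
fast = 2000.0 * (5 * 60)   # 2 kW for  5 min -> joules delivered

assert slow == fast == 600_000.0
print("both deliver %.0f kJ of heat" % (slow / 1000))  # 600 kJ either way
```

Substitute a CPU's package power for the burner and the identity is the same: for a fixed job within one architecture, halving the power roughly doubles the time, leaving total energy (and total heat into the room) unchanged.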

 

Of course all this only applies to CPUs of the same family, e.g. Haswell, Ivy Bridge, Sandy Bridge, etc.

 

Hope that helps.

Link to comment

Since the load is continuous and fairly constant the whole time, this would cause the same kind of power usage and heat dissipation on the T-model and the standard model if there were no efficiency difference. CPU usage would of course be lower on the standard model. This was the only thing I tried to explain in my previous post: the standard model should not cause any more noise than the T-model when transcoding for Plex, since the load is evenly spread.

 

But it's unlikely the CPU usage would be lower on the standard processor.  It would likely be equal, but just drop back to idle sooner.  Also, a standard CPU is going to use more power and generate more heat at 30% load than a T-model would, but it will also perform more calculations (do more work) per unit time.

 

I'm not saying you're wrong or I'm right, but I'm fresh out of ways to attempt to explain this, so I'm going to put this one to bed.

Link to comment

Since the load is continuous and fairly constant the whole time, this would cause the same kind of power usage and heat dissipation on the T-model and the standard model if there were no efficiency difference. CPU usage would of course be lower on the standard model. This was the only thing I tried to explain in my previous post: the standard model should not cause any more noise than the T-model when transcoding for Plex, since the load is evenly spread.

But it's unlikely the CPU usage would be lower on the standard processor.  It would likely be equal, but just drop back to idle sooner.  Also, a standard CPU is going to use more power and generate more heat at 30% load than a T-model would, but it will also perform more calculations (do more work) per unit time.

In the Plex transcoding scenario, PMS throttles the load and produces a steady stream matching the framerate of the output video. It does not get any further ahead than the buffering requires, so neither processor drops back to idle sooner than the other; the T-model will simply have a higher average CPU usage. The difference in CPU usage is exactly the ratio of the Passmark values of the two processors, and I have benchmarked and verified this. The graph in my previous post shows a very short burst of full load for buffering, after which it settles into throttled operation at a lower CPU load. When I had a more powerful CPU installed, the CPU usage in the throttled state was lower, and the difference was proportional to the processing power (i.e. the Passmark value) of the two CPUs.
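That observation can be sketched as a one-line model: if PMS throttles the transcode to real-time playback, both chips do the same work per second, so average utilization scales inversely with processing power (approximated here, as in this thread, by Passmark score). The 60% baseline load is a made-up example, not a measurement:

```python
def throttled_cpu_pct(baseline_pct, baseline_passmark, cpu_passmark):
    """Estimated average utilization for a real-time-throttled transcode,
    scaled from a known baseline CPU by Passmark ratio."""
    return baseline_pct * baseline_passmark / cpu_passmark

# Hypothetical stream that loads the i3-4130T (Passmark 3989) at 60%:
print("%.0f%%" % throttled_cpu_pct(60, 3989, 3989))  # 60% on the T-model
print("%.0f%%" % throttled_cpu_pct(60, 3989, 4970))  # ~48% on the i3-4130
```

Both chips run for the full length of the playback; only the utilization percentage differs.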

 

I'm not saying you're wrong or I'm right, but I'm fresh out of ways to attempt to explain this, so I'm going to put this one to bed.

I too feel this is not going to be solved; I've also exhausted my wording. If there is a need to come back to this, I think it's better done in a separate thread.

Link to comment
