
For the same settings (ambient temp, AC adapter, etc), the 1024M benchmark took 320 sec with the 3632QM and 308 sec with the 3630QM, a 4% difference. Even less difference in the 3DMark11 Physics score. In my opinion there is no point in getting a better CPU. :)

I've updated the results table at http://forum.techinferno.com/hp-business-class-notebooks/2537-12-5-hp-elitebook-2570p-owners-lounge-37.html#post77432 .

Surprisingly, your i7-3630QM (45W) runs at its full x32 4-core turbo multiplier. It has none of the 100MHz/200MHz loss we see in the higher i7-37xxQM or i7-38xxQM CPUs. The i7-3632QM (35W) runs an x29 4-core turbo multiplier. These two CPUs have nearly identical efficiency across their shared multipliers.

If you are concerned about the i7-3630QM's 7W higher TDP (+6 degrees, +600rpm fan) and the associated higher temperatures in your game plots, you could use ThrottleStop to limit it to the same x29 multiplier as the i7-3632QM rather than allowing it to run x32. That will then reduce the temps to the same level.

That flexibility of limiting performance with ThrottleStop to 35W levels, or unleashing it for up to an extra 10.3% CPU performance (3.2GHz vs 2.9GHz), is why I'd choose the i7-3630QM over the i7-3632QM. If you want simplicity, not needing to mess with ThrottleStop and just accepting the lower performance, then the i7-3632QM would be the way to go.
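For anyone wanting to reproduce the arithmetic, here's a small sketch (my own, not from any tool) comparing the theoretical multiplier headroom against the measured TS-1024M speedup quoted earlier:

```python
def pct_gain(better, worse):
    """Relative gain of `better` over `worse`, in percent."""
    return (better - worse) / worse * 100.0

# 4-core turbo clocks: i7-3630QM x32 (3.2GHz) vs i7-3632QM x29 (2.9GHz)
clock_gain = pct_gain(3.2, 2.9)

# Measured TS-1024M times: 308s (3630QM) vs 320s (3632QM); lower is
# better, so compare throughput (1/time)
bench_gain = pct_gain(1 / 308, 1 / 320)

print(f"clock headroom: {clock_gain:.1f}%, measured speedup: {bench_gain:.1f}%")
```

The measured ~3.9% falls well short of the 10.3% clock headroom, which may simply mean the benchmark run wasn't purely clock-bound or the x32 multiplier wasn't held for the whole run.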

I'll also mention that if you use ThrottleStop -> TPL and select Intel Turbo balance with CPU=0 and GPU=8, you'll get a higher 3DMark11 Physics score. http://www.3dmark.com/3dm11/6942753 shows an i7-3630QM can get 7655. Better yet, use your eGPU for the 3DMark11 test so the iGPU isn't engaged at all.

Thank you for providing these measured results :) They give us a better understanding of how these CPUs differ.

For the same settings (ambient temp, AC adapter, etc), the 1024M benchmark took 320 sec with the 3632QM and 308 sec with the 3630QM, a 4% difference. Even less difference in the 3DMark11 Physics score. In my opinion there is no point in getting a better CPU. :)

Hi Bjorm.

I don't quite follow your logic here. Since I have a 1024M benchmark at ~280s and a Physics score at ~8500, are you saying my games wouldn't profit from this improvement from my 3820QM? I'll have to make some BF4 benchmarks (takes a long time to download on my parents' 3G/4G connection) and prove you wrong ;)

Hi Bjorm.

I don't quite follow your logic here. Since I have a 1024M benchmark at ~280s and a Physics score at ~8500, are you saying my games wouldn't profit from this improvement from my 3820QM? I'll have to make some BF4 benchmarks (takes a long time to download on my parents' 3G/4G connection) and prove you wrong ;)

His 3DMark11 score is lower than expected because he's using the iGPU for the test. If he disabled the internal LCD and used an eGPU so the CPU could be fully dedicated to computational tasks, he'd get:

i7-3632QM (x29 4C) 3dmark11 PS=7237

i7-3630QM (x32 4C) 3dmark11 PS=7697

i7-3740QM (x34 4C) 3dmark11 PS=~8200 <-- Tech Inferno Fan

i7-3820QM (x34 4C) 3dmark11 PS=~8500 <-- jacobsson

i7-3820QM (x35 4C) 3dmark11 PS=8736


The ThrottleStop TS Bench is not the best benchmark to use for CPU performance comparison testing. This benchmark is very dependent on how many background tasks are running on your system. If you have a very lean system with very few programs running in the background then you might be able to get some consistent times out of the TS Bench. When doing performance comparisons, you will probably get more consistent, repeatable and comparable results from a benchmark like wPrime.

Every benchmark has a purpose. The TS Bench was designed to let a user easily load 1, 2 or 4 cores of their CPU so they can better understand the multiplier data that ThrottleStop shows and get a feel for how Intel CPUs really work. It's not an ultimate balls to the wall load like Linpack testing. It was designed to be a very consistent and steady load for general multiplier, temperature and TDP testing purposes.

The TDP value displayed by ThrottleStop is a value calculated within the CPU by Intel. The purpose of this data is to control the Turbo Boost function. It is a VID based approximation of power consumption so it is probably not a 100% accurate display of CPU power consumption, especially at idle. To properly measure power consumption, you would need to directly monitor the amps and voltage going into the CPU socket which is not easy for most users to do.

The reported TDP is temperature and load dependent. The typical Windows install has 500 or more background threads running at any given time. There is no way to perfectly control all of these threads so some random variation in your testing results should be expected. That doesn't mean there is something "wrong" with ThrottleStop. It's just the way it is when testing with a Core i based laptop.

In a TDP limited processor, Intel compares the amount of Turbo Boost available to water in a bath tub. When you first start testing, if you have waited long enough after booting up, the bath tub should be full and the CPU will have access to the maximum amount of turbo boost. If you do some back to back testing and you don't wait long enough between tests, when you start your second test, you may not have the same amount of Turbo Boost available. I don't know of any way to determine the amount of water in the tub that is available or how much time has to pass to make sure that you are back to the same starting point. Consistent back to back 3DMark runs or any benchmark is going to be a challenge in these TDP limited CPUs unless you can control this.
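Intel's bath tub analogy can be sketched as a leaky bucket: the budget refills at the sustained power limit and drains at the actual package power. This is my own toy model for illustration (the 100J budget and 35W PL1 figures are made up), not Intel's actual algorithm:

```python
def simulate(budget_joules, pl1, power_trace):
    """Return the remaining turbo budget after each 1-second power sample."""
    budget, remaining = budget_joules, []
    for p in power_trace:
        # refill at PL1, drain at actual power, clamp to [0, full tub]
        budget = min(budget_joules, max(0.0, budget + pl1 - p))
        remaining.append(budget)
    return remaining

# Full 100J tub, 35W sustained limit, then a 60-second run at 45W:
trace = simulate(100.0, 35.0, [45.0] * 60)
print(f"budget left after a 60s 45W run: {trace[-1]:.0f} J")
```

The tub empties after ~10 seconds of 45W load here, so a second back-to-back benchmark would start with no turbo headroom until the budget refills, which is exactly the inconsistency described above.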

You usually need to have EIST enabled for ThrottleStop to work correctly. If you use ThrottleStop and make changes to your CPU or have EIST disabled and then exit ThrottleStop, it is anyone's guess as to what state your CPU will be in. Same thing if you have ThrottleStop enabled and then switch it back to Monitoring mode. The CPU will be left in its previous state so Monitoring mode after you boot up vs Monitoring mode after you have made changes to your CPU can be completely different. When testing it is best to just leave it enabled. When you change CPUs, you also have to make sure you delete the ThrottleStop.INI config file or settings that were appropriate from your previous CPU might screw up how your new CPU runs.

The multiplier data from ThrottleStop is extremely accurate. It can tell you exactly what your CPU is up to and exactly how much Turbo Boost your CPU is using. It's up to users to try and control the million and one other variables so they can get some useful information out of this data.

CPU-Z and XTU are both great programs but sometimes the data coming from them doesn't give a user a clear understanding of what their CPU is really doing internally. Here's a good example of this:

Edit: T|I doesn't seem to allow links to outside sites so send me a PM if you can't figure out the following link or if it gets butchered with *****.

www . overclock . net/t/1438732/event-viewer-vs-intels-etu-for-reporting-thermal-throttling#post_21104733

Just some testing background info. Now back to testing. The VID findings in this thread are very interesting. The VID data used to be important when searching for Core 2 CPUs that could clock well but there hasn't been enough VID data collected and analyzed for the newer Core i CPUs.

Here's the data from my 3632QM. The numbers seem a little low, so if anyone would like me to rerun them after a few days, please let me know:

i7-3632QM    x12     x23     x24     x25     x26     x27     x28     x29
Voltage (V)  0.8055  0.8856  0.9006  0.9207  0.9407  0.9607  0.9857  1.0107
TDP (W)      10.5    19.3    20.6    22.1    23.7    25.5    27.5    30.0
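As a rough cross-check of the voltage/TDP numbers above: dynamic CMOS power scales roughly with frequency times voltage squared (P ~ C·V²·f). This is a first-order model of my own, not Intel's actual TDP estimator, and it ignores leakage, but it tracks the table surprisingly well:

```python
def scale_power(p_ref, f_ref, v_ref, f, v):
    """Scale a reference power figure to a new frequency/voltage point
    using the first-order dynamic-power model P ~ V^2 * f."""
    return p_ref * (f / f_ref) * (v / v_ref) ** 2

# Reference point from the table: ~30W at x29 (2.9GHz) and 1.0107V.
# What does the model predict for x26 (2.6GHz) at 0.9407V?
predicted = scale_power(30.0, 2.9, 1.0107, 2.6, 0.9407)
print(f"predicted x26 power: {predicted:.1f} W")  # table's measured value: 23.7W
```

The prediction lands within half a watt of the measured 23.7W, which suggests the logged TDP figures are internally consistent.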


I also soldered the mSATA (?) tracks for the WWAN slot. I get the whitelist screen if I use a mini PCIe wifi card or an mSATA drive. A second working slot was a long shot to begin with, but I wanted to put a new thermal pad on the chip that uses the chassis as a heatsink anyway.

I also got a GTX 650 Ti Boost hooked up as an eGPU. My plan is to do some gaming on the internal LCD. The 650 Ti Boost was designed as a budget 1080p card, so it should be decent for my purposes. I'll get some eGPU benchmarks into that thread after the new year.

That flexibility of limiting performance with ThrottleStop to 35W levels, or unleashing it for up to an extra 10.3% CPU performance (3.2GHz vs 2.9GHz), is why I'd choose the i7-3630QM over the i7-3632QM. If you want simplicity, not needing to mess with ThrottleStop and just accepting the lower performance, then the i7-3632QM would be the way to go.

Exactly. I don't see any visible boost in gaming, so I don't want to mess with ThrottleStop. There is no point for me.

Hi Bjorm.

I don't quite follow your logic here. Since I have a 1024M benchmark at ~280s and a Physics score at ~8500, are you saying my games wouldn't profit from this improvement from my 3820QM? I'll have to make some BF4 benchmarks (takes a long time to download on my parents' 3G/4G connection) and prove you wrong ;)

I can even bet you that you won't see any visible difference. Seriously. Even if you gain 2 or 3 fps in minimum fps, you won't notice it in the fps plots. The same situation with the GTX 670 vs GTX 680. No point in buying something stronger than a GTX 670.

Here's the data from my 3632QM. The numbers seem a little low, so if anyone would like me to rerun them after a few days, please let me know:

Results are very similar to mine, so they should be correct IMO.

I also soldered the mSATA (?) tracks for the WWAN slot. I get the whitelist screen if I use a mini PCI-E wifi card or mSATA drive. A second working slot was a longshot to begin with, but I wanted to put a new thermal pad on the chip that uses the chassis as a heatsink anyway

If you halt Win7/8 bootup with F8/F12 (or boot via eGPU Setup 1.x) and then hotplug the mPCIe or mSATA drive, does it appear in Device Manager? Hotplugging after BIOS boot is a workaround to HP's whitelist-halt startup BIOS screen. For mSATA to appear, the BIOS would have to enable SATA port 2, the port that's purportedly wired through to those tracks you soldered.

If you halt Win7/8 bootup with F8/F12 (or boot via eGPU Setup 1.x) and then hotplug the mPCIe or mSATA drive, does it appear in Device Manager? Hotplugging after BIOS boot is a workaround to HP's whitelist-halt startup BIOS screen. For mSATA to appear, the BIOS would have to enable SATA port 2, the port that's purportedly wired through to those tracks you soldered.

Give me a few days and I'll check it out. I tried doing that with my USB-only mini PCIe card and it didn't show up, but I'm guessing the BIOS treats this differently? I also tried taping pin 20 on a mini PCIe wifi card and I still had the issue.


Got my efficient i7-3740QM back and noted the VID table. What's interesting is that now that I have my 25mm copper plate sandwiched between the heatsink and CPU, my temps are noticeably lower and the CPU can hold x35 on 4 cores for at least 2 minutes. Previously it would downclock from x35 to a locked x34 within seconds, which makes me wonder if CPU temps are the limiting factor here. Now, after a number of minutes in a locked x35 4-core mode, once the temps have risen I see the turbo start oscillating between x35 and x34, presumably to keep either TDP or temps in check.


I'm now wondering whether, if I instead installed a US$7.59-shipped 42mm x 42mm x 1.2mm copper shim, I'd be able to hold x35 indefinitely. That's about the max size of shim that would fit within the constraints of the heatsink screw mounts.

Here's the 2570P i7-3740QM (efficient) + [email protected] eGPU benchmark results:

3dmark06: 26311, CPU Score=6711

3dmark11: P7705, GPU=7935, Physics=8358

P1xxSM-Changelog | Prema Mod

Really interesting mods. Is anyone ready to ask for a mod for the EliteBook/Latitude?

Unfortunately that's not going to happen without herculean effort.

Our BIOS is RSA-protected. It's something HP introduced way back in the F.20 2510P/2530P series to protect their interests against user modification.

What happens is that if the BIOS detects a mod, it will black-screen on boot, requiring an emergency recovery to get it going again. Unfortunately that means no hacks to the 2570P BIOS to remove the WWAN whitelist restriction, increase i7-quad CPU power limits, enable the mSATA (WWAN) cache module, or mod the ACPI FACP table to enable ASPM and increase battery life.

That is, unless HP releases a BIOS with those goodies for us.


New BCLK-overclocking-enabled ME FW for the 2570P. My previous one only worked for aikimox; this one works for Tech Inferno Fan. If someone else can test it, we can determine whether the ME FW has to be specific to each system.

How to flash it:

1. Download FPT and the modded ME FW here:

fpt.zip

2570pOC.bin

FPT needs to be run from DOS. There is a Windows version, but flashing in Windows is very dangerous.

2. Restart and do the keyboard sequence:

WIN+left_arrow+right_arrow, then release on POST to enable the flash descriptor override.

3. Flash with FPT using the command:

fpt -me -f 2570pOC.bin

When flashing you might get a warning about the file being smaller than the size available on the flash ROM. This is fine. Flash anyway.

4. Install XTU. 4.2 worked for Tech Inferno Fan.

New BCLK-overclocking-enabled ME FW for the 2570P. My previous one only worked for aikimox; this one works for Tech Inferno Fan. If someone else can test it, we can determine whether the ME FW has to be specific to each system.

..

4. Install XTU. Anything but the latest 4.2 version works.

Thank you for this. Incidentally, XTU 4.2 does work for me.

How to check for a bad ME flash or XTU version

Below I've included an image showing what I check for a bad ME flash and/or a bad XTU version. A 'bad ME flash' in my case was Aikimox's modified ME at http://forum.techinferno.com/hp-business-class-notebooks/2537-12-5-hp-elitebook-2570p-owners-lounge-12.html#post65904 . I believe jot23 had this same issue.

I upgraded from XTU 3.2 -> 4.0 -> 4.2. All of them had the reference clock slider once I had flashed a good ME firmware with unlocked clocks c/o Khenglish.

Khenglish modified my system's own fpt64.exe-generated BIOS dump, which resulted in a good ME flash. Now I can successfully adjust my BCLK. The captured XTU screenshot below shows a 4.63% overclock (2.7GHz -> 2.825GHz).

http://img811.imageshack.us/img811/4415/nm4n.png


Funny, 4.2 lacks the BCLK slider for me. If the BCLK slider is greyed out, it means the ME FW is not configured for overclocking; if it's completely gone, then it's XTU's fault.

It's interesting that you have the power limit sliders. Do they seem to work at all? With those power limits you should hold x35 on all cores indefinitely as long as you don't hit 105C.

Quote

Funny, 4.2 lacks the BCLK slider for me. If the BCLK slider is greyed out, it means the ME FW is not configured for overclocking; if it's completely gone, then it's XTU's fault.

It's interesting that you have the power limit sliders. Do they seem to work at all? With those power limits you should hold x35 on all cores indefinitely as long as you don't hit 105C.



Below is a snippet of my i7-3740QM TS log while running the TS-1024M bench. You can see the third line is where the CPU hits 43.6W TDP at x35. Temps are still OK at 85/86 degrees. Then there is a steady decline down to x34. It appears the CPU is being internally TDP-limited regardless of the power limits I set in ThrottleStop or XTU.

2014-01-07  13:53:44  35.00   94.8  100.0  100.0        0   85   1.0858   41.5
2014-01-07  13:53:44  34.99   97.6  100.0  100.0        0   85   1.0858   36.4
2014-01-07  13:53:44  34.98   96.7  100.0  100.0        0   85   1.0858   43.6
2014-01-07  13:53:45  34.94   98.6  100.0  100.0        0   85   1.0858   39.0
2014-01-07  13:53:45  34.91   99.9  100.0  100.0        0   85   1.0858   38.8
2014-01-07  13:53:45  34.91   98.0  100.0  100.0        0   86   1.0858   37.7
2014-01-07  13:53:45  34.84   99.2  100.0  100.0        0   85   1.0858   39.7
2014-01-07  13:53:45  34.93   98.5  100.0  100.0        0   85   1.0858   38.5
2014-01-07  13:53:45  34.92   98.1  100.0   12.5        0   85   1.0858   41.8
2014-01-07  13:53:46  34.92   96.2  100.0  100.0        0   85   1.0858   38.1
2014-01-07  13:53:46  34.91   99.4  100.0  100.0        0   85   1.0858   39.4
2014-01-07  13:53:46  34.98   95.3  100.0  100.0        0   85   1.0858   39.7
2014-01-07  13:53:46  34.92   97.7  100.0  100.0        0   85   1.0558   37.4
2014-01-07  13:53:47  34.94   98.1  100.0  100.0        0   85   1.0858   39.9
2014-01-07  13:53:47  34.81   99.4  100.0  100.0        0   85   1.0858   40.4
2014-01-07  13:53:47  34.84   98.8  100.0  100.0        0   86   1.0558   37.5
2014-01-07  13:53:47  34.85   99.1  100.0  100.0        0   85   1.0858   38.6
2014-01-07  13:53:47  34.80   98.3  100.0  100.0        0   86   1.0558   38.7
2014-01-07  13:53:48  34.85   99.4  100.0  100.0        0   86   1.0558   40.6
2014-01-07  13:53:48  34.52   99.4  100.0  100.0        0   86   1.0858   39.2
2014-01-07  13:53:48  34.57   99.7  100.0  100.0        0   86   1.0858   37.1
2014-01-07  13:53:48  34.69   99.7  100.0  100.0        0   86   1.0558   40.2
2014-01-07  13:53:48  34.32   99.7  100.0  100.0        0   86   1.0558   37.3
2014-01-07  13:53:49  34.37   99.8  100.0  100.0        0   86   1.0558   36.9
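If anyone wants to pull the trend out of a ThrottleStop log like the one above programmatically, here's a quick sketch. The column order is assumed from the 2560P log posted later in this thread (DATE TIME MULTI C0% CKMOD CHIPM BAT_mW TEMP VID POWER); adjust the indices if your log differs:

```python
# Three representative rows copied from the log above:
sample = """\
2014-01-07  13:53:44  35.00   94.8  100.0  100.0  0  85  1.0858  41.5
2014-01-07  13:53:44  34.98   96.7  100.0  100.0  0  85  1.0858  43.6
2014-01-07  13:53:49  34.37   99.8  100.0  100.0  0  86  1.0558  36.9
"""

def parse(log_text):
    """Parse whitespace-separated ThrottleStop log rows into dicts."""
    rows = []
    for line in log_text.splitlines():
        f = line.split()
        rows.append({"multi": float(f[2]), "temp": int(f[7]),
                     "vid": float(f[8]), "power": float(f[9])})
    return rows

rows = parse(sample)
print("peak power:", max(r["power"] for r in rows), "W")
print("multiplier drift:", rows[0]["multi"], "->", rows[-1]["multi"])
```

Scanning for the peak power and the multiplier drift makes the TDP throttling point easy to spot without eyeballing every row.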


The only remaining test I have to investigate this is to replace my 25x25x1mm copper shim, which increased my x35 duration, with a larger 42x42x1mm one, something I'm awaiting from jacobsson. The idea is to lower CPU temps just in case they are the problem here.

The XTU benchmark is tougher than ThrottleStop's; it seems to stress the CPU much more. I see it drop from x35 -> x34 -> x33 -> x32 within several 10-second intervals.


Lower temps do lead to substantially lower power draw, particularly on Ivy Bridge. That's probably why the shim helped you maintain turbo even though temps are significantly below 105C.
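A rough illustration of why cooler chips draw less power: static leakage grows roughly exponentially with temperature. The numbers below (8W of leakage at 85C, doubling every ~25C) are assumptions of mine for illustration, not measured values for these CPUs:

```python
def leakage_w(leak_at_ref, temp_c, ref_c=85.0, doubling_c=25.0):
    """Toy exponential leakage model: doubles every `doubling_c` degrees."""
    return leak_at_ref * 2 ** ((temp_c - ref_c) / doubling_c)

# If ~8W of a 45W package were leakage at 85C, a 15C drop would save:
saved = leakage_w(8.0, 85) - leakage_w(8.0, 70)
print(f"leakage saved by a 15C drop: ~{saved:.1f} W")
```

Even a couple of watts saved this way matters when the CPU is sitting right at its internal TDP limit, which fits the observation that the shim extended the x35 duration.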

Lower temps do lead to substantially lower power draw, particularly on Ivy Bridge. That's probably why the shim helped you maintain turbo even though temps are significantly below 105C.

Thank you for the confirmation. I suspected heat and TDP throttling were related. With the original heatsink, the TS-1024M bench maintained x35 for a split second and then dropped to x34. Adding the shim sees it run x35 for about 2 minutes, until temps hit around 85/86 degrees. I saw the reported TDP increase with temps and now understand why.

I may be able to extend those 2 minutes by mounting 2 x 42x42x1mm shims in place of the 25x25x1mm shim: one between the CPU and heatsink, and one with a cutout hole sandwiched below it that sits around the CPU core. That amount of copper will be able to hold considerably more heat energy. Adding an alternative heat path back to the stock heatpipe, by say gluing a heatpipe fragment onto my exposed shim, would complete the mod nicely.

I'll also mention that from the various Lenovo/HP 3DMark11 Physics benchmarks, I'm seeing plenty of IVB i7-quad CPUs not maintaining their 4-core max turbo multiplier. From that I believe many manufacturers provided an inadequate heatsink in their SB -> IVB system updates, typically just reusing the same SB one.

In the case of the 2570P, HP could have made the whole heatsink out of copper and provided either two heatpipes or a larger one running back to the fan-blown fins. Then again, either the SB or IVB 2570P heatsink is sufficient to move heat from the factory-specced 35W i5/i7; 45W i7-quad CPUs are outside of factory spec.


Heatpipe fragment? You can't cut heatpipes or rupture them at all. Heatpipes are over an order of magnitude more heat-conductive than pure copper.

Also, if you're already getting good contact, a wider shim is not going to help. All that matters is the least thermal resistance from the CPU die to the heatpipe. Adding more copper in between will hurt, not help, temperatures. You could go liquid metal TIM if you haven't already; that would knock off a few degrees. Maybe you could also bend the heatpipe a little and sand down some parts to make the heatsink fit snug without a shim. That extra thermal interface due to the shim is costing you several degrees over a solid piece. You have to be very careful not to kink the heatpipe if you bend it, though.
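The interface penalty is easy to see with a back-of-envelope series thermal-resistance model. The resistance values below are assumptions of mine for illustration (~0.05 C/W per TIM layer, ~0.02 C/W for a 1mm copper shim), not measurements of this heatsink:

```python
def delta_t(power_w, resistances_c_per_w):
    """Temperature rise across a stack of thermal resistances in series."""
    return power_w * sum(resistances_c_per_w)

# die -> TIM -> heatpipe, vs die -> TIM -> shim -> TIM -> heatpipe, at 45W
direct = delta_t(45, [0.05])
shimmed = delta_t(45, [0.05, 0.02, 0.05])
print(f"direct: +{direct:.1f} C, shimmed: +{shimmed:.1f} C")
```

With these assumed values, the second TIM layer plus shim costs about 3C at 45W, consistent with the "several degrees over a solid piece" point above.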

Heatpipe fragment? You can't cut heatpipes or rupture them at all. Heatpipes are over an order of magnitude more heat-conductive than pure copper.

Also, if you're already getting good contact, a wider shim is not going to help. All that matters is the least thermal resistance from the CPU die to the heatpipe. Adding more copper in between will hurt, not help, temperatures. You could go liquid metal TIM if you haven't already; that would knock off a few degrees. Maybe you could also bend the heatpipe a little and sand down some parts to make the heatsink fit snug without a shim. That extra thermal interface due to the shim is costing you several degrees over a solid piece. You have to be very careful not to kink the heatpipe if you bend it, though.

Yeah.. a heatpipe bit would need to be sealed and an exact width, so that's unlikely to happen. I'm using a $3 Coolermaster TIM, which has not been the limiting factor here.

The 2570P heatsink is made of a large aluminium component, a small copper 'shim' and a copper heatpipe that leads to a fan-cooled radiator grill. I'd take photos, but I'd need to repaste and I have no reserve paste at the moment.

Why did HP use aluminium? The beancounters would have seen that aluminium is less than 1/4 of the cost. They shortchanged us: copper's thermal conductivity is nearly double aluminium's:

aluminium: US$1717/ton, conductivity = 205 W/(m K)

copper: US$7313/ton, conductivity = 385 W/(m K)
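Fourier's law across a plate, Q = k·A·dT/L, puts those conductivities into watts. The geometry below (a 25x25mm patch, 3mm thick, 10C drop) is an assumption of mine for illustration, not the actual 2570P heatsink dimensions:

```python
def conducted_watts(k, area_m2, thick_m, dt_c):
    """Steady-state conduction through a plate: Q = k * A * dT / L."""
    return k * area_m2 * dt_c / thick_m

area, thick, dt = 0.025 * 0.025, 0.003, 10.0
al = conducted_watts(205, area, thick, dt)
cu = conducted_watts(385, area, thick, dt)
print(f"aluminium: {al:.0f} W, copper: {cu:.0f} W")
```

For identical geometry, the copper section moves 385/205 ≈ 1.88x the heat of the aluminium one, the "nearly double" figure above.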

My thinking is this: the small, hot IVB CPU die needs to dissipate heat quickly or temps spiral out of control. HP in their wisdom decided to add a tiny copper shim at the end of the aluminium heatsink to soak up heat quickly. That would be adequate for a 35W SB CPU, or even a 35W IVB CPU. While it will take up some heat, it won't do it fast enough with a 45W i7-quad.

The idea of the copper shim is to remove AND store the heat until the rest of the cooling system can catch up; store, because heat flows from hot to cold. I therefore expect the 42x42x1mm shim will improve how long my CPU can run a TS-1024M at x35, something I'll be able to prove once the shims arrive.
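Out of curiosity I did the heat-capacity arithmetic for the storage part of that idea. Copper's volumetric heat capacity is ~3.45 J/(cm³ K) (8.96 g/cm³ times 0.385 J/(g K)); the 30C allowed rise and 10W of excess heat are assumptions of mine:

```python
def buffer_seconds(vol_cm3, rise_k, excess_w):
    """Seconds a copper mass can absorb `excess_w` watts of surplus heat
    before warming by `rise_k` degrees (pure storage, no conduction away)."""
    joules = vol_cm3 * 8.96 * 0.385 * rise_k  # density * specific heat
    return joules / excess_w

small = buffer_seconds(2.5 * 2.5 * 0.1, 30, 10)  # 25x25x1mm shim
large = buffer_seconds(4.2 * 4.2 * 0.1, 30, 10)  # 42x42x1mm shim
print(f"25x25x1mm: ~{small:.0f}s, 42x42x1mm: ~{large:.0f}s")
```

Pure storage only buys seconds (~6s vs ~18s here), so most of the observed 2-minute x35 improvement presumably comes from the shim improving spreading and contact rather than from raw heat capacity.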

As you guys have noticed a TDP limit with the 2570P running quad-cores, I've noticed it with the Sandy Bridge 2560P as well, just more severe (it seems to be about 36W compared to the 2570P's 41W). The 2760QM should run at x32 on 4 cores but I only get x27. Here is the ThrottleStop log of a 32M test, which was run over my AC unit!



DATE       TIME     MULTI C0%   CKMOD CHIPM BAT_mW TEMP VID    POWER
2014-01-07 14:44:51 31.48 0.5   100.0 100.0 0      27   1.1959 4.0
2014-01-07 14:44:53 27.07 92.4  100.0 100.0 0      51   1.1259 32.1
2014-01-07 14:44:54 27.00 100.0 100.0 100.0 0      53   1.1208 34.8
2014-01-07 14:44:55 27.00 100.0 100.0 100.0 0      55   1.1208 35.1
2014-01-07 14:44:56 27.00 100.0 100.0 100.0 0      57   1.1259 36.0
2014-01-07 14:44:57 27.00 100.0 100.0 100.0 0      58   1.1208 35.8
2014-01-07 14:44:58 27.00 100.0 100.0 100.0 0      59   1.1208 35.5
2014-01-07 14:44:59 27.00 100.0 100.0 100.0 0      59   1.1208 36.0
2014-01-07 14:45:00 27.00 100.0 100.0 100.0 0      60   1.1208 35.9
2014-01-07 14:45:01 27.00 100.0 100.0 100.0 0      61   1.1208 35.6
2014-01-07 14:45:02 27.00 100.0 100.0 100.0 0      61   1.1208 35.7
2014-01-07 14:45:02 27.00 100.0 100.0 100.0 0      62   1.1208 36.3
2014-01-07 14:45:04 27.00 100.0 100.0 100.0 0      61   1.1208 35.8
2014-01-07 14:45:04 27.12 38.8  100.0 100.0 0      39   1.2109 20.5
2014-01-07 14:45:05 13.63 0.2   100.0 100.0 0      34   1.2109 3.8



I've also posted it in the 2560P forum: http://forum.techinferno.com/hp-business-class-notebooks/2090-hp-elitebook-2560p-owners-lounge-%5Bversion-2-0%5D-2.html#post79495
Quote

As you guys have noticed a TDP limit with the 2570P running quad-cores, I've noticed it with the Sandy Bridge 2560P as well, just more severe (it seems to be about 36W compared to the 2570P's 41W). The 2760QM should run at x32 on 4 cores but I only get x27.



..



You forgot to mention you were using a 120W AC adapter, so there are no power starvation issues. Correction: I'm seeing my i7-3740QM start to be TDP-throttled at ~45W, which is as expected. The highest value I saw recorded in the ThrottleStop logs was 43.6W here.

What's interesting to me is that your SB i7-2760QM's x27 mode runs at 1.1208V with a ~36W TS-Bench TDP. We see IVB CPUs consuming 10-14W less at the same x27 mode, running somewhere between 0.9106V and 1.03V depending on the CPU: http://forum.techinferno.com/hp-business-class-notebooks/2537-12-5-hp-elitebook-2570p-owners-lounge-37.html#post77432 . With 22nm IVB also being 5% or so faster than 32nm SB, that means significantly more performance-per-watt. 10-14W also means a 2560P i7-quad SB system runs noticeably hotter than an i7-quad IVB 2570P.

Your finding of a 36W CPU TDP limit takes the shine off the 2560P as a good SB i7-quad candidate. Anybody wanting i7-quad performance should bypass the 2560P and get a 2570P instead.
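Putting numbers on that performance-per-watt claim, using the wattages discussed above and an assumed ~5% clock-for-clock IPC advantage for Ivy Bridge (illustrative arithmetic only):

```python
def perf_per_watt(relative_perf, watts):
    """Relative performance divided by package watts."""
    return relative_perf / watts

sb = perf_per_watt(1.00, 36.0)   # i7-2760QM at x27, ~36W (baseline)
ivb = perf_per_watt(1.05, 24.0)  # IVB i7-quad at x27, ~24W, ~5% faster per clock
print(f"IVB perf-per-watt advantage at x27: {(ivb / sb - 1) * 100:.0f}%")
```

Under these assumptions IVB delivers over half again the work per watt at the shared x27 point, which is why the 36W-limited 2560P looks so much worse for quad-core duty.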

You forgot to mention you were using a 120W AC adapter, so there are no power starvation issues. I'm seeing my i7-3740QM start to be TDP-throttled at ~45W, which is as expected. The highest value I saw recorded in the ThrottleStop logs was 43.6W here.

What's interesting to me is that your SB i7-2760QM's x27 mode runs at 1.1208V with a ~36W TS-Bench TDP. We see IVB CPUs consuming 10-14W less at the same x27 mode, running as low as 0.9106V: http://forum.techinferno.com/hp-business-class-notebooks/2537-12-5-hp-elitebook-2570p-owners-lounge-37.html#post77432 .

With 22nm IVB also being 5% or so faster than 32nm SB, that means significantly more performance-per-watt. Your result takes the shine off the 2560P as a good SB i7-quad candidate. Anybody wanting to upgrade to an i7-quad should get a 2570P instead.

I'll just add that when applications use mostly 1, 2 or (if I remember correctly) even 3 cores, the CPU is excellent: it draws <36W, so it runs at its spec speed. But 2560Ps are selling for about the same price as a 2570P, so I'd recommend the 2570P over the 2560P for those who don't already have the older model.

Where's the Haswell 2580p? :P

I'll just add that when applications use mostly 1, 2 or (if I remember correctly) even 3 cores, the CPU is excellent: it draws <36W, so it runs at its spec speed. But 2560Ps are selling for about the same price as a 2570P, so I'd recommend the 2570P over the 2560P for those who don't already have the older model.

Where's the Haswell 2580p? :P

If they cost the same then there's no reason to get a 2560P. The 2570P beats the 2560P in nearly all aspects (comparison on the first page).

Where's the Haswell 2580p? :P

I have asked, almost begged, the global HP product manager for a ZBook 12, a Haswell 2570P successor, OR at least a Haswell systemboard for our machines. While a ZBook 12 has been slated 'for review', I don't believe it will happen. It seems the EliteBook team got sidetracked by the ultrabook thin-and-light craze, with no expandability options. Their 12.5" '820 G1' is a major step backwards from a 2570P (see the comparison on the first page).

If you need a small business-grade Haswell system with i7-quad upgradability, an ExpressCard slot and an optical drive, then HP has nothing to offer. Dell and Lenovo both have 14" candidates (e.g. Dell Latitude 14 5000/E6440, Lenovo L440): http://forum.techinferno.com/diy-e-gpu-projects/4109-egpu-candidate-system-list-%5Bthin-light%5D.html#post57159 .

Haswell isn't a major technological improvement over IVB anyway. The key highlight, better battery life, is primarily in the ULV platform, which integrates the chipset with the CPU. The full-powered CPU platform with a separate chipset sees only ~15% better battery life. Nothing major.


i7-3740QM + [email protected] eGPU results without and with 4.63% BCLK overclock

<strike>In order to prevent 3dmark11 from hanging/rebooting with the higher BCLK setting, I need to provide more CPU voltage. Since the XTU 'additional turbo voltage' setting is greyed out, I run Throttlestop->TRL and set Flex VID (extra voltage) to 255. I do that immediately after using XTU to set the BCLK overclock.</strike> <--- incorrect. It's not necessary to run ThrottleStop and set the Flex VID after doing the XTU BCLK overclock; it's an ignored setting.

Note too that CUDA-Z sees the same PCIe bandwidth in both situations, so the PCIe bus isn't being overclocked. @Khenglish has prepared a ME firmware that supposedly does overclock the PCIe bus, though I need my system up, so I can't risk flashing it for a minor gain there.

With the stock 100MHz BCLK (ThrottleStop reports BCLK=99.777): Ref=2.7GHz, Turbo=3.7GHz

3dmark06=26451

3dmk11.gpu=7911 Physics=8388

With XTU BCLK set to 104.6322 (ThrottleStop reports BCLK=104.476): Ref=2.825GHz, Turbo=3.871GHz

3dmark06=27396

3dmkV.gpu=25052

3dmk11.gpu=7916 Physics=8783

RE5-dx9-1280x800-var=226.4

RE5-dx9-1280x800-fixed=123.7


Flex VID in ThrottleStop set to 255 will make the CPU use 1.52V, which I would expect to make it overheat very fast, assuming the motherboard handles the power draw at all. Also, ThrottleStop and CPU-Z will both report the VID with additional turbo voltage taken into account, so if it doesn't say 1.52V during load, it's not working.

I think you just got lucky and happened to make it through the test.

What instability are you getting? Hard freezes with no warning, or program shutdowns and calculation errors in stability tests like prime95? I expect the former, and all you can do about that is lower BCLK, unless you want to start pencil-modding your motherboard. If it's the latter, though, I suspect your memory is unstable rather than the CPU, so raising memory timings or switching to memory with more headroom would fix it.

Also, I'm not familiar with the CUDA-Z pci-e bandwidth test. I have run the Sandra pci-e bandwidth test and know for a fact that it shows the impact of even minor pci-e clock changes, and it works on AMD cards, so if you could run that instead we'd have a better understanding of what's going on. Also, like I said in the PM, if overclocking shows no bandwidth improvement, try underclocking. I suspect underclocking will have an impact.


  • Similar Content

    • By Tech Inferno Fan
      We've had a stack of recurring questions from users with problems getting a mPCIe eGPU working. These include GPU-Z not reporting clock details, error 10/43, or the eGPU not being detected at all. Overall it's more troublesome getting mPCIe working than, say, expresscard or Thunderbolt.
       
      Here are some common problems and troubleshooting steps to correct them.
       
      Getting a black bootup screen, resolving error 10/43 or ACPI_BIOS_ERROR Windows bootup messages
       
      Here the BIOS doesn't know what to do when it sees an eGPU, so the solution is to not let the BIOS see it. Do that by setting the delays on the eGPU adapter (CTD/PTD on EXP GDC, or CLKRUN/PERST# on PE4L/PE4C). Boot with the eGPU adapter in the wifi slot into Setup 1.30 or Windows. Is the eGPU detected?
       
      I'll add that should error 43 continue AND you have an NVidia dGPU as well as an NVidia eGPU, then it's likely because the mobile NVidia and desktop NVidia drivers are loaded simultaneously. Uninstall ALL your NVidia drivers, use "DDU" to clean NVidia registry entries, and do a 'clean' install of the latest NVidia desktop driver.
       
      mPCIe port that hosted the wifi card disappears when connecting an eGPU in its place
       
      Use the Setup 1.30 PCIe Ports->enable option to enable the missing port.
       
      eGPU does not get detected
       
      Overcome mPCIe whitelisting by booting with the wifi card and then hotswapping in the eGPU. That way the BIOS will enable the mPCIe port to work.
       
      1. Boot with wifi card into Windows, sleep system, swap wifi card for mPCIe eGPU adapter and ensure eGPU is powered on, resume system. Do a device manager scan in Windows. Is the eGPU detected?
       
      2. Boot with wifi card into Setup 1.30, then *carefully* hotplug the eGPU adapter in place of the wifi card. Hit F5 to rescan the PCIe bus. Is the eGPU detected?
       
      If this enables detection, then avoid this tedious hotswapping by seeking an unwhitelisted modified BIOS for your system OR test Setup 1.30's PCI ports->undo_whitelisting feature.
       
      eGPU still not detected - set the PSU to be permanently on
       
      The latest EXP GDC and BPlus eGPU adapters try to manage the PSU so it only powers on after they detect a signal. This can cause a race condition where the eGPU isn't ready when the CLKRUN signal is asserted.
       
      Avoid this by jumpering the PSU so it's permanently on rather than being managed. Depending on the PSU you are using, refer to the following doco on how to do that:
       
      http://forum.techinferno.com/enclosures-adapters/8441-%5Bguide%5D-switching-atx-psu-using-paperclip-trick-swex.html
      http://forum.techinferno.com/enclosures-adapters/9426-220w-dell-da-2-ac-adapter-discussion.html
       
      eGPU still not detected - a non-standard mPCIe implementation by your vendor?
       
      PERST# mPCIe pin 22 may need to be isolated due to a non-standard implementation by your notebook vendor: http://forum.techinferno.com/enclosures-adapters/10812-pe4x-series-understanding-clkreq-perst-delay.html#post142689
       
      eGPU still not detected - faulty hardware?
       
      If you still don't get detection, test the video card and eGPU adapter in another machine to confirm neither is faulty.
       
      NVidia driver stops responding
       
      EXP GDC, PE4H 2.4 and PE4L 1.5 all use a socketed cable and are therefore not truly Gen2-compliant devices. This error indicates there were transmission errors.
       
      The solution is either to get a better Gen2-compliant eGPU adapter such as PE4C V3.0 or PE4L 2.1b (both with soldered cables), or to downgrade your link from Gen2 to Gen1 using BIOS options or Setup 1.30.
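      For context on what the Gen1 downgrade costs you: PCIe Gen1 signals at 2.5 GT/s and Gen2 at 5.0 GT/s, both with 8b/10b encoding, so the theoretical per-lane bandwidth simply halves. A quick Python check:

```python
# Theoretical per-lane PCIe bandwidth. Gen1 = 2.5 GT/s, Gen2 = 5.0 GT/s,
# both using 8b/10b encoding (8 payload bits per 10 bits on the wire).

def lane_bandwidth_mb_s(gt_per_s):
    return gt_per_s * 1e9 * (8 / 10) / 8 / 1e6  # MB/s per lane

print(f"Gen1 x1: {lane_bandwidth_mb_s(2.5):.0f} MB/s")
print(f"Gen2 x1: {lane_bandwidth_mb_s(5.0):.0f} MB/s")
```

      On an x1 mPCIe link that halving is noticeable, so the soldered-cable Gen2-compliant adapter is the better fix where possible.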
       
      Other troubleshooting help resources?
       
      See DIY eGPU Troubleshooting FAQ.
       
    • By ReverseEffect
      3dMark11 Performance Preset Benchmark: http://www.3dmark.com/3dm11/11262792
       
      Required items:
      1.) Lenovo u310 (I have a Core i3 - Ivy Bridge, 8GB RAM)
      2.) 65CN99WW unwhitelisted.
      3.) eGPU (I used a EVGA GTX 750 Ti from another computer I had).
      4.) EXP GDC mPCIe Edition adapter (got from eBay - banggood seller).
      5.) ATX power supply (I used a 600W PSU from another computer I had).
      6.) USB wireless.
      7.) External monitor, keyboard, and mouse.
       
      Steps:
      1.) Obtain and install an unwhitelisted BIOS. If you are unable to obtain one, I think it might be possible to bypass the whitelist with Tech Inferno Fan's Setup 1.x (may need confirmation as I haven't used it myself yet).
      2.) Shutdown computer and remove all USB devices, ethernet cables, power cables, card reader cards.
      3.) Remove mPCIe wireless card and detach antennas.
       
       
      4.) Attach EXP GDC external mPCIe cable to the former wireless slot and screw down.
       
       
      5.) Attach HDMI end of the mPCIe cable adapter to the EXP GDC device.
       
       
      6.) Attach graphics card to the EXP GDC device (I moved my laptop off the desk and onto the side shelf to make room on the desk for the monitor/keyboard/mouse).
       
       
      7.) Using the power cable adapters that came with the EXP GDC device, I hooked in my ATX power supply's 20-pin and CPU 4-pin cables, then hooked the other end (8-pin) into the EXP GDC device. My EVGA 750 Ti also required an additional PCIe power cable (6-pin) in the top of the card.
       
       
       
       
       
      8.) Then I attached my misc devices (HDMI monitor, USB keyboard/mouse/wireless adapter), and hooked in my PSU and powered it on (below is image of final product, also moved HDMI cable out of the way).
       

       
      9.) Power on your computer and let it install the standard VGA drivers and then install your drivers (I didn't have to go in the BIOS for any graphics settings, which it doesn't have anyways, nor did I have to disable iGPU in Device Manager before the card was added).
       
      Extra Info:
      I found that most games will play on med settings with about 45 FPS with this particular card.
      BDO: Upscale on - Anti Aliasing on - SSAO off - med settings.
      Skyrim: Med-High settings.
      Fallout 4: Med settings.
       
      (EDIT 5/19/2016) > Images added.
       
    • By TheLoser1124
      Hello. A couple of days ago I got a new GPU, but when I installed it into my computer I was unable to use it, and now I know why. Checking the events tab for my GPU in Device Manager, I noticed an error: "event 411 kernel PnP", with Problem Status: 0xC01E0438. I believe this is why my GPU hasn't been working on my PC. If you know how to fix this problem or have info on how to fix it, that would be greatly appreciated. I'm using an EVGA NVIDIA GeForce GTX 1660.
    • By TheLoser1124
      I'm having a problem where my PC says my eGPU is not usable. It's detected in Device Manager and doesn't have the yellow triangle next to it, but I can't use it in games and the Nvidia Control Panel doesn't recognize it either. I'm using an EVGA NVIDIA GeForce GTX 1660 on Windows 10. I tried DDU and reinstalling the drivers, and now I can't access the Nvidia Control Panel. The GPU is not recognized by any other apps, and I went on *********** and was unable to find my answer. Any help on how to fix this problem would be greatly appreciated.
    • By Radstark
      Title sums it up.
       
      TL;DR: we have a Clevo that runs a desktop CPU, one with those huge 82 Wh batteries. We remove the GPU and let it use the CPU's integrated graphics. How much time for the battery to go from 100 to 0? Is it comparable to an ultrabook's?
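      A rough answer is just capacity / average draw. The wattages in the sketch below are assumptions for illustration only, not measurements: a desktop-CPU Clevo idling on its iGPU plausibly draws somewhere around 15-25 W, versus roughly 5-8 W for a ULV ultrabook doing light work.

```python
# Rough battery-runtime estimate: hours = capacity (Wh) / average draw (W).
# Draw figures are assumed for illustration, not measured.

def runtime_h(capacity_wh, draw_w):
    return capacity_wh / draw_w

battery_wh = 82  # the large Clevo battery mentioned above
for label, draw in [("Clevo on iGPU, light use (assumed 20 W)", 20),
                    ("ultrabook, light use (assumed 7 W)", 7)]:
    print(f"{label}: {runtime_h(battery_wh, draw):.1f} h")
```

      So even with the GPU removed, the desktop-CPU platform's higher idle draw likely keeps it well short of ultrabook runtimes despite the bigger battery.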
       
      I'm theorizing a mobile setup with a static eGPU and an upgradable CPU. Given a hypothetical user who needs fast processing on the go and long battery life while retaining a very high degree of mobility, but at home wants a powerful machine to run most games, I guess that would be their best bet. It would surely be more convenient to keep everything on the same disk. And even though the thing would be quite heavy to carry around, changing the CPU would be more cost-efficient than changing an entire laptop. (Not sure if I'm right here, and I'm also not sure whether the motherboard in a Clevo would be replaceable when a new CPU needs a different socket, which is another reason why I'm asking here.)
       
      If my guesses above aren't correct, then an ultrabook with Thunderbolt and without a dedicated GPU would be a better choice. If they are, then we would be carrying more weight in exchange for a more cost-efficient setup, which I think would be a fair tradeoff.
       
      Also I am aware of the heating problems that these laptops suffer from, at least compared to a desktop setup. Would they be solved by moving the GPU out of the chassis, and instead plugging it with an eGPU dock via Thunderbolt port?
       
      What do you think? Is it doable? If not, why?