
Khenglish

Registered User
  • Posts

    1799
  • Joined

  • Last visited

  • Days Won

    67

Posts posted by Khenglish

  1. First, thanks for the info. I still need to read it 3 more times at least to understand it. But my question is: how much do these help your FPS, and in which games? My understanding was that you'd gain more benefit from a GPU OC, so please help me clarify those two points.

    Sorry for late reply.

    More CPU power helps in some games like BF3 and Crysis 3 if you have a strong GPU. A GPU overclock will help more in most cases. 3Dmark11 is very CPU light compared to games so it is a poor example. You're going to have a hard time finding a tangible improvement in games with just 4-5% on the BCLK, but it's better than nothing.

  2. As the title says, I get weird pixel discoloration when my GTX 680M changes p-states.

    [ATTACH=CONFIG]6807[/ATTACH]

    Edit: Pixels change back to normal after the p-state switch. What could be causing this? It is very annoying.

    I had a defective mobility 9600 do something similar. It would run with similar artifacts cold, but stop when hot. This is definitely a hardware issue somewhere, likely the 680m. I suspect the BGA needs to be resoldered. The oven trick is usually a temporary improvement, while a complete BGA reball is required for a permanent fix (GPU removed, solder points redone, then resoldered to card).

    Was the card like this when you received it, or have you had it for a while and you just started seeing this?

    You can use Nvidia inspector to force the card to stay in certain P-states to avoid the transitions, but if that works for now I suspect the card will become worse over time.

    Unless you can return the card I suggest just forcing P-states and hope things don't get worse. Keep temperatures down to avoid stressing the BGA more.

    It's unlikely to help, but also try the card on an external monitor.

  3. I really liked the concept of the Dell Latitude Z600.

    My ideal yet realistic notebook would be similar to the Z600 but in a 15.4" (16:10 aspect ratio) form factor that could turn into a tablet PC. Hardware wise, I'd want:

    - regular Intel quad core CPU with their small form factor chipset

    - 4 SODIMM slots (4x8GB)

    - 2 mSATA RAID0 SSDs (2x 512GB)

    - Thunderbolt connection to a switchable nVidia GTX Titan eGPU enclosure

    - 8-bit IPS panel with capacitive multitouch and adaptive transflective screen on a thin bezel

    - 2 slim removable batteries (similar to above, needs intelligent switch so that it alternates which battery drains)

    - slice battery for long trips, every battery should have a LED indicator with battery level

    - thin high wattage power brick

    Why the intelligent battery switch? The batteries will last longer if they are both drained at the same time instead of one at a time, since each battery carries half the current and the resistive losses are cut in half.
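    A quick worked example of why parallel draining halves the losses. The current and internal resistance values here are invented for illustration; real packs vary:

```python
# Resistive (I^2 * R) loss: one battery carrying the full load current
# versus two batteries sharing it in parallel.
# Values are illustrative, not measured from any real pack.
I = 4.0    # total load current in amps
R = 0.1    # internal resistance of each battery in ohms

loss_single = I ** 2 * R                 # one battery carries all 4 A
loss_parallel = 2 * ((I / 2) ** 2) * R   # each battery carries 2 A

print(loss_single, loss_parallel)  # parallel losses are exactly half
```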

  4. That's pretty much what I've read yet at the same voltages and same cooling (closed water), the higher ASIC achieves consistently higher boost clocks. The power draw limit in this case is the same for both (125%) so there's a cap set.

    Here's the latest run in 3dmark 11, you can see the max core speeds for each card (this is on stock volts, 125% power limit, +75 core/+240 memory). Max GPU load on each card was 97/98% respectively:

    [ATTACH=CONFIG]6768[/ATTACH]

    Edit: Although I just noticed the 2nd card shows a higher VDDC in the pic, max VDDC on both was 1.1750V. They seem to boost much higher in 3dmark 11 vs Bioshock Infinite. Definitely killing my non-signature 680s, which maxed at 1260MHz on 1.22V.

    Well it makes sense that the higher ASIC % card will clock higher at the same power limit since it uses less power. Unlike Intel CPUs, Nvidia GPUs actually measure power consumption instead of calculating it, so if a card really does use less power than another at the same clocks, it will clock higher.

    There are mods to make the current measuring circuit read far less current than reality, effectively disabling power limits (go to kingpin's website for it). It would be interesting to see which card clocked higher with no power limit.

  5. So one of my cards does 1215 Mhz on stock clocks/volts and has an ASIC of 85%. The other one does 1190 MHz on stock clocks/volts and has an ASIC of 71%. Coincidence that the one w/the better ASIC gets better OC? I don't think so but some people have said ASIC quality isn't 100% reliable but in my case it seems to be.

    ASIC quality corresponds to leakage, not how fast a chip can run. High ASIC quality means low leakage and thus lower power draw, while low ASIC quality = high leakage and higher power draw. Some people think that the high leakage chips generally run better under LN2, while high ASIC quality is just flat out superior on air cooling since they can run higher voltages and clocks at the same power draw and heat output.

  6. I have an MSI GX60 with an A10-4600M CPU.

    I use AMD PSCheck and my max turbo is 2.7GHz :/ why not 3.2GHz?

    PScheck does not properly disable P-states, which breaks the 3.2GHz pb0 when used.

    The only way I found to set P-states properly was by manually editing the PCI config registers with BAR-edit. Set F0D24F3xDC bits 9 and 10 to 0. This disables P-states slower than pb1, and you can get PScheck to do it (although it also offers other, wrong approaches). However, doing just this breaks pb0: pb0 requires 2 cores to be in C6, but the cores are set to enter C6 from a P-state that is now disabled, so C6 and thus pb0 stop working. We therefore need to set pb1 as the state from which cores can enter C6, which PScheck does not do. Set F0D24F3xA8 bits 31 and 30 to 0. If you also set F0D24F3xDC bit 8 to 0 to try to force 3.2GHz only, the system will crash. The above limits the system to 2.7 and 3.2GHz; 3.2GHz at all times would require a BIOS mod to disable the boost lock bit.

    recap:

    F0D24F3xA8 bits 31 and 30 to 0

    F0D24F3xDC bits 9 and 10 to 0
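    For reference, a minimal Python sketch of the bit arithmetic in the recap. The example register values are made up; the real registers live in PCI config space and are read and written with BAR-edit, which this sketch does not do:

```python
def clear_bits(value, bits):
    """Return value with the given bit positions forced to 0."""
    mask = 0
    for b in bits:
        mask |= 1 << b
    return value & ~mask

# The two tweaks from the recap, applied to illustrative register values.
# Actual reads/writes of F0D24F3xA8 and F0D24F3xDC happen in BAR-edit;
# this only shows what the before/after bit patterns look like.
reg_a8 = clear_bits(0xC0000000, (31, 30))  # bits 31 and 30 -> 0
reg_dc = clear_bits(0x00000600, (9, 10))   # bits 9 and 10 -> 0
print(hex(reg_a8), hex(reg_dc))  # 0x0 0x0
```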

    Another issue I found with the Trinity system I played around with is that it downclocks the northbridge when the IGP is under load. Isn't that the exact scenario where you do NOT want to downclock the northbridge? To correct this, do the following while the IGP is idle:

    1. F0D0F0xB8 to 0x0001F5F8

    2. F0D0F0xBC bits 0->7 to 0

    The above will get you 1-2% more IGP performance due to the northbridge running at 1600MHz instead of 1300MHz. Note that running AMD Overdrive will revert this setting, so you'll have to redo it. Overdrive will also occasionally freeze the system when started with the northbridge forced to 1600, but never if it is already running. I never found any other instability at 1600MHz; I think Overdrive is just coded terribly. I would use HWinfo64 instead, since just starting it never changed a bunch of settings on me, and it did not crash the system.

    I can't remember if you can undervolt with PScheck. If you can't, just say so and I can show you how to do that too. I know it can't undervolt the IGP/northbridge, but that hardly made a temperature impact when I did it through BAR-edit. CPU undervolting was huge though.

    About disabling cores with msconfig: when I tried that, I found that it disabled an entire node instead of 1 core off each node, meaning you lose half the cache and, most importantly, an FPU. Otherwise it might be worth doing for more performance in some cases.

    I never managed to fix the stuttering I was getting, even with the above and more. The stuttering on Trinity is awful, at least on the Toshiba I was messing around with. I would pick an HD4000 over it any day, despite what benchmarks say.

  7. Yeah, I know, the waterblocks from AquaComputer have an insane number of copper fins. :P On a custom-made block this influences the production costs, so I have to keep an eye on the value for money. ;)

    How thick is the copper base under the fins? Maybe you can trim it down some for more fin height? It would be a big redesign, but another possibility is to remove the IHSs just so you can lower the base plate for taller fins. IHS removal is a bit dangerous though, and messing it up is a great way to add a lot to your costs, but the combined effect of IHS removal and taller fins would make a big difference in temperatures as long as you're careful (don't pull on one side of the razor when pulling it back out, or the other end will spin in and knock off a resistor, like someone I know very well did).

    I understand that having a high number of fins for surface area is difficult when using a milling machine to make them. What about making the whole liquid cooler thicker to increase fin height for more area? This will certainly add to costs, but I think as things are now you have plenty of room for it.

    I'm just concerned that you might spend all this time and money on a cooler, only for it to be basically wasted because the cooling performance is not good. I did just that when making a new internal laptop air cooler 3 years ago. My GPU block has around twice your number of fins, and I think they are also around twice as tall. I really think you should increase surface area significantly more than you already have. My feeling is: you spent thousands on hardware, so why cheap out $40 on cooling?

  8. @Khenglish: No, I haven't run temperature simulation, it's pretty hard to choose the correct values in the correct places to get nearly accurate data.

    I will not remove the IHS, the watercooler is adapted to the height of the two IHS.

    Do you think that a copper fin design, as in the picture below, will improve the cooling a lot?

    [ATTACH=CONFIG]6568[/ATTACH]

    (Fullsize on SkyDrive folder)

    @Brian: I can't tell at the moment; I'm expecting a few more quotes from different companies.

    That is absolutely better. I think my 580 GPU block had over 30 fins.

    • Thumbs Up 1
  9. I'm concerned that you have too little surface area near the GPU dies to pick up heat. Copper conducts heat a lot more slowly than most people think, despite being the 2nd most conductive metal. I assume that since you did flow rate simulations you also did heat transfer simulations?
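    To put a rough number on how slowly copper moves heat, here's a one-dimensional conduction estimate. All dimensions and the power figure are invented for illustration and are not taken from this block's actual design:

```python
# One-dimensional conduction through a copper base plate: R = t / (k * A).
# All numbers below are hypothetical example values.
k_cu = 400.0          # W/(m*K), approximate thermal conductivity of copper
t = 0.003             # 3 mm base thickness
area = 0.02 * 0.02    # 20 mm x 20 mm contact patch, in m^2

r_base = t / (k_cu * area)   # thermal resistance of the base, K/W
delta_t = 200.0 * r_base     # temperature drop across the base at 200 W

print(r_base, delta_t)  # roughly 0.019 K/W and a ~3.8C drop
```

    Even a few millimeters of solid copper adds measurable thermal resistance, which is why fin area close to the die matters.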

    Also, are you going to take off the IHSs? It dropped temps around 4C for my 580 on water, but I expect less power draw per GPU for you, so it should make less of a difference.

    • Thumbs Up 1
  10. @Prema I'm still using a version 14 BIOS but changing TDP in XTU does nothing. It does properly read TDP though. I always use throttlestop to set TDP.

    @Blair I've found the 170 BIOS to turn the fan on at ~60C and not turn it off until 40C. The fan has a hard time getting below 40C on slow so the CPU fan is on most of the time even at idle. When checking temps you may have checked a bit after it did manage to drop below 40C so the fan was off.

    I really don't care about idle temps. 60C is just as harmless as 40C. Load is what matters.

  11. Is anyone else experiencing this issue?

    I overclocked my unlocked CPU to 4.2GHz and the temps stay in check at a maximum of 81 on the hottest core while testing stability.

    Both my 680Ms are at stock clocks (I tried different overclocks and they all close Crysis 3 within 3 minutes of gameplay).

    Does anyone think something is broken in my laptop?

    Is the power supply shutting off? If you don't have to unplug and replug it, then it's not shutting off. I have only one 680, and BF3 can shut off my 240W PSU if I set the CPU TDP to 75W or higher with an overclock.

  12. That's great. It seems just the modified vBIOS took me from ~5800 to about 6200. Running 915/2150 I reach about 7200 in 3dmark11, which is a huge jump on stock volts. Temps went from about 74 to 79, though that's definitely still safe.

    All of the 3dmark11 benches run without any artifacting, but between tests there are sometimes flashing lines of static that I believe are just due to changing resolution from 1080p (native) to the 720p used during testing. Should I be worried?

    Also, what ASIC quality does GPU-Z report for others' 680ms? Mine reads 66%.

    67.1% here. And 79C in 3dmark11? Considering how short it runs that is a bit high. I would expect games to be significantly hotter.

    Update:

    Just ran 3dm11 with fans on auto at 1V 954MHz and hit 80C, and games hardly run any hotter, so 79C is ok in 3dm11.

  13. I remember reading some warnings about the crossflash when I bought my P150EM. Have things changed? Is it 'safe' to crossflash?

    It was Mythlogic that originally said the crossflash caused problems (a motherboard died soon after the flash), but they sell their P150EMs with P170EM BIOSes now, so it looks like their problem was unrelated.

    My P150EM has never run anything besides a P170EM BIOS. Just make sure you flash a similar P170EM BIOS version to avoid EC version changes.

  14. THG got a hold of a 4770k and benchmarked it vs IVB and SNB at the same clocks.

    Core i7-4770K: Haswell's Performance, Previewed

    The IPC increases look to be slightly larger than those from SNB to IVB, but TDP is actually HIGHER than IVB at the same clocks (84W vs 77W).

    Looks like a very minor upgrade, just like SNB to IVB was, unless you want to use the IGP. Hopefully there was a flaw with their test setup so the performance gain is actually higher. They did get oddly low memory bandwidth, but AMD's APUs also get very poor memory bandwidth to prioritize the GPU over CPU, so maybe Intel did the same for more IGP performance.

    As an EE major taking VLSI courses I am not expecting anything good from the 16nm die shrink after this. The chip makers need to make MAJOR changes (ditch FETs, likely for LBJTs) for die shrinks to continue.

    • Thumbs Up 2
  15. The 680m with the default BIOS will not run on battery. The 675m is likely the same. svl7's BIOS allows the 680m to run on battery; however, the system will shut down in seconds at normal 3D clocks and voltage. A workaround is to use Nvidia Inspector to force the GPU to 2D voltage and clocks, then overclock the 2D clocks. 2D clocks are capped at 405MHz core and 1600MHz effective (400 actual) memory, which is far below what 2D voltage should be capable of.

    You also may have noticed that the CPU is limited to a mere 1500MHz when on battery. You can use throttlestop to get rid of this cap.

    With the above fixes I get slightly over an hour on battery while maintaining performance levels expected of a gaming laptop. The IGP will draw up to 20W when fully loaded, so battery life is hardly any better when using it instead.

  16. You guys with good 680ms make me feel bad. At 1.025V I could only do 993MHz stable, with 1006MHz just a hair unstable. 1020 crashed pretty fast. This is with extensive cooling mods and forcing the fans high so that the GPU only reached the mid 70s.

    I've noticed that the card only clocks in 13.4MHz increments, e.g. setting anything from 940 to 952 results in 940MHz. Some people are posting clocks that are not accurate.
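    The quantization can be sketched as below. The 13.4MHz step comes from the observation above, while the bin origin is a hypothetical value chosen so the 940MHz example lands exactly on a bin boundary:

```python
# Work in tenths of a MHz so the arithmetic stays exact.
STEP = 134                  # 13.4 MHz, the observed clock increment
BASE = 9400 - 22 * STEP     # hypothetical bin origin (645.2 MHz)

def effective_clock(mhz):
    """Snap a requested core clock (MHz) down to the nearest real bin."""
    tenths = int(round(mhz * 10))
    bins = (tenths - BASE) // STEP
    return (BASE + bins * STEP) / 10.0

print(effective_clock(940))   # 940.0
print(effective_clock(952))   # 940.0 -- same bin, matching the observation
```

    Anything requested inside a bin falls back to that bin's clock, which is why reported overclocks between steps are not what the card actually runs.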

    I've settled on 1V 953MHz since higher voltage requires occasional high fan speed usage, which I am trying to avoid due to the noise.

  17. I just run Prime95 for a short period while watching EVEREST to check if the fan kicks in. I let it get to 80C, and if the fan RPM doesn't increase, I stop Prime. Yesterday I set all power settings in Power4Gear and Windows to active cooling and it worked; today I see it is not working again... When it works, the fan reaches up to 3900/4000 RPM; when it doesn't, it just stays around 2000/2100 RPM.

    80C isn't very hot. Your CPU has a max temp of 105C. Let it get up to 90C and see if it speeds up.

  18. The problem with the P150EM's GPU cooling is not the design but the build quality. Clevo flattens the heatpipes onto a copper plate that is too thin to resist bending under the pressure. If you take off the heatsink and look at the plate under the heatpipes, you'll see very distinct grooves where the GPU die makes contact. I dropped temperatures by over 10C by lapping the heatsink on a countertop edge. You can screw up the lapping and make things worse, so be careful. The CPU cooler has this problem too.

    Prema has a modded EC that will allow forcing the fans to high, which is the best you can do for manual control. For me it drops temps by 10C, but the fans are so loud I only use it for testing.

    There is no heatsink you can buy to replace it. The P170EM and P150EM have identical GPU coolers. Only the CPU cooler is inferior on the P150 (aluminum instead of copper radiator).

    • Thumbs Up 1