
Y500 flashing custom clocks in main GT650M - possible?


Aikei


Hello, sorry for my English.

I have a Lenovo Y500 with GT650M SLI. I flashed the custom BIOS 2.02 for the laptop and a custom vBIOS for the Ultrabay GT650M. I can modify the Ultrabay card's vBIOS using Kepler Bios Tweaker and flash it to that card (I want to overclock the card in the BIOS itself, without utilities like Nvidia Inspector), but I can't modify and flash the BIOS for the main card (it's a 2-in-1 image: the laptop BIOS plus the card's vBIOS).

How do I modify the clocks in this BIOS? If using a hex editor, which offsets hold the standard 835 MHz GPU and 2000 MHz RAM clocks? Are there other ways?
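In case it helps anyone poking at the ROM by hand, here is a minimal sketch (Python) that scans a dumped ROM for places where a given MHz value appears as a little-endian 16-bit integer. The file name is a placeholder and the encoding is an assumption: Kepler vBIOSes keep clocks in structured performance tables, so any hit is only a candidate offset to inspect in a hex editor, not a confirmed clock entry.

```python
# Hedged sketch: scan a vBIOS dump for candidate clock values.
# Assumptions (not confirmed in this thread): the ROM is a raw dump and
# clocks may appear as 16-bit little-endian integers in MHz. Treat every
# hit only as a starting point for manual inspection.
import struct
import sys

def find_candidates(rom_path: str, mhz: int) -> list[int]:
    """Return byte offsets where `mhz` appears as a little-endian uint16."""
    with open(rom_path, "rb") as f:
        data = f.read()
    needle = struct.pack("<H", mhz)
    offsets, start = [], 0
    while (pos := data.find(needle, start)) != -1:
        offsets.append(pos)
        start = pos + 1
    return offsets

if __name__ == "__main__":
    rom = sys.argv[1] if len(sys.argv) > 1 else "main_card.rom"  # hypothetical file name
    for clock in (835, 2000):  # stock core and memory clocks mentioned above
        hits = find_candidates(rom, clock)
        print(f"{clock} MHz candidates: {[hex(o) for o in hits]}")
```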


I remember someone on biosmods.com saying that they used a hex editor to modify the clocks, saved the BIOS, and then corrected the last byte or checksum with nibitor. I am not touching anything after I bricked my HP laptop doing that. One mistake and you are screwed, paying 200 euros for a new motherboard.
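For context on that checksum step: a legacy PCI option ROM has to have all of its bytes sum to zero modulo 256, which is why the tools fix up a single byte after you edit anything. A rough sketch of that fix-up, assuming the simple case of a single plain ROM image whose checksum byte is the last byte (a combined laptop BIOS + vBIOS file is more complicated and is better left to the vendor tools):

```python
# Hedged sketch of the PCI option ROM checksum rule: all bytes of the image
# must sum to 0 mod 256. Assumes a single plain ROM image whose checksum
# byte is the final byte; combined BIOS+vBIOS files need the proper tools.
def fix_checksum(rom: bytearray) -> bytearray:
    partial = sum(rom[:-1]) & 0xFF          # sum of everything except the checksum byte
    rom[-1] = (0x100 - partial) & 0xFF      # pick the last byte so the total is 0 mod 256
    assert sum(rom) & 0xFF == 0
    return rom

with open("modified.rom", "rb") as f:       # hypothetical file name
    data = bytearray(f.read())
with open("modified_fixed.rom", "wb") as f:
    f.write(fix_checksum(data))
```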

I am waiting for the GT 750M vBIOS and BIOS to leak, and then I am going to ask for the GT650M voltages and clocks to be set the same as the GT750M's.


The GT750M has a 967 MHz core and 2500 MHz memory; the vBIOS is the same as the GT650M's.

My safe clocks on the GT650M are 1045 MHz core and 2650 MHz memory, and that's stable :)


I've gotten 1080/2600 with both GT 650M cards gaming in SLI mode in Borderlands 2 and Skyrim, and neither ever went above 75°C, with the sensors reading 1025 mV in GPU-Z and Nvidia Inspector 1.9.7.1.

This is without any laptop cooler, sitting on a flat desk in a room at about 25°C.

I didn't track GPU utilization, but I found that pretty impressive. The big deal for me, though, is that it's quiet. I also have an MSI GT685 with a GTX 580M, and I have to game with a headset on if I'm doing multiplayer because it's quite a bit louder and more intrusive, and that's with the GPU at lower-than-stock clock speeds on battery.


Bad idea to overclock by flashing the BIOS. It's a very good way to void your warranty if you ever take the machine in for service, since the techs can easily see that you overclocked when they check the BIOS. Overclocking in software is much more flexible and less risky, and there's no way anyone would know you were overclocking.

For those of you who have pretty high VRAM overclocks in the 2300-2600+ MHz range, you should check that you are actually getting higher scores, not lower ones. The reason I say that is that GDDR5 is crash-resistant, so you will almost never lock up or artifact even if you exceed your limit by quite a bit. What happens instead is that data which failed the first time gets resent, so if you've gone over the limit your benchmark score will actually decrease. This is a vital thing to notice, because you could keep pushing your VRAM way past its limit while being oblivious to how it is reacting to the overclock.
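A quick way to apply that advice is to benchmark each memory clock a few times and flag any step where the average score drops instead of rising. A minimal sketch of the bookkeeping (the clock/score numbers below are placeholders, not measurements from this thread):

```python
# Hedged sketch: detect the point where a VRAM overclock stops helping.
# GDDR5 error detection/retransmission tends to lower scores instead of
# crashing, so a score that drops as the clock rises is the warning sign.
from statistics import mean

# Placeholder data: {memory clock in MHz: list of benchmark scores}.
runs = {
    2000: [440, 441, 439],
    2250: [463, 462, 464],
    2400: [455, 452, 454],   # lower than 2250 -> likely retransmissions
    2600: [430, 433, 431],
}

previous = None
for clock in sorted(runs):
    avg = mean(runs[clock])
    if previous is not None and avg < previous:
        print(f"{clock} MHz: average {avg:.1f} is LOWER than the previous step "
              f"({previous:.1f}) - back off the memory clock")
    else:
        print(f"{clock} MHz: average {avg:.1f}")
    previous = avg
```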

When I was overclocking my 650M SLI, I had pushed my VRAM to 2600 MHz without a single crash in OCCT and FurMark and thought I could keep going. But as soon as I fired up Unigine Heaven at that point, I saw that my score went down drastically, getting even lower than with my VRAM at stock. After a few days of testing I settled at 2250 MHz memory and 1125 MHz core, as that is where I got my highest score.

Interestingly enough, I found that when overclocking VRAM there are holes in the clock speed progression where scaling plateaus or drops off, as I said before. For example, as soon as I passed 2250 MHz my Heaven score took a nosedive and didn't reach the same level again until 2500 MHz, soon after which it nosedived again. I can't run 2500 MHz VRAM with my core at 1125 MHz, though, because it will crash, so I'm staying at 2250 MHz. SLI is much less hungry for memory bandwidth than a single GPU, due to the doubling of total bandwidth and bus width, so I've found that increasing the VRAM clock provides little performance improvement, and I'd much rather get my core as high as possible. For example, my core OC improved my Heaven score by 25%, while my memory OC only added 5% on top of that, and 0% if I kept my core at stock.

This anecdote also highlights the absolute uselessness, IMO, of GPU burn-in tests like FurMark, Kombustor, and OCCT for measuring overclocking stability. They're synthetic tests that just run a bunch of unrealistic, repetitive calculations on the GPU to generate as much heat as possible, and they are not representative of real 3D games at all. Maybe you can use them to gauge stability in OpenGL applications, but that's what, less than 1% of all PC games? I highly recommend Unigine Heaven instead, as it runs a real DirectX 11 game engine in a real-world test scenario similar to what the most demanding games do. Skip Unigine Valley because it is not as demanding as Heaven. You will find that Heaven will kick your OC's ass and crash your GPU at much lower clocks than FurMark or even many games will. My gold standard is to loop Heaven for 24 hours; if an overclock passes that, it is stable enough to use 24/7.
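If anyone wants to automate a long loop like that, a rough sketch is below. The executable path is a placeholder, and treating a non-zero exit code as "possible crash" is an assumption rather than anything documented for Heaven; adapt it to however your benchmark actually reports failures.

```python
# Hedged sketch: relaunch a benchmark in a loop for a fixed number of hours
# and log when a run exits abnormally. The path is a placeholder and using
# the exit code as a crash indicator is an assumption.
import subprocess
import time

BENCH = r"C:\Path\To\Heaven\heaven.exe"   # hypothetical install path
DURATION_HOURS = 24

end = time.time() + DURATION_HOURS * 3600
run = 0
while time.time() < end:
    run += 1
    started = time.strftime("%H:%M:%S")
    result = subprocess.run([BENCH])      # blocks until the benchmark exits
    if result.returncode != 0:
        print(f"run {run} (started {started}) exited with code {result.returncode} - possible crash")
        break
    print(f"run {run} (started {started}) finished cleanly")
```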

I am waiting for the GT-750M vBIOS and BIOS to leak and then I am going to ask to put the GT650M voltages and clocks to be the same as GT750M.

No need to do that. You can already OC your 650M past the 750M at a lower voltage, and you will get less heat. Here is my 24/7-stable OC of 1125/2250 (not 1170 on the core; that is a reporting error). As you can see, it scores higher than 750M SLI does at stock, since memory clock makes very little difference to the score. And I'd love to see a 750M SLI system match these max temps even at its lower stock clocks.

[Attached screenshots]


It would be nice if we had a memory-only benchmark or error-checker type utility, like Memtest, that could tell us if the GDDR5 error checking is slowing things down, or even something that reads the GDDR5 registers that indicate whether error checking is kicking in. I might try the Folding@home memory testers MemtestCL/MemtestG80 and see if any meaningful data comes from that.
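In the meantime, a very crude stand-in can be sketched with PyCUDA: write known patterns into a large VRAM allocation, read them back, and count mismatches. This is only the basic idea and nothing like a real stress tool (it mostly exercises transfers, and the GDDR5 link-level error checking being discussed here would likely hide retransmissions from a simple read-back anyway); it assumes PyCUDA and NumPy are installed.

```python
# Hedged sketch: fill a chunk of VRAM with known patterns, read it back,
# and count mismatches. A real tester (like MemtestG80) does far more;
# this only shows the write/read/compare idea.
import numpy as np
import pycuda.autoinit                # creates a CUDA context on the default GPU
import pycuda.driver as cuda

MB = 1024 * 1024
size_bytes = 256 * MB                 # how much VRAM to test; adjust to taste

gpu_buf = cuda.mem_alloc(size_bytes)
host_out = np.empty(size_bytes // 4, dtype=np.uint32)

for pattern in (0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555):
    host_in = np.full(size_bytes // 4, pattern, dtype=np.uint32)
    cuda.memcpy_htod(gpu_buf, host_in)        # write pattern into VRAM
    cuda.memcpy_dtoh(host_out, gpu_buf)       # read it back
    errors = int(np.count_nonzero(host_out != pattern))
    print(f"pattern 0x{pattern:08X}: {errors} mismatched words")
```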

I know that with FurMark I haven't been able to get both GPUs active so far on the Y500. I can't remember whether it used both GPUs with SLI'd cards on a desktop either. I'll take a look when I get my desktop back together, as it's currently dismembered.

I agree on the 750M versus 650M, though. It is just a voltage and core-clock bump, and not very useful unless the owner isn't interested in, or isn't capable of, overclocking the 650M. I remember looking at the GT 750M, thinking that I was losing out by getting the retail/distribution version of the Y500, and then I read some posts here, got info from AnandTech, and saw that it is architecturally a rebadged 650M that just pushes the thermal envelope harder, which can be an issue with the shared CPU/GPU cooling on these units.

Edit: It's an error-checking and re-transmission architecture as shown here: http://www.anandtech.com/show/2841/12


OCCT 3.1.0 has a GPU: MEMTEST mode which can test the VRAM of Nvidia cards that support CUDA. Not sure how accurate it is, though, since I've kind of lost my faith in synthetic benchmarks after my experience overclocking this machine. Also, it probably won't even work properly with Windows 8, since OCCT only fixed compatibility in the latest version, 4.4.0.

NVIDIA/AMD Video Card - Test with OCCT - Windows 7 Help Forums

I think it's very difficult to get an OpenGL synthetic such as FurMark or Kombustor to utilize SLI. I tried running in fullscreen and setting the SLI rendering mode to AFR 2, but nothing worked for me. OCCT is fine because it is DirectX 11, but I've already expressed my disdain for synthetics, so I stick with Unigine Heaven. It was only in Heaven that I was able to find the point at which my VRAM passed its limit and performance started going down due to retransmissions.


At 1100 MHz on the SLI'd adapters I get artifacting, so I backed down to 1080. With Unigine my best performance has come at 1080-1090 MHz on the core and 2200-2250 MHz on the RAM, and so far it's mostly stutter-free and smooth in Skyrim and Borderlands 2. I will install more intensive games like Crysis 3 and Sleeping Dogs this upcoming weekend to play around some more. Anything above 2260-2270 MHz on the RAM and my minimum and maximum frame rates drop.

It's strange with Unigine that, while the peak speed of the SLI'd adapters far outstrips what my MSI GT685 does with its i7-2820QM and GTX 580M (overclocked to 720/1650 from 620/1500), the OC'd GTX 580M outscores the IdeaPad Y500. This is where we get into the SLI minimum-frame-rate difference, which can also impact the score.

On another note, the overclockable LED panel on the Y500 makes gaming and movies a bit nicer: less judder on 24 fps movies, and less tearing in games that occasionally exceed the stock 60 Hz refresh rate without V-sync.
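The judder point comes down to whether the refresh rate is an even multiple of 24: at 60 Hz each film frame is held for alternating 3 and 2 refreshes (3:2 pulldown), while a rate like 72 Hz shows every frame for exactly 3 refreshes. A quick illustration of the arithmetic:

```python
# Quick illustration of why an overclocked refresh rate can reduce 24 fps judder:
# judder comes from film frames being held for an uneven number of refreshes.
def cadence(refresh_hz: float, film_fps: float = 24.0) -> str:
    ratio = refresh_hz / film_fps
    if ratio.is_integer():
        return f"{refresh_hz:g} Hz: every frame held {int(ratio)} refreshes - even cadence, no judder"
    return f"{refresh_hz:g} Hz: {ratio:.2f} refreshes per frame - uneven cadence (e.g. 3:2 pulldown at 60 Hz)"

for hz in (60, 72, 96, 100):
    print(cadence(hz))
```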


Interesting. Is the overall score of your overclocked GTX 580M higher than the overclocked 650M SLI, or just the minimum frame rate? I can understand the minimum being higher due to SLI, but the overall score of the 650M SLI, especially overclocked, should be quite a bit higher. As I understand it, the GTX 580M is a rather severely downclocked desktop GTX 560 Ti, and even though you've overclocked your 580M it's still pretty far from performing on par with a stock GTX 560 Ti. My desktop has a Radeon 6950 at stock speed, which is more or less equal to or slightly faster than a GTX 560 Ti, and my overclocked 650M SLI is faster than it. So your Y500 should definitely perform faster than the MSI.

When I was overclocking the core I found that I never reached a point where I got artifacts. If the clock was too high I simply got a TDR video hardware error and a crash to desktop and had to reboot my machine. That made it a lot easier since I didn't have to sit in front of the screen for hours checking for artifacts; all I needed was a crash to let me know I had gone too far. I've since bumped my core down by 5 MHz, so I am now at 1120/2250, because during my 24-hour Unigine Heaven run, which I use to validate the stability of a 24/7 overclock, I got a crash at hour 20. I could've kept the core at 1125 because I never crash even during hours of gaming, but in my eyes an overclock is not viable unless it's 100% stable in every situation, even the atypical ones. Plus, a 5 MHz decrease makes absolutely no difference in games.

I see that many people are overclocking the LCD panel to increase the refresh rate. I assume you're using the Pixel Clock tool in EVGA Precision to do so? What refresh rate have you overclocked to? Does it noticeably reduce or even eliminate the input lag associated with V-sync? The severe input lag in many games is the only reason I don't use V-sync. I would love to get rid of screen tearing, which bothers me, but V-sync makes my reaction time a lot slower in fast, twitchy shooters, which is absolutely unacceptable for online play. The cursor feels like molasses yet super slippery at the same time, and I can't aim precisely. I'm currently using a 60 FPS cap in all my games because there's no point in having a Source engine game run at 300 FPS all the time, but there is still a lot of screen tearing.


To answer the original question about what can be changed in the main card's combined BIOS:

Memory clocks: YES

Core clocks: NO (not yet)

Voltage: YES


Octiceps, what are your Unigine Heaven benchmark scores and FPS? At these settings I get about 580-690 for the score; it fluctuates with each run. Between running the VRAM at stock speed and at my OC'd speed, the difference is close to 80 points in Unigine. So far I do not have the problem you described with lower FPS due to VRAM errors.

[Attached screenshots: results for GPU1 and GPU2]


463 points with every setting maxed out at 1080p. 650M SLI @ 1125/2250.

[Attached screenshot]

If your score varies by more than 100 points at the same clocks, that should tell you something is up, most likely the memory being clocked too high and GDDR5 error checking kicking in. If everything is stable you should only see +/- 5 points between consecutive runs. I can get my runs within 1 or 2 points of each other pretty much every time.
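For anyone who wants to make that consistency check explicit, a tiny sketch (the scores are placeholders, not measurements from this thread):

```python
# Hedged sketch of the run-to-run consistency check described above:
# at a fixed clock, re-run the benchmark a few times and flag a spread
# larger than a handful of points.
scores = [463, 462, 464, 461]          # consecutive Heaven runs at the same clocks
spread = max(scores) - min(scores)
if spread > 10:                        # roughly the +/- 5 point band mentioned above
    print(f"spread of {spread} points - something is off (check the memory clock)")
else:
    print(f"spread of {spread} points - looks stable")
```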


What I meant is that the scores vary like this:

Memory OC'd to 2915 MHz: 589.

Memory at stock 2500 MHz: about 500.

In both cases the core is running at 1.25 GHz.

So I guess it's worth it to OC the RAM. :Banane17:


The 750M probably uses different GDDR5 chips that can go higher, then. Just curious: do you run that OC 24/7, and is it stable in all games or just in benchmarks? Heat is probably off the charts too.


I run it 10 MHz lower on the core 24/7. It has to be applied using a .bat file which runs the Nvidia Inspector exe to set the clocks. If I remember correctly, when I opened it up I think I saw Samsung VRAM chips, and I have read somewhere that Samsung VRAM can be OC'd a bit more.

It's stable if I give it 25 mV more on both GPUs. From my testing, every 50 mV increment contributes about a 5°C increase in temperature, so I am trying to run it with as low a voltage as it can take without instability.


Which program are you using to overclock here, and where can I get a copy of it? I've been trying to use Nvidia Inspector and it worked for a while, but now my core is stuck and I'm looking for a workaround.

Use the unlocked BIOS and vBIOS in the sticky and MSI Afterburner.

You need 5 posts before you can download attachments though.

