Robbo

Registered User
Everything posted by Robbo

  1. I hadn't bothered looking in detail at the 780M overclock Mr Fox used, but he ran that test at an 'unholy' 1125MHz on the core!! If you were only at stock (993MHz), then that's going to account for the biggest difference: your GPU score was only 15% lower than Mr Fox's, but he was running his cores 13% faster. That's the difference - your score is fine given you're operating at stock clocks on the 880Ms. (The slight remaining discrepancy could be down to his CPU overclock and possibly faster system memory, but that part seems small.)
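     A quick sanity check on the numbers quoted above:

         1125 MHz / 993 MHz ≈ 1.13  →  Mr Fox's core clock is ~13% higher
         score gap ≈ 15%, clock gap ≈ 13%  →  only ~2% left to explain via CPU/memory

     So the clocks alone account for nearly all of the score difference.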
  2. I did some more testing with the Sky Diver benchmark, and it doesn't respond that well to core clock increases in some parts: I got the same fps whether I was at 1006MHz or 1124MHz (still 20 fps when she's unzipping her wings), and power draw (global wattage) was low at that point too. A 12% higher core clock only yielded an 8% increase in GPU score, so I believe parts of the benchmark are memory bandwidth (VRAM) limited, for me anyway. Are you running a low memory clock compared to Mr Fox? (His overclocked CPU might also be boosting the GPU score in places where fps is very high.)
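     The scaling maths behind that conclusion:

         1124 MHz / 1006 MHz ≈ 1.12  →  ~12% more core clock
         GPU score only rose ~8%     →  sub-linear scaling

     When the score scales worse than the core clock, something other than the core (most likely memory bandwidth) is the bottleneck in those parts.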
  3. I think your graphics score of 42203 is good - I only get 15658, which is well less than half your score, so even allowing for SLI efficiency losses it's still a good result I think. Your Combined test does seem low at 18131 though: I get 14615 on that one with a single card, so it seems something might be throttling there, maybe your GPUs. Is your power brick up to the task? The Combined test loads CPU and GPU together. (Or it could just be that the Combined test is more CPU limited, which would also make sense.)
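     Putting the two ratios side by side makes the oddity clearer:

         Graphics: 42203 / 15658 ≈ 2.7x my single-card score
         Combined: 18131 / 14615 ≈ 1.24x my single-card score

     If SLI were scaling in the Combined test anything like it does in the Graphics test, you'd expect a much bigger gap - hence the suspicion of throttling or a CPU limit.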
  4. Just wondering if it's anything to do with the modified inf file that people sometimes use with their 120Hz screens. Different laptops and different GPUs have different 'section numbers' that are referred to in the inf file during Nvidia driver install. I'm thinking it would be best to overwrite the lines in the modded inf that were previously used for 120Hz versions of your laptop, replacing them with just the correct Device ID for the 880M so that it will install - that way the right 'section numbers' should be referenced during driver install. It's a long shot, just something I thought of and figured was better written here than kept to myself. (Those who know how to mod their own inf files will know what I'm referring to.)
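     For anyone who hasn't modded an inf before, the relevant entries look roughly like this (a sketch only - the section name is a placeholder and I'm quoting the 880M's Device ID of 1198 from memory, so verify both against your actual hardware IDs in Device Manager before installing a driver with it):

         [NVIDIA_Devices.NTamd64.6.1]
         %NVIDIA_DEV.1198% = Section001, PCI\VEN_10DE&DEV_1198

         [Strings]
         NVIDIA_DEV.1198 = "NVIDIA GeForce GTX 880M"

     The point of the edit is that the Device ID line decides which 'Section' of install settings gets applied - which is exactly the 120Hz-vs-standard-panel issue described above.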
  5. This is the way I see it with GPU temps: under 90°C is OK-ish, under 80°C is good, under 70°C is excellent. It's possible that a card operating below 70°C will last longer than one operating just under 90°C, but that's hard to prove; GPU longevity is the main concern. Also, if you're overvolting, temperatures become more critical, because high temperatures combined with overvolting accelerate silicon degradation - this is why, for instance, the desktop GTX 680 reduces its boost voltage (down from 1.175V) when over 70°C. Those are the guidelines I use; others may have different ideas of what's acceptable.
  6. Hi, yes, I'm OK thanks. Well, if you don't want to hang around, you could just use the stock vBIOS, or if you're not happy with that, buy an 880M from a different vendor which you think will be compatible with the unlocked vBIOS. In my experience on these forums, though, a vBIOS from one manufacturer is usually compatible with cards from other manufacturers, as long as it's the same spec of card with the same amount of VRAM (although I think I read that for 800 series cards VRAM quantity no longer matters for vBIOS flashing - I could be wrong about that!).
  7. Is there really that much more to try after everyone's suggestions on here, and the detailed help you got from Johnksss when he accessed your laptop remotely? My guess is there's not much left if you've tried everything suggested so far. My guess is that your 880M is bad, or your individual system just has some kind of incompatibility with an 880M card. If I were you, I'd return the card saying you've experienced it as faulty; they'll send you another one and you can try that. If that one doesn't work either, say your system is incompatible, get a refund, and buy a 780M (flash svl7's vBIOS and overclock it to the same performance level you'd get from an 880M).
  8. Oh well, it's good that he put in that effort to help you, but a pity it's not fixed. Are you going to send back the 880Ms and get some 780Ms instead? Or are you concerned you'd have the same problems with 780Ms? (Or just wait for the high-end Maxwell cards to come out - if they end up being compatible with the R1!)
  9. @Prema, @J95, @svl7, any thoughts on how this procedure 'reset' the GPUs back to non-throttle behaviour? Any ideas if there's a simpler shortcut method?
  10. Good question there too. Did you try a program called Display Driver Uninstaller yet? Here it is: Display Driver Uninstaller Download version 12.7.1.0. Maybe it's worth trying CCleaner too, specifically the registry cleaning part. Like you, I'm also surprised that System Restore didn't work, if it's the Windows registry that gets changed.
  11. This is a very good question, and I'm curious to see if svl7 and the others know the answer. This is a problem they saw when they reviewed the 880M with the modified vBIOS - after a bunch of crashes the GPU then performs poorly. The review hinted that it was possible to undo these problems (a long and complicated process, it was mentioned), so I'm interested to hear what that process is. Hopefully you'll get a response soon. This is where the problem was reported by the designers of this vBIOS: http://forum.notebookreview.com/alienware/746259-my-nvidia-gtx-880m-test-run-review.html
      This is what Johnksss says about halfway through his review that pertains to your problem (as it seems to me): "It seems that after a few driver not responding errors or driver crash errors your 900.00 dollar card wants to now run like an intel 4k GPU. Even stock will not work correctly anymore. You have to go through a whole lot to correct this situation" It's not actually explained what that process is, though.
      From some previous discussions in this thread it was partly thought that a Windows reinstall might be the answer, the idea being that these driver crashes cause entries in the Windows registry that can only be undone by reinstalling the OS. I have no idea if that's right though; I hope you get a response. @svl7, is this a known quirk with your modified vBIOS/the stock vBIOS? I'm referring to the problem Johnksss mentioned, which also seems to be the problem with Mathieulh's card - do you know what the reset process is that Johnksss refers to?
  12. I don't think you can undervolt the 680M through the vBIOS - I don't believe they're voltage-adjustable via software. Flash it anyway though, and try (with Nvidia Inspector).
  13. I hadn't answered your question because I didn't know off the top of my head. But I did just do a quick search of this thread (using the forum search function), and came up with this post, where it seems there might be some kind of limit to how far you can overclock the core on the M18xR2 (it's an old post and might not be relevant though): http://forum.techinferno.com/general-notebook-discussions/1847-nvidia-kepler-vbios-mods-overclocking-editions-modified-clocks-voltage-tweaks-51.html#post30161 Although, I know Mr Fox used to have 680M SLI that he overclocked to really high levels, so it must be possible. Have a search in this thread to see what more you can find. Whichever modified vBIOS you flash, make sure it comes from a card with the same amount of VRAM as yours. EDIT: I just noticed the last one in your list has the same vBIOS version number as your existing card, so I think you'll be OK flashing that one: Dell 680m - 80.04.5B.00.02_'OCedition'_revised_00.zip
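      If you haven't flashed a vBIOS before, the usual nvflash routine looks something like this (a sketch from memory, with placeholder filenames - follow the flashing guide in svl7's thread for the authoritative steps, and check the flags against your nvflash version):

          nvflash --save stock_680m_backup.rom     (back up your current vBIOS first!)
          nvflash -6 modded_680m.rom               (-6 overrides the subsystem ID mismatch check)

      Always keep that backup somewhere safe so you can flash back to stock if anything misbehaves.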
  14. I don't need to own an 880M in order to understand how they work - they are Kepler GPUs. If you don't understand my post, then it's you who doesn't actually understand your GPU. Anyway, I'm done on this topic. I apologise for being a little aggressive with you in some of my posts; I think it's because I was annoyed that you'd ignored about three of my posts when I was trying to help you earlier with your stability problems (before we started talking about the whole memory clock reporting thing). I shouldn't have taken that tack and attitude. Anyway, I'll break the cycle and leave it now.
  15. I'm not wrong, because I agree with your post. I'm simply saying the same GDDR5 clock can be reported as (using the stock 880M as an example):
      - 1250MHz in GPU-Z => this is the ACTUAL frequency the chips run at
      - 2500MHz in Nvidia Inspector
      - 5000MHz in some other programs (the Guru3D review site often quotes this figure)
      The above goes along with what you said in your post.
      - - - Updated - - -
      (I don't know why you're bringing other points about your friend into this; they're nothing to do with our little discussion on memory reporting - you're kind of all over the place.)
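      To put the three conventions side by side for the stock 880M:

          actual chip frequency:          1250 MHz   (GPU-Z)
          DDR rate (actual x 2):          2500 MHz   (Nvidia Inspector)
          effective rate (actual x 4):    5000 MHz   (some reviews, e.g. Guru3D)

      The x2 and x4 figures exist because GDDR5's data bus is effectively quad-pumped - the chips themselves still only run at 1250MHz.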
  16. You seem very confused on the memory reporting front, so there's not much point discussing it anymore. I've already told you: you're running a +500MHz ACTUAL increase in memory frequency, which GPU-Z reports as +500MHz above your stock value; Nvidia Inspector reports it as +1000MHz, and some other programs would describe it as +2000MHz - but in reality it's only +500MHz. I'll just leave it there.
  17. What are you talking about? The memory clock on the 880M is 1250MHz at stock; 3DMark reports your memory clock as 1750MHz (and so would GPU-Z), which is +500MHz. I'm well aware there are different ways of reporting the memory clock - some programs multiply the ACTUAL frequency the memory chips run at by a factor of 2, others by a factor of 4, due to the quad-pumped nature of GDDR5. Nevertheless, the actual frequency it runs at is what GPU-Z reports. You've gone from 1250MHz to 1750MHz (a +500MHz ACTUAL frequency increase) - as simple as that - and it's nothing to do with whether you have 8GB or 4GB. Yes, you were a little unlucky with the core overclock - it's not stable for you at higher levels - but your memory overclock is good.
  18. Well, it's actually +500MHz if you want to talk about the ACTUAL MHz increase - from 1250MHz to 1750MHz - but that's still a 40% overclock on the memory! As for the problem you described with your core overclock, judging by your 3DMark results it looks like it simply came down to your overclock being unstable.
  19. Thanks for the info - good for users with temperature issues, but my temperatures are OK: max 73°C on the CPU when gaming (83°C with Prime95), and the GPU tops out at 67°C, so I'll leave the Liquid Ultra alone. Isn't it electrically conductive, and therefore dangerous if not applied properly? (Did you do the Prime95 + Unigine Heaven test at stock GPU frequencies (obviously using svl7's vBIOS) like I suggested?)
  20. Have you managed to rule out that your GPU may just not be a good overclocker? Just because it won't overclock and stay stable doesn't mean it's a power limitation - your GPU may simply be unstable when overclocked. Either it needs more voltage when overclocked, and/or the GPU core (or something else on the card) runs too hot when overclocked, and that contributes to the instability. That's one possibility; it may not be a limitation of the power supply at all. You could rule the power supply in or out by running Prime95 and the Unigine Heaven benchmark at the same time (at stock clocks on the GPU) - that produces a sh*t ton of load on the whole system (CPU at 100% and GPU at 100% simultaneously), and if it doesn't crash the way you've been experiencing, then it's likely NOT a power supply problem but a GPU overclocking stability problem.
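      If you'd rather script that combined load test than juggle windows, something like this works (a sketch only - the install paths and the Prime95 '-t' torture-test switch are assumptions from memory, so adjust them to your own setup):

          import subprocess, time

          # Paths are assumptions - point these at your own installs.
          PRIME95 = r"C:\prime95\prime95.exe"
          HEAVEN = r"C:\Program Files (x86)\Unigine\Heaven Benchmark 4.0\heaven.exe"

          # Launch both so the CPU and GPU sit at 100% load simultaneously.
          cpu_load = subprocess.Popen([PRIME95, "-t"])  # -t = start a torture test immediately
          gpu_load = subprocess.Popen([HEAVEN])

          time.sleep(30 * 60)  # let the combined load run for half an hour
          cpu_load.terminate()
          gpu_load.terminate()

      If the system survives that, a stock-clock power problem is unlikely and the finger points back at the overclock itself.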
  21. I use AS5 too, with the spread method of application (http://www.arcticsilver.com/pdf/appmeth/int/ss/intel_app_method_surface_spread_v1.1.pdf). I'm getting good results with my overclock - not above 67°C (with room temp sometimes up to 24°C). My card is 'only' using about 90-100W at my overclock though; yours will be using another 10 or 20W beyond mine, I'd estimate. Haha, yep, 98% GPU load in the start menu of a game is kind of pointless really - it's not like you're gaming at that point! Although, the way I see it, when I'm actually gaming I want my GPU at 98-100% load, because to me that proves the GPU is being used to its full extent (well optimised) and that nothing is holding it back, whether on my platform or in the way the game is coded.