
Robbo


Everything posted by Robbo

  1. I don't think disabling the Intel GPU is going to prevent those crashes. To me it sounds like the card is unstable. If you're overclocking then remove the overclock & see if the problem persists. If you're not overclocking then your GPU might be dying. You could try underclocking your GPU (reducing its frequencies - the opposite of overclocking) to see if you can gain some stability, which might prove the point that your GPU is dying. Check that GPU & CPU temperatures are OK too.
  2. Mine is supposedly limited to 87 degC according to NVidia Inspector, but it never gets that hot, so I don't know if it would throttle at that point. I think most of the time this is controlled by settings in the vBIOS, but I could be wrong.
  3. That's interesting with the 67 degC voltage dependency that you mention. I wonder if that means that higher voltages over 67 degC are dangerous for the GPU? I know for the 600 series desktop GPUs like the GTX 680, the max Turbo Boost (referring to the NVidia feature, not the MSI Turbo Boost) voltage of 1.175V was reduced to a lesser value when above 70 degC. 700 series GPUs seem to go against this rule though, so I wonder if there is any difference in the voltage tolerance of the silicon between the 600 & 700 series GPUs, even though they're both the same Kepler cores. It's a mystery!?
  4. That sounds good, but strange! The modified vBIOS shouldn't enable it to run cooler, unless MSI programmed an unnaturally high voltage for its 3D clocks, thereby resulting in higher temperatures. Did you notice a difference in voltage between your stock MSI vBIOS and the modified vBIOS?
  5. Good stuff! SSDs are great as a boot/system drive; I'd find it hard to go back to an HDD.
  6. Yes, I'm surprised that vBIOS has not been posted up in this thread on Page 1. If you're interested in getting your hands on that vBIOS then maybe svl7 or johnksss could sort you out with that, but maybe there's a reason for them withholding it from the public at the moment - maybe it's buggy or something!
  7. At the very least you'd have to make sure that the amount of GB of VRAM was the same between your 880M and the 780M vBIOS that you were going to flash. You also might lose Optimus graphics switching if the 780M wasn't a selectable option for your build of laptop. I don't know enough about it to give you a definitive answer about whether it would work or not, but I know you would want to consider the previous 2 points I made anyway.
  8. Yeah, that's a big voltage difference just for 50MHz; your +137.5mV sounds a lot more sensible. It's hard to say what voltage will damage the GPU. I know the 700 series GPUs, both mobile & desktop, use a higher voltage than their 600 series counterparts, but I don't know if that's because they've been engineered differently to cope with it. The 750M uses up to 1.156V at stock voltages, and the 680 desktop card used up to 1.175V as long as temperatures were below 70 degC. High temperatures & high voltages equal silicon degradation, which will kill the GPU over time. Your temperatures are over 70 degC, so I definitely wouldn't go over the 1.156V of the 750M; if I had your cards & temperatures I probably wouldn't go over 1.1V. It's just a judgement call though; I don't work at NVidia, so I don't know what the rated lifetime of their GPUs is at different voltages & temperatures. I'm just making inferences based on how they've applied voltages on their Kepler cards in the past, and how Turbo Boost manipulates voltages with increased temperature (e.g. on the GTX 680 desktop card, reducing voltage when over 70 degC).
  9. That's a good overclock! Temperatures seem OK. With +175mV, does that mean you're at 1.143V for the core at that overvolt when gaming? That would be quite a high voltage, I wouldn't push the voltage any higher than that, personally I wouldn't go above 1.1V. With my vBIOS I can go to 1.05V, and have considered 1.1V, but I'm leaving mine at 1.05V for now.
  10. Cheers for the reply. :-) So, was it like that from the beginning for you, or do you think it's something that's been enabled in one of the drivers in the not so distant past? I'm pretty sure I've not seen it before (but I could be wrong!).
  11. @svl7, I noticed something has changed in NVidia Inspector recently; I think it's due to the latest NVidia drivers. 'Prioritize Temperature' has suddenly become a selectable & configurable option, whereas previously it was greyed out, or not there at all (as far as I can remember). I thought this might be of use or interest to you. Do you know why it has suddenly become a configurable option with the latest drivers? Could it have anything to do with why users like Mr Fox have had throttling problems since installing the latest drivers (problems that apparently can't be fixed by rolling back to previous drivers)? Here's a pic showing what I'm talking about re 'Prioritize Temperature':
  12. doh!!! :-( Have you tried searching this thread using the 'search thread' option for '780M', maybe someone has uploaded a stock version of their 780M for the A18. Failing that, there's a user called Mr Fox, who's both on these forums and the Notebookreview forums, and he has an Alienware 18, maybe he could send you the file if you PM him or something.
  13. Potential OC abilities are largely predetermined by the quality of the chip, although cool-running chips are more stable than hotter ones, so if you've got a good cooling system with a chip running at 65 degC then it might reach a higher stable OC than the same chip on a cooling system at 85 degC.
  14. I think the lowest voltage one, and then if temperatures are good, and you want/need more performance then flash the next voltage up & repeat.
  15. 1450MHz MAX core clock on GPUz sensor graphs!!?? Ah, just realised, that's the Intel iGPU reading!
  16. Hi, ok, here's a description of a good approach & methodology to overclocking, which I've cut & pasted from a previous post of mine (so I didn't have to type it all out again!):
     1) Determine your max stable overclock of the core at stock voltage. Increase the Core Clock by 100MHz and run 3DMark11 through to the end; if there are no visual artifacts or crashes, and temperatures are OK, then increase the core clock by a further 50MHz and repeat the process. Keep repeating this until you see artifacts or crashes, at which point back down to your previous stable overclock & do further, more vigorous stress testing. The best way to do this is to run a game that pushes the GPU to a constant 100% GPU utilisation (GPU load in GPUz) - e.g. Tomb Raider and Far Cry 3 are good for this - if the game is stable for 1 hour, you've probably found a stable core overclock.
     2) Now determine your max stable memory overclock. Leave your core at its maximum stable overclock for this process. Increase the Memory Clock by 200MHz, and do the same testing procedure as above using 3DMark11 to work out an initial stable max overclock. You can increase the memory clock in bigger chunks in NVidia Inspector, because there's a peculiarity with GDDR5 memory in NVidia Inspector whereby a 200MHz increase is actually equal to a real 100MHz increase - it's to do with a very technical fact that I don't fully understand, GDDR5 being 'quad-pumped' (4 times faster than DDR3 at any given frequency) - you'll see evidence of what I'm talking about when you view your memory clock in GPUz, which displays the REAL memory clock. Anyway, in NVidia Inspector, increase the memory by 200MHz the first time you test, then in 100MHz chunks thereafter. Each time you complete the 3DMark11 test, check the GPU score; if it stops increasing as you raise the memory clock, then stop your memory overclocking where it is. This is because GDDR5 has memory error correction: as the memory overclock increases, the rate at which errors occur outpaces the rate of error correction, resulting in a flat or lower GPU score, so overclocking the memory beyond that point is futile & only serves to stress your card further.
     Once you've reached your max stable overclock in 3DMark11, do that 1 hour of gaming like you did for the core clock (make sure you have your core at its max overclock when you do this too). If it's stable, then you've now reached your max overclock for both the core & memory at stock voltage. The process above is the same if you decide to increase the voltage, but the voltage only affects the GPU core, so don't bother re-tweaking the memory overclock if you raise the voltage - raising the voltage doesn't supply any more voltage to the VRAM. So, if you overvolt, leave the memory at your max stable MHz, and then start tweaking the GPU core MHz in the same way I described in the steps above, but in smaller increments (not 100MHz increments). Use NVidia Inspector to overclock; I believe svl7 has specifically designed his vBIOS to work in tandem with NVidia Inspector (and I also think it's the best overclocking utility in my experience).
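The incremental search in step 1 above can be sketched as a simple loop. This is only an illustration, not a tool: `run_stress_test()` is a hypothetical stand-in for a real 3DMark11 run or an hour of gaming, and here it just simulates a card whose true stability limit is a made-up constant.

```python
# Sketch of the incremental core-overclock search described above.
# run_stress_test() is an assumption/stand-in for a real stability run
# (3DMark11, then extended gaming); here it simulates a card that becomes
# unstable above a fixed offset.

TRUE_MAX_STABLE_OFFSET = 180  # MHz; pretend value for the simulation only

def run_stress_test(offset_mhz):
    """Pretend stress test: 'passes' only up to the simulated limit."""
    return offset_mhz <= TRUE_MAX_STABLE_OFFSET

def find_max_stable_offset(first_step=100, step=50):
    """Raise the core offset until the stress test fails, then back off
    to the last offset that passed - the same +100MHz-then-+50MHz
    procedure described in step 1."""
    offset = first_step
    while run_stress_test(offset):
        offset += step
    return offset - step  # last offset that passed

print(find_max_stable_offset())  # 150 with the simulated limit above
```

In practice each "test" is an hour of gaming rather than a function call, which is why the guide starts with a coarse +100MHz step and only then refines in +50MHz chunks.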
  17. Maybe someone else can give you some ballpark overclocks for that. In the meantime why not just try upping the voltage a notch or two & then seeing how high you can push the clocks without crashing or artifacting, while keeping an eye on temperatures. Then you could post back here saying what your stable overclocks were for gaming, which will help others like yourself when they read this thread.
  18. I don't have a 780M, but I know a little about it. The stock voltage is 1.00V at 850MHz, isn't it? I was going to say don't go any higher than 1.05V, which is what I run my card at, but my card uses close to 100W at 1.05V, so your 780M at 1.05V would probably use close to 1.5 times the Watts of mine based on the number of shaders your card has. So, if yours was at 1.05V, then I estimate each card would use about 135W. MXM slots are supposedly rated for 100W, but I don't know how dangerous (if at all) it is to push them above the 100W ceiling for extended periods. I've seen people bench 780M cards at up to 1.1V, so they would be using over 150W per card, but that's only for a short time. I should think 1.025V would be OK for extended use provided temperatures are good; you definitely want less than 90 degC, and ideally less than 80 degC. This is just my opinion though, based on the things I've learned & read, so it's your judgement call.
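The back-of-the-envelope estimate in the post above can be written out explicitly. Note that straight 1.5x scaling of 100W gives 150W, while the post settles on about 135W; my assumption is that the lower figure allows for board power (memory, VRMs) that doesn't scale with shader count.

```python
# Rough power-scaling estimate from the post: at the same voltage, core
# power scales roughly with shader count. The 1.5x factor and 100W base
# are the post's figures; everything else here is illustrative.

my_card_watts = 100.0   # post: "my card uses close to 100W at 1.05V"
shader_ratio = 1.5      # post: 780M has roughly 1.5x the shaders

naive_estimate = my_card_watts * shader_ratio
print(naive_estimate)   # 150.0 by straight scaling

# The post's ~135W estimate is lower, presumably because a chunk of
# board power (VRAM, VRMs, losses) is fixed and doesn't scale with
# shader count - this split is my assumption, not a measured figure.
fixed_board_watts = 40.0
scaled = fixed_board_watts + (my_card_watts - fixed_board_watts) * shader_ratio
print(scaled)           # 130.0 - in the same ballpark as the post's 135W
```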
  19. Might just be an unstable overclock, maybe try increasing the voltage a little, experiment to see what works & is stable.
  20. Sorry, I don't think everything on earth can be explained in this thread, you have to take some responsibility for finding out information yourself. I'm out of ideas for why you're getting these strange problems - Good luck.
  21. As I said, I don't think I can help you anymore sorting out what's going wrong with your system, we've tried everything I know, so I'm not going to retype stuff I've suggested in the past. Uploading the VBIOS is the instruction j95 gave you in post #262 - he highlighted where you click on GPUz with a red rectangle - this will save the vBIOS file, and then you can attach it to a post (upload it). If you do this it will help other users with a 770M and an R3. Like I said you've only got a 75W card, I have a 75W card too, and if I set mine to 770M clock speeds I get the same low temperatures that you do. Maybe someone else can chime in if they've got some ideas on how to prevent your display driver crashing.
  22. @grandanoke: So you can compare the performance you're getting: NVIDIA GeForce GTX 770M - NotebookCheck.net Tech. And have a look at this - you'll see that 862MHz is normal (Turbo Boost 2.0): Review One K73-3N (Clevo P170SM) Notebook - NotebookCheck.net Reviews. Temperatures are lower because the 770M is only a 75W card. If you're still having display driver crashes after following j95's driver install instructions, then I can't really help you on that; maybe there's a hardware problem with your card. It might be helpful if you upload the vBIOS of your card to this website - j95 showed you how to do this earlier. Then users will be able to use your vBIOS on their 770M cards - it seems not all 770M vBIOSes are compatible with the R3.
  23. Why are you even bothering to post if you've not done what we've suggested! How do you expect to fix your problem if you don't try anything!? Your memory clock is fine: it's supposed to be 1000MHz; some programs report it as 4000MHz because GDDR5 is 'quad pumped', but it really runs at 1000MHz, so you're all good. Your core clock is also fine. How come your graphs are all messed up in GPUz, though? The red portions should show the amplitude changes of the various frequencies & loads, but yours are all the way to the top (mine looks the same as yours (messed up) if I 'Switch User' in Windows 7 while GPUz is running and then go back to it). EDIT: sorry for being snappy, but you can't fix your problems unless you actually try something different - like the things we suggested. Make the changes & then post back. :-)
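The 1000MHz vs 4000MHz discrepancy above is just a units question, and is the same quad-pumping mentioned back in the overclocking guide: GDDR5 performs four data transfers per cycle of the command clock, so the effective data rate is 4x the real clock that GPU-Z reports.

```python
# GDDR5 is 'quad pumped': four data transfers per cycle of the command
# clock, so effective (marketing) clock = 4x the real clock GPU-Z shows.

real_clock_mhz = 1000                     # what GPU-Z reports for this 770M
effective_clock_mhz = real_clock_mhz * 4  # what some other tools report
print(effective_clock_mhz)                # 4000

# This is also why a +200MHz offset in NVidia Inspector only moves the
# real clock by +100MHz: Inspector works in DDR (2x) units, GPU-Z in
# real (1x) units - that mapping is my reading of the post's observation.
inspector_offset_mhz = 200
real_offset_mhz = inspector_offset_mhz // 2
print(real_offset_mhz)                    # 100
```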
  24. If you've never oc'd before then I recommend that you don't flash a vBIOS straight away. If I was you I'd just experiment with oc'ing within the +135Mhz window that you get on the stock vBIOS that you already have installed on your card. You can use NVidia Inspector to overclock your card. Keep an eye on temperatures. Once you've learned about overclocking and the theory of it in relation to voltage, then think about flashing a vBIOS from here that will remove the +135Mhz limit. Instructions for flashing on the first page of this thread. (Read about the dangers of overvolting & invalidating your warranty).
  25. It was j95 that gave you the instructions for how to do a clean install of your drivers, so if you're unsure then hopefully he'll be able to come back to you. But according to his instructions I understood it as:
     1) Uninstall the drivers using the Windows control panel
     2) Run Display Driver Uninstaller (DDU) v12.4 and choose the first uninstall option (Safe Mode)
     3) Download GeForce 334.89 WHQL
     4) Extract the driver using 7-Zip
     5) Go to the NVidia 'International' folder in the extracted driver and delete the following folders: Display.Update, GFExperience, GFExperience.NvStreamC, GFExperience.NvStreamSrv, LEDVisualizer, MS.NET, Network.Service, NvVAD, ShadowPlay
     6) In the Display.Driver folder -> copy/overwrite nvdmi.inf with (unzip) nvdmi.inf_v334.89_AW_M17xR3_GTX_770M
     7) Run the Setup.exe file in the International folder, which will successfully install the driver
     All I did there was copy & paste what j95 suggested to you & put in a few of my own words. - - - Updated - - - As long as GPUz is showing that it's recognizing the card in all its areas then I should think that's fine.
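The folder-deletion step (step 5) is the fiddly part of that procedure, so here's a small sketch of it. The folder list comes from the post; the path you point it at is up to you (wherever 7-Zip extracted the driver package), and this is only an illustration of the step, not an official tool.

```python
# Sketch of step 5 above: remove the optional component folders from the
# extracted NVidia driver's 'International' folder so only the bare
# display driver gets installed. Folder names are from the post.

import shutil
from pathlib import Path

FOLDERS_TO_DELETE = [
    "Display.Update", "GFExperience", "GFExperience.NvStreamC",
    "GFExperience.NvStreamSrv", "LEDVisualizer", "MS.NET",
    "Network.Service", "NvVAD", "ShadowPlay",
]

def strip_optional_components(international_dir):
    """Delete the listed folders if present; return the names removed.
    Folders not in the list (e.g. Display.Driver) are left untouched."""
    removed = []
    for name in FOLDERS_TO_DELETE:
        target = Path(international_dir) / name
        if target.is_dir():
            shutil.rmtree(target)
            removed.append(name)
    return removed
```

You'd call it with the extraction path, e.g. `strip_optional_components(r"C:\NVIDIA\334.89\International")` (that path is a placeholder), then do the nvdmi.inf overwrite and run Setup.exe by hand as in steps 6 and 7.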