
[SOLVED: CPU idle bug fix without hibernating!] MBPr Late 2013 (dGPU) + GTX 970: Optimus working, "CPU idle bug"

Recommended Posts

EDIT: SOLUTION in my 4th post.


Hi all,


I just wanted to know if anyone has found a fix for the CPU idle bug that doesn't involve hibernating.
First of all, I put my setup together in 2015 and just used an external monitor, since at the time nobody knew how to make Optimus work on a machine with a dGPU. Recently I read that disabling the dGPU was an option, so I quickly got up to date on the new material, but I might have missed something. Keeping that in mind:
I have a MBPr Late 2013 (GT 750M dGPU) with an Akitio Thunder box, a Dell power supply (which I always suspected might not be enough, because I had to limit the power in Afterburner to 80% to keep it from shutting down after a few hours of play) and an Nvidia GTX 970 (KFA, 4GB).

Since I was using Win 7 in Bootcamp/BIOS mode, I decided to start from scratch: installed Win 10 in EFI mode, installed rEFInd so I could make the Iris iGPU visible, and although the setup seems much buggier, got the system running with the eGPU. Then I disabled the dGPU in Device Manager and ran the switch-gpu bat file as admin to make the iGPU primary. So far so good. Shut down, boot.

I got to the point where the eGPU was basically working: I ran Tomb Raider's benchmark (what I had at hand) and it ran smoothly (which wouldn't be the case on the Iris or the dGPU). MSI Afterburner also showed my eGPU at 100%.

The ONLY thing I can't get to work, by any means, is getting rid of the crazy idle CPU load. I sit at around 25-30% CPU load, and if I try to hibernate the system it... well, it goes to hell, haha.
The MBP seems to hibernate (screen black, fans off) but the eGPU stays on with its fans spinning (so it doesn't really hibernate), and from there it's impossible to get it running again. Pressing any key, Alt, power, whatever, it tries to wake up for a moment but can't finish: just a black screen with the fans spinning up. The only way to shut it down is to hold the power button. After that it even becomes difficult to boot again: I have to press Alt, go back to OS X, reinstall rEFInd (it no longer appears while the machine is trying to wake from hibernation), run the Windows EFI entry and quickly press Shift+F8 to enter safe mode, and only then boot correctly (and again with the CPU load).


I have to say that even when it works, my MacBook Pro doesn't consistently boot with the eGPU over Thunderbolt 2. With any combination (waiting to plug the cable, having it powered on, off, pre-boot, while Windows starts, etc.) it just seems to work at random. It used to work a bit better before rEFInd, though never consistently (I don't know if that has changed over the past years).


So... any ideas on how to get rid of that CPU load? Or how to make hibernation work?
Oh, as a note: I read that once you disable the dGPU and use the switch-gpu script you can control the screen brightness again, but mine is just locked at maximum. The brightness OSD appears and moves, but nothing actually changes. I don't care about it, but I thought it might hint at something not being done properly.

Many thanks to whoever can give me any ideas!!!


EDIT: I shaved off part of the CPU idle load with the usual small Windows 10 tweaks, but the main bug is still there (about 15%).

Edited by dnkei


Ok, so even though no one answered - it's a very specific problem - I'm going to answer myself after many, many tests, in case someone else needs this info in the future.
As I wrote before, sleep or hibernation on my machine is a no-go. I did make it work once, but it wasn't really hibernation, more of a "hibernate-crash-shutdown-boot-think_its_waking_up". Anyway, here are my results after many tests (all of them with the dGPU disabled):


The Intel HD Graphics driver for my iGPU (Iris 5200) causes the CPU's first core (CPU 0) to sit at a constant load of around 80%, which averages out to an overall CPU load of ~10-15% (since there are 8 logical processors). This is what many here know as the "cpu idle bug", which can normally be solved by hibernating and waking up. Unfortunately my machine (MBPr 2013 with Thunderbolt 2) doesn't seem able to do this. Uninstalling the driver makes Windows 10 revert to the "Microsoft Basic Display Adapter", which actually removes the idle load, staying at 1% or so (as it should be). However, that driver has its own problems: it offers very little configuration and, most importantly, it DOESN'T allow the eGPU to work on the internal display.
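As an aside, that load on the first core is easy to watch from the command line with Windows' built-in typeperf (the counter path below assumes an English-locale Windows):

```shell
REM sample CPU 0's load once per second, five samples
typeperf "\Processor(0)\% Processor Time" -si 1 -sc 5
```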
Some of my tests with the Tomb Raider benchmark tool:

· Intel driver (+15% CPU bug) + eGPU on external monitor: 60fps
· Intel driver (+15% CPU bug) + eGPU on internal monitor: 30-40fps
· Microsoft Basic driver (0% CPU) + eGPU on external monitor: 60fps
· Microsoft Basic driver (0% CPU) + eGPU on internal monitor: 25-30fps

That last option isn't directly possible - it won't let you launch on the internal display - but I connected an external monitor, started the game, and moved the window over to the internal display; judging by the graphs in Afterburner it was still using the eGPU. I was hoping this could be improved with a headless HDMI dummy plug, but no luck. No luck mirroring the screen either (same 30fps).

That's where I am right now. Any new ideas are welcome; if none come up I guess I'll keep posting whatever tests I can do. Maybe a quick way to switch drivers? Or something to force CPU 0 to stop being used (like a hard hibernate fix)?

I feel so close but so far...


Quick update:
When installing "new" iGPU drivers, the screen flashes for a moment before they are installed and you are prompted to restart. From that flash onward, the CPU idle load is gone.

Meaning: if you install new drivers and just click "No, I will restart later", then for that session (until the next reboot) the CPU bug is gone, the eGPU works flawlessly on the internal display, and everything just works as it should.

Unfortunately, in my experience the installer has to actually overwrite the drivers with different ones, so to apply this fix you have to (every time) install an old Intel HD Graphics driver (to overwrite the newer one you have at the moment), click "don't restart", then install the newer driver again and don't restart either. The CPU is then back to normal until you shut down.


I think I'm near the solution... Maybe installing the drivers briefly disables and re-enables the internal display or something? There's usually a second black screen about 5 minutes after choosing not to reboot after installing the drivers, but I'm thinking that might just be Windows 10 adjusting to the new drivers?


I'm probably rambling alone here, but any new ideas are welcome.


I think I solved it! I'm still testing, but so far so good: I'm able to remove the CPU idle load without hibernating by disabling and re-enabling the Intel Iris device. I figured something like that might be happening when reinstalling drivers before rebooting.
BE CAREFUL not to disable your iGPU unless you do it with a script that enables it again afterwards. It's usually fine, but you might end up with a black screen; especially if you have a second GPU, the OS may only show the screen on that GPU, and you won't see anything unless you connect a display to it or enter safe mode (Shift+F8 at Windows boot).

Now, to make things a bit easier, I made a script for DevManView (a free alternative device manager for Windows) that runs at startup and disables and re-enables the iGPU, leaving it free of the idle bug! I think it works faster than the hibernation fix, so I might make a quick video explaining it if somebody wants it. Here's my small bat file in case somebody wants to try it right away (the first 10-second timeout is there because I observed that if it runs too early during startup it doesn't remove the CPU idle bug):

REM wait for startup to settle; running too early didn't clear the bug
timeout /t 10
REM disable and re-enable the Iris iGPU (the device instance ID is machine-specific)
DevManView.exe /disable "PCI\VEN_8086&DEV_0D26&SUBSYS_012F106B&REV_08\3&11583659&0&10"
timeout /t 2
DevManView.exe /enable "PCI\VEN_8086&DEV_0D26&SUBSYS_012F106B&REV_08\3&11583659&0&10"

So now, whenever I start Windows, the script runs, the screen goes off for a few seconds and comes back on, and I'm good to go! Not the cleanest way, but at least I have it working!
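In case it helps anyone automating this: since DevManView needs admin rights, one way to run the bat file elevated at every logon is a scheduled task. The task name and script path below are made up for illustration; adjust them to your setup:

```shell
REM create a logon task that runs the fix script elevated
REM ("eGPU-idle-fix" and C:\Tools\fix-idle.bat are hypothetical names)
schtasks /create /tn "eGPU-idle-fix" /tr "C:\Tools\fix-idle.bat" /sc onlogon /rl highest
```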



Edit: It seems that with the eGPU connected the timing is sometimes even more critical; I changed the first timeout to 15 and the second one to 5, and also added a shortcut on the desktop in case I have to run it manually.

Edit2: In case anyone wants to make the bat file too: the "PCI\..." string is the device instance ID of the Iris iGPU. You can check the device's info in DevManView and copy/paste yours.
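If you'd rather grab the ID from a command prompt than from DevManView's UI, wmic should also be able to list it (the exact output will vary per machine):

```shell
REM list display adapters along with their PNP device instance IDs
wmic path Win32_VideoController get Name,PNPDeviceID
```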

Edited by dnkei



  • Similar Content

    • By Tech Inferno Fan
      We've had a stack of recurring questions from users with problems getting an mPCIe eGPU working. These include GPU-Z not reporting clock details, error 10/43, or the eGPU not being detected at all. Overall it's more troublesome getting mPCIe working than, say, ExpressCard or Thunderbolt.
      Here's some common problems and some troubleshooting steps to correct them.
      Getting a black bootup screen, resolving error 10/43 or ACPI_BIOS_ERROR Windows bootup messages
      Here the BIOS doesn't know what to do when it sees an eGPU, so the solution is to not let the BIOS see it. Do that by setting the delays on the eGPU adapter (CTD/PTD on EXP GDC, or CLKRUN/PERST# on PE4L/PE4C). Boot with the eGPU adapter in the wifi slot into Setup 1.30 or Windows. Is the eGPU detected?
      I'll add that should error 43 continue AND you have an NVidia dGPU as well as an NVidia eGPU, it's likely because the mobile and desktop NVidia drivers are loaded simultaneously. Uninstall ALL your NVidia drivers, use "DDU" to clean the NVidia registry entries, and do a 'clean' install of the latest NVidia desktop driver.
      mPCIe port that hosted the wifi card disappears when connecting an eGPU in its place
      Use Setup 1.30's PCIe Ports->enable option to enable the missing port.
      eGPU does not get detected
      Overcome mPCIe whitelisting by booting with the wifi card and then hotswapping in the eGPU. That way the BIOS will have already enabled the mPCIe port.
      1. Boot with the wifi card into Windows, sleep the system, swap the wifi card for the mPCIe eGPU adapter and ensure the eGPU is powered on, then resume. Do a device manager scan in Windows. Is the eGPU detected?
      2. Boot with the wifi card into Setup 1.30, then *carefully* hotplug the eGPU adapter in place of the wifi card. Hit F5 to rescan the PCIe bus. Is the eGPU detected?
      If this enables detection, avoid the tedious hotswapping by seeking an unwhitelisted modified BIOS for your system OR testing Setup 1.30's PCI ports->undo_whitelisting feature.
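As a side note on the "device manager scan" in step 1: on recent Windows 10 builds the same rescan can also be triggered from an elevated command prompt, assuming your build's pnputil supports the /scan-devices switch:

```shell
REM rescan for newly hotplugged PCI(e) devices (run from an elevated prompt)
pnputil /scan-devices
```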
      eGPU still not detected - set the PSU to be permanently on
      The latest EXP GDC and BPlus eGPU adapters try to manage the PSU so it only powers on after they detect a signal. This can cause a race condition where the eGPU isn't ready when the CLKRUN signal is asserted.
      Avoid this by jumpering the PSU so it's permanently on rather than managed. Depending on the PSU you are using, refer to the following doco on how to do that:
      eGPU still not detected - a non-standard mPCIe implementation by your vendor?
      PERST# mPCIe pin 22 may need to be isolated due to a non-standard implementation by your notebook vendor: http://forum.techinferno.com/enclosures-adapters/10812-pe4x-series-understanding-clkreq-perst-delay.html#post142689
      eGPU still not detected - faulty hardware?
      If you still don't get detection then test the video card and eGPU adapter in another machine to confirm neither is faulty.
      NVidia driver stops responding
      EXP GDC, PE4H 2.4 and PE4L 1.5 all use a socketed cable and are therefore not truly Gen2-compliant devices. This error indicates there were transmission errors.
      The solution is either to get a better, Gen2-compliant eGPU adapter such as the PE4C V3.0 or PE4L 2.1b (both with soldered cables), or to downgrade your link from Gen2 to Gen1 using BIOS options or Setup 1.30.
      Other troubleshooting help resources?
      See DIY eGPU Troubleshooting FAQ.
    • By ReverseEffect
      3dMark11 Performance Preset Benchmark: http://www.3dmark.com/3dm11/11262792
      Required items:
      1.) Lenovo u310 (I have a Core i3 - Ivy Bridge, 8GB RAM)
      2.) 65CN99WW unwhitelisted.
      3.) eGPU (I used a EVGA GTX 750 Ti from another computer I had).
      4.) EXP GDC mPCIe Edition adapter (got from eBay - banggood seller).
      5.) ATX power supply (I used a 600W PSU from another computer I had).
      6.) USB wireless.
      7.) External monitor, keyboard, and mouse.
      1.) Obtain and install an unwhitelisted BIOS. If you are unable to obtain one, I think it might be possible to bypass the whitelist with Tech Inferno Fan's Setup 1.x (may need confirmation, as I haven't used it myself yet).
      2.) Shutdown computer and remove all USB devices, ethernet cables, power cables, card reader cards.
      3.) Remove mPCIe wireless card and detach antennas.
      4.) Attach EXP GDC external mPCIe cable to the former wireless slot and screw down.
      5.) Attach HDMI end of the mPCIe cable adapter to the EXP GDC device.
      6.) Attach graphics card to the EXP GDC device (I moved my laptop off the desk and onto the side shelf to make room on the desk for the monitor/keyboard/mouse).
      7.) Using the power cable adapters that came with the EXP GDC device, I hooked in my ATX power supply's 20 pin and CPU 4 pin cables. Then hooked the other end (8 pin) into the EXP GDC device. My EVGA 750 Ti also required that I use an additional PCIe power cable (6 pin) in the top of the card.
      8.) Then I attached my misc devices (HDMI monitor, USB keyboard/mouse/wireless adapter), and hooked in my PSU and powered it on (below is image of final product, also moved HDMI cable out of the way).

      9.) Power on your computer, let it install the standard VGA drivers, and then install your own drivers (I didn't have to go into the BIOS for any graphics settings, which it doesn't have anyway, nor did I have to disable the iGPU in Device Manager before the card was added).
      Extra Info:
      I found that most games will play on med settings with about 45 FPS with this particular card.
      BDO: Upscale on - Anti Aliasing on - SSAO off - med settings.
      Skyrim: Med-High settings.
      Fallout 4: Med settings.
      (EDIT 5/19/2016) > Images added.
    • By TheLoser1124
      Hello, a couple of days ago I got a new GPU, but when I installed it into my computer I was unable to use it, and now I know why. While checking the Device Manager I went into the events tab of my GPU; when I viewed all events I noticed an error: "event 411 kernel PnP", with Problem Status: 0xC01E0438. I believe this is why my GPU hasn't been working on my PC. If you know how to fix this problem, or have info on how to fix it, that would be greatly appreciated. I'm using an EVGA NVIDIA GeForce GTX 1660.
    • By TheLoser1124
      I'm having a problem where my PC says my eGPU is not usable. It's detected in Device Manager and doesn't have the yellow triangle next to it, but I can't use it in games and the Nvidia Control Panel doesn't recognize it either. I'm using an EVGA NVIDIA GeForce GTX 1660 on Windows 10. I tried DDU and reinstalling the drivers, and now I can't access the Nvidia Control Panel at all. The GPU isn't recognized by any other apps, and I went on *********** and was unable to find my answer. Any help on how to fix this problem would be greatly appreciated.
    • By Radstark
      Title sums it up.
      TL;DR: we have a Clevo that runs a desktop CPU, one with those huge 82 Wh batteries. We remove the GPU and let it use the CPU's integrated graphics. How much time for the battery to go from 100 to 0? Is it comparable to an ultrabook's?
      I'm theorizing a mobile setup with a static eGPU and an upgradable CPU. Given a hypothetical user who needs fast processing on the go and long battery life while retaining a very high degree of mobility, but at home wants a powerful machine to run most games, I guess that would be their best bet. It would surely be more convenient to keep everything on the same disk. And even though the thing would be quite heavy to carry around, changing the CPU would be more cost-efficient than changing an entire laptop. (Not sure if I'm right here, and I'm also not sure whether the motherboard in a Clevo could be replaced when a new CPU needs a different socket, which is another reason why I'm asking here.)
      If my above guesses aren't correct, then an ultrabook with Thunderbolt and without a dedicated GPU would be a better choice. If they are, then we would be carrying more weight in exchange for a more cost-efficient setup, which I think would be a fair tradeoff.
      Also I am aware of the heating problems that these laptops suffer from, at least compared to a desktop setup. Would they be solved by moving the GPU out of the chassis, and instead plugging it with an eGPU dock via Thunderbolt port?
      What do you think? Is it doable? If not, why?
