15" MSI GE60 + GTX1060@4Gbps-mPCIe2 (PE4C V3) + Win10 [DaCM]


DaCM


Laptop Specs
i7 3610QM
8GB RAM
GTX 660m
Windows 10 Anniversary 64bit
1080p screen

 

eGPU Components
Zotac GTX 1060 Mini
550W Chieftec PSU
PE4C v3 mPCIe

 

Installation Steps
- Uninstall mobile nvidia drivers, reboot
- Take out wifi card to free up mPCIe slot 
- Install PE4C connector into the slot (with the GTX 1060 and PSU already set up with the PE4C)
- Reboot, install desktop nvidia driver, reboot

At this point the 1060 was properly recognised by the system (no errors at all), without the need to use Setup 1.3. There is no need to use any of the delay switches on the PE4C either; it is simply plug and play.
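
If you want to double-check what link the card actually negotiated, a small script along these lines should do it (a sketch assuming the third-party pynvml package; on this setup the 1060 should report Gen2 x1):

```python
# Sketch: list every NVIDIA GPU with the PCIe link generation/width it is
# currently negotiating (assumes the third-party pynvml package is installed).
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(h)
    if isinstance(name, bytes):          # older pynvml versions return bytes
        name = name.decode()
    gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(h)
    width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(h)
    print(f"{name}: PCIe Gen{gen} x{width}")
pynvml.nvmlShutdown()
```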

 

The built-in 660m is not disabled this way though, and it remained the GPU used for Optimus/internal screen output. I wanted to test internal screen performance using the 1060, so I proceeded to use Setup 1.3.

- Install Setup 1.3, boot into it
- Set eGPU port (#4 in my case) to GEN2
- Set endpoint to 4.0
- Disable dGPU
- Ignore dGPU
- iGPU + eGPU compaction
- Chainload - test run (MBR)

This way, the 660m is completely disabled and the 1060 can output to the internal screen.
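
A quick way to confirm the compaction worked (a sketch assuming the third-party wmi Python package; Device Manager shows the same information) is to list the video controllers Windows still enumerates. The 660m should no longer appear (or should show an error status), leaving the HD 4000 and the 1060:

```python
# Sketch: list the video controllers Windows enumerates after the Setup 1.3
# iGPU + eGPU compaction (assumes the third-party "wmi" package on Windows).
import wmi

for gpu in wmi.WMI().Win32_VideoController():
    # Expect the HD 4000 and the GTX 1060; the 660m should be gone.
    print(gpu.Name, "-", gpu.Status)
```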


Throughout my testing, GEN2 speeds were maintained by the PE4C without issue; no freezes, driver crashes, or BSODs.

 

I have performed some tests on the internal screen, but the results are unfortunately quite underwhelming.
In 3DMark 11 the system achieved P93xx points, and 34 FPS in Unigine Heaven 4.0 (Extreme) on the internal screen. (Normal scores are around P17000 and 80+ FPS.)
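
For reference, the loss those synthetic numbers imply can be worked out directly (taking P9300 as a stand-in for the P93xx score):

```python
# Rough performance loss implied by the synthetic results above.
for label, egpu, normal in [("3DMark 11", 9300, 17000), ("Heaven 4.0", 34, 80)]:
    print(f"{label}: {1 - egpu / normal:.0%} below a normally connected GTX 1060")
```

That comes out to roughly 45-58%, in line with the game results below.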

 

Game performance is also very clearly limited by PCIe bandwidth (GPU usage is often below 50%), but interestingly the framerates tend to settle at VSync-friendly values such as 30 or 45 FPS.

Fallout 4 Ultra: 29.9 FPS, often dipping as low as 23 FPS
Fallout 4 Medium: 29.9 FPS, rarely dipping lower
Doom Ultra (Vulkan): Constant stuttering (couldn't measure FPS as RivaTuner overlay didn't work)
Doom High (Vulkan): Constant stuttering
Witcher 3 High: 29 FPS, often dipping to low 20s
No Man's Sky Ultra: Consistently well over 60 FPS, smooth
Just Cause 2 (regardless of settings): around 25 FPS
Team Fortress 2 Max: 45 FPS, microstuttering
Borderlands 2 Ultra: 50-60 FPS, smooth
Dawn of War II Retribution Benchmark on Ultra: 40 FPS average

 

Overall, there is roughly a 60% performance loss on the internal screen compared to normal GTX 1060 performance. Furthermore, some games (even undemanding ones) are rendered unplayable by the stuttering that the lack of bandwidth causes. I would not recommend this setup for internal screen use.

 

On the external screen, results are roughly as expected, with only a slight performance loss.
I achieved 61.4 FPS in Unigine Heaven 4.0 (Extreme) on an external screen.

 

Game performance is also much better:

Fallout 4 Ultra: 60 FPS smooth, rare dips into 50s
Witcher 3 Ultra: 25 - 45 FPS, dips are visible
Witcher 3 High: 45 - 60 FPS, smooth
Doom Ultra (Vulkan): Couldn't measure FPS, but very smooth
No Man's Sky Ultra: Over 60, smooth
Just Cause 2 (regardless of settings): 40 - 45 FPS
Dawn of War II Retribution Benchmark on Ultra: 60 FPS average
Guild Wars 2: The game uses an old engine and is horribly optimised. It seems to use a lot of PCIe bandwidth, and performance is hardly playable at any setting, with constant stuttering. (The internal 660m performs better.)

 

Lastly, I haven't tested whether skipping Setup 1.3 and leaving both nvidia cards active would cause a performance loss on the external screen. I would guess it makes no difference, as the 660m would drive the internal screen while the 1060 drives the external one. So overall, if you are using an external screen (which you should be, considering the performance), the MSI GE60 (2012) is plug and play.

Edited by DaCM

Thanks for the summary. 2012 means it's connected via an x1 PCIe Gen 2 lane?

What resolution did you test at?

This one fares much better and closer to expectations on an x1 PCIe connection.



Stupid question: is Optimus compression enabled? This should save quite some bandwidth and might explain the performance loss. I don't know if it would work at all without compression enabled, but maybe this points you towards a fix.

I'm not on eGPU yet, so I'm not speaking from experience, but I'm looking for a similar setup and reading up on the matter. Mine will be a PCIe 3 system with an i7 4700MQ.




I haven't tried Fire Strike, but I have done some calculations since, and it turns out that 1080p output at 30 FPS takes up roughly half of the ~4 Gbps of usable bandwidth a PCIe 2.0 x1 link provides (5 GT/s raw, minus 8b/10b encoding overhead), which explains the massive performance degradation and stuttering on the internal screen. Lowering the resolution in games drastically improves performance on the internal screen.

 

Optimus compression is enabled on the internal screen, but at 1080p I don't think the performance can really be helped. (To send 1080p at 60 FPS back to the internal screen, the GPU would use up essentially 100% of the available bandwidth, leaving nothing for the data it needs to receive and process, which obviously cannot work.)
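
To put numbers on that, here is a minimal back-of-the-envelope sketch (assuming uncompressed 32-bit frames, so it ignores whatever Optimus compression actually saves):

```python
# Framebuffer traffic needed just to copy rendered 1080p frames back over the
# PCIe 2.0 x1 link, compared with the link's usable bandwidth.
link_effective_gbps = 5.0 * 8 / 10          # 5 GT/s raw, 8b/10b coding -> ~4 Gbit/s

def frame_traffic_gbps(width, height, fps, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * fps * 8 / 1e9

for fps in (30, 60):
    used = frame_traffic_gbps(1920, 1080, fps)
    print(f"1080p @ {fps} FPS: {used:.2f} Gbit/s "
          f"(~{used / link_effective_gbps:.0%} of the usable link)")
```

That works out to roughly 2 Gbit/s (about half the link) at 30 FPS and roughly 4 Gbit/s (essentially all of it) at 60 FPS.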

 

Everything is fine on the external screen though, which is why I would only suggest a similar setup if you intend to use one.



I did think about that, but the Afterburner monitoring is definitely not accurate. I am not sure whether it excludes the bandwidth used for sending the picture back to the internal screen or whether there is some other reason, but the utilisation reading never goes above 60%, even when it is clearly the bottleneck.


18 hours ago, DaCM said:

the utilisation reading never goes above 60%, even when it is clearly the bottleneck.

 

This is happening to me as well: the bus utilisation never goes beyond 65% in any program that uses Optimus. I figured it was a coincidence that my 580 never used more than 65%, and that a faster card yielding more performance would utilise more bandwidth. I concluded this because the 580 would consistently show 100% GPU core utilisation in Afterburner during such loads, though I've learned not to trust even that.

 

Someone with a massive collection of video cards desperately needs to benchmark every one of them on x1.2Opt, to see which cards aren't held back by bandwidth and how much bandwidth each card needs to reach its full performance.

 

I'd do the grunt work of all that myself if I had the resources.

Edited by Arbystrider

Hate to double-post, but in light of new information I feel this is warranted.

 

I was playing TOXIKK and noticed that bus utilisation was consistently above 67%, sometimes as high as 96%. I managed to reproduce this with MSI Kombustor's Furry Donut stress test.

 

[screenshot: MSI Kombustor with the monitoring overlay]

 

UTIL, % shows core utilisation, BUS, % shows PCIe bus utilisation.

 

From this I think I can conclude that the PCIe x1.2 interface is not the performance-limiting factor in my eGPU config, and that Optimus doesn't in fact use an entire 33-40% of the PCIe x1.2 interface's bandwidth.

 

What of you and your GTX 1060, OP? What is your bus utilisation figure under Furmark?


I'll chime in in case it's helpful. I'm running an X230 (i5, 8GB) with a PE4C V3.0 ExpressCard and a Gigabyte GTX 1060 6GB Mini connected to a PB258q. I set Windows 10 to output to Display 2 only (the external screen). I'm using GPU-Z to measure Bus Interface Load and reach a maximum of 73% during gaming. My synthetic score is below. I'm looking for ways to improve the performance even further.

 

Unigine Heaven Benchmark 4.0

FPS: 32.2
Score: 810
Min FPS: 7.5
Max FPS: 63.4

System
Platform: Windows NT 6.2 (build 9200) 64bit
CPU model: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz (2594MHz) x2
GPU model: Intel(R) HD Graphics 4000 10.18.10.4358 / NVIDIA GeForce GTX 1060 6GB 21.21.13.6909 (4095MB) x1

Settings
Render: Direct3D11
Mode: 2560x1440 8xAA fullscreen
Preset: Custom
Quality: Ultra
Tessellation: Extreme


Edited by douirc

On 2016.09.14 at 0:20 PM, Arbystrider said:

What of you and your GTX 1060, OP? What is your bus utilisation figure under Furmark?

 

I got a consistent 95% readout (3rd value in the 2nd row of the overlay).

[screenshot: Furmark running with the Afterburner overlay showing ~95% bus utilisation]

 

If you look at the FPS counter though, Afterburner is showing 100 FPS while Furmark is showing 50 FPS. Since 60 FPS at 1080p would use practically 100% of the available bandwidth, the Afterburner figure must be the incorrect one.

 

50 FPS, however, only takes up around 80-85%, and considering that Furmark is a very simple benchmark in terms of visuals, it is quite possible that the data sent to the GPU only takes up the remaining roughly 10-15%.

 

Looking at Furmark results from other 1060s in normal setups though, they average around 65 FPS, which would be impossible on an x1.2 internal screen setup due to the bandwidth limit.

 

 

So Furmark can clearly demonstrate a bandwidth-caused bottleneck, but it is the only program I have seen so far where this shows up in the readout. I can see the same bottleneck-induced FPS difference in games compared to normal GTX 1060 setups, yet the Afterburner PCIe utilisation readout never goes above 60-70% in those.

 

I would still say the Afterburner readout is incorrect in most cases, because there is a clear jump in performance when I use the exact same setup on an external display, and there is no other explanation for that. It might be worthwhile to look up how Afterburner measures PCIe bus utilisation; perhaps a portion of the traffic is simply not being counted.
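
One way to cross-check Afterburner would be to sample the PCIe throughput counters the NVIDIA driver itself exposes through NVML while a game or Furmark is running, for example with a sketch like this (it assumes the third-party pynvml package; I haven't dug into how Afterburner derives its percentage):

```python
# Sample the driver's own PCIe TX/RX throughput counters as an independent
# check on Afterburner's bus-utilisation readout.
# Assumes the third-party pynvml package; values are reported in KB/s.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # adjust the index if needed

for _ in range(10):
    tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
    rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)
    total_gbps = (tx + rx) * 1024 * 8 / 1e9     # rough KB/s -> Gbit/s conversion
    print(f"TX {tx} KB/s, RX {rx} KB/s, ~{total_gbps:.2f} Gbit/s total")
    time.sleep(1)

pynvml.nvmlShutdown()
```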

Edited by DaCM

  • 7 months later...
On 27/8/2016 at 2:31 PM, DaCM said:

At this point the 1060 was properly recognised by the system (no errors at all), without the need to use Setup 1.3. There is no need to use any of the delay switches on the PE4C either; it is simply plug and play.

 

 

Hi! Sorry for reviving an old thread, but I have a question about this. Without disabling the dGPU, did you have it working with the standard BIOS, or was it necessary to modify the BIOS to whitelist the card?

 

I have the same laptop model and I'm thinking about doing the same. Thanks!

