Posts posted by sangemaru

  1. With a same-gen 12.5" Dell E6230 I had similar issues to yours:

    1. If the BIOS detected the eGPU at bootup, it would set it as the primary adapter. A workaround is to hotplug the eGPU after BIOS POST, or to set the eGPU adapter delay switches so the eGPU is detected after the BIOS has booted.

    2. When the eGPU was the primary display, the iGPU was disabled AND the port was set to Gen2. This may explain your instability, given that you've said Gen1 works, though I was using a PE4L 2.1b with a soldered cable and found Gen2 to be stable.

    Your EXP GDC issues can then be resolved by hotplugging after booting to the iGPU as the primary display AND switching your EC port to Gen1 speed. Acquire a PE4L 2.1b or PE4C V3.0 with a soldered cable if you want to avoid the latter.

    These details and more can be found at http://forum.techinferno.com/implementation-guides-pc/2747-12-dell-e6230-hd7870-gtx660%40x4gbps-c-ec2-pe4l-2-1b-win7-%5BTech Inferno Fan%5D.html#post37197

    Thanks for the advice. I don't understand either why I would have this behavior with iGPU + eGPU when set to Gen1. Maybe because of Windows 10?

    EDIT: Tried disabling ULPS to maybe resolve the issue, but that didn't fix anything; ULPS was already disabled.

    The seller responded regarding the issues and asked for a video. Well, let's hope the video is convincing.

  2. My main problem is that the behavior exhibited when enabling Gen2 (driver crash and reset, system freeze) is also exhibited when the iGPU is active at the same time as the eGPU, whether the adapter is set to Gen1 or Gen2. It is possible to boot with both the iGPU and the eGPU, but putting any activity on the eGPU in that situation (such as plugging in an external display) will crash my drivers (as long as I have any drivers installed) and make the system almost unusable.

    I can only reliably use the eGPU by booting with it as primary. An acceptable compromise might be booting with the eGPU as the main adapter but still somehow enabling the iGPU as secondary, so that I can keep the internal display. I wonder whether setting the Internal Graphics variable to Enabled instead of Auto, and the Primary Display variable to PEG (a pretty poor selection; does the ExpressCard count as PEG?), might allow me to retain internal LCD capabilities.

    Sleep / connect / wake = guaranteed BSOD. I've tried hotplugging into the EC slot after POST but before Windows a few times, but that tends to stall Windows loading more often than not. Either way, as long as I get to Windows (or Ubuntu) with both the iGPU and the eGPU, the system isn't stable and will BSOD or freeze sooner or later.

    I purchased from Banggood, and their return/refund policy seems sketchy. I sent their customer support team an e-mail but they've not responded yet. I'm regretting not picking the PE4C now, though I have a peeve about soldered cables and how easy they are to break...

    By the way, if I consider using an x2.2 setup, do you have any suggestions on making it less intrusive? Or do I have to give up either WLAN or WWAN, and make a very unsightly hole in my laptop or run it with no bottom plate?

    Thanks for those variables, will try out combinations soon. Hope I don't render the system unbootable and have to clear CMOS again :D

  3. I got my hands on an external screen to experiment with, and here is what I've concluded:

    My EXP GDC Beast works ONLY if:

    - I boot from the eGPU directly (booting from the iGPU will either stall my boot, BSOD, or make it to Windows only for the driver to start crashing constantly once I connect the external monitor).

    - I use Gen1 ExpressCard (Gen2 will do all of the above, regardless of which device I boot from).

    So my EXP GDC only works at Gen1 speed. This sucks. I can't even use both the internal and external LCD, only the external.

    3DMark11: P6568

    3DMark Fire Strike: 4466

  4. So, an update on my situation.

    On an E6430 machine with Windows 10 and a 3740QM CPU, my EXP GDC Beast works ONLY if:

    - I boot from the eGPU directly (booting from the iGPU will either stall my boot, BSOD, or make it to Windows only for the driver to start crashing constantly once I connect the external monitor).

    - I use Gen1 ExpressCard (Gen2 will do all of the above, regardless of which device I boot from).

    So, the very thing I was dreading has come to pass: my EXP GDC only works at Gen1 speed. I'll be trying to return it and get a PE4C V3.0. This is definitely not improving my already abysmal financial situation any. The performance is actually not terrible (maybe DX12 at play?). I get fairly decent scores in 3DMark (pretty much at the level of a non-overclocked 7970M) as well as in games, though it's definitely not up to what the card can push out (Sapphire Vapor-X R9 270X).

    One of the painful consequences is that, since I can't boot from the iGPU, booting from the eGPU means I can't use my internal LCD, so I'm limited to an external display while I use the eGPU (I can't even make use of two screens). This is not what I was hoping to achieve. Are there any ways I might improve signal stability?

  5. That looks pretty fun. I'd guess the MXM implementation uses a rather different specification from PCI Express to communicate with the OS through the driver, and you'd need custom-made drivers to support this mod. But oh, the possibilities. Such as using a cable that can be routed out of the chassis in a cleaner-looking way (though I imagine signal integrity is quite the concern).

  6. Set variable 0x1F8 (PEG3, the only PEG device that doesn't offer a Gen3 speed setting; I'm assuming this one is the ExpressCard) to 0x1. No change.

    This makes me wonder. If the other three PEG devices (devices 0, 1 and 2, which I'm assuming are the mPCIe slots) support Gen3, then wouldn't using one of these in conjunction with the PE4C V3.0 with Gen3 support offer another doubling of bandwidth? Since they can be set manually to Gen3 through EFI vars, PCIe x1 3.0 should be roughly equal to x2 2.0 and x4 1.0, right? (With the added benefit that being x1 would engage NVIDIA compression; rough numbers are sketched at the end of this post.)

    Also, Ubuntu won't boot. Haven't tried hotplugging under Ubuntu yet.

    EDIT: Booting an old HD 4850 works, though the driver support is pretty much not there; all it can do is browse.

    EDIT2: Managed to boot it up properly using Leshcat drivers after uninstalling Virtu MVP. The Virtu MVP software makes the GPU unusable.

    The question now is: how do I use the eGPU on the internal display?
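
    For what it's worth, the x1 3.0 ≈ x2 2.0 ≈ x4 1.0 claim above checks out on paper if you only look at the published PCIe line rates and encodings and ignore protocol overhead. A quick sketch in Python:

    # Rough per-lane PCIe bandwidth check for the "x1 Gen3 ~= x2 Gen2 ~= x4 Gen1" idea.
    # Published raw rates and encodings only; protocol overhead and real-world
    # efficiency are ignored here.
    GEN = {
        # gen: (gigatransfers/s per lane, encoding efficiency)
        1: (2.5, 8 / 10),     # 8b/10b encoding
        2: (5.0, 8 / 10),     # 8b/10b encoding
        3: (8.0, 128 / 130),  # 128b/130b encoding
    }

    def bandwidth_MBps(gen, lanes=1):
        gt_per_s, eff = GEN[gen]
        # 1 GT/s is 1 Gbit/s of raw symbols per lane; scale to Mbit, divide by 8 for bytes.
        return gt_per_s * eff * 1000 / 8 * lanes

    for label, (gen, lanes) in {
        "x1 Gen1 (ExpressCard now)": (1, 1),
        "x1 Gen2": (2, 1),
        "x1 Gen3": (3, 1),
        "x2 Gen2": (2, 2),
        "x4 Gen1": (1, 4),
    }.items():
        print(f"{label:26s} ~{bandwidth_MBps(gen, lanes):6.0f} MB/s")

    That prints roughly 250 / 500 / 985 / 1000 / 1000 MB/s, so an x1 Gen3 link really would sit in the same ballpark as x2 Gen2 or x4 Gen1 on paper.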

  7. Trying to boot my EXP GDC Beast on an E6430 results in a BSOD in Windows 10 before the login screen, with SYSTEM_SERVICE_EXCEPTION in atikmpag.sys. Anyone have any suggestions?

    EDIT: Hotplugging it gives me a KERNEL_SECURITY_CHECK_FAILURE BSOD.

    EDIT2: Managed to boot it up properly using Leshcat drivers after uninstalling Virtu MVP. The Virtu MVP software makes the GPU unusable. Which sucks, since apparently Virtu MVP works on Windows 7 only, and I'd have liked to get internal display functionality on Windows 10 with DX12 on an AMD eGPU. Now that's not possible. Unless, maybe, we can mod the Alienware Graphics Amplifier driver?

  8. Got my EXP GDC in hand. I forgot to ask for the ExpressCard 54 version, so now I'll need to be careful not to damage the slot accidentally when moving the cable.

    I have no external monitor to test with though, so I'm trying to get internal display functionality. Booting with it connected gives a black screen, though the system is obviously functional and loading.

    Hotplugging it after boot won't show the eGPU in Device Manager.

    EDIT1: I can boot the system with the EXP GDC timers set to 7 seconds. If I try to boot with the timers set to 0, I boot to a black screen. If I try to hotplug with the timers set to 0, the system restarts.

    EDIT2: With dynamic TOLUD, I could boot to Windows 10, see both the iGPU and the eGPU, and get error 12 on the eGPU. With the max TOLUD set to 2.5GB (or any other value that doesn't give error 12), the system will no longer boot to Windows with the eGPU attached: I get a SYSTEM_SERVICE_EXCEPTION BSOD in atikmpag.sys. Trying to hotplug it gives me another BSOD, this time related to the kernel (KERNEL_SECURITY_CHECK_FAILURE).
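
    For context, my rough understanding of why the TOLUD value matters for error 12 (while the BSODs look like a separate driver/link problem): everything below TOLUD is mapped as RAM, and whatever is left under the 4GB boundary is all the 32-bit MMIO space the eGPU's BARs can be squeezed into. A small sketch in Python; the BAR sizes in the comments are typical figures, not values read from this particular R9 270X:

    # Back-of-the-envelope view of why lowering max TOLUD can clear error 12:
    # the 32-bit MMIO window has to fit between TOLUD and the 4 GB boundary.
    GiB = 1024 ** 3
    MiB = 1024 ** 2

    def mmio_window(tolud_bytes):
        """Space left below 4 GB for device MMIO once RAM is mapped up to TOLUD."""
        return 4 * GiB - tolud_bytes

    for tolud_gb in (3.5, 3.0, 2.5):
        window = mmio_window(int(tolud_gb * GiB))
        print(f"TOLUD {tolud_gb} GB -> {window // MiB:4d} MiB of 32-bit MMIO space left")

    # A typical eGPU wants a 256 MiB framebuffer BAR plus smaller BARs, on top of
    # whatever the iGPU, chipset and other devices already claim; if the leftover
    # window is too small, Windows reports error 12 on the eGPU.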

  9. OK, the G.Skill 2133MHz 8GBx2 RAM kit has arrived and is installed... it works great, as far as I can tell. No odd flickers, unusual behavior, etc. I'm not a big gamer, but I was able to run a few games with no problems (Outlast, Hitman: Absolution, and Vanishing of Ethan Carter Redux at medium settings at 1368 resolution, and everything ran stable/fine).

    I'm no expert at benchmarking memory, but my Windows Experience Index (on Win7 64 bit) reports a Memory subscore of 7.7 and both the Desktop & the Gaming Graphics subscores (with the Intel HD4000 onboard chip) are at 6.5.

    I don't recall what these were before. But ... so far so good! Overall, along with the Samsung SSD upgrade I did recently, the laptop feels very snappy. I'm good to go for at least a couple years now!


    Good to know, cool.

    I RMA'd my RAM and received an identical kit with identical behavior. So a completely different kit of Kingston HyperX Impact won't reliably run at 2133 and won't reliably boot at 2133 on the E6430 either.

    Using EFI variables I got it to run stably at 1866 CL10. Annoying, though.

    G.Skill seems to be more reliable.

  10. Making progress :D I noticed that unless I set the DIMM profile variable 0x1EE to value 0x1 (custom profile), changes I'd make to things such as the tCL timing wouldn't stick.

    So I edited the variable above, checked that changes would stick, and have now successfully booted dual-channel at CL12, with the command rate set to 2T. Now my biggest challenge is getting the system to boot at all: I seem to have trouble booting at 2133, where I didn't have that trouble before. I also reset my CMOS beforehand to make sure everything is as close to default as possible. I've noticed the default values of some of the variables are NOT the same as in the A07 EFI dump file. A16 might have switched some around, so I'll need to extract the A16 EFI variables and do a compare (a quick comparison script is sketched at the end of this post). Things like tRAS and maybe others have different default values (0x0A instead of 0x04 for tCL, for example, 0x0B instead of 0x3 for tRAS, and so on), though they seem to be defaulting to whatever the RAM's SPD table reports at that particular speed. Still, I'm not sure they're the same variables, so I'll try to dump A16.

    I'm hoping I manage to boot 2133 at CL12 and command rate 2T at least.
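
    On the A07 vs A16 comparison: assuming both dumps are plain text in the same style as the IFR listing quoted further down in this thread (lines like "Numeric: tCL , Variable: 0x1FE ... Value: 0x4"), a few lines of Python could flag which defaults moved instead of eyeballing them. The file names here are placeholders for whatever the dumps end up being called:

    # Minimal sketch for diffing two Setup IFR text dumps (file names assumed).
    import re

    ENTRY = re.compile(
        r"(?:Numeric|Setting):\s*(?P<name>.+?)\s*,\s*Variable:\s*(?P<var>0x[0-9A-Fa-f]+)"
        r".*?Value:\s*(?P<val>0x[0-9A-Fa-f]+)",
        re.DOTALL,
    )

    def load_defaults(path):
        # Returns {variable offset -> (name, default value)} parsed from the dump.
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        return {m.group("var").lower(): (m.group("name"), m.group("val").lower())
                for m in ENTRY.finditer(text)}

    old = load_defaults("a07_setup.txt")   # assumed file name for the A07 dump
    new = load_defaults("a16_setup.txt")   # assumed file name for the A16 dump
    for var in sorted(set(old) | set(new)):
        if old.get(var) != new.get(var):
            print(f"{var}: A07={old.get(var)} A16={new.get(var)}")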

  11. Huh, this is nice. Got my HD4000 stable at 1450MHz, +0.5 voltage (will try it with stock next) and 1866 CL10 RAM, and FurMarked it: 600 points exactly (faster than the score you posted in the first post at 1550 core / 2133 memory). I'm also seeing 31-32W of power used at these settings. Any further and it throttles erratically and won't stay under load. Even at these settings I'm seeing a tiny bit of throttling when the iGPU reaches near 100C internal temperature under FurMark, but it's running pretty damn nice. I think that's the highest core OC I can get for day-to-day use without heavy cooling modding; all I can do now is tweak the RAM to the max.

    Apparently the variable to control my command rate (set to 2T) has worked. I'll now try setting 2133 speed again and see if it's stable with the different command rate.

    EDIT: Going to settle on 1400MHz and default voltage for the least fluctuation, temps and artifacting. It scores about 583 points in the FurMark bench.

    The RAM is a tricky question. I still can't do anything to get it to boot at 2133 dual-channel, which is weird because it used to work. I think I lost the ability to boot dual-channel 2133 once I upgraded to BIOS A16. I'd downgrade to A07 to confirm. Do I need the flash descriptor unlocked to downgrade?

    The cool part is being able to edit virtually any spec through EFI variables. This is so powerful. I'm curious whether it would be possible to do the same thing using RW-Everything from Windows? Maybe in real time?

  12. Quote

    If you know your way around tweaking memory, there are a bunch of variables that you could try (never tried them myself):

    Numeric: tCL , Variable: 0x1FE, Default: 8 Bit, Value: 0x4
    Numeric: tRCD , Variable: 0x1FF, Default: 8 Bit, Value: 0x3
    Numeric: tRP , Variable: 0x200, Default: 8 Bit, Value: 0x3
    Numeric: tRAS , Variable: 0x201, Default: 16 Bit, Value: 0x9
    Numeric: tWR , Variable: 0x203, Default: 8 Bit, Value: 0x5
    Numeric: tRFC , Variable: 0x204, Default: 16 Bit, Value: 0xF
    Numeric: tRRD , Variable: 0x206, Default: 8 Bit, Value: 0x4
    Numeric: tWTR , Variable: 0x207, Default: 8 Bit, Value: 0x3
    Numeric: tRTP , Variable: 0x208, Default: 8 Bit, Value: 0x4
    Numeric: tRC , Variable: 0x209, Default: 16 Bit, Value: 0xF
    Numeric: tFAW , Variable: 0x20B, Default: 16 Bit, Value: 0xA


    For 8-bit values you can probably set anything from 0x0 to 0xFF (255) and for 16-bit values from 0x0 to 0xFFFF (65535). For example, if you want to set tCL 13 you should set variable 0x1FE to 0xD, etc.

    But you should probably also set

    Setting: DIMM profile, Variable: 0x1EE
    Option: Default DIMM profile, Value: 0x0
    Option: Custom profile, Value: 0x1
    Option: XMP profile 1, Value: 0x2
    Option: XMP profile 2, Value: 0x3


    to Custom Profile.

    What kit?



    This kit (photo attached). Currently running at 1866 CL10.

    EDIT: Tried editing the variable to set the custom profile. Good idea, but unfortunately the latency won't stick unless I write an XMP profile to the module (I really want to write a 1.5V, 2400 or 2666MHz CL14-CL15 profile and try it out through XMP). Unfortunately, no Thaiphoon.
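
    Since it's easy to slip up converting timings to hex against the list quoted above, a tiny helper keeps the mapping straight. The variable offsets are the ones from the quoted dump; the 12-12-12-31 target is only an example, not something I've verified as stable:

    # Turn a target DDR3 timing set into the (variable, value) pairs the quoted
    # list expects. Offsets come from the dump above; the target is an example.
    TIMING_VARS = {
        "tCL":  0x1FE, "tRCD": 0x1FF, "tRP":  0x200, "tRAS": 0x201,
        "tWR":  0x203, "tRFC": 0x204, "tRRD": 0x206, "tWTR": 0x207,
        "tRTP": 0x208, "tRC":  0x209, "tFAW": 0x20B,
    }
    DIMM_PROFILE_VAR = 0x1EE   # must be set to 0x1 (Custom profile) or timings won't stick
    CUSTOM_PROFILE = 0x1

    target = {"tCL": 12, "tRCD": 12, "tRP": 12, "tRAS": 31}  # example target only

    print(f"set 0x{DIMM_PROFILE_VAR:X} = 0x{CUSTOM_PROFILE:X}   (DIMM profile -> Custom)")
    for name, ticks in target.items():
        print(f"set 0x{TIMING_VARS[name]:X} = 0x{ticks:X}   ({name} = {ticks})")

    So tCL 12 comes out as 0x1FE = 0xC, tRAS 31 as 0x201 = 0x1F, and so on, matching the tCL 13 -> 0xD example in the quote.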
  13. The irony of this system is that with overclocking and high-speed RAM, the dGPU becomes redundant and you can get better battery life and comfortably use an eGPU. But you need cooling to get to that performance level, and to get that cooling you need the redundant and rather unnecessary NVS 5200. It would have been nice if both were DX12 capable; then you'd have the possibility of a doubling of performance under Windows 10, but the HD4000 is DX11 only (I'd heard Fermi supports DX12, but I wouldn't know).

    Gotta get myself some of those tiny copper heatsinks to stick on the heatpipe.

  14. When trying to unlock the upper bins of a 3740QM by editing variables 0x25 to 0x28, what values should I be setting them to? I'm trying to understand the guides but coming up short.

    Can I also control the maximum JEDEC speeds and latencies? Say I want to use different JEDEC profiles on my RAM than what is automatically selected.

    For the OC ME FW, should I boot a Rufus USB stick into DOS and use fpt.exe to flash the e6430oc.bin file? (And set variable 0x228 to 0x1 to unlock the flash descriptor before flashing, right?)

    What's the risk with this?

    I tried going to 1600MHz on the iGPU, but the chip gets HOT fast and I run into TDP throttling quickly. I've no idea how to cool the chip properly without access to the beefier heatsink. I might be better off using the i5 CPU for gaming on the go until my EXP GDC arrives.

    Can any EXP GDC user advise what to do with the ExpressCard 34 card in the 54 slot on the E6430?

  15. It's not that AMD cards won't be in TB3 systems; it's that currently, if you want internal display output with an AMD GPU, you need to purchase another product, like Virtu, which is not quite as competitive as NVIDIA's current implementation. There's the added concern that the last and current generation of mid-range NVIDIA GPUs don't feature the highly relevant async compute units in a usable form. They say they have them, but they don't seem to work well. So now that GTX 970 is not really attractive anymore, because I can purchase an R9 290X 30% cheaper that devastates it in DX12 performance. The lower overhead of DX12 also matters, because on a 4Gbps ExpressCard lane that overhead is significant. There are plenty of people using just an x1.2 link at best.

    So in this scenario, I think AMD licensing or purchasing LucidLogix Virtu outright and integrating its tech into Enduro, or developing their own in-house feature to get this working on the internal display, would give them a serious edge in the eGPU market.

    And I want to see MXM-form-factor eGPUs. Not big, cumbersome boxes.
