Everything posted by sangemaru

  1. Thanks for the advice. I don't understand either why I'd get this behavior with iGPU + eGPU when set to Gen1. Maybe it's a Windows 10 thing? EDIT: Tried disabling ULPS to resolve the issue, but that didn't fix anything; ULPS was already disabled (I double-checked the registry, sketch below). The seller responded about the issues and asked for a video. Well, let's hope the video is convincing.
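     For reference, the ULPS check I did is the usual registry route; a minimal sketch, assuming the AMD display driver sits under the standard display class key (the 0000 subkey number varies per machine, so search first):

        rem Find which subkey carries EnableUlps (run from an elevated prompt)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}" /s /f EnableUlps
        rem Set it to 0 on the subkey the query reported (0000 here is an assumption), then reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0000" /v EnableUlps /t REG_DWORD /d 0 /f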
  2. My main problem is that the behavior exhibited when enabling Gen2 (driver crash and reset, system freeze) also shows up whenever the iGPU is active at the same time as the eGPU, whether the adapter is set to Gen1 or Gen2. I can boot with both the iGPU and the eGPU, but putting any load on the eGPU in that state (such as plugging in an external display) crashes my drivers (as long as I have any installed) and makes the system almost unusable. I can only reliably use the eGPU by booting with it as primary. An acceptable compromise might be booting the eGPU as main but still enabling the iGPU as secondary, so that I keep the internal display. I wonder whether setting the Internal Graphics variable to Enabled instead of Auto, and the Primary Display variable to PEG (a pretty poor selection; does the ExpressCard even count as PEG?), might let me retain the internal LCD; a sketch of the setup_var writes is below. Sleep / connect / wake = guaranteed BSOD. Hotplugging into the EC slot after POST but before Windows is something I've tried a few times, but it stalls my Windows loading more often than not. Either way, as long as I get to Windows (or Ubuntu) with both the iGPU and the eGPU active, the system isn't stable and will BSOD or freeze sooner or later. I purchased from Banggood, and their return/refund policy seems sketchy; I sent their customer support team an e-mail but they've not responded yet. I'm regretting not picking the PE4C now, though I have a peeve about soldered cables and how easy they are to break... By the way, if I consider a x2.2 setup, do you have any suggestions for making it less intrusive? Or do I have to give up either WLAN or WWAN, and either cut a very unsightly hole in my laptop or run it with no bottom plate? Thanks for those variables, I'll try combinations soon. Hope I don't render the system unbootable and have to clear CMOS again.
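     If I try the Internal Graphics / Primary Display idea, it would go through the modded-GRUB setup_var route; a sketch only, since the offsets below are placeholders that need to be read out of this BIOS version's IFR dump first:

        # grub> prompt, booted in UEFI mode off a thumb drive
        # Offsets/values are assumptions; verify against the A16 IFR dump before writing
        setup_var 0x15A 0x1   # Internal Graphics: Auto -> Enabled (offset assumed)
        setup_var 0x156 0x1   # Primary Display -> PEG (offset and value assumed)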
  3. By the way timohour, I'd like to mention that your assistance has been amazingly helpful, descriptive, and usually spot-on. Thank you. I would have been ripping my hair out in frustration long ago without your help. Sent from my Neken N6 using Tapatalk
  4. I got my hands on an external screen to experiment with, and I've drawn the following conclusions. My EXP GDC Beast works ONLY if: - I boot from the eGPU directly (booting from the iGPU will either stall my boot, BSOD, or make it to Windows only for the driver to start crashing constantly once I connect the external monitor). - I use Gen1 ExpressCard (Gen2 will do all of the above, regardless of which device I boot from). So my EXP GDC only works at Gen1 speed. This sucks. I can't even use both internal and external LCDs, external only. Scores: P6568 in 3DMark 11, 4466 in 3DMark Fire Strike.
  5. So, an update on my situation. On an E6430 with Windows 10 and a 3740QM CPU, my EXP GDC Beast works ONLY if: - I boot from the eGPU directly (booting from the iGPU will either stall my boot, BSOD, or make it to Windows only for the driver to start crashing constantly once I connect the external monitor). - I use Gen1 ExpressCard (Gen2 will do all of the above, regardless of which device I boot from). So the very thing I was dreading has come to pass: my EXP GDC only works at Gen1 speed. I'll be trying to return it and get a PE4C v3.0. This is definitely not improving my already abysmal financial situation any. The performance is actually not terrible (maybe DX12 at play?). I get fairly decent results in 3DMark (pretty much at the level of a non-overclocked 7970M) as well as in games, though it's definitely not what the card can push out (Sapphire Vapor-X R9 270X). One painful consequence: since I can't boot from the iGPU, booting from the eGPU means I can't use my internal LCD, so I'm limited to an external display while using the eGPU (can't even run two screens). This was not what I was hoping to achieve. Are there any ways I might improve signal stability?
  6. I'm using an R9 270X. No NVIDIA GPU, unfortunately, and as far as I can tell, Optimus internal-LCD mode doesn't work for people on Windows 10 anyway. Virtu also refuses to work on Windows 10. Sent from my Neken N6 using Tapatalk
  7. Yeah, made that happen. Actually, setting max TOLUD is all that mattered. The crashing appears to have been due to Virtu MVP (though I still can't hotplug). The last hangup is finding a workable way to render on the internal screen without Virtu MVP. Sent from my Neken N6 using Tapatalk
  8. That looks pretty fun. I'd guess the MXM implementation uses a rather different specification from PCI Express to communicate with the OS through the driver, and you'd need custom-made drivers to support this mod. But oh, the possibilities. Such as using a cable that can be routed out of the chassis in a cleaner-looking way (though I imagine signal integrity is quite the concern).
  9. Set var 0x1F8 (PEG3, the only PEG device that doesn't offer a Gen3 speed setting; I'm assuming this one is the ExpressCard) to 0x1, no change. This makes me wonder: if the other three PEG devices (devices 0, 1, and 2, which I assume are the mPCIe slots) support Gen3, wouldn't using one of them with the Gen3-capable PE4C v3.0 offer another doubling of bandwidth? Since they can be set to Gen3 manually through EFI vars, PCIe x1 3.0 should be roughly equal to x2 2.0 and x4 1.0, right? (Per-lane arithmetic below; with the added benefit that being x1 would engage NVIDIA compression.) Also, Ubuntu won't boot. Didn't try hotplugging in Ubuntu yet. EDIT: Booting an old HD 4850 works, though the driver support is pretty much not there; all it can do is browse. EDIT2: Managed to boot it properly using Leshcat drivers after uninstalling Virtu MVP. The Virtu MVP software makes the GPU unusable. The question now is: how do I use the eGPU on the internal display?
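     The per-lane arithmetic behind that equivalence (line coding only, ignoring the rest of the protocol overhead):

        Gen1: 2.5 GT/s, 8b/10b    -> 250 MB/s per lane, so x4.1 ≈ 1000 MB/s
        Gen2: 5.0 GT/s, 8b/10b    -> 500 MB/s per lane, so x2.2 ≈ 1000 MB/s
        Gen3: 8.0 GT/s, 128b/130b -> ~985 MB/s per lane, so x1.3 ≈ 985 MB/s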
  10. Ty, I'll search. Yes, I used EFI vars to set TOLUD. Going to try setting the EC to Gen1 to confirm it's not a signaling issue. Sent from my Neken N6 using Tapatalk
  11. Trying to boot my EXP GDC Beast on an E6430 results in a BSOD in Windows 10 before the login screen, with SYSTEM_SERVICE_EXCEPTION in atikmpag.sys. Anyone have any suggestions? EDIT: Hotplugging it gives me a KERNEL_SECURITY_CHECK_FAILURE BSOD. EDIT2: Managed to boot it properly using Leshcat drivers after uninstalling Virtu MVP. The Virtu MVP software makes the GPU unusable. Which sucks, since Virtu MVP apparently works on Windows 7 only, and I'd have liked internal display functionality on Windows 10 with DX12 on an AMD eGPU. That's not possible now. Unless, maybe, we can mod the Alienware Graphics Amplifier driver?
  12. Got my EXP GDC in hand. I forgot to ask for the ExpressCard 54 version, so now I'll need to be careful not to damage the slot accidentally when moving the cable. I have no external monitor to test with, though, so I'm trying to get internal display functionality. Booting with it connected black-screens, though the system is obviously functional and loading. Hot-plugging it after boot won't show the eGPU in Device Manager. EDIT1: I can boot the system with the EXP GDC timers set to 7 seconds. If I try to boot with the timers set to 0, I boot to a black screen. If I try to hot-plug with the timers set to 0, the system restarts. EDIT2: With dynamic TOLUD, I could boot to Windows 10, see both the iGPU and the eGPU, and get error 12 on the eGPU. With max TOLUD set to 2.5GB (or any other value that avoids error 12; setup_var sketch below), the system can no longer boot to Windows with the eGPU attached. I get a SYSTEM_SERVICE_EXCEPTION BSOD in atikmpag.sys. Trying to hot-plug it gives me another BSOD, this time kernel-related (KERNEL_SECURITY_CHECK_FAILURE).
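     The max TOLUD write itself was the usual setup_var call from the modded GRUB shell; a sketch, with the offset and value encoding shown here as assumptions to be checked against your own IFR dump:

        # grub> prompt; offset/value are illustrative, not confirmed for A16
        setup_var 0x1F6 0x1   # Max TOLUD (offset assumed; use whatever encoding your IFR lists for 2.5GB)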
  13. More importantly, if the driver supports this functionality, how do we expose it on non-proprietary connections? Sent from my Neken N6 using Tapatalk
  14. Good to know, cool. I RMA'd my RAM and received an identical kit with identical behavior. So a completely different Kingston HyperX Impact kit won't reliably run at 2133 and won't reliably boot at 2133 on the E6430 either. Using EFI variables I got it running at 1866 CL10, fully stable. Annoying, though. G.Skill seems to be more reliable.
  15. Making progress. I noticed that unless I set DIMM profile variable 0x1EE to value 0x1 (custom profile), changes I'd make to things such as the tCL timing wouldn't stick. So I edited the variable above, checked that changes now stick, and successfully booted dual-channel at CL12, command rate set to 2T (sketch of the sequence below). Now my biggest challenge is getting the system to boot at 2133; I seem to have trouble there where I didn't before. I also reset my CMOS beforehand to get everything as close to defaults as possible. I've noticed the default values of some variables are NOT the same as in the A07 EFI dump file; A16 might have switched some around, so I'll need to extract the A16 EFI variables and compare. Things like tRAS and maybe others have different defaults (0x0A instead of 0x04 for tCL, for example, 0x0B instead of 0x03 for tRAS, and so on; though they seem to default to whatever the RAM's SPD table reports at that particular speed. Still, I'm not sure they're the same variables, so I'll try to dump A16). I'm hoping I can at least boot 2133 at CL12 with command rate 2T.
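     The sequence that made the timing edits stick, as a sketch; 0x1EE is from my dump, while the timing offsets below are placeholders for whatever your IFR dump lists for tCL and command rate:

        setup_var 0x1EE 0x1   # DIMM profile = custom; prerequisite, or timing writes don't stick
        setup_var 0x1F0 0x0C  # tCL = 12 (offset assumed; check your IFR dump)
        setup_var 0x1F2 0x2   # Command rate = 2T (offset and encoding assumed)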
  16. Huh, this is nice. Got my HD 4000 stable at 1450MHz with a +0.5 voltage offset (will try stock next) and 1866 CL10 RAM, and FurMarked it: 600 points exactly, faster than the score you posted in the first post at 1550 core / 2133 memory. I'm also seeing 31-32W of power draw at these settings. Anything further throttles erratically and won't stay under load. Even at these settings I see a tiny bit of throttling when the iGPU nears 100°C internal temperature under FurMark, but it's running pretty damn nice. I think that's the highest core OC I can get for daytime use without heavy cooling modding; all I can do now is tweak the RAM to the max. Apparently the variable to control my command rate and set it to 2T has worked, so I'll now try setting 2133 again and see if it's stable with the different command rate. EDIT: Going to settle on 1400MHz and default voltage for the least fluctuation, heat, and artifacting; it scores about 583 points in the FurMark bench. The RAM is the tricky question. I still can't do anything to boot it at 2133 dual-channel, which is weird because it used to work. I think I lost the ability to boot dual-channel 2133 once I upgraded to BIOS A16; I'd downgrade to A07 to confirm. Do I need the flash descriptor unlocked to downgrade? The cool part is being able to edit virtually any setting through EFI variables. This is so powerful. I'm curious whether the same thing could be done from Windows using RW-Everything, maybe even in real time?
  17. This kit, currently running at 1866 CL10. EDIT: Tried editing the variable to set a custom profile. Good idea, but unfortunately the latency won't stick. Unless I write an XMP profile to the module (I really want to write a 1.5V, 2400 or 2666MHz CL14-CL15 profile and try it out through XMP). Unfortunately, no Thaiphoon.
  18. Thanks for the memory speed variable; I used it to get at least stable 1600 CL9 speeds for now, so I don't have to throw away the kit. The replacement kit performed the same, no idea why; I assume similarities in the design. Any idea of a way for me to set latency, so I can try 2133 at CL12/CL13, for example?
  19. The irony of this system is that with overclocking and high-speed RAM, the dGPU becomes redundant: you can get better battery life and comfortably use an eGPU. But you need cooling to reach that performance level, and to get that cooling you need the redundant and rather unnecessary NVS 5200 (its models get the beefier heatsink). It would have been nice if both were DX12 capable, giving the possibility of a doubling of performance under Windows 10, but the HD 4000 is DX11 only (I'd heard Fermi supports DX12, but I wouldn't know). Gotta get myself some of those tiny copper heatsinks to stick on the heatpipe.
  20. When trying to unlock the upper bins of a 3740QM by editing variables 0x25 to 0x28, what value should I be setting them to? I'm trying to understand the guides but coming up short. Can I also control maximum JEDEC speeds and latencies? Say I want to use different JEDEC profiles on my RAM than what gets selected automatically. For the OC ME FW, should I boot a Rufus USB stick into DOS and use fpt.exe to flash the e6430oc.bin file (sketch below)? (Use variable 0x228 set to 0x1 to unlock the flash descriptor before flashing, right?) What's the risk on this? I tried going to 1600MHz on the iGPU, but the chip gets HOT fast and I run into TDP throttling quickly. I've no idea how to cool the chip properly without access to the beefier heatsink. I might be better off using the i5 CPU for gaming on the go until my EXP GDC arrives. Can any EXP GDC user advise what to do with the ExpressCard 34 card in the 54 slot on the E6430?
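     The flashing sequence I have in mind, as a sketch; it assumes e6430oc.bin is an ME-region image and that fpt.exe is the DOS build of Intel's Flash Programming Tool matching this machine's ME firmware generation:

        rem Beforehand, from the modded GRUB shell: setup_var 0x228 0x1 (unlock flash descriptor)
        fpt.exe -d backup.bin         rem dump the whole flash first and keep it somewhere safe
        fpt.exe -me -f e6430oc.bin    rem write only the ME region from the OC image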
  21. It's not that AMD cards won't be in TB3 systems; it's that currently, if you want internal display with an AMD GPU, you need to purchase another product, like Virtu, which is not as competitive as NVIDIA's current implementation. There's the added concern that the last- and current-generation mid-range GPUs from NVIDIA don't feature the highly relevant async compute units in a usable form: they say they have them, but they don't seem to work well. So that GTX 970 is not really attractive anymore when I can buy an R9 290X 30% cheaper that devastates it in DX12 performance. The lower overhead of DX12 also matters, because on ExpressCard's single ~4Gbps lane, that overhead is significant; plenty of people are running just an x1.2 (single-lane Gen2) link at best. So in this scenario, AMD licensing or outright purchasing LucidLogix Virtu and integrating that tech into Enduro, or developing their own in-house feature for internal display, would, I think, give them a serious edge in the eGPU market. And I want to see MXM-form-factor eGPUs, not big cumbersome boxes.
  22. It would be pretty cool if AMD did some thinking ahead: purchased LucidLogix (if they haven't already), integrated it with Enduro and gave it proper support, and released their own eGPU boxes using MXM-format or proprietary-format cards. The way I see it, what AMD is missing in this market that NVIDIA brings to the table is a decent compression option.
  23. Another question @timohour: do I need to convert my entire Windows install to GPT and run in UEFI mode in order to use UEFI variables, or is it enough to boot a thumb drive in UEFI mode, use the setup_var command, and reboot in legacy mode with the changes sticking? (Do they stick? Do EFI variables need to be applied at every boot? The thumb-drive layout I have in mind is sketched below.)
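     For clarity, the thumb-drive route I mean is the standard one below; my (unconfirmed) understanding is that setup_var writes straight to the Setup variable in NVRAM, so the values should survive legacy reboots until a CMOS clear:

        # FAT32 USB stick with the modded GRUB binary at the UEFI fallback path:
        #   \EFI\BOOT\BOOTX64.EFI
        # Boot the stick in UEFI mode, then at the grub> prompt:
        setup_var 0x1EE 0x1   # example write; offset as discussed earlier in the thread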