Everything posted by sangemaru

  1. Thanks @Tech Inferno Fan. I'm hesitant to get a PE4C 3.0 or PE4L 2.1b since they're quite expensive to get in Europe. I'd definitely do it if I could find a buyer for my EXP GDC, though. A large part of the problem is the ExpressCard slot on this laptop; like I said, the adapter itself worked perfectly with all GPUs over mPCIe. By the way nando, how did you get the ASPM settings to stick through sleeps/reboots? Currently I have to re-enable them manually in RW-Everything every time I sleep/reset the machine (it doesn't take long, but it's still a bother). Is there a way to set them before Windows loads, without breaking the bootloader or the current installation, and have them stick? EDIT: my god, prices of Bplus adapters have gone up massively - to the point where I doubt there's any sense in going for ExpressCard as opposed to a Thunderbolt solution. And nobody seems to be selling their Bplus adapters.
  2. Two updates: using the EXP GDC, I tried out an RX 460 and a GTX 1060. Unlike the previous models I'd tried, the RX 460 can sustain x1 2.0 and work just fine indefinitely. The GTX 1060 can sustain x1 2.0 but occasionally crashes, making it unsuitable for use with the EXP GDC. The choice of video card seems to matter a lot. One other glorious update: I used RW-Everything to enable ASPM on all the PCIe devices on all my root hubs. Windows 10 now reports around 8 hours of available battery life (as opposed to 4-5) with Opera (20 tabs open) + Chrome (40 tabs open), Wi-Fi enabled, brightness at min+3%, CPU capped at 99%, battery saver on. Minimum CPU package power consumption has dropped to 3.4 W from 5.7 W. BatteryBar still reports only 3:40 of available time, though - I'll run a discharge cycle to see how long the machine can really last. If I've really increased my laptop's battery life from an effective 3-4 hours to 7+, then no way in hell am I giving this qt3.14-laptop up. Just need to find the right stable eGPU now. I wonder whether an RX 470/480 could sustain x1 2.0 stably. Oh, and this is while I'm writing/browsing. Idle is 10-11+ hours on the 9-cell battery (with 10% wear).
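     For anyone wondering what that RW-Everything session actually does: ASPM lives in bits [1:0] of each device's PCIe Link Control register (offset 0x10 inside the PCI Express capability). Below is a rough Python sketch of the equivalent poke on Linux, where config space is exposed through sysfs. This is my illustration of the register layout, not a tested tool - blindly enabling L0s/L1 on devices that don't advertise support for them can hang the bus:

        # Sketch: enable ASPM (L0s + L1) on every PCI device, the way
        # RW-Everything does it by hand on Windows. Linux/sysfs, needs root.
        import glob
        import os

        PCIE_CAP_ID = 0x10           # PCI Express capability ID
        LINK_CONTROL_OFFSET = 0x10   # Link Control register within that capability
        ASPM_L0S_L1 = 0b11           # ASPM Control field: L0s and L1 enabled

        def find_pcie_cap(cfg: bytes):
            """Walk the capability list and return the PCIe capability offset."""
            if not cfg[0x06] & 0x10:     # Status register: no capability list
                return None
            ptr = cfg[0x34] & 0xFC       # Capabilities Pointer (dword-aligned)
            while ptr:
                if cfg[ptr] == PCIE_CAP_ID:
                    return ptr
                ptr = cfg[ptr + 1] & 0xFC
            return None

        def enable_aspm(dev_path: str) -> None:
            with open(os.path.join(dev_path, "config"), "r+b") as f:
                cfg = f.read(256)                 # standard config space
                cap = find_pcie_cap(cfg)
                if cap is None:
                    return                        # legacy PCI device, skip
                lnkctl = cap + LINK_CONTROL_OFFSET
                f.seek(lnkctl)
                f.write(bytes([cfg[lnkctl] | ASPM_L0S_L1]))  # set bits [1:0]

        for dev in glob.glob("/sys/bus/pci/devices/*"):
            enable_aspm(dev)

     This also hints at why the setting doesn't survive sleep: presumably the OS restores its own saved copy of Link Control on resume, so the write has to be redone every time.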
  3. Yea man, that's ok. 0x1 and 0x01 are the same thing as 8-bit values. 0x170 at 0x1f would glitch out for me, but that was with a hotter CPU - maybe I'll try again. Set 0x171 to something like 0x25 and, if it's stable, go down in increments of 5. Ignore the errors, those pop up for everybody. Have you tried the Dell Feature Enhancement Pack? http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=MHVWP&driverId=MHVWP&CID=281125&LID=5567365&DGC=AF&DGSeg=ARB&ACD=25789221108616845 Or this: http://forum.osxlatitude.com/index.php?/topic/5907-manually-controlling-the-cooling-fan-of-a-dell-laptop-or-pc/
  4. 0x16F set to 0x1. Make sure to keep in mind the ranges posted in badbadbad's guide:

     GT Overclocking Frequency - 0x170 - 0x00-0xFF (8-bit value, 0-255) [iGPU] - [decimal value] x 50 MHz (example: 34 x 50 MHz = 1700 MHz)
     GT Overclocking Voltage - 0x171 - 0x00-0xFF (8-bit value, 0-255) [iGPU] - +0.01 V per step (example: 0x05 = +0.05 V)

     So if you'd like to try a relatively safe clock, try setup_var 0x170 0x1c, which should set 1400 MHz. Setting it to 0x17f most likely registers it as 0x7f, which is 127 in decimal - way out of bounds. As for voltage, you gave it +0.37 V, which should be enough to get you well over 1500 MHz. Check the table below (frequency, temperature and power are from the GPU-Z log; blank cells are readings I don't have):

     Freq value | Frequency | Volt value | Increment (speculation) | Memory (dual channel) | Max temp | Max power | Furmark 720p score
     unchanged  | 1100 MHz  | unchanged  | +0.00 V | 1600 MHz |       | 18.1 W | 371
     unchanged  | 1250 MHz  | unchanged  | +0.00 V | 2133 MHz |       | 17.9 W | 515
     0x1a       | 1300 MHz  | unchanged  | +0.00 V | 2133 MHz |       | 19.5 W | 517
     0x1b       | 1350 MHz  | unchanged  | +0.00 V | 2133 MHz |       | 21.1 W | 532
     0x1c       | 1400 MHz  | unchanged  | +0.00 V | 2133 MHz | 81 C  | 22.2 W | 549
     0x1d       | 1450 MHz  | 0x05       | +0.05 V | 2133 MHz | 83 C  | 23.5 W | 553
     0x1e       | 1500 MHz  | 0x15       | +0.21 V | 2133 MHz | 84 C  | 27.5 W | 563
     0x1f       | 1550 MHz  | 0x25       | +0.37 V | 2133 MHz | 87 C  | 31.6 W | 589
     0x20       | 1600 MHz  | 0x40       | +0.65 V | 2133 MHz | 93 C  | 37.4 W | 639
     0x21       | 1650 MHz  | 0x50       | +0.80 V | 2133 MHz | 102 C |        | 717

     Anyway, use ThrottleStop's TPL option to enable Intel Power Balance and give 0 to the CPU and 31 to the GPU in order to test overclocks. It's going to get pretty hot pretty fast.
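     The two encodings above are simple enough to sanity-check before touching setup_var. Here's a small Python helper built purely from the formulas in the guide (the function names are mine):

        # Map a target iGPU clock / voltage bump to the 8-bit values that
        # go into setup_var 0x170 (frequency) and 0x171 (voltage offset).
        def gt_freq_value(target_mhz: int) -> str:
            """Frequency is encoded as value * 50 MHz."""
            value = target_mhz // 50
            assert 0x00 <= value <= 0xFF, "outside the 8-bit range"
            return hex(value)

        def gt_volt_value(extra_volts: float) -> str:
            """Voltage offset is encoded in +0.01 V steps."""
            value = round(extra_volts / 0.01)
            assert 0x00 <= value <= 0xFF, "outside the 8-bit range"
            return hex(value)

        print(gt_freq_value(1400))   # 0x1c -> setup_var 0x170 0x1c
        print(gt_volt_value(0.05))   # 0x5  -> setup_var 0x171 0x05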
  5. sangemaru

    [HARDWARE MOD]980m to Desktop 980 core upgrade

    Brother Khenglish, this is amazing. So, in essence, you may be running a world-first MXM 980 desktop core? So... suppose you installed that card in some really ancient machine...
  6. I disagree. I used the iGPU-only version with the 3740QM, decently cooled with liquid metal and the best pastes. It quickly ramps up to 105C, then hard-throttles down to about 3.2GHz and slows down massively in games. Switching to a 3612QM made games much smoother.
  7. Tried switching out the newer EXP GDC for the older EXP GDC using the mPCIe cable, and it seems the old Beast doesn't throttle down to 1.1 at all, while the newer one does. So the older one actually has better signal integrity - just not over ExpressCard. And I don't know if it's the cable or the ExpressCard slot. @Dewos, didn't you have this same exact problem?
  8. So I finally have an update on my machine - I got rid of the 3740QM (it ran too hot) and replaced it with a 3612QM, and everything is much smoother overall. I ordered another EXP GDC Beast (v8.0 this time). The mPCIe version was $33, which convinced me to pull the trigger - unfortunately it confirmed my hunch, and it's very disappointing. This new adapter can't hold a Gen2 signal either... over ExpressCard. Over mPCIe it can. It occasionally throttles down to 1.1 for a microsecond and then ramps back up. I'm going to game a bit on it, but... very disappointing. With the ExpressCard slot seeming to be the unstable part, forking over $100 for a PE4C v3.0 is a much harder swallow.
  9. Yeah, the CPU is a trooper. It has no problem maxing out its potential with the unlocked multipliers. iGPU overclocking gets rather unstable for me past 1400MHz, though I can even game @ 1600, for example. My only glass ceiling is cooling. At some point I might do some hardware mods for that. We'll see.
  10. Drill under the fan, forget about closed lid (I still haven't done it though).
  11. No, I had no trouble with Gen2 enabled as long as I wasn't putting any load on the eGPU. My problems appeared the moment I tried to connect anything to it. If I booted the machine with the adapter connected but no monitor attached, Windows would be perfectly stable and report Gen2 speeds.
  12. My external monitor is an old Fujitsu B24W ECO. Unfortunately I have no other workable machine with an ExpressCard slot on hand at present (I do have an E4300, but it needs repairs before I can make it work - maybe I can report on it sometime next week). I could test with an HD 4870 or HD 3870 eGPU, but again, next week at the earliest. It would really suck if the adapter could actually deliver Gen2 and the problem were with the machines.
  13. For me: TOLUD set to 2.5 GB, iGPU set to always-enabled as well as forced primary, and PCIe speed forced to Gen1. I suggest you try that even if the 2570p can hold a link - who knows. That's about it.
  14. Have you played around with the EFI variables to lower TOLUD, enable/disable the iGPU/dGPU, or switch the primary display device?
  15. sangemaru

    EXP GDC Beast/Ares-V7/V6 discussion

    If you don't have access to an unlocked BIOS, you're unlikely to be able to change the PCIe adapter speed with anything other than Setup 1.30, as far as I'm aware. In my case, I'd get a BSOD at Gen2 speed if the adapter wasn't the main GPU. If it WAS the main GPU, I wouldn't get a BSOD, but I'd get a black screen, freezes, or both (usually both).

    Especially since you're using nVidia Maxwell cards, your system TOLUD must leave enough space to allocate resources for all your components - more so since you also seem to have dGPUs in your machines. Refer to this thread for more information. Without enough contiguous address space available, you simply can't use the cards; that's all there is to it. Some machines (like my own) offer dynamic TOLUD allocation or the ability to set the TOLUD size manually. Most machines do not. In the near-certain event that you don't have enough space available, you MUST perform a DSDT override.

    The second question has to do with signal integrity - specifically, the EXP GDC's inability to reliably deliver Gen2 signal quality. My own machine would automatically set the PCIe speed to Gen2 when the adapter was not set as primary from boot, or on hot-plug, which would instantly crash and freeze, since my adapter can't reliably carry a Gen2 signal. I expect this is a common issue on most machines. I use EFI overrides to force Gen1 signaling while keeping the iGPU enabled, with the eGPU set as the primary display on boot. This lets me boot and use the card properly, keeps the bus fixed at Gen1, and then allows hot-plugging. It took me days to get it working, and I was very fortunate to have the machine I do and the support of the techinferno community.

    First, confirm you have enough PCI address space available (check the first link). Once you've confirmed that, use GPU-Z and try to determine what bus speed your eGPU is using when connected to an external display. This might be easier if you uninstall all drivers and leave just the Microsoft VGA adapter driver. If the link speed is reported as PCI Express x1 @ 2.0, your machine is forcing Gen2 and it's likely that your adapter can't handle the signal. You would either need to use Setup 1.30 to downgrade the bus speed, or (better) request a refund from the vendor for the adapter's inability to sustain Gen2 as advertised, and purchase the superior PE4C v3.0 adapter.
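    If you happen to test from a Linux live USB instead of GPU-Z, the negotiated link comes straight out of sysfs. A quick sketch (the device address below is a made-up example - substitute your eGPU's):

        # Read the negotiated PCIe link for one device - the same
        # "x1 @ 2.0" figure GPU-Z shows on Windows.
        # "5.0 GT/s" = Gen2, "2.5 GT/s" = Gen1.
        def link_status(bdf: str) -> str:
            base = f"/sys/bus/pci/devices/{bdf}"
            with open(f"{base}/current_link_speed") as f:
                speed = f.read().strip()
            with open(f"{base}/current_link_width") as f:
                width = f.read().strip()
            return f"x{width} @ {speed}"

        print(link_status("0000:03:00.0"))  # hypothetical eGPU address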
  16. sangemaru

    EXP GDC Beast/Ares-V7/V6 discussion

    Two questions:
    - Have you guys confirmed your TOLUD is low enough for these GPUs to fit in the PCI address space on your machines?
    - Have you attempted to limit the PCIe speed to Gen1?
    The BSOD especially makes me think your adapters can't sustain Gen2 speeds. This is what happened to me: I can only use my EXP GDC in Gen1 mode, which is why I requested a refund from Banggood. They granted the refund and let me keep the adapter. The performance hit in my case (R9 270X) is about 20-30%, but it's still good enough to max out The Witcher 3 at 1920x1200 on all ultra settings (except nVidia HairWorks and SSAO/HBAO) at 30fps.
  17. sangemaru

    EXP GDC Beast/Ares-V7/V6 discussion

    Unfortunately I'm writing from my phone, so it's hard to go into much detail. Your PSU should have two sets of 12 V cables. You need to buy a cable that combines those two rails into one and plug that into the EXP GDC power cable, instead of using just one of the PSU cables. You can and should do the DSDT override before you set up the eGPU adapter. I think Win7 might not allow hot-plugging and the use of more than one video card; I suggest you make a temporary Win 8.1 install on secondary media just to test the eGPU. UEFI/legacy should not influence the black screen as far as I know. Sent from my Neken N6 using Tapatalk
  18. sangemaru

    EXP GDC Beast/Ares-V7/V6 discussion

    The PSU is rated to deliver around 160 W per 12 V rail. That's not enough. Find a connector that combines both rails to plug into the EXP GDC power cable. You might also need a DSDT override, since I doubt you have enough PCI address space to run the card. Set the PTD switch to 7s. Also, try hot-plugging: get to Windows without the EXP GDC, sleep the computer, plug it in, resume from sleep. Does your BIOS allow you to disable the integrated nVidia GPU? If not, you might need @Tech Inferno Fan's Setup 1.30 program. Sent from my Neken N6 using Tapatalk
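    The arithmetic behind "not enough", as a tiny sketch - the 190 W peak figure is a placeholder, substitute your card's actual spec:

        # One 12 V rail at ~160 W can't feed a card that peaks above that;
        # combining both rails doubles the available budget.
        RAIL_WATTS = 160        # per-rail rating from this PSU's label
        GPU_PEAK_WATTS = 190    # placeholder: check your card's spec sheet

        def rails_needed(gpu_watts: float, rail_watts: float) -> int:
            full, remainder = divmod(gpu_watts, rail_watts)
            return int(full) + (1 if remainder else 0)

        print(rails_needed(GPU_PEAK_WATTS, RAIL_WATTS))  # -> 2: combine both rails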
  19. So this just makes the e6430 even more fun and safe to tweak, basically? @timohour, can you confirm you see the same behavior on your machine? Damn it, my tweak bone is vibrating and wants to play with those voltage steppings, but I've no idea what values I should write - I don't want to use the wrong ones and overvolt something until it fries. Sent from my Neken N6 using Tapatalk
  20. Hmm. See, here's what has me confused. I rendered my machine unbootable a few times after experimenting with the variables for RAM speed and timings. Each time, I'd shut down the machine, power off, remove the power cord, the battery and the CMOS battery (maybe I'm confusing the name - BIOS battery?), power-drain it, and then power it on without the CMOS battery, and all variables would reset to defaults, including CPU multipliers, iGPU overclocking, RAM, PCIe bus speed, etc. I'd then plug everything back in and boot with no problem.
  21. Maybe I'm being stupid, but why would it brick? Wouldn't resetting the CMOS restore the default settings?
  22. Thanks for the detailed answer, Khenglish. So basically we'll start seeing a whole new era of computing when sub-zero cooling becomes mainstream. If GPU voltage control isn't in NVRAM, then... what might that variable control? Any ideas? If it were voltage to PEG devices, I'd expect to find a var for each PEG, not just the one; more importantly, the ability to control voltage stepping makes me suspicious. I can't think of any devices outside of CPU and GPU cores that would need that granularity of voltage control. Sent from my Neken N6 using Tapatalk
  23. Oh, by the way, I can play The Witcher 3 at my monitor's native resolution with everything except nVidia HairWorks, AO and foliage maxed out. Framerate is between 30 and 60fps. Yay
  24. The TDP on the top bins is relatively steady as long as the CPU stays under 85C. Above 85C (and especially above 90), the TDP fluctuates upward, which is why I speculate the chip is more efficient and stable when cool (or the TDP calculation algorithm runs better). I did my best to take each reading before the CPU heated up but after it settled on a stable value at that multiplier, so they're as valid as I can make them. Oh, so in theory those vars would control voltage for the dGPU if I had one?
  25. i7-3740QM OEM (TDP cells for x40/x41 were not logged):

      Multiplier | Voltage  | TDP
      x12        | 0.8456 V | 11.9 W
      x23        | 0.8706 V | 19.3 W
      x24        | 0.8806 V | 20.2 W
      x25        | 0.8956 V | 21.3 W
      x26        | 0.9106 V | 22.6 W
      x27        | 0.9307 V | 23.8 W
      x28        | 0.9457 V | 25.5 W
      x29        | 0.9709 V | 27.1 W
      x30        | 0.9957 V | 29 W
      x31        | 1.0258 V | 31 W
      x32        | 1.0508 V | 33.6 W
      x33        | 1.0758 V | 36.5 W
      x34        | 1.1058 V | 39.2 W
      x35        | 1.1409 V | 42.7 W
      x36        | 1.1709 V | 47.6 W
      x37        | 1.2059 V | 50.6 W
      x38        | 1.2109 V | 51.8 W
      x39        | 1.2159 V | 52.5 W
      x40        | 1.2209 V |
      x41        | 1.2260 V |

      Some pretty interesting conclusions to be drawn from this, in my opinion. Comparing with the other results posted in the link you shared, it suggests there's quite a wide range of voltage sweet spots for low-multiplier scenarios. Notice how the voltage climbs steeply up to around x37, in steps of 0.02-0.03 V, and then the step size drops dramatically after x37. This suggests to me that the Ivy Bridge CPUs are either: 1) massively overvolted compared to their needs; or 2) designed to operate around 4-4.5 GHz at high voltage - 1.2 V+ operation at 55 W+ TDP. The fact that the heat, even at high multipliers, scales linearly with voltage makes me wish I could undervolt. If I could bring TDP under 44 W, I wouldn't need cooling mods at all anymore to max it out indefinitely.

      Definitely, anyone wishing for best-bang-for-buck performance at the moment should probably get an Ivy Bridge CPU capable of multipliers above x39, in a machine capable of powering and cooling it: 3740QM/3820QM/3840QM/39x0XM. This just makes me yearn for better cooling and an Extreme chip, to be honest, but honestly, I don't think I have any good reason to complain.

      Also, Banggood refunded my payment for the EXP GDC Beast. Considering the DA-2 cost me $15, the adapter was 'free', the 24" monitor cost me $76 under warranty (unfortunately it's not the IPS panel, but the viewing angles are so good I couldn't tell at first - it has amazing specs though), the GPU was also 'free' (from a friend), and the CPU cost me $126, well... definitely no reason to complain.

      Any idea what the following EFI vars relate to? I see them placed between the PEG3 variable options. I guess it's too much to hope that we might be able to control the voltage steps for the CPU, but... what if?

      0x5F82B Numeric: Voltage Margin Steps (305044074464-305044074464), Variable: 0xC5A {07 A6 A9 04 AA 04 C0 01 02 00 5A 0C 10 10 01 FF 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00}
      0x5F851 Default: 8 Bit, Value: 0x2 {5B 0D 00 00 00 02 00 00 00 00 00 00 00}
      0x5F85E End {29 02}
      0x5F860 Numeric: Voltage Start Margin (305044074464-305044074464), Variable: 0xC5B {07 A6 AB 04 AC 04 C1 01 02 00 5B 0C 10 10 04 FF 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00}
      0x5F886 Default: 8 Bit, Value: 0x14 {5B 0D 00 00 00 14 00 00 00 00 00 00 00}
      0x5F893 End {29 02}
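      Since the step-size observation above is easy to check mechanically, here's a quick Python pass over the voltage column (values copied verbatim from the table; x12 skipped as the idle bin):

        # Per-multiplier voltage deltas for the i7-3740QM table above.
        multis = list(range(23, 42))   # x23 .. x41
        volts = [0.8706, 0.8806, 0.8956, 0.9106, 0.9307, 0.9457, 0.9709,
                 0.9957, 1.0258, 1.0508, 1.0758, 1.1058, 1.1409, 1.1709,
                 1.2059, 1.2109, 1.2159, 1.2209, 1.2260]
        for m, lo, hi in zip(multis[1:], volts, volts[1:]):
            print(f"x{m}: +{hi - lo:.4f} V")
        # Steps sit around 0.010-0.035 V up to x37, then collapse to
        # ~0.005 V from x38 onward - the drop-off discussed above.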