Another Tech Inferno Fan

Registered User
  • Posts: 198
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by Another Tech Inferno Fan

  1. What you are thinking of attempting is no different from the following: just because a physical connector appears to be RJ-45 doesn't mean it actually carries Ethernet. Likewise, just because a physical connector is HDMI doesn't mean the signal passing through it is.
  2. 10/10 highly informative thread
  3. >uses an ATX PSU to power the card
     >uses a separate DC PSU to power the adapter
     oy vey
  4. Driver 376.33 working on ThinkPad X220 w/ raidrar's 1.40 BIOS, W7 x64, GTX960 via EXP GDC Beast.
  5. It's a cable, not a wire. Use zip-ties to apply constant pressure on it. Use your creativity.
  6. Any card faster than a GTS 450 will work if you are using the internal monitor. See if you can get a GTX 560 Ti for cheap. It was very popular when it was released 5-6 years ago, so it should be fairly easy to get one for $50. Maybe even less.
  7. 1. The EXP GDC comes with an 8-pin to dual 6-pin cable. Use that to power the 1060.
     2. PCIe is backwards-compatible, so the 1060 will work; it will just be slow. Considering you're on PCIe x1.1, I have a hunch there won't even be any performance benefit going from the 750 to the 1060 (rough numbers below), so it may be wise not to bother changing the card at all.
     3. Read the geforce.com specifications for the 1060's TDP.
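
     To put rough numbers on the x1.1 point in 2. above: a back-of-envelope
     bandwidth calculation in Python. The line rate and 8b/10b encoding figures
     are standard PCIe 1.x spec values, not anything measured on the Beast;
     treat this as a sketch.

         # One PCIe 1.x lane signals at 2.5 GT/s using 8b/10b encoding,
         # so only 8 of every 10 transferred bits are payload.
         LINE_RATE = 2.5e9        # transfers per second, per lane
         ENCODING = 8 / 10        # 8b/10b line-code efficiency

         lane_bytes_per_s = LINE_RATE * ENCODING / 8   # bits -> bytes
         print(f"x1  1.x: {lane_bytes_per_s / 1e6:.0f} MB/s per direction")       # ~250 MB/s
         print(f"x16 1.x: {16 * lane_bytes_per_s / 1e9:.0f} GB/s per direction")  # ~4 GB/s

     At ~250 MB/s either card spends much of its time waiting on the link, which
     is why swapping the 750 for a 1060 is unlikely to show up in frame rates.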
  8. Irrelevant. Just because a connector is rated for a certain power doesn't mean the card will draw it. Go find the actual TDP of the specific card itself. You might be able to get away with using a single Dell DA-2 if you're willing to undervolt/underclock the card to lower its power requirements. Otherwise, you'll have to strap two DA-2s together, at which point it might just be easier to use a regular ATX PSU. I managed to power a 244W GTX 580 from a 216W 12V rail after undervolting it (rough budget below).
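
     A minimal budgeting sketch for the DA-2 maths above. The 12 V x 18 A rating
     is where the 216 W figure in the post comes from; the 10% safety margin and
     the helper name are my own assumptions.

         DA2_RAIL_W = 12 * 18     # one Dell DA-2: 216 W on its single 12 V rail

         def fits(card_draw_w, rail_w, margin=0.10):
             """True if the card's draw fits the rail with a safety margin."""
             return card_draw_w <= rail_w * (1 - margin)

         print(fits(244, DA2_RAIL_W))      # stock GTX 580: False -> undervolt first
         print(fits(244, 2 * DA2_RAIL_W))  # two DA-2s strapped together: True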
  9. Yes, because manufacturers are supposed to have varying designs for one standardised interface.
  10. There is only one type of mPCIe, and it is called mPCIe. Anything else is not mPCIe. Go find out what an Expresscard slot looks like. Then come back.
  11. PE4C v3 is more expensive but more stable. EXP GDC is less expensive but less stable. TL;DR: higher price = better product
  12. >implying 1 device = 1 board
      Nowhere. Use a 2.5" SATA SSD.
  13. There is a reason why the Dell DA-2 is so popular: The 12V rail is the only one that matters.
  14. Alternatively, it's some kind of EM interference from your PSU causing random crashes. Try what he did.
  15. Lenovo/IBM ThinkPads come with several battery options for each particular model. There is usually a 3/4-cell battery that takes up just a small amount of space. Then there is a medium 6-cell battery that either protrudes out the back or increases the height of the machine. Then there is a large 9/12-cell one with a huge protrusion out the back. There are also 3-cell ultrabay batteries that go into the ODD slot, for some models. You could DIY a massive battery pack yourself if you could build a physical case for it and reuse the control circuitry from the stock battery pack. Similar to re-celling, but adding more cells.
  16. ... Or you could just undervolt (and later underclock if necessary) the card. It is free and gives you lower temps.
  17. It's worth noting that power consumption translates directly into thermal output: when you use less power, your GPU also runs cooler. GPU Boost 2.0 and later will boost your card higher when you have lower temps. As such, lower voltage -> lower power consumption -> lower thermal output -> higher clocks, assuming GPU Boost is in effect (sketch below).
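
      A concrete sketch of that chain, using the usual CMOS dynamic-power
      approximation P ~ C * V^2 * f. The voltages are made-up illustration
      values, not measurements from any particular card.

          def rel_power(v_new, v_old, f_new=1.0, f_old=1.0):
              """Power at the new operating point relative to the old one."""
              return (v_new / v_old) ** 2 * (f_new / f_old)

          # Dropping core voltage 1.050 V -> 0.950 V at unchanged clocks:
          print(f"{rel_power(0.950, 1.050):.0%} of stock power")   # ~82%

      A ~10% undervolt buying back ~18% of the power (and heat) budget is exactly
      the headroom GPU Boost then converts into higher sustained clocks.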
  18. Hate to double-post, but in light of new information I feel this is warranted. I was playing TOXIKK and noticed that bus utilisation was consistently above 67%, sometimes as high as 96%. I managed to reproduce this using MSI Kombustor's Furry Donut stress test: UTIL % shows core utilisation, BUS % shows PCIe bus utilisation. From this I think I can conclude that the PCIe x1.2 interface is not the performance-limiting factor in my eGPU config, and that Optimus doesn't in fact eat a whole 33-40% of the x1.2 interface's bandwidth. What of you and your GTX 1060, OP? What is your bus utilisation figure under FurMark?
  19. This is happening to me as well: the bus utilisation never goes beyond 65% in any program that uses Optimus. I figured it was a coincidence that my 580 never used more than 65%, and that a faster card yielding more performance would utilise more bandwidth. I concluded this from the fact that the 580 would consistently show 100% GPU core utilisation in Afterburner under such loads, though I've learned not to trust even that. There desperately needs to be someone with a massive collection of video cards who can benchmark every one of them on x1.2Opt to see which cards aren't held back by bandwidth, and what bandwidth each card needs to reach its ideal performance. I'd do the grunt work of all that myself if I had the resources (a sampling sketch is below).
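
      For anyone who'd rather script this sampling than eyeball Afterburner: a
      rough sketch using NVML's PCIe throughput counters via the pynvml package.
      The per-lane ceilings are approximate spec values and ignore protocol
      overhead, so the percentages are only indicative.

          import pynvml

          pynvml.nvmlInit()
          gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

          gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(gpu)
          width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(gpu)
          per_lane_kbs = {1: 250_000, 2: 500_000, 3: 985_000}[gen]  # approx. KB/s

          # NVML samples throughput over a short window, reported in KB/s.
          tx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_TX_BYTES)
          rx = pynvml.nvmlDeviceGetPcieThroughput(gpu, pynvml.NVML_PCIE_UTIL_RX_BYTES)

          ceiling = per_lane_kbs * width
          print(f"x{width} gen{gen}: TX {tx / ceiling:.0%}, RX {rx / ceiling:.0%} of theoretical max")

          pynvml.nvmlShutdown()

      Run it in a loop while a game or FurMark is active to log the same BUS %
      figure the thread is comparing.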