
Posts posted by Tech Inferno Fan

  1. Hello, I've got a PE4H v2.4a which is used to connect an Nvidia GTX 560 Ti to a Dell Studio XPS 16 with an ATI x1700 onboard. The connection works with one lane on an x1 mPCIe without any problem. The problem appears when I try to connect 2 lanes: I use the program Setup 1.x to switch mPCIe port number 3 to x2, and port number 4 is used to connect the second lane via expresscard.

    When I start Windows everything is fine while I use the integrated graphics card... I can see in AIDA64 that my Nvidia is recognized as an x2 peripheral. When I switch to the Nvidia: black screen... BSOD.

    Does anyone have an idea where I should look to resolve this issue?

    In Setup 1.x's PCIe ports menu you have a Hot reset port and a Retrain link option. Try each one after you've manually set x2 mode. You likely need the Retrain link option. Contrary to the help text, it doesn't automatically retrain the link after setting x2 mode; the help text will be corrected in the next version.

  2. @Renovatio - those T9800/T9900 prices seem rather high when you can get a US$140-shipped Aliexpress X9100 E0 edition that can be overclocked up to 4.2GHz. REF: RickiBerlin's X9100-E0 HDX9000 testing

    The X9100-E0 has lower running temps than the X9100 OEM release, so it's the most overclockable (fastest) dual-core CPU available for Series-4 and 965PM systems. I expect it to retain its value. Overclocking is done using ThrottleStop software to unlock the multipliers.

  3. Do I need to have the PSU connected to the PE4L via floppy-molex connector?

    Yes. The PE4L needs 12V to power the slot. It gets it from the floppy molex connector.

    I finally got my 2x bandwidth running, using a PE4H with 2x PM3N adapter and Setup 1.x.

    Congratulations :chuncky:. x2 implementations are somewhat rare, so it's good to see another one pop up. If you post full system specs and benchmarks like those shown in the implementations on the first page, I can include you in the list.

  4. Also, I've said this before, but the difference between the HDMI-to-expresscard software and the Ultramon mirror method is drastic. I tried the mirror method one day (since it's only about $3 for resistors plus a free trial of Ultramon) and hated how much the cursor lagged. With the expresscard adapter, the lag is unnoticeable, except in games, where there's a slight lag (simply from the processor not being able to handle the software and game at the same time; I haven't checked if this applies with my new processor as well).

    Ultramon's mirroring is only used to give a virtual view into your dummy monitor attached to your eGPU. The idea is then to grab the window and bring it onto your internal LCD. This solution only works for windowed apps. To get full-screen apps driven by your eGPU but displayed on your internal LCD would require using the expresscard HDMI input adapter as you are doing. LucidLogix Virtu is the other solution that can do that, but you need a Sandy Bridge or newer CPU + iGPU as the active primary video adapter.

  5. 3. Hit the shops again and grab another GTX660Ti. My first ever SLI rig is going to be a #@!!$ing macbook air. Woo.

    While both the Sonnet Echo Express and TH05 give real-world x2 2.0 bandwidth, the Sonnet will register an x4 2.0 link speed with the Thunderbolt controller and the TH05 will register x2 2.0. As you noted, the driver needs to see them both at x4 2.0 to allow the SLI option to be enabled.

  6. Okay

    1. Virtu install of 2.1.220/64 still requires installing 1.2.114 first. Attempting to install 1.2.114 gets me this:

    [attached screenshot of the 1.2.114 installer error]

    ..

    Thunderbolt only gives me one channel:

    Performance Information

    -----------------------

    Memory Copy

    Host Pinned to Device: 771.347 MiB/s

    Host Pageable to Device: 681.664 MiB/s

    Device to Host Pinned: 891.037 MiB/s

    Device to Host Pageable: 809.153 MiB/s

    Device to Device: 50.3305 GiB/s

    GPU Core Performance

    Single-precision Float: 1773.2 Gflop/s

    Double-precision Float: 128.439 Gflop/s

    32-bit Integer: 510.769 Giop/s

    24-bit Integer: 509.853 Giop/s

    Generated: Fri Nov 30 00:06:00 2012

    ---

    Sad times.

    So while the PCIe bus hanging off the end of the thunderbolt rig is definitely PCIe 2.0 x4, it uses a transport that forms a 10Gbit bottleneck. That caps bandwidth at the equivalent of PCIe 2.0 x2.

    Still not bad for an eGPU.

    What one could do (given another big pile of cash) is daisy-chain a second thunderbolt device that would use the second channel. Slap a second GPU on it, and run them in SLI :D

    Indeed. You are only getting x2 2.0 performance levels due to the Thunderbolt link. Now for whatever reason, your memory copies are still a good ~10% faster than mine. I'm using a TH05 + GTX660 @ x2 2.0 on a 2012 13" MBP:


    CUDA-Z Report
    =============
    Version: 0.6.163 http://cuda-z.sf.net/
    OS Version: Windows AMD64 6.1.7600
    Driver Version: 306.97
    Driver Dll Version: 5.0 (8.17.13.0697)
    Runtime Dll Version: 4.20 (6,14,11,4020)

    Core Information
    ----------------
    Name: GeForce GTX 660
    Compute Capability: 3.0
    Clock Rate: 1084.5 MHz
    PCI Location: 0:10:0
    Multiprocessors: 5 (960 Cores)
    Threads Per Multiproc.: 2048
    Warp Size: 32
    Regs Per Block: 65536
    Threads Per Block: 1024
    Threads Dimensions: 1024 x 1024 x 64
    Grid Dimensions: 2147483647 x 65535 x 65535
    Watchdog Enabled: Yes
    Integrated GPU: No
    Concurrent Kernels: Yes
    Compute Mode: Default

    Memory Information
    ------------------
    Total Global: 2048 MiB
    Bus Width: 192 bits
    Clock Rate: 3004 MHz
    Error Correction: No
    L2 Cache Size: 48 KiB
    Shared Per Block: 48 KiB
    Pitch: 2048 MiB
    Total Constant: 64 KiB
    Texture Alignment: 512 B
    Texture 1D Size: 65536
    Texture 2D Size: 65536 x 65536
    Texture 3D Size: 4096 x 4096 x 4096
    GPU Overlap: Yes
    Map Host Memory: Yes
    Unified Addressing: No
    Async Engine: Yes, Unidirectional

    Performance Information
    -----------------------
    Memory Copy
    Host Pinned to Device: 695.801 MiB/s
    Host Pageable to Device: 642.796 MiB/s
    Device to Host Pinned: 787.19 MiB/s
    Device to Host Pageable: 728.874 MiB/s
    Device to Device: 52.3023 GiB/s
    GPU Core Performance
    Single-precision Float: 1230.13 Gflop/s
    Double-precision Float: 88.6785 Gflop/s
    32-bit Integer: 352.822 Giop/s
    24-bit Integer: 352.426 Giop/s

    Generated: Fri Nov 30 02:24:26 2012

    Haven't tried to install Virtu on Win8. My instructions work perfectly with Win7. At the moment I'm using a scratch HDD with MBR-installed Win7 + Setup 1.1x on a 2012 13" MBP. Needed to go back to Win7 because 3dmark06 gives lower results in Win8.

    Oh, and way back someone tried to do an x1 1.0 SLI config and found the driver would only allow SLI if there was an x4 link. So no cookie.

    Apparently, all the 2012 Macbooks use a DSL3510 Cactus Ridge Thunderbolt controller. Certainly, it's in my 13" MBP (pci ID 8086:1549). I'm wondering then if someone with a 13" or 15" MBPr with its dual Thunderbolt ports could mate it with a Sonnet Echo Express (x4 2.0 capable), wiring up *both* Thunderbolt ports. Would CUDA-Z (NVidia) or PCIeSpeedTest (AMD) show a 1500-1600MiB/s CPU<->GPU memory transfer rate to confirm x4 2.0 operation?

  7. Regardless, I want to have some empiric evidence of two thunderbolt channels being used under those 4 PCIe lanes.

    Running this on an almost-similar-specced macbook air 2011 (which had a single-channel thunderbolt controller) and comparing results may confirm the alternate hypothesis. I'll try running it on a high-res bench on both machines and we'll see how they stack up.

    Easily done. Just run CUDA-Z, which will give you memory copy info such as that shown for x1 2.0 below. Your x4 2.0 GTX660Ti should be seeing 4 times that result, 1500-1600MiB/s.



    Core Information
    ----------------
    Name: GeForce GTX 660
    Compute Capability: 3.0
    Clock Rate: 1084.5 MHz
    PCI Location: 0:3:0
    Multiprocessors: 5 (960 Cores)
    Threads Per Multiproc.: 2048
    Warp Size: 32
    Regs Per Block: 65536
    Threads Per Block: 1024
    Threads Dimensions: 1024 x 1024 x 64
    Grid Dimensions: 2147483647 x 65535 x 65535
    Watchdog Enabled: Yes
    Integrated GPU: No
    Concurrent Kernels: Yes
    Compute Mode: Default

    Memory Information
    ------------------
    Total Global: 2048 MiB
    Bus Width: 192 bits
    Clock Rate: 3004 MHz
    Error Correction: No
    L2 Cache Size: 48 KiB
    Shared Per Block: 48 KiB
    Pitch: 2048 MiB
    Total Constant: 64 KiB
    Texture Alignment: 512 B
    Texture 1D Size: 65536
    Texture 2D Size: 65536 x 65536
    Texture 3D Size: 4096 x 4096 x 4096
    GPU Overlap: Yes
    Map Host Memory: Yes
    Unified Addressing: No
    Async Engine: Yes, Unidirectional

    Performance Information
    -----------------------
    Memory Copy
    Host Pinned to Device: 373.109 MiB/s
    Host Pageable to Device: 360.146 MiB/s
    Device to Host Pinned: 397.159 MiB/s
    Device to Host Pageable: 382.978 MiB/s
    Device to Device: 52.6243 GiB/s
    GPU Core Performance
    Single-precision Float: 1233.75 Gflop/s
    Double-precision Float: 88.6771 Gflop/s
    32-bit Integer: 353.296 Giop/s
    24-bit Integer: 352.664 Giop/s

    Generated: Tue Oct 23 23:07:17 2012
    Runtime Dll Version: 4.20 (6,14,11,4020)

  8. Awesome. You cover a couple of details that I'd otherwise miss. Now can I ask you to provide the full set of DX9, DX10 and DX11 benchmarks like those shown in the DIY eGPU experiences implementations post? Then you can be included as the first to submit x4 2.0 results... strike that: x4 2.0 turns out to be only about 10-15% faster than x2 2.0 due to the Thunderbolt downlink constricting the traffic. Disappointing.

  9. I am thinking about buying Thinkpad W530 (need 32 GB of RAM) and going the eGPU route. If I understand correctly, I would have to use setup 1.x every time I restart the computer in order to enjoy the benefits of eGPU because W530 has switchable graphics. In addition, hot swapping also won't work (for the same reason). Could someone please tell whether my understanding is correct or not?

    Thank you.

    If using an NVidia Fermi/Kepler eGPU and wanting the features provided by x1.2Opt (pci-e compression and internal LCD mode) rather than plain x1 2.0, then yes, you would need Setup 1.1x to disable the W530's NVidia dGPU since the bios doesn't provide that facility. That allows the eGPU to engage those features rather than the dGPU.

    If you are OK with just x1 2.0 performance and don't care for internal LCD mode, then you can boot up without using Setup 1.1x. In that case you may even want to get an AMD eGPU, which gives slightly better performance. REF: i5-3320M + GTX660/HD7870. Now, if LucidLogix Virtu worked on Series-7 mobile systems I'd be suggesting an AMD eGPU + Virtu.

    If you want to avoid using Setup 1.1x but still use an NVidia eGPU at x1.2Opt, then consider a notebook that has an iGPU only, eg: Lenovo T430/T530, HP 8470P/8570P or 6470b/6570b, Dell Latitude E6430/E6530 or Vostro 3460/3560. They tend to be cheaper than workstation-class systems like the W530 or Dell M4700. Do note there have been reports of Lenovo notebooks having very poor eGPU performance when equipped with 16GB of RAM, where reducing it to 8GB solves the issue. You could even consider a Thunderbolt-equipped notebook if you want faster eGPU performance: a Macbook or certain Lenovo T430s/S430 systems.

  10. Bootcamp Win7 on a rMBP13 auto-installed the TH05 adapter driver as long as no PCIe card was installed. Blank screen if I try to boot Win7 with any card in the slot. Nvidia doesn't allow the driver install unless it detects its own GPU plugged in.

    I cannot install Windows 8 because the game I play does not launch in Windows 8.

    Is there a way to force the Nvidia driver install in Win7? My hope is the only workarounds would be rEFInd menu entries that disable the iGPU and enable the eGPU. Once the PCI address is known, I can use mm to do that, yes? Please let me know if this definitely won't work.

    Win7 does allow a UEFI install, which is what I'd recommend you attempt for a simpler configuration. You can see my brief on the different eGPU behaviors a 13" MBP has when booting Win8 in UEFI mode versus MBR/BIOS mode here. The latter is what occurs when you Bootcamp a Mac. You can see it's trickier to get the NVidia eGPU detected and requires pci-e fixups in a preboot environment (eg: setpci in Setup 1.1x or mm in rEFInd). UEFI mode gives a plug-and-play implementation in Win8.

  11. To bad there are no mobile Virtu Win 8 Drivers. desktop drivers released like 3 weeks ago.

    Is it possible to fool the installer?

    Found a workaround. Unfortunately I didn't find a way to fool the installer. Here I had to extract the files on a desktop system and copy them over to a mobile platform. It works :) See LucidLogix Virtu MVP 2.1.220 (Mobile) 64-bit install on a Notebook

  12. Oh. I was just curious / didn't get the simplicity part.

    32/64-bit refers to your CPU's processing, and the i3 is a 64-bit one. Applications written for 64-bit technology will run faster.

    The downside: some 32-bit legacy drivers and applications won't work.

    In the past it was hard to get 64-bit applications and drivers, but that shouldn't be a problem for your system. You probably won't even notice it.

    I guess you can run Windows 32-bit without huge performance differences anyway.

    To my knowledge there shouldn't be a problem with the DSDT override running Windows 64-bit. But wait for nando to be sure.

    If TOLUD=3.5GB then you would need to do the DSDT override when running 4GB or more of RAM, regardless of whether it's 32-bit or 64-bit Windows, to be able to attach an eGPU. The only difference between 32 and 64-bit Windows here is that 32-bit can't use more than 4GB of RAM. Note too that Win7 can do a registry DSDT override, which is less complicated than Win8, which requires a pre-boot environment (eg: Setup 1.1x) to do an in-memory DSDT substitution. Microsoft is really clamping down on security in Win8.
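
    For reference, a minimal sketch of the Win7 registry-override steps, assuming Microsoft's ASL compiler (asl.exe) is installed and you've already edited the extracted DSDT to enlarge the root bridge's memory window; command names are per my recollection of the WDK asl.exe tool, so verify against its /? output. 64-bit Windows also needs test signing enabled for the override to load:

    :: dump the active DSDT to dsdt.asl for editing
    asl /tab=DSDT
    :: recompile the edited source to produce DSDT.AML
    asl dsdt.asl
    :: register the override in the Windows registry (the Win7 method)
    asl /loadtable DSDT.AML
    :: 64-bit Windows: allow the unsigned override to load
    bcdedit -set TESTSIGNING ON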

  13. @svl7, would you know what components are necessary to activate AMD's Enduro? As far as I can tell, some Clevos have a special vbios for it. They may also have DSDT code too. I'm trying to engage Enduro on an expresscard/Thunderbolt-attached eGPU - a HD7870. It just happens to be very similar to a HD7970M but with higher clocks. Wish I had a Clevo P150EM close by . . .

  14. nando what about x1.2opt vs x2.2(opt/lucid) on internal screen?

    The Dell E6230 has an expresscard slot so maxes out at x1.2Opt / x1.2. The external tests have all been done here. Internal tests have also been done for x1.2Opt since the Optimus driver allows it.

    Lucidlogix Virtu 30-day trialware doesn't work with Series-7 mobile chipsets

    x1.2 AMD internal testing is an issue since the latest LucidLogix Virtu 1.2.114 doesn't engage on my i5-3320M Dell E6230. E.g. running a Virtu-approved game/app does report the HD78xx series, but the performance results match what I get from the HD4000. I get nothing even close to timohour's Internal Screen for ATI GPUs with Virtu Driver performance results.

    It seems that version of the software was written for Sandy Bridge systems. There is a LucidLogix Virtu MVP for Mobile systems that was released via Origin (EON) but certainly no trialware download. Meaning the best I'll be able to do for internal AMD HD7870 testing is the Chung-gun/Ultramon method. The gist is starting the app on the monitor attached to the eGPU, putting it into windowed mode, then dragging it over to the LCD attached to the iGPU and noting the results.

    More expresscard vs Thunderbolt benchmarks coming . . .

    x1.2Opt vs x2.2 will be coming when I do a comprehensive set of tests on the 2012 13" Macbook + TH05 + GTX660/HD7870. Only issue there is the internal tests will be limited to 1280x800, the native resolution of the system. Unfortunately the only video out port on that Macbook is the Thunderbolt/mDP port and there is no daisy-chainable 2nd Thunderbolt port on the TH05. So I won't be able to do 1280x1024 3dmark06 "internal" testing for example. The significantly pricier Macbook retina has 2 Thunderbolt/mDP ports and even a native HDMI port so isn't nearly as restricted.

    Hey @Tech Inferno Fan,

    I was thinking of selling my Dell laptop and using the money to buy this laptop:

    LIFEBOOK AH531 - Fujitsu Technology Solutions

    What do you think about it? Is it worth buying?

    On the first page you'll find a Fujitsu AH531 implementation. However, that one by Farfavid has an NVidia dGPU which is disabled so as to not have any TOLUD issues. I recall seeing that those Fujitsus run a high TOLUD, so be prepared to do a DSDT override / DSDT substitution to get around that to run 4GB+ of RAM. That Fujitsu is about the only remaining consumer-style notebook with an expresscard slot. Still, I'd suggest grabbing a HP Probook 4530s or Dell Vostro 3350 over an AH531 since either won't have TOLUD issues. The Probook 4530s has a large user following with detailed guides on how to replace the LCD with a FHD one. See HP Probook 4530s screen upgrade 900P, 1080P.

    Can I connect my Dell Inspiron 17R SE to an external GPU? Which ports are needed to get the GPU connected?

    The first post of this thread details how an eGPU can be attached to a system's expresscard slot or, if not available, its mPCIe slot. The latter is usually used to host the wifi card.

  15. To bad there are no mobile Virtu Win 8 Drivers. desktop drivers released like 3 weeks ago.

    Is it possible to fool the installer?

    Good work in getting what appears to be the iGPU active. A big step for 15" Macbook users with either an NVidia dGPU or eGPU.

    The biggest problem I'm seeing with Virtu is that you can't buy it. It appears systemboard manufacturers buy a license that gets loaded when the user runs UEFI firmware, and Virtu then becomes a licensed version. The 30-day trial doesn't let you adjust settings, nor does it actually appear to be doing anything on my HD7870 configuration. This is bad/good news for AMD/NVidia cards respectively: AMD cards would otherwise be the superior performers on a TB x2/x4 link, while NVidia's Optimus gives a transparent internal LCD mode without the need for LucidLogix so long as you boot with an active Intel iGPU.

  16. <span "style=font-size:large">Updated: Implementation: i5-3320M 2.6 12.5" Dell E6230 + NVidia GTX660 @x1.2Opt + HD7870 @x1 2.0</span>

    Conclusion for Sandy/Ivy Bridge systems with latest GTX6xx/HD7xxx card

    • for expresscard/mPCIe systems capable of x1.2Opt: AMD has overall the better performance but the margin is minor.

    • for Thunderbolt systems or systems incapable of x1.2Opt: AMD cards > NVidia cards.

    Other factors favoring NVidia cards are CUDA and Optimus' internal LCD mode. Unfortunately AMD's equivalent Enduro is still being developed and doesn't appear to be easy to retrofit to eGPU solutions. Lucidlogix' Virtu has a 30-day trial version providing similar internal LCD rendering functionality to NVidia Optimus.

  17. from the eGPU experiences thread:

    There are some problems with the HD 4000 right now.

    I guess it should be possible to make the Intel HD work as I can run it off-screen without igdkmd64.sys problems using the latest drivers 15.288... I can't really use it anyway.

    So you'll be stuck with external-only for now, I guess.

    One other issue your 15" MBP will have is the GT650M. If you did get Optimus to use a GPU to render the image but display it to a iGPU-attached device then it would use the GT650M first. The solution to use the eGPU instead is to disable the GT650M within your EFI shell. Do that by translating the following setpci commands to your mm commands. I haven't done it because there is some bitmasking here so you'd need to check the original value and apply the mask.

    :: Disable dGPU on a Series6/7 chipset
    setpci -s 0:1.0 84.l=3:3 b0.w=10:10 19.b=0,0 3E.w=0:8 COMMAND=0:7 BASE_ADDRESS_4=0,0
    setpci -s 1:0.0 COMMAND=0:7 10.l=0,0,0,0,0,0
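
    As a rough sketch of just the first masked write (my assumed mm equivalent; the address follows the 0x00000ssbbddffrrr layout from the EFI shell help quoted in a later post below, and each remaining value:mask pair would need the same read-modify-write treatment by hand):

    # read the current dword at bus 0, device 1, function 0, register 0x84
    mm 0x0000000001000084 -PCI -w 4
    # OR the displayed value with 0x3 (setpci value 3, mask 3), then write it
    # back non-interactively, e.g. if the read returned 00000000:
    mm 0x0000000001000084 00000003 -PCI -w 4 -n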

  18. First download the latest verde (mobile) driver from Nvidia website and follow the instructions on this page:

    DIY eGPU experiences - Page 123

    Instructions on that page highlight that recent desktop drivers include the Optimus component. Meaning there is no need to download the Verde driver and apply mods.

    Is it possible to use an eGPU with a Dell notebook like the E6410, E6510 or M4500 that already has a built-in Nvidia GPU? These notebooks don't support Optimus, and I don't know if it's possible to use the Intel HD/GMA graphics instead (i.e. to disable the built-in Nvidia card and use the Intel GMA instead).

    I want to buy a used M4500 or E6410 notebook, but I'm not sure if it will work with the performance tweak for Optimus, or even whether it will work with the internal Intel GPU. If someone uses that combination it would be nice to get some more information. Awaiting some facts... thanks a lot.

    Either of those notebooks has a pci-e 1.x specced expresscard slot. I would recommend going to an M4600 or E6420 instead. Their expresscard slot is pci-e 2.0, so it's double the speed. Both can run the Optimus tweak. Just be sure to get a pci-e 2.0 capable PE4L 2.1b if going for one of these boxes.

    Additionally, I have tried running an MSI ATI 6670 for variable-elimination purposes, but I can't get that to detect at all. Not in Setup 1.x, not in Win7, no matter what hot plug combos I use. Finally, putting my system on standby, hotplugging the eGPU, and resuming has never got me anywhere. Once again, thank you.

    I'm wondering if your expresscard slot is shot. It's worth trying your eGPU gear on another Sandy/Ivy Bridge system to see if it all works. While TOLUD might leave a generous amount of pci-e space available, I found some HPs still allocate a system device right in the middle of a candidate contiguous pci-e block. There, Setup 1.1x's pci-e compaction may be able to help allocate around the problem device.

    Hi everyone. I'm looking for some advice. One week ago, I received information about such a capability in solving my problems. Of course, I've noted almost everything about eGPUs at this forum, but I'm not completely sure about the feasibility of this configuration:

    Toshiba Satellite A300: Intel Core 2 Duo T6400 2GHz, 4GB DDR2 800MHz RAM, ATI Mobility Radeon HD3650

    Thinking about buying:

    PE4L, a Gigabyte GeForce GTX 660 Ti or MSI GeForce GTX 670, Thermaltake TR2 630W

    Here is the question: what kind of problems could there be? Any advice about the configuration?

    You will see some performance improvement but will be bandwidth-handicapped. Meaning some apps will stutter rather badly, particularly DX9 ones. If your system had a 4500MHD, HD, HD3000 or HD4000 iGPU then the NVidia driver would activate pci-e compression, netting 20-333% better performance. I'd suggest looking at upgrading your system first to at least an inexpensive Sandy Bridge one (Lenovo E520, Dell Vostro 3350 or HP 4530s), which have the required iGPU and a pci-e 2.0 expresscard slot. Just be sure to mate it to a pci-e 2.0 capable PE4L 2.1b.

    Hey @Tech Inferno Fan,

    Is it possible to do an x2 setup for my GTX 550 Ti? Because the GPU doesn't use its full capabilities.

    On the AMD chipset system I don't think you can do x2. The Edge 14 might be able to, so long as it has mPCIe+expresscard or mPCIe+mPCIe ports on [port1+2], [port3+4], [port5+6] or [port7+8]. I'd recommend upgrading to a Lenovo E420 before you do that. It has a pci-e 2.0 expresscard slot, so it effectively provides the same bandwidth as x2 1.0 (x1 2.0).

  19. Yeah, I actually tried the mm commands already (guess they are active right now) but without results. That's why I'd like to modify the values while I'm in Windows, which seems quite hard to do.

    Could you provide an image of your taping? I tried it some days ago but failed :D Maybe I should try it again.

    Those images of the Intel HD4000 system tray look promising. They wouldn't appear if the HD4000 wasn't installed and functioning. Well, you can try taping your card to make only the first lane available. I did it by leaving the first 7 tracks accessible on the second half of the pci-e slot as shown below. It's just a small piece of cellophane tape that I ran on both sides to cover lanes 2 onwards.

    If x1.2Opt engages then you'll get a 3dmark06 score that's higher than your x2 2.0 results.

    [attached image: gtx660x1forced.jpg - the taped card forcing an x1 link]

  20. Nice Nando. Yeah, IMHO setting up an eGPU is actually easier in EFI... if you know how to install Windows in EFI mode, which can be a bit tricky though.

    On NBR there's a guy who says he's got Optimus running by default on his 15" MBPr. But I guess he's talking trash as he doesn't want to answer my questions.

    Had Optimus installed once but the GT 650M was always used (tray shows: used by "displayport"). I can force VGA OUT on the Intel HD and it's actually running off-screen. Received some strangely low results for 3DMark 11 while using the Intel HD (like 800p?) with the GT 650M / GTX 660 Ti showing up in the results browser.

    Found some hints on switching GPU/Display.

    Like 4 IO-Ports need to be changed while Windows is up:

    echo Switch select
    outb 0x728 1
    echo Switch display
    outb 0x710 2
    echo Switch DDC
    outb 0x740 2
    echo Power down discrete graphics
    outb 0x750 0

    This has to be coded into a program somehow because Windows blocks those modifications by default.

    Found some libraries that could actually help.

    Yup.. MBP eGPU owners should avoid Win7 and bootcamp. Go straight to Win8 installed in UEFI mode for a plug-and-play eGPU implementation.

    According to the EFI shell help, the mm command has the following syntax:



    Displays or modifies MEM/MMIO/IO/PCI/PCIE address space

    Address - Starting address
    Value - The value to write
    -MEM - Memory Address type
    -MMIO - Memory Mapped IO Address type
    -IO - IO Address type
    -PCI - PCI Configuration Space Address type
    -PCIE - PCIE Configuration Space Address type
    Address format: 0x00000ssbbddffrrr
    ss - Segment
    bb - Bus
    dd - Device
    ff - Function
    rrr - Register
    -w - Unit size accessed in bytes:
    1 - 1 byte
    2 - 2 bytes
    4 - 4 bytes
    8 - 8 bytes
    -n - Non-interactive mode
    MM Address [Value] [-w 1|2|4|8] [-MEM | -MMIO | -IO | -PCI | -PCIE] [-n]

    Meaning, you should be able to code the above in your rEFIt or rEFInd shell as something like:

    mm 0x728 1 -IO
    mm 0x710 2 -IO
    mm 0x740 2 -IO
    mm 0x750 0 -IO
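
    If those interactive writes behave as hoped, a small follow-up sketch (assuming your EFI shell can run .nsh scripts) is to collect them into a script using the -n flag so they replay without prompting before you chainload the OS; the -w 1 is my assumption to match the byte-wide outb writes:

    # gmux-switch.nsh - hypothetical helper replaying the four IO-port writes
    mm 0x728 1 -IO -w 1 -n
    mm 0x710 2 -IO -w 1 -n
    mm 0x740 2 -IO -w 1 -n
    mm 0x750 0 -IO -w 1 -n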

    GTX660 - RE5 internal DX9 var benchmarks: x1.2Opt vs x2 2.0

    [attached images: RE5 variable benchmark results at 1280x800 DX9 - x2 2.0 and x1.2Opt]

    GTX660 at x2 2.0 versus x1.2Opt. 117.0 vs 95.9. Huge difference.

    I forced a x1 link on the TH05 by cellophane taping all tracks past the seventh on the second section of the PCIe slot. The NVidia driver will see the 13" MBP's HD4000 and the x1 link, engaging pci-e compression so I get x1.2Opt. We see clearly the x2 2.0 link does significantly better than the x1.2Opt link when using the internal LCD.

    AMD likely the better performer on a x2 2.0 or x4 2.0 Thunderbolt interface

    Now, prior to discovering Optimus' pci-e compression on an x1 expresscard/mPCIe link back in 2010, an ATI/AMD card running at full duplex (x1E) was the better performer.

    Now, a Thunderbolt port runs off the Intel Northbridge, a completely different pathway. So if an AMD card negotiates full duplex speed, we can speculate that a HD7870 will perform noticeably better than its competitor, a GTX660, on a Thunderbolt x2 2.0 or x4 2.0 link due to its lower bandwidth requirements.

    The only things still favoring an NVidia card then would be the transparent internal LCD mode provided by the Optimus drivers and CUDA processing. LucidLogix Virtu can provide the former, or one could investigate whether the necessary signatures can be spoofed in the ACPI tables to enable AMD switchable graphics.

    Need an AMD card to put the speculation to rest.

  21. <span style="font-size:large">Briefly: i5-3210M 2.5 13" MBP + TH05 + GTX660@x2 2.0 Win8 DIY eGPU implementation</span>

    I managed to get my hands on a 2012 13" Macbook Pro which I paired successfully with a TH05 + GTX660 + Win8. I concur with the findings of users Shelltoe and oripash - there are two ways of installing Win8 which significantly affect the ease of eGPU use.

    The first (UEFI MODE) requires a little more skill to get Win8 loaded initially, but the eGPU functionality is plug-and-play thereafter. It's the recommended mode to use. The second (BIOS MODE) is the default Bootcamp 4.0/5.0 method, so it's likely users will find themselves in this less desirable mode. More details of both are below:

    1. UEFI MODE [recommended]

    If you install Win8 using oripash's guide http://forum.techinferno.com/diy-e-gpu-projects/2494-macbook-air-11-2012-gtx-660ti-%40-2-2-no-opt.html#post33280 and Teknotronix' http://forum.techinferno.com/diy-e-gpu-projects/2385-17-macbook-pro-late-2011-th05-win-8-setup-guide.html#post31839, then you just need to set TH05 SW1=1 (PERST# from PortRidge), SW2=2-3 (x2..x16). Boot into Win8 where the eGPU will work out of the box. There will be no error 12. It's a plug-and-play configuration.

    Unlike Teknotronix, I found no need to use a surrogate system to install the UEFI version of Win8. I could boot the MBP, hit the ALT key and select either the Win8 Pro MSDN installation DVD or a USB stick copy of it and perform the installation. The only important point being I had to select the "EFI" DVD or USB stick.

    2. BIOS MODE [avoid if possible]

    A Bootcamped MBP runs an MBR-type partition scheme. It requires a special sequence to get the eGPU detected. I found Win8 would *always* get an error 12 against the eGPU, and if I didn't get the timing right I could end up with either no eGPU on the PCI bus or, if using the same TH05 setting as UEFI mode above (SW1=1), the Macbook powering itself off when trying to boot Win8.

    The 100% successful method to get the eGPU on the PCI bus in this mode is to set SW1=3 (6.9s), SW2=2-3 (x2..x16) on the TH05, power on the eGPU+TH05, then power on the Macbook. Hit ALT during boot to get a boot selection. Watch the red PERST# LED on the TH05. When it's no longer red, the eGPU is on the PCI bus, so you can select your required OS. It's also possible to flick SW1 to SW1=2 (500ms) to hasten the process of getting PERST# to go off while at the ALT screen or Setup 1.1x screen if the delay is too long for your system. The delay turns out to be more like 30s than 6.9s.

    The most convenient fix for the error 12 that will be seen in Windows 8 is:

    1. Install Setup 1.1x onto a USB stick.

    2. Configure its \config\pci.bat to contain a replica of the same configuration UEFI boot uses for the eGPU, captured and translated below:


    echo Performing PCI allocation for 2012 MBP (BIOS) matching the UEFI settings . . .

    :: The X16 root port
    @echo -s 0:1.0 1c.w=6030 20.l=AE90A090 24.l=CDF1AEA1 > setpci.arg

    :: Underlying Bridges in order from high to low
    @echo -s 4:0.0 1c.w=5131 20.l=AB00A090 24.l=C9F1B801 >> setpci.arg
    @echo -s 5:4.0 1c.w=4131 20.l=A700A200 24.l=C5F1B801 >> setpci.arg
    @echo -s 8:0.0 04.w=7 1c.w=3131 20.l=A300A200 24.l=C1F1B801 28.l=0 30.w=0 3c.b=10 >> setpci.arg
    @echo -s 9:0.0 04.w=7 1c.w=3131 20.l=A300A200 24.l=C1F1B801 28.l=0 30.w=0 3c.b=10 >> setpci.arg

    :: The NVidia GTX660
    @echo -s a:0.0 04.w=400 0C.b=20 24.w=3F81 10.l=A2000000 14.l=B8000000 1C.l=C0000000 3C.b=10 50.b=1 88.w=140 >> setpci.arg

    setpci @setpci.arg
    set pci_written=yes
    @echo off

    3. Configure the \config\startup.bat to do the pci-e fixups and then chainload to Win8:


    :: Speed up end-to-end runtime of startup.bat using caching
    call speedup lbacache

    :: wait for eGPU to be on the PCI BUS
    call vidwait 60 10de:11c0

    :: initialize NVidia eGPU
    call vidinit -d 10de:11c0

    :: Perform the pci-e fixups
    call pci

    :: Chainload to the MBR
    call grub4dos mbr

    4. Confirm this fixes error 12 against the eGPU, as it did for me.

    Once confirmed to work, streamline this into the more convenient and faster-booting disk image install of Setup 1.1x. It's more convenient as you'll no longer need to hit ALT to boot the USB stick. Instead, you'll have a DIY eGPU Setup 1.x Win8 boot item.

    Proceed to copy your \config\pci.bat and \config\startup.bat from your USB stick to the Setup 1.1x disk image V:\config directory as mounted within Win8. The reason you can't just use the disk image install for everything is that a Macbook doesn't do the disk mapping correctly, so it can't be used within the Setup 1.1x pre-boot environment to configure the system. Instead, the USB stick is used for initial configuration and, when done, the pertinent configuration files are copied across to the disk image for read-only access.
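
    For example, a sketch assuming the USB stick mounts as U: in Win8 (substitute whatever drive letter Windows actually assigns):

    :: copy the proven configuration from the USB stick to the mounted Setup 1.1x disk image
    copy U:\config\pci.bat V:\config\
    copy U:\config\startup.bat V:\config\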

  22. How do you mean? I mean that I want to use a PE4L-EC2C configuration instead of the PE4L-PM3N method. I specifically do not want to have to remove the WiFi card from underneath the unit each time I want to use the graphics card (which would be at home each day).

    I've decided on the X220 though, due to the T420 not being available here anymore.

    Since you've earmarked the X220, have you also looked at pricing of the competitor 12.5" Dell E6220/E6230 or HP 2560P/2570P systems? Below I've shortlisted their pros/cons. Sometimes you can get a good deal on a refurb or 'as new' unit, but this varies by locale. All can accommodate an NVidia DIY eGPU using their expresscard slot with 4GB+ of RAM.

    12.5" Lenovo X220/X230

    + IPS LCD with wide viewing angles

    + dual-drive capable: mSATA + 7mm 2.5" HDD/SSD

    + 94Wh 9-cell battery option

    -- X220 has quality and build issues, with the palmrest material above the expresscard slot disintegrating and the battery rattling around

    - styling not for everyone -> same as from their 1990s Thinkpads

    - non-upgradable soldered CPU

    - short palmrest may be uncomfortable if have large hands

    - whitelisted WWAN/wifi slots preventing use of future comms standards. I believe a hacked bios exists to get around this.

    - uses Displayport rather than HDMI port

    12.5" Dell E6220/E6230

    + contemporary styling

    + traditional keyboard

    + do not whitelist their WWAN/wifi slots

    + HDMI port

    - no touchstyk

    - non-upgradable soldered CPU

    - no IPS LCD option

    - only single 7mm 2.5" SATA SSD/HDD capable

    - no 9-cell battery option, rather uses a slice battery

    12.5" HP 2560P/2570P

    + contemporary styling

    + socketed CPU -> can upgrade to faster dual or quad cores, but confirm warranty implications

    + optical drive. Can be replaced by caddy hosting 2x9mm 2.5" SATA SSDs/HDDs

    + 100Wh 9-cell battery option

    - uses Displayport rather than HDMI port

    -- heavier and thicker than the above two

    - no IPS LCD option

    - whitelisted WWAN/wifi slots preventing use of future comms standards. No hacked bios is possible.

  23. I've been following this whole eGPU adventure for a while and I just saw that there is now a Thunderbolt solution available from BPlus for (relatively) cheap. Now I just want to make sure I understand the details of the current state of eGPUs. This Thunderbolt solution is limited to only 10Gbps and so has the same performance as 2.0 x1 Optimus, correct? Furthermore, the only current way to beat the performance of these two solutions is to either buy a ridiculously expensive solution from Sonnet, Magma, etc., or to get a tricky x4 link going. Furthermore, BPlus' Thunderbolt solution has only a modest performance penalty compared to 2.0 x16.

    Is that all correct? Also, are there any estimates about when BPlus might come out with a product that taps into more of the bandwidth of Thunderbolt?

    In the commotion that was last month, the important 10-01-2012 BPlus update about a x4 2.0 TB product wasn't added. It's now been updated at Thunderbolt, USB 3.0, PCIe 2.0 eGPU update, quoted below. BPlus advised a x4 2.0 BPlus product will arrive in 2013. The BPlus TH05 with its x2 2.0 link will outperform an x1.2Opt link, as I explain in my i5-3320M + GTX660 @ x1 1.0, x1.1Opt, x1 2.0 and x1.2Opt testing here.

    10-01-2012: Masaharu on x4 2.0 Thunderbolt TH05 device

    > Lastly, is there any way you could make a prototype x4 2.0 capable

    > Thunderbolt-to-pcie adapter within a relatively short timeframe?

    > I'm guessing it would not be too difficult to extend what was done with the TH05.

    > That is, a bigger board to accomodate a presumably larger

    > Thunderbolt chip + 2 extra lanes.

    Thunderbolt x4 2.0 product itself is not difficult for us at present,

    because we already finished TH05 as you mentioned.

    However, we still hesitate to develop it because of some reasons;

    1. Thunderbolt is just started this year so politically unstable

    between Apple and intel and may take very long time

    to release logo'd products.

    2. Due to 1., we do not have many product lines.

    At present, we use "PortRidge" only, the simplest and the cheapest

    Thunderbolt controller and the external components are not so many.

    Thunderbolt (interface) - Wikipedia, the free encyclopedia

    If we make x4 products, we have to use expensive CactusRidge and

    lots of external components including Display Port ones.

    So, until issue 1. is not solved, we would like to develop

    PortRidge related products.

  24. After emailing Nando he suggested I post my question here on the forum. So hi everyone! :victorious:

    I currently have a Samsung R580-JS03-ZA notebook with Windows 8 Pro. Full specs below.

    Core i5 M430

    8GB RAM installed (the max the laptop supports)

    Intel HM55 chipset

    GeForce GT 330M graphics

    1x ExpressCard/34 slot

    I've linked a screenshot of my "Resources by connection". Would I have any issues running a DIY eGPU with the amount of RAM installed? I do not mind using an external monitor as I use one already as a main screen while at home. Do I need to post anything else for you to know? :)

    PS. The graphics card I am intending on using is an ATI 6850

    You won't be able to host a HD6850 with your system unless you use 3GB or less of RAM. The Radeon series of cards requires a free 256MB block of PCI space. Unfortunately your TOLUD is set to 0xD8000000 (3.375GB), leaving only the 128MB space between 0xD8000000 and 0xE0000000, which only NVidia cards can make use of. That's because NVidia cards can use a fragmented 128MB+64MB+32MB PCI space, whereas ATI/AMD cards need a 256MB PCI space aligned to a 256MB boundary.

    Considering too your lack of an iGPU (for Optimus), and that together with a Series-5 chipset you are limited to an x1 1.0 link, I'd suggest upgrading your notebook to a cheap Sandy Bridge one with an iGPU so it's x1.2Opt capable, and buying a PE4L 2.1b instead of a PE4H 2.4. It will be significantly faster than your Samsung.
