
Khenglish


Posts posted by Khenglish

  1. Yeah, getting a CPU upgrade would be the easiest way.

    Anyway, I get a "404 error: money not found" when checking my wallet :D

    I also love to mess with mods and stuff, but if BCLK OC is as far as I can go, then I suppose I've already had some success ^^

    I think it would be nice to see pics from other people who succeeded with this mod, to see if they also unlocked anything other than BCLK.

    For example, is there any way of unlocking RCR (reference clock ratio)?

    Edit: notification mails and avatar are working now, yay! :D

    ASRock already killed that link :(

    And series 6 chipsets killed off BSEL and VID mods. VID is now controlled through a single pin with a serial data format instead of the old multiple pins with fixed voltages. BSEL mods are also dead since regular Sandy Bridge has no option for a BCLK other than 100MHz (SB-E does, but it's on a much different socket), so even if there were still BSEL pins, there would be nothing to select between.

    And if paying for an i5 is a problem, you probably shouldn't mess around with fuses on your i3 to try to get a higher multiplier since there's a good chance you'll kill it, and you would need to buy a higher clocked CPU to compare against.

    On a side note, I do have the same i3 that you have and a 2920xm ES. I'm thinking that the 3 rows of circular contact pads are where Intel applied current to blow the fuses. There's probably a coating over them that needs to be sanded off to test resistances. I'm really not interested in sanding the 2920 since it's unlocked anyway, so I doubt I'll pursue this. It would be better to have the next speed bin up of i3 to compare against, to try to isolate the multiplier fuses.

    I wonder if CPUID extensions are also set through fuses? Intel has to set them somehow, and I don't think CPUs have flash memory on them. This would be interesting since then you could not only enable a higher multiplier, but set it as well without having to hack the BIOS and ME FW.

    I am willing to sand and test the contacts on an i3-2310m (2.1GHz) if anyone has a similar CPU such as an i3-2330m (2.2GHz) that they are also willing to sand and compare with. The idea would be to find 2 contacts that are connected on the 2330m, while the same contacts have no connection on a 2310m, meaning that the fuse between those contacts on the 2310m had been blown. I'm thinking that the 80 contacts are likely routed to VCC or GND, not each other.

    Below is what I'm guessing is the fuse setup:

    Inside the CPU die:

    The CPU reads high or low voltage from a contact pad on the gate of a transistor. This gate is connected to VCC through a pull-up resistor (resistors are doable on a CMOS process).

    Outside the CPU die:

    80 lines from the CPU die go out directly to the 80 contacts in the 3 rows. The contact pads are also individually routed to GND through their own fuse.

    How it works:

    When a fuse is not blown, the voltage drop from VCC to GND will occur across the internal pull-up resistor on the CPU die, and thus the voltage on the gate is GND. When a fuse is blown, there is no connection to GND at all, and the gate will be pulled up to VCC through the internal resistor on the CPU die. There will be no voltage drop across the resistor because CMOS gates block all current. To blow a fuse, Intel applies a voltage to the contact pad. Since there is no resistor between the contact pad, the fuse, and GND, the fuse will blow, and the CPU die is unharmed as long as the voltage isn't too far above normal VCC.

    In the scenario above, VCC and GND could be swapped, but the concept is identical.

    A more complicated method where any of the 80 contacts can be connected together I think is highly unlikely. It would be a more costly method since it requires twice the amount of contact pads, and there would still need to be a direct VCC or GND connection anyway for the CPU to read the fuse state without possibly seeing a floating voltage. This is good for us, since it means that we just have to test the resistance between the pads and VCC or GND, which can easily be found on the socket pins on the other side of the CPU package, or any other contact with a non-blown fuse.

    The main danger is the multimeter's resistance-reading circuit being too capacitive, which could blow fuses if the multimeter lead floats significantly apart from the voltage on the pad before reading the resistance. The fuse should be safe as long as the other lead is connected to GND (or VCC) first, and the multimeter is set to read resistance before contacting the pad. For complete safety, a resistor could be added in series between the multimeter and the CPU. This will make sure current doesn't flow too quickly in case the multimeter is floating significantly. An intact fuse should have no readable resistance, so you'll only see the resistance of the added resistor.
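    The pull-up/fuse scheme described above can be put into a toy model. To be clear, this is my guess at the logic, not a documented Intel design; only the 80-pad count comes from the post, and the pad numbers below are made up for illustration.

```python
# Toy model of the guessed fuse-read scheme: each pad has an internal
# pull-up to VCC on the die, and an external fuse to GND on the package.

def gate_level(fuse_blown: bool) -> int:
    """Logic level the CPU reads from one pad."""
    # Intact fuse: pad is tied to GND, the drop occurs across the internal
    # pull-up, and the gate reads low (0).
    # Blown fuse: no path to GND, so the pull-up drags the gate to VCC (1).
    return 1 if fuse_blown else 0

def read_fuse_word(blown_pads: set, n_pads: int = 80) -> int:
    """Assemble the n_pads fuse bits into one integer, pad 0 = LSB."""
    word = 0
    for pad in range(n_pads):
        word |= gate_level(pad in blown_pads) << pad
    return word

# Two hypothetical CPUs differing only in pad 5 (blown on the lower bin):
lower_bin = read_fuse_word({3, 5})
higher_bin = read_fuse_word({3})
multiplier_fuse = lower_bin ^ higher_bin  # XOR isolates the differing pad
```

    This is also why comparing two adjacent speed bins is attractive: XORing the two fuse words would isolate exactly the pads that encode the multiplier.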

    I might test out this theory on my spare i3, make up a diagram, and start a new thread looking for volunteers. Will probably be a few weeks before I do though.

    • Thumbs Up 3
  2. @Khenglish

    hey :)

    well, it was really a pain to get basic OC with this BIOS ^^

    if you need more info about the process, just let me know.

    About memory settings: I unlocked my BIOS, and it lets me raise or lower the frequency. For example, I have my RAM working at its stock frequency of 1333 MHz now; with the BCLK OC it is overclocked to 1405 MHz.

    What do you mean about laser cuts? Do you mean it is possible to remove the CPU multiplier lock somehow? I am interested in this, can you tell me more? :)

    I'm guessing that the process was the following after soldering the BIOS module back in?

    1. Dump whole BIOS chip.

    2. Have FITC separate the image into the main regions (BIOS, ME FW, descriptor, GbE if present).

    3. Make overclocking edits.

    4. Remove padding in hex editor.

    5. Either flash ME FW region only, or build a full 8MB image with the BIOS and descriptor, then flash with fptw64.

    6. Install proper XTU version.
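    A hedged sketch of what steps 1 and 5 might look like with Intel's Flash Programming Tool. The flags shown (-d to dump, -f to flash, -me to target the ME region) are from common FPT usage and can differ between FPT versions; all file names are placeholders.

```shell
# Step 1: dump the whole flash chip to a file for backup/editing.
fptw64 -d backup.bin

# Step 5, option A: flash only the modified ME FW region.
fptw64 -me -f me_region_mod.bin

# Step 5, option B: flash a rebuilt full 8MB image.
fptw64 -f full_image_mod.bin
```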

    BTM (Buffer Through Mode) means that the clock generator built into the chipset is not used, and is set up to just let clocks pass through from an external PLL. FCIM (Full Clock Integrated Mode) means that the chipset's PLL is being used to generate clocks. Don't change this option :)

    You have FITC 8? Could you link it? The only reason the older version is posted is that it's the only one I'm aware of that was leaked.

    Intel cuts/blows fuses to set CPU multiplier maximums. Many people wrongly think this is done on the processor die itself, but it's really on the packaging. I am only aware of a single laser cut ever being found and reverted, almost a decade ago on a Mobility Radeon 9800 (based on a full X800 core), to unlock 4 or even all 8 of the disabled pixel pipelines. Your only hope is to compare your CPU package to another, higher-clocked SB CPU package and hope you can find differences. You would then need to sand off the outer layer of the packaging and reconnect the blown fuse(s) with a conductive pen.

    Even if you pull off that miracle, actually applying the higher multiplier is another big problem. This is no big deal for Ivy Bridge, but Sandy Bridge requires a CPU reinitialization for a multiplier increase to take effect. A reinitialization causes the BIOS startup code to run again, so the BIOS must have a built-in way to recognize that it was called due to a reinit and not a cold boot, so that it doesn't overwrite the increased CPU multiplier (MSRs 0x199 and 0x1AD). To alter MSR 0x1AD you may need to clear a lock bit in MSR 0xCE.

    Figured I'd let you know, since removing and flashing your BIOS chip makes me think you're into more extreme mods, but raising the multiplier on an i3 would be insanely difficult. No one knows where the fuses are that set the multipliers, and you would need extensive BIOS mods to then set an increased multiplier.
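    To illustrate the MSR side of this: per the Intel SDM, MSR 0x1AD (MSR_TURBO_RATIO_LIMIT) on Sandy Bridge holds one 8-bit max turbo ratio per active-core count. The sketch below only shows the field arithmetic on a made-up sample value; actually reading or writing MSRs needs ring-0 access (a kernel driver), and writes still require the lock bit mentioned above to be clear.

```python
# MSR 0x1AD field layout (Sandy Bridge): 1-core max ratio in bits 7:0,
# 2-core in bits 15:8, 3-core in 23:16, 4-core in 31:24.

def turbo_ratios(msr_1ad: int, cores: int = 4) -> list:
    """Return [1-core, 2-core, ...] max turbo ratios from the raw MSR."""
    return [(msr_1ad >> (8 * n)) & 0xFF for n in range(cores)]

def with_raised_ratios(msr_1ad: int, new_ratio: int, cores: int = 4) -> int:
    """Build a new MSR value with every per-core limit set to new_ratio."""
    msr = msr_1ad
    for n in range(cores):
        msr &= ~(0xFF << (8 * n))    # clear the old 8-bit field
        msr |= new_ratio << (8 * n)  # write the new ratio
    return msr

# Hypothetical quad-core with 35/34/33/32 turbo bins:
sample = 0x20212223
ratios = turbo_ratios(sample)  # [35, 34, 33, 32]
```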

  3. hi all

    I successfully unlocked BCLK on my XPS 15 (L502X).

    I had to physically remove the BIOS chip, attach it to a hardware programmer, and modify the bytes in the descriptor to unlock read and write access to the descriptor and ME regions.

    ocerclock.jpg

    however, I was just able to unlock BCLK overclocking setting

    I am using XTU 3.1.201.5 (other versions refuse to start or install :o)

    Did any of you unlock any other important settings besides BCLK?

    cuz I feel like this OC is kinda small :/

    Here is my modded ME image, the one I flashed to unlock my BCLK:

    l502x_me_overclock_by_capitankasar.rar

    You can open it with FITC if you want to give it a look, in case I missed any important setting.

    Holy crap it worked! I wonder why it didn't for moral hazard?

    I don't think it was necessary to remove the padding at the start of the file. I'm pretty sure moral hazard did not, and his ME FW still worked, but just didn't allow BCLK adjustments.

    Around 5% is usually the max people can get for a Sandy Bridge BCLK overclock, but it may be possible to push higher by loosening your memory timings. I am not aware of any program that can change memory timings on Sandy Bridge laptops (CPU-Tweaker refuses to run on laptops), so the only way might be to flash the memory SPD. If you can clock higher with only 1 stick of memory, then you'll know your overclock is limited by memory. Your CPU is multiplier-locked, so a higher BCLK is all you can shoot for, unless you want to try searching for the laser cuts on your CPU package (I am only aware of 1 successful laser-cut removal mod ever; it was for the Mobility Radeon 9800).
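    For reference, a quick sketch of why the memory clock moves with BCLK, which is what makes timings the limit. The 21x multiplier (i3-2310M) and the DDR3-1333 ratio are example numbers; the ~1405 MHz result matches the figure quoted earlier in the thread.

```python
# Every clock on Sandy Bridge is derived from BCLK, so a BCLK OC scales
# the CPU and memory clocks together.

def scaled_clocks(bclk_mhz, cpu_mult, mem_ratio):
    """(CPU MHz, memory MT/s) at a given BCLK."""
    return bclk_mhz * cpu_mult, bclk_mhz * mem_ratio

stock_cpu, stock_mem = scaled_clocks(100.0, 21, 1333 / 100)  # 2100, 1333
oc_cpu, oc_mem = scaled_clocks(105.4, 21, 1333 / 100)        # ~2213, ~1405
```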

    This mod may have unlocked MSR 0x610 to allow increasing the power limit. ThrottleStop can let you check this and play with it if so. Your CPU only gets anywhere near its power limit with both cores and the IGP running full blast, so this might not help you at all.

  4. Strange, because Device Manager says otherwise (or so I am led to believe):

    [ATTACH=CONFIG]5797[/ATTACH]

    Heh, now there's some good advice. I guess I'll also have to invest in a pretty good hammer and crowbar. Also, a new laptop :P

    Yipes, point taken. I always thought that BIOS software was written with at least a minimum amount of competence, since it deals directly with the hardware level, controlling such things as memory speeds, thermals, sensors, etc.

    Yeah, TOLUD is 3GB. Like I said, Nando wrote the PCI scripts, so he's the one to talk to. Maybe your script starting at 3.5GB had something to do with selecting 36-bit compaction and is correct with 32-bit, IDK. Also, I don't even know if what I read means it did start at 3.5GB; I just saw no lower value, so I assumed it.

    The 540m should be MXM so it actually is removable, but that shouldn't be necessary.

  5. So my BIOS made me lie. Turns out that "Max TOLUD" selector in my BIOS doesn't do anything.

    I tried setting it to 2.5 GB, 2.75 GB, 3.5 GB, and Dynamic. None of these settings changes my PCI Bus value in my Win7 Device Manager: it reads BFA00000 regardless of what I set in the BIOS.

    What? A BIOS doesn't work right? I've never heard of such a thing! Wait I got that wrong, it's the other way around...

    Well nando can help you best with getting the reallocation to work. Like I said before, to me it looks like it thinks your TOLUD is 3.5GB instead of 3GB.

    If all else fails (which it shouldn't) you could always just pull the 540m. That'll make sure it doesn't take up resources :)

    Poor BIOS writing rant continued:

    It's possible to write a single BIOS that can run almost any 6 and 7 series platform (virtually identical chipsets, same voltage regulators, KBCs accept the same commands), but BIOS writers throw in tons of unnecessary proprietary code that does unnecessary hardware checks with a complete lack of failsafes.

    E.g. this one laptop I've been messing with will refuse to POST with 1866MHz memory. Could the BIOS run default clocks and timings if it doesn't like the SPD? Of course. Does it? Nope. Black screen. And that's the least of the BIOS's issues... the biggest being that although overclocking and voltage increases are locked out for hardware "safety", there is a bug which allows you to disable the CPU fan, thermal throttling, and thermal shutdown...

    I also read yesterday how 680M 6-series Clevo users' systems will refuse to POST with certain video BIOSes. The system has Optimus with an IGP, so there is no reason for it to not boot with the 680M malfunctioning. Does it disable the 680M and boot? Nope, the 680M must be pulled. I can go on and on and on and on... If I could learn a bit more machine code and get the memory libraries required to enable CPU features, maybe I'll write my own BIOS from scratch.

  6. Hi Khenglish,

    Thanks for the help. I will try this out tonight when I'm home from work.

    My question now is: how do I rebuild my devcon.txt file? Do I literally just delete its contents and then Setup 1.x will rebuild it the next time I run compaction? Or do I need to do something else?

    My notebook is the Dell XPS 15 L502x, practically same one as wicked20 who has successfully pulled off eGPU with Opt1.1 (he has results in the table on first page). I've added my laptop and eGPU specs to my signature, which will hopefully be useful in the future.

    It's good to know that I wasn't crazy in thinking that I did not have to perform a DSDT override. As I mentioned before, here is my startup.bat

    call speedup lbacache
    call iport g1 1
    call iport dGPU off
    call vidwait 60 10de:11c6
    call vidinit -d 10de:11c6
    call pci
    call grub4dos win7

    As I am using the PE4H+PM3N (soon to hopefully be PE4H+PM060a or something similar for Gen2 speeds, working out those details with HIT), I am indeed disabling my dGPU prior to running compaction.

    I am only cold booting with the eGPU powered on. In my case, I power on the eGPU (via SWEX switch) a couple seconds before I power on my notebook. I've not had any problems with Windows not detecting my eGPU, just the Error 12. My BIOS allows me to change my TOLUD value as well, for example, I can lower it to 2.5GB if necessary. I don't know if that would help things or not.

    Thanks again

    You can manually lower your TOLUD? That isn't needed, but it will make things simpler. Lower it by 256MB or more. That will allow the eGPU to be allocated on boot, then all you need the setup program for is to disable the 540m. Otherwise you'll need to run the reallocation script in addition.

    Maybe nando changed things, but you used to need to build devcon.txt from windows using the command window. Not having a proper devcon file usually made compaction freeze the system.

    Update: Yes you still need to run a command to build devcon.txt:

    http://forum.techinferno.com/diy-e-gpu-projects/2123-diy-egpu-setup-1-x.html

    But your system isn't crashing, so it looks like you don't need to make a new one. If you really can select your TOLUD in BIOS, lower it and you can forget about compaction.

    • Thumbs Up 1
  7. No one has any advice on getting rid of an Error code 12?

    My understanding of the Error 12 is that it is initially related to your TOLUD value; mine is 3GB so I thought I wouldn't need to perform a DSDT override. As well, running a Setup 1.x 36-bit compaction against my eGPU (and selecting iGPU for 32-bit) did not resolve my Error 12.

    I tried to follow the DIY eGPU Troubleshooting steps, particularly Error 12 #2, but I was having trouble figuring out exactly how and what values to use. My pci.bat looks like this



    REM r:/core/compact.exe pciend 310000000 useonly 8086:0126 import devcon.txt makebatch R:\config\pci.bat
    REM created Wed Jan 16 23:28:51 2013
    echo Performing PCI write (compact@Wed Jan 16 23:28:51 2013)

    @echo -s 1:0.0 COMMAND=0 BASE_ADDRESS_1=c BASE_ADDRESS_2=3 COMMAND=0 BASE_ADDRESS_3=fe00000c BASE_ADDRESS_4=2 > setpci.arg
    @echo -s 0:2.0 COMMAND=5 BASE_ADDRESS_2=e000000c BASE_ADDRESS_3=2 >> setpci.arg
    @echo -s 1:0.0 COMMAND=0 BASE_ADDRESS_0=fd000000 >> setpci.arg
    @echo -s 1:0.1 COMMAND=0 BASE_ADDRESS_0=febfc000 >> setpci.arg
    @echo -s 0:2.0 COMMAND=5 BASE_ADDRESS_0=fcc00004 BASE_ADDRESS_1=0 >> setpci.arg
    @echo -s 0:1c.0 MEMORY_BASE=fd00 MEMORY_LIMIT=feb0 PREF_MEMORY_BASE=fe01 PREF_BASE_UPPER32=2 PREF_MEMORY_LIMIT=ff1 PREF_LIMIT_UPPER32=3 >> setpci.arg
    @echo -s 0:1c.1 MEMORY_BASE=f1b0 MEMORY_LIMIT=f1b0 PREF_MEMORY_BASE=fff1 PREF_BASE_UPPER32=0 PREF_MEMORY_LIMIT=1 PREF_LIMIT_UPPER32=0 >> setpci.arg
    @echo -s 0:1c.3 MEMORY_BASE=f1a0 MEMORY_LIMIT=f1a0 PREF_MEMORY_BASE=fff1 PREF_BASE_UPPER32=0 PREF_MEMORY_LIMIT=1 PREF_LIMIT_UPPER32=0 >> setpci.arg
    @echo -s 0:1c.4 MEMORY_BASE=f190 MEMORY_LIMIT=f190 PREF_MEMORY_BASE=fff1 PREF_BASE_UPPER32=0 PREF_MEMORY_LIMIT=1 PREF_LIMIT_UPPER32=0 >> setpci.arg
    @echo -s 0:1c.5 MEMORY_BASE=fff0 MEMORY_LIMIT=0 PREF_MEMORY_BASE=f181 PREF_BASE_UPPER32=0 PREF_MEMORY_LIMIT=f181 PREF_LIMIT_UPPER32=0 >> setpci.arg
    @echo -s 0:1.0 MEMORY_BASE=f190 MEMORY_LIMIT=feb0 PREF_MEMORY_BASE=f181 PREF_BASE_UPPER32=0 PREF_MEMORY_LIMIT=ff1 PREF_LIMIT_UPPER32=3 >> setpci.arg
    @echo -s 1:0.0 COMMAND=0 COMMAND=0 >> setpci.arg
    @echo -s 0:2.0 COMMAND=7 >> setpci.arg
    @echo -s 1:0.0 COMMAND=0 >> setpci.arg
    @echo -s 1:0.1 COMMAND=0 >> setpci.arg
    @echo -s 0:2.0 COMMAND=7 >> setpci.arg

    setpci @setpci.arg
    set pci_written=yes

    My last two echo lines contain nothing like what nando's troubleshooting demo provides, hence why I'm stuck.

    Regarding the different ways of compaction, I've tried them all:

    • 32-bit, against iGPU, eGPU, iGPU+eGPU (all 3 will fail, giving me the "set another method and try again" message).
    • 32-bitA, against iGPU, eGPU, iGPU + eGPU (all 3 will also fail, same error as above)
    • 36-bit, against iGPU (then 32-bit against None, iGPU, eGPU, iGPU+eGPU), eGPU (tried same 32-bit settings), and finally iGPU+eGPU (tried same 32-bit settings).

    I decided to stick with the 36-bit iGPU+eGPU and the 32-bit "None". None of the 32-bit methods worked, but all 36-bit selections result in Device Manager giving me the Error 12. Am I making the wrong choices for compaction?

    My specs are:

    i7-2760QM

    8GB RAM

    dGPU: Nvidia GT 540M

    eGPU: GTX 650 Ti

    PSU: 650W, 12V1 and 12V2 @24A, 5V @22A

    I've confirmed that the GPU is not faulty by using it in another desktop system.

    I'd really appreciate if someone could please help me out here.

    I might be reading it wrong, but it looks like the pci.bat thinks your TOLUD is 3.5GB (0xe000000c), not 3GB (0xc0000000). I'm thinking the TOLUD really isn't 3.5GB, since if it were, it would be very difficult for your system to run the iGPU and dGPU along with other devices (only 32MB would be left). I'm not sure if pci.bat gets the TOLUD from devcon.txt or if it looks it up. I suggest rebuilding the devcon.txt file, then rebuilding pci.bat.
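    For anyone following along, here's the address-space arithmetic behind that 32MB figure. The iGPU aperture and dGPU allocation sizes below are illustrative assumptions, not values read from this system.

```python
# Everything mapped below 4 GB (iGPU aperture, dGPU BARs, other MMIO)
# must fit between TOLUD and 4 GB.

GiB = 1024**3
MiB = 1024**2

def mmio_window(tolud: int) -> int:
    """Bytes of 32-bit MMIO space left between TOLUD and 4 GB."""
    return 4 * GiB - tolud

# 3 GB TOLUD leaves a 1 GB window; 3.5 GB leaves only 512 MB.
print(mmio_window(3 * GiB) // MiB)          # 1024
print(mmio_window(int(3.5 * GiB)) // MiB)   # 512

# With e.g. a 256 MB iGPU aperture and a 224 MB dGPU allocation, a
# 512 MB window leaves only 32 MB for everything else:
print((mmio_window(int(3.5 * GiB)) - 256 * MiB - 224 * MiB) // MiB)  # 32
```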

    A DSDT override is unnecessary though, since unless you're using Thunderbolt you need to disable the dGPU anyway for Optimus compression to work. This will make room for the 650. You'll then need to allocate the 650 yourself, since the BIOS failed to do so on boot.

    You did try cold booting with the eGPU on, right? BIOSes that aren't poorly written (which are the minority) will adjust the TOLUD to allocate everything they see on startup. I don't know what your laptop is, so I don't know if yours does this or not.

    • Thumbs Up 1
  8. I was using a 306.xx driver with my eGPU setup (Dell E5520 - PE4H 2.4 - GTX 580) and I was able to enable Optimus by simply pressing F8 while Win7 was loading, plugging in the eGPU, then continuing to boot.

    Lately I got another laptop (Dell E4300), so I tried the same eGPU hardware on it but with the latest (310.xx) driver. I cannot enable Optimus with any of the 3 options: a) boot the laptop with the eGPU connected; b) plug in the eGPU after the BIOS check but before Win7 loads; c) plug in the eGPU after Win7 has loaded. All three options work in that I can use a monitor connected to the eGPU to utilize the Nvidia card, but the internal screen can only use the integrated GPU.

    Then I changed to the older 306.xx driver, and now I get Optimus (the internal screen can utilize the Nvidia GPU and I can choose which GPU in the Nvidia driver panel).

    Has anyone tried the latest driver and noticed the same issue?

    I had no problems with the 310.70 or 310.90 drivers. Both internal and external modes worked fine. Well, no new problems. It seems that the resistor I knocked off the card a month ago actually did matter...

    One thing I found is that sometimes the "preferred rendering" option in the driver is absent after some driver installations. I think this only ever happens if you don't do "clean install". I did clean install when installing both 310 drivers.

  9. Hello everyone. I have a Compaq Presario CQ62-219WM laptop with an upgraded CPU (T9300) and RAM (from 2GB to 3GB), and I have an eGPU setup running on my WiFi port using Setup 1.1x.

    I am using an Nvidia EVGA GTX 650 Ti graphics card. I have everything hooked up, all power, all cables, and have tried using both a DVI-I video cable to my monitor and VGA (via a DVI-D to VGA adapter), and the graphics card has been detected and installed on my PC. The only problem is that my graphics card will not put out any graphics on my desktop monitor: it is a blank screen. My laptop is still producing graphics on its screen as well. My VGA and DVI-I cables are both fine; I have tested them on other computers, so they can't be the problem. Could someone please tell me what some possible causes are? Could it be a problem with my Setup 1.x configuration? I disable the dGPU, then reassign buses (to make WiFi work; my computer detects it when I start Setup 1.x, but not if I go straight to Windows), and perform compaction on my iGPU and eGPU, then chainload into Windows 7.

    - - - Updated - - -

    Just to clarify, you were able to install a display driver for the 650, and the device manager lists no error codes for the 650? Also, your laptop does have the optional HD 5430, correct?

    Your internal screen is still being driven by the intel 4500, which is what you want. Verify that the ATI card is disabled by checking the device manager for it. It should not be listed.

    If the driver did install and there are no error codes for the 650 in the device manager, you did enable the second display in the screen resolution menu right? Some people in the past have actually forgotten to do that.

  10. Possible, but don't get your hopes up.

    It looks like 1 or 2 will be VCC and GND, so you can find those somewhere else and solder directly onto the chip leads. For the others follow the trace if possible, then solder a wire to whatever component it connects to.

    If a trace starts out but then disappears down into the PCB, then you can remove the outer PCB coating to solder to the trace before it descends. I would suggest sanding if you can fit sandpaper in there. The coating is thin and will be removed fast, so I suggest no lower than 1k grit. Sanding is much more controllable than scraping. If you try scraping you'll almost certainly end up cutting the trace when trying to expose enough of it. Don't try melting the coating off with a soldering iron, I've tried that and it doesn't work. Only try this idea as a last resort. Even if you sand there's a fairly high chance of cutting through the trace.

    For traces you can't see at all, look up what the respective leads do on the BIOS chip to try to figure out where they go.

    On a side note, there is way too little pressure on that GPU die. By way too little I mean absolutely none. The paste should not completely cover the die like that after heatsink removal. With good contact you should only see a very thin haze of paste, with the entire die visible. Temps will drop by over 10C with proper pressure.

    • Thumbs Up 3
    The fan control in setPLL works with the HP 2530p and probably other Montevina (last C2D) and Calpella (1st gen i-core) HP EliteBook units. No Dells, unfortunately. However, try the FN+15324 trick explained at diefer.de - The 15324 trick and one happy E4300 owner. I can see too that i8kfangui doesn't work based on your comments at E6520 Owner's Thread - Page 64, so SpeedFan looks to be it.

    Already tried that trick. They removed it with 6 series :(

  12. The fan control looks interesting and I'll try it out. Unless 2 cores are in C6 the system will reduce my multiplier by 1 at 70C, and again at 80C. The BIOS doesn't set the fan to full blast until 85C, so fan control would be nice. With speedfan I can set the fan, but in less than half a second the BIOS overrides my setting, slowing the fan back down again. If I hit the keys really fast I can kinda overpower the BIOS, but that isn't very practical.

  13. Interesting. So performance still improved with x1E, but the PCIe test reports the same bandwidth for both. Maybe x1E links never improved bandwidth on any system, and the driver just managed resources better when it thought it had an x2 link? Either way, considering how much setting x1E improved performance, and how low your performance is, it looks like the recent driver enhancements do not apply to older chipsets.

    All startup.bat does is automate the changes you make in the eGPU setup menu. I'm not sure exactly how it works in recent versions; it may automatically have your settings added to it, or you may have to add them manually.

    Port 3 may have been missing because the BIOS POSTs and loads the setup program before the 15s timer is up, so the BIOS disables the port because it did not see any device on it. The main reason the delay is available is because some systems will not boot with an eGPU connected and on. If I remember correctly, the x1E script (which is identical to the x2 script) should also enable port 3 if it is disabled.

    On a side note, Futuremark says you're running 1GB and 2GB memory sticks. I don't think Core 2s can run dual channel with mismatched memory, so you're running at 62% (667/1067) of the memory bandwidth your system is capable of. You might want to pick up a cheap 2GB stick to replace the 1GB. However, since your system is pretty dated and it appears that ATI cards run better on newer chipsets, it might be better to just buy a new laptop instead.

  14. Is this what I should be looking at in AIDA-64? Because it says x1?!

    EDIT1:

    This is how it looks before going to sleep, powering on the eGPU, and powering the laptop back on.

    EDIT2:

    If I just power on the eGPU while in Windows, this is how AIDA-64 looks and this is how Device Manager looks.

    Your graphics card has an ! next to it in the device manager. This means that there is an error and it is not operating properly.

  15. Hey Nando, I was looking at your recent GTX 660 benchmarks and comparing them to last year's GTX 560 Ti benchmarks. I was focusing on DX9 (since I'm forced to use WinXP) and I've noticed that, according to your tests, the new GTX 6xx series is more bandwidth-limited than the GTX 5xx series. In fact, on a desktop the new GTX 660 is 20% faster in DX9 than the GTX 560 Ti, while with an eGPU internal-LCD setup it's equal if not slower (in RE5 it is noticeably slower).

    Does this mean that for eGPU purposes in DX9 mode, x1.2Opt, and internal LCD, I'd be better off with a GTX 5xx GPU? Am I wrong in my assumptions?

    Thanks

    Edit: my notebook has a Sandy Bridge Pentium CPU (B960) and no dedicated graphics. Will Optimus engage with the integrated HD Graphics (it's very similar to the HD 2000, just with a few minor features deactivated)?

    The only test where I see the 560 Ti beat the 660 is the RE5 internal-LCD results. On the internal LCD, the 304+ series drivers perform better in fullscreen mode than older drivers, but they perform worse in windowed mode. Nando may have run RE5 in windowed mode. Also, the Nvidia driver will sometimes invoke strange fps caps in internal LCD mode (for me it locks 1080p to 50fps max, 1280x800 to 100fps max). This behavior has varied from driver to driver. I think the driver is trying to guess at how much bandwidth it should reserve for the framebuffer, which does not need to be transferred when using an external LCD. Sometimes it reserves too much, and other times too little. When it reserves too much you see generally worse performance, and when it reserves too little, you see hard fps caps.

    I do not know if optimus will engage on an XP system. The fact that the iGPU is not labeled as an HDx000 concerns me as well. The Nvidia driver might not recognize it and not engage compression.

    Your CPU is very low end, and I would suggest that you get an i5 or better to replace it. Even if you can't use an eGPU, the HD 3000 is much faster than your watered-down HD 2000.

    Update:

    Just ran RE5 internally on my system full screen mode. FPS average was low 80s. Fps only ever varied between 78 and 85 fps:

    http://i.imgur.com/xhBsm.jpg

    Also ran at 1920x1080, and fps ranged from 47 to 50.

    I used to see a hard fps cap. This is more of a softer cap than what I saw on older drivers.

    • Thumbs Up 1
  16. Here are the tests for RE5; as before, they are the same for x1 and x1E.

    Rest of tests will follow...

    Put a hold on the additional testing. Something is very wrong with your setup. Your performance is less than 1/4th of what is expected of your hardware on x1.1 with the old half duplex problem. Are you dragging the RE5 window over to the internal LCD? If so please don't, since there are many more results to compare against when using an external monitor.

    Also the full variable test is usually run.

    • Thumbs Up 1
  17. Did a 3D Mark 11 test with laptop screen and scored 6791

    Score on external screen was 6956.

    Is it possible to lose so little performance? :/

    Frame rates in 3DMark 11 are pretty low, so there is not much extra traffic over the PCI-E link. Internal screen performance gets worse relative to external monitor performance as fps increases.

  18. Hey @Tech Inferno Fan, do you mind helping me out with the power consumption questions? I've purchased a 650 Ti (94W max) and the PE4L-EC2C 2.1b, but am not sure if I can get away with getting just a 120W 12V@10A PSU. Hope you can shed some light on my previous post whenever you see this. Thanks!

    http://forum.techinferno.com/diy-e-gpu-projects/2109-diy-egpu-experiences-%5Bversion-2-0%5D-48.html#post35637

    Usually power supplies are rated at peak power output, not sustained output. The 94W needs to be sustainable, and preferably you should have some breathing room; the 120W unit is cutting it close. Even if it does work, the voltage will drop below 12V, putting more strain on the graphics card's voltage regulators.
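    A rough sketch of the numbers behind "cutting it close". Note the headroom figure assumes the full 120W (12V @ 10A) rating is sustainable; if it's a peak rating, as is common, the real margin is smaller.

```python
# Current a 94 W card draws from a 12 V rail, versus a 12 V @ 10 A supply.

def rail_current(watts, volts=12.0):
    """Amps drawn from the rail at a given power."""
    return watts / volts

card_amps = rail_current(94.0)                   # ~7.83 A sustained
rail_amps = 10.0
headroom = (rail_amps - card_amps) / rail_amps   # ~22% at the rated limit
```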

  19. Hey folks,

    I apologize in advance for the inexperience that should be evident in this post. I've been trying to follow the various eGPU threads in this forum and on NBR, and the information is just so spread out, and there are a lot of questions that aren't answered in FAQs, so I wanted to make a brief post before I put down a bunch of money, to make sure that my understanding of everything is correct.

    My overall goal is to get rid of my desktop computer (Shuttle XPC w/ Core i7 880) and my laptop (13" MBA 2011) and have a single system that I can use for work (I'm in IT technical sales) and play (I do a lot of PC gaming at home). After lots of research, I think I've figured out that unless I wait for Haswell the best machine I can get is the Lenovo Thinkpad T430s. It's less than 4 pounds, has an optical drive, has good docking station options, and has both ExpressCard and Thunderbolt for eGPU capability. It's also "business appropriate" so I can bring it to customer presentations and such without raising suspicion.

    My plan is to take my AMD Radeon 6850 out of my desktop, connect it to a TH05 (still not sure what to do around enclosure/PSU stuff but I'll hopefully get that all figured out), and use that for my eGPU with the Thunderbolt port. Then I can come home, plop the laptop in the dock, connect the Thunderbolt cable, and be off to the races.

    I just have a couple questions regarding the TH05 that I'm not sure have been answered. I've read the threads about doing this on a Mac, and I've also read threads about using ExpressCard in older Thinkpads, but here's what I'm wondering:

    1. Will the solution be plug-and-play? Can I bring the laptop home, put it in the dock, bring it out of standby, and suddenly the eGPU works? Or will I need to reboot? There was some mention of it being like this on Macs with EFI Windows 8, but what about BIOS Windows 7 or 8?
    2. Since it's Thunderbolt, will I encounter any TOLUD/DSDT issues? I've seen some other folks mention that the T430s has a lower TOLUD, and I'd like to avoid the reboots and such associated with Setup 1.1x as much as possible.
    3. I know the TH05 is only 2x 2.0 lanes, but since it has higher bandwidth, does that mean the AMD cards will perform similarly to Nvidia cards with 1.2Opt?

    Appreciate the clarifications on any of this. I'm hoping that if everything looks good I'll move forward with getting the various parts in January and I'll be sure to keep everyone posted.

    Look at 7870 performance below:

    http://forum.techinferno.com/diy-e-gpu-projects/2109-diy-egpu-experiences-%5Bversion-2-0%5D-20.html#post30509

    The 7870 outperforms the similarly powerful 660 on just an x1.2 link, and that's with Optimus running. In the old x2 benchmarks, an x2.1 ATI card had a hard time against an equally powerful x1.1Opt Nvidia card, let alone an x1.2Opt Nvidia card. Judging by Nando's results above, it looks like AMD recently added PCI-E compression. I know little about this except for what I'm seeing in benchmark results, and Nando could probably add more since he has access to a 7870. Maybe only the 7k series benefits while your 6k would not, I don't know. Either way, x2.2 from Thunderbolt is a decent chunk of bandwidth, and will perform well whether or not there is link compression.

    In regards to plug and play and TOLUD, I am not aware of any reason for Thunderbolt to be more problematic than a straight PCI-E connection through mPCIe or ExpressCard.

    • Thumbs Up 1