
Leaderboard

Popular Content

Showing content with the highest reputation on 07/23/16 in Posts

  1. Thanks, dude. I suspect NVIDIA is officially working on a hot-pluggable solution, and all those non-proprietary enclosures will be released once the driver is ready: even with my setup I'm already able to hot-plug my GTX970 in and out without crashing the system, and the GPU activity monitor even shows a 'disconnect' button. I hope Optimus support is somewhere on their schedule too; hot-plugging already works, as long as Optimus is not involved.
    2 points
  2. 0x16F set to 0x1. Make sure to keep in mind the ranges posted in badbadbad's guide:

     GT OverClocking Frequency (0x170): 0x00-0xFF (8-bit value from 0-255) [iGPU]; resulting clock = decimal value x 50 MHz (example: 34 x 50 MHz = 1700 MHz)
     GT Overclocking Voltage (0x170): 0x00-0xFF (8-bit value from 0-255) [iGPU]; +0.01 V per step from 0x00 to 0xFF (example: 0x05 = +0.05 V)

     So if you'd like to try a relatively safe clock, try setup_var 0x170 0x1c, which should set 1400 MHz (0x1c = 28 decimal; a consolidated sketch follows at the end of this post). Setting it to 0x17f most likely registers as 0x7f, which is 127 decimal and way out of bounds. As for voltage, you gave it +0.37 V, which should be enough to get you well over 1500 MHz. Check the table below:

     GT OC Frequency Value | Resulting Frequency (GPU-Z log) | GT OC Voltage Value | Voltage Increment (speculation) | Memory Frequency (Dual Channel) | Highest GPU Temperature (GPU-Z log) | Highest GPU Power (GPU-Z log) | Furmark 720p Benchmark
     unchanged | 1100 MHz | unchanged | +0.00 V | 1600 MHz | - | 18.1 W | 371
     unchanged | 1250 MHz | unchanged | +0.00 V | 2133 MHz | - | 17.9 W | 515
     0x1a | 1300 MHz | unchanged | +0.00 V | 2133 MHz | - | 19.5 W | 517
     0x1b | 1350 MHz | unchanged | +0.00 V | 2133 MHz | - | 21.1 W | 532
     0x1c | 1400 MHz | unchanged | +0.00 V | 2133 MHz | 81 C | 22.2 W | 549
     0x1d | 1450 MHz | 0x05 | +0.05 V | 2133 MHz | 83 C | 23.5 W | 553
     0x1e | 1500 MHz | 0x15 | +0.21 V | 2133 MHz | 84 C | 27.5 W | 563
     0x1f | 1550 MHz | 0x25 | +0.37 V | 2133 MHz | 87 C | 31.6 W | 589
     0x20 | 1600 MHz | 0x40 | +0.65 V | 2133 MHz | 93 C | 37.4 W | 639
     0x21 | 1650 MHz | 0x50 | +0.80 V | 2133 MHz | 102 C | - | 717

     Anyway, use ThrottleStop's TPL option to enable Intel Power Balance and give 0 to the CPU and 31 to the GPU in order to test overclocks. It's going to get pretty hot pretty fast.
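     For clarity, here is what the suggested change looks like from the grub EFI shell these setup_var commands run in (a minimal sketch assuming the variable offsets quoted above from badbadbad's guide; the # lines are annotations, not shell syntax):

         setup_var 0x16F 0x1    # the enable switch mentioned at the top of this post
         setup_var 0x170 0x1c   # GT frequency: 0x1c = 28 decimal, 28 x 50 MHz = 1400 MHz

     Reboot afterwards and verify the new clock in a GPU-Z log before raising the value further.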
    2 points
  3. Due to a stupid accident on my part, I acquired a 980M with a chunk knocked out of the core. Not wanting to scrap a perfectly good top-end PCB for parts, I decided to replace the core. You can see the gouge in the core to the left of the TFC918.01W writing, near the left edge of the die. First I had to get the dead core off.

     With no sellers on eBay offering GM204 cores, my only option was to buy a full card off eBay. With no mobile cards under $500, I had to get a desktop card. And with this much effort involved in the repair, of course I got a 980 instead of a 970. Below is the dead 980 I got off eBay. You can see that for some reason someone removed a bunch of components between the core and the PCI-E slot; I have no idea why anyone would do this. I tried the card and it gave error 43. The PCB bend seemed too slight to kill the card, so those missing components had to be the cause. GPUs can also be dead because someone removed or installed a heatsink wrong and broke a corner off the core, so buying cards for their cores on eBay is a gamble. This core is not even scratched.

     Preheating the card prior to high heat to pull the core... and core pulled. It survived the pull. Next is the 980 core on the left, cleaned of solder; on the right is the original 980M core. Next I need to reball the 980 core, and lastly put it on the card. I am waiting for the BGA stencil to arrive from China; it still has not cleared US customs: https://tools.usps.com/go/TrackConfirmAction?tLabels=LS022957368CN When that shows up, expect the core to be on the card in 1-2 days.

     Some potential issues with this mod, besides me physically messing up: I believe that starting with Maxwell, NVIDIA began flashing the core configuration onto the cores themselves, like Intel does with CPUID. I believe this because I found laser cuts on a GK104 for a 680M, but could not find any on two GM204 cores. In addition, Clyde figured out device IDs on the 680M and K5000M: they are set by resistor values on the PCB. The 980M has the same resistor configuration as the 680M for the lowest nibble of the device ID (0x13D7), but all of those resistors are absent, and filling them in does nothing. Resistors do exist for the 3 and D in the device ID. Flashing a 970M vBIOS onto my 980M did not change the device ID or core configuration. If this data is not stored on the PCB through straps or in the vBIOS, then it must be stored on the GPU core. So I expect the card with the 980 core to report its device ID as 0x13D0: the first 12 bits pulled from the PCB, and the last 4 from the core. 0x13D0 does not exist. I may be able to add it to the .inf (a sketch of what that entry might look like follows below), or I may have to change the ID on the board. With the ID's final 0 hardset by the core, I can only change the device ID to 0x13C0, matching that of a desktop 980.

     An additional issue is that the core may not fully enable. Clyde put a 680 core on a K5000M and never got it to unlock to 1536 CUDA cores; we never figured out why. Lastly, there was very tough glue holding the 980M core on, and when removing it I scraped some of the memory PCB traces. I checked with a multimeter and the traces are still intact, but if they were significantly damaged it could be problematic for memory stability. I think they are OK, though, just exposed. Given Clyde's lack of success in getting his 680 core to fully unlock, I am concerned I might not get 2048 CUDA cores. Even if I don't, I should still have a very good chip: desktop chips are better binned than mobile chips (most 980s are over 80% ASIC quality, while most 980Ms are below 70%). In addition, this 980 is a Galax 980 Hall of Fame, which are supposedly binned out of the 980 chips. Having a 90%+ ASIC would be great; the mid-60s chips we get in the 980M suck tons of power.

     I want to give special thanks to Mr. Fox. This card was originally his. He sent me one card to mod and one to repair; I repaired the broken one and broke the working one. The broken one is the one I've been modding.
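     Side note on the .inf route: adding an unknown device ID to the NVIDIA driver INF generally means adding a device line and a matching string entry. A hypothetical sketch for the expected 0x13D0 ID (the install-section name here is purely illustrative; real section names vary by driver release, so treat this as an assumption, not a tested entry):

         ; hypothetical device line for the expected hybrid ID
         %NVIDIA_DEV.13D0% = Section001, PCI\VEN_10DE&DEV_13D0

         [Strings]
         NVIDIA_DEV.13D0 = "NVIDIA GeForce GTX 980M (980 core swap)"

     A modified INF no longer matches the driver package's signature, which is why test signing (see the bcdedit command further down this page) comes up in these mods.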
    1 point
  4. Hi all, after I saw this result, http://hwbot.org/submission/2830783_0.0_cpu_frequency_core_i7_4700mq_4550_mhz , I got interested in replicating it with a similar method, and decided to share my steps to score higher than a desktop 4770K at 4.4 GHz (according to Cinebench ;)). Intel's stock microcode has a turbo multiplier bin glitch that allows unlimited multiplier increases. I used Prema's BIOS and removed a CPU microcode update so the CPU runs the stock, glitched microcode. I will share the BIOS file; use it at your own risk, and only if you know what you are doing.

     1) Download or dump your BIOS; if your BIOS is AMI (my case), use AFUWINx64.
     2) Get AMI Aptio UEFI MMTool v5.0.0.7, UEFITool, and HxD (or your favorite hex editor).
     3) Open your BIOS image with UEFITool, then File > Search. In our case (Haswell) the CPUID is 0x306C3, so enter C3 06 03 in the Hex pattern dialog and click OK.
     4) Of the four hex pattern results, double-click the first; a structure item in the main dialog will be highlighted. Right-click > Extract as-is into a folder (be neat and organized or you will mess things up).
     5) Do the same with the third result, saving it under a different name from the first.
     6) In the folder holding the two files from steps 4 and 5, open the first one with HxD (any hex editor) and search for the pattern shown in the screenshot; be sure to choose Datatype: Hex-values, then hit Search.
     7) Press F3 to find again until you reach the highlighted pattern.
     8) In it, (1) indicates the microcode version, 17 in this case; we want 00 (CPU stock). (2) is the platform ID (the search context we landed on). (3) is the microcode length, 0x5000 in my case, stored byte-reversed; this matters for knowing where the microcode ends so you remove all of it.
     9) Put the cursor at the beginning of the highlighted text/microcode (01), then right-click > Select Block > Length > 5000 (or whatever it is in your case).
     10) Delete the highlighted block from step 9, then save the file.
     11) Repeat steps 6 to 10 on the second file, then save.
     12) Go back to UEFITool, double-click the first search result as in step 4, right-click the highlighted structure in the main dialog > Replace as-is, and choose the FIRST file you edited in HxD.
     13) Double-click the third search result and follow step 12 with the SECOND file you edited in HxD.
     14) You will see "Rebuild" in the Action row. File > Save image as > P15SM04.PM2 in my case (any name works as long as your flashing tool recognizes it).
     15) Open AMI Aptio MMTool > Load Image > your modified ROM > CPU Patch tab, and verify there is no 06C3 left in the CPU ID column.
     16) Be brave and flash your BIOS.

     Windows mod to stop the microcode auto-updating on boot (a consolidated sketch follows at the end of this post):
     - Click Start, type CMD in the search box, right-click CMD and choose Run as Administrator.
     - In the Command Prompt window, change to the directory where the file is located using the CD command, e.g. cd \windows\system32
     - Use the DEL command to delete the offending file: DEL mcupdate_GenuineIntel.dll (back it up first). Done!

     Overclocking:
     1) Check the CPU microcode ID: download AIDA64, open AIDA64 > Motherboard > CPUID and look at IA Brand ID; it should be 00h.
     2) Download the latest beta ThrottleStop (not the stable release); in my case 8.10b2.
     3) Make sure you don't have XTU installed or running (especially at startup), or it will reset any changes made in ThrottleStop.
     4) Open ThrottleStop, click FIVR, look at "[checkbox] Overclock [DIALOG] Max" and note it down.
     5) Now here is the magic! Close FIVR and open it again: the [DIALOG] Max value should increase by 2 (up to an 80x cap, ~8 GHz) every time you open and close FIVR, as long as you increase one of the cores. LOL.
     6) After your final changes in FIVR, raise "Set Multiplier" to your maximum (I set mine to 42x on all cores, so I set the multiplier to 42 as well; note the voltage ID readout is messed up, ignore it).
     7) Increase the voltage in FIVR for stability, testing with three Cinebench runs rather than Prime95, since Prime95 stresses the FPU, which drives up heat and TDP instead.
     8) Unlock maximum TDP and turbo wattage in TPL; in my case: [screenshot]
     9) Done! Changes persist as long as you don't Save and Exit from the BIOS. Here is my Cinebench result with thermal throttling (from 4.3 down to 3.9 GHz), room temp 25-27 C XD (no. 1 at 4.2 GHz, no. 7 is stock latest microcode).

     UPDATE: OCed RAM from 1600 to 1866, [email protected], +200 mV adaptive vcore. I'm also getting 852 at 4.5 GHz with this adaptive-voltage-like method.

     Note: You can keep the maximum turbo multiplier bin with the latest microcode by setting it in ThrottleStop first (important: no crashes while testing, make sure it's stable), then flashing the latest microcode for its bug fixes (more stable on my side at 45x with only +230 mV; it depends on your CPU, and an i7-4800 or better will need less voltage). You will still be able to set the bin high (up to 80x) as long as you don't crash. Post your results and I'll copy them here.

     Tips: I highly recommend lapping the heatsink and using liquid metal thermal paste or any decent TP before doing this (I used Coollaboratory Liquid Ultra). If your CPU throttles no matter what, try decreasing the dynamic voltage in FIVR, watch the maximum package power reported while stressing, and reduce the value by 10% to avoid rapid throttling (happens with bad TP). Do not attempt this OC if you care about long-term wear and tear (I expect two more years if I stress the CPU an hour every day, which I never do :P).

     Happy overclocking, and don't melt your laptop. Thanks to Intel, if they leaked this on purpose; kinda futureproofed my machine XD

     P150SM 1.03.05 modded BIOS (at your own risk): MOD EDIT: link removed, please use a clean BIOS base because of legal implications with Intel.
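     The Windows-side file removal above, consolidated as a sketch (run from an elevated Command Prompt; the copy line is my addition for the "back it up first" note, and on some installs you may additionally need takeown/icacls to gain delete permission in System32):

         cd \windows\system32
         :: keep a backup copy of the microcode updater before deleting it
         copy mcupdate_GenuineIntel.dll "%USERPROFILE%\mcupdate_GenuineIntel.dll.bak"
         :: remove the file so Windows stops re-applying the Intel microcode update at boot
         del mcupdate_GenuineIntel.dll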
    1 point
  5. This is probably the first trial online involving a TB3-to-TB adapter.

     Update, noon July 23rd 2016: works seamlessly with an external monitor. However, whenever I select Duplicate in projection mode, the rendering is carried out on the iGPU/dGPU (if you do not disable it).
     Update, morning July 23rd 2016: Turns out Optimus is only working for Furmark, i.e. no game would run on the GTX970, nor would 3DMark. It seems an external monitor is still a necessary fix.

     What's working:
     - system recognizes the GPU
     - driver installed, no problem on an external screen
     - [Partially] Optimus working; just don't disable the GTX960M in Device Manager. Only works for Furmark.

     Background: In the spring, I came to the States for a one-semester exchange. I obviously couldn't get myself a full-size gaming rig, so I chose to build an eGPU with my MBPr13 2015. I got my AkiTio Thunder2 and GTX970 then, for about $500. Later in the summer, I purchased an XPS 15 (disgusted with the low-voltage CPU in my MBPr 13) and was enthralled by its borderless display. It came with a TB3 port, but the only eGPU option out there for TB3 is the Razer Core, another $500 investment. In the meantime, a few TB3-to-TB adapters rolled out; currently only the Kanex and StarTech.com models are available, for around $80-100. I asked a bunch of questions about AkiTio Thunder2 + TB3-TB adapter + eGPU on Amazon, but no luck. So I decided to get one myself and try it out.

     About the adapter: I got it on Amazon for $100: https://www.amazon.com/gp/product/B01EJ4XL08/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1. In fact the StarTech.com one is 20 bucks cheaper, and I believe they harness similar technology under the hood, but I trust Kanex because I already own their USB 3.0 hub and it has worked pretty well.

     The hardware setup: (picture coming soon, when I get back to the dorm)

     The software setup:
     1. Download the newest NVIDIA driver.
     2. Update the TB3 firmware and TB3 management software from Dell.com, or as instructed on https://thunderbolttechnology.net/updates per manufacturer.
     3. In the UEFI settings, turn on all the checkboxes in the Thunderbolt tab, and make sure the Thunderbolt security setting is at its lowest (because the AkiTio Thunder2 is not certified for TB3). Or you can approve the device manually in Windows, if the security level is Normal.
     4. Plug the setup in and power on. Windows should boot just fine.
     5. Use Display Driver Uninstaller to remove the old drivers: http://www.guru3d.com/files-details/display-driver-uninstaller-download.html
     6. Install the newest driver downloaded in step 1.
     7. Reboot, and you should see all three graphics adapters in Device Manager (Intel 530, GTX960M, GTX970; a quick command-line check follows at the end of this post). You are free to disable the GTX960M, but that hasn't helped anything so far. :-(

     Result summary:
     - GeForce Experience recognizes the GTX970, no problem.
     - NVIDIA Control Panel recognizes all three, no problem.
     - On the internal display, no game would run on the GTX970, even after disabling the GTX960M. The GTX970 shows up properly in the GPU indicator, but it's always 'inactive'.
     - It works! Just don't disable the GTX960M, and tasks will be assigned to the GTX970 automatically when needed. Only works for Furmark, though; no games or 3DMark.
     - An external monitor works flawlessly. Can't get 'Duplicate' projection mode to work, though.

     Trying to fix the internal display (Optimus) problem: As my MP-CL1 micro projector seems to be defective, I have no access to an external display yet to test the card, and I'm not yet able to get the eGPU running on the internal display either. What I have tried:
     - Modifying the NVIDIA driver per this video, to no avail.
     - [in progress] Just purchased a headless HDMI plug to simulate a dummy display, to use with Win+P and the screen-share option.
     - [just contemplating] Maybe I should get the Razer driver to work? But Razer Synapse should only recognize the Razer Core, so I'd need a side door. They say the driver is open-source, but I cannot find it on GitHub.
     - Just don't disable the GTX960M and it partially works (only for Furmark). Maybe Furmark renders the content internally and pipes the image out through a different GPU manually?
     - Windows 8.1: not working. I installed a copy on my SanDisk SSD 500 external SSD. The GTX970 cannot coexist with the GTX960M, and disabling the GTX960M still doesn't enable rendering on the GTX970.

     With an external display: Will update once I get my MP-CL1 working (hopefully it's just a drained battery), or when I get back to Hong Kong. Just remembered there's a common area in my dorm with a TV; got to try it out some time. Tried it out on the TV: able to achieve ~20000 in Sky Diver. With the GTX960M it's around half that.
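     A quick way to confirm all three adapters enumerated (step 7 of the software setup) without opening Device Manager; this is my addition, assuming the built-in wmic tool on Windows 10:

         :: list display adapters; expect the Intel 530, the GTX960M, and the external GTX970
         wmic path Win32_VideoController get Name,Status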
    1 point
  6. Because I was in need of free USB ports (especially for Bluetooth after a WiFi card change), I started DIY work on adding an internal USB hub. I wasn't able to find any free USB header on the board, so I decided to adapt the USB 2.0 port on the audio jack board. This was successful, and my Y510p now has 3 more internal USB ports (a fourth port is wired out to an external USB connector, except for its power, which is fed directly to allow power-off charging, of course). For the hub I used a mini octopus-style model; the output cables fit neatly into the free room around the HDD and under the mPCIe slot. One cable connects a mini Bluetooth 4.0 module, a second a mini micro-SD reader (for fail-safe, system repair, and recovery, including a GParted live distro), and one cable is left free, ending just beside the mPCIe slot. I plan to eventually use it for a USB WiFi mini-dongle replacing the PCIe WiFi card, thus freeing the mPCIe slot. The free slot would then be available for a PCIe riser with external graphics, etc. If somebody is interested, I'm ready to upload a complete and detailed photo story.
    1 point
  7. This is my first post in this forum; I made my account just to help you, and I hope I can do so. First of all, just to be clear: I haven't built an eGPU setup yet, and I'm only speaking from knowledge gathered around the internet, so if I've made a mistake, please correct me. I will explain step by step (better than essay format):

     Step 1: Check the M.2 slot interface.
     Intro: Keep in mind that not every M.2 slot is connected to the PCIe bus. There are typically two interfaces an M.2 slot can use: PCIe and SATA. So if you happen to have an M.2 SATA-only slot, a PCIe card connected to it won't work, because the data will never reach the PCIe controller on the motherboard; to put it simply, the data pathways are different.
     Note: I read that an M.2 slot can also expose a USB interface (instead of either SATA or PCIe), but I'm not sure about it.
     How to check: There is no single way to tell 100%, but there are several approaches (see the command sketch after this list):
     A. What is already connected to that slot? If it is an SSD, or the slot is empty, skip to the next approach, since SSDs can use either the M.2 SATA interface or the M.2 PCIe interface. If it's a WLAN card or something else, then it's PCIe, at least if there is no M.2 USB interface in play.
     B. For more confidence, use a tool called HWiNFO to check the PCIe buses used on your laptop. It can also report the PCIe version, speed, and number of lanes. I'm not sure the detailed information is always correct (I mean the version, speed, and lane count), but you can rely on it to decide whether the slot is PCIe.
     C. Step 2 can, in some cases, determine which interface your slot has.
     D. Of course, you can search the internet for your laptop model and check whether the slot is SATA or PCIe.
     Note: I haven't dealt with an empty slot, so I don't know whether these approaches are reliable for one.

     Step 2: Check the M.2 slot key module.
     Intro: M.2 slots come in several shapes, called "keys", differing in pin count, number of lanes, and the interfaces they can carry.
     M.2 module keying and provided interfaces: [image]
     Note: M-keyed M.2 slots are the most modern type and the only ones that provide PCIe x4.

     Thank you for your time. I hope I helped, and I will try to continue with further steps later and improve this by adding figures and images. Also, I'm sorry for my bad English if there were any mistakes; if not, then it's cool. See you around.
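     As a rough command-line supplement to approach B (my addition, not from the original post; HWiNFO remains the friendlier option), this should list every PCI-attached device on a reasonably modern Windows install, so an M.2 card on the PCIe interface will show up here:

         :: dump all PnP devices whose IDs start with PCI (i.e. devices on the PCIe bus)
         wmic path Win32_PnPEntity where "DeviceID like 'PCI%'" get Name,DeviceID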
    1 point
  8. Make sure testsigning is enabled (run this command as admin and then reboot): bcdedit -set TESTSIGNING ON
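     To confirm the flag took effect after the reboot (my addition, assuming an elevated prompt):

         :: show the current boot entry; a "testsigning  Yes" line confirms the setting
         bcdedit /enum {current}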
    1 point
  9. Yes, I will post pictures of how exactly I routed the cable. I cut a few small pieces off the midframe and had to grind part of it, plus some of the bottom cover, to smooth it out on the notebook. There is a slight performance hit when using the internal display: in the SteamVR benchmark it comes to somewhere between a 10 and 15 fps difference.
    1 point
  10. @Tesla: I have checked the schematics of the mainboard. All PCIe lanes are connected directly from the main processor to the mGPU (GTX) and to the Ultrabay adapter, so there is no reason why an external GPU should not work. It must be a BIOS whitelist that blocks NVIDIA cards.
    1 point
  11. @Tesla: You are right. The adapter draws no supply from the laptop; the whole power is taken from the external PSU. The crashes look like a power supply problem. Be careful with the power ratings of the PSU: sometimes PSUs can't deliver the rated power on all the different voltage rails. The adapter needs only 3.3 V, and the GPU 12 V.
    1 point
  12. Firstly, congratumalationzsauce to @Khenglish on successfully modding a 980 onto a 980M. Especially in a P150EM. But this thread has taught me quite a few things. And all of them boil down to "nVidia sucks". But I'm glad you have this victory =D. The bad thing? Now I want a couple of 980s on 980M PCBs with improved mosfets and power delivery systems in my system. DAMMIT KHENGLISH
    1 point

