Leaderboard
Popular Content
Showing content with the highest reputation on 12/26/15 in all areas
-
So I didn't like that the memory on my 980m only clocked to 6.4 GHz after raising the voltage from 1.35V to 1.48V, and wanted my memory to run even faster. I knew someone with a spare 970, so we made a deal: I buy the card, and if it still worked after I switched all the memory chips, he'd buy it back (for a reduced amount if it could no longer do 7 GHz but could still do at least 6 GHz). Long story short, he bought the card back and I got faster memory.

MSI 970 4GB Lightning original memory: Samsung K4G41325FC-HC28 (7 GHz rating, 8 GHz max overclock)
MSI 980m 4GB original memory: Hynix H5GQ4H24MFR-T2C (6 GHz rating, 6.4 GHz max overclock)

Both cards are GM204 chips. The 980m has one fewer CUDA core block enabled than the 970, but it has the full 256-bit memory interface and L2 cache with no 3.5GB issues, while the 970 is 224-bit with 1/8th of the L2 cache disabled. Both cards are 4GB with 8 memory chips.

I highly suspected this memory swap would work because video cards read literally nothing from a memory chip. There is no asking the chip what it is, or even what its capacity is. They write data to it and hope they can read it back. The memory manufacturer information shown by programs like GPU-Z isn't even read from the memory; it's set by an on-board resistor (see the sketch near the end of this post for how I picture that strap working). I had also changed multiple memory chips in the past, so I was fairly confident I could physically do the job.

I started with just one chip switched on each card. This meant both cards were running a mix of memory from different manufacturers and of different speed ratings, but with the same internal DRAM array configuration. Both cards worked. Here is a picture of the 980m with one chip switched over:

Now how did the cards react? The 980m behaved no differently: no change in max overclock. The 970 though... I expected it to be slower... but...

970 with 1 Hynix chip, 7 Samsung (originally 8 Samsung):
7 GHz = artifacts like a crashed NES, even at the desktop
6 GHz = artifacts like a crashed NES, even at the desktop
5 GHz = artifacts like a crashed NES, even at the desktop
2 GHz = fully stable, 2D and 3D

I didn't try 3 GHz or 4 GHz, but yeah, a HUGE clock decrease. I shrugged though and kept switching all the memory, figuring that as long as it worked at any speed, I could figure out the issue later. Switching more chips, all the way through 7/8 switched, made no change to max memory clocks.

What was really fun was when I had 7/8 chips done: my GDDR5 stencil got stuck and ripped 3 pads off the final Samsung chip. Needless to say there was a very long swearing spree. Looking up the datasheet I found that 2 of the pads were GND, and the 3rd was an active-low reset. Hoping that the reset was unused, I checked the 970's side of the pad and found it was hardwired to GND, meaning the signal was unused. I also got a solder ball onto a sliver of one of the GND pads that was left, so I was effectively only missing a single GND connection. I put the mangled 8th chip in the 980m and it worked.

Net gain after all of this... a 25 MHz higher max overclock. Something was obviously missing. I figured I would switch the memory manufacturer resistor, hoping that would do something. Clyde had found this resistor on a k5000m, where switching it from the Samsung value to the Hynix value had no effect for him. He found that on the k5000m the value was 35k Ohm for Hynix and 45k Ohm for Samsung. I searched the ENTIRE card and never found a single 35k Ohm resistor. Meanwhile the 970 also worked with all 8 chips swapped, at a paltry 2.1 GHz. Then I got lucky.
Someone with a Clevo 980m killed his card when trying to change resistor values to raise his memory voltage. His card had Samsung memory. He sent his card to me to fix, and after doing so I spent hours comparing every single resistor on our boards looking for a variation. Outside of the VRM resistors there was just a single difference: on his card (his is the one shown here) the boxed resistor was 20k Ohm. On mine it was 15k Ohm. I scraped my resistor with a straight-edge razor (I could not find a single unused 20k resistor on any of my dead boards), raising it to 19.2k and hoping that was close enough. And it was! Prior to this I had also raised the memory voltage a little more, from 1.48V to 1.53V. My max stable clocks before the ID resistor change were 6552 MHz. They are now 6930 MHz: a 378 MHz improvement. Here's a 3dm11 run at 7.5 GHz (not stable, but it still ran): http://www.3dmark.com/3dm11/10673982

Now what about the poor 2 GHz 970? I found its memory ID resistor too:

Memory improved from 2.1 GHz to 6.264 GHz. Surprisingly the memory was slower than it had been on the 980m. I expected the 970's vBIOS to have looser timings built in that would run the memory faster. As for why the memory was over 100 MHz slower than on the 980m: the 980m actually has better memory cooling than the 970. With the core at 61C I read the 970's backside memory at 86C with an IR thermometer. Meanwhile the 980m has active cooling on all memory chips, so they will run cooler than the core. In addition, the 980m's memory traces are slightly shorter, which may also help.

The 980m at 6.93 GHz is still slower than the 8 GHz the 970 was capable of with the same memory. I'm not sure why this is. Maybe memory timings are still an issue. Maybe because MSI never released a Hynix version of the 970, leftover timings for an older card like a 680 were used instead of the looser timings that should have been (I know that in system BIOSes tons of old, unused code gets pushed along generation after generation). I don't know, just guessing. Talking to someone who knows how this stuff works would be great. I still want 8 GHz.

Some more pics. Here's one with the 970 about to get its 3rd and 4th Hynix chips:

Here's my 980m with all memory switched to Samsung. Sorry for the blurriness:

So in summary:

1. It is possible to mix Samsung and Hynix memory, or to switch entirely from one manufacturer to the other, with some limitations.
2. There is a resistor on the PCB that is responsible for telling the GPU which memory manufacturer is connected to it. This affects memory timings, and maybe termination. It has a large impact on memory speed, especially for Hynix memory. The resistor value can be changed to another manufacturer's, but it is not guaranteed that the vBIOS will contain the other manufacturer's timings, and if it does, they may not be 100% correct for your replacement memory.
3. If you take a card meant for Hynix memory, you can mix in Samsung memory of the same size if it is faster memory. If the memory is the same speed, the penalty for running Samsung with Hynix timings may hurt memory clocks.
4. If you take a card meant for Samsung memory, you cannot mix in any Hynix memory without MAJOR clock speed reductions unless you also change the memory manufacturer resistor. Even then it is not guaranteed that the vBIOS will contain the other manufacturer's timings, or, if it does, 100% proper timings for your specific memory.
5. For Kepler cards the Samsung resistor value is 45k and the Hynix value 35k. For Maxwell cards the Samsung resistor value is 20k and the Hynix value 15k.
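Since nothing here shows how the GPU actually reads this strap, here's a minimal sketch of the idea as I picture it, assuming the controller bins the measured strap resistance into tolerance bands and selects vendor timings from the vBIOS accordingly. Only the resistor values come from the measurements above; the function, the 10% band, and the decode logic are my assumptions, not anything from Nvidia documentation.

```python
# Hypothetical vendor-ID strap decode. Resistor values are the ones measured
# in this thread; the +/-10% tolerance band is an assumption.
MAXWELL_STRAPS = {15_000: "Hynix", 20_000: "Samsung"}
KEPLER_STRAPS = {35_000: "Hynix", 45_000: "Samsung"}

def decode_vendor(measured_ohms, straps, tolerance=0.10):
    """Return the vendor whose nominal strap value the measurement falls near."""
    for nominal, vendor in straps.items():
        if abs(measured_ohms - nominal) / nominal <= tolerance:
            return vendor
    return None  # outside every band

# The scraped resistor: 15k raised to 19.2k, only 4% below the 20k target.
print(decode_vendor(19_200, MAXWELL_STRAPS))  # Samsung
print(decode_vendor(15_000, MAXWELL_STRAPS))  # Hynix
```

This would explain why 19.2k was "close enough": it is within a few percent of the 20k Samsung value and nowhere near the 15k Hynix one.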
Next up is changing the hardware ID to that of a 980 notebook card. Clyde also found the HWID to have an impact on the number of CUDA core blocks enabled. In about a month I can get hold of a 970m that someone is willing to let me measure the resistor values on. It has the same PCB as the 980m. Does Nvidia still laser cut the GPU core package? We will find out.
-
Holy crap @khenglish, you're beating all my personal high scores with a friggin' mobile ES CPU one and a half generations back *lol*. Loving those hardware mods though; this is what distinguishes TI from NBR, the amount of original work. Sent from my Nexus 5 using Tapatalk
-
Smaller screen bezel/frame (just look at how nice the new Dell XPS 15 & 13 are).

Go crazy:
- A 21:9 laptop? And tailor a backpack that can hold it lol. 24", 22", 21", 20" laptops?
- Three- or four-way SLI, desktop variant GPUs, or perhaps a special blend GPU: consider going dual-GPU on a single graphics card, just like the GeForce 690 was (or AMD's new Fury X2). Why not a desktop variant of the GeForce 980 in a dual-GPU format (essentially twice the memory, two GPU cores, and slightly less than 200% the physical size)... now put two of these in SLI in a 20" thin-bezel laptop that is essentially no bigger than an 18" laptop.
- And oh yeah... Thunderbolt 3 over USB Type-C ports * 8 (and no regular size USB/Ethernet/VGA/HDMI... just this type of port).
- Ditch the optical drives. Ditch the 2.5" drives. NVMe M.2 SSDs only, with RAID support.

Go nuts:
- Slide-door mechanism for... two screens (imagine 2x20", yummy).
- A screen that can be height adjusted... that neck pain is baaaaaaaaad.
- Make the computer out of high-grade copper so the entire thing is heat conductive. If we wanted it light we wouldn't have bought your product anyway; we don't care about weight (as long as you keep it under 15kg lol).
- G-Sync, 144Hz AH-IPS (or a newer format of IPS).
- Maybe the entire GPU section can be behind the display panel instead of under the keyboard; that way the fans can intake from the bottom and push up, and the distance from the CPU = superior temp management.

How about an actual desktop computer instead of a laptop? An actual desktop motherboard, an actual desktop GPU on a normal PCIe 3.0 x16 connector (perhaps with an adapter), and your heatsink. A step farther: what if the GPU is a normal full-size desktop GPU sitting behind the display panel, the back plate of the laptop is a massive copper heatsink, and there are no blower fans, just plain fans pulling air from behind the monitor onto the GPU? No need to "choke" under the laptop or anything; really seems most reasonable, and it totally opens the space up for four GPUs to be connected....

Also: why laptop RAM?

Go bananas: embed some Qualcomm SoC in the motherboard and have two different power buttons on the laptop, one that boots the normal PC and one that boots the SoC. A smartphone's SoC plus a laptop's format and battery equals what... 5 days of work time if all you need are the most basic computer functions? Could be awesome.

If you did all of the above (went for a micro-ATX motherboard, but a wide rectangle instead of a box shape), with the GPUs behind the screen and no optical or 2.5" drives, just M.2 SSDs, then in theory there's now excessive free space in the laptop.... Counter that with a physical keyboard; that plus a height-adjustable screen and you've got a real winner, especially if you managed to shove a 20" screen into an 18" size thanks to the reduced bezel. This would essentially be the ultimate laptop (and would totally kill MSI's GT80, since... bigger screen but same size laptop, far faster hardware).

Also... a military-grade rugged ~1500 watt PSU will probably be required here (because this laptop will be able to handle an extreme edition X99 CPU and four-way SLI of GeForce 980 Ti.... so yeah).
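For what it's worth, that ~1500W figure roughly checks out against published TDPs. A back-of-envelope sketch; the per-part wattages are stock TDPs (250W for a 980 Ti, 140W for an X99 extreme edition CPU like the i7-5960X) and the "everything else" line is a pure guess:

```python
# Rough power budget for the dream laptop above. TDPs are published stock
# figures; overclocking would push well past them.
cpu_w = 140            # X99 extreme edition CPU (e.g. i7-5960X), stock TDP
gpu_w = 250            # one GeForce GTX 980 Ti, stock TDP
num_gpus = 4
other_w = 150          # assumed: screens, SSDs, fans, VRM losses

total_w = cpu_w + num_gpus * gpu_w + other_w
print(f"Estimated full load: {total_w} W")  # ~1290 W at stock clocks
```

At stock clocks that's already ~1290W, so with any overclocking headroom a ~1500W PSU is about right.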
-
I'll do one when I'm done with all the major benchmarks at 1.2V. The how-to part won't be very detailed, as I doubt many people want to replace all of their GPU's memory.
-
Both the 240W and 330W are modded. The 330W does not limit me at all. The 330W is the home PSU, and the 240W is the travel PSU. I always have the 240W on me, but not the 330W.
-
So after some more modding my memory can now do 7.6 GHz in 3dm11. It can only go that high because 3dm11 is so light on memory; fully stable is only 6930 MHz. The reason I don't have 8 GHz is definitely memory timings. Switching the memory ID resistor on the board from Hynix to Samsung to match the memory swap got me an extra 378 MHz. 980 timings should get me to 8 GHz+, at least for benchmarks. @Prema if you happen to know how to change memory timings, or would like to learn how by throwing experimental vBIOS at me... I would be very grateful. http://www.3dmark.com/3dm11/10669305

I'm on my 240W PSU right now, so no core overvolt for core clocks. I was on the 330W last night, but the room was too hot and that knocked about 10 MHz off my clocks. I got tired of the crashing and gave up. I'll do 1.2V runs soon though, maybe even outside to try for 1.5 GHz on the core. I did submit a 1.2V Catzilla run to hwbot because that test loves memory clocks, and at just 7012 MHz I got the top spot by a mile: http://hwbot.org/submission/3061939_khenglish_catzilla___720p_geforce_gtx_980m_27579_marks
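For anyone turning these clocks into bandwidth numbers: the GDDR5 clocks quoted in this thread are effective data rates, so bandwidth is just the bus width in bytes times the rate. A quick sketch using the clocks above (the 224-bit figure for the 970 follows the description in the first post):

```python
# Memory bandwidth from effective GDDR5 data rate (MT/s) and bus width (bits).
def bandwidth_gbs(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000  # GB/s

print(bandwidth_gbs(256, 6930))  # 980m fully stable: ~221.8 GB/s
print(bandwidth_gbs(256, 8000))  # the 8 GHz target:  ~256.0 GB/s
print(bandwidth_gbs(224, 6264))  # 970 after the swap: ~175.4 GB/s
```

That's also why the resistor mod matters so much: the jump from 2.1 GHz to 6.264 GHz on the 970 nearly tripled its usable bandwidth.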