
    NVIDIA GTX 980M Hynix to Samsung Memory Swap


    Khenglish

    So I didn't like that the memory on my 980m only clocked to 6.4 GHz even after raising the voltage from 1.35V to 1.48V, and I wanted my memory to run faster. I knew someone with a spare 970, so we made a deal: I'd buy the card, and if it still worked after I switched all the memory chips, he'd buy it back (for a reduced amount if it could no longer do 7 GHz, but could still do at least 6 GHz). Long story short, he bought the card back and I got faster memory.

     

    MSI 970 4GB Lightning original memory: Samsung K4G41325FC-HC28 (7 GHz rating, 8 GHz max overclock)
    MSI 980m 4GB original memory: Hynix H5GQ4H24MFR-T2C (6 GHz rating, 6.4 GHz max overclock)

     

    Both cards are GM204 chips. The 980m has one fewer CUDA core block enabled than the 970, but it has the full 256-bit memory interface and L2 cache with no 3.5 GB issue, while the 970 is 224-bit with 1/8th of the L2 cache disabled. Both cards are 4GB with 8 memory chips.

     

    I strongly suspected this memory swap would work because video cards read literally nothing from a memory chip. There is no asking what the chip is or even what its capacity is. They write data to it and hope they can read it back. The memory manufacturer information read by programs like GPU-Z isn't even read from the memory; it's set by an on-board resistor. I had also changed multiple memory chips in the past, so I was fairly confident I could physically do the job.

     

    I started with just one chip switched from both cards. This meant both cards were running a mix of memory from different manufacturers and of different speed ratings, but same internal DRAM array configuration. Both cards worked. Here is a picture of the 980m with one chip switched over:

     

    JXVyaWy.jpg

     

    Now how did the cards react? The 980m behaved no differently. No change in max overclock. The 970 though... I expected it to be slower... but...

     

    970 with 1 Hynix chip, 7 Samsung (originally 8 Samsung)
    7GHz = Artifacts like a crashed NES even at desktop
    6GHz = Artifacts like a crashed NES even at desktop
    5GHz = Artifacts like a crashed NES even at desktop
    2GHz = Fully stable, 2D and 3D

     

    I didn't try 3 GHz or 4 GHz, but yeah, HUGE clock decrease. I shrugged it off though and kept switching memory, figuring that as long as the card worked at any speed, I could sort out the issue later. As I swapped more chips, all the way through 7 of 8, there was no change in max memory clocks.

     

    What was really fun was when I had 7 of 8 chips done. My GDDR5 stencil got stuck and ripped 3 pads off the final Samsung chip. Needless to say there was a very long swearing spree. Looking at the datasheet, I found that two of the pads were GND and the third was an active-low reset. Hoping the reset was unused, I checked the 970's side of that pad and found it hardwired to GND, which meant the signal was indeed unused. I also managed to get a solder ball onto a sliver of one of the torn GND pads that was left, so I was effectively only missing a single GND connection.

     

    I put the mangled 8th chip in the 980m and it worked. Net gain after all of this... a 25 MHz higher max overclock. Something was obviously missing. I figured I would switch the memory manufacturer resistor, hoping that would do something. I saw that Clyde had found this resistor on a K5000M, and that switching it from the Samsung value to the Hynix value had no effect for him. He found that on the K5000M the Hynix value was 35k Ohm and the Samsung value 45k Ohm. I searched the ENTIRE card and never found a single 35k Ohm resistor. Meanwhile the 970 also worked with all 8 chips swapped, at a paltry 2.1 GHz.

     

    Then I got lucky. Someone with a Clevo 980m killed his card when trying to change resistor values to raise his memory voltage. His card had Samsung memory. He sent his card to me to fix, and after doing so I spent hours comparing every single resistor on our boards looking for a variation. Outside of VRM resistors there was just a single difference:

     

    AIU6Ph3.jpg

     

    On his card (his is shown here) the boxed resistor was 20k Ohm. On mine it was 15k Ohm. Since I could not find a single unused 20k resistor on any of my dead boards, I scraped my resistor with a straight-edge razor (removing material from the resistive film raises the resistance), bringing it up to 19.2k and hoping that was close enough.

     

    And it was! Prior to this I had also raised the memory voltage a little more, from 1.48V to 1.53V. My max stable clocks prior to the ID resistor change were 6552 MHz. They are now 6930 MHz, a 378 MHz improvement.
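    As a quick sanity check of those numbers (a trivial sketch; the values are just the ones quoted above):

```python
# Max stable memory clocks (MHz) on the 980m, before and after
# the memory ID resistor change, as quoted above.
before = 6552
after = 6930

gain = after - before        # absolute gain in MHz
pct = 100 * gain / before    # relative gain in percent

print(gain)            # 378
print(round(pct, 1))   # 5.8 (about a 5.8% improvement)
```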

     

    Here's a 3dm11 run at 7.5 GHz (not stable, but still ran)
    http://www.3dmark.com/3dm11/10673982

     

    Now what about the poor 2GHz 970? I found its memory ID resistor too:

    6q6iGuQ.jpg

     

    Memory improved from 2.1 GHz to 6.264 GHz. Surprisingly, the memory was slower than it was on the 980m. I expected the 970's vBIOS to have looser timings built in to run the memory faster. As for why the memory was over 100 MHz slower than on the 980m: the 980m actually has better memory cooling than the 970. With the core at 61C I read the 970's backside memory at 86C with an IR thermometer. Meanwhile the 980m has active cooling on all memory chips, so they run cooler than the core. In addition, the 980m's memory traces are slightly shorter, which may also help.

     

    The 980m at 6.93 GHz is still slower than the 8 GHz that the 970 was capable of with the same memory. I'm not sure why this is. Maybe memory timings are still an issue. Maybe, since MSI never released a Hynix version of the 970, leftover timings for an older card like a 680 were run instead of the looser timings that should have been used (I know that in system BIOSes tons of old, unused code gets carried forward generation after generation). I don't know; I'm just guessing. Talking to someone who knows how this stuff works would be great. I still want 8 GHz.

     

    Some more pics. Here's one with the 970 about to get its 3rd and 4th Hynix chips:
    Xuu0dxa.jpg

     

    Here's my 980m with all memory switched to Samsung. Sorry for the blurriness:
    7B8bHQm.jpg

     

    So in summary:

     

    1. It is possible to mix Samsung and Hynix memory, or switch entirely from one manufacturer to another, with some limitations.

     

    2. There is a resistor on the PCB that tells the GPU which memory manufacturer is connected to it. This affects memory timings, and possibly termination, and it has a large impact on memory speed, especially for Hynix memory. The resistor value can be changed to that of another manufacturer, but it is not guaranteed that the vBIOS will contain the other manufacturer's timings, and even if it does, they may not be 100% correct for your replacement memory.

     

    3. If you take a card meant for Hynix memory, you can mix in Samsung memory of the same size if it is rated faster. If the Samsung memory is only rated for the same speed, the penalty of running it with Hynix timings may hurt memory clocks.

     

    4. If you take a card meant for Samsung memory, you cannot mix in any Hynix memory without MAJOR clock speed reductions unless you also change the memory manufacturer resistor. It is not guaranteed that the vBIOS will contain the other manufacturer's timings, or, if it does, that they are 100% proper for your specific memory.

     

    5. For Kepler cards the Samsung resistor value is 45k, and for Hynix 35k. For Maxwell cards the Samsung resistor value is 20k, and Hynix 15k.
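    The decode presumably allows some tolerance around each nominal value, which would explain why my hand-scraped 19.2k resistor passed for the 20k Samsung strap. Here's a minimal sketch of how such a lookup could work, using the values measured above; the ±10% tolerance window and the function itself are my assumptions for illustration, not anything read out of the hardware:

```python
# Memory-vendor ID strap values (Ohms) from the measurements in this post.
STRAP_TABLES = {
    "kepler":  {"samsung": 45_000, "hynix": 35_000},
    "maxwell": {"samsung": 20_000, "hynix": 15_000},
}

def decode_vendor(generation, measured_ohms, tolerance=0.10):
    """Return the vendor whose nominal strap resistance the measured
    value falls within +/- tolerance of, else None."""
    for vendor, nominal in STRAP_TABLES[generation].items():
        if abs(measured_ohms - nominal) <= tolerance * nominal:
            return vendor
    return None

# The scraped 15k -> 19.2k resistor lands within 4% of the 20k Samsung value:
print(decode_vendor("maxwell", 19_200))   # samsung
```

    With a 10% window, 19.2k falls well inside the Samsung band, while a value sitting between the two bands would decode as unknown.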

     

    Next up is changing the hardware ID to that of a notebook 980. Clyde also found the HWID to have an impact on the number of CUDA core blocks enabled. In about a month I can get hold of a 970m that someone is willing to let me measure the resistor values on. It has the same PCB as the 980m. Does Nvidia still laser-cut the GPU core package? We will find out.

     

     

    Full thread can be found here: https://www.techinferno.com/index.php?/forums/topic/9021-hardware-mod-gtx980m-hynix-to-samsung-memory-swap/#comment-134361

     


