Latest News
  • Apologies for the downtime. We had to update our backend and theme.
  • We will continue pushing updates.

Search the Community

Showing results for tags 'hardware'.


Found 8 results

  1. Due to a stupid accident on my part, I acquired a 980m with a chunk knocked out of the core. Not wanting to scrap a perfectly good top-end PCB for parts, I decided to replace the core. You can see the gouge in the core to the left of the TFC918.01W marking near the left edge of the die.

    First I had to get the dead core off. With no sellers on ebay selling GM204 cores, my only option was to buy a full card off ebay. With no mobile cards under $500, I had to get a desktop card. And with this much effort involved in the repair, of course I got a 980 instead of a 970. Below is the dead 980 I got off ebay. You can see that for some reason someone removed a bunch of components between the core and the PCI-E slot. I have no idea why anyone would do this. I tried the card and it gave error 43. The PCB bend seemed too slight to kill the card, so those missing components had to be the cause. GPUs can be dead because someone removed or installed a heatsink wrong and broke a corner of the core off, so buying cards for their cores on ebay is a gamble. This core is not even scratched.

    Preheating the card prior to high heat to pull the core... and core pulled. It survived the pull. Next is the 980 core on the left, cleaned of solder; on the right is the original 980m core. Next I need to reball the 980 core, and lastly put it on the card. I am waiting for the BGA stencil to arrive from China; it still has not cleared US customs: https://tools.usps.com/go/TrackConfirmAction?tLabels=LS022957368CN When that shows up, expect the core to be on the card in 1-2 days.

    Some potential issues with this mod, besides me physically messing up: I believe that starting with Maxwell, Nvidia began flashing the core configuration onto the cores themselves, like Intel does with CPUID. I believe this because I found laser cuts on a GK104 from a 680m, but could not find any on two GM204 cores. In addition, Clyde figured out device IDs on the 680m and K5000m: they are set by resistor values on the PCB.
    The 980m has the same resistor configuration as the 680m for the lowest nibble of the device ID (0x13D7), but all of those resistors are absent, and filling them in does nothing. Resistors do exist for the 3 and the D in the device ID. Flashing a 970m vBIOS on my 980m did not change the device ID or core configuration. If this data is not stored on the PCB through straps or in the vBIOS, then it must be stored on the GPU core. So I expect the card with the 980 core to report its device ID as 0x13D0: the first 12 bits pulled from the PCB, and the last 4 from the core. 0x13D0 does not exist. I may be able to add it to the .inf, or I may have to change the ID on the board. With the ID's 0 hardset by the core, I can only change the device ID to 0x13C0, matching that of a desktop 980.

    An additional issue is that the core may not fully enable. Clyde put a 680 core on a K5000m and never got it to unlock to 1536 CUDA cores; we never figured out why.

    Lastly, there was very tough glue holding the 980m core on. When removing this glue I scraped some of the memory PCB traces. I checked with a multimeter and these traces are still intact, but if they are significantly damaged this can be problematic for memory stability. I think they are OK, just exposed.

    Due to Clyde's lack of success in getting his 680 core to fully unlock, I am concerned I might not get 2048 CUDA cores. If I don't, I should at least still have a very good chip. Desktop chips are better binned than mobile chips (most 980s are over 80% ASIC quality, while most 980ms are below 70%). In addition, this 980 is a Galax 980 Hall of Fame, which are supposedly binned out of the best 980 chips. A 90%+ ASIC would be great to have; the mid-60s chips we get in the 980m suck tons of power.

    I want to give a special thanks to Mr. Fox. This card was originally his. He sent me one card to mod and one to repair. I repaired the broken one and broke the working one; the broken one is the one I've been modding.
    Article update: SUCCESS! The core is finally reballed. If the mount is poor I will be very, very angry... Card cooling. New brain installed. It actually works with the 980m vBIOS; I tried modding too soon and just needed to reinstall the driver. I only ran a very lightweight render test because right now the card is only running on 2 phases. I'm pulling the phase driver from my 980m now to get the 3rd phase back up. Follow the rest of the discussion here:
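The device-ID reasoning in the post above (upper 12 bits strapped by PCB resistors, lowest nibble apparently fused on the core itself) can be sketched in a few lines of Python. The 12/4 split and the function name are my assumptions drawn from the post's observations, not anything NVIDIA documents:

```python
# Hypothetical sketch: how the 16-bit PCI device ID appears to be assembled,
# per the observations above. The split between PCB straps and core fuses is
# an assumption inferred from the post, not NVIDIA documentation.

def compose_device_id(pcb_straps: int, core_nibble: int) -> int:
    """Combine the 12 PCB-strapped bits with the core's fused low nibble."""
    return ((pcb_straps & 0xFFF) << 4) | (core_nibble & 0xF)

# 980m PCB straps (0x13D) with the mobile core's fused nibble (0x7) -> 0x13D7
print(hex(compose_device_id(0x13D, 0x7)))  # 0x13d7
# Same PCB with a desktop 980 core, whose fused nibble would be 0x0 -> 0x13D0
print(hex(compose_device_id(0x13D, 0x0)))  # 0x13d0
```

This is why the author expects the Frankenstein card to show up as the nonexistent ID 0x13D0, and why only the PCB-controlled bits (e.g. changing 0x13D to 0x13C) can be altered by resistor mods.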
  2. Back when I got my P150EM, one of the deciding factors was that, thanks to Optimus/Enduro, the battery life was respectable. I wanted top hardware while still having some mobility. Over time, though, the battery became more and more worn out, to the point where I hardly got over an hour of life out of it. New batteries are stupidly expensive, and Clevo used cheap cells in the first place; I wasn't paying $100 for a mediocre replacement battery. Instead I decided to pay $50 for top-end cells to boost capacity by 30% and get over 6h of battery life. I figured this could get messy, and luckily a friend let me have his nearly dead P150HM battery for spare parts.

    So I swapped the cells, destroying the plastic battery shell in the process, and got a battery that worked just as if it still had the old cells. Figuring I needed to reprogram the EEPROM on the battery pack, I started removing the glue covering the EEPROM chip to get it into my programmer. I stupidly forgot that I was working on a BATTERY, which meant it was ALWAYS ON, and poured MEK over it, blowing a fuse. After getting pissed off and giving up for a few months, today I gave it another go.

    I got the EEPROM chip out and started taking guesses at how to reprogram it. If I guessed wrong, good thing the fuse was blown so I wouldn't melt anything. I figured out that the battery EEPROM stores the capacity as mAh for a pair of battery cells. I searched for the default 5200 mAh (1450 in hex) and found it. I then raised this to 6800 mAh (1A90 in hex). It was a success: nominal battery capacity was now 100640 mWh total. So now I knew I could probably program things right after enough tries.

    It was now time to get the battery operational again. I bridged the fuse, and the battery came back to life. Sort of. It would charge when the laptop was off, but not on. It would run, but Windows reported no battery drain (infinite energy!?!?!?!?).
    In short, the battery EEPROM was not being updated at all as the battery state changed. I was under the impression that if I let it charge, it would not stop until overvoltage protection kicked in, and if I let it discharge, it would not turn off until the system BIOS detected an undervoltage scenario, which is far below the safe discharge voltage of the battery. I figured for the time being I'd just let it be and try to get the EEPROM right.

    Next was looking for the wear capacity: the capacity left in the battery as it ages. Using HWiNFO64, I got the wear level, converted it to hex, and found it in the EEPROM. I then changed it to only 5% wear instead of 74%. I left some wear because I did let the cells sit for a few months, and I was soldering directly to the cells, which isn't good for them due to the heat from the iron. This was a success; the current charge % correctly dropped as well.

    So now I needed to get the battery charging right. My only option was to rip apart my old but fully functional P150EM battery. I found that the fuse was actually really weird, with 3 prongs, and only 2 of them were supposed to have 0 resistance; I had soldered all 3 together on the P150HM battery. I switched the EEPROM chips and boards, then hoped it would work and not require me to run for the fire extinguisher. It worked! The battery is now charging properly as I type this, and it discharges right too. It looks like the laptop will try to overcharge it a bit, since the reported charge % was a little low vs reality, but that should just add a little extra wear, with the charge % being calibrated properly at 100%. I'm not sure how I'm going to get it back in the shell...

    Continue discussion in original thread here.
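The capacity patch described above boils down to a 16-bit find-and-replace in the EEPROM dump. A minimal sketch, assuming the value is stored little-endian (byte order and layout vary between battery controllers, so always verify against your own dump first):

```python
# Sketch of the EEPROM capacity patch from the post: locate the design
# capacity stored as a 16-bit value and replace it. Little-endian byte
# order is an assumption; check your own dump before writing anything back.

def patch_capacity(dump: bytes, old_mah: int, new_mah: int) -> bytes:
    old = old_mah.to_bytes(2, "little")  # 5200 -> b'\x50\x14' (0x1450)
    new = new_mah.to_bytes(2, "little")  # 6800 -> b'\x90\x1a' (0x1A90)
    if old not in dump:
        raise ValueError("capacity pattern not found in dump")
    return dump.replace(old, new, 1)     # patch only the first occurrence

# Toy 4-byte dump containing the default 5200 mAh value
eeprom = bytes([0x00, 0x50, 0x14, 0xFF])
patched = patch_capacity(eeprom, 5200, 6800)
print(patched.hex())  # 00901aff
```

The same search-convert-replace approach is what the author used for the wear level: read the value from HWiNFO64, convert to hex, find it in the dump, and overwrite it.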
  3. Hi everyone, I hope I'm not writing off-topic. I own an Acer Predator 15 (G951) notebook, and it never gave me problems before. I've never used the integrated audio jack because I usually listen to music through Bluetooth headphones, but a few weeks ago I noticed that inserting a jack connector only deactivates the sound from the PC speakers: the headphones don't appear in the "devices" section, and the sound coming out of the headphones is really faint and distorted (audible only if the system volume slider is at maximum and the sound should be very loud; at lower levels I hear nothing at all). Also, when I insert a jack into the microphone port I hear the classic sounds of the jack touching the pin connectors inside the port, but that doesn't happen with the headphones port. I've tried almost everything: many different headphones, both phone headphones with a single jack and PC headsets with two jacks (one for the headphones, one for the microphone), but every time it's the same. I installed and reinstalled many different versions of the audio drivers, both from Acer and from the manufacturer (Realtek), and nothing changes; I also tried Ubuntu and it's the same. What could be the cause, and can the problem be solved without any technical intervention? The warranty on my PC is still valid, but service should be the very last resort (I'd have to back up everything, reformat, and so on). And one last question: I flashed a custom vBIOS some time ago; does this compromise the warranty? Should I reflash the original vBIOS, or is it unlikely they'll check? Thank you all for your attention!
  4. Hi all! Since there aren't many resources out there yet that properly show how to dissect your Phoenix, I figured I'd take up the mantle with my own experience upgrading the LCD panel from FHD 1080p to the 4K panel. Before we get started, I'd like to present a list of useful tools to have beforehand if you intend to pursue this endeavor. I'll be providing pictures of my own tools as well as links to the various components you'll need to purchase if you intend to follow suit. ATTENTION: HUGE PICTURE LOAD AHEAD! Continue discussion of the guide here.
  5. Sager P9872-S Hardware Quirks

    I've recently purchased Sager's new P9872-S laptop with an Intel i7-6700K unlocked CPU and a single nVidia 1070 XMX GPU. Let me say that overall I absolutely LOVE this new machine, but I've discovered some strange hardware/BIOS quirks that I'd like to discuss here. Note: I don't see a version displayed in my BIOS, but HWiNFO lists a BIOS date of 10/19/16.

    The CPU is unlocked and supports overclocking, and the stock BIOS has a section for CPU overclocking. My CPU is quite happy running a moderate 4.5GHz OC across all cores with a 70mV undervolt (offset mode). Yes, it'll do the same clock at stock voltage, but I've found that many Skylake CPUs don't need voltage as high as Intel has programmed them for, and my temps run much lower with the undervolt. So I set the undervolt with Intel XTU (I've also tried doing the same with Sager's own utility) and it runs beautifully until I reboot the system. At POST, the BIOS then seems to reject the undervolt, forces a power-cycle, and sets the CPU PL1 and PL2 power limits to very low "safe" levels. It also changes the negative voltage offset to a positive 70mV, indicating that it's trying to "error correct" the negative value. The BIOS itself specifies a valid range of -500 to +500mV for the voltage offset, and I can change it directly within the BIOS (and then it DOES actually stick through future reboots), but the BIOS does not allow me to enter a negative value! Instead of -500 to +500, the actual values it allows are 0 to +1000. And no, +500 does not actually equal zero; I tried that and it really is +500. Also of note: I can change any multipliers in XTU or Sager's utility without the BIOS objecting, but *any change* to the voltage offset gets rejected at reboot - even +1mV.
    Even applying +1mV and then immediately applying 0mV in either utility gets rejected on the next reboot, so it's as if some checksum is not calculated the same way by XTU as by the BIOS, causing rejection of *any* voltage offset modification made by a utility rather than the BIOS itself.

    Initially I set all my CPU core multipliers to 45x and all was good. Soon I noticed that any time I put the computer to sleep or hibernate, it would limit core speed to a maximum of 42x after resuming, despite still reporting 45x as the CPU maximum. I tinkered with it for a while and eventually tried different values. Now I have the ONE CORE multiplier set to 46x and the TWO, THREE, and FOUR CORE multipliers set to the original 45x. That shouldn't make a lick of difference versus 45x across the board, but it does: 46/45/45/45 sticks properly through sleep and hibernate. Weird, right? I'm fine with the current settings as an effective workaround, but I feel it's still worth noting here.

    The VSYNC seems "lazy." I know this may be a driver or settings issue rather than a true hardware quirk, but it really bothers me - and the same behavior exists with nVidia drivers 368.xx through 373.06. Take Dirt Rally, for example; I want it to run with VSYNC ON at 1080p, 90Hz. If I turn VSYNC OFF, my average framerate is about 200fps and never falls below about 110fps, but with VSYNC ON it keeps dipping momentarily below 90fps and skips frames! In a fast-motion racing game this is very distracting. I've observed the same behavior on the internal LCD and on my AOC 144Hz gaming monitor connected via DisplayPort. With VSYNC OFF, even with a framerate limiter set at or just above my refresh rate, the game looks terrible, so that is not an acceptable workaround either. Yes, the reported refresh rate is on target without VSYNC, but there's terrible tearing and stutter. I've never experienced this kind of VSYNC dysfunction before; any thoughts? I have the 1920x1080 120Hz LCD screen.
    I wanted to set a couple of custom refresh rates in nVidia Control Panel for racing games that really stress the system at 120Hz, where I still want more than 60Hz. If the LCD starts out set at 60Hz and I test a custom resolution of 1920x1080 at, say, 90Hz, the screen goes kaput until it reverts back to 60Hz. However, if the screen starts out set at 120Hz, I can then create any custom refresh rate between 60 and 120Hz that I want. I wouldn't bother trying to fix this one, as there's a perfectly functional workaround; it's just weird to me that the starting frequency matters.

    Any thoughts on these quirks are welcome, and I'd be happy to try/test any good suggestion or curiosity you might have. Help getting a negative CPU voltage offset to stick permanently, as well as any thoughts on the VSYNC laziness, would be especially appreciated. Thanks everyone!
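As background on the 0 to +1000 oddity above: utilities like XTU apply Skylake voltage offsets through the CPU's overclocking mailbox (MSR 0x150), where the offset field is an 11-bit two's complement value in units of 1/1024 V, so negative offsets are perfectly representable at the hardware level. A sketch of just that field encoding (my interpretation of the commonly documented format, not a flashing tool, and it says nothing about why this particular BIOS rejects negative values):

```python
# Sketch of the 11-bit two's complement voltage-offset field used by the
# Skylake overclocking mailbox (MSR 0x150), as commonly documented by
# undervolting tools. Units are 1/1024 V. This only encodes the field; the
# full MSR layout and plane selection are out of scope here.

def encode_offset_mv(mv: float) -> int:
    """Encode a voltage offset in mV as an 11-bit two's complement field."""
    steps = round(mv * 1.024)  # mV -> 1/1024 V steps
    return steps & 0x7FF       # truncate to 11-bit two's complement

# A -70 mV undervolt becomes a "large" unsigned-looking value:
print(hex(encode_offset_mv(-70)))  # 0x7b8
print(hex(encode_offset_mv(0)))    # 0x0
```

A BIOS that treats this field (or its own stored copy) as unsigned would display exactly the kind of 0 to +1000 range and "error correction" of negative values the author describes, though that remains speculation.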
  6. I didn't like that the memory on my 980m only clocked to 6.4 GHz after raising the voltage from 1.35V to 1.48V, and I wanted my memory to run even faster. I knew someone with a spare 970, so we made a deal: I'd buy the card, and if it still worked after I switched all the memory chips, he'd buy it back (for a reduced amount if it could no longer do 7GHz, but at least 6GHz). Long story short, he bought the card back and I got faster memory.

    MSI 970 4GB Lightning original memory: Samsung K4G41325FC-HC28 (7GHz rating, 8GHz max overclock). MSI 980m 4GB original memory: Hynix H5GQ4H24MFR-T2C (6GHz rating, 6.4GHz max overclock). Both cards are GM204 chips. The 980m has one fewer CUDA core block enabled than the 970, but it has the full 256-bit memory interface and L2 cache with no 3.5GB issues, while the 970 is 224-bit with 1/8th of the L2 cache disabled. Both cards are 4GB with 8 memory chips.

    I highly suspected this memory swap would work because video cards read literally nothing from a memory chip. There is no asking what the chip is, or even its capacity; the card writes data and hopes it can read it back. The memory manufacturer information shown by programs like GPU-Z isn't even read from the memory: it's set by an on-board resistor. I had also changed multiple memory chips in the past, so I was fairly confident I could physically do the job.

    I started with just one chip switched on each card. This meant both cards were running a mix of memory from different manufacturers and of different speed ratings, but with the same internal DRAM array configuration. Both cards worked. Here is a picture of the 980m with one chip switched over. Now, how did the cards react? The 980m behaved no differently: no change in max overclock. The 970 though... I expected it to be slower... but...
    970 with 1 Hynix chip, 7 Samsung (originally 8 Samsung):
    7GHz = artifacts like a crashed NES, even at the desktop
    6GHz = artifacts like a crashed NES, even at the desktop
    5GHz = artifacts like a crashed NES, even at the desktop
    2GHz = fully stable, 2d and 3d

    I didn't try 3GHz or 4GHz, but yeah, a HUGE clock decrease. I shrugged and kept switching the rest of the memory, figuring that as long as the card worked at any speed, I could figure out the issue later. Through 7/8 chips switched there was no change in max memory clocks.

    What was really fun was when I had 7/8 chips done: my GDDR5 stencil got stuck and ripped 3 pads off the final Samsung chip. Needless to say there was a very long swearing spree. Looking up the datasheet, I found that 2 pads were GND and the 3rd was an active-low reset. Hoping that the reset was unused, I checked the 970's side of the pad and found it was hardwired to GND, which meant the signal was unused. I also got a solder ball onto a sliver of one of the GND pads that was left, so I was effectively only missing a single GND connection. I put the mangled 8th chip on the 980m and it worked.

    Net gain after all of this... a 25 MHz higher max overclock. Something was obviously missing. I figured I would switch the memory manufacturer resistor, hoping that would do something. Clyde had found this resistor on a K5000m, and switching it from the Samsung value to the Hynix value had no effect for him. He found that on the K5000m the value was 35k Ohms for Hynix and 45k Ohms for Samsung. I searched the ENTIRE card and never found a single 35k Ohm resistor. Meanwhile the 970 also worked with all 8 chips swapped, at a paltry 2.1 GHz.

    Then I got lucky. Someone with a Clevo 980m killed his card while trying to change resistor values to raise his memory voltage. His card had Samsung memory. He sent his card to me to fix, and after doing so I spent hours comparing every single resistor on our boards looking for a variation.
    Outside of VRM resistors there was just a single difference: on his card (shown here) the boxed resistor was 20k Ohms; on mine it was 15k Ohms. I scraped my resistor with a straight-edge razor (I could not find a single unused 20k resistor on any of my dead boards), raising it to 19.2k, and hoped that was close enough. And it was! Prior to this I had also raised the memory voltage a little more, from 1.48V to 1.53V. My max stable clocks prior to the ID resistor change were 6552 MHz; they are now 6930 MHz, a 378 MHz improvement. Here's a 3DMark 11 run at 7.5 GHz (not stable, but it still ran): http://www.3dmark.com/3dm11/10673982

    Now what about the poor 2GHz 970? I found its memory ID resistor too. Memory improved from 2.1 GHz to 6.264 GHz. Surprisingly, the memory was slower than it was on the 980m; I expected the 970's vBIOS to have looser timings built in to run the memory faster. As for why the memory was over 100MHz slower than on the 980m: the 980m actually has better memory cooling than the 970. With the core at 61C, I read the 970's backside memory at 86C with an IR thermometer, while the 980m has active cooling on all memory chips, so they will run cooler than the core. In addition, the 980m's memory traces are slightly shorter, which may also help.

    The 980m at 6.93 GHz is still slower than the 8 GHz that the 970 was capable of with the same memory. I'm not sure why. Maybe memory timings are still an issue. Maybe, since MSI never released a Hynix version of the 970, leftover timings for an older card like a 680 were used instead of the looser timings that should have been (I know that in system BIOSes tons of old, unused code gets pushed on generation after generation). I don't know, just guessing; talking to someone who knows how this stuff works would be great. I still want 8 GHz.

    Some more pics. Here's one with the 970 about to get its 3rd and 4th Hynix chips. And here's my 980m with all memory switched to Samsung; sorry for the blurriness.

    So in summary:
    1. It is possible to mix Samsung and Hynix memory, or switch entirely from one manufacturer to the other, with some limitations.
    2. There is a resistor on the PCB that tells the GPU which memory manufacturer is connected. This affects memory timings, and maybe termination, and it has a large impact on memory speed, especially for Hynix memory. The resistor value can be changed to another manufacturer's, but it is not guaranteed that the vBIOS contains the other manufacturer's timings, and if it does, they may not be 100% correct for your replacement memory.
    3. If you take a card meant for Hynix memory, you can mix in Samsung memory of the same size if it is faster memory. If the memory is the same speed, the penalty of running Samsung with Hynix timings may hurt memory clocks.
    4. If you take a card meant for Samsung memory, you cannot mix in any Hynix memory without MAJOR clock speed reductions unless you also change the memory manufacturer resistor. Again, it is not guaranteed that the vBIOS contains the other manufacturer's timings, or that they are 100% proper for your specific memory.
    5. For Kepler cards the Samsung resistor value is 45k and the Hynix value 35k. For Maxwell cards the Samsung resistor value is 20k and the Hynix value 15k.

    Next up is changing the hardware ID to that of a notebook 980. Clyde also found the HWID to have an impact on the number of CUDA core blocks enabled. In about a month I can get hold of a 970m that someone is willing to let me measure the resistor values on; it has the same PCB as the 980m. Does Nvidia still laser-cut the GPU core package? We will find out.

    Full thread can be found here: https://www.techinferno.com/index.php?/forums/topic/9021-hardware-mod-gtx980m-hynix-to-samsung-memory-swap/#comment-134361
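The strap values measured in this post can be collected into one small lookup table. These are the numbers reported above in kilohms for the specific boards examined (K5000m/Kepler and 980m/970/Maxwell), not an official NVIDIA table, so treat other boards as unverified:

```python
# Memory-manufacturer strap resistor values (k-ohms) as measured in the post.
# The GPU reads this strap at boot to select vendor-specific memory timings.
# Values are from the author's boards only; other PCBs may differ.

MEM_VENDOR_STRAP_KOHM = {
    ("Kepler", "Samsung"): 45,
    ("Kepler", "Hynix"): 35,
    ("Maxwell", "Samsung"): 20,
    ("Maxwell", "Hynix"): 15,
}

def strap_for(arch: str, vendor: str) -> int:
    """Look up the measured strap value for an architecture/vendor pair."""
    return MEM_VENDOR_STRAP_KOHM[(arch, vendor)]

print(strap_for("Maxwell", "Samsung"))  # 20
print(strap_for("Kepler", "Hynix"))     # 35
```

This also makes the razor-scrape trick above concrete: going from the Hynix value (15k) toward the Samsung value (20k) on a Maxwell board, 19.2k landed close enough for the GPU to select Samsung timings.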
  7. What's missing, P570WM

    Any advice on what's missing in my system?
  8. Y480 SPDIF-out question

    Hey everyone. Sometimes I use my laptop's SPDIF out because I'm an audiophile weirdo. But enabling it is annoying. My previous laptop, a Toshiba, would automatically recognize when a mini-TOSLINK adapter was inserted. On this unit I have to manually make digital out the default device and switch it back when I'm done. Does anyone know a workaround? Thanks!