Posts posted by Another Tech Inferno Fan

  1. Note that low-wattage PSUs tend to also have terrible power quality and voltage ripple.

     

     That, and they're probably safety hazards.

     

    If you must buy a low-wattage PSU, find a used Delta PSU on your local classifieds. They sometimes come as part of a prebuilt low-power system. You should be able to find a 280W one.

     

    But at that point you might as well just spend the extra $10 or whatever and get a decent used Coolermaster or Antec.

  2. So it's a well-known fact that the EXP GDC (all versions), PE4C (v2.1), PE4H (v2.4), and PE4L (v1.5) all use some kind of socketed connection to hook themselves up to the laptop's mPCIe or ExpressCard slot.

     


     

     This socketed connection usually takes the form of a physical HDMI or mini-HDMI port, which are notorious for being loose, non-locking connectors. Why the manufacturers decided on HDMI/mHDMI boggles my mind; they could have used literally any other connector.

     

     The physical HDMI connection between the eGPU adapter and the laptop can get very loose, causing signalling issues between the graphics card and the computer. Usually this isn't much of a problem: it's a digital signal, so it's not like your pixels will come out mangled or anything.

     

     However, sometimes it causes the signal to drop out intermittently, disconnecting the graphics card from the PC and crashing the graphics driver. Usually this just means the 3D game freezes momentarily and then comes back to a perpetual black screen, forcing you to restart the game, disable/re-enable the card in Device Manager, and re-apply your overclock/underclock settings (if any); I've sketched a script for that recovery dance at the bottom of this post. Sometimes it crashes the PC outright with a BSoD.

     

     I thought putting some constant downward force (i.e., perpendicular to the plane of the PCB) on the HDMI cable would solve this, but that doesn't seem to work.

    EDIT: Googling "loose hdmi port" doesn't yield any useful results because of audiophools and console peasants lowering the signal-to-noise ratio

     

    Has anyone figured out any other hacks/solutions to get around this?
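
     In the meantime, the closest thing I have to a workaround is scripting the Device Manager dance so the recovery takes seconds instead of minutes. A rough sketch (Python driving devcon.exe from the Windows Driver Kit; the devcon path is an assumption, and the wildcard hardware ID matches any NVIDIA PCI device, so swap in your card's exact ID from Device Manager > Details > Hardware Ids):

     # Rough sketch: scripted "disable, wait, re-enable" for the eGPU.
     # Run from an elevated prompt; overclock settings still have to be re-applied by hand.
     import subprocess
     import time

     DEVCON = r"C:\Tools\devcon.exe"   # assumed path to devcon.exe (Windows Driver Kit tool)
     GPU_HWID = r"*VEN_10DE*"          # matches any NVIDIA PCI device; narrow this to your
                                       # card's exact hardware ID so it doesn't also catch
                                       # the NVIDIA HDMI audio device

     def bounce_gpu():
         """Disable the card, give Windows a moment, then re-enable it."""
         subprocess.run([DEVCON, "disable", GPU_HWID], check=True)
         time.sleep(5)
         subprocess.run([DEVCON, "enable", GPU_HWID], check=True)

     if __name__ == "__main__":
         bounce_gpu()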

  3. 1.2 watts is a ridiculously small amount of power. Even a bog-standard USB 2.0 port can put out more than double that (5V at 500mA is 2.5W).

     

    If Akitio's board can't handle outputting 1.2 watts of power, then I seriously question their existence as an electronics company.

     

     1.2 watts. 1.2 stinking watts. I bet I could output more than 1.2W when I go pee in the morning. I bet I'm putting out more than 1.2W just by typing this bloody post.

     

     It's not like you're trying to power a 1.2-gigawatt time machine. You'd need a nuclear fission reactor to get power of that magnitude, but it would still be loads of fun as a weekend project.

  4. 1. "Engineering" implies practicality. The simplest solution is often also the most practical. Physically move the card.

    2. Please don't create duplicate threads.

    3. Perhaps you could program an FPGA to take the 4 lanes from the card (Thunderbolt only carries 4 lanes of PCIe) and connect them to either a Thunderbolt link (in eGPU mode) or a desktop motherboard slot (in desktop mode). In principle it's just reading the bits on one side and passing them through to the other, though I don't know what kind of bandwidth you'd actually need.

    4. I spent 5 minutes searching on Google and found this: http://www.nxp.com/products/interface-and-connectivity/interface-and-system-management/pci-express/pcie-channel-switches:MC_52251?cof=0&am=0&tab=Products
     

    This TI part does exactly that kind of lane switching and shows an application diagram in section 10.2 of its datasheet: http://www.ti.com/lit/ds/symlink/hd3ss3415.pdf
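
    To make the switching part concrete: a mux like the HD3SS3415 steers the four lanes itself, and all the controller has to do is drive its select pin. A toy sketch of just that control side (Python on a Raspberry Pi standing in as the mode controller; the GPIO number, wiring, and which SEL level maps to which port are made-up assumptions, and the host would still need a PCIe rescan or reboot after flipping it):

    # Toy sketch: drive the SEL input of a 2:1 PCIe lane mux to pick where the
    # card's four lanes go. Pin number and polarity are hypothetical.
    import RPi.GPIO as GPIO

    SEL_PIN = 18   # hypothetical GPIO wired to the mux's SEL pin

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SEL_PIN, GPIO.OUT)

    def select_path(egpu_mode):
        # Assumed mapping: SEL high = Thunderbolt/eGPU side, SEL low = desktop slot.
        GPIO.output(SEL_PIN, GPIO.HIGH if egpu_mode else GPIO.LOW)

    select_path(egpu_mode=True)   # route the lanes to the eGPU side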

  5. 15 minutes ago, burrlin said:

    GTX 980 Ti has a 8 pin (max150W) and a 6 pin (max 75W), which if I utilize both pin sets would get me to 225W correct? But doesn't the graphic card need 250W to run at full speed? Is this a dumb question?

     

     This is a dumb question. You're forgetting that the PCIe slot itself supplies up to 75W on top of the connectors, and no manufacturer ships a card that can't power itself. If I applied your logic to the GTX 750 or any other card with no power connectors at all, it would imply those cards need 0W to function, when in reality they draw everything they need from the slot.
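
     Just to spell the budget out with nominal PCIe spec ratings, here's the sum as a trivial Python snippet:

     # Nominal power budget for a reference GTX 980 Ti (250W TDP)
     slot_power = 75    # W, PCIe x16 slot
     six_pin    = 75    # W, 6-pin PCIe power connector
     eight_pin  = 150   # W, 8-pin PCIe power connector

     available = slot_power + six_pin + eight_pin
     print(available)   # 300W available vs. a 250W TDP, so the card runs at full speed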

  6. It seems that it's pretty common to mod a CPU liquid cooler onto a GFX card: http://www.overclock.net/t/1203528/official-nvidia-gpu-mod-club-aka-the-mod

     


     

     As it is, under FurMark at 1.050V (-0.025V from factory voltage) and an 810MHz core clock (+13MHz over factory), my GTX 580 SC runs at 87C with 85% fan speed (I use a custom fan curve), ambient temp 30C. It runs stable at an 888MHz core clock (+91MHz) at stock voltage, but I don't run it at that setting for fear of overheating.

     

     I could definitely push the overclock higher, though, if not for that same fear of overheating.

     

    I would love to try this out on my 580... If I weren't such a cheap prick who doesn't want to pay even $50 for a used liquid cooler.
