
Why are Intel not allowing Thunderbolt eGPUs? Ideas inside.



What examples exist of Intel preventing (sabotaging?) Thunderbolt eGPUs?

 

Why would Intel do this?

 

Multiple reasons, but probably the biggest is eGPU CUDA/OpenCL processing cannibalizing Intel's CPU market because of the advantages listed below (a minimal offload sketch follows the list). Intel CPUs might then just be confined to booting and running a host OS, with the number crunching offloaded to the eGPU. This would surely be unaligned with Intel's profit-centric technology steering committee.
 

  1. Portable, pluggable eGPU processing can be time-shared between multiple machines with universal TB ports.
     
  2. Upgradeable by purchasing a newer video card (NVidia/AMD) rather than a new CPU (Intel).
     
  3. A mature PCIe backplane allows scalability (processing spread across many cards), something not possible with Intel CPUs.
     
  4. A large supply of perfectly capable, inexpensive older NVidia cards is being offloaded by gamers on eBay.
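
As a toy illustration of that offload, here is a minimal CUDA sketch, assuming an NVidia card in the TB enclosure is visible to the CUDA runtime (the kernel and sizes are arbitrary placeholders). The CPU merely stages buffers and launches the kernel; all the arithmetic runs on the external card.

```cpp
// Minimal CUDA sketch: the CPU only stages data, the eGPU does the number crunching.
// Assumes an NVidia card in the TB enclosure is visible to the CUDA runtime.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   // all FLOPs happen on the eGPU
    cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", y[0]);                        // expect 4.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```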

 

Is there an example of an eGPU Supercomputer?

 

Yes. 13 NVidia GPUs were used with desktop components to create a supercomputer called Fastra II in 2009: Fastra II: 12 Teraflops of Computing Power on a Desktop | TechHive. NOTE: their problematic 32-bit addressing issue has been overcome in the DIY eGPU project via a DSDT override; we can host tens of eGPUs in an extended 36/64-bit PCIe address space.
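
To see roughly why 32-bit addressing runs out, a back-of-the-envelope figure (the per-card value is an assumption typical of GeForce-class cards, not a measurement): each card reserves on the order of 300MB of memory-mapped I/O space for its BARs, so 13 cards need roughly 13 × 300MB ≈ 3.9GB, which cannot fit in the sub-4GB 32-bit window alongside RAM and chipset reservations. The DSDT override instead declares a root-bridge window above 4GB (36-bit), where even tens of cards fit comfortably.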

 

The closest so far is by DIY eGPU user @nesone at http://forum.techinferno.com/diy-e-gpu-projects/8579-%5Bguide%5D-2013-13-macbook-pro-2-x-titan_z%4016gbps-tb2-akitio-thunder2-osx10-10-a.html#post116923. That's 2x TITAN Z video cards attached over a 16Gbps-TB2 link. Each card being a twin-GPU design, there are 4 GPUs available to provide supercomputing capability.
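
For orientation, a minimal sketch of how such a rig would present itself to the CUDA runtime, assuming the TB2 link and drivers are already sorted (the output format is illustrative, not taken from @nesone's post). Each TITAN Z enumerates as two devices, so the build above should report four:

```cpp
// Minimal CUDA sketch: list every GPU the runtime can see.
// A 2x TITAN Z build should report four devices, since each card carries two GPUs.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable devices visible\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %d SMs, %.1f GiB VRAM, PCIe bus %d\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.pciBusID);
    }
    return 0;
}
```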

 

How to build a US$4k TB supercomputer with the non-ideal 25W-slot TB enclosures today?

  1. A 2013+ MacBook Pro 13/15" with 2 Thunderbolt ports.
     
  2. Three-slot budget TB enclosures like the US$500 Sonnet EE SE II (REF: http://forum.techinferno.com/implementation-guides/6879-%5Bguide%5D-2013-15-macbook-pro-iris-gtx760%4016gbps-tb2-sonnet-ee-se-ii-win-8-1-a.html#post94263 ).
     
  3. 6 NVidia cards, 3 per enclosure. Use PCIe risers to power the slots and ATX PSUs to power any PCIe power connectors.
     
  4. A Linux OS to drive it all, per the Fastra II example (a multi-GPU work-split sketch follows this list).
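
A minimal sketch of how a job could then be spread across all six cards once Linux and the driver enumerate them (the kernel and the even split are placeholders; a real workload would balance per-card throughput and the TB2 link bandwidth):

```cpp
// Minimal CUDA sketch: split one array evenly across every visible GPU.
// Assumes all six cards enumerate under Linux; the kernel is a placeholder workload.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int total = 6 * (1 << 20);
    std::vector<float> host(total, 1.0f);

    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices == 0) { printf("no GPUs found\n"); return 1; }

    const int chunk = total / devices;          // naive even split (ignores any remainder)
    std::vector<float*> dbuf(devices);

    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&dbuf[d], chunk * sizeof(float));
        cudaMemcpy(dbuf[d], host.data() + d * chunk,
                   chunk * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(chunk + 255) / 256, 256>>>(dbuf[d], chunk, 2.0f);
        cudaMemcpy(host.data() + d * chunk, dbuf[d],
                   chunk * sizeof(float), cudaMemcpyDeviceToHost);
    }
    for (int d = 0; d < devices; ++d) {         // make sure each card finished, then free its buffer
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(dbuf[d]);
    }
    printf("host[0] = %f\n", host[0]);          // expect 2.0
    return 0;
}
```

With plain synchronous copies the cards run one after another; overlapping them with streams and async copies is the obvious next step, but isn't needed for the sketch.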
     

The state of play if Intel continue to not support Thunderbolt eGPUs

 

- a continuation of budget TB enclosures with a 25W slot rather than a 75W slot, requiring power rerouting to be able to host a video card. The best enclosure to do that with currently is the US$200 AKiTiO Thunder2 PCIe box: http://forum.techinferno.com/enclosures-adapters/7205-us%24200-akitio-thunder2-pcie-box-16gbps-tb2.html#post98210.

 

- to get a factory-built 75W slot you need to pay an exorbitant amount, e.g. the US$979 Sonnet Echo Express III-D: http://forum.techinferno.com/enclosures-adapters/7872-us%24979-sonnet-echo-express-iii-d-enclosure-16gbps-tb2.html#post107170

 

- eGPU-specific enclosures with a more appropriate power configuration never get certified by Intel and so never make it to market, e.g. the Silverstone T004 and MSI GUS-II.

- vendors work around Intel-imposed limits by creating proprietary eGPU docks, e.g. the upcoming Alienware 13 or MSI GS30. REF: http://forum.techinferno.com/diy-e-gpu-projects/2109-diy-egpu-experiences-%5Bversion-2-0%5D-405.html#post110518. The difference there is that Intel still wins: gamers get what they want (for now), which is faster desktop video cards attached to notebooks, and Intel gets upgrade revenue when the machine's CPU is out of date. Furthermore, the machine-specific dock can't be time-shared with other vendors' machines or be easily extended to host multiple video cards. That's a deliberately inefficient use of technology which favors Intel maximizing its profits.

 

Comments, differences of opinion, etc are most welcome . . .


I can comment since I researched this heavily for exactly this reason: offloading calculation (not only DX11 graphics) to the GPU.

GPU calculation has been a reality in HPC for many years; just look at the TOP500 list and you'll find a lot of GPU-accelerated supercomputers.

In the graphics workstation market, since the presentation of the new Mac Pro it has been clear that Apple decided to focus more on the GPU than the CPU; otherwise why ship a high-end graphics machine with only one CPU but 2x GPUs by default, as many "traditional" 3D pros are complaining about.

The reason is that Intel's CPU performance increases are slowing down, while GPU performance keeps going higher and higher. Apple is following the trend, also using its usual strategic move of retaining customers with tight hw+sw integration, for example with Final Cut Pro X already optimized to work with 2 GPUs using OpenCL.

Also, Apple's move being based on AMD GPUs is pushing many vendors to adopt OpenCL (a recent example is the new Nuke 9), meaning even broader and cheaper GPU power available, with AMD normally being really price competitive, and the possibility for anyone to develop on an open standard rather than proprietary CUDA.

There is also a huge movement among 3D renderers to offload computation to the GPU, with some finished solutions already (FurryBall) and some hybrid CPU+GPU approaches (Indigo Renderer and a few others, with Indigo Renderer planning a pure GPU rendering option in the future). This means at least a 4-6x speed increase vs CPU rendering, with some limitations due to limited available VRAM. Notice the recent availability of 8GB VRAM "gaming" cards, useful not only for 4K gaming! Just read the pros asking for the Alienware Graphics Amplifier to be used with hybrid rendering solutions.

Following this trend, 3D professionals are also considering using 3D game engines as rendering engines instead of traditional renderers. Game engines such as Unreal Engine and CryEngine are able to present stunning visuals in real time, leveraging GPU power with almost no use of the CPU. As an example, there are already many examples of UE4 being used for architectural interior rendering.

Established renderers, with the adoption of GPU rendering, are moving toward the same territory as game engines, while game engines are pushing to adopt renderer technology such as dynamic global illumination now and real-time ray tracing in the near future. The separation between them will blur more and more.

With huge-VRAM GPUs available, and maybe the adoption of AMD's hUMA architecture, the transition from a CPU-centric to a GPU-centric (or at least hybrid) architecture will be complete.

In my understanding, this is why Intel is preventing Thunderbolt eGPUs: they won the CPU race against AMD and refocused their R&D effort on the mobile market, only to find that the high-margin, high-end 3D graphics market is going the GPU way, where they have neither a brand nor a high-end solution, and that in the mobile market they need to compete with many ARM-based architectures, again starting without any market recognition or support.

Thunderbolt eGPUs would allow anyone to keep their current hw for years, just upgrading the GPU as hUMA or equivalent solutions gain traction.

To finish, it's worth underlining AMD's situation: they lost the CPU war, but in fact made the right strategic moves. They bought ATI back before any of this was obvious, so they now have the technology and brand to compete in a GPU-driven market. They also developed hUMA, which can be the cornerstone of new general-purpose GPU-driven architectures. They pushed out the Mantle API, which is forcing DirectX and OpenGL to catch up, enabling even more the use of a low-cost CPU with a strong GPU beside it. They also partnered with ARM, pushing the adoption of that energy-efficient architecture into the traditional server market.

I think they have demonstrated really good strategic thinking, and I personally really appreciate their GPUs' value, with many workstation-class features available at a small price. The same discussion we had about Intel can be had about Nvidia, which is pushing to stop the adoption of OpenCL for evident reasons, and is introducing more and more proprietary features to retain its loyal sw vendors.

I'm not an AMD affiliate, nor do I think AMD is a non-profit angel organization. I only think they better embody what we are doing here: intelligent solutions that enable new possibilities for many in the average market, not only the high end, pushing to grow toward more ambitious goals.

I hope AMD's moves will start to pay off, since IMHO that would also mean a push for broader eGPU technology availability.


I think that with this new Alienware dock just being released, other companies will have to create official eGPU solutions, Intel included.

Yes, ditto! See, for now we've got:

the US DIY eGPU peeps,

the MSI GS30 Shadow,

the Asus XG Station 2,

and this Dell Alienware 13 with the Graphics Amplifier.

So Intel has got to support it! Hell, this is product feasibility and full market potential for them!

Any individual at Intel who blocks this must be deaf and dumb!



The only way manufacturers can implement eGPU solutions legitimately is by avoiding Thunderbolt in its entirety.

The main advantage Thunderbolt has (aside from being semi-widespread) is that some fancy stuff happens and the PCIe data is not sent down individual physical lanes; it is multiplexed to some degree.

As far as I can tell, both the Alienware and MSI systems simply expose the physical PCIe lanes down a cable - hence the dock needed for the MSI, and the fact that the AW system doesn't carry 16 lanes, to keep the cable a reasonable thickness (and because of the ULV CPU).

This gives Thunderbolt an advantage as it doesn't have to deal with skew across the lanes or anything, but the amount of R&D needed to develop a similar solution is way beyond most companies, especially as TB has enough market dominance (in a small market) to make the return on investment very small.


My idea was not to avoid TB but to keep Intel from blocking a TB eGPU solution by limiting its potential market from the beginning.

Intel didn't want TB eGPUs to cut into its CPU market, so it refuses TB licensing.

Asking Intel for a TB license for an eGPU solution with a limited market (i.e., as before: restricted by GPU type or by the applications used) could reassure Intel about the indirect costs of that license. Not ideal for eGPU but better than nothing.

The question is whether there is a technical way to really restrict an eGPU adapter to work only with a specific GPU brand, or only with some applications.
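
One conceivable mechanism (purely a hypothetical sketch, not something any shipping adapter is known to do): the dock's driver or firmware could read the PCI vendor ID of the attached card and refuse anything outside a whitelist. On Linux the check itself is trivial, as the sketch below shows; the whitelist values are just the standard NVidia (0x10de) and AMD/ATI (0x1002) vendor IDs.

```cpp
// Hypothetical sketch of brand-locking an eGPU adapter: scan the PCI bus and
// flag any display-class device whose vendor ID is not on a whitelist.
// Linux-only; reads the standard sysfs PCI attributes.
#include <cstdio>
#include <dirent.h>
#include <string>

static unsigned long read_hex(const std::string &path)
{
    FILE *f = fopen(path.c_str(), "r");
    if (!f) return 0;
    unsigned long v = 0;
    fscanf(f, "%lx", &v);   // sysfs values like "0x10de" parse as hex
    fclose(f);
    return v;
}

int main()
{
    DIR *bus = opendir("/sys/bus/pci/devices");
    if (!bus) return 1;
    while (dirent *e = readdir(bus)) {
        if (e->d_name[0] == '.') continue;
        std::string dev = std::string("/sys/bus/pci/devices/") + e->d_name;
        unsigned long cls = read_hex(dev + "/class");
        if ((cls >> 16) != 0x03) continue;              // keep only display controllers
        unsigned long vendor = read_hex(dev + "/vendor");
        bool allowed = (vendor == 0x10de) ||            // NVidia
                       (vendor == 0x1002);              // AMD/ATI
        printf("%s vendor 0x%04lx -> %s\n",
               e->d_name, vendor, allowed ? "allowed" : "blocked");
    }
    closedir(bus);
    return 0;
}
```

Restricting by application would be much flimsier, since it would have to live in the driver stack rather than in the adapter itself.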

