Posts posted by jot23

  1. Mantle does help with performance. I did some testing here:

    http://forum.techinferno.com/diy-e-gpu-projects/2109-diy-egpu-experiences-%5Bversion-2-0%5D-54.html#post82601

    and comparison in a desktop here:

    http://forum.techinferno.com/amd/5908-amd-catalyst%99-14-1-mantle-beta-driver.html#post83225

    The improvement is about the same for a desktop GPU and a laptop eGPU (about 10 FPS in BF4 on 14.1). I don't have Star Swarm, so I didn't try it. A nice Nvidia card is still better for eGPU configurations.

    It's interesting. Your CPU doesn't seem to be much more powerful. Which map did you play? The Siege of Shanghai map is harder on the CPU; I tried other maps too. Your GPU utilization is way more stable, which means the CPU is not bottlenecking the GPU as much. Mantle shows better results with powerful CPU and GPU combinations, but it should work with weaker ones too. Your GPU is a much better performer. I'll check whether Mantle works with the Star Swarm benchmark.

  2. I'm glad that software helped you out, but did you make sure you really uninstalled all the graphics drivers in safe mode and rebooted?

    Which AMD drivers did you use? Here is the latest one; 14.3 beta V1.0

    AMD Catalyst 14.2 Beta Driver for Windows

    Man, can you post videos or captured footage of your setup and gameplay, if you don't mind?

    Good luck mate!

    Still contemplating whether to start this project or to proceed with building a mini-ITX gaming PC.

    Yes, I am sure. Safe mode and many reboots to be sure.

    I used the latest 14.3 drivers and older ones: 14.1 and 13.9 (which lacks Mantle support). The performance was much the same. I'll check if Mantle works in the Star Swarm demo tomorrow.

    I could post the videos, but I'm still working on my DIY enclosure... and I'll need to return the GPU on Monday. So, I'm not sure if it will be possible to record the video.

    If you're planning on making a DIY eGPU, why don't you stay with Nvidia graphics? They work very well and the performance is quite good. Only GeForce is capable of the fastest ExpressCard connection, x1.2Opt. Radeon lacks Optimus support and AMD hasn't made an Optimus alternative. I hoped that Mantle could be a performance booster.

  3. First log:

    21109578_bf4_siege-of-shanghai_mantle-off-vs-on.png

    BF4 on map Siege of Shanghai. FullHD + medium settings. eGPU setup:

    HP 2570p

    i5-3210m

    MSI Radeon R9 270

    The first 7 minutes are gameplay without Mantle; starting at the 7th minute, I played with Mantle. As we can see, Mantle gave me... nothing. The framerate is not very good, with many drops below 40 FPS. The CPU usage is very high and it bottlenecks the GPU. Mantle should help with CPU usage, but it does not. I'll check out the Star Swarm benchmark to see if Mantle even works on an eGPU. For now, Mantle gives me nothing.
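
    Purely as an illustration of what the CPU-usage column in these logs captures, here is a tiny Python sketch of a background CPU-load logger (it assumes the psutil package is installed; HWiNFO itself works differently and is not shown here):

        import csv
        import time

        import psutil  # assumption: installed via "pip install psutil"

        # Sample overall CPU load once per second and write it to a CSV,
        # similar in spirit to the CPU Usage column in the HWiNFO logs.
        with open("cpu_usage_log.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time_s", "cpu_percent"])
            start = time.time()
            try:
                while True:
                    load = psutil.cpu_percent(interval=1.0)  # averaged over 1 s
                    writer.writerow([round(time.time() - start, 1), load])
                    f.flush()
            except KeyboardInterrupt:
                pass  # stop logging with Ctrl+C

    Run it in the background while playing, stop it with Ctrl+C, and the CSV can then be compared against the in-game framerate.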

    • Thumbs Up 1
  4. I read somewhere here that maybe Windows will auto-detect and use the Intel HD drivers, or maybe it was not totally disabled? Do you have Intel HD + Nvidia graphics on your laptop?

    Sent from my iPhone 5S using Tapatalk

    I have an HD 4000 iGPU and no dGPU. Maybe it's a driver conflict, because I usually use a GTX660 and I didn't uninstall the Nvidia drivers. But the GeForce GPU works fine with AMD Catalyst installed.

  5. I wanted to benchmark Battlefield 4. Unfortunately, the framerate stayed at about 20-30 FPS. GPU usage seems strange: it's at 30-50% all the time, while CPU usage is constantly at 100%. Moreover, the GPU render test in GPU-Z shows barely 20 FPS. I tried 4 different drivers, turning ULPS off, a different PSU... Nothing helped. Do you know what could be the issue here?

  6. Finally I've got my hands on an MSI R9 270 GPU. My 3DMark 11 score:

    i5-3210m + R9 270. The GPU score is significantly higher than the GPU score of the GTX660: 6785 vs 6019 points. Does it perform that well in games?

    No :( I've tested Crysis 3, ACIV, Batman Arkham Origins.

    Some HWINFO logs:

    a) Crysis 3.

    21083711_crysis-3_mission-1-low-settings.png 21083712_crysis-3_mission-3-low-settings.png

    Played on low settings at FullHD resolution. The performance is way worse than on the GTX660. The FPS peaks in vents etc. are higher on the R9 270, but overall the 660 beats the R9 270. The third mission is barely playable on the R9 270 with low settings. With the GTX660, I could play the mission on mixed medium to very high settings, getting higher FPS (about 35 average). I don't have a log for mission 2, "Welcome to the Jungle", with the "grass problem", but the FPS was about 10-15% lower in the grass, reaching a 15 FPS minimum. Neither card can achieve a playable framerate on that grass level.

    b) Batman Arkham Origins

    21083776_batman.jpg

    Played on almost full settings, without motion blur. The game is fully playable on both cards. The benchmark contains both a mission in a closed space and some free running around the city.

    For now, I am disappointed with the R9 270 eGPU performance. I hoped that the GCN architecture could give me some magic like Nvidia Optimus does. I'm going to test BF4 with and without Mantle to see if that helps. In theory, CPU usage should be lower with Mantle, and the communication between the CPU and GPU, which is a serious drawback of an eGPU due to limited PCIe bandwidth, should be faster.

    • Thumbs Up 4
  7. Hi! Not sure if you saw it, but I've tested an HD6850 eGPU at x1 bandwidth. The drivers used were 13.9. No severe slowdowns caused by drivers, I think. You could check out my Crysis 3 benchmarks (FPS logs) in the main topic.

    Since the 5770 and 6850 are based on the same old architecture, I think the same drivers apply to them, so the performance loss should be comparable. But the performance I've got is about what you'd expect from such an eGPU setup.

  8. And here, catch "The Root of All Evil", part one, from the beginning to destroying the badass hydroelectric generator.

    HWInfo log:

    19866258_crysis-3_map-canyon.png

    Screenshots:

    19866208_crysis3-2013-12-21-19-45-36-02.jpg 19866209_crysis3-2013-12-21-19-46-06-46.jpg 19866210_crysis3-2013-12-21-19-46-46-51.jpg 19866211_crysis3-2013-12-21-19-47-06-48.jpg 19866213_crysis3-2013-12-21-19-47-16-47.jpg 19866214_crysis3-2013-12-21-19-47-26-49.jpg 19866215_crysis3-2013-12-21-19-47-36-46.jpg 19866216_crysis3-2013-12-21-19-49-16-51.jpg 19866217_crysis3-2013-12-21-19-50-06-51.jpg 19866219_crysis3-2013-12-21-19-50-56-47.jpg 19866220_crysis3-2013-12-21-19-51-06-46.jpg 19866221_crysis3-2013-12-21-19-53-26-49.jpg 19866222_crysis3-2013-12-21-19-54-26-49.jpg 19866223_crysis3-2013-12-21-19-55-46-46.jpg 19866224_crysis3-2013-12-21-19-56-36-50.jpg 19866225_crysis3-2013-12-21-19-57-16-46.jpg 19866226_crysis3-2013-12-21-19-57-26-45.jpg 19866227_crysis3-2013-12-21-19-57-56-46.jpg 19866228_crysis3-2013-12-21-19-59-36-52.jpg 19866229_crysis3-2013-12-21-19-59-46-53.jpg

    Placed in a spoiler, because there are 20 screenshots and someone might not have played the game yet... it's my favourite mission in the game, spectacular and mind-blowing even on low settings like in this test.

    I don't have the GTX660 log at the moment, but I remember the mission was fully playable at >30 FPS minimum and about 35 average, played with high textures and medium-high settings, so again, way better on the GTX660.

    Maybe the FPS was higher because of better CPU utilisation? The CPU usage stays at about 60% average on the HD6850, while on the 660 it was more like 80-85%.

    If you can and are willing to return a card after use, why not test a 290X and be done with it :o

    Because I need a GPU comparable with the GTX660, I'd rather stay at the level of the HD7870/R9 270. But the new Radeon series is... strange. The R9 270 is sometimes better and sometimes way worse than the 660, and it costs about 10-20% more than I paid for the 660 four months ago. The 270X is a bit more powerful, but the 270 should be easy to overclock to match 270X performance. Then there is a gap, because the next GPU is the 280X, which can be compared to the 680.

    There should be an R9 280, priced at $200, as a good competitor for the 660 Ti or 670.

    I'll probably take the 270 or find an older 7870, which has a better GPU clock but a worse memory clock... and that's the only difference.

    I could take the 7870 XT too. It's priced at the level of the R9 270X here in Poland, and its performance is better. But it's only available in Club3D and XFX versions, and I'd rather stay with Asus, MSI or Gigabyte.

  9. First GTX660 vs HD6850 comparison (I don't have access to the 660 at the moment... on the other hand, such a comparison is a bit pointless, because the 6850 is way weaker than the 660 and seems more limited by PCIe bandwidth):

    Crysis 3, mission 6 (from the beginning to the second Ceph AA defence; getting there took me 7 minutes on the 660 and 12 minutes on the HD6850 due to the lower FPS, which makes it hard to play well).

    Settings:

    Resolution: 1920x1080

    All low

    GTX660@x1.2Opt:

    18897026_crysis-3-gtx660-misja-6.png

    Look at the red line; the green one represents gameplay at high settings (textures and the rest) and is really short.

    AVG FPS: 45

    HD6850@x1.2:

    19863389_crysis-3_mission-6_hd6850_fullhd-low.png

    AVG FPS: 27
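
    For anyone curious how averages like these can be pulled out of a framerate log, below is a minimal Python sketch that computes the average and minimum FPS from a CSV export. The file name and the "Framerate [FPS]" column name are illustrative assumptions, not the exact format of my HWiNFO logs:

        import csv

        # Hypothetical CSV export of a framerate log; adjust the file name
        # and column name to the actual export.
        LOG_FILE = "crysis-3_mission-6_hd6850_fullhd-low.csv"
        FPS_COLUMN = "Framerate [FPS]"

        samples = []
        with open(LOG_FILE, newline="", encoding="latin-1") as f:
            for row in csv.DictReader(f):
                value = (row.get(FPS_COLUMN) or "").strip()
                try:
                    samples.append(float(value))
                except ValueError:
                    continue  # skip repeated headers or empty cells

        if samples:
            print(f"samples : {len(samples)}")
            print(f"avg FPS : {sum(samples) / len(samples):.1f}")
            print(f"min FPS : {min(samples):.1f}")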

    Like before, the CPU usage is way lower than it was with the GTX660. It's really worth considering, because as we know, Nvidia drivers cause a greater CPU load than AMD drivers. It's much more visible on older Core 2 Duo based desktop PCs, causing microstuttering, but I think it might affect a Core i5 as well. The problem should not be present when using a quad-core CPU.

    On low settings, a GTX660 should easily average 60 FPS. I'm not sure about HD6850 performance, but according to benchmarks, 28 FPS is a valid result... for high settings. On low it should be more like 40 FPS, I suppose.

    So, we can see that both cards are limited. By what... and now it's starting to get really difficult. I'm not sure which factor is more important here. Maybe the performance is CPU-limited, as I think it is on the Welcome to the Jungle level. Of course, when it comes to the "grass moment", the PCIe bandwidth is a real drawback there, giving us drops to 20 FPS instead of a far more playable 30-35. On the GTX660, mission 6 is really playable, even at high settings. On the HD6850 it's not playable even on low settings, which keeps me wondering how a GCN-based card would perform. It would have to be less bandwidth-limited to maintain good FPS.

    P.S. I'm going to buy a GCN-based card like an HD7870/7870 XT/R9 270(X), but given that it's Christmas time, I think I'll have to wait till January, because prices are a bit higher now and shipping might take very long. That matters, because in Poland, when buying online, I can return a GPU within 10 days of testing without giving any reason. So I'd like to use that privilege for some almost-free benchmarks, even if I don't keep the AMD GPU.

    • Thumbs Up 1
  10. I quote @Tech Inferno Fan here: "The HD6850 uses the old VLIW5 architecture, not the GCN that a comparable HD7870 would."

    This architecture difference might be a very important factor to consider, meaning that AMD could yield better performance from an eGPU perspective. If you have a friend with a newer HD7xxx card that you could re-do this test with, we might get our answer. This is really exciting stuff!

    OK, maybe there is some magic that AMD implemented in the GCN architecture which works as well with an eGPU as Nvidia Optimus does, and AMD doesn't even know how brilliant they are :)

    So, I'll try to get my hands on any GCN-based GPU. I'm excited about these benchmarks too.

  11. Your 6850 GPU score is in the ballpark for your core clock rate; you can see it in the following search results:

    https://www.google.com/search?q=6850+3dm11+&ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&gws_rd=cr&ei=KXS0UuSaOoSUhQfJsIGwAg#q=6850+3dm11+6850%281x%29&rls=org.mozilla:en-US:official&safe=off

    I'm currently waiting for decent cooling for the 290, or an alternative to pop up, and it gives me a chance to study :P

    Right now the 770 is priced at what I sold my 7970 for, a 290 (non-X) is $100 higher but a reference design, and a 780 (non-Ti) is $150 higher.

    It's anyone's guess how much a non-reference 290 will cost.

    OK, I had hoped that my score was just too low :)

    Looking at my performance, I doubt that going with an AMD GPU is a good choice. I thought it would be less demanding on the PCIe bus, but it's not. And using an Nvidia GPU is easier.

  12. Oh, now I wish I had tried that grass scene before I sold my GPU :/

    But can't you test that scene on a desktop at x1, x2... etc.?

    [ATTACH=CONFIG]10035[/ATTACH]

    Or with a PCIe x1-to-x1 riser, and just test it on your PC.

    I tested it on a desktop PC with this GPU, but that one has an ancient Core 2 Duo E6550 CPU. Performance was much the same in the grass scene, which is why I thought it's rather CPU-related.

    But the first mission is not playable, even on low settings. The GTX660 or a desktop HD6850 gave me better FPS there.

    On our forum nhl.pl there is a user, waldeksik, who really helps us with testing by emulating an x1 2.0 PCIe bus on his Quadro 4000M. I'll ask him to make a Crysis 3 comparison, if he hasn't done it already (I'm not sure).

    Or we can wait till you get a new GPU :) My GTX660 and HD6850 aren't going anywhere at the moment.

    To be honest, I hoped that AMD would give me higher FPS and thus be a better choice. Looking at your BF4 performance, I'd like to get a powerful AMD GPU, but looking at how much the HD6850 is limited in the grass scene, I'm not sure if a Radeon is a better choice than a GeForce. For now, allowing for the 6850 being way weaker than the 660, AMD and Nvidia are much the same in eGPU use.

    I still think I'm missing something. I feel there is an important factor which affects my performance.

  13. That 3dmk11 result is indeed low. From this point on it's going to be difficult to make any meaningful comparison of a GTX660 versus an HD6850. The HD6850 uses the old VLIW5 architecture, not the GCN that a comparable HD7870 would. In addition, we found that HD5xxx (and likely HD6xxx) needed to run x1E to get what appears to be full duplex bandwidth on older chipsets. Certainly my testing of an HD7870 vs GTX660, both at x1 2.0, didn't have this problem. http://forum.techinferno.com/diy-e-gpu-projects/2747-%5Bguide%5D-12-dell-e6230-gtx660%40x1-2opt-hd7870%40x1-2-pe4l-ec060a-2-1b.html#post37197

    Is there any chance you can borrow a GCN-based AMD card (HD78xx, HD79xx) off someone to do what would be more comparable testing?

    You mean that it's too low for a comparison, or too low for an HD6850 card?

    Is it not possible to get a kind of x1E 2.0? I mean, could I somehow use the x1E tweak on my setup?

    For now it's not possible to get a GCN-based GPU. I'm curious what the difference is between the HD6850 and the 7770 as an eGPU.

    Maybe I could OC my GPU, but I'm not sure about the results.

    Crysis 3 performance is quite disappointing. I hoped to get FPS closer to a desktop HD6850. The FPS is not CPU-limited, is it?

  14. HP Elitebook 2570p: Radeon HD6850 eGPU implementation

    In order to compare AMD and Nvidia eGPU setups, I decided to make an HD6850 eGPU implementation. It took me three days of struggle, but I finally got it working.

    The reason for the comparison is that I wanted to check how much each card is limited by eGPU bandwidth. I've experienced huge FPS drops in a few games on my usual GTX660 eGPU, and it got me wondering whether an AMD GPU would perform better. The great performance achieved with an HD7970 GPU by user sskillz was very promising. Maybe an AMD GPU could somehow overcome the bandwidth limitation problem?

    Many thanks to Tech Inferno Fan and sskillz, who helped me successfully install the HD6850 on my notebook.

    Of course, using the HD6850 GPU required a DSDT override and PCIe compaction. I used the newest Setup 1.3, because it's much more user-friendly and powerful than the previous version I owned, Setup 1.1.

    After three days I achieved this:

    19850138_hd6850.gif

    Looks nice to me :)

    First, I ran a PCIe bandwidth benchmark using PCIeSpeedTest. The peak values over 20 runs were:

    CPU->GPU: 360MB/s

    GPU->CPU: 440MB/s

    Compared to the GTX660@x1.2Opt, there is no big difference. In fact, depending on how many runs we do, these numbers are much the same. So there is a chance that, comparing a GTX660 with its AMD equivalent (7870?), performance could be similar.
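
    For illustration only, here is a rough Python sketch of what such a host-to-device copy benchmark does, written with pyopencl (my assumption for the example; PCIeSpeedTest is a separate tool and its internals are not shown here). It times repeated buffer copies in both directions and reports the best observed bandwidth:

        import numpy as np
        import pyopencl as cl

        # Rough host<->device copy benchmark; assumes pyopencl and an
        # OpenCL-capable GPU are available.
        ctx = cl.create_some_context()
        queue = cl.CommandQueue(
            ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

        host_buf = np.zeros(64 * 1024 * 1024, dtype=np.uint8)  # 64 MB test block
        dev_buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=host_buf.nbytes)

        def peak_bandwidth(copy, runs=20):
            best = 0.0
            for _ in range(runs):
                evt = copy()
                evt.wait()
                seconds = (evt.profile.end - evt.profile.start) * 1e-9
                best = max(best, host_buf.nbytes / seconds / 1e6)  # MB/s
            return best

        h2d = peak_bandwidth(lambda: cl.enqueue_copy(queue, dev_buf, host_buf))
        d2h = peak_bandwidth(lambda: cl.enqueue_copy(queue, host_buf, dev_buf))
        print(f"CPU->GPU: {h2d:.0f} MB/s   GPU->CPU: {d2h:.0f} MB/s")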

    First bench:

    HD6850 x1.2 3DMark 11 - to me the GPU score looks normal for an HD6850, but on our forum nhl.pl people said it's too low... what do you think?

    And the most important test for me: Crysis 3, mission 2 "Welcome to the Jungle" (map fields). The reason for its importance is that it's the first and most distinctive example of the GPU being limited by eGPU bandwidth. After about 2 minutes of gameplay there is a beautiful "grass moment", when the player goes outside and discovers a New York fully covered in grass and trees. The grass moves naturally in the wind, and somehow it not only loads the CPU... it loads the PCIe bus really hard. On the GTX660 I've got about 20 FPS in this scene. How about the HD6850?

    19847595_crysis-3_all-low_hd6850-x1-2_fullhd_map-fields.png

    Unfortunately, performance is even worse than with the GTX660. I could play the first mission on the GTX660 at 30-45 FPS on medium settings; with the HD6850 I could barely pass 25 FPS. In the second mission we can see really low FPS, about 3 FPS lower on average than with the GTX660. And here is the fun part: on the GTX660 the CPU was constantly loaded at about 95-99%, while here we can see a way lower CPU utilisation.

    The GPU usage is not present in the HWInfo logs, but checked with MSI Afterburner it's a constant 99% load.

    Now, what do you think about this performance? Is something wrong? Do I need to turn something off, like AMD HD Audio...?

  15. Would it be possible to have the PCIe slot provide 150W of total power for the GPU? Currently the PE4L can provide 75W, so only a GPU with very small power consumption, like the relatively weak HD7750 or maybe a GT440, can be powered by a small power brick.

    There is a power connector for it on the PE4L board, but it's limited to 75W. If they could push it to 150W, which I think shouldn't be a problem since the PCIe 2.x standard doubled this amount to 150W (compared to the 75W provided by PCIe 1.x), it would be possible to power a few cards with great performance per watt, like the GTX660 or HD7850...

    And we could get rid of our big desktop PSUs and all the wires that come with them. There would be only one power wire going to the power connector of the adapter.

    • Thumbs Up 1
  16. If you peruse the DX11-centric leaderboard, you'll see your 3dm11 score is within the expected range.

    A GTX660Ti would net you about a ~1.5k increase in the 3dm11 GPU score. That's because a GTX660Ti is substantially more powerful, with +40% more processing units (960 -> 1344), as detailed at AnandTech | The NVIDIA GeForce GTX 660 Review: GK106 Fills Out The Kepler Family.

    A GTX660Ti's architecture has more in common with a GTX670 than with a GTX660. NVidia could have named a GTX660Ti as GTX670 and a GTX670 as a GTX670Ti to lessen the confusion.

    I consider the GTX660Ti the minimal standard required for a high performance eGPU implementation.

    Yes, of course I considered buying a 660 Ti, but the 660 was way cheaper for me.

    My 3dm11 score is very good for a 660 eGPU, but... I don't understand why we (mostly) can't get a score equal to the desktop one. In games, we see that sometimes the PCIe bus causes a huge performance drop, but as I mentioned, 3dm11 is way less demanding on the PCIe bus than those games (the Crysis 3 "Welcome to the Jungle" mission is a perfect example).

    So I just don't get it: how can we lose that much performance in 3dm11? What is the cause? The more powerful the card, the bigger the difference. For example, a GTX760 scores about 1k points less than it should.

  17. Thank you for the suggestion. While I'd love to ask about BCLK overclocking, I can pretty safely say running the bus beyond its standard 100MHz will get a flat 'unsupported' response. Even the 45W quad-core i7 CPU upgrade is a bit touchy, since HP designed and tested the system with a 35W i5/i7.

    With the missing slider, does it occur with both XTU 2.1 and XTU 3.x? It would be great if you could work with Khenglish to figure out what the 2570P limits are for PCIe overclocks. As you point out, success there would gain extra eGPU bandwidth.

    Maybe HP could at least tell us if there is any point in looking for an eGPU performance boost this way. If there is a 0% chance of overclocking PCIe, there's no reason to investigate this issue further, as I don't even have an i7 QM, which would be limited by the TDP.

    Wasn't there a 2570p (somewhere on eBay) with a 45W i7? Correct me if I'm wrong.

    I tried every version of XTU and reinstalled it dozens of times. Before the flash, there was at least a greyed-out slider. Then it disappeared and I can't get it to appear again. Khenglish said there's some voodoo needed to do it... so it makes me wonder why the reinstallation method worked for Aikimox.

  18. I know it's not a very important issue, but I would ask HP about allowing BCLK overclocking, of course in order to use it with XTU. I realise we have the genius Khenglish and his ME FW files, but... I still haven't figured out how to enable the BCLK slider in XTU. It's kind of a lottery, unfortunately.

    It could be connected with raising the max. TDP allowed by the BIOS.

    Oh, and maybe they would answer whether BCLK OC affects the ExpressCard PCIe frequency. Because if it does not, we would only need higher TDP limits to unleash the power of the more powerful i7 CPUs. This OC is really important for ExpressCard, because it could be the only way to increase eGPU bandwidth.
