
Clevo PxxxDM Skylake Desktop CPU Series


spartan117ch


Your P570WM with unlocked SLI GTX980Ms will beat the shit out of a single OCed GTX 980 on stock vBIOS...looks like that's why they wanted to stop us from OCing all year long with vBIOS and driver locks...


Let me just do this for the giggles:

[screenshot: lrzzhkB.jpg]

GTX980M SLI:

FIRESTRIKE ULTRA:

5897 GPU SCORE

5759 TOTAL SCORE

NVIDIA GeForce GTX 980M video card benchmark result

FIRESTRIKE EXTREME:

11777 GPU SCORE

10341 TOTAL SCORE

NVIDIA GeForce GTX 980M video card benchmark result

FIRESTRIKE:

24838 GPU SCORE

19077 TOTAL SCORE

NVIDIA GeForce GTX 980M video card benchmark result


Personally I feel that it's more important to have a more powerful GPU in a gaming laptop, but that has recently happened, with NVIDIA showing off laptops with desktop GTX 980s inside (pure insanity).

that's what we're all about here @ pure insanity :D


So, 980M SLI still very soundly stomps a single new mobile GTX 980? I expected that would be the case. It's very tough for a single GPU to compete with dual. And that comparison is roughly the same as 980M SLI versus a desktop GTX 980.

- - - Updated - - -

Your P570WM with unlocked SLI GTX980Ms will beat the shit out of a single OCed GTX 980 on stock vBIOS...looks like that's why they wanted to stop us from OCing all year long with vBIOS and driver locks...
Yup, more customer manipulation to make themselves appear to be more successful than they actually are. That's NVIDIA for you... that's how they operate.

  • Founder

Fellas, you guys are missing the point of the GTX 980 (for notebooks). It's not necessarily there to supplant 2 x 980M, but rather to stick it to AMD and their Nano. It's NVIDIA's way of saying, "so you made Nano, a product nobody will really use, but here's our answer, and it will get eaten up by notebook gamers".

Personally I'd always go for a single powerful GPU over SLI, for many reasons: no dealing with SLI profiles, better stability, and more driver features enabled, especially if you use G-Sync (e.g. MFAA and DSR are still not available for SLI + G-Sync). Furthermore, keep in mind SLI doesn't scale 1:1, so you will usually see anywhere from 50-80% scaling with two cards. 3DMark is misleading because it has near-perfect scaling. Finally, newer engines like Unity 5.2 and UE4 do not natively support SLI, so it is being supported less and less by newer AAA games.

@Prema I'll keep an eye out for your unlocked 980 vBIOS for Clevo; should be interesting to see how it performs against 980M SLI in real-world gaming when both are unleashed.
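To put some rough numbers on the scaling point, here's a quick sketch. All the frame rates below are made-up example values (not from any benchmark), just to show how 50-80% scaling changes the picture versus 3DMark's near-perfect scaling:

```python
# Rough illustration of SLI scaling vs. a faster single GPU.
# All frame rates below are hypothetical example numbers, not measured results.

def sli_fps(single_fps, scaling):
    """Effective FPS of two identical cards when the second card
    contributes `scaling` (0.0 to 1.0) of one card's throughput."""
    return single_fps * (1.0 + scaling)

fps_980m = 60.0  # hypothetical single 980M frame rate
fps_980 = 80.0   # hypothetical single mobile GTX 980 frame rate

for scaling in (0.5, 0.65, 0.8, 1.0):
    print(f"980M SLI at {scaling:.0%} scaling: "
          f"{sli_fps(fps_980m, scaling):.0f} fps vs single 980: {fps_980:.0f} fps")
```

Even at 50% scaling the SLI pair still comes out ahead in this made-up case, but the margin is far slimmer than the near-2x that synthetic benchmarks suggest, and it drops to zero in titles with no SLI profile at all.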


Fellas, you guys are missing the point of the GTX 980 (for notebooks). It's not to necessarily supplant 2 x 980M but rather to stick it to AMD and their Nano. […]

This is exactly what I've been telling people left and right. The way tech is heading THESE DAYS, SLI is no longer a guaranteed benefit, with rare exceptions. It's now a "cool benefit that may appear" sometimes. And that's pretty stupid. It's why I have been telling people left and right to buy the single strongest GPU available before even considering SLI.

That being said, I would take a slightly weaker mobile GTX 980 (maybe with a 1000MHz core and 5000MHz memory) with some overclockability and shove two of them into a laptop rather than buy a single full mobile 980.


@Prema

Thanks for the pics and links of the new stuffs.

Is the new gpu (mobile 980) compatible with old notebooks (pre-skylake) such as p7x0zm?

Clevo has been testing a "variant" of the full GM204 in the regular P7x0DM for quite a while now...so let's see...it may or may not be made available from Clevo directly...

EDIT:

One thing is for sure: whoever offers a GXX in a standard MXM-B package is going to make huge bucks in the upgrade marketplace.

A single GXX in an SLI system makes little sense if SLI 980M can easily beat it.

EDIT2:

Heck even a single 980M can already be OCed to match or even beat a stock 980:

http://www.3dmark.com/fs/3685189

http://www.3dmark.com/fs/3685371

http://www.3dmark.com/fs/4569428


Fellas, you guys are missing the point of the GTX 980 (for notebooks). It's not to necessarily supplant 2 x 980M but rather to stick it to AMD and their Nano. […]

It's a bit hard to do innovation when the MXM-SIG is owned by your competitor, and obviously they can do whatever they want in order to fit all of the 200W TDP. Wanna bet if AMD is going to follow this year? Wanna bet which would be the first company to hit with an HBM module as well? They won't let someone steal their thunder just like that.

Why are you so quick to jump on the "no-one is going to use the R9 Nano"? The world is a bit broader than your FOV. I see it as a perfect eGPU candidate, and that section is a pretty nice chunk of this forum.

What happened with our little chit-chat back in the clock-block thread, about multi-GPUs? I see that you changed your stance. I'm still behind single-GPU setups, but it's not so funny when supposedly inferior GPUs scale better than the "almighty" :D Since I mentioned the clock-block, is it obvious now why there was all the fuss? The slides are not as impressive when a supposed mobile card (980M) can catch up to the "real deal" (mobile 980).

As I said, and I will always say - that's nGreedia for you. They have NO innovation, only brute force, because that's what money buys you - power, NOT creativity (with a few exceptions of course)! All of their shitty moves prove it. For an admin you are WAY too biased.


  • Founder
It's a bit hard to do innovation, when the MXM Sig is owned by your competitor and obviously they can do whatever they want in order to fit all of the 200W TDP. […] For an admin you are WAY too biased.

There are many holes in your conspiracy theories:

1. MXM-Sig is controlled by NVIDIA but MXM specification is available to all MXM-Sig members to use which AMD is a part of. Making the assertion that AMD isn't pushing Nano on MXM because NVIDIA controls the specification is laughable at best and moronic at worst.

2. If NVIDIA was intent on locking out overclocking on mobile chips, they would have kept it there despite the bad PR. They control the enthusiast mobile market share and no amount of petitions would have changed that. However, they readily admitted it was a mistake and rectified it. No need for tinfoil hat conspiracy theories.

3. Just because AMD developed HBM doesn't mean they will have a lead in deploying HBM2. NVIDIA has been sampling HBM/HBM2 for quite some time and Pascal has already gone into testing. We haven't heard anything about Arctic Islands yet.

4. The eGPU community is a very tiny niche (one that we actively support) and even fewer of them would go for a Nano vs a full card like a Fury or 980 Ti. AMD would have been better served making Nano a mobile chip, but apparently the geniuses at AMD didn't do so.

5. 980M requires vbios hacks + very liberal overclocking to catch a stock 980. This isn't new, people have been doing that sort of thing for years. It doesn't take away from the fact that a stock 980 is better built (more power phases) and is a much faster full GM204 part.

6. Nothing about my multi-GPU stance has changed. They are still the optimal method for high-end gaming (especially at high resolution), but for the majority of laptop gamers who operate at 1080p, it makes very little sense now thanks to software stagnation (this may or may not change with DX12).

As for me being an admin and supposedly biased, there is no requirement here for any mod, admin or user to be unbiased as there is no such thing in this world. Everyone has an opinion and they are welcome to it as long as T|I rules are followed and the discussion doesn't go off topic.


1. And you think that I don't know that? I'll tell you something else though - you see this new module? Is there anyone else that can take advantage of it? There is a better, more future-proof technology around that can be put on it, and WILL be put on it, just not this year. It actually can fit on a standard MXM-B and still get like 6 phases for the power delivery; it just needs different hole spacing. Wouldn't that be a better option? It sure is, but you can see the extra lengths someone went to, just to "smack" someone else.

2. Actually I'm quite surprised that they called it off. It's not like they were going to lose any clients. Also it's not like the benches weren't starting to gain steam. So maybe it was a bit too late, who knows. At least my tinfoil hat protects me from nGreedia's radiation; I can lend you one if you want.

3. Of course it doesn't, but this was their chance, and obviously they won't be the first with an HBM MXM module, and most people would buy whatever the green team throws at them anyway. Even though Fiji is better at 4K. Future-proofing, who cares, they'll get next year's model as well.

4. It's not widespread, but I can't consider it tiny either. I'd rather get a minimal yet powerful setup, and I would guess that I'm not alone. As I said above (in point 1), MXM is a standard, with chip/RAM/VRM spacing, placement and so on. Someone can make a pretty custom module to fit their needs, but for others a simple* hole-spacing change seems to be impossible. Feel free to explain how this happens. I can't.

*Actually it's not simple at all, but both require a change in spec, and it was obviously done for this new module which we can see in the pictures.

5. True. But the charts, the charts...

6. Optimal and troubleshooting don't mix well for me.

Note taken.


Nothing about my multi-GPU stance has changed. They are still the optimal method for high-end gaming (especially at high resolution), but for the majority of laptop gamers who operate at 1080p, it makes very little sense now thanks to software stagnation (this may or may not change with DX12).

As far as I can tell, DX12 will change nothing with any current nVidia card, due to the amount of memory data needing to be transferred for current games. Some games don't have a lot of assets readily accessed and could probably use SFR with current non-XDMA tech, but they're not the majority as far as I can see.

Even though Fiji is better at 4K.

It's not that "fiji" is better at 4K. It's that "GCN" is better at 4K than "Kepler" and "Maxwell". Fiji's problem is that it has some low-utilization bugs and (as of at least a couple weeks ago) some frametime issues, and the fact that it's got only 4GB of vRAM. It means that in situations at 4K where people (especially those who have more than one) are cranking up the AA and other things that use vRAM like crazy, it CAN run into a vRAM bottleneck. No amount of memory bandwidth is going to help if it needs to hold 5GB of data in a 4GB frame buffer. This may not really be often encountered, mind, but it is a possibility and for those users who would grab 2-3 GPUs, watercool em all and crank up to maximum at 4K? They're not the best choice.
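To put some rough numbers on the vRAM point, here's a deliberately simplified sketch (my own back-of-the-envelope math, not measured data; it ignores compression, driver overhead, and extra G-buffer targets): even before a single texture loads, cranked-up MSAA at 4K multiplies the raw render-target footprint.

```python
# Crude render-target memory estimate at 4K; textures, mipmaps, and
# driver overhead all come on top of this. Intentionally simplified.

def render_target_mb(width, height, bytes_per_pixel, samples):
    """Uncompressed size of one render target in MiB."""
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

w, h = 3840, 2160                        # 4K
color = render_target_mb(w, h, 4, 4)     # RGBA8 color buffer, 4x MSAA
depth = render_target_mb(w, h, 4, 4)     # 32-bit depth/stencil, 4x MSAA
print(f"color + depth at 4K with 4x MSAA: {color + depth:.0f} MiB")
```

A few hundred MiB of bare buffers is survivable on its own, but stack modern 4K texture sets on top and a 4GB card can genuinely run out of room, which is exactly the over-commit scenario described above.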

nVidia is taking their cards very very slowly, despite what their marketing says. They can be "pushing for 4K" as much as they want, and "pushing for advancement" as much as they want, but they build their cards to satisfy the minimum requirements for gaming in the generation they were created in (Kepler falters above 1080p, Maxwell falters above 1440p), and it backfired on them with that DX12 benchmark the other day where their cards flat out proved they don't have the computational capability to do whatever DX12 asks. Their double precision is dead, even for Quadros, and their CUDA support is dwindling so far down you might as well consider it dead in the water for consumer cards. But none of that is needed for gaming, so nobody buying their cards for gaming cares.

On the other hand, AMD's sort of ahead of its time, you could say, but it doesn't help them *NOW*, which is obviously a huge issue for them. Also, the same deal I called out when the Hawaii cards came out is still in effect: making bigger, hotter, more power-hungry cards repeatedly doesn't help. Their current line is evidence of it: you can't grab an R9 390 on a 500W PSU like people can do for a 970, etc., and until the R9 380X comes out, there's a huge power gap between the 960/380 and 970/390.

I'm just going to hope Arctic Islands is enough of a threat to get nVidia to clean up their act with Pascal. This voltage up/down clocking with Maxwell and mismatched clocks in SLI all the time is just plain annoying.


@Prema

Thanks for the pics and links of the new stuffs.

Is the new gpu (mobile 980) compatible with old notebooks (pre-skylake) such as p7x0zm?

It'd better be. If I waited 15 months only to still have the 980M as the top MXM GPU at the same $720, I'll be pissed.

If the new card is incompatible with the P150EM, at least 980M prices will drop a lot on eBay as AW, MSI, and later Clevo users upgrade.


It'd better. If I waited 15 months to still have the 980m as the top mxm gpu at the same $720, I'll be pissed.

If the new card is incompatible with the P150EM, at least 980m prices will drop a lot on ebay from AW, MSI and later Clevo users upgrading.

The P8xxDM is coming out with 980M SLI or a single 980.

From this configuration, I am thinking that the 980 is compatible with the old MXM but needs something extra.

The "something" may be power or thermal control.


p8xxdm is coming out with 980m sli or 980 single.

From this configuration, I am thinking that 980 is compatible with the old mxm but need something.

The "something" may be power or thermal control.

As long as it exists and the EM series BIOS can handle it, I will make it work.


p8xxdm is coming out with 980m sli or 980 single.

From this configuration, I am thinking that 980 is compatible with the old mxm but need something.

The "something" may be power or thermal control.

It might be physical space, or maybe the P870DM will have two motherboard designs and one can use a mobile 980 and the other can use SLI 980Ms? =D.

That being said, the huge 200W mobile 980 doesn't have an SLI connector.


It might be physical space, or maybe the P870DM will have two motherboard designs and one can use a mobile 980 and the other can use SLI 980Ms? =D.

That being said, the huge 200W mobile 980 doesn't have a SLI connector.

The MSI GT80 is coming out with the 980 too. I am not sure whether it is coming with an SLI or a single configuration though.


MSI GT80 is coming out with 980 too. I am not sure whether it is coming out with SLI or single configuration though.

SLI, MXM config, but it will be downclocked and such to fit the smaller power envelope.

Hopefully the cards aren't VRM-gimped like 980Ms or vRAM-gimped like 680Ms or something.


my take on the 980 is that it's physically compatible with the mxm slot but needs too much space in width, and also different mounting holes to hold the board down. so a p870dm would make that extra space on one end to accommodate the 980, but also have both screw-hole locations in order to accommodate the regular mxm cards as well.

the gt80 has been bga cpu and regular mxm gpu thus far. additionally, the full-fledged 980 doesn't sport sli connectors as d2 mentioned, so msi should be sporting regular mxm modules ;)



I am sure we are going to see the 200W module modded into quite a few places where they were not designed to go. I am excited to see what our community will be able to pull off. :)

My main concern is the middle post on the EM. That thing is pretty important structurally for the laptop. I'd really hate to cut it to fit the card, then find that the card is incompatible with the screwy EM bios.

Oh yeah, and finding where to get the card for something less than the price of a new laptop.


The left side of the card will pass through the cpu heatsink screws on our EM's... What a big card!

Also memory chips with proper cooling contact are great

If this is the future standard for Clevo cards, it'll probably fit any future card.

Just imagine a full fat pascal with HBM2 and space to power it properly.

