Everything posted by D2ultima

  1. You're going against the laws of physics with half your desires.
  2. Well, in size, yes. But in weight, it may be a different story. The GS30 is a small machine and can be a powerhouse when docked, but it's also quite light, which is the point of it being a "laptop" when undocked. A desktop i7 running on the iGPU won't bring a whole lot of battery life, will need a fair amount of weight to be cooled properly, and the power brick would need to be substantially large (120W minimum, 150W-180W for overclocking). Sticking to the non-OCable chips it might be different; an i7-6700 with a TDP lock and a 120W PSU might be okay. But in my head I can't get it to match the practicality of a GPU-less laptop-and-dock setup the way the GS30 does. Of course, the CPU will be far better by not being BGA crap, but still: it doesn't seem to fit the part on battery life or weight, though the size/power possibilities are definitely promising. For the people who'd pick something like the GS30 (and I've met a few online), even a W230SD would be too heavy for the laptop side of its job.
  3. I actually think, @chap, that a model like that may not sell all that well because of the cooling it would need, but it's an interesting idea.
  4. Yeah, it seems like you have either a terrible paste job or bad contact. I can't OC as high as you can, but since my CPU cooling is PATHETIC in comparison, the fact that my laptop can handle Linpack suggests you should be able to handle rendering with max fans.
  5. Hi. That can happen: either your heatsink is not contacting your CPU properly, or you are not using max fans. 50c at idle is not hot... with auto fans. If you see that with max fans, you have a problem. No, that is the critical temperature at the IHS, NOT the critical temperature of the CPU. The critical temperature of the CPU die (which is what the sensors read) is 105c, and it will thermally throttle at 100c.
  6. - Backwards/forwards compatibility for GPUs (like the legacy Alienwares had)
     - Reduce electrical noise over audio as much as possible (for analog headsets etc.)
     - Good mics, like the D900F used
     - Both eDP and LVDS connectors on motherboards designed to hold both, for better LCD choice
     - A flap over the SD card slot, to keep dust out when it's unused for long periods
     - No clickpads like on the P370SM3; they really suck
     I can't think of anything else right now.
  7. I believe I touched on some of this in my post, but I'll gladly re-answer a few points.
     Yes, we should. In fact, for the higher-end models like the P7xxZM, if a desktop MXM GPU line is available, I don't see why mobile chips even need to be considered (at least for single GPU). They bring little to the table that the desktop-designed chips don't (maybe only Battery Boost?), and as long as the MXM desktop chips can downclock like the mobile chips can (my 780M gets as low as 135MHz core / 162MHz memory and draws very little power at idle), I see no real reason to stray from desktop MXM formats in new models. BUT, this should only be expanded once Pascal comes out. The GTX 970 is broken by design and the GTX 960 is, for all intents and purposes, inferior to a 970M with a good vBIOS on it. The 980 is a good idea, but we should only be considering it once Pascal is out. And the prices shouldn't be exorbitant either... that needs to be mentioned. There is no reason that fitting an un-binned $500 desktop GPU onto an MXM board should bump its price to near $1000. Even with the current mobile chips, performance per dollar is extremely low, and they don't even come bundled with their full cooling systems (just a heatsink), since the fan is part of the machine they go into.
     Yes, BGA shouldn't be used unless it's absolutely necessary for a form factor, and I don't believe it is for these form factors. Though I'd like it if you could fight Intel into bringing back sockets on their mobile chips. BGA is no good for a consumer, no matter how you spin it. Upgradeability and modularity of components is the way to go.
     I don't know what a "super monster" is, but do it. I don't think there are many 13" powerhouses around... though the P640Rx is in a nice spot as well. Also keep the W230SS/SD/RD/etc. type machines; they're great too.
     Go ahead with the 18.4" model IF it helps you improve on something or add something to the system, like an ODD or more fleshed-out cooling. If the only difference over a 17" model is screen size, there'd be little point. But I think an 18" model can definitely prove useful.
     I think at least one 2.5" HDD slot is useful; cheap storage is often necessary. But as long as M.2 drive controllers heating up won't harass anything else in the system, feel free to do something like 4 x M.2 + 1 x 2.5" 9mm SATA III drive. I think the space should be there; I just don't know if the chipset lanes are. Storage is useful indeed, but of the people I've seen, very few would fill up on pure SSD storage; far more would rather take two 2TB HDDs plus mSATA/M.2 SSDs.
  8. Well, it flat out won't fit in the SLI models... you'd need to fully remove one of the mSATA ports, even if we could modify heatsinks to fit it. The P870DM, with the eDP cable/LCD cover modded for the 120Hz panel, looks like a better idea for us SM/SM-A users, especially considering the limited CPU overclocking we're capable of. Single-GPU users might have it a lot easier, especially the ZM users.
  9. You'll not see another GPU driver from Sager for Windows 10 on that machine; you'll have to update the GPU drivers yourself. If you were getting shutdowns for no reason, though, you were eligible for an RMA. Especially if you can't use later drivers... which are, in every sense of the word, "necessary" for video cards.
  10. I accidentally voted for "under $3000" when it should have been "over $3000", so if you could switch one of those votes around for your tally, that'd be great. You'll know which one is my vote in the table, Prema. Trust me. XD
  11. Honestly? I think your best bet is to sell the M290X cards and buy 970M cards, if you can do so.
  12. The upcoming P870DM, from mostly anywhere
      The P570WM, from Eurocom
      The P377SM-A, from Eurocom (though the P870DM is a better buy, unless the 120Hz screen is that attractive to you. Also, it MAY be possible with enough modding to get the 120Hz screen into the P870DM.)
  13. It is an SLI system; I understand your pain. A lot of the games I've been playing recently have not been very nice with SLI. In terms of single-GPU notebooks, a P870DM with the mobile 980 is definitely the way to go. The mobile 980 as it's being advertised right now likely won't fit, though there is almost certainly going to be an MXM version. You may also want to consider getting a 240W PSU for your P150EM if it's compatible; I believe you'll need an adapter mod LIKE THIS, as well as a Dell PA-9E 240W PSU.
  14. The P570WM will have a word with any laptop that has a single 980 in it. A word like this: NVIDIA GeForce GTX 980M video card benchmark result - Intel Core i7-4960X, Clevo P570WM powered by PremaMod.com
  15. SLI, MXM config, but it will be downclocked and such to fit the smaller power envelope. Hopefully the cards aren't VRM-gimped like 980Ms or vRAM-gimped like 680Ms or something.
  16. It might be physical space, or maybe the P870DM will have two motherboard designs, with one using a mobile 980 and the other using SLI 980Ms? =D. That being said, the huge 200W mobile 980 doesn't have an SLI connector.
  17. Yeah, GF114 was the one the world saw for desktops. I remember the day the 470M came out: I sat there doing the math on the cards and determined it was better than the 480M, and people on NBR were telling me I was crazy =D. When they found out it was true, the world flipped out. This is most likely true. I was considering the mobile GTX 980, not the 980M. I know it should work fine for 980Ms, even with some slight OCs. I don't have the heatsinks yet; there's a job at a new company I'm aiming for early next year, and that's when I was planning to start purchasing parts. But the mobile GTX 980s that appear in MXM formats... THAT I don't know about.
  18. As far as I can tell, DX12 will change nothing with any current nVidia card, due to the amount of memory data that needs to be transferred for current games. Some games don't have a lot of assets readily accessed and could probably use SFR with current non-XDMA tech, but they're not the majority as far as I can see.
      It's not that "Fiji" is better at 4K; it's that "GCN" is better at 4K than "Kepler" and "Maxwell". Fiji's problem is that it has some low-utilization bugs and (as of at least a couple of weeks ago) some frametime issues, plus the fact that it's got only 4GB of vRAM. That means that at 4K, where people (especially those with more than one card) crank up the AA and other settings that eat vRAM like crazy, it CAN run into a vRAM bottleneck. No amount of memory bandwidth is going to help if it needs to hold 5GB of data in a 4GB frame buffer. This may not be encountered often, mind, but it is a possibility, and for those users who would grab 2-3 GPUs, watercool them all and crank everything to maximum at 4K, they're not the best choice.
      nVidia is taking their cards very, very slowly, despite what their marketing says. They can be "pushing for 4K" and "pushing for advancement" as much as they want, but they build their cards to satisfy the minimum requirements for gaming in the generation they were created in (Kepler falters above 1080p, Maxwell falters above 1440p), and it backfired on them with that DX12 benchmark the other day, where their cards flat out proved they don't have the computational capability to do whatever DX12 asks. Their double precision is dead, even for Quadros, and their CUDA support is dwindling so far that you might as well consider it dead in the water for consumer cards. But none of that is needed for gaming, so nobody buying their cards for gaming cares.
      AMD, on the other hand, is sort of ahead of its time, you could say, but that doesn't help them *NOW*, which is obviously a huge issue for them. Also, the same thing I called when the Hawaii cards came out is still in effect: repeatedly making bigger, hotter, more power-hungry cards doesn't help. Their current line is evidence of it: you can't run an R9 390 on a 500W PSU like people can with a 970, and until the R9 380X comes out there's a huge power gap between the 960/380 and the 970/390. I'm just going to hope Arctic Islands is enough of a threat to get nVidia to clean up their act with Pascal. This voltage-dependent up/down clocking with Maxwell and the constantly mismatched clocks in SLI are just plain annoying.
  19. The reason I mentioned it is that my D900F had a thermal pad on the PCH and incorporated it into the CPU heatsink, so seeing the HM/EM/SM/SM-A models skip that left me wondering: why?
  20. This is exactly what I've been telling people left and right. The way tech is heading THESE DAYS, SLI is no longer a guaranteed benefit, with rare exceptions. It's now a "cool benefit that may appear sometimes". And that's pretty stupid. It's why I have been telling people left and right to buy the single strongest GPU available before even considering SLI. That being said, I would take a slightly weaker mobile GTX 980 (maybe with a 1000MHz core and the 5000MHz memory) with some overclockability and shove two of them into a laptop rather than buy a single full mobile 980.
  21. Well yes, they did, though haven't we had full board cards for quite a few gens?
      My 280M was full-blown G92b
      The 485M/580M/675M were full-blown GF104/GF114
      The 780M and the (broken garbage) 880M were full-blown GK104
      The 680M was a 670's core with terrible vRAM, but Kepler was re-done twice, with much better cards each time except the unmentionable. So I more or less "expect" them to do something like that. And with the terrible VRM state and such of the 980Ms... at least they're doing SOMETHING halfway decent; they had no real reason to, with no competition whatsoever. *sigh*
      Question though: do you think the P370EM's slave GPU heatsink (lapped) would be capable of handling a mobile 980's heat? I know the 980Ms would be no problem, but I'm unsure about that last one. I want the better CPU cooling, but if the GPU won't hold up very well... I'll have to make a hell of a choice.
  22. 780Ms pretty much did 6GHz without issue, and the Hynix-vRAM MSI cards with Titan memory were easily able to cross that too. 880Ms also did higher vRAM clocks as far as I know. But with the 980Ms, I've *NEVER* seen a stable 6GHz card. If it's simply crosstalk issues, why is that so? What makes the crosstalk so much worse on the 980Ms than on the 700M/800M series?
  23. It's more that it sounds wrong. The Mobile GTX 980 not being the 980M confuses everybody, and I think the reason is that nVidia does not want to make a new line. They probably want Pascal to be the 1000 series, or they might forego the whole 1000 deal and make new GPU names.
      That's probably the reason their desktop cards are in such a state right now too. Kepler had a disgusting lineup for the consumer at launch, and it was fixed with the 700 series. The 670 was too close to the 770 for business, so it was relegated to an obscure 760Ti card for OEMs (yes, that EXISTS, just like the 192-bit-memory-bus GTX 760 with 1.5GB/3GB vRAM, which is wholly inferior to the 3GB 660Ti models), and the 760 was designed to fix the mismatched-memory issue of the 2GB 660/660Ti (adding more ROPs and memory bandwidth) while being a "weaker" midrange card so people would want to buy the 770.
      But look at what we've got now... a midrange *60 card with a pitifully slow 128-bit memory bus, unable even to match the GTX 285 from what, seven years ago? Even OC'd to an 8000MHz effective memory clock, it just barely matches a 660Ti's bandwidth, and can't match my STOCK 780M's bandwidth. And that's already crediting Maxwell's memory bandwidth improvements as a 1.15 multiplier on the end-result bandwidth for an extra 15% benefit (a quick bandwidth calculation is sketched after this list). It doesn't matter if the core is good (which it isn't for that tier of card, considering it's GM206 even though the 965M, which is the same card, is GM204, there's no 960Ti, and it's half a 980's cores) if the memory bandwidth can become such a limiting factor. I always say vRAM isn't a limiting factor on a GPU at all, but memory bandwidth is, and that card should be 192-bit at worst.
      And then the lovely 970, marketed as a 256-bit card to this day, even though it is effectively a 224-bit card (the memory bus width is the sum of the memory controllers working in tandem, which is 7 x 32-bit in this case), and that vRAM problem is absolute poison for anybody who is affected (even if 95% of people are not)... it should have been a 3.5GB card; it would have been left alone and we'd have had a perfect little card. But no. nVidia screwed up, tried to hide it, and refuses to admit any mistakes. And they probably won't until Pascal is out, because admitting you made a mistake on a current-gen, still-selling, non-recalled card is corporate and PR suicide.
      Then the 980, the only actually good card in the lineup... still tiered too high for its own good, and costing too much for its own good ($520+ for a 4GB midrange card when a 6GB top-end card can be found for as little as $120 more and is 40% or more stronger?). If this card would just drop to $420-450 we'd have no problems. But no. nVidia.
      And then the 980Ti, the only well-done, well-priced, well-made card in the WHOLE lineup of Maxwell desktop cards. ONE card that has the whole package. One. Lovely. Nice choices, nVidia.
      And the Titan X: overpriced garbage that tops out in OCing around where the 980Ti does; $350+ more for an extra 6GB of vRAM and some extra cores that don't help, due to its inability to OC as high as its younger brother (except maybe for people like john, but he's an exception). It doesn't even have a double-precision block to make itself worthy of the "Titan" name.
      And now the crown jewel: the "Mobile GTX 980", not to be confused with the "GTX 980M", not to be confused with the "GTX 980". Still with low-voltage vRAM.
  24. I am late to this thread, but my list must go on!
      - Good PCH cooling solution. I once decided to play Black Ops 2 while rendering a video in Sony Vegas 13 and watching a livestream; my PCH hit 105c even though my CPU never passed 85c.
      - Socketed CPUs forever, even if desktop CPUs must be used.
      - Socketed GPUs forever, of course. Upgradeability is beautiful.
      - 120Hz 8-bit colour 1080p and 1440p panels (especially for the models in the class of the P7xxZM, P7xxDM and P870DM). I feel like any machine beyond my P370SM3 is somewhat of a downgrade if I can't have 120Hz again.
      - 3D Vision to return. I'm also sad that to upgrade from 780Ms to 980Ms I'd have to lose 3D.
      - Larger single power bricks. I'm hoping we can at least get to 400W or 500W single-brick systems at the quality of the current 330W bricks.
      - Unlimited motherboard power allowance. Just like our systems can use two 330W bricks currently, if we attach two 400W or 500W bricks (building off the above point) we should be able to use it all. GPUs and CPUs are quite power hungry these days!
      - Better heatsinks. A lot of the heatsinks seem to come somewhat warped or otherwise have poor contact, and even though the fans and cooling DESIGN are very good, the heatsinks vary in effectiveness and lapping is often very beneficial. If they could all be as good as most of the old Alienwares (M17x, M18x) were, we'd be far better off.
      - Better cooling design. Machines like the P1xxSM-A and P37xSM-A are great, but they could be better, as the P7xxZM models showed.
      - Better keyboards, with the quality of the D900 series at least (I've not used the P7xxZM keyboards, so I can't judge them).
      - iGPU functionality (for Quick Sync acceleration) on the desktop-CPU machines, but without giving the iGPU access to any displays. The desktop chipsets should allow this.
      - Mux switches are always great; dGPU-only mode should be possible on all machines.
      - Full overclockability support, and good BIOS options for end users. IF POSSIBLE, a bypass for the HQ mobile chips on the models using them, so they can draw more than their 47W/45W limits beyond 2.5 minutes (see the power-limit sketch after this list for where those limits live). It's currently impossible for most machines, but I think you should be able to design the BIOS to override the CPU's limits?
      - Appropriately sized power bricks for all the machines. The P6xxSx and P6xxRx machines are nice at 180W, but a 220W or 230W option would suit them very well.
      - Good QA on the machines' basic designs. I've seen some early P650Sx users complain that their HDMI ports refused to work due to the chassis design, and modifying the case fixed the issue. Later machines shipped without this issue, but it shouldn't have happened with the first models.
      - Support for high memory speeds and some memory overvolting.
      - Brighter and nicer backlight control for keyboards (more than just 3 areas), and on models where more than the keyboard is backlit, the ability to turn each backlit part on/off individually (for example, lightbar on + no keyboard lights + no red backlights on the P375SM should be possible).
      - Nice colour gamuts for screens! 72% NTSC should be the minimum; there's no reason to accept less.
      - IPS panels with better response times, especially if they're the only option for a machine (like the P770ZM-G). Also, 6-bit IPS panels are a joke.
      - Better audio! MSI's Dynaudio is fantastic, and I think Clevos can get something at least nearly as good.
      - Better built-in mics. My D900F mic was one of the best mics I've ever used; better than all the mics on all the headsets I've ever owned, and even better than my current Blue Snowball by a mile. My P370SM3's mic, however, is not nearly as good, no matter how I set it up.
      - Let Prema do your BIOS.
      - Taller rubber feet on the laptop. My D900F was fantastic, but my P370SM3 gains a great cooling boost simply from folding up four blocks of 2-ply toilet paper and putting one under each of the four feet. It'd be great if at least the performance laptops like the P7xxDM and P870DM had taller feet, to prevent the need for such a simple modification.
      - UEFI fast boot support WITHOUT Secure Boot, as well as legacy boot support.
      - Always offer high-vRAM GPUs (at least as an option one can pay a little more for), even for P6xxRx-type models.
      - Please pressure nVidia to enable SLI on/off without reboots in their drivers. We've done it with driver regedits, we've done it by using desktop drivers, it causes no problems with the machines, and nVidia knows it needs changing, but they're not doing it.
      - Support old BIOSes/systems for at least a couple of generations of OS updates, hotkey software, etc.
      - Thunderbolt 3, as well as mini DP/HDMI/USB 3(.1) ports.
      - All external displays wired to the dGPU in laptops which have MUX switches and can use Optimus mode.
      That list was longer than I thought. I feel like I'm asking for too much T_T. Seriously, let Prema do your BIOS.
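The bandwidth comparisons in post 23 are all the same piece of arithmetic, so here is a minimal sketch of it in Python. The bus widths and effective memory clocks are the commonly quoted reference specs (treat them as assumptions), and the 1.15 factor is only a rough stand-in for the Maxwell colour-compression benefit mentioned above; the real gain varies per workload.

```python
# Minimal sketch of the bandwidth arithmetic behind post 23.
# Theoretical peak bandwidth (GB/s) = bus width (bits) / 8 * effective clock (GT/s).
# The optional factor roughly models Maxwell's delta colour compression (~15%);
# treat the clock figures and that factor as assumptions, not measurements.

def bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float,
                  compression_factor: float = 1.0) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz / 1000 * compression_factor

cards = {
    "GTX 285 (512-bit, 2484MHz eff.)":         bandwidth_gbs(512, 2484),
    "GTX 660Ti (192-bit, 6008MHz eff.)":       bandwidth_gbs(192, 6008),
    "GTX 780M stock (256-bit, 5000MHz eff.)":  bandwidth_gbs(256, 5000),
    "GTX 960 stock (128-bit, 7012MHz eff.)":   bandwidth_gbs(128, 7012),
    "GTX 960 OC'd (128-bit, 8000MHz eff.)":    bandwidth_gbs(128, 8000),
    "GTX 960 OC'd + ~15% compression credit":  bandwidth_gbs(128, 8000, 1.15),
}

for name, bw in cards.items():
    print(f"{name:42s} {bw:6.1f} GB/s")
```

Even with the overclock and the compression credit, the 960 lands below a stock 780M, which is the point the post is making.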
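On the power-limit wish in post 24: the 47W/45W figure and the boost window the post describes correspond to Intel's PL1/PL2 limits and turbo time window, stored in MSR_PKG_POWER_LIMIT (0x610), which is what a BIOS-level bypass would ultimately have to change. Below is a minimal decoding sketch, assuming Python; the two raw register values are illustrative ones constructed for a 45W-class chip, not read from real hardware (on Linux you could read real values as root with msr-tools, e.g. `rdmsr 0x606` and `rdmsr 0x610`).

```python
# Minimal sketch: decoding Intel's package power limits (field layout per the
# Intel SDM). MSR_RAPL_POWER_UNIT (0x606) holds the scaling units and
# MSR_PKG_POWER_LIMIT (0x610) holds PL1/PL2 and their time windows.
# The raw values below are ILLUSTRATIVE, hand-built for a 45W-class chip.

RAPL_POWER_UNIT = 0x000A0E03           # illustrative contents of MSR 0x606
PKG_POWER_LIMIT = 0x004281C200628168   # illustrative contents of MSR 0x610

def decode_power_limits(units_raw: int, limits_raw: int) -> dict:
    power_unit = 1.0 / (1 << (units_raw & 0xF))           # watts per LSB
    time_unit = 1.0 / (1 << ((units_raw >> 16) & 0xF))    # seconds per LSB

    def window(y: int, z: int) -> float:
        # SDM formula: time window = 2^Y * (1 + Z/4) * time_unit
        return (1 << y) * (1.0 + z / 4.0) * time_unit

    return {
        "PL1 (W)":        (limits_raw & 0x7FFF) * power_unit,
        "PL1 enabled":    bool((limits_raw >> 15) & 1),
        "PL1 window (s)": window((limits_raw >> 17) & 0x1F, (limits_raw >> 22) & 0x3),
        "PL2 (W)":        ((limits_raw >> 32) & 0x7FFF) * power_unit,
        "PL2 enabled":    bool((limits_raw >> 47) & 1),
        "Locked":         bool((limits_raw >> 63) & 1),  # if set, only firmware can change it
    }

for field, value in decode_power_limits(RAPL_POWER_UNIT, PKG_POWER_LIMIT).items():
    print(f"{field:15s} {value}")
```

If the stock firmware sets the lock bit, the limits can't be rewritten from the OS afterwards, which is one reason the post asks for the override to live in the BIOS itself.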