Everything posted by triturbo

  1. The W7170M in the upcoming Precision 7700 (or whatever they call it) would be MXM-B, and I'm looking forward to getting one.
  2. 1. And you think I don't know that? I'll tell you something else though - you see this new module? Is there anyone else who can take advantage of it? Is there a better, more future-proof technology around that can be put on it and WILL be put on it, just not this year? It could actually fit on a standard MXM-B module and still get something like 6 phases for the power delivery; it just needs different hole spacing. Wouldn't that be the better option? It sure would, but you can see the extra lengths someone went to, just to "smack" someone else.
2. Actually, I'm quite surprised they've called it off. It's not like they were going to lose any clients, and it's not like the benches weren't starting to gain steam. So maybe it was a bit too late, who knows. At least my tinfoil hat protects me from nGreedia's radiation; I can lend you one if you want.
3. Of course it doesn't. This was their chance, and obviously they won't be the first with an HBM MXM module, and most people will buy whatever the green team throws at them anyway, even though Fiji is better at 4K. Future-proofing? Who cares, they'll buy next year's model as well.
4. It's not widespread, but I can't call it tiny either. I'd rather have a minimal yet powerful setup, and I'd guess I'm not alone. As I said above (in point 1), MXM is a standard, with defined spacing and placement for the chip, RAM, VRM and so on. Someone can make a fully custom module to fit their needs, but for others a simple* hole-spacing change seems to be impossible. Feel free to explain how that works. I can't. *Actually it's not simple at all, but both require a change to the spec, and that change was obviously made for this new module we can see in the pictures.
5. True. But the charts, the charts...
6. Optimal and troubleshooting don't mix well for me. Note taken.
  3. It's a bit hard to innovate when the MXM SIG is owned by your competitor, and obviously they can do whatever they want in order to fit in all of that 200W TDP. Wanna bet whether AMD is going to follow this year? Wanna bet which company will be the first to hit the market with an HBM module as well? They won't let anyone steal their thunder just like that. Why are you so quick to jump on "no one is going to use the R9 Nano"? The world is a bit broader than your FOV. I see it as a perfect eGPU candidate, and that section is a pretty nice chunk of this forum. What happened to our little chit-chat back in the clock-block thread, about multi-GPUs? I see that you've changed your stance. I'm still behind single-GPU setups, but it's not so funny when supposedly inferior GPUs scale better than the "almighty" ones. Since I mentioned the clock-block, is it obvious now what all the fuss was about? The slides are not as impressive when a supposedly mobile card (the 980M) can catch up to the "real deal" (the mobile 980). As I said, and will always say - that's nGreedia for you. There is NO innovation from them, only brute force, because that's what money buys you - power, NOT creativity (with a few exceptions, of course)! All of their shitty moves prove it. For an admin you are WAY too biased.
  4. My take on the overclocking post is that it's some kind of damage control. They probably mean: hey, desktop users, don't be afraid, stay with us, there is overclocking in your/our future. That way only the notebook users are left in the dark. Hopefully more people care about freedom of choice than about personal interest and whether or not they'll overclock their mobile nVIDIA. Which tells me that I should sign the damn thing as well... Decisions, decisions. The R9-M295X doesn't have a CrossFire connector - could it be using XDMA?
  5. We are really getting lengthy, so I've cut your quote short. I'll get to multi-GPUs later.
That's your take on G-Sync and the DP standards, and I'll need proof of it. I'll tell you this though - "Version 1.3 was published in February 2011; it includes a new Panel Self-Refresh (PSR) feature developed to save system power and further extend battery life in portable PC systems." That's a quote from eDP's 1.3 revision, and this is what AMD wanted passed into DP as well. So how is this bad? More monitors could be made without additional proprietary modules, and they would be compatible with everything. Again, how is this bad? This is the technology G-Sync is based on; nothing revolutionary. The hardware and the specs weren't there on the desktop, so they just cashed in on it. Of course it does its wonders, but they could've asked VESA to include what was already developed in the sub-standard in question. Trust me, you don't need to show me the advantages, I know them; that's why I really hope something like FED or SED comes around. I'm keeping an eye on CNT-FED, and I hope it hits the market eventually. Also, that's why I have a FW900.
I've been in Spain (40ºC, 104ºF), so I know what it is to have a hot desktop in a hot room. Again, if the manufacturer is there to warrant it, why not? Not to mention that most of those have their stock cooling thrown away and are water cooled, and you could hook the loop to the boiler instead... profit? I think yes. A hot shower after an intense session, lovely. I see nothing about the mobile 8 series from you. Where is your manufacturer?
So this is your response to Mantle? You blame them for not being innovative, yet your response is this? Even if it is indeed there to fix their unoptimized performance, it does gain performance now, doesn't it? So how is that not a good thing? I mentioned DX12, which is fine; everyone would get close to the metal and orgasms would flow everywhere... if you are on W10. You know, W7 plus Mantle is the preferred combo in my book. So is that a plus on AMD's side? Not to mention that DX12 most likely wouldn't have been what it is now if it wasn't for Mantle. I have no proof of that, but there's none for the opposite either. I'm pretty certain it's the Android and iOS case all over again. As I said before, the idea is a really damn important thing. The realization is no less important, but you have to have an idea to have a starting point!
Oh yes, I forgot about SLI, another technology that was bought. I know about the implementation, but should I repeat myself about the ideas? It's the third time already. But I will repeat myself about implementation, so here again is my point about idea versus realization. nVIDIA bought SLI (patents, 3dfx, whatever) and turned it upside-down. They had the money to do it. What about CrossFire? Well, that's the result achieved with less money. Are you following? Here's an idea - it wasn't AMD's, but it wasn't nVIDIA's either, and we can see the end result depending on the money thrown at it. Oh, and are you thinking or implying that SLI is perfect? No, I won't respond with anything; I know the answer, I want to hear/read it from you. I'll say this - I'll consider a multi-GPU setup when the technology is there. For now it's mostly (not entirely) for number chasing. There's performance to be gained... when it works. When it doesn't, it's 500 to 1000 (probably more) dollars in hardware that you switch off to play on a single card... Right. Sorry, but when I buy something I like to actually use it. XDMA is getting things close.
Then comes the game support, which is also part of the implementation. Before you jump on me with "yeah, but SLI sucks because of the developers" - you have a nice technology, great; what does it mean if no one uses it? The same goes for Mantle, which is sad, because it is THE ONLY way to stick with W7 AND have great performance. I agree, it's now or never. If AMD doesn't cash in on this, they are out. Whether they cash in is entirely up to you. nVIDIA plays the attention whore the whole damn time; as soon as they start to lose attention, they pull some cheap trick and regain it. They have the means, they have the money, they can do whatever they want. Feel free to buy from them again - soon it will be the only choice.
  6. Yes, of all the other points, those are the ones I care about. Why? Because at the end of the day, if they happen to fix this "issue", you might well end up with an nVIDIA GPU once again.
1. How? Tell me, how? That's precisely why I wrote about how development goes. You have to have funds to fund the damn development. The HD 7970M was awesome, and wasn't gimped by crappy vRAM like the 680M was. Yet most people were still all over 680Ms. Why the crappy vRAM? Well, that's nVIDIA we are talking about - "hey, our 680M with crappy RAM and a cut-down core will tie with the 7970M; let's save some for the 780M, they'll buy it either way." I mean, c'mon, they could've gone flat-out and had the 680M be what the 780M is, but then what? Anyway, even then, with relatively equal performance and a bit more when overclocked, nVIDIA was still the one being considered. And things have tanked ever since for AMD. You have to buy a product for the company to earn something and reinvest it. Why should you care? Well, look at this thread, that's why.
2. G-Sync? The feature that comes standard with the DP 1.4 specs? Again, that's nVIDIA we are talking about - money, money, money. "Hey, let's milk 200-250 dollars more for a display (with a board that's likely in the pocket-money range) that would come anyway after a year or so, but we won't be getting anything then; now we can pitch it as an exclusive feature", and so on. LOL! They are not creating anything new, they are just profiting off other people's ideas.*
3. Why not? Really, why not? I'll give an example with engines - air-cooled engines can go up to 210ºC, but water-cooled ones only up to 110ºC. The same goes for horsepower: you look at the specs on paper, but the actual performance may and will differ. What do I mean by this? I've always said, and will continue to say, that TDP is something to base your guess on, but it's not exact science! The actual performance may and will differ, especially from one manufacturer to another. And to get back to the ºC - if they say it's right, why not? If it fails, they are the ones who'd cover it. We are returning to the main topic - nVIDIA is locking down and limiting things in order to cover some f*%#$-up they made. Another notable mention is the entire MXM 8 series line-up. How about that - did we ever get confirmation that they indeed screwed up, and that things would be fixed and people refunded or something?
4. Well, if you were to wait for nVIDIA's answer, it wouldn't have been such a problem now, would it? Even they had their delays, even though they have a lot more resources to work with. Sure, everyone makes bad decisions now and again, but as I said, nVIDIA is on my black list for doing stupid things pretty much non-stop. GameWorks? How about Mantle? AMD was the first around to come up with this kind of tech (quick glimpse at GameWorks), so who is playing catch-up? I think nVIDIA was pretty butt-hurt that they hadn't come up with something like that. Mantle was meant to be open source, but what's the point if there's no interest? Well, Intel actually expressed interest, but I seriously doubt that's what AMD was hoping for. I mean, Intel CPUs are more powerful, and that's no secret. In the APU market AMD wins because of the better iGP. Now combine Mantle with Intel's strong CPU and average iGP, and you get quite the package. A package that would entirely demolish AMD, to be honest. So yeah, I can see why Mantle hasn't picked up on anything but AMD. Why bother dealing with the competition when they can come up with more-or-less the same on their own?
I mean, it's the idea that's important - and of course the realization as well, but they have the money to throw at it, so they'll fix it one way or another. Then comes DX12. PhysX? Again - nothing of theirs; thankfully they had the money to buy it. G-Sync? Already said what I had to.
* I give them credit for MXM though. I don't know what their intentions were, or whether it's really their idea (I haven't dug deep enough), but that's the single greatest thing nVIDIA has ever made. Kudos to them for this. I'm serious.
TL;DR I could add more to each point, but is it really necessary? If you don't get my point by now, adding more words would do nothing. It's already quite the wall of text, so most people will skip it anyway. Everyone buys Intel and nVIDIA by default. It takes a miracle for someone to actually buy AMD. So how on earth can AMD improve when there's no money coming their way? I have even less income, and what I get for my money is well worth it in my eyes. I can't justify spending a lot more and getting a marginally better product. I mean, yeah, the 980M is what it is - pretty fast, but it could've been better AND they cut the overclocking. Tell me, how is that even a marginally better product? I'd actually consider it worse: you get a cut-down chip and no overclocking, lovely. I'd squeeze the hell out of my 7970M, and then comes the desktop (if Clevo doesn't release anything new with an AMD chip in it; I still have my hopes).
  7. Actually, it's the other way around. R&D takes a crap-load of money, and those people have families to feed as well. So take less than half of nVIDIA's market share, then take the lower prices, AND then keep in mind that the wafers cost THE SAME for everyone (actually nVIDIA probably gets a better discount because, you know, bigger orders). Can you do the math? Again, they can't reinvest everything; they have to make a living as well. AMD not being competitive - how come? Really?! They had a clear advantage at least twice in the past 5 years, but everyone and their mother was still buying nVIDIA (OK, there are always exceptions, but for the most part), or waiting for nVIDIA's response. AMD has always released the best they had on tap, and I can point you to at least two examples from recent years where nVIDIA was/is holding back - the 680M and the 980M. You are the ones happy to be milked, so here it comes: of course it is, and it will get "better". You are letting them in and telling them to make themselves at home, and they do just that. They can do whatever they want; you have shown them this multiple times. They screw up, and you are still there with your money, ready to take the next hit. I can't see why you are moaning about it - embrace the market's natural course, the one you created. That's why I'll buy AMD as long as they are around, because I don't like monopolies, and especially arrogant bastards (Intel is right next in line; gnusmas tops them all). That's how I proceed: I haven't bought anything from gnusmas, or with chips that came from gnusmas, for quite some time now; I won't be getting a 980M even if it happens to be the last MXM 3.0 B card; and I'd build an AMD desktop. Yeah, I'm shooting myself in the foot, but you shot both of yours.
  8. The one from the Clevo P370SM worked after lowering the RAM speed. It seems this GPU has really crappy memory to start with. The problem now is that with this BIOS the GPU is not stable with the RAM at anything more than 300MHz, which obviously cripples it quite a lot (the core runs fine at stock - 900MHz). So now, another question - is it possible that this is timings-related (that's what I think it is), and is it fixable?
  9. A friend of mine got this GPU. I was the first to try it, but it won't output to anything but an external display with the stock vBIOS. I tried a modded 8970M vBIOS: the display's backlight comes on, but there's no picture, it won't boot into Windows, and it also cuts out the external output. I tried the Clevo P270WM vBIOS as well - same result as with the 8970M vBIOS. I found another one, I don't know its story - still nothing. At last I tried the M6100* and, of course, I bricked the GPU. No worries though, my friend has a programmer, so he brought it back to life. It still won't output to the internal display. Could anyone help us? *The DreamColor in my friend's 8740w only works if a FirePro or Quadro is in there; that's why I tried this cross-flash, and that's why I was the first to try it. As a side note - the 6970M won't output to the DreamColor, but once flashed to M8900 it works just fine, aside from the broken fan control. So I guess there's some vBIOS-EDID handshake that's missing in the 6970M vBIOS (a quick EDID sanity check is sketched after the last post in this list). Is it possible to extract this part and implement it in newer GPUs, like the 7970M or the R9-M290X in question? Thank you very much.
  10. And I thought DELL going private was a good thing!!! This guy's insane!!! What Brian said is what they should've done. The XPS line was essentially rivaling the Alienwares of its time; it's so dumbed down right now, it's not even funny. A kick back to life, like Brian suggests, would make far more sense than what it currently is. As for Alienware... It's pretty damn stupid to call yourself an enthusiast manufacturer, yet not think about enthusiasts at all.
  11. Oh really?! Show me such a MoBo.
  12. Why not? Load/heat balancing? I mean, you'll obviously overload the cooling if you tax both (how hard, we are yet to see), but when pushing the CPU alone you get quite the cooling surface, so it should make for some wPrime records, for example. Also, this way the CPU and GPU are close to each other - usually you don't want long traces when you design a MoBo. With two big holes (or a single big one) in between, where the fan(s) sit, there's no short way to connect the CPU and GPU. So, think about that as well.
  13. Is there a newer version of this vBIOS? It seems that Clevo has 019, while this is 017. I'm asking since it doesn't play well with my 8740w - the fan is stuck at full speed (unless I use manual fan control), and there's no sleep. Thanks.
  14. There's not much hope, but check this link.
  15. I'm interested in the MXM version. Would it return the picture from the eGPU's DVI/DP to the internal LVDS/eDP through the MXM's own lines? And would it be a flat/ribbon cable, or HDMI-like (thicker, because of more lines, obviously)?
  16. He was an inspiration for me as well, so I made a somewhat more complicated setup. After a lot of delays, I hope I'll be able to test the water part this weekend or the next.
  17. I seriously doubt that they left the connector unsoldered yet everything else "is there". I'm pretty sure there are some capacitors and resistors missing as well. You'll have to get the schematics, but it's a pretty new model, so I doubt you'll find them. Good luck anyway!
  18. Outstanding ride, just like the previous two! I love this series! Burial at Sea is a nice touch as well.
  19. (Monitors) I have a GDM-FW900 - I love 16:10 and I cannot lie.
  20. Thank you for the explanation, especially the part about reads and battery. I'll proceed to testing my mSATA theory; if it's successful, I'll ditch the caddy. I've just rerun CDM - still the 30MB/s cap, so I guess I'll have to take your advice (a rough read-speed cross-check is sketched after the last post in this list). Still, I hope the mSATA mod will be successful.
  21. I don't think it's possible unless you have a pretty large vapor chamber. Even then, the radiator would have to be on the opposite side, otherwise it would be pointless. This is the only arrangement I figured out: vapor chamber -> heat-pipes -> radiator. Of course, I'm losing conductivity on each transition, since I used Arctic Alumina Thermal Adhesive because I was too afraid to use solder. I believe solder would drop another 5 or so degrees (a back-of-the-envelope joint estimate is sketched below).
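
A note on the vBIOS-EDID handshake theory in post 9: before blaming the vBIOS, a first sanity check is to confirm the panel's EDID block itself is intact. Below is a minimal Python sketch, assuming you have already dumped the EDID to a file (the filename "dreamcolor.edid" is hypothetical; on Linux the raw block can usually be read from /sys/class/drm/<connector>/edid). The fixed 8-byte header and the mod-256 checksum are part of the EDID specification.

    # Minimal EDID sanity check; "dreamcolor.edid" is a hypothetical dump file.
    EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

    def check_edid(path):
        base = open(path, "rb").read()[:128]  # 128-byte EDID base block
        if len(base) < 128 or base[:8] != EDID_HEADER:
            return "bad or missing EDID header"
        if sum(base) % 256 != 0:  # all 128 bytes must sum to 0 mod 256
            return "bad checksum"
        # Bytes 8-9 pack the 3-letter PnP manufacturer ID, 5 bits per letter.
        word = (base[8] << 8) | base[9]
        mfg = "".join(chr(((word >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))
        return "looks valid, manufacturer ID: " + mfg

    print(check_edid("dreamcolor.edid"))

If the block itself is fine, the problem more likely sits in how the vBIOS reads or reacts to it, not in the panel data.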
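Regarding the 30MB/s cap in post 20: one way to cross-check a benchmark figure without CrystalDiskMark is to time a plain sequential read yourself. A minimal Python sketch, where "testfile.bin" is a hypothetical large file on the drive in question (use one bigger than your RAM, or freshly written, so the OS cache doesn't inflate the number):

    import os
    import time

    def seq_read_mb_s(path, block_size=1024 * 1024):
        # Time a full sequential read of the file in 1 MiB chunks.
        size = os.path.getsize(path)
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_size):
                pass
        elapsed = time.perf_counter() - start
        return size / (1024 * 1024) / elapsed

    # "testfile.bin" is a hypothetical file; create a large one first.
    print("%.1f MB/s" % seq_read_mb_s("testfile.bin"))

If this also lands near 30MB/s, the cap is real and not a benchmark quirk.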
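On the adhesive-versus-solder point in post 21: the difference can be ballparked with the 1-D conduction formula dT = P * t / (k * A). Everything below is an illustrative assumption (heat flow, bond-line thickness, contact area, and rough conductivities of about 1 W/m*K for a ceramic adhesive versus about 50 W/m*K for solder), not a measurement of this particular build:

    def joint_delta_t(power_w, thickness_m, k_w_per_mk, area_m2):
        # 1-D steady-state conduction: dT = P * t / (k * A)
        return power_w * thickness_m / (k_w_per_mk * area_m2)

    P = 100.0        # assumed heat through the joint, W
    t = 0.1e-3       # assumed bond-line thickness, 0.1 mm
    A = 0.02 * 0.02  # assumed 20 mm x 20 mm contact patch

    print("adhesive (~1 W/m*K): %.1f C" % joint_delta_t(P, t, 1.0, A))
    print("solder  (~50 W/m*K): %.1f C" % joint_delta_t(P, t, 50.0, A))

With these made-up numbers the adhesive joint drops about 25ºC versus roughly 0.5ºC for solder, which is why every glued transition in the chain costs real degrees.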