My first video accelerator was the Nvidia NV-1 because a friend of mine was on the design team and he assured me that NURBs were going to be the dominant rendering model since you could do a sphere with just 6 of them, whereas triangles needed like 50 and it still looked like crap. But Nvidia was so tight-fisted with development details and all their "secret sauce" that none of my programs ever worked on it.
Then I bought a 3DFx Voodoo card and started using Glide and it was night and day. I had something up the first day and every day thereafter it seemed to get more and more capable. That was a lot of fun.
In my opinion, DirectX was what killed it most. OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL (which they didn't control) to program games if they could. After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.
Sigh.
Microsoft pushing D3D was a good thing, OpenGL drivers were an even bigger mess back then than today, and drivers for popular 3D accelerators only implemented the 'happy path' needed for running GLQuake but were either very slow or sloppily implemented for the rest of the API.
D3D was a terribly designed API in the beginning, but it caught up fast and starting at around DX7 was the objectively better API, and Microsoft forced GPU vendors to actually provide conforming and performant drivers.
I see it a bit differently, but there is a lesson in here.
Microsoft pushed D3D to support their own self-interest (which is totally an expected/okay thing for them to do), but the way they evolved it made it both Windows-only and ultimately incredibly complex (a lot of underlying GPU design leaks through the API into user code, or it did; I haven't written D3D code since DX10).
The lesson though, is that APIs "succeed", no matter what the quality, based on how many engineers are invested in having them succeed. Microsoft created a system whereby not only could a GPU vendor create a new feature in their GPU, they could get Microsoft to make it part of the "standard" (See the discussion of the GeForce drivers elsewhere) and that incentivizes the manufacturers to both continue to write drivers for Microsoft's standard, and to push developers to use that standard which keeps their product in demand.
This is an old lesson (think Rail Gauge standards as a means of preferentially making one company's locomotives the "right" one to buy) and we see it repeated often. One of the places "Open Source" could make a huge impact on the world would be in "standards." It isn't quite there yet but I can see inklings of people who are coming around to that point of view.
Should they have evolved it to be cross-platform? What about Apple's Metal API? I don't get why people expect Microsoft to do things that would benefit their competitors.
> The lesson though, is that APIs "succeed", no matter what the quality, based on how many engineers are invested in having them succeed.
Exactly. Microsoft was willing to make things work for them. Something other vendors wouldn't do (including those who are ostensibly "open source").
It wasn't just graphics, it was audio as well. People have it nice now but back then you were still fighting audio driver issues. AC'97 support made that situation livable but it took forever for everyone to support it.
My opinion on this one, for games authors anyway, was changed by reading an early 2000s piece by someone rebutting a lot of the noise Carmack was making on the topic, focusing on exactly this point: by DirectX 6, you got an API that was a suite which gave you, yes, the graphics, but also the sound, the input handling, media streaming for cutscenes and so on. OpenGL vs Direct3D was a sideshow at that point for most developers: it was "solves one part of their problem" vs "solves all of their problems". And no-one involved in OpenGL showed any sign of being interested in those other problems.
I haven't played much with either OpenGL or DirectX but I remember wanting to render some text using a 3D API - DirectX supports it out of the box (need to pick the font etc - lots of settings) but at least when I last looked, OpenGL didn't offer that functionality, so I'd need to find a library that handles font loading and everything else associated with printing text.
Input handling was pretty basic back in the day; there weren't that many different hardware options to support.
For audio everyone was using 3rd party tools like Miles Sound System etc., but even OpenAL launched around 2000 already as an OpenGL companion. Video had the same thing happen with everyone using Bink which launched around 1999.
In comparison, using OpenGL was a lot nicer than anything before probably DirectX 9. At that time in DX you needed pages and pages of boilerplate code just to set up your window, never mind getting anything done.
Advanced GPU features of the time were also an issue, OpenGL would add them as extensions you could load, but in DirectX you were stuck until the next release.
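For anyone who didn't live through it, here is a minimal sketch of what that extension dance looked like in legacy OpenGL on Windows (GL_ARB_multitexture and glActiveTextureARB are real names; the naive strstr check and the scaffolding around them are just illustrative):

    // Legacy-GL extension loading, roughly as it was done circa 1998-2005.
    // Assumes a current GL context already exists; error handling omitted.
    #include <windows.h>
    #include <GL/gl.h>
    #include <cstring>

    typedef void (APIENTRY *ActiveTextureARBFn)(GLenum texture);
    static ActiveTextureARBFn pglActiveTextureARB = nullptr;

    static bool hasExtension(const char *name)
    {
        // One giant space-separated string; many engines really did just strstr it.
        const char *exts = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
        return exts && std::strstr(exts, name) != nullptr;
    }

    void loadMultitexture()
    {
        if (hasExtension("GL_ARB_multitexture")) {
            pglActiveTextureARB = reinterpret_cast<ActiveTextureARBFn>(
                wglGetProcAddress("glActiveTextureARB"));
        }
        // In D3D of the era you instead waited for the next DirectX release
        // and checked the device caps bits it exposed.
    }

Multiply that by dozens of extensions, several vendors, and drivers that over-reported support, and you get the mess described further down the thread.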
You can't make a window in OpenGL itself at all...
Yeah... you need EGL.
Well yeah, I mean, OpenGL (to quote Wikipedia) is "a cross-language, cross-platform API for rendering 2D and 3D vector graphics" - nothing more, nothing less. Whereas DirectX (which includes and is often conflated with Direct3D) was specifically designed by Microsoft to attract game developers to their platform (and lock them in) by taking care of all their needs. So it's kind of an apples to oranges comparison...
This implies that they could not have pushed OpenGL to be less of a mess. Feels bad faith to argue, when you consider how bad all drivers were back then.
Everybody wanted this. There was even an attempt at this, called "Longs Peak", that was ultimately voted down by the OpenGL committee after a long development road. Nobody else needed to sabotage OpenGL, Khronos was more than happy to do it themselves.
> Khronos was more than happy to do it themselves.
I was involved in a few of those committees, and sadly I have to agree.
The reason Khronos is often so slow to adopt features is because how hard it is for a group of competitors to agree on something. Everybody has an incentive to make the standard follow their hardware.
A notable exception to this was the OpenCL committee, which was effectively strongarmed by Apple. Everybody wanted Apple's business, so nobody offered much resistance to what Apple wanted.
Even today, DirectX has stricter quality/precision/behaviour requirements for hardware. On the other hand Vulkan is a lot better specified in other areas and thus better documented. So even if a vendor doesn't officially support one or the other, they will care about both…
Kind of, but Vulkan keeps trailing behind DirectX: vendors collaborate with Microsoft on DirectX first, and the features might eventually come to Vulkan.
There are also the issues of development experience in the provided SDKs versus what others provide, and apparently Khronos would now rather adopt HLSL than try to improve GLSL.
Standardization of features might lag behind D3D, but availability of features usually comes first on Vulkan because of the extension system. For example NVidia brought raytracing to Vulkan long before DXR was a thing.
No it didn't, check your timelines with more care.
Here is a tip: start with the Star Wars presentation from Unreal Engine.
Vulkan trails on some things, leads on others. DirectX doesn't exist on mobile for instance, so innovations with particular relevance to mobile GPUs tend to come to Vulkan first.
Also, its extensions make it an interesting laboratory for various vendors' experiments.
if this is an argument about MS forcing DX and killing OGL, I remind you that Vulkan had/has Apple's support.
I'm not arguing any such thing, just pointing to the strengths of the two standards.
Well, Microsoft kind of deceived SGI during Fahrenheit.
Mostly, but SGI also played their own part in the overall failure.
I agree. D3D7 was the first version actually widely used, and it came about as a glut of 3D card manufacturers appeared. Microsoft was willing to change it drastically to appease developers. OpenGL is still held back by the CAD companies. We're lucky Vulkan is here as an alternative now.
Around 1999 we had a PC with both a Riva TNT and a Voodoo 2. The main games I played were Half Life and Unreal 1 (in addition to various games that came bundled with hardware like Monster truck madness and Urban Assault). I found the Riva TNT to work much better than the Voodoo 2 for the main games I played (e.g. when choosing in the game options, the D3D or OpenGL options had less glitches, better looking translucency in Unreal, etc..., than the options that used the voodoo card), and in addition the Riva TNT supported 32-bit color while the Voodoo 2 only had 16-bit color and had this awkward passthrough.
Maybe being 1999 it was just a little bit too late to still fully appreciate 3dfx and modern day D3D and OpenGL took over around that time, so I just missed the proper Voodoo era by a hair.
Note that by OpenGL here I meant OpenGL using the Riva TNT (I assume the Voodoo card drivers must have been called Glide or 3DFx in the settings). I've always seen D3D and OpenGL existing side by side, performing very similarly in most games I played, and supporting the same cards, with GeForce cards etc that came later. I mainly game using Wine/Proton on Linux now by the way.
Yep, as soon as the TNT came out it was pretty much over for 3dfx. Quake II on a Riva TNT running in 1024x768 was a sight to behold.
I feel like 3dfx still had a slight edge until the tail end of the Voodoo3 era (when the GF256 came out and blew everything away). But I think it depends what you prioritise.
Like here, the V3 seems to have a pretty handy lead in most cases https://www.anandtech.com/show/288/14 - but that's a TNT2 (not TNT2 Ultra) and it's all in 16 bit colour depth (not supported by the V3).
It was certainly an interesting time, and as a V3 owner I did envy that 32 bit colour depth on the TNT2 and the G400 MAX's gorgeous bump mapping :D
I worked at CompUSA the summer of 1999 and there were demo machines for the Voodoo3 and TNT2, and what I remember most was that the Voodoo looked muddy while the TNT2 looked crisp. Frame rates weren’t different enough to have a clear winner, since there were 3 different Voodoo models and Nvidia had the ultra. I ended up getting a TNT2 Ultra and loved it. Never had any compatibility issues that I remember.
I owned a Voodoo 3, while all my friends had TNT2's. My experience was the opposite. Not sure if it was the handling of anisotropic filtering, or some kind of texture filtering - but my Voodoo 3 was notably sharper on all the games of the time.
It may even vary depending on the game, the settings, whether D3D, OpenGL or GLide were used or which versions of the drivers were installed.
For sure, and I'm pretty certain I also tweaked the hell out of 3dfx tools too.
3D gaming at 1024x768 in 1998 would be like 8K gaming today.
I had a Rage 128 in 1999 and easily gamed at 1024x768 - and ran Windows 98 at 1600x1200!
For years it was so hard to find LCDs that came close to the resolution I had on my CRT for a reasonable price.
And the LCDs of the day really struggled with contrast ratios and ghosting.
> 3D gaming at 1024x768 in 1998 would be like 8K gaming today.
Whatever. In late 1996, I got a PowerMac 8500/180DP (PowerPC 604e) and a 1024x768 monitor. The 8500 didn't even have a graphics card, but had integrated/dedicated graphics on the motherboard with 4MB VRAM (also S-video and composite video in and out). It came bundled with Bungie's Marathon[1] (1994) which filled the screen in 16-bit color.
[1] https://en.wikipedia.org/wiki/Marathon_(video_game)
Probably cost a heck of a lot more than a comparable PC gaming setup in those days. Not to say it's not a great game, but Marathon uses a Doom-like 2.5D raycasting engine which doesn't need a 3D accelerator, just enough memory speed to draw each frame (which the PowerMac obviously had). Life gets a lot more complicated when you have to render perspective correct triangles with filtered textures, lighting and z-buffers in a true 3D space.
> Probably cost a heck of a lot more than a comparable PC gaming setup in those days.
Until 2020, this was always a myth. When matching features and performance, the price of a Mac was always within $100 of a PC that is its equal. Not anymore with Apple Silicon. Now when matching performance and features you'll have a PC costing twice as much or more.
Here in the Netherlands Macs were outrageously expensive in the 90's. I only knew a few people who bothered to buy them (mostly because they wanted something simpler than a PC or because of Adobe stuff). Macs also used 'better' components at the time (SCSI drives instead of slow IDE, better sound, etc) so yes, if you wanted a comparable PC you had to pay up. But most people here had a much cheaper PC...
Depending on options, there are at least 2-4 different US-made SUV models that cost half as much as a BMW X1, which is not exactly expensive as far as BMWs go.
The "Apple is expensive"-myth has been perpetuated since the days of 8-bit computing. Less expensive computers are cheaper because they have fewer features, use inferior parts, and are simply not as performant. But all that is behind us with Apple Silicon. Now you'd be hard-pressed to find a PC that performs half as well as the current line up of low-end Macs for their price.
There are workloads where a high-end PC outclasses a Mac for the same money (think top-end GPU or lots of memory, Apple doesn't have the first and wants a king sized markup for the second).
For most entry level stuff performance is not that important, so that's not the metric customers focus on (price is). A desktop all-in-one from e.g. Lenovo starts at 600 euros, the cheapest iMac starts at 1500. A reasonable Windows laptop starts at around 400 euros while a MacBook Air starts at 1000 euros. It's not that the Apple machines aren't better, it's just that lots of folks here don't want to pay the entry fee.
Same reason most people here don't drive BMWs but cheaper cars.
No, 4K. 1280x720 would be 640x480, the minimum usable for a Windows 95/98 desktop and most multimedia-based software. Today that resolution is almost the minimum for 720p video and modern gaming.
1920x1080 would be 800x600 back in the day, something everyone used for a viable (not just usable) desktop in order to be comfortable with daily tasks such as browsing and using a word processor. Not top-end, but most games would look nice enough, such as Unreal, Deus Ex and Max Payne at 800x600, which looked great.
It was much more common to run games at a lower resolution than regular desktop, though. Even as 1024x768 became the norm for desktop, mostly, the crazy iteration rates on 3D hardware meant that most people who couldn't afford a new card every year would stick to 640x480 for the more recent 3D games.
Yes, I did that even in the Geforce 2MX days. Games maxed @800x600, desktop at 1024x768.
Ditto with today's 1920x1080 desktop resolution on my Intel NUC and games at 1280x720.
But I could run 1280x1024@60 if I wanted. And a lot games would run fine at 1024x768.
More like 4K gaming today. PII 450MHz + Riva 128ZX or TNT easily ran Half-Life at 1152x864. (FPS expectations were also lower, however - nobody expected 100fps+ like we do today)
> nobody expected 100fps+
Not for the games framerates indeed, but I did set my CRT monitor at 120 Hz to avoid eyestrain. You could effortlessly switch between many framerates from 60 Hz to 160 Hz or so on those monitors and it was just a simple setting.
Today it seems there now exist LCD monitors that can do (much) more than 60 Hz, but somehow it has to have all those vendor-lock-in-sounding brand names that make it all sound a bit unreliable [in the sense of overcomplicated and vendor dependent] compared to back then, when it was just a number you could configure that was just a logical part of how the stuff worked.
With respect to raw refresh rates, it's mostly the connectivity standards at fault. After VGA things got a bit out of hand with one connector after another in various form factors.
The part you're probably thinking of is GSync vs Freesync which is a feature for making a tear-free dynamic refresh rate, something that was simply impossible in the CRT days but does add some perceptual smoothness and responsiveness in games. Not using a compatible monitor just means you're doing sync with the traditional fixed rate system.
What has gotten way more complex is the software side of things because we're in a many-core, many-thread world and a game can't expect to achieve exact timing of their updates to hit a target refresh, so things are getting buffers on top of buffers and in-game configuration reflects that with various internal refresh rate settings.
I don't buy that it has to be this complex.
We could write to buffers at 60 Hz effortlessly with computers from 1999, speeds have increased more than enough to write to buffers at 120 Hz and more, even with 16x more pixels.
1/120th of a second is a huge amount of time in CPU/GPU clock ticks, more than enough to compute a frame and write it to a double buffer to swap, and more threads should make that easier to do, not harder: more threads can compute pixels so pixels can be put in the buffer faster.
If there's problems with connector standards, software side of things, multithreading making it require third-party complexity, then that's a problem of those connector standards, the software, things like the LCD monitors themselves trying to be too smart and add delay, etc... Take also for example the AI upscaling done in NVidia cards now: adding yet more latency (since it needs multiple frames to compute this) and complexity (and I've seen it create artefacts too, then I'd rather just have a predictable bicubic or lanczos upscaling).
Same with audio: why do people tolerate such latency with bluetooth audio? Aptx had much less latency but the latest headphones don't support it anymore, only huge delay.
>GSync vs Freesync which is a feature for making a tear-free dynamic refresh rate, something that was simply impossible in the CRT days
fun fact: the very same technique used by Freesync, delaying the vsync, works with CRTs
Genericness of Variable Refresh Rate (VRR works on any video source including DVI and VGA, even on MultiSync CRT tubes) https://forums.blurbusters.com/viewtopic.php?f=7&t=8889
I remember that the framerate was pretty important in Quake. Because player positions were interpolated linearly, you could miss the top of the parabola when jumping if your framerate was too low. I don't remember any numbers but getting a high framerate was definitely a concern when playing a bit competitively.
Competitive play, sure, especially at low resolutions and everything disabled. Single player games were running at pretty low framerates though, with a few exceptions like Quake/Half-Life. Even 60fps was more of a luxury than a rule.
I had a Riva 128 and it was garbage, I was constantly kicking myself for cheaping out and not getting a Voodoo2
>nobody expected 100fps+
but thats exactly what you got on Voodoo2 with P2 450 back then in 640x480 https://www.bluesnews.com/benchmarks/081598.html
My 200+ fps quake 2 config sure did. If you weren't running 200+ fps for online quake 2 you were in for a bad time.
Between a Celeron 333A running at 550MHz and a dual voodoo2 you could drive games at pretty ridiculous frame rates.
I would say 4K, as it was the resolution those with a powerful PC could handle.
Everyone had monitors that could do 1024x768 (usually up to 1600x1200) whereas 8K monitors are much less ubiquitous today in comparison.
Exactly. But now we are drowning in overhead
I had one of those weird TNT2 M64 cards that you could fool into thinking it was a proper TNT2.
Unfortunately for mine the foolery didn't work perfectly and I recall the card was a bit unstable when running that way, or maybe it didn't achieve the optimal performance possible for one of those fiddled M64s.
Not really a huge surprise, IIRC it was a super cheap card...
>Quake II on a Riva TNT running in 1024x768
40 fps, 70 fps on V2 sli
https://www.bluesnews.com/benchmarks/081598.html
I fondly remember the K6-2 system I had with a Voodoo 2 and 192MB of RAM. It was the first PC that was all mine. I also played HL1 and Unreal Tournament 1. The big games though were the HL mods TFC and Counter-Strike. I dragged that thing to so many LAN parties. A true golden age.
It was also the first PC I ever installed Linux on. My dad would not let me do such a risky operation as dual booting Linux on the family computer. I don’t even remember what distro at this point.
"Risky", it sounds so strange now to call dual booting "risky". But it was a different time, where remediating a borked software change wasn't google-able.
Edit: @throwawayx38: I 100% agree with you! Thanks for your reply.
To be fair, it may not have been the dual boot itself that was risky.
Unless they had a dedicated harddrive, they would also need to resize existing fat32/ntfs partitions and add a couple of new partitions for Linux. This process had a certain risk.
I have vague memories from early days of using Linux that you used to have to manually set the modeline for your videocard in X11 - if you got things wrong you could potentially permanently damage the monitor.
Heh, I meant it in a tongue in cheek way. My dad thought it was too risky.
I understand, that's what I found amusing. It's easy to imagine a dad in the 90s, when computers were much more costly, being like "hell no you aren't doing that to the family computer!" :)
Maybe I'm still misinterpreting, but this was where my mind went. Man, I wish I could've appreciated how distinct the 90s were as a kid, but I was too young and dumb to have a shred of hope of being that aware!
In the era of that kind of PC, I remember the boot loaders being a bit more touchy.
I never 'totally borked' a PC with LILO but definitely had to fix some things at least once, and that was with a nice thick Slackware 7.1 book to guide me.
GRUB, IIRC, vastly improved things but took a little while to get there and truly 'easy-peasy'
>My dad would not let me do such a risky operation as dual booting Linux on the family computer.
You were smarter than me. I wanted all those free compilers so badly I just went and installed redhat on the family pc. Ask me how well that conversation went with the old man...
The pass-through was really awkward. Depending on the exact board implementation it either looked perfect or terrible, and you couldn’t trust the reviewers because they were all blind, using awful monitors, and tended to focus on the game performance instead of whether it looked like trash the other 90% of the time you spent looking at the display.
Yeah, this doesn't get mentioned enough. 3Dfx's first few cards were revelatory, but if you used the same computer for "actual work", the passthrough tended to kind of make your text look like ass in Windows/DOS.
I had a Voodoo 1 and Voodoo 2. Running Quake with the GLide renderer for the first time was life changing. However, the pass through of the 2D card always felt like a hack. It also led to a noticeable reduction in 2D video quality. It always pained me to have to pass the beautiful output from my Matrox Millennium through the Voodoo.
> in addition the Riva TNT supported 32-bit color while the Voodoo 2 only had 16-bit color and had this awkward passthrough.
32bit on TNT at half the framerate, performance hit was brutal. 16bit on TNT was ugly AF due to bad internal precision while 3dfx did some dithering ~22bit magic
"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used an an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."
I was never a fan of the 3dfx dithering/filtering, personally. Things usually looked just a bit too muddled for my taste in most games. It wasn't as bad as others (I remember the Matrox M3D having some very interesting features, but image quality was definitely worse than Voodoo. Marginally better than a Virge at least, lol.)
16 bit on TNT was fine for most of what I played at the time, although at the time it was mostly Quake/Quake2 and a few other games. Admittedly I was much more into 2d (especially strategy) games at the time, so 2d perf (and good VESA compat for dos trash and emulators) was more important to me for the most part.
I think 3dfx had a good product but lost the plot somewhere in the combination of cutting 3rd parties out of the market and not integrating as deeply or as quickly, versus considering binning. VSA-100 was a good idea in theory, but the idea that they could make a working board with 4 chips in sync at an affordable cost was too bold, and probably a sign they needed to do some soul searching before going down that path.
Now, it's possible that comment is only discernible in hindsight. After all, these folks had seemed like engineering geniuses with what they had already pulled off. OTOH, when we consider the cost jump of a '1 to 2 to 4 CPU' system back then... maybe everyone was a bit too optimistic.
Urban Assault was such a cool game, what a shame it never caught on.
It was a gem! It came bundled with the Sidewinder Force Feedback Pro joystick
I reached out to Microsoft to try to buy the rights to it several years ago. They referred me to the original developer in Germany, but they either weren’t interested, or the ownership was too complicated. I’d love a modern version of Urban Assault, although the original is still completely playable. I’m not aware of any other game that managed so well to combine RTS and FPS.
How much could the rights to a game like that cost? Depending on whether it's in range of the average person's disposable income, this could be a cool thing to do as a preservationist.
A Voodoo 2 was kind of old and slow for 1999. A Voodoo 3 was a good card then, but the TNT2 did eclipse it in several games. Only a handful of games were still better in Glide.
Ha, I had a Riva TNT in my first own PC! Totally forgot about that, and just how great Half Life was back in the day, and still is IMHO.
> Microsoft was determined to kill anyone using OpenGL ... After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.
I was acutely aware of the various 3D API issues during this time and this rings very true.
Yup, remember when they "teamed up" with SGI to create "Fahrenheit"? Embrace, extend, extinguish...
I have the beta cds here.. Fahrenheit / XSG. One disk died though
My "favorite" Microsoft fact is that DirectX and Xbox were codenamed Manhattan and Midway because Microsoft's gaming division just ran on "what codename would be the most racist towards Japanese people?". The dxdiag icon is an X because it was originally the radiation symbol!
And this tradition is carried on to this day in milder form when Western game devs hear Elden Ring is more popular than their game or try to "fix" visual novels and JRPGs without playing any of them.
As if SGI didn't have their share in Fahrenheit's failure.
https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API)
Holy cow I found a nest of Microsoft fans. From your link:
> By 1999 it was clear that Microsoft had no intention of delivering Low Level; although officially working on it, almost no resources were dedicated to actually producing code.
No kidding...
Also the CEO of SGI in the late 90s was ex-Microsoft and bet heavily on weird technical choices (remember the SGI 320 / 540? I do) that played no small role in sinking the boat. Extremely similar to the infamous Nokia suicide in the 2010s under another Microsoft alumnus. I think the similarity isn't due to chance.
Don't attribute to malice that which can be attributed to stupidity.
My current employer has fairly recently hired a ton of ex-Google/Microsoft into upper management. They’re universally clueless about our business, spending most of their time trying to shiv one another for power.
We've recently hired a bunch of ex-Googlers (as well as Twitter-ers and other laid off people). They seem to spend most of their time making sure everyone here knows they're ex-${big_name} and how awesome things were there and we should change everything to do it the same way. It's a bit of a put-off.
If one thinks about it, they were minions in a monopoly, and a pretty dysfunctional one at that even if lucrative. What the heck do they know about how to run a competitive business?
Is this not visible to the people with authority to fire them/not hire more such people? Or do they think differently than you?
I think the attitude is that the industry analysts view “big names” as a structural positive not bothering to think about the actual reality. Also when you hire one of them in a senior position, suddenly you now have a gang of them as new subsidiary VPs.
People are not usually fired for being obnoxious.
It sounds like the person we're replying to implied they were also not very effective at their jobs.
I'd be surprised if he really spent most of his time repeating that he worked at Google. That's certainly hyperbole.
TBH the SGI 320/540, or namely Cobalt, were rather interesting tech-wise. Not sure if they could've gone against Microsoft at the time (NT/Softimage for example) and the rise of OpenGL 2 (with 3dlabs and all).
Yeah, they were really interesting machines but there were lots of weird technical choices : the non-PC EPROM that made them incompatible with any other OS than a special release of Windows 2000, the 3.3V PCI slots (at the time incompatible with 95% of available cards), the weird connector for the SGI 1600 screen... making the whole idea of "going Intel to conquer a larger market" moot from the start.
Of course the main crime of the ex-Microsoft boss as the time wasn't that, but selling out most of SGI's IP to Microsoft and nVidia for some quick money.
Yeah, but that's SGI doing SGI things, basically. They were used to daylight-robbery pricing on systems they had ultimate control over. This is nothing out of the ordinary for their thinking. The weird thinking was more: if you're doing a PC, then do a PC, not this... but maybe they initially didn't want to; an SGI workstation with PC components is more like it. The market ultimately showed them it's not what it wanted. The primary cool thing about it in my opinion was Cobalt and the CPU sharing RAM with the GPU, but what was weird was that the split of how much the CPU got and how much the GPU got was not dynamic but static, and you had to set it up manually before boot. Dynamic sharing is what only now Apple is doing. Something AMD also explored if you had their vertical (CPU, mobo, GPU), but only for fast-path movement of data. I'd like to see more of that.
The shared memory architecture came directly from the SGI O2 in 1996. The O2 had dynamic sharing, but it was impossible to make it work in Windows.
O2 dynamic memory sharing allowed things impossible on all other machines with its infinite texture memory, like mapping several videos seamlessly on moving 3D objects (also thanks to the built-in MJPEG encoding/decoding hardware).
>infamous Nokia suicide
Nokia would have killed itself either way, with Elop it still tried to flop.
Every Nokia fanboy cries about EEE, but blissfully forgets what a turd the 5800 XpressMusic was, which came half a year later than the iPhone 3G.
Yeah, definitely agreed.
RIM basically killed off a big chunk of the Nokia market, as it did for Windows CE as well.
By the time the original iPhone came out, Nokia hadn’t really put out anything to capture the mindshare in a while. They were severely hampered by the split of their Symbian lineup (S30,40,60,90) and unable to adapt to the newest iteration of smartphones.
They’d never have been able to adapt to compete without throwing out Symbian, which they held on to and tried to reinvent. Then there was the failure of MeeGo.
Nokia would have been in the same spot they’re in today regardless of Microsoft. They’d be just another (sadly) washed up Android phone brand. Just like their biggest competitors at the time: Sony Ericsson and Motorola.
But at least we got a lot of Qt development out of it.
Nokia was massive outside of the US; it had name recognition bigger than Apple in Europe even in 2009 and still pumped out some gems like the N900.
Yes it had huge systemic issues. Structural problems, too many departments pumping out too many phones with overlapping feature sets, and an incoherent platform strategy.
But Elop flat-out murdered it with his burning platforms memo and then flogged the scraps to the mothership. It came across as a stitch-up from the word go.
By 2009 writing was on the wall and it wasn't 'Nokia'.
You know why it was 'Xpress Music'? Because Nokia was years late to the 'music phone'. Even Moto had the E398 and SE had both music and photo phones. By 2009 Nokia had a cheap line-up for the brand zealots (eaten up by the Moto C-series and everyone else), a couple of fetishist's phones (remember the ones with the floral pattern, and the 8800?) and... overpriced 'communicators' with subpar internals (hell, late PDAs on XScale had more RAM and CPU power) running the mess of Symbian, incompatible with anything, including itself.
Elop not only allowed MS to test the waters with mobiles, but actually saved many, many jobs for years. The alternative would have been bankruptcy around 2013.
That's some pretty revisionist history IMHO. By 2009 the company was in the shit, but it had money and market share.
> Alternative for that would had been a bankrupcy around 2013.
The alternative would have been some restructuring by someone with a better idea than crashing the company and selling the name to his real bosses at MS.
The company was in trouble but salvageable. Elop flat-out murdered it, and it looked a lot like he did it to try to get a name brand for MS to use for its windows phones, which were failing badly (and continued to do so).
> The company was in trouble but salvageable.
No. You should re-read that memo, specifically starting at "In 2008, Apple's market share in the $300+ price range was 25 percent; by 2010 it escalated to 61 percent." paragraph.
In 2010 nobody was interested in Symbian, no one else made phones on Symbian, no one would do apps for Symbian[0] - who would bother with all the Symbian shenanigans when even Nokia itself said that it would move to MeeGo 'soon', along with ~10% of the smartphone market? The money was in Apple and Android.
To be salvageable you need something in demand on the market and Nokia only had a brand. You can't salvage an 18-wheeler running off the cliff.
Personally, I had the displeasure of trying to do something on a colleague's N8 somewhere in 2011-2012. Not only was it slow as molasses, but most of the apps which relied on Ovi were nonfunctional.
Insightful tidbit from N8 wiki page:
> At the time of its launch in November 2010 the Nokia N8 came with the "Comes With Music" service (also branded "Ovi Music Unlimited") in selected markets. In January 2011, Nokia stopped offering the Ovi Music Unlimited service in 27 of the 33 countries where it was offered.
So popular and salvageable that they discontinued the service 3 months after the launch? Should I remind you that Elop's memo came a month later, in February 2011?
[0] Yep, this is what really killed WP too - lack of momentum at the start, inability to persuade Instagram to make an app for WP, failing integrations (they worked at the start! Then Facebook decided it didn't want to be integrated anywhere because everyone should use their app) => declining market share => lack of interest from developers => declining market share => lack of...
Nobody said they had to stick with Symbian. Nobody said they were on any sort of right track.
But there were all sorts of options to restructure a company that big that had only just been surpassed by android at the time of the memo. Tanking what was left of the company and selling out to MS was probably the worst of them.
It's quite funny; the Guardian article on the memo, which reproduces it in full, is here - https://www.theguardian.com/technology/blog/2011/feb/09/noki...
First comment below the line "If Nokia go with MS rather than Android they are in even bigger trouble."
Everyone could see it, apart from Stephen Elop, who was determined to deliver the whole thing to MS regardless of how stupid a decision it was.
> to restructure a company that big
That's the problem. Momentum, and more importantly, momentum in a big-ass company notorious for its bureaucracy, red tape and a cultural[0] policy of doing nothing beyond what was needed while jealously protecting one's own domain from anyone.
> Everyone could see it
You are forgetting that if Nokia pivoted to Android in Feb 2011, then the first usable, mass-produced unit, which would also be the company's first (i.e. with all the bugs and errors you'd expect from the first thing in a line), would arrive in Q4 2012 at the very, very best. A more sane estimate (considering their total unfamiliarity with the platform and, again, cultural nuances) would be somewhere in Q1-Q2 2013. There it would then compete with the already established Android market (the Moto RAZR would be 1y+ old) and the iPhone 5.
Even if they somehow managed to do it in less than a year (like a fairy godmother came and magically did everything for them, including production and shipping) then they would have needed to compete with the iPhone 4S, which as we know was extremely popular and which people held on to for years.
No fucking way they could do anything to stay afloat. That is why I say they would have been bankrupt by 2013. You may dislike Elop and his actions as much as you want, but there is zero chance they could have done anything themselves.
Oh, one more thing. Sure, in 2009 Nokia had the money. By 2011 creditors had lowered Nokia's rating (reflected in the memo), which means that by 2012 they would have had no spare money, aka cash. And you are probably forgetting that Nokia had a ridiculous number of employees (clearly seen by how many were let go by 2013-2014). You need a lot of money to support that many people, and you can't tell them to fuck off like in the US with at-will employment; you need to provide severance and pension funds for everyone. Without money the only thing you can do is close the doors and file for bankruptcy. If you want to continue your business, you first need to pay out the social obligations. And that costs money. And guess who not only had the money but was willing to pour it into Nokia?
[0] Literally. I've read the 'memoirs' of a guy who worked there before and during all this. I had a friend working in Finland some time later. The stories she told about passiveness, lack of enthusiasm, and always trying to evade responsibility just confirmed the things I knew at the time, and no amount of naked sauna helps. Hell, they didn't even have the guts to fire her, instead doing macabre dances to force her to quit. Which, after more than half a year of doing literally nothing (their way of getting her out), she gladly did, and she went to MS.
Funnily enough, she is at Google now and some of the shit that is happening there directly resembles what was happening almost a decade ago in Finland.
Take a seat. The US was the most primitive mobile market in the world before the iPhone, with people drooling over the frigging Moto RAZR - a feature phone FFS - just because it was thin and flipped open. And imagine paying for incoming calls, or buying your phone from the operator with all useful features like copying your own ringtones disabled so that you were forced to buy whatever lame ones the operator offered. Symbian in the meanwhile was a full featured, multitasking OS with apps, themes, video calling and other features that took their time to reach iOS and Android. Nokia already knew that Symbian was on the way out, and they bought Qt to act as a bridge for developers between Symbian and Meego - it was to be the default app toolkit for Meego. Around 2009 onwards, Qt versions of popular Symbian apps started to appear. The first Meego device, the N9, had rave reviews but was intentionally hobbled by Elop choosing to go with dead in the water Windows Mobile and refusing to allow more production. This piece from back in the day is a detailed analysis of the fiasco - https://communities-dominate.blogs.com/brands/2013/09/the-fu...
> You are forgetting what if Nokia pivoted to Android in Feb 2011, than the first usable, mass produced ...
I'm not forgetting anything. All of these issues are present in a switch to the Microsoft platform too, and it was already clear to most observers when they did that, that the MS platform was dead in the water.
> they would have needed to compete with the iPhone 4S
As did every other player, and outside the US the iPhone was not dominant in the same way.
> you can't tell them to fuck off like in the US
They had engineering units in the US which they could have done that to. And there are things you can do in most European countries too, when situations are dire.
> You may don't like Elop and his actions as much as you want, but there is zero chances they could do anything themselves.
I very much disagree, as do many observers. It could have been turned around with good management, but that doesn't seem to have been Elop's aim, his aim seemed to be to fulfill goals for MS.
> And guess who not only had the money but was willing to pour them into Nokia?
Yes, it was a stitch-up job for MS to buy an established name to try to save their dead mobile platform.
I'm a Nokia fanboy and will sadly admit that you're right.
They made a -lot- of stupid decisions, both in sticking to 'what they knew' and bad decisions with cutting-edge tech (N900 comes to mind, you couldn't get one in the states with the right bands for 3G).
I will always love the Lumia cameras however, even their shit tier models had great image quality.
I had an n900 on 3g on T-Mobile USA.
They should have continued maemo 5 and bet hard on it. The big rewrites for n9, and continued focus on symbian as a cash cow, hurt them.
> Holy cow I found a nest of Microsoft fans. From your link:
It's not a nest, he's mostly the only one.
Part of the success of DirectX over OpenGL was that very few graphics card companies seemed capable of producing a fully functional OpenGL driver; for the longest time Nvidia was the only option.
I recall ATI and Matrox both failing in this regard despite repeated promises.
Fully functional OpenGL was not exactly the issue, or not the only one.
OpenGL was stagnating at the time vendors started a feature war. In OpenGL you can have vendor-specific extensions, because it was meant for tightly integrated hardware and software. Vendors started leaning heavily on extensions to one-up each other.
The Khronos group took ages to catch up and standardize modern features.
By that time gl_ext checks had become nightmarishly complicated, and cross compatibility was further damaged by vendors lying about their actual gl_ext support, where drivers started claiming support for things the hardware couldn't really do, and using the ext caused the scene to not look right or outright crash.
Developers looked at that and no wonder they didn't want to take part in any of it.
This all beautifully exploded a few years later when Compiz started gaining a foothold, which required this or that gl_ext and finally caused enough rage to get Khronos working at bringing the mess back under control.
By that time MS was already at DirectX 9, you could use XNA to target different architectures, and it brought networking and IO libraries with it, making it a very convenient development environment.
*This is all a recollection from the late nineties / early 2000s and it's by now a bit blurred; it's hard to fill in the details on the specific exts. Nvidia was the one producing the most, but it's not like the blame is on them; Matrox and ATI's wonky catch-up support was overall more damaging. MS didn't really need to do much to win hearts with DX.
Plus MS was also trying to offer more than just graphics by adding audio and networking to the stack which kind of started to make the whole ecosystem attractive, even if it was painful to program against.
I had my share of fun with DirectMusic.
DirectInput (and later XInput) had its faults, but it's probably the only reason you can just plug random first and third party controllers into a USB port and expect everything to just work.
How did Microsoft solve the "extensions problem"? Did they publish standardized APIs in time so vendors wouldn't come up with any extensions? Even then, how did MS prevent them from having the driver lie about the card's features to make it look better than it is?
MS had a rigorous certification and internal testing process, new D3D versions came out quickly to support new hardware features, and through the Xbox Microsoft had more real world experience for what games actually need than the GPU vendors themselves, which probably helped to rein in some of the more bizarre ideas of the GPU designers.
I don't know how the D3D design process worked in detail, but it is obvious that Microsoft had a 'guiding hand' (or maybe rather 'iron fist') to harmonize new hardware features across GPU vendors.
Over time there have been a handful of 'sanctioned' extensions that had to be activated with magic fourcc codes, but those were soon integrated into the core API (IIRC hardware instancing started like this).
Also, one decision that was controversial at the time but worked really well in hindsight was that D3D was an entirely new API in each new major version, which allowed it to leave historical baggage behind quickly and keep the API clean (while still supporting those 'frozen' old D3D versions in new Windows versions).
> Also, one decision that was controversial at the time but worked really well in hindsight was that D3D was an entirely new API in each new major version, which allowed it to leave historical baggage behind quickly and keep the API clean (while still supporting those 'frozen' old D3D versions in new Windows versions).
This is interesting. I have always wondered if that is a viable approach to API evolution, so it is good to know that it worked for MS. We will probably add a (possibly public) REST API to a service at work in the near future, and versioning / evolution is certainly going to be an issue there. Thanks!
It’s a COM-based API, so everything is an interface described via an IDL. If you add a new member or change the parameters of a method, you must create a new version of the interface with a new GUID. You can query any interface for other interfaces it supports, so it’s easy for clients to check for newer functionality on incremental versions of DirectX.
In practice for DirectX you just use the header files that are in the SDK.
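To make that concrete, here is a hedged little sketch of the pattern (real D3D11/COM names, simplified error handling; not taken from any SDK sample):

    // Probe a base D3D11 device for a newer interface revision via COM.
    // If the runtime/driver doesn't expose ID3D11Device1, the query just
    // fails and code written against the old interface keeps working.
    #include <d3d11_1.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    bool TryGetDevice1(ID3D11Device *device, ComPtr<ID3D11Device1> &outDevice1)
    {
        HRESULT hr = device->QueryInterface(
            __uuidof(ID3D11Device1),
            reinterpret_cast<void **>(outDevice1.GetAddressOf()));
        return SUCCEEDED(hr);
    }

The same QueryInterface dance is how incremental DXGI and D3D12 interface revisions are picked up at runtime without breaking older callers.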
I've never had to migrate between DirectX versions but I don't imagine it's the easiest thing in the world due to this approach. Somewhat related I saw a library to translate DirectX 9 function calls to DirectX 12 because apparently so much of the world is still using DirectX 9.
That's what the drivers for the new Intel GPUs have to do, since the GPUs were only designed for DX12, and need earlier versions to be emulated
Ah yeah, and it looks like this is what they're using: https://github.com/microsoft/D3D9On12 And it looks like DirectX 11 to DirectX 12 translation exists as well: https://github.com/microsoft/D3D11On12
The DirectX API, and a driver certification program.
At the time Microsoft worked directly with vendors to have many of what would be vendor extension on OpenGL become cross-vendor core DirectX features.
They haven't learned much from it; see how many Vulkan extensions exist already.
Back in 1999 when the Quake source code came out, I started working on "Quake 2000" which was an improvement on the rendering pipeline of the code. I ended up getting free cards shipped to me - one was a GeForce256, one was the Matrox G400 DualHead and I think the other was the ATI Rage 128 Pro.
The GeForce blew the other cards' performance out of the water. The Matrox was particularly bad and the dual screen didn't add much; I remember maybe 2 games that supported it.
> he assured me that NURBs were going to be the dominant rendering model
Wow, this sounds like those little cases where a few different decisions could have easily led us down into an alternate parallel world :)
Can someone expand on why NURBs didn't/don't win out against polygons?
Could this be like AI/ML/VR/Functional Programming, where the idea had been around for decades but could only be practically implemented now after we had sufficient hardware and advances in other fields?
Because it's exactly like the parent said: Nvidia is, and always has been, a tight-fisted tightwad that makes everything they do ultra-proprietary. Nvidia never creates standards or participates in them.
Sometimes, like with CUDA, they just have an early enough lead that they entrench.
Vile player. They're worse than IBM. Soulless & domineering to the max, to every extent possible. What a sad story.
> Sometimes, like with CUDA, they just have an early enough lead that they entrench.
The problem in case of CUDA isn't just that NVIDIA was there early, it's that AMD and Khronos still offer no viable alternative after more than a decade. I've switched to CUDA half a year ago after trying to avoid it for years due to being proprietary. Unfortunately I discovered that CUDA is absolutely amazing - It's easy to get started, developer friendly in that it "just works" (which is never the case for Khronos APIs and environments), and it's incredibly powerful, kind of like programming C++17 for 80 x 128 SIMD processors. I wish there was a platform independent alternative, but OpenCL, Sycl, ROCm aren't it.
I keep hearing that ROCm is DOA, but there’s a lot of supercomputing labs that are heavily investing in it, with engineers who are quite in favor of it.
With supercomputers you write your code for that specific supercomputer. In such an environment ROCm works ok. Trying to make a piece of ROCm code work on different cards/setups is real pain (and not that easy with CUDA either if you want good performance)
If you want to run compute on AMD GPU hardware on Linux, it does work - however it's not as portable as CUDA as you practically have to compile your code for every AMD GPU architecture, whereas with CUDA the nvidia drivers give you an abstraction layer (ish, it's really PTX which provides it, but...) which is forwards and backwards compatible, which makes it trivial to support new cards / generations of cards without recompiling anything.
I hope it takes off, a platform independent alternative to CUDA would be great. But if they want it to be successful outside of supercomputing labs, it needs to be as easy to use as CUDA. And I'd say being successful outside of supercomputer labs is important for overall adoption and success. For me personally, it would also need fast runtime compilation so that you can modify and hot-reload ROCm programs at runtime.
Some random HPC lab with enough weight to have an AMD team drop by isn't the same thing as the average Joe and Jane developer.
Nvidia has had driver parity for Linux, FreeBSD and Windows for many many years. No other graphics card manufacturer has come close to the quality of their software stack across platforms. For that they have my gratitude.
DLSS was windows only for some time.
linux’s amdgpu is far better than the nvidia-driver.
ATI drivers were a horror show for the longest time on Windows, never mind Linux. What Nvidia did was have basically the same driver code for all operating systems with a compatibility shim. If you were using any sort of professional 3D software over the previous 2 decades, Nvidia was the only viable solution.
Source: Was burned by ATI, Matrox, 3dlabs before finally coughing up the cash for Nvidia.
I was a big Matrox fan, mostly because I knew someone there, and was able to upgrade their products at a significant discount. This was important for me as a teenager whose only source of income was power washing eighteen-wheelers and their associated semi-trailers. It was a dirty and somewhat dangerous job, but I fondly remember my first job. Anyway, I digress, so let's get back to the topic of Matrox cards.
The MGA Millennium had unprecedented image quality, and its RAMDAC was in a league of its own. The G200 had the best 3D image quality when it was released, but it was really slow and somewhat buggy outside of Direct3D where it shined. However, even with my significant discount and my fanboyism, when the G400 was released, I defected to NVIDIA since its relative performance was abysmal.
One usecase Matrox kept doing well was X11 multimonitor desktops. The G400 era was about the time I was drifting away from games and moving to full time Linux, so they suited me at least.
yes, i am very familiar with that pain. fglrx was hell compared to nvidia.
nvidia being the only viable solution for 3d on linux is a bit of an exaggeration imo (source: i did it for 5 years), but that was a long time ago: we have amdgpu, which is far superior to nvidia’s closed source driver.
Except it doesn't do GPU compute stuff, so it's no use for anything except games.
it doesn’t do CUDA, but it does do opencl, and vulkan compute
Maybe, but nothing really uses that, at least for video.
amdgpu is better now. But was terrible for years, probably 2000-2015. That’s what gp is saying.
Huh ? Compared to open source nvidia driver which could do nothing ?
I had a Riva TNT 2 card. The only "accelerated" thing it could do in X was DGA (direct graphics access). Switched to ATI and never looked back. Of course you could use the proprietary driver, if you had enough time to solve installation problems and didn't mind frequent crashes.
> Compared to open source nvidia driver which could do nothing ?
Compared to the official Nvidia driver.
> If you had enough time to solve instalation problems and didn't mind frequent crashes
I used Nvidia GPUs from ~2001 to ~2018 on various machines with various GPUs and i never had any such issues on Linux. I always used the official driver installer and it worked perfectly fine.
Did people not try the nvidia driver back then? Even as a casual user at the time it was miles ahead - but it wasn’t open source
DGA and later XV.
amdgpu is new. you may be thinking about fglrx: a true hell.
No, I was thinking about amdgpu. amdgpu, the open source driver, has for the last 4-5 years been better than the nvidia closed source driver (excluding the cuda vs opencl/rocm debacle ofc).
fglrx has always been a terrible experience indeed, so AMD was no match for nvidia closed source driver.
So, once upon a time (I'd say 2000-2015) the best Linux driver for discrete GPUs was nVidia closed source one. Nowadays it's the amd open source one. Intel has always been good, but doesn't provide the right amount of power.
I think any company who feels they are in the lead with something competitive would do the same. The ones who open their standards were behind to begin with and that's their way of combating the proprietary competition.
Belief in your own technology, even if it is good, as it turns out, is often insufficient to really win. At some point, in computing, you need some ecosystem buy in, and you almost certainly will not be able to go it alone.
Nvidia seems utterly disinterested in learning these lessons, decades in now: they just get more and more competitive, less and less participatory. It's wild. On the one hand they do a great job maintaining products like the Nvidia Shield TV. On the other hand, if you try anything other than Linux4Tegra (L4T) on most of their products (the Android devices won't work at all for anything but Android, btw) it probably won't work at all or will be miserable.
Nvidia has one of the weirdest moats: being open-source-like & providing OK-ish open source mini-worlds, but you have to stay within 100m of the keep or it all falls apart. And yeah, a lot of people simply don't notice. Nvidia has attracted a large camp-followers group, semi-tech folk, that they enable, but who don't really grasp the weird, limited context they are confined to.
As much as I hate Nvidia, AMD and Intel have done themselves zero favors in the space.
It's not that hard: you must provide a way to use CUDA on your hardware. Either support it directly, transcompile it, emulate it, provide shims, anything. After that, you can provide your own APIs that take advantage of every extra molecule of performance.
And neither AMD nor Intel have thrown down the money to do it. That's all it is. Money. You have an army of folks in the space who would love to use anything other than Nvidia who would do all the work if you just threw them money.
What do they get right with shield?
Some say the NURBS model also wasn't a fit with the culture at the time, and wasn't supported by modeling or texturing tools either. Game devs would get faster results with triangles than with NURBS. Not sure who should have footed the bill, game studios or nvidia.
How is NVIDIA different from Apple?
Nvidia makes superior graphics cards which are for dirty gamers while Apple makes superior webshit development machines.
My guess is that it’s much harder to develop rendering algorithms (e.g. shaders) for NURBSes. It’s easy and efficient to compute and interpolate surface normals for polygons (the Phong shader is dead simple [0], and thus easy to extend). Basic shading algorithms are much more complicated for a NURBS [1], and thus sufficiently computationally inefficient that you might as well discretize the NURBS to a polygonal mesh (indeed, this is what 3D modeling programs do). At that point, you might as well model the polygonal mesh directly; I don’t think NURBS-based modeling is significantly easier than mesh-based modeling for the 3D artist.
[0] https://cs.nyu.edu/~perlin/courses/fall2005ugrad/phong.html
[1] https://www.dgp.toronto.edu/public_user/lessig/talks/talk_al...
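To make the "dead simple" bit concrete, here is a rough sketch of per-pixel Phong lighting in plain C (my own illustration, not code from [0]): with an interpolated mesh normal it's just a handful of dot products, whereas for a NURBS you'd first have to evaluate basis-function derivatives just to get that normal.

    /* Hedged sketch: minimal Phong lighting for a mesh. Returns a single
       grey-scale intensity for one pixel, given an interpolated normal. */
    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    static vec3  vsub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float vdot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static vec3  vnorm(vec3 v) { float l = sqrtf(vdot(v, v)); return (vec3){ v.x / l, v.y / l, v.z / l }; }

    /* n: interpolated vertex normal, p: surface point, eye/light: world-space positions */
    float phong(vec3 n, vec3 p, vec3 eye, vec3 light, float shininess)
    {
        vec3 N = vnorm(n);
        vec3 L = vnorm(vsub(light, p));
        vec3 V = vnorm(vsub(eye, p));
        float ndotl = vdot(N, L);
        float diff = ndotl > 0.0f ? ndotl : 0.0f;
        /* reflect L about N: R = 2(N.L)N - L; only add specular when lit */
        vec3 R = { 2.0f * ndotl * N.x - L.x, 2.0f * ndotl * N.y - L.y, 2.0f * ndotl * N.z - L.z };
        float spec = ndotl > 0.0f ? powf(fmaxf(vdot(R, V), 0.0f), shininess) : 0.0f;
        return 0.1f + 0.7f * diff + 0.2f * spec;   /* ambient + diffuse + specular */
    }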
Given the available resources today it should be possible to create a NURB based renderer on something like the ECP5 FPGA. Not a project I have time for but something to think about.
How do you direct render a curved surface? The most straightforward, most flexible way is to convert it into a polygon mesh.
I suppose you could direct rasterize a projected 3D curved surface, but the math for doing so is hideously complicated, and it is not at all obvious it’d be faster.
I think the idea is that polygon meshes are the only way things are done on all existing graphics cards, and as such that is the only primitive used and the only primitive optimized for. Personally I suspect that triangle meshes were the correct way to go, but you can imagine an alternate past where we optimized for CSG-style solid primitives (POV-Ray), or for drawing point clouds (voxels), or for spline-based patches (NURBS). Just figure out how to draw the primitive and build hardware that is good at it. Right now the hardware is good at drawing triangle meshes, so that is the algorithm used.
So just for the record, I've actually written a software 3D rasterizer for a video game back in the 90's, and did a first pass at porting the engine to Glide using the Voodoo 2 and Voodoo 3 hardware. I'm pulling on decades-old knowledge, but it was a formative time and I am pretty sure my memory here is accurate.
At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. What actual pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, then calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
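For what it's worth, the whole flat-fill case really is only a few lines. Here's a hedged, from-memory sketch of that scanline walk in C (no perspective correction or texturing; I interpolate the edge x's directly instead of stepping by slope, same idea; put_pixel() is just an assumed framebuffer write):

    /* Sketch of scanline triangle filling: sort the projected vertices by y,
       then for each scanline compute the x of the long edge (a->c) and the
       active short edge (a->b, then b->c) and fill between them. */
    typedef struct { float x, y; } pt;        /* screen-space vertex */

    extern void put_pixel(int x, int y);      /* assumed framebuffer write */

    static void swap_pt(pt *a, pt *b) { pt t = *a; *a = *b; *b = t; }

    void fill_triangle(pt a, pt b, pt c)
    {
        if (b.y < a.y) swap_pt(&a, &b);       /* sort so a.y <= b.y <= c.y */
        if (c.y < a.y) swap_pt(&a, &c);
        if (c.y < b.y) swap_pt(&b, &c);

        for (int y = (int)a.y; y < (int)c.y; y++) {
            float xl = a.x + (y - a.y) / (c.y - a.y) * (c.x - a.x);   /* long edge  */
            float xs = (y < (int)b.y)                                 /* short edge */
                     ? a.x + (y - a.y) / (b.y - a.y) * (b.x - a.x)
                     : b.x + (y - b.y) / (c.y - b.y) * (c.x - b.x);
            int x0 = (int)(xl < xs ? xl : xs);
            int x1 = (int)(xl < xs ? xs : xl);
            for (int x = x0; x < x1; x++)
                put_pixel(x, y);
        }
    }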
But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:
https://stackoverflow.com/questions/31757501/pixel-by-pixel-...
I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.
On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
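Here's roughly what that split-at-the-midpoint loop looks like for a single cubic Bézier curve; a surface patch is the same trick run in two parameters. This is my own sketch rather than Reyes code, and emit_vertex() is an assumed callback that appends to the output mesh:

    /* De Casteljau's algorithm at t = 0.5 splits one cubic Bezier into two
       exact halves; recurse until the control polygon is shorter than the
       tolerance (e.g. half a pixel), then emit the start point as a vertex.
       The caller emits the final endpoint once after the top-level call. */
    #include <math.h>

    typedef struct { float x, y; } pt;

    static pt    mid(pt a, pt b)  { return (pt){ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f }; }
    static float dist(pt a, pt b) { return hypotf(a.x - b.x, a.y - b.y); }

    extern void emit_vertex(pt p);            /* assumed: append to the output mesh */

    void tessellate(pt p0, pt p1, pt p2, pt p3, float tol)
    {
        /* curve length <= control polygon length, so this span is small enough */
        if (dist(p0, p1) + dist(p1, p2) + dist(p2, p3) < tol) {
            emit_vertex(p0);
            return;
        }
        pt a = mid(p0, p1), b = mid(p1, p2), c = mid(p2, p3);
        pt d = mid(a, b),   e = mid(b, c);
        pt m = mid(d, e);                     /* the point on the curve at t = 0.5 */
        tessellate(p0, a, d, m, tol);         /* left half  */
        tessellate(m, e, c, p3, tol);         /* right half */
    }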
To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear equations as a whole lot of weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.
This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs come into play and we will want to minimize geometry then, but that's a separate issue.)
You'd probably convert it to bicubic patches or something, and then rasterise/ray-intersect those...
I'm not really convinced curves are that useful as a modelling scheme for non-CAD/design stuff (i.e. games and VFX/CG). While you can essentially evaluate the limit surface, it's not really worth it once you start needing things like displacement that actually moves points around. Short of doing things like SDF modulations (which is probably possible, but not really artist-friendly in terms of driving things with texture maps), keeping things as micropolygons is what we do in the VFX industry, and it seems that's what game engines are looking at as well (Nanite).
Nah, NURBS are a dead end. They are difficult to model with and difficult to animate and render. Polygon-based subdivision surfaces entirely replaced NURBS as soon as the Pixar Renderman patents on subdivision surfaces expired.
NURBS are more high-level than triangles. A single triangle primitive cannot be ill-defined and is much easier to rasterize. There are other high-level contenders, for example SDFs and voxels. Instead of branching out the HW to offer acceleration for each of these, they can all be reduced to triangles and made to fit in the modern graphics pipeline.
It's like having a basic VM: high-level languages are compiled to an intermediate representation where things are simpler and various optimizations can be applied.
My guess is: 'brute force and fast' always wins against 'elegant but slow'. And both the 3dfx products and triangle rasterization in general were 'brute force and fast'. Early 3D accelerator cards of different vendors were full of such weird ideas to differentiate themselves from the competitors, thankfully all went the way of the Dodo (because for game devs it was a PITA to support such non-standard features).
Another reason might have been: early 3D games usually implemented a software rasterization fallback. Much easier and faster to do for triangles than nurbs.
Was it NURBs or quads? Maybe both.
>In my opinion, Direct X was what killed it most. OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL
I don't know the details myself, but as an FYI... this famous answer covering the OpenGL vs DirectX history from StackExchange disagrees with your opinion and says OpenGL didn't keep up (ARB committee). It also mentions that the OpenGL implementation in Voodoo cards was incomplete and only enough to run Quake:
https://softwareengineering.stackexchange.com/questions/6054...
The author of that answer is active on HN so maybe he'll chime in.
> It also mentions that the OpenGL implementation in Voodoo cards was incomplete and only enough to run Quake
That brings back some memories... I remember having to pick the rendering pipeline on some games, like Quake.
I also remember the days of having to route the video cable from my graphics card to my 3dfx card then to my monitor.
>I remember having to pick the rendering pipeline on some games, like Quake.
You can still do that in some recent games, e.g. Doom 2016 and Half-Life: Alyx.
Yeah it's swung around again - it feels like 25 years ago you could choose between DirectX and OpenGL (and software too), then for AAA games at least there was DirectX only for the longest time, but now DirectX or Vulkan (or OpenGL, or DirectX 12 instead of DirectX 9/10/11).
An important puzzle piece in all that is also Microsoft's Fahrenheit diversion.
I remember selling a bunch of them at a computer trade show for the computer shop I worked at in my teens. In probably the best marketing idea I've had to date, I installed Tomb Raider on two basically identical computers, except one had a 3dfx card. As soon as we started running that, the cards basically sold themselves.
Like everything else, Khronos did not need help from Microsoft to mess up OpenGL, with its spaghetti soup and lack of tooling that makes every graphics programming newbie start by hunting down and building from scratch the infrastructure needed to make a rendering engine.
Vulkan is just as bad in this regard, with the complexity turned up to eleven. No wonder people call it a GPU hardware abstraction API, not a graphics API.
And on the Web they couldn't come up with a better idea than throwing away all the existing GLSL and replacing it with a Rust-inspired shading language.
Khronos has only been responsible for OpenGL since 2006 (essentially 3.0), while DirectX 8 came out in 2000. In the relevant timeframe for the OP, the ARB was responsible, which has nothing to do with Vulkan.
Around the Longs Peak debacle the ARB became Khronos; it was basically a renaming, and most of the people stayed the same.
It has everything to do with Vulkan, given that the same organisation is handling it, and had it not been for AMD's Mantle, they would probably be discussing what OpenGL vNext should look like.
> No wonder people call it a GPU hardware abstraction API, not a graphics API.
The entire point of Vulkan is that it’s a hardware abstraction. It was invented to offer an API around low level hardware operations rather than the typical approach of graphics libraries which come from the opposite direction.
And with it they turned everyone into a device driver developer; no wonder it isn't taking off as much as desired outside Android and GNU/Linux.
> And with it they turned everyone into a device driver developer
Which is what graphics developers wanted.
The problems with OpenGL and DirectX 11 were that you had to fight the device drivers to find the "happy path" that would allow you the maximum performance. And you had zero hope of doing solid concurrency.
Vulkan and DirectX 12 directly expose the happy path and are increasingly exposing the vector units. If you want higher level, you use an engine.
For game developers, this is a much better world. The big problem is that if you happen to be an application developer, this new world sucks. There is nowhere near the amount of money sloshing around to produce a decent "application engine" like there are "game engines".
Hmm, "vector units" isn't the right term because GPUs usually don't use those. It's better to write your shaders in terms of scalars.
But it was never intended to be a general purpose graphics SDK.
The way 3D rendering is done these days is drastically different from the days of OpenGL. The hardware is architecturally different, the approach people take to writing engines is different.
Also most people don’t even target the graphics API directly these days and instead use off the shelf 3D engines.
Vulkan was always intended to be low level. You have plenty of other APIs around still if you want something a little more abstracted.
Metal and DirectX12 (if I’m remembering my version numbers correctly) are very very similar to Vulkan so I’m not really sure what point you’re trying to make.
They’re similar to Vulkan in being low level.
But they’re significantly easier to target (less feature splitting) and much more ergonomic to develop with.
I wouldn't put Metal next to DX12/Vulkan. I'd put Vulkan as the most low-level API, DX12 slightly above Vulkan (because of convenience features like committed resources and a slightly simpler memory model), but Metal I'd put somewhere in the middle between OGL/DX11 and VK/DX12. Metal introduces many optimizations such as pipeline objects, but it requires much less micromanagement of memory and synchronization than VK/DX12 do.
"OpenGL was well supported on the Voodoo cards "
Definitely not the case.
Voodoo cards were notorious for not supporting OpenGL properly. They supported Glide instead.
3dfx also provided a "minigl" which implemented the bare minimum functions designed around particular games (like Quake) -- because they did not provide a proper OpenGL driver.
https://en.wikipedia.org/wiki/MiniGL
> a friend of mine was on the design team and he assured me that NURBs were going to be the dominant rendering model since you could do a sphere with just 6 of them, whereas triangles needed like 50 and it still looked like crap. But Nvidia was so tight fisted with development details and all their "secret sauce" none of my programs ever worked on it.
(1) Someone designs something clearly superior to other technology on the market.
(2) They reason that they have a market advantage because it's superior and they're worried that people will just copy it, so they hold it close to the chest.
(3) Inferior technologies are free or cheap and easy to copy so they win out.
(4) We get crap.
... or the alternate scenario:
(1) Someone designs something clearly superior to other technology on the market.
(2) They understand that only things that are more open and unencumbered win out, so they release it liberally.
(3) Large corporations take their work and outcompete them with superior marketing.
(4) We get superior technology, the original inventors get screwed.
So either free wins and we lose or free wins and the developers lose.
Is there a scenario where the original inventors get a good deal and we get good technology in the end?
Good point. In this context, it wasn't even superior though, at least not in the long run. Memory got bigger so that storing more triangles wasn't a problem anymore, it's more about computational resources. There, NURBS are only clearly better for very smooth surfaces (like the mentioned sphere), which are rare in natural shapes. For everything else, you get more details per FLOP by just using more triangles, which is where the industry went.
This is what patents are for, but in the real world that gets complicated too.
>In my opinion, Direct X was what killed it most.
False: 3dfx killed themselves. Their graphics chips and their architecture quickly became outdated compared to the competition. Their latest efforts towards the end of their life resorted to simply putting more of the same outdated and inefficient chip designs on the same board, leading to monstrosities: GPUs with 4 chips that came with their own power supply. Nvidia and ATI were already eating their lunch.
Also, their decision to build and sell graphics cards themselves, directly to consumers, instead of focusing on the chips and letting board partners build and sell the cards, was another reason for their fall.
Their Glide API alone would not be enough to save them from so many terrible business and engineering decisions.
>OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL
Again, false. OpenGL kinda killed itself on the Windows gaming scene. Microsoft didn't do anything to kill OpenGL on Windows. Windows 95 supported OpenGL just fine, as a first class citizen just like Direct3D, but Direct3D was easier to use and had more features for Windows game dev, meaning quicker time to market and less dev effort, while OpenGL drivers from the big GPU makers still had big quality issues back then and OpenGL progress was stagnating.
DirectX won because it was objectively better than OpenGL for Windows game dev, not because Microsoft somehow gimped OpenGL on Windows, which they didn't.
I was directly involved in graphics during this time, and was also a tech lead at an Activision studio during these times of Direct3D vs OpenGL battle, and it's not that simple.
OpenGL was the nicer API to use, on all platforms, because it hid all the nasty business of graphics buffer and context management, but in those days, it was also targeted much more at CAD/CAM and other professional use. The games industry wasn't really a factor in road maps and features. Since OpenGL did so much in the driver for you, you were dependent on driver support for all kinds of use cases. Different hardware had different capabilities and GL's extension system was used to discover what was available, but it wasn't uncommon to have to write radically different code paths for some rendering features based on the capabilities present. These capabilities could change across driver versions, so your game could break when the user updated their drivers. The main issue here was quite sloppy support from driver vendors.
DirectX was disgusting to work with. All the buffer management that OpenGL hid was now your responsibility, as was resource management for textures and vertex arrays. In DirectX, if some other 3D app was running at the same time, your textures and vertex buffers would be lost every frame, and you'd have to reconstruct everything. OpenGL did that automatically behind the scenes. This is just one example. What DirectX did have, though, was some form of certification, e.g. "DirectX 9", which guaranteed some level of features, so if you wrote your code to a DX spec, it was likely to work on lots of computers, because Microsoft did some thorough verification of drivers, and pushed manufacturers to do better. Windows was the most popular home OS, and MacOS was insignificant. OpenGL ruled on IRIX, SunOS/Solaris, HP/UX, etc, basically along the home/industry split, and that's where engineering effort went.
So, we game developers targeted the best supported API on Windows, and that was DX, despite having to hold your nose to use it. It didn't hurt that Microsoft provided great compilers and debuggers, and when XBox came out, which used the same toolchain, that finally clinched DX's complete victory, because you could debug console apps in the same way you did desktop apps, making the dev cycle so much easier. The PS1/PS2 and GameCube were really annoying to work with from an API standpoint.
Microsoft did kill OpenGL, but it was mainly because they provided a better alternative. They also did sabotage OpenGL directly, by limiting the DLLs shipped with Windows to OpenGL 1.2, so you ended up having to work around this by poking into your driver vendor's OpenGL DLL and looking up symbols by name before you could use them. Anticompetitive as they were technically, though, they did provide better tools.
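For anyone who never had the pleasure, the symbol-lookup dance looked roughly like this (a sketch from memory, not production code; the PFN typedef normally comes from glext.h, and you need a current GL context before calling it):

    /* Anything newer than what Microsoft's opengl32.dll exported had to be
       fetched from the vendor's ICD by name at runtime; this is essentially
       what loaders like GLEW/GLAD automate today. Link against opengl32. */
    #include <windows.h>
    #include <GL/gl.h>

    typedef void (APIENTRY *PFNGLACTIVETEXTUREPROC)(GLenum texture);

    static PFNGLACTIVETEXTUREPROC my_glActiveTexture;

    int load_gl_extensions(void)
    {
        /* returns NULL if the driver doesn't expose the entry point */
        my_glActiveTexture =
            (PFNGLACTIVETEXTUREPROC)wglGetProcAddress("glActiveTexture");
        return my_glActiveTexture != NULL;
    }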
I was also involved in game graphics at that time (preceding DirectX) and do not quite remember that as you do. Debugging graphics on PC was a pain, you had to either use WinDbg remote mode (which was a pain to set up to get source symbols) or SoftICE and an MDA monitor. That's just for the regular CPU debugger because of the fullscreen mode. There had not been a graphics debugger until DX9. Meanwhile all consoles could be debugged from the same dev machine, and starting from PS2 we had graphics debuggers and profilers. Even the OG xbox had PIX, which introduced pixel debugging (though was a bit of a pain to set up and needed the game to submit each frame twice).
HW OpenGL was not available on consumer machines (Win95, Win2K) at all. GLQuake used a so-called "mini-driver", which was just a wrapper around a few Glide APIs and a way to circumvent id's contract with Rendition, which forbade them from using any proprietary APIs other than Verite (the first HW-accelerated game they released had been VQuake). By the time full consumer HW OpenGL drivers became available, circa the OpenGL 2.0 timeframe, DirectX 9 already reigned supreme. You can tell by the number of OpenGL games released after 2004 (mobile games did not use OpenGL but OpenGL ES, which is a different API).
You must have worked on this earlier than me. I started with DX7 on Windows, before that I worked purely in OpenGL on workstations on high end visual simulation. Yes, in DX7 we used printf debugging and in full screen-only work, you dumped to a text file or as you say, MDA if necessary for interactive debugging, though we avoided that. DX9's visual debugger was great.
I don't remember console development fondly. This was 25 years ago, so memory is hazy, but the GameCube compiler and toolchain was awful to work with, while the PS2 TOOL compile/test cycle was extremely slow and the APIs were hard to work with, but that was more hardware craziness than anything. XBox was the easiest when it came out. Dreamcast was on the way out, but I remember really enjoying being clever with the various SH4 math instructions. Anyhow, I think we're both right, just in different times. In the DX7 days, NVIDIA and ATI were shipping OpenGL libraries which were usable, but yes, by then, DX was the 800lb gorilla on windows. The only reason that OpenGL worked at all was due to professional applications and big companies pushing against Microsoft's restrictions.
I don't recall any slowness in PS2 development. I dreaded touching anything graphical on PC though, as the graphics bugs tended to BSOD the whole machine, and rebooting 20+ times a day was not speeding anything up (Windows took its sweet time to boot, not to mention restarting all the tools you needed and recovering your workspace) lol.
Don't forget the co-marketing from Intel's DRG (dev relations group, the group I worked in), which started in ~1995 or so for game optimization dev on SIMD and, later, AGP, OpenGL and Unreal Engine. Our lab had some of the very first iterations of this, and NURBs were a major topic in the lab, especially for OpenGL render tests etc. (if you recall the NURBS Dolphin benchmark/demo).
Intel would offer up to(?) (can't recall if it was base, set, or up to) $1 million in marketing funds if my buddy and I did our objective and subjective gaming tests between the two, looking for a subjective feel that the games ran better on Intel.
The objective tests were to determine if the games were actually using the SIMD instructions...
Wasn't MS on the OpenGL board at one point? They even bought Softimage for a while. They seemed to be interested in the whole 3D space.
PS: the NV1 "mis-step" was really interesting. They somehow quickly realigned with the NV3, which was quite a success IIRC.
> he assured me that NURBs were going to be the dominant rendering mode
How does this relate to the NV-1? I thought it used quads instead of triangles. Did it do accelerated NURBs as well?
Reminds me of PowerVR's main selling point being tiled rendering, but they tried pushing some proprietary "infinite planes" BS
https://vintage3d.org/pcx1.php
"Thanks to volumes defined by infinite planes, shadows and lights can be cast from any object over any surface."
If it was marketing they didn't seem to do a great job... Very little online except for mentions of "NURBS", and this thread.
It's not intuitively obvious to me how a rasterizer accelerator would render NURBS surfaces at all (edit: without just approximating the surface with triangles/quads in software, which any competing card could also do)
I was also wondering about this. I only remember two vaguely related mentions; both are from a somewhat later time, and one of them turned out not to be NURBS but Bézier patches :)
- Curved surfaces in Quake 3: https://www.gamedeveloper.com/programming/implementing-curve...
- Rhino 3D support: https://www.rhino3d.com/features/nurbs/