I have been reading up more about GPU architecture, and one bit sort of confuses me. Most resources define "shaders" as pieces of code that run in various parts of the graphics pipeline to produce a final render of a 3D scene. Older GPU architectures were "fixed function"... does this mean that graphics programmers back then really had no control over how filters and effects were applied to their 3D scene?
Can someone confirm that, now that we have "programmable" GPUs, the shader functions/programs are sent to the GPU?
This leads me to another thought: if graphics programmers had no control, does the term "fixed function" mean that these shader-like stages were implemented directly in hardware (through transistors/gates)?
The legacy 'fixed-function' pipeline was not programmable in the modern GPU shader sense. In the earliest days it was a software renderer with some parameters, but by the late 90s it was exposed as "Hardware Transform & Light" for vertices and "Multitexturing" for pixels in Direct3D 6/7. To control the fixed-function pipeline, you set a ton of state to configure it for various operations. In other words, it was configurable but not programmable.
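To make the "configurable but not programmable" distinction concrete, here is a hypothetical sketch in Python of a fixed-function texture-combine stage: the application can only select among a few predefined modes by setting state (the mode names loosely mirror legacy multitexturing combine modes, but the function and names here are purely illustrative, not any real API).

```python
# Sketch of a fixed-function texture-combine stage. The application can
# only pick one of a few hard-wired modes via state; it cannot supply
# arbitrary per-pixel code. Mode names are illustrative.

def fixed_function_combine(mode, base_color, tex_color):
    """Combine a fragment color with a texel using a state-selected mode."""
    if mode == "REPLACE":
        return tex_color
    elif mode == "MODULATE":
        # component-wise multiply, the classic lighting * texture combine
        return tuple(b * t for b, t in zip(base_color, tex_color))
    elif mode == "ADD":
        # additive combine, clamped to 1.0 as hardware would saturate
        return tuple(min(b + t, 1.0) for b, t in zip(base_color, tex_color))
    raise ValueError(f"unsupported mode: {mode}")

# The app "programs" nothing; it just configures state:
print(fixed_function_combine("MODULATE", (1.0, 0.5, 0.0), (0.5, 0.5, 0.5)))
```

The point of the sketch: every possible behavior is already baked into the stage, and state selects between them, which is exactly why it could be implemented as cheap dedicated hardware.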
On modern GPUs, the legacy 'fixed-function' pipeline is emulated by programmable shaders, and you can see an example of what those shaders look like in the Direct3D FixedFuncEMU sample. Because it was already being emulated for Direct3D 9 and earlier anyhow, Direct3D 10 and later do not support the legacy fixed-function pipeline at all.
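As a rough illustration of what such emulation looks like, here is a simplified Python sketch of a "vertex shader" reproducing fixed-function Transform & Light: transform the position by a matrix and apply one directional diffuse light. This is my own simplified model (no specular, no fog, no perspective divide), not code from the FixedFuncEMU sample.

```python
# Sketch: emulating fixed-function "Hardware T&L" in a programmable
# vertex-shader-style function. Simplified and illustrative only.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major, list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def emulated_fixed_function_vs(position, normal, mvp, light_dir, light_color):
    """position: [x, y, z]; normal, light_dir: unit 3-vectors."""
    clip_pos = mat_vec(mvp, position + [1.0])      # transform stage
    n_dot_l = max(dot3(normal, light_dir), 0.0)    # Lambertian diffuse term
    color = [c * n_dot_l for c in light_color]     # lighting stage
    return clip_pos, color
```

Once the pipeline is expressed as code like this, every fixed-function state combination just becomes a different shader (or shader branch), which is how the emulation works in practice.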
Even on modern GPUs, there are aspects of the pipeline that remain 'fixed function', controlled by configurable state rather than programmable shaders: triangle rasterization, render target alpha blending, texture sampling, depth/stencil tests, etc. The tradeoff is less generality in exchange for very fast, cheap hardware implementations, which makes it easy to replicate these functional units across the chip.
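Render target blending is a good example of a stage that is still fixed function today: the blend equation itself is wired into the hardware, and the application only configures the source/destination factors via state. A minimal Python sketch of that model (factor names loosely echo common API enums like SRC_ALPHA, but this is an illustration, not a real API):

```python
# Sketch: fixed-function alpha blending. The equation
#     out = src * src_factor + dst * dst_factor
# is hard-wired; the app only selects the factors via state.

FACTORS = {
    "ZERO":                lambda src_a: 0.0,
    "ONE":                 lambda src_a: 1.0,
    "SRC_ALPHA":           lambda src_a: src_a,
    "ONE_MINUS_SRC_ALPHA": lambda src_a: 1.0 - src_a,
}

def blend(src_rgb, src_a, dst_rgb, src_factor, dst_factor):
    """Blend a shader's output color into the render target."""
    sf = FACTORS[src_factor](src_a)
    df = FACTORS[dst_factor](src_a)
    return tuple(s * sf + d * df for s, d in zip(src_rgb, dst_rgb))

# Classic "over" blending: 50%-transparent red over a white render target.
print(blend((1.0, 0.0, 0.0), 0.5, (1.0, 1.0, 1.0),
            "SRC_ALPHA", "ONE_MINUS_SRC_ALPHA"))
```

Because only the factor selection varies, the whole stage reduces to a couple of multiply-adds per pixel, which is why it is cheap to stamp out many copies of it in silicon next to the render target.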