Imagine a ray-traced reflection. In the old model, the GPU shoots a ray. If that ray hits a mirror surface, the GPU has to stop, bounce the data back to the CPU, wait for the CPU to say "yes, shoot another ray," and then restart. That round trip costs milliseconds—an eternity in gaming.
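To make that round trip concrete, here is a rough sketch of the old feedback loop in D3D12-flavored C++. The function, the dispatch size, and the contents of the readback buffer are hypothetical stand-ins; the point is the fence wait and the Map call that drag the CPU back into the middle of the loop.

```cpp
// Sketch of the "old model": the GPU cannot decide on its own to trace another
// bounce, so the CPU dispatches, stalls, inspects results, and dispatches again.
#include <windows.h>
#include <d3d12.h>

void TraceWithCpuFeedback(ID3D12CommandQueue* queue,
                          ID3D12GraphicsCommandList* cmdList,
                          ID3D12CommandAllocator* allocator,
                          ID3D12Fence* fence,
                          HANDLE fenceEvent,
                          ID3D12Resource* readbackBuffer)  // hit results land here
{
    UINT64 fenceValue = 1;
    bool needAnotherBounce = true;

    while (needAnotherBounce)
    {
        // 1. Record and submit a dispatch that traces one bounce
        //    (PSO and root signature setup omitted for brevity).
        cmdList->Dispatch(1024, 1, 1);
        cmdList->Close();
        ID3D12CommandList* lists[] = { cmdList };
        queue->ExecuteCommandLists(1, lists);

        // 2. The CPU blocks until the GPU has completely finished.
        queue->Signal(fence, fenceValue);
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);  // the expensive stall
        ++fenceValue;

        // 3. Map the readback buffer and decide whether a mirror was hit.
        void* data = nullptr;
        readbackBuffer->Map(0, nullptr, &data);
        needAnotherBounce = false;  // in real code: inspect 'data' for mirror hits
        readbackBuffer->Unmap(0, nullptr);

        // 4. Only now can the CPU record the next bounce; repeat.
        allocator->Reset();
        cmdList->Reset(allocator, nullptr);
    }
}
```

Every pass through that loop drains the pipeline: the GPU sits idle while the CPU waits, maps, and re-records.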
The terror comes from memory. Because the GPU can now generate effectively unbounded work (a particle system that explodes into a million more particles), developers can no longer rely on static, pre-sized buffers. Microsoft's answer is backing memory: a scratch buffer, sized from a driver query and allocated up front, that gives excess work somewhere to spill without crashing the driver.
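A minimal sketch of how that safety net is wired up, using the DirectX 12 Work Graphs API from the Agility SDK. The interface and struct names below follow the public spec, but "MyWorkGraph", the CreateUavBuffer helper, and the omitted error and lifetime handling are placeholders: the app asks the runtime how much backing memory the graph could need, allocates a buffer that large, and hands it over when binding the program.

```cpp
// Sketch: querying and binding Work Graphs backing memory (D3D12 Agility SDK).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper: a UAV-capable default-heap buffer via CreateCommittedResource.
ID3D12Resource* CreateUavBuffer(ID3D12Device* device, UINT64 sizeInBytes);

void BindBackingMemory(ID3D12Device* device,
                       ID3D12StateObject* stateObject,      // contains L"MyWorkGraph"
                       ID3D12GraphicsCommandList10* cmdList)
{
    // 1. Ask the runtime how much scratch ("backing") memory the graph may need.
    ComPtr<ID3D12WorkGraphProperties> wgProps;
    stateObject->QueryInterface(IID_PPV_ARGS(&wgProps));
    UINT graphIndex = wgProps->GetWorkGraphIndex(L"MyWorkGraph");

    D3D12_WORK_GRAPH_MEMORY_REQUIREMENTS memReqs = {};
    wgProps->GetWorkGraphMemoryRequirements(graphIndex, &memReqs);

    // 2. Allocate at least MinSizeInBytes; MaxSizeInBytes gives the scheduler
    //    the most headroom for records that are in flight.
    ID3D12Resource* backing = CreateUavBuffer(device, memReqs.MaxSizeInBytes);

    // 3. Hand the buffer to the graph when binding the program.
    ComPtr<ID3D12StateObjectProperties1> soProps;
    stateObject->QueryInterface(IID_PPV_ARGS(&soProps));

    D3D12_SET_PROGRAM_DESC desc = {};
    desc.Type = D3D12_PROGRAM_TYPE_WORK_GRAPH;
    desc.WorkGraph.ProgramIdentifier = soProps->GetProgramIdentifier(L"MyWorkGraph");
    desc.WorkGraph.Flags = D3D12_SET_WORK_GRAPH_FLAG_INITIALIZE;  // first use only
    desc.WorkGraph.BackingMemory = { backing->GetGPUVirtualAddress(),
                                     memReqs.MaxSizeInBytes };
    cmdList->SetProgram(&desc);
}
```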
The GPU finally learned to manage itself. Developers just have to learn to let go.
For decades, programming a graphics card felt like managing a chaotic restaurant kitchen. The CPU (the head chef) had to shout every single instruction: chop the onions, boil the water, plate the steak. If the kitchen fell behind, the chef had to stop everything to micromanage the cleanup.
With Work Graphs, the GPU launches a "Node." That node processes the work. If that work produces more work (a second bounce, a third bounce, a particle effect that spawns more particles), the node spawns a child node right there on the silicon.
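From the host's point of view, the whole cascade collapses into one call. Below is a sketch against the DirectX 12 Work Graphs API, with a hypothetical BounceRecord struct standing in for whatever record the graph's entry node actually declares in HLSL; the program and its backing memory are assumed to have been bound beforehand.

```cpp
// Sketch: seeding a work graph with CPU-supplied input records (D3D12 Agility SDK).
#include <windows.h>
#include <d3d12.h>

struct BounceRecord   // hypothetical; layout must match the entry node's input record
{
    UINT rayIndex;
    UINT depth;
};

void LaunchGraph(ID3D12GraphicsCommandList10* cmdList)
{
    BounceRecord roots[] = { { 0, 0 }, { 1, 0 } };  // seed work only

    D3D12_DISPATCH_GRAPH_DESC dispatch = {};
    dispatch.Mode = D3D12_DISPATCH_MODE_NODE_CPU_INPUT;
    dispatch.NodeCPUInput.EntrypointIndex = 0;  // the graph's root node
    dispatch.NodeCPUInput.NumRecords = static_cast<UINT>(sizeof(roots) / sizeof(roots[0]));
    dispatch.NodeCPUInput.pRecords = roots;
    dispatch.NodeCPUInput.RecordStrideInBytes = sizeof(BounceRecord);

    // One call. Every further bounce or particle burst is a record that one node
    // shader emits into another node's input, scheduled entirely on the GPU.
    cmdList->DispatchGraph(&dispatch);
}
```

The CPU only plants the seed; expansion, amplification, and cleanup all stay on the silicon.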