Yeah, but SATA is slow, slow, SLLLLOOOOOOOWWWW compared to the potential of PCIe. 6 Gb/s (as in bits) compared to roughly 126 GB/s (as in BYTES) on a 16-lane PCIe link (under v6, due this year).
There's more to it than that, obviously (like low latency, deeper command queues, and MUCH greater parallelism), provided both hardware (CPU) and OS are ready for it.
As each generation of PCIe comes out, the speed goes up, and so does the maximum number of lanes. The common analogy is a road. Imagine how many cars you can get down SATA as a single-lane road with a 70 mph limit, then imagine a highway with 16 lanes and a 300 mph speed limit (and cars that will do it), and think about how many cars get down each in a given period of time.
But note that PCIe supports a lot more than HDs and SSDs, whether the SSDs are SATA or NVMe. There will be a given number of PCIe lanes for a given processor, and you'll be sticking both HD and SSD storage down them, as well as USB, Bluetooth, LAN, NFC and all sorts. And graphics, of course.
So it all depends on the hardware architecture: which generation (PCIe y.0, where y runs from 1 to 6) and how many lanes (x1, x4, x8, x16).
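To put rough numbers on the generation-and-lanes point, here's a little back-of-envelope calculator. The transfer rates and line encodings are per the published PCIe specs; the output is raw one-way bandwidth, ignoring protocol/FLIT overhead, which is why Gen6 x16 comes out at 128 GB/s rather than the ~126 GB/s usually quoted.

```python
# Rough PCIe one-way bandwidth, per generation and lane count.
# Ignores protocol overhead beyond line encoding, so real-world
# figures land a little below these.
GENS = {
    # generation: (transfer rate in GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
    6: (64.0, 1.0),       # PAM4 + FLIT; encoding efficiency ~1 here
}

def bandwidth_gbytes(gen, lanes):
    """Approximate one-way link bandwidth in GB/s."""
    gt, eff = GENS[gen]
    return gt * eff * lanes / 8  # 8 bits per byte

for lanes in (1, 4, 8, 16):
    row = " ".join(f"gen{g}:{bandwidth_gbytes(g, lanes):6.1f}" for g in GENS)
    print(f"x{lanes:<2} {row}")
```

Run it and you can see both effects from the analogy at once: each generation roughly doubles the speed limit, and more lanes multiply it again.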
Some manufacturers, for example, might elect to use a PCIe-whatever x8 slot, giving RAIDed SSDs a lot of bandwidth, and another might provide two x4 slots. The latter could support an SSD on one x4, but with less bandwidth, and a fast LAN board, or Wi-Fi 6, etc, on the other.
If you're trying to saturate an M.2 PCIe link with SSDs, it'll depend not just on how many SSDs, and the performance of each of them, but on how wide that link is (how many lanes it gets). Overall performance depends on it. It's a case of trying to balance how many cars, I mean how much data, you can get out of the SSDs without bottlenecking, versus leaving bandwidth for non-SSD usage too. And that is use-case dependent.
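The balancing act can be sketched with a hypothetical example. Assume a Gen4 x4 uplink and drives that each sustain about 5 GB/s of sequential reads (both figures are illustrative assumptions, not measurements of any particular hardware):

```python
# Hypothetical saturation check: how many SSDs before an upstream
# PCIe link bottlenecks. All numbers here are assumed, not measured.
def link_bw(gt_per_lane, lanes, eff=128 / 130):
    """Approximate one-way link bandwidth in GB/s."""
    return gt_per_lane * eff * lanes / 8

uplink = link_bw(16.0, 4)   # assumed Gen4 x4 uplink, ~7.9 GB/s
ssd_seq_read = 5.0          # assumed per-drive sustained read, GB/s

for n in (1, 2, 3):
    demand = n * ssd_seq_read
    status = "saturated" if demand > uplink else "ok"
    print(f"{n} SSD(s): demand {demand:.1f} GB/s vs link "
          f"{uplink:.1f} GB/s -> {status}")
```

With these made-up numbers, one drive fits comfortably but two already oversubscribe the link, which is exactly the "how many cars fit on the road" question from the analogy.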
Please excuse the simplistic analogy, but there's a good reason for explaining it that way: it's about the depth of my own understanding of what's going on.