Comparing Web Interfaces
Efficiency, which is doing things right, is irrelevant until you work on the right things.
What is FastCGI – and is it really fast?
The reason behind it
FastCGI was born in 1996 as an evolution of CGI: instead of starting a new process for each client request, it relies on persistent processes that serve many requests over their lifetime.
When a single persistent process cannot handle concurrent requests because its runtime is not thread-safe (like, say, PHP), the 'solution' is to run several processes in parallel to handle several client requests simultaneously.
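To make the model concrete, here is a minimal, hypothetical sketch of a persistent FastCGI worker written in Go (the listening address and the greeting are illustrative assumptions, not part of any existing setup): the process starts once and then answers any number of requests forwarded by the front-end Web server, instead of being spawned anew for each one.

// fastcgi_worker.go – a minimal, hypothetical FastCGI worker:
// the process is started once and then serves any number of
// requests forwarded to it by the front-end Web server.
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func main() {
	// Listen on a local TCP port; the front-end Web server is
	// assumed to be configured to forward FastCGI requests here.
	l, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Unlike plain CGI, no fork/exec happens per request:
		// this handler runs inside the same long-lived process.
		fmt.Fprintf(w, "Hello from a persistent worker, path=%s\n", r.URL.Path)
	})

	// fcgi.Serve speaks the FastCGI protocol over the listener.
	log.Fatal(fcgi.Serve(l, handler))
}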
The price to pay
That price is the overhead of running several processes where a single (thread-safe) process would have done the job. With PHP, it means loading the PHP runtime several times over, a pointless waste of resources.
To be fair to PHP, many of the Web servers that support FastCGI are not both event-based and multi-threaded. That means you either have to create a thread per connection (very costly) or load and run several instances of the Web server to achieve concurrency.
As a result, you have several Web server processes (or far too many threads for the number of your CPU Cores) running concurrently to feed several FastCGI processes running in parallel.
And all of them are competing for the same resources (CPU Cores and caches, memory, disks and network interfaces).
Compared to a single process able to do it all by itself, this happy mess consumes several orders of magnitude more resources than the task requires.
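As an illustration of that single-process alternative, here is a hypothetical sketch in Go (port and message are assumptions for the example): one listening socket, one address space, and concurrency handled by lightweight threads that the runtime spreads across every available CPU Core, with no worker processes and no duplicated runtime in memory.

// single_process_server.go – a hypothetical sketch of the
// "one process does it all" model: a single address space,
// serving all concurrent connections by itself.
package main

import (
	"fmt"
	"log"
	"net/http"
	"runtime"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Each request is served concurrently within this one process;
		// no extra worker processes, no duplicated runtime in memory.
		fmt.Fprintf(w, "Served by 1 process using %d CPU Cores\n", runtime.NumCPU())
	})

	// One listening socket, one process, all cores.
	log.Fatal(http.ListenAndServe(":8080", nil))
}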
The waste may not hurt too much on a 4-Core CPU, but it will on 16-, 64- and 128-Core systems.
The right way to do it
Accepting the overhead of this inter-process communication made sense at a time when high-traffic Web sites were rare and concurrency was not a concern.
But in a world of multi-Core CPUs and a race for low latency, FastCGI is a vestige of the past.
At least that's what it should be (in the interest of those who pay for servers).
If, at this stage, you are still in doubt, consult comparative benchmarks and form your own opinion.