What I know about the FCGI protocol is: the first time the application is called, the server loads it into memory, runs it, returns the response to the web server, and finishes the response, but it does not end the application. It keeps it running in memory, so subsequent requests are handled by this already-compiled, in-memory copy of the application.
Reading about the PSGI protocol, it seems to work the same way.
My question is: is my assumption correct? Are they the same regarding application speed, in requests per second?
The confusing issue is that if they work the same way, why does plackup have a command-line option to enable FCGI?
You're asking for a comparison between apples and fruit. Your question doesn't make much sense.
There are various underlying mechanisms you can use to deploy a web application written in Perl.
The problem is that for each deployment mechanism you need to change the way that your program is written. This means that you need to know that you're, say, targeting mod_perl before you start writing the code. It also means that moving an application between these various deployment methods is non-trivial.
This is the problem that PSGI solves. Instead of writing a CGI app or a mod_perl app or a FCGI app, you write an app that targets the PSGI protocol. You can then deploy exactly the same app under CGI, mod_perl, or FCGI (or many other deployment methods).
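For illustration, here's about the simplest possible PSGI application. A PSGI app is just a code reference that takes the request environment and returns a three-element array of status, headers, and body (the file name `app.psgi` is only a convention):

```perl
# app.psgi -- a minimal PSGI application
my $app = sub {
    my $env = shift;    # hashref describing the incoming request

    return [
        200,                                  # HTTP status
        [ 'Content-Type' => 'text/plain' ],   # response headers
        [ "Hello from PSGI\n" ],              # response body
    ];
};
```

Nothing in that code knows or cares whether it will end up running under CGI, FCGI, or a standalone server; that choice is made entirely at deployment time.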
If you deploy your PSGI app using the FCGI handler, then it will work the same way as a FCGI app. But later on it's simple to move it to run as a mod_perl app. Or to run it as a standalone server using something like Starman.
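That's why plackup has an FCGI option: FCGI is just one of the backends it can hand your app to. As a sketch of what that looks like (assuming Plack and Starman are installed; the socket path and worker count here are just example values):

```
plackup app.psgi                                   # standalone development server
plackup -s FCGI --listen /tmp/app.sock app.psgi    # same app, running as a FastCGI daemon
plackup -s Starman --workers 4 app.psgi            # same app, under a preforking server
```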
Does that help at all?