So the obvious answer is that it's necessary because it serves routed paths from the server, so that we don't get 404s.
However, solutions like angular-cli-ghpages solve this by adding a script to the app that parses parameters returned from the 404 page and then reroutes the app to the correct state.
So I'm just curious: are there any drawbacks to this, and why wouldn't it be used in general instead of solutions like Angular Universal or Rendertron?
For example, this is what spa-github-pages says:
A quick SEO note - while it's never good to have a 404 response, it appears based on Search Engine Land's testing that Google's crawler will treat the JavaScript window.location redirect in the 404.html file the same as a 301 redirect for its indexing. From my testing I can confirm that Google will index all pages without issue, the only caveat is that the redirect query is what Google indexes as the url. For example, the url example.tld/about will get indexed as example.tld/?p=/about. When the user clicks on the search result, the url will change back to example.tld/about once the site loads.
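For reference, here is a minimal sketch of the kind of redirect that spa-github-pages / angular-cli-ghpages set up. The `?p=` parameter name is taken from the quote above; the real script is more involved (it also preserves existing query strings, for instance), so treat this as an illustration of the idea rather than the actual code.

```ts
// 404.html (sketch) - GitHub Pages serves this for any unknown path, e.g. /about.
// Encode the requested path into a query parameter and bounce to the root,
// where index.html and the SPA bundle actually live.
const loc = window.location;
loc.replace(
  `${loc.protocol}//${loc.host}/?p=${encodeURIComponent(loc.pathname + loc.search)}${loc.hash}`
);

// index.html (sketch) - runs before the app bootstraps.
// If we arrived via the 404 redirect, restore the original URL so the
// router sees /about instead of /?p=/about.
const params = new URLSearchParams(window.location.search);
const redirected = params.get('p');
if (redirected) {
  window.history.replaceState(null, '', redirected + window.location.hash);
}
```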
Because of two main things:
Robots do not run JavaScript, so they only parse what they get from the server, and that is where Universal comes in.
Even with an --aot-built app served by gh-pages, with a 404 page that is a clone of the index, the client/robot still needs to download the initial files, parse them, and finally mount the final view. Gh-pages does not serve the final HTML state.
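For contrast, here is a trimmed-down sketch of the Express server that `ng add @nguniversal/express-engine` scaffolds (the older ngExpressEngine setup); the dist path and the AppServerModule import path are assumptions for illustration, not your project's actual values.

```ts
// server.ts (sketch) - renders Angular on the server, so a crawler's first
// response already contains the final HTML for the requested route.
import 'zone.js/node';
import * as express from 'express';
import { ngExpressEngine } from '@nguniversal/express-engine';
import { AppServerModule } from './src/main.server'; // assumed path

const app = express();
const distFolder = 'dist/my-app/browser'; // assumed browser build output

app.engine('html', ngExpressEngine({ bootstrap: AppServerModule }));
app.set('view engine', 'html');
app.set('views', distFolder);

// Static assets (JS bundles, styles, images) are served as-is.
app.use(express.static(distFolder, { index: false }));

// Every other request is rendered through Universal.
app.get('*', (req, res) => {
  res.render('index', { req });
});

app.listen(4000, () => console.log('Universal server on http://localhost:4000'));
```

With something like this in place, a crawler requesting /about gets the rendered markup for that route in the initial response, instead of an empty app shell it would have to execute JavaScript to fill in.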