I'm developing a Java EE web application running on WildFly 18, with Angular on the front end. All the HTTP calls from Angular to WildFly are POSTs. The application works fine, but about once a month, when I start it, I cannot use it because WildFly rejects every request, saying that the HTTP method POST is not supported by this URL
(see the browser-console error below). Just to make sure it is not Angular, I made the POST call from a Java program and got the same error.
The only workaround is to close everything and restart, sometimes more than once. Why does this happen, and how can I fix it? The big problem is that this may happen in production.
visualcode/rest/getbropr:1 Failed to load resource: the server responded with a status of 405 (Method Not Allowed)
main.js:1127 HttpErrorResponse
error: "Error HTTP method POST is not supported by this URL"
headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit: ƒ}
message: "Http failure response for http://localhost:4400/visualcode/rest/getbropr: 405 Method Not Allowed"
name: "HttpErrorResponse"
ok: false
status: 405
statusText: "Method Not Allowed"
url: "http://localhost:4400/visualcode/rest/getbropr"
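For reference, the standalone Java check was along these lines (a minimal sketch, not my exact code; the embedded stub server here stands in for the failing deployment so the snippet runs on its own — against the real server the URL would be http://localhost:8080/visualcode/rest/getbropr):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostProbe {
    // POST a JSON body to the given URL and return the HTTP status code
    static int postJson(String url, String json) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return con.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        // Stub that rejects POST the same way the broken deployment does
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/visualcode/rest/getbropr",
                ex -> { ex.sendResponseHeaders(405, -1); ex.close(); });
        stub.start();
        int status = postJson("http://localhost:" + stub.getAddress().getPort()
                + "/visualcode/rest/getbropr", "{}");
        System.out.println("HTTP status: " + status); // prints 405 against the stub
        stub.stop(0);
    }
}
```

This rules out the Angular HTTP client, interceptors, and the dev-server proxy: the 405 comes back even for a bare java.net POST.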
UPDATE
This happened to me on two different machines with identical WildFly configurations, so it must be something in how JAX-RS or some other related component is set up.
UPDATE 2
I got the error again, and this is the server log:
11:46:17,306 DEBUG [io.undertow.request] (default I/O-12) Matched prefix path /visualcode for path /visualcode/rest/getbropr
11:46:17,306 DEBUG [io.undertow.request.security] (default task-1) Attempting to authenticate /visualcode/rest/getbropr, authentication required: false
11:46:17,306 DEBUG [io.undertow.request.security] (default task-1) Authentication outcome was NOT_ATTEMPTED with method io.undertow.security.impl.CachedAuthenticatedSessionMechanism@2d8f2c0a for /visualcode/rest/getbropr
11:46:17,306 DEBUG [io.undertow.request.security] (default task-1) Authentication result was ATTEMPTED for /visualcode/rest/getbropr
11:46:17,307 INFO [io.undertow.request.dump] (default task-1)
----------------------------REQUEST---------------------------
URI=/visualcode/rest/getbropr
characterEncoding=null
contentLength=2
contentType=[application/json]
cookie=_ga=GA1.1.1378850711.1587329434
header=accept=application/json, text/plain, */*
header=accept-language=en-US,en;q=0.9,es;q=0.8
header=accept-encoding=gzip, deflate, br
header=sec-fetch-mode=cors
header=origin=http://localhost:4400
header=user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36
header=sec-fetch-dest=empty
header=connection=close
header=sec-fetch-site=same-origin
header=cookie=_ga=GA1.1.1378850711.1587329434
header=content-type=application/json
header=content-length=2
header=referer=http://localhost:4400/login
header=host=localhost:8080
locale=[en_US, en, es]
method=POST
protocol=HTTP/1.1
queryString=
remoteAddr=/127.0.0.1:51323
remoteHost=kubernetes.docker.internal
scheme=http
host=localhost:8080
serverPort=8080
isSecure=false
--------------------------RESPONSE--------------------------
contentLength=104
contentType=text/html;charset=UTF-8
header=Connection=close
header=Content-Type=text/html;charset=UTF-8
header=Content-Length=104
header=Date=Thu, 09 Jul 2020 15:46:17 GMT
status=405
==============================================================
And this is the code that (sometimes) fails:
import javax.inject.Inject;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/")
@Consumes({ MediaType.APPLICATION_JSON })
@Produces({ MediaType.APPLICATION_JSON })
public class LoginService {

    @Inject
    private SomeBean bean;

    @Context
    private HttpServletRequest httpRequest;

    @POST
    @Path("/getbropr")
    public Response getBrowserProperties() {
        // process response
    }
}
At the moment I cannot find any similar issue in the WildFly issue tracker, and I have not had a similar problem with WildFly 17 or 19.
So, the first thing I suggest is to carry out the following checks and collect some information to help:
With these tests you can first determine the true origin of the response. Sometimes bad configuration in load balancers, networks, etc. means that calls never arrive at their destination, and what answers is another service entirely.
It is useful to check the response headers, since traces of the real responder are sometimes found there. The format or message in the response body can be another clue to its origin.
Once we have confirmed that the origin is the service itself, it is worth reviewing the logs backwards from the moment it began to fail. There may be clues to a fault or a mishandled exception. For example, if you are using RESTEasy (JAX-RS) with WildFly, which is quite common, an exception thrown incorrectly when there is an internal error (database not available, internal services unreachable, etc.), or a faulty exception mapper, can cause an incorrect message and status code to be returned, which distracts from the real problem. This is not very common, but it is a point to keep in mind, and reviewing the logs can help here.
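As an illustration of that last point, a defensive exception mapper keeps internal failures reported as a 500 with the real message, instead of some misleading status and body (the class name and message format here are illustrative, not from your project):

```java
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Maps any unhandled runtime exception to an explicit 500, so an internal
// failure (DB down, service unreachable, ...) is never reported under a
// misleading status code by a broken custom mapper.
@Provider
public class UnexpectedErrorMapper implements ExceptionMapper<RuntimeException> {
    @Override
    public Response toResponse(RuntimeException e) {
        return Response.serverError()                       // HTTP 500
                       .entity("Internal error: " + e.getMessage())
                       .build();
    }
}
```

If your application registers mappers like this, check each of them: one that swallows the cause or builds the wrong status would produce exactly the kind of confusing response you are seeing.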
If, after analyzing the response, confirming its origin, and reviewing the logs and exceptions, your problem persists, I suggest you provide more detail about your application in this thread (for example: Java version, libraries, whether you implement directly with servlets or with RESTEasy, authentication, etc.). Any information you can provide, such as a trace or a small demo with similar characteristics that reproduces the problem, will be welcome.
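For example, given that your failing path is /visualcode/rest/getbropr, one thing worth confirming is that JAX-RS is activated deterministically with an explicit Application subclass rather than relying on deployment-time scanning alone. A minimal sketch (the class name is illustrative, not from your project; the /rest prefix matches your failing URL):

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Registers JAX-RS explicitly under /rest; an empty subclass tells the
// container to pick up every @Path resource class it finds, so routing
// does not depend on how the deployment was scanned at startup.
@ApplicationPath("/rest")
public class RestActivator extends Application {
}
```

If no such class (and no equivalent web.xml servlet mapping) exists, the resource may intermittently not be wired at all, and the POST would fall through to whatever else serves that path, which answers 405 with a text/html body exactly like your dump shows.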
Since you are running the service locally, I suggest you configure WildFly to log each call and its parameters. WildFly uses Undertow as its web engine, so add a filter to your Undertow configuration to dump requests. Add the following to the "undertow" subsystem of your standalone.xml:
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
    <server name="default-server">
        <host name="default-host" alias="localhost">
            ...
            <filter-ref name="request-logger"/>
        </host>
    </server>
    <filters>
        ...
        <filter name="request-logger" module="io.undertow.core"
                class-name="io.undertow.server.handlers.RequestDumpingHandler"/>
    </filters>
</subsystem>
Alternatively, you can apply the same change with jboss-cli.sh using this script:
$WILDFLY_HOME/bin/jboss-cli.sh --connect --file=script.cli
script.cli file:
batch
/subsystem=undertow/configuration=filter/custom-filter=request-logging-filter:add(class-name=io.undertow.server.handlers.RequestDumpingHandler, module=io.undertow.core)
/subsystem=undertow/server=default-server/host=default-host/filter-ref=request-logging-filter:add
run-batch
Any call that reaches Undertow through the open port will then be dumped to your logs, which will help confirm whether the call actually reaches WildFly and let you trace the problem. Debug logs will also be welcome later if it does.
On the other hand, have you tested whether the problem persists on WildFly 20? Upgrading WildFly from 18 to 20 is relatively simple.
The request dump looks good, so I suggest you check the startup logs. This looks like a race condition at start; WildFly runs many tasks simultaneously to get started as fast as possible.
Typically, duplicate configuration files on the classpath, or duplicated startup code, can lead to unexpected behavior, so comparing the startup logs of a good start against a bad one is helpful. Also check that you have only one web.xml and one set of deployment descriptors in your .war file.
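You can verify that last point mechanically by counting the descriptor entries inside the packaged archive. A small stdlib-only sketch (the WAR file name is an assumption; pass your real artifact path):

```java
import java.util.zip.ZipFile;

public class DescriptorCheck {
    // Returns how many entries in the archive end with the given name
    static long countEntries(String archivePath, String name) throws Exception {
        try (ZipFile zip = new ZipFile(archivePath)) {
            return zip.stream()
                      .filter(e -> e.getName().endsWith(name))
                      .count();
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumed artifact name; expect exactly one WEB-INF/web.xml
        String war = args.length > 0 ? args[0] : "visualcode.war";
        if (new java.io.File(war).exists()) {
            System.out.println("web.xml entries: "
                    + countEntries(war, "WEB-INF/web.xml"));
        }
    }
}
```

Anything other than exactly one hit would be worth investigating, since which copy wins at deployment time can vary from one start to the next.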
Hope this helps.