We are using Twilio TaskRouter for call routing in our web project (a Java 8 application running on Tomcat 9), with the Twilio Java SDK (https://mvnrepository.com/artifact/com.twilio.sdk/twilio/7.36.2) for making Twilio API requests. During testing we found that Twilio was taking too long to respond to almost every API request sent within a one-hour span. The response time was as high as 10 minutes for a few sample requests we analyzed.
The log trace for one such request is provided below. It shows that the 10-minute delay happened after the line that says 'Connection request: [route: {s}->https://taskrouter.twilio.com:443][total kept alive: 0; route allocated: 10 of 10; total allocated: 10 of 20]'. As per our analysis, this trace comes from the Apache HttpClient library that is part of the Twilio SDK. The next trace, 'Connection leased: [id: 3492][route: {s}->https://taskrouter.twilio.com:443][total kept alive: 0; route allocated: 10 of 10; total allocated: 10 of 20]', which appears 10 minutes later, is also from Apache HttpClient. As per the http-outgoing trace in the log below, Twilio-Request-Duration is only 27 milliseconds, so we assume the 10-minute delay happened within the SDK and not on the Twilio server. Please help us resolve this issue. Thanks in advance.
| DEBUG | 2019-07-09 10:31:22.688 | https-jsse-nio-8443-exec-342 | TwilioCommandExecutor:38 | Executing Twilio command
| DEBUG | 2019-07-09 10:31:22.688 | https-jsse-nio-8443-exec-342 | RequestAddCookies:123 | CookieSpec selected: default
| DEBUG | 2019-07-09 10:31:23.756 | https-jsse-nio-8443-exec-342 | RequestAuthCache:77 | Auth cache not set in the context
| DEBUG | 2019-07-09 10:31:23.772 | https-jsse-nio-8443-exec-342 | PoolingHttpClientConnectionManager:265 | Connection request: [route: {s}->https://taskrouter.twilio.com:443][total kept alive: 0; route allocated: 10 of 10; total allocated: 10 of 20]
| DEBUG | 2019-07-09 10:41:39.018 | https-jsse-nio-8443-exec-342 | PoolingHttpClientConnectionManager:309 | Connection leased: [id: 3492][route: {s}->https://taskrouter.twilio.com:443][total kept alive: 0; route allocated: 10 of 10; total allocated: 10 of 20]
| DEBUG | 2019-07-09 10:41:39.020 | https-jsse-nio-8443-exec-342 | DefaultManagedHttpClientConnection:88 | http-outgoing-3492: set socket timeout to 0
| DEBUG | 2019-07-09 10:41:39.020 | https-jsse-nio-8443-exec-342 | DefaultManagedHttpClientConnection:88 | http-outgoing-3492: set socket timeout to 30500
| DEBUG | 2019-07-09 10:41:39.020 | https-jsse-nio-8443-exec-342 | MainClientExec:256 | Executing request POST /v1/Workspaces/WSd9f51d1692a76faa3079514c2a89197c/Workers/WK9cacec1e7285f98519152316bac03de8 HTTP/1.1
| DEBUG | 2019-07-09 10:41:39.021 | https-jsse-nio-8443-exec-342 | MainClientExec:267 | Proxy auth state: UNCHALLENGED
| DEBUG | 2019-07-09 10:41:39.021 | https-jsse-nio-8443-exec-342 | headers:133 | http-outgoing-3492 >> POST /v1/Workspaces/WSd9f51d1692a76faa3079514c2a89197c/Workers/WK9cacec1e7285f98519152316bac03de8 HTTP/1.1
| DEBUG | 2019-07-09 10:41:39.096 | https-jsse-nio-8443-exec-342 | wire:73 | http-outgoing-3492 << "HTTP/1.1 200 OK[\r][\n]"
| DEBUG | 2019-07-09 10:41:39.105 | https-jsse-nio-8443-exec-342 | wire:73 | http-outgoing-3492 << "Date: Tue, 09 Jul 2019 14:41:39 GMT[\r][\n]"
| DEBUG | 2019-07-09 10:41:39.106 | https-jsse-nio-8443-exec-342 | wire:73 | http-outgoing-3492 << "Content-Type: application/json[\r][\n]"
| DEBUG | 2019-07-09 10:41:39.106 | https-jsse-nio-8443-exec-342 | wire:73 | http-outgoing-3492 << "Content-Length: 1891[\r][\n]"
| DEBUG | 2019-07-09 10:41:39.106 | https-jsse-nio-8443-exec-342 | wire:73 | http-outgoing-3492 << "Connection: keep-alive[\r][\n]"
| DEBUG | 2019-07-09 10:41:39.131 | https-jsse-nio-8443-exec-342 | headers:125 | http-outgoing-3492 << Twilio-Request-Duration: 0.027
| DEBUG | 2019-07-09 10:41:39.133 | https-jsse-nio-8443-exec-342 | MainClientExec:285 | Connection can be kept alive indefinitely
| DEBUG | 2019-07-09 10:41:39.134 | https-jsse-nio-8443-exec-342 | PoolingHttpClientConnectionManager:341 | Connection [id: 3492][route: {s}->https://taskrouter.twilio.com:443] can be kept alive indefinitely
| DEBUG | 2019-07-09 10:41:39.137 | https-jsse-nio-8443-exec-342 | DefaultManagedHttpClientConnection:88 | http-outgoing-3492: set socket timeout to 0
| DEBUG | 2019-07-09 10:41:39.424 | https-jsse-nio-8443-exec-342 | PoolingHttpClientConnectionManager:348 | Connection released: [id: 3492][route: {s}->https://taskrouter.twilio.com:443][total kept alive: 1; route allocated: 10 of 10; total allocated: 10 of 20]
| DEBUG | 2019-07-09 10:41:39.424 | https-jsse-nio-8443-exec-342 | TwilioCommandExecutor:40 | Twilio command executed successfully
Twilio developer evangelist here.
According to your comment you made around 25,000 API requests in 15 minutes, which is approximately 27 requests per second. Twilio itself has a limit of 100 concurrent connections, but the Twilio Java library has a pool of 20 HTTP connection objects. If you throw all 25,000 requests at the library at once, it will start by making the first 20 requests using the pool; each time a request frees up a connection, the next request is made. Ultimately this means the last of the 25,000 requests has to wait for the previous 24,999 to complete, causing your 10-minute delay.
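The arithmetic can be sketched as below. The per-request round-trip time of 0.25 s is an assumption (the server-side Twilio-Request-Duration alone was 27 ms, but the full round trip including TLS and network overhead is longer), and the concurrency of 10 comes from your log's "route allocated: 10 of 10" — the pool allows 20 connections in total but only 10 per route, and all of these requests go to the same taskrouter.twilio.com route:

```java
public class PoolDelayEstimate {
    // With N requests queued behind a pool of C connections, the drain time
    // is roughly N * t / C, where t is the end-to-end time per request.
    public static double estimateSeconds(int requests, int connections, double secondsPerRequest) {
        return requests * secondsPerRequest / connections;
    }

    public static void main(String[] args) {
        // 25,000 requests, 10 usable connections to the TaskRouter route,
        // assuming ~0.25 s per round trip (an assumption, see above):
        System.out.printf("estimated drain time: %.0f s%n",
                estimateSeconds(25000, 10, 0.25));
        // prints: estimated drain time: 625 s  -- roughly the 10 minutes observed
    }
}
```

So a back-of-the-envelope estimate lands right around your observed delay, which supports the queueing explanation.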
If the size of the connection pool isn't big enough for you, you can provide your own HttpClient to the library. You can see the existing implementation of NetworkHttpClient, which is a good model to base your own implementation on; the connection pool limits are implemented there. Note that making the connection pool larger than 100 will result in API errors for exceeding the allowed concurrency. If you need higher API limits, I recommend you talk to Twilio sales about your needs.
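A configuration sketch, assuming the Twilio Java SDK 7.x is on the classpath (the `NetworkHttpClient(HttpClientBuilder)` constructor and `Twilio.setRestClient` are available in recent 7.x releases; check your exact version). It enlarges both the total and per-route limits, since all TaskRouter requests share a single route:

```java
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

import com.twilio.Twilio;
import com.twilio.http.NetworkHttpClient;
import com.twilio.http.TwilioRestClient;

public class TwilioClientConfig {
    public static void init(String accountSid, String authToken) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);           // stay at or below Twilio's 100-connection limit
        cm.setDefaultMaxPerRoute(100); // all TaskRouter requests go to one route

        NetworkHttpClient httpClient = new NetworkHttpClient(
                HttpClientBuilder.create().setConnectionManager(cm));

        Twilio.setRestClient(new TwilioRestClient.Builder(accountSid, authToken)
                .httpClient(httpClient)
                .build());
    }
}
```

Call `TwilioClientConfig.init(...)` once at startup instead of `Twilio.init(...)`, before making any API requests.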
Alternatively, you could consider whether that many requests in such a short time are really necessary. If it is possible to batch the requests differently and spread the calls out over time, you could avoid this level of queueing.
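If you do keep the burst, one option is to throttle on your side so requests queue in your code rather than piling up inside the HTTP connection pool (where they block Tomcat request threads, as your `https-jsse-nio` thread did for 10 minutes). A minimal sketch using only the JDK, with a `Semaphore` capping the number of in-flight tasks (the limit of 10 matches the per-route cap in your log; the class name is mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledSubmitter {
    private final Semaphore permits;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public ThrottledSubmitter(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Blocks the caller once maxInFlight tasks are running, so work queues
    // at the submitter instead of inside the HTTP connection pool.
    public void submit(Runnable task) {
        permits.acquireUninterruptibly();
        pool.execute(() -> {
            try {
                task.run();
            } finally {
                permits.release();
            }
        });
    }

    public void shutdownAndWait() {
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Demo: submit many short tasks and verify concurrency never exceeds the limit.
    public static int measureMaxInFlight(int limit, int tasks) {
        ThrottledSubmitter submitter = new ThrottledSubmitter(limit);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            submitter.submit(() -> {
                int now = inFlight.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try {
                    Thread.sleep(2); // stand-in for an API call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                inFlight.decrementAndGet();
            });
        }
        submitter.shutdownAndWait();
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("max in flight: " + measureMaxInFlight(10, 200));
    }
}
```

In your case the `Runnable` would wrap the Twilio API call; the key point is that the throttle, not the connection pool, becomes the place where excess work waits.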
Let me know if that helps at all.