Below are the relevant code segments. Is this related to the connection being closed or to buffer memory? The file is 50 MB+ and has 65,000+ text lines.
@Scheduled(cron = "0 0 9 * * *")
public void retrieveFile() throws IOException {
    try {
        RestTemplate restTemplate = new RestTemplate();
        String url = "URL of the file, which contains 50MB+ AND 65000+ Text Lines";
        restTemplate.getMessageConverters().add(0, new StringHttpMessageConverter(Charset.forName("UTF-8")));
        URL targetFileUrl = new URL(url);
        URLConnection connection = targetFileUrl.openConnection();
        InputStream is = connection.getInputStream();
        handleFile(is);
    } catch (Exception e) {
        e.getMessage();
    }
}
public void handleFile(InputStream inputStream) throws IOException {
    BufferedReader reader = null;
    try {
        InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
        reader = new BufferedReader(inputStreamReader);
        String nextLine = null;
        while ((nextLine = reader.readLine()) != null) {
            //saving them in DB
        }
    } catch (Exception ex) {
        ex.getMessage();//ERROR POPULATED FROM HERE
    } finally {
        if (reader != null) {
            reader.close();
        }
    }
}
Please advise. Thanks.
Update: here is the error stack trace. I've marked the error lines in the source and mapped them to the log entries.
Error Stack Trace : java.io.IOException: Premature EOF
at java.base/sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:568)
at java.base/sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:612)
at java.base/sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:699)
at java.base/java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.base/sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3510)
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181)
at java.base/java.io.BufferedReader.fill(BufferedReader.java:161)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:326)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:392)
at com.blah.blah.blah.handleFile(FileOps.java:75)//This coding line -> ((nextLine = reader.readLine()) != null) {
at com.blah.blah.blah.retrieveFile(FileOps.java:65)//This coding line -> handleFile(is);
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
The exception is thrown by the internal class sun.net.www.http.ChunkedInputStream. This class reads an HTTP response sent with Transfer-Encoding: chunked, where each chunk announces its own size and the body is terminated by a final zero-length chunk. "Premature EOF" means the connection ended before the current chunk (or that terminating chunk) was fully received, so the content is incomplete.
This is most likely caused by an unreliable network connection or by a server that closes the connection before sending the complete body. Capture the exchange with a debugging tool such as Wireshark and examine the raw HTTP traffic; because the response is chunked there is no Content-Length header to compare against, so check instead that every chunk arrives in full and that the stream ends with the zero-length terminating chunk.
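If inspection shows the server is behaving, a few client-side changes still make this failure less likely and easier to diagnose: set explicit connect and read timeouts, stream the response inside try-with-resources so the connection is always released, and let the exception propagate (or log it fully) instead of calling e.getMessage() and discarding the result. Below is a minimal sketch under those assumptions, using RestTemplate.execute with a ResponseExtractor rather than a bare URLConnection; the URL, class name, and timeout values are illustrative placeholders, not values taken from your code.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.springframework.http.HttpMethod;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.web.client.ResponseExtractor;
import org.springframework.web.client.RestTemplate;

public class FileRetriever {

    // Hypothetical placeholder; substitute the real file URL.
    private static final String FILE_URL = "https://example.com/large-file.txt";

    @Scheduled(cron = "0 0 9 * * *")
    public void retrieveFile() {
        // Explicit timeouts so a stalled or dropped connection fails loudly
        // instead of surfacing later as a silently truncated body.
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(10_000);   // 10 s to establish the connection
        factory.setReadTimeout(120_000);     // 2 min of read inactivity before giving up
        RestTemplate restTemplate = new RestTemplate(factory);

        // The extractor runs while the response is still open, so the 50 MB body
        // is processed line by line and never buffered in memory as a whole.
        ResponseExtractor<Void> extractor = response -> {
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(response.getBody(), StandardCharsets.UTF_8))) {
                String nextLine;
                while ((nextLine = reader.readLine()) != null) {
                    // save the line to the DB, as in the original handleFile
                }
            }
            return null; // all work happens during streaming; nothing to return
        };

        restTemplate.execute(FILE_URL, HttpMethod.GET, null, extractor);
    }
}

The try-with-resources block guarantees the stream is closed even when a read fails, and because the exception is no longer swallowed, a future Premature EOF will arrive with its full stack trace and make it clear whether the problem is the network or the server ending the response early.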