Tags: java, caching, ignite

Ignite requires more on-heap memory


I'm currently building a caching layer with Apache Ignite. My requirement is to load 10 million records into the server on startup. After caching 400,000 records, I hit a "GC overhead limit exceeded" error. I've checked for memory leaks and my code looks fine. Could this problem be related to my system's RAM (8GB)?

I tried increasing the JVM heap size by setting these JAVA_OPTS:

-Xms512m -Xmx4g -Xmn2048m -XX:+UseParallelGC

After setting this up, I can process up to 800,000 records, but then my IDE crashes and I have to restart the system.

Server Config:

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("Instance");
    cfg.setConsistentId("Node");

    // Create TCP Communication SPI with a 5-second (5000 ms) socket write timeout
    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
    commSpi.setSocketWriteTimeout(5000);
    cfg.setCommunicationSpi(commSpi); // register the SPI; without this it is never used

    // Data storage configuration
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    DataRegionConfiguration regionCfg = new DataRegionConfiguration();
    regionCfg.setName("500MB_Region");

    regionCfg.setPersistenceEnabled(true);
    regionCfg.setInitialSize(1024L * 1024 * 1024); // 1GB initial size
    regionCfg.setMaxSize(6L * 1024 * 1024 * 1024); // 6GB maximum size

    regionCfg.setMetricsEnabled(true); // enable metrics for monitoring
    regionCfg.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);
    regionCfg.setPageReplacementMode(PageReplacementMode.RANDOM_LRU);
    storageCfg.setDefaultDataRegionConfiguration(regionCfg);

    CacheConfiguration<String, String> marksCacheCfg = new CacheConfiguration<>();
    marksCacheCfg.setName("poswavierCache");
    marksCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    marksCacheCfg.setCacheMode(CacheMode.REPLICATED);
    marksCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);

    cfg.setCacheConfiguration(marksCacheCfg);
    cfg.setPeerClassLoadingEnabled(true);
    cfg.setDataStorageConfiguration(storageCfg);

    Ignite igniteServer = Ignition.start(cfg);
    igniteServer.cluster().state(ClusterState.ACTIVE);

    IgniteCache<String, String> marksCache = igniteServer.getOrCreateCache("poswavierCache");
    igniteServer.resetLostPartitions(Arrays.asList("poswavierCache"));
    igniteServer.cluster().baselineAutoAdjustEnabled(true);

This is how I'm pushing data into the server using a DataStreamer:

    private static final int BATCH_SIZE = 100000;

    @Autowired
    private IgniteCacheService igniteCacheService;

    @Autowired
    private ProductLinesRepo productLinesRepo;

    public CompletableFuture<Void> processAllRecords() {
        long startTime = System.currentTimeMillis();

        int pageNumber = 0;
        Page<ProductLines> page;
        CompletableFuture<Void> future = CompletableFuture.completedFuture(null);

        do {
            page = productLinesRepo.findRecordsWithPanNotNull(PageRequest.of(pageNumber++, BATCH_SIZE));
            List<ProductLines> records = page.getContent();
            if (!records.isEmpty()) {
                int finalPageNumber = pageNumber;
                future = future.thenCompose(result ->
                        CompletableFuture.runAsync(() -> {
                            igniteCacheService.streamBulkData("poswavierCache", records);
                            logger.info("Processed {} records", (finalPageNumber - 1) * BATCH_SIZE + records.size());
                        }));
            }
        } while (page.hasNext());

        // Log the elapsed time only after every queued batch has completed,
        // not when the loop merely finishes queueing them.
        return future.thenRun(() -> {
            long totalTime = System.currentTimeMillis() - startTime;
            logger.info("Total time taken for processing all records: {} milliseconds", totalTime);
        });
    }

DataStreamer:

    public void streamBulkData(String cacheName, List<ProductLines> records) {
        try (IgniteDataStreamer<String, ProductLines> streamer = ignite.dataStreamer(cacheName)) {
            streamer.allowOverwrite(true);

            for (ProductLines record : records) {
                String key = record.getPan_no();
                if (key != null) {
                    streamer.addData(key, record);
                } else {
                    System.err.println("Skipping record with null key: " + record);
                }
            }

            streamer.flush();
        } catch (CacheException e) {
            System.err.println("Error streaming data to cache: " + e.getMessage());
            e.printStackTrace();
        }
    }

Solution

  • There's no One Obvious Thing here that indicates the problem, but there are a few things worth noting.

    First, from what you've written, it seems that you have a single node with 8GB of memory. For a distributed, in-memory database, that's not very much.

    Secondly, you appear to be over-committing your machine. As Ignite is in-memory, it should never be allowed to swap to disk. However, you've configured 4GB of heap space and 6GB of off-heap. That's already over the 8GB of memory you have, even before you consider the OS, the rest of the JVM, and any other overhead.
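    As a quick sanity check, here is the arithmetic (the heap and off-heap figures are taken from the question's configuration):

    ```java
    // Back-of-envelope memory budget for the configuration in the question.
    public class MemoryBudget {
        public static void main(String[] args) {
            long gib = 1024L * 1024 * 1024;

            long heapMax    = 4 * gib; // -Xmx4g
            long offHeapMax = 6 * gib; // regionCfg.setMaxSize(6L * 1024 * 1024 * 1024)

            long committed = heapMax + offHeapMax;
            System.out.println("Potential commit: " + committed / gib + " GiB, physical RAM: 8 GiB");
            // 10 GiB of potential heap + off-heap on an 8 GiB machine, before
            // counting the OS, metaspace, thread stacks, and checkpoint buffers.
        }
    }
    ```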

    This won't affect your memory usage, but you've configured both page eviction and persistence. Pick one -- probably persistence. Data-page eviction only applies to purely in-memory regions; a persistent region uses page replacement instead.
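    A sketch of a persistence-only region, reusing the names from the question (the 3GB max size is an assumption chosen so heap plus off-heap fit within 8GB of RAM, not a recommendation from the question):

    ```java
    DataRegionConfiguration regionCfg = new DataRegionConfiguration();
    regionCfg.setName("500MB_Region");
    regionCfg.setPersistenceEnabled(true);
    regionCfg.setInitialSize(1024L * 1024 * 1024);     // 1GB
    regionCfg.setMaxSize(3L * 1024 * 1024 * 1024);     // 3GB, down from the question's 6GB
    regionCfg.setPageReplacementMode(PageReplacementMode.RANDOM_LRU);
    // No setPageEvictionMode(...): page eviction is for non-persistent regions.
    ```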

    I'm not sure where your data streamer runs (I assume it's a client), but it's going to use a lot of memory. You really want to stream the data in rather than copying it all into memory and publishing it in large batches.
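    A sketch of that approach, assuming the same repository, entity, and cache names as in the question: open one IgniteDataStreamer for the whole load and feed it page by page, so only one page of entities is held in memory at a time rather than chaining large batches through CompletableFutures.

    ```java
    public void streamAllRecords() {
        // One streamer for the entire load; it buffers and ships data in the background.
        try (IgniteDataStreamer<String, ProductLines> streamer = ignite.dataStreamer("poswavierCache")) {
            streamer.allowOverwrite(true);

            int pageNumber = 0;
            Page<ProductLines> page;
            do {
                // Only the current page of entities is referenced at any time.
                page = productLinesRepo.findRecordsWithPanNotNull(PageRequest.of(pageNumber++, BATCH_SIZE));
                for (ProductLines record : page.getContent()) {
                    if (record.getPan_no() != null) {
                        streamer.addData(record.getPan_no(), record);
                    }
                }
            } while (page.hasNext());
        } // close() flushes any remaining buffered entries
    }
    ```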

    Lastly, you've configured Java to use the parallel garbage collector, which is very old and not well suited to large-memory, multi-core systems like the ones Ignite runs on. The general recommendation is to use G1. More in the documentation.
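    For example, replacing the flags from the question with a G1-based set (the fixed 3GB heap is an assumption sized to leave room for the off-heap data region and the OS on an 8GB machine; G1 also sizes the young generation itself, so drop the explicit -Xmn):

    ```
    -Xms3g -Xmx3g -XX:+UseG1GC
    ```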