Tags: python, flask, caching, lru

lru caching not working between application runs for same argument on Flask App


*Edit: I just realized I made a mistake in my function design: I re-instantiated the AppDAO inside Class1, and that is what was causing the unexpected behavior. I figured it out by printing the self argument in cache_call.*
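In other words, my real Class1 had something along these lines (a reconstruction; the offending line is not in the code posted below):

    class Class1(object):
        def __init__(self, app_dao):
            # The bug: ignoring the passed-in DAO and building a fresh one per
            # request, so every cache_call sees a different self and the
            # lru_cache key never repeats.
            self.app_dao = AppDAO()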

I have a Flask App with the following design:

    from flask import Flask, request
    from Class1 import Class1
    from AppDAO import AppDAO

    app = Flask(__name__)

    def main():
        app.config['appDAO'] = AppDAO()
        app.run()

    @app.route('/app_route1', methods=['POST'])
    def app_route1():
        print("Running app route 1...")
        print(app.config['appDAO'].cache_call.cache_info())

        cache_param = request.json.get('cached_parameter')
        print("The cached parameter is: %s." % cache_param)

        class1 = Class1(app.config['appDAO'])

        for item in ['item1', 'item2']:
            class1.do_processing(item, cache_param)

        return 'OK'  # a Flask view must return a response

    if __name__ == '__main__':
        main()

Class1.py:

    class Class1(object):
        def __init__(self, app_dao):
            self.app_dao = app_dao

        def do_processing(self, item, cache_param):
            print("Processing for item: %s..." % item)

            resp_cache = self.app_dao.cache_call(cache_param)
            print(self.app_dao.cache_call.cache_info())

            return resp_cache

AppDAO.py:

    from functools import lru_cache
    import mysql.connector

    class AppDAO(object):

        def __init__(self):
            self.conn = mysql.connector.connect(user='user1',
                                                password='password1',
                                                host='server1',
                                                database='database')

        @lru_cache(maxsize=4)
        def cache_call(self, cache_param):
            print("Running cache call with parameter: %s..." % cache_param)

            cursor = self.conn.cursor()
            cursor.execute("SELECT * FROM Table1 WHERE Column1 = `%s`;" % cache_param)
            rs = cursor.fetchall()

            return rs
       

If I run the app and make a POST, AppDAO.cache_call functions correctly, with the following print output:

 Running app route 1...
 CacheInfo(hits=0, misses=0, maxsize=4, currsize=0)
 Processing for item: item1...
 Running cache call with parameter: foo1...
 CacheInfo(hits=0, misses=1, maxsize=4, currsize=1)
 Processing for item: item2...
 CacheInfo(hits=1, misses=1, maxsize=4, currsize=1)

But when I make another POST to the route using the same parameter for cache_call, I get the following print output:

 Running app route 1...
 CacheInfo(hits=1, misses=1, maxsize=4, currsize=1)
 Processing for item: item1...
 Running cache call with parameter: foo1...
 CacheInfo(hits=1, misses=2, maxsize=4, currsize=2)
 Processing for item: item2...
 CacheInfo(hits=2, misses=2, maxsize=4, currsize=2)
  

I run the app from the Anaconda QT Console, but I see the same caching issue when I use an Anaconda Command Prompt as well. Can anyone speculate why the lru_cache is not working when a new POST is made to the app, despite the cached call clearly still being stored?


Solution

  • Note that

    @lru_cache(maxsize=4)
    def cache_call(self, cache_param):
    

    is wrapping a method, not a function. In your example, self, which will be used as part of the cache key, is an instance of Class1 and is created once per route handler invocation. The result is that you aren't getting the caching you expect.
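    You can see the effect of self in the key with a minimal standalone sketch (no Flask; this Demo class is made up for illustration):

    from functools import lru_cache

    class Demo(object):
        @lru_cache(maxsize=4)
        def cache_call(self, param):
            return param * 2

    d = Demo()
    d.cache_call('foo')        # miss: first call
    d.cache_call('foo')        # hit: same instance, equal argument
    Demo().cache_call('foo')   # miss: a new instance means a new cache key
    print(Demo.cache_call.cache_info())
    # CacheInfo(hits=1, misses=2, maxsize=4, currsize=2)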

    Updated: I misread the code. Assuming that your do_processing is intentionally not passing item (which varies) to cache_call, what you're seeing is consistent with how lru_cache behaves.

    On the first request, it adds one entry (keyed on self plus request.json.get('cached_parameter')) to the cache, scoring a miss for 'item1' and a hit for 'item2'.

    On the second request, request.json.get('cached_parameter') is an equal string, and lru_cache keys on argument equality, so on its own it would score as a hit. The extra miss for 'item1' (increasing currsize to 2) means the self part of the key changed, i.e. cache_call was invoked on a different AppDAO instance than before, which matches the edit at the top of your question. For 'item2', it gets scored as a hit against the entry just added.

    What behavior did you expect?

    Unrelated but worth mentioning: the way you're constructing that query leaves you open to SQL injection attacks. Consider using a bind parameter instead, as in the sketch below.
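    For example (a sketch of cache_call using mysql.connector's parameter binding, with the same made-up Table1/Column1 schema from your question):

    @lru_cache(maxsize=4)
    def cache_call(self, cache_param):
        cursor = self.conn.cursor()
        # The connector quotes and escapes the value itself, so the
        # backticks and manual %-formatting are no longer needed.
        cursor.execute("SELECT * FROM Table1 WHERE Column1 = %s",
                       (cache_param,))
        return cursor.fetchall()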