
Per-request cache in Django?

  •  16
  • Chase Seibert  ·  15 years ago

    I have a custom tag that determines whether a record in a long list of records is a favorite. In order to check whether an item is a favorite, you have to query the database. Ideally, you would perform one query to get all the favorites, and then just check that cached list against each record.

    One solution would be to get all the favorites in the view, and then pass that set into the template, and then into each tag call.
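    As a sketch of that first approach (with stand-in data and a hypothetical get_favorites helper, so it runs without Django): the view performs the favorites query once and hands the resulting set to the template context, where each tag call reduces to a set-membership test.

```python
# Stand-in for the question's helper; in real code this would hit the DB.
def get_favorites(user):
    return {'a', 'c'}

def records_view(user, records):
    # one query for all favorites, passed along in the template context
    favorites = get_favorites(user)
    return {'records': records, 'favorites': favorites}

context = records_view('alice', ['a', 'b', 'c'])
# in the template, each tag call becomes a simple membership check
flags = [record in context['favorites'] for record in context['records']]
```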

    Alternatively, the tag itself could perform the query, but only the first time; the result could then be cached for subsequent calls. The upside is that you can use this tag from any template, on any view, without alerting the view.

    In the traditional caching mechanism, you could cache the result for 50ms and assume that would correlate to the current request; I want to make that correlation reliable.

    Here is an example of the tag I currently have:

    @register.filter()
    def is_favorite(record, request):

        # stash the favorites on the request so the query runs at most once
        if "get_favorites" in request.POST:
            favorites = request.POST["get_favorites"]
        else:
            favorites = get_favorites(request.user)

            # request.POST is immutable, so swap in a mutable copy
            post = request.POST.copy()
            post["get_favorites"] = favorites
            request.POST = post

        return record in favorites
    

    Is there any way to get the current request object from within Django, without passing it around? From a tag, I could just pass in request, which will always exist. But I would like to use this decorator from other functions.

    7 Answers
        1
  •  26
  •   href_    13 years ago

    Using a custom middleware you can get a Django cache instance that is guaranteed to be cleared for each request.

    from threading import currentThread
    from django.core.cache.backends.locmem import LocMemCache
    
    _request_cache = {}
    _installed_middleware = False
    
    def get_request_cache():
        assert _installed_middleware, 'RequestCacheMiddleware not loaded'
        return _request_cache[currentThread()]
    
    # LocMemCache is a threadsafe local memory cache
    class RequestCache(LocMemCache):
        def __init__(self):
            name = 'locmemcache@%i' % hash(currentThread())
            params = dict()
            super(RequestCache, self).__init__(name, params)
    
    class RequestCacheMiddleware(object):
        def __init__(self):
            global _installed_middleware
            _installed_middleware = True
    
        def process_request(self, request):
            cache = _request_cache.get(currentThread()) or RequestCache()
            _request_cache[currentThread()] = cache
    
            cache.clear()
    

    To use the middleware, register it in settings.py, e.g.:

    MIDDLEWARE_CLASSES = (
        ...
        'myapp.request_cache.RequestCacheMiddleware'
    )
    

    Then you can use the cache as follows:

    from myapp.request_cache import get_request_cache
    
    cache = get_request_cache()
    

    Django Low-Level Cache API

    It should be easy to modify a memoize decorator to use the request cache. Take a look at the Python Decorator Library for a good example of a memoize decorator:

    Python Decorator Library

        2
  •  3
  •   Georgy Vladimirov    12 years ago

    I came up with a hack for caching things straight into the request object (rather than using the standard cache, which will be tied to memcached, files, the database, etc.).

    # get the request object's dictionary (rather one of its methods' dictionary)
    mycache = request.get_host.__dict__
    
    # check whether we already have our value cached and return it
    if mycache.get( 'c_category', False ):
        return mycache['c_category']
    else:
        # get some object from the database (a category object in this case)
        c = Category.objects.get( id = cid )
    
        # cache the database object into a new key in the request object
        mycache['c_category'] = c
    
        return c
    

    So, basically I am just storing the cached value (the category object in this case) under a new key 'c_category' in the request's dictionary. Or, to be more precise, because we can't just create a key on the request object itself, I add the key to one of the request object's methods, get_host().

    Georgy.

        3
  •  3
  •   Chase Seibert    12 years ago

    A super hack to cache SELECT statements inside a single Django request. You need to execute the patch() method early in the request scope, like in a piece of middleware.

    from threading import local
    import itertools
    from django.db.models.sql.constants import MULTI
    from django.db.models.sql.compiler import SQLCompiler
    from django.db.models.sql.datastructures import EmptyResultSet
    from django.db.models.sql.constants import GET_ITERATOR_CHUNK_SIZE
    
    
    _thread_locals = local()
    
    
    def get_sql(compiler):
        ''' get a tuple of the SQL query and the arguments '''
        try:
            return compiler.as_sql()
        except EmptyResultSet:
            pass
        return ('', [])
    
    
    def execute_sql_cache(self, result_type=MULTI):
    
        if hasattr(_thread_locals, 'query_cache'):
    
            sql = get_sql(self)  # ('SELECT * FROM ...', (50)) <= sql string, args tuple
            if sql[0][:6].upper() == 'SELECT':
    
                # uses the tuple of sql + args as the cache key
                if sql in _thread_locals.query_cache:
                    return _thread_locals.query_cache[sql]
    
                result = self._execute_sql(result_type)
                if hasattr(result, 'next'):
    
                    # only cache if this is not a full first page of a chunked set
                    peek = result.next()
                    result = list(itertools.chain([peek], result))
    
                    if len(peek) == GET_ITERATOR_CHUNK_SIZE:
                        return result
    
                _thread_locals.query_cache[sql] = result
    
                return result
    
            else:
                # the database has been updated; throw away the cache
                _thread_locals.query_cache = {}
    
        return self._execute_sql(result_type)
    
    
    def patch():
        ''' patch the django query runner to use our own method to execute sql '''
        _thread_locals.query_cache = {}
        if not hasattr(SQLCompiler, '_execute_sql'):
            SQLCompiler._execute_sql = SQLCompiler.execute_sql
            SQLCompiler.execute_sql = execute_sql_cache
    

        4
  •  3
  •   Macke    5 years ago

    Edit 2016-06-15:

    https://pypi.org/project/django-request-cache/

    I discovered a significantly easier solution, and am kind of facepalming for not realizing how easy this should have been from the start.

    from django.core.cache.backends.base import BaseCache
    from django.core.cache.backends.locmem import LocMemCache
    from django.utils.synch import RWLock
    
    
    class RequestCache(LocMemCache):
        """
        RequestCache is a customized LocMemCache which stores its data cache as an instance attribute, rather than
        a global. It's designed to live only as long as the request object that RequestCacheMiddleware attaches it to.
        """
    
        def __init__(self):
            # We explicitly do not call super() here, because while we want BaseCache.__init__() to run, we *don't*
            # want LocMemCache.__init__() to run, because that would store our caches in its globals.
            BaseCache.__init__(self, {})
    
            self._cache = {}
            self._expire_info = {}
            self._lock = RWLock()
    
    class RequestCacheMiddleware(object):
        """
        Creates a fresh cache instance as request.cache. The cache instance lives only as long as request does.
        """
    
        def process_request(self, request):
            request.cache = RequestCache()
    

    With this, you can use request.cache as a cache instance that lives only as long as the request does, and that will be fully cleaned up by the garbage collector when the request is done.

    If you need access to the request object from a context where it's not normally available, you can use one of the various implementations of a so-called "global request middleware" that can be found online.
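    A hypothetical view using request.cache once the middleware is installed. Stand-in request and get_favorites objects keep the example runnable without Django; the stand-in cache mirrors only the get()/set() subset of Django's cache API.

```python
class FakeCache(dict):
    """Stand-in exposing the get()/set() subset of Django's cache API."""
    def set(self, key, value):
        self[key] = value

class FakeRequest(object):
    def __init__(self):
        self.cache = FakeCache()  # RequestCacheMiddleware attaches this

def get_favorites(user):
    # stand-in for the question's helper; would query the DB in real code
    return ['rec1', 'rec2']

def favorites_for(request):
    # first call queries and stores; later calls in the same request hit
    # the cache, which dies with the request object
    favorites = request.cache.get('favorites')
    if favorites is None:
        favorites = get_favorites(None)
        request.cache.set('favorites', favorites)
    return favorites
```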

    **Initial answer:**

    django.core.cache.backends.locmem defines several global dictionaries that hold references to the cache data of every LocMemCache instance, and those dictionaries are never emptied. Creating and discarding many such caches over the life of a process would therefore leak memory; the destructor below prevents that.

    from uuid import uuid4
    from threading import current_thread
    
    from django.core.cache.backends.base import BaseCache
    from django.core.cache.backends.locmem import LocMemCache
    from django.utils.synch import RWLock
    
    
    # Global in-memory store of cache data. Keyed by name, to provides multiple
    # named local memory caches.
    _caches = {}
    _expire_info = {}
    _locks = {}
    
    
    class RequestCache(LocMemCache):
        """
        RequestCache is a customized LocMemCache with a destructor, ensuring that creating
        and destroying RequestCache objects over and over doesn't leak memory.
        """
    
        def __init__(self):
            # We explicitly do not call super() here, because while we want
            # BaseCache.__init__() to run, we *don't* want LocMemCache.__init__() to run.
            BaseCache.__init__(self, {})
    
            # Use a name that is guaranteed to be unique for each RequestCache instance.
            # This ensures that it will always be safe to call del _caches[self.name] in
            # the destructor, even when multiple threads are doing so at the same time.
            self.name = uuid4()
            self._cache = _caches.setdefault(self.name, {})
            self._expire_info = _expire_info.setdefault(self.name, {})
            self._lock = _locks.setdefault(self.name, RWLock())
    
        def __del__(self):
            del _caches[self.name]
            del _expire_info[self.name]
            del _locks[self.name]
    
    
    class RequestCacheMiddleware(object):
        """
        Creates a cache instance that persists only for the duration of the current request.
        """
    
        _request_caches = {}
    
        def process_request(self, request):
            # The RequestCache object is keyed on the current thread because each request is
            # processed on a single thread, allowing us to retrieve the correct RequestCache
            # object in the other functions.
            self._request_caches[current_thread()] = RequestCache()
    
        def process_response(self, request, response):
            self.delete_cache()
            return response
    
        def process_exception(self, request, exception):
            self.delete_cache()
    
        @classmethod
        def get_cache(cls):
            """
            Retrieve the current request's cache.
    
            Returns None if RequestCacheMiddleware is not currently installed via 
            MIDDLEWARE_CLASSES, or if there is no active request.
            """
            return cls._request_caches.get(current_thread())
    
        @classmethod
        def clear_cache(cls):
            """
            Clear the current request's cache.
            """
            cache = cls.get_cache()
            if cache:
                cache.clear()
    
        @classmethod
        def delete_cache(cls):
            """
            Delete the current request's cache object to avoid leaking memory.
            """
            cache = cls._request_caches.pop(current_thread(), None)
            del cache
    

    编辑2016-06-15:

    从django.core.cache.backends.locmem导入LocMemCache
    从django.utils.synch导入RWLock
    
    
    """
    RequestCache是一个定制的LocMemCache,它将其数据缓存存储为实例属性,而不是
    一个全球性的。它被设计为只在RequestCacheMiddleware将其连接到的请求对象上时有效。
    """
    
    #我们在这里显式地不调用super(),因为当我们想要BaseCache.\uu init\uuuu()运行时,我们*不需要*
    #希望LocMemCache.\uuu init\uuu()运行,因为这样会将缓存存储在其全局变量中。
    
    self.\u cache={}
    self.\u expire\u info={}
    self.\u lock=RWLock()
    
    
    def处理请求(self,request):
    request.cache=请求缓存()
    

    有了这个,你可以用 作为仅与 请求 请求完成时,垃圾回收器将完全清除。

    如果你需要进入

        5
  •  2
  •   Igor Katson    11 years ago

    This one uses a plain Python dict per thread as the cache (rather than Django's cache machinery):

    • Does not require any middleware, and the content is not pickled and depickled on every access, which is faster.
    • Tested and works with gevent's monkeypatching.

    The same could probably also be implemented with threadlocal storage. I am not aware of any downsides of this approach; feel free to add them in the comments.

    from threading import currentThread
    import weakref
    
    _request_cache = weakref.WeakKeyDictionary()
    
    def get_request_cache():
        return _request_cache.setdefault(currentThread(), {})
    
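    Reproduced standalone with a usage sketch (using the modern current_thread spelling). Usage is plain dict access. One caveat worth checking: when server threads are reused across requests, entries only disappear when the thread is collected, so values may outlive a single request unless explicitly cleared.

```python
from threading import current_thread  # modern spelling of currentThread
import weakref

_request_cache = weakref.WeakKeyDictionary()

def get_request_cache():
    # one plain dict per thread; the entry vanishes when the thread does
    return _request_cache.setdefault(current_thread(), {})

cache = get_request_cache()
if 'favorites' not in cache:
    cache['favorites'] = ['rec1', 'rec2']  # e.g. get_favorites(request.user)
```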
        6
  •  1
  •   lprsd    15 years ago

    You can always do the caching manually.

        ...
        if "get_favorites" in request.POST:
            favorites = request.POST["get_favorites"]
        else:
            from django.core.cache import cache
    
            favorites = cache.get(request.user.username)
            if not favorites:
                favorites = get_favorites(request.user)
                cache.set(request.user.username, favorites, seconds)  # seconds = timeout of your choice
        ...
    
        7
  •  1
  •   Community CDub    8 years ago

    The answer given by @href_ is great.

    Just in case you want something shorter that could also do the trick:

    import time

    from django.utils.lru_cache import lru_cache
    
    def cached_call(func, *args, **kwargs):
        """Very basic temporary cache, will cache results
        for an average of 1.5 sec and no more than 3 sec"""
        return _cached_call(int(time.time() / 3), func, *args, **kwargs)
    
    
    @lru_cache(maxsize=100)
    def _cached_call(time, func, *args, **kwargs):
        return func(*args, **kwargs)
    

    favourites = cached_call(get_favourites, request.user)
    

    This approach makes use of lru_cache, combined with a timestamp bucket so that no entry outlives three seconds.

    This is not the best way to invalidate the cache, because occasionally it will miss data that has just changed: int(..2.99.. / 3) and then int(..3.00.. / 3) fall into different buckets. Despite this drawback, it can still be very effective for most hits.

    Also, as a bonus, you can use it outside of the request/response cycle, for example in Celery tasks or management command jobs.