
Django: random queryset is slow - despite optimizations

  • Jasper  ·  asked 6 years ago

    I have built an API that should return 10 randomly selected results from a large queryset.

    I have the following four models:

    class ScrapingOperation(models.Model):
        completed = models.BooleanField(default=False)
        (...)
    
        class Meta:
            indexes = [
                models.Index(fields=['completed'], name='completed_idx'),
                models.Index(fields=['trusted'], name='trusted_idx'),
            ]
    
        @property
        def ads(self):
            """returns all ads linked to the searches of this operation"""
            return Ad.objects.filter(searches__in=self.searches.all())
    
    
    class Search(models.Model):
        completed = models.BooleanField(default=False)
        scraping_operation = models.ForeignKey(
                ScrapingOperation,
                on_delete=models.CASCADE,
                related_name='searches'
        )
        (...)
    
    
    class Ad(models.Model):
        searches = models.ManyToManyField('scraper.Search', related_name='ads')
        (...)
    
    
    class Label(models.Model):
         value = models.IntegerField()
         linked_ad = models.OneToOneField(
             Ad, on_delete=models.CASCADE, related_name='labels'
         )
    

    The database currently holds 400,000+ Ad objects, but each ScrapingOperation has about 14,000 Ad objects linked to it. I want the API to return 10 random results from those +/- 14,000 that do not yet have a linked Label object (only a few hundred Label objects exist per operation).

    So 10 random results have to be returned from a queryset containing roughly 14,000 objects.

    An earlier version returned only 1 result, but used the much slower order_by('?') method. When I scaled it up to return 10 random Ads I switched to a new approach, based in part on this stackoverflow answer.

    Below is the code that selects (and returns) the 10 random objects:

    import random as rd

    # Get all ads linked to the last completed operation
    last_op_ads = ScrapingOperation.objects.filter(completed=True).last().ads
    
    # Get all ads that don't have an label yet
    random_ads = last_op_ads.filter(labels__isnull=True)
    
    # Get list ids of all potential ads
    id_list = random_ads.values_list('id', flat=True)
    id_list = list(id_list)
    
    # Select a random sample of 10, get objects with PK matches
    samples = rd.sample(id_list, min(len(id_list), 10))
    selected_samples = random_ads.filter(id__in=samples)
    
    return selected_samples
    

    However, despite my optimizations this query takes over 10 seconds to complete, which makes the API very slow.

    Is this long delay inherent to random queries? (And if so, how do other programmers deal with that limitation?) Or is there a mistake/inefficiency in my code that I am missing?

    Edit: in response to the answers I am including the raw SQL queries below (note: they were run on a local environment that contains only 5% of the data in production)

    {'sql': 'SELECT "scraper_scrapingoperation"."id", 
    "scraper_scrapingoperation"."date_started", 
    "scraper_scrapingoperation"."date_completed",
    "scraper_scrapingoperation"."completed", 
    "scraper_scrapingoperation"."round", 
    "scraper_scrapingoperation"."trusted" FROM "scraper_scrapingoperation" 
    WHERE "scraper_scrapingoperation"."completed" = true ORDER BY 
    "scraper_scrapingoperation"."id" DESC LIMIT 1', 'time': '0.001'}
    
    
    {'sql': 'SELECT "database_ad"."id" FROM "database_ad" INNER JOIN 
    "database_ad_searches" ON ("database_ad"."id" = 
    "database_ad_searches"."ad_id") LEFT OUTER JOIN "classifier_label" ON
    ("database_ad"."id" = "classifier_label"."ad_id") WHERE 
    ("database_ad_searches"."search_id" IN (SELECT U0."id" FROM 
    "scraper_search" U0 WHERE U0."scraping_operation_id" = 6) AND 
    "classifier_label"."id" IS NULL)', 'time': '1.677'}
    

    Edit 2: I tried a different approach using prefetch_related:

            random_ads = ScrapingOperation.objects.prefetch_related(
                'searches__ads__labels',
            ).filter(completed=True).last().ads.exclude(
                labels__isnull=False
            )
    
            id_list = random_ads.values_list('id', flat=True)
            id_list = list(id_list)
    
            samples = rd.sample(id_list, min(
                len(id_list), 10))
    
            selected_samples = random_ads.filter(
                id__in=samples)
    
            return selected_samples
    

    Which generates the following SQL queries:

    {'time': '0.008', 'sql': 'SELECT "scraper_search"."id", 
    "scraper_search"."item_id", "scraper_search"."date_started", 
    "scraper_search"."date_completed", "scraper_search"."completed", 
    "scraper_search"."round", "scraper_search"."scraping_operation_id", 
    "scraper_search"."trusted" FROM "scraper_search" WHERE 
    "scraper_search"."scraping_operation_id" IN (6)'}
    
    
     {'time': '0.113', 'sql': 'SELECT ("database_ad_searches"."search_id")
     AS "_prefetch_related_val_search_id", "database_ad"."id", 
     "database_ad"."item_id", "database_ad"."item_state", 
     "database_ad"."title", "database_ad"."seller_id", 
     "database_ad"."url", "database_ad"."price", 
     "database_ad"."transaction_type", "database_ad"."transaction_method",
     "database_ad"."first_seen", "database_ad"."last_seen", 
     "database_ad"."promoted" FROM "database_ad" INNER JOIN 
     "database_ad_searches" ON ("database_ad"."id" = 
     "database_ad_searches"."ad_id") WHERE 
     "database_ad_searches"."search_id" IN (130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160)'}
    
    
    {'time': '0.041', 'sql': 'SELECT "classifier_label"."id", 
    "classifier_label"."set_by_id", "classifier_label"."ad_id", 
    "classifier_label"."date", "classifier_label"."phone_type", 
    "classifier_label"."seller_type", "classifier_label"."sale_type" FROM
    "classifier_label" WHERE "classifier_label"."ad_id" IN (1, 3, 6, 10, 20, 29, 30, 35, 43, (and MANY more of these numbers) ....'}
    
    
    
    {'time': '1.498', 'sql': 'SELECT "database_ad"."id" FROM "database_ad"
    INNER JOIN "database_ad_searches" ON ("database_ad"."id" = "database_ad_searches"."ad_id") LEFT OUTER JOIN "classifier_label" ON 
    ("database_ad"."id" = "classifier_label"."ad_id") WHERE 
    ("database_ad_searches"."search_id" IN (SELECT U0."id" FROM
    "scraper_search" U0 WHERE U0."scraping_operation_id" = 6) AND NOT 
    ("classifier_label"."id" IS NOT NULL))'}
    

    Each ScrapingOperation "only" has 14k linked ads, but the total number of ads in production is 400k (and growing). All of the code above returns valid results on the local environment (which contains only 5% of the data), but the API returns 502 errors in production.

    1 Answer

  •   Stefanos Zilellis  ·  answered 6 years ago

    I would first try to isolate the linked ads, and then take 10 random ones from them by ordering on a generated random column. I am not sure how efficient that would be in the generated SQL. Of course I would prefer to create a stored procedure for this task, since it is a data-mining operation that ends with random sampling.
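
    A minimal sketch of that random-column idea, using an in-memory SQLite table as a stand-in for the real `database_ad` table (the table name, the `labeled` flag, and the row counts are all illustrative assumptions, not the asker's schema): filter the label-less rows first, then let the database randomize only those rows and take 10.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for the ads of one operation: ~14k rows, a few hundred labelled.
cur.execute("CREATE TABLE ad (id INTEGER PRIMARY KEY, labeled INTEGER)")
cur.executemany(
    "INSERT INTO ad (id, labeled) VALUES (?, ?)",
    [(i, int(i % 50 == 0)) for i in range(1, 14001)],
)

# Filter first, then ORDER BY RANDOM() applies only to the filtered rows,
# and LIMIT stops the scan at 10 results.
cur.execute("SELECT id FROM ad WHERE labeled = 0 ORDER BY RANDOM() LIMIT 10")
sample = [row[0] for row in cur.fetchall()]
print(sample)  # ten distinct ids of un-labelled ads
```

    The same shape works in PostgreSQL (`ORDER BY random() LIMIT 10`), which is roughly what Django's `order_by('?')` emits; the cost difference comes from randomizing 14k filtered rows instead of joining and sorting the full table.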