
Firestore multi-batch write fails with Deadline Exceeded [duplicate]

  •  0
  • Blue  ·  6 years ago

    I took a sample function from the Firestore documentation and was able to run it successfully in the local Firebase environment. However, once I deploy it to the Firebase servers, the function completes but nothing is written to the Firestore database. The Firebase function logs show "Deadline Exceeded." I'm a bit confused. Does anyone know why this happens and how to fix it?

    Here is the sample function:

    exports.testingFunction = functions.https.onRequest((request, response) => {
        var data = {
            name: 'Los Angeles',
            state: 'CA',
            country: 'USA'
        };

        // Add a new document in collection "cities" with ID 'LA'
        var db = admin.firestore();
        var setDoc = db.collection('cities').doc('LA').set(data);

        response.status(200).send();
    });
    
    0 replies  |  7 years ago
        1
  •  15
  •   Nobuhito Kurose    7 years ago

    Firestore has its limits.

    The "Deadline Exceeded" error probably occurs because you are hitting those limits.

    Take a look at this: https://firebase.google.com/docs/firestore/quotas

    Maximum write rate to a document: 1 per second

    https://groups.google.com/forum/#!msg/google-cloud-firestore-discuss/tGaZpTWQ7tQ/NdaDGRAzBgAJ
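
    If writes keep hitting that per-document limit, one mitigation is simply to space them out. The sketch below is not part of this answer; the helper name and the 1000 ms delay are illustrative assumptions for spreading repeated writes to a single document over time.

    import { firestore } from "firebase-admin";

    // Resolve after the given number of milliseconds.
    const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

    // Illustrative helper: apply a list of updates to one document, keeping
    // successive writes roughly a second apart to stay under the quota above.
    async function writeSequentially(
        docRef: firestore.DocumentReference,
        updates: firestore.DocumentData[],
    ): Promise<void> {
        for (const update of updates) {
            await docRef.set(update, { merge: true });
            await sleep(1000);
        }
    }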

        2
  •  6
  •   Jürgen Brandstetter    5 years ago

    I wrote this little script, which uses batched writes (max 500) and only writes one batch after another.

    First create a batchWorker with let batch: any = new FbBatchWorker(db); then add anything to the worker with batch.set(ref.doc(docId), MyObject); and finally commit it via batch.commit(). The API is the same as for normal Firestore batches ( https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes ). However, at the moment it only supports set.

    import { firestore } from "firebase-admin";
    
    class FBWorker {
        callback: Function;
    
        constructor(callback: Function) {
            this.callback = callback;
        }
    
        work(data: {
            type: "SET" | "DELETE";
            ref: FirebaseFirestore.DocumentReference;
            data?: any;
            options?: FirebaseFirestore.SetOptions;
        }) {
            if (data.type === "SET") {
                // tslint:disable-next-line: no-floating-promises
                data.ref.set(data.data, data.options).then(() => {
                    this.callback();
                });
            } else if (data.type === "DELETE") {
                // tslint:disable-next-line: no-floating-promises
                data.ref.delete().then(() => {
                    this.callback();
                });
            } else {
                this.callback();
            }
        }
    }
    
    export class FbBatchWorker {
        db: firestore.Firestore;
        batchList2: {
            type: "SET" | "DELETE";
            ref: FirebaseFirestore.DocumentReference;
            data?: any;
            options?: FirebaseFirestore.SetOptions;
        }[] = [];
        elemCount: number = 0;
        private _maxBatchSize: number = 490;
    
        public get maxBatchSize(): number {
            return this._maxBatchSize;
        }
        public set maxBatchSize(size: number) {
            if (size < 1) {
                throw new Error("Size must be positive");
            }
    
            if (size > 490) {
                throw new Error("Size must not be larger then 490");
            }
    
            this._maxBatchSize = size;
        }
    
        constructor(db: firestore.Firestore) {
            this.db = db;
        }
    
        async commit(): Promise<any> {
            const workerProms: Promise<any>[] = [];
            const maxWorker = this.batchList2.length > this.maxBatchSize ? this.maxBatchSize : this.batchList2.length;
            for (let w = 0; w < maxWorker; w++) {
                workerProms.push(
                    new Promise((resolve) => {
                        const A = new FBWorker(() => {
                            if (this.batchList2.length > 0) {
                                A.work(this.batchList2.pop());
                            } else {
                                resolve();
                            }
                        });
    
                        // tslint:disable-next-line: no-floating-promises
                        A.work(this.batchList2.pop());
                    }),
                );
            }
    
            return Promise.all(workerProms);
        }
    
        set(dbref: FirebaseFirestore.DocumentReference, data: any, options?: FirebaseFirestore.SetOptions): void {
            this.batchList2.push({
                type: "SET",
                ref: dbref,
                data,
                options,
            });
        }
    
        delete(dbref: FirebaseFirestore.DocumentReference) {
            this.batchList2.push({
                type: "DELETE",
                ref: dbref,
            });
        }
    }
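
    For reference, a hypothetical usage of the worker above (the collection name, document IDs, field values and import path are made up for illustration):

    import { firestore } from "firebase-admin";
    import { FbBatchWorker } from "./FbBatchWorker";

    async function writeCities(db: firestore.Firestore): Promise<void> {
        const batch = new FbBatchWorker(db);

        // Queue the writes; nothing is sent to Firestore yet.
        batch.set(db.collection("cities").doc("LA"), { name: "Los Angeles", state: "CA" });
        batch.set(db.collection("cities").doc("SF"), { name: "San Francisco", state: "CA" });

        // commit() drains the queue with up to maxBatchSize concurrent workers,
        // each issuing one set() at a time.
        await batch.commit();
    }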
    
        3
  •  5
  •   Leonardo Ferreira    6 years ago

    In my own experience, this problem can also occur when you try to write documents over a poor internet connection.

    I used a solution similar to the one Jürgen suggests, inserting documents in batches smaller than 500 at a time, and I got this error while on a fairly unstable wifi connection. When I plugged in the cable, the same script with the same data ran without errors.

        4
  •  1
  •   MbaiMburu    5 years ago

    If the error shows up after about 10 seconds, it is probably not your internet connection; it may be that your function is not returning a promise. In my experience, I got the error simply because I had wrapped a firebase set operation (which itself returns a promise) inside another promise that is never resolved. You can do this:

    return db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
            var SuccessResponse = {
                "code": "200"
            }
    
            var resp = JSON.stringify(SuccessResponse);
            return resp;
        }).catch(err => {
            console.log('Quiz Error OCCURED ', err);
            var FailureResponse = {
                "code": "400",
            }
    
            var resp = JSON.stringify(FailureResponse);
            return resp;
        });
    

    instead of this:

    return new Promise((resolve,reject)=>{ 
        db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
            var SuccessResponse = {
                "code": "200"
            }
    
            var resp = JSON.stringify(SuccessResponse);
            return resp;
        }).catch(err => {
            console.log('Quiz Error OCCURED ', err);
            var FailureResponse = {
                "code": "400",
            }
    
            var resp = JSON.stringify(FailureResponse);
            return resp;
        });
    
    });
    
        5
  •  0
  •   Sigex    4 years ago

    I tested this by making 10,000 write requests to different collections/documents from 15 concurrent AWS Lambda functions, and I did not get the DEADLINE_EXCEEDED error.

    See the firebase documentation:

    "Deadline Exceeded": The deadline expired before the operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

    In our case we write only a small amount of data and it works most of the time, but losing data is unacceptable. I have not reached a conclusion as to why Firestore fails to write simple, small pieces of data.

    Solution:

    I am using an AWS Lambda function triggered by an SQS event.

      # This function receives requests from the queue and handles them
      # by persisting the survey answers for the respective users.
      QuizAnswerQueueReceiver:
        handler: app/lambdas/quizAnswerQueueReceiver.handler
        timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
        reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
        events:
          - sqs:
              batchSize: 10 # Wait for 10 messages before processing.
              maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
              arn:
                Fn::GetAtt:
                  - SurveyAnswerReceiverQueue
                  - Arn
        environment:
          NODE_ENV: ${self:custom.myStage}
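
    For completeness, here is a hypothetical sketch of what app/lambdas/quizAnswerQueueReceiver.handler could look like; the collection name, message shape and credential setup are assumptions, not taken from this answer.

    import { SQSEvent } from "aws-lambda";
    import * as admin from "firebase-admin";

    // Initialize firebase-admin once per Lambda container
    // (credential configuration omitted here).
    if (admin.apps.length === 0) {
        admin.initializeApp();
    }
    const db = admin.firestore();

    export const handler = async (event: SQSEvent): Promise<void> => {
        // Each SQS record carries one survey answer. A thrown error makes the
        // whole batch visible again on the queue and, after maxReceiveCount
        // deliveries, routes it to the dead-letter queue below.
        for (const record of event.Records) {
            const answer = JSON.parse(record.body);
            await db
                .collection("surveyAnswers")
                .doc(answer.userId)
                .set(answer, { merge: true });
        }
    };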
    

    I use a dead-letter queue attached to my main queue for failed events.

      Resources:
        QuizAnswerReceiverQueue:
          Type: AWS::SQS::Queue
          Properties:
            QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
            # VisibilityTimeout MUST be greater than the lambda functions timeout https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/
    
            # The length of time during which a message will be unavailable after a message is delivered from the queue.
            # This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.
            VisibilityTimeout: 900 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
    
            # The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
            MessageRetentionPeriod: 345600  # The number of seconds that Amazon SQS retains a message. 
            RedrivePolicy:
              deadLetterTargetArn:
                "Fn::GetAtt":
                  - QuizAnswerReceiverQueueDLQ
                  - Arn
              maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
        QuizAnswerReceiverQueueDLQ:
          Type: "AWS::SQS::Queue"
          Properties:
            QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
            MessageRetentionPeriod: 1209600 # 14 days in seconds