fix: Respect flush_queue_size when calling Workers.flush (#63)
dsaxton-1password wants to merge 3 commits into amplitude:main
Conversation
Hi @dsaxton-1password,
Not frequently, but the way our event consumer logic works, there is a need to periodically "checkpoint" our progress before continuing. To do this we explicitly call `Client.flush` to ensure all events are processed before checkpointing. However, since there is no guarantee that the queue has a reasonable size at that point (we could be ingesting events much faster than they are flushed in the background), we can end up sending an exceedingly large payload that then fails. It's also worth noting that we already set both the max flush queue size and the flush interval when we initialize our client, but the issue happens nonetheless. As I see it, there should be no disadvantage to sending the data in batches when someone calls `flush` (especially if the batches are the same size as what is normally sent in the background), but there may be a benefit.
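The checkpoint pattern described above can be sketched roughly as follows. This is a stand-in, not the Amplitude SDK: `FakeClient`, `consume`, and `save_checkpoint` are hypothetical names used only to illustrate why an unbatched `flush` can accumulate an arbitrarily large payload between checkpoints.

```python
class FakeClient:
    """Stand-in for an analytics client: track() buffers, flush() drains."""

    def __init__(self):
        self.queue = []   # events waiting to be sent
        self.sent = []    # one entry per payload actually sent

    def track(self, event):
        self.queue.append(event)

    def flush(self):
        # Without batching, everything queued goes out as one payload;
        # if the queue grew large between checkpoints, that payload can
        # exceed server limits (e.g. 413 Content Too Large).
        self.sent.append(list(self.queue))
        self.queue.clear()


checkpoints = []


def save_checkpoint(offset):
    # Hypothetical: record consumer progress (e.g. a Kafka offset).
    checkpoints.append(offset)


def consume(records, client, checkpoint_every=3):
    """Process records, flushing before each checkpoint so no event is lost."""
    processed = 0
    for record in records:
        client.track(record)
        processed += 1
        if processed % checkpoint_every == 0:
            client.flush()              # deliver everything queued so far...
            save_checkpoint(processed)  # ...then record progress


client = FakeClient()
consume(range(7), client)
```

Note that the payload size here is bounded only by how fast events arrive between checkpoints, which is the failure mode the PR addresses.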
Force-pushed from 848b1a5 to 4964d80
Hi @dsaxton-1password |
Summary
Currently `Workers.flush` doesn't respect `flush_queue_size`, which can cause a large number of events to be sent to Amplitude when this method is called. This can then trigger errors such as 413 (Content Too Large). The change here updates the method to send the data in batches according to `flush_queue_size` in order to prevent this.

This modifies the return type of `Workers.flush` from `Future` to `List[Future]`, which I believe only impacts the behavior in `Timeline.flush`, but I could be wrong there.

Checklist
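A minimal sketch of the batching change, under stated assumptions: `flush`, `send`, and `FLUSH_QUEUE_SIZE` here are simplified stand-ins for the SDK's internals, not its actual code. It shows the shape of the fix: drain the queue in `flush_queue_size`-sized chunks and return one future per batch (hence the `Future` → `List[Future]` return-type change).

```python
from concurrent.futures import Future, ThreadPoolExecutor
from typing import List

FLUSH_QUEUE_SIZE = 200  # stands in for the configured flush_queue_size


def send(batch):
    # Placeholder for the actual HTTP upload of one payload;
    # returns the batch size so the sketch is easy to verify.
    return len(batch)


def flush(queue, executor) -> List[Future]:
    """Drain the queue in chunks of FLUSH_QUEUE_SIZE instead of one payload.

    Returns a list of futures, one per batch, mirroring the PR's change
    of the return type from Future to List[Future].
    """
    events = list(queue)
    queue.clear()
    return [
        executor.submit(send, events[i:i + FLUSH_QUEUE_SIZE])
        for i in range(0, len(events), FLUSH_QUEUE_SIZE)
    ]


pool = ThreadPoolExecutor(max_workers=2)
queue = list(range(450))          # more events than one flush_queue_size
futures = flush(queue, pool)
sizes = [f.result() for f in futures]
pool.shutdown()
```

With 450 queued events and a batch size of 200, this submits three payloads of 200, 200, and 50 events instead of a single 450-event payload, which is the behavior the PR describes.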