Community users reported that workflow executions sometimes fail when steps produce a significant amount of output data.
This happens because there is a limit on the total amount of output a workflow can store: once the combined output of all steps reaches that limit, every subsequent step fails.
To improve this, here is what we changed for now:
- We increased the default limit to 10 MB
- If that is not enough, you can now customize the limit via the OPS_REQUEST_BODY_LIMIT environment variable (see the sketch below)
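
For illustration, here is a minimal sketch of how such a cumulative limit might be read and enforced. The OPS_REQUEST_BODY_LIMIT variable name and the 10 MB default come from the notes above; the byte units, function names, and enforcement logic are assumptions for illustration, not the actual implementation.

```python
import os

# Default cap on the combined output of all steps in one execution.
# The 10 MB figure comes from the notes above; byte units are an assumption.
DEFAULT_OUTPUT_LIMIT_BYTES = 10 * 1024 * 1024

def output_limit_bytes() -> int:
    """Return the configured limit, falling back to the default."""
    raw = os.environ.get("OPS_REQUEST_BODY_LIMIT")
    return int(raw) if raw else DEFAULT_OUTPUT_LIMIT_BYTES

def check_step_output(total_so_far: int, new_output: int) -> int:
    """Fail the step if the cumulative output would exceed the limit.

    The limit applies to the *sum* of all step outputs in an execution,
    which is why every subsequent step fails once the cap is hit.
    """
    total = total_so_far + new_output
    if total > output_limit_bytes():
        raise RuntimeError(
            f"combined step output of {total} bytes exceeds the "
            f"{output_limit_bytes()}-byte limit"
        )
    return total
```

Under this model, raising the limit is just a matter of setting the variable before starting the service, e.g. OPS_REQUEST_BODY_LIMIT=52428800 for 50 MB (again, assuming the value is given in bytes).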