Description

Part of the Community Awareness and UX improvements initiative.
The goal of this issue is to see whether lowering the store node query batch from 24 hours to one hour could improve the UX for first-time users of Communities without degrading the performance of the store nodes themselves.

The code is located here: https://github.com/waku-org/go-waku/blob/master/waku/v2/api/history/history.go#L163-L175
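For concreteness, here is a minimal sketch of the batching idea, not the linked go-waku code itself: the requested history interval is split into fixed windows, and the window size is the value this issue proposes lowering from 24 hours to 1 hour. The type and function names (timeRange, splitIntoBatches) are made up for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// timeRange is a hypothetical [Start, End) query window; the real
// go-waku history code defines its own types for this.
type timeRange struct {
	Start time.Time
	End   time.Time
}

// splitIntoBatches splits the requested interval into fixed-size windows,
// newest first, so the most recent messages arrive first. The batch size
// is the knob this issue proposes changing from 24h to 1h.
func splitIntoBatches(from, to time.Time, batch time.Duration) []timeRange {
	var batches []timeRange
	for end := to; end.After(from); {
		start := end.Add(-batch)
		if start.Before(from) {
			start = from
		}
		batches = append(batches, timeRange{Start: start, End: end})
		end = start
	}
	return batches
}

func main() {
	to := time.Now()
	from := to.Add(-7 * 24 * time.Hour) // the full 7-day history window

	// 24h batches -> 7 store queries; 1h batches -> 168 smaller, faster queries.
	for _, b := range splitIntoBatches(from, to, time.Hour)[:3] {
		fmt.Printf("store query %s -> %s\n", b.Start.Format(time.RFC3339), b.End.Format(time.RFC3339))
	}
}
```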
@Ivansete-status also noted that store nodes are already partitioned per hour, so using the same partition window for queries could significantly improve fetch times.
Here is an example of how they are partitioned:
Acceptance Criteria

When a user first joins a big Community (e.g. Status), they receive the first messages faster than before (especially on Mobile)
The performance of store nodes is not impacted, or only barely impacted
Use Grafana to monitor RAM and CPU usage
Possible side solution
If the performance of store nodes is negatively impacted by this, we can consider some sort of exponential increase of the batch size.
E.g.:
1 hour, 2 hours, 4 hours, 8 hours, 16 hours, 24 hours (capping at 24-hour batches), until we have fetched the full duration the client asked for (7 days, I think, but the algorithm should adapt to whatever interval is given)
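A rough sketch of that fallback schedule follows; the function name is an assumption for illustration, not the go-waku API. It starts at 1 hour, doubles the window on each query, caps at 24 hours, and stops once the requested interval is covered.

```go
package main

import (
	"fmt"
	"time"
)

// exponentialBatches builds the fallback schedule described above: start
// with a 1-hour window and double it on each subsequent query, capping at
// 24 hours, until the whole requested interval has been covered. Newest
// messages are fetched first. This is a sketch of the idea, not the
// go-waku implementation.
func exponentialBatches(from, to time.Time) []time.Duration {
	const maxBatch = 24 * time.Hour
	var schedule []time.Duration
	batch := time.Hour
	for end := to; end.After(from); {
		remaining := end.Sub(from)
		if batch > remaining {
			batch = remaining // last window only covers what is left
		}
		schedule = append(schedule, batch)
		end = end.Add(-batch)
		if next := batch * 2; next <= maxBatch {
			batch = next
		} else {
			batch = maxBatch
		}
	}
	return schedule
}

func main() {
	to := time.Now()
	from := to.Add(-7 * 24 * time.Hour) // whatever interval the client asked for
	// Prints 1h, 2h, 4h, 8h, 16h, then repeated 24h windows until the 7 days are covered.
	fmt.Println(exponentialBatches(from, to))
}
```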