Try lowering the store node query batch to one hour #6531

Open
jrainville opened this issue Apr 15, 2025 · 0 comments
Description

Part of the Community Awareness and UX improvements initiative.

The goal of this issue is to see whether lowering the store node query batch from 24 hours to one hour could improve the UX for first-time users of Communities without degrading the performance of the store nodes themselves.

The code is located here: https://github.com/waku-org/go-waku/blob/master/waku/v2/api/history/history.go#L163-L175

@Ivansete-status also noted that store nodes are already partitioned per hour, so using the same partition interval for queries could improve the fetch time a lot.
Here is an example of how they are partitioned:

 public | messages_1720555200_1720558800 | table             | nim-waku | permanent   | heap          | 310 MB     | 
 public | messages_1721055600_1721059200 | table             | nim-waku | permanent   | heap          | 926 MB     | 
 public | messages_1721185200_1721188800 | table             | nim-waku | permanent   | heap          | 1924 MB    | 
 public | messages_1721203200_1721206800 | table             | nim-waku | permanent   | heap          | 5053 MB    | 
 public | messages_1721206800_1721210400 | table             | nim-waku | permanent   | heap          | 4675 MB    | 
 public | messages_1721210400_1721214000 | table             | nim-waku | permanent   | heap          | 1919 MB    | 

Acceptance Criteria

  • When a user first joins a big Community (e.g. Status), they receive the first messages faster than before (especially on Mobile)
  • The performance of store nodes is not impacted or barely impacted
    • Use Grafana to monitor the RAM and CPU usage

Possible side solution

If the performance of store nodes is negatively impacted by this, we can consider some sort of exponential increase in batch size.
E.g.:
1 hour, 2 hours, 4 hours, 8 hours, 16 hours, 24 hours (stay at 24-hour batches as a max), until we have fetched the full duration the client asked for (7 days I think, but the algorithm should adapt to whatever interval is given).
