
ACM-20689-Compress-ACM-Backend-Caches #4526


Open
jeswanke wants to merge 3 commits into main

Conversation

jeswanke (Contributor) commented May 22, 2025

compression.ts now has three methods to compress the backend caches (event and app)

  1. filter
  2. compress
  3. deflate

1. filter
if a resource contains any of these keys, they are filtered out of the cached resource:
const filterKeys = new Set(['finalizers', 'flags'])
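
A minimal sketch of the recursive filtering idea, assuming resources are plain JSON objects; filterResource and its exact shape are illustrative assumptions, not the code in this PR:

const filterKeys = new Set(['finalizers', 'flags'])

// drop any key in filterKeys, recursing into nested objects and arrays
function filterResource(value: unknown): unknown {
  if (Array.isArray(value)) {
    return value.map(filterResource)
  }
  if (value && typeof value === 'object') {
    const filtered: Record<string, unknown> = {}
    for (const [key, child] of Object.entries(value)) {
      if (!filterKeys.has(key)) {
        filtered[key] = filterResource(child)
      }
    }
    return filtered
  }
  return value
}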
2. json compression
based on an idea in https://www.npmjs.com/package/compress-json
converts commonly used keys and values into a dictionary array (e.g. index=1, key='apiVersion') so that something like this:
apiVersion: "cluster.open-cluster-management.io/v1beta1"
is compressed to:
"1": "3"

3. deflate
we then use the zlib deflateRawSync() function to deflate this compressed JSON
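
A minimal sketch of this step, assuming the dictionary-compressed JSON is stringified before deflating; the deflateResource/inflateResource shapes shown here are assumptions, not the actual code:

import { deflateRawSync, inflateRawSync } from 'zlib'

// zip the dictionary-compressed JSON into a Buffer for the cache
function deflateResource(compressed: object): Buffer {
  return deflateRawSync(JSON.stringify(compressed))
}

// reverse the zlib step before handing the resource back to callers
function inflateResource(deflated: Buffer): object {
  return JSON.parse(inflateRawSync(deflated).toString()) as object
}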

before:
[screenshot]

after:
[screenshot]

Signed-off-by: John Swanke [email protected]


openshift-ci bot commented May 22, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jeswanke

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Signed-off-by: John Swanke <[email protected]>
@jeswanke jeswanke requested a review from KevinFCormier May 27, 2025 19:55
Signed-off-by: John Swanke <[email protected]>
KevinFCormier (Contributor) left a comment


John,

This is a very big memory saving, achieved with some neat coding techniques. I appreciated that you handled keys that are already numbers!

I noted some concerns inline about the data we are stripping.

Further, I performed an experiment by commenting out the calls to compressResource and decompressResource from deflateResource and inflateResource, respectively. I did this on weekly, which doesn't have much data on it, but perhaps we could test against a scale environment or load up the cluster more.

main branch:
INFO:memory, caches{ appCache:306 KB, eventCache:702 KB, clients:0, events:696 KB } = 1704 KB

with your changes:
INFO:memory, caches{ clients:0, appCache:72 KB, appDict:24 KB, eventCache:198 KB, eventDict:75 KB } = 369 KB

with only zlib compression:
INFO:memory, caches{ clients:0, appCache:136 KB, appDict:0 KB, eventCache:304 KB, eventDict:0 KB } = 440 KB

So, it looks to me like we get almost all the benefit with zlib compression alone, which reduces the code complexity and risks. Might also be worth looking at the numbers without zlib and with just the dictionary-based compression, in case your method is cheaper (in CPU cycles) than zlib.

Did you make any efforts to measure impact on compute requirements or initial loading time? I'm curious if having to inflate every individual event will have an impact, or if network speeds and client-side processing will still be the limiting factor.
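
A rough way to get a number for that, assuming the event cache holds one deflated Buffer per event; the helper below is only an illustration, not existing code:

import { performance } from 'perf_hooks'
import { inflateRawSync } from 'zlib'

// time how long it takes to inflate and parse every cached event
function timeInflateAll(deflatedEvents: Buffer[]): number {
  const start = performance.now()
  for (const buf of deflatedEvents) {
    JSON.parse(inflateRawSync(buf).toString())
  }
  return performance.now() - start // total milliseconds
}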

Comment on lines +86 to +93
// filter out all but the 3 most recent conditions
if (parentKey === 'conditions') {
  resource = resource
    .sort((a: SortTimeType, b: SortTimeType) => {
      return new Date(b?.lastTransitionTime).getTime() - new Date(a?.lastTransitionTime).getTime()
    })
    .slice(0, 3)
}
Contributor

I'm not confident this is a safe change to make. Status conditions are used a lot in the UI code.

Contributor

Example: [screenshot]

}

// keys that point to values we should filter out
const filterKeys = new Set(['finalizers', 'flags'])
Contributor

We already have a pruneResources function that strips certain things. It looks to me like these key names would apply recursively, and who knows where "flags" might appear in resource YAML. I think it would be better to centralize any pruning of data and to be explicit about the exact field we are targeting.
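
For illustration, a sketch of targeting an exact field rather than matching key names anywhere in the tree; the path shown (metadata.finalizers) is an assumption about where the field lives:

// strip only the field we explicitly intend to drop
function pruneExplicit(resource: { metadata?: { finalizers?: string[] } }): void {
  if (resource.metadata) {
    delete resource.metadata.finalizers
  }
}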
