Create opensearch.py #4028
Conversation
Congratulations! One of the builds has completed. 🍾 You can install the built RPMs by following these steps:
Please note that the RPMs should be used only in a testing environment.
opensearch_config_file = self.path_join(
    "/etc/opensearch/opensearch.yml"
)
Why not simply opensearch_config_file = "/etc/opensearch/opensearch.yml"? Is it because you open the file in get_hostname_port and the absolute filename is different inside a container (due to a changed sysroot)?
This reference path can be relative to a different sysroot (rather than hardcoding the full path). Also, this is almost a replica of the elasticsearch.py plugin, which works in much the same way for OpenSearch (except for the charmed versions, which might require customizations).
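For illustration, a minimal sketch of how the sysroot-aware path is built (illustrative plugin skeleton and names, not the exact code in this PR):

from sos.report.plugins import Plugin, IndependentPlugin

class OpenSearchSketch(Plugin, IndependentPlugin):
    """Sketch only; names here are illustrative, not the submitted plugin."""
    short_desc = 'OpenSearch search engine (sketch)'
    plugin_name = 'opensearch_sketch'

    def setup(self):
        # path_join() prefixes the plugin's sysroot, so with the default root
        # this resolves to /etc/opensearch/opensearch.yml, while running with
        # e.g. --sysroot /host yields /host/etc/opensearch/opensearch.yml.
        opensearch_config_file = self.path_join("/etc/opensearch/opensearch.yml")
        self.add_copy_spec(opensearch_config_file)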
self.add_cmd_output([
    f"curl -X GET '{endpoint}/_cluster/settings?pretty'",
    f"curl -X GET '{endpoint}/_cluster/health?pretty'",
    f"curl -X GET '{endpoint}/_cluster/stats?pretty'",
    f"curl -X GET '{endpoint}/_cat/nodes?v'",
    f"curl -X GET '{endpoint}/_cat/indices'",
    f"curl -X GET '{endpoint}/_cat/shards'",
    f"curl -X GET '{endpoint}/_cat/aliases'",
])
How robust is it to invoke so many curl commands? If there were a timeout problem with the (local) peer - something that can happen, since sos report is called to diagnose all kinds of problems - then invoking these curls could take a long time before the commands or the plugin time out.
Is that acceptable behaviour? Isn't it worth decreasing the command timeout to limit this possible negative impact? (i.e. if you know each curl command usually finishes in a few seconds, set the timeout to, say, 30s).
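For example, a possible way to bound the collection time (a sketch assuming add_cmd_output() accepts a timeout keyword, as in other sos plugins; the 30s value, class name, and endpoint are illustrative only):

from sos.report.plugins import Plugin, IndependentPlugin

class OpenSearchCurlSketch(Plugin, IndependentPlugin):
    """Sketch only: bounding the time spent on the REST queries."""
    short_desc = 'OpenSearch REST queries with a capped timeout (sketch)'
    plugin_name = 'opensearch_curl_sketch'

    def setup(self):
        endpoint = "localhost:9200"  # hypothetical default endpoint
        # timeout= caps how long sos waits for each command, so a hung or
        # unreachable node cannot consume the whole plugin time budget.
        self.add_cmd_output([
            f"curl -X GET '{endpoint}/_cluster/health?pretty'",
            f"curl -X GET '{endpoint}/_cat/indices'",
        ], timeout=30)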
I believe the curl output shouldn't take long (these are simple queries). Also, this is a replica of the elasticsearch.py plugin, which works fine, so there should be no issue with those APIs.
profiles = ('services',)

packages = ('opensearch',)
Is there a service we should monitor? If so, it may be worth adding:
services = ('opensearch',)
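For context, a sketch of where that tuple would sit among the existing triggers (illustrative skeleton and names, not the exact code in this PR); declaring the unit should also let sos collect its status and journal by default:

from sos.report.plugins import Plugin, IndependentPlugin

class OpenSearchTriggersSketch(Plugin, IndependentPlugin):
    """Sketch only: enablement triggers for the plugin."""
    short_desc = 'OpenSearch trigger declaration (sketch)'
    plugin_name = 'opensearch_triggers_sketch'

    profiles = ('services',)
    packages = ('opensearch',)
    # Assumed unit name; with this declared, current sos should also gather
    # the opensearch service status and journal automatically.
    services = ('opensearch',)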
I added this service
Thank you!
@henryoudaimy for the DCO error, make sure you sign the commit.
@henryoudaimy fantastic work! Now, we still get the DCO errors, and we have two commits that should be one. Could you squash them together? One way could be:
git rebase -i HEAD~2
Then mark the second commit (b20937aa86d69d5de3e0151f02c991175515ce2f) with f (fixup) to fold it into the first one. Once you do that, sign the commit with:
git commit --amend -s
There are probably other ways to do it, but that's what works for me.
Please place an 'X' inside each '[]' to confirm you adhere to our Contributor Guidelines