Create libvirt pool and volume on KVM host & libvirt config map update #499


Merged: 2 commits merged into openshift:devel on Feb 20, 2025

Conversation

@Saripalli-lavanya (Contributor) commented Jan 22, 2025:

Adding some test log results:

Checking existence of libvirt pool 'sl-pool-1' and volume 'sl-volume-1'...
virsh -d 0 -c 'qemu+ssh://[email protected]/system?no_verify=1' pool-info 'sl-pool-1'
pool-info: pool(optdata): sl-pool-1
pool-info: found option <pool>: sl-pool-1
pool-info: <pool> trying as pool NAME
error: failed to get pool 'sl-pool-1'
error: Storage pool not found: no storage pool with matching name 'sl-pool-1'
vol-info: pool(optdata): sl-pool-1
vol-info: vol(optdata): sl-volume-1
vol-info: found option <pool>: sl-pool-1
vol-info: <pool> trying as pool NAME
error: failed to get pool 'sl-pool-1'
error: Storage pool not found: no storage pool with matching name 'sl-pool-1'
Libvirt pool 'sl-pool-1' or volume 'sl-volume-1' does not exist. Proceeding to create...
pool-define-as: name(optdata): sl-pool-1
pool-define-as: type(optdata): dir
pool-define-as: target(optdata): /var/lib/libvirt/images
Pool sl-pool-1 defined
pool-start: pool(optdata): sl-pool-1
pool-start: found option <pool>: sl-pool-1
pool-start: <pool> trying as pool NAME
Pool sl-pool-1 started
vol-create-as: pool(optdata): sl-pool-1
vol-create-as: name(optdata): sl-volume-1
vol-create-as: capacity(optdata): 20G
vol-create-as: allocation(optdata): 2G
vol-create-as: prealloc-metadata(bool)
vol-create-as: format(optdata): qcow2
vol-create-as: found option <pool>: sl-pool-1
vol-create-as: <pool> trying as pool NAME
Vol sl-volume-1 created
Created libvirt pool and volume successfully.
Creating qcow2 image for libvirt provider from scratch
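
For reference, the check-and-create flow behind a log like this could look roughly as follows. This is a minimal sketch, assuming LIBVIRT_URI, LIBVIRT_POOL, LIBVIRT_VOL_NAME, and LIBVIRT_DIR_NAME are populated from the peer-pods-cm ConfigMap; the actual script in this PR may differ in details:

```bash
# Sketch only, not the exact code in this PR.
if virsh -d 0 -c "${LIBVIRT_URI}" pool-info "${LIBVIRT_POOL}" >/dev/null 2>&1 &&
   virsh -d 0 -c "${LIBVIRT_URI}" vol-info --pool "${LIBVIRT_POOL}" "${LIBVIRT_VOL_NAME}" >/dev/null 2>&1; then
    echo "Libvirt pool '${LIBVIRT_POOL}' with volume '${LIBVIRT_VOL_NAME}' already exists."
else
    echo "Libvirt pool '${LIBVIRT_POOL}' or volume '${LIBVIRT_VOL_NAME}' does not exist. Proceeding to create..."
    # Define and start a directory-backed pool, then create a qcow2 volume,
    # matching the pool-define-as / pool-start / vol-create-as sequence in the log.
    virsh -d 0 -c "${LIBVIRT_URI}" pool-define-as "${LIBVIRT_POOL}" dir --target "${LIBVIRT_DIR_NAME}"
    virsh -d 0 -c "${LIBVIRT_URI}" pool-start "${LIBVIRT_POOL}"
    virsh -d 0 -c "${LIBVIRT_URI}" vol-create-as --pool "${LIBVIRT_POOL}" --name "${LIBVIRT_VOL_NAME}" \
        --capacity 20G --allocation 2G --prealloc-metadata --format qcow2
fi
```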

If the pool and volume already exist:

Checking existence of libvirt pool 'sl-pool-1' and volume 'sl-volume-1'...
virsh -d 0 -c 'qemu+ssh://[email protected]/system?no_verify=1' pool-info 'sl-pool-1'
pool-info: pool(optdata): sl-pool-1
pool-info: found option <pool>: sl-pool-1
pool-info: <pool> trying as pool NAME
Name:      sl-pool-1
UUID:      d9dc84d3-9a18-4833-8d66-95c2dc26aa9b
State:     running
Pool persistent flag value: 1
Persistent:   yes
Autostart:   no
Capacity:    2.00 TiB
Allocation:   1.32 TiB
Available:   699.47 GiB
vol-info: pool(optdata): sl-pool-1
vol-info: vol(optdata): sl-volume-1
vol-info: found option <pool>: sl-pool-1
vol-info: <pool> trying as pool NAME
vol-info: found option <vol>: sl-volume-1
vol-info: <vol> trying as vol name
Name:      sl-volume-1
Type:      file
Capacity:    100.00 GiB
Allocation:   1.13 GiB
Disclaimer: A Libvirt pool named 'sl-pool-1' with volume 'sl-volume-1' already exists on the KVM host. Image will be uploaded to same volume.
Creating qcow2 image for libvirt provider from scratch

Deletion test log results:

Deleting Libvirt image id
Deleting Libvirt image
vol-delete: pool(optdata): sl-pool-1
vol-delete: vol(optdata): sl-volume-1
vol-delete: found option <pool>: sl-pool-1
vol-delete: <pool> trying as pool NAME
vol-delete: found option <vol>: sl-volume-1
vol-delete: <vol> trying as vol name
Vol sl-volume-1 deleted
Volume 'sl-volume-1' deleted successfully.
Other volumes are present in the pool. Pool will not be deleted.
Deleted libvirt image successfully
Deleting libvirt volume from peer-pods-cm configmap
configmap/peer-pods-cm patched
libvirt image id deleted from peer-pods-cm configmap successfully
configmap/peer-pods-cm patched (no change)
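
The deletion path mirrors this. A rough sketch of the logic behind the log above, under the same variable assumptions (the "other volumes are present" guard is shown here via vol-list; the merged handler may implement it differently):

```bash
# Sketch only: delete the volume, then remove the pool only if it is empty.
virsh -d 0 -c "${LIBVIRT_URI}" vol-delete --pool "${LIBVIRT_POOL}" "${LIBVIRT_VOL_NAME}" &&
    echo "Volume '${LIBVIRT_VOL_NAME}' deleted successfully."

if [ -z "$(virsh -q -c "${LIBVIRT_URI}" vol-list "${LIBVIRT_POOL}")" ]; then
    virsh -d 0 -c "${LIBVIRT_URI}" pool-destroy  "${LIBVIRT_POOL}"
    virsh -d 0 -c "${LIBVIRT_URI}" pool-undefine "${LIBVIRT_POOL}"
else
    echo "Other volumes are present in the pool. Pool will not be deleted."
fi
```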

@openshift-ci bot added the do-not-merge/work-in-progress (PR should not merge; work in progress) and needs-ok-to-test (requires an org member to verify it is safe to test) labels on Jan 22, 2025

openshift-ci bot commented Jan 22, 2025

Hi @Saripalli-lavanya. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@Saripalli-lavanya force-pushed the sl-libvirt-auto branch 3 times, most recently from 95a4394 to f7d6a28 on January 30, 2025 14:24
@Saripalli-lavanya marked this pull request as ready for review on January 30, 2025 14:26
@openshift-ci bot removed the do-not-merge/work-in-progress label on Jan 30, 2025
@openshift-ci bot requested review from tbuskey and vvoronko on January 30, 2025 14:27
@@ -8,7 +8,6 @@ stringData:
#CLOUD_PROVIDER: "libvirt"
#LIBVIRT_URI: "qemu+ssh://[email protected]/system?no_verify=1"
#LIBVIRT_NET: "default
Contributor (@bpradipt):

LIBVIRT_NET can also be moved to peer-pods-cm, right?

Contributor Author (@Saripalli-lavanya):

Thank you for the review @bpradipt. Moved LIBVIRT_NET into peer-pods-cm as well, and updated the description with some test outputs.
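
For illustration only, the libvirt settings would then be read from the peer-pods-cm ConfigMap rather than the secret. A hypothetical way to seed it from the CLI (the namespace and the values here are placeholders, not taken from this PR):

```bash
kubectl create configmap peer-pods-cm \
    -n openshift-sandboxed-containers-operator \
    --from-literal=CLOUD_PROVIDER=libvirt \
    --from-literal=LIBVIRT_NET=default \
    --from-literal=LIBVIRT_POOL=sl-pool-1 \
    --from-literal=LIBVIRT_VOL_NAME=sl-volume-1 \
    --from-literal=LIBVIRT_DIR_NAME=/var/lib/libvirt/images
```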

@bpradipt added the ok-to-test (non-member PR verified by an org member as safe to test) label on Feb 5, 2025
@bpradipt requested a review from snir911 on February 5, 2025 12:33
@openshift-ci bot removed the needs-ok-to-test label on Feb 5, 2025
@Saripalli-lavanya (Contributor Author) commented:

/retest

// libvirt ConfigMap Keys
-libvirtConfigMapKeys := []string{"CLOUD_PROVIDER"}
+libvirtConfigMapKeys := []string{"CLOUD_PROVIDER", "LIBVIRT_POOL", "LIBVIRT_VOL_NAME", "LIBVIRT_DIR_NAME"}
Contributor:

/lgtm
nit: typo in commit message s/udpate/update

@openshift-ci bot added the lgtm (PR is ready to be merged) label on Feb 11, 2025
@bpradipt (Contributor) commented:

Looks like the prow bot takes lgtm for the whole PR and not just for an individual commit. I need to remove it so that it doesn't accidentally get merged before all reviews are done.

@bpradipt removed the lgtm label on Feb 11, 2025
@Saripalli-lavanya changed the title from "Create libvirt pool and volume on KVM host & libvirt config map udpate" to "Create libvirt pool and volume on KVM host & libvirt config map update" on Feb 11, 2025
@bpradipt (Contributor) left a comment:

/lgtm

@openshift-ci bot added the lgtm label on Feb 11, 2025
@snir911 (Contributor) left a comment:

Hi, thanks, and sorry for the late review.
LGTM overall; I added a small logic suggestion that would be nice to consider, IMHO :)

|| error_exit "Failed to start libvirt pool '${LIBVIRT_POOL}'."
fi

virsh -d 0 -c "${LIBVIRT_URI}" vol-create-as --pool "${LIBVIRT_POOL}" \
Contributor (@snir911):

If it fails at this stage (or, for example, at pool-start), will there be a leftover pool that is never cleaned up (and unusable)?

Contributor Author:

Hi @snir911, thank you for the review & suggestions. I have updated the code to clean up the pool upon volume-creation failure.
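
A minimal sketch of that cleanup-on-failure pattern, reusing the error_exit helper visible in the excerpts above (the merged code may sequence these steps differently):

```bash
virsh -d 0 -c "${LIBVIRT_URI}" vol-create-as --pool "${LIBVIRT_POOL}" \
    --name "${LIBVIRT_VOL_NAME}" --capacity 20G --allocation 2G \
    --prealloc-metadata --format qcow2 || {
    # Roll back the freshly created pool so no unusable leftover remains.
    virsh -d 0 -c "${LIBVIRT_URI}" pool-destroy  "${LIBVIRT_POOL}"
    virsh -d 0 -c "${LIBVIRT_URI}" pool-undefine "${LIBVIRT_POOL}"
    error_exit "Failed to create volume '${LIBVIRT_VOL_NAME}'; cleaned up pool '${LIBVIRT_POOL}'."
}
```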

virsh -d 0 -c "${LIBVIRT_URI}" pool-destroy "${LIBVIRT_POOL}" ||
# Delete the libvirt volume
virsh -d 0 -c "${LIBVIRT_URI}" vol-delete --pool "${LIBVIRT_POOL}" "${LIBVIRT_VOL_NAME}" ||
error_exit "Failed to delete volume '${LIBVIRT_VOL_NAME}' from pool '${LIBVIRT_POOL}'"
Contributor (@snir911):

Should it fail here, or should it continue and try to delete the pool?

Contributor Author:

Added cleanup:

virsh -d 0 -c "${LIBVIRT_URI}" pool-undefine "${LIBVIRT_POOL}" ||
echo "Pool '${LIBVIRT_POOL}' destroyed successfully."

virsh -d 0 -c "${LIBVIRT_URI}" pool-undefine "${LIBVIRT_POOL}" ||
Contributor (@snir911):

If the pool already exists, we may destroy a pool & volumes that were not created by this code.
Would it make sense to do the following:

If the user didn't set LIBVIRT_VOL & LIBVIRT_POOL ->
create a pool & vol named based on the cluster id, e.g. vol_ab12ac ->
set the values in the configMap

This is similar to the approach we take when we create images.

Contributor Author:

If the user misses setting LIBVIRT_VOL and LIBVIRT_POOL, the controller will give an error and will not proceed further. However, as a preventive measure, we have added default values.

Contributor (@snir911):

I guess the defaults can be removed if we create the pool/volume with specific naming; doing that will prevent accidental modification of unrelated volumes/pools that are in use by other libvirtd clients.
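
For the record, the suggested future change could look roughly like the sketch below. This is entirely hypothetical and not part of this PR; the clusterID jsonpath and the naming scheme are assumptions:

```bash
# Derive unique pool/volume names from the OpenShift cluster id when the user set none.
cluster_id=$(kubectl get clusterversion version -o jsonpath='{.spec.clusterID}' | cut -c1-6)
LIBVIRT_POOL="${LIBVIRT_POOL:-pool-${cluster_id}}"
LIBVIRT_VOL_NAME="${LIBVIRT_VOL_NAME:-vol-${cluster_id}}"
```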

@openshift-ci bot removed the lgtm label on Feb 17, 2025

openshift-ci bot commented Feb 17, 2025

New changes are detected. LGTM label has been removed.

Moving LIBVIRT_POOL, LIBVIRT_VOL_NAME & LIBVIRT_DIR_NAME variables into config map instead of secrets

Signed-off-by: Saripalli Lavanya <[email protected]>
Added code to check whether the Libvirt pool and volume exist, creating them through the handler if not

Signed-off-by: Saripalli Lavanya <[email protected]>
@Saripalli-lavanya (Contributor Author) commented:

Hi @snir911 and @bpradipt, could you please re-review the PR? I have updated the code as per the suggestions. Thank you so much.


openshift-ci bot commented Feb 17, 2025

@Saripalli-lavanya: all tests passed!


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@snir911 (Contributor) left a comment:

LGTM, thanks. Also, please see my comment for consideration as a future change.

@bpradipt merged commit 55d18c9 into openshift:devel on Feb 20, 2025
4 of 5 checks passed