CORENET-5972: Consume openvswitch-ipsec systemd service for OVN IPsec deployment #2662
Conversation
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854 openshift/os#1718 openshift/machine-config-operator#4878 openshift/ovn-kubernetes#2472
2 similar comments
@pperiyasamy I agree, we should get OVS 3.5 first into rhcos / ovn-k / microshift. We can install openvswitch3.5-ipsec at the same time; it should not be a problem since the service is disabled until CNO activates it. We need OVS 3.5 either way for other purposes (rhel 10 support, for example). Once we have OVS 3.5 and the openvswitch-ipsec service, we can more easily test CNO and other changes.
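For context, a minimal sketch (assuming standard rpm/systemctl behaviour; only the package and unit names come from this thread) of how the "installed but inert" state could be verified on a node:

```sh
# Confirm the package is present while the service stays disabled
# until CNO explicitly activates it.
rpm -q openvswitch3.5-ipsec                      # package installed on the host
systemctl is-enabled openvswitch-ipsec.service   # expected output: "disabled"
systemctl is-active openvswitch-ipsec.service    # expected output: "inactive"
```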
@pperiyasamy: This pull request references SDN-5330 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed from 51fb402 to 70f121e
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854 openshift/os#1718 openshift/machine-config-operator#4878 openshift/ovn-kubernetes#2472
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/os#1718 openshift/machine-config-operator#4878 openshift/ovn-kubernetes#2472
Force-pushed from 2933e60 to 45b106a
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-serial openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-serial openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
Force-pushed from 45b106a to 555d31c
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-serial openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/assign @anuragthehatter @huiran0826
Force-pushed from 555d31c to 5b0839a
Force-pushed from 01be1b3 to ac1108b
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-serial openshift/ovn-kubernetes#2472 openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4878
/assign @jcaamano
@pperiyasamy: This pull request references SDN-5330 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@pperiyasamy: This pull request references CORENET-5972 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed from ac1108b to fd6e0e3
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: pperiyasamy. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4878
The ovn-ipsec-host daemonset pod currently spins up the ovs-monitor-ipsec process to configure IPsec connections with the peer nodes. In a node/service restart scenario, this means IPsec connections for the existing nodes are established only some time after kubelet has started, and by then workloads scheduled on the node start hitting traffic drops because IPsec connections between nodes are not yet available. This makes the IPsec jobs in CI very unstable, and the monitor jobs always fail during IPsec upgrades. The FDP story (https://issues.redhat.com/browse/FDP-1051) adds an openvswitch-ipsec systemd service (which runs ovs-monitor-ipsec) with the required configurable parameters; it is available with OVS 3.5. So this commit does the following:
1. Stop spawning ovs-monitor-ipsec as a foreground process in the ovn-ipsec container. Instead, set up the required IPsec configuration parameters in the /etc/sysconfig/openvswitch file, then enable and start the openvswitch-ipsec service on the host. This is done when the ovn-ipsec-host pod comes up for the first time; in pod restart scenarios, it just checks that the openvswitch-ipsec service is running on the host, and otherwise exits from the container with an error.
2. Enable and start the openvswitch-ipsec systemd service from the IPsec machine configs when the service is already configured for east-west traffic.
3. Keep running an ovn-ipsec container and redirect /var/log/openvswitch/ovs-monitor-ipsec.log to the ovn-ipsec container's stdout console.
4. No ipsec state and policy cleanup is needed in the ovn-ipsec-cleanup container when OVN IPsec is handled via the openvswitch-ipsec systemd service.
5. During an OCP upgrade, the new ipsec os extension takes a while to deploy with the openvswitch3.5-ipsec package, so the ovn-ipsec-host daemonset may be rendered before it lands. We need to handle that scenario by running ovs-monitor-ipsec in the container, so this commit also covers the transition phase of the process moving from container to host.
6. The ovn-keys init container configures OVS with the IPsec certificate paths, so the container uses the same host directory path to store the certificates and configure OVS with them, because the ovs-monitor-ipsec process now runs on the host.
Signed-off-by: Periyasamy Palanisamy <[email protected]>
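As a rough illustration of the container flow described in this commit message (a sketch only: the marker file, the sysconfig key, and the exact systemctl handling are assumptions, not the PR's actual code):

```sh
#!/bin/bash
set -euo pipefail

# Hypothetical marker recording that this node was already configured.
MARKER=/etc/openvswitch/.ipsec-host-configured

if [ ! -e "$MARKER" ]; then
  # First start on this node: write the IPsec parameters the monitor needs
  # (illustrative key), then hand ovs-monitor-ipsec over to the host service.
  echo 'OVS_IPSEC_IKE_DAEMON=libreswan' >> /etc/sysconfig/openvswitch
  systemctl enable --now openvswitch-ipsec.service
  touch "$MARKER"
elif ! systemctl -q is-active openvswitch-ipsec.service; then
  # Pod restart: the service must already be running on the host.
  echo "openvswitch-ipsec service is not running on the host" >&2
  exit 1
fi

# Keep the ovn-ipsec container alive and mirror the monitor's log to stdout.
exec tail -F /var/log/openvswitch/ovs-monitor-ipsec.log
```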
Any change in the IPsec machine configs leads to the machine configs being rolled out twice and the nodes being rebooted twice; because of this, the apiserver pod's containers exit an excessive number of times. So this commit removes the changes from the IPsec machine configs and moves them into the MCO wait-for-ipsec-connect.service.
Signed-off-by: Periyasamy Palanisamy <[email protected]>
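A rough sketch of the idea (the unit name comes from the commit message; the drop-in mechanism and its contents are assumptions):

```sh
# Instead of carrying the enablement in an IPsec machine config (which would
# trigger an extra machine-config rollout and a second reboot), attach it to
# the existing MCO wait-for-ipsec-connect.service via a systemd drop-in.
mkdir -p /etc/systemd/system/wait-for-ipsec-connect.service.d
cat > /etc/systemd/system/wait-for-ipsec-connect.service.d/10-enable-ovs-ipsec.conf <<'EOF'
[Service]
# Bring the host ovs-monitor-ipsec service up before waiting for connections.
ExecStartPre=/usr/bin/systemctl enable --now openvswitch-ipsec.service
EOF
systemctl daemon-reload
```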
Force-pushed from fd6e0e3 to 4303e9b
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4878
@pperiyasamy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-serial openshift/machine-config-operator#4878
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4878
This ipsec machine config extension is now installing the openvswitch3.5-ipsec package on the node, so this PR consumes that package to configure, enable and start the openvswitch-ipsec systemd service, which basically moves the ovs-monitor-ipsec process from running in the container to running on the host. It fixes the following issues:
- The auto=start parameter added to each IPsec connection, which was introduced by the PR OCPBUGS-52280, SDN-5330: Add ipsec connect wait service machine-config-operator#4854.

In order to consume the openvswitch-ipsec systemd service, this PR does the following:
1. Stop spawning ovs-monitor-ipsec as a foreground process in the ovn-ipsec container. Instead, set up the required IPsec configuration parameters in the /etc/sysconfig/openvswitch file, then enable and start the openvswitch-ipsec service on the host. This is done when the ovn-ipsec-host pod comes up for the first time; in pod restart scenarios, it just checks that the openvswitch-ipsec service is running on the host, and otherwise exits from the container with an error.
2. Enable and start the openvswitch-ipsec systemd service from the IPsec machine configs when the service is already configured for east-west traffic.
3. Keep running an ovn-ipsec container and redirect /var/log/openvswitch/ovs-monitor-ipsec.log to the ovn-ipsec container's stdout console.
4. No ipsec state and policy cleanup is needed in the ovn-ipsec-cleanup container when OVN IPsec is handled via the openvswitch-ipsec systemd service.
5. During an OCP upgrade, the new ipsec os extension takes a while to deploy with the openvswitch3.5-ipsec package, so the ovn-ipsec-host daemonset may be rendered before it lands. We need to handle that scenario by running ovs-monitor-ipsec in the container, so this PR also covers the transition phase of the process moving from container to host (see the sketch after this list).
6. The ovn-keys init container configures OVS with the IPsec certificate paths, so the container uses the same host directory path to store the certificates and configure OVS with them, because the ovs-monitor-ipsec process now runs on the host.
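For the upgrade transition in item 5, a minimal sketch of the fallback decision (the unit detection and the legacy invocation are illustrative, not the PR's actual script):

```sh
# Prefer the host systemd service once the new ipsec os extension (and its
# openvswitch3.5-ipsec package) has landed; otherwise keep the old behaviour
# of running the monitor inside the container.
if systemctl list-unit-files openvswitch-ipsec.service --no-legend | grep -q openvswitch-ipsec; then
  systemctl enable --now openvswitch-ipsec.service
  exec tail -F /var/log/openvswitch/ovs-monitor-ipsec.log
else
  # Old OS image without the package: legacy in-container monitor
  # (database socket argument shown for illustration).
  exec ovs-monitor-ipsec unix:/var/run/openvswitch/db.sock
fi
```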
/assign @igsilya