Relocate SSNv2 range objects for new CA instances #4885

Merged: 2 commits into dogtagpki:master on Oct 31, 2024

Conversation

@edewata (Contributor) commented Oct 24, 2024

pkispawn has been modified to create the ou=ranges subtree for SSNv1 and, optionally, the ou=ranges_v2 subtree for SSNv2 if it is enabled for new CA instances. The pki-server <subsystem>-db-init and <subsystem>-range-update commands have been updated to use the proper subtree for storing the range objects. The hard-coded subtrees in create.ldif have been removed.

Similar changes have been made to the KRA as well, but since there are no tests for the KRA with SSNv2, it is not officially supported yet.

The test scripts have been updated to use the ou=ranges_v2 subtree for SSNv2 range objects.
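A minimal sketch of the resulting layout (illustrative only; the function and parameter names below are hypothetical, not the actual pkispawn code):

def get_ranges_rdn(id_generator):
    # SSNv1 ('legacy') keeps the original subtree; SSNv2 ('legacy2')
    # gets its own subtree so the two generators never share objects.
    if id_generator == 'legacy2':
        return 'ou=ranges_v2'
    return 'ou=ranges'

# For example, request range objects would then live under:
#   'ou=requests,' + get_ranges_rdn('legacy2')  ->  'ou=requests,ou=ranges_v2'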

edewata requested a review from fmarco76 on Oct 24, 2024, 16:12
@edewata (Contributor, Author) commented Oct 24, 2024

@fmarco76 For SSNv2 we could use it like this: edewata@4a85681

edewata mentioned this pull request on Oct 24, 2024
request_dn = self.mdict.get('pki_request_number_range_dn')
if request_dn:
subsystem.set_config('dbs.requestRangeDN', request_dn)

fmarco76 (Member) commented:

The dbs.requestRangeDN is assigned twice, here and on line 1188. Could we merge them with something like:

request_dn = self.mdict.setdefault('pki_request_number_range_dn', 'ou=requests,ou=ranges')
subsystem.set_config('dbs.requestRangeDN', request_dn)

and remove the one on line 1188 so it is assigned only once?

edewata (Contributor, Author) replied:

I decided not to add the pkispawn param for now due to complexity, so the subtree is always ou=ranges for SSNv1 and ou=ranges_v2 for SSNv2.

serial_dn = self.mdict.get('pki_serial_number_range_dn')
if serial_dn:
subsystem.set_config('dbs.serialRangeDN', serial_dn)

fmarco76 (Member) commented:

Same as above. (I think we could use the same approach for the other configuration here.)

edewata (Contributor, Author) replied:

Please see above.

@@ -105,6 +109,21 @@ public void execute(CommandLine cmd) throws Exception {

if (!cmd.hasOption("skip-containers")) {
ldapConfigurator.createContainers(subsystem);

String requestRangeRDN = dbConfig.getRequestRangeDN();
if (!StringUtils.isEmpty(requestRangeRDN)) {
fmarco76 (Member) commented:

If the id generator is legacy or legacy2, should we throw an exception if this is not configured?

edewata (Contributor, Author) replied:

In the updated PR the param is guaranteed to exist in the CA and KRA, so it won't fail. The param will not exist in other subsystems, so the code won't create the subtree there.

}

String serialRangeRDN = dbConfig.getSerialRangeDN();
if (!StringUtils.isEmpty(serialRangeRDN)) {
fmarco76 (Member) commented:

Same as above

@fmarco76 (Member) commented Oct 24, 2024

> @fmarco76 For SSNv2 we could use it like this: edewata@4a85681

Since the DN was configurable from the start, it is OK to add the possibility to configure it. I am not sure yet about moving the nextRange.

@edewata (Contributor, Author) commented Oct 26, 2024

@fmarco76 Thanks for the comments! I'll update this PR after PR #4886. I think we need to change the location of the nextRange for SSNv2 so that, if there's a problem with the migration, the nextRange for SSNv1 is still intact.

@fmarco76 (Member) commented:

> For SSNv2 we could use it like this: edewata@4a85681

With the new commit in PR #4887 the current ranges tree is not modified, so it is always possible to get the nextRange value for the legacy generator. Therefore I am wondering whether a procedure to revert the nextRange value could be enough. Maybe with the command:

# pki-server ca-id-generator-update --type legacy --revert cert

The --revert option would mean: do not perform the migration, just revert the value in CS.cfg and in nextRange using the range values and possibly some of the dbs.* values.
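A rough sketch of what such a revert could look like (the --revert option does not exist yet; the config key and attribute names are assumptions based on this discussion, not the real pki-server code):

def revert_to_legacy(cs_cfg, range_objects):
    # Hypothetical: switch the certificate ID generator back to SSNv1.
    cs_cfg['dbs.cert.id.generator'] = 'legacy'
    # Hypothetical: recompute nextRange from the recorded range objects,
    # starting just past the highest end value ever allocated so the
    # reverted generator cannot hand out an overlapping range.
    next_range = max(int(obj['endRange']) for obj in range_objects) + 1
    return cs_cfg, next_range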

@edewata (Contributor, Author) commented Oct 28, 2024

As mentioned in PR #4887, we are still modifying the same nextRange attribute used by SSNv1, which is replicated to all DS instances and might create potential issues. I'd prefer we move that into the new ou=ranges_v2 subtree so we have a cleaner separation between SSNv1 and SSNv2. Once this is done, if we provide revert functionality it will just be restoring the SSNv1 params in CS.cfg. However, if new certs have been issued with SSNv2, things could be more complicated. I'm not sure how the SSNv1 code will handle that scenario; worst case, the new certs will need to be removed/revoked as well. See also the Recovery section for RSNv3: https://github.com/dogtagpki/pki/wiki/Configuring-CA-with-Random-Serial-Numbers-v3

@fmarco76 (Member) commented Oct 28, 2024

> However, if new certs have been issued with SSNv2, things could be more complicated. I'm not sure how the SSNv1 code will handle that scenario; worst case, the new certs will need to be removed/revoked as well. See also the Recovery section for RSNv3: https://github.com/dogtagpki/pki/wiki/Configuring-CA-with-Random-Serial-Numbers-v3

There should be no problem moving from SSNv1 to SSNv2 and going back. Certificates are sequential and there is no overlap issue, so the problem is to find the right values to avoid conflicts. In the worst case it is possible to read the last serial issued and start the new range from that point. You could end up with a big gap, but no issues.
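A sketch of that worst case (names are illustrative, not actual pki-server code): derive the new range from the highest serial ever issued, accepting the gap:

def new_range(issued_serials, range_size):
    # Start just above the highest serial ever issued; the gap wastes
    # numbers but makes a conflict impossible.
    begin = max(issued_serials) + 1
    return begin, begin + range_size - 1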

Moving the nextRange to a different location could require additional configuration parameters. It is odd that, looking at the repository object, the nextRange is unused and keeps a stale value, while the new location for the nextRange is not actually a repository. Therefore, I am also looking at the possibility of storing the nextRange in a different place (e.g. the description field is not used; could it be used?).

@edewata (Contributor, Author) commented Oct 28, 2024

>> However, if new certs have been issued with SSNv2, things could be more complicated. I'm not sure how the SSNv1 code will handle that scenario; worst case, the new certs will need to be removed/revoked as well. See also the Recovery section for RSNv3: https://github.com/dogtagpki/pki/wiki/Configuring-CA-with-Random-Serial-Numbers-v3

> There should be no problem moving from SSNv1 to SSNv2 and going back. Certificates are sequential and there is no overlap issue, so the problem is to find the right values to avoid conflicts. In the worst case it is possible to read the last serial issued and start the new range from that point. You could end up with a big gap, but no issues.

Yes, it might work just fine, but since we do not have a test for this, we don't really know whether there are any unforeseen issues. It's better to minimize the risk as much as possible, especially since we will not be able to assist directly during the actual migration of a production environment.

> Moving the nextRange to a different location could require additional configuration parameters. It is odd that, looking at the repository object, the nextRange is unused and keeps a stale value, while the new location for the nextRange is not actually a repository. Therefore, I am also looking at the possibility of storing the nextRange in a different place (e.g. the description field is not used; could it be used?).

In edewata@4a85681 I was actually trying to use the repository object class for the ranges subtree so we can store the nextRange there. Let me update this PR to include that change.
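A rough sketch of such an entry created with python-ldap (the suffix, credentials, and starting value are placeholders; whether the repository object class fits here is exactly what the linked commit explores):

import ldap
import ldap.modlist

conn = ldap.initialize('ldap://localhost:389')
conn.simple_bind_s('cn=Directory Manager', 'Secret.123')

# A ranges_v2 container that can carry its own nextRange by using the
# repository object class, keeping it separate from the SSNv1 value.
dn = 'ou=ranges_v2,dc=ca,dc=pki,dc=example,dc=com'
entry = {
    'objectClass': [b'top', b'repository'],
    'ou': [b'ranges_v2'],
    'nextRange': [b'1000'],  # illustrative starting value
}
conn.add_s(dn, ldap.modlist.addModlist(entry))
conn.unbind_s()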

edewata changed the title from "Add installation params for range DNs" to "Relocate SSNv2 range objects for new CA instances" on Oct 31, 2024
@edewata (Contributor, Author) commented Oct 31, 2024

@fmarco76 Sorry for the delay. Please see the updated PR. I've changed the code to create SSNv2 range objects in ou=ranges_v2 for new instances, to match migrated instances. I dropped the pkispawn params for now and deferred moving the nextRange to a separate PR due to complexity. There are some failures in the CI, but I think they are caused by recent changes in Fedora.


@fmarco76 (Member) left a review:
LGTM

@edewata (Contributor, Author) commented Oct 31, 2024

@fmarco76 Thanks!

edewata merged commit 830de10 into dogtagpki:master on Oct 31, 2024
151 of 168 checks passed