Improve field count typical case performance #120

Merged
38 commits merged into boostorg:develop on Oct 9, 2024

Conversation

runer112
Contributor

The tightest upper bound one can specify on the number of fields in a struct is sizeof(type) * CHAR_BIT. So this was previously used when performing a binary search for the field count. This upper bound is extremely loose when considering a typical large struct, which is more likely to contain a relatively small number of relatively large fields rather than the other way around. The binary search range being multiple orders of magnitude larger than necessary wouldn't have been a significant issue if each test were cheap, but it isn't: testing a field count of N costs O(N) memory and time. As a result, the initial few steps of the binary search may be prohibitively expensive.

The primary optimization introduced by these changes is to use unbounded binary search, a.k.a. exponential search, instead of the typically loosely bounded binary search. This produces a tight upper bound (within 2x) on the field count to then perform the binary search with.

As an upside of this change, the compiler-specific cap that was previously placed on the field count upper bound to stay within compiler limits could be removed.
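
In outline, the search strategy looks like the sketch below. This is a minimal model of the probe order only, not the code in this PR: can_init(n) stands in for the "is the type aggregate-initializable with n arguments?" check, and hard_limit for a cap such as sizeof(type) * CHAR_BIT.

#include <cstddef>

// Sketch of the search order only; the real implementation works on template
// parameters at compile time.
template <class Pred>
constexpr std::size_t count_fields(Pred can_init, std::size_t hard_limit) {
    // Exponential search: double the candidate until a probe fails, giving an
    // upper bound within a factor of two of the true field count (clamped if
    // the cap is hit).
    std::size_t hi = 1;
    while (hi < hard_limit && can_init(hi)) hi *= 2;
    std::size_t lo = hi / 2;

    // Ordinary binary search inside the now-tight bounds [lo, hi).
    while (lo + 1 < hi) {
        const std::size_t mid = lo + (hi - lo) / 2;
        if (can_init(mid)) lo = mid; else hi = mid;
    }
    return lo;
}

Because the probe sizes grow geometrically and each probe costs O(n), the total probe cost tracks the actual field count rather than sizeof(type).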

@runer112
Contributor Author

runer112 commented Jan 18, 2023

This issue was originally noticed when some source files in a large project seemed to be consuming a suspiciously large amount of memory (and also time). After some digging, the culprit was eventually nailed down as field count detection.

Before these changes, compiling one such source file with clang 14 peaked at 1.5 GB of memory usage. After these changes, compiling the same file peaked at 617 MB. These numbers include the memory usage of compiling everything else as well, which is why the after-fix number is still relatively large. The memory usage attributable just to field counting probably went from something like 1 GB to something at least one or two orders of magnitude less.

@runer112 runer112 marked this pull request as ready for review January 18, 2023 22:57
@apolukhin
Member

apolukhin commented Jan 19, 2023

The PR fails on msvc-14.1 with a "c1xx: fatal error C1060: compiler is out of heap space" error.

I'd recommend trying another approach: just change detect_fields_count_greedy to fill a bool init_succeeded[Last] array. This could be done in one go by getting an index sequence from Last and applying it to a function that returns true/false depending on construction success. After that, a simple constexpr loop could finish the job.

Something like this:

template <class T, std::size_t Last, std::size_t... Indexes>
constexpr std::size_t detect_fields_count_greedy(std::index_sequence<Indexes...>) {
    constexpr bool init_succeeded[Last] = { is_aggr_initable<T, Indexes>()... };
    for (std::size_t i = Last - 1; i > 0; --i)
        if (init_succeeded[i]) return i;
    return 0;
}

@runer112
Contributor Author

runer112 commented Jan 24, 2023

In the last CI run, 15 tasks failed with a compiler is out of heap space error. With the jobs running in parallel, it's hard to determine which tasks failed due to their own excessive memory usage and which were well-behaved but fell victim to running while another task consumed all the available memory. I pushed a temporary commit that I believe should disable testing in parallel. Could you please approve the CI build? Hopefully this will give me enough info to diagnose the real issue, after which I can revert the CI config change.

  • Revert the CI config change.

Regarding your suggestion, maybe I'm not understanding it correctly, but I don't see how it would help with performance. The crux of the performance issue is that the check of whether a type is constructible with N arguments, enable_if_constructible_helper_t (SFINAE), costs O(N) memory and time. In your suggestion, I believe this would make initializing init_succeeded cost O(Last^2) time and O(Last) memory.
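
For reference, the check has roughly the following shape (an illustrative sketch with made-up names, not the library's exact enable_if_constructible_helper_t): each probe expands an index sequence into N placeholder arguments, and instantiating that expansion is what costs O(N) memory and time.

#include <cstddef>
#include <utility>

// Placeholder argument convertible to anything; declared only, used in an
// unevaluated context.
struct ubiq_any {
    template <class Type> constexpr operator Type&() const noexcept;
};

// Chosen when T is aggregate-initializable from sizeof...(I) placeholders.
template <class T, std::size_t... I,
          class = decltype(T{ (static_cast<void>(I), ubiq_any{})... })>
constexpr bool probe(std::index_sequence<I...>, long) { return true; }

// Fallback when the initialization above is ill-formed.
template <class T, std::size_t... I>
constexpr bool probe(std::index_sequence<I...>, int) { return false; }

// Instantiating the N-element pack is what makes each check O(N).
template <class T, std::size_t N>
constexpr bool constructible_with_n = probe<T>(std::make_index_sequence<N>{}, 0L);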

My proposed changes aim to minimize the overall cost by minimizing the sum of all the N values checked. They effectively replace sizeof(T) with just the result (the field count) in the asymptotic performance analysis, which may be substantially smaller. Compared with the current code (don't trust the current comments):

Case | Memory Before | Memory After | Time Before | Time After
T is an array | O(1) | O(1) | O(1) | O(1)
T is default-constructible | O(sizeof(T)) | O(result) | O(sizeof(T) * log(sizeof(T))) | O(result * log(result))
T is not default-constructible | O(sizeof(T)) | O(result) | O(sizeof(T)^2) | O(result^2)

@runer112
Contributor Author

runer112 commented Jan 24, 2023

The excessive compile time/memory usage issues that were causing failures have been fixed.

The central issue was that the static_assert preconditions didn't actually prevent the compiler from trying to expand the field counting templates, which sometimes expanded indefinitely. This was fixed by dummying the field counting dispatch to do basically nothing (count the number of fields in an int[1] instead, which is trivially 1) if any precondition is not met.
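
The shape of that fix, with illustrative names and a placeholder body rather than the actual fields_count.hpp code:

#include <cstddef>
#include <type_traits>

template <class T>
constexpr std::size_t expensive_field_search() { return 0; } // placeholder for the real search machinery

template <class T>
constexpr std::size_t fields_count() {
    constexpr bool preconditions_met = !std::is_reference<T>::value; // stand-in for the real precondition set
    static_assert(!std::is_reference<T>::value, "Attempt to get the field count of a reference type");

    // If a precondition failed, run the search on int[1] instead of T, so the
    // expensive templates never expand and the static_assert above is the only
    // diagnostic the user sees.
    using search_type = typename std::conditional<preconditions_met, T, int[1]>::type;
    return expensive_field_search<search_type>();
}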

@runer112
Contributor Author

runer112 commented Jan 30, 2023

@apolukhin Is there anything more you'd like me to address?

The AppVeyor build succeeds now, and does so roughly 5% faster than before despite there being 3 new tests.

@apolukhin
Member

apolukhin commented Feb 3, 2023

Sorry, I misread you the first time.

So here's how I see your changes (please correct me if I'm wrong):

T is default-constructible: you do exponential search for the upper bound of the field count. It takes log(fields_count) + 1. After that you do a binary search that takes log(log(fields_count) + 1). The final complexity is log(fields_count) + 1 + log(log(fields_count) + 1).

The current implementation starts the binary search from sizeof(type) * CHAR_BIT. However, that CHAR_BIT multiplication is not necessary - we do not need to know the exact count of bitfields. Instead, the binary search could work with sizeof(type)+1, and if we get the maximum value, then the type has bitfields.

Here comes the math. Your algorithm is better when
log(sizeof(type)+1) > log(fields_count) + 1 + log(log(fields_count) + 1)

which is
log(sizeof(type)+1) > log(fields_count) + log(2) + log(log(fields_count) + 1)

which is
sizeof(type)+1 > fields_count*2 * (log(fields_count) + 1)

sizeof(type) is equal to fields_count*avg_field_size. It gives us

fields_count * avg_field_size + 1 > fields_count * 2 * (log(fields_count) + 1)

Which is
avg_field_size + 1/fields_count > 2 * log(fields_count) + 2

For fields count 16 the avg_field_size should be about 10 to make your algorithm better. For fields count 256 the avg_field_size should be about 18 to make your algorithm better.
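
(With base-2 logarithms and the small 1/fields_count term dropped, those two figures come out of the last inequality as:)

\text{avg\_field\_size} \gtrsim 2\log_2 16 + 2 = 10,
\qquad
\text{avg\_field\_size} \gtrsim 2\log_2 256 + 2 = 18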

For aggregates of ints, chronos, pointers or size_ts the existing algorithm performs better. For aggregates of strings and vectors your algorithm performs better than the existing one.

I'd rather call it a tie. But the CHAR_BIT multiplication should be removed.

T is not default-constructible: your approach is definitely superior. I'm worried about the cases when the whole type is not aggregate-initializable, because in that case I think your algorithm would run until the RAM is exhausted and no diagnostic would be provided. That is probably the reason why the GitHub CI fails.

I'm also worried about compiler idiosyncrasies. Not all compilers are covered in CI, so I'd rather stick to the existing, well-tested algorithm if it does not make a noticeable difference.

Here's the plan:

  • remove the CHAR_BIT multiplication
  • for the second case: do a linear search for the first non-default-constructible T, then use the existing binary search with a sizeof(T)+1 upper limit

@runer112
Contributor Author

runer112 commented Feb 3, 2023

Your understanding of the approach used in these changes is correct: exponential search followed by binary search. However, your performance analysis only counts "steps". Critically, it does not factor in the cost of checking at each step whether the type is constructible with N arguments, which is O(N) memory and time.

Factoring this into the default-constructible case, the exponential search worst case costs O(1 + 2 + 4 + ... + fields_count + 2 * fields_count) = O(4 * fields_count) to establish bounds separated only by a factor of 2. The best case costs O(1 + 2 + 4 + ... + fields_count + 1) = O(2 * fields_count).
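
(Writing f for fields_count and taking f to be a power of two, the geometric series behind those constants is

1 + 2 + 4 + \dots + f = 2f - 1, \qquad (2f - 1) + 2f < 4f

so the worst case stays within a small constant factor of the field count itself.)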

Without exponential search and ignoring the possibility of bitfields, we start with a binary search over [0, sizeof(T)]. To reach bounds separated only by a factor of 2, in the best case of an average field size in [1, 2] bytes, this only requires one check costing O(sizeof(T) / 2). This ranges from O(field_count / 2) to O(field_count), which is 1/4 to 1/2 of exponential search's best case cost.

However, the favorable comparison fades rather quickly. Once the average field size passes 4 bytes, reaching bounds separated only by a factor of 2 requires (at least) three checks costing O(sizeof(T) / 2 + sizeof(T) / 4 + sizeof(T) / 8) = O(7/8 * sizeof(T)). The field count must be less than sizeof(T) / 4, so in terms of field count, the cost is at least O(7/8 * 4 * field_count) = O(7/2 * field_count). This is already nearly the worst case cost of exponential search, and it only gets worse for larger average field sizes.

In the worst case of a single field, the cost is O(sizeof(T) / 2 + sizeof(T) / 4 + sizeof(T) / 8 + ... + 1) = O(sizeof(T)). This is sizeof(T) / 4 times exponential search's worst case cost. The cost factor is nearly unbounded as it simply grows with sizeof(T).

Considering that the best case cost factor of using binary search only is 1/4 and the worst case cost factor is nearly unbounded, I believe that alone should be enough reason to use exponential search. Additionally, it seems that exponential search may be faster in "average" use, as evidenced by the AppVeyor builds with these changes being roughly 5% faster than builds without. And as a reminder, really poor performance cases with binary search aren't just a theoretical possibility that won't happen in practice. I went through the effort to make and propose these changes specifically because I ran into such a poorly performing case where these changes made a huge difference (#120 (comment)).

Regarding the latest CI failure, I haven't had much time to look into it yet. I was hoping that it wouldn't be necessary to do this because it feels unclean, but perhaps it's wise to add a sanity check to the exponential search to halt it if it somehow exceeds sizeof(T) * CHAR_BIT.

@apolukhin
Member

Critically, it does not factor in the cost of checking at each step whether the type is constructible with N arguments, which is O(N) memory and time.

It should not take O(N). std::make_index_sequence was a bottleneck in so many cases that the compiler developers made it an intrinsic (for example https://github.com/microsoft/STL/blob/73924c1920af92899f7582cd904ea819b9db35bc/stl/inc/type_traits#L34). So getting the indexes is O(1), variadic pack expansion is also close to O(1), and ubiq_*ref_constructor is not a template and its constructors do not consume time/memory. As a result, the check should take about O(1).

Please, check that your compiler supports the builtin and that it is properly detected in https://github.com/boostorg/pfr/blob/28bd7f541f7a7632f4fe52e30d06a121e7cb1f65/include/boost/pfr/detail/make_integer_sequence.hpp

@runer112
Contributor Author

runer112 commented Feb 7, 2023

Even if std::make_index_sequence/std::make_integer_sequence is implemented intrinsically, the cost is not constant. The cost increases with the size of the sequence.

Here's a demonstration of recent versions of clang, gcc, and msvc all running out of resources (whether self-limited or host-limited time or memory) trying to evaluate std::make_integer_sequence<int, 10000000>(): https://godbolt.org/z/KqhfxbK4z. Dropping to 1 million, I see clang and gcc succeed, but only after multiple seconds. Dropping to 100 thousand, I see msvc finally succeed, and clang and gcc succeed in less than a second.
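
The probe behind those timings is tiny; a minimal sketch of the same kind of test (not necessarily the exact godbolt code) is:

#include <cstddef>
#include <utility>

// Forces std::make_index_sequence<N> to be built at compile time; compile time
// and memory grow with N even when the compiler implements the sequence with
// an intrinsic.
template <std::size_t... I>
constexpr std::size_t count(std::index_sequence<I...>) { return sizeof...(I); }

static_assert(count(std::make_index_sequence<100000>{}) == 100000, "");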

This could happen for a type with a constructor accepting a parameter
pack.

This also prevents unbounded growth in case something goes wrong with
the logic and something should have already stopped (or never started).
@runer112
Contributor Author

runer112 commented Sep 5, 2024

I noticed that someone else (@rressi-at-globus) appears to have discovered these changes and found them useful enough to make their own attempt to get them upstreamed with #179. I resynchronized my branch with develop to hopefully allow this PR to proceed, now with third-party corroboration that these changes are beneficial.

@runer112
Contributor Author

runer112 commented Sep 5, 2024

I re-ran the tests that I previously ran in #120 (comment) to once again demonstrate that the cost of std::make_index_sequence increases with the size of the sequence, now with more recent versions of clang, gcc, and msvc just in case something got optimized. My findings are the same as before: the cost of std::make_index_sequence increases with the size of the sequence; I suspect O(N).

  1. https://godbolt.org/z/PKqes5h1M: std::make_index_sequence<10'000'000> gave me the same result across multiple retries for all 3 compilers tested: failure due to resource exhaustion.
  2. https://godbolt.org/z/9jEGEG776: std::make_index_sequence<1'000'000> gave me best-case results of clang completing in 0.9 seconds, gcc completing in 2.6 seconds, and msvc failing due to resource exhaustion.
  3. https://godbolt.org/z/PKqes5h1M: std::make_index_sequence<100'000> gave me best-case results of clang completing in 0.3 seconds, gcc completing in 0.5 seconds, and msvc completing in 1.3 seconds.

@runer112
Contributor Author

runer112 commented Sep 5, 2024

@apolukhin: Here are direct responses to your comment, #120 (comment). I apologize for not doing this earlier.


I'm worried about the cases when the whole type is not aggregate-initializable, because in that case I think your algorithm would run until the RAM is exhausted and no diagnostic would be provided.

You were right, there was a problem. I added tests for a default-constructible type accepting 0 or more arguments and a non-default-constructible type accepting 1 or more arguments in e80f7aa. These tests should now pass with the changes from dd1ae1c.
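
Roughly the shapes involved (illustrative sketches, not the exact types added in e80f7aa). For such types the "constructible with N arguments" probe never fails on its own, which is why the search needs an explicit stopping condition:

// Default-constructible and "constructible" from any number of arguments.
struct accepts_zero_or_more {
    template <class... Args> constexpr accepts_zero_or_more(Args&&...) {}
};

// Not default-constructible, but "constructible" from one or more arguments.
struct accepts_one_or_more {
    template <class Arg, class... Args>
    constexpr accepts_one_or_more(Arg&&, Args&&...) {}
};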


I'd rather stick to the existing, well tested algorithm, if it does not make a noticeable difference

It does make a noticeable difference for large types.

Practical evidence:

  1. #120 (comment)
  2. #120 (comment)

Theoretical evidence:

  1. #120 (comment)
  2. #120 (comment)

Here's the plan:

  • remove the CHAR_BIT multiplication
  • for the second case: do a linear search for the first non-default-constructible T, then use the existing binary search with a sizeof(T)+1 upper limit

As you're aware, the current implementation performs binary search with an upper bound of sizeof(T) * CHAR_BIT. For large types, the first few iterations could carry a significant cost, perhaps even resulting in resource exhaustion. This could maybe warrant removing the multiplication by CHAR_BIT at the cost of losing support for types with dense bitfields (average field size < 1 byte).

The proposed implementation would perform binary search with an upper bound that is no more than twice the field count. This would not carry any unnecessary significant cost. In this regard, removing the multiplication by CHAR_BIT should not be warranted.

There is, however, a preexisting case of concern: large types that are not aggregate-initializable in environments that do not provide std::is_aggregate. With both the existing and the proposed implementation, this may result in searching all the way up to sizeof(T) * CHAR_BIT just to eventually hit resource exhaustion or produce an error stating that the type is not aggregate-initializable. But std::is_aggregate is required by the C++17 standard, environments without it will only become rarer with time, and using a non-aggregate type with pfr seems to be a misuse to begin with. So with the proposed implementation, removing the multiplication by CHAR_BIT only to mitigate poor performance in the case of misuse within a dated C++ environment, while also losing support for types with dense bitfields, doesn't seem worth it to me. But if I've forgotten to account for another problematic case, or if you otherwise disagree and still want the multiplication by CHAR_BIT removed, let me know.

@rressi-at-globus

rressi-at-globus commented Sep 6, 2024

I have exported this contribution as a patch for the vcpkg port of this component.

The patch is currently targeting boost-pfr version 1.85.0, and is under review here:

@apolukhin
Member

Still fails the tests

@runer112
Contributor Author

I'm looking at the output of the failed runs now, but there seems to be a lot of unstructured text to comb through... Any tips as to what to look at?

Contributor Author

@runer112 runer112 left a comment

Thank you for your code review, @apolukhin. I have read all of your comments. I replied to 3 comments where I wasn't sure how to proceed. All of the other comments make sense and I will address them ASAP.

Member

@apolukhin apolukhin left a comment

Looks almost perfect!

Added a few comments.

@@ -30,6 +31,12 @@ import std;

namespace boost { namespace pfr { namespace detail {

///////////////////// min without including <algorithm>
template <class T>
constexpr const T &min(const T &a, const T &b) {
Member

Just remove all the references and pass std::size_t by value

Contributor Author

@runer112 runer112 Oct 7, 2024

The current signature and behavior match std::min (1). In a constant-evaluated context, which should be the case for all operations in this file, there will be no runtime difference.

If you still want this to pass and return by value, let me know and I will make the change. But if doing so, I believe I should also un-template the function and make it strictly accept and return std::size_t so the function can't be accidentally invoked with some other type that may be incorrect to pass/return by value.

Contributor Author

@runer112 runer112 left a comment

I have replied to your new comments and will await your responses. I have also pushed a simple fix for what seems to be the only remaining issue identified by CI: "Address violation of Boost min/max guidelines".

@apolukhin apolukhin merged commit ff415a2 into boostorg:develop Oct 9, 2024
11 checks passed
@apolukhin
Member

Awesome work!

Many thanks for it!

@runer112
Contributor Author

Awesome work!

Many thanks for it!

And thank you for hanging in there with me through the long process to ensure that these changes meet the high standards expected for a widely-used library.

Although I hope it never proves necessary, if any concerns are raised about this set of changes in the future, feel free to contact me for support.

apolukhin added a commit that referenced this pull request Oct 15, 2024
* Start upper bound fields search from `4` fields, to avoid slow startup on typical workloads
* Inline the `fields_count_binary_search_unbounded` function to reduce template instantiations depth by 1
* Renamed `min` to `min_of_size_t` to avoid weird syntax
* Applied idea of better error reporting from #120
* Do not start fields count computation if one of the static asserts failed. That speeds up error reporting in edge cases
* Use `std::*_t` versions of traits as they are faster in some implementations
* Rewrite binary search to simplify it and to avoid degradation to linear search on types that have constructor from variadic pack
* Remove default template parameters to simplify code

As a result, the whole test suite now runs 10%-25% faster on MSVC, ~20% faster on Clang, and 7%-20% faster on GCC.