Improve stream rendering performances #641
base: 1.11.x
Conversation
This avoids many closure and tuple creations, which are costly.
@recons This PR should solve your problem with compact rendering once merged and released.
Force-pushed from 2829f8f to a0c50b0
Looks good, just a few minor comments.
```scala
(List
  .range(0, 1000000)
  .map(i => Token.NumberValue(i.toString())) :+ Token.EndArray))
```
Suggested change:

```diff
-(List
-  .range(0, 1000000)
-  .map(i => Token.NumberValue(i.toString())) :+ Token.EndArray))
+(List.tabulate(1000000)(i => Token.NumberValue(i.toString())) :+ Token.EndArray))
```
Minor, but IMHO that makes the intent a bit clearer.

I also would love to avoid appending to that long list, but I assume you want a single chunk? Otherwise, using Stream's `++`, this could be simplified further.
Yeah, I want a single chunk, to see the overhead of processing one chunk.
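The two formulations build the same list; `List.tabulate` just produces each element directly from its index instead of first materializing an intermediate `List[Int]`. A minimal sketch of the equivalence, using a simplified stand-in for fs2-data's `Token` ADT and a smaller size:

```scala
// Stand-in for fs2-data's Token ADT, for illustration only.
sealed trait Token
object Token {
  final case class NumberValue(value: String) extends Token
  case object EndArray extends Token
}

// range + map first builds a List[Int], then maps it;
// tabulate builds each element straight from its index.
val viaRangeMap = List.range(0, 1000).map(i => Token.NumberValue(i.toString)) :+ Token.EndArray
val viaTabulate = List.tabulate(1000)(i => Token.NumberValue(i.toString)) :+ Token.EndArray

assert(viaRangeMap == viaTabulate)
```

Both still append the trailing `Token.EndArray` with `:+`, which copies the list, hence the separate question about using `Stream`'s `++` instead.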
```scala
annctx.groups.unsnoc match {
  case Some((OpenGroup(ghpl, gindent, group), groups)) =>
    annctx.groups = groups.snoc(OpenGroup(ghpl, gindent, group.append(evt)))
  case None => // should never happen
}
```
Do we have a better option than silently ignoring this bug? fs2 itself uses `assert()` to catch some bugs. That's not ideal, because AFAICT here we would still produce a semi-valid result, but let's at least have the discussion (and maybe a decision for all of fs2-data).
Yeah, I don't like cases like this one, where we cannot statically ensure they never happen. And I don't like crashing with an `assert` either. I will try to find something better here. But it was already like that before 😬
```scala
private def renderIndentBegin(ctx: RenderingContext): Unit = {
  ctx.lines = NonEmptyList(ctx.lines.head + (" " * indentSize), ctx.lines.tail)
```
Do you think a `StringBuilder` allocated to the right capacity could help speed things up here? Like first appending the head, then `indentSize` spaces?
I was thinking it would actually be way better to cache the padding. When rendering structured data, the same indent depth will be encountered over and over again. I will try to reuse the padding that was already built for an indent depth, rather than this.
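A sketch of that idea, with a hypothetical helper (not the actual fs2-data implementation): memoize the padding string per indent depth, so `" " * (indentSize * depth)` is computed at most once per depth and reused on every later line at that depth.

```scala
import scala.collection.mutable

// Hypothetical padding cache: the pad for each depth is built once, then reused.
final class PaddingCache(indentSize: Int) {
  private val cache = mutable.ArrayBuffer.empty[String]

  def pad(depth: Int): String = {
    // grow the cache lazily up to the requested depth
    while (cache.size <= depth)
      cache += " " * (indentSize * cache.size)
    cache(depth)
  }
}

val pads = new PaddingCache(2)
assert(pads.pad(0) == "")
assert(pads.pad(3) == " " * 6)
// repeated lookups return the same cached instance, no new allocation
assert(pads.pad(3) eq pads.pad(3))
```

Since indent depths in structured output grow one level at a time, the cache stays small and warm after the first few lines.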
This change addresses #634 in two ways:
By doing so, compact rendering should be back to its 1.10 performance, and the pretty case is improved a bit.
Here are some benchmark results with the compact rendering in 1.11.1 as the baseline. 1.11.2 represents the results with this change.