Performance regression in 3.16.14 series? #150
Here are some ruby-prof traces of the two versions mentioned above, simply using Uglifier to compress a 1.4mb javascript file. 3.16.14.7 took approximately 70 seconds, while 3.11.8.17 completed in about 18.
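A minimal timing harness along these lines can be built with Ruby's stdlib `Benchmark` alone (the `Uglifier.compile` call shown in the comment is the real workload; the stand-in loop here is just so the sketch runs without the gems installed):

```ruby
require "benchmark"

# Run the same workload a few times and report the best wall-clock time,
# which is the usual way to compare two libv8 versions on one machine.
def best_time(runs: 3)
  Array.new(runs) { Benchmark.realtime { yield } }.min
end

# With uglifier installed, the real measurement would be:
#   best_time { Uglifier.compile(File.read("application.js")) }
# Stand-in workload so this sketch is runnable anywhere:
t = best_time { 50_000.times.reduce(:+) }
puts format("best of 3: %.4fs", t)
```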
Very interesting. Thank you, @ledbettj.
Hmm... Looking at https://github.com/cowboyd/therubyracer/blob/master/ext/v8/string.cc#L22 I'm not sure the regression is caused by v8. @cowboyd needs to weigh in here.
Just to make sure I include all the variables, the version of Faster:
+1 on this. We are going from 5-minute deploys up to 18 minutes when deploying to C1 large instances.
+1
I am going to have to debug this. As it stands, it takes us 12 seconds simply to run our JS snippets through Babel to build application.js, which feels way too long. I completely eliminated therubyracer from our asset-minification pipeline due to this huge performance hole. Going to work on some simpler repros this week, but this issue does need more attention.
A reproduction would help. Let me know if I can help in some way. Unfortunately, profiling C++ is a bit out of my league, or I would have taken a crack at this.
@ewoodh2o what's in your test.rb?
I fear that this is a regression in the upstream code itself. If that proves to be the case, we might have to do one of the following things to get around the issue:
In any case, pinpointing the culprit is essential.
I started working on this ... I am making opening moves on building the world's most minimal ruby racer that links against latest v8: https://github.com/SamSaffron/mini_racer Performance is significantly better for uglification:
So there you are: exact same args as https://github.com/lautis/uglifier uses, except that instead of taking 144 seconds for our app.js it takes 12. I am still very uncomfortable with Node being this much faster, and I think a lot of this is due to the wrapping, ES5 shims, etc. that the uglifier gem adds; will continue to investigate.

- uglify mini racer code
- uglify ruby racer code
- our gigantic already-minified application JS file I was testing with

A lot of the performance pain is due to

Note... this https://gist.github.com/SamSaffron/fe9f938f43a63b2f709b9bd546ba9087 takes 149 seconds (against therubyracer). It's EXACTLY the same code that mini_racer runs; the only diff is that there is
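The mini_racer usage pattern described above can be sketched as follows. `MiniRacer::Context#eval` is the gem's documented entry point; the commented UglifyJS lines show where the real workload would go (the `uglify.js` path is hypothetical), and the guard lets the sketch degrade when the gem is not installed:

```ruby
# Sketch of driving V8 from Ruby with mini_racer.
begin
  require "mini_racer"
  context = MiniRacer::Context.new
  # In the real benchmark you would first load UglifyJS source, e.g.:
  #   context.eval(File.read("uglify.js"))
  #   context.eval("UglifyJS.minify(source)")
  result = context.eval("[1, 2, 3].reduce(function(a, b) { return a + b; })")
rescue LoadError
  result = :mini_racer_not_installed  # gem absent; sketch degrades gracefully
end
puts result.inspect
```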
Are you running this on Linux or on OS X? Keep in mind that I specifically disabled link-time optimisations on Linux as the build system was trying to run an ld.gold binary it had pulled and was failing. |
Running this on Linux x64, perhaps it is those missing optimisations, will try with a "bring my own" libv8 tomorrow |
No luck finding my missing 20% or so of perf. Still, @ignisf, a new libv8 gem would be mighty welcome even with this performance weirdness.
I have a few nits to pick and will push a 5.0 version |
@ignisf awesome! Be sure to pick the latest stable from https://omahaproxy.appspot.com/; might as well package the latest stable version of v8.
k, sure |
@ignisf I just noticed I had to add:
To work around a linker issue on Mac; it turns out it links against libc++ by default or something. Be sure to do a --pre release prior to doing a stable one so I can confirm mini_racer compiles right.
@SamSaffron are you by any chance hitting this issue: #202? |
I was able to install fine on my OS X but had to upgrade Xcode to the latest.
For the record, here are results from running mini_racer's
Will keep this issue open for now and hopefully close it after TRR is updated, as we'll have to keep 3.16.14 support around at least until then.
Wow an exponential scale eh? :) |
😞 |
You should be able to do this by locking the version to the source release and configuring a compiler. One thing to note is that, at least from the build guide, Clang seems to be the preferred compiler for V8. I don't expect any major differences in performance based solely on compiler choice, though, as recent versions of GCC and Clang don't differ much in their benchmark scores. What could lead to a performance boost is enabling link-time optimisations. I have not looked into that yet, though I think there's support for this in V8's build system (for Linux).
Peak memory used for linking with LTO on one make job is around 3.5512 GB. The produced binary gem archive is 275 MB. It seems that LTO is enabled automatically when linking mini_racer_extension.so against an LTO-optimised build of the libv8 static library (Linux x86_64, GCC 6.1.1). Could not benchmark because invoking the benchmark script causes a segfault. The LTO-enabled branch can be found here: https://github.com/cowboyd/libv8/tree/lto. I don't see us turning on the LTO flag by default for the source distribution (due to the memory footprint), but it could be used for the binaries if anybody can get anything linked against them to not segfault.
Phoronix had some pretty mixed results with LTO. I mentioned the Clang tests because another bench I have has Node clocking
I think it is fine to close this for now; nothing more for libv8 to do in this regard.
We upgraded libv8 to the 3.16.14 series last week and started seeing significantly slower asset compilation times (through Uglifier) compared to v3.11.8.17. I came at this through Rails' Asset Pipeline, so I'm way in the weeds here and don't know if it's libv8 or V8 itself, but thought I'd ask here to see if anyone had similar experiences.
This doesn't seem to affect times on my dev machine (OS X 10.10, Intel Core i7), but it's between 2x and 5x slower on our production servers (Scientific Linux 6.5, AMD). Seems to be worse for larger files (obviously in strict wall-clock time, but also closer to 5x+ slower on the big files).
Benchmarks for `Uglifier.compile()`, which is just the easiest way I know how to test:

_libv8-3.11.8.17-x86_64-linux_

_libv8-3.16.14.3-x86_64-linux_
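A minimal reproduction script for a benchmark like this could look as follows (the `application.js` filename is hypothetical, and the rescues keep the sketch runnable when the uglifier gem or the input file is absent):

```ruby
require "benchmark"

# Read a real input file when present; otherwise build a stand-in payload.
js = begin
  File.read("application.js")  # hypothetical input file
rescue Errno::ENOENT
  "function add(a, b) { return a + b; } " * 1_000
end

# Time the compile; fall back to a trivial transform if uglifier (or its
# JS runtime) is unavailable, so the script still runs end to end.
elapsed = Benchmark.realtime do
  begin
    require "uglifier"
    Uglifier.compile(js)
  rescue LoadError, StandardError
    js.squeeze(" ")  # stand-in workload when uglifier cannot run
  end
end
puts format("compile took %.3fs", elapsed)
```

Running the same script under both libv8 versions (via `bundle update libv8`) would give directly comparable wall-clock numbers.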
Anyone experiencing anything similar, or have any ideas on what might cause this?