performance benchmarking / optimizing Greenwood #970
Labels: CLI, enhancement, help wanted, question
Milestone: v0.27.0
Summary
Can't say I've focused too heavily on build time performance, but since Zach Leatherman of 11ty put together a benchmark blog post and repo covering some of the top SSGs, I figured I would test Greenwood, at least to get a sense of where it sits amongst the rest.
I made a fork and decided to add Greenwood. 😬 🤞
I tried it on two machines and both had varying struggles:
Here are the specs, using the same approach Zach took. He didn't mention a Node version, so for mine I used 16.14.0. The metric to track is `real`.
Results
MBP ⚠️
- 250
- 500
- 1000
- 2000
- 4000 🚫

MBA 🚨
- 25
- 50
- 100
- 250
- 500 🚫
- 1000 🚫
- 2000 🚫
- 4000 🚫
Details
So yeah, definitely some work we could / should do here, at least to chip away at it for 1.0. Aside from the issues with the M1 chip, one area of opportunity for sure seems to be the bundling phase, as each benchmark run always seems to hang a little bit on this part of the process, right after logging:

`success, done generating all pages...`

Anyway, not sure what is realistic, but I would certainly like to be somewhat "competitive" for v1.
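To confirm where the time is actually going, it might be worth wrapping each build phase with a timer. A minimal sketch, assuming nothing about Greenwood's internals (the phase names in the usage comment are made up for illustration):

```javascript
// Hypothetical instrumentation sketch for finding where a build hangs;
// the phase functions referenced below are placeholders, not real
// Greenwood internals.
import { performance } from 'node:perf_hooks';

async function timed(label, fn) {
  const start = performance.now();
  const result = await fn();

  console.log(`${label}: ${(performance.now() - start).toFixed(0)}ms`);

  return result;
}

// usage sketch:
// await timed('prerender', () => runPrerenderPhase(compilation));
// await timed('bundle', () => runBundlePhase(compilation));
```

That would at least separate "bundling is slow" from "bundling is blocked waiting on something else".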
Thoughts / Next Steps
- `prerender: false` - introduce worker thread pools for SSR page generation #983
- `Promise.all` free for all? - introduce worker thread pools for SSR page generation #983
- Out of even more curiosity, I wonder what it would look like to benchmark the 0.25.0 version, to see how the results would have come out if using Puppeteer. 😅
- `compilation.graph` when iterating over pages. Maybe a Map / Set would be better here? - refactor bundling lifecycle and resource optimizations #971
- `async` through and through #823
- Do we even need `Worker` Threads anymore? (At least for production builds - Enhancement/issue 1088 refactor Workers out of SSR builds #1110)
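The worker pool idea from #983 could be sketched roughly like this. This is a minimal illustration, not Greenwood's actual implementation — the worker script path and the "page in, HTML out" message shape are assumptions:

```javascript
// Rough sketch of a fixed-size worker pool for SSR page generation.
// Assumption: workerPath points at a script that receives a page via
// postMessage and replies with its rendered output.
import { Worker } from 'node:worker_threads';
import os from 'node:os';

function renderPages(pages, workerPath, poolSize = os.cpus().length) {
  return new Promise((resolve, reject) => {
    const results = [];
    let next = 0;
    let done = 0;

    const runNext = (worker) => {
      // no more work, retire this worker
      if (next >= pages.length) {
        worker.terminate();
        return;
      }

      const index = next++;

      worker.once('message', (html) => {
        results[index] = html; // keep results in page order
        done += 1;

        if (done === pages.length) {
          worker.terminate();
          resolve(results);
        } else {
          runNext(worker);
        }
      });

      worker.postMessage(pages[index]);
    };

    // spin up at most poolSize workers and feed each one tasks as it finishes
    for (let i = 0; i < Math.min(poolSize, pages.length); i += 1) {
      const worker = new Worker(workerPath);

      worker.on('error', reject);
      runNext(worker);
    }
  });
}
```

Compared to a plain `Promise.all` over all pages at once, this caps concurrency at the pool size, which should help on the memory-constrained MBA runs above.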
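On the `compilation.graph` point, the Map idea could look something like the sketch below — assuming each page in the graph has a unique `route` string (which matches Greenwood's page shape, but the helper name here is made up):

```javascript
// Hypothetical helper for #971: index the page graph by route once,
// instead of scanning the array on every lookup.
function buildRouteIndex(graph) {
  return new Map(graph.map((page) => [page.route, page]));
}

// usage sketch:
// const routeIndex = buildRouteIndex(compilation.graph);
// routeIndex.get('/blog/') is O(1), vs graph.find(...) being O(n) per lookup
```

For a 4000 page build where every bundled resource does a lookup, turning that O(n) scan into O(1) could add up.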