feat: fastn_perf to record js usage (#2173)
`fastn_benchmark_api` and `fastn_perf` global objects have been added that are accessible in the browser console. See fastn-js/BENCHMARK_README.md for the things you can do with these objects. The measurements are only recorded when you are on `localhost` or you manually set `window.FASTN_BENCHMARK`. Some interesting counters are:

- mutable-created
- closure-update

Running this on a simple ftd counter app gives the following results:

- closure-updates: 1395
- mutable-created: 1392
- mutable-sets: 1395

This is pretty suspicious considering that the code only creates a single mutable variable.

Running this on the design-system package, we get:

- closure-updates: 1121847
- css-cache-misses: 2
- css-creations: 2
- mutable-created: 1114812
- mutable-sets: 1118291

It is obvious that there's a lot of room for improvement, and we can hopefully shrink these counters and get visible performance improvements as well. I will have to study the js code (and the js output of ftd) to learn more about why this happens.

Using the Chrome dev tools, I was also able to figure out that there is a memory leak (memory grows periodically and is cleared by a forced GC). I couldn't trace the actual object that is getting allocated; I will have to spend some more time on this.

Almost all of the code and docs in this commit were generated using Claude. It also generated a bunch of extra code that I deleted manually. The deleted code included a test harness and related docs, which are not needed at this time and only contributed noise. The code that is kept is pretty minimal: a wrapper over the browser Performance API. This holds up even when we change our js code drastically, which is the goal here.

The AI started with the following prompt:

> I want you to understand `fastn`. This is a web framework. The `fastn` binary
> spins up a webserver that takes requests, reads the corresponding `.ftd` file from
> CWD, and translates it into an html output that contains html/css/js.
>
> There's also an option to build using `fastn build`; this builds your current
> project and outputs static html/css/js files in a directory. We are facing
> performance issues with the js output, like resizing the browser window being
> slow, among other things. I want you to look at ./fastn-js/js/**.js files;
> fastn-core includes them all in a function called `hashed_default_ftd_js`.
>
> `all_js_without_test_and_ftd_langugage_js` is the function that lists all the
> js files that are included and the order in which they are concatenated. I want
> you to help me create benchmarks for these js file contents. Later we'll do
> performance enhancement and compare benchmark reports across changes.
>
> Suggest all the ways to benchmark the js files.

Following this, most of the prompts were very small, and I mostly accepted whatever Claude produced. In between, I manually cleaned some garbage and fixed compiler errors. Asking Claude to do all this was possible, but it was not worth it.
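To give a feel for what a wrapper like this looks like, here is a minimal sketch of a counter object gated the same way the PR describes (`localhost` or `window.FASTN_BENCHMARK`). The internal shape and the method names (`count`, `report`, `reset`) are assumptions for illustration, not the actual `fastn_perf` implementation:

```javascript
// Hedged sketch of a fastn_perf-style counter wrapper. Only the gating
// (localhost or an explicit FASTN_BENCHMARK flag) is taken from the PR
// description; everything else is illustrative.
const fastn_perf = (() => {
  const counters = Object.create(null);

  // Recording is enabled only on localhost or via an explicit opt-in flag.
  function enabled() {
    const host = (globalThis.location && globalThis.location.hostname) || "";
    return host === "localhost" || globalThis.FASTN_BENCHMARK === true;
  }

  return {
    count(name) {
      if (!enabled()) return;
      counters[name] = (counters[name] || 0) + 1;
    },
    report() {
      // Sorted snapshot so repeated reports are easy to diff across changes.
      return Object.fromEntries(
        Object.entries(counters).sort(([a], [b]) => a.localeCompare(b))
      );
    },
    reset() {
      for (const k of Object.keys(counters)) delete counters[k];
    },
  };
})();
```

In the browser console this would be used as `FASTN_BENCHMARK = true; fastn_perf.count("mutable-created"); fastn_perf.report();`.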
This PR is not ready to merge yet. The next steps in my head are:
This is a long task and I am doing this all with the help of Claude Code.
These findings are brilliant @siddhantk232, especially closure-updates: 1121847, wow! We definitely have a lot of scope for improvement here :-). Let's make fastn super awesome. I believe we may solve our backend issues for good just by addressing these, or at least make significant strides.
- Node constructor count
- Node2 constructor count
- setProperty, setStaticProperty, and setDynamicProperty count
- destroy count on Node2
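The counters above would be wired in by incrementing at each call site. A hedged sketch of the pattern (the `Node2` shape here is invented for illustration; the real class in fastn-js looks different):

```javascript
// Illustrative only: a toy Node2 showing where the proposed counters would
// increment. The counter names are placeholders, not the real ones.
const counters = Object.create(null);
const bump = (name) => { counters[name] = (counters[name] || 0) + 1; };

class Node2 {
  constructor(kind) {
    bump("node2-created"); // proposed: Node2 constructor count
    this.kind = kind;
    this.props = {};
  }
  setProperty(key, value) {
    bump("set-property"); // proposed: setProperty count
    this.props[key] = value;
  }
  destroy() {
    bump("node2-destroyed"); // proposed: destroy count on Node2
    this.props = {};
  }
}
```

Comparing these counts before and after a change is the cheap way to confirm an optimization actually reduced work, independent of wall-clock noise.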