I took a better look. This, like other profilers, works by sampling the process and capturing the whole stack trace at once.
It either gets this from tooling baked into the kernel (which is why it doesn't work on Windows) or gets the stack sample from the JVM (see the tiny-profiler from part-time nerd above).
Sampling makes sense if you want to be as non-intrusive as possible. That is not how JavaFlame works.
What I do is inject bytecode into every function that matches the filters. This code updates a custom stack that stores not only the function name but also uses reflection to capture the arguments and their values. On exit, it also records the return value.
It does record how long the function took to run, but this includes the time spent reading the argument values. It is not exact, but it is good enough to get an idea of the overall logic.
In short, I don't think you can get the argument values with sampling.
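To make the idea concrete, here is a minimal sketch of what the injected enter/exit hooks could conceptually do: keep a thread-local stack of frames that records the function name, the stringified arguments, the elapsed time, and the return value. All names here (CallTrace, Frame, enter, exit) are made up for illustration; this is not JavaFlame's actual code.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class CallTrace {
    // One frame per in-flight call: name, argument values, timing, return value
    static final class Frame {
        final String method;
        final String args;
        final long enteredNanos;
        String returned;
        long elapsedNanos;

        Frame(String method, Object[] args) {
            this.method = method;
            // Stringifying arguments here means their cost is included in the timing
            this.args = Arrays.toString(args);
            this.enteredNanos = System.nanoTime();
        }
    }

    private static final ThreadLocal<Deque<Frame>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    // What the code injected at method entry would call
    public static void enter(String method, Object... args) {
        STACK.get().push(new Frame(method, args));
    }

    // What the code injected at method exit would call
    public static void exit(Object returnValue) {
        Frame f = STACK.get().pop();
        f.returned = String.valueOf(returnValue);
        f.elapsedNanos = System.nanoTime() - f.enteredNanos;
        System.out.println(f.method + f.args + " -> " + f.returned
                + " (" + f.elapsedNanos + " ns)");
    }

    // A method as it might look after instrumentation
    static int add(int a, int b) {
        enter("add", a, b);
        int result = a + b;
        exit(result);
        return result;
    }

    public static void main(String[] args) {
        add(2, 3); // prints something like: add[2, 3] -> 5 (123 ns)
    }
}
```

The frames popped on exit could then be serialized into the JSON that feeds the flame graph.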
I did not know about this project, I will look into it, thanks for linking :)
But, from a quick peek, I think my motivation is different.
My intention when creating this was not to profile each operation exactly, but to understand the logic (what each function call does and what parameters are passed).
I see that this, apparently, does not capture argument values and return values (I may be wrong).
If you are profiling, that makes sense: calling toString on every argument obviously affects performance.
Another thing is that this is pretty simple; there aren't many tweaks you need to make, as the goal is just to glance at the values being passed around at runtime.
If you look at the example page, with the sorting algorithms, you will see that you can follow exactly how each one works by looking at the values being passed and returned. You don't get that from looking at the calls alone.
Mainly for debugging locally. I use it either to understand which parts of the code will be touched by a certain action, or over unit tests to get an overall idea of which values are passed to which parts of the system.
It works well when you need to reproduce a bug and want to check how you got into whatever invalid state you are looking at. Before, I would do the same by adding a bunch of breakpoints and stepping through the function calls, which is slow, and especially annoying when you have lots of timeouts in your code.
But this project is very new, and I am starting to feel the actual JSON with the stack trace is a little more useful than the graph itself for peeking at the values. In the sort example, that would be https://www.isageek.com.br/javaflame/data.js
I made this after needing to heavily customize slides built with reveal.js.
reveal.js programmatically changes the slides, which made styling them difficult. Besides, the CSS grid layout is perfect for this kind of whole-page design.
I don't remember where I got this from, but it is a cool thing to add to your .bashrc:
#!/bin/bash
# Teach yourself a new command every time you open a terminal
echo 'Did you know that:'
whatis "$(ls /bin | shuf -n 1)" 2>/dev/null
whatis "$(ls /sbin | shuf -n 1)" 2>/dev/null
whatis "$(ls /usr/bin | shuf -n 1)" 2>/dev/null