Steve Fosdick <fozzy@pelvoux.demon.co.uk> wrote:
> One technique you may find useful for this kind of thing is to look at the proportion of CPU time spent in the kernel (sys CPU time) and in user code (user CPU time).
I'll have a look tonight. What I did was run top behind the app, to see exactly when the leak occurs and what exactly happens. Both apps display a wave; you select part of the wave to cut it up or apply effects. I noticed that when you try to apply an effect, the app goes up to 40-50% CPU usage and X takes up another 40-50%, giving a nice 99% no-I'm-not-going-to-do-anything-for-a-bit-sonny-jim. So I figure that maybe it's having problems redrawing the wave or something. I dunno.
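A rough way to pin down that moment (just a sketch, assuming the procps top, which has a batch mode, and that the binaries are actually called sweep and glame) is to log a snapshot every second while applying the effect and dig through the log afterwards:

  top -b -d 1 -n 60 > top.log          # one snapshot per second for a minute
  grep -E 'sweep|glame| X ' top.log    # pull out the app and the X server lines

The %CPU column then shows who was busy at each tick, which should make it obvious whether the app or X is eating the time.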
However, I have run this app, same version, with the same version of X and the same kernel, on the same computer, but with Red Hat, and it didn't have this problem there.
So I was thinking: why is it now buggering X? (That's the technical term, you understand.) Except that on this one I have patched the kernel with the NVIDIA drivers. So I recompiled my kernel last night during IRC, and will boot into that tonight to see if it makes a difference.
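For what it's worth, the rebuild itself is just the usual Debian routine (a sketch, assuming the kernel-package tools are installed; the revision string is made up, substitute your own):

  cd /usr/src/linux
  make menuconfig                             # same config as before, plus whatever the NVIDIA patch needs
  make-kpkg clean
  make-kpkg --revision=nvidia.1 kernel_image  # drops a kernel-image-*.deb in /usr/src

which is what makes the dpkg -i step below so painless.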
If it does, then it's bug-report time. If not, then I'll have to start tracking down the libraries that the apps use (it happens with two of them, Sweep and GLAME).
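ldd is probably the quickest first pass at that (again a sketch; the binary names sweep and glame and the example library path are assumptions, substitute whatever ldd actually prints):

  ldd $(which sweep)                  # list the shared libraries Sweep links against
  ldd $(which glame)                  # same for GLAME, then compare the two lists
  dpkg -S /usr/lib/libaudiofile.so.0  # map a shared library back to the package that owns it

Anything that shows up in both lists but not in other, well-behaved apps would be a decent suspect.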
The sys CPU time and user CPU time may help with this too, I guess. But since installing a new kernel is just dpkg -i newkernel.deb, I'll try that anyway.
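For the user/sys split on an already-running process, /proc has the raw numbers (a sketch based on proc(5): fields 14 and 15 of /proc/<pid>/stat are utime and stime, in clock ticks; pidof and the process names are assumptions, and the X server may show up as X or XFree86 depending on the setup):

  awk '{print "user ticks:", $14, "  sys ticks:", $15}' /proc/$(pidof sweep)/stat
  awk '{print "user ticks:", $14, "  sys ticks:", $15}' /proc/$(pidof X)/stat

The naive field splitting only works because these command names contain no spaces, but for a quick look that's good enough.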
Ricardo