imatrix: calculate activation-based statistics for new format (GGUF) imatrices #14891
Conversation
If we had access to some numerical linear algebra routines, then it would likely be possible to get much more interesting stats from this. If you think about it:
If instead of using the L2 norms of the differences, we construct the cross-covariance matrix of the paired samples, and then take the SVD of this:
I suspect that the scaling part of the transformation is quite well handled by the current scalar quants, but the rotational component is likely not. IIRC, some of the 1-2 bit quants use vector quantization, and if so, these will likely handle the rotational components better and/or show quite different properties. I'm on my phone ATM so can't easily link them, but there have been several papers showing this.
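To make the idea above concrete, here is a minimal numpy sketch (not part of this PR; array names and shapes are just placeholders) of building the cross-covariance matrix from paired activation samples and taking its SVD:

```python
import numpy as np

def cross_covariance_svd(X: np.ndarray, Y: np.ndarray):
    """X, Y: (n_samples, n_dims) paired activation matrices."""
    Xc = X - X.mean(axis=0)          # centre each variable
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)     # cross-covariance matrix, (n_dims, n_dims)
    U, S, Vt = np.linalg.svd(C)      # S: how strongly paired directions co-vary
    return U, S, Vt

# Toy example: a flat singular-value spectrum suggests the mapping is "high dimensional";
# a fast-decaying one suggests a few directions dominate.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 64))
Y = X @ rng.standard_normal((64, 64)) * 0.1 + rng.standard_normal((1024, 64)) * 0.01
U, S, Vt = cross_covariance_svd(X, Y)
print(S[:8] / S.sum())
```

The singular-value spectrum then gives a rough picture of how "flat" or "low-dimensional" the transformation between the paired samples is.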
If it's any use, there is code here to analyse the symmetrised cross-covariance matrix that I used for the control vectors: https://github.com/jukofyork/control-vectors/blob/main/direction_analyzer.py
The symmetrised version deliberately gets rid of the rotational components, as they can't be made use of if we are just looking for a single direction... You can actually do the same on the anti-symmetrised version (to look at the rotational components only), but eigendecomposition is less useful for this as it will return all complex vectors (hence why SVD makes more sense).
I should also add that from my experiments using SVD on the tensors (ie: ignoring the activations!) of LLMs, it often appears that the early/final tensors (which actually appear to be very important and are bumped in bits in the quant routines here!) actually tend to have a less flat distribution of singular values themselves! So when you ignore the distribution of input activations, they generally appear to be doing something inherently "lower dimensional" than the middle tensors!? It would be interesting to investigate this whilst also looking at the activations...
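For reference, a rough numpy sketch of the symmetrised/anti-symmetrised split described above (the linked direction_analyzer.py is the real implementation; this just illustrates why SVD is the more convenient tool for the anti-symmetric part):

```python
import numpy as np

def split_cross_covariance(C: np.ndarray):
    # Symmetric part: captures the scaling / single-direction structure.
    C_sym = 0.5 * (C + C.T)
    # Anti-symmetric part: captures the rotational structure.
    C_anti = 0.5 * (C - C.T)
    return C_sym, C_anti

C = np.random.default_rng(1).standard_normal((64, 64))
C_sym, C_anti = split_cross_covariance(C)

# The symmetric part has real eigenvalues, so eigendecomposition works fine here...
eig_sym = np.linalg.eigvalsh(C_sym)
# ...but the anti-symmetric part has purely imaginary eigenvalues, which is why
# SVD is the easier way to inspect its rotational components.
sv_anti = np.linalg.svd(C_anti, compute_uv=False)
print(eig_sym[-4:], sv_anti[:4])
```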
I'd be lying if I claimed to understand everything in there 🥴, but I think I got the gist. Implementing the L2 norm seems straightforward without having to introduce additional 3rd-party dependencies, but I completely agree that a "light" BLAS lib would be a godsend.
For now, I'll focus on the L2 norm, but will add activation variance as well (good shout!). For a later version, I'd like to try the logit prism approach, but that's for another day. Thanks for the steer @jukofyork! More weekend reading 😁
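For what it's worth, a minimal offline sketch (plain numpy, not the llama-imatrix code path; variable names are illustrative) of the two easy statistics mentioned here, the L2 norm of the activation change and the activation variance:

```python
import numpy as np

def activation_stats(x_in: np.ndarray, x_out: np.ndarray):
    """x_in, x_out: (n_tokens, n_embd) activations entering and leaving a layer."""
    # Mean per-token L2 norm of the change the layer makes to the residual stream.
    l2_diff = np.linalg.norm(x_out - x_in, axis=1).mean()
    # Mean per-channel variance of the output activations.
    variance = x_out.var(axis=0).mean()
    return l2_diff, variance

rng = np.random.default_rng(2)
x_in = rng.standard_normal((512, 128))
x_out = x_in + 0.05 * rng.standard_normal((512, 128))
print(activation_stats(x_in, x_out))
```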
If you want to learn more about linear algebra, then Gilbert Strang's video lectures are amazing: https://www.youtube.com/playlist?list=PLE7DDD91010BC51F8 (IIRC, only the first lecture is in bad resolution, so don't be put off by that!). Or, if you like books: https://www.amazon.co.uk/Practical-Linear-Algebra-Textbooks-Mathematics/dp/0367507846 (or one of the earlier editions of this same book) gives a really solid foundation in terms of 2D and 3D.
The biggest problem breaking into it is that, for some reason, American universities decided to make it much more abstract and proof-based than it needs to be (probably to weed out potential math majors!). If you look at some much older pre-1980s books, or books not aimed at Westerners, then it's surprising how approachable it is.
I have tried to bring this up before: #8831 (reply in thread). I think it would be fairly straightforward to port the non-complex routines and then open up all that GSL has to offer: https://www.gnu.org/software/gsl/doc/html/linalg.html, instead of trying to rewrite numerical routines that have had 1000's and 1000's of hours of thought and testing put into them! :)
Following up from #9400 and #12718, I've started tinkering with activation-based statistics, in addition to what's currently available via `--show-statistics`.
At the moment, I'm exploring three options, going from easy to implement and an OK approximation, to some assembly required but fairly accurate:
- the L2 norm of the activation differences
- an option that requires a separate `llama-perplexity --save-all-logits` run
- since `llama-imatrix` already generates the actual logits to compute PPL, use Thông T. Nguyễn's logit prism approach to calculate the exact contribution of each layer to the final logit scores (see the sketch after this list)
Sharing with the readers, and in particular @compilade and @jukofyork, in case anyone's willing to double-check assumptions and/or suggest alternative approaches I haven't considered.
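For the third option, here is a rough sketch of the logit prism idea as I understand it, using placeholder names (per-layer residual-stream contributions and an unembedding matrix) that don't exist in llama-imatrix:

```python
import numpy as np

def per_layer_logit_contributions(deltas: list[np.ndarray], W_U: np.ndarray):
    """deltas: one (n_embd,) residual-stream update per layer; W_U: (n_vocab, n_embd).
    Because unembedding is linear, the final logits decompose exactly into the
    sum of per-layer contributions W_U @ delta_l (final normalisation ignored here)."""
    contribs = np.stack([W_U @ d for d in deltas])  # (n_layers, n_vocab)
    logits = contribs.sum(axis=0)                   # equals W_U @ sum(deltas)
    norms = np.linalg.norm(contribs, axis=1)
    return logits, norms / norms.sum()              # relative weight of each layer

rng = np.random.default_rng(3)
n_layers, n_embd, n_vocab = 8, 64, 1000
deltas = [rng.standard_normal(n_embd) for _ in range(n_layers)]
W_U = rng.standard_normal((n_vocab, n_embd))
logits, share = per_layer_logit_contributions(deltas, W_U)
print(share)
```

In the real thing, the final normalisation (RMSNorm) and any output scaling would have to be folded in before the decomposition is exact, which is part of what the logit prism write-up addresses.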