| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 305.1 |  | ADVAX::J_ROTH |  | Thu Jun 13 1985 08:43 | 15 | 
|  | Well, that's certainly an idea...
Just off the top of my head, it seems that translating a string of bytes
to bytes containing the number of bits in each, and then adding the bytes
would be more practical.
(If the bitstring is word-aligned you could add 4 bytes at a time
and fold the final sum at the end; for a bitstring less than 128 bytes
long the byte-wide partial sums can't overflow.)
The CRC fetches from a table in memory anyway.
One bit of information that might help; complementing each bit in the
string is equivalent to XORing a mask corresponding to that bit position
into the final CRC word.
- Jim
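[A quick sketch of the translate-and-add idea in Python; the table and function names are mine, not part of the original note:]

```python
# 256-entry table: number of bits set in each possible byte value.
TABLE = [bin(b).count("1") for b in range(256)]

def count_bits(data: bytes) -> int:
    """Translate each byte to its bit count via the table, then add."""
    return sum(TABLE[b] for b in data)
```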
 | 
| 305.2 |  | TURTLE::GILBERT |  | Thu Jun 13 1985 12:51 | 2 | 
|  | Well, if P=1, then the CRC tells whether the sum is odd or even.
That's a start, no?
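[A sketch of why this works; the function name is mine. Dividing the message polynomial by x+1 over GF(2) XORs each leading 1 into the next coefficient, so the 1-bit remainder is the XOR, i.e. the parity, of all the bits:]

```python
def parity_via_mod_x_plus_1(bits):
    """Remainder of the message polynomial modulo x+1 over GF(2).
    Long division by x+1 collapses to XORing all the coefficients,
    so the result is the parity of the number of 1-bits."""
    rem = 0
    for b in bits:
        rem ^= b
    return rem
```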
 | 
| 305.3 |  | RANI::LEICHTERJ |  | Sun Jun 30 1985 22:07 | 14 | 
|  | No, the CRC instruction cannot be used in this way.  To see this, look at the
CRC in another way:  A CRC is determined by a polynomial P(x) over the integers
mod 2.  Any bit string can be viewed as such a polynomial, with successive bits
being the coefficients of higher and higher powers of x; so we can think of
CRC as being a function from such polynomials to such polynomials (of fixed
maximum degree k-1, for a k-bit CRC, where P(x) has degree k).  In fact,
the CRC from this point of
view is given by CRC(G(x)) = G(x) mod P(x).
But it's then clear that CRC(P(x)) = 0, so P(x) can have no bits turned on,
i.e., it must be identically 0, if this CRC is to compute the number of 1
bits.  But 0 does not define a proper CRC function.  (Well, if you try and
plug it into typical CRC algorithms, you'll just get 0 out all the time.)
							-- Jerry
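[The key step, CRC(P(x)) = 0, can be checked directly with a small GF(2) long-division routine; this sketch and its names are mine:]

```python
def gf2_mod(message, divisor):
    """Remainder of message(x) mod divisor(x) over GF(2).
    Coefficients are 0/1 lists, highest power of x first."""
    msg = list(message)
    n = len(divisor)
    for i in range(len(msg) - n + 1):
        if msg[i]:                    # leading term set: XOR in a shifted copy of the divisor
            for j in range(n):
                msg[i + j] ^= divisor[j]
    return msg[len(msg) - (n - 1):]   # remainder has degree < deg(divisor)

# Dividing any polynomial by itself leaves remainder 0, so CRC(P(x)) = 0:
P = [1, 0, 0, 0, 0, 1, 1, 1]          # e.g. x^7 + x^2 + x + 1
assert gf2_mod(P, P) == [0] * (len(P) - 1)
```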
 | 
| 305.4 |  | RANI::LEICHTERJ |  | Sun Jun 30 1985 22:55 | 8 | 
|  | Actually, I should qualify the previous note:  It is not possible to choose
a polynomial P such that the CRC, w.r.t. P, is the number of one-bits.
This is a much weaker statement than "the CRC INSTRUCTION" can't be used
to make this calculation.  For one thing, it's possible (though I think
unlikely) that the instruction's algorithm, if fed a carefully-chosen
but non-valid CRC table, would go ahead and compute this.  Or perhaps the
input string should be fed into the table somehow...
							-- Jerry
 | 
| 305.5 |  | JRDV03::GILBERT |  | Sun Jun 30 1985 23:53 | 3 | 
|  | Or perhaps the input string could be used with two different CRC polynomials,
the results of which could be combined to yield the count of set bits for a
fairly long input string?
 | 
| 305.6 |  | METOO::YARBROUGH |  | Mon Jul 01 1985 08:43 | 14 | 
|  | The sum S{b}(N) of the radix-b digits of the representation of an integer
N can be computed from
                   ___
                  \
S{b}(N) = N-(b-1)* > Int(N/b^k)
                  /___
                   k=1
where Int(x) is the integer part of x, and the summation runs until b^k > N,
i.e. the remaining terms are all zero.
A fascinating feature of this formula is that none of the digits is actually
used in the summation.
I don't know whether this computation can be done by the CRC instruction,
but it may suggest an alternative approach.
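[The formula is easy to check numerically; with b = 2 it gives exactly the count of set bits. A sketch, with names of my choosing:]

```python
def digit_sum(n, b):
    """Sum of the base-b digits of n, computed as
    n - (b-1) * sum(Int(n / b**k) for k >= 1),
    without ever extracting a digit."""
    total, p = 0, b
    while p <= n:           # terms with b**k > n are all zero
        total += n // p
        p *= b
    return n - (b - 1) * total
```

With b = 2 this is the classic popcount identity n - sum(n >> k).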
 | 
| 305.7 |  | CLT::GILBERT |  | Wed May 11 1988 15:13 | 26 | 
|  | I was just curious about whether the CRC instruction would do it.
As it happens, I wanted to count the number of bits set in a rather
long string (about 12Mb), and wrote a VAX Macro routine to do it.
It works (roughly) as follows:
	(T is an array of 65536 unsigned bytes, initialized to zeroes)
	(U is an array of 17 longwords, initialized to zeroes)
	For each 16-bit word in the string,
	    Let x be the word
	    Increment T[x] by 1
	    If this makes T[x] zero, then
		Increment U[ bits[x mod 256] ] by 256	(to account for)
		Increment U[ bits[x div 256] ] by 256	( the overflow )
		(bits[m] is an array of 256 bytes that yields the number
		of bits set in the number 'm')
	For x varying from 0 to 65535, do
	    Increment U[ bits[x mod 256] ] by T[x]
	    Increment U[ bits[x div 256] ] by T[x]
	Sum  <-  0
	For i varying from 0 to 16 do
	    Sum  <-  Sum + i * U[i]
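[A sketch of the same scheme in Python, with names of my choosing; the `& 0xFF` models the byte-wide counters in T, and U[j] accumulates how many bytes were seen with j bits set:]

```python
# 256-entry table: number of bits set in each possible byte value.
BITS = [bin(m).count("1") for m in range(256)]

def count_bits_table(data: bytes) -> int:
    """Count set bits in a long even-length byte string via a 65536-entry
    per-word occurrence table, as in the VAX Macro routine sketched above."""
    T = [0] * 65536                # per-word occurrence counts, modulo 256
    U = [0] * 17                   # U[j]: bytes seen with j bits set
    for i in range(0, len(data) - 1, 2):
        x = data[i] | (data[i + 1] << 8)
        T[x] = (T[x] + 1) & 0xFF
        if T[x] == 0:              # counter wrapped: credit 256 occurrences now
            U[BITS[x & 0xFF]] += 256
            U[BITS[x >> 8]] += 256
    for x in range(65536):         # fold in the remaining (unwrapped) counts
        U[BITS[x & 0xFF]] += T[x]
        U[BITS[x >> 8]] += T[x]
    return sum(j * U[j] for j in range(17))
```

(Only U[0]..U[8] are ever touched, since a byte has at most 8 set bits; the 17 slots mirror the Macro version's array.)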
 |