I was very interested to read today a number of facts about the C programming language that tend to escape “common knowledge” over time. For example, did you know that:
- some processors (especially Digital Signal Processors) cannot efficiently access memory in smaller pieces than the processor’s word size. There is at least one DSP […] where CHAR_BIT is 32. The char types, short, int and long are all 32 bits.
- Every bit in an object of unsigned character types contributes to its value. There are no unused or padding bits, and every possible combination of bits represents a valid value for an unsigned char. There is no other data type in C or C++ that guarantees this to be true. [no, not even int or long]
- SCHAR_MIN must be -127 or less (more negative), and SCHAR_MAX must be 127 or greater. […] many compilers for processors which use a 2’s complement representation support SCHAR_MIN of -128, but this is not required by the standards.
- likewise, the standard requires minimum ranges for short, int and long data types, but implementations can choose any larger size.
- It is int which causes the greatest confusion. Some people are certain that an int has 16 bits and sizeof(int) is 2. Others are equally sure that an int has 32 bits and sizeof(int) is 4. Who is right? On any given compiler, one or the other could be right. On some compilers, both would be wrong. [there is at least] one compiler for a 24-bit DSP where an int has 24 bits.
- on 32-bit platforms, using “%d” [with printf] to print either an int or long will usually work, but on LP64 platforms “%ld” must be used to print a long.
- the relationship between the fundamental data types can be expressed as sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long). [note that sizeof(long) == sizeof(size_t) is not guaranteed by the standard: it holds on LP64 platforms, but on LLP64 platforms such as 64-bit Windows, long is 4 bytes while size_t is 8]