Q: How should NULL be defined on a machine which uses a nonzero bit pattern as the internal representation of a null pointer?
A: The same as on any other machine: as 0 (or some version of 0; see question 5.4).
Whenever a programmer requests a null pointer, either by writing ``0'' or ``NULL'', it is the compiler's responsibility to generate whatever bit pattern the machine uses for that null pointer. (Again, the compiler can tell that an unadorned 0 requests a null pointer when the 0 is in a pointer context; see question 5.2.) Therefore, #defining NULL as 0 on a machine for which internal null pointers are nonzero is as valid as on any other: the compiler must always be able to generate the machine's correct null pointers in response to unadorned 0's seen in pointer contexts. A constant 0 is a null pointer constant; NULL is just a convenient name for it (see also question 5.13).
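For illustration, here is a minimal sketch (the malloc call is only an arbitrary example, not something from the text above): the programmer writes only 0 or NULL, and the compiler emits comparisons against whatever internal null pointer bit pattern the machine actually uses.

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
	    char *p = malloc(10);	/* may yield a null pointer on failure */

	    /* Both tests compare p against the machine's internal null
	       pointer representation, whatever its bit pattern may be;
	       the source code never mentions that bit pattern. */
	    if (p == NULL)
	        printf("allocation failed\n");
	    else if (p != 0)
	        printf("allocation succeeded\n");

	    free(p);
	    return 0;
	}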
(Section 4.1.5 of the C Standard states that NULL ``expands to an implementation-defined null pointer constant,'' which means that the implementation gets to choose which form of 0 to use and whether to use a void * cast; see questions 5.6 and 5.7. ``Implementation-defined'' here does not mean that NULL might be #defined to match some implementation-specific nonzero internal null pointer value.)
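For example (a sketch; the particular bit pattern 0xffffffff is hypothetical), an implementation's <stddef.h> may define NULL in either of the conforming forms below, but never as the machine's nonzero internal representation:

	/* Either of these is a conforming definition of NULL: */
	#define NULL 0
	/* #define NULL ((void *)0) */

	/* NOT conforming, even on a machine whose internal null
	   pointers happen to have the bit pattern 0xffffffff: */
	/* #define NULL 0xffffffff */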
See also questions 5.2, 5.10, and 5.17.
References: ISO Sec. 7.1.6; Rationale Sec. 4.1.5.