This module provides operations on the type int32 of signed 32-bit integers. Unlike the built-in int type, the type int32 is guaranteed to be exactly 32 bits wide on all platforms. All arithmetic operations over int32 are taken modulo 2^32.
Performance notice: values of type int32 occupy more memory space than values of type int, and arithmetic operations on int32 are generally slower than those on int. Use int32 only when the application requires exact 32-bit arithmetic.
Literals for 32-bit integers are suffixed by l:
let zero: int32 = 0l
let one: int32 = 1l
let m_one: int32 = -1l
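For instance, the modulo 2^32 behaviour mentioned above can be observed directly: adding one to the largest representable value wraps around to the smallest one.

let () =
  assert (Int32.add Int32.max_int 1l = Int32.min_int);  (* overflow wraps around *)
  assert (Int32.mul 65536l 65536l = 0l)                  (* 2^16 * 2^16 = 2^32, i.e. 0 modulo 2^32 *)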
Integer remainder. If y is not zero, the result of Int32.rem x y satisfies the following property: x = Int32.add (Int32.mul (Int32.div x y) y) (Int32.rem x y). If y = 0, Int32.rem x y raises Division_by_zero.
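A small check of this identity (the example also relies on Int32.div truncating towards zero, as documented for that function):

let () =
  let x = -7l and y = 2l in
  assert (Int32.div x y = -3l);   (* division truncates towards zero *)
  assert (Int32.rem x y = -1l);   (* remainder has the sign of x *)
  assert (x = Int32.add (Int32.mul (Int32.div x y) y) (Int32.rem x y))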
Int32.shift_right x y shifts x to the right by y bits. This is an arithmetic shift: the sign bit of x is replicated and inserted in the vacated bits. The result is unspecified if y < 0 or y >= 32.
Int32.shift_right_logical x y shifts x to the right by y bits. This is a logical shift: zeroes are inserted in the vacated bits regardless of the sign of x. The result is unspecified if y < 0 or y >= 32.
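The difference between the two shifts is visible on negative arguments, for example:

let () =
  assert (Int32.shift_right (-8l) 1 = -4l);                  (* sign bit replicated *)
  assert (Int32.shift_right_logical (-8l) 1 = 0x7FFFFFFCl)   (* zeroes inserted *)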
Convert the given 32-bit integer (type int32) to an integer (type int). On 32-bit platforms, the 32-bit integer is taken modulo 2^31, i.e. the high-order bit is lost during the conversion. On 64-bit platforms, the conversion is exact.
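A minimal illustration; the assertions below hold on 64-bit platforms, where the conversion is exact (small values such as these also convert unchanged on 32-bit platforms):

let () =
  assert (Int32.to_int 42l = 42);
  assert (Int32.to_int (-1l) = -1)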
Convert the given floating-point number to a 32-bit integer, discarding the fractional part (truncate towards 0). If the truncated floating-point number is outside the range [Int32.min_int, Int32.max_int], no exception is raised, and an unspecified, platform-dependent integer is returned.
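For example, truncation discards the fractional part in both directions:

let () =
  assert (Int32.of_float 2.9 = 2l);
  assert (Int32.of_float (-2.9) = -2l)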
Convert the given string to a 32-bit integer. The string is read in decimal (by default, or if the string begins with 0u) or in hexadecimal, octal or binary if the string begins with 0x, 0o or 0b respectively.
The 0u prefix reads the input as an unsigned integer in the range [0, 2*Int32.max_int+1]. If the input exceeds Int32.max_int it is converted to the signed integer Int32.min_int + input - Int32.max_int - 1.
The _ (underscore) character can appear anywhere in the string and is ignored.
Raises Failure if the given string is not a valid representation of an integer, or if the integer represented exceeds the range of integers representable in type int32.
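The following examples illustrate the accepted prefixes, the underscore separator, and the unsigned wrap-around behaviour of 0u:

let () =
  assert (Int32.of_string "123" = 123l);
  assert (Int32.of_string "0xff" = 255l);
  assert (Int32.of_string "0b1010" = 10l);
  assert (Int32.of_string "1_000_000" = 1_000_000l);
  assert (Int32.of_string "0u4294967295" = -1l)   (* 2^32 - 1 wraps to -1 *)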
Return the internal representation of the given float according to the IEEE 754 floating-point 'single format' bit layout. Bit 31 of the result represents the sign of the float; bits 30 to 23 represent the (biased) exponent; bits 22 to 0 represent the mantissa.
Return the floating-point number whose internal representation, according to the IEEE 754 floating-point 'single format' bit layout, is the given int32.
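As a round-trip example (the argument of Int32.bits_of_float is first rounded to single precision):

let () =
  assert (Int32.bits_of_float 1.0 = 0x3F800000l);   (* sign 0, biased exponent 127, mantissa 0 *)
  assert (Int32.float_of_bits 0x3F800000l = 1.0);
  assert (Int32.float_of_bits (Int32.bits_of_float 0.5) = 0.5)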
The comparison function for 32-bit integers, with the same specification as Stdlib.compare. Along with the type t, this function compare allows the module Int32 to be passed as argument to the functors Set.Make and Map.Make.
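For example, Int32 can be used directly as the argument of Set.Make:

module Int32Set = Set.Make (Int32)

let () =
  let s = Int32Set.of_list [3l; 1l; 2l; 1l] in
  assert (Int32Set.elements s = [1l; 2l; 3l])   (* sorted, duplicates removed *)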
A seeded hash function for 32-bit ints, with the same output value as Hashtbl.seeded_hash. This function allows this module to be passed as argument to the functor Hashtbl.MakeSeeded.
An unseeded hash function for 32-bit ints, with the same output value as Hashtbl.hash. This function allows this module to be passed as argument to the functor Hashtbl.Make.
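Since Int32 also provides an equal function (not shown above), it satisfies Hashtbl.HashedType and can be passed to Hashtbl.Make; seeded_hash analogously enables Hashtbl.MakeSeeded, as noted above.

module Int32Tbl = Hashtbl.Make (Int32)

let () =
  let tbl = Int32Tbl.create 16 in
  Int32Tbl.add tbl 42l "answer";
  assert (Int32Tbl.find tbl 42l = "answer")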