Real Types

Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and selectable-precision decimals. This section covers the real types: the floating-point types and DECIMAL.

Data Type                                 Size      Size (Not Null)   Synonyms        Precision
FLOAT                                     5 bytes   4 bytes           REAL, FLOAT32   23 bits
DOUBLE [PRECISION]                        9 bytes   8 bytes           FLOAT64         53 bits
DECIMAL [(precision_num [, scale_num])]   38 bytes  37 bytes                          32 digits

Floating-Point Types

The data types FLOAT and DOUBLE PRECISION are inexact, variable-precision numeric types. On all currently supported platforms, these types are implementations of IEEE Standard 754 for Binary Floating-Point Arithmetic (single and double precision, respectively), to the extent that the underlying processor, operating system, and compiler support it.

Inexact means that some values cannot be converted exactly to the internal format and are stored as approximations, so that storing and retrieving a value might show slight discrepancies. Managing these errors and how they propagate through calculations is the subject of an entire branch of mathematics and computer science and will not be discussed here, except for the following points:

  • If you require exact storage and calculations (such as for monetary amounts), use the DECIMAL type instead.
  • If you want to do complicated calculations with these types for anything important, especially if you rely on certain behavior in boundary cases (infinity, underflow), you should evaluate the implementation carefully.
  • Comparing two floating-point values for equality might not always work as expected, as the example below illustrates.
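
For instance, a value that displays as 0.1 is stored as a binary approximation, so an exact equality test against the literal can fail. A minimal sketch (the measurements table and reading column are hypothetical; assume reading is a DOUBLE PRECISION column):

SELECT * FROM measurements WHERE reading = 0.1;              -- may return no rows even though 0.1 was inserted
SELECT * FROM measurements WHERE ABS(reading - 0.1) < 1e-9;  -- comparing within a tolerance is more robust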

Decimal Type

The type DECIMAL can store numbers with a large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required. Calculations with DECIMAL values yield exact results where possible, e.g., addition, subtraction, multiplication. However, calculations on DECIMAL values are very slow compared to the integer types, or to the floating-point types described in the previous section.
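
For example, adding two decimal fractions as DECIMAL yields an exact result, while the same calculation in binary floating point does not. A sketch using standard SQL CAST syntax (exact output formatting may vary):

SELECT CAST(0.1 AS DECIMAL(3, 1)) + CAST(0.2 AS DECIMAL(3, 1));        -- exactly 0.3
SELECT CAST(0.1 AS DOUBLE PRECISION) + CAST(0.2 AS DOUBLE PRECISION);  -- approximately 0.30000000000000004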

The precision of a numeric is the total count of significant digits in the whole number, that is, the number of digits to both sides of the decimal point. The scale of a numeric is the count of decimal digits in the fractional part, to the right of the decimal point. So the number 3.14159 has a precision of 6 and a scale of 5. Integers can be considered to have a scale of zero.

The maximum precision for DECIMAL types is 32 and the maximum scale is 32.

Both the maximum precision and the maximum scale of a DECIMAL column can be configured. To declare a column of type DECIMAL, use the syntax:

DECIMAL(precision, scale)

The precision must be positive, the scale zero or positive. Alternatively:

DECIMAL(precision)

selects a scale of 0. Specifying:

DECIMAL

without any precision or scale creates a DECIMAL column in which the precision and scale are set to the maximum values supported by the implementation.
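
For example, a column intended for monetary amounts might be declared as follows (the table and column names are illustrative):

CREATE TABLE items (
    item_no  INTEGER,
    price    DECIMAL(10, 2)  -- up to 10 significant digits in total, 2 of them after the decimal point
);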

Real types, with the exception of DECIMAL, can be included in an ARRAY. See Arrays.

See Also

NULL Values

DEFAULT Values

Limitations