FP8 data type - all values in a table

Tue, 24 Sep 2024

Floating-point numbers are a great invention. Thanks to dedicating separate bits to the sign, exponent, and mantissa (also called significand), they can represent a wide range of values in a limited number of bits: positive or negative, very large or very small (close to zero), integer or fractional.
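For a normalized value, with sign bit s, an E-bit biased exponent e, and an M-bit mantissa m, the decoding rule common to all these formats is:

    value = (-1)^s × (1 + m / 2^M) × 2^(e - bias),   where bias = 2^(E-1) - 1

For example, the 32-bit "single" format uses E = 8 and M = 23, so its bias is 127.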

In programming, we typically use double-precision (64-bit) or single-precision (32-bit) numbers. These are the data types available in programming languages (like double and float in C/C++) and supported by processors, which can perform calculations on them efficiently. Those of you who do graphics programming with APIs like OpenGL, DirectX, or Vulkan may know that some GPUs also support a 16-bit floating-point type, also known as half-float.

Such a 16-bit "half" type obviously has limited precision (an 11-bit significand, about 3 decimal digits) and range (a maximum value of 65504) compared to the "single" or "double" versions. Because of these limitations, I am reserved in recommending it for use in graphics. I summarized the capabilities and limits of these three types in a table in my old "Floating-Point Formats Cheatsheet".

Now, as artificial intelligence (AI) / machine learning (ML) is a popular topic, programmers in this domain use low-precision numbers. When I learned that floating-point formats based on only 8 bits were proposed, I immediately thought: 256 possible values are few enough that they can all be visualized in a 16×16 table! I developed a script that generates such tables (a sketch of the idea is shown below the link), and so I invite you to take a look at my new article:

"FP8 data type - all values in a table"
