Convert decimal numbers (base-10) to binary numbers (base-2) instantly. Perfect for programming, computer science, and digital systems.
| Decimal | Binary |
|---|---|
| 0 | 0 |
| 1 | 1 |
| 2 | 10 |
| 3 | 11 |
| 4 | 100 |
| 5 | 101 |
| 8 | 1000 |
| 10 | 1010 |
| 15 | 1111 |
| 16 | 10000 |
| 32 | 100000 |
| 64 | 1000000 |
| 128 | 10000000 |
| 255 | 11111111 |
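You can reproduce this table with Python's built-in `bin()` (a quick sketch):

```python
# Reproduce the table above with Python's built-in bin(),
# which returns strings like '0b1010'; slice off the '0b' prefix.
for n in [0, 1, 2, 3, 4, 5, 8, 10, 15, 16, 32, 64, 128, 255]:
    print(f"{n:>3} -> {bin(n)[2:]}")
```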
Decimal (base-10) is the number system we use in everyday life. It uses ten digits (0-9) and each position represents a power of 10. For example, 352 = 3×10² + 5×10¹ + 2×10⁰.
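The same positional expansion can be spelled out in code; here is a small Python sketch (`decimal_expansion` is a hypothetical helper name, not part of any library):

```python
# Print the positional expansion of a base-10 number,
# e.g. 352 -> "3×10^2 + 5×10^1 + 2×10^0".
def decimal_expansion(n: int) -> str:
    digits = str(n)
    return " + ".join(
        f"{d}×10^{len(digits) - 1 - i}" for i, d in enumerate(digits)
    )

print(decimal_expansion(352))  # 3×10^2 + 5×10^1 + 2×10^0
```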
To convert decimal to binary, repeatedly divide the decimal number by 2 and record the remainder. Continue until the quotient is 0. The binary number is the remainders read from bottom to top (or right to left).
Example: Convert 13 to binary
13 ÷ 2 = 6 remainder 1
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
Result: 1101 (reading remainders from bottom to top)
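Here is a minimal Python sketch of this repeated-division method (`decimal_to_binary` is just an illustrative name):

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record the remainder
        n //= 2                        # continue with the quotient
    return "".join(reversed(remainders))  # read remainders bottom to top

print(decimal_to_binary(13))  # 1101
```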
Binary is the fundamental language of computers because digital circuits can only be in two states: on (1) or off (0). All data in computers, from numbers and text to images and videos, is ultimately stored and processed as binary code.
The number of bits needed is ⌈log₂(n+1)⌉, where n is the decimal number. For example, 255 requires 8 bits (2⁸-1 = 255), and 1024 requires 11 bits. In general, k bits can represent 2ᵏ different values (0 to 2ᵏ-1).
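As a quick check, Python can evaluate the ceiling formula directly, and `int.bit_length()` returns the same count without floating-point rounding:

```python
import math

for n in (255, 1024):
    print(n, math.ceil(math.log2(n + 1)), n.bit_length())
# 255 8 8
# 1024 11 11
```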
Zero in binary is simply 0. However, in fixed-width representations (like 8-bit or 16-bit), it's written with leading zeros: 00000000 for 8 bits or 0000000000000000 for 16 bits.
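In Python, fixed-width binary strings like these can be produced with a zero-padded format specification:

```python
# Zero-padded, fixed-width binary via format specifications.
print(format(0, "08b"))   # 00000000
print(format(0, "016b"))  # 0000000000000000
print(format(13, "08b"))  # 00001101
```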
Yes, decimal fractions can be converted to binary fractions. For example, 0.5 in decimal is 0.1 in binary, and 0.25 is 0.01 in binary. However, some decimal fractions (like 0.1) cannot be exactly represented in binary, which can lead to floating-point precision issues in programming.
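A minimal sketch of the repeated-doubling method for fractions (`fraction_to_binary` is a hypothetical helper; the bit count is capped because expansions like 0.1's never terminate):

```python
def fraction_to_binary(x: float, max_bits: int = 16) -> str:
    """Convert a fraction in [0, 1) to binary by repeated doubling."""
    bits = []
    while x > 0 and len(bits) < max_bits:
        x *= 2
        bit = int(x)            # the integer part is the next binary digit
        bits.append(str(bit))
        x -= bit
    return "0." + "".join(bits)

print(fraction_to_binary(0.5))   # 0.1
print(fraction_to_binary(0.25))  # 0.01
print(fraction_to_binary(0.1))   # 0.0001100110011001 (truncated; never terminates)
```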
Binary prefixes use powers of 2: kilo is used informally for 2¹⁰ = 1,024, mega for 2²⁰ = 1,048,576, giga for 2³⁰ = 1,073,741,824, etc.; the IEC standard names these values kibi (Ki), mebi (Mi), and gibi (Gi). They differ slightly from the decimal SI prefixes (1,000; 1,000,000; etc.) and are commonly used for measuring computer memory and storage.
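These values are easy to verify, for example:

```python
# Binary prefix values are exact powers of 2.
for name, power in [("kibi (Ki)", 10), ("mebi (Mi)", 20), ("gibi (Gi)", 30)]:
    print(f"{name}: 2^{power} = {2 ** power:,}")
# kibi (Ki): 2^10 = 1,024
# mebi (Mi): 2^20 = 1,048,576
# gibi (Gi): 2^30 = 1,073,741,824
```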