Why 64 bits?
The question of
why we need 64-bit computing is often asked but rarely answered in a
satisfactory manner. That this is so is evidenced by the fact that the question
keeps coming up again and again in online discussions of AMD's upcoming Hammer
processor. There are good reasons for the confusion surrounding the question,
the first of which is the rarely acknowledged fact that "the 64-bit
question" is actually two questions: 1) how does the existing 64-bit server
and workstation market use 64-bit computing, and 2) what use would the consumer
market have for 64-bit computing? People who ask the 64-bit question are
usually asking for the answer to question 1 in order to deduce the answer to
question 2. This being the case, we'll first look at question 1 before tackling
question 2.
What is 64-bit computing? [In detail with respect to hardware]
First, it helps to understand the code/data distinction and its implications for
microprocessor technology. Simply put, the labels "16-bit," "32-bit"
or "64-bit," when applied to a microprocessor, characterize the
processor's data stream. Although you may have heard the term "64-bit
code," this designates code that operates on 64-bit data.
In more specific terms, the labels "64-bit,"
32-bit," etc. designate the number of bits that each of the processor's
general-purpose registers (GPRs) can hold. So when someone uses the term
"64-bit processor," what they mean is "a processor with GPRs
that store 64-bit numbers." And in the same vein, a "64-bit instruction"
is an instruction that operates on 64-bit numbers.
In the diagram above, white boxes are data, and gray boxes
are results. Also, don't take the instruction and code "sizes" too
literally, since they're intended to convey a general feel for what it means to
"widen" a processor from 32 bits to 64 bits.
One should notice that not all of the data in either memory,
the cache, or the registers is 64-bit data. Rather, the data sizes are mixed,
with 64 bits being the widest.
Note that in the 64-bit CPU image above, the width of the
code stream has not changed; the same-sized opcode could theoretically
represent an instruction that operates on 32-bit numbers or an instruction that
operates on 64-bit numbers, depending on what the opcode's default data size
is. On the other hand, the width of the data stream has doubled. In order to
accommodate the wider data stream, the sizes of the processor's registers and
the sizes of the internal data paths that feed those registers must
be doubled.
Programming models
Now let's take a look at two programming models, one for a
32-bit processor and another for a 64-bit processor.
The registers in the 64-bit CPU pictured above are twice as
wide as those in the 32-bit CPU, but the size of the instruction register (IR)
that holds the currently executing instruction is the same in both processors.
Again, the data stream has doubled in size, but the instruction stream has not.
Finally, you might also note that the program counter (PC) is doubled in
size.
For the simple processor pictured above, the two types of
data that it can process are integer data and address data. Ultimately,
addresses are really just integers that designate a memory address, so address
data is just a special type of integer data. Hence, both data types are stored
in the GPRs, and both integer and address calculations are done by the ALU.
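To make the "addresses are just integers" point concrete, here is a minimal C sketch (an illustration, not from the original article). It uses uintptr_t, the standard integer type sized to match a pointer: 32 bits wide on a 32-bit target and 64 bits wide on a 64-bit target.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int value = 42;
    int *ptr = &value;

    /* An address is just an integer wide enough to name a byte of memory.
       Converting the pointer to uintptr_t exposes that integer directly. */
    uintptr_t addr = (uintptr_t)ptr;

    printf("sizeof(void *)    = %zu bytes\n", sizeof(void *));    /* 4 or 8 */
    printf("sizeof(uintptr_t) = %zu bytes\n", sizeof(uintptr_t)); /* 4 or 8 */
    printf("address as an integer: 0x%" PRIxPTR "\n", addr);
    return 0;
}
```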
From <https://arstechnica.com/gadgets/2002/03/an-introduction-to-64-bit-computing-and-x86-64/>
Many modern processors support two additional data types:
floating-point data and vector data. Each of these two data types has its own
set of registers and its own execution unit(s). The following table compares
all four data types in 32-bit and 64-bit processors:
DATA TYPE       | REGISTER TYPE | EXECUTION UNIT | X86 WIDTH (BITS) | X86-64 WIDTH (BITS)
Integer         | GPR           | ALU            | 32               | 64
Address         | GPR           | ALU or AGU     | 32               | 64
Floating-point* | FPR           | FPU            | 64               | 64
Vector          | VR            | VPU            | 128              | 128
*x87 uses 80-bit registers to do double-precision
floating-point. The floats themselves are 64-bit, but the processor converts
them to an internal, 80-bit format for increased precision when doing
computations.
One can see from the table above that the move to 64 bits makes a
difference only in the integer and address hardware; the
floating-point and vector hardware stays the same.
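A quick way to see that the floating-point side is unaffected is to print the sizes of the C floating-point types on a 32-bit and a 64-bit build. This sketch assumes a GCC- or Clang-style toolchain on x86, where long double maps to the x87 80-bit extended format.

```c
#include <stdio.h>

int main(void) {
    /* These widths are the same whether the integer/address side of the
       processor is 32-bit or 64-bit. */
    printf("sizeof(float)       = %zu bytes\n", sizeof(float));  /* 4 */
    printf("sizeof(double)      = %zu bytes\n", sizeof(double)); /* 8 */
    /* On x86 toolchains that use the x87 FPU, long double is the 80-bit
       extended format, padded in memory to 12 (x86) or 16 (x86-64) bytes. */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    return 0;
}
```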
Current 64-bit applications
Now that we know what 64-bit computing is, let's take a look
at the benefits of increased integer and address sizes.
Dynamic range
The main thing that a wider integer gives you is
increased dynamic range. Rather than formally defining the term "dynamic
range," it's easier to show how it works.
In the base-10 number system to which we're all accustomed, one
can represent a maximum of ten integers (0 to 9) with a single digit. This is
because base-10 has ten different symbols with which to represent numbers. To
represent more than ten integers you need to add another digit, using a
combination of two symbols chosen from among the set of ten to represent any
one of 100 integers (00 to 99). The general formula that you can use to compute
the number of integers (dynamic range, or DR) that you can represent with
an n-digit base-ten number is:
DR = 10^n
So a 1-digit number gives you 10^1 = 10 possible
integers, a 2-digit number 10^2 = 100 integers, a 3-digit number 10^3 =
1,000 integers, and so on.
The base-2, or "binary," number system that
computers use has only two symbols with which to represent integers: 0 and 1.
Thus, a single-digit binary number allows you to represent only two integers, 0
and 1. With a two-digit (or "2-bit") binary, you can represent four
integers by combining the two symbols (0 and 1) in any of the following four
ways:
00 = 0
01 = 1
10 = 2
11 = 3
Similarly, a 3-bit binary number gives you eight possible
combinations, which you can use to represent eight different integers. As you
increase the number of bits, you increase the number of integers you can
represent. In general, n bits will allow you to represent
2^n integers in binary. So a 4-bit binary number can represent
2^4 = 16 integers, an 8-bit number gives you 2^8 = 256 integers, and so on.
So in moving from a 32-bit GPR to a 64-bit GPR, the range of
integers that a processor can manipulate goes from 2^32 = 4.3e9 to
2^64 = 1.8e19. The dynamic range, then, increases by a factor of 4.3
billion. Thus a 64-bit integer can represent a much larger range of numbers than
a 32-bit integer.
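The following small C program (an illustration, not from the article) prints the largest unsigned value an n-bit register can hold for n = 8, 16, 32, and 64, which makes the jump in dynamic range easy to see.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* An n-bit register can hold 2^n distinct values: 0 through 2^n - 1. */
    for (unsigned n = 8; n <= 64; n *= 2) {
        uint64_t max = (n == 64) ? UINT64_MAX : ((UINT64_C(1) << n) - 1);
        printf("%2u bits -> largest unsigned value: %llu\n",
               n, (unsigned long long)max);
    }
    return 0;
}
```

The last two lines of output, 4,294,967,295 and 18,446,744,073,709,551,615, are the 4.3e9 and 1.8e19 figures quoted above.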
The benefits of increased dynamic range, or, how the existing 64-bit
computing market uses 64-bit integers
Since addresses are just special-purpose integers, an ALU
and register combination that can handle more possible integer values can also
handle that many more possible addresses. With all the recent press coverage
that 64-bit architectures have garnered, it's fairly common knowledge that a
32-bit processor can address at most 4GB of memory. (Remember our 2^32 =
4.3 billion number? That's 4.3 billion bytes, which is about 4GB.) A 64-bit architecture
could theoretically, by contrast, address up to 18 million terabytes.
Of course, there's a big difference between the amount of
address space that a 64-bit address value could theoretically yield and the
actual sizes of the virtual and physical address spaces that a given 64-bit
architecture supports. In the case of x86-64, the virtual address space is
48-bit, which makes for about 282 terabytes of virtual address space. x86-64's
physical address space is 40-bit, which can support about 1 terabyte of physical
memory.
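As a back-of-the-envelope check of those figures, a couple of shifts reproduce them (this snippet is just an illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* x86-64's 48-bit virtual and 40-bit physical address spaces, in bytes. */
    uint64_t virt = UINT64_C(1) << 48; /* 281,474,976,710,656 bytes, ~282 TB */
    uint64_t phys = UINT64_C(1) << 40; /*   1,099,511,627,776 bytes, ~1.1 TB */

    printf("virtual address space:  %llu bytes\n", (unsigned long long)virt);
    printf("physical address space: %llu bytes\n", (unsigned long long)phys);
    return 0;
}
```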
So, what do you do with over 4GB of memory? Well, caching a
very large database in it is a start. Back-end servers for mammoth databases
are one place where 64 bits have long been a requirement, so it's no surprise
to see upcoming 64-bit offerings billed as capable database platforms.
On the media and content creation side of things, folks who
work with very large 2D image files also appreciate the extra RAM. And a
related, much sexier application domain where large amounts of memory come in
handy is in simulation and modeling. Under this heading you could put various
CAD tools and 3D rendering programs, as well as things like weather and
scientific simulations, and even realtime 3D games. Though the current crop of
3D games wouldn't benefit from greater than 4GB of RAM, it is quite possible
that we'll see a game that benefits from greater than 4GB RAM within the next
five years.
There is one drawback to the increase in memory space that
64-bit addressing affords. Since memory address values (or pointers, in
programmer lingo) are now twice as large, they take up twice as much cache
space. Pointers normally make up only a fraction of all the data in the cache,
but when that fraction doubles it can squeeze other useful data out of the cache
and degrade performance slightly.
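A hypothetical linked-list node shows the effect: compiled for a 32-bit target (for example with gcc -m32) the node below is typically 8 bytes, while on a 64-bit target it grows to 16 bytes, because the pointer doubles and the integer payload gets padded to the pointer's alignment, so fewer nodes fit in each cache line.

```c
#include <stdio.h>

/* Illustrative node type: one pointer plus a small payload. */
struct node {
    struct node *next; /* 4 bytes on a 32-bit build, 8 bytes on a 64-bit build */
    int          value; /* 4 bytes on both */
};

int main(void) {
    /* Typically prints 8 on a 32-bit build and 16 on a 64-bit build. */
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}
```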
Some applications, mostly in the realm of scientific
computing (MATLAB, Mathematica, MAPLE, etc.) and simulations, require 64-bit
integers because they work with numbers outside the dynamic range of 32-bit
integers. When the result of a calculation exceeds the range of possible
integer values, you get a situation called either overflow (i.e. the
result was greater than the largest positive integer)
or underflow (i.e. the result was less than the most negative
integer the register can hold). When this happens, the number you get in the register isn't the right
answer. There's a bit in the x86's processor status word that lets you
check whether an integer result has just exceeded the processor's dynamic range, so you
know that the result is bogus. Such situations are very, very rare in integer
applications.
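Here is a small C illustration of 32-bit overflow and the extra headroom a 64-bit register provides (the wraparound shown assumes the usual two's-complement behaviour of x86):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t big = INT32_MAX; /* 2,147,483,647: the largest signed 32-bit value */

    /* Adding 1 exceeds the 32-bit dynamic range. Doing the math in unsigned
       arithmetic and converting back, the value wraps to the most negative
       32-bit integer instead of the correct answer, 2,147,483,648. */
    int32_t wrapped = (int32_t)((uint32_t)big + 1u);

    /* The same calculation in a 64-bit integer has plenty of headroom. */
    int64_t correct = (int64_t)big + 1;

    printf("32-bit result: %d\n", wrapped);              /* -2147483648 */
    printf("64-bit result: %lld\n", (long long)correct); /*  2147483648 */
    return 0;
}
```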
Programmers who run into integer overflow or underflow
problems on a 32-bit platform do have the option of using a 64-bit integer
construct provided by a higher level language like C. In such cases, the
compiler uses two registers per integer, one for each half of the integer, to
do 64-bit calculations in 32-bit hardware. This has obvious performance
drawbacks, making it less desirable than a true 64-bit integer implementation.
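For example, the function below (an illustrative sketch with a hypothetical name) uses the 64-bit integer type that C provides regardless of the target. Compiled for 32-bit x86, the compiler typically splits each uint64_t across two 32-bit registers and emits an add/add-with-carry pair; compiled for x86-64, the same source becomes a single 64-bit add.

```c
#include <stdint.h>
#include <stdio.h>

/* Adds two 64-bit integers; the compiler decides whether this takes one
   64-bit ALU operation or a pair of 32-bit operations with a carry. */
static uint64_t add64(uint64_t a, uint64_t b) {
    return a + b;
}

int main(void) {
    /* The result, 8,000,000,000, does not fit in 32 bits. */
    uint64_t sum = add64(4000000000ULL, 4000000000ULL);
    printf("%llu\n", (unsigned long long)sum);
    return 0;
}
```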
Finally, there is another application domain for which
64-bit integers can offer real benefits: cryptography. Most popular encryption
schemes rely on the multiplication and factoring of very large integers, and
the larger the integers the more secure the encryption.
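A tiny illustration of why wider integer hardware helps here: big-number libraries build large integers out of machine-word "limbs," and with 64-bit limbs a multiplication needs roughly a quarter as many partial products as with 32-bit limbs. The sketch below multiplies two 64-bit limbs into a 128-bit product; it assumes a GCC- or Clang-style compiler that provides the unsigned __int128 extension, and mul_limbs is a hypothetical helper for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Multiply two 64-bit limbs and split the 128-bit product into halves. */
static void mul_limbs(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi) {
    unsigned __int128 product = (unsigned __int128)a * b;
    *lo = (uint64_t)product;
    *hi = (uint64_t)(product >> 64);
}

int main(void) {
    uint64_t lo, hi;
    mul_limbs(UINT64_MAX, UINT64_MAX, &lo, &hi);
    /* (2^64 - 1)^2 = 0xFFFFFFFFFFFFFFFE_0000000000000001 */
    printf("hi = %016llx, lo = %016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}
```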
64-bit Computing

Example                | Sources of performance and scalability gains
Large database         | Larger memory allocation per user; many more users; large file implementations; reduced swapping
Decision support       | Direct addressing; reduced swapping; large file implementations
Technical applications | Large process data space; more available shared memory segments; reduced swapping; high-precision arithmetic
Scalability of 64-bit computing compared to 32-bit: is 64-bit computing necessary?
Word Length | Mathematical Expression | Relative Scale
8-bit       | 2^8  = 256              | Business Card
16-bit      | 2^16 = 65,536           | Desktop
32-bit      | 2^32 = 4.29E+09         | City Block
64-bit      | 2^64 = 1.84E+19         | Surface of the earth(!)

64-bit computing has 4 billion times the capacity of 32-bit.
64-bit Registers
For the full list of discussed seminar topics, see the Index.
…till next post, bye-bye and take care.