Tech Tuesday: Main Memory (Dumb, Lazy and Slow)

In “Of Bits and Bytes” we learned that all kinds of data such as numbers, text, graphics and much more can be represented as sequences of bits.  Then last week we saw that instructions for the CPU are also simply sequences of bits.  Now going back to the Overview, memory is the place where computers keep the instructions and the data so that the CPU can get to them relatively quickly to do its work.  Today we will cover so called “Main Memory” (as it turns out your computer has other memory as well).

First, how does a computer keep stuff in memory?  The simplest metaphor for a bit is that of an on/off switch.  Well, that’s exactly how some of the earliest computers, like the Zuse Z3 and the Harvard Mark I, implemented memory: electro-mechanical relays.  Shortly thereafter came machines that used vacuum tubes, such as ENIAC.  Memory on these early machines was extremely limited – for instance, the Mark I could store only 72 numbers (as 10-digit decimal numbers).  Vacuum tubes and relays were bulky, prone to failure and required a lot of power.

The first step towards making memory smaller was the invention of so called magnetic core memory in the early 1950s.  These were tiny ring-shaped magnets whose direction of magnetization could be flipped to represent a 0 or a 1.  The magnets were strung up in matrices that looked a bit like a tennis racket (vertical and horizontal wires with magnets at the intersections).  Magnetic core memory was a huge success and stayed in use until the mid 1970s and in some instances, such as the Space Shuttle program, into the early 1980s.  Core memory was still expensive and comparatively bulky, though.  Most of IBM’s highly successful 700/7000 series had main memory that could hold less than 100,000 characters and yet cost millions of dollars.

The true breakthrough in memory size didn’t occur until we had transistors and capacitors on integrated circuits starting in the late 1960s.  A small group of transistors wired as a flip-flop (so called “static RAM” or SRAM) or a single transistor paired with a capacitor (“dynamic RAM” or DRAM) can be used to hold a single bit (more on the “RAM” part of the name below).  Unlike core memory before them, though, these memory chips require a constant supply of power to keep their state.  That’s why we call this kind of memory “volatile” – when the power goes away, the memory loses its contents.  That’s a small price to pay for the incredible miniaturization that became possible.  As memory shrank, prices started to drop, and today an 8 GB module – that’s 8 billion characters (!) – costs $220 at CDW.  If you know how to read a log scale, this amazing chart shows the cost of 1 MB of main memory falling by about 10 orders of magnitude between 1957 and 2010!

Now we know how bits are kept in memory, but how do they get in and out of memory?  Last Tuesday when talking about the CPU, I introduced the concept of an address, which says where some piece of data can be found.  Of course, like everything else inside a computer, addresses are sequences of bits.  Let’s explore how many different locations we get depending on the number of bits we use for addresses (the short sketch after the list checks the arithmetic):

8-bit addresses: 256 different locations (doesn’t seem like much at all, but it will turn out to be very useful)

16-bit addresses: 65,536 different locations (this is what an Apple II had)

32-bit addresses: 4,294,967,296 = 4 billion different locations or 4 GB (now we are talking!)

64-bit addresses: 18 quintillion = 18 billion billion different locations (beyond imagination)
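
If you want to check these figures, the arithmetic is simply 2 raised to the number of address bits.  Here is a minimal Python sketch (purely illustrative, not part of the original post):

```python
# Number of distinct locations addressable with a given number of address bits: 2 ** bits
for bits in (8, 16, 32, 64):
    print(f"{bits:2d}-bit addresses: {2 ** bits:,} locations")

# Output:
#  8-bit addresses: 256 locations
# 16-bit addresses: 65,536 locations
# 32-bit addresses: 4,294,967,296 locations
# 64-bit addresses: 18,446,744,073,709,551,616 locations
```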

When people talk about a computer with a 64-bit processor, that is part of what they mean: it uses 64-bit addresses and can therefore address more memory locations than one can imagine, and certainly more than money can buy.  Even at $220 for 8 GB, it would cost about $500 billion to buy that much memory.
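
That $500 billion figure is easy to reproduce.  Here is a rough back-of-the-envelope in Python, using the $220 per 8 GB price quoted above and counting a gigabyte as a round billion bytes for simplicity:

```python
# Rough cost of buying enough 8 GB modules to fill a full 64-bit address space.
price_per_module = 220           # dollars for one 8 GB module (the price quoted above)
bytes_per_module = 8 * 10**9     # treating 8 GB as 8 billion bytes for simplicity
addressable_bytes = 2 ** 64      # one byte per 64-bit address

total_cost = (addressable_bytes / bytes_per_module) * price_per_module
print(f"${total_cost:,.0f}")     # roughly 5.07e11, i.e. about $500 billion
```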

For the CPU to be able to access the memory at a particular address, the CPU has to be connected to the memory.  That is the function of the so called “memory controller” combined with the “memory bus.”  You can think of the controller as a traffic cop and the bus as a highway between the CPU and the main memory.  Together they allow the CPU to either send out an address and then receive the contents of that location in memory (a read), or to send out an address along with the bits to be put into that location (a write).  The memory controller and bus take care of getting the data to and from the right location, even when the memory is – as is usually the case – spread out across multiple different chips.  The “RAM” part of the names above, short for “Random Access Memory,” comes from the fact that the CPU can access locations in main memory in any pattern it wants to.
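
To make the read/write picture a bit more concrete, here is a toy model in Python.  It is only a sketch of the idea (the class and its methods are invented for this example); a real controller and bus involve far more machinery:

```python
# A toy model of main memory: a flat array of bytes indexed by address.
# Real controllers and buses involve far more machinery; this only
# illustrates the two operations described above, read and write.
class ToyMemory:
    def __init__(self, size):
        self.cells = bytearray(size)   # every cell starts out as 0

    def read(self, address):
        """Send out an address, get back the byte stored there."""
        return self.cells[address]

    def write(self, address, value):
        """Send out an address plus the bits to be stored there."""
        self.cells[address] = value

mem = ToyMemory(256)     # 8-bit addresses: 256 locations
mem.write(0x2A, 7)       # put the value 7 at address 42
mem.write(0xFF, 1)       # "random access": any location, in any order
print(mem.read(0x2A))    # prints 7
```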

Now we can come back to the title of this post.  Based on that short history of main memory technology and the brief description of how the CPU addresses memory, we can now see that main memory has three surprisingly unattractive properties: it is dumb, lazy and slow.

First, memory is incredibly dumb!  It doesn’t know whether it is storing a number or text.  In fact, memory doesn’t even distinguish between holding data and holding instructions for the CPU.  Memory is completely indifferent to what it holds.  One sequence of bits is as good as another.  As we will see, that is both incredibly powerful and amazingly dangerous.
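
You can see this indifference for yourself: the same four bytes can be read as text or as a number, and memory has no opinion either way.  A small Python illustration (the particular bytes and interpretations are chosen just for the example):

```python
import struct

raw = bytes([0x54, 0x65, 0x63, 0x68])   # four bytes sitting somewhere in memory

print(raw.decode("ascii"))               # interpreted as characters: Tech
print(struct.unpack("<I", raw)[0])       # the same bits as a 32-bit integer: 1751344468
```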

Second, memory is lazy!  Memory cannot perform any actions on its own.  Memory needs the CPU for even the simplest of tasks.  Want to add two numbers that are stored in memory?  That means you have to fetch them into the CPU, add them there and then write the result back.  That has been a given of computer architecture since we went to SRAM and DRAM memory in the 1960s.  And it has been a huge limitation, which is why people are actively working on alternatives.
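
Spelled out as a sketch (plain Python variables standing in for CPU registers; this illustrates the pattern, not any particular instruction set), the round trip looks like this:

```python
# The round trip for "add two numbers that live in memory":
# fetch both into the CPU, add them there, write the result back.
memory = bytearray(256)
memory[10] = 3           # first number lives at address 10
memory[11] = 4           # second number lives at address 11

register_a = memory[10]  # step 1: fetch the operands into CPU registers
register_b = memory[11]  #         (plain variables stand in for registers here)

result = register_a + register_b   # step 2: the CPU does the actual arithmetic

memory[12] = result      # step 3: write the result back to memory

print(memory[12])        # 7; memory itself never "added" anything
```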

Third, memory is slow!  Huh?  Didn’t I say in the Building Blocks post and at the beginning of this post that memory lets the CPU get to data and instructions quickly?  As we will see in an upcoming Tuesday, memory is indeed fast compared to other storage.  But as it turns out, CPU speed has increased a lot more than memory speed over the last few decades.  On my Apple II the CPU speed was 1 MHz and the CPU was able to access memory pretty much on every cycle.  But on the MacBook on which I am writing this, the CPU speed is 2.4 GHz and the CPU often has to wait hundreds of cycles to get data from or into main memory.
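
To put rough numbers on that gap, assume it takes on the order of 100 nanoseconds to get data from main memory (a commonly cited ballpark, not a measurement of my machine).  At 1 MHz that is a fraction of a cycle; at 2.4 GHz it is hundreds of cycles:

```python
# Back-of-the-envelope: how many CPU cycles pass during one trip to main memory?
# The ~100 ns latency is an assumed ballpark, not a measured value.
memory_latency_seconds = 100e-9

for label, clock_hz in [("Apple II at 1 MHz", 1e6), ("MacBook at 2.4 GHz", 2.4e9)]:
    cycles_waiting = memory_latency_seconds * clock_hz
    print(f"{label}: about {cycles_waiting:g} cycles per memory access")

# Apple II at 1 MHz: about 0.1 cycles per memory access
# MacBook at 2.4 GHz: about 240 cycles per memory access
```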

Each of these three points about main memory – that it is dumb, lazy and slow – will give rise to subsequent Tech Tuesday posts!
