Hi, I wanted to make a small addition about the two architectures, Harvard and Von Neumann.
In the post where I introduced x86 assembly, I had already described CISC, RISC, and the two architectures named in the title.
Well, long story short, let's just start with the Von Neumann architecture.
The architecture is named after John von Neumann, who described it in 1945. It is a reference model in which data and program share a common memory. This architecture, or model, was developed at a time when computers were built for specific purposes and were programmable only with punched cards or tapes. It was considered very revolutionary back then, because now different programs could run on the same machine. To this day, the design of computer systems is based on this architecture, even though the classic Von Neumann computer no longer exists; I'll explain why later 😄.
The classic Von Neumann architecture divides the computer into five areas:
- Arithmetic Logic Unit (ALU): The arithmetic unit that executes the operations
- Control unit: Interprets the program's instructions and controls the order in which they are executed
- Memory: Stores both programs and data
- Input/Output: Keyboard, mouse, monitor, etc.
- Bus: Connects all the components with one another
It is not only the shared memory that makes this architecture special, but also its cycle, i.e. the way instructions and data are processed one at a time.
This cycle consists of five steps:
- FETCH: The next instruction is loaded from memory and the program counter is incremented.
- DECODE: The instruction is interpreted by the control unit and prepared for the arithmetic unit.
- FETCH OPERANDS: The operands are loaded from memory. The operands are the values or data the instruction is to work on.
- EXECUTE: The decoded instruction is executed on the operands by the arithmetic unit.
- WRITEBACK: The result of the operation is written back to memory.
I have created a small Quick & Dirty animation for the cycle.
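To make the five areas and the cycle a bit more concrete, here is a minimal sketch in C, a toy machine I made up purely for illustration (the instruction format and opcodes are invented; real hardware does not look like this): a single array plays the role of the shared memory, holding program and data side by side, and one loop runs the five steps for each instruction.

```c
#include <stdio.h>

/* Toy Von Neumann machine: ONE shared memory for program AND data.
 * Made-up instruction format: opcode, operand address, target address. */
enum { HALT = 0, ADD1 = 1 };   /* ADD1: mem[target] = mem[addr] + 1 */

int main(void) {
    int mem[16] = {
        ADD1, 8, 8,            /* program: mem[8] = mem[8] + 1        */
        ADD1, 8, 9,            /* then:    mem[9] = mem[8] + 1        */
        HALT,
        0,
        41, 0                  /* data lives in the SAME memory       */
    };
    int pc = 0;                /* program counter */

    for (;;) {
        int opcode = mem[pc];              /* FETCH: load next instruction */
        int addr   = mem[pc + 1];
        int target = mem[pc + 2];
        pc += 3;                           /* ...and advance the counter   */

        if (opcode == HALT) break;         /* DECODE: interpret the opcode */

        int operand = mem[addr];           /* FETCH OPERANDS               */
        int result  = operand + 1;         /* EXECUTE: the "ALU" works     */
        mem[target] = result;              /* WRITEBACK: result to memory  */
    }

    printf("mem[8]=%d mem[9]=%d\n", mem[8], mem[9]);   /* 42 and 43 */
    return 0;
}
```

If you compile and run it, the program has modified data that lives in the very same memory the program itself is stored in, which is exactly the Von Neumann idea (and also why a buggy program can overwrite itself).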
With this architecture we have a small disadvantage: a single step of the cycle can take several clock cycles. Once the cycle has run through, it starts again from the beginning with the next instruction.
Unfortunately, this architecture has another disadvantage, namely a bottleneck. The bottleneck is that data and program travel over a common bus, so they have to share its bandwidth. And since the bus is slower than the processor, the processor often has to wait; if, say, the processor could execute one instruction per cycle but the bus needs three cycles to deliver it, the processor would sit idle two thirds of the time. In the past this bottleneck was not noticeable, because the processor used to be the slowest component in a computer. From the 1990s onward, as processors became faster and faster, the bottleneck became more and more apparent.
Advantages | Disadvantages
---|---
Cost-effective, as few components are needed | Executing an instruction can take more than one clock cycle
Programs and data fit together in one memory | Shared bus: data and program have to share its bandwidth
Very flexible, since memory can be divided freely between program and data | The bus is slower than the CPU
In addition to the Von Neumann architecture, there is also the Harvard architecture. It was developed in a collaboration between IBM and Harvard University (for the Mark I computer). Its special feature is that programs and data are stored in separate memories, so programs and data can be loaded and processed at the same time. RISC processors were often used with this architecture; since they only have simple instructions, they have the advantage that one instruction can be processed per clock cycle.
The Harvard architecture looks like the one shown in the picture.
Here we see that there are two separate memories, one for data and one for programs. Each has its own address and data bus.
Here, too, we have a cycle. It consists of only four steps, because the separate operand fetch can be dropped: the instruction and its data can be loaded at the same time over their own buses. The data bus to the I/O can be ignored here; I will come back to it later.
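For contrast with the Von Neumann sketch above, here is the same made-up toy machine in Harvard style: program and data now live in two separate arrays with independent address spaces, so address 8 in the data memory has nothing to do with address 8 in the program memory, and in real hardware the instruction fetch and the data access can overlap because each memory has its own buses.

```c
#include <stdio.h>

enum { HALT = 0, ADD1 = 1 };   /* same made-up instruction set as above */

int main(void) {
    /* TWO separate memories with independent address spaces. */
    const int prog_mem[] = { ADD1, 8, 8, ADD1, 8, 9, HALT, 0, 0 };
    int data_mem[16]     = { [8] = 41 };
    int pc = 0;

    for (;;) {
        int opcode = prog_mem[pc];     /* instruction fetch: program bus */
        int addr   = prog_mem[pc + 1];
        int target = prog_mem[pc + 2];
        pc += 3;

        if (opcode == HALT) break;
        /* Data access goes over the data bus; in hardware this can
         * overlap with the next instruction fetch. */
        data_mem[target] = data_mem[addr] + 1;
    }

    printf("%d %d\n", data_mem[8], data_mem[9]);   /* 42 and 43 */
    return 0;
}
```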
The advantage of separating data and programs is that program code cannot be overwritten by faulty software. That does not mean, however, that no buffer or stack overflows can be generated and exploited, but that is another story 😄.
But this architecture also has a big disadvantage: it is simply too expensive. Why too expensive? Quite simply, roughly twice as many components are needed as with the Von Neumann architecture (two memories, two sets of buses). Another disadvantage relates to the RISC processors that were often used: because the instructions are so simple, more effort is required and more code has to be written. And there is one more drawback: free program memory cannot be used for data, nor free data memory for programs.
Advantages | Disadvantages
---|---
Data is separated from the program | Expensive to produce
Data and program can be loaded simultaneously | Free program memory cannot be used for data
One instruction per clock cycle is possible | Free data memory cannot be used for programs
Program code is not overwritten by software errors | 
Where can we find the two architectures today? We still find the Von Neumann architecture in our PCs. The funny thing is that the PC is not a full-fledged Von Neumann computer, because we also have to look at the CPU: every CPU you can buy today has an L1 cache, and that cache keeps program and data separate (an instruction cache and a data cache), which is the Harvard architecture. So laptops, PCs, and servers actually have a hybrid architecture. That is also why the Harvard animation had a data bus to the I/O. Today, caches are used to compensate for the Von Neumann bottleneck to some extent. Otherwise, Von Neumann is still present in the rest of the system.
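You can see this split on your own machine. On Linux (my assumption here; other systems expose this differently) the kernel publishes the cache layout under /sys/devices/system/cpu/. Here is a small sketch that prints the level and type of each cache of CPU 0; on a typical x86 machine it shows a split L1 (one "Data" and one "Instruction" entry, Harvard-style) and unified L2/L3 (Von Neumann-style):

```c
#include <stdio.h>

/* Prints the cache hierarchy of CPU 0 as reported by the Linux kernel. */
int main(void) {
    char path[128], level[16], type[32];

    for (int i = 0; ; i++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", i);
        FILE *f = fopen(path, "r");
        if (!f) break;                    /* no more cache indices */
        fscanf(f, "%15s", level);
        fclose(f);

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu0/cache/index%d/type", i);
        f = fopen(path, "r");
        if (!f) break;
        fscanf(f, "%31s", type);
        fclose(f);

        printf("L%s %s\n", level, type);  /* e.g. "L1 Data", "L1 Instruction" */
    }
    return 0;
}
```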
The classic Harvard architecture can be found in microcontrollers, e.g. the ATmega8/Arduino, PICmicro, or PICAXE. Even ARM (ARM9 and newer) has a Harvard architecture: during the development of their processors, ARM switched from the Von Neumann architecture to the Harvard architecture with the ARM9. DSPs are also built according to the Harvard architecture.
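On such a microcontroller you can even see the separate address spaces directly in C. A small sketch for avr-gcc (assuming an ATmega8 and the avr-libc pgmspace API): a constant marked PROGMEM is placed in flash, i.e. in program memory, and a normal pointer dereference would not find it, because ordinary pointers address the data memory. You have to fetch it explicitly with pgm_read_byte(), which compiles to the AVR's LPM (load from program memory) instruction.

```c
#include <avr/pgmspace.h>

/* Stored in flash (program memory), not in SRAM (data memory). */
const char message[] PROGMEM = "Hello Harvard";

char first_byte(void) {
    /* A plain message[0] would read from the DATA address space at the
     * same numeric address and return garbage: flash and SRAM are
     * separate address spaces with their own buses. pgm_read_byte()
     * emits LPM, which reads from the PROGRAM memory instead. */
    return pgm_read_byte(&message[0]);
}
```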
I hope I was able to provide some clarity on the two architectures. Until next time 😄