Kernel
In computer science, the kernel is the core component of most operating systems: the piece of software responsible for managing communication between hardware and software components.
As a basic component of an operating system, a kernel provides abstraction layers for hardware, especially for memory, processors and communication between hardware and software. It also provides software facilities such as process abstractions and makes interprocess communication easier.
ROM
Modern semiconductor ROMs typically take the shape of IC (integrated circuit) packages, i.e. “computer chips”, not immediately distinguishable from other chips such as RAMs except by the text printed on them. “ROM” in its strictest sense can only be read from, but all ROMs allow data to be written into them at least once, either during initial manufacturing or during a step called “programming”. Some ROMs can be erased and re-programmed multiple times, although they are still referred to as “read only” because the reprogramming process involves relatively infrequent, complete erasure and reprogramming, not the frequent, bit- or word-at-a-time updating that is possible with RAM (random access memory).
Just after a computer has been turned on, it doesn’t have an operating system in memory. The computer’s hardware alone cannot perform complex actions of the operating system, such as loading a program from disk, so a seemingly irresolvable paradox arises: to load the operating system into memory, one appears to need an operating system already loaded. The paradox is resolved by a small bootstrap program stored in ROM, which the hardware runs automatically at power-on and which in turn loads the operating system.
Cache
A simple definition of a cache: a temporary storage area where frequently accessed data can be stored for rapid access.
A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
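The relationship between hit rate and average latency can be written down directly. A minimal sketch follows; the latency figures are illustrative, not from the text:

```python
def average_access_time(hit_rate, cache_latency_ns, memory_latency_ns):
    """Average memory access time: hits are served at cache speed,
    while misses pay the full main-memory latency."""
    return hit_rate * cache_latency_ns + (1 - hit_rate) * memory_latency_ns

# With a 95% hit rate, a 1 ns cache and 100 ns main memory, the
# average access time stays far closer to the cache's latency
# than to main memory's:
print(average_access_time(0.95, 1, 100))  # ≈ 5.95 ns
```

Even a modest miss rate dominates the average, which is why hit rate matters so much.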
When the processor wishes to read or write a location in main memory, it first checks whether that memory location is in the cache. This is accomplished by comparing the address of the memory location to all tags in the cache that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred; otherwise, it is a cache miss. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. The proportion of accesses that result in a cache hit is known as the hit rate, and is a measure of the effectiveness of the cache.
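The tag comparison and hit-rate bookkeeping described above can be sketched with a toy direct-mapped cache. This is a deliberate simplification: real caches have multiple ways, multi-byte lines, and write policies.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each address maps to exactly one line,
    and a stored tag records which block of memory that line holds."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.tags = [None] * num_lines  # one tag per cache line
        self.hits = 0
        self.accesses = 0

    def access(self, address):
        self.accesses += 1
        index = address % self.num_lines   # which cache line to check
        tag = address // self.num_lines    # which memory block it would hold
        if self.tags[index] == tag:        # tag match -> cache hit
            self.hits += 1
            return True
        self.tags[index] = tag             # miss: fill the line
        return False

    def hit_rate(self):
        return self.hits / self.accesses

cache = DirectMappedCache(num_lines=4)
for addr in [0, 1, 2, 0, 1, 2, 8]:  # repeated addresses hit; 8 evicts line 0
    cache.access(addr)
print(cache.hit_rate())  # 3 hits out of 7 accesses
```

The repeated accesses to addresses 0, 1, and 2 hit, while address 8 maps to the same line as 0 and evicts it: exactly the locality behavior the hit rate measures.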
Motherboard
A motherboard is a printed circuit board used in a personal computer. It is also known as the mainboard and occasionally abbreviated to mobo or MB. The term mainboard is also used for the main circuit board in this and other electronic devices.
A typical motherboard provides attachment points for one or more of the following: CPU, graphics card, sound card, hard disk controller, memory (RAM), and external peripheral devices. The connectors for external peripherals are nearly always color coded according to the PC 99 specification.
Heatsink
A heat sink is an environment or object capable of absorbing heat from another object with which it is in thermal contact (either direct contact or radiative “contact”).
In common use, it is a metal device brought into contact with the hot surface of an electronic component, such as a microprocessor chip or a power-handling semiconductor, in order to reduce its temperature through increased thermal mass and heat dissipation (primarily by conduction and convection and, to a lesser extent, by radiation). In most cases, some kind of thermal interface material is placed between the heat sink and the heat source to increase thermal throughput. Heat sinks are widely used in electronics, and have become almost essential to modern integrated circuits such as microprocessors, DSPs, and GPUs.
Heat sinks are commonly made of a good thermal conductor such as copper or aluminum alloy. Copper is significantly more expensive than aluminum but is also a better thermal conductor. Aluminum has the significant advantage that it can be easily formed by extrusion, making complex cross-sections possible. The contact surface of a heat sink must be flat and smooth in order to ensure the best thermal contact with the object to be cooled. Sometimes a thermally conductive grease is employed to ensure the best thermal contact; such greases often contain colloidal silver, an even better thermal conductor than copper.
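The cooling effect of the conduction path described above is commonly quantified with a simple thermal-resistance model, which the text does not state explicitly; the wattage and °C/W figures below are illustrative assumptions, not specifications:

```python
def junction_temp(ambient_c, power_w, r_sink_c_per_w, r_interface_c_per_w=0.5):
    """Steady-state device temperature under the thermal-resistance model:
    each degree-Celsius-per-watt of resistance in the heat path adds that
    many degrees per watt of power dissipated."""
    return ambient_c + power_w * (r_interface_c_per_w + r_sink_c_per_w)

# A hypothetical 65 W chip on a 0.5 degC/W heat sink, with 0.5 degC/W of
# thermal interface material, in 25 degC ambient air:
print(junction_temp(25, 65, 0.5))  # 25 + 65 * 1.0 = 90.0 degC
```

This is also why the interface material and the flatness of the contact surface matter: they set the `r_interface` term, which adds directly to the heat sink's own resistance.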
Hertz Myth
The megahertz myth, or less commonly the gigahertz myth, is the error of using clock rate alone to compare the performance of different microprocessors. While clock rates are a valid way of comparing the performance of different speeds of the same model and type of processor, other factors such as pipelines and instruction sets can greatly affect the performance when comparing different processors. For example, one processor may take one clock cycle to add two numbers and another clock cycle to multiply by a third number, whereas another processor can do the same calculation in one clock cycle. Another example would be a dual-core processor, which could theoretically equal the performance of a similarly designed single-core processor operating at double the clock rate. Comparisons between different types of processors are difficult because performance varies depending on the type of task.
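The point about per-cycle work can be made concrete with back-of-the-envelope throughput numbers; the clock rates and instructions-per-cycle figures below are hypothetical, chosen only to illustrate the myth:

```python
def throughput_gips(clock_ghz, instructions_per_cycle):
    """Billions of instructions per second: clock rate alone says
    nothing without the average work completed per cycle."""
    return clock_ghz * instructions_per_cycle

# A hypothetical deep-pipeline chip at 3.5 GHz averaging 1 instruction
# per cycle, versus a wider 2.0 GHz chip averaging 2 per cycle:
deep_pipeline = throughput_gips(3.5, 1.0)  # 3.5 GIPS
wide_core = throughput_gips(2.0, 2.0)      # 4.0 GIPS
print(wide_core > deep_pipeline)  # the slower-clocked chip does more work
```

The comparison only holds for workloads where the wider chip actually sustains its higher per-cycle rate, which is exactly why cross-processor comparisons depend on the task.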
In November 2000 Intel’s heavily advertised advances in clock speed reached an extreme with the release of the Pentium 4, which sacrificed per-cycle performance and used a deep instruction pipeline to achieve very high clock speeds, ignoring the problems of heat production and power consumption that this introduced.
The power-hungry, hot-running Pentium 4 was unsuitable for laptops, and in March 2003 Intel addressed these difficulties with the successful Pentium M, which proved capable of matching the Pentium 4 on performance at much lower clock rates. In 2004, problems of overheating led Intel to abandon further development of its Pentium 4 experiment in high clock speeds. Instead, Intel focused its future plans on the Pentium M architecture, which by then incorporated RISC techniques to the point that the distinction between RISC and CISC had become largely meaningless. The IBM PowerPC G5 also proved unsuitable for laptops, and in 2005 Apple announced that over the following year Macintosh computers would switch to Intel CPUs developed from the Pentium M. The megahertz myth was effectively over, but passionate arguments by apologists for both sides still continue.
Ironically, Intel now has to dig itself out of a marketing hole it created for itself when it released the Pentium 4. Its new generation of chips, the Intel Core, runs at clock speeds of around 2 GHz. While the Core line is a breakthrough in terms of performance per watt, its low clock speed compared to late-generation Pentium 4s (rated at upwards of 3.5 GHz) is likely to cause some marketing confusion. Intel is now in the position of trying to sell consumers processors with lower gigahertz ratings, having spent the better part of the last five years telling consumers that a slower clock speed denotes inferiority.
This can also cause problems for third-party manufacturers. For example, Panasonic lists a Pentium 4-based machine running at 3 GHz as the minimum system requirement for its soon-to-be-released Blu-ray Disc drives. A 1.8 GHz dual-core processor is significantly faster than that Pentium 4, but to a naive consumer reading specifications on the side of a box the requirement is completely confusing.
Moore’s law is the empirical observation that the complexity of integrated circuits, with respect to minimum component cost, doubles every 24 months[1].
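Because the doubling compounds, the observation is easy to turn into a projection. A minimal sketch; the starting transistor count below is made up for illustration:

```python
def projected_complexity(initial_count, years, doubling_period_years=2.0):
    """Moore's law as a compounding rule: complexity doubles once every
    doubling period (24 months in the formulation above)."""
    return initial_count * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 1 million transistors, ten years of
# doubling every 24 months means five doublings, i.e. a factor of 32:
print(projected_complexity(1_000_000, 10))  # 32,000,000.0
```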
Wirth’s law: Software is decelerating faster than hardware is accelerating.
Hardware is clearly getting faster over time, and some of that development is quantified by Moore’s law; Wirth’s law points out that this does not imply that work is actually getting done faster. Programs tend to get bigger and more complicated over time, and sometimes programmers even rely on Moore’s law to justify writing slow code, thinking that it won’t be a problem because the hardware will get faster anyway.
As an example of Wirth’s law, the time it takes to boot a modern PC with a modern operating system is usually no shorter than the time it took to boot a PC five or ten years ago, despite enormously faster hardware.