Introduction to CPU
CPU is the abbreviation of Central Processing Unit. It may also be referred to as a microprocessor, but it is usually just called the processor. Don't let these interchangeable names obscure its role: the CPU is the core of the computer, as important to it as the brain is to a human, because it is responsible for processing and computing all the data inside the machine, while the motherboard chipset is more like the heart, controlling the exchange of data. The type of CPU determines the operating system and the corresponding software you can use. The CPU consists mainly of the arithmetic unit, the controller, the register file and the internal bus. It is the core of the PC: together with the memory, the input/output interfaces and the system bus, it forms a complete personal computer. The register file stores operands and the intermediate results produced as instructions execute, and the arithmetic unit carries out the calculations and operations that the instructions specify.
CPU performance indicators
1. Main frequency
The main frequency, also called the clock frequency and measured in MHz (or GHz), indicates the speed at which the CPU operates and processes data. CPU main frequency = base frequency (external clock) × multiplier. Many people think the main frequency determines the running speed of the CPU. This view is not only one-sided; for servers it is even more misleading. So far there is no fixed formula relating the main frequency to actual computing speed, and even the two major processor manufacturers, Intel and AMD, dispute the point. From Intel's product roadmap it is clear that Intel attaches great importance to raising its main frequencies. Other processor manufacturers take a different view: someone once compared a 1 GHz Transmeta processor and found its operating efficiency equivalent to that of a 2 GHz Intel processor.
Therefore the main frequency is not directly tied to the CPU's actual computing power; it only indicates how fast the digital pulse signal inside the CPU oscillates. You can see examples of this in Intel's own products: a 1 GHz Itanium chip can run almost as fast as a 2.66 GHz Xeon/Opteron, and a 1.5 GHz Itanium 2 is about as fast as a 4 GHz Xeon/Opteron. The CPU's computing speed also depends on performance indicators such as its pipeline and bus.
The main frequency is related to actual computing speed, but it is only one aspect of CPU performance and does not represent the CPU's overall performance.
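As a quick illustration of the formula above, here is a minimal C sketch that derives a main frequency from a base frequency and a multiplier. The figures are hypothetical example values, not the specification of any particular CPU.

```c
/* Minimal sketch: how the main frequency is derived from the base
 * frequency (external clock) and the multiplier.  The figures below
 * are hypothetical examples, not measurements of any specific CPU. */
#include <stdio.h>

int main(void)
{
    double base_mhz   = 200.0;  /* hypothetical base frequency (external clock), MHz */
    double multiplier = 14.0;   /* hypothetical multiplier */

    double main_freq_mhz = base_mhz * multiplier;   /* main frequency = base x multiplier */

    printf("Main frequency: %.0f MHz (%.2f GHz)\n",
           main_freq_mhz, main_freq_mhz / 1000.0);
    /* Note: as the text stresses, a higher main frequency does not by
     * itself mean a faster CPU; pipeline, bus and cache also matter. */
    return 0;
}
```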
2. Base frequency (external clock)
The base frequency, or external clock, is the CPU's reference frequency, measured in MHz. It determines the running speed of the whole motherboard. In layman's terms, overclocking a desktop computer means raising the CPU's base frequency (under normal circumstances the multiplier is locked), which is easy enough to understand. For server CPUs, however, overclocking is out of the question. As mentioned, the base frequency sets the speed of the motherboard, and the two run synchronously; if a server CPU is overclocked by changing the base frequency, asynchronous operation results (many desktop motherboards support asynchronous operation), and the whole server system becomes unstable.
In most current computer systems the external clock and the front-side bus do not run at the same speed, so the base frequency and the FSB frequency are easily confused. The introduction to the front-side bus below explains the difference between the two.
3. Front-side bus (FSB) frequency
The front-side bus (FSB) frequency (i.e. the bus frequency) directly affects the speed of data exchange between the CPU and memory. It can be calculated with the formula: data bandwidth = (bus frequency × data bit width) / 8. The maximum data transmission bandwidth depends on the width and the transmission frequency of all the data transmitted simultaneously. For example, the 64-bit Xeon Nocona has an 800 MHz front-side bus; according to the formula, its maximum data transmission bandwidth is 6.4 GB/s.
The difference between the base frequency and the FSB frequency: the FSB frequency refers to the speed of data transmission, whereas the base frequency is the reference clock at which the CPU and the motherboard run synchronously.
In other words, a 100 MHz base frequency means the digital pulse signal oscillates 100 million times per second, while a 100 MHz front-side bus refers to the amount of data the CPU can accept each second: 100 MHz × 64 bit ÷ 8 bit/Byte = 800 MB/s.
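The two calculations above follow directly from the bandwidth formula. The following C sketch simply evaluates bandwidth = (bus frequency × data width) / 8 for the 100 MHz and 800 MHz cases quoted in the text.

```c
/* Minimal sketch of the bandwidth formula from the text:
 * bandwidth = (bus frequency x data width) / 8.
 * The two cases below reproduce the article's own numbers. */
#include <stdio.h>

static double bandwidth_mb_per_s(double bus_freq_mhz, int width_bits)
{
    return bus_freq_mhz * width_bits / 8.0;   /* MB/s, since MHz = 10^6 cycles/s */
}

int main(void)
{
    /* 100 MHz front-side bus, 64 bits wide: 100 x 64 / 8 = 800 MB/s */
    printf("100 MHz x 64 bit : %.0f MB/s\n", bandwidth_mb_per_s(100.0, 64));

    /* 800 MHz front-side bus (Xeon Nocona), 64 bits wide: 6400 MB/s = 6.4 GB/s */
    printf("800 MHz x 64 bit : %.1f GB/s\n", bandwidth_mb_per_s(800.0, 64) / 1000.0);
    return 0;
}
```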
In fact, the emergence of the HyperTransport architecture has changed what the front-side bus (FSB) frequency means in practice. The IA-32 architecture requires three important components: the Memory Controller Hub (MCH), the I/O Controller Hub and the PCI Hub. Intel's typical chipsets, such as the Intel 7501 and Intel 7505 made for dual Xeon processors, contain an MCH that provides the CPU with a 533 MHz front-side bus; with DDR memory, the front-side bus bandwidth can reach 4.3 GB/s. But as processor performance keeps improving, this arrangement creates problems for the system architecture. The HyperTransport architecture not only solves those problems but also raises bus bandwidth more effectively. AMD Opteron processors, for example, use the flexible HyperTransport I/O bus architecture and integrate the memory controller, so the processor exchanges data with memory directly instead of passing through the system bus to the chipset. In that case there is no front-side bus frequency to speak of for AMD Opteron processors in the traditional sense.
4. CPU bits and word length
Bits: digital circuits and computer technology use binary, whose codes are only "0" and "1"; each "0" or "1" handled by the CPU is one bit.
Word length: in computer technology, the number of binary digits the CPU can process at one time is called the word length. A CPU that can process 8-bit data at a time is therefore called an 8-bit CPU; likewise, a 32-bit CPU can process 32 bits of binary data at a time. The difference between a byte and the word length: since common English characters can be represented with 8 binary bits, 8 bits are usually called a byte. The word length, however, is not fixed and differs from CPU to CPU. An 8-bit CPU can process only one byte at a time, a 32-bit CPU can process 4 bytes at a time, and a 64-bit CPU can process 8 bytes at a time.
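A small C sketch can show the byte counts mentioned above on whatever machine it is compiled for; the exact sizes printed depend on the compiler and platform, so treat the output as illustrative.

```c
/* Minimal sketch: reporting the sizes of a few basic types on the machine
 * the code runs on.  On a typical 32-bit CPU a pointer is 4 bytes (32 bits),
 * on a 64-bit CPU 8 bytes (64 bits), matching the byte counts in the text. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char  : %zu bytes (%zu bits)\n", sizeof(char),   sizeof(char)   * CHAR_BIT);
    printf("int   : %zu bytes (%zu bits)\n", sizeof(int),    sizeof(int)    * CHAR_BIT);
    printf("void *: %zu bytes (%zu bits)\n", sizeof(void *), sizeof(void *) * CHAR_BIT);
    return 0;
}
```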
5. Multiplier coefficient
The multiplier is the ratio between the CPU's main frequency and the base frequency. At the same base frequency, a higher multiplier means a higher CPU frequency. In practice, however, a high multiplier by itself means little, because the data transmission speed between the CPU and the rest of the system is limited: blindly pursuing a high main frequency through a high multiplier produces an obvious "bottleneck", where the speed at which the CPU can get data from the system cannot keep up with its computing speed. Generally speaking, apart from engineering samples, Intel CPUs have locked multipliers; a small number, such as the Core-based Pentium Dual-Core E6500K and some Extreme Edition CPUs, are unlocked. AMD did not lock multipliers in the past, and now offers "black box" editions (multiplier-unlocked CPUs whose multiplier the user can adjust freely; overclocking by raising the multiplier is much more stable than overclocking by raising the base frequency).
6. Cache
Cache size is another important indicator of a CPU, and the cache's structure and size have a large effect on CPU speed. The cache runs at an extremely high frequency, generally the same frequency as the processor, and works far more efficiently than system memory or the hard disk. In real workloads the CPU often needs to read the same block of data repeatedly, so a larger cache greatly raises the hit rate for data already inside the CPU and avoids trips to memory or disk, improving system performance. However, because of factors such as chip area and cost, the cache is kept very small.
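To make the effect of cache hits concrete, here is a minimal C sketch that sums the same two-dimensional array twice: once row by row (consecutive addresses, cache friendly) and once column by column (large strides, frequent cache misses). The matrix size is an arbitrary example value, and the measured gap will vary with the CPU and its cache sizes.

```c
/* Minimal sketch of why cache hits matter: the same 2-D array is summed
 * twice, once row by row (sequential addresses) and once column by column
 * (strided addresses that miss the cache far more often). */
#include <stdio.h>
#include <time.h>

#define N 2048
static double a[N][N];

int main(void)
{
    double sum = 0.0;
    clock_t t0, t1;

    t0 = clock();
    for (int i = 0; i < N; i++)           /* row-major order: consecutive  */
        for (int j = 0; j < N; j++)       /* elements share cache lines    */
            sum += a[i][j];
    t1 = clock();
    printf("row-major    : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < N; j++)           /* column-major order: each access */
        for (int i = 0; i < N; i++)       /* jumps N*8 bytes, missing cache  */
            sum += a[i][j];
    t1 = clock();
    printf("column-major : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return sum == 0.0 ? 0 : 1;            /* use sum so the loops are not removed */
}
```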
L1 Cache (level 1 cache) is the CPU's first-level cache, split into a data cache and an instruction cache. The capacity and structure of the built-in L1 cache have a considerable impact on CPU performance. However, cache memory is built from static RAM and has a complicated structure, so with the constraint that the CPU die cannot be too large, the L1 cache cannot be made very big.
The capacity of the L1 cache of a general server CPU is usually 32-256KB.
L2 Cache (level 2 cache) is the CPU's second-level cache, divided into on-chip and off-chip varieties. On-chip L2 cache runs at the same speed as the main frequency, while off-chip L2 cache runs at only half the main frequency. L2 capacity also affects CPU performance, and the rule is the bigger the better. Home CPUs used to top out at 512 KB, laptop CPUs can now reach 2 MB, and the L2 cache of server and workstation CPUs is higher still, reaching 8 MB or more.
L3 Cache (level 3 cache) comes in two forms: early L3 was external, while current L3 is built in. Its practical effect is that it further reduces memory latency and improves processor performance on large data sets. Reduced memory latency and better large-data computing help games, and in the server field adding L3 cache still brings a significant performance improvement: a configuration with more L3 cache uses physical memory more efficiently, so it can serve more data requests than a slower disk I/O subsystem, and processors with larger L3 caches provide more efficient file-system caching and shorter message and processor queue lengths.
In fact, the earliest L3 cache appeared on AMD's K6-III processor; limited by the manufacturing process of the time, it was not integrated into the chip but placed on the motherboard. Running only at the system bus frequency, that L3 cache was not much different from main memory. Later, L3 cache was used on Intel's Itanium processor for the server market, then on the P4EE and Xeon MP. Intel also planned an Itanium 2 processor with 9 MB of L3 cache and, later, a dual-core Itanium 2 with 24 MB of L3 cache.
Basically, though, L3 cache is not that important for improving processor performance: a Xeon MP with 1 MB of L3 cache is still no match for the Opteron, which shows that raising the front-side bus brings a more effective performance improvement than adding cache.
7. CPU extended instruction set
The CPU relies on instructions to compute and to control the system, and each CPU is designed with an instruction system that matches its hardware circuits. The strength of its instructions is therefore an important indicator of a CPU, and the instruction set is one of the most effective tools for improving microprocessor efficiency. The current mainstream architectures fall into two camps: complex instruction set (CISC) and reduced instruction set (RISC). In terms of specific applications, Intel's MMX (Multi Media eXtensions), SSE, SSE2 (Streaming SIMD Extensions 2), SSE3 and SSE4 series and AMD's 3DNow! are all CPU extended instruction sets, which respectively enhance the CPU's multimedia, graphics and Internet processing capabilities. A CPU's extended instruction sets are usually referred to simply as its "instruction sets". SSE3 is currently the smallest of these sets: MMX contained 57 instructions, SSE 50, SSE2 144 and SSE3 13. SSE4 is currently the most advanced; Intel Core series processors already support it, AMD will add SSE4 support to future dual-core processors, and Transmeta processors will also support it.
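As an illustration of what such an extended instruction set provides, the following C sketch uses SSE intrinsics to add four single-precision floats with a single instruction. It assumes a CPU and compiler with SSE support (for example, gcc with -msse); the intrinsics come from the standard xmmintrin.h header.

```c
/* Minimal sketch of what an extended instruction set buys you: with SSE,
 * one instruction adds four single-precision floats at once. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);      /* load 4 floats into one 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);   /* one SSE instruction: 4 additions in parallel */
    _mm_storeu_ps(c, vc);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```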
8. CPU core and I/O working voltage
Starting from the 586 CPUs, the CPU's working voltage is divided into a core voltage and an I/O voltage. Usually the core voltage is less than or equal to the I/O voltage. The core voltage is determined by the CPU's manufacturing process: generally, the smaller the process, the lower the core operating voltage. I/O voltages are generally 1.6-5 V. Low voltages alleviate excessive power consumption and heat.
9. Manufacturing process
The microns (now nanometers) of the manufacturing process refer to the distance between circuits within the IC. The trend in manufacturing processes is towards higher density.
Higher-density IC designs mean that a chip of the same size can hold denser circuitry with more complex functions. The main processes now are 180 nm, 130 nm, 90 nm, 65 nm and 45 nm, and manufacturers have recently announced 32 nm processes.
10. Instruction set
(1) CISC instruction set
The CISC instruction set, also known as the complex instruction set, takes its name from Complex Instruction Set Computer. In a CISC microprocessor, each program instruction is executed serially in order, and the operations within each instruction are also executed serially in order. Sequential execution keeps the control simple, but it leaves the various parts of the computer under-utilised and execution slow. In practice, CISC means the x86 series (i.e. IA-32 architecture) CPUs produced by Intel and compatible CPUs from AMD, VIA and others; even the new x86-64 (also called AMD64) belongs to the CISC category.
To understand what an instruction set is, we have to start with today's x86-architecture CPUs. The X86 instruction set was developed by Intel specifically for its first 16-bit CPU, the i8086. The CPU in the world's first PC, the i8088 (a simplified i8086) launched by IBM in 1981, also used X86 instructions, and the X87 chip was added to the computer at the same time to improve floating-point processing. From then on, the X86 and X87 instruction sets were collectively referred to as the X86 instruction set.
Although Intel has continued to develop newer CPUs as technology advanced, from the i80386 and i80486 through the PII Xeon, PIII Xeon, Pentium III and Pentium 4 series to today's Core 2 series and Xeon (excluding Xeon Nocona), all the CPUs Intel produces keep using the X86 instruction set so that computers can continue to run the applications developed in the past and inherit that rich software resource. Its CPUs therefore still belong to the x86 series. Since Intel's x86 series and its compatible CPUs (such as the AMD Athlon MP) all use the X86 instruction set, today's huge lineup of x86 and x86-compatible CPUs has formed. x86 CPUs currently mainly comprise Intel's server CPUs and AMD's server CPUs.
(2) RISC instruction set
RISC is the abbreviation of "Reduced Instruction Set Computing". It was developed on the basis of the CISC instruction system. Tests on CISC machines showed that the usage frequency of different instructions varies widely: the most commonly used instructions are relatively simple ones that account for only 20% of the instruction set but 80% of the instructions appearing in programs. A complex instruction system inevitably increases the complexity of the microprocessor, making processor development long and costly, and complex instructions require complex operations, which inevitably slow the computer down. For these reasons, RISC CPUs were born in the 1980s. Compared with CISC CPUs, RISC CPUs not only streamlined the instruction system but also adopted superscalar and super-pipelined structures, greatly increasing parallel processing capability. The RISC instruction set is the development direction of high-performance CPUs and stands in contrast to traditional CISC (complex instruction set). By comparison, RISC has a unified instruction format, fewer instruction types and fewer addressing modes than a complex instruction set, so processing speed is greatly improved. CPUs with this instruction system are commonly used in mid-to-high-end servers; high-end servers in particular all use RISC CPUs, and the RISC instruction system is better suited to UNIX, the operating system of high-end servers (Linux is also a UNIX-like operating system). RISC CPUs are incompatible with Intel and AMD CPUs in both software and hardware.
At present, the CPUs that use RISC instructions in mid-to-high-end servers mainly include the following categories: PowerPC processors, SPARC processors, PA-RISC processors, MIPS processors, and Alpha processors.
(3) IA-64
There has been much debate about whether EPIC (Explicitly Parallel Instruction Computing) is the successor to the RISC and CISC systems; in terms of architecture, it is more like an important step by Intel's processors toward the RISC camp. In theory, a CPU designed around EPIC handles Windows application software much better than Unix-based application software under the same host configuration.
Intel's server CPU using EPIC technology is the Itanium (development codename Merced). It is a 64-bit processor and the first in the IA-64 family. Microsoft has also developed an operating system code-named Win64 to support it on the software side. After Intel decided to move beyond the x86 instruction set, the IA-64 architecture using the EPIC instruction set was born. IA-64 is a great improvement over x86 in many respects: it breaks through many limitations of the traditional IA-32 architecture and achieves breakthrough improvements in data processing capability, system stability, security, usability and manageability.
The biggest flaw of the IA-64 microprocessors is their lack of compatibility with x86. So that the IA-64 processors could better run software from both worlds, Intel introduced an x86-to-IA-64 decoder on them (Itanium, Itanium 2, ...) to translate x86 instructions into IA-64 instructions. This decoder is not the most efficient decoder, nor is it the best way to run x86 code (the best way is to run x86 code directly on an x86 processor), so Itanium and Itanium 2 perform very poorly when running x86 applications. This became the fundamental reason for the emergence of x86-64.
(4) X86-64 (AMD64/EM64T)
Designed by AMD, x86-64 can handle 64-bit integer operations and is compatible with the x86-32 architecture. It supports 64-bit logical addressing while providing an option to fall back to 32-bit addressing; data operation instructions default to 32 and 8 bits, with options for 64 and 16 bits; it supports the general-purpose registers, and a 32-bit operation has its result expanded to a full 64 bits. Instructions are thus distinguished between "direct execution" and "converted execution"; the instruction field is 8 or 32 bits, which keeps the field from becoming too long.
The emergence of x86-64 (also called AMD64) was no accident: the 32-bit addressing space of x86 processors is limited to 4 GB of memory, and IA-64 processors are not compatible with x86. AMD, taking customers' needs fully into account, enhanced the x86 instruction set so that it could also support a 64-bit computing mode, and therefore called its architecture x86-64. Technically, to perform 64-bit operations in the x86-64 architecture, AMD introduced the new general-purpose registers R8-R15 as an extension of the original x86 registers, which are used in 64-bit mode. The original registers such as EAX and EBX were also extended from 32 to 64 bits, and eight new registers were added to the SSE unit to support SSE2. The increase in the number of registers leads to a performance improvement.
At the same time, to support both 32-bit and 64-bit code and registers, the x86-64 architecture lets the processor work in two modes: Long Mode and Legacy Mode. Long Mode is further divided into two sub-modes, 64-bit mode and Compatibility mode. This standard was introduced into AMD's server processors with the Opteron.
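A small C sketch can illustrate the practical effect of 64-bit mode: a 64-bit integer fits in one general-purpose register (RAX, ..., R8-R15), and pointers widen to 8 bytes. The values below are arbitrary examples.

```c
/* Minimal sketch: on an x86-64 CPU in long mode, a 64-bit integer fits in a
 * single general-purpose register, so the multiply below maps to one native
 * instruction; a 32-bit x86 CPU would need several instructions for it. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t big = 0x0123456789ABCDEFULL;   /* needs the full 64-bit word length */
    uint64_t r   = big * 3u;

    printf("value   : %llx\n", (unsigned long long)big);
    printf("result  : %llx\n", (unsigned long long)r);
    printf("pointer : %zu bytes\n", sizeof(void *));  /* 8 on x86-64, 4 on x86-32 */
    return 0;
}
```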
Intel later launched its own 64-bit extension, EM64T. Before it was officially named EM64T it was called IA-32E, the name of Intel's 64-bit extension technology, used to distinguish it from the X86 instruction set. Intel's EM64T supports a 64-bit sub-mode similar to AMD's x86-64: it uses 64-bit linear flat addressing, adds 8 new general-purpose registers (GPRs), and adds 8 registers to support SSE instructions. As with AMD, Intel's 64-bit technology is compatible with IA-32 and IA-32E; IA-32E is used only when running a 64-bit operating system and consists of two sub-modes, a 64-bit sub-mode and a 32-bit sub-mode, which are backward compatible with AMD64. Intel's EM64T will be fully compatible with AMD's x86-64 technology. The Nocona processor has already added some 64-bit technology, and Intel's Pentium 4E processor also supports 64-bit technology.
It should be said that both are 64-bit microprocessor architectures compatible with the x86 instruction set, but there are still some differences between EM64T and AMD64: the NX bit found in AMD64 processors will not be provided in Intel's server processors.
11. Superpipeline and superscalar
Before explaining superpipelining and superscalar, let's first look at the pipeline, which Intel first used in the 486 chip. A pipeline works like an assembly line in industrial production: inside the CPU, an instruction-processing pipeline is formed from 5-6 circuit units with different functions, an x86 instruction is split into 5-6 steps, and these units each execute one step, so that one instruction can be completed per CPU clock cycle, raising the CPU's computing speed. Each integer pipeline of the classic Pentium has four stages, namely instruction prefetch, decode, execute and write-back, while the floating-point pipeline has eight stages.
Superscalar means using multiple built-in pipelines to execute several instructions at the same time; its essence is trading space for time. Superpipelining means refining the pipeline and raising the main frequency so that one or more operations complete per machine cycle; its essence is trading time for space. The Pentium 4's pipeline, for example, is 20 stages long. The more finely the pipeline stages are divided, the faster each stage can finish its share of an instruction, so the design can reach higher operating frequencies. But an overly long pipeline also brings side effects: a CPU with a higher frequency may well have a lower actual computing speed. That is the case with Intel's Pentium 4: although its main frequency can exceed 1.4 GHz, its computing performance falls well short of AMD's 1.2 GHz Athlon and even the Pentium III.
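The benefit of multiple pipelines can be hinted at in C: the first loop below forms a single dependency chain (each addition must wait for the previous one), while the second uses four independent accumulators that a superscalar CPU can advance in parallel. The array size is arbitrary, and the measured difference depends on the compiler and CPU.

```c
/* Minimal sketch of why superscalar width matters: one long dependency chain
 * versus four independent accumulators that parallel pipelines can overlap. */
#include <stdio.h>
#include <time.h>

#define N (1 << 24)
static float data[N];

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = 1.0f;

    clock_t t0 = clock();
    float s = 0.0f;
    for (int i = 0; i < N; i++) s += data[i];          /* one dependency chain */
    clock_t t1 = clock();
    printf("1 accumulator : sum=%.0f  %.3f s\n", s, (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < N; i += 4) {                   /* four independent chains */
        s0 += data[i];     s1 += data[i + 1];
        s2 += data[i + 2]; s3 += data[i + 3];
    }
    t1 = clock();
    printf("4 accumulators: sum=%.0f  %.3f s\n", s0 + s1 + s2 + s3,
           (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}
```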
12. Packaging form
CPU packaging is a protective measure in which the CPU chip or module is encapsulated in a specific material to prevent damage; a CPU generally has to be packaged before it can be delivered to users. The packaging method depends on the CPU's installation form and device integration design. Broadly speaking, CPUs installed in Socket sockets use PGA (Pin Grid Array) packaging, while CPUs installed in Slot x slots use SEC (Single Edge Contact cartridge) packaging. There are also packaging technologies such as PLGA (Plastic Land Grid Array) and OLGA (Organic Land Grid Array). Because of increasingly fierce market competition, the main direction of CPU packaging technology today is cost saving.
13. Multithreading
Simultaneous multithreading, referred to as SMT.
SMT duplicates the architectural state on the processor, letting multiple threads on the same processor execute simultaneously and fully share its execution resources. It maximises wide-issue, out-of-order superscalar processing, improves the utilisation of the processor's computing units, and mitigates memory access delays caused by data dependencies or cache misses. When multiple threads are not available, an SMT processor behaves almost the same as a traditional wide-issue superscalar processor. The most attractive thing about SMT is that it needs only small changes to the processor core design yet can significantly improve performance at almost no extra cost. Multi-threading technology keeps more data ready for the high-speed computing core and reduces its idle time, which is clearly attractive even for low-end desktop systems. Starting from the 3.06 GHz Pentium 4, Intel's processors support SMT technology.
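Here is a minimal sketch of putting multiple hardware threads to work using POSIX threads; the thread count and the busy-work loop are arbitrary example values, and the program assumes a POSIX system (compile with -pthread).

```c
/* Minimal sketch of keeping multiple hardware threads busy: the work is
 * split across a few POSIX threads that run independently. */
#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;
    double local = 0.0;
    for (long i = 0; i < 10 * 1000 * 1000; i++)   /* independent busy work */
        local += (double)i * 0.5;
    printf("thread %ld done (local=%.1f)\n", id, local);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);   /* wait for all workers to finish */
    return 0;
}
```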
14. Multi-core
Multi-core also refers to chip multiprocessors (CMP). CMP was proposed by Stanford University; the idea is to integrate the SMP (symmetric multi-processing) of massively parallel machines onto a single chip, with each processor core executing different processes in parallel. Compared with CMP, the SMT processor structure is more flexible. However, once semiconductor processes reached 0.18 micron, wire delay exceeded gate delay, which requires microprocessor designs to be partitioned into many basic unit structures that are smaller in scale and have better locality. Since the CMP structure is already divided into multiple processor cores, each core is relatively simple and easier to optimise, so CMP has better development prospects. Currently, IBM's Power 4 chip and Sun's MAJC5200 chip both use the CMP structure. Multi-core processors can share cache within the processor, improving cache utilisation and simplifying the design of multi-processor systems.
In the second half of 2005, new processors from Intel and AMD also adopted the CMP structure. The new Itanium processor, code-named Montecito, uses a dual-core design with at least 18 MB of on-chip cache and is manufactured on a 90 nm process; its design is without doubt a challenge to today's chip industry. Each of its cores has independent L1, L2 and L3 caches and contains approximately 1 billion transistors.
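On a multi-core (or SMT) system, software can ask the operating system how many logical processors are available. The sketch below uses sysconf with _SC_NPROCESSORS_ONLN, which is a glibc/Linux extension rather than a portable guarantee.

```c
/* Minimal sketch: asking the operating system how many logical processors
 * (cores, or hardware threads with SMT) are online.  _SC_NPROCESSORS_ONLN
 * is a glibc/Linux extension, so this is not strictly portable. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1) {
        perror("sysconf");
        return 1;
    }
    printf("logical processors online: %ld\n", n);
    return 0;
}
```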
15. SMP
SMP (Symmetric Multi-Processing) is short for the symmetric multi-processing architecture.
16. NUMA technology
NUMA stands for non-uniform memory access, a distributed shared-memory technology. A NUMA system consists of several independent nodes connected by a high-speed dedicated network, where each node can be a single CPU or an SMP system. There are several solutions for cache coherence in NUMA, which require support from the operating system and from special software. Sequent's NUMA system is one example: three SMP modules are connected by a high-speed dedicated network to form a node, and each node can have 12 CPUs. A system like Sequent's can scale to 64 or even 256 CPUs. Clearly this starts from SMP and then extends it with NUMA technology; it is a combination of the two.
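A hedged sketch of NUMA-aware allocation with libnuma on Linux follows (it assumes the libnuma development package is installed and the program is linked with -lnuma): memory is placed on a chosen node so that the CPUs of that node access it locally rather than over the interconnect.

```c
/* Minimal sketch of NUMA-aware allocation using libnuma (assumes Linux with
 * libnuma available; compile with -lnuma). */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;
    printf("NUMA nodes: %d\n", nodes);

    size_t size = 16 * 1024 * 1024;
    void *buf = numa_alloc_onnode(size, 0);   /* allocate 16 MB on node 0 */
    if (buf == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    /* ... use buf from CPUs attached to node 0 for local access ... */
    numa_free(buf, size);
    return 0;
}
```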
17. Out-of-order execution technology
Out-of-order execution is a technology whereby the CPU allows multiple instructions to be dispatched to the corresponding circuit units out of the order specified by the program. After analysing the state of each circuit unit and whether each instruction can be executed early, the instructions that can be executed early are sent immediately to the appropriate circuit units, not in program order; a reorder unit then rearranges the results from the execution units back into instruction order. The purpose of out-of-order execution is to keep the CPU's internal circuits fully busy and thereby speed up program execution. Branch technology: branch instructions must wait for results before operations can proceed. Unconditional branches simply follow the instruction sequence, while conditional branches must decide whether to continue in the original order based on the processed result.
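Branch prediction can be illustrated with a small C sketch: the same conditional loop is run over data where the branch is always taken (easy to predict) and over pseudo-random data where it is taken about half the time (hard to predict). The array size and timing method are arbitrary choices, and the measured gap varies by CPU.

```c
/* Minimal sketch of why branch prediction matters: the same conditional sum
 * is run over predictable data (branch always taken) and over pseudo-random
 * data (taken roughly half the time, hard to predict). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)
static unsigned char pred[N], rnd[N];

static long conditional_sum(const unsigned char *v)
{
    long sum = 0;
    for (int i = 0; i < N; i++)
        if (v[i] < 128)       /* the conditional branch under test */
            sum += v[i];
    return sum;
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        pred[i] = 1;                        /* always below 128: easy to predict */
        rnd[i]  = (unsigned char)rand();    /* ~50% below 128: hard to predict   */
    }

    clock_t t0 = clock();
    long s1 = conditional_sum(pred);
    clock_t t1 = clock();
    long s2 = conditional_sum(rnd);
    clock_t t2 = clock();

    printf("predictable  : sum=%ld  %.3f s\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("unpredictable: sum=%ld  %.3f s\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```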
18. Memory controller inside the CPU
Manufacturing process: the CPU manufacturing process reached 0.35 micron, the latest PII reached 0.28 micron, and CPU manufacturing processes were expected to reach 0.18 micron.
CPU manufacturers
1. Intel Corporation
Intel is the dominant CPU producer, holding more than 80% of the personal computer market; the CPUs Intel produces have become the de facto technical specification and standard for x86 CPUs. On the personal computer platform the latest Core 2 has become the first choice of CPU, and the next-generation Core i5 and Core i7 take the lead, clearly ahead of other manufacturers' products in performance.
2. AMD Company
The CPUs in use today come from several companies. Apart from Intel, the strongest challenger is AMD; its latest Athlon II X2 and Phenom II offer a very good price/performance ratio, and with 3DNow!+ technology and support for the SSE4 instruction set they perform very well in 3D.
3. IBM and Cyrix
IBM's strength lies in its high-end laboratories and the non-consumer CPUs used in workstations.
After Cyrix merged with National Semiconductor (NS), it finally gained its own chip production line, and its finished products have become increasingly complete. The current MII also performs well, and its price in particular is very low.
4. IDT Company
IDT is a rising star among processor manufacturers, but it is not yet mature.
5. VIA Corporation
VIA is a motherboard chipset manufacturer from Taiwan. It acquired the CPU divisions of the aforementioned Cyrix and IDT and launched its own CPUs.
6. Godson (Loongson)
Godson (GodSon), nicknamed "Gou Sheng", is a general-purpose processor with domestic independent intellectual property rights. It currently has two generations of products and can reach the performance level of the low-end CPUs from Intel and AMD on today's market.
7. ARM Ltd
ARM is one of the few companies that only licenses its CPU designs and does not manufacture chips itself. Embedded application software is most commonly executed on ARM-architecture microprocessors.
8. Freescale Semiconductor
Freescale, formerly part of Motorola, designs a number of embedded and SoC PowerPC processors.
Development History
Everything goes through a process from emergence to growth, and that the CPU has developed to its current scale and achievement makes its history all the more intriguing, above all the product development history of the two CPU giants, Intel and AMD.
[1] Dual-core processors of various brands
Intel
Pentium dual-core:
These are the Pentium D and Pentium 4 EE that use the Presler core; the Presler core can basically be considered simply two Cedar Mill cores loosely coupled together.
Core 1st generation
Adopts Yonah core architecture.
[3] Core 2 generation
Uses the Conroe core (not all models).
"Core" is a new, leading energy-efficient micro-architecture. Its design starting point is to provide outstanding performance and energy efficiency and to improve performance per watt, the so-called energy-efficiency ratio. The early Core parts were based on notebook processors.
Various packaging
A bulk (tray) CPU is just the CPU itself with no retail packaging, and usually carries a one-year store warranty. It is generally supplied by the manufacturer to system builders; units the builders do not use flow into the retail market. Some dealers pair bulk CPUs with fans and package them to look like originals, turning them into repackaged goods.
An original-package CPU, also called a boxed CPU, is the product the manufacturer sells into the retail market, with an original fan and a three-year manufacturer's warranty.
In fact there is no quality difference between bulk and boxed CPUs; the main difference lies in the channel and therefore the warranty. Boxed CPUs generally have a 3-year warranty, while bulk CPUs have only a 1-year warranty. The fan that comes with a boxed CPU is the factory original, whereas a bulk CPU comes without a fan, or with one the dealer supplies.
A black-box CPU is a top-end multiplier-unlocked CPU launched by the manufacturer, such as AMD's black box 500. This type of CPU ships without a fan and is a retail product aimed specifically at overclocking users.
A deep-package CPU, also called a repackaged CPU, is a bulk CPU that the dealer packages and adds a fan to; it has no manufacturer's warranty, only a store warranty, usually three years. Alternatively, CPUs smuggled into the country from abroad are repackaged with a fan added; these are untaxed and slightly cheaper than bulk.