The CPU cache in a modern desktop PC is divided into L1 and L2 levels, and the cache hit rate is an important factor in CPU performance.
When the working set exceeds the cache capacity, capacity misses occur, and these misses cannot be avoided by the cache itself. On each such miss the CPU must go through the translation lookaside buffer (TLB, Translation Lookaside Buffer) of the memory management unit (MMU, Memory Management Unit) inside the CPU to access physical memory. This increases access latency and stalls the pipeline, reducing CPU performance; today's super-pipelined CPUs, with pipelines of around twenty stages, are affected particularly badly.
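As a rough illustration (a toy model, not a description of any real CPU), a fully associative LRU cache simulated in Python shows how a working set only slightly larger than the cache turns every access into a capacity miss, even though the access pattern has perfect reuse:

```python
from collections import OrderedDict

def simulate_lru_cache(accesses, capacity):
    """Count hits and misses for a fully associative LRU cache
    holding `capacity` lines (a simplified teaching model)."""
    cache = OrderedDict()
    hits = misses = 0
    for line in accesses:
        if line in cache:
            hits += 1
            cache.move_to_end(line)          # refresh LRU position
        else:
            misses += 1
            cache[line] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict least recently used line
    return hits, misses

# A working set of 8 lines streamed repeatedly through a 4-line cache:
# LRU evicts each line just before it is needed again.
trace = list(range(8)) * 100
print(simulate_lru_cache(trace, capacity=4))  # → (0, 800): every access misses
print(simulate_lru_cache(trace, capacity=8))  # → (792, 8): only compulsory misses
```

With a cache as large as the working set, only the 8 first-touch (compulsory) misses remain; one line short of that, the same trace misses on every access.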
Raising the cache hit rate by enlarging the CPU's cache capacity is not the ultimate solution to this problem, yet the major manufacturers kept climbing in cache capacity. AMD chose not to follow. Relying on the speed and bandwidth advantages of its advanced EV6 bus, it abandoned the inclusive "parent-child" L1/L2 strategy that Intel had consistently used and, in June 2000, adopted an exclusive second-level cache structure (Exclusive) in the Socket A architecture. L1 and L2 are set as caches of the same level, and the data stored in L1 and L2 is independent and never duplicated, so the actual cache capacity is the sum of L1 and L2. When reading data, the CPU looks up L1 and L2 together, as if it were accessing a single cache.
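The exclusive arrangement can be sketched as a small functional model (sizes and the sequential L1-then-L2 lookup are illustrative simplifications): a line lives in L1 or L2 but never both, an L2 hit moves the line up into L1, and the L1 victim drops down into L2, so the two levels together behave like one cache of combined capacity.

```python
from collections import OrderedDict

class ExclusiveCache:
    """Toy model of an exclusive L1/L2 cache: a line is held in
    L1 OR L2, never both, so effective capacity is L1 + L2."""
    def __init__(self, l1_lines=2, l2_lines=4):
        self.l1 = OrderedDict()   # most recently used lines
        self.l2 = OrderedDict()   # victims evicted from L1
        self.l1_lines, self.l2_lines = l1_lines, l2_lines

    def access(self, line):
        if line in self.l1:                   # L1 hit
            self.l1.move_to_end(line)
            return "L1"
        if line in self.l2:                   # L2 hit: promote line into L1
            del self.l2[line]
            self._fill_l1(line)
            return "L2"
        self._fill_l1(line)                   # miss: load from memory into L1
        return "MEM"

    def _fill_l1(self, line):
        self.l1[line] = True
        if len(self.l1) > self.l1_lines:      # L1 victim drops into L2...
            victim, _ = self.l1.popitem(last=False)
            self.l2[victim] = True
            if len(self.l2) > self.l2_lines:  # ...and L2 evicts its own LRU
                self.l2.popitem(last=False)

c = ExclusiveCache(l1_lines=2, l2_lines=4)
for line in range(6):       # touch 6 distinct lines = L1 + L2 capacity
    c.access(line)
# On the second pass all six lines are still resident across the two levels:
print([c.access(line) for line in range(6)])  # → ['L2', 'L2', 'L2', 'L2', 'L2', 'L2']
```

A six-line working set exactly fills the combined 2 + 4 lines, so the second pass never has to go to memory.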
Intel's CPU cache policy, like AMD's earlier one, sets L1 and L2 as caches of different levels: any data in L1 can also be found in L2, and when reading data the CPU looks in the L1 cache first and then in the L2 cache. AMD's exclusive second-level cache structure (Exclusive) is clearly the more advanced policy, although its implementation method sometimes needs further consideration.
In the fiercely competitive IT industry, facing rivals such as VIA and Transmeta, Intel and AMD will not stand still. As CPUs continue to be updated, new CPU caching strategies are developing alongside them; let us wait and see.