Q: What is the significance of a cache miss?

Best Answer

A cache miss means the cache did not contain the data the processor requested, so the data must be fetched from main memory. Retrieving data from main memory is far more costly than retrieving it from the cache, so the access incurs a "miss penalty": the CPU may waste cycles stalling unless it has other work it can do while it waits. If the data is not in main memory either, a page fault occurs, which is more expensive still, because a great many processor cycles are wasted while the page is retrieved from the hard drive.
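As a rough illustration of the miss penalty, here is a minimal sketch (the buffer sizes, the 64-byte stride, and the access count are all assumptions, not figures from the answer) that performs the same number of memory touches first on a buffer small enough to stay in cache and then on one far too large to fit; the timing gap is the accumulated miss cost, though hardware prefetching can narrow it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL_ACCESSES (1 << 24)   /* 16M touches per run (assumed)      */
#define STRIDE 64                  /* assumed cache-line size in bytes   */

/* Touch TOTAL_ACCESSES bytes, STRIDE apart, wrapping inside a buffer of
 * the given size, and return the elapsed CPU time in seconds. */
static double walk(volatile unsigned char *buf, size_t size) {
    size_t idx = 0;
    clock_t start = clock();
    for (size_t i = 0; i < TOTAL_ACCESSES; i++) {
        buf[idx]++;
        idx += STRIDE;
        if (idx >= size)
            idx -= size;
    }
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    size_t small = 16 * 1024;           /* fits comfortably in L1: mostly hits */
    size_t large = 256 * 1024 * 1024;   /* far larger than any cache: misses   */
    unsigned char *buf = calloc(large, 1);
    if (!buf) return 1;

    printf("small buffer (hits):   %.3f s\n", walk(buf, small));
    printf("large buffer (misses): %.3f s\n", walk(buf, large));
    free(buf);
    return 0;
}
```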

Wiki User ∙ 13y ago
Related questions

What is miss latency?

Miss latency is the time (in cycles) the CPU waits when a miss happens in the cache, i.e., the time needed to bring the data from main memory into the cache.
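The usual way to reason about miss latency is the average-memory-access-time (AMAT) formula; here is a minimal sketch with purely illustrative numbers (the 1-cycle hit time, 5% miss rate, and 35-cycle miss latency are assumptions):

```c
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;   /* cycles to read the cache on a hit          */
    double miss_rate    = 0.05;  /* fraction of accesses that miss             */
    double miss_latency = 35.0;  /* cycles to bring the data from main memory  */

    /* AMAT = hit time + miss rate * miss latency */
    double amat = hit_time + miss_rate * miss_latency;
    printf("average memory access time: %.2f cycles\n", amat);
    return 0;
}
```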


What is a cache hit and a cache miss?

When the CPU refers to memory and finds the requested word in the cache, it is said to be a cache hit. If the word is not found in the cache and must be read from main memory, it counts as a cache miss.
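As a rough sketch of how hit versus miss is decided, the toy direct-mapped cache below splits an address into tag, index, and offset; the line count, line size, and the cache_read helper are assumptions for illustration, not a description of any particular CPU.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LINES 256           /* number of cache lines (assumed)  */
#define LINE_SIZE 64            /* bytes per line (assumed)         */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Returns true on a hit; on a miss, fills the line from main memory first. */
static bool cache_read(uint32_t addr, const uint8_t *main_memory, uint8_t *out) {
    uint32_t offset = addr % LINE_SIZE;
    uint32_t index  = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag    = addr / (LINE_SIZE * NUM_LINES);

    cache_line_t *line = &cache[index];
    bool hit = line->valid && line->tag == tag;

    if (!hit) {                                        /* cache miss */
        memcpy(line->data, &main_memory[addr - offset], LINE_SIZE);
        line->tag   = tag;
        line->valid = true;
    }
    *out = line->data[offset];                         /* word delivered to the CPU */
    return hit;
}

int main(void) {
    static uint8_t memory[1 << 20];   /* pretend main memory */
    uint8_t v;
    printf("first read:  %s\n", cache_read(4096, memory, &v) ? "hit" : "miss");
    printf("second read: %s\n", cache_read(4096, memory, &v) ? "hit" : "miss");
    return 0;
}
```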


What is the objective of cache only memory architecture?

Cache memory is high-speed memory that holds data repeatedly requested by the cache client (the CPU). When the data the CPU requests is present in the cache, the cache supplies it directly, which is a cache hit (fast); when the data is not in the cache, the cache must fetch the containing block from main memory and feed it to the CPU, which is a cache miss (slow). In a cache-only memory architecture (COMA), the objective is to treat all of main memory as cache, so data can migrate to the node that is using it and more requests are served locally.


What is cache miss penalty?

The additional time required because of a miss; for main memory it is generally around 30-40 cycles.
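For a sense of scale, the total cost is usually estimated as memory accesses × miss rate × miss penalty; the sketch below just plugs in illustrative numbers (the access count and the 2% miss rate are assumptions, the 35-cycle penalty is in the range quoted above):

```c
#include <stdio.h>

int main(void) {
    double accesses     = 1e6;   /* memory accesses executed               */
    double miss_rate    = 0.02;  /* 2% of accesses miss in the cache       */
    double miss_penalty = 35.0;  /* cycles per miss, within the 30-40 range */

    double stall_cycles = accesses * miss_rate * miss_penalty;
    printf("cycles lost to cache misses: %.0f\n", stall_cycles);
    return 0;
}
```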


What is the difference between cache vs cold cache vs hot cache vs warm cache vs cache hit vs cache miss?

Firstly, it sounds like you are asking for general definitions rather than differential ones, which is tricky because these definitions are context-specific.

Cache miss: the data is not in the cache and must be loaded from the original source.
Cache hit: the data was loaded from the cache (with no implication of which "type" of cache was hit).
Cold cache: the slowest cache hit possible. The actual loading mechanism depends on the type of cache (a CPU cache could mean an L2 or L3 hit, a disk cache could mean a RAM hit on the drive, a web cache could mean a drive-cache hit).
Hot cache: the fastest cache hit possible. Again this depends on the mechanism (for a CPU it could be an L1 hit, for a disk an OS cache hit, for a web cache a RAM hit in the caching device).
Warm cache: anything in between, such as L2 when L1 is hot and L3 is cold. It is a less precise term, often used to imply "hot" when the performance is actually closer to "cold."


If a cache miss occurs in the level 1 cache in a system with level 1 and level 2 caches, where will the required data be requested from next?

Cache misses move up the chain (or down the chain, if you prefer to think of it that way). If the required data is not in the L1 cache, the L2 cache is checked next. If it is not there either, the data has to be fetched from main memory, as in the sketch below.
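Here is that fall-through order as a minimal sketch; lookup_l1, lookup_l2, and read_main_memory are hypothetical stand-ins for the real hardware (they always miss here, purely to show the order of consultation):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins: a real L1/L2 would return true on a hit and
 * fill *out; these always miss so the request falls through to memory. */
static bool lookup_l1(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
static bool lookup_l2(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
static uint32_t read_main_memory(uint32_t addr)     { return addr * 2; /* placeholder data */ }

/* Order in which a load is satisfied: L1 first, then L2, then main memory. */
static uint32_t load(uint32_t addr) {
    uint32_t value;
    if (lookup_l1(addr, &value)) return value;   /* L1 hit: fastest             */
    if (lookup_l2(addr, &value)) return value;   /* L1 miss, L2 hit: slower     */
    return read_main_memory(addr);               /* missed both levels: slowest */
}

int main(void) {
    printf("loaded %u\n", load(21));
    return 0;
}
```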


Is cache memory a removable memory?

No. Cache memory is used to store data that has been needed recently, on the grounds that it will be faster to access if it is needed again. When requested data is found in the cache you have a cache hit, and when it has to be retrieved from the hard drive (or wherever its original storage was) again, it is called a cache miss. Retrieving data from the hard drive is much slower than retrieving it from the cache.


What is the distinction between spatial locality and temporal locality?

Temporal locality: the concept that a resource referenced at one point in time is likely to be referenced again. Cache-miss traffic decreases quickly as cache size increases, so temporal locality determines sensitivity to cache size. Spatial locality: the concept that a resource is more likely to be referenced if a resource near it has just been referenced. Cache-miss traffic does not increase much when the line size increases, so spatial locality determines sensitivity to line size. ~BR Mukkaysh Srivastav
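To see spatial locality in practice, the sketch below walks the same matrix row by row (consecutive addresses, so several touches per fetched cache line) and then column by column (a large jump every step, so roughly one miss per access); the 4096×4096 size is an assumption chosen to exceed typical cache sizes.

```c
#include <stdio.h>
#include <time.h>

#define N 4096
static int m[N][N];   /* 64 MiB: far larger than the caches */

int main(void) {
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)          /* row-major: good spatial locality */
        for (int j = 0; j < N; j++)
            m[i][j]++;
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)          /* column-major: poor spatial locality */
        for (int i = 0; i < N; i++)
            m[i][j]++;
    clock_t t2 = clock();

    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```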


What is the computer definition for hits?

There are several computer definitions for the word hits. If someone accesses your website, that is called a hit. Results on a search engine are also called hits. This word is also related to caching. When data that is already in the cache is reused, that is called a hit. When data cannot be found in the cache, that is called a miss. The idea of the caching scheme is to be good at predicting hits and thus improve performance. If everything is a miss, then the cache is useless and may actually be reducing performance.
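In the caching sense, hits and misses are simply counters kept around any cache. Here is a small sketch of a hypothetical memo table for an expensive function (the function, table size, and key pattern are all made up for illustration) that reports its hit rate:

```c
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 64

static struct { bool valid; int key; long long value; } table[TABLE_SIZE];
static long hits, misses;

/* Stand-in for some costly computation whose results are worth caching. */
static long long expensive(int x) {
    long long r = 0;
    for (int i = 0; i < 1000000; i++)
        r += (long long)x * i;
    return r;
}

static long long cached(int x) {
    int slot = (x >= 0 ? x : -x) % TABLE_SIZE;
    if (table[slot].valid && table[slot].key == x) {
        hits++;                        /* hit: reuse the stored result     */
        return table[slot].value;
    }
    misses++;                          /* miss: compute the result and store it */
    table[slot].valid = true;
    table[slot].key   = x;
    table[slot].value = expensive(x);
    return table[slot].value;
}

int main(void) {
    for (int i = 0; i < 100; i++)
        cached(i % 10);                /* only 10 distinct keys, so most calls hit */
    printf("hits: %ld  misses: %ld  hit rate: %.0f%%\n",
           hits, misses, 100.0 * hits / (hits + misses));
    return 0;
}
```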


How does the principle of locality relate to the use of multiple memory levels?

Temporal locality: the concept that a resource referenced at one point in time is likely to be referenced again. Cache-miss traffic decreases quickly as cache size increases, so temporal locality determines sensitivity to cache size. Spatial locality: the concept that a resource is more likely to be referenced if a resource near it has just been referenced. Cache-miss traffic does not increase much when the line size increases, so spatial locality determines sensitivity to line size.


What is the impact of cache miss on system performance?

A cache miss is where the processor requests a memory transfer and that data is not in the cache. This forces the bus interface unit to perform a slow access to main memory instead of a fast access to the cache, or, in the case of a page or disk cache, it forces the cache manager to make disk accesses, which can be millions of times slower than main memory.

Depending on the cache level, a consistently high percentage of cache misses can impact performance significantly. This is most often seen on machines with little physical memory, where the swap-file hit-miss ratio is poor.

The working set is the memory that has been used most recently. Ideally, you want the short-term working set to always be smaller than physical memory. Since the working set is hard to measure, you can use commit charge instead, though it is not as accurate: you want the commit charge for the currently active applications plus kernel memory to be less than physical memory.


What delay penalty is associated with a branch instruction?

If the CPU's branch prediction logic predicts a branch to go one way and the branch actually goes the other way, the instructions after the branch that were already fetched into the pipeline must be discarded and the correct instructions fetched instead. This misprediction penalty delays processing, and it can be compounded by a cache miss if the correct-path instructions are not in the instruction cache.
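A common way to see this penalty is to run the same data-dependent branch over random and then sorted data; the sketch below does exactly that (the array size and the 128 threshold are arbitrary assumptions, and an optimizing compiler that converts the branch to a conditional move will hide the effect):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* Sum all elements >= 128; the if() is the branch being predicted. */
static double sum_if(const int *data, long *sum) {
    clock_t start = clock();
    *sum = 0;
    for (int i = 0; i < N; i++)
        if (data[i] >= 128)
            *sum += data[i];
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (int i = 0; i < N; i++) data[i] = rand() % 256;

    long sum;
    double unsorted = sum_if(data, &sum);    /* ~50% mispredictions          */
    qsort(data, N, sizeof *data, cmp);
    double sorted   = sum_if(data, &sum);    /* branch direction predictable */

    printf("unsorted: %.3f s, sorted: %.3f s (sum %ld)\n", unsorted, sorted, sum);
    free(data);
    return 0;
}
```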