Cache Schemes: Fully Associative Cache

In a fully associative cache, any block of RAM can be placed in any cache block, without restriction. For example, we might find a certain RAM block in cache block $0$ now, and at some later point find it in cache block $3$, and so on.
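One consequence of this freedom is that the address itself carries no information about where the block sits in the cache: it splits into just a tag and an offset, with no index field. Here is a minimal sketch of that split, assuming 32-bit addresses and a hypothetical 64-byte block size:

```python
# A minimal sketch, assuming hypothetical 64-byte blocks.
# In a fully associative cache there is no index field: the address splits
# into just a tag (which RAM block this is) and an offset (which byte inside
# the block), so the block may end up in any cache line.

BLOCK_SIZE = 64                               # bytes per block (assumed)
OFFSET_BITS = BLOCK_SIZE.bit_length() - 1     # log2(64) = 6

def split_address(addr: int) -> tuple[int, int]:
    """Split a RAM address into (tag, offset) -- no index bits."""
    offset = addr & (BLOCK_SIZE - 1)
    tag = addr >> OFFSET_BITS
    return tag, offset

print(split_address(0x1234ABCD))   # same tag no matter which cache line holds the block
```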

This concept is quite similar to paging, in which any page of any program can be placed in any frame of main memory. Note, however, that there is usually no connection between pages and blocks: blocks are small chunks of data that we want to fit into the cache, while pages are larger chunks of data that are sometimes kept on disk.

So, to determine whether a variable requested by the CPU is present in the cache, we need to search the entire cache for the RAM block that contains it, which consumes a great deal of energy. To make this search run in parallel, the cache needs complex hardware: a comparator circuit for every cache block, used to compare the stored tags against the requested tag, and a multiplexer circuit to extract the variable we are looking for if its RAM block is indeed in the cache. This makes a fully associative cache expensive, not only in the electricity it consumes but also in the materials and time needed to build it.
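The following Python sketch imitates that lookup in software, using hypothetical cache lines that each hold a valid bit, a tag, and a block of data. In real hardware every comparison below happens at the same time (one comparator per line), and the hit signals drive a multiplexer that forwards the matching block; here the loop simply models that behavior sequentially.

```python
# A software sketch of the fully associative lookup, with assumed line count
# and block size. In hardware, all tag comparisons run in parallel and a
# multiplexer selects the data of the line whose comparator fired.

from dataclasses import dataclass, field

@dataclass
class CacheLine:
    valid: bool = False
    tag: int = 0
    data: bytes = field(default_factory=lambda: bytes(64))  # one 64-byte block (assumed)

class FullyAssociativeCache:
    def __init__(self, num_lines: int = 8):
        self.lines = [CacheLine() for _ in range(num_lines)]

    def lookup(self, tag: int) -> bytes | None:
        # "Comparators": check the requested tag against every line.
        hits = [line.valid and line.tag == tag for line in self.lines]
        # "Multiplexer": if any comparator fired, forward that line's data.
        for hit, line in zip(hits, self.lines):
            if hit:
                return line.data
        return None  # cache miss -- the block must be fetched from RAM

cache = FullyAssociativeCache()
cache.lines[3] = CacheLine(valid=True, tag=0x48D2, data=b"\x2a" * 64)
print(cache.lookup(0x48D2) is not None)   # True  -- hit, found in line 3
print(cache.lookup(0x1111) is not None)   # False -- miss
```

The cost argument in the paragraph above maps directly onto this sketch: the list comprehension stands in for one comparator per cache block, and that hardware (plus the selection logic) is what makes the fully associative scheme expensive to build and to power.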