11-26-2024, 06:44 PM
I think the naming might be a bit confusing. Our main research projects do not use a traditional data cache. Instead we use a small local memory that can respond within a single cycle. The higher levels of the hierarchy are then managed differently, e.g. through a DMA, i.e. under software control. In short, L1 refers to the level-1 memory and is not necessarily an L1 cache. Note that the instruction cache in these systems works in the traditional way.
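To make the software-managed hierarchy idea concrete, here is a minimal sketch in C of the usual pattern: data is moved into the single-cycle L1 memory in tiles via DMA, double-buffered so the next tile can be fetched while the current one is processed. The `dma_copy` function, the `TILE` size, and `tiled_sum` are all hypothetical stand-ins for illustration, not an actual PULP API; a real system would program the DMA engine and wait on a completion event instead of blocking.

```c
#include <stdint.h>
#include <string.h>

#define TILE 256  /* elements per L1 tile (assumed size, hypothetical) */

/* Hypothetical stand-in for a hardware DMA transfer: on a real system
 * this would program the DMA engine and return immediately; here we
 * model it as a blocking copy. */
static void dma_copy(int32_t *dst, const int32_t *src, int n) {
    memcpy(dst, src, (size_t)n * sizeof(int32_t));
}

/* Process a large array held in higher-level memory in L1-sized tiles,
 * double-buffering so the DMA fill of one buffer conceptually overlaps
 * with compute on the other. */
int64_t tiled_sum(const int32_t *ext, int n) {
    int32_t l1[2][TILE];   /* two buffers in the level-1 memory */
    int64_t acc = 0;
    int cur = 0;
    if (n > 0) dma_copy(l1[0], ext, n < TILE ? n : TILE);
    for (int base = 0; base < n; base += TILE) {
        int len = (n - base) < TILE ? (n - base) : TILE;
        int next = base + TILE;
        if (next < n) {    /* prefetch the next tile into the other buffer */
            int nlen = (n - next) < TILE ? (n - next) : TILE;
            dma_copy(l1[cur ^ 1], ext + next, nlen);
        }
        for (int i = 0; i < len; i++)
            acc += l1[cur][i];
        cur ^= 1;          /* swap compute and prefetch buffers */
    }
    return acc;
}
```

In a real deployment the compute loop and the DMA transfer run truly in parallel, which is the point of the software-controlled hierarchy: the programmer (or a runtime) decides what lives in L1 and when, rather than a cache controller.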
Conflicts between instruction and data memory reads in such systems are handled by multiple physical banks (the logarithmic/TCDM interconnect allows two parallel reads as long as they target different memory banks). The conflicts are further reduced by placing code and data in physically distinct memory blocks. In systems that also include accelerators this leads to more elaborate banking designs. These differ from application to application, so it is not a generic method; in some projects we are more or less aggressive with these tricks.
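As a rough model of why banking resolves these conflicts, here is a small C sketch of a word-interleaved address-to-bank mapping and a conflict check. The bank count and mapping are assumptions for illustration, not the parameters of any particular PULP cluster:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BANKS 8    /* assumed number of TCDM banks (power of two) */
#define WORD_BYTES 4

/* Word-interleaved mapping: consecutive 32-bit words land in
 * consecutive banks. */
static unsigned bank_of(uint32_t addr) {
    return (addr / WORD_BYTES) % NUM_BANKS;
}

/* Two simultaneous reads collide only when they hit the same
 * physical bank in the same cycle; otherwise the interconnect
 * serves them in parallel. */
static bool bank_conflict(uint32_t addr_a, uint32_t addr_b) {
    return bank_of(addr_a) == bank_of(addr_b);
}
```

Placing code and data in physically distinct blocks, e.g. via linker-script memory regions, makes `bank_conflict` false by construction for instruction fetches versus data accesses, which is the trick described above.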
Our Ariane/CVA6 based systems use traditional L1 data caches.
Hope that clarifies some questions
Visit pulp-platform.org and follow us on twitter @pulp_platform