Understanding the TCDM interconnect and implementing HWPEs
Hello,

Just wanted to remind you that we have a tutorial under:
https://pulp-platform.org/conferences.html

Slides:
https://pulp-platform.org/docs/riscv_wor...torial.pdf

And there is also a video recording of the hands-on talk, which could be useful:
https://www.youtube.com/watch?v=27tndT6cBH0

I think this could already answer some of your questions. In a nutshell, you can add as many ports to the HWPE as you want; the limiting factor in computing is usually memory bandwidth, so the more the better. Of course, adding an arbitrary number of ports could
a) lead to contention (multiple sources/sinks access the same memory and end up waiting), and
b) complicate the crossbar in between, reducing the access speed.
So a good balance is required.
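
To make the banking point concrete, here is a tiny C model of word-interleaved TCDM banks. N_BANKS and the bank mapping are assumptions for illustration, in the spirit of the usual PULP-style word-interleaved layout, not any specific configuration:

[code]
#include <stdint.h>
#include <stdio.h>

#define N_BANKS 8  /* assumed number of TCDM banks */

/* Word-interleaved mapping: consecutive 32-bit words land in
 * consecutive banks (an assumption for this sketch). */
static unsigned tcdm_bank(uint32_t addr) {
    return (addr >> 2) % N_BANKS;
}

int main(void) {
    /* Two masters (e.g. the core and one HWPE port) issue a request
     * in the same cycle; they conflict only if they hit the same bank. */
    uint32_t core_addr = 0x0010;
    uint32_t hwpe_addr = 0x0014;
    if (tcdm_bank(core_addr) == tcdm_bank(hwpe_addr))
        printf("conflict: both hit bank %u, one master stalls\n",
               tcdm_bank(core_addr));
    else
        printf("no conflict: banks %u and %u accessed in parallel\n",
               tcdm_bank(core_addr), tcdm_bank(hwpe_addr));
    return 0;
}
[/code]

Two masters only collide when they target the same bank in the same cycle, which is why more banks (and a correspondingly wider crossbar) trade area and crossbar complexity for fewer stalls.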

If you do not have much memory traffic, you can actually even get away with making it a regular APB-connected peripheral.
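
For that low-bandwidth case, here is a minimal sketch of how software could drive such a memory-mapped peripheral. The base address and register offsets are hypothetical, not a real PULP memory map:

[code]
#include <stdint.h>

/* Hypothetical register map for an APB-attached accelerator;
 * real addresses and offsets depend on your SoC integration. */
#define ACC_BASE     0x1A100000u
#define ACC_REG(off) (*(volatile uint32_t *)(ACC_BASE + (off)))
#define ACC_SRC      0x00u  /* source buffer pointer    */
#define ACC_DST      0x04u  /* destination pointer      */
#define ACC_LEN      0x08u  /* transfer length (words)  */
#define ACC_CTRL     0x0Cu  /* bit 0: start             */
#define ACC_STATUS   0x10u  /* bit 0: busy              */

/* Runs on the target SoC (the addresses are not host-valid). */
void acc_run(uint32_t src, uint32_t dst, uint32_t len) {
    ACC_REG(ACC_SRC)  = src;
    ACC_REG(ACC_DST)  = dst;
    ACC_REG(ACC_LEN)  = len;
    ACC_REG(ACC_CTRL) = 1u;             /* kick off the job */
    while (ACC_REG(ACC_STATUS) & 1u)    /* poll until done  */
        ;
}
[/code]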

As for the internals of the HWPE, we have some documentation:
https://hwpe-doc.readthedocs.io/en/latest/

The streamers are designed to drive the memory ports, so you are right that you should also modify them. The docs above should help you with that.
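
As a rough picture of what a streamer does, here is a C model of the kind of 2-D strided address pattern it generates when walking a buffer. The field names (d0_len, d0_stride, ...) are illustrative; the real configuration fields are described in the hwpe-doc pages linked above:

[code]
#include <stdint.h>
#include <stdio.h>

/* Toy model of a 2-D strided address generator, the core job of a
 * streamer: turn a base pointer plus lengths/strides into the
 * sequence of addresses presented on a memory port. */
struct stream_cfg {
    uint32_t base;       /* start address                     */
    uint32_t d0_len;     /* inner-loop count (e.g. row words)  */
    uint32_t d0_stride;  /* inner-loop stride in bytes         */
    uint32_t d1_len;     /* outer-loop count (e.g. rows)       */
    uint32_t d1_stride;  /* outer-loop stride in bytes         */
};

static void stream_addresses(const struct stream_cfg *c) {
    for (uint32_t j = 0; j < c->d1_len; j++)
        for (uint32_t i = 0; i < c->d0_len; i++)
            printf("0x%08x\n",
                   c->base + j * c->d1_stride + i * c->d0_stride);
}

int main(void) {
    /* Walk 4 words per row, 3 rows, rows spaced 32 bytes apart. */
    struct stream_cfg c = { 0x10000000u, 4, 4, 3, 32 };
    stream_addresses(&c);
    return 0;
}
[/code]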

Part1, Q1: Yes, the core and the HWPE use the same interconnect (not a bus) to access the same memory. If the accesses do not target the same physical memory bank, they can proceed concurrently. There is a round-robin-like arbitration to make sure that the core (or the HWPE) does not stall unfairly.
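
A minimal sketch of the round-robin fairness idea in C, using a rotating priority pointer; this models the policy only, not the actual interconnect RTL:

[code]
#include <stdio.h>

#define N_REQ 2  /* e.g. core and HWPE contending for one bank */

/* Grant the first requester at or after the rotating pointer,
 * then advance the pointer past the winner so it loses priority
 * next time: no requester can starve the other. */
static int rr_arbitrate(const int req[N_REQ], int *ptr) {
    for (int i = 0; i < N_REQ; i++) {
        int cand = (*ptr + i) % N_REQ;
        if (req[cand]) {
            *ptr = (cand + 1) % N_REQ;
            return cand;
        }
    }
    return -1; /* no request this cycle */
}

int main(void) {
    int ptr = 0;
    int req[N_REQ] = {1, 1};  /* both keep hitting the same bank */
    for (int cycle = 0; cycle < 4; cycle++)
        printf("cycle %d: grant to requester %d\n",
               cycle, rr_arbitrate(req, &ptr));
    return 0;  /* grants alternate 0, 1, 0, 1 */
}
[/code]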

Hope this helps a bit
Visit pulp-platform.org and follow us on twitter @pulp_platform