04-27-2023, 03:35 AM
We are trying to boot the PULP project, in which the original eight RI5CY RISC-V cores have been replaced by two SIMD cores of our own design with a 128-bit datapath, and the original HWPE has been replaced by our custom systolic array. We have successfully simulated both the RTL and the netlist synthesized by Design Compiler. Now we are trying to boot the design on our FPGA; we can boot the FC (fabric controller), but the cluster fails to boot.
We tried to debug step by step with GDB and OpenOCD, and it looks like the FC gets stuck when it reaches the cluster startup code. So we want to connect GDB to our SIMD cores, whose hart IDs are 0 and 1 (the original eight cores had hart IDs 0-7, and the FC's hart ID is 992, i.e. 0x3e0). We edited our J-Link configuration file and added extra target create commands to trace the SIMD cores, but OpenOCD reports that it cannot halt the cores with IDs 0 and 1.
So here are my questions:
1. We would like to bring up a customized cluster. Are there any particular considerations, or other methods, that could help us with debugging? We welcome any suggestions you might have.
2. For this kind of heterogeneous multicore system (one FC plus two SIMD cores), how should we perform step-by-step debugging with GDB + OpenOCD? We would like to trace the FC and the SIMD cores to see exactly where things go wrong. In other words, in the original PULP system, how would one debug the eight cluster cores and the FC simultaneously? (A sketch of what we imagine this could look like follows right after this list.)
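To make question 2 concrete, here is a minimal sketch of what we imagine a per-hart target layout for the original PULP system could look like. This is only our guess, not an official PULP config: it assumes the FC (hart 0x3e0) and the eight cluster cores (harts 0-7) are all reachable through the same riscv.cpu TAP and debug module, and that our riscv-openocd build supports the hwthread pseudo-RTOS.
Code:
# Sketch only (our assumption, not an official PULP config).
set _TAP riscv.cpu

# Fabric controller, hart id 0x3e0.
target create riscv.fc riscv -endian little -chain-position $_TAP -coreid 0x3e0

# Eight cluster cores, hart ids 0-7.
for {set i 0} {$i < 8} {incr i} {
    target create riscv.cl$i riscv -endian little -chain-position $_TAP -coreid $i
}

# Group all harts so a single GDB connection can see every core.
target smp riscv.fc riscv.cl0 riscv.cl1 riscv.cl2 riscv.cl3 riscv.cl4 riscv.cl5 riscv.cl6 riscv.cl7

# Expose the harts to GDB as hardware threads (assumes hwthread support).
riscv.fc configure -rtos hwthread
With a layout like this, we would expect a single GDB attached to the default port 3333 to list all cores with info threads and to switch between them with thread <n>; please correct us if this is not how it is meant to work on PULP.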
Here is our J-Link configuration file. We added extra target create commands to trace the cores with IDs 0 and 1, but this failed: OpenOCD reports it cannot halt them.
Code:
proc init_targets {} {
    debug_level 2
    adapter speed 10000
    reset_config trst_only

    set _CHIPNAME riscv

    # Declare the TAPs on the scan chain.
    jtag newtap $_CHIPNAME unknown0 -irlen 5 -expected-id 0x10102001
    jtag newtap $_CHIPNAME cpu -irlen 5 -expected-id 0x249511C3
    # jtag newtap $_CHIPNAME cpu -irlen 5

    set _TARGETNAME  $_CHIPNAME.cpu    ;# TAP holding the RISC-V debug module
    set _TARGETNAME0 $_CHIPNAME.fc     ;# fabric controller, hart id 0x3e0
    set _TARGETNAME1 $_CHIPNAME.simd0  ;# SIMD core 0, hart id 0
    set _TARGETNAME2 $_CHIPNAME.simd1  ;# SIMD core 1, hart id 1

    # One OpenOCD target per hart, all behind the same TAP.
    # target create $_TARGETNAME riscv -endian little -chain-position $_TARGETNAME -coreid 0
    target create $_TARGETNAME0 riscv -endian little -chain-position $_TARGETNAME -coreid 0x3e0
    target create $_TARGETNAME1 riscv -endian little -chain-position $_TARGETNAME -coreid 0x0
    target create $_TARGETNAME2 riscv -endian little -chain-position $_TARGETNAME -coreid 0x1

    # Group the three harts so GDB sees them as one SMP system.
    target smp $_TARGETNAME0 $_TARGETNAME1 $_TARGETNAME2

    # $_TARGETNAME configure -rtos riscv
    # $_TARGETNAME configure -work-area-phys 0x3ff0000 -work-area-size 0x10000 -work-area-backup 1
    # $_TARGETNAME riscv expose_csrs 3008-3015,4033-4034
}
gdb_report_data_abort enable
gdb_report_register_access_error enable
# prefer to use sba for system bus access
# riscv set_prefer_sba on
# dump jtag chain
scan_chain
init
halt
echo "Ready for Remote Connections"