matlab · verilog · simulink · hdl-coder

How can I optimize (reduce) the latency of the Verilog HDL code (hardware) that the MATLAB HDL Coder add-on generates from a given Simulink model?



I have a simple Simulink model that takes a 32-bit number in IEEE-754 format and adds it to itself, giving the result again as a 32-bit IEEE-754 number. I used MATLAB's HDL Coder add-on to generate Verilog HDL for this model. When I wrote a testbench for it, I measured a latency of 100 ns. Is there a way to reduce this further, say to around 10 ns?

Below I am attaching the Simulink model I used to generate the Verilog HDL code, along with the generated Verilog files. I am also attaching a screenshot of the simulation in case you don't want to spend time running the scripts. Thanks in advance!

Simulation of addition

Link to download the files


Solution

  • my point is how to use pipeline settings before conversion

    I am assuming that "pipeline settings" refers to an HDL Coder generation parameter.

    Basically, what you do is "try": pick a pipeline setting, generate the code, and synthesize it. If you end up with positive slack, you can raise the clock frequency or remove pipeline stages; for negative slack you use the inverse methods (lower the clock or add more pipeline stages). A sketch of how to set these options from MATLAB is shown below.
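    To make that trial loop concrete, here is a minimal sketch of setting pipeline options from the MATLAB command line before generating Verilog. The model, subsystem, and block names (adder_model, DUT, Add) are assumptions, not from the question; TargetLanguage, InputPipeline, and OutputPipeline are HDL Coder parameters, but check the documentation for your release.

    ```matlab
    % Minimal sketch (model and block names are assumed, not from the question).
    load_system('adder_model');
    hdlset_param('adder_model', 'TargetLanguage', 'Verilog');

    % Ask HDL Coder to insert pipeline registers around the adder block;
    % retiming/distributed pipelining can then move them into the logic.
    hdlset_param('adder_model/DUT/Add', 'InputPipeline', 2);
    hdlset_param('adder_model/DUT/Add', 'OutputPipeline', 2);

    % Regenerate the HDL, then synthesize it and read the timing report
    % to see whether you have positive or negative slack.
    makehdl('adder_model/DUT');
    ```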

    Now here is where things get tricky:
    Most of the time you can't really speed things up. A given piece of functionality needs a certain amount of time to compute. Some algorithms can be sped up by using more parallel resources, but only up to a limit. An adder is a good example: you can use ripple carry, carry look-ahead, and more advanced techniques, but you cannot speed it up indefinitely. (Otherwise CPUs these days would be running at terahertz rates.)
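    As a back-of-the-envelope illustration of that limit (unit gate delays are an assumption; real delays depend on the technology library):

    ```matlab
    % Critical-path depth of a 32-bit adder, in idealized unit gate delays.
    n = 32;
    ripple_depth    = n;                  % ripple carry: carry crosses every bit
    lookahead_depth = ceil(log2(n)) + 2;  % carry look-ahead: roughly logarithmic
    fprintf('ripple carry : ~%2d gate delays\n', ripple_depth);
    fprintf('look-ahead   : ~%2d gate delays\n', lookahead_depth);
    ```

    Going from about 32 levels down to about 7 is a real win, but no scheme gets below that logarithmic floor.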

    I suspect in the end you will find that it takes time T to do your IEEE-754 addition. That can be X clock cycles of an A MHz clock or Y clock cycles of a B MHz clock, but X/A is about the same as Y/B.
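    In numbers (the cycle counts and frequencies below are made up for illustration):

    ```matlab
    % T = X/A = Y/B: more pipeline stages at a faster clock, same total latency.
    X = 10;  A = 100e6;   % 10 cycles of a 100 MHz clock
    Y = 20;  B = 200e6;   % 20 cycles of a 200 MHz clock
    fprintf('X/A = %.0f ns, Y/B = %.0f ns\n', X/A*1e9, Y/B*1e9);  % both 100 ns
    ```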

    What you can do is pump lots of calculations into your pipeline so that a new result comes out every clock cycle, as in the sketch below. But the latency will still be there.
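    A quick illustration with assumed numbers: with an N-stage pipeline fed every cycle, M operations finish in N + M - 1 cycles, so the amortized cost approaches one cycle per operation even though each individual result still takes N cycles.

    ```matlab
    % Latency stays at N cycles, but amortized cost approaches 1 cycle/op.
    N = 10;  M = 1000;  f = 100e6;      % assumed depth, op count, clock
    total_cycles = N + M - 1;           % fill the pipe once, then 1 op/cycle
    fprintf('%d ops in %d cycles (%.2f us), %.3f cycles/op\n', ...
            M, total_cycles, total_cycles/f*1e6, total_cycles/M);
    ```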