Linked List Implementation in ASIC – Hardware

In Linked List Implementation in ASIC, an overview of dynamic memory allocation was presented. The next article, Linked List Implementation in ASIC – ANSI C, further explained the need for linked lists and laid the foundation for software implementations of a linked list.

This article lays the foundation for ASIC linked list design and architecture considerations. First, an ANSI-C model is presented to explain and mimic an ASIC linked list implementation. After that, basic linked list operations are identified and their performance is analyzed. Along the way, various tradeoffs are discussed and a few techniques are presented to increase linked list performance and throughput.

Two hardware-specific restrictions were mentioned in the article Linked List Implementation in ASIC – ANSI C:

  • In software, memory management and allocation requests are handled by the operating system. In hardware, allocation has to be implemented in the design itself (see the sketch after this list).
  • Only a limited amount of memory is available, and the memory size should be carefully calculated for the worst-case scenario.
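As a minimal sketch of what these restrictions mean in practice, the snippet below declares all node storage statically for an assumed worst case and lets the design itself hand out node indices. The names and sizes are hypothetical, and a simple stack-style pool is used here only to show the principle; the actual implementation discussed later uses a linked free list.

#include <stdint.h>

#define MAX_NODES 1024u                    /* hypothetical worst-case depth, fixed at design time */

static uint16_t free_stack[MAX_NODES];     /* indices of unused nodes; seeded 0..MAX_NODES-1 at reset */
static uint32_t free_count = MAX_NODES;    /* allocator state owned by the design, not an OS          */

/* Hand out one node index; no malloc()/OS call is involved. */
static int alloc_node(uint16_t *idx)
{
    if (free_count == 0)
        return -1;                         /* pool exhausted: assert back-pressure */
    *idx = free_stack[--free_count];
    return 0;
}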

Apart from these, hardware/ASIC implementations have the following distinctive features and requirements. Though these conventions are widely accepted in the ASIC design community, designers are free to make different choices if the application demands it.

  • Separate data management for different data types. For example, the link pointer (*next) and the data are of different types, so they should be managed in different sections of hardware.
  • In Linked List Implementation in ASIC – ANSI C, we showed a structure with one integer and one pointer variable, resulting in an overhead of 100%. Such high overhead is unacceptable in hardware. In hardware, the data unit is much larger than an integer and is called a data buffer.
  • A large buffer size results in lower overhead and a simpler control machine. However, statistical memory waste also goes up in proportion to the buffer size.

Please note that a cost function can be devised to calculate the optimal buffer size, but some of its parameters are hard to quantify, for example the complexity of the state machine or project/application requirements.
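As a rough, purely illustrative calculation (the numbers below are assumptions, not figures from the article): with a 64-byte buffer and a 16-bit link pointer, the link overhead is 2 / 64 ≈ 3%, compared to the 100% overhead of the integer-plus-pointer node in the software example. On the other hand, if an average packet were, say, 100 bytes, it would occupy two 64-byte buffers and leave 28 of the 128 bytes unused, so roughly 22% of the buffer space would be statistically wasted; a larger buffer would lower the link overhead further but increase this waste.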

  • To ease memory manipulation, buffer sizes are powers of 2; 64 bytes and 128 bytes are the most commonly used buffer sizes (see the sketch after this list).
  • Hardware memory accesses are not random. Data transactions follow a First In, First Out (FIFO) rule for a given port/queue list. Hence, buffers do not need to be randomly inserted into (or deleted from) the middle of a queue: nodes are added at the end of the queue and removed from the start of the queue.
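The sketch below shows why power-of-2 buffer sizes ease memory manipulation (a 64-byte buffer is assumed and the names are hypothetical): converting a node index to a byte address, or extracting the offset within a buffer, reduces to a shift and a mask instead of a multiply and a divide.

#include <stdint.h>

#define BUF_SHIFT 6u                         /* log2(64) for an assumed 64-byte buffer */
#define BUF_SIZE  (1u << BUF_SHIFT)

/* Byte address of the buffer that belongs to a given node index. */
static uint32_t buf_addr(uint16_t node_idx)
{
    return (uint32_t)node_idx << BUF_SHIFT;  /* node_idx * 64 without a multiplier */
}

/* Offset of a byte within its buffer. */
static uint32_t buf_offset(uint32_t byte_addr)
{
    return byte_addr & (BUF_SIZE - 1u);      /* byte_addr % 64 without a divider */
}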

Glossary of terms used throughout this article and generally used by hardware designers:

  • En-Queue Operation – Allocation of a node or buffer
  • De-Queue Operation – Deallocation of a node or buffer
  • Queue List – A typical list to be implemented in hardware; it can also be called a port list. In general, there is more than one queue list in hardware.
  • Start Pointer, Head Pointer, or Remove Pointer – Pointer to the start of the queue
  • End Pointer or Insert Pointer – Pointer to the end of the queue
  • Free List – As mentioned before, the OS allocation function has to be implemented in hardware. To achieve that, a master list, called the free list, is created, which contains all the free nodes (see the sketch after this list).
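The glossary maps directly onto a small amount of per-queue state. The declarations below are only an illustration (type and field names are assumptions): each queue list keeps its own start and insert pointers, and the free list is simply one more list with the same shape.

#include <stdint.h>

typedef uint16_t node_idx_t;      /* node index plays the role of a pointer in hardware */

#define NUM_QUEUES 8u             /* hypothetical number of port/queue lists */

/* Per-queue state: start (head/remove) and end (insert) pointers plus a fill count. */
typedef struct {
    node_idx_t head;              /* start pointer: next node to de-queue  */
    node_idx_t tail;              /* insert pointer: last node en-queued   */
    uint32_t   count;             /* number of nodes currently in the list */
} queue_state_t;

static queue_state_t queue[NUM_QUEUES];  /* one entry per port/queue list */
static queue_state_t free_list;          /* master list of free nodes     */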

Remember, as mentioned in the ANSI-C linked list implementation, a node consists of two sets of variables: one to store the data and another to store a pointer to the next node. In an ASIC, the two are stored in separate memories, called Buffer Memory and Link Memory.
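A compact ANSI-C sketch of that split is shown below; the sizes, names, and exact en-queue/de-queue bookkeeping are assumptions for illustration only. Link Memory holds nothing but next-node indices, Buffer Memory holds nothing but data, and the same node index addresses both.

#include <stdint.h>

#define MAX_NODES 1024u                       /* hypothetical worst-case number of nodes */
#define BUF_SIZE  64u                         /* hypothetical buffer size in bytes       */
#define NULL_IDX  0xFFFFu                     /* marker for "no next node"               */

/* The two separate memories: the same node index addresses both. */
static uint16_t link_mem[MAX_NODES];          /* Link Memory: next-node index per node   */
static uint8_t  buf_mem[MAX_NODES][BUF_SIZE]; /* Buffer Memory: data buffer per node     */

/* Same per-list state as in the glossary sketch above. */
typedef struct { uint16_t head, tail; uint32_t count; } queue_state_t;

/* Master list of free nodes; must be seeded at reset with all node
 * indices chained through link_mem (initialization not shown).      */
static queue_state_t free_list;

/* En-queue: take a node from the free list and link it at the tail of q. */
static int enqueue(queue_state_t *q)
{
    uint16_t node;

    if (free_list.count == 0)
        return -1;                            /* no free buffers: assert back-pressure   */

    node = free_list.head;                    /* allocate from the head of the free list */
    free_list.head = link_mem[node];
    free_list.count--;

    link_mem[node] = NULL_IDX;                /* new node becomes the last node of q     */
    if (q->count == 0)
        q->head = node;                       /* empty queue: node is head and tail      */
    else
        link_mem[q->tail] = node;             /* append after the current tail           */
    q->tail = node;
    q->count++;
    return (int)node;                         /* caller writes data into buf_mem[node]   */
}

/* De-queue: unlink the node at the head of q and return it to the free list. */
static int dequeue(queue_state_t *q)
{
    uint16_t node;

    if (q->count == 0)
        return -1;                            /* queue is empty                          */

    node = q->head;                           /* remove from the start of the queue      */
    q->head = link_mem[node];
    q->count--;

    link_mem[node] = NULL_IDX;                /* append node to the free list            */
    if (free_list.count == 0)
        free_list.head = node;
    else
        link_mem[free_list.tail] = node;
    free_list.tail = node;
    free_list.count++;
    return (int)node;                         /* caller reads data from buf_mem[node]    */
}

In actual hardware each of these arrays would become a physical RAM and the two functions would become state machines, but the pointer bookkeeping stays the same.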
