Introduction
Learning focus: techniques for using parameter macros
1.1 Basic concept of a queue
In embedded systems and real-time applications, data transfer and processing are critical. A byte queue is a data structure that efficiently stores and manages data streams. With a byte queue, different data types can be handled flexibly, data integrity can be maintained, and operations can be performed safely in multithreaded environments. This article examines the byte queue concept, its role, and implementations for multi-type support, macro-based overloading, and thread safety.
A queue is a first-in first-out (FIFO) data structure. Data is added to the tail by an enqueue operation and removed from the head by a dequeue operation. In embedded systems, queues are commonly used for:
- Data buffering: Queues can temporarily hold data when producer and consumer rates differ, balancing input and output.
- Task scheduling: Tasks or events can be managed via a queue to ensure they are processed in a specific order.
- Communication: Queues pass information between modules or threads, enabling decoupling and synchronization.
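The FIFO behavior described above can be sketched as a minimal ring buffer. The struct layout and function names here are illustrative assumptions, not the article's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal FIFO byte queue as a ring buffer (illustrative sketch). */
typedef struct {
    uint8_t *buffer;   /* backing storage */
    size_t   size;     /* capacity in bytes */
    size_t   head;     /* next position to dequeue from */
    size_t   tail;     /* next position to enqueue at */
    size_t   count;    /* bytes currently stored */
} byte_queue_t;

static bool enqueue_byte(byte_queue_t *q, uint8_t b)
{
    if (q->count == q->size) {
        return false;                      /* queue full */
    }
    q->buffer[q->tail] = b;
    q->tail = (q->tail + 1) % q->size;     /* wrap around at the end */
    q->count++;
    return true;
}

static bool dequeue_byte(byte_queue_t *q, uint8_t *b)
{
    if (q->count == 0) {
        return false;                      /* queue empty */
    }
    *b = q->buffer[q->head];
    q->head = (q->head + 1) % q->size;     /* wrap around at the end */
    q->count--;
    return true;
}
```

Because `head` and `tail` wrap around via the modulo operation, the buffer is reused continuously; the separate `count` field distinguishes the full state from the empty state.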
1.2 Limitations of a basic byte queue
Although byte queues provide basic storage and management in embedded systems, they have some limitations in real applications:
- Lack of multi-type support: Traditional byte queues often handle a single data type, for example using a fixed byte array. To support different types, developers typically create multiple queues, increasing code complexity and maintenance.
- No function overloading: C does not natively support function overloading like C++. This makes it inconvenient to handle different numbers or types of parameters in queue operations, leading to verbose and harder-to-maintain code.
- Insufficient thread-safety mechanisms: In multithreaded environments, concurrent access to a byte queue without proper synchronization can cause data corruption or inconsistency. Traditional implementations often lack built-in thread safety, complicating concurrent programming.
Improvements to the byte queue
2.1 Implementation principle for multi-type support
Problem: Arrays or buffers in C usually store a single data type, e.g., a uint8_t array for bytes or an int32_t array for integers. Embedded systems often require handling various types—8-bit, 16-bit, 32-bit integers, floating point, and custom structs. Creating a separate queue for each type is undesirable.
Solution: Use C macros to make the queue automatically adapt to the data type passed. The core idea is to treat data as a stream of bytes: the macros compute the required storage size based on the passed type, and the underlying enqueue function stores the bytes.
Type inference with typeof:
The typeof operator (a GNU C extension, standardized in C23) infers the type of an expression, so sizeof can then determine its byte size. In this implementation, the enqueue macros apply sizeof to the inferred type.
Example:
```c
#define enqueue(queue, data)    enqueue_bytes((queue), &(data), sizeof(typeof(data)))
```
In this macro:
- typeof(data) infers the type of data, and sizeof(typeof(data)) determines its byte size.
- Passing the address of the data to the underlying enqueue_bytes function allows all types to be handled as a byte stream.
Using this approach, the queue supports arbitrary data types: 8-bit bytes, 16-bit integers, 32-bit floats, and custom structures, as long as their size is known.
2.2 Implementation principle for macro-based overloading
Problem: Languages like C++ allow function overloading by defining multiple functions with the same name but different parameter lists. C does not support this natively, so an alternative is needed to provide similar convenience.
Solution: Use C macros to simulate function overloading. Macros can select different underlying functions based on the number or type of parameters, using features like __VA_ARGS__ to handle variable arguments.
Implementing overloads by argument count: The enqueue macro can call different implementations based on the number of passed arguments using a variable-argument macro.
Complete enqueue macro implementation:
```c
#define __CONNECT3(__A, __B, __C)   __A##__B##__C
#define __CONNECT2(__A, __B)        __A##__B
#define CONNECT3(__A, __B, __C)     __CONNECT3(__A, __B, __C)
#define CONNECT2(__A, __B)          __CONNECT2(__A, __B)

#define SAFE_NAME(__NAME)           CONNECT3(___, __NAME, __LINE__)

#define __PLOOC_VA_NUM_ARGS_IMPL( _0, _1, _2, _3, _4, _5, _6, _7, _8,      \
                                  _9, _10, _11, _12, _13, _14, _15, _16,   \
                                  __N, ...)    __N
#define __PLOOC_VA_NUM_ARGS(...)                                           \
    __PLOOC_VA_NUM_ARGS_IMPL(0, ##__VA_ARGS__, 16, 15, 14, 13, 12, 11,     \
                             10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)

#define __ENQUEUE_0(__QUEUE, __VALUE)                                      \
    ({  typeof((__VALUE)) SAFE_NAME(value) = (__VALUE);                    \
        enqueue_bytes((__QUEUE), &SAFE_NAME(value), sizeof(__VALUE));      \
    })

#define __ENQUEUE_1(__QUEUE, __ADDR, __ITEM_COUNT)                         \
    enqueue_bytes((__QUEUE), (__ADDR),                                     \
                  (__ITEM_COUNT) * sizeof(typeof((__ADDR)[0])))

#define __ENQUEUE_2(__QUEUE, __ADDR, __TYPE, __ITEM_COUNT)                 \
    enqueue_bytes((__QUEUE), (__ADDR), (__ITEM_COUNT) * sizeof(__TYPE))

#define enqueue(__queue, __addr, ...)                                      \
    CONNECT2(__ENQUEUE_, __PLOOC_VA_NUM_ARGS(__VA_ARGS__))                 \
        (__queue, (__addr), ##__VA_ARGS__)
```
The enqueue macro selects the implementation based on the number of variable arguments:
- 0 variable arguments: calls __ENQUEUE_0;
- 1 variable argument: calls __ENQUEUE_1;
- 2 variable arguments: calls __ENQUEUE_2.
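The counting-and-pasting mechanism is easier to see in a toy example. The macro names below are illustrative (supporting up to four arguments), not part of the queue's API:

```c
#include <assert.h>

/* Token pasting needs a two-level expansion so arguments are expanded first. */
#define CONNECT2_(a, b) a##b
#define CONNECT2(a, b)  CONNECT2_(a, b)

/* Counting trick: the caller's arguments shift the reversed sequence so
 * that N lands on the actual argument count (1..4 here). */
#define VA_NUM_ARGS_IMPL(_1, _2, _3, _4, N, ...) N
#define VA_NUM_ARGS(...) VA_NUM_ARGS_IMPL(__VA_ARGS__, 4, 3, 2, 1, 0)

/* Dispatcher: paste the count onto a prefix to pick an implementation. */
#define SUM_1(a)       (a)
#define SUM_2(a, b)    ((a) + (b))
#define SUM_3(a, b, c) ((a) + (b) + (c))
#define sum(...) CONNECT2(SUM_, VA_NUM_ARGS(__VA_ARGS__))(__VA_ARGS__)
```

Here `sum(1, 2)` expands to `CONNECT2(SUM_, 2)(1, 2)`, i.e. `SUM_2(1, 2)`; the enqueue macro applies exactly the same dispatch to pick `__ENQUEUE_0`, `__ENQUEUE_1`, or `__ENQUEUE_2`.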
2.3 Implementation principle for thread safety
Problem: In multithreaded environments, concurrent operations on the same queue can cause data races, leading to corruption or inconsistency. Queue operations must be atomic to avoid this.
Solution: In embedded systems, disabling interrupts or using locks are common methods to ensure data consistency. This implementation uses interrupt disabling to guarantee atomic queue operations. To minimize the impact on real-time behavior, only pointer operations on the queue are protected by disabling interrupts; time-consuming data copies are performed without disabling interrupts.
Pseudo-code for the enqueue_bytes function:
```c
bool enqueue_bytes(...)
{
    bool bEarlyReturn = false;

    /* Take the mutex flag atomically. */
    safe_atom_code() {
        if (!this.bMutex) {
            this.bMutex = true;
        } else {
            bEarlyReturn = true;
        }
    }
    if (bEarlyReturn) {
        return false;
    }

    /* Only the queue pointer updates are protected. */
    safe_atom_code() {
        /* queue pointer operations */
        ...
    }

    /* The time-consuming data copy runs with interrupts enabled. */
    memcpy(...);
    ...

    this.bMutex = false;
    return true;
}
```
Implementation of the atomic macro safe_atom_code():
```c
#include "cmsis_compiler.h"

#define __CONNECT3(__A, __B, __C)   __A##__B##__C
#define __CONNECT2(__A, __B)        __A##__B
#define CONNECT3(__A, __B, __C)     __CONNECT3(__A, __B, __C)
#define CONNECT2(__A, __B)          __CONNECT2(__A, __B)
#define SAFE_NAME(__NAME)           CONNECT3(___, __NAME, __LINE__)

#define safe_atom_code()                                            \
    for (uint32_t SAFE_NAME(temp) =                                 \
             ({  uint32_t SAFE_NAME(temp2) = __get_PRIMASK();       \
                 __disable_irq();                                   \
                 SAFE_NAME(temp2);                                  \
             }),                                                    \
             *SAFE_NAME(temp3) = NULL;                              \
         SAFE_NAME(temp3)++ == NULL;                                \
         __set_PRIMASK(SAFE_NAME(temp)))
```
How it works:
The safe_atom_code() macro abuses a for loop that executes its body exactly once: the loop initializer saves the current PRIMASK state and disables interrupts, and the increment expression, which runs after the body, restores the saved state. Any code written in the braces following safe_atom_code() therefore runs with interrupts disabled, and the previous interrupt state is restored automatically when the block ends.
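The pattern can be reproduced on a host machine by substituting illustrative fakes for the CMSIS intrinsics (__get_PRIMASK, __disable_irq, __set_PRIMASK); the simplified macro below shows the same save/disable/restore structure without the statement-expression extension:

```c
#include <assert.h>
#include <stdint.h>

/* Host-side stand-ins for the CMSIS intrinsics (illustrative fakes). */
static uint32_t s_primask = 0;   /* 1 = interrupts disabled */
static uint32_t get_PRIMASK(void)       { return s_primask; }
static void     set_PRIMASK(uint32_t v) { s_primask = v; }
static void     disable_irq(void)       { s_primask = 1; }

/* The for loop runs its body exactly once: the initializer saves the old
 * interrupt state and disables interrupts; the increment expression, which
 * runs after the body, restores the saved state. (A break inside the body
 * would skip the restore, a caveat shared with the original macro.) */
#define safe_atom_code()                                       \
    for (uint32_t _state = get_PRIMASK(),                      \
                  _done  = (disable_irq(), 0);                 \
         !_done;                                               \
         _done = 1, set_PRIMASK(_state))
```

Usage mirrors the real macro: `safe_atom_code() { /* critical section */ }` runs the braced block with "interrupts" disabled and restores the prior state afterwards.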
2.4 Summary
With multi-type support, macro-based overloading, and thread safety, the byte queue becomes more flexible and practical:
- Multi-type support: Automatically infer data type and size to support queue operations for different types.
- Macro-based overloading: Simulate function overloading in C to handle varying parameter counts and types.
- Thread safety: Use interrupt disabling to ensure atomic queue operations in concurrent environments, preventing data races.
These improvements allow the byte queue to perform efficiently in single-threaded contexts and maintain data consistency and safety in complex multi-threaded systems.
API
```c
#define queue_init(__queue, __buffer, __size, ...)                         \
    __PLOOC_EVAL(__QUEUE_INIT_, ##__VA_ARGS__)                             \
        (__queue, (__buffer), (__size), ##__VA_ARGS__)

#define dequeue(__queue, __addr, ...)                                      \
    __PLOOC_EVAL(__DEQUEUE_, ##__VA_ARGS__)(__queue, (__addr), ##__VA_ARGS__)

#define enqueue(__queue, __addr, ...)                                      \
    __PLOOC_EVAL(__ENQUEUE_, ##__VA_ARGS__)(__queue, (__addr), ##__VA_ARGS__)

#define peek_queue(__queue, __addr, ...)                                   \
    __PLOOC_EVAL(__PEEK_QUEUE_, ##__VA_ARGS__)(__queue, (__addr), ##__VA_ARGS__)
```
API description
1. Initialize a queue
queue_init(__queue,__buffer,__size,...)
Parameters:
| Parameter | Description |
|---|---|
| __QUEUE | Address of the queue |
| __BUFFER | Start address of the queue buffer |
| __SIZE | Size of the queue buffer in bytes |
| Variable args | Whether to overwrite, default is no |
2. Enqueue
#define enqueue(__queue,__addr,...)
Parameters:
| Parameter | Description |
|---|---|
| __QUEUE | Address of the queue |
| __ADDR | Data to enqueue or address of the data |
| ... | Variable args: number of items to enqueue, or data type and count. If empty, enqueue a single item. |
3. Dequeue
#define dequeue(__queue,__addr,...)
Parameters:
| Parameter | Description |
|---|---|
| __QUEUE | Address of the queue |
| __ADDR | Address of a variable to store dequeued data |
| ... | Variable args: number of items to dequeue, or data type and count. If empty, dequeue a single item. |
4. Peek
#define peek_queue(__queue,__addr,...)
Parameters:
| Parameter | Description |
|---|---|
| __QUEUE | Address of the queue |
| __ADDR | Address of a variable to store peeked data |
| ... | Variable args: data type and number of items to peek. If empty, peek a single item. |
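The difference between peek and dequeue can be sketched on a minimal ring buffer. The type and function names here are illustrative, not the library's API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal ring buffer used to contrast peek and dequeue. */
typedef struct {
    uint8_t buffer[8];
    size_t  head;    /* next position to read from */
    size_t  count;   /* bytes currently stored */
} demo_queue_t;

/* peek: read the byte at the head WITHOUT advancing it. */
static int demo_peek(demo_queue_t *q, uint8_t *out)
{
    if (q->count == 0) return 0;
    *out = q->buffer[q->head];     /* head is left unchanged */
    return 1;
}

/* dequeue: read the byte at the head and advance it. */
static int demo_pop(demo_queue_t *q, uint8_t *out)
{
    if (q->count == 0) return 0;
    *out = q->buffer[q->head];
    q->head = (q->head + 1) % sizeof(q->buffer);
    q->count--;
    return 1;
}
```

Repeated peeks return the same byte; only a dequeue consumes it and moves the head forward.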
Quick usage
Usage example:
```c
#include "ring_queue.h"

uint8_t  data1 = 0xAA;
uint16_t data2 = 0x55AA;
uint32_t data3 = 0x55AAAA55;
uint16_t data4[] = {0x1234, 0x5678};

typedef struct data_t {
    uint32_t a;
    uint32_t b;
    uint32_t c;
} data_t;

data_t data5 = {
    .a = 0x11223344,
    .b = 0x55667788,
    .c = 0x99AABBCC,
};

uint8_t data[100];

static uint8_t s_hwQueueBuffer[100];
static byte_queue_t my_queue;

queue_init(&my_queue, s_hwQueueBuffer, sizeof(s_hwQueueBuffer));

/* Automatically calculate object size based on the variable type */
enqueue(&my_queue, data1);
enqueue(&my_queue, data2);
enqueue(&my_queue, data3);

/* The following three methods all correctly store the array */
enqueue(&my_queue, data4, 2);                       /* data type can be omitted */
enqueue(&my_queue, data4, uint16_t, 2);             /* data type can be specified */
enqueue(&my_queue, data4, uint8_t, sizeof(data4));  /* or use another type */

/* The following two methods both correctly store the struct */
enqueue(&my_queue, data5);                           /* size inferred from the struct type */
enqueue(&my_queue, &data5, uint8_t, sizeof(data5));  /* can also store it as a byte array */

enqueue(&my_queue, (uint8_t)0x11);    /* constants default to int, cast to the required type */
enqueue(&my_queue, (uint16_t)0x2233); /* constants default to int, cast to the required type */
enqueue(&my_queue, 0x44556677);
enqueue(&my_queue, (char)'a');        /* a single character also needs a cast */
enqueue(&my_queue, "bc");             /* strings include the terminating '\0' */
enqueue(&my_queue, "def");

/* Read out all data */
dequeue(&my_queue, data, get_queue_count(&my_queue));
```
Conclusion
The goal of this article is to show how to view macros correctly: macros are not inherently harmful to code development or readability.
- Macros are not mere tricks
- Macros can encapsulate infrastructure provided by higher-level languages
- Well-designed macros can improve code readability rather than damage it
- Well-designed macros do not have to impede debugging
- Macros can encapsulate templates to avoid rewriting complex syntax structures repeatedly