What is a digital twin?
The term “twin” comes from identical twins. Because they share the same genes, they often have similar reactions and susceptibilities, so a treatment that works for one can often be applied to the other.
A digital twin creates a “digital clone” of a real-world entity within a computer system. Although one is physical and the other virtual, their behaviors match closely. The physical entity can synchronize its real-time state to the digital clone, while the digital clone can use big data and artificial intelligence to derive decision suggestions and deliver optimized actions back to the physical entity for execution.
Essentially, a digital twin is a tool for handling complex systems. Its value lies in turning the digital clone into an experimental platform for verifying technologies, searching for optimal solutions, and testing hypotheses in the digital domain, while keeping the physical system stable and applying validated results when appropriate. This approach reduces risk, cuts cost, and improves efficiency.
Why the communications industry needs digital twins
Digital twin technology originated in aerospace and defense and has since expanded into intelligent manufacturing and smart cities. After Gartner listed digital twins among its top strategic technology trends for several consecutive years, the communications industry began exploring applications of its own.
Engineers who frequently test, operate, and optimize wireless networks often feel they are “walking on thin ice.” Communication networks are critical infrastructure that provide essential services such as voice and internet access. A single operational mistake that causes wide-area or long-duration outages, or significant performance degradation, can have severe consequences.
Therefore, operators treat any change to the production network with extreme caution. For example, new feature testing is typically limited to selected regions, parameter changes are checked repeatedly, and implementation is scheduled at off-peak hours to avoid impacting users.
Communication systems are highly complex: end-to-end networks include many network elements, each with different roles and focal points. Operators often have only local views instead of a full-system perspective, which slows their response to changes in user demand and limits their ability to predict network performance accurately. No matter how thorough the planning is, unexpected issues often arise during execution.
If there were a system that could validate parameter combinations before deployment and intelligently search for optimal settings, a configuration pushed to a base station would perform as expected, without surprises. In other words, a validated digital process would greatly reduce trial-and-error on live networks.
To build such a system, the industry has turned to a digitalized approach: the digital twin.
How the communications industry applies digital twins
Communication networks are well suited to digital twin construction for several reasons.
First, every base station already has a precise model in network management systems. This includes the site location, latitude and longitude, antenna height, azimuth, tilt, many functional properties, thousands of radio parameters that shape network performance, and parameters for coordination with other elements. These data can be used to construct a digital representation of a base station.
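As an illustration only, the site and radio attributes above could be captured in a small data record. Every name below (`TwinBaseStation`, `clone_with`, `tx_power_dbm`, and so on) is hypothetical and not drawn from any real network management system schema:

```python
from dataclasses import dataclass, field


@dataclass
class TwinBaseStation:
    """Hypothetical digital record of one base station's configuration."""
    site_id: str
    latitude: float
    longitude: float
    antenna_height_m: float
    azimuth_deg: float
    tilt_deg: float
    # Real networks carry thousands of radio parameters; a flat dict stands in here.
    radio_params: dict = field(default_factory=dict)

    def clone_with(self, **param_changes) -> "TwinBaseStation":
        """Create a twin variant with some radio parameters overridden,
        so candidate configurations can be tested without touching the original."""
        params = {**self.radio_params, **param_changes}
        return TwinBaseStation(self.site_id, self.latitude, self.longitude,
                               self.antenna_height_m, self.azimuth_deg,
                               self.tilt_deg, params)
```

A twin instance can then spawn trial configurations (for example, a lower transmit power) while the record mirroring the live station stays untouched.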
Second, the core network stores comprehensive user data, including approximate location, subscription plan, commonly used apps, device model, call patterns, and other information. Operators manage these data strictly for privacy and security.
Third, as long as a phone has signal, it continuously exchanges information with the network: signaling for control, measurements of radio bands, and reports of measured signal quality. Base stations aggregate collected information into performance KPIs used to assess user experience. In short, the system already captures extensive data and flows.
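To make the aggregation step concrete, here is a minimal sketch that averages per-device measurement reports into cell-level indicators. The report fields, the -110 dBm threshold, and the KPI names are invented for illustration and are far simpler than real standardized counters:

```python
def aggregate_kpi(reports):
    """Aggregate per-device measurement reports into simple cell-level KPIs.

    Each report is a dict like {"rsrp_dbm": -95.0, "throughput_mbps": 12.3}.
    """
    n = len(reports)
    if n == 0:
        return {}
    return {
        "avg_rsrp_dbm": sum(r["rsrp_dbm"] for r in reports) / n,
        "avg_throughput_mbps": sum(r["throughput_mbps"] for r in reports) / n,
        # Share of users below a coverage threshold, a common experience proxy.
        "weak_coverage_ratio": sum(r["rsrp_dbm"] < -110 for r in reports) / n,
    }
```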
Given the existing proprietary hardware and mature software stack, along with detailed real-time data, it is feasible to simplify and adapt these elements to build a high-fidelity digital twin system on commodity servers.
The first step is to deploy an end-to-end virtualized twin system on general-purpose servers.
Twin mobile: Simulate multiple phones with different capabilities on a server and configure randomized behaviors to emulate real users. For example, a percentage of devices stream video, some play games, others move rapidly. These virtual devices do not transmit physical signals because they connect to virtual radio channels.
Twin wireless channel: Based on high-precision maps and environmental information, ray-tracing models simulate reflection, scattering, diffraction, and fading of radio propagation. The model adjusts dynamically with device movement and can simulate interference across scenarios.
Twin base station: A server-simulated base station that includes all software and hardware modules of a physical base station. The running algorithms are the same as those on real base stations; the only difference is that the twin base station uses virtual wireless channels instead of emitting physical radio.
Twin core network: Commercial core networks are already virtualized and can be adapted and streamlined for the twin environment.
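As a toy stand-in for the twin wireless channel, the sketch below uses only free-space path loss; a real ray-tracing model would add reflection, scattering, diffraction, and fading on top of this:

```python
import math


def free_space_path_loss_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44


def received_power_dbm(tx_power_dbm: float, distance_m: float, freq_mhz: float) -> float:
    """Power a virtual device would 'measure' over this simplified virtual channel."""
    return tx_power_dbm - free_space_path_loss_db(distance_m, freq_mhz)
```

A virtual device in the twin would read signal strength from such a channel model rather than from any physical transmission, which is exactly why the twin devices need no radio hardware.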
With an end-to-end virtualized twin system running the same software and parameters as physical devices, the twin can reproduce the behavior of a real base station in a 1:1 manner. Results obtained in the twin environment can match those observed on the live network.
For example, the twin system can accurately predict network performance: if you watch a video in a busy city square while a virtual device in the twin watches the same video at the same virtual location, the quality and smoothness observed on the real and virtual devices should be essentially the same.
Beyond the core twin, an intelligent, automated network application layer is required to address real-world problems: end-to-end SLA assurance, precise network optimization, large-scale antenna weight optimization, user experience improvement, and so on.
Architecturally, data flows between the network application layer, the twin network layer, and the physical network layer form an outer loop, while inside the twin network layer an inner loop performs iterative optimization and simulation validation.
First, the system builds a digital twin based on physical network entities. Operators issue intents, which the intent network translates into autonomous network requirements. The digital twin optimizes and validates via simulation until performance goals are met. The twin then generates a “digital plan” and synchronizes optimized data to physical entities. This is the inner loop.
When the physical network runs with the new data and results still deviate from targets, the outcomes are fed back to the digital twin for further optimization. This outer closed-loop continues until the physical network achieves the desired objectives.
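The inner and outer loops described above can be sketched as follows. Here `simulate`, `measure`, and `refine` are placeholders for the twin simulation, live-network measurement, and candidate generation, and a single scalar "score" stands in for the full set of performance goals:

```python
def inner_loop(candidates, simulate, target):
    """Return the first candidate whose simulated performance meets the target,
    or the best-scoring candidate if none does (the 'digital plan')."""
    best, best_score = None, float("-inf")
    for params in candidates:
        score = simulate(params)          # run the twin with this configuration
        if score >= target:
            return params, score          # goal met: emit the digital plan
        if score > best_score:
            best, best_score = params, score
    return best, best_score               # fall back to the best attempt


def outer_loop(plan, measure, simulate, target, refine, max_rounds=5):
    """Push the plan to the (stand-in) physical network; if live results
    still deviate from the target, feed back into another inner-loop round."""
    for _ in range(max_rounds):
        live_score = measure(plan)        # observed performance on the live network
        if live_score >= target:
            return plan, live_score
        plan, _ = inner_loop(refine(plan, live_score), simulate, target)
    return plan, measure(plan)
```

The key property this sketch preserves is that trial-and-error happens in `simulate` (the twin), while `measure` (the live network) only ever sees validated plans.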
Example: network rate optimization
Current issues faced in wireless rate optimization include:
- User perception deterioration is often detected passively via user complaints or alarms, followed by work order dispatch, troubleshooting, and resolution, which prolongs user impact and reduces satisfaction.
- The complexity of wireless networks introduces many factors affecting throughput, and it is difficult to determine each factor's effective range under varying external or internal conditions.
- When adjusting different factors, it is hard to predict whether the impact on existing services will be large or small, positive or negative.
- Post-adjustment testing is often insufficient, missing verifications for scenarios not directly related to the operation, risking service impact and wasting significant human and material resources.
With a digital twin network, automated geospatial simulation, coverage simulation, and radio parameter configuration simulation can detect user experience issues promptly. Based on simulation, site additions, coverage adjustments, and parameter optimizations can be iteratively validated in the twin; the optimal plan is then pushed to the physical network.
Certain actions, such as site construction and antenna orientation adjustments, still require manual work. Combined with automated updates executed by the physical network, these measures enable network throughput optimization.
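As a minimal sketch of the iterative parameter validation above, assuming a made-up scoring function in place of a real throughput simulation, a grid search over two radio parameters might look like:

```python
import itertools


def best_config(tilts_deg, tx_powers_dbm, score):
    """Exhaustively evaluate tilt/power combinations in the twin and return
    the combination with the highest simulated score."""
    return max(itertools.product(tilts_deg, tx_powers_dbm),
               key=lambda combo: score(*combo))
```

Real parameter spaces are far too large for exhaustive search, so in practice this step would use smarter optimization; the point is only that the search runs entirely in the twin before any setting reaches a live cell.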
Future outlook
In pre-5G and 5G systems, network planning, construction, operations, and optimization have often been siloed across lifecycle stages, causing inefficiency and high cost. The concept of autonomous, intelligent networks aims to address these issues. Digital twin is a key approach to achieving autonomous network operation, and the industry has been conducting broad research in this area.
Currently, digital twin applications in communication networks are at an early research stage and lack consensus on definitions and connotations. Building digital twin networks over large-scale communication infrastructures presents technical challenges in data, modeling, and architecture.
Today’s digital twin explorations on 5G are usually add-on and fragmented. “Add-on” means the twin and the physical network are separated in hardware and software, limiting the timeliness and effectiveness of data synchronization and deployment. “Fragmented” means use-case-driven automation achieves high levels for specific functions but lacks system-wide generality.
In the 6G era, the goal is to leverage in-network computing and native intelligence to build integrated digital twin networks that enable continuous planning, fault self-healing, and high-level lifecycle autonomy with closed-loop control. This will reduce labor requirements and significantly improve network operation efficiency.
Progress begins with concrete steps. With continued research and development, digital twin technology is expected to play an increasingly important role in future communication networks.