Smart technology is integral to daily life, enabling everyday tasks like checking the weather or setting a meeting reminder. In this era of pervasive intelligence, however, protecting that data against breaches is critical, and the combined demands of artificial intelligence (AI) and security are pushing chip design beyond its traditional boundaries.
The semiconductor industry's growth is driven by AI, machine learning (ML), and deep learning (DL), which demand high computational power and specialized chips. Applications like speech recognition, high-performance computing (HPC), data centers, AI-enabled PCs, and autonomous vehicles rely on advanced architectures. These architectures provide robust computing and allow models to improve their decision-making over time, particularly in fields like scientific research, weather forecasting, finance, and energy exploration.
AI's Growing Momentum
AI's potential grows exponentially as more devices connect to the cloud, creating significant market opportunities. To enable rapid decision-making, critical AI computations must occur in hardware. Dedicated AI chips are essential for cost-effective, scalable AI applications, offering innovative solutions for specific use cases.
AI/ML/DL chips feature custom processor architectures and complex data paths that perform precise, high-throughput arithmetic. As data processing demands and automation expectations rise, chip developers and verification teams need modern verification techniques to keep advancing AI technology.
Non-Semiconductor Companies Enter Chip Design
As Moore's Law slows, squeezing further performance gains out of general-purpose processors is increasingly difficult. Alongside established chipmakers such as Nvidia, Intel, AMD, and Qualcomm, non-traditional players including Meta, Amazon, Alibaba, Microsoft, and Google are developing custom ASICs tuned to their AI software and specific applications. A decade ago, few predicted companies like Meta would enter this space.
Automotive, HPC, and cloud computing firms are also building specialized hardware architectures. This market expansion fosters new opportunities, introducing advanced design tools and solutions for complex chip design environments.
RISC-V in AI Design
Initially used in embedded systems, the open-source RISC-V standard now supports automotive, data center, and HPC applications, increasingly applied to AI workloads. Key areas include:
- AI: Heterogeneous AI chip designs often use RISC-V processors, focusing on high-performance, energy-efficient accelerators for neural networks and natural language processing.
- Automotive: RISC-V meets performance, power, cost, and safety requirements for infotainment, advanced driver assistance, and communication systems.
- HPC and Data Centers: RISC-V cores with custom ISA extensions handle complex computations, supporting energy-efficient, secure, and flexible compute kernels (a sketch of such an extension's reference model follows this list).
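To make the custom-extension idea concrete, here is a minimal C++ sketch of the kind of behavioral reference model a verification team might write for a hypothetical RISC-V custom instruction; the mnemonic (`dot4.acc`), its packed int8 dot-product semantics, and all names are illustrative assumptions, not part of the RISC-V standard or any shipping core.

```cpp
// Behavioral reference model for a *hypothetical* RISC-V custom instruction
// "dot4.acc rd, rs1, rs2": treat rs1 and rs2 as four packed signed 8-bit
// lanes, compute their dot product, and accumulate into rd.
#include <cstdint>
#include <cstdio>

int32_t dot4_acc(uint32_t rs1, uint32_t rs2, int32_t rd_acc) {
    int32_t acc = rd_acc;
    for (int lane = 0; lane < 4; ++lane) {
        int8_t a = static_cast<int8_t>((rs1 >> (8 * lane)) & 0xFF);
        int8_t b = static_cast<int8_t>((rs2 >> (8 * lane)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main() {
    // Lanes (1, 2, 3, 4) . (5, 6, 7, 8) = 5 + 12 + 21 + 32 = 70
    uint32_t rs1 = 0x04030201;
    uint32_t rs2 = 0x08070605;
    std::printf("dot4.acc result: %d\n", dot4_acc(rs1, rs2, 0)); // prints 70
    return 0;
}
```

In practice, a model like this becomes the specification that both simulation and formal tools check the custom instruction's RTL against.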
Unique Aspects of AI Chip Design
From startups to cloud providers, companies like Nvidia, Ambarella, Atlazo, AWS, and Google have launched notable AI chips, fueling competition for faster, more efficient designs. Data-centric computing is reshaping PCs, with Intel aiming to power 100 million AI-enabled PCs by 2025 by integrating neural processing units (NPUs) and supporting AI assistants like Microsoft's Copilot.
AI system-on-chip (SoC) designs distribute computation across many specialized blocks, going beyond the parallel processing a traditional CPU can offer. These designs combine control paths (state machines that sequence input and output processing) with computation modules (arithmetic units that operate on the data), accelerating AI algorithms dominated by repetitive, predictable operations.
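As a rough illustration of that split, the following C++ sketch models a toy accelerator block: a small state machine (the control path) streams operand pairs into a multiply-accumulate unit (the computation module). The structure and names are invented for illustration rather than drawn from any particular SoC.

```cpp
// Toy behavioral model of an accelerator block: a control-path state machine
// sequencing a computation module (a multiply-accumulate unit).
#include <cstdint>
#include <cstdio>
#include <vector>

// Computation module: one multiply-accumulate operation per step.
struct MacUnit {
    int64_t acc = 0;
    void step(int32_t a, int32_t b) { acc += static_cast<int64_t>(a) * b; }
};

// Control path: a simple state machine that loads operands, fires the MAC
// once per element pair, and then signals completion.
enum class State { Idle, Load, Compute, Done };

int64_t run_block(const std::vector<int32_t>& a, const std::vector<int32_t>& b) {
    MacUnit mac;
    State state = State::Idle;
    size_t i = 0;
    while (state != State::Done) {
        switch (state) {
            case State::Idle:    state = State::Load; break;
            case State::Load:    state = (i < a.size()) ? State::Compute : State::Done; break;
            case State::Compute: mac.step(a[i], b[i]); ++i; state = State::Load; break;
            case State::Done:    break;
        }
    }
    return mac.acc;
}

int main() {
    std::vector<int32_t> a = {1, 2, 3, 4}, b = {10, 20, 30, 40};
    std::printf("accumulated result: %lld\n",
                static_cast<long long>(run_block(a, b))); // prints 300
    return 0;
}
```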
While computation modules are conceptually straightforward, their verification burden grows rapidly with the width of the arithmetic units. A small multiplier with two 4-bit operands has only 16 × 16 = 256 input combinations, which can be tested exhaustively. A 64-bit adder, by contrast, has 2^128 possible operand pairs; exhaustively simulating even 2^64 of them with traditional methods could take years.
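The arithmetic behind that explosion is easy to demonstrate. The sketch below exhaustively checks a stand-in 4-bit multiplier model against a reference in 256 iterations, then notes why the same brute-force approach collapses for a 64-bit adder; the shift-and-add implementation is purely illustrative.

```cpp
// Exhaustive checking works for tiny operators but not for wide ones.
#include <cassert>
#include <cstdint>
#include <cstdio>

// Stand-in "implementation": a 4-bit shift-and-add multiplier model.
uint8_t mul4_impl(uint8_t a, uint8_t b) {
    uint8_t product = 0;
    for (int bit = 0; bit < 4; ++bit) {
        if (b & (1u << bit)) product = static_cast<uint8_t>(product + (a << bit));
    }
    return product;
}

int main() {
    // Two 4-bit operands: 16 x 16 = 256 combinations, trivially exhaustible.
    for (uint8_t a = 0; a < 16; ++a)
        for (uint8_t b = 0; b < 16; ++b)
            assert(mul4_impl(a, b) == static_cast<uint8_t>(a * b));
    std::printf("4-bit multiplier: all 256 input combinations verified\n");

    // A 64-bit adder has two 64-bit operands: 2^128 combinations.
    // Even at a billion vectors per second, exhaustive simulation would take
    // on the order of 10^22 years, which is why formal methods are needed.
    std::printf("64-bit adder input space: 2^128 combinations (not enumerable)\n");
    return 0;
}
```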
Verification Challenges
AI chip design often uses C/C++ for algorithms, later converted to RTL for hardware implementation. Teams must either develop test vectors for all combinations or verify RTL against C/C++ models, both time-intensive tasks.
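As a hedged illustration of that flow, the sketch below pairs a small C++ golden model (a saturating 16-bit add followed by a ReLU, a typical fragment of a quantized datapath) with a generator that emits stimulus and expected results an RTL testbench could replay. The operation, output format, and names are assumptions chosen for illustration, not a specific team's methodology.

```cpp
// A C/C++ golden model and test-vector generator of the kind used to check
// RTL against the original algorithmic model in simulation.
#include <cstdint>
#include <cstdio>
#include <random>

// Golden model: saturating signed 16-bit addition followed by ReLU.
int16_t sat_add_relu(int16_t a, int16_t b) {
    int32_t sum = static_cast<int32_t>(a) + static_cast<int32_t>(b);
    if (sum > INT16_MAX) sum = INT16_MAX;
    if (sum < INT16_MIN) sum = INT16_MIN;
    return static_cast<int16_t>(sum < 0 ? 0 : sum);  // ReLU clamps negatives to zero
}

int main() {
    // Emit "a b expected" triples that an RTL testbench could replay and compare.
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(INT16_MIN, INT16_MAX);
    for (int i = 0; i < 10; ++i) {
        int16_t a = static_cast<int16_t>(dist(rng));
        int16_t b = static_cast<int16_t>(dist(rng));
        std::printf("%6d %6d %6d\n", a, b, sat_add_relu(a, b));
    }
    return 0;
}
```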
Formal verification uses mathematical analysis to evaluate an entire design without exhaustive test vectors. Once the domain of specialists, it has become accessible to RTL developers through modern tools. However, the scale of AI chips makes complete model-based verification of a full design impractical, and for RISC-V, verifying custom instructions adds further complexity.
Advanced Data Path Verification for AI
Formal equivalence checking verifies complex AI data paths by comparing two representations of a design and proving they produce identical outputs for the same inputs. Because the method supports different abstraction levels and languages, it is well suited to comparing RTL against C/C++ models, and it is a natural fit for AI projects that already maintain C/C++ models for simulation or early software development.
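One way to picture the two sides of such a comparison is the sketch below: an untimed algorithmic model of a 4-element dot product alongside a bit-level, shift-and-add refinement of the same function. A real equivalence checker proves the two match for all inputs; the random cross-check here is only a stand-in for that proof, and both models are illustrative.

```cpp
// Two representations of the same datapath at different abstraction levels,
// the kind of pair a formal equivalence checker would prove identical.
#include <cstdint>
#include <cstdio>
#include <random>

// Abstraction level 1: untimed algorithmic (C/C++ spec) model.
uint32_t dot_spec(const uint8_t a[4], const uint8_t b[4]) {
    uint32_t acc = 0;
    for (int i = 0; i < 4; ++i) acc += static_cast<uint32_t>(a[i]) * b[i];
    return acc;
}

// Abstraction level 2: bit-level refinement using shift-and-add multipliers,
// closer to how an RTL implementation would compute the same result.
uint32_t dot_impl(const uint8_t a[4], const uint8_t b[4]) {
    uint32_t acc = 0;
    for (int i = 0; i < 4; ++i)
        for (int bit = 0; bit < 8; ++bit)
            if (b[i] & (1u << bit)) acc += static_cast<uint32_t>(a[i]) << bit;
    return acc;
}

int main() {
    // Random cross-check as a stand-in for the exhaustive proof a formal
    // equivalence checker provides.
    std::mt19937 rng(1);
    std::uniform_int_distribution<int> byte(0, 255);
    for (int t = 0; t < 100000; ++t) {
        uint8_t a[4], b[4];
        for (int i = 0; i < 4; ++i) { a[i] = byte(rng); b[i] = byte(rng); }
        if (dot_spec(a, b) != dot_impl(a, b)) { std::printf("mismatch\n"); return 1; }
    }
    std::printf("spec and implementation agree on 100000 random vectors\n");
    return 0;
}
```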
Technologies such as gate-all-around (GAA) transistors and multi-die architectures help designs reach their power, performance, and area (PPA) goals, while AI-driven EDA tools automate tasks like design-space exploration and regression analysis, accelerating PPA optimization.
Future: Homomorphic Encryption
AI processes vast data volumes, requiring high-performance chips; research is exploring designs that operate on very wide inputs (e.g., 4096 bits). Hardware security is critical, as incidents such as a $600 million cryptocurrency theft have made clear. Homomorphic encryption, which enables computation on encrypted data without ever decrypting it, reduces the risk of a breach exposing sensitive data and is a promising direction for AI chip design.
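To show what computing on encrypted data means in practice, below is a toy and deliberately insecure C++ sketch of the Paillier cryptosystem, an additively homomorphic scheme, using tiny hard-coded primes. Real deployments use operands thousands of bits wide, which is exactly what motivates the very wide datapaths mentioned above.

```cpp
// Toy Paillier cryptosystem with insecure, tiny parameters, purely to
// illustrate the additive homomorphic property: E(m1) * E(m2) decrypts
// to m1 + m2 without ever decrypting the individual ciphertexts.
#include <cstdint>
#include <cstdio>

uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    // Key generation with toy primes p = 11, q = 13 (real keys use primes
    // of 1024+ bits, hence multi-thousand-bit datapaths).
    const uint64_t n = 11 * 13;          // n = 143
    const uint64_t n2 = n * n;           // n^2 = 20449
    const uint64_t lambda = 60;          // lcm(p-1, q-1) = lcm(10, 12)
    const uint64_t g = n + 1;            // standard choice of generator

    // mu = lambda^{-1} mod n, found by brute force since n is tiny.
    uint64_t mu = 0;
    for (uint64_t x = 1; x < n; ++x)
        if ((lambda * x) % n == 1) { mu = x; break; }

    // Encryption: c = g^m * r^n mod n^2, with r coprime to n.
    auto encrypt = [&](uint64_t m, uint64_t r) {
        return (modpow(g, m, n2) * modpow(r, n, n2)) % n2;
    };
    // Decryption: m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n.
    auto decrypt = [&](uint64_t c) {
        uint64_t L = (modpow(c, lambda, n2) - 1) / n;
        return (L * mu) % n;
    };

    uint64_t m1 = 42, m2 = 17;
    uint64_t c1 = encrypt(m1, 7), c2 = encrypt(m2, 9);

    // Homomorphic addition: multiply ciphertexts, decrypt the product.
    uint64_t c_sum = (c1 * c2) % n2;
    std::printf("decrypt(c1)      = %llu\n", (unsigned long long)decrypt(c1));    // 42
    std::printf("decrypt(c1 * c2) = %llu\n", (unsigned long long)decrypt(c_sum)); // 59 = 42 + 17
    return 0;
}
```

Because the two ciphertexts are only ever multiplied, the party performing the addition never sees the plaintext values.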
Conclusion
AI's integration across computing requires thorough design verification to succeed, especially in safety-critical applications like autonomous vehicles. Edge AI devices will drive real-time data processing, transforming semiconductor design and demanding more productive, faster verification solutions. The future may bring even more capable AI assistants; time will reveal their full potential.