Spectral Design & Test: High-Speed, Low-Power Memory for Processors Targeting AI Applications

Deepak Mehta, President & CEO

The advancement of AI has largely been confined to software development and self-driving vehicles. Owing to the relentless march of Moore’s law, the decades-old AI research ideas of Neural Networks and Machine Learning are now finally seeing the light of day. AI is now steadily shifting into the hardware realm, with Integrated Circuits (ICs) targeted at Machine Learning applications. In a nutshell, their primary aim is to augment the capabilities of advanced surveillance cameras, drones, and factory systems by tapping into the potential benefits of AI. Speaking to this shift, Deepak Mehta, President and CEO of Spectral Design & Test (SDT), says, “While building advanced SOC (System On a Chip) designs supporting Machine Learning applications, the major concern is to produce ICs that can function at high speed while consuming low power.” To put this into perspective, he cites the example of AI-based Vision Processing Units (VPUs) in drones. Because drones are battery powered, he explains, their onboard systems face the profound challenge of operating in real time, which demands high processing speeds within a limited power budget.
Consider the “eye” of the drone, which must recognize objects and maneuver its path in real time at unmatched speed. A combination of pipelined architectures, hundreds of concurrent CPUs, and asynchronous access to vast amounts of on-chip memory tailored to a specific AI algorithm ensures that such systems can achieve high-speed, real-time computation on the lowest power budget.

Based in Somerville, New Jersey, SDT specializes in providing design infrastructure and cutting-edge IP for embedded SOC designs that require configurable on-chip SRAM, ROM, and Flash (Non-Volatile Memory) architectures. These enable memory-intensive applications such as vision processing, speech, AI-based computing, and autonomous computing, where memory speed and power are the bottleneck. The most advanced AI-based processing systems designed on 14/16 nm process nodes have over 100 Mb of SRAM and NVM storage. To assemble such vast amounts of on-chip memory, it is imperative that designers use software automation to assemble and characterize memories of different sizes and shapes that fit on a chip floorplan determined by the size of the SOC package. By combining memory designer and software developer assets, SDT has developed one of the most advanced and productive workflows for embedded memory design.
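The compiler-driven assembly described above can be illustrated with a small sketch. The class names, parameters, and area model below are purely hypothetical illustrations, not SDT’s actual tools: a configurable SRAM macro is parameterized by word count, word width, and column-mux factor, and the automation picks the configuration whose aspect ratio best matches the available floorplan slot.

```python
from dataclasses import dataclass

@dataclass
class SramMacroConfig:
    """Hypothetical parameterization of a configurable SRAM macro."""
    words: int   # number of addressable words
    bits: int    # word width in bits
    mux: int     # column-mux factor (columns sharing one sense amplifier)

    @property
    def rows(self) -> int:
        # Folding columns by the mux factor shortens the array vertically.
        return self.words // self.mux

    @property
    def columns(self) -> int:
        return self.bits * self.mux

    def footprint_um2(self, bitcell_um2: float = 0.05) -> float:
        # Toy area model: bitcell array plus ~20% periphery overhead.
        return self.rows * self.columns * bitcell_um2 * 1.2

def best_mux(words: int, bits: int, target_aspect: float = 1.0) -> SramMacroConfig:
    """Pick the column-mux factor whose rows/columns ratio is closest
    to the target aspect ratio of the floorplan slot."""
    candidates = [SramMacroConfig(words, bits, m)
                  for m in (4, 8, 16, 32) if words % m == 0]
    return min(candidates,
               key=lambda c: abs(c.rows / c.columns - target_aspect))

# For a 16K x 32 macro in a roughly square slot, a deep mux wins:
cfg = best_mux(words=16384, bits=32)
print(cfg.mux, cfg.rows, cfg.columns)  # → 32 512 1024
```

A production memory compiler would of course go much further, sweeping bank counts and redundancy options and characterizing timing and power at every process corner, but the core idea is the same: generate candidate physical configurations in software and select the one that fits the floorplan.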

SDT builds configurable memory designs in a very cost-effective manner and maintains optimum quality at all levels.
To this end, the firm develops a detailed understanding of its clients’ requirements and renders designs that meet their business needs. “Our services are very specific to the customer. We build differentiated memory IP that in turn makes our clients competitive in the marketplace,” says Mehta. The industry veteran adds, “We call this methodology a memory development platform, which comes from our years of experience in this space. Our team includes software developers who are adept in C++, alongside engineers who consistently design extremely low-power memory. Additionally, we are collaborating with our partners on developing configurable, dense, low-power embedded Non-Volatile Memories that can be a game changer for chips targeting AI-based algorithms that morph over time.”

“Our business model is flexible: we provide a software package that our end customers can use to design their own proprietary SRAM macros, or we provide them with a reference design that they can retarget to their desired technology node. Spectral has its own indigenous configurable memory compilers targeted at advanced 16/14/12 nm FinFET technology nodes. Large amounts of high-speed, low-power embedded memory in microchips are enabling autonomous computing and advances in speech recognition and vision, applications fueled by AI algorithms spanning cloud, fog, and edge computing,” concludes Mehta.