
Deepak Mehta, President & CEO
Think of the eye of a drone that needs to recognize objects and maneuver its path in real time at unmatched speed. A combination of pipelined architectures, hundreds of concurrent CPUs, and asynchronous access to vast amounts of on-chip memory, all targeted to a specific AI algorithm, ensures such systems can achieve high-speed real-time computation within the lowest power budget.
Based in Somerville, New Jersey, SDT specializes in providing design infrastructure and cutting-edge IP targeting embedded SOC designs that require configurable on-chip SRAM, ROM, and Flash (non-volatile memory) architectures. These enable memory-intensive applications like vision processing, speech, AI-based computing, and autonomous computing, where memory speed and power are the bottleneck. The most advanced AI-based processing systems designed on 14/16 nm process nodes have over 100 Mb of SRAM and NVM storage. To assemble such vast amounts of on-chip memory, it is essential that designers use software automation to assemble and characterize memories of different sizes and shapes that fit on a chip floor plan determined by the size of the SOC package. By combining memory designer and software developer assets, SDT has developed one of the most advanced and productive workflows for embedded memory design.
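To make the idea of software-automated memory assembly concrete, the sketch below shows, in miniature, what such a flow might look like: sweeping SRAM macro configurations and checking each against a floor-plan slot. This is an illustrative sketch only; the class, the bitcell and periphery dimensions, and the area model are invented assumptions for this example and do not represent SDT's actual software or any real process node.

```python
from dataclasses import dataclass

# Hypothetical bitcell and periphery dimensions (in um), for illustration
# only; real values depend on the process node and bitcell design.
BITCELL_W, BITCELL_H = 0.5, 0.25
PERIPHERY_W, PERIPHERY_H = 20.0, 30.0  # assumed overhead for decoders/sense amps/IO

@dataclass
class SramMacro:
    words: int   # number of addressable entries
    bits: int    # word width
    mux: int     # column-mux factor: `mux` words share one physical row

    @property
    def width_um(self) -> float:
        # Each physical row interleaves `mux` words, so columns = bits * mux.
        return self.bits * self.mux * BITCELL_W + PERIPHERY_W

    @property
    def height_um(self) -> float:
        rows = self.words // self.mux
        return rows * BITCELL_H + PERIPHERY_H

def fits(macro: SramMacro, max_w: float, max_h: float) -> bool:
    """Check whether a generated macro fits its floor-plan slot."""
    return macro.width_um <= max_w and macro.height_um <= max_h

# Sweep configurations and keep only those that fit a 300 x 300 um slot.
candidates = [SramMacro(words=w, bits=b, mux=m)
              for w in (1024, 2048, 4096)
              for b in (32, 64)
              for m in (4, 8)]
viable = [c for c in candidates if fits(c, 300.0, 300.0)]
for c in viable:
    print(f"{c.words}x{c.bits} mux{c.mux}: "
          f"{c.width_um:.1f} x {c.height_um:.1f} um")
```

A production compiler would, of course, also generate layout, netlists, and characterization data for each instance; the point here is only that enumerating and filtering configurations is naturally a software problem, which is why memory design at this scale pairs circuit designers with software developers.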
SDT builds configurable memory designs cost-effectively while maintaining optimum quality at every level.
To this end, the firm comprehends the explicit details of its clients’ requirements and accordingly renders designs that meet their business needs. “Our services are very specific to the customers. We build differentiated memory IP that in turn makes our clients competitive in the marketplace,” says the industry veteran. He further adds, “We call this methodology a memory development platform, which comes from our years of experience in this space. Our team includes software developers who are adept in C++, and we have folks who design extreme low-power memory at a consistent level. Additionally, we are collaborating with our partners in developing configurable, dense, low-power embedded non-volatile memories that can be a game changer for chips that target AI-based algorithms that morph over time,” says Mehta.
“Our business model is flexible: we provide a software package that our end customers can use to design their own proprietary SRAM macros, or we provide them with a reference design that they can retarget to their desired technology node. Spectral has its own indigenous configurable memory compilers targeted at advanced 16/14/12 nm FinFET technology nodes. Large amounts of high-speed, low-power embedded memory in microchips are enabling autonomous computing and advances in speech recognition and vision, applications fueled by AI algorithms encompassing cloud, fog, and edge computing,” concludes Mehta.