With the festive season fast approaching, many wish lists will feature a snazzy tech gadget. Be it a pair of Ray-Ban Meta Smart Glasses, which can record videos, or a Humane AI Pin, which can control smart devices, consumers are hungry for the latest artificial intelligence (AI) inventions. Chatbots, photo editing, productivity enhancement tools—we're swiftly stepping into a world where AI is integral to our daily lives, gracing our faces, attached to our garments and simplifying our work.
In this new age, the custom ASIC (Application-Specific Integrated Circuit) chip is becoming a serious contender in the AI chip market. At present, graphics processing units (GPUs) from the likes of Nvidia and AMD are the go-to for large language models (LLMs): very fast but standardized products designed for general-purpose AI training. Despite their versatility, GPUs may not always yield the best performance and can be overkill for some AI applications. Custom ASIC chips, on the other hand, are highly efficient and optimized for specific workloads, providing a more tailored solution for diverse AI needs. Take the cryptocurrency chip market as an example: bitcoin was originally mined on CPUs, but ASICs went on to displace both CPUs and GPUs thanks to lower electricity consumption and greater computing power. Built specifically for mining a digital currency, ASIC miners now account for the bulk of mining hardware. The custom chip holds vast growth potential and is set to reshape the current AI landscape. According to Morgan Stanley, the ASIC segment for AI is expected to grow by 85% annually from 2023 to 2027 toward a USD 30bn market.1
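To put that forecast in perspective, here is a minimal compound-growth sketch. It assumes the 85% annual rate compounds over the four years from 2023 to 2027 and that the USD 30bn figure is the 2027 end point; both are our reading of the cited forecast, not details taken from the report.

```python
# Rough compound-growth sketch of the cited AI ASIC forecast.
# Assumptions (ours): 85% CAGR over four growth steps (2023 -> 2027),
# with the market reaching roughly USD 30bn in 2027.
cagr = 0.85       # 85% annual growth
end_2027 = 30.0   # USD bn, assumed 2027 market size
years = 4         # compounding steps from 2023 to 2027

implied_2023 = end_2027 / (1 + cagr) ** years
print(f"Implied 2023 base: ~USD {implied_2023:.1f}bn")

# Year-by-year trajectory implied by the same assumptions
size = implied_2023
for year in range(2023, 2028):
    print(f"{year}: ~USD {size:.1f}bn")
    size *= 1 + cagr
```

Under these assumptions the market would start from roughly USD 2.6bn in 2023 and would need to nearly double every year to reach USD 30bn by 2027, which illustrates how steep the projected ramp is.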
Tech giants are placing their bets on ASICs. Google has been partnering with Broadcom to produce its fifth generation of tensor processing units (TPUs), which are purpose-built for AI workloads and deliver a roughly tenfold speed-up. As generative AI takes off, others are working on chips aimed at their cloud data centers. Amazon is working with Marvell on its Graviton chips and collaborates with Alchip Technologies on its Trainium and Inferentia lines. Microsoft has its Maia 100 (in conjunction with TSMC) and Cobalt 100 (with Marvell). Even OpenAI, the force behind ChatGPT, has reportedly appointed Google’s former TPU head to lead its own AI hardware and chip business. In the automotive market, Tesla works with Alchip on the chips behind Dojo, its AI training supercomputer that underpins its self-driving ambitions. Clearly, one size doesn’t fit all, and companies are well prepared to unwrap the possibilities of custom AI chips.
Alchip Technologies
Alchip Technologies, founded in 2003, has risen to the heart of Taiwan’s tech scene. The company has always focused on ASIC design services for high-performance computing and network communications. But things weren’t always rosy. Before 2018, the market was a different beast: small-scale ASICs for mobile devices were the mainstream, and Alchip sold chips to Japanese consumer electronics companies for cameras and gaming consoles. When bitcoin mining took off in 2018, the landscape changed dramatically and Alchip’s time to shine arrived. Now, the company is riding yet another hot tech trend: artificial intelligence.
Alchip’s ASICs focus on the leading edge. The company specializes in sub-7nm process nodes, making it highly sought after in high-performance computing (HPC). In fact, Alchip generates 80% of its revenue from the HPC market. With AI and supercomputing needs skyrocketing, Alchip's prowess in advanced-node design is more critical than ever. The company is building a moat around its business by forging strategic partnerships with AI powerhouses. With Amazon, Alchip is involved in cloud AI chip design, bringing to life AWS' in-house AI inference chip, Inferentia, and AI training chip, Trainium. With Tesla, it is working on ASICs for the Dojo supercomputer, a linchpin in the self-driving tech of tomorrow. These collaborations aren't just about revenue for Alchip (although that's a nice perk); they put the company at the cutting edge of the tech sector's most exciting developments.
Looking ahead, AI is here to stay, and Alchip's expertise in advanced-node design positions it well to meet the growing demand for HPC and AI applications. Continued R&D, coupled with its commitment to top-tier customers, should keep the company on a path of success and innovation.