October 11, 2023
You have to hand it to the folks at Gartner. When they began using and promoting the term 'hype cycle', it was an insightful description of the collective hysteria that seems to accompany every innovation from the technology community. Today, the world is abuzz with the terms "ChatGPT," "Artificial Intelligence," and "AI."
As a concept, AI has been around since 1955, according to "A Very Short History of Artificial Intelligence (AI)," written by Gil Press and published in Forbes in 2016. More recently, the talk of the tech world centers on "generative AI" and the potential impact of computer systems that can take both natural language text and voice inputs and create a response spanning a range of outputs, including graphics, animations, white papers, internet search results, and PowerPoint presentations, to name a few.
Work on the concepts and implementations of AI has progressed in fits and starts across the decades since 1955. Programming languages such as LISP and PROLOG were created to facilitate AI development, and a series of hardware innovations such as neural networks, FPGA-based accelerators, and certain ASICs were built with AI in mind. Recently, the Open Compute Project (OCP) launched the Open Accelerator Infrastructure (OAI) project and its Open Accelerator Module (OAM), aimed at establishing a standard AI accelerator card format.
Today, the combination of massive-core-count graphics processing units (GPUs) from a world leader in artificial intelligence computing and algorithms that capitalize on the parallel processing capabilities of those GPUs enables large language models (LLMs) to be developed and trained to recognize and 'understand' text-based inputs. Microsoft Bing, Microsoft Copilots, Google Bard, Salesforce Einstein, Adobe Firefly, Craiyon, DALL-E, and countless other AI-enabled applications are now the talk of the industry as they promise new levels of productivity, efficiency, and creativity in a wide variety of settings. Office workers, students, factory automation systems, and autonomous vehicles all stand to benefit from the widespread availability of new AI-enabled tools. Others leverage AI to lower operating costs, identify and schedule preventive maintenance items, and improve customer service.
Behind the scenes, a world leader in artificial intelligence computing is building AI data center infrastructure that is powering many new AI applications publicized by hyperscalers and SaaS providers using the A100 and, more recently, the H100 Tensor Core GPU. These systems process and communicate massive amounts of data in parallel, enabling innovation across the technology industry to happen at the speed of AI.
Implementation of these platforms comes with demanding power requirements. Selecting appropriately sized rack PDUs is critical to powering the most complex AI models that demand unprecedented scale. Raritan's intelligent rack PDUs have field-proven designs that meet these stringent requirements for power capacity, metering, accessory support, and more. If your organization is interested in operating an AI infrastructure based on this platform, check out the world leader in artificial intelligence computing's AI data center infrastructure reference guide and Reference Architecture, which describe these solutions in detail, including the space, power, and cooling requirements involved, along with product recommendations.
In addition to offering field-proven intelligent rack PDU designs, Legrand provides cabinets, busways, optical fiber, and transceiver solutions that are also well suited to your AI needs. We have a team of power experts to support you regardless of your AI project's stage or scale. Contact us today to get started.