Rise of the AI Workstation

Given rapidly growing interest in artificial intelligence, more workstation vendors are rising to meet demand.

BOXX positions its AI workstations at the intersection of product development, graphics, and AI model development. Image courtesy of BOXX.


We are currently witnessing the most rapid technology adoption in modern times. Mobile phones took more than 20 years to reach 100 million users globally. The artificial intelligence (AI) application ChatGPT reached 100 million users only two months after its public release, making it the fastest-growing consumer application in history.

For consumers and modest enterprise use, AI applications are a labor-saving utility. For computationally demanding tasks such as product design, AI promises to not only simplify specific tasks but also provide a way to streamline existing workflows or replace them altogether.

Generally speaking, more advanced AI applications that deliver greater benefits tend to require more computational resources (and more electricity). However, efficiency improvements and specialized hardware can deliver substantial AI capability while keeping resource demands manageable. This is where AI workstations come into play.

An AI workstation is a computer designed and optimized for the professional work of building and using AI and machine learning (ML) models. These tasks typically require significant computational power and specialized hardware to efficiently handle complex algorithms and large datasets. Key components and features of an AI workstation include a powerful CPU, multiple high-end graphics processing units (GPUs), tensor processing units, 32 GB or more of RAM, high-speed storage, and a robust cooling system. An AI workstation usually comes with AI frameworks and related software preloaded, depending on the intended use.
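
To make that baseline concrete, here is a minimal Python sketch, assuming the psutil and PyTorch packages (neither is named in this article), that reports whether a given machine clears it; the thresholds are illustrative, not vendor specifications.

```python
# Minimal sketch: check a machine against an illustrative AI-workstation
# baseline (RAM and GPU count). Thresholds are examples, not vendor specs.
import psutil   # assumed installed; not mentioned in the article
import torch    # assumed installed; PyTorch is discussed later in the article

ram_gb = psutil.virtual_memory().total / 1e9
gpu_count = torch.cuda.device_count()  # 0 if no CUDA/ROCm device is visible

print(f"System RAM: {ram_gb:.0f} GB (32 GB or more recommended)")
print(f"GPUs visible to PyTorch: {gpu_count}")
for i in range(gpu_count):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB VRAM")
```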

The top-end GPUs from NVIDIA and AMD are designed both for superior graphics and to accelerate AI workloads. Because AI training and inference scale well across GPUs, the highest level of performance possible on a workstation comes from installing multiple GPUs. Using multiple GPUs locally can sometimes offer better performance than a cloud service and helps avoid network latency and local bandwidth limitations. Local GPUs also minimize cloud computing fees and reduce data security concerns.
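
As a minimal illustration of how those local GPUs get used, the PyTorch sketch below spreads a toy model across every GPU in a workstation with DataParallel; DistributedDataParallel is the preferred approach for serious training, and the model and batch here are placeholders.

```python
# Minimal sketch: run one forward pass across all local GPUs with
# PyTorch's DataParallel. Model and data are dummies for illustration.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

if torch.cuda.device_count() > 1:
    # Replicates the model on each local GPU and splits each batch among them.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(256, 1024, device=device)
out = model(batch)   # the batch is divided across the available GPUs
print(out.shape)     # torch.Size([256, 10])
```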

Some workstation vendors are now pitching new configurations as suitable for AI development and use. Using on-premises workstations can provide scalability and flexibility as needs change. Hardware can be customized according to engineering requirements and preferences.

On-Premises AI Development

Using on-premises AI workstations can offer several advantages over cloud-based AI development:

Flexibility: While the cloud provides flexibility, operating costs for long-term AI projects can escalate. On-premises GPU workstations give developers unlimited iteration and testing time for a one-time, fixed cost. The more an on-premises system is used, the greater its return on investment.

Experimentation: When using on-premises workstations, developers have the freedom to experiment without worrying about usage costs or budgets. If a new methodology fails, there’s no added investment required to try a different approach, encouraging creativity.

Data sovereignty and privacy: On-premises deep learning systems make it easier to adopt AI while following strict data regulations and minimizing cybersecurity risks.

The Dell Precision 7780 workstation features a 17-in. screen, 13th-gen Intel Core i7, and an NVIDIA RTX 2000 Ada GPU. Image courtesy of Dell.

Leveraging existing infrastructure: If an organization already has underused workstations, they can be repurposed and upgraded for AI initiatives. Doing AI work on-premises with existing hardware is a quick way to kickstart projects, test use cases and gain additional value from current assets.

Secure and compliant machine learning pipelines: Running AI applications on-premises allows adhering to well-defined internal policies. It becomes easier to build repeatable, secure, and compliant ML pipelines. This is particularly beneficial in heavily regulated industries such as healthcare and aerospace.

Cost savings by avoiding data transfer: For enterprises storing data on-premises, moving it to the cloud for AI development can be time-consuming and costly. Running AI projects where the data resides eliminates these transfer costs.

Of course, even when there are good reasons to move forward, the possible drawbacks must also be considered. Unless a company has a supply of workstations to upgrade for AI, bringing AI development in house could require significant upfront investment in both hardware and infrastructure. All the costs of operating and maintaining this new AI infrastructure are now the organization’s problem, not the cloud service provider’s.

In-house AI development also has different model and service requirements. Cloud platforms, including Amazon Web Services (AWS) and Azure, offer an array of pretrained models for common AI tasks. These include ones most likely needed for product engineering and manufacturing, such as computer vision, natural language processing and predictive maintenance. On-premises solutions may not have the same breadth of pretrained models. Organizations may have to build from scratch or fine-tune existing models.
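
As a rough sketch of that fine-tuning path (not any vendor's specific workflow), the Python example below starts from torchvision's pretrained ResNet-50, a common computer-vision baseline, and retrains only a new classification head on local data; the class count and training details are placeholders.

```python
# Minimal sketch: fine-tune a pretrained vision model on local data
# instead of training from scratch. Class count is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # hypothetical number of part or defect categories
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained backbone; train only the new classification head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
# ...build a DataLoader over the on-premises dataset and run a standard
# training loop here...
```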

Comparing GPUs

Both NVIDIA and AMD offer GPU technology and supporting software enhanced for AI development. NVIDIA has the largest market share in this area and was the first to deploy AI-specific features.

Orbital Computers offers its Silenced Monolith workstation as a midrange solution configured for AI tools including PyTorch, Keras and TensorFlow. Image courtesy of Orbital Computers.

Many engineering companies are in long-term relationships with their workstation vendor; for them, the choices will be limited to what the vendor offers. Other companies, especially smaller ones, may have the luxury to shop around. Here are some factors to consider:

Performance requirements: NVIDIA GPUs are often favored for deep learning tasks due to their CUDA architecture and extensive support for open-source AI frameworks like TensorFlow and PyTorch. By comparison, AMD GPUs have made strides in AI performance but may not yet match NVIDIA in ecosystem support and optimization for certain applications. AMD is also known for using open-source software for other aspects of the GPU software infrastructure; NVIDIA’s software ecosystem is more proprietary. (A short sketch after this list shows how one framework addresses GPUs from both vendors.)

Costs: The upfront cost of workstations equipped with NVIDIA GPUs can be higher, but this may be offset by the performance benefits and time savings in development. AMD GPUs may offer a more budget-friendly option, which could be appealing for companies with tighter budgets. Companies should evaluate the total cost of ownership—including development time—not just the initial price.

Ecosystem and support: NVIDIA has a larger community and more extensive resources available for troubleshooting and support, which can be beneficial for teams new to AI development. Companies should consider the availability of training, documentation, and community support for both platforms. A particular workstation vendor may have strong partnerships with NVIDIA or AMD that bring a strategic advantage to the buyer.

Efficiency: AMD has focused on thermal management as a strategic differentiator in its competition with NVIDIA. If other factors are similar, AMD GPUs may offer lower power consumption and better thermal management.
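
Returning to the performance-requirements point above, the short sketch below illustrates one practical aspect of framework support: PyTorch’s ROCm builds reuse the same torch.cuda calls as its CUDA builds, so the same detection code covers GPUs from either vendor. Whether a given model is equally well optimized on both is a separate question.

```python
# Minimal sketch: the same PyTorch calls work on NVIDIA (CUDA) and
# AMD (ROCm/HIP) builds; torch.version.hip is set only on ROCm builds.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"Accelerator backend: {backend}")
    print(f"Device 0: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU visible; falling back to CPU.")
```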

What Vendors Offer

All well-known workstation vendors offer AI-equipped workstations; NVIDIA also offers an AI workstation of its own. What follows is a representative, far-from-exhaustive sample of specific products from these vendors.

NVIDIA DGX Station: Companies such as Microsoft and Google offer hardware systems that serve as a reference for how they believe their software should be used by other vendors. NVIDIA does something similar with its DGX Station. Each workstation is fitted with four Tesla V100 GPUs and the NVIDIA GPU Cloud Deep Learning Software Stack. The NVIDIA homepage for DGX Station includes success stories from engineering powerhouses including Shell, BMW, and Sony. DGX models range from deskside units to systems that fill a room.

Lenovo P Series: Lenovo offers deskside and mobile workstations customized for AI development. A key differentiator for Lenovo is its innovative device-as-a-service (DaaS) approach to procurement.

Lambda Labs GPU Workstation: This boutique vendor offers Intel Core i9 and AMD Threadripper CPU options with up to 1TB of memory. The company describes its products as suitable for smaller teams and individual engineers looking to train machine learning models on-premises.

HP Z8 Fury: HP boasts this is the most powerful Intel-based AI workstation available. StorageReview magazine described it as “a monstrously powerful desktop workstation.” The Intel Xeon W3400 Sapphire Rapids CPU is standard. It offers eight PCIe slots mixing Gen3, Gen4, and Gen5; six SATA ports; four internal 3.5-inch bays; and two internal NVMe connectors to a front-removable M.2 carrier tray.

Dell Precision: Dell offers several Precision models preconfigured for AI development in deskside, mobile, and rack form factors. Products are available in the 5000 and 7000 series.

BOXX APEXX AI: BOXX sits somewhere between the boutique and enterprise vendors. It offers several models for AI development using AMD Threadripper PRO 7000 series CPUs and one or more NVIDIA RTX 6000 Ada GPUs.



About the Author

Randall Newton

Randall S. Newton is principal analyst at Consilia Vektor, covering engineering technology. He has been part of the computer graphics industry in a variety of roles since 1985.
