

How ChatGPT and Other AI Bots Could Make Simulation Easier, Faster

Ansys CTO expects reduced order models to become part of simulation.

OzenCon 2023 took place at the Computer History Museum in Mountain View, California. Image by DE 24/7.


On a brisk February morning, a group of simulation software users gathered at the Computer History Museum in Mountain View, California. It was the return of OzenCon, hosted by the Ansys reseller and simulation consultant Ozen Engineering. The annual event was on hiatus during the COVID shutdown, but with restrictions easing, it was ready for a reboot.

This year, the keynote speaker was Dr. Prith Banerjee, CTO of Ansys. Simulation usage and his company’s strategy, he revealed, will be driven by five technology pillars:

  • the improving, evolving solvers;
  • the use of high-performance computing (HPC);
  • the rise of Artificial Intelligence (AI) and Machine Learning (ML);
  • the increased use of private, public, and hybrid clouds;
  • digital engineering (encompassing digital twins, model-based design, and similar applications). 

At OzenCon, Ansys CTO Dr. Prith Banerjee discusses how AI programs like ChatGPT might change engineering simulation. NVIDIA and Dell discuss the processing power and hardware needed to run AI or Machine Learning workloads. Reported by Kenneth Wong, DE 24/7.

How ChatGPT Might Change FEA Programs

At OzenCon, the thrust of Banerjee’s talk was the role of AI and ML, and how they’re expected to reshape the simulation user experience. Advancements in FEA have been well publicized; parallel developments in AI and deep learning, though less visible, have been happening at the same time. Banerjee anticipates the two will soon converge.

In the weeks preceding the conference, the AI chatbot ChatGPT, developed by OpenAI, had been causing a buzz, showing off its robust natural-language processing skills. For highly technical disciplines like FEA (finite element analysis), ChatGPT offers tantalizing possibilities to reengineer the user experience by lowering the learning curve and broadening the technology’s reach.

“ChatGPT will completely transform the way simulation tools are used in the future,” predicted Banerjee. “Today, when you use a simulation tool such as Ansys HFSS for electromagnetic simulation or Ansys Fluent for fluid simulation, you must set up a hundred different parameters. You have to almost have a Ph.D. in aerospace engineering to know which parameters to set. In the future, with technology like ChatGPT, you can actually say, ‘Run an external aerodynamic simulation over a Boeing 747 plane,’ and it will figure out for that particular case, what are the fluid settings to use.”
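A minimal sketch of what such a workflow might look like is shown below: a language model is asked to translate a plain-English request into a structured set of solver settings, which a script then applies. Every function and parameter name here is hypothetical and used only for illustration; this is not an Ansys or OpenAI API.

```python
import json

def ask_language_model(prompt: str) -> str:
    """Placeholder for a call to a ChatGPT-style model (hypothetical).

    In a real workflow this would send the prompt to a hosted LLM and
    return its text response; here it is stubbed with a canned answer.
    """
    return json.dumps({
        "analysis_type": "external_aerodynamics",
        "turbulence_model": "k-omega SST",
        "inlet_velocity_m_s": 250.0,
        "reference_area_m2": 511.0,
    })

# A plain-English request, in the spirit of Banerjee's example.
request = "Run an external aerodynamic simulation over a Boeing 747."

# The model returns structured settings instead of free-form text ...
settings = json.loads(ask_language_model(
    f"Translate this request into CFD solver settings as JSON: {request}"
))

# ... which a script could then hand off to whatever solver API is in use.
for key, value in settings.items():
    print(f"{key:>22}: {value}")
```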

At their core, physics solvers work by solving advanced partial differential equations (PDEs). Where there’s massive calculation, there’s opportunity for machine learning. In his talk at OzenCon, Banerjee explained, “In FEA solvers like Ansys Fluent, there are a lot of patterns happening. When you do simulation hundreds of thousands of times, these patterns begin to repeat. Just like ChatGPT can understand the patterns [in natural language-based interactions], we are going inside the physics solvers to understand the patterns in the signals,” he said.
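As a loose illustration of learning from repeated runs, the sketch below fits a simple least-squares surrogate to (parameter, result) pairs gathered from earlier simulations and uses it to estimate a new case without re-running the solver. The data are made up and the model is deliberately trivial; production ML-augmented solvers are far more sophisticated.

```python
import numpy as np

# Hypothetical results from earlier solver runs: each row is
# (inlet velocity [m/s], angle of attack [deg]) -> drag coefficient.
params = np.array([
    [200.0, 0.0], [200.0, 4.0], [250.0, 0.0],
    [250.0, 4.0], [300.0, 0.0], [300.0, 4.0],
])
drag = np.array([0.021, 0.028, 0.023, 0.031, 0.026, 0.035])

# Fit a linear surrogate c_d ~ a*v + b*alpha + c by least squares.
design = np.column_stack([params, np.ones(len(params))])
coeffs, *_ = np.linalg.lstsq(design, drag, rcond=None)

# Estimate a new operating point without running the full solver.
new_case = np.array([275.0, 2.0, 1.0])
print("predicted drag coefficient:", float(new_case @ coeffs))
```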

Banerjee also believes ChatGPT and similar programs could become first-level tech support by automatically analyzing the archive of interactions between application engineers and customers. This is on Ansys’s roadmap, he said.

On March 6, Microsoft announced it would bundle the technology behind ChatGPT with its Power Platform, which allows users to develop applications with little or no coding, calling it the latest integration of artificial intelligence into its products.

Reduced Order Models to Speed Up Solve Time

The cost of a simulation job is usually determined by the time it takes to solve. The longer it takes to solve, the costlier it is. This has led some to employ Reduced Order Models (ROM) and Uncertainty Quantification (UQ) to drastically cut down solve time. ROMs and UQ allow you to bypass the need to run simulations based on full physics models. Instead, you can focus on a small number of parameters that make a huge difference in simulation outcomes. Both methods require AI or ML. 

“[Using physics solvers] is very accurate, but it takes a long time to solve. And simulation problems could have 200 million unknowns, a billion unknowns, and so on. With ROMs, you can take a problem with a billion unknowns and automatically reduce it to maybe 1,000 unknowns. At Ansys we have a suite of technologies with static ROMs, dynamic ROMs, linear ROMs, and nonlinear ROMs. They’re built into our Twin Builder, our digital twin tool,” said Banerjee.
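One common data-driven route to a ROM is proper orthogonal decomposition (POD): collect snapshots of full-field solutions, extract a handful of dominant modes with a singular value decomposition, and then work in that small modal space instead of the full state. The toy sketch below, built on synthetic snapshot data, shows the basic mechanics of reducing many unknowns to a few; Ansys’s static, dynamic, linear, and nonlinear ROMs are, of course, far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": 50 full-field solutions with 10,000 unknowns each,
# secretly generated from only 3 underlying modes plus a little noise.
n_dof, n_snapshots, n_true_modes = 10_000, 50, 3
modes_true = rng.standard_normal((n_dof, n_true_modes))
amplitudes = rng.standard_normal((n_true_modes, n_snapshots))
snapshots = modes_true @ amplitudes + 0.01 * rng.standard_normal((n_dof, n_snapshots))

# Proper orthogonal decomposition: SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep only the dominant modes -- this is the "reduction".
r = 3
basis = U[:, :r]                       # 10,000 x 3 reduced basis

# Project a new full-order state onto the reduced space and back.
full_state = modes_true @ rng.standard_normal(n_true_modes)
reduced_state = basis.T @ full_state   # just 3 numbers instead of 10,000
reconstructed = basis @ reduced_state

rel_error = np.linalg.norm(full_state - reconstructed) / np.linalg.norm(full_state)
print(f"reduced {n_dof} unknowns to {r}; reconstruction error: {rel_error:.2%}")
```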

As a digital twin platform, Ansys Twin Builder incorporates many existing Ansys technologies. The company describes Twin Builder as a technology to “quickly create a digital twin—a connected replica of an in-service asset.”

Ansys CTO Dr. Prith Banerjee at OzenCon, after his keynote talk. Image by DE 24/7.

Hardware for AI Workloads

AI and ML workloads benefit from the GPU’s parallel processing power. Over time, NVIDIA has expanded its domain far beyond gaming and entertainment to include the emerging field of AI across major industries. At this month’s NVIDIA GTC, the company’s conference for the era of AI and the metaverse, hundreds of sessions will explore AI trends and their impact on the world. “The user needs to train the model on a neural network to draw inferences. A lot of those workloads are accelerated by NVIDIA GPUs. So you’ll need to upgrade your hardware to NVIDIA GPUs to do the training and inferencing,” said Zihan Wang, Manufacturing Industries, NVIDIA.

Wang also pointed out, “The GPU is good at breaking down a problem into smaller, simpler tasks, and doing them in parallel, so the GPU can greatly accelerate simulation.”

NVIDIA’s hardware partner Dell was also present at OzenCon. Scott Hamilton, Industry Strategist, Dell, said, “We created a specific product called the Data Science Workstation. It’s one of our higher end tower workstations. It allows you to configure it to your needs … either with a single GPU that you might use for inference-related work, or with multiple GPUs for training exercises.” 

The Data Science Workstation is part of the Dell Precision professional workstation line. Configuration options include NVIDIA RTX™ GPUs targeting AI workloads. Hamilton also pointed out Dell and NVIDIA have developed specific hardware recommendations for Ansys applications, including:

  • Ansys Fluent
  • Ansys Discovery
  • Ansys Mechanical
  • Ansys SpaceClaim
  • Ansys CFX


 
