“Engineering is done with numbers. Analysis without numbers is only an opinion.”
— Akin’s First Law of Spacecraft Design
When I was studying for my bachelor’s degree in mechanical engineering at the University of Waterloo, reaching for ANSYS software seemed like a knee-jerk reaction to any vaguely complex modelling problem. Figuring out the drag on an aircraft? Use ANSYS. Determining the stress in a structural member? Use ANSYS. It seemed like the magic answer to any problem – even when it was unclear what the problem was to begin with.
Later, as I learned about problem-focused engineering from the University of Waterloo’s Problem Lab and completed two co-op terms at ANSYS itself working on multiphysics software, the real issues with the solution-first-problem-later approach became clear. On reflection, perhaps it was because numerical simulation is such a sought-after skill by co-op employers (and in industry, in general) that we tried to find any excuse to hone our skills. As the old bromide goes, “To the man with a hammer…”
In the context of engineering, numerical simulation is the art and science of predicting how a system will behave by calculating approximate solutions to the partial differential equations that govern it. It is an invaluable tool developed by mathematicians and used extensively by engineers, because there are few exact solutions to these equations, even for the simplest of situations.
For example, the Navier-Stokes equations that govern fluid flow have no exact solution for the vortices that shed off a pole in the wind. Maxwell’s equations for electromagnetics have no exact solution for the shape of a bolt of lightning. There is no exact solution for the distribution of stresses in a human femur, except with generous simplifications.
The solutions to these problems can only be approximated, usually with numerical methods. Rather than solving the equations with a sheet of paper, we take the problem and break it down into thousands, millions, or even billions of tiny pieces (a process known as discretization), then use simplifying assumptions to solve those tiny pieces.
We can take an analogy from computer graphics. It is impossible to display a perfect circle on a screen with square pixels. However, if the pixels are small enough, we can get sufficiently close for most purposes.
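To make the analogy concrete, here is a toy sketch of my own (purely illustrative, not any real solver): we can “discretize” a circle into square cells and estimate its area by counting the cells whose centers fall inside. The estimate approaches πr² as the cells shrink, just as a pixelated circle looks rounder at higher resolution.

```python
import math

def circle_area_estimate(radius, n_cells):
    """Estimate a circle's area by covering its bounding square with an
    n_cells x n_cells grid and counting cells whose centers fall inside."""
    cell = 2.0 * radius / n_cells            # side length of one square cell
    inside = 0
    for i in range(n_cells):
        for j in range(n_cells):
            # Center of cell (i, j) in a grid spanning [-r, r] x [-r, r]
            x = -radius + (i + 0.5) * cell
            y = -radius + (j + 0.5) * cell
            if x * x + y * y <= radius * radius:
                inside += 1
    return inside * cell * cell

# The error shrinks as the "pixels" do -- the essence of discretization:
for n in (10, 100, 1000):
    print(n, round(circle_area_estimate(1.0, n), 4), "vs", round(math.pi, 4))
```

Real solvers are vastly more sophisticated, but the core trade-off is the same: finer pieces give better answers at a higher computational cost.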
In the infancy of numerical simulation in the 1960s, these calculations were tediously done by hand. Now, breaking down these problems and running the calculations is done almost exclusively by digital computers.
I find that newcomers to computer modelling, especially in the field of numerical simulation, make a number of common mistakes – most of which I’ve made myself. These mistakes can be costly because, despite how far computers have come since the 1960s, numerical simulation is extremely resource-intensive. Even relatively simple fluid dynamics problems may tie up dozens of processors on a supercomputing cluster for several hours. Entering a wrong value somewhere could mean restarting the analysis and throwing out days of work.
If you’re new to computer simulation, these lessons may help you avoid the mistakes I made.
Engineering simulation starts on paper
“Plan your work and work your plan.”
— The Twelfth Unwritten Law of Systems Engineering
There’s no point trying to solve a problem you don’t understand.
Never start an engineering analysis by firing up your favorite simulation software. Start by noting the following things, perhaps in your notebook or in a Word document:
- What is the problem?: Make sure you really understand the physical situation that you need to model. What physics are relevant — is it a thermal problem, fluid problem, electromagnetic problem, or something else entirely?
- What results are needed?: Figure out what results your client, supervisor, or colleague needs. If you are modelling the flow of air over a wing, do you just need to find the lift, or do you also need to find the rate of vortex shedding?
- What is the analysis going to be used for?: Learn the context of your work: how is it going to fit into the project’s goals? This will prevent you from doing unnecessary or ill-directed work.
Once you have these three pieces of knowledge, you can begin thinking about what methods you should use to solve your problem.
You need to know what the right answer looks like — and doesn’t
“If your analysis says your terminal velocity is twice the speed of light, you may have invented warp drive, but the chances are a lot better that you’ve screwed up.”
— Akin’s Nineteenth Law of Spacecraft Design
You’ve done your homework, you’ve selected your software, you’ve set up the problem, you’ve begged for CPU hours on the supercomputing cluster, and you now have colorful results plots on your screen.
How do you know your results are of any value at all?
The answer is that you need a rough idea, a priori, of what the right answer looks like. This can come from hand calculations, previous analyses of similar situations, experimental data, and first-principles knowledge of physics. By knowing roughly what the right answer should look like, you will be able to tell which answers are wrong.
One of the easiest telltale signs is getting unphysical results — results that are obviously impossible or don’t make sense. If your finite element analysis is showing temperatures exceeding ten to the thirty-eighth power Kelvin, either you’re simulating the initial conditions of the Big Bang or a floating-point error has occurred somewhere. If your computational fluid dynamics simulation has spiking residuals of enormous magnitude, the simulation has diverged and the results should be thrown out.
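One way to make these sanity checks routine is to encode them as assertions that run over the raw result field before you ever look at a contour plot. A minimal sketch, with hypothetical bounds chosen only for illustration (no real solver uses exactly these):

```python
def check_temperature_field(temps_kelvin, t_min=0.0, t_max=1.0e4):
    """Flag unphysical values in a temperature result field.

    The bounds are illustrative: below 0 K is impossible, and 10,000 K
    is far beyond anything an ordinary thermal analysis should produce.
    """
    bad = [t for t in temps_kelvin if not (t_min <= t <= t_max)]
    if bad:
        raise ValueError(f"{len(bad)} unphysical temperature(s), e.g. {bad[0]:.3g} K")

# A plausible field passes silently; a floating-point blow-up is caught early:
check_temperature_field([293.15, 310.2, 355.0])
try:
    check_temperature_field([293.15, 1e38])
except ValueError as err:
    print(err)
```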
More subtly, you may observe that the result fields don’t look quite right. For example, imagine that you are simulating a simple solenoid coil and the resulting magnetic field lines look chaotic, rather than following the familiar curved lines from north to south. You may have done something wrong and should not trust the results.
There is no such thing as a ‘correct’ engineering analysis — only ones that are less wrong
“Don’t keep polishing the cannonball but do get the caliber right.”
— The Fifteenth Unwritten Law of Systems Engineering
All – and I mean all – engineering analysis is approximate. No engineering analysis is possible without making some simplifying assumptions somewhere. An engineering analysis is only as good as your knowledge of the problem, and without simplifying assumptions, you would require an infinite amount of knowledge about it. That is why we have margins of safety.
Perhaps the only place where first-principles theoretical analysis will yield exact real-world solutions is in quantum or particle physics, and even then there are error bars.
So, choose your simplifying assumptions carefully. When deciding what assumptions to use, ask yourself three questions:
- How much accuracy do I need in my results? What am I using the results for?
- How much of a difference will this simplifying assumption make?
- Should I be using numerical simulation at all?
Avoid the mistake of measuring with a micrometer and cutting with an axe. If you don’t need a lot of accuracy, don’t bother modelling effects that will only make marginal differences in the results.
For example, imagine that you have been tasked with determining the lift generated by an aircraft’s wing at cruise conditions. If you are in the conceptual design stage and only need an approximate answer, you may not need numerical simulation at all. Perhaps looking up the airfoil’s cross-section in a database and using the coefficient of lift will be sufficient for your purposes. On the other hand, if you need precise results for a wing with an exotic shape, you may have no choice but to use numerical simulation.
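The back-of-the-envelope route described above comes down to the standard lift equation, L = ½ρV²SC_L. A quick sketch, where every number is made up for illustration rather than taken from a real aircraft:

```python
def lift_force(rho, velocity, area, c_lift):
    """Lift from the standard equation L = 0.5 * rho * V^2 * S * C_L."""
    return 0.5 * rho * velocity**2 * area * c_lift

# Illustrative cruise-condition values (hypothetical, not a real design):
rho = 0.38    # air density at roughly 11 km altitude, kg/m^3
v = 230.0     # cruise speed, m/s
s = 120.0     # wing planform area, m^2
cl = 0.5      # lift coefficient looked up from an airfoil database
print(f"Estimated lift: {lift_force(rho, v, s, cl) / 1000:.0f} kN")
```

A one-line estimate like this also doubles as the sanity check discussed earlier: if the full simulation later disagrees by an order of magnitude, something is wrong somewhere.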
Be aware that a computationally expensive simulation does not necessarily lead to accurate results. Shrinking the mesh cells and timestep size to microscopic levels in your computational fluid dynamics analysis may give you results that are precise, but not necessarily ones that are accurate. This is because the maximum possible accuracy of your simulation is limited by the simplifying assumptions you have made.
To return to the airfoil example, there’s no point resolving the micron-scale phenomena of the wing’s boundary layer if the properties of the air flowing over it are only known to ballpark accuracy, unless you are conducting a software technology demonstration or theoretical research. This returns to my earlier point about knowing the context of your work.
Work iteratively
“Design is an iterative process. The necessary number of iterations is one more than the number you have currently done. This is true at any point in time.”
— Akin’s Third Law of Spacecraft Design
Trying to model all of the important effects on the first go usually leads to an analysis fraught with software issues and confusion over the results. If you run into a problem with all the bells and whistles activated, it’s difficult to pinpoint where the problem is.
A better approach is to start with the simplest analysis possible. A good first step is to begin with hand calculations, then move on to a very simple numerical analysis. If you are modelling the structural response of a circuit board to vibrational loading, it may be prudent to start by modelling only the board itself and ignoring the effects of the components. Or if you are calculating the stress in a structure made of strange, nonlinear materials, start with a quick-and-dirty linear material model.
Your results are unlikely to be good, but if you’ve set up the simulation right, they will be in the ballpark. Then, build up the sophistication of the analysis by one step and rerun it. Repeat until you reach the level of fidelity you need.
The benefits are fourfold:
- You get initial results quickly — even if they are of terrible accuracy.
- Those quick initial results give you an idea of what the correct answer is and where to go from here.
- If the simulation starts giving strange results or crashes after you increased its sophistication, you know that it was the last change you made that caused the issue.
- As you compare the results between levels of sophistication, you gain an understanding of how much influence each variable has on the system.
Working iteratively may seem like a time-consuming process, but it will end up saving time and frustration in the long run, especially with unfamiliar software. As the US Navy SEALs say, “Slow is smooth and smooth is fast.”
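Part of this loop can even be automated for parameters like mesh density. Here is a toy sketch of the idea, using a one-dimensional integral as a stand-in for a full simulation (the tolerance and starting resolution are arbitrary choices of mine): rerun at double the resolution until two successive results agree, which is Akin’s “one more iteration” in code form.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals -- a stand-in for a
    full simulation run at a given resolution."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def refine_until_converged(f, a, b, rel_tol=1e-4, max_doublings=20):
    """Rerun at doubled resolution until two successive results agree
    to within rel_tol."""
    n = 4
    prev = trapezoid(f, a, b, n)
    for _ in range(max_doublings):
        n *= 2
        curr = trapezoid(f, a, b, n)
        if abs(curr - prev) <= rel_tol * abs(curr):
            return curr, n
        prev = curr
    raise RuntimeError("did not converge -- revisit the model, not just the mesh")

value, n = refine_until_converged(math.sin, 0.0, math.pi)
print(f"integral of sin on [0, pi] is about {value:.5f} using {n} subintervals (exact: 2)")
```

Note the caveat from the previous section still applies: this loop only measures numerical convergence, not whether the simplifying assumptions were right in the first place.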
Communication is key
“A bad design with a good presentation is doomed eventually. A good design with a bad presentation is doomed immediately.”
— Akin’s Twentieth Law of Spacecraft Design
Finally, present the results in a way that’s easy to understand and that isn’t misleading. Document those results, too. Make neat, organized graphs and select colormaps judiciously. If you are making a comparison between two result sets, use the same point of view, use the same colormaps, use the same scale, and align the axes. Make differences and similarities obvious. Making good plots is something that improves with practice and discipline.
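One concrete habit that helps: compute the color scale once across every result set in a comparison, then apply it to every plot, so the same value always maps to the same color. A small sketch of the idea (the helper is my own, and the plotting-library calls are left out):

```python
def shared_color_limits(*fields):
    """Return one (vmin, vmax) pair covering every result field, so that
    the same value maps to the same color in every comparison plot."""
    vmin = min(min(f) for f in fields)
    vmax = max(max(f) for f in fields)
    return vmin, vmax

# Two hypothetical temperature fields being compared side by side:
baseline = [290.0, 305.5, 331.0]
modified = [291.2, 312.8, 348.4]
vmin, vmax = shared_color_limits(baseline, modified)
# Pass the same vmin/vmax to both plots so the colormaps line up.
```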
Closing thoughts
My mentor at the University of Waterloo taught me that numerical simulation is sometimes more of an art than a science. Experience is valuable, and early in your career you are unlikely to have an abundance of it, so go and seek out expert opinion. More broadly, that is good advice for anything you do. Don’t take your results at face value, and never stop learning.
And please, label your axes!
Further reading
I quote Akin’s Laws of Spacecraft Design and The Unwritten Laws of Systems Engineering extensively. You can find them here:
Disclaimer: Opinions are solely my own. I am no longer affiliated with ANSYS or any other engineering simulation company.