=== Recent MDO methods ===
MDO practitioners have investigated [[Optimization (mathematics)|optimization]] methods in several broad areas over the last dozen years. These include decomposition methods, [[approximation]] methods, [[evolutionary algorithm]]s, [[memetic algorithm]]s, [[response surface methodology]], reliability-based optimization, and [[multi-objective optimization]] approaches.

The exploration of decomposition methods has continued during this period with the development and comparison of a number of approaches, classified variously as hierarchic and non-hierarchic, or collaborative and non-collaborative.

Approximation methods span a diverse set of approaches, including approximations based on [[surrogate model]]s (often referred to as metamodels), variable-fidelity models, and trust-region management strategies. The development of multipoint approximations has blurred the distinction with response surface methods. Among the most popular methods are [[Kriging]] and the [[moving least squares]] method.

[[Response surface methodology]], developed extensively by the statistical community, has received much attention in the MDO community. A driving force for its use has been the development of massively parallel systems for high-performance computing, which are naturally suited to distributing the function evaluations from multiple disciplines required to construct response surfaces. Distributed processing is particularly suited to the design of complex systems, in which the analysis of different disciplines may be accomplished naturally on different computing platforms and even by different teams.

Evolutionary methods led the way in the exploration of non-gradient methods for MDO applications. They too have benefited from the availability of massively parallel high-performance computers, since they inherently require many more function evaluations than gradient-based methods. Their primary benefit lies in their ability to handle discrete design variables and their potential to find globally optimal solutions.

Reliability-based optimization (RBO) is a growing area of interest in MDO. Like response surface methods and evolutionary algorithms, RBO benefits from parallel computation, because the numerical integration needed to calculate the probability of failure requires many function evaluations. One of the first approaches employed approximation concepts to integrate the probability of failure. The classical first-order reliability method (FORM) and second-order reliability method (SORM) remain popular. Ramana Grandhi used appropriately normalized variables about the most probable point of failure, found by a two-point adaptive nonlinear approximation, to improve accuracy and efficiency. [[Southwest Research Institute]] has figured prominently in the development of RBO, implementing state-of-the-art reliability methods in commercial software. RBO has reached sufficient maturity to appear in commercial structural analysis programs such as Altair's [[OptiStruct]] and MSC's [[Nastran]].
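The core computation in RBO can be illustrated by a brute-force Monte Carlo estimate of the probability of failure. The following sketch is illustrative only, not drawn from the sources above: the limit-state function and the input distributions are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

# A minimal sketch (hypothetical example, not from any MDO code): Monte Carlo
# estimation of the probability of failure, the quantity that reliability-based
# optimization must evaluate repeatedly inside the design loop.

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit state: failure occurs where g(x) < 0."""
    load, strength = x[..., 0], x[..., 1]
    return strength - load

n = 100_000  # many independent evaluations, trivially parallelizable
samples = np.stack([
    rng.normal(5.0, 1.0, n),   # assumed load distribution
    rng.normal(8.0, 1.5, n),   # assumed strength distribution
], axis=-1)

# Fraction of samples in the failure region g(x) < 0.
p_failure = np.mean(g(samples) < 0.0)
print(f"estimated probability of failure: {p_failure:.4f}")
</syntaxhighlight>

Because each sample is an independent function evaluation, such estimates distribute naturally over parallel hardware; when evaluations are expensive, methods such as FORM, SORM, or surrogate models are used instead of raw sampling.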
Utility-based probability maximization was developed in response to some logical concerns (e.g., Blau's Dilemma) with reliability-based design optimization.<ref>{{cite journal |last1=Bordley |first1=Robert F. |last2=Pollock |first2=Steven M. |date=September 2009 |title=A Decision Analytic Approach to Reliability-Based Design Optimization |journal=Operations Research |volume=57 |number=5 |pages=1262–1270 |doi=10.1287/opre.1080.0661 }}</ref> This approach focuses on maximizing the joint probability that the objective function exceeds some value and that all the constraints are satisfied. When there is no objective function, utility-based probability maximization reduces to a probability-maximization problem. When there are no uncertainties in the constraints, it reduces to a constrained utility-maximization problem. (This second equivalence arises because the utility of a function can always be written as the probability of that function exceeding some random variable.) Because it converts the constrained optimization problem associated with reliability-based optimization into an unconstrained optimization problem, it often leads to computationally more tractable problem formulations.

In the marketing field there is a large literature on the optimal design of multiattribute products and services, based on experimental analysis to estimate models of consumers' utility functions. These methods are known as [[Conjoint analysis|conjoint analysis]]. Respondents are presented with alternative products, their preferences among the alternatives are measured using a variety of scales, and the utility function is estimated with different methods (ranging from regression and response surface methods to choice models). The best design is formulated after estimating the model. The experimental design is usually optimized to minimize the variance of the estimators. These methods are widely used in practice.
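The regression step of a ratings-based conjoint study can be sketched as follows. The attribute coding and the respondent ratings are made-up illustrative data, not taken from any study cited above.

<syntaxhighlight lang="python">
import numpy as np

# A minimal sketch (hypothetical data): estimating part-worth utilities in a
# ratings-based conjoint analysis by ordinary least squares.

# Dummy-coded product profiles: columns are [price_low, price_mid, brand_A];
# the omitted levels (price_high, brand_B) serve as the baseline.
X = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
], dtype=float)
X = np.column_stack([np.ones(len(X)), X])  # add intercept column

# Hypothetical ratings given by one respondent to the six profiles.
y = np.array([9.0, 7.5, 6.5, 5.0, 4.0, 2.5])

# Part-worth utilities: least-squares fit of ratings on the attribute dummies.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and part-worth estimates:", beta)
</syntaxhighlight>

Choice-based variants replace this ratings regression with discrete choice models, and the set of profiles shown to respondents is itself chosen to minimize the variance of the estimated part-worths.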