Title: Study of an Initialization Method And Stopping Criteria for Nonlinear Optimization
Authors: Rhinehart, R. Russell; Ghoshal, Prithwijit
Date: 2008-07-01 (issued); 2014-04-16 (added to repository)
URI: http://hdl.handle.net/20.500.14446/9631
Format: application/pdf
Type: text
Rights: Copyright is held by the author, who has granted the Oklahoma State University Library the non-exclusive right to share this material in its institutional repository. Contact Digital Library Services at lib-dls@okstate.edu or 405-744-9161 for the permission policy on the use, reproduction, or distribution of this material.

Abstract: An initialization method previously used in neural network training is combined with a novel steady-state stopping criterion and used in the empirical modeling optimization of various applications and modeling objectives. The effectiveness of both the initialization method and the stopping criterion is tested using direct and indirect optimization algorithms, on contrived data, and on experimental data from a two-phase flow system. The study attempts to create a global optimization logic for nonlinear modeling. The global optimization logic developed for empirical modeling optimization is scale independent, requires no a priori knowledge to stop a trial, and is found to be robust when handling noisy data. The logic defines the number of random starts needed to find a user-defined "best" percentage of possible model solutions from process data with a desired confidence. The logic can use a variety of nonlinear local optimizers and can be incorporated into future optimization software as a modeling tool.
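
Note: The abstract states that the logic determines the number of random starts needed to find, with a desired confidence, a solution within a user-defined "best" fraction of possible model solutions. The abstract does not give the exact rule, but a common multistart argument (assumed here for illustration only) is: if each independent random start lands in the best fraction f of solutions with probability f, then n starts succeed at least once with probability 1 - (1 - f)^n, so n >= ln(1 - c) / ln(1 - f) for confidence c. The sketch below is a minimal illustration of that assumed rule; the function name and interface are hypothetical, not taken from the thesis.

    import math

    def required_random_starts(best_fraction: float, confidence: float) -> int:
        """Illustrative multistart count (assumed rule, not taken from the thesis).

        Assumes each independent random start falls in the best `best_fraction`
        of model solutions with probability `best_fraction`, so at least one of
        n starts does so with probability 1 - (1 - best_fraction)**n.
        Solving 1 - (1 - f)**n >= confidence for n gives the bound below.
        """
        if not (0.0 < best_fraction < 1.0 and 0.0 < confidence < 1.0):
            raise ValueError("best_fraction and confidence must lie in (0, 1)")
        n = math.log(1.0 - confidence) / math.log(1.0 - best_fraction)
        return math.ceil(n)

    # Example: to find one of the best 5% of solutions with 99% confidence,
    # roughly 90 independent random starts suffice under this assumption.
    print(required_random_starts(best_fraction=0.05, confidence=0.99))  # -> 90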