About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Development of automated optimum structural design systems

Corey, Marion Willson 05 1900 (has links)
No description available.
252

Nonmonotone methods in optimization and DC optimization of location problems

Zhou, Fangjun 12 1900 (has links)
No description available.
253

Optimal control problems on an infinite time horizon

Achmatowicz, Richard L. (Richard Leon) January 1985 (has links)
No description available.
254

A new approach to quasi-Newton methods for minimization

Saadallah, A. F. January 1987 (has links)
No description available.
255

On the use of function values to improve quasi-Newton methods

Ghandhari, R. A. January 1989 (has links)
No description available.
256

Aerodynamic design optimization of a non-lifting strut

Veenendaal, Justin 03 September 2014 (has links)
A design engineer seeks the configuration that produces the best possible result, and this is especially true in designs involving aerodynamics. This thesis presents a way to design the optimum airfoil for a non-lifting, strut-like application. The governing laws of aerodynamics are combined with appropriate numerical models to simulate a given steady flow regime. A robust yet simple parameterization method represents candidate airfoils, and a genetic algorithm carries out the optimization in a timely manner. Performing the optimization across a range of flow fields and strut applications also allows trends to be deduced, providing valuable knowledge to design engineers.
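A minimal sketch of the optimization loop this abstract describes, assuming a toy three-parameter airfoil representation and a placeholder drag evaluator in place of the thesis's aerodynamic solver; the parameter names, bounds, and drag surrogate below are all hypothetical:

```python
import random

# Hypothetical airfoil parameterization: (thickness ratio, camber, leading-edge radius).
BOUNDS = [(0.06, 0.20), (0.00, 0.04), (0.005, 0.040)]

def drag(params):
    """Placeholder for the aerodynamic model; the thesis couples the governing
    flow equations with numerical models. This toy surrogate merely prefers a
    thin, symmetric section, as a non-lifting strut would."""
    t, c, r = params
    return (t - 0.08) ** 2 + 4.0 * c ** 2 + (r - 0.01) ** 2

def random_airfoil():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(p, rate=0.1):
    # Gaussian perturbation, clamped to the parameter bounds.
    return [min(hi, max(lo, x + random.gauss(0, rate * (hi - lo))))
            for x, (lo, hi) in zip(p, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def optimize(pop_size=30, generations=50):
    pop = [random_airfoil() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=drag)                 # lower drag is fitter
        parents = pop[: pop_size // 2]     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=drag)

print(optimize())
```

Swapping `drag` for a call into a flow solver, and the three toy parameters for the thesis's parameterization, recovers the structure the abstract describes.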
257

Per-exemplar analysis with MFoM fusion learning for multimedia retrieval and recounting

Kim, Ilseo 27 August 2014 (has links)
As large volumes of digital video data become available, along with revolutionary advances in multimedia technologies, demand for efficiently retrieving and recounting multimedia data has grown. However, the inherent complexity of representing and recognizing multimedia data, especially large-scale and unconstrained consumer videos, poses significant challenges. The following challenges are the major concerns of the proposed research.

One challenge is that consumer-video data (e.g., videos on YouTube) are mostly unstructured; evidence for a targeted semantic category is therefore often sparsely located across time. To address this issue, a segmental multi-way local feature pooling method using scene concept analysis is proposed. The method uses scene concepts that are pre-constructed by clustering video segments into categories in an unsupervised manner. A video is then represented with multiple feature descriptors with respect to scene concepts. Finally, multiple kernels are constructed from the feature descriptors and combined into a final kernel that improves the discriminative power for multimedia event detection.

Another challenge is that most semantic categories used for multimedia retrieval exhibit within-class diversity that can be dramatic, raising the question of whether conventional approaches remain successful and scalable. To handle such variability and further improve recounting capabilities, a per-exemplar learning scheme is proposed, with a focus on fusing multiple types of heterogeneous features for video retrieval. Whereas the conventional approach to multimedia retrieval learns a single classifier per category, the proposed scheme learns multiple detection models, one for each training exemplar. A local distance function is defined as a linear combination of element distances measured by each feature, and its weight vector is learned discriminatively by taking only samples neighboring the exemplar as training samples. In this way, the retrieval problem is redefined as an association problem: test samples are retrieved by association-based rules.

In addition, the quality of a multimedia-retrieval system is often evaluated by domain-specific performance metrics that serve sophisticated user needs. To address such criteria, novel algorithms are proposed within MFoM learning that explicitly optimize two challenging metrics: average precision (AP) and a weighted sum of the probabilities of false alarms and missed detections at a target error ratio. Most conventional learning schemes optimize their own learning criteria rather than domain-specific performance measures. To address this discrepancy, the proposed scheme approximates the given performance measure, which is discrete and therefore hard to optimize with conventional schemes, by a continuous and differentiable loss function that can be optimized directly; a GPD algorithm is then applied to optimize this loss function.
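A rough sketch of the per-exemplar local distance function, assuming NumPy and substituting ordinary logistic regression for the discriminative learning the thesis actually uses; the feature-block splits, neighbor construction, and labels below are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def element_distances(x, exemplar, splits):
    """Per-feature-type distances between a sample and an exemplar, where
    `splits` partitions the vector into heterogeneous feature blocks."""
    return np.array([np.linalg.norm(xb - eb)
                     for xb, eb in zip(np.split(x, splits), np.split(exemplar, splits))])

def learn_local_weights(exemplar, neighbors, same_category, splits):
    """Learn the weight vector of one exemplar's local distance function from
    its neighboring samples only. Labels are 1 for a different category, so a
    larger weighted distance should predict a mismatch."""
    D = np.array([element_distances(n, exemplar, splits) for n in neighbors])
    clf = LogisticRegression().fit(D, 1 - same_category)
    return np.maximum(clf.coef_.ravel(), 0.0)  # keep a valid (non-negative) distance

# Illustrative usage: two feature blocks of size 4 each (split index [4]).
rng = np.random.default_rng(0)
exemplar = rng.normal(size=8)
neighbors = exemplar + np.vstack([rng.normal(scale=0.1, size=(20, 8)),   # same category
                                  rng.normal(scale=1.0, size=(20, 8))])  # different
same = np.array([1] * 20 + [0] * 20)
w = learn_local_weights(exemplar, neighbors, same, splits=[4])
score = w @ element_distances(rng.normal(size=8), exemplar, splits=[4])
print(w, score)  # retrieve test samples with the smallest weighted distance
```

In the association-based view, each test sample is scored against every exemplar's local distance function and retrieved via the exemplars it associates with most strongly.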
258

Optimal Portfolio Rule: When There is Uncertainty in The Parameter Estimates

Jin, Hyunjong 28 February 2012 (has links)
The classical mean-variance model, proposed by Harry Markowitz in 1952, has been one of the most powerful tools in portfolio optimization. In this model, parameters are estimated by their sample counterparts, which introduces estimation risk that the model completely ignores. In addition, the mean-variance model fails to incorporate behavioral aspects of investment decisions. To remedy these problems, several papers have addressed the notion of ambiguity aversion, in which investors acknowledge uncertainty in the estimation of mean returns. We extend the idea to the variances and the correlation coefficient of the portfolio and study their impact. The performance of the portfolio is measured by its Sharpe ratio. We consider cases in which one parameter is assumed to be perfectly estimated by its sample counterpart while the others introduce ambiguity, and vice versa, and investigate the impact of each parameter on portfolio performance.
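As a rough illustration of the kind of sensitivity study described, the sketch below evaluates a fixed two-asset portfolio's Sharpe ratio over a small ambiguity set around the sample estimates; all numbers are hypothetical, and the grid search stands in for the thesis's formal treatment:

```python
import itertools
import numpy as np

def sharpe(w, mu, sigma, rho, rf=0.02):
    """Sharpe ratio (w'mu - rf) / sqrt(w' Sigma w) for two assets."""
    cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
    return (w @ mu - rf) / np.sqrt(w @ cov @ w)

w = np.array([0.6, 0.4])          # fixed portfolio weights (hypothetical)
mu_hat = np.array([0.08, 0.05])   # sample mean returns (hypothetical)
sig_hat = np.array([0.20, 0.10])  # sample volatilities (hypothetical)
rho_hat = 0.30                    # sample correlation (hypothetical)

# Ambiguity sets: each parameter may deviate from its sample estimate.
d_mu, d_sig, d_rho = [-0.01, 0.0, 0.01], [-0.02, 0.0, 0.02], [-0.1, 0.0, 0.1]
worst = min(
    sharpe(w, mu_hat + np.array([a, b]), sig_hat + np.array([c, c]), rho_hat + d)
    for a, b, c, d in itertools.product(d_mu, d_mu, d_sig, d_rho)
)
print(f"nominal Sharpe {sharpe(w, mu_hat, sig_hat, rho_hat):.3f}, worst case {worst:.3f}")
```

Pinning one parameter at its sample estimate while perturbing the others (and vice versa) isolates that parameter's contribution, mirroring the comparison the abstract describes.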
259

Optimization Method for Inventory and Supply Chain Management

Manilachelvan, Poonkuzhali January 2013 (has links)
This thesis presents an optimization model that helps retailers reduce product costs by taking advantage of parts commonality in manufacturing and production when selling similar units under demand uncertainty. Component commonality can often be found in assemble-to-order systems, a foremost concept used by prominent manufacturing companies in the global market. The method developed uses a genetic algorithm (GA) to solve real-world optimization problems that involve integer quantities for parts and finished items as well as uncertain information. Numerical examples are solved using generated stochastic scenarios to show the impact of uncertainty on solutions. This impact is verified using two important criteria, the Expected Value of Perfect Information (EVPI) and the Value of the Stochastic Solution (VSS). The solutions obtained present significant monetary benefits for the manufacturer, illustrating the importance of the model presented here for retailers.
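The two criteria mentioned have standard definitions in stochastic programming: for a minimization, EVPI = RP − WS and VSS = EEV − RP, where WS is the expected cost under perfect information, RP the optimal expected cost of the stochastic (here-and-now) solution, and EEV the expected cost of using the mean-scenario solution. A toy newsvendor stand-in for the thesis's assemble-to-order GA model (all numbers hypothetical) makes the bookkeeping concrete:

```python
UNIT_COST, PRICE = 4.0, 10.0

def cost(q, demand):
    """Negative profit of ordering q units, so smaller is better."""
    return UNIT_COST * q - PRICE * min(q, demand)

def solve(demands, probs):
    """Best integer order quantity against the given demand scenarios
    (brute force here; the thesis uses a genetic algorithm instead)."""
    return min(range(int(max(demands)) + 1),
               key=lambda q: sum(p * cost(q, d) for d, p in zip(demands, probs)))

def evpi_and_vss(demands, probs):
    expected = lambda q: sum(p * cost(q, d) for d, p in zip(demands, probs))
    ws = sum(p * cost(solve([d], [1.0]), d) for d, p in zip(demands, probs))
    rp = expected(solve(demands, probs))
    mean_demand = sum(p * d for d, p in zip(demands, probs))
    eev = expected(solve([mean_demand], [1.0]))
    return rp - ws, eev - rp   # EVPI, VSS; both are non-negative

print(evpi_and_vss(demands=[50, 100, 200], probs=[0.5, 0.3, 0.2]))
```

A large EVPI says better forecasts would pay off; a large VSS says solving the stochastic model (rather than planning for the average scenario) pays off.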
260

Designing Better Allocation Policies for Influenza Vaccine

Demirbilek, Mustafa January 2013 (has links)
Influenza has been one of the most infectious diseases for roughly 2,400 years. The most effective way to prevent influenza outbreaks and eliminate their seasonal effects is vaccination, so the distribution of influenza vaccine to various groups in the population becomes an important decision determining the effectiveness of vaccination for the entire population. We developed a simulation model using the Epifire C++ application program [2] to simulate influenza transmission under a given vaccination strategy. Our model can generate a network configurable with different degree distributions, transmission rates, numbers of nodes and edges, and infection periods, and performs chain-binomial simulation of SIR (Susceptible-Infectious-Recovered) disease transmission. Furthermore, we integrated NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) to optimize vaccine allocation across age groups. We calibrate our model to age-specific attack rates from the 1918 pandemic. In our simulation model, we evaluate three vaccine policies over 36 scenarios with 1000 individuals: the policy of the Advisory Committee on Immunization Practices (ACIP), the former recommendations of the Centers for Disease Control and Prevention (CDC), and the new suggestions of the CDC. We record the number of infected people at the end of each run and calculate the corresponding cost and years of life lost. We observe that optimized vaccine distribution results in fewer infected people and fewer years of life lost than the aforementioned policies in almost all cases. On the other hand, total costs for the policies are close to one another; in some cases the former CDC policy yields a slightly lower cost than the other policies and our proposed allocation.
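A stripped-down sketch of chain-binomial SIR transmission on a contact network, standing in for the Epifire-based model; the random network, parameter values, and all-or-nothing vaccine below are simplifying assumptions, and the returned attack count is the quantity an outer optimizer such as NOMAD would minimize over candidate allocations:

```python
import random

def chain_binomial_sir(n, edges, p_transmit, infectious_days, seeds, vaccinated):
    """One stochastic outbreak; returns the number of people ever infected."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    immune = set(vaccinated)  # assume a perfectly protective vaccine
    infectious = {s: infectious_days for s in seeds if s not in immune}
    ever_infected = set(infectious)
    while infectious:
        new_cases = set()
        for i in list(infectious):
            for j in adj[i]:
                # Each infectious contact is an independent Bernoulli
                # trial: the chain-binomial transmission step.
                if (j not in ever_infected and j not in immune
                        and random.random() < p_transmit):
                    new_cases.add(j)
            infectious[i] -= 1
            if infectious[i] == 0:
                del infectious[i]  # i recovers
        for j in new_cases:
            infectious[j] = infectious_days
        ever_infected |= new_cases
    return len(ever_infected)

# Illustrative run: a random 1000-node network, echoing the abstract's setup.
random.seed(1)
n = 1000
edges = [(random.randrange(n), random.randrange(n)) for _ in range(4000)]
vaccinated = random.sample(range(n), 200)  # a candidate allocation to evaluate
print(chain_binomial_sir(n, edges, 0.05, 5, seeds=[0, 1, 2], vaccinated=vaccinated))
```

Averaging this count over many runs for a given allocation yields the noisy objective that a mesh-adaptive direct search is well suited to minimize.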
