431

Machinery Health Indicator Construction using Multi-objective Genetic Algorithm Optimization of a Feed-forward Neural Network based on Distance

Nyman, Jacob January 2021 (has links)
Assessment of machine health and prediction of future failures are critical for maintenance decisions. Many existing methods use unsupervised techniques to construct health indicators by measuring the disparity between the current state and either the healthy or the faulty states of the system. This approach can work well, but if the resulting health indicators are insufficient there is no easy way to steer the algorithm towards better ones. In this thesis a new method for health indicator construction is investigated that aims to solve this issue. It is based on measuring distance after transforming the sensor data into a new space using a feed-forward neural network. The feed-forward neural network is trained using a multi-objective optimization algorithm, NSGA-II, to optimize criteria that are desired in a health indicator. The constructed health indicator is then passed into a gated recurrent unit for remaining useful life prediction. The approach is compared to benchmarks on the NASA Turbofan Engine Degradation Simulation dataset; relative to the size of the neural networks used, the model performs well, but does not outperform the results reported by some of the more recent methods. The method is also investigated on a simulated dataset based on elevator weights with two independent failure modes. The method is able to construct a single health indicator with a desirable shape for both failures, although later estimates of time until failure are overestimated for the rarer failure type. On both datasets the health indicator construction method is compared with a baseline without a transformation function, and in both cases it yields a lower remaining useful life prediction error with the gated recurrent unit.
Overall, the method is shown to be flexible in generating health indicators with different characteristics, and because of its properties it is adaptable to different remaining useful life prediction methods.
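The abstract does not spell out which criteria NSGA-II optimizes; monotonicity and trendability are the usual choices in the health-indicator literature, so here is a minimal sketch under that assumption (the metric definitions below are illustrative, not taken from the thesis):

```python
def monotonicity(hi):
    # Fraction-based monotonicity in [0, 1]: 1.0 means the health indicator
    # moves in one direction only, which eases remaining-useful-life fitting.
    diffs = [b - a for a, b in zip(hi, hi[1:])]
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    return abs(pos - neg) / len(diffs)

def trendability(hi):
    # Absolute Pearson correlation between the indicator and time.
    n = len(hi)
    t = list(range(n))
    mt, mh = sum(t) / n, sum(hi) / n
    cov = sum((a - mt) * (b - mh) for a, b in zip(t, hi))
    sd_t = sum((a - mt) ** 2 for a in t) ** 0.5
    sd_h = sum((b - mh) ** 2 for b in hi) ** 0.5
    return abs(cov / (sd_t * sd_h))

smooth = [0.0, 0.1, 0.25, 0.45, 0.7, 1.0]   # clean degradation trajectory
noisy = [0.0, 0.2, 0.1, 0.4, 0.3, 1.0]      # same trend, with reversals
print(monotonicity(smooth), monotonicity(noisy))  # → 1.0 0.2
```

A multi-objective optimizer would push candidate networks toward transformed signals scoring high on both metrics at once, which is exactly where a Pareto-based method such as NSGA-II fits.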
432

Multi-Quality Auto-Tuning by Contract Negotiation

Götz, Sebastian 17 July 2013 (has links)
A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant should be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of the self-adaptive systems research community. The basic principle is a control loop, as known from control theory: the system (and environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple decision criteria, and scalability. In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while incurring the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system and is capable of covering the runtime state of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system.
At runtime the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms: Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of each approach: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions the MOILP approach is shown to be infeasible.
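The core decision problem MQuAT generates can be pictured as a small 0/1 program: pick one implementation variant per component type so that resource demand stays within budget and utility is maximal. The variants, utilities, and budget below are hypothetical; a real run would hand this to an ILP solver, while this sketch simply enumerates:

```python
from itertools import product

# Hypothetical variants: (name, utility, cpu demand) per component type.
implementations = {
    "encoder": [("fast", 9, 7), ("frugal", 4, 2)],
    "storage": [("ram", 6, 5), ("disk", 3, 1)],
}
cpu_budget = 9

best = None
for choice in product(*implementations.values()):
    cpu = sum(variant[2] for variant in choice)
    utility = sum(variant[1] for variant in choice)
    # Keep the feasible assignment with the highest total utility.
    if cpu <= cpu_budget and (best is None or utility > best[0]):
        best = (utility, [variant[0] for variant in choice])

print(best)  # → (12, ['fast', 'disk'])
```

If the winning assignment differs from the currently running configuration, that difference is exactly the reconfiguration decision the thesis describes.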
433

MULTI-FIDELITY MODELING AND MULTI-OBJECTIVE BAYESIAN OPTIMIZATION SUPPORTED BY COMPOSITIONS OF GAUSSIAN PROCESSES

Homero Santiago Valladares Guerra (15383687) 01 May 2023 (has links)
Practical design problems in engineering and science involve the evaluation of expensive black-box functions, the optimization of multiple, often conflicting, targets, and the integration of data generated by multiple sources of information, e.g., numerical models with different levels of fidelity. If not properly handled, the complexity of these design problems can lead to lengthy and costly development cycles. In recent years, Bayesian optimization has emerged as a powerful alternative for optimization problems that involve the evaluation of expensive black-box functions. Bayesian optimization has two main components: a probabilistic surrogate model of the black-box function and an acquisition function that drives the optimization. Its ability to find high-performance designs within a limited number of function evaluations has attracted the attention of many fields, including the engineering design community. The practical relevance of strategies able to fuse information from different sources, and the need to optimize multiple targets, have motivated the development of multi-fidelity modeling techniques and multi-objective Bayesian optimization methods. A key component in the vast majority of these methods is the Gaussian process (GP), due to its flexibility and mathematical properties.

The objective of this dissertation is to develop new approaches in the areas of multi-fidelity modeling and multi-objective Bayesian optimization. To achieve this goal, this study explores the use of linear and non-linear compositions of GPs to build probabilistic models for Bayesian optimization. Additionally, motivated by the rationale behind well-established multi-objective methods, this study presents a novel acquisition function to solve multi-objective optimization problems in a Bayesian framework. This dissertation presents four contributions.
First, the auto-regressive model, one of the most prominent multi-fidelity models in engineering design, is extended to include informative mean functions that capture prior knowledge about the global trend of the sources. This additional information enhances the predictive capabilities of the surrogate. Second, the non-linear auto-regressive Gaussian process (NARGP) model, a non-linear multi-fidelity model, is integrated into a multi-objective Bayesian optimization framework. The NARGP model offers the possibility to leverage sources that present non-linear cross-correlations to enhance the performance of the optimization process. Third, GP classifiers, which employ non-linear compositions of GPs, are combined with conditional probabilities to solve multi-objective problems. Finally, a new multi-objective acquisition function is presented. This function employs two terms: a distance-based metric, the expected Pareto distance change, that captures the optimality of a given design, and a diversity index that prevents the evaluation of non-informative designs. The proposed acquisition function generates informative landscapes that produce Pareto front approximations that are both broad and diverse.
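The "expected Pareto distance change" in the fourth contribution builds on two elementary operations: extracting the non-dominated set from the observed objective vectors, and measuring how far a candidate lies from that set. The thesis's acquisition function is more elaborate; this sketch only shows those two building blocks for a two-objective minimization problem:

```python
import math

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def pareto_distance(candidate, front):
    # Euclidean distance from a candidate objective vector to the current
    # front; a crude stand-in for how much a new evaluation could move it.
    return min(math.dist(candidate, f) for f in front)

observed = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
front = pareto_front(observed)
print(front)  # → [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

In a Bayesian setting the candidate's objectives are GP predictions rather than known values, so the distance is taken in expectation over the posterior, which is where the "expected" in the metric's name comes from.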
434

Improving Airline Schedule Reliability Using A Strategic Multi-objective Runway Slot Assignment Search Heuristic

Hafner, Florian 01 January 2008 (has links)
Improving the predictability of airline schedules in the National Airspace System (NAS) has been a constant endeavor, particularly as system delays grow with ever-increasing demand. Airline schedules need to be resistant to perturbations in the system, including Ground Delay Programs (GDPs) and inclement weather. The strategic search heuristic proposed in this dissertation significantly improves airline schedule reliability by assigning airport departure and arrival slots to each flight in the schedule across a network of airports. This is performed using a multi-objective optimization approach that is primarily based on historical flight and taxi times but also includes certain airline, airport, and FAA priorities. The intent of this algorithm is to produce a more reliable, robust schedule that operates in today's environment as well as tomorrow's 4-Dimensional Trajectory Controlled system as described in the FAA's Next Generation ATM system (NextGen). This novel airline schedule optimization approach is implemented using a multi-objective evolutionary algorithm which is capable of incorporating limited airport capacities. The core of the fitness function is an extensive database of historical operating times for flight and ground operations collected over a two-year period from ASDI and BTS data. Empirical distributions based on this data reflect the probability that flights encounter various flight and taxi times. The fitness function also adds the ability to define priorities for certain flights based on aircraft size, flight time, and airline usage. The algorithm is applied to airline schedules for two primary US airports: Chicago O'Hare and Atlanta Hartsfield-Jackson. The effects of this multi-objective schedule optimization are evaluated in a variety of scenarios including periods of high, medium, and low demand. The schedules generated by the optimization algorithm were evaluated using a simple queuing simulation model implemented in AnyLogic.
The scenarios were simulated in AnyLogic using two basic setups: (1) modes of flight and taxi times that reflect highly predictable 4-Dimensional Trajectory Control operations and (2) full distributions of flight and taxi times reflecting current-day operations. The simulation analysis showed significant improvements in reliability as measured by the mean square difference (MSD) of filed versus simulated flight arrival and departure times. Arrivals showed the most consistent improvements, of up to 80% in on-time performance (OTP). Departures showed smaller overall improvements, particularly when the optimization was performed without consideration of airport capacity. The 4-Dimensional Trajectory Control environment more than doubled the on-time performance of departures over the current-day, more chaotic scenarios. This research shows that airline schedule reliability can be significantly improved over a network of airports using historical flight and taxi time data. It also provides a mechanism to prioritize flights based on various airline, airport, and ATC goals. The algorithm is shown to work in today's environment as well as tomorrow's NextGen 4-Dimensional Trajectory Control setup.
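The two evaluation measures named above, MSD of filed versus simulated times and on-time performance, are simple to state concretely. The dissertation's exact definitions are not given in the abstract; this sketch assumes times in minutes and uses the common 15-minute OTP convention, both of which are assumptions:

```python
def mean_square_difference(filed, simulated):
    # MSD between filed and simulated times; lower means a more reliable schedule.
    return sum((s - f) ** 2 for f, s in zip(filed, simulated)) / len(filed)

def on_time_performance(filed, simulated, tolerance=15):
    # Share of flights within `tolerance` minutes of their filed time
    # (15 minutes is the usual DOT convention, assumed here).
    hits = sum(abs(s - f) <= tolerance for f, s in zip(filed, simulated))
    return hits / len(filed)

filed = [0, 0, 0]          # scheduled deviations, minutes
simulated = [10, 20, 5]    # simulated deviations, minutes
print(on_time_performance(filed, simulated))  # → 0.6666666666666666
```

An evolutionary slot-assignment search would score each candidate schedule by running it through the queuing simulation and computing metrics like these on the result.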
435

A Customer Value Assessment Process (CVAP) for Ballistic Missile Defense

Hernandez, Alex 01 June 2015 (has links) (PDF)
A systematic customer value assessment process (CVAP) was developed to give systems engineering teams the capability to qualitatively and quantitatively assess customer values. It also provides processes and techniques used to create and identify alternatives and to evaluate those alternatives in terms of effectiveness, cost, and risk. The ultimate goal is to provide customers (or decision makers) with objective and traceable procurement recommendations. The creation of CVAP was driven by an industry need to provide ballistic missile defense (BMD) customers with a value proposition for contractors' BMD systems. The information output by CVAP can be used to guide BMD contractors in formulating a value proposition, which is used to steer customers to procure their BMD system(s) instead of competing system(s). The outputs from CVAP also illuminate areas where systems can be improved to stay relevant to customer values by identifying capability gaps. CVAP incorporates proven approaches and techniques appropriate for military applications. However, CVAP is adaptable and may be applied to business, engineering, and even personal everyday decision problems and opportunities. CVAP is based on the systems decision process (SDP) developed by Gregory S. Parnell and other systems engineering faculty at the United States Military Academy (USMA). SDP combines the Value-Focused Thinking (VFT) decision analysis philosophy with Multi-Objective Decision Analysis (MODA) quantitative analysis of alternatives. CVAP improves SDP's qualitative value model by implementing Quality Function Deployment (QFD), its solution design phase by incorporating creative problem-solving techniques, and its quantitative value model by adding cost analysis and risk assessment processes practiced by the U.S. DoD and industry.
CVAP and SDP fundamentally differ from other decision-making approaches, like the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), by distinctly separating the value/utility function assessment process from the ranking of alternatives. This explicit value assessment allows for straightforward traceability of the specific factors that influence decisions, which illuminates the tradeoffs involved in making decisions with multiple objectives. CVAP is intended to be a decision support tool with the ultimate purpose of helping decision makers attain the best solution and understand the differences between the alternatives. CVAP does not include any processes for implementation of the alternative that the customer selects. CVAP is applied to ballistic missile defense (BMD) to give contractors ideas on how to use it. An introduction to BMD, unique BMD challenges, and how CVAP can improve the BMD decision-making process is presented. Each phase of CVAP is applied to the BMD decision environment, and CVAP is demonstrated on a fictitious BMD example.
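The MODA core that SDP (and hence CVAP) relies on is an additive value model: score each alternative on single-dimensional value scales, then combine the scores with swing weights. The alternatives, weights, and scores below are invented for illustration; note the cost and risk scores are assumed to be already converted to value scales where larger is better (cheaper, safer):

```python
# Hypothetical swing weights and single-dimensional value scores in [0, 1].
weights = {"effectiveness": 0.5, "cost": 0.3, "risk": 0.2}
alternatives = {
    "system_a": {"effectiveness": 0.9, "cost": 0.4, "risk": 0.6},
    "system_b": {"effectiveness": 0.6, "cost": 0.9, "risk": 0.8},
}

def total_value(scores, weights):
    # Weighted additive value: the standard MODA aggregation.
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(alternatives,
                key=lambda a: total_value(alternatives[a], weights),
                reverse=True)
print(ranked)  # → ['system_b', 'system_a']
```

Because the value functions and weights are assessed separately from the ranking step, a decision maker can trace exactly which factor (here, system_b's cost advantage) drives the recommendation, which is the traceability property the abstract emphasizes.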
436

Application of Artificial Intelligence to Wireless Communications

Rondeau, Thomas Warren 10 October 2007 (has links)
This dissertation provides the theory, design, and implementation of a cognitive engine, the enabling technology of cognitive radio. A cognitive radio is a wireless communications device capable of sensing the environment and making decisions on how to use the available radio resources to enable communications with a certain quality of service. The cognitive engine, the intelligent system behind the cognitive radio, combines sensing, learning, and optimization algorithms to control and adapt the radio system from the physical layer up through the communication stack. The cognitive engine presented here provides a general framework to build and test cognitive engine algorithms and components such as sensing technology, optimization routines, and learning algorithms. The cognitive engine platform allows easy development of new components and algorithms to enhance the cognitive radio's capabilities. It is shown in this dissertation that the platform can easily be used on a simulation system and then moved to a real radio system. The dissertation includes discussions of both the theory and the implementation of the cognitive engine. The need for and implementation of all of the cognitive components is strongly featured, as are the specific issues related to the development of algorithms for cognitive radio behavior. The discussion of the theory focuses largely on developing the optimization space to intelligently and successfully design waveforms for particular quality of service needs under given environmental conditions. The analysis develops the problem into a multi-objective optimization process to optimize and trade off services between objectives that measure performance, such as bit error rate, data rate, and power consumption. The discussion of the multi-objective optimization provides the foundation for the analysis of radio systems in this respect and, through this, methods and considerations for future developments.
The theoretical work also investigates the use of learning to enhance the cognitive engine's capabilities through feedback, learning, and knowledge representation. The results of this work include the analysis of cognitive radio design and implementation and a functional cognitive engine that is shown to work in both simulation and on-line experiments. Throughout, examples and explanations of building and interfacing cognitive components to the cognitive engine enable the use and extension of the cognitive engine for future work. / Ph. D.
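The waveform trade-off the abstract describes, bit error rate versus data rate versus power, can be folded into a single scalar fitness for a search over waveform parameters. The dissertation's actual objective functions are far richer; this is only a hypothetical sketch of the multi-objective shape of that search space, with a deliberately crude BER proxy:

```python
import math

def waveform_fitness(bits_per_symbol, tx_power_w, snr_db,
                     w_rate=0.4, w_ber=0.4, w_power=0.2):
    # Crude BER proxy: denser constellations need more SNR per symbol.
    snr_linear = 10 ** (snr_db / 10)
    ber_proxy = math.exp(-snr_linear / (2 ** bits_per_symbol))
    rate_term = bits_per_symbol / 8    # normalized to a hypothetical 256-QAM cap
    power_term = tx_power_w / 10       # normalized to a hypothetical 10 W budget
    # Reward throughput, penalize error proneness and power draw.
    return w_rate * rate_term - w_ber * ber_proxy - w_power * power_term
```

With more SNR headroom the same waveform scores better, and at equal SNR the lower-power option wins: exactly the kind of environment-dependent trade-off a cognitive engine searches over when picking a waveform for a given quality-of-service target.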
437

Portfolio management using computational intelligence approaches. Forecasting and Optimising the Stock Returns and Stock Volatilities with Fuzzy Logic, Neural Network and Evolutionary Algorithms.

Skolpadungket, Prisadarng January 2013 (has links)
Portfolio optimisation is subject to a number of constraints arising from practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints, and an exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error, caused by the inability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives subject to cardinality constraints, floor constraints, and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions that are evenly distributed along the efficient frontier, while MOGA is more time efficient. An Evolutionary Artificial Neural Network (EANN) is proposed, which automatically evolves the ANN's initial values and its structure of hidden nodes and layers. The EANN gives better performance in stock return forecasts than Ordinary Least Squares estimation and Back Propagation and Elman Recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, based on fuzzy logic-like rules, are proposed to select the best models for a given economic scenario. Their predictive performance is better than that of the competing forecasting models. MOGA and SPEA2 are modified to include a third objective to handle model risk and are evaluated and tested for their performance. The results show that they perform better than the versions without the third objective.
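The cardinality and round-lot constraints named above are exactly the ones that break closed-form solutions, and genetic algorithms typically handle them with a repair operator applied to each candidate. This is a hypothetical repair step, not the thesis's operator: keep the K largest holdings, re-normalize, and snap weights to a lot grid (a real GA would also redistribute the small rounding residual):

```python
def repair(weights, k_max=3, lot=0.05):
    # Enforce the cardinality constraint: keep the k_max largest positions.
    top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:k_max]
    total = sum(w for _, w in top)
    # Re-normalize survivors, then snap to the round-lot grid.
    return {asset: round((w / total) / lot) * lot for asset, w in top}

portfolio = {"AAA": 0.4, "BBB": 0.3, "CCC": 0.2, "DDD": 0.1}
print(repair(portfolio, k_max=2))  # → roughly {'AAA': 0.55, 'BBB': 0.45}
```

Repairing each offspring keeps the whole population feasible, so the Pareto ranking in MOGA or SPEA2 only ever compares portfolios that satisfy the practical constraints.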
438

Smart Tracking for Edge-assisted Object Detection: Deep Reinforcement Learning for Multi-objective Optimization of Tracking-based Detection Process

Zhou, Shihang January 2023 (has links)
Detecting generic objects is an important sensing task for applications that need to understand the environment, for example eXtended Reality (XR) and drone navigation. However, Object Detection algorithms are particularly computationally heavy for real-time video analysis on resource-constrained mobile devices. Thus Object Tracking, a much lighter process, is introduced under the Tracking-By-Detection (TBD) paradigm to alleviate the computational overhead. Still, it is common that the configuration of the TBD process remains unchanged, which results in unnecessary computation and/or performance loss in many cases.

This Master's Thesis presents a novel approach for multi-objective optimization of the TBD process over precision and latency, targeting power-constrained devices. We propose a Deep Reinforcement Learning based scheduling architecture that selects appropriate TBD actions in video sequences to achieve the desired goals. Specifically, we develop a simulation environment providing Markovian state information as input for the scheduler neural network, a justified set of TBD actions, and a scalarized reward function that combines the multiple objectives. Our results demonstrate that the trained policies can learn to utilize content information from the current and previous frames, thus optimally controlling the TBD process at each frame. The proposed approach outperforms both baselines with fixed TBD configurations and recent research works, achieving precision close to pure detection while keeping latency much lower. Both tunable configurations show positive and synergistic contributions to the optimization objectives. We also show that our policies are generalizable, with the inference and action time of the scheduler adding minimal latency overhead. This makes our scheduling design highly practical in real XR or similar applications on power-constrained devices.
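The scalarized reward that combines the precision and latency objectives can take a very simple shape. The thesis's actual weighting and latency budget are not given in the abstract; the 33 ms frame budget (about 30 fps) and the penalty weight below are illustrative assumptions:

```python
def reward(precision, latency_ms, latency_budget_ms=33.0, lam=0.5):
    # Reward detection quality; penalize only the latency above the frame
    # budget, scaled by lam, so staying within budget costs nothing.
    overshoot = max(0.0, latency_ms - latency_budget_ms) / latency_budget_ms
    return precision - lam * overshoot

# Running the heavy detector every frame is accurate but slow; tracking most
# frames keeps the reward up by staying inside the budget.
print(reward(0.95, 80.0), reward(0.90, 20.0))
```

A scheduler trained against such a reward learns when a frame's content has changed enough to justify paying for a fresh detection rather than a cheap tracking update.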
439

An Optimization-Based Treatment Planner for Gamma Knife Radiosurgery

Jitprapaikulsarn, Suradet 04 March 2005 (has links)
No description available.
440

INTELLIGENT MULTIPLE-OBJECTIVE PROACTIVE ROUTING IN MANET WITH PREDICTIONS ON DELAY, ENERGY, AND LINK LIFETIME

Guo, Zhihao January 2008 (has links)
No description available.
