311 |
AN AGENT-BASED SYSTEMATIC ENSEMBLE APPROACH FOR AUTO AUCTION PREDICTION - Alfuhaid, Abdulaziz Ataallah, January 2018
No description available.
|
312 |
Predicting Residential Heating Energy Consumption and Savings Using Neural Network Approach - Al Tarhuni, Badr, 30 May 2019
No description available.
|
313 |
Evaluating The Predictability of Pseudo-Random Number Generators Using Supervised Machine Learning Algorithms - Apprey-Hermann, Joseph Kwame, 20 May 2020
No description available.
|
314 |
Characterization of a light petroleum fraction produced from automotive shredder residues - Tipler, Steven, 20 May 2021
Wastes have real potential as players in tomorrow's energy mix. Depending on their composition they can have a high heating value, which makes them good candidates for conversion into liquid fuel via pyrolysis. Among the different types of waste, automotive residues are expected to grow sharply owing to the increasing number of cars and the tendency to build cars with ever more polymers. Moreover, the existing regulations on the recycling of end-of-life vehicles are becoming more and more stringent. Unconventional fuels such as those derived from automotive shredder residues (ASR) have a particular composition that tends to increase the amount of pollutants compared with conventional fuels. Relying on alternative combustion modes, such as reactivity controlled compression ignition (RCCI), is one way to cope with these pollutants. In RCCI, two fuels are burned simultaneously: a light fraction with low reactivity and a heavy fraction with high reactivity. The heavy fraction governs the ignition, as it is injected directly into the cylinder close to the end of compression; a variation of its ignition delay could affect the quality of the combustion, but this can be tackled by adjusting the injection timing. For the low-reactivity fuel no such remedy is available, since its reactivity depends on the initial parameters (equivalence ratio, inlet temperature, exhaust gas recirculation ratio). If this fuel is too reactive, it can create knock, which has a dramatic impact on the engine and leads to damage. Being able to predict its features is therefore a key aspect of safe usage. Prediction methods exist but had never been tested with fuels derived from automotive residues. For petroleum products, the usual prediction methods operate at three levels: the chemical composition, the properties, and the reactivity in an appliance. The fuel is studied at these three levels. First, the structure gives a good overview of the fuel auto-ignition; for instance, aromatics tend to have a higher ignition delay time (IDT) than paraffins. Second, the octane numbers are good indicators of the fuel IDT and of the resistance to knock; more precisely, the octane numbers describe the resistance of a fuel to end-gas auto-ignition. Last, the IDT was studied in a rapid compression machine and a surrogate fuel was formulated. Surrogate fuels substitute for real fuels during simulations because real fuels are too complex to be modelled by kinetic mechanisms. The existing methods for estimating the composition were updated to predict the n-paraffin, iso-paraffin, olefin, naphthene, aromatic and oxygenate (PIONAOx) fractions, and good accuracy was achieved compared with the literature. This new method requires measurements of the specific gravity, the distillation cut points, the CHO atom fractions, the kinematic viscosity and the refractive index. Two methods to predict the octane numbers were developed, based on Bayesian inference, principal component analysis (PCA) and artificial neural networks (ANN). The first is a Bayesian method that modifies the pseudocomponent (PC) method: it introduces a correcting factor applied to the existing formulation of the PC method to increase its accuracy, and a precision better than 2% is achieved. The second method is based on PCA and an ANN: 41 properties were studied, from which a reduced set of principal variables was selected to predict the octane numbers. Ten properties calculated only from the distillation cut points, the CHO atom fractions and the specific gravity were selected to predict the octane numbers accurately. Measurements of the IDT of a fuel produced from ASR were carried out in a rapid compression machine (RCM); they are the first such measurements ever made in this type of machine and provide new experimental data to the literature. These experimental data were also used to formulate a surrogate fuel, which can be used to run simulations under specific conditions. The current thesis investigates fuels derived from ASR. It was shown that such fuels can be burnt in engines as long as their properties are carefully monitored, the IDT being particularly important. Nevertheless, additional experimental campaigns and engine simulations are required to correctly assess all of the combustion features of such a fuel in an engine. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
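To make the second octane-number method concrete (PCA on a large set of measured fuel properties followed by a small neural-network regressor), here is a minimal sketch in Python; the dataset is synthetic and the property matrix, network size, and number of retained components are assumptions, not the thesis's actual data or model.

```python
# Minimal sketch: PCA feature reduction + ANN regression for the octane number.
# The property matrix and target values below are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_props = 200, 41                 # e.g. 41 measured properties per fuel sample
latent = rng.normal(size=(n_samples, 3))     # a few underlying fuel characteristics
X = latent @ rng.normal(size=(3, n_props)) + 0.1 * rng.normal(size=(n_samples, n_props))
y = 92 + 4 * latent[:, 0] - 2 * latent[:, 1] + rng.normal(scale=0.3, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),                        # properties have very different units and scales
    PCA(n_components=10),                    # keep a reduced set of principal variables
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```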
|
315 |
Functional Principal Component Analysis of Vibrational Signal Data: A Functional Data Analytics Approach for Fault Detection and Diagnosis of Internal Combustion Engines - McMahan, Justin Blake, 14 December 2018
Fault detection and diagnosis (FDD) is a critical component of operations management systems. The goal of FDD is to identify the occurrence and causes of abnormal events. While many approaches are available, data-driven approaches to FDD have proven to be robust and reliable. Exploiting these advantages, the present study applied functional principal component analysis (FPCA) to carry out feature extraction for fault detection in internal combustion engines. A feature subset that explained 95% of the variance of the original vibrational sensor signal was then used in a multilayer perceptron to carry out prediction for fault diagnosis. Across the engine states studied in the present work, the proposed approach achieved an overall prediction accuracy of 99.72%. These results are encouraging because they show the feasibility of applying FPCA for feature extraction, which had not previously been discussed in the fault detection and diagnosis literature.
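A minimal sketch of the pipeline described above, with the simplifying assumption that every vibration signal is sampled on a common grid, so that PCA on the discretized curves stands in for a full FPCA with basis smoothing; the signals, labels, and network size are synthetic placeholders rather than the engine data used in the study.

```python
# Sketch: extract principal-component scores from vibration signals and
# classify engine state with an MLP. Signals and labels are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)                       # common sampling grid
healthy = np.sin(2 * np.pi * 30 * t)                 # placeholder "healthy" signature
faulty = healthy + 0.4 * np.sin(2 * np.pi * 90 * t)  # placeholder "faulty" signature

X = np.vstack([healthy + 0.2 * rng.normal(size=t.size) for _ in range(150)] +
              [faulty + 0.2 * rng.normal(size=t.size) for _ in range(150)])
y = np.array([0] * 150 + [1] * 150)                  # 0 = healthy, 1 = fault

# Keep enough components to explain 95% of the variance, as in the abstract.
pca = PCA(n_components=0.95)
scores = pca.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(scores, y, test_size=0.3, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1).fit(X_tr, y_tr)
print("components kept:", pca.n_components_, "test accuracy:", clf.score(X_te, y_te))
```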
|
316 |
Design and Optimization of DSP Techniques for the Mitigation of Linear and Nonlinear Impairments in Fiber-Optic Communication Systems / DESIGN AND OPTIMIZATION OF DIGITAL SIGNAL PROCESSING TECHNIQUES FOR THE MITIGATION OF LINEAR AND NONLINEAR IMPAIRMENTS IN FIBER-OPTIC COMMUNICATION SYSTEMS - Maghrabi, Mahmoud MT, January 2021
Optical fibers play a vital role in modern telecommunication systems and networks. An optical fiber link imposes linear and nonlinear distortions on the propagating light-wave signal due to the inherent dispersive nature and nonlinear behavior of the fiber. These distortions impede the increasing demand for higher data rate transmission over longer distances. Developing efficient and computationally inexpensive digital signal processing (DSP) techniques that effectively compensate for the fiber impairments is therefore essential and of preeminent importance. This thesis proposes two DSP-based approaches for mitigating the induced distortions in short-reach and long-haul fiber-optic communication systems.
The first approach introduces a powerful digital nonlinear feed-forward equalizer (NFFE) that exploits a multilayer artificial neural network (ANN). The proposed ANN-NFFE mitigates the nonlinear impairments of short-haul optical fiber communication systems that arise from direct photo-detection: in a direct detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field. The proposed equalizer offers the most favorable computational cost while maintaining high equalization performance, comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to the proposed extensive training, the equalizer achieves the ultimate performance limit of any feed-forward equalizer. Its performance and efficiency are investigated by applying it to various practical short-reach fiber-optic transmission scenarios, extracted from practical metro/media access networks and data center applications. The obtained results show that the ANN-NFFE compensates for the received bit error rate (BER) degradation and significantly increases the tolerance to chromatic dispersion distortion.
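As a loose illustration of the equalizer idea, and not the thesis's actual architecture or training procedure, the sketch below trains a small feed-forward network to invert a toy square-law (direct-detection-like) channel with intersymbol interference; the channel taps, window length, and network size are assumptions.

```python
# Sketch: a feed-forward NN equalizer for a toy direct-detection channel with ISI.
# The channel model, window length, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
symbols = rng.choice([0.0, 1.0, 2.0, 3.0], size=8000)          # PAM-4 levels
h = np.array([0.2, 1.0, 0.3])                                  # toy dispersive channel taps
field = np.convolve(symbols, h, mode="same")                   # linear ISI
received = np.abs(field) ** 2 + 0.05 * rng.normal(size=symbols.size)  # square-law detection

win = 7                                                        # sliding input window
pad = win // 2
r = np.pad(received, pad)
X = np.stack([r[i:i + win] for i in range(symbols.size)])      # windowed received samples
y = symbols                                                    # targets: transmitted symbols

eq = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=2)
eq.fit(X[:6000], y[:6000])                                     # train on the first part

decisions = np.clip(np.round(eq.predict(X[6000:])), 0, 3)
print("symbol error rate:", np.mean(decisions != y[6000:]))
```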
The second approach is devoted to blindly combating impairments of long-haul fiber-optic systems and networks. A novel adjoint sensitivity analysis (ASA) approach for the nonlinear Schrödinger equation (NLSE), which describes light-wave propagation in optical fiber communication systems, is proposed. The proposed ASA approach significantly accelerates the sensitivity calculations in any fiber-optic design problem: using only one extra adjoint system simulation, all the sensitivities of a general objective function with respect to all fiber design parameters are estimated. We provide a full description of the solution to the derived adjoint problem. The accuracy and efficiency of the proposed algorithm are investigated through a comparison with the accurate but computationally expensive central finite-differences (CFD) approach. Numerical simulation results show that the proposed ASA algorithm has the same accuracy as the CFD approach but at a much lower computational cost.
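The NLSE mentioned above, dA/dz = -(α/2)A - i(β2/2) d²A/dT² + iγ|A|²A, is commonly integrated with the split-step Fourier method. The adjoint solver itself is specific to the thesis, so the sketch below only shows a generic forward split-step integration with illustrative fiber parameters, i.e., the kind of forward simulation whose sensitivities the ASA approach accelerates.

```python
# Generic split-step Fourier integration of the scalar NLSE (illustrative parameters,
# simple first-order splitting).
import numpy as np

def ssfm(A0, dt, length_km, nsteps, alpha_db_km=0.2, beta2=-21.7, gamma=1.3):
    """Propagate field A0 over length_km km.
    Units: dt in ps, beta2 in ps^2/km, gamma in 1/(W*km), alpha in dB/km."""
    alpha = alpha_db_km * np.log(10) / 10.0                 # dB/km -> 1/km
    dz = length_km / nsteps
    w = 2 * np.pi * np.fft.fftfreq(A0.size, d=dt)           # angular frequency grid (rad/ps)
    lin = np.exp((-alpha / 2 + 1j * beta2 / 2 * w**2) * dz) # loss + dispersion per step
    A = A0.copy()
    for _ in range(nsteps):
        A = np.fft.ifft(np.fft.fft(A) * lin)                # linear (dispersion + loss) step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)      # Kerr nonlinearity step
    return A

# Example: a 10 ps Gaussian pulse over an 80 km span.
t = np.arange(-512, 512) * 1.0                              # time grid in ps
A0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * 10.0**2))          # ~1 mW peak power
Aout = ssfm(A0, dt=1.0, length_km=80.0, nsteps=800)
print("output peak power (W):", np.abs(Aout).max()**2)
```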
Moreover, we propose an efficient, robust, and accelerated adaptive digital back propagation (A-DBP) method based on an adjoint optimization technique. Provided that the total transmission distance is known, the proposed A-DBP algorithm blindly compensates for the linear and nonlinear distortions of point-to-point long-reach optical fiber transmission systems or multi-point optical fiber transmission networks, without knowledge of the launch power or channel parameters. The NLSE-based ASA approach is extended to the sensitivity analysis of a general multi-span DBP model. A modified split-step Fourier method is introduced to solve the adjoint problem, and a complete analysis of its computational complexity is given. An adjoint-based optimization (ABO) technique is introduced to significantly accelerate the parameter extraction of the A-DBP. The ABO algorithm couples a sequential quadratic programming (SQP) technique with the extended ASA algorithm to rapidly solve the A-DBP training problem and optimize the design parameters with a minimum overhead of extra system simulations. Regardless of the number of A-DBP design parameters, the derivatives of the training objective function with respect to all parameters are estimated using only one extra adjoint system simulation per optimization iteration. This contrasts with traditional finite-difference (FD)-based optimization methods, whose sensitivity-analysis cost per iteration scales linearly with the number of parameters.
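For orientation only: plain digital back-propagation undoes dispersion and Kerr nonlinearity by integrating the NLSE backwards with negated parameters, assuming the channel parameters are known. The thesis's A-DBP instead learns those parameters blindly through adjoint-based optimization; the sketch below covers only the known-parameter, single-span case, with the same illustrative units as the forward sketch above.

```python
# Sketch: single-span digital back-propagation (DBP) with known parameters.
# Units and values are illustrative (dt in ps, beta2 in ps^2/km, gamma in 1/(W*km)).
import numpy as np

def dbp(received, dt, length_km, nsteps, alpha_db_km=0.2, beta2=-21.7, gamma=1.3):
    alpha = alpha_db_km * np.log(10) / 10.0
    dz = length_km / nsteps
    w = 2 * np.pi * np.fft.fftfreq(received.size, d=dt)
    # Inverse linear operator: compensate the span loss and unwind the dispersion.
    lin_inv = np.exp((alpha / 2 - 1j * beta2 / 2 * w**2) * dz)
    A = received.copy()
    for _ in range(nsteps):
        A = A * np.exp(-1j * gamma * np.abs(A)**2 * dz)   # remove the accumulated Kerr phase
        A = np.fft.ifft(np.fft.fft(A) * lin_inv)          # remove dispersion, restore power
    return A

# Toy usage: back-propagate a (pretend) received Gaussian pulse over an 80 km span.
t = np.arange(-512, 512) * 1.0
rx = np.sqrt(1e-3) * np.exp(-t**2 / (2 * 10.0**2))
print("recovered peak power (W):", np.abs(dbp(rx, dt=1.0, length_km=80.0, nsteps=80)).max()**2)
```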
The robustness, performance, and efficiency of the proposed A-DBP algorithm are demonstrated by applying it to mitigate the distortions of a 4-span optical fiber communication system scenario. Our results show that the proposed A-DBP achieves the optimal compensation performance obtained using an ideal fine-mesh DBP scheme utilizing the correct channel parameters. Compared to A-DBPs trained using SQP algorithms based on forward, backward, and central FD approaches, the proposed ABO algorithm trains the A-DBP 2.02 times faster than the backward/forward FD-based optimizers and 3.63 times faster than the more accurate CFD-based optimizer. The achieved gain further increases as the number of design parameters increases. A coarse-mesh A-DBP with fewer spans is also adopted to significantly reduce the computational complexity, achieving compensation performance higher than that obtained using a coarse-mesh DBP with the full number of spans. / Thesis / Doctor of Philosophy (PhD) / This thesis proposes two powerful and computationally efficient digital signal processing (DSP)-based techniques, namely an artificial neural network nonlinear feed-forward equalizer (ANN-NFFE) and an adaptive digital back propagation (A-DBP) equalizer, for mitigating the induced distortions in short-reach and long-haul fiber-optic communication systems, respectively. The ANN-NFFE combats nonlinear impairments of direct-detected short-haul optical fiber communication systems, achieving compensation performance comparable to the benchmark performance obtained using a maximum-likelihood sequence estimator at much lower computational cost. A novel adjoint sensitivity analysis (ASA) approach is proposed to significantly accelerate sensitivity analyses of fiber-optic design problems. The A-DBP exploits a gradient-based optimization method coupled with the ASA algorithm to blindly compensate for the distortions of coherent-detected fiber-optic communication systems and networks, using the minimum possible overhead of system simulations. The robustness and efficiency of the proposed equalizers are demonstrated through numerical simulations of varied examples extracted from practical optical fiber communication system scenarios.
|
317 |
Optimization-Based Solutions for Planning and Control / Optimization-based Solutions to Optimal Operation under Uncertainty and Disturbance Rejection - Jalanko, Mahir, January 2021
Industrial automation systems normally consist of four hierarchy levels: planning, scheduling, real-time optimization, and control. At the planning level, the goal is to compute an optimal production plan that minimizes the production cost while meeting process constraints. The planning model is typically formulated as a mixed integer nonlinear program (MINLP), which is hard to solve to global optimality due to its nonconvexity and large dimensionality. Uncertainty in component qualities in gasoline blending, caused by measurement errors and variation in upstream processes, may lead to off-specification products that require re-blending. Uncertainty in product demands may lead to a suboptimal solution that fails to capture some potential profit because of shortages in product supply. While incorporating process uncertainties is essential to reducing the production cost and increasing profitability, it comes with the disadvantage of increasing the complexity of the MINLP planning model. The key contribution at the planning level is to employ the inventory pinch decomposition method to account for uncertainty in component qualities and product demands, reducing the production cost and increasing the profitability of the gasoline blending application.
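To give a concrete, deliberately simplified picture of blend planning, the sketch below solves a single-period gasoline blending problem as a linear program with fixed component qualities and linear octane blending; all numbers are made up, and the thesis's actual model is a far larger MINLP handled with inventory pinch decomposition under uncertainty.

```python
# Toy single-period blend plan: choose barrels of each component to minimize cost
# while meeting demand and a minimum-octane spec. All data are illustrative.
from scipy.optimize import linprog

cost   = [70.0, 85.0, 95.0]      # $/bbl for components A, B, C (made-up numbers)
octane = [88.0, 93.0, 99.0]      # component octane numbers (linear blending assumed)
demand = 1000.0                  # bbl of finished gasoline required
spec   = 91.0                    # minimum octane of the blend

# Variables x = [xA, xB, xC] in bbl. Equality: total volume = demand.
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [demand]
# Inequality (linprog uses A_ub @ x <= b_ub): spec*total - sum(octane_i * x_i) <= 0
A_ub = [[spec - q for q in octane]]
b_ub = [0.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print("status:", res.message)
print("bbl per component:", res.x, "total cost: $%.0f" % res.fun)
```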
At the control level, the goal is to ensure desired operating conditions by meeting process setpoints, ensuring process safety, and avoiding process failures. Model predictive control (MPC) is an advanced control strategy that utilizes a dynamic model of the process to predict its future dynamic behavior over a time horizon. The effectiveness of MPC relies heavily on the availability of a reasonably accurate process model. The key contributions at the control level are: (1) investigating the use of different system identification methods for developing a dynamic model of a high-purity distillation column, a highly nonlinear process; and (2) developing a novel hybrid-based MPC to improve the control of the column and achieve flooding-free control. / Dissertation / Doctor of Philosophy (PhD) / The operation of a chemical process involves many decisions, which are normally distributed into levels referred to as the process automation hierarchy. These levels are planning, scheduling, real-time optimization, and control. This thesis addresses two of them: planning and control. At the planning level, the objective is to ensure optimal utilization of raw materials and equipment to reduce production cost. At the control level, the objective is to meet and follow the process setpoints determined by the real-time optimization level.
The main goals of the thesis are: (1) to develop an efficient algorithm for solving a large-scale planning problem that incorporates uncertainty in component qualities and product demands, in order to reduce production cost and maximize profit for a gasoline blending application; and (2) to develop a novel hybrid-based model predictive controller to improve the control strategy of an industrial distillation column that faces flooding issues.
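For illustration of the receding-horizon idea behind MPC, and not the hybrid controller developed in the thesis, the sketch below runs a minimal unconstrained MPC on an assumed linear state-space plant: at every step it computes the input sequence minimizing a quadratic tracking cost over the horizon and applies only the first move.

```python
# Minimal receding-horizon (MPC-style) controller for a toy linear plant.
# The plant model, horizon, and weights are illustrative assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed discrete-time plant: x+ = A x + B u
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])               # track the first state (e.g. a composition)
N = 20                                   # prediction horizon
r = 1.0                                  # setpoint
R = 0.01                                 # input penalty

# Prediction matrices: y = Phi x0 + Theta u, with u = [u_0 ... u_{N-1}]
Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
Theta = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Theta[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

x = np.array([0.0, 0.0])
for t in range(50):
    # Unconstrained solution of min ||Phi x + Theta u - r||^2 + R ||u||^2
    H = Theta.T @ Theta + R * np.eye(N)
    g = Theta.T @ (r - Phi @ x)
    u_seq = np.linalg.solve(H, g)
    u = u_seq[0]                         # receding horizon: apply the first move only
    x = A @ x + B.flatten() * u
    if t % 10 == 0:
        print(f"t={t:2d}  y={(C @ x).item():.3f}  u={u:.3f}")
```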
|
318 |
Är AI framtiden i automatiserad mjukvarutestning? : En studie i AI:s framkomst inom mjukvarutestning / Is AI the future of automated software testing? - Türkkan, Volkan; Westerback, Linus, January 2022
The use of Artificial Intelligence (AI) has grown in recent years and become part of everyday life, but when it comes to automated software testing, AI has not yet played a significant role. The product is a company's most important asset, and for that reason companies should guarantee that the product is good. To do this, quality assurance is carried out through software testing. This process is time-consuming and costly, and it is impossible to test the product in its entirety. If AI can complement automated software testing, companies will be able to produce better and cheaper products in a shorter period of time. This study examined the topic of AI in automated software testing. The purpose was to investigate whether AI can improve automated software testing in Swedish companies. To this end, interviews were conducted, and the information gathered was compared with the selected literature. The information was then used to answer the following questions: will AI improve today's software testing? What obstacles stand in the way of implementing AI? Which algorithms are best suited for this purpose? The work focuses on the Genetic Algorithm (GA), Natural Language Processing (NLP), and Artificial Neural Networks (ANN); these algorithms were chosen because they are prevalent in the literature. The interviews were conducted with experts in the fields of AI and software testing, in order to provide contrast and give a broader picture of the areas of application and the problem areas. This information was then compared with previous research to see whether any solutions had since emerged. At present there are obstacles to introducing AI in software testing, among them the lack of an optimal test oracle, of training data, and of test models. All of the algorithms examined in this work had areas in which they excelled; however, NLP had the widest range of applications, and it works best in combination with a deep-learning algorithm such as an ANN.
|
319 |
Damage detection on railway bridges using Artificial Neural Network and train induced vibrations - Shu, Jiangpeng; Zhang, Ziye, January 2012
A damage detection approach based on an Artificial Neural Network (ANN), using statistics of the structural dynamic responses as the damage index, is proposed in this study for Structural Health Monitoring (SHM). Based on a sensitivity analysis, the feasibility of using changes in the variances and covariances of the dynamic responses of railway bridges under moving trains as indices for damage detection is evaluated. An FE model of a one-span simply supported beam bridge is built, considering both single-damage and multi-damage cases. A Back-Propagation Neural Network (BPNN) is designed and trained to simulate the detection process. A series of numerical tests on the FE model with different train properties demonstrates the validity and efficiency of the proposed approach. The results show not only that the trained ANN together with the statistics can correctly estimate the location and severity of damage in the structure, but also that identifying the damage location is more difficult than identifying the damage severity. In summary, it is concluded that using statistical properties of the structural dynamic response as damage indices, with an Artificial Neural Network as the detection tool, is a reliable and effective means of damage detection.
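A rough, self-contained illustration of the approach on synthetic data (not the bridge FE model used in the study): variances and covariances of simulated multi-sensor responses are fed to a small back-propagation neural network that regresses both damage location and severity.

```python
# Sketch: variance/covariance statistics of dynamic responses -> BPNN that
# estimates damage location and severity. All signals here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def response_stats(loc, sev):
    """Fake 3-sensor bridge response whose statistics shift with damage."""
    t = np.linspace(0, 10, 2000)
    sigs = []
    for pos in (0.25, 0.5, 0.75):                          # sensor positions (fraction of span)
        amp = 1.0 + sev * np.exp(-20 * (pos - loc) ** 2)   # damage amplifies nearby response
        sigs.append(amp * np.sin(2 * np.pi * (2.0 - 0.3 * sev) * t)
                    + 0.05 * rng.normal(size=t.size))
    cov = np.cov(np.array(sigs))                           # 3x3 variance/covariance matrix
    return cov[np.triu_indices(3)]                         # 6 unique entries as features

locs = rng.uniform(0.1, 0.9, 400)                          # damage location along the span
sevs = rng.uniform(0.0, 0.5, 400)                          # damage severity (stiffness loss)
X = np.array([response_stats(l, s) for l, s in zip(locs, sevs)])
y = np.column_stack([locs, sevs])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
bpnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=3).fit(X_tr, y_tr)
print("R^2 on held-out cases:", bpnn.score(X_te, y_te))
```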
|
320 |
Surrogate model-based design optimization of a mobile deployable structure for overpressure load and vehicular impact mitigation - Tellkamp, Daniela F, 09 December 2022
Artificial Neural Network (ANN) ensemble and Response Surface Method (RSM) surrogate models were generated from Finite Element (FE) simulations to predict the overpressure load and vehicle impact response of a novel rapidly deployable protective structure. A Non-dominated Sorting Genetic Algorithm-II (NSGA-II) was used in conjunction with the surrogate models to determine the structure topology input variable configurations best suited to produce the optimal balance of minimum mass, minimum rotation angle, minimum displacement, and maximum total length of the deployable structure. The structure was designed to retract into a container, to be lightweight to facilitate transportation, and to adapt to varying terrain slopes. This research demonstrates that, in comparison to the RSM, ANN ensembles can be used more accurately and efficiently to identify optimal design solutions for multi-objective design problems when two surrogate models from the same method, corresponding to separate FE models, are used simultaneously in an NSGA-II.
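A minimal sketch of the surrogate-assisted idea on a toy two-objective problem: an ensemble of small neural networks, with predictions averaged across members, stands in for the expensive FE simulations, and a simple non-dominated filter over random candidate designs stands in for the full NSGA-II; the objective functions, bounds, and ensemble size are made-up assumptions.

```python
# Sketch: ANN-ensemble surrogate of an "expensive" simulation, then pick
# non-dominated candidates from its predictions. Toy objectives, not the FE models.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def expensive_sim(x):                      # stand-in for an FE run: two objectives to minimize
    mass = x[:, 0] ** 2 + 0.5 * x[:, 1]
    rotation = (1.0 - x[:, 0]) ** 2 + 0.5 * (1.0 - x[:, 1]) ** 2
    return np.column_stack([mass, rotation])

X_train = rng.uniform(0, 1, size=(200, 2))               # sampled design variables
Y_train = expensive_sim(X_train)

# Ensemble: several MLPs with different seeds; the prediction is the member mean.
ensemble = [MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=s)
            .fit(X_train, Y_train) for s in range(5)]

def predict(X):
    return np.mean([m.predict(X) for m in ensemble], axis=0)

candidates = rng.uniform(0, 1, size=(2000, 2))           # cheap surrogate evaluations
F = predict(candidates)

# Keep non-dominated points (both objectives minimized) as the approximate Pareto set.
nd = [i for i in range(len(F))
      if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print("approx. Pareto designs found:", len(nd))
```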
|