221

Study on Treatment with Respect to Idiopathic Scoliosis (Sensitivity Analysis Based on Buckling Theory)

Takeuchi, Kenzen, Azegami, Hideyuki, Murachi, Shunji, Kitoh, Junzoh, Ishida, Yoshito, Kawakami, Noriaki, Makino, Mitsunori 12 1900 (has links)
No description available.
222

Global sensitivity analysis of fault location algorithms.

Ooi, Hoong Boon January 2009 (has links)
Transmission lines of any voltage level are subject to faults. To speed up repairs and the restoration of power, it is important to know where a fault is located. A fault location algorithm's result is influenced by a series of modelling equations, setting parameters and system factors reflected in its voltage and current inputs. These factors are subject to sources of uncertainty, including measurement and signal-processing errors, setting errors and incomplete modelling of the system under fault conditions, all of which affect the accuracy of the calculated distance to the fault. Accurate fault location reduces operating costs by avoiding lengthy and expensive line patrols; it speeds up repairs and the restoration of lines, ultimately reducing the revenue lost to outages. In this thesis, we review fault location algorithms and how uncertainty affects their results. Sensitivity analysis can apportion the variation in the output of a fault location algorithm to the variation of the uncertain factors. We use global sensitivity analysis to determine which uncertain factors contribute most and how they interact, choosing Analysis of Variance (ANOVA) decomposition as our global sensitivity analysis method: the decomposition gives insight into the fault location problem, such as the relations between its uncertain factors. A quasi-regression technique is used to approximate the fault location function: the transmission line fault location system is fitted into the ANOVA decomposition using quasi-regression. From the approximating function, we obtain the variance-based sensitivity of the fault location to the uncertain factors using the Monte Carlo method. We have designed a novel methodology to test and compare fault location algorithms.
In practice, such analysis not only helps in selecting the optimal locator for a specific application, but also aids the calibration process. / Thesis (M.Eng.Sc.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2009
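The variance-based (ANOVA/Sobol) decomposition described above can be sketched with a pick-freeze Monte Carlo estimator. The toy model below is a stand-in for a fault location algorithm, not the thesis's actual model; with f(x) = x1 + 0.5·x2 and independent uniform inputs, the exact first-order indices are 0.8 and 0.2.

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Pick-freeze Monte Carlo estimator of first-order Sobol (ANOVA) indices."""
    A = rng.random((n, d))          # two independent input samples
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # swap only the i-th input column
        # Saltelli (2010)-style estimator of V_i = Var(E[f | x_i])
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

rng = np.random.default_rng(0)
f = lambda X: X[:, 0] + 0.5 * X[:, 1]    # toy "fault location" output
S = sobol_first_order(f, d=2, n=200_000, rng=rng)
```

For this linear model the first input explains 80% of the output variance and the second 20%, which the estimator recovers to within Monte Carlo error.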
223

Application of Genetic Algorithm to a Forced Landing Manoeuvre on Transfer of Training Analysis

Tong, Peter, mail@petertong.com January 2007 (has links)
This study raises some issues for training pilots to fly forced landings and examines the impact that these issues may have on the design of simulators for such training. It focuses on the flight trajectories that the pilot of a single-engine general aviation aircraft should fly after engine failure, and on how pilots can be better trained in simulators for this forced landing manoeuvre. A sensitivity study was carried out on the effects of errors, and on the effect that the tolerances in the aerodynamic parameters prescribed in the Manual of Criteria for the Qualification of Flight Simulators have on the performance of flight simulators used for pilot training. It uses a simplified analytical model of the Beech Bonanza model E33A aircraft and vertical atmospheric turbulence based on the MIL-F-8785C specifications. The effect of the tolerances was found to be highly sensitive to the nature of the manoeuvre flown, and in some cases negative transfer of training may be induced by the tolerances. A forced landing trajectory optimisation was carried out using a Genetic Algorithm (GA). Forced landing manoeuvres with pre-selected touchdown locations and pre-selected final headings were analysed for an engine failure at 650 ft AGL, for bank angles varying from 45° left to 45° right and aircraft speeds varying from 75.6 mph to 208 mph, corresponding to 5% above the airplane's stall speed and to its maximum speed respectively. The results show that certain pre-selected touchdown locations are more susceptible to horizontal wind. The results for the manoeuvre with a pre-selected location show minimal distance error, while the quality of the results for the manoeuvre with both a pre-selected location and a final heading depends on the end constraints.
For certain pre-selected touchdown locations and final headings, the airplane may either touch down very close to the pre-selected touchdown location but with a larger final heading error, or touch down with minimal final heading error but further away from the pre-selected touchdown location. Analyses of an obstacle-avoidance forced landing manoeuvre were also carried out, with an obstacle intentionally placed in the flight path found by the GA program developed for the unobstructed case. The methodology successfully found flight paths that avoid the obstacle and touch down near the pre-selected location. In some cases there exists more than one ensemble grouping of flight paths. The distance error depends both on the pre-selected touchdown location and on where the obstacle is placed, and it tends to increase when a specific final heading requirement is added to the obstacle-avoidance manoeuvre. As in the case without a specific final heading requirement, there is a trade-off between touching down nearer to the pre-selected location and touching down with a smaller final heading error.
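The GA-based trajectory optimisation can be illustrated with a generic real-coded genetic algorithm (truncation selection, blend crossover, Gaussian mutation) minimising the distance to a pre-selected 2-D touchdown point. The objective function and all parameter values below are placeholders, not the thesis's flight-dynamics model.

```python
import numpy as np

def ga_minimise(f, lo, hi, pop=60, gens=80, seed=0):
    """Generic real-coded GA: truncation selection, blend crossover, mutation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < best_f:                       # keep the best-ever individual
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        elite = X[np.argsort(fit)[: pop // 2]]       # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        alpha = rng.random((pop, 1))
        X = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        X += rng.normal(0.0, 0.02, X.shape) * (hi - lo)          # Gaussian mutation
        X = np.clip(X, lo, hi)
    return best_x, best_f

touchdown = np.array([1.2, -0.7])                   # hypothetical target point
f = lambda x: float(np.sum((x - touchdown) ** 2))   # distance-error objective
best_x, best_f = ga_minimise(f, lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```

In the thesis the fitness evaluation would be a full trajectory simulation with end constraints (touchdown point, final heading, obstacle clearance) folded into the objective.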
224

PIEZOELECTRIC ACTUATOR DESIGN OPTIMISATION FOR SHAPE CONTROL OF SMART COMPOSITE PLATE STRUCTURES

Nguyen, Van Ky Quan January 2005 (has links)
Shape control of a structure with distributed piezoelectric actuators can be achieved by optimally selecting the loci, shapes and sizes of the actuators and choosing the electric fields applied to them. Shape control can be categorised as either static or dynamic. Whether the change is transient or gradual, both aim to determine the loci, sizes and shapes of the piezoelectric actuators, and the applied voltages, such that a desired structural shape is achieved effectively. This thesis is primarily concerned with establishing a finite element formulation for general smart laminated composite plate structures that is capable of analysing static and dynamic deformation using non-rectangular elements. The mechanical deformation of the smart composite plate is modelled using a third-order plate theory, while the electric field is simulated with a layer-wise theory. The finite element formulation for static and dynamic analysis is verified against available numerical results. Selected experiments have also been conducted to measure structural deformation, and the experimental results are correlated with those of the finite element formulation for static analysis. In addition, the Linear Least Squares (LLS) method is employed to study the effect of different piezoelectric actuator patch patterns on the error function, the least-square error between the calculated and desired structural shapes in static shape control.
The second issue of this thesis is piezoelectric actuator design optimisation (PADO) for quasi-static shape control: finding the applied voltages and the configuration of the piezoelectric actuator patches that minimise the error function, where the actuator configuration is defined through the optimisation technique of altering nodal coordinates (size/shape optimisation) or eliminating inefficient elements in a structural mesh (topology optimisation). Several shape control algorithms are developed to improve structural shape control by reducing the error function. Further development of the GA-based voltage and actuator design optimisation method includes constraint handling, where the error function can be optimised subject to an energy-consumption constraint, or vice versa. Numerical examples are presented to verify that the proposed algorithms are applicable to quasi-static shape control based on voltage and actuator design optimisation in terms of minimising the error function. The third issue is the use of the present finite element formulation for modal shape control and for controlling resonant vibration of smart composite plate structures. The controlled resonant vibration formulation is developed. Modal analysis and LLS methods are also employed to optimise the voltages applied to the piezoelectric actuators to achieve the modal shapes. The Newmark direct time integration method is used to study harmonic excitation of smart structures. Numerical results are presented to induce harmonic vibration of the structure with controlled magnitude by adjusting the damping, and to verify the controlled resonant vibration formulation.
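The LLS voltage computation above reduces to an ordinary least-squares solve. The influence matrix below is made up for illustration (in the thesis it would come from the finite element model): each column holds the deflection at four sensing points per unit voltage on one actuator patch, and the solve picks the voltages minimising the error function.

```python
import numpy as np

# Hypothetical influence matrix: column j = plate deflection (at 4 points)
# per unit voltage on actuator patch j; in practice this comes from the FE model.
A = np.array([[0.8, 0.1, 0.0],
              [0.4, 0.5, 0.1],
              [0.1, 0.5, 0.4],
              [0.0, 0.1, 0.8]])
w_desired = np.array([0.4, 0.5, 0.5, 0.4])   # target structural shape

# Voltages minimising ||A v - w_desired||^2 (the "error function"):
v, residual, rank, _ = np.linalg.lstsq(A, w_desired, rcond=None)
error = np.linalg.norm(A @ v - w_desired)
```

For this toy matrix an exact match exists (v = [3/7, 4/7, 3/7]), so the error function is driven to zero; with more sensing points than actuators the solve instead returns the best achievable compromise shape.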
225

Μορφές ανάλυσης ευαισθησίας για προβλήματα γραμμικού προγραμματισμού / Forms of sensitivity analysis for linear programming problems

Μπαλαφούτη, Παναγιώτα 20 September 2010 (has links)
Ο γραμμικός προγραμματισμός είναι μια μεθοδολογία της Επιχειρησιακής Έρευνας η οποία ασχολείται με το πρόβλημα της κατανομής των περιορισμένων πόρων ενός συστήματος σε ανταγωνιζόμενες μεταξύ τους δραστηριότητες με τον καλύτερο δυνατό τρόπο. Από μαθηματικής σκοπιάς το πρόβλημα αφορά τη μεγιστοποίηση ή ελαχιστοποίηση μιας γραμμικής συνάρτησης σύμφωνα με κάποιους γραμμικούς περιορισμούς. Τόσο η μαθηματική διατύπωση του προβλήματος, όσο και μια συστηματική διαδικασία επίλυσής του, η μέθοδος Simplex, οφείλεται στον G.B. Dantzig στα 1947. Την ίδια εποχή ο J. von Neumann διατύπωνε το αργότερα γνωστό ως δυϊκό πρόβλημα γραμμικού προγραμματισμού. Το πρώτο κεφάλαιο της παρούσης εργασίας ξεκινά με τη γενική μαθηματική θεώρηση των δύο προβλημάτων και συνεχίζει με τα βασικά θεωρήματα τα οποία αφορούν τη διαδικασία λύσης, τις ιδιότητές τους καθώς επίσης και τις σχέσεις που τα συνδέουν. Στο δεύτερο κεφάλαιο παρουσιάζονται διάφοροι τύποι ανάλυσης ευαισθησίας του γραμμικού μοντέλου, της μελέτης δηλαδή των αλλαγών που επιφέρουν στην άριστη λύση, αλλαγές σε διάφορα μεγέθη -παράμετροι- του προβλήματος. Στο ίδιο κεφάλαιο παρουσιάζεται η ανάλυση ευαισθησίας μιας ειδικής κλάσης προβλημάτων γραμμικού προγραμματισμού, του προβλήματος καταμερισμού εργασίας (εκχώρησης). Τέλος γίνεται μια σύντομη αναφορά στον υπολογισμό των δυϊκών τιμών στην περίπτωση των εκφυλισμένων λύσεων. / Linear programming is a methodology of Operations Research which deals with the problem of distributing the limited resources of a system among competing activities in the best possible way. From a mathematical point of view, the problem concerns the maximisation or minimisation of a linear function subject to certain linear constraints. Both the mathematical formulation of the problem and a systematic procedure for solving it, the Simplex method, are due to G. B. Dantzig (1947). At the same time J. von Neumann formulated what later became known as the dual problem of linear programming.
The first chapter of this work starts with the general mathematical treatment of these two problems and proceeds to the essential theorems concerning the solution procedure, their properties, and the relations that bind them. The second chapter presents various types of sensitivity analysis of the linear model, i.e. the study of the changes induced in the optimal solution by changes in various quantities (parameters) of the problem. The same chapter presents the sensitivity analysis of a special class of linear programming problems, the assignment problem. Finally, a brief note is made on the calculation of dual values in the case of degenerate solutions.
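The primal-dual relationship and the sensitivity information it carries can be sketched with a classic textbook LP. This example is illustrative and assumes SciPy's HiGHS-based `linprog`; the `marginals` it reports on the inequality constraints are the dual values, i.e. the sensitivity of the optimum to each right-hand side.

```python
import numpy as np
from scipy.optimize import linprog

# Maximise z = 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# linprog minimises, so we negate the objective.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

z_opt = -res.fun                  # optimal objective: 36 at (x, y) = (2, 6)
duals = res.ineqlin.marginals     # dual values: d(fun)/db_i for each constraint
```

Here the first constraint is slack, so its dual value is zero; the other two are binding, and their dual values (shadow prices, up to the minimisation sign convention) quantify how much the optimum moves per unit of extra resource — exactly the sensitivity analysis studied in the second chapter.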
226

Evaluation of Neural Pattern Classifiers for a Remote Sensing Application

Fischer, Manfred M., Gopal, Sucharita, Staufer-Steinnocher, Petra, Steinocher, Klaus 05 1900 (has links) (PDF)
This paper evaluates the classification accuracy of three neural network classifiers on a satellite-image-based pattern classification problem. The neural network classifiers used include two types of Multi-Layer Perceptron (MLP) and the Radial Basis Function Network. A conventional classifier is used as a benchmark against which to evaluate the performance of the neural network classifiers. The satellite image consists of 2,460 pixels selected from a section (270 x 360) of a Landsat-5 TM scene of the city of Vienna and its northern surroundings. In addition to the evaluation of classification accuracy, the neural classifiers are analysed for generalization capability and stability of results. The best overall results (in terms of accuracy and convergence time) are provided by the MLP-1 classifier with weight elimination: it has a small number of parameters and requires no problem-specific choice of initial weight values. Its in-sample classification error is 7.87% and its out-of-sample classification error is 10.24% for the problem at hand. Four classes of simulations serve to illustrate the properties of the classifier in general and the stability of the results with respect to the control parameters: training time, the gradient descent control term, initial parameter conditions, and different training and testing sets. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
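The weight-elimination regulariser behind the MLP-1 classifier can be sketched as follows. The penalty behaves like a quadratic for weights well below the scale w0 (driving them to zero, pruning the network) and saturates at a constant cost for large weights; the values of lam and w0 here are arbitrary, not the paper's settings.

```python
import numpy as np

def weight_elimination(w, lam=1e-3, w0=1.0):
    """Weight-elimination penalty (Weigend-style) and its gradient.

    penalty = lam * sum (w/w0)^2 / (1 + (w/w0)^2):
      ~ lam*(w/w0)^2 for |w| << w0  (small weights pushed to zero),
      -> lam per weight for |w| >> w0 (large weights cost a constant).
    """
    r = (w / w0) ** 2
    penalty = lam * np.sum(r / (1.0 + r))
    grad = lam * (2.0 * w / w0 ** 2) / (1.0 + r) ** 2
    return penalty, grad

w = np.array([0.05, -0.5, 3.0])
penalty, grad = weight_elimination(w)
```

During training, `grad` is simply added to the error gradient of each weight, so the regulariser needs no change to the underlying gradient descent loop.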
227

Investigation of CO2 Tracer Gas-Based Calibration of Multi-Zone Airflow Models

January 2011 (has links)
abstract: The modeling and simulation of airflow dynamics in buildings has many applications, including indoor air quality and ventilation analysis, contaminant dispersion prediction, and the calculation of personal occupant exposure. Multi-zone airflow model software programs provide such capabilities in a manner that is practical for whole-building analysis. This research addresses the need for calibration methodologies to improve the prediction accuracy of multi-zone software programs. Of particular interest is accurate modeling of airflow dynamics in response to extraordinary events, e.g. chemical and biological attacks. This research developed and explored a candidate calibration methodology which utilizes tracer gas (e.g., CO2) data. A key concept behind this research was that calibration of airflow models is a highly over-parameterized problem and that some form of model reduction is imperative. Model reduction was achieved by proposing the concept of macro-zones, i.e. groups of rooms that can be combined into one zone for the purposes of predicting or studying dynamic airflow behavior under different types of stimuli. The proposed calibration methodology consists of five steps: (i) develop a "somewhat" realistic or partially calibrated multi-zone model of a building so that the subsequent steps yield meaningful results, (ii) perform an airflow-based sensitivity analysis to determine influential system drivers, (iii) perform a tracer gas-based sensitivity analysis to identify macro-zones for model reduction, (iv) release CO2 in the building, measure tracer gas concentrations in at least one room within each macro-zone (some replication in other rooms is highly desirable), and use these measurements to further calibrate the aggregate flow parameters of macro-zone flow elements so as to improve the model fit, and (v) evaluate the adequacy of the updated model based on some metric.
The proposed methodology was first evaluated with a synthetic building and subsequently refined using actual measured airflows and CO2 concentrations for a real building. The airflow dynamics of the buildings analyzed were found to be dominated by the HVAC system. In such buildings, rectifying differences between measured and predicted tracer gas behavior should focus on factors impacting room air change rates first and flow parameter assumptions between zones second. / Dissertation/Thesis / M.S. Built Environment 2011
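Step (iv) rests on the standard single-zone tracer-gas mass balance. The sketch below simulates a CO2 decay and recovers the air change rate from the log-linear slope, which is the basic relationship such a calibration exploits; the volume, flow and concentrations are invented for illustration.

```python
import numpy as np

# Single-zone tracer-gas mass balance: V * dC/dt = Q * (C_out - C)
# Decay solution: C(t) = C_out + (C0 - C_out) * exp(-(Q/V) * t)
V, Q = 150.0, 75.0            # zone volume (m^3) and airflow (m^3/h), assumed
C_out, C0 = 420.0, 1800.0     # outdoor and initial indoor CO2 (ppm), assumed
t = np.linspace(0.0, 4.0, 50) # hours

C = C_out + (C0 - C_out) * np.exp(-(Q / V) * t)   # "measured" decay curve

# Recover the air change rate from the decay (the calibration idea):
# log(C - C_out) is linear in t with slope -(Q/V).
ach = -np.polyfit(t, np.log(C - C_out), 1)[0]
```

Here `ach` recovers Q/V = 0.5 air changes per hour exactly; with real measurements the residual misfit between measured and predicted decay is what drives the further calibration of the macro-zone flow parameters.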
228

Robustness analysis of VEGA launcher model based on effective sampling strategy

Dong, Siyi January 2016 (has links)
An efficient robustness analysis of the VEGA launch vehicle is essential to minimise the risk of system failure during the ascent phase. The Monte Carlo sampling method is usually considered a reliable strategy in industry, provided the sample size is large enough. However, due to the large number of uncertainties and the long response time of a single simulation, exploring the entire uncertainty space sufficiently through Monte Carlo sampling is impractical for the VEGA launch vehicle. To make the robustness analysis more efficient when the number of simulations is limited, quasi-Monte Carlo methods (Sobol, Faure and Halton sequences) and a heuristic algorithm (Differential Evolution) are proposed. Nevertheless, the feasible number of simulation samples is still much smaller than the minimal number required for sufficient exploration. To further improve the efficiency of the robustness analysis, redundant uncertainties are sorted out by sensitivity analysis, and only the dominant uncertainties are retained. As all simulation samples are discrete, many regions of the uncertainty space are left unexplored with respect to the objective function by sampling or optimisation methods. To exploit this latent information, a meta-model trained by Gaussian Process regression is introduced. Based on the meta-model, the expected maximum objective value and the expected sensitivity of each uncertainty can be analysed with much higher efficiency and without much loss of accuracy.
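The contrast between plain Monte Carlo and quasi-Monte Carlo sampling can be sketched with SciPy's Sobol sequence generator. The integrand below is a trivial stand-in for one launcher simulation (its mean over the unit cube is 0.125); the point of the low-discrepancy sequence is that it covers the uncertainty space far more evenly than pseudo-random points at the same simulation budget.

```python
import numpy as np
from scipy.stats import qmc

f = lambda X: X.prod(axis=1)    # toy objective; E[f] = 0.5**3 = 0.125
n, d = 1024, 3                  # budget of "simulations", 3 uncertainties

# Plain Monte Carlo estimate of the mean response:
rng = np.random.default_rng(0)
mc_est = f(rng.random((n, d))).mean()

# Quasi-Monte Carlo with a Sobol low-discrepancy sequence (2**10 = 1024 points):
sobol = qmc.Sobol(d=d, scramble=False)
qmc_est = f(sobol.random_base2(m=10)).mean()
```

Both estimates converge to 0.125, but the Sobol estimate typically does so at close to O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo, which is the motivation for using such sequences when each sample is an expensive launcher simulation.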
229

Analyse de sensibilité et réduction de dimension. Application à l'océanographie / Sensitivity analysis and model reduction : application to oceanography

Janon, Alexandre 15 November 2012 (has links)
Les modèles mathématiques ont pour but de décrire le comportement d'un système. Bien souvent, cette description est imparfaite, notamment en raison des incertitudes sur les paramètres qui définissent le modèle. Dans le contexte de la modélisation des fluides géophysiques, ces paramètres peuvent être par exemple la géométrie du domaine, l'état initial, le forçage par le vent, ou les coefficients de frottement ou de viscosité. L'objet de l'analyse de sensibilité est de mesurer l'impact de l'incertitude attachée à chaque paramètre d'entrée sur la solution du modèle, et, plus particulièrement, d'identifier les paramètres (ou groupes de paramètres) « sensibles ». Parmi les différentes méthodes d'analyse de sensibilité, nous privilégierons la méthode reposant sur le calcul des indices de sensibilité de Sobol. Le calcul numérique de ces indices de Sobol nécessite l'obtention des solutions numériques du modèle pour un grand nombre d'instances des paramètres d'entrée. Cependant, dans de nombreux contextes, dont celui des modèles géophysiques, chaque lancement du modèle peut nécessiter un temps de calcul important, ce qui rend inenvisageable, ou tout au moins peu pratique, d'effectuer le nombre de lancements suffisant pour estimer les indices de Sobol avec la précision désirée. Ceci amène à remplacer le modèle initial par un métamodèle (aussi appelé surface de réponse ou modèle de substitution). Il s'agit d'un modèle approchant le modèle numérique de départ, qui nécessite un temps de calcul par lancement nettement diminué par rapport au modèle original. Cette thèse se centre sur l'utilisation d'un métamodèle dans le cadre du calcul des indices de Sobol, plus particulièrement sur la quantification de l'impact du remplacement du modèle par un métamodèle en termes d'erreur d'estimation des indices de Sobol. Nous nous intéressons également à une méthode de construction d'un métamodèle efficace et rigoureux pouvant être utilisé dans le contexte géophysique.
/ Mathematical models seldom represent perfectly the reality of the studied systems, due, for instance, to uncertainties in the parameters that define the system. In the context of geophysical fluid modelling, these parameters can be, e.g., the domain geometry, the initial state, the wind stress, or the friction or viscosity coefficients. Sensitivity analysis aims at measuring the impact of each input parameter's uncertainty on the model solution and, more specifically, at identifying the "sensitive" parameters (or groups of parameters). Amongst the sensitivity analysis methods, we focus on the Sobol indices method. The numerical computation of these indices requires numerical solutions of the model for a large number of instances of the input parameters. However, many models (such as typical geophysical fluid models) require a large amount of computational time just to perform one run. In these cases, it is impossible (or at least impractical) to perform the number of runs required to estimate the Sobol indices with the required precision. This leads to replacing the initial model by a metamodel (also called a response surface or surrogate model): a model that approximates the original model while having a significantly smaller time per run. This thesis focuses on the use of a metamodel to compute Sobol indices. More specifically, our main topic is the quantification of the impact of metamodelling in terms of Sobol index estimation error. We also consider a method of metamodel construction which leads to an efficient and rigorous metamodel that can be used in the geophysical context.
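The metamodel idea can be sketched in a few lines: fit a cheap polynomial response surface to a small number of runs of the "expensive" model, then use the surrogate for the many evaluations that Sobol index estimation requires. The model and polynomial basis below are illustrative, not the thesis's oceanographic setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(X):
    """Stand-in for an expensive geophysical model run."""
    return np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2

# Fit a cheap polynomial metamodel from a small budget of "expensive" runs.
Xtr = rng.uniform(-1.0, 1.0, (40, 2))

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2, x1 ** 3])

coef, *_ = np.linalg.lstsq(features(Xtr), model(Xtr), rcond=None)
surrogate = lambda X: features(X) @ coef

# Check the surrogate's worst-case error on many fresh points; in the thesis
# it is this kind of metamodel error that propagates into the Sobol estimates.
Xte = rng.uniform(-1.0, 1.0, (1000, 2))
sup_err = np.max(np.abs(surrogate(Xte) - model(Xte)))
```

Once the surrogate is accurate (here the sup-norm error is well below 0.05 on the unit square), the thousands of evaluations needed by a pick-freeze Sobol estimator cost essentially nothing, and the metamodel error bound translates into a bound on the index estimation error.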
230

A mathematical model for studying the impact of climate variability on malaria epidemics in South Africa

Abiodun, Gbenga Jacob January 2017 (has links)
Philosophiae Doctor - PhD / Malaria is most prevalent in tropical climates, where there is sufficient rainfall for mosquitoes to breed and temperatures are conducive for both the mosquito and the protozoan to live. A slight change in temperature can drastically affect the lifespan and patterns of mosquitoes; moreover, the protozoan itself can only survive in a certain temperature range. At higher temperatures mosquitoes mature faster and thus have more time to spread the disease, and the malaria parasite also matures more quickly. However, if temperatures become too high, neither the mosquito nor the malaria pathogen can survive. In addition, stagnant water is a major contributor to the spread of malaria, since most mosquito species breed in small pools of water. The right amount and distribution of rainfall increases the possible breeding sites for mosquito larvae, which eventually results in more vectors to spread the disease; with little rainfall, there are few places for mosquitoes to breed. For these reasons, and in order to control the mosquito population, it is important to examine weather parameters such as temperature and rainfall, which are decisive in determining disease epidemics. Accurate seasonal climate forecasts of these variables, together with malaria models, should be able to drive an early warning system in endemic regions. Such models can also be used to evaluate possible changes in affected regions under climate change scenarios, and the spread of malaria to new regions. In this study, we develop and analyse a mosquito model to study the population dynamics of mosquitoes. Ignoring the impact of climate, the model is further developed by introducing human compartments. We perform both analytical and numerical analyses on the two models and verify that both are epidemiologically and mathematically well-posed.
Using the next-generation matrix method, the basic reproduction number R0 of each system is calculated. Results from both analyses confirm that the mosquito-free and disease-free equilibria are locally asymptotically stable whenever R0 < 1 and unstable whenever R0 > 1. We further establish the global stability of the mosquito-free equilibrium using a Lyapunov function. To examine the effectiveness of control measures, we calculate the sensitivity coefficients of the reproduction number of the mosquito-human malaria model and highlight the importance of the mosquito biting rate for malaria transmission. In addition, we introduce climate-dependent parameters of Anopheles gambiae and climate data for Limpopo province into the malaria model to study malaria transmission over the province. Climate variables and puddle dynamics are further incorporated into the mosquito model to study the dynamics of Anopheles arabiensis. The climate-dependent functions are derived from the laboratory experiments in the study of Maharaj [114], and we verify the sensitivity of the model to its parameters through sensitivity analysis. Climate data for Dondotha village in KwaZulu-Natal province are run through the mosquito model to simulate the impact of climate variables on the population dynamics of Anopheles arabiensis over the village. Furthermore, we incorporate human compartments into the climate-based mosquito model to explore the impact of climate variability on malaria incidence over KwaZulu-Natal province over the period 1970-2005. The outputs of the climate-based mosquito-human malaria model are further analysed with Principal Component Analysis (PCA), the Wavelet Power Spectrum (WPS) and Wavelet Cross-coherence Analysis (WCA) to investigate the relationship between the climate variables and malaria transmission over the province.
The results from the mosquito model fairly accurately quantify the seasonality of the population of Anopheles arabiensis over the study region and demonstrate the influence of climatic factors on the vector population dynamics. The model simulates the population dynamics of both immature and adult Anopheles arabiensis and increases our understanding of the importance of mosquito biology in malaria models. The simulated larval density also produces a curve similar to observed data obtained from another study. In addition, the mosquito-malaria models produce reasonable fits with the observed data over Limpopo and KwaZulu-Natal provinces; in particular, they capture all the spikes in malaria prevalence. Our results further highlight the importance of climate factors for malaria transmission and show the seasonality of malaria epidemics over the provinces. The results of the PCA on the model outputs suggest that there are two major processes in the model simulation: one shows high loadings on the Susceptible, Exposed and Infected human populations, while the other is more correlated with the Susceptible and Recovered humans. Both processes reveal the inverse correlations between Susceptible and Infected, and between Susceptible and Recovered, humans respectively. Through spectrum analysis we note a strong annual cycle of malaria incidence over the province and ascertain a dominant periodicity of one year. Consequently, our findings indicate that lags of 0 to 120 days are generally noted over the study period, with the 120-day lag more associated with temperature than with rainfall. The findings of this study would be useful in an early warning system or for forecasting malaria transmission over the study areas.
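The sensitivity result about the biting rate can be illustrated with a classical Ross-Macdonald-type R0, a simplified stand-in for the thesis's next-generation-matrix expression (the parameter values here are invented). Because the biting rate enters squared, its normalised sensitivity index (elasticity) is 2: a 1% increase in biting rate raises R0 by roughly 2%, more than any other parameter below.

```python
import numpy as np

# Illustrative Ross-Macdonald-type basic reproduction number:
# R0 = m * a**2 * b * c * exp(-mu * tau) / (r * mu)
# m: mosquitoes per human, a: biting rate, b, c: transmission probabilities,
# mu: mosquito death rate, tau: extrinsic incubation period, r: human recovery rate
p = dict(m=10.0, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10.0, r=0.05)

def R0(p):
    return (p["m"] * p["a"] ** 2 * p["b"] * p["c"]
            * np.exp(-p["mu"] * p["tau"]) / (p["r"] * p["mu"]))

def elasticity(p, key, h=1e-6):
    """Normalised sensitivity index: (dR0/dtheta) * (theta / R0)."""
    q = dict(p)
    q[key] *= 1.0 + h
    return (R0(q) - R0(p)) / (h * R0(p))

sens = {k: elasticity(p, k) for k in p}   # biting rate a has elasticity ~2
```

The same finite-difference elasticities can be read straight off the formula (1 for m, b, c; -1 for r and tau here; -(1 + mu*tau) for mu), which is why interventions acting on the biting rate, such as bed nets, are singled out in the analysis.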
