1

RAPID 3D TRACING OF THE MOUSE BRAIN NEUROVASCULATURE WITH LOCAL MAXIMUM INTENSITY PROJECTION AND MOVING WINDOWS

Han, Dong Hyeop 2009 August 1900 (has links)
Neurovascular models have played an important role in understanding neuronal function and medical conditions. In the past few decades, only small volumes of neurovascular data have been available. However, huge data sets are becoming available with high-throughput instruments like the Knife-Edge Scanning Microscope (KESM). Therefore, fast and robust tracing methods become necessary for tracing such large data sets. However, most tracing methods are not effective in handling complex structures such as branches. Some methods can solve this issue, but they are not computationally efficient (i.e., slow). Motivated by the issues of speed and robustness, I introduce an effective and efficient fiber tracing algorithm for 2D and 3D data. In 2D tracing, I have implemented a Moving Window (MW) method which leads to a mathematical simplification and noise robustness in determining the trace direction. Moreover, it provides enhanced handling of branch points. During tracing, a Cubic Tangential Trace Spline (CTTS) is used as an accurate and fast nonlinear interpolation approach. For 3D tracing, I have designed a method based on local maximum intensity projection (MIP). MIP can utilize any existing 2D tracing algorithm for use in 3D tracing. It can also significantly reduce the search space. However, most neurovascular data are too complex to use MIP on directly at a large scale. Therefore, we use MIP within a limited cube to get unambiguous projections, and repeat the MIP-based approach over the entire data set. For processing large amounts of data, we have to automate the tracing algorithms. Since the automated algorithms may not be 100 percent correct, validation is needed. I validated my approach by comparing the traced results to human-labeled ground truth, showing that the result of my approach is very similar to the ground truth. However, this validation is limited to small-scale real-world data due to the limitations of manual labeling. Therefore, for large-scale data, I validated my approach using a model-based generator. The result suggests that my approach can also be used for large-scale real-world data. The main contributions of this research are as follows. My 2D tracing algorithm is fast enough to analyze large volumes of biological data, with processing time linear in fiber length, and it handles branches well. The new local MIP approach for 3D tracing provides significant performance improvement and allows the reuse of any existing 2D tracing method. The model-based generator enables tracing algorithms to be validated for large-scale real-world data. My approach is widely applicable for rapid and accurate tracing of large amounts of biomedical data.
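As a rough illustration of the local MIP step described above, a minimal sketch in NumPy might look like the following. The cube half-size, axis choice, and the placeholder hand-off to a 2D tracer are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def local_mip(volume, center, half_size=16, axis=2):
    """Maximum intensity projection of a small cube around `center`.

    volume : 3-D ndarray of intensities (z, y, x)
    center : (z, y, x) voxel index of the current trace point
    half_size : half the cube edge length (illustrative value)
    axis : axis along which to project
    Returns the 2-D projection and the cube bounds that were used.
    """
    bounds, slices = [], []
    for dim, c in enumerate(center):
        lo = max(0, c - half_size)
        hi = min(volume.shape[dim], c + half_size)
        bounds.append((lo, hi))
        slices.append(slice(lo, hi))
    cube = volume[tuple(slices)]
    return cube.max(axis=axis), bounds

# Illustrative use on a synthetic volume: project a small cube around the
# current trace point.  A full pipeline would hand `mip` to the 2-D
# moving-window tracer, lift the traced 2-D points back into 3-D using the
# projection axis, then step `center` along the fiber and repeat.
rng = np.random.default_rng(0)
volume = rng.random((128, 128, 128)).astype(np.float32)
mip, bounds = local_mip(volume, center=(64, 64, 64), half_size=16)
print(mip.shape, bounds)
```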
2

Evaluation of a statistical infill candidate selection technique

Guan, Linhua 30 September 2004 (has links)
Quantifying the drilling or recompletion potential in producing gas basins is often a challenging problem, because of large variability in rock quality, well spacing, and well completion practices, and the large number of wells involved. Complete integrated reservoir studies to determine infill potential are often too time-consuming and costly for many producing gas basins. In this work we evaluate the accuracy of a statistical moving-window technique that has been used in tight-gas formations to assess infill and recompletion potential. The primary advantages of the technique are its speed and its reliance upon well location and production data only. We used the statistical method to analyze simulated low-permeability, 100-well production data sets, then compared the moving-window infill-well predictions to those from reservoir simulation. Results indicate that moving-window infill predictions for individual wells can be off by more than 50%; however, the technique accurately predicts the combined infill-production estimate from a group of infill candidates, often to within 10%. We found that the accuracy of predicted infill performance decreases as heterogeneity increases and increases as the number of wells in the project increases. The cases evaluated in this study included real-world well spacing and production rates and a significant amount of depletion at the infill locations. Because of its speed, accuracy and reliance upon readily available data, the moving window technique can be a useful screening tool for large infill development projects.
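The abstract does not spell out which statistics the moving-window technique computes, so the following is only a generic, hedged illustration of the idea — estimating an infill candidate's potential from the production of existing wells that fall inside a spatial window around it. The function name, the square window, its half-width, and the simple averaging rule are all assumptions, not the specific technique evaluated in the thesis.

```python
import numpy as np

def moving_window_infill_estimate(well_xy, well_cum_prod, candidate_xy,
                                  window_half_width=1320.0):
    """Estimate infill potential at candidate locations from nearby wells.

    well_xy : (n_wells, 2) coordinates of existing wells, ft
    well_cum_prod : (n_wells,) cumulative production of existing wells
    candidate_xy : (n_cand, 2) candidate infill locations
    window_half_width : half-width of the square moving window, ft (assumed)

    Returns per-candidate estimates (mean production of wells inside the
    window) and the group total, which the study found to be the more
    reliable figure than any individual-well prediction.
    """
    estimates = np.full(len(candidate_xy), np.nan)
    for i, (cx, cy) in enumerate(candidate_xy):
        in_window = (np.abs(well_xy[:, 0] - cx) <= window_half_width) & \
                    (np.abs(well_xy[:, 1] - cy) <= window_half_width)
        if in_window.any():
            estimates[i] = well_cum_prod[in_window].mean()
    return estimates, np.nansum(estimates)
```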
3

Archaeomagnetic Secular Variation in the UK During the Past 4000 Years and its Application to Archaeomagnetic Dating

Batt, Catherine M., Lanos, P.H., Tarling, D.H., Zananiri, I., Lindford, P. 18 June 2009 (has links)
No
4

Application de la validation de données dynamiques au suivi de performance d'un procédé

Ullrich, Christophe 17 October 2010 (has links)
The quality of the measurements used to monitor the evolution of chemical or petrochemical processes can significantly affect how those processes are operated. Unfortunately, every measurement is corrupted by error. Errors in the measured data can lead to significant drift in process operation, which can harm process safety or yield. Data validation is a very important task because it transforms the set of available data into a coherent set of values defining the state of the process. Data validation makes it possible to correct the measurements, to estimate the values of unmeasured variables, and to compute a posteriori uncertainties for all variables. At the industrial scale it is routinely applied to continuously operating processes represented by steady-state models. However, for monitoring transient phenomena, steady-state validation algorithms are no longer effective. The subject of this thesis is the application of dynamic data validation to the performance monitoring of chemical processes. The dynamic data validation algorithm developed in this thesis is based on the simultaneous solution of the optimization problem and the model equations. The differential equations are discretized by a weighted-residuals method, namely orthogonal collocation. The use of moving time windows keeps the problem at a reasonable size. The optimization algorithm used is an interior-point Successive Quadratic Programming algorithm. The dynamic data validation algorithm developed here reduced the uncertainty of the estimates. The examples studied are presented from the simplest to the most complex. The first models studied are interconnected storage tanks; this type of model consists only of mass balances. The models of the following examples, chemical reactors, consist of mass and heat balances. The last model studied is a vapor-liquid separation drum, consisting of mass and heat balances coupled to vapor-liquid equilibrium. The evaluation of the sensitivity matrix and the computation of a posteriori variances have been extended to processes represented by dynamic models, and their application is illustrated by several examples. Changing the validation window parameters affects the redundancy within the window and therefore the a posteriori variance reduction factor; the developments proposed in this work thus offer a rational criterion for choosing the window size in dynamic data validation applications. Integrating alternative estimators into the algorithm increases its robustness, since these estimators yield unbiased estimates in the presence of gross errors in the measurements.

Organization of the thesis: The thesis begins with an introductory chapter presenting the problem, the research objectives, and the outline of the work. The first part of the thesis is devoted to the state of the art and to the theoretical development of a dynamic data validation method. It is organized as follows. The first chapter is devoted to steady-state data validation. It begins by showing the role played by data validation in process control; the different types of measurement errors and of redundancy are then presented; different methods for solving linear and nonlinear steady-state problems are also described; and the chapter ends with the description of a method for computing a posteriori variances. In the second chapter, two categories of dynamic data validation methods are presented: filtering methods and nonlinear programming methods. For each type of method, the main formulations found in the literature are presented with their main advantages and drawbacks. The third chapter is devoted to the theoretical development of the dynamic data validation algorithm designed in this thesis, together with the strategic choices that were made. The chosen algorithm is based on a formulation of the optimization problem comprising a system of differential-algebraic equations. The differential equations are discretized by means of an orthogonal collocation method using Lagrange interpolation polynomials. Different ways of representing the input variables are discussed. To reduce the computational cost and keep the optimization problem tractable, the moving time window method is used. An interior-point Successive Quadratic Programming algorithm solves the discretized differential equations and the model equations simultaneously. The analytical derivatives of the objective function gradient and of the constraint Jacobian are also presented in this chapter, and a quality criterion for comparing the different variants of the algorithm is proposed. The first part ends with the development of an original algorithm for computing a posteriori variances. The method developed in this chapter is similar to the one described in the first chapter for steady-state processes, and it is derived for the two representations of the input variables discussed in Chapter 3. The chapter closes by applying this a posteriori variance computation, analytically, to a small example consisting of a single differential equation and a single link equation.

The second part of the thesis applies the dynamic data validation algorithm developed in the first part to several case studies. For each example, the influence of the algorithm parameters on robustness, ease of convergence, and the reduction of the uncertainty of the estimates is examined. The ability of the algorithm to reduce the uncertainty of the estimates is assessed by means of the error reduction rate and the variance reduction factor. The first chapter of this second part studies one or more storage tanks with variable level, with or without fluid recycle; this first case involves only mass balances. Chapter 6 examines a stirred-tank reactor with heat exchange, so the example consists of mass and energy balances. The study of a flash drum in Chapter 7 brings vapor-liquid equilibrium into play. Chapter 8 is devoted to robust estimators, whose performance is compared on the examples studied in Chapters 5 and 6. The thesis ends with a chapter presenting the conclusions and some future perspectives.
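A minimal sketch of the moving-window reconciliation idea, for the simplest case above (a single storage tank with only a mass balance), is given below. It uses a backward-Euler discretization and SciPy's SLSQP solver instead of the orthogonal-collocation / interior-point SQP machinery developed in the thesis, and all symbols, measurement variances, and tank dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def reconcile_window(h_meas, qin_meas, qout_meas, dt, area,
                     sigma=(0.02, 0.5, 0.5)):
    """Reconcile one window of tank measurements against a mass balance.

    h_meas, qin_meas, qout_meas : measured level [m] and flows [m^3/h]
    dt : sampling period [h];  area : tank cross-section [m^2]
    sigma : assumed measurement standard deviations (level, qin, qout)
    Returns reconciled level and flow trajectories for the window.
    """
    n = len(h_meas)
    x0 = np.concatenate([h_meas, qin_meas, qout_meas])

    def unpack(x):
        return x[:n], x[n:2 * n], x[2 * n:]

    def objective(x):  # weighted least-squares deviation from the measurements
        h, qi, qo = unpack(x)
        return (np.sum(((h - h_meas) / sigma[0]) ** 2)
                + np.sum(((qi - qin_meas) / sigma[1]) ** 2)
                + np.sum(((qo - qout_meas) / sigma[2]) ** 2))

    def balance(x):  # backward-Euler mass balance residuals, one per step
        h, qi, qo = unpack(x)
        return area * (h[1:] - h[:-1]) / dt - (qi[1:] - qo[1:])

    res = minimize(objective, x0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": balance}])
    return unpack(res.x)

# Illustrative call on synthetic measurements for a 2 m^2 tank.  Sliding the
# window means dropping the oldest sample, appending the newest measurements,
# and re-solving, warm-starting from the previous solution.
rng = np.random.default_rng(0)
t = np.arange(10) * 0.1
h = 1.0 + 0.5 * t + rng.normal(0, 0.02, 10)
qi = np.full(10, 2.0) + rng.normal(0, 0.5, 10)
qo = np.full(10, 1.0) + rng.normal(0, 0.5, 10)
h_rec, qi_rec, qo_rec = reconcile_window(h, qi, qo, dt=0.1, area=2.0)
```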
5

Atomistic Studies of Shock-Wave and Detonation Phenomena in Energetic Materials

Budzevich, Mikalai 01 January 2011 (has links)
The major goal of this PhD project is to investigate the fundamental properties of energetic materials, including their atomic and electronic structures, as well as mechanical properties, and relate these to the fundamental mechanisms of shock wave and detonation propagation using state-of-the-art simulation methods. The first part of this PhD project was aimed at the investigation of static properties of energetic materials (EMs) with specific focus on 1,3,5-triamino-2,4,6-trinitrobenzene (TATB). The major goal was to calculate the isotropic and anisotropic equations of state for TATB within a range of compressions not accessible to experiment, and to make predictions of anisotropic sensitivity along various crystallographic directions. The second part of this PhD project was devoted to applications of a novel atomic-scale simulation method, referred to as the moving window molecular dynamics (MW-MD) technique, to study the fundamental mechanisms of condensed-phase detonation. Because a shock wave is the leading part of a detonation wave, MW-MD was first applied to demonstrate its effectiveness in resolving fast non-equilibrium processes taking place behind the shock-wave front during shock-induced solid-liquid phase transitions in crystalline aluminum. Next, MW-MD was used to investigate the fundamental mechanisms of detonation propagation in condensed energetic materials. Due to the chemical complexity of real EMs, a simplified AB model of a prototypical energetic material was used. The AB interatomic potential, which describes chemical bonds, as well as chemical reactions between atoms A and B in an AB solid, was modified to investigate the mechanism of detonation wave propagation with different reactive activation barriers. The speed of the shock or detonation wave, which is an input parameter of MW-MD, was determined by locating the Chapman-Jouguet point along the reactive Hugoniot, which was simulated using the constant number of particles, volume, and temperature (NVT) ensemble in MD. Finally, the detonation wave structure was investigated as a function of the activation barrier for the chemical reaction AB+B ⇒ A+BB. Different regimes of detonation propagation, including 1-D laminar, 2-D cellular, and 3-D spinning and turbulent detonation regimes, were identified.
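To make the window bookkeeping concrete, the following is a schematic sketch of the moving-window idea only: the window origin advances at the prescribed wave speed (the MW-MD input parameter mentioned above), and only atoms inside the advanced window are retained. The array layout, the window parameters, and the treatment of material entering or leaving the window are assumptions for illustration, not the actual MW-MD implementation.

```python
import numpy as np

def advance_window(positions, window_origin, wave_speed, dt, window_length):
    """Advance the moving analysis window with the shock/detonation front.

    positions : (n, 3) atom coordinates, wave travelling along +x
    window_origin : x coordinate of the window's trailing edge
    wave_speed : prescribed wave speed (the MW-MD input parameter)
    dt : MD time step;  window_length : window extent along x

    Returns a mask selecting the atoms that remain inside the advanced
    window, plus the new origin.  A real MW-MD code would also delete the
    fully reacted material left behind and insert fresh, unshocked crystal
    ahead of the front so that the wave stays centred in the simulation cell.
    """
    new_origin = window_origin + wave_speed * dt
    inside = (positions[:, 0] >= new_origin) & \
             (positions[:, 0] < new_origin + window_length)
    return inside, new_origin

# Illustrative use: atoms along a line, a 100 Å window, wave at ~60 Å/ps.
atoms = np.column_stack([np.linspace(0, 300, 1200),
                         np.zeros(1200), np.zeros(1200)])
mask, origin = advance_window(atoms, window_origin=0.0, wave_speed=60.0,
                              dt=0.1, window_length=100.0)
print(mask.sum(), origin)
```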
6

GLR Control Charts for Monitoring a Proportion

Huang, Wandi 19 December 2011 (has links)
The generalized likelihood ratio (GLR) control charts are studied for monitoring a process proportion of defective or nonconforming items. The type of process change considered is an abrupt sustained increase in the process proportion, which implies deterioration of the process quality. The objective is to effectively detect a wide range of shift sizes. For the first part of this research, we assume samples are collected using rational subgrouping with sample size n>1, and the binomial GLR statistic is constructed based on a moving window of past sample statistics that follow a binomial distribution. Steady state performance is evaluated for the binomial GLR chart and the other widely used binomial charts. We find that in terms of the overall performance, the binomial GLR chart is at least as good as the other charts. In addition, since it has only two charting parameters that both can be easily obtained based on the approach we propose, less effort is required to design the binomial GLR chart for practical applications. The second part of this research develops a Bernoulli GLR chart to monitor processes based on the continuous inspection, in which case samples of size n=1 are observed. A constant upper bound is imposed on the estimate of the process shift, preventing the corresponding Bernoulli GLR statistic from being undefined. Performance comparisons between the Bernoulli GLR chart and the other charts show that the Bernoulli GLR chart has better overall performance than its competitors, especially for detecting small shifts. / Ph. D.
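A minimal sketch of a window-based, one-sided GLR statistic for an increase in a binomial proportion is shown below. The in-control value p0, the subgroup size, the window length, and the control limit are illustrative, and refinements discussed in the thesis (such as the design of the charting parameters and the upper bound on the estimated shift in the Bernoulli case) are not reproduced.

```python
import numpy as np

def binomial_glr(window_counts, n, p0):
    """One-sided GLR statistic for an increase in a binomial proportion.

    window_counts : defective counts X_1..X_m in the moving window (oldest first)
    n : subgroup sample size
    p0 : in-control proportion
    Returns the maximum, over candidate change points in the window, of the
    log likelihood ratio of the post-change segment.
    """
    x = np.asarray(window_counts, dtype=float)
    m = len(x)
    glr = 0.0
    for tau in range(m):                      # change assumed after sample tau
        seg = x[tau:]
        k = len(seg)
        p_hat = max(seg.sum() / (n * k), p0)  # one-sided: only shifts above p0
        p_hat = min(p_hat, 1.0 - 1e-12)       # guard the log terms
        llr = (seg.sum() * np.log(p_hat / p0)
               + (n * k - seg.sum()) * np.log((1 - p_hat) / (1 - p0)))
        glr = max(glr, llr)
    return glr

# Illustrative use: p0 = 0.01, subgroups of n = 100, a window of the last
# 20 sample statistics, and a signal when the statistic exceeds a control
# limit h chosen (e.g. by simulation) to give the desired in-control ARL.
rng = np.random.default_rng(1)
counts = rng.binomial(100, 0.01, size=15).tolist() + \
         rng.binomial(100, 0.03, size=5).tolist()   # shift in the last 5 samples
print(round(binomial_glr(counts[-20:], n=100, p0=0.01), 2))
```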
7

Detection and Diagnosis of Stator and Rotor Electrical Faults for Three-Phase Induction Motor via Wavelet Energy Approach

Hussein, A.M., Obed, A.A., Zubo, R.H.A., Al-Yasir, Yasir I.A., Saleh, A.L., Fadhel, H., Sheikh-Akbari, A., Mokryani, Geev, Abd-Alhameed, Raed 08 April 2022 (has links)
Yes / This paper presents a fault detection method in three-phase induction motors using the Wavelet Packet Transform (WPT). The proposed algorithm takes a frame of samples from the three-phase supply current of an induction motor. The three phase current samples are then combined to generate a single current signal by computing the Root Mean Square (RMS) value of the three phase current samples at each time stamp. The resulting current samples are then divided into windows of 64 samples. Each resulting window of samples is then processed separately. The proposed algorithm uses two methods to create window samples, which are called non-overlapping window samples and moving/overlapping window samples. Non-overlapping window samples are created by simply dividing the current samples into windows of 64 samples, while the moving window samples are generated by taking the first 64 current samples; each subsequent moving window is then generated by moving the window across the current samples by one sample at a time. The new window of samples consists of the last 63 samples of the previous window and one new sample. The overlapping method reduces the fault detection time to single-sample accuracy. However, it is computationally more expensive than the non-overlapping method and requires more computer memory. The resulting window samples are separately processed as follows: the proposed algorithm performs a two-level WPT on each window of samples, dividing its coefficients into four wavelet subbands. Information in the high-frequency wavelet subbands is then used for fault detection and for activating the trip signal to disconnect the motor from the power supply. The proposed algorithm was first implemented in the MATLAB platform, and the Entropy power Energy (EE) of the high-frequency WPT subband coefficients was used to determine the condition of the motor. If the induction motor is faulty, the algorithm proceeds to identify the type of the fault. An empirical setup of the proposed system was then implemented, and the proposed algorithm was tested under real conditions, where different faults were practically induced in the induction motor. Experimental results confirmed the effectiveness of the proposed technique. To generalize the proposed method, the experiment was repeated on different types of induction motors with different working ages and with different power ratings. Experimental results show that the capability of the proposed method is independent of the type of motor used and its age.
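A minimal sketch of the front end described above — combining the three phase currents into one RMS signal, generating overlapping 64-sample windows, and computing the energy of the level-2 wavelet-packet subbands — is given below, in Python with PyWavelets rather than the paper's MATLAB implementation. The wavelet family ('db4'), the sampling rate, and the plain energy measure are assumptions; the paper's Entropy power Energy (EE) criterion and the fault-classification step are not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

def combine_phases_rms(ia, ib, ic):
    """Collapse three phase currents into one signal: per-sample RMS."""
    return np.sqrt((ia**2 + ib**2 + ic**2) / 3.0)

def overlapping_windows(signal, size=64):
    """Moving windows: each new window drops the oldest sample, adds one new."""
    return [signal[k:k + size] for k in range(len(signal) - size + 1)]

def wpt_subband_energies(window, wavelet="db4", level=2):
    """Energy of each wavelet-packet subband after a two-level decomposition."""
    wp = pywt.WaveletPacket(data=window, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    return {node.path: float(np.sum(node.data**2))
            for node in wp.get_level(level, order="natural")}

# Illustrative use on a synthetic 50 Hz supply sampled at 3.2 kHz.
t = np.arange(0, 0.2, 1 / 3200.0)
ia = np.sin(2 * np.pi * 50 * t)
ib = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
rms = combine_phases_rms(ia, ib, ic)
for w in overlapping_windows(rms, 64)[:1]:
    print(wpt_subband_energies(w))   # high-frequency subband energy flags a fault
```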
8

Analysis and Evaluation of Social Network Anomaly Detection

Zhao, Meng John 27 October 2017 (has links)
As social networks become more prevalent, there is significant interest in studying these network data, the focus often being on detecting anomalous events. This area of research is referred to as social network surveillance or social network change detection. While there are a variety of proposed methods suitable for different monitoring situations, two important issues have yet to be completely addressed in the network surveillance literature: first, performance assessment using simulated data to evaluate the statistical performance of a particular method; second, the study of aggregated data in social network surveillance. The research presented tackles these issues in two parts: an evaluation of a popular anomaly detection method, and an investigation of the effects of different aggregation levels on network anomaly detection. / Ph. D. / Social networks are increasingly becoming a part of our normal lives. These networks contain a wealth of information that can be immensely useful in a variety of areas, from targeting a specific audience for advertisement, to apprehending criminals, to detecting terrorist activities. The research presented focuses on evaluating popular methods for monitoring these social networks and on the potential information loss one might encounter when only limited information can be collected over a specific time period. We also present recommendations on social network monitoring that are applicable to a wide range of scenarios.
9

Energy Storage System Requirements For Shipboard Power Systems Supplying Pulsed Power Loads

Duvoor, Prashanth 15 December 2007 (has links)
Energy storage systems will likely be needed for future shipboard power systems that supply loads with high power variability such as pulsed power loads. The power generation in shipboard power systems may not be sufficient to satisfy the energy demands of the pulsed power load systems operating in conjunction with other ship service loads. Two fundamental items in evaluating the requirements of an energy storage system are the energy storage capacity and the ratings of the power conversion equipment that interfaces the energy device to the power system. The supply current of pulsed power load systems is aperiodic and cannot be described in terms of active power. Also, the RMS value and thus apparent power are only defined for periodic quantities. Therefore traditional methods of rating power equipment cannot be used. This thesis describes an approach to determine the ratings of an energy storage interface and the energy storage capacity of an energy storage device as a function of load and supply parameters. The results obtained using the proposed approach are validated with the results obtained from the simulation model of the generator supplying a pulsed power load in conjunction with an energy storage system. The energy storage system requirements for various pulsed power load profiles are obtained using the proposed approach. The method used for determining the ratings of an energy storage system utilizes an orthogonal decomposition of pulsed power load system supply current evaluated within a sliding window. The signals obtained from the decomposition are also useful in generating the control reference signals for the energy storage interface. Although the approach and methods are focused on a particular structure of the pulsed power load system, they may be generalized for use in any type of configuration of a pulsed power load system.
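The abstract does not give the decomposition itself, so the following is only a rough illustration of the sliding-window idea: split the load current, within each window, into a slow window-average component (carried by the generator) and the zero-mean remainder (buffered by the energy storage), from which rough current and energy figures can be read off. The window length, sampling rate, pulse profile, and the simple mean/residual split are assumptions, not the orthogonal decomposition developed in the thesis.

```python
import numpy as np

def sliding_window_split(i_load, window):
    """Split a pulsed-load current into a slow component and a remainder.

    i_load : sampled load current
    window : sliding-window length in samples (assumed)

    The slow component is the centred moving average over the window; the
    remainder is what an energy storage system would have to source or sink.
    """
    kernel = np.ones(window) / window
    i_slow = np.convolve(i_load, kernel, mode="same")   # generator share
    i_fast = i_load - i_slow                             # storage share
    return i_slow, i_fast

# Rough sizing from the split (illustrative only): the peak of i_fast hints
# at the interface current rating; the swing of its running integral (times
# bus voltage) bounds the energy the storage device must absorb or deliver.
fs = 1000.0                                        # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)
i_load = np.where((t % 2.0) < 0.2, 1000.0, 50.0)   # 0.2 s pulses every 2 s
i_slow, i_fast = sliding_window_split(i_load, window=500)
charge_swing = np.cumsum(i_fast) / fs              # ampere-seconds at the bus
print(i_fast.max(), charge_swing.max() - charge_swing.min())
```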
