81

Modeling of Diesel HCCI combustion and its impact on pollutant emissions applied to global engine system simulation / Modélisation de la combustion diesel HCCI et de son impact sur la formation de polluants appliquée à la simulation système

Dulbecco, Alessio 02 February 2010 (has links)
Increasingly stringent restrictions on the pollutant emissions of Internal Combustion Engines (ICEs) constitute a major challenge for the automotive industry. New combustion strategies such as Homogeneous Charge Compression Ignition (HCCI) and the implementation of complex injection strategies are promising solutions for meeting the imposed emission standards, as they permit low NOx and soot emissions via lean, highly diluted combustion and thus low combustion temperatures. This requires numerical tools adapted to these new challenges. This Ph.D. thesis presents the development of a new 0D Diesel HCCI combustion model: the dual Combustion Model (dual-CM). The dual-CM is based on the PCM-FPI approach used in 3D CFD, which predicts the auto-ignition and heat-release characteristics of all Diesel combustion modes. Adapting the PCM-FPI approach to a 0D formalism requires a good description of the in-cylinder mixture. Consequently, models for liquid-fuel evaporation, mixing-zone formation and mixture-fraction variance are proposed; they give a detailed description of the local thermochemical properties of the mixture, even in configurations adopting multiple-injection strategies. The results of the 0D model are first compared to the 3D CFD results. The dual-CM is then validated against a large experimental database; given the good agreement with the experiments and the low CPU cost, the presented approach is promising for global engine system simulation. Finally, the limits of the hypotheses made in the dual-CM are investigated and perspectives for future developments are proposed.
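The presumed-PDF tabulation idea behind PCM-FPI can be illustrated with a minimal sketch: a beta-PDF of the mixture fraction, parameterized by its mean and variance, is used to average a tabulated quantity. The grid, table and moment values below are hypothetical placeholders, not taken from the dual-CM.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_pdf_average(z_mean, z_var, table, z_grid):
    """Average a tabulated quantity over a presumed beta-PDF of mixture fraction Z."""
    # beta shape parameters from the first two moments of Z
    # (valid when 0 < z_var < z_mean * (1 - z_mean))
    g = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * g, (1.0 - z_mean) * g
    w = beta_dist.pdf(z_grid, a, b)
    # uniform grid: the spacing cancels in the ratio
    return np.sum(w * table) / np.sum(w)
```

As the variance shrinks, the PDF collapses to a delta at the mean, so the average tends to the table value at z_mean.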
82

Caractérisation géométrique et morphométrique 3-D par analyse d'image 2-D de distributions dynamiques de particules convexes anisotropes. Application aux processus de cristallisation. / 3-D geometrical and morphometrical characterization from 2-D images of dynamic distributions of anisotropic convex particles. Application to crystallization processes.

Presles, Benoît 09 December 2011 (has links)
Solution crystallization processes are widely used in the process industry as separation and purification operations and are expected to produce solids with desirable properties. Size and shape are known to have a considerable impact on the final quality of the products, so it is of major importance to be able to determine the size distribution (CSD) of the crystals in formation. Using an in situ camera, it is possible to visualize in real time the 2D projections of the 3D particles in the suspension. The projection of a 3D object onto a 2D plane necessarily involves a loss of information, so determining the size and shape of a 3D object from its 2D projections is not straightforward. This is the main goal of this work: to characterize 3D objects geometrically and morphometrically from their 2D projections. First, a method based on maximum-likelihood estimation of the probability density functions of projected geometrical measurements has been developed to estimate the size of 3D convex objects. Then, a stereological shape descriptor based on shape diagrams has been proposed; it characterizes the shape of a 3D convex object independently of its size and has notably been used to estimate the anisotropy factors of the 3D convex objects considered. Finally, a combination of the two previous studies has made it possible to estimate both the size and the shape of the 3D convex objects. This method has been validated on simulated data, compared to a method from the literature, and used to estimate size distributions of ammonium oxalate particles crystallizing in water, which have been compared to other CSD methods.
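The maximum-likelihood step described above can be sketched in a simplified form: fitting a lognormal size distribution to synthetic "projected size" data by minimizing the negative log-likelihood. The lognormal choice and all numerical values are illustrative assumptions, not the thesis's actual projection densities.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_sigma, true_median = 0.3, 50.0
# hypothetical projected size measurements (lognormal)
data = true_median * np.exp(true_sigma * rng.standard_normal(5000))

def nll(params):
    """Negative log-likelihood of a lognormal sample."""
    sigma, median = params
    if sigma <= 0 or median <= 0:
        return np.inf
    z = (np.log(data) - np.log(median)) / sigma
    return np.sum(np.log(sigma) + np.log(data) + 0.5 * z**2)

res = minimize(nll, x0=[0.5, 30.0], method="Nelder-Mead")
sigma_hat, median_hat = res.x
```

With 5000 samples the fitted parameters land close to the generating values.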
83

Long-Term Ambient Noise Statistics in the Gulf of Mexico

Snyder, Mark Alan 15 December 2007 (has links)
Long-term omnidirectional ambient noise was collected at several sites in the Gulf of Mexico during 2004 and 2005. The Naval Oceanographic Office deployed bottom-moored Environmental Acoustic Recording System (EARS) buoys approximately 159 nautical miles south of Panama City, Florida, in water depths of 3200 meters. The hydrophone of each buoy was 265 meters above the bottom. The data duration ranged from 10 to 14 months. The buoys were located near a major shipping lane, with an estimated 1.5 to 4.5 ships per day passing nearby. The data were sampled at 2500 Hz and have a bandwidth of 10-1000 Hz. Data are processed in eight 1/3-octave frequency bands, centered from 25 to 950 Hz, and monthly values of the following statistical quantities are computed from the resulting eight time series of noise spectral level: mean, median, standard deviation, skewness, kurtosis and coherence time. Four hurricanes were recorded during the summer of 2004, and they had a major impact on all of the noise statistics. Noise levels at higher frequencies (400-950 Hz) peak during extremely windy months (summer hurricanes and winter storms). Standard deviation is least in the region 100-200 Hz but increases at higher frequencies, especially during periods of high wind variability (summer hurricanes). Skewness is positive from 25-400 Hz and negative from 630-950 Hz. Skewness and kurtosis are greatest near 100 Hz. Coherence time is low in shipping bands and high in weather bands, and it peaks during hurricanes. The noise coherence is also analyzed. The 14-month time series in each 1/3-octave band is highly correlated with the other 1/3-octave band time series ranging from 2 octaves below to 2 octaves above the band's center frequency. Spatial coherence between hydrophones is also analyzed for hydrophone separations of 2.29, 2.56 and 4.84 km over a 10-month period. The noise field is highly coherent out to the maximum distance studied, 4.84 km.
Additionally, fluctuations of each time series are analyzed to determine the time scales of greatest variability. The 14-month data show clearly that variability occurs primarily over three time scales: 7-22 hours (shipping-related), 56-282 hours (2-12 days, weather-related) and an 8-12 month period.
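The monthly statistics listed above (mean, median, standard deviation, skewness, kurtosis) can be computed in a few lines; the synthetic "noise level" series below, including a storm-like bump, is a hypothetical stand-in for the EARS data, not the actual recordings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
i = np.arange(24 * 30)  # one month of hypothetical hourly samples
# baseline 75 dB levels with 3 dB noise plus a storm-like 10 dB bump
level = 75.0 + 3.0 * rng.standard_normal(i.size) \
        + 10.0 * np.exp(-((i - 360) / 24.0) ** 2)

summary = {
    "mean": level.mean(),
    "median": np.median(level),
    "std": level.std(ddof=1),
    "skewness": stats.skew(level),
    "kurtosis": stats.kurtosis(level),  # excess kurtosis, as in scipy's default
}
```

The one-sided storm bump pulls the skewness positive, mirroring the hurricane months described in the study.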
84

Méthodes de construction des courbes de fragilité sismique par simulations numériques / Development of seismic fragility curves based on numerical simulations

Dang, Cong-Thuat 28 May 2014 (has links)
A seismic fragility curve, which gives the failure probability of a structure as a function of a seismic intensity measure, is a powerful tool for evaluating the seismic vulnerability of structures in nuclear and civil engineering. This thesis focuses on the numerical-simulation-based approach to the construction of seismic fragility curves. A comparative study of the existing parametric methods under the lognormal assumption is first performed. It then allows improvements to the maximum likelihood method to be proposed, in order to mitigate the influence of the seismic excitation during the construction process. Another improvement is the application of the subset simulation method for the evaluation of low failure probabilities. Finally, using the Probability Density Evolution Method (PDEM), which evaluates the joint probability of the structural response and the random variables of the system and/or excitation, a new non-parametric technique for constructing seismic fragility curves is proposed; it derives the fragility curve without the lognormal assumption. The improvements and the new technique are all validated on numerical examples.
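A common parametric form behind the lognormal assumption discussed above models the fragility curve as P_f(im) = Phi(ln(im/theta)/beta) and fits theta and beta by maximum likelihood on binary failure outcomes. The sketch below uses synthetic data with assumed values theta = 0.6 and beta = 0.4; it illustrates the general method, not the thesis's specific improvements.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
theta_true, beta_true = 0.6, 0.4           # median capacity and lognormal std (assumed)
im = rng.uniform(0.1, 1.5, 400)            # intensity measures of simulated motions
failed = rng.random(400) < norm.cdf(np.log(im / theta_true) / beta_true)

def nll(params):
    """Negative binomial log-likelihood of the observed failure outcomes."""
    theta, beta = params
    if theta <= 0 or beta <= 0:
        return np.inf
    p = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-12, 1 - 1e-12)
    return -np.sum(failed * np.log(p) + (~failed) * np.log(1 - p))

res = minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
theta_hat, beta_hat = res.x
```

With 400 simulated motions the fitted curve recovers the generating parameters closely.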
85

Large Eddy Simulation/Transported Probability Density Function Modeling of Turbulent Combustion: Model Advancement and Applications

Pei Zhang (6922148) 16 August 2019 (has links)
Studies of turbulent combustion in the past mainly focus on problems with single-regime combustion. In practical combustion systems, however, combustion rarely occurs in a single regime, and different regimes of combustion can be observed in the same system. This creates a significant gap between our existing knowledge of combustion in single regime and the practical need in multi-regime combustion. In this work, we aim to extend the traditional single-regime combustion models to problems involving different regimes of combustion. Among the existing modeling methods, the Transported Probability Density Function (PDF) method is attractive for its intrinsic closure of treating detailed chemical kinetics and has been demonstrated to be promising in predicting low-probability but practically important combustion events like local extinction and re-ignition. In this work, we focus on the model assessment and advancement of the Large Eddy Simulation (LES)/PDF method in predicting turbulent multi-regime combustion.

Two combustion benchmark problems are considered for the model assessment. One is a recently designed turbulent piloted jet flame that features statistically transient processes, the Sydney turbulent pulsed piloted jet flame. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed LES/PDF modeling method. A comparison of the PLIF-OH images and the predicted OH mass fraction contours at a few selected times shows that the method captures the different combustion stages including healthy burning, significant extinction, and the re-establishment of healthy burning, in the statistically transient process. The temporal history of the conditional PDF of OH mass fraction/temperature at around stoichiometric conditions at different axial locations suggests that the method predicts the extinction and re-establishment timings accurately at upstream locations but less accurately at downstream locations, with a delay of burning re-establishment. The other test case is a unified series of existing turbulent piloted flames. To facilitate model assessment across different combustion regimes, we develop a model validation framework by unifying several existing pilot-stabilized turbulent jet flames in different combustion regimes. The characteristic similarity and difference of the employed piloted flames are examined, including the Sydney piloted flames L, B, and M, the Sandia piloted flames D, E, and F, a series of piloted premixed Bunsen flames, and the Sydney/Sandia inhomogeneous inlet piloted jet flames. Proper parameterization and a regime diagram are introduced to characterize the pilot-stabilized flames covering non-premixed, partially premixed, and premixed flames. A preliminary model assessment is carried out to examine the simultaneous model performance of the LES/PDF method for the piloted jet flames across different combustion regimes.

With the assessment work in the above two test cases, it is found that the LES/PDF method can predict the statistically transient combustion and multi-regime combustion reasonably well, but some modeling limitations are also identified. Thus, further model advancement is needed for the LES/PDF method. In this work, we focus on two model advancement studies related to the molecular diffusion and sub-filter scale mixing processes in turbulent combustion. The first study is to deal with differential molecular diffusion (DMD) among different species. The importance of the DMD effects on combustion has been found in many applications; however, in most previous combustion models equal molecular diffusivity is assumed. To incorporate the DMD effects accurately, we develop a model called the Variance Consistent Mean Shift (VCMS) model. The second model advancement focuses on the sub-filter scale mixing in high-Karlovitz (Ka) number turbulent combustion. We analyze the DNS data of a Sandia high-Ka premixed jet flame to gain insights into the modeling of sub-filter scale mixing. A sub-filter scale mixing time scale is analyzed with respect to the filter size to examine the validity of a power-law scaling model for the mixing time scale.
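The power-law scaling check mentioned for the sub-filter mixing time scale can be sketched as a log-log linear fit of the time scale against the filter size. The 2/3 exponent and the data below are illustrative assumptions (Kolmogorov-type scaling), not results from the DNS analysis.

```python
import numpy as np

# hypothetical sub-filter mixing time scales at several filter widths
delta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # filter size (arbitrary units)
tau = 0.12 * delta ** (2.0 / 3.0)             # synthetic data on an assumed 2/3 power law

# fit tau = C * delta^n as a straight line in log-log coordinates
n_hat, logC = np.polyfit(np.log(delta), np.log(tau), 1)
C_hat = np.exp(logC)
```

A straight log-log fit recovers the exponent exactly for perfect power-law data; on real DNS data the residuals quantify how well the scaling model holds.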
86

Contrôle du phasage de la combustion dans un moteur HCCI par ajout d’ozone : Modélisation et Contrôle / Control of combustion phasing in HCCI engine through ozone addition

Sayssouk, Salim 18 December 2017 (has links)
To meet the next legislative steps, one of the solutions adopted by car manufacturers is depollution at the source through new combustion concepts. One candidate is the Homogeneous Charge Compression Ignition (HCCI) engine. The major challenge is to control the combustion phasing during transitions, and ozone is a promising combustion additive. The first part of this work is devoted to the development of a 0D physical model of HCCI combustion based on the temperature fluctuations inside the combustion chamber, using a Probability Density Function (PDF) approach; for this purpose, an enthalpy variance model is developed and used in the temperature PDF. After experimental validation, the model is used to build HCCI engine maps with and without ozone addition, in order to evaluate the benefit of this chemical actuator in extending the load-speed range. The second part deals with control of the combustion phasing by ozone addition. In simulation, a Control Oriented Model (COM) coupled with control laws demonstrates the possibility of cycle-to-cycle control of the combustion phasing. In parallel, an experimental study on an engine test bench is enabled by a real-time acquisition system for the combustion parameters (Pmax, CA50) developed during this work. By integrating the ozone-based control laws into the Engine Control Unit (ECU), the experimental results demonstrate not only cycle-to-cycle control of the combustion phasing during transitions but also stabilization of the combustion phasing at an unstable operating point.
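The combustion parameter CA50 mentioned above is the crank angle at which 50% of the cumulative heat release is reached. A minimal sketch, assuming a synthetic Gaussian-shaped heat release rate rather than measured cylinder-pressure data:

```python
import numpy as np

# hypothetical crank-angle-resolved heat release rate, peaking at 5 deg aTDC
ca = np.linspace(-30.0, 60.0, 901)             # crank angle (deg aTDC)
hrr = np.exp(-0.5 * ((ca - 5.0) / 4.0) ** 2)   # assumed Gaussian burn profile

cum = np.cumsum(hrr)
cum /= cum[-1]                                 # normalized cumulative heat release
ca50 = np.interp(0.5, cum, ca)                 # crank angle of 50% heat release
```

A cycle-to-cycle controller of the kind described would compute this value each cycle and use it as the feedback signal for the ozone actuator.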
87

Metodologia para diagnóstico e análise da influência dos afundamentos e interrupções de tensão nos motores de indução trifásicos / Methodology for the diagnosis and analysis of influence of voltage sags and interruptions in three-phase induction motors

Gibelli, Gerson Bessa 20 May 2016 (has links)
This study proposes a methodology to detect and classify the disturbances observed in an Industrial Electric System (IES) and, in addition, to estimate non-intrusively the electromagnetic torque and speed associated with the Three-Phase Induction Motor (TPIM) under analysis. The proposed methodology is based on the Wavelet Transform (WT) for the detection and time localization of voltage sags and interruptions, and on the Probability Density Function (PDF) and Cross-Correlation (CC) for the classification of events. After event classification, the methodology, as implemented, provides the estimation of the electromagnetic torque and the TPIM speed from the three-phase voltages and currents via Artificial Neural Networks (ANNs). The necessary computer simulations of a real industrial system, as well as the modeling of the TPIM, were performed using the DIgSILENT PowerFactory software. The logic responsible for detection and time localization correctly detected 93.4% of the assessed situations. Regarding the classification of disturbances, the index reflected 100% accuracy over the assessed situations. The ANNs associated with the estimation of the electromagnetic torque and speed at the TPIM shaft showed maximum standard deviations of 1.68 p.u. and 0.02 p.u., respectively.
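The detection-and-localization step can be sketched on synthetic data: a sliding one-cycle RMS envelope of a simulated 50% voltage sag, with single-level Haar detail coefficients (the simplest wavelet) flagging the abrupt changes. The sampling rate, threshold and sag depth below are assumptions for illustration, not the thesis's settings.

```python
import numpy as np

fs, f0 = 6000, 60                          # assumed sampling rate and mains frequency
t = np.arange(0.0, 0.5, 1.0 / fs)
v = np.sin(2.0 * np.pi * f0 * t)
v[(t >= 0.2) & (t < 0.3)] *= 0.5           # synthetic 50% voltage sag, 0.2 s to 0.3 s

# one-cycle sliding RMS envelope (flat except at the sag boundaries)
N = fs // f0
rms = np.sqrt(np.convolve(v ** 2, np.ones(N) / N, mode="same"))

# single-level Haar detail coefficients of the envelope locate the abrupt changes
pairs = rms[: len(rms) // 2 * 2].reshape(-1, 2)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
edge_t = np.nonzero(np.abs(detail) > 1e-3)[0] * 2.0 / fs
```

The envelope drops to the sag level between the two detected edges; a classifier (PDF/CC in the thesis) would then label the event from its depth and duration.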
88

Obstacle detection and emergency exit sign recognition for autonomous navigation using camera phone

Mohammed, Abdulmalik January 2017 (has links)
In this research work, we develop an obstacle detection and emergency exit sign recognition system on a mobile phone by extending the Features from Accelerated Segment Test (FAST) detector with a Harris corner filter. The first step required by many vision-based applications is the detection of objects of interest in an image. Hence, we introduce an emergency exit sign detection method based on a colour histogram: the hue and saturation components of the HSV colour model are processed into a 2D colour histogram, which is backprojected to detect an emergency exit sign in a captured image, the first task required before recognition. The classification results show that the 2D histogram is fast and discriminates accurately between objects and background. One of the challenges confronting object recognition methods is the choice of image feature to compute. In this work, therefore, we present two feature detector and descriptor methods based on the FAST detector with a Harris corner filter. The first method is called Upright FAST-Harris and Binary detector (U-FaHB), while the second is Scale-Interpolated FAST-Harris and Binary (SIFaHB). In both methods, feature points are extracted using the accelerated segment test detector, and the Harris filter returns the strongest corner points as features; in SIFaHB, feature points are extracted across the image plane and along the scale-space. The modular design of these detectors allows the integration of descriptors of any kind, so we combine them with a binary test descriptor, BRIEF, to compute feature regions. These detectors and the combined descriptor are evaluated on images observed under various geometric and photometric transformations, and their performance is compared with other detectors and descriptors.
The results obtained show that our proposed feature detector and descriptor method is fast and performs better than methods such as SIFT, SURF, ORB, BRISK and CenSurE. Based on the potential of the U-FaHB detector and descriptor, we extended it to optical flow computation, which we term the Nearest-flow method. This method can compute flow vectors for use in obstacle detection. As with any new method, we evaluated the Nearest-flow method on real and synthetic image sequences and compared its performance with methods such as Lucas-Kanade, Farneback and SIFT-flow. The results show that our Nearest-flow method is faster to compute and performs better on real-scene images than the other methods. In the final part of this research, we demonstrate the application potential of our proposed methods by developing an obstacle detection and exit sign recognition system on a camera phone; the results obtained show that the methods have the potential to solve this vision-based object detection and recognition problem.
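The colour-histogram step described above can be sketched with plain NumPy: build a normalized 2D hue-saturation histogram from a model patch, then backproject it onto an image. The OpenCV-style ranges (H in [0, 180), S in [0, 256)) and the bin counts are assumptions for illustration.

```python
import numpy as np

def hs_histogram(hsv, bins=(30, 32)):
    """Normalized 2D hue-saturation histogram of an HSV image or patch."""
    hist, _, _ = np.histogram2d(hsv[..., 0].ravel(), hsv[..., 1].ravel(),
                                bins=bins, range=[[0, 180], [0, 256]])
    return hist / max(hist.max(), 1.0)

def backproject(hsv, hist, bins=(30, 32)):
    """Per-pixel likelihood that the pixel colour belongs to the model histogram."""
    hi = np.minimum((hsv[..., 0] * bins[0] // 180).astype(int), bins[0] - 1)
    si = np.minimum((hsv[..., 1] * bins[1] // 256).astype(int), bins[1] - 1)
    return hist[hi, si]
```

Thresholding the backprojection yields candidate sign regions, which a recognizer (the FAST-Harris pipeline in the thesis) would then verify.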
90

Application of Java on Statistics Education

Tsay, Yuh-Chyuan 24 July 2000 (has links)
With the prevalence of the internet, using the network as a tool for computer-aided education is gradually becoming a trend. However, computer-aided education has usually been presented as static text, which is merely convenient for the user to read and no different from a traditional textbook. With the growth of the WWW and the development of Java, interactive computer-aided education is becoming the trend of the future, and this new medium can improve the teaching of basic statistics. An instructor can combine HTML with Java Applets to deliver interactive instruction over the WWW. In this paper, we use six examples of Java Applets for statistical computer-aided education to help students learn and understand abstract statistical concepts. The key methods for reaching this goal are visualization and simulation, presented through graphics or games. Finally, we discuss how to use the Applets and how to easily add them to a homepage.
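A typical applet of the kind described, sketched here in Python rather than Java for brevity, demonstrates the Central Limit Theorem by simulation: the means of uniform samples concentrate around 0.5 with standard deviation sqrt(1/12)/sqrt(n). The sample size and repetition count are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 30, 20000
# means of many uniform(0, 1) samples of size n
means = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)

# CLT prediction: approximately normal, mean 0.5, std sqrt(1/12)/sqrt(n)
expected_sd = np.sqrt(1.0 / 12.0) / np.sqrt(n)
```

An interactive version would let the student vary n and watch the histogram of `means` tighten into a bell curve, which is exactly the visualization-plus-simulation approach the paper advocates.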
