91 |
Oxidation and Reduction Process for Polycyclic Aromatic Hydrocarbons and Nitrated Polycyclic Aromatic Hydrocarbons
Tian, Zhenjiao January 2008 (has links)
No description available.
|
92 |
A basic probability assignment methodology for unsupervised wireless intrusion detection
Ghafir, Ibrahim, Kyriakopoulos, K.G., Aparicio-Navarro, F.J., Lambotharan, S., Assadhan, B., Binsalleeh, A.H. 24 January 2020 (has links)
Yes / The broadcast nature of wireless local area networks has made them prone to several types
of wireless injection attacks, such as Man-in-the-Middle (MitM) at the physical layer, deauthentication, and
rogue access point attacks. The implementation of novel intrusion detection systems (IDSs) is fundamental to
provide stronger protection against these wireless injection attacks. Since most attacks manifest themselves
through different metrics, current IDSs should leverage a cross-layer approach to improve
detection accuracy. The data fusion technique based on the Dempster–Shafer (D-S) theory has been proven
to be an efficient technique to implement the cross-layer metric approach. However, the dynamic generation
of the basic probability assignment (BPA) values used by D-S is still an open research problem. In this
paper, we propose a novel unsupervised methodology to dynamically generate the BPA values, based on
both the Gaussian and exponential probability density functions, the categorical probability mass function,
and the local reachability density. Then, D-S is used to fuse the BPA values to classify whether the Wi-Fi
frame is normal (i.e., non-malicious) or malicious. The proposed methodology provides 100% true positive
rate (TPR) and 4.23% false positive rate (FPR) for the MitM attack and 100% TPR and 2.44% FPR for the
deauthentication attack, which confirm the efficiency of the dynamic BPA generation methodology. / Gulf Science, Innovation and Knowledge Economy Programme of the U.K. Government under UK-Gulf Institutional Link Grant IL 279339985 and in part by the Engineering and Physical Sciences Research Council (EPSRC), U.K., under Grant EP/R006385/1.
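The fusion step described above can be sketched with Dempster's rule of combination. The code below is a minimal illustration, not the paper's implementation: the frame of discernment is {normal, malicious}, and the per-metric BPA values (which the paper generates dynamically from Gaussian/exponential PDFs, a categorical PMF, and the local reachability density) are replaced here by hypothetical numbers.

```python
from itertools import product

# Frame of discernment: N = normal, M = malicious.
# A BPA assigns mass to the subsets {N}, {M}, and {N, M} (uncertainty).
N, M, NM = frozenset("N"), frozenset("M"), frozenset("NM")

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs over the same frame."""
    combined = {N: 0.0, M: 0.0, NM: 0.0}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] += wa * wb
        else:
            conflict += wa * wb  # mass assigned to disjoint hypotheses
    # Normalise by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical per-metric BPAs for one Wi-Fi frame (illustrative values only).
m_rssi = {N: 0.2, M: 0.6, NM: 0.2}   # e.g. derived from a Gaussian PDF on RSSI
m_seq  = {N: 0.1, M: 0.7, NM: 0.2}   # e.g. derived from the local reachability density

fused = dempster_combine(m_rssi, m_seq)
verdict = "malicious" if fused[M] > fused[N] else "normal"
```

Mass placed on the full frame {N, M} represents uncertainty; Dempster's rule redistributes the conflicting mass through the normalisation by 1 − K.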
|
93 |
Interaction of the eta-meson with light nuclei
De Villiers, Jean Schepers 30 November 2005 (has links)
The long-standing problem of possible formation of metastable states in collisions
of the eta-meson with atomic nuclei is revisited. The two-body eta-nucleon interaction
is described by a local potential, which is constructed by fitting known
low-energy parameters of this interaction. The many-body eta-nucleus potential,
obtained within the folding model, is used to search for metastable states of the
systems formed by the eta-meson with hydrogen and helium isotopes. It is found
that all these systems generate strings of overlapping resonances. / Physics / M.Sc. (Physics)
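The folding model mentioned above admits a simple numerical illustration: the eta-nucleus potential is the nuclear density folded with the two-body eta-nucleon potential, V(r) = ∫ ρ(r′) v(r − r′) dr′. The 1-D sketch below uses illustrative Gaussian forms, not the potential fitted to the low-energy eta-N parameters in the thesis.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)   # radial-like coordinate (fm)
dx = x[1] - x[0]

# Illustrative inputs: a Gaussian nuclear density profile and an
# attractive Gaussian two-body eta-nucleon potential (MeV).
rho = np.exp(-x**2 / (2 * 1.5**2))
v = -50.0 * np.exp(-x**2 / (2 * 0.5**2))

# Folded (many-body) potential: discrete convolution times the grid step.
V = np.convolve(rho, v, mode="same") * dx

# Sanity check: the folded potential integrates to (integral of rho) * (integral of v).
lhs = V.sum() * dx
rhs = (rho.sum() * dx) * (v.sum() * dx)
```

The folded well is wider and shallower than the bare two-body potential, which is the qualitative effect the folding model captures.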
|
94 |
Caractérisation géométrique et morphométrique 3-D par analyse d'image 2-D de distributions dynamiques de particules convexes anisotropes. Application aux processus de cristallisation. / 3-D geometrical and morphometric characterization from 2-D images of dynamic distributions of anisotropic convex particles. Application to crystallization processes.
Presles, Benoît 09 December 2011 (has links)
La cristallisation en solution est un procédé largement utilisé dans l'industrie comme opération de séparation et de purification qui a pour but de produire des solides avec des propriétés spécifiques. Les propriétés concernant la taille et la forme ont un impact considérable sur la qualité finale des produits. Il est donc primordial de pouvoir déterminer la distribution granulométrique (DG) des cristaux en formation. En utilisant une caméra in situ, il est possible de visualiser en temps réel les projections 2D des particules 3D présentes dans la suspension. La projection d'un objet 3D sur un plan 2D entraîne nécessairement une perte d'informations : déterminer sa taille et sa forme à partir de ses projections 2D n’est donc pas aisé. C'est tout l'enjeu de ce travail: caractériser géométriquement et morphométriquement des objets 3D à partir de leurs projections 2D. Tout d'abord, une méthode basée sur le maximum de vraisemblance des fonctions de densité de probabilité de mesures géométriques projetées a été développée pour déterminer la taille d'objets 3D convexes. Ensuite, un descripteur de forme stéréologique basé sur les diagrammes de forme a été proposé. Il permet de caractériser la forme d'un objet 3D convexe indépendamment de sa taille et a notamment été utilisé pour déterminer les facteurs d'anisotropie des objets 3D convexes considérés. Enfin, une combinaison des deux études précédentes a permis d'estimer à la fois la taille et la forme des objets 3D convexes. Cette méthode a été validée grâce à des simulations, comparée à une méthode de la littérature et utilisée pour estimer des DGs d'oxalate d'ammonium qui ont été comparées à d’autres méthodes granulométriques. / Solution crystallization processes are widely used in the process industry as separation and purification operations and are expected to produce solids with desirable properties. 
The properties concerning size and shape are known to have a considerable impact on the final quality of the products. Hence, it is of major importance to be able to determine the crystal size distribution (CSD) during formation. By using an in situ camera, it is possible to visualize in real time the 2D projections of the 3D particles in the suspension. The projection of a 3D object on a 2D plane necessarily involves a loss of information, so determining the size and the shape of a 3D object from its 2D projections is not easy. This is the main goal of this work: to characterize 3D objects geometrically and morphometrically from their 2D projections. First, a method based on maximum likelihood estimation over the probability density functions of projected geometrical measurements has been developed to estimate the size of 3D convex objects. Then, a stereological shape descriptor based on shape diagrams has been proposed; it characterizes the shape of a 3D convex object independently of its size and has notably been used to estimate the anisotropy factors of the 3D convex objects. Finally, a combination of the two previous studies has made it possible to estimate both the size and the shape of the 3D convex objects. This method has been validated with simulated data, compared to a method from the literature, and used to estimate size distributions of ammonium oxalate particles crystallizing in water, which were compared to other CSD methods.
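As a toy illustration of size estimation by maximum likelihood (the thesis maximises the likelihood of projected 2-D measurements, which is considerably more involved), the sketch below fits a lognormal size distribution, whose maximum-likelihood estimates have a closed form: mu = mean(ln d), sigma = std(ln d). All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" size distribution: lognormal around 50 um.
true_mu, true_sigma = np.log(50.0), 0.3
sizes = rng.lognormal(true_mu, true_sigma, 20000)

# Closed-form MLE of the lognormal parameters from the log-sizes.
logs = np.log(sizes)
mu_hat = logs.mean()
sigma_hat = logs.std()   # ddof=0 is the MLE
```

With a large sample the estimates recover the generating parameters closely; the real method replaces the direct size observations with the PDFs of projected geometrical measurements.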
|
95 |
Large Eddy Simulation/Transported Probability Density Function Modeling of Turbulent Combustion: Model Advancement and Applications
Pei Zhang (6922148) 16 August 2019 (has links)
Studies of turbulent combustion in the past have mainly focused on problems in a single combustion regime. In practical combustion systems, however, combustion rarely occurs in a single regime, and different regimes can be observed in the same system. This creates a significant gap between our existing knowledge of single-regime combustion and the practical need to model multi-regime combustion. In this work, we aim to extend traditional single-regime combustion models to problems involving different regimes of combustion. Among the existing modeling methods, the Transported Probability Density Function (PDF) method is attractive for its intrinsic closure in treating detailed chemical kinetics and has been demonstrated to be promising in predicting low-probability but practically important combustion events such as local extinction and re-ignition. In this work, we focus on the assessment and advancement of the Large Eddy Simulation (LES)/PDF method in predicting turbulent multi-regime combustion.

Two combustion benchmark problems are considered for the model assessment. One is a recently designed turbulent piloted jet flame that features statistically transient processes, the Sydney turbulent pulsed piloted jet flame. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed LES/PDF modeling method. A comparison of the PLIF-OH images and the predicted OH mass fraction contours at a few selected times shows that the method captures the different combustion stages in the statistically transient process, including healthy burning, significant extinction, and the re-establishment of healthy burning. The temporal history of the conditional PDF of OH mass fraction/temperature at around stoichiometric conditions at different axial locations suggests that the method predicts the extinction and re-establishment timings accurately at upstream locations but less accurately at downstream locations, with a delay of burning re-establishment. The other test case is a unified series of existing turbulent piloted flames. To facilitate model assessment across different combustion regimes, we develop a model validation framework by unifying several existing pilot-stabilized turbulent jet flames in different combustion regimes. The characteristic similarities and differences of the employed piloted flames are examined, including the Sydney piloted flames L, B, and M, the Sandia piloted flames D, E, and F, a series of piloted premixed Bunsen flames, and the Sydney/Sandia inhomogeneous-inlet piloted jet flames. Proper parameterization and a regime diagram are introduced to characterize the pilot-stabilized flames covering non-premixed, partially premixed, and premixed flames. A preliminary model assessment is carried out to examine the simultaneous performance of the LES/PDF method for the piloted jet flames across different combustion regimes.

From the assessment in the above two test cases, it is found that the LES/PDF method can predict statistically transient combustion and multi-regime combustion reasonably well, but some modeling limitations are also identified. Thus, further model advancement of the LES/PDF method is needed. In this work, we focus on two model advancement studies related to the molecular diffusion and sub-filter scale mixing processes in turbulent combustion. The first study deals with differential molecular diffusion (DMD) among different species. The importance of the DMD effects on combustion has been observed in many applications; however, most previous combustion models assume equal molecular diffusivity. To incorporate the DMD effects accurately, we develop a model called the Variance Consistent Mean Shift (VCMS) model. The second model advancement focuses on sub-filter scale mixing in high-Karlovitz (Ka) number turbulent combustion. We analyze DNS data of a Sandia high-Ka premixed jet flame to gain insights into the modeling of sub-filter scale mixing. A sub-filter scale mixing time scale is analyzed with respect to the filter size to examine the validity of a power-law scaling model for the mixing time scale.
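The power-law scaling check mentioned at the end can be sketched as a log-log least-squares fit: given a mixing time scale measured at several filter sizes, fit tau_m = C · Delta^n and read off the exponent. The synthetic 2/3 exponent and all values below are illustrative, not results from the DNS analysis.

```python
import numpy as np

# Synthetic sub-filter mixing time scales at several filter sizes Delta,
# generated from an assumed power law tau = C * Delta**n (illustrative).
delta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # filter sizes (arbitrary units)
tau = 0.12 * delta ** (2.0 / 3.0)

# Least-squares fit in log-log coordinates: slope = exponent n.
n_hat, logC_hat = np.polyfit(np.log(delta), np.log(tau), 1)
C_hat = np.exp(logC_hat)
```

On noisy DNS-extracted data the same fit would return an exponent with a confidence interval, and the quality of the power-law model can be judged from the residuals.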
|
96 |
Contrôle du phasage de la combustion dans un moteur HCCI par ajout d’ozone : Modélisation et Contrôle / Control of combustion phasing in HCCI engine through ozone addition
Sayssouk, Salim 18 December 2017 (has links)
Pour franchir les prochaines étapes réglementaires, une des solutions adoptées par les constructeurs automobiles est la dépollution à la source par des nouveaux concepts de combustion. Une piste d’étude est le moteur à charge homogène allumé par compression, le moteur HCCI. Le défi majeur est de contrôler le phasage de la combustion lors des transitions. Or, l’ozone est un additif prometteur de la combustion. La première partie de ce travail est consacrée au développement d’un modèle 0D physique de la combustion dans le moteur HCCI à l’aide d’une approche statistique basée sur une fonction de densité de probabilité (PDF) de la température. Pour cela, un modèle de variance d’enthalpie est développé. Après la validation expérimentale du modèle, il est utilisé pour développer des cartographies du moteur HCCI avec et sans ajout de l’ozone afin d’évaluer le gain apporté par cet actuateur chimique en terme de charge et régime. La deuxième partie porte sur le contrôle du phasage de combustion par ajout d’ozone. Une étude de simulation est effectuée où des lois de commandes sont appliquées sur un modèle orienté contrôle. Les résultats montrent que l’ajout d’ozone permet de contrôler cycle-à-cycle le phasage de la combustion. En parallèle, une étude expérimentale sur un banc moteur est facilitée grâce à un système d’acquisition des paramètres de combustion (Pmax, CA50) en temps réel, développé au cours de cette étude. En intégrant les lois de commande par ajout d’ozone dans le calculateur du moteur (ECU), les résultats expérimentaux montrent la possibilité de contrôler non seulement cycle-à-cycle le phasage de la combustion par ajout d’ozone lors des transitions mais aussi de stabiliser le phasage de la combustion d’un point instable. / To meet the next regulatory steps, one solution adopted by car manufacturers is pollution reduction at the source through new combustion concepts. One candidate is the Homogeneous Charge Compression Ignition (HCCI) engine.
The major challenge is to control the combustion phasing during transitions, and ozone is a promising combustion additive. In the first part of this work, a 0D physical model of HCCI combustion is developed, based on the temperature fluctuations inside the combustion chamber described by a Probability Density Function (PDF) of temperature; for this purpose, an enthalpy variance model is developed. The model shows good agreement with experiments and is used to build HCCI engine maps with and without ozone addition, in order to evaluate the benefit of this chemical actuator in extending the operating map in terms of load and speed. The second part deals with the control of combustion phasing by ozone addition. A Control Oriented Model (COM) coupled with control laws demonstrates, in simulation, the possibility of cycle-to-cycle control of the combustion phasing. An experimental test bench is then used to confirm this possibility, supported by a real-time acquisition system for the combustion parameters (Pmax, CA50) developed during this study. By integrating the control laws into the Engine Control Unit (ECU), the experimental results demonstrate not only cycle-to-cycle control of the combustion phasing during transitions but also its stabilization at an unstable operating point.
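A minimal sketch of the temperature-PDF idea used above: the mean reaction rate is the Arrhenius rate averaged over a presumed Gaussian temperature PDF, whose variance would be supplied by the enthalpy variance model. The activation temperature, mean, and variance below are illustrative only.

```python
import numpy as np

Ta = 15000.0                  # activation temperature (K), illustrative
T_mean, T_std = 1000.0, 50.0  # mean and std of the presumed temperature PDF

# Gaussian temperature PDF sampled on a grid spanning +/- 5 sigma.
T = np.linspace(T_mean - 5 * T_std, T_mean + 5 * T_std, 4001)
dT = T[1] - T[0]
p = np.exp(-(T - T_mean) ** 2 / (2 * T_std ** 2)) / (T_std * np.sqrt(2 * np.pi))

w = np.exp(-Ta / T)               # Arrhenius rate (pre-factor dropped)
w_mean_pdf = np.sum(w * p) * dT   # PDF-averaged rate, <w> = int w(T) p(T) dT
w_at_mean = np.exp(-Ta / T_mean)  # rate evaluated at the mean temperature
```

Because the Arrhenius rate is convex in this temperature range, temperature fluctuations raise the mean rate above the rate at the mean temperature, which is why the ignition phasing is sensitive to the variance model.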
|
97 |
Metodologia para diagnóstico e análise da influência dos afundamentos e interrupções de tensão nos motores de indução trifásicos / Methodology for the diagnosis and analysis of the influence of voltage sags and interruptions in three-phase induction motors
Gibelli, Gerson Bessa 20 May 2016 (has links)
Nesta pesquisa, é proposta uma metodologia para detectar e classificar os distúrbios observados em um Sistema Elétrico Industrial (SEI), além de estimar de forma não intrusiva, o torque eletromagnético e a velocidade associada ao Motor de Indução Trifásico (MIT) em análise. A metodologia proposta está baseada na utilização da Transformada Wavelet (TW) para a detecção e a localização no tempo dos afundamentos e interrupções de tensão, e na aplicação da Função Densidade de Probabilidade (FDP) e Correlação Cruzada (CC) para a classificação dos eventos. Após o processo de classificação dos eventos, a metodologia como implementada proporciona a estimação do torque eletromagnético e a velocidade do MIT por meio das tensões e correntes trifásicas via Redes Neurais Artificiais (RNAs). As simulações computacionais necessárias sobre um sistema industrial real, assim como a modelagem do MIT, foram realizadas utilizando-se do software DIgSILENT PowerFactory. Cabe adiantar que a lógica responsável pela detecção e a localização no tempo detectou corretamente 93,4% das situações avaliadas. Com relação a classificação dos distúrbios, o índice refletiu 100% de acerto das situações avaliadas. As RNAs associadas à estimação do torque eletromagnético e à velocidade no eixo do MIT apresentaram um desvio padrão máximo de 1,68 p.u. e 0,02 p.u., respectivamente. / This study proposes a methodology to detect and classify the disturbances observed in an Industrial Electric System (IES) and to estimate, non-intrusively, the electromagnetic torque and speed of the Three-Phase Induction Motor (TPIM) under analysis. The proposed methodology is based on the Wavelet Transform (WT) for the detection and location in time of voltage sags and interruptions, and on the Probability Density Function (PDF) and Cross-Correlation (CC) for the classification of events.
After the event classification stage, the methodology, as implemented, estimates the electromagnetic torque and the TPIM speed from the three-phase voltages and currents via Artificial Neural Networks (ANNs). The necessary computer simulations of a real industrial system, as well as the modeling of the TPIM, were performed using the DIgSILENT PowerFactory software. The logic responsible for detection and location in time correctly detected 93.4% of the assessed situations, while the classification of disturbances reached 100% accuracy. The ANNs associated with the estimation of the electromagnetic torque and of the speed at the TPIM shaft showed maximum standard deviations of 1.68 p.u. and 0.02 p.u., respectively.
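A minimal sketch of the wavelet-based detection idea (not the implemented methodology): undecimated level-1 Haar detail coefficients spike at the abrupt amplitude changes that bound a voltage sag, which localizes the event in time. The waveform, sag depth, and threshold rule below are illustrative.

```python
import numpy as np

fs = 3840                              # samples/s (64 per 60 Hz cycle)
t = np.arange(0, 0.5, 1 / fs)
amp = np.ones_like(t)
amp[(t >= 0.204) & (t < 0.304)] = 0.5  # 50% voltage sag lasting 0.1 s
v = amp * np.sin(2 * np.pi * 60 * t)

# Undecimated level-1 Haar detail coefficients: adjacent-sample differences.
d = np.diff(v) / np.sqrt(2)

# Coefficients far above the steady-state level flag the sag boundaries.
thresh = 3.0 * np.median(np.abs(d))
events = t[1:][np.abs(d) > thresh]     # instants flagged as transitions
```

The two flagged instants bracket the sag; a multi-level decomposition (as in a full WT implementation) would add robustness to noise and to slower events.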
|
98 |
Distribuição preditiva do preço de um ativo financeiro: abordagens via modelo de série de tempo Bayesiano e densidade implícita de Black & Scholes / Predictive distribution of a stock price: Bayesian time series model and Black & Scholes implied density approaches
Oliveira, Natália Lombardi de 01 June 2017 (has links)
Apresentamos duas abordagens para obter uma densidade de probabilidades para o preço futuro de um ativo: uma densidade preditiva, baseada em um modelo Bayesiano para série de tempo e uma densidade implícita, baseada na fórmula de precificação de opções de Black & Scholes. Considerando o modelo de Black & Scholes, derivamos as condições necessárias para obter a densidade implícita do preço do ativo na data de vencimento. Baseando-se nas densidades de previsão, comparamos o modelo implícito com a abordagem histórica do modelo Bayesiano. A partir destas densidades, calculamos probabilidades de ordem e tomamos decisões de vender/comprar um ativo. Como exemplo, apresentamos como utilizar estas distribuições para construir uma fórmula de precificação. / We present two approaches to obtain a probability density function for a stock's future price: a predictive distribution, based on a Bayesian time series model, and an implied distribution, based on the Black & Scholes option pricing formula. Considering the Black & Scholes model, we derive the necessary conditions to obtain the implied distribution of the stock price on the exercise date. Based on the predictive densities, we compare the market-implied model (Black & Scholes) with the historical, Bayesian time series approach. From these densities, it is straightforward to evaluate order probabilities (the probability that one price exceeds the other) and to make sell/buy decisions for a stock. As an example, we also show how to use these distributions to build an option pricing formula.
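Under Black & Scholes, the implied density recovered from call prices (via the Breeden-Litzenberger relation, f(K) = e^{rT} ∂²C/∂K²) is the lognormal density with log-mean ln S0 + (r − σ²/2)T. The sketch below verifies this numerically with illustrative parameters; the thesis's derivation of the necessary conditions may differ in detail.

```python
import math
import numpy as np

S0, r, sigma, T = 100.0, 0.05, 0.2, 0.5   # illustrative market parameters

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(K):
    """Black & Scholes price of a European call with strike K."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Implied density from prices: second derivative of C in the strike.
K = np.linspace(60.0, 160.0, 1001)
h = K[1] - K[0]
C = np.array([bs_call(k) for k in K])
f_num = math.exp(r * T) * np.gradient(np.gradient(C, h), h)

# Analytical lognormal density of S_T under the risk-neutral measure.
m = math.log(S0) + (r - 0.5 * sigma**2) * T
s = sigma * math.sqrt(T)
f_ana = np.exp(-(np.log(K) - m) ** 2 / (2 * s**2)) / (K * s * np.sqrt(2 * np.pi))

err = np.max(np.abs(f_num[2:-2] - f_ana[2:-2]))  # skip one-sided boundary stencils
```

The same finite-difference construction applied to observed market prices (rather than model prices) is one standard way to extract a market-implied density.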
|
99 |
Obstacle detection and emergency exit sign recognition for autonomous navigation using camera phone
Mohammed, Abdulmalik January 2017 (has links)
In this research work, we develop an obstacle detection and emergency exit sign recognition system on a mobile phone by extending the features-from-accelerated-segment-test (FAST) detector with a Harris corner filter. The first step required by many vision-based applications is the detection of objects of interest in an image. Hence, we introduce an emergency exit sign detection method using colour histograms. The hue and saturation components of the HSV colour model are processed into features to build a 2D colour histogram. We backproject the 2D colour histogram to detect an emergency exit sign in a captured image, the first task required before performing recognition. The classification results show that the 2D histogram is fast and discriminates accurately between objects and background. One of the challenges confronting object recognition methods is the choice of image feature to compute. In this work, therefore, we present two feature detector and descriptor methods based on the FAST detector with a Harris corner filter. The first is called Upright FAST-Harris and Binary detector (U-FaHB), the second Scale-Interpolated FAST-Harris and Binary (SIFaHB). In both methods, feature points are extracted using the accelerated segment test detector, and the Harris filter returns the strongest corner points as features; in the case of SIFaHB, the extraction of feature points is performed across the image plane and along the scale-space. The modular design of these detectors allows the integration of descriptors of any kind, so we combine them with a binary test descriptor such as BRIEF to compute feature regions. These detectors and the combined descriptor are evaluated on different images observed under various geometric and photometric transformations, and their performance is compared with other detectors and descriptors.
The results show that our proposed feature detector and descriptor method is fast and performs better than methods such as SIFT, SURF, ORB, BRISK, and CenSurE. Based on the potential of the U-FaHB detector and descriptor, we extended it to optical flow computation, in a method we term Nearest-flow, which can compute flow vectors for use in obstacle detection. As with any new method, we evaluated the Nearest-flow using real and synthetic image sequences, comparing its performance with the Lucas-Kanade, Farneback and SIFT-flow methods. The results show that the Nearest-flow is faster to compute and performs better on real scene images than the other methods. In the final part of this research, we demonstrate the application potential of our proposed methods by developing an obstacle detection and exit sign recognition system on a camera phone; the results show that the methods have the potential to solve this vision-based object detection and recognition problem.
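A minimal numpy-only sketch of the Harris corner response used above to filter FAST keypoints; this is the standard Harris measure (det M − k·trace²M of the smoothed structure tensor), not the thesis code, and the box window and constants are illustrative.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response map for a grayscale image."""
    Iy, Ix = np.gradient(img.astype(float))   # image gradients (rows = y)
    box = np.ones((win, win)) / win**2        # simple box smoothing window

    def smooth(a):
        out = np.zeros_like(a)
        pad = np.pad(a, win // 2, mode="edge")
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                out[r, c] = (pad[r:r + win, c:c + win] * box).sum()
        return out

    # Smoothed structure-tensor entries.
    Sxx, Syy, Sxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    det = Sxx * Syy - Sxy**2
    trace = Sxx + Syy
    return det - k * trace**2

# A white square on black: the strongest response appears at its corners,
# while plain edges get a negative score.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
i, j = np.unravel_index(np.argmax(R), R.shape)
```

In a FAST-Harris pipeline, the response R would be evaluated only at the FAST candidate points, and the strongest ones kept.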
|
100 |
Étude de la rugosité de surface induite par la déformation plastique de tôles minces en alliage d'aluminium AA6016 / A study of plastic strain-induced surface roughness in thin AA6016 aluminium sheets
Guillotin, Alban 28 May 2010 (has links)
Dans le cadre d'un programme de recherche visant à l'allègement de la structure des véhicules, l'origine de lignage dans des tôles en aluminium AA6016 a été étudiée. Ce phénomène, qui peut apparaître à la suite d'une déformation plastique, est apparenté à de la rugosité de surface alignée dans la direction de laminage (DL). Sa présence est néfaste à une bonne finition de surface, et son intensité est appréciée visuellement par les fabricants. Une méthode de quantification rationnelle a été développée. La caractérisation de la distribution morphologique des motifs de rugosité a été rendue possible par l'utilisation de fonctions fréquentielles telle la densité de puissance spectrale. La note globale, construite à partir de la quantification individuelle des composantes de lignage pur et de rugosité globulaire, s'est montrée en bon accord avec les estimations visuelles, et notamment avec le niveau de lignage intermédiaire regroupant plusieurs aspects de surface différents. La microstructure des matériaux à l'état T4 a été expérimentalement mesurée couche de grains par couche de grains à l'aide d'un couplage entre polissage contrôlé et acquisition par EBSD. Les 4 à 5 premières couches sous la surface (-120μm) semblent jouer un rôle mécanique prépondérant dans la formation du lignage car elles offrent à la fois une grande taille de grains moyenne, une importante ségrégation d'orientations cristallines, et une forte similitude de longueurs d'onde entre la rugosité de surface et les motifs de la microtexture. Des simulations numériques ont permis de vérifier que les couples de texture identifiés (Cube/Goss, Cube/Aléatoire et Cube/CT18DN) possédaient des différences d'amincissements hors-plans suffisantes pour générer l'ondulation d'une couche d'éléments. En revanche, l'influence mécanique de cette même couche décroît très rapidement avec son enfouissement dans la profondeur et devient négligeable sous plus de 4 couches d'éléments.
/ As part of a project on aluminium alloys for vehicle weight reduction, the origins of roping in AA6016 aluminium sheets have been studied. This strain-induced phenomenon is related to surface roughness but involves narrow alignments along the rolling direction (RD). It lowers the surface quality, and its intensity is visually evaluated by vehicle manufacturers. An original quantification method is proposed, in which the morphology of the roughness features is characterized using frequency functions such as the areal power spectral density. The overall roping quality mark, determined from quantifications of both the isotropic and unidirectional components, shows good agreement with the visual assessment, especially for the intermediate roping levels, which exhibit several different surface appearances. The material microtexture has been experimentally measured grain layer by grain layer using serial sectioning coupled with EBSD scans. The first 4 to 5 layers under the surface (down to -120 μm) seem to play a leading role in the micromechanics of roping development, since they simultaneously exhibit a large average grain size, a significant segregation of crystallographic orientations, and a close similarity between the wavelengths of the surface roughness and of the microstructural features. Numerical simulations verified that the identified texture pairs (Cube/Goss, Cube/Random and Cube/CT18DN) have out-of-plane strain differences sufficient to promote undulations of a one-element-thick layer. However, the mechanical influence of this layer decreases gradually with depth and becomes negligible below 4 other layers.
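A small sketch of the areal power spectral density (PSD) idea used above: a roping-like surface, with ridges aligned along the rolling direction x, concentrates its spectral energy on the transverse-frequency axis, which is what separates the unidirectional (roping) component from the isotropic roughness. Amplitudes and wavelengths below are illustrative.

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]
rng = np.random.default_rng(1)

# Roping: a ridge pattern that varies only across y (transverse direction),
# superposed on weak isotropic roughness.
surface = 0.8 * np.sin(2 * np.pi * 8 * y / n)
surface = surface + 0.05 * rng.standard_normal((n, n))

# Areal power spectral density via the 2-D FFT.
psd = np.abs(np.fft.fft2(surface)) ** 2 / n**2

# The roping energy sits at (fy = +/-8, fx = 0), i.e. on the fx = 0 axis.
fy, fx = np.unravel_index(np.argmax(psd), psd.shape)
```

Summing the PSD on and off the fx = 0 axis gives the unidirectional and isotropic contributions from which an overall roping mark could be built.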
|