381
MULTI-TARGET TRACKING WITH UNCERTAINTY IN THE PROBABILITY OF DETECTION. Rohith Reddy Sanaga (7042646), 15 August 2019
The space around the Earth is becoming increasingly populated as the number of launches grows and debris proliferates. Currently, around 44,000 objects (with a minimum size of 10 cm) orbit the Earth, according to data made publicly available by the US Strategic Command (USSTRATCOM). These objects include active satellites and debris, and their number is expected to increase rapidly in the future through launches by private-sector companies; for example, SpaceX is expected to deploy around 12,000 new satellites in the LEO region to build a space-based internet communication system. Hence, to protect active space assets, all of these objects must be tracked. Probabilistic tracking methods have become increasingly popular for solving the multi-target tracking problem in Space Situational Awareness (SSA). This thesis studies one such technique, the GM-PHD filter, an algorithm that estimates the number of objects and their states when only imperfect measurements (noisy measurements, false alarms) are available. For Earth-orbiting objects, especially those in geostationary orbits, ground-based optical sensors are a cost-efficient way to gain information. In this case, the likelihood of obtaining target-generated measurements depends on the probability of detection (pD) of the target, and accurate modeling of this quantity is essential for efficient filter performance. pD depends significantly on the amount of light reflected by the target towards the observer. The reflected light in turn depends on the position of the target relative to the Sun and the observer; on the shape, size and reflectivity of the object; and on the relative orientation of the object towards the Sun and the observer. Estimating the area and reflective properties of an object is, in general, difficult. Uncontrolled objects, for example, start tumbling, and no information regarding their attitude motion can be obtained; in addition, the shape can change because of disintegration and erosion of materials. For controlled objects, provided the object is stable, some information on the attitude can be obtained, but materials age in space, which changes their reflective properties, and exact shape models for these objects are rare. Moreover, area can never be estimated from optical (or any other) measurements, as it is always the albedo-area, i.e. reflectivity times area, that is measured.

The purpose of this work is to design a variation of the GM-PHD filter that accounts for the uncertainty in pD, since the original GM-PHD filter designed by Vo and Ma assumes a constant pD. It is validated that the proposed method improves filter performance when there is uncertainty in the area (and hence in pD) of the targets. In the tested cases, the uncertainty in pD was modeled as an uncertainty in area, assuming that the targets are spherical and that their reflectivity is constant. A model mismatch in pD is seen to affect filter performance significantly, and the proposed method improves the performance of the filter in all cases.
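A minimal sketch (not the thesis code) of how an uncertain detection probability can be folded into a GM-PHD measurement update: the target area is treated as uncertain, pD is obtained by Monte Carlo averaging over that area distribution, and the averaged value replaces the constant pD in the missed-detection weights. The area-to-pD mapping and all names below are illustrative assumptions.

```python
import numpy as np

def detection_probability(albedo_area_m2, k=0.35):
    """Toy mapping from albedo-area to detection probability (assumption);
    a real model would come from sensor and illumination geometry."""
    return 1.0 - np.exp(-k * albedo_area_m2)

def expected_pD(area_mean, area_std, reflectivity=0.3, n_samples=5000, rng=None):
    """Marginalize pD over an uncertain (Gaussian, truncated at 0) area."""
    rng = rng or np.random.default_rng(0)
    areas = np.clip(rng.normal(area_mean, area_std, n_samples), 1e-6, None)
    return detection_probability(reflectivity * areas).mean()

def gm_phd_missed_detection_weights(weights, pD_bar):
    """GM-PHD update: components not associated with any measurement are
    kept with weight (1 - pD); here the constant pD is replaced by its
    expectation under the area uncertainty."""
    return (1.0 - pD_bar) * np.asarray(weights)

# Example: a target whose area is known only to within roughly +/- 50%
pD_bar = expected_pD(area_mean=2.0, area_std=1.0)
print(pD_bar, gm_phd_missed_detection_weights([0.4, 0.2], pD_bar))
```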
382
Gaussian Process Multiclass Classification: Evaluation of Binarization Techniques and Likelihood Functions. Ringdahl, Benjamin, January 2019
In binary Gaussian process classification, the prior class membership probabilities are obtained by transforming a Gaussian process to the unit interval, typically with either the logistic likelihood function or the cumulative Gaussian likelihood function. Multiclass classification problems can be handled by any binary classifier by means of so-called binarization techniques, which reduce the multiclass problem to a number of binary problems. Besides introducing the mathematics and methods behind Gaussian process classification, we compare the binarization techniques one-against-all and one-against-one in the context of Gaussian process classification, and we also compare the performance of the logistic likelihood and the cumulative Gaussian likelihood. This is done by means of two experiments: a general experiment in which the methods are tested on several publicly available datasets, and a more specific experiment in which the methods are compared with respect to class imbalance and class overlap on several artificially generated datasets. The results indicate that there is no significant difference between the choices of binarization technique and likelihood function for typical datasets, although the one-against-one technique showed slightly more consistent performance. However, the second experiment revealed differences in how the methods react to varying degrees of class imbalance and class overlap; most notably, the logistic likelihood was a dominant factor and the one-against-one technique performed better than one-against-all.
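As a hedged illustration (not the thesis code), scikit-learn's GaussianProcessClassifier exposes both binarization schemes through its multi_class parameter, so the two strategies can be compared on a public dataset; note that scikit-learn uses a Laplace approximation with the logistic link only, so the cumulative Gaussian variant discussed in the thesis is not covered by this sketch. The choice of the iris dataset and kernel is an assumption.

```python
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# One-against-all (one_vs_rest) vs. one-against-one binarization of the
# same binary GP classifier with an RBF kernel and logistic link.
for scheme in ("one_vs_rest", "one_vs_one"):
    clf = GaussianProcessClassifier(kernel=1.0 * RBF(1.0),
                                    multi_class=scheme, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{scheme}: mean accuracy {scores.mean():.3f}")
```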
383
Simulation and optimization of steam-cracking processes. Campet, Robin, 17 January 2019
Thermal cracking is an industrial process sensitive to both temperature and pressure operating conditions. The use of internally ribbed reactors is a passive method to enhance the chemical selectivity of the process, thanks to a significant increase in heat transfer. However, this method also induces an increase in pressure loss, which is detrimental to the chemical yield and must be quantified. Because of the complexity of turbulence and chemical kinetics, and because detailed experimental measurements are difficult to conduct, the real advantage of such geometries in terms of selectivity is poorly known and difficult to assess. This work aims both at evaluating the real benefits of internally ribbed reactors in terms of chemical yield and at proposing innovative, optimized reactor designs. This is made possible by the Large Eddy Simulation (LES) approach, which allows the reactive flow inside several reactor geometries to be studied in detail. The AVBP code, which solves the compressible Navier-Stokes equations for turbulent flows, is used to simulate thermal cracking with a dedicated numerical methodology. In particular, the effects of pressure loss and heat transfer on chemical conversion are compared for a smooth and a ribbed reactor in order to draw conclusions about the impact of wall roughness under industrial operating conditions. An optimization methodology, based on a series of LES and Gaussian processes, is finally developed, and an innovative reactor design for thermal cracking applications, which maximizes the chemical yield, is proposed.
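Coupling expensive simulations with a Gaussian-process surrogate is, in spirit, a Bayesian-optimization loop. The sketch below is a generic illustration of that idea, not the thesis methodology: a cheap analytic function stands in for a costly LES evaluation, a GP is fitted to the evaluated designs, and the next design is chosen by expected improvement. All function names and parameter ranges are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):
    """Stand-in for a costly LES evaluation of a reactor design (assumption)."""
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, 5).reshape(-1, 1)            # initial designs
y = expensive_objective(X).ravel()
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement for minimization
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next).ravel())

print("best design:", X[np.argmin(y)].item(), "objective:", y.min())
```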
384
Finding the optimal dynamic anisotropy resolution for grade estimation improvement at Driefontein Gold Mine, South Africa. Mandava, Senzeni Maggie, January 2016
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Science in Mining Engineering, February 2016.

Mineral Resource estimation provides an assessment of the quantity, quality, shape and grade distribution of a mineralised deposit. The resource estimation process involves the assessment of the available data, the creation of geological and/or grade models for the deposit, statistical and geostatistical analyses of the data, as well as the determination of appropriate grade interpolation methods. In the grade estimation process, grades are interpolated or extrapolated into a two- or three-dimensional resource block model of a deposit. The process uses a search volume ellipsoid, centred on each block, to select the samples used for estimation. Traditionally, a globally oriented search ellipsoid is used during the estimation process. The estimation can be improved if the direction and continuity of mineralisation is acknowledged by aligning the search ellipsoid accordingly; misalignment of the search ellipsoid by just a few degrees can affect the estimation results. Representing grade continuity in undulating and folded structures can be a challenge for correct grade estimation. One solution to this problem is to apply the method of Dynamic Anisotropy in the estimation process. This method allows the anisotropy rotation angles defining the search ellipsoid and variogram model to directly follow the trend of the mineralisation for each cell within a block model. This research report describes the application of Dynamic Anisotropy to a slightly undulating area which lies on a gently folded limb of a syncline at Driefontein gold mine, where Ordinary Kriging is used as the method of estimation. In addition, the optimal Dynamic Anisotropy resolution that provides an improvement in grade estimates is determined by executing the estimation process on various block model grid sizes. The geostatistical literature review carried out for this research report highlights the importance of Dynamic Anisotropy in resource estimation. Through the application and analysis on a real-life dataset, this research report puts theories and opinions about Dynamic Anisotropy to the test.
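To make the idea concrete, the following hedged sketch (illustrative only; the two-angle rotation convention, variable names and values are assumptions, and packages such as Datamine use their own conventions) rotates an anisotropic search/variogram ellipsoid per block so that its principal axis follows a locally estimated dip direction and dip, and computes the anisotropic distance used to select samples.

```python
import numpy as np

def rotation_matrix(dip_direction_deg, dip_deg):
    """Rotate coordinates into the local orientation of the mineralisation.
    A simplified two-angle (azimuth, dip) convention is assumed here."""
    az = np.radians(dip_direction_deg)
    dip = np.radians(dip_deg)
    Rz = np.array([[np.cos(az), np.sin(az), 0.0],
                   [-np.sin(az), np.cos(az), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(dip), np.sin(dip)],
                   [0.0, -np.sin(dip), np.cos(dip)]])
    return Rx @ Rz

def anisotropic_distance(block_xyz, sample_xyz, ranges, dip_direction_deg, dip_deg):
    """Distance in which one unit equals the variogram range along each rotated
    axis; samples with distance <= 1 fall inside the search ellipsoid."""
    R = rotation_matrix(dip_direction_deg, dip_deg)
    local = R @ (np.asarray(sample_xyz, float) - np.asarray(block_xyz, float))
    return np.sqrt(np.sum((local / np.asarray(ranges, float)) ** 2))

# Dynamic anisotropy: each block carries its own angles instead of one
# global orientation for the whole model.
block = (100.0, 200.0, 50.0)
sample = (140.0, 220.0, 48.0)
print(anisotropic_distance(block, sample, ranges=(120.0, 80.0, 10.0),
                           dip_direction_deg=35.0, dip_deg=12.0))
```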
385
Application of indicator kriging and conditional simulation in assessment of grade uncertainty in Hunters Road magmatic sulphide nickel deposit in Zimbabwe. Chiwundura, Phillip, January 2017
A research project report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2017.

The assessment of the local and spatial uncertainty associated with a regionalised variable such as nickel grade at the Hunters Road magmatic sulphide deposit is one of the critical elements in resource estimation. The study focused on the application of Multiple Indicator Kriging (MIK) and Sequential Gaussian Simulation (SGS) to the estimation of recoverable resources and the assessment of grade uncertainty at Hunters Road's Western orebody. The Western orebody was divided into two domains, the Eastern and the Western domains, and was evaluated based on 172 drill holes. MIK and SGS were performed using the Datamine Studio RM module. The combined Mineral Resource estimate for the Western orebody at a cut-off grade of 0.40% Ni is 32.30 Mt at an average grade of 0.57% Ni, equivalent to 183 kt of contained nickel metal. The SGS results indicated low uncertainty associated with the Hunters Road nickel project, with a 90% probability that the average true grade above cut-off lies within +/-3% of the estimated block grade. The estimate of the mean based on SGS was 0.55% Ni and 0.57% Ni for the Western and Eastern domains respectively. The MIK results were highly comparable with the SGS E-type estimates, while the most recent Ordinary Kriging (OK) based estimates by BNC, dated May 2006, overstated the resource tonnage and underestimated the grade compared with the MIK estimates. It was concluded that MIK produced better estimates of recoverable resources than OK. However, since only E-type estimates were produced by MIK, post-processing of the "composite" conditional cumulative distribution function (ccdf) results using a relevant change-of-support algorithm such as affine correction is recommended. Although SGS produced a good measure of uncertainty around nickel grades, post-processing of the realisations using different software such as Isatis has been recommended, together with combined simulation of both grade and tonnage.
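A hedged sketch of the kind of post-processing described above (purely illustrative, using synthetic realisations rather than the Hunters Road data, with an assumed tonnage per block): given a set of simulated grade realisations per block, compute the E-type estimate, the per-block probability of exceeding the 0.40% Ni cut-off, and a simple tonnage and contained-metal summary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for SGS output: n_real realisations of grade (% Ni)
# for n_blocks blocks (a real study would read these from the mining package).
n_blocks, n_real = 1000, 100
realisations = np.exp(rng.normal(np.log(0.5), 0.25, size=(n_real, n_blocks)))

cut_off = 0.40            # % Ni
block_tonnes = 10_000.0   # assumed tonnage per block (illustrative)

e_type = realisations.mean(axis=0)               # E-type (mean) estimate per block
p_above = (realisations > cut_off).mean(axis=0)  # probability of exceeding cut-off

selected = e_type >= cut_off
tonnage = selected.sum() * block_tonnes
avg_grade = e_type[selected].mean()
metal_t = tonnage * avg_grade / 100.0            # contained nickel, tonnes

print(f"blocks above cut-off: {selected.sum()}")
print(f"tonnage: {tonnage:.0f} t at {avg_grade:.2f}% Ni -> {metal_t:.0f} t Ni")
print(f"mean probability above cut-off (selected blocks): {p_above[selected].mean():.2f}")
```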
386
Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks. Ayoub, Issa, 24 June 2019
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum from negative to positive emotions, while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects, as opposed to discrete emotions such as the six basic emotions: happy, anger, fear, disgust, sad and neutral.
Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. We extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply a Temporal Convolutional Neural Network (TCN) to model the variation of emotions over time. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Because the latter deep model does not fit into main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that the TCN outperforms the RNN-based models for the recognition of the arousal and valence emotional dimensions; we therefore propose the adoption of the TCN as a baseline method for future work on emotion detection. On the validation set of the SEWA dataset, the TCN yields a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal.
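As a hedged illustration of two ingredients named above (not the thesis implementation; layer sizes, dilations and names are assumptions), the sketch below builds a small stack of dilated causal 1-D convolutions in PyTorch and defines the concordance correlation coefficient used as the evaluation metric.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution: the extra right-side outputs produced
    by symmetric padding are cropped, so time step t only sees inputs <= t."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, time)
        out = self.conv(x)[..., :x.shape[-1]]  # crop to original length (causal)
        return self.act(out) + x               # residual connection

def concordance_cc(pred, target):
    """Concordance correlation coefficient between two 1-D tensors."""
    p_mean, t_mean = pred.mean(), target.mean()
    p_var, t_var = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - p_mean) * (target - t_mean)).mean()
    return 2 * cov / (p_var + t_var + (p_mean - t_mean) ** 2)

# Tiny TCN: increasing dilations widen the temporal receptive field.
tcn = nn.Sequential(*[CausalConvBlock(16, kernel_size=3, dilation=d)
                      for d in (1, 2, 4, 8)])
x = torch.randn(2, 16, 100)                    # (batch, features, time)
print(tcn(x).shape)
print(concordance_cc(torch.randn(50), torch.randn(50)))
```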
387
Optimizing process parameters to increase the quality of the output in a separator: An application of Deep Kernel Learning in combination with the Basin-hopping optimizer. Herwin, Eric, January 2019
Achieving optimal efficiency of production in the industrial sector is a process that is continuously under development. Separators produced by Alfa Laval are found in several industrial installations, and it is therefore of interest to make these separators operate more efficiently. The separator investigated here removes impurities and water from crude oil, and the separation performance is partially affected by the settings of the process parameters. This thesis investigates whether optimal or near-optimal process parameter settings, which minimize the water content in the output, can be obtained. Furthermore, it is investigated whether the settings of a session can be tested to draw conclusions about their suitability for the separator. The data used in this investigation originate from the sensors of a factory-installed separator. They consist of five variables related to the water content in the output; two additional variables, related to time, are created to reinforce this relationship. Using these data, optimal or near-optimal process parameter settings may be found with an optimization technique. For this procedure, a Gaussian Process with the Deep Kernel Learning extension (GP-DKL) is used to model the relationship between the water content and the sensor data. Three models with different kernel functions are evaluated, and the GP-DKL with a Spectral Mixture kernel is demonstrated to be the most suitable option. This combination is used as the objective function in a Basin-hopping optimizer, resulting in settings which correspond to a lower water content. Thus, it is concluded that optimal or near-optimal settings can be obtained. Furthermore, the process parameter settings of a session can be tested by utilizing the Bayesian properties of the GP-DKL model. However, due to the large posterior variance of the model, it cannot be determined whether the process parameter settings are suitable for the separator.
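A hedged sketch of the optimization step (illustrative only: the deep-kernel model is replaced by a plain scikit-learn Gaussian process surrogate, and the sensor data are synthetic): a GP is fitted to process-parameter/water-content pairs, and its mean prediction is minimized with SciPy's basin-hopping optimizer.

```python
import numpy as np
from scipy.optimize import basinhopping
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor data: 5 process parameters -> water content.
X = rng.uniform(0, 1, size=(200, 5))
y = ((X - 0.3) ** 2).sum(axis=1) + 0.05 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(),
                              normalize_y=True).fit(X, y)

def predicted_water_content(params):
    # Objective: the GP's mean prediction at a candidate setting.
    return gp.predict(np.clip(params, 0, 1).reshape(1, -1))[0]

result = basinhopping(predicted_water_content, x0=np.full(5, 0.5),
                      niter=50, seed=1,
                      minimizer_kwargs={"method": "L-BFGS-B",
                                        "bounds": [(0, 1)] * 5})
print("suggested settings:", np.round(result.x, 3))
print("predicted water content:", round(float(result.fun), 4))
```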
388
Defining and predicting fast-selling clothing options. Jesperson, Sara, January 2019
This thesis aims to find a definition of fast-selling clothing options and a way to predict them using only a few weeks of sales data as input. The data used for this project contain daily sales and intake quantities for seasonal options with sales starting in 2016-2018, provided by the department store chain Åhléns. A definition is found that describes fast-selling clothing options as those having sold a certain percentage of their intake after a fixed number of days; an alternative definition based on cluster affiliation is shown to be less effective. Two predictive models are tested, the first a probabilistic classifier and the second a k-nearest neighbor classifier using the Euclidean distance. The probabilistic model is divided into three steps: transformation, clustering, and classification. The time series are transformed with B-splines to reduce dimensionality, so that each time series is represented by a vector containing its length and B-spline coefficients. As a tool to improve the quality of the predictions, the B-spline vectors are clustered with a Gaussian mixture model, where every cluster is assigned one of the two labels fast-selling or ordinary, thus dividing the clusters into disjoint sets: one containing fast-selling clusters and the other containing ordinary clusters. Lastly, the time series to be predicted are assumed to be Laplace distributed around a B-spline, and, using the probability distributions provided by the clustering, the posterior probability of each class is used to classify the new observations. In the transformation step, the number of knots for the B-splines is chosen with cross-validation, and the Gaussian mixture models from the clustering step are evaluated with the Bayesian information criterion (BIC). The predictive performance of both classifiers is evaluated with accuracy, precision, and recall. The probabilistic model outperforms the k-nearest neighbor model with considerably higher values of accuracy, precision, and recall. The performance of each model improves when more data are used to make the predictions, most prominently for the probabilistic model.
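A hedged sketch of the transformation and clustering steps (illustrative only, on synthetic sales curves; the knot placement, cluster count, fast-selling threshold and labelling rule are all assumptions): each time series is reduced to its B-spline coefficients, the coefficient vectors are clustered with a Gaussian mixture, and clusters are labelled fast-selling or ordinary by majority vote.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
days = np.arange(60, dtype=float)

def synth_series(fast):
    """Synthetic cumulative sell-through curve (stand-in for real sales data)."""
    rate = 0.08 if fast else 0.02
    return 1 - np.exp(-rate * days) + 0.02 * rng.standard_normal(days.size)

series = [synth_series(fast=i < 40) for i in range(120)]
is_fast = np.array([s[14] >= 0.5 for s in series])   # assumed definition: >=50% sold by day 14

# Transformation: cubic B-spline with fixed interior knots -> coefficient vector.
knots = [15, 30, 45]
coefs = np.array([LSQUnivariateSpline(days, s, knots, k=3).get_coeffs()
                  for s in series])

# Clustering, then majority-vote labelling of each cluster.
gmm = GaussianMixture(n_components=4, random_state=0).fit(coefs)
cluster = gmm.predict(coefs)
fast_clusters = {c for c in range(4)
                 if (cluster == c).any() and is_fast[cluster == c].mean() > 0.5}
pred = np.isin(cluster, list(fast_clusters))
print("training accuracy of cluster labelling:", (pred == is_fast).mean())
```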
389
Fenômeno de ressonância estocástica na percepção tátil em resposta a sinais determinísticos e aleatórios / Stochastic resonance phenomenon in tactile perception in response to deterministic and random signals. Márquez, Ana Fernández, 22 May 2017
Stochastic resonance (SR) shows that certain levels of noise help to detect and transmit subliminal signals. Improvements in the performance of the somatosensory and motor systems (among others) have been obtained through SR generated using additive signals of optimal intensity. The most commonly used additive signal (AS) is white Gaussian noise (WGN). This study aimed to verify whether it is possible to generate SR in the tactile sensory system using a sinusoidal signal as the AS, and to compare the results with those obtained when the AS is WGN. The signals used in the experiments were a 3 Hz stimulus signal (SS), to be recognized with the aid of the ASs, which were either a 150 Hz additive sinusoidal signal (ASS) or a white Gaussian noise additive signal (WGNAS) filtered at 150 Hz. In the first part of the study, a simulation of the Hodgkin-Huxley neuron model was carried out to verify whether it could undergo SR for the same types of SS and AS. A 3 Hz sine signal was injected into the model with an intensity at which the neuron could not generate action potentials; when a higher-frequency sine wave was added to this initial signal, the neuron responded. The same behaviour was obtained when the additive signal was WGN, giving a qualitative confirmation of our hypothesis from a simulated model. A psychophysical study was then carried out with 20 volunteers (11 men and 9 women) to verify the performance of the ASS and compare it with the WGNAS for the sensory detection of the sinusoidal SS. First, the detection threshold (DT) was found for each of the signals used; during the experiment, this value was used to determine the stimulus intensity. For the SS, the intensity was defined as 80% of the DT of each volunteer. For the ASs, the intensity varied from 0% to 80% of the DT, in order to find the best proportion of added AS for detecting the SS. In 90% of the cases it was possible to generate SR using either the fast-frequency ASS or the WGNAS. Both ASs showed a statistically significant improvement in the detection rate of the SS. However, neither AS performed better than the other, so that either type could be used to generate SR in the somatosensory system. This work pioneers the use of a combination of sinusoids to generate SR and opens the door to the development of biomedical devices that generate SR to improve stability and postural control in people with motor or somatosensory impairment.
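A hedged, highly simplified sketch of the effect described above (a fixed-threshold detector rather than the Hodgkin-Huxley model used in the thesis; all amplitudes and the threshold value are assumptions): a subthreshold 3 Hz stimulus crosses the detection threshold only when a 150 Hz additive sinusoid or white Gaussian noise of suitable amplitude is added.

```python
import numpy as np

fs = 2000.0                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
threshold = 1.0

stimulus = 0.8 * np.sin(2 * np.pi * 3 * t)    # subthreshold 3 Hz stimulus

def detections(signal):
    """Number of upward threshold crossings (toy detector)."""
    above = signal > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

rng = np.random.default_rng(0)
for name, additive in [
    ("none", np.zeros_like(t)),
    ("150 Hz sinusoid", 0.4 * np.sin(2 * np.pi * 150 * t)),
    ("white Gaussian noise", 0.4 * rng.standard_normal(t.size)),
]:
    print(f"{name:>22}: {detections(stimulus + additive)} crossings")
```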
390
Modelos assimétricos inflacionados de zeros / Zero-inflated asymmetric models. Dias, Mariana Ferreira, 28 November 2014
The main motivation of this study is the analysis of the amount of blood received in transfusion (standardized by weight) by children with liver problems. This amount has an asymmetric distribution and includes values equal to zero for children who did not receive a transfusion. The usual generalized linear models for positive variables do not allow the inclusion of zeros. For the positive data, such models with gamma and inverse Gaussian distributions were fitted, and a log-normal model was also considered. Analysis of the standardized residuals indicated heteroscedasticity, and the extra variability was therefore modelled using the GAMLSS class of models. The third approach consists of models based on a mixture of zeros and distributions for positive values, recently included in the GAMLSS family. These models combine an asymmetric distribution for the positive data with the probability of occurrence of zeros. In the analysis of the transfusion data, the inverse Gaussian distribution showed a better fit than the other distributions considered, as it accommodates strongly asymmetric data. The effects of the explanatory variables Kasai (occurrence of a previous operation) and PELD (a four-level measure of patient severity), as well as their interaction, on the mean and variability of the amount of blood received were significant. The possibility of including explanatory variables to model the dispersion parameter allows the extra variability, beyond its dependence on the mean, to be better explained and improves the fit of the model. The probability of not receiving a transfusion depends significantly only on PELD. The proposal of a single model that combines the presence of zeros with several asymmetric distributions facilitates the fitting of the data and the analysis of residuals. Its results are equivalent to the approach in which the occurrence or not of a transfusion is analysed with a logistic model, independently of the modelling of the positive data with asymmetric distributions.
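A hedged sketch of the two-part (zero-inflated) idea described above (illustrative only, on synthetic data; it uses a logistic model for the zero part and a gamma GLM with log link for the positive part, rather than the GAMLSS implementation used in the study, and all covariate effects are invented for the example).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, GammaRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic covariates standing in for Kasai (binary) and PELD (4 levels).
kasai = rng.integers(0, 2, n)
peld = rng.integers(1, 5, n)
X = np.column_stack([kasai, peld])

# Zero part: probability of receiving no transfusion depends on PELD.
p_zero = 1 / (1 + np.exp(-(1.0 - 0.6 * peld)))
received = rng.uniform(size=n) > p_zero

# Positive part: gamma-distributed amount for those who received blood.
mu = np.exp(0.5 + 0.3 * kasai + 0.2 * peld)
amount = np.where(received, rng.gamma(shape=2.0, scale=mu / 2.0), 0.0)

# Fit the two parts separately.
zero_model = LogisticRegression().fit(X, (amount == 0).astype(int))
pos_model = GammaRegressor(alpha=0.0).fit(X[amount > 0], amount[amount > 0])

# Expected amount = P(positive) * E[amount | positive].
p_pos = 1 - zero_model.predict_proba(X)[:, 1]
expected = p_pos * pos_model.predict(X)
print("mean observed:", amount.mean().round(2),
      "mean predicted:", expected.mean().round(2))
```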