701 |
On the error analysis of correlation devices
Chang, Ke-yen, January 1969
The use of uniformly distributed auxiliary random noise in the polarity-coincidence correlator has been described in the past. It has the advantage of constructional simplicity, and it gives an unbiased and consistent estimate of the correlation function. In this thesis, the method of using auxiliary random noise is extended to the multi-level digital correlator. It is found that any random noise whose characteristic function has periodic zeros can be used as an auxiliary noise, that uniformly distributed noise is a special case, and that quantizing the auxiliary random noise has the same effect as directly quantizing the input signals.
The mean-square error of this modified digital correlator is analyzed in detail. It differs from the mean-square error of the direct correlator only by a term inversely proportional to the total number of samples taken, provided that the power spectrum of the auxiliary noise is wide enough. This additional error is the combined effect of sampling, quantizing, and adding auxiliary noise. It can be reduced to any desired value by taking a larger number of samples, by using a higher sampling rate, or by quantizing more finely. For completeness, the mean-square errors of the digital correlator, the Stieltjes correlator and the modified Stieltjes correlator are also derived.
The mean-square error of the modified polarity-coincidence correlator, which is a special case of the modified digital correlator, is compared with the error of the multi-level modified digital correlator. Despite its constructional simplicity, the modified polarity-coincidence correlator has a much larger error than the multi-level correlator. A small increase in the number of quantization levels reduces the error considerably.
The implementation of a modified digital correlator with coarse quantization is also considered. A fast direct multiplier is designed to obtain high operation speed. Finally, the use of pseudo-random noise as an auxiliary noise is discussed. (Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate)
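The auxiliary-noise idea described above can be sketched in a few lines (a toy illustration, not the thesis's implementation, assuming bounded inputs): adding independent uniform dither on [-A, A] to each channel and correlating only the signs gives an unbiased estimate of E[xy] after rescaling by A squared.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200_000
A = 2.0  # dither amplitude; must satisfy |x|, |y| <= A

# Bounded, correlated test signals: a shared component plus independent parts.
s = rng.uniform(-1, 1, N)
x = s + rng.uniform(-1, 1, N)
y = s + rng.uniform(-1, 1, N)
# True correlation E[xy] = Var(s) = 1/3 for s uniform on [-1, 1].

# Direct correlator: sample mean of the product.
r_direct = np.mean(x * y)

# Modified polarity-coincidence correlator: add independent uniform dither
# on [-A, A] to each channel, keep only the signs, and rescale. For |x| <= A,
# E[sign(x + U)] = x / A, so the rescaled estimate is unbiased.
u = rng.uniform(-A, A, N)
v = rng.uniform(-A, A, N)
r_pcc = A**2 * np.mean(np.sign(x + u) * np.sign(y + v))

print(r_direct, r_pcc)  # both near 1/3
```

The sign-based estimate is noisier than the direct one at the same sample count, which is exactly the error gap the thesis quantifies.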
|
702 |
Can methods of machine learning be used to better predict lactation curves for bovines?
Östling, Andreas, January 2017
A random forest is compared to an OLS model for predicting lactation curves for cows. Both methods were estimated and tested using data from the period 2015-01 to 2015-09. Random forests outperform OLS for modeling lactation curves, decreasing test MSE by approximately 26%. Data were provided by Sveriges Lantbruksuniversitet and include 75,558 milking events from 320 cows. The date of the milking, the time of day when the milking occurred, and which cow was milked were found to be important variables for accurate predictions.
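A rough illustration of the comparison, on synthetic data following Wood's lactation curve rather than the SLU data set, and assuming scikit-learn is available:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic milk-yield data following Wood's lactation curve
# y = a * t^b * exp(-c * t), with per-cow variation and noise.
n = 3000
t = rng.uniform(5, 300, n)            # days in milk
a = rng.normal(20, 2, n)              # per-cow scale factor
milk = a * t**0.2 * np.exp(-0.004 * t) + rng.normal(0, 1, n)
X = np.column_stack([t, a])

X_tr, X_te, y_tr, y_te = train_test_split(X, milk, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

ols_mse = np.mean((ols.predict(X_te) - y_te) ** 2)
rf_mse = np.mean((rf.predict(X_te) - y_te) ** 2)
print(ols_mse, rf_mse)
```

On data this nonlinear the forest's advantage over a linear model mirrors the direction, though not the exact magnitude, of the thesis's 26% result.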
|
703 |
Prediction-based failure management for supercomputers
Ge, Wuxiang, January 2011
The growing requirements of a diversity of applications necessitate the deployment of large and powerful computing systems, and failures in these systems may cause severe damage, from loss of human life to economic harm. However, current fault tolerance techniques cannot meet the increasing requirements for reliability, so new solutions are urgently needed, and research on proactive schemes is one direction that may offer better efficiency. This thesis proposes a novel proactive failure management framework. Its goal is to reduce failure penalties and improve fault tolerance efficiency in supercomputers running complex applications. The proposed proactive scheme builds on two core components: failure prediction and proactive failure recovery. More specifically, the failure prediction component is based on the assessment of system events and employs semi-Markov models to capture the dependencies between failures and other events for the forecasting of forthcoming failures. Furthermore, a two-level failure prediction strategy is described that not only estimates future failure occurrences but also identifies the specific failure categories. Based on accurate failure forecasting, a prediction-based coordinated checkpoint mechanism is designed to construct extra checkpoints just before each predicted failure occurrence so that the wasted computational time can be significantly reduced. Moreover, a theoretical model has been developed to assess the proactive scheme, enabling calculation of the overall wasted computational time. The prediction component has been applied to industrial data from the IBM BlueGene/L system.
Results of the failure prediction component show a marked improvement in prediction accuracy over three other well-known prediction approaches, and demonstrate that the semi-Markov-based predictor, which achieves a precision of 87.41% and a recall of 77.95%, outperforms the other predictors.
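A drastically simplified stand-in for the prediction component (a first-order Markov predictor over a toy event log; the thesis uses semi-Markov models on real BlueGene/L event data):

```python
from collections import Counter, defaultdict

# Toy event log: each entry is an event category; "FATAL" marks a failure.
log = ["INFO", "WARN", "INFO", "WARN", "FATAL",
       "INFO", "INFO", "WARN", "FATAL",
       "INFO", "WARN", "INFO", "INFO"]

# Estimate first-order transition probabilities P(next | current).
counts = defaultdict(Counter)
for cur, nxt in zip(log, log[1:]):
    counts[cur][nxt] += 1

def p_failure_next(state):
    c = counts[state]
    total = sum(c.values())
    return c["FATAL"] / total if total else 0.0

# Raise an alarm when the conditional failure probability passes a threshold.
THRESH = 0.3
alarm = p_failure_next("WARN") > THRESH
print(p_failure_next("WARN"), p_failure_next("INFO"), alarm)
```

A semi-Markov model would additionally condition on the time spent in each state, which is what lets the thesis's predictor estimate when, not just whether, a failure is coming.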
|
704 |
Exploring Trusted Platform Module Capabilities: A Theoretical and Experimental Study
Gunupudi, Vandana, 05 1900
Trusted platform modules (TPMs) are hardware modules bound to a computer's motherboard that are now included in many desktops and laptops. Augmenting computers with these hardware modules adds powerful functionality in distributed settings, allowing us to reason about the security of these systems in new ways. In this dissertation, I study the functionality of TPMs from a theoretical as well as an experimental perspective. On the theoretical front, I leverage various features of TPMs to construct applications like random oracles that are impossible to implement in a standard model of computation. Apart from random oracles, I construct a new cryptographic primitive: a non-interactive form of the standard cryptographic primitive of oblivious transfer. I apply this new primitive to secure mobile agent computations, where interaction between various entities is typically required to ensure security. I prove these constructions are secure using standard cryptographic techniques and assumptions. To test the practicality of these constructions and their applications, I performed an experimental study, both on an actual TPM and on a software TPM simulator enhanced to reflect timings from a real TPM. This allowed me to benchmark the performance of the applications and test the feasibility of the proposed extensions to standard TPMs. My tests also show that these constructions are practical.
|
705 |
Fluid limit for the Erdős-Rényi random graph
Fabio Marcellus Lima Sá Makiyama Lopes, 23 April 2010
In this work, we apply the Breadth-First Search algorithm to find the size of a connected component of the Erdős-Rényi random graph. A Markov chain is obtained from this procedure. We present some well-known results about the behavior of this Markov chain, and combine some of these results to obtain a proposition about the probability that the component reaches a certain size and a convergence result about the state of the chain at that time. Next, we apply the convergence theorem of Darling (2002) to the sequence of rescaled Markov chains indexed by N, the number of vertices of the graph, to show that the trajectories of these chains converge uniformly in probability to the solution of an ordinary differential equation. From the latter result follows the well-known weak law of large numbers for the giant component of the Erdős-Rényi random graph in the supercritical case. Moreover, we obtain the fluid limit for an epidemic model which is an extension of that proposed in Kurtz et al. (2008).
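The BFS exploration underlying the analysis is easy to simulate directly (a sketch illustrating the supercritical giant component, not the thesis's proofs):

```python
import random
from collections import deque

random.seed(1)

N, c = 2000, 2.0
p = c / N

# Build an Erdos-Renyi graph G(N, p).
adj = [[] for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].append(j)
            adj[j].append(i)

# Breadth-First Search exploration: size of each connected component.
seen = [False] * N

def component_size(start):
    q, size = deque([start]), 0
    seen[start] = True
    while q:
        v = q.popleft()
        size += 1
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                q.append(w)
    return size

giant = max(component_size(v) for v in range(N) if not seen[v])
frac = giant / N
print(frac)  # near the root of x = 1 - exp(-c x), about 0.797 for c = 2
```

The law of large numbers in the abstract is exactly this: the giant component fraction concentrates around the positive root of x = 1 - exp(-c x) as N grows.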
|
706 |
Efficient algorithms for semi-Markov conditional random fields and their application to the analysis of genomic sequences
Ígor Bonadio, 06 August 2018
Conditional Random Fields are discriminative probabilistic models that have been successfully used in several areas such as natural language processing, speech recognition, and bioinformatics. However, implementing efficient algorithms for this kind of model is not an easy task. In this thesis we present a framework that supports the development of and experimentation with Semi-Markov Conditional Random Fields (semi-CRFs). It provides an efficient C++ implementation and an intuitive API that allows users to define, train, and evaluate models. It was built as an extension of the ToPS framework and can use ToPS probabilistic models as specialized feature functions. We also use our implementation of semi-CRFs to build a promoter predictor that outperforms existing predictors.
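The decoding step at the heart of a semi-CRF is a segment-level dynamic program. A toy semi-Markov Viterbi sketch follows, with a hypothetical hand-written segment scorer standing in for learned feature functions (this is not the ToPS-based implementation):

```python
# Minimal semi-Markov Viterbi: find the best labelled segmentation of a
# sequence, where scores attach to whole segments rather than to positions.
seq = "aaabba"
LABELS = ["A", "B"]
MAX_LEN = 4  # maximum segment length

def seg_score(seq, i, j, label):
    # Hypothetical scorer: reward characters matching the segment label.
    target = "a" if label == "A" else "b"
    return sum(1 if ch == target else -1 for ch in seq[i:j])

n = len(seq)
best = [(-float("inf"), None)] * (n + 1)
best[0] = (0.0, None)
for j in range(1, n + 1):
    for i in range(max(0, j - MAX_LEN), j):
        for lab in LABELS:
            score = best[i][0] + seg_score(seq, i, j, lab)
            if score > best[j][0]:
                best[j] = (score, (i, lab))

# Backtrack to recover the optimal segmentation.
segments, j = [], n
while j > 0:
    i, lab = best[j][1]
    segments.append((i, j, lab))
    j = i
segments.reverse()
print(best[n][0], segments)
```

Bounding the segment length (MAX_LEN) is the standard trick that keeps this dynamic program efficient, which is the kind of concern the thesis's algorithms address.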
|
707 |
Automatic generation of hardware Tree Classifiers
Thanjavur Bhaaskar, Kiran Vishal, 10 July 2017
Machine learning is growing in popularity and spreading across different fields for various applications. Following this trend, machine learning algorithms are being implemented on different hardware platforms and evaluated for high test accuracy and throughput. FPGAs are a well-suited hardware platform for machine learning because of their re-programmability and lower power consumption. However, programming FPGAs for machine learning algorithms requires substantial engineering time and effort compared to a software implementation. We propose a software-assisted design flow for programming FPGAs for machine learning algorithms using our hardware library. The library is highly parameterized and accommodates tree classifiers; at present it consists of the components required to implement decision trees and random forests. The whole flow is automated by a Python script that takes the user from a dataset and design choices to hardware description code for the trained machine learning model.
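The tree-to-hardware step can be illustrated with a toy generator. The module interface, signal names, and bit widths below are hypothetical, not the thesis's library:

```python
# A trained decision tree as nested dicts; leaves carry the class label.
tree = {
    "feature": "x0", "threshold": 10,
    "left":  {"label": 0},
    "right": {
        "feature": "x1", "threshold": 3,
        "left":  {"label": 0},
        "right": {"label": 1},
    },
}

def to_verilog(node, indent="    "):
    # Recursively emit an if/else chain mirroring the tree structure.
    if "label" in node:
        return f"{indent}out = 1'b{node['label']};\n"
    s = f"{indent}if ({node['feature']} <= {node['threshold']}) begin\n"
    s += to_verilog(node["left"], indent + "    ")
    s += f"{indent}end else begin\n"
    s += to_verilog(node["right"], indent + "    ")
    s += f"{indent}end\n"
    return s

hdl = (
    "module tree_classifier(input [7:0] x0, input [7:0] x1, output reg out);\n"
    "always @(*) begin\n"
    + to_verilog(tree)
    + "end\nendmodule\n"
)
print(hdl)
```

A purely combinational tree like this evaluates in a single cycle, which is the throughput advantage FPGA tree classifiers exploit.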
|
708 |
Hyperbolic random maps
Budzinski, Thomas, 09 November 2018
This thesis falls within the theory of random planar maps, which has been active for the last fifteen years, and more precisely within the study of hyperbolic models. We are first interested in a model of dynamical random triangulations based on edge-flips, for which we prove a lower bound on the mixing time. In the rest of this thesis, the main objects of study are the random hyperbolic triangulations called PSHT. These are hyperbolic variants of the Uniform Infinite Planar Triangulation (UIPT), introduced by Nicolas Curien in 2014. We first establish a near-critical scaling limit result: if we let the hyperbolicity parameter go to its critical value at the same time as the distances are renormalized, the PSHT converge to a random metric space that we call the hyperbolic Brownian plane. We also study fine metric properties of the PSHT and of the hyperbolic Brownian plane, such as the structure of their infinite geodesics, and we obtain new properties of the Poisson boundary of the PSHT. Finally, we are interested in another natural model of hyperbolic random maps: supercritical causal maps, which are obtained from supercritical Galton-Watson trees by adding edges between vertices at the same height. We establish metric hyperbolicity results for these maps, as well as properties of the simple random walk on them, including a positive speed result. Some of the properties we obtain are robust and generalize to any planar map containing a supercritical Galton-Watson tree.
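The Galton-Watson skeleton of these causal maps is straightforward to simulate. The sketch below estimates the survival probability of a supercritical tree, assuming Poisson(1.5) offspring as an illustrative choice; it is not part of the thesis's arguments:

```python
import math
import random

random.seed(7)

# Supercritical Galton-Watson process with Poisson(lam) offspring.
# Its extinction probability q solves q = exp(lam * (q - 1)).
lam = 1.5

def poisson(lam):
    # Knuth's method; fine for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def survives(generations=20, cap=300):
    pop = 1
    for _ in range(generations):
        if pop == 0:
            return False
        if pop > cap:  # a large population essentially never dies out
            return True
        pop = sum(poisson(lam) for _ in range(pop))
    return pop > 0

trials = 1000
frac = sum(survives() for _ in range(trials)) / trials
print(frac)  # theory: about 0.583 for lam = 1.5
```

Conditioning such a tree on survival gives the infinite skeleton to which the causal-map construction adds same-height edges.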
|
709 |
Anomaly detection in mechanical components based on Deep Learning and Random Cut Forests
Aichele Figueroa, Diego Andrés, January 2019
Thesis submitted for the degree of Ingeniero Civil Mecánico / Within the field of maintenance, monitoring a machine can be very useful, since it makes it possible to detect any anomaly in its internal operation and thus correct any defect before a more serious failure occurs.
In data mining, anomaly detection is the task of identifying anomalous elements, that is, those that differ from what is common within a data set. Anomaly detection has applications in many domains; for example, banks today use it to detect fraudulent purchases and possible scams from patterns of user behavior. Because this requires handling large amounts of data, its development within probabilistic machine learning is essential. A variety of algorithms has been developed to find anomalies; one of the best known among decision-tree methods is the Isolation Forest. Several works have built on the Isolation Forest algorithm and proposed improvements to it, such as the Robust Random Cut Forest, which improves the precision of anomaly detection and also offers the advantage of analyzing data streams dynamically, finding anomalies in real time. On the other hand, it has the disadvantage that the more attributes a data set contains, the more computation time is needed to detect an anomaly. Therefore, a method of attribute reduction, also known as dimensionality reduction, is used, and its effect on both the effectiveness and the efficiency of the algorithm is studied against running the algorithm without reducing the dimension of the data.
This thesis analyzes the Robust Random Cut Forest algorithm and finally proposes a possible improvement to it. To test the algorithm, an experiment with steel bars is carried out, recording their vibrations when excited by white noise. These data are processed in three different scenarios: without dimensionality reduction, with principal component analysis (PCA), and with an autoencoder. The first scenario (no dimensionality reduction) serves as a reference point for assessing how scenarios two and three vary in anomaly detection, effectiveness, and efficiency.
The results show that reducing the dimensions improves both computation time (efficiency) and precision (effectiveness) in finding an anomaly; the best results are obtained with principal component analysis (PCA).
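A minimal sketch of the dimensionality-reduction step paired with a simple anomaly score (PCA via SVD with reconstruction error on synthetic data; a simplification for illustration, not the Robust Random Cut Forest itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "vibration features": 200 normal samples lying near a
# 2-D subspace of a 10-D space, plus one planted anomaly.
n, d = 200, 10
basis = rng.normal(size=(2, d))
X = rng.normal(size=(n, 2)) @ basis + 0.05 * rng.normal(size=(n, d))
X[37] += 5.0  # planted anomaly, pushed off the subspace

# PCA via SVD, keeping k principal components.
k = 2
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:k].T @ Vt[:k]

# Anomaly score: reconstruction error after projecting back.
scores = np.linalg.norm(Xc - proj, axis=1)
print(int(np.argmax(scores)))  # index of the planted anomaly
```

In the thesis's pipeline the reduced representation would instead be fed to the Robust Random Cut Forest, trading this linear projection's speed against the forest's ability to score streaming data.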
|
710 |
Fall Risk Classification for People with Lower Extremity Amputations Using Machine Learning and Smartphone Sensor Features from a 6-Minute Walk Test
Daines, Kyle, 04 September 2020
Falls are a leading cause of injury and accidental injury death worldwide. Fall-risk prevention techniques exist, but fall-risk identification can be difficult. While clinical assessment tools are the standard for identifying fall risk, wearable sensors and machine learning could improve outcomes with automated and efficient techniques. Machine learning research in this area has focused on older adults; since people with lower limb amputations have greater falling and injury risk than the elderly, research is needed to evaluate these approaches with the amputee population.
In this thesis, random forest and fully connected feedforward artificial neural network (ANN) machine learning models were developed and optimized for fall-risk identification in amputee populations, using smartphone sensor data (phone at posterior pelvis) from 89 people with various levels of lower-limb amputation who completed a 6-minute walk test (6MWT). The best model was a random forest with 500 trees, using turn data and a feature set selected using correlation-based feature selection (81.3% accuracy, 57.2% sensitivity, 94.9% specificity, 0.59 Matthews correlation coefficient, 0.83 F1 score). After extensive ANN optimization with the best ranked 50 features from an Extra Trees Classifier, the best ANN model achieved 69.7% accuracy, 53.1% sensitivity, 78.9% specificity, 0.33 Matthews correlation coefficient, and 0.62 F1 score.
Features from a single smartphone during a 6MWT can be used with random forest machine learning for fall-risk classification in lower limb amputees. Model performance was similar to or better than the Timed Up and Go and Four Square Step Test. This model could be used clinically to identify individuals at risk of falling during a 6MWT, thereby reaching people who were not originally referred for fall screening. Since model specificity was very high, the risk of misclassifying a person who is not at fall risk is quite low, so few people would be incorrectly entered into fall-mitigation programs based on the test outcomes.
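The reported metrics are standard confusion-matrix quantities. With illustrative counts (not the thesis's data) they are computed as:

```python
import math

# Hypothetical confusion matrix for a binary fall-risk classifier
# (counts are illustrative only).
tp, fn, tn, fp = 30, 20, 90, 10

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)           # recall on fallers
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1  = 2 * precision * sensitivity / (precision + sensitivity)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(accuracy, sensitivity, specificity, f1, mcc)
```

MCC and F1 are reported alongside accuracy precisely because, as in this example, a class imbalance can make accuracy look better than the sensitivity warrants.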
|