41

Do absoluto em Spinoza : fundamentos para a ação individual

Fachinelli, Lucas Sartor 29 June 2017 (has links)
This dissertation aims to find the relationship between the unity of God as conceived by Spinoza and the plurality of possibilities of human action. To reach this point, the problem referred to by some commentators as the passage from the infinite modes and attributes (of God) to the finite modes is analyzed. The issue arises because the final parts of the first part of Spinoza's Ethics lack clarity, leaving several interpretations open. Building on this analysis, the key concepts for the work are presented, among them information theory and mathematical analyses of infinity, culminating in the presentation of the concept of entropy, understood as the energy of possibility. With an approach grounded in action theory, the work demonstrates how, from one God, a world of infinitely many finite modes (things) can exist through the law of entropy. It concludes that, just like the finite existence of the modes, human actions lie in a realm of the possible: not totally determined, but all with possible and foreseeable causes, whether in God or in the human affects. / Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
43

Framework to Evaluate Entropy Based Data Fusion Methods in Supply Chain Management

Tran, Huong Thi 12 1900 (has links)
This dissertation explores data fusion methodology for deducing an overall inference from data gathered from multiple heterogeneous sources. If there existed a single source of reliable, unbiased data, data fusion would not be necessary. Data fusion combines data from multiple diverse sources so that the desired information - such as the population mean - is improved despite redundancies, inaccuracies, biases, and inflated variability in the data. The approach in this study combines "inputs" from distinct sources so that the information is "fused"; another way of describing this process is "data integration." The important assumptions are:
1. Several sources provide "inputs" used to estimate the parameters of a probability distribution.
2. Because the distributions of the data from the sources are heterogeneous, some sources are less reliable.
3. Distortions, bias, censorship, and systematic errors may be more prominent in data from certain sources.
4. The sample size of the source data - the number of "inputs" - may be very small.
Examples of information from multiple sources are abundant: traffic information from sensors at intersections, economic indicators from various sources, demand data for a product using similar retail stores as sources, polling data from various sources, and fatality counts from different media sources after a catastrophic event. This dissertation addresses a gap in the operations literature through research questions regarding entropy-based data fusion (EBDF) approaches to estimation, organized into three separate but unifying essays.
Essay 1 provides an overview of the supporting literature, and a numerical analysis of airline maximum wait time data illustrates the underlying issues involved in EBDF methods. It addresses the research question: why consider alternative entropy-based weighting methods?
Essay 2 introduces 13 data fusion methods, and a Monte Carlo simulation study examines their performance in estimating the mean of a population with either a normal or a lognormal distribution. It addresses the following research questions:
1. Can an alternative formulation of Shannon's entropy enhance the performance of Sheu's (2010) data fusion approach?
2. Do symmetric and skewed distributions affect the 13 data fusion methods differently?
3. Do negative and positive biases affect the performance of the 13 methods differently?
4. Do entropy-based data fusion methods outperform non-entropy-based methods?
5. Which data fusion methods are recommended for symmetric and skewed data sets when no bias is present, and what is the recommendation when there are few data sources?
Essay 3 explores the use of the data fusion estimates of the population mean in a newsvendor problem. A Monte Carlo simulation study investigates the accuracy of using the estimates from Essay 2 as the parameter estimate for a demand distribution that follows an exponential distribution. It addresses the following research questions:
1. Do data fusion methods with relatively strong performance in estimating the population mean also perform relatively strongly in estimating the optimal demand under a given ratio of overage and underage costs?
2. Do any of the data fusion methods deteriorate or improve with the introduction of positive and negative bias?
3. Do the alternative formulations of Shannon's entropy enhance the relative performance of the methods?
4. Does the relative rank ordering of the methods' performance differ between Essay 2 and Essay 3?
The contribution of this research is to introduce alternative EBDF methods and to establish a framework for using EBDF methods in supply chain decision making. A comparative Monte Carlo simulation study provides a basis to investigate the robustness of the proposed methods for estimating population parameters in a newsvendor problem with a known distribution but an unknown parameter. A sensitivity analysis determines the effect of the number of sources, the sample size, and the distributions.
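For readers unfamiliar with entropy-based weighting, the sketch below illustrates one standard construction, the classical entropy weight method: sources whose observations have higher normalized Shannon entropy are treated as less informative and down-weighted when fusing per-source mean estimates. This is an illustrative assumption, not a reproduction of Sheu (2010) or of any of the dissertation's 13 methods.

```python
import numpy as np

def entropy_weight_fusion(samples_by_source):
    """Fuse per-source mean estimates with entropy-based weights.

    Sketch of the classical entropy-weight method (EWM): a source whose
    observations are more spread out (higher normalized Shannon entropy)
    receives a lower weight. `samples_by_source` is a list of 1-D arrays
    of positive observations, one per source.
    """
    means, divergences = [], []
    for x in samples_by_source:
        m = len(x)
        p = x / x.sum()                             # normalize to a pmf (assumes positive data)
        h = -(p * np.log(p)).sum() / np.log(m)      # normalized Shannon entropy in [0, 1]
        means.append(x.mean())
        divergences.append(1.0 - h)                 # "degree of divergence" of the source
    d = np.array(divergences)
    w = d / d.sum() if d.sum() > 0 else np.full(len(d), 1.0 / len(d))
    return float(np.dot(w, means)), w

rng = np.random.default_rng(0)
# Three sources estimating the same demand, with increasing noise.
sources = [rng.lognormal(3.0, s, size=30) for s in (0.2, 0.5, 1.0)]
fused, weights = entropy_weight_fusion(sources)
print(f"fused mean estimate: {fused:.2f}, weights: {np.round(weights, 3)}")
```

The fused estimate would then serve as the demand-distribution parameter in the newsvendor setting Essay 3 describes.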
44

Information theoretic models of storage and memory

Hall, Susan Aileen January 1982 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING / Includes bibliographical references. / by Susan Aileen Hall. / M.S.
45

A probabilistic framework and algorithms for modeling and analyzing multi-instance data

Behmardi, Behrouz 28 November 2012 (has links)
Multi-instance data, in which each object (e.g., a document) is a collection of instances (e.g., words), are widespread in machine learning, signal processing, computer vision, bioinformatics, music, and the social sciences. Existing probabilistic models, e.g., latent Dirichlet allocation (LDA), probabilistic latent semantic indexing (pLSI), and discrete component analysis (DCA), have been developed for modeling and analyzing multi-instance data. Such models introduce a generative process for multi-instance data that includes a low-dimensional latent structure. While such models offer great freedom in capturing the natural structure in the data, their inference may present challenges. For example, the sensitivity of such models to the choice of hyper-parameters requires careful inference (e.g., through cross-validation), which results in large computational complexity. The inference for fully Bayesian models, which contain no hyper-parameters, often involves slowly converging sampling methods. In this work, we develop approaches for addressing such challenges and further enhancing the utility of such models. This dissertation demonstrates a unified convex framework for probabilistic modeling of multi-instance data. The three main aspects of the proposed framework are as follows. First, joint regularization is incorporated into multiple density estimation to simultaneously learn the structure of the distribution space and infer each distribution. Second, a novel confidence constraints framework is used to facilitate a tuning-free approach to control the amount of regularization required for the joint multiple density estimation, with theoretical guarantees on correct structure recovery. Third, we formulate the problem using a convex framework and propose efficient optimization algorithms to solve it. This work addresses the unique challenges associated with both discrete and continuous domains. In the discrete domain, we propose a confidence-constrained rank minimization (CRM) to recover the exact number of topics in topic models, with theoretical guarantees on the recovery probability and the mean squared error of the estimation. We provide a computationally efficient optimization algorithm for the problem to further the applicability of the proposed framework to large real-world datasets. In the continuous domain, we propose to use the maximum entropy (MaxEnt) framework for multi-instance datasets. In this approach, bags of instances are represented as distributions using the principle of MaxEnt. We learn basis functions which span the space of distributions for jointly regularized density estimation. The basis functions are analogous to topics in a topic model. We validate the efficiency of the proposed framework in the discrete and continuous domains through an extensive set of experiments on synthetic datasets as well as on real-world image and text datasets, and compare the results with state-of-the-art algorithms. / Graduation date: 2013
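As a concrete stand-in for the idea of representing bags of instances as distributions, the following sketch quantizes instances against a shared codebook and describes each bag by its empirical histogram. The codebook size and the use of k-means are illustrative assumptions — a simplification, not the dissertation's MaxEnt construction.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy multi-instance data: each bag is a set of 2-D instances.
rng = np.random.default_rng(1)
bags = [rng.normal(loc=c, scale=0.5, size=(40, 2)) for c in ([0, 0], [2, 2], [0, 2])]

# Shared codebook over all instances (illustrative: 8 codewords via k-means).
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(bags))

def bag_to_distribution(bag, codebook, n_codes=8):
    """Represent a bag as an empirical distribution over codewords."""
    counts = np.bincount(codebook.predict(bag), minlength=n_codes)
    return counts / counts.sum()

for i, bag in enumerate(bags):
    print(f"bag {i}:", np.round(bag_to_distribution(bag, codebook), 2))
```

Once every bag lives in the space of distributions, density estimation over that space can be regularized jointly, which is the role the learned basis functions play in the framework described above.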
46

On Generalized Measures Of Information With Maximum And Minimum Entropy Prescriptions

Dukkipati, Ambedkar 03 1900 (has links)
Kullback-Leibler relative-entropy, or KL-entropy, of P with respect to R, defined as $\int_X \ln \frac{dP}{dR}\, dP$, where P and R are probability measures on a measurable space $(X, \mathcal{M})$, plays a basic role in the definitions of classical information measures. It overcomes a shortcoming of Shannon entropy, whose discrete-case definition cannot be extended naturally to the nondiscrete case. Further, entropy and other classical information measures can be expressed in terms of KL-entropy, and hence the properties of their measure-theoretic analogs follow from those of measure-theoretic KL-entropy. An important theorem in this respect is the Gelfand-Yaglom-Perez (GYP) theorem, which equips KL-entropy with a fundamental definition and can be stated as: measure-theoretic KL-entropy equals the supremum of KL-entropies over all measurable partitions of X. In this thesis we provide the measure-theoretic formulations for 'generalized' information measures, and state and prove the corresponding GYP theorem - the 'generalizations' being in the sense of Rényi and of nonextensive statistics, both of which are explained below.
The Kolmogorov-Nagumo average, or quasilinear mean, of a vector $x = (x_1, \ldots, x_n)$ with respect to a pmf $p = (p_1, \ldots, p_n)$ is defined as $\langle x \rangle_\psi = \psi^{-1}\!\left( \sum_{k=1}^{n} p_k\, \psi(x_k) \right)$, where $\psi$ is an arbitrary continuous and strictly monotone function. Replacing the linear averaging in Shannon entropy with Kolmogorov-Nagumo averages (KN-averages), and further imposing the additivity constraint - a characteristic property of the underlying information associated with a single event, which is logarithmic - leads to the definition of α-entropy, or Rényi entropy. This is the first formal, well-known generalization of Shannon entropy. Using this recipe of Rényi's generalization, one can prepare only two information measures: Shannon entropy and Rényi entropy. Indeed, using this formalism Rényi characterized these additive entropies in terms of axioms on KN-averages. On the other hand, if one generalizes the information of a single event in the definition of Shannon entropy by replacing the logarithm with the so-called q-logarithm, defined as $\ln_q x = \frac{x^{1-q} - 1}{1 - q}$, one gets what is known as Tsallis entropy. Tsallis entropy is also a generalization of Shannon entropy, but it does not satisfy the additivity property. Instead, it satisfies pseudo-additivity of the form $x \oplus_q y = x + y + (1 - q)xy$, and hence it is also known as nonextensive entropy. One can apply Rényi's recipe in the nonextensive case by replacing the linear averaging in Tsallis entropy with KN-averages and thereby imposing the constraint of pseudo-additivity. A natural question that arises is: what are the various pseudo-additive information measures that can be prepared with this recipe? We prove that Tsallis entropy is the only one. Here, we mention that one important characteristic of this generalized entropy is that while the canonical distributions resulting from 'maximization' of Shannon entropy are exponential in nature, in the Tsallis case they are power-law distributions. The concept of maximum entropy (ME), originally from physics, has been promoted to a general principle of inference primarily by the works of Jaynes and, later on, Kullback. This connects information theory and statistical mechanics via the principle that the states of thermodynamic equilibrium are states of maximum entropy, and further connects to statistical inference via the prescription: select the probability distribution that maximizes the entropy.
The two fundamental principles related to the concept of maximum entropy are Jaynes' maximum entropy principle, which involves maximizing Shannon entropy, and Kullback's minimum entropy principle, which involves minimizing relative-entropy, with respect to appropriate moment constraints. Though relative-entropy is not a metric, in cases involving distributions resulting from relative-entropy minimization one can bring forth certain geometrical formulations. These are reminiscent of squared Euclidean distance and satisfy an analogue of Pythagoras' theorem. This property is referred to as the Pythagoras theorem of relative-entropy minimization, or triangle equality, and plays a fundamental role in geometrical approaches to statistical estimation theory such as information geometry. In this thesis we state and prove the equivalent of Pythagoras' theorem in the nonextensive formalism. For this purpose we study relative-entropy minimization in detail and present some results. Finally, we demonstrate the use of power-law distributions, resulting from ME prescriptions of Tsallis entropy, in evolutionary algorithms. This work is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics. To sum up, in light of their well-known axiomatic and operational justifications, this thesis establishes some results pertaining to the mathematical significance of generalized measures of information. We believe that these results represent an important contribution towards the ongoing research on understanding the phenomenon of information.
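The additivity distinction drawn above is easy to check numerically. A minimal sketch, using the standard definitions (natural logarithms; Rényi entropy $H_\alpha(p) = \frac{1}{1-\alpha}\ln\sum_k p_k^\alpha$, Tsallis entropy $S_q(p) = \frac{1 - \sum_k p_k^q}{q-1}$):

```python
import numpy as np

def renyi(p, alpha):
    # Rényi entropy: log(sum p^alpha) / (1 - alpha), alpha != 1
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def tsallis(p, q):
    # Tsallis entropy: (1 - sum p^q) / (q - 1), q != 1
    return (1.0 - (p ** q).sum()) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.6, 0.4])
q = 0.7

# Joint distribution of two independent variables.
joint = np.outer(p, r).ravel()

# Rényi entropy (like Shannon) is additive over independent systems...
assert np.isclose(renyi(joint, 2.0), renyi(p, 2.0) + renyi(r, 2.0))

# ...while Tsallis is pseudo-additive:
# S_q(A,B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B)
lhs = tsallis(joint, q)
rhs = tsallis(p, q) + tsallis(r, q) + (1 - q) * tsallis(p, q) * tsallis(r, q)
assert np.isclose(lhs, rhs)
print("additivity and pseudo-additivity verified:", renyi(joint, 2.0), lhs)
```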
47

Classificação de falhas em máquinas elétricas usando redes neurais, modelos wavelet e medidas de informação / Fault classification in electric machines using neural networks, wavelet models and information measures

Silva, Lyvia Regina Biagi 21 February 2014 (has links)
CAPES; CNPq / This work presents a methodology for the detection and classification of faults in three-phase induction motors connected directly to the power grid. The proposed method is based on the analysis of the stator current signals, with and without the presence of faults in the bearings, stator, and rotor. These faults cause specific frequency components, related to the rotational speed of the machine, to appear in the signal. The signals were analyzed using the wavelet-packet decomposition, which allows a multiresolution evaluation of the signals in frequency bands of varying widths. From this decomposition, predictability measures were estimated, such as relative entropy, predictive power, and normalized error variance, the last obtained with predictable component analysis. With these measures, it was possible to verify which components of the decomposition are most predictable. In this work, the normalized error variance and the predictive power were used as inputs to three classifier topologies of artificial neural networks: multilayer perceptron, radial basis function networks, and Kohonen self-organizing maps. Six different input vectors for the neural networks were tested, varying the predictability measures used and the number of elements per vector. The experiments considered signal samples from different motors, with several kinds of faults, operating under various torque regimes and voltage unbalance conditions. The signals were first classified into two patterns: with and without the presence of faults. Next, the kind of fault present in the signals was detected: bearing, stator, or rotor. Finally, the samples were classified within the fault subgroup to which they belonged.
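To make the feature-extraction stage concrete, here is a minimal sketch of a wavelet-packet band decomposition using PyWavelets. The wavelet family, decomposition level, and the use of relative band energy as the per-band measure are illustrative assumptions, not the dissertation's exact predictability measures.

```python
import numpy as np
import pywt  # PyWavelets

def band_features(signal, wavelet="db4", level=4):
    """Wavelet-packet band energies as features for fault classification.

    Decompose a stator-current signal into 2**level frequency bands and
    return each band's relative energy; per-band predictability measures
    would be computed from the same band signals.
    """
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    bands = [node.data for node in wp.get_level(level, order="freq")]
    energies = np.array([np.sum(b ** 2) for b in bands])
    return energies / energies.sum()

# Toy stator current: 60 Hz fundamental plus a small fault-related harmonic.
fs = 2000.0
t = np.arange(0, 1, 1 / fs)
current = np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 180 * t)
print("relative band energies:", np.round(band_features(current), 3))
```

The resulting feature vector is what would feed the neural network classifiers named above.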
48

Separação cega de misturas com não-linearidade posterior utilizando estruturas monotônicas e algoritmos bio-inspirados de otimização / Blind separation of post-nonlinear mixture using monotonic structures and bio-inspired optimization algorithms

Pereira, Filipe de Oliveira 16 August 2018 (has links)
Advisors: Romis Ribeiro de Faissol Attux, Leonardo Tomazeli Duarte / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This work aims at the development of Blind Source Separation (BSS) methods for Post-NonLinear (PNL) mixing models. In this particular case, despite the presence of nonlinear elements in the mixing model, it is still possible to recover the sources through Independent Component Analysis (ICA) methods. However, there are two major problems in the application of ICA techniques to PNL models. The first concerns a restriction on the nonlinear functions present in the PNL model: they must be monotonic by construction. The second is related to the adjustment of the PNL separating system via ICA-based cost functions: there may be sub-optimal local minima. To cope with the first problem, we investigate three types of monotonic nonlinear structures. To circumvent the problem of sub-optimal minima, we consider bio-inspired algorithms with significant global search potential. Finally, we perform a set of experiments in representative scenarios to identify, among the considered strategies, the best ones in terms of the quality of the retrieved sources and overall complexity. / Master's degree in Electrical Engineering
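A toy PNL pipeline can be sketched as follows. For simplicity, the componentwise monotonic distortion (tanh) is assumed known and inverted exactly before a standard linear ICA; in the blind setting the dissertation addresses, that compensation stage is precisely what must be learned with monotonic structures and bio-inspired search.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Post-nonlinear (PNL) mixture: linear mixing followed by a componentwise
# monotonic (hence invertible) distortion.
rng = np.random.default_rng(42)
n = 5000
sources = np.c_[np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1.0, n),
                rng.uniform(-1, 1, n)]            # two non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # mixing matrix
observed = np.tanh(sources @ A.T)                 # monotonic post-nonlinearity

# Compensate the nonlinearity (assumed known here), then apply linear ICA.
compensated = np.arctanh(np.clip(observed, -0.999, 0.999))
estimated = FastICA(n_components=2, random_state=0).fit_transform(compensated)

# Agreement with the true sources, up to permutation and scale as usual in ICA.
corr = np.corrcoef(estimated.T, sources.T)[:2, 2:]
print("|correlation| matrix:\n", np.round(np.abs(corr), 2))
```

If the distortion were non-monotonic, the arctanh step would have no well-defined inverse — the motivation for restricting the separating structures to monotonic families.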
49

Compressão de dados de demanda elétrica em Smart Metering / Data compression of electricity demand in Smart Metering

Flores Rodriguez, Andrea Carolina, 1987- 08 August 2014 (has links)
Advisor: Gustavo Fraidenraich / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Data compression of recorded residential electricity consumption becomes extremely necessary in Smart Metering, in order to solve the problem of the large volumes of data generated by meters. The main contribution of this thesis is a scheme for representing the recorded information in its most compact theoretical form, suggesting a way to reach the fundamental compression limit set by the entropy of the source for any compression technique available in the meter. The proposal consists of a transformation of the data encoding, based on processing by segmentation: in time, at registration rates from 1/900 Hz to 1 Hz, and in the values of residential electricity consumption. The latter is subdivided into compression by amplitude, changing its granularity, and digital compression to represent consumption with as few bits as possible, using PCM-Huffman, DPCM-Huffman, and entropy coding assuming different source orders. The scheme is applied to data modeled by inhomogeneous Markov chains representing the activities of household members that influence electricity consumption, and to publicly available real data. The scheme is assessed by analyzing the compression trade-offs among high registration rates, the distortion resulting from digitizing the data, and the exploitation of the correlation between consecutive samples. Several numerical examples illustrate the efficiency of the compression limits. The analysis reveals that the best data compression schemes are found by exploiting the correlation among the samples. / Master's degree in Electrical Engineering (Telecommunications and Telematics)
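The benefit of coding differences rather than raw samples (DPCM versus PCM) can be illustrated with the empirical entropy, which lower-bounds the per-symbol rate of a Huffman code. A minimal sketch on synthetic demand data (the signal model is an illustrative assumption):

```python
import numpy as np

def empirical_entropy(symbols):
    """Empirical Shannon entropy in bits/symbol (the Huffman-code lower bound)."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Toy demand profile sampled at 1 Hz: slowly varying load plus small noise,
# quantized to integer levels.
rng = np.random.default_rng(7)
t = np.arange(3600)
demand = 200 + 80 * np.sin(2 * np.pi * t / 1800) + rng.normal(0, 2, t.size)
q = np.round(demand).astype(int)

h_pcm = empirical_entropy(q)            # PCM: code the samples directly
h_dpcm = empirical_entropy(np.diff(q))  # DPCM: code first differences

print(f"PCM entropy:  {h_pcm:.2f} bits/sample")
print(f"DPCM entropy: {h_dpcm:.2f} bits/sample (correlation exploited)")
```

Because consecutive demand samples are strongly correlated, the differenced stream concentrates near zero and its entropy drops well below the PCM figure — the effect behind the thesis's conclusion that the best schemes exploit inter-sample correlation.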
50

Decomposição de sinais eletromiográficos de superfície misturados linearmente utilizando análise de componentes independentes / Decomposition of linearly mixed surface electromyographic signals using independent component analysis

Almeida, Tiago Paggi de 20 August 2018 (has links)
Advisor: Antônio Augusto Fasolo Quevedo / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Electromyography is a clinical practice that provides information regarding the physiological condition of the neuromuscular system, including the analysis of its contractile functional unit, the motor unit. The electromyographic signal is an electrical signal resulting from the ionic transients of motor unit action potentials, captured by invasive or non-invasive electrodes. Invasive electrodes can detect the action potentials of even a single motor unit, although the procedure is time-consuming and uncomfortable. Surface electrodes detect action potentials noninvasively, but the detected signal is a mixture of action potentials from several motor units within the detection area of the electrode, resulting in a complex interference pattern that is difficult to interpret. Blind Source Separation techniques, such as Independent Component Analysis, have proven effective for decomposing surface electromyographic signals into the constituent motor unit action potentials. The objective of this project was to develop a system to capture surface myoelectric signals and to analyze the viability of decomposing linearly mixed intramuscular myoelectric signals using Independent Component Analysis. The system integrates an electrode matrix with up to seven channels, a preprocessing module, software for controlling the capture of the surface electromyographic signals, and the FastICA algorithm in MATLAB for signal decomposition. The results show that the system was able to capture surface electromyographic signals, and that the linearly mixed intramuscular electromyographic signals were reliably decomposed. / Master's degree in Electrical Engineering (Biomedical Engineering)
