121 |
Gaussian copula modelling for integer-valued time series. Lennon, Hannah, January 2016 (has links)
This thesis is concerned with the modelling of integer-valued time series. Such data arise naturally in many areas whenever a number of events is observed over time. The model considered in this study consists of a Gaussian copula with autoregressive-moving average (ARMA) dependence and discrete margins that can be specified or unspecified, with or without covariates. It can be interpreted as a 'digitised' ARMA model. An ARMA model is used for the latent process so that well-established methods in time series analysis can be applied. Still, the computation of the log-likelihood poses many problems because it is a sum of 2^N terms involving the Gaussian cumulative distribution function, where N is the length of the time series. We consider a Monte Carlo Expectation-Maximisation (MCEM) algorithm for maximum likelihood estimation of the model, which works well for small to moderate N. Then an Approximate Bayesian Computation (ABC) method is developed to take advantage of the fact that data can be simulated easily from an ARMA model and digitised; a spectral comparison method is used in the acceptance-rejection step. This is shown to work well for large N. Finally, we write the model in an R-vine copula representation and use a sequential algorithm for the computation of the log-likelihood. We evaluate the score and Hessian of the log-likelihood and give analytic expressions for the standard errors. The proposed methodologies are illustrated with simulation studies that highlight the advantages of incorporating classic ideas from time series analysis into modern methods of model fitting. For illustration we compare the three methods on US polio incidence data (Zeger, 1988) and discuss their relative merits.
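As a toy illustration of the 'digitised' ARMA construction described above (illustrative only, not the thesis code; function names and parameter values are assumptions), the sketch below simulates a latent Gaussian AR(1) process with unit marginal variance and pushes it through the Poisson quantile function to obtain an integer-valued series:

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_quantile(u, lam):
    """Smallest k with P(X <= k) >= u for X ~ Poisson(lam)."""
    k = 0
    pmf = math.exp(-lam)
    cdf = pmf
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def simulate_digitised_ar1(n, phi1, lam, seed=1):
    """Latent AR(1) with unit marginal variance, digitised to Poisson(lam) counts."""
    rng = random.Random(seed)
    sigma_e = math.sqrt(1.0 - phi1 ** 2)   # keeps Var(Z_t) = 1 marginally
    z = rng.gauss(0.0, 1.0)
    counts = []
    for _ in range(n):
        counts.append(poisson_quantile(phi(z), lam))
        z = phi1 * z + rng.gauss(0.0, sigma_e)
    return counts

counts = simulate_digitised_ar1(200, phi1=0.7, lam=3.0)
```

Each count is marginally Poisson(lam) while inheriting the serial dependence of the latent Gaussian process, which is exactly what makes the exact likelihood a sum over 2^N orthant terms.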
|
122 |
Inferência de emoções em fragmentos de textos obtidos do Facebook / Inference of emotions in fragments of texts obtained from the Facebook. Medeiros, Richerland Pinto [UNESP], 27 April 2017 (has links)
This research analyzes the use of the maximum entropy machine learning technique for natural language processing tasks, applied to the inference of emotions in texts obtained from the social network Facebook. The fundamental concepts of natural language processing tasks and of information theory were studied, together with an in-depth treatment of the entropic model as a text classifier. The data used in this research were short texts of at most 500 characters. The technique was applied in a supervised machine learning setting: part of the collected data was used as examples labelled with a set of predefined classes, in order to induce the learning mechanism to select the most probable emotion class for a given example. The proposed method achieved a mean accuracy of 90%, based on cross-validation.
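The maximum entropy classifier described above is equivalent to multinomial logistic regression over text features. The sketch below (illustrative only; the examples, labels, and training settings are invented, not the dissertation's data or code) trains such a model on bag-of-words features by gradient ascent:

```python
import math
from collections import defaultdict

# Hypothetical labelled examples (stand-ins for Facebook fragments).
DATA = [
    ("i love this so much", "joy"),
    ("this is wonderful and great", "joy"),
    ("i hate this awful thing", "anger"),
    ("this is terrible and awful", "anger"),
]

def featurize(text):
    """Bag-of-words features."""
    return text.split()

class MaxEnt:
    def __init__(self, labels):
        self.labels = labels
        self.w = defaultdict(float)  # weights indexed by (label, feature)

    def probs(self, feats):
        """Softmax over per-label scores: the maximum entropy distribution."""
        scores = {y: sum(self.w[(y, f)] for f in feats) for y in self.labels}
        m = max(scores.values())
        exps = {y: math.exp(v - m) for y, v in scores.items()}
        z = sum(exps.values())
        return {y: e / z for y, e in exps.items()}

    def train(self, data, lr=0.5, epochs=200):
        """Gradient ascent on the conditional log-likelihood."""
        for _ in range(epochs):
            for text, y_true in data:
                feats = featurize(text)
                p = self.probs(feats)
                for y in self.labels:
                    grad = (1.0 if y == y_true else 0.0) - p[y]
                    for f in feats:
                        self.w[(y, f)] += lr * grad

    def predict(self, text):
        p = self.probs(featurize(text))
        return max(p, key=p.get)

clf = MaxEnt(["joy", "anger"])
clf.train(DATA)
```

After training, `clf.predict("wonderful great love")` selects the most probable emotion class for the example, mirroring the decision rule described in the abstract.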
|
123 |
Modelo para mensuração do desempenho econômico e financeiro de empresas em rede: uma aplicação às cadeias agroindustriais. / Economic and financial performance measurement model for companies in network: a study of Brazilian agribusiness companies. Luís Henrique Andia, 12 December 2007 (has links)
The aim of this study was to develop an economic and financial performance measurement model for companies in networks, filling a gap in the literature on industrial organization, new institutional economics, and performance measurement models for companies and supply chains: these research streams have not directly addressed financial questions, and the dynamics of money in the models remained undiscussed. Following this argument, a mathematical model was developed to optimize profit and EVA (Economic Value Added), taking into account not only operating costs and revenues but also financial costs and revenues, the kind of chain the company belongs to (activity), the governance structure adopted (market, network, or hierarchy), and its segment (link) within the chain. To validate the model, accounting data were collected from 109 Brazilian agribusiness companies for the fiscal years 2001 to 2005. A MANOVA (multivariate ANOVA) test was applied to verify the influence of the factors (segment, chain, governance structure, and legal form) on the financial performance indicators (gross margin, long-term liabilities over net equity, return on assets, return on equity, and cash cycle) and the economic indicator (EVA). Within the limits of the present study, the results show that all factors had a significant (a <= 0.05) influence on the financial indicators, while only the segment factor affected the companies' EVA.
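The EVA measure optimized above reduces to operating profit after taxes minus a charge for the capital employed. A minimal sketch with hypothetical figures (none taken from the study's data):

```python
def eva(nopat, invested_capital, wacc):
    """Economic Value Added: operating profit minus the capital charge."""
    return nopat - wacc * invested_capital

# Hypothetical firm: R$ 1.2M NOPAT, R$ 8M invested capital, 12% WACC.
value_added = eva(1_200_000, 8_000_000, 0.12)  # capital charge is R$ 960k
```

A positive result means the firm earned more than the opportunity cost of its capital, which is what the model above maximizes jointly with profit.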
|
124 |
Algoritmos de escalonamento baseados em serviços públicos para aumentar a satisfação do usuário em sistemas OFDM / Utility-based scheduling algorithms to enhance user satisfaction in OFDMA systems. Francisco Hugo Costa Neto, 25 February 2016 (has links)
Coordenação de Aperfeiçoamento de Nível Superior / The increasing market demand for wireless services and the scarcity of radio resources call more than ever for the enhancement of the performance of wireless communication systems. Nowadays, it is mandatory to ensure the provision of better radio services and to improve coverage and capacity, thereby increasing the number of satisfied subscribers.
This thesis deals with scheduling algorithms aiming at the maximization and adaptive control of the satisfaction index in the downlink of an Orthogonal Frequency Division Multiple Access (OFDMA) network, considering different traffic models for Non-Real Time (NRT) and Real Time (RT) services, as well as more realistic channel conditions, e.g., imperfect Channel State Information (CSI). In order to solve the satisfaction maximization problem with affordable complexity, a cross-layer optimization approach uses utility theory to formulate the problem as a weighted sum-rate maximization.
This study is focused on the development of a utility-based framework employing the shifted log-logistic function, whose characteristics allow novel scheduling strategies combining Quality of Service (QoS)-based prioritization and channel opportunism under an equal power allocation among frequency resources.
Aiming at maximizing the satisfaction of users of NRT and RT services, two scheduling algorithms are proposed: Modified Throughput-based Satisfaction Maximization (MTSM) and Modified Delay-based Satisfaction Maximization (MDSM), respectively. Modifying the parameters of the shifted log-logistic utility function enables different resource distribution strategies. Seeking to track the satisfaction levels of users of NRT services, two adaptive scheduling algorithms are proposed: Adaptive Throughput-based Efficiency-Satisfaction Trade-Off (ATES) and Adaptive Satisfaction Control (ASC). The ATES algorithm performs average satisfaction control by adaptively changing the scale parameter, using a feedback control loop that tracks the overall satisfaction of the users and keeps it around the desired target value, enabling a stable strategy to deal with the trade-off between satisfaction and capacity. The ASC algorithm dynamically varies the shape parameter, guaranteeing strict control of the user satisfaction levels.
System-level simulations indicate the accomplishment of the objective of developing efficient, low-complexity scheduling algorithms able to maximize and control the satisfaction indexes. These strategies can be useful to the network operator, who becomes able to design and operate the network according to a planned user satisfaction profile.
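A rough sketch of the utility-based scheduling idea above, using a simplified sigmoidal stand-in for the shifted log-logistic utility (not the thesis implementation; all names, parameters, and the toy data are assumptions). Each resource block goes to the user with the largest marginal-utility-weighted rate, which is the greedy step of a weighted sum-rate maximization:

```python
def utility(t, scale=1.0, shape=4.0):
    """Sigmoidal (log-logistic-style) satisfaction utility of throughput t >= 0.
    scale shifts the satisfaction threshold; shape controls its steepness."""
    return 1.0 / (1.0 + (t / scale) ** (-shape)) if t > 0 else 0.0

def marginal_utility(t, scale=1.0, shape=4.0, eps=1e-6):
    """Numerical derivative of the utility: the scheduling weight."""
    return (utility(t + eps, scale, shape) - utility(t, scale, shape)) / eps

def schedule(rates, served, scale=1.0, shape=4.0):
    """Assign each resource block to the user with the largest
    marginal-utility-weighted achievable rate."""
    n_users, n_rbs = len(rates), len(rates[0])
    assignment, thr = [], list(served)
    for rb in range(n_rbs):
        best = max(range(n_users),
                   key=lambda u: marginal_utility(thr[u], scale, shape) * rates[u][rb])
        assignment.append(best)
        thr[best] += rates[best][rb]
    return assignment, thr
```

With two users at equal channel rates, one near the satisfaction threshold (throughput 0.9) and one already well above it (2.5), both resource blocks go to the unsatisfied user: the sigmoid's derivative vanishes for users far above the threshold, which is the QoS-prioritization mechanism the abstract describes.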
|
125 |
Análise estocástica do comportamento dinâmico de estruturas via métodos probabilísticos / Stochastic analysis of structural dynamic behavior via probabilistic methods. Fabro, Adriano Todorovic, 16 August 2018 (has links)
Orientador: José Roberto de França Arruda / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Previous issue date: 2010 / The general objective of this dissertation is to bring to industrial practice tools for the modelling and analysis of linear mechanical systems with variability, as well as computational methodologies for uncertainty quantification, aiming at design applications. To this end, techniques for stochastic modelling and analysis of linear mechanical systems were studied and first applied to simple, computationally inexpensive structures through MATLAB simulations. A probabilistic modelling approach based on the Maximum Entropy Principle is proposed for the flexibility associated with an open, non-propagating crack in a rod modelled with the Spectral Element Method (SEM). An approach for treating random field problems with SEM is also presented, using analytical solutions of the Karhunen-Loève decomposition. A formulation for Euler-Bernoulli beam elements is given, together with an example in which the flexural stiffness is modelled as a Gaussian random field. Finally, an approach for the stochastic analysis of the dynamic behaviour of a hermetic compressor cap is proposed. A finite element approximation obtained with the software Ansys was used to represent the deterministic dynamic behaviour of the compressor cap, and two stochastic modelling approaches were compared. Experiments were performed on nominally identical cap samples, measuring natural frequencies under impact excitation in order to compare them with the theoretical results. / Mestrado / Mecânica dos Sólidos e Projeto Mecânico / Mestre em Engenharia Mecânica
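The Karhunen-Loève decomposition mentioned above can be sketched numerically. The dissertation uses analytical solutions; the discrete eigendecomposition below, with an assumed exponential covariance and made-up parameters, is only an illustration of the idea:

```python
import numpy as np

def kl_sample(n_points=64, length=1.0, corr_len=0.2, sigma=1.0,
              n_terms=10, seed=0):
    """Sample a zero-mean Gaussian random field on [0, length] via a
    truncated Karhunen-Loeve expansion with exponential covariance
    C(x, y) = sigma^2 * exp(-|x - y| / corr_len)."""
    x = np.linspace(0.0, length, n_points)
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Discrete KL: eigendecomposition of the covariance matrix,
    # keeping the largest eigenvalues.
    vals, vecs = np.linalg.eigh(cov)
    idx = np.argsort(vals)[::-1][:n_terms]
    vals, vecs = vals[idx], vecs[:, idx]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_terms)          # independent N(0, 1) weights
    field = vecs @ (np.sqrt(vals) * xi)
    return x, field

x, field = kl_sample()
```

In the dissertation's setting, a field like this would model a spatially varying flexural stiffness entering the spectral element formulation.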
|
126 |
Universal object segmentation in fused range-color data. Finley, Jeffery Michael, January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis presents a method to perform universal object segmentation on fused SICK laser range data and color CCD camera images collected from a mobile robot. This thesis also details the method of fusion. Fused data allows for higher resolution than range-only data and provides more information than color-only data. The segmentation method utilizes the Expectation Maximization (EM) algorithm to detect the location and number of universal objects modeled by a six-dimensional Gaussian distribution. This is achieved by continuously subdividing objects previously identified by EM. After several iterations, objects with similar traits are merged. The universal object model performs well in environments consisting of both man-made (walls, furniture, pavement) and natural objects (trees, bushes, grass). This makes it ideal for use in both indoor and outdoor environments. The algorithm does not require the number of objects to be known prior to calculation nor does it require a training set of data. Once the universal objects have been segmented, they can be processed and classified or left alone and used inside robotic navigation algorithms like SLAM.
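The EM-based segmentation above fits Gaussian components to the fused data. A minimal 1-D sketch of the EM updates on synthetic data (the thesis works with six-dimensional Gaussians and a subdivision/merging scheme not shown here; all parameters below are illustrative):

```python
import numpy as np

def em_gmm(data, k=2, iters=50):
    """EM for a 1-D Gaussian mixture; the same updates apply in 6-D
    with mean vectors and covariance matrices."""
    mu = np.quantile(data, np.linspace(0.1, 0.9, k))   # deterministic init
    var = np.full(k, np.var(data))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * (data[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(data)
        mu = (resp * data[:, None]).sum(axis=0) / nk
        var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3.0, 0.5, 300), rng.normal(3.0, 0.5, 300)])
pi, mu, var = em_gmm(data)
```

On this two-cluster toy set the means converge near -3 and 3 with roughly equal mixing weights; the thesis's continuous subdivision then splits over-broad components the same way.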
|
127 |
Methods for Viral Population Analysis. Artyomenko, Alexander, 08 August 2017
The ability of Next-Generation Sequencing (NGS) to produce massive quantities of genomic data inexpensively has made it possible to study the structure of viral populations from an infected host at an unprecedented resolution. As a result of a high rate of mutation and recombination events, an RNA virus exists as a heterogeneous "swarm". Virologists and computational epidemiologists widely use NGS data to study viral populations. However, discerning rare variants is muddled by the presence of errors introduced by the sequencing technology. We develop and implement a time- and cost-efficient strategy for NGS of multiple viral samples, and computational methods to analyze large quantities of NGS data and to handle sequencing errors. In particular, we present: (i) a combinatorial pooling strategy for massive NGS of viral samples; (ii) kGEM and 2SNV — methods for viral population haplotyping; (iii) ShotMCF — a Multicommodity Flow (MCF) based method for frequency estimation of viral haplotypes; (iv) QUASIM — an agent-based simulator of viral evolution taking into account viral variants and immune response.
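A combinatorial pooling strategy of the kind listed in (i) can be sketched as follows (an illustrative design, not the dissertation's actual scheme): each sample is placed in a distinct combination of pools, so the pattern of pools in which a sample appears uniquely identifies it.

```python
import math
from itertools import combinations

def pooling_design(n_samples, pools_per_sample=2):
    """Assign each sample to a distinct combination of pools, so that the
    set of pools a sample appears in uniquely identifies that sample."""
    n_pools = pools_per_sample
    while math.comb(n_pools, pools_per_sample) < n_samples:
        n_pools += 1                  # smallest pool count that fits all samples
    signatures = list(combinations(range(n_pools), pools_per_sample))[:n_samples]
    pools = [[] for _ in range(n_pools)]
    for sample, sig in enumerate(signatures):
        for p in sig:
            pools[p].append(sample)
    return pools, signatures

def decode(positive_pools, signatures):
    """Recover which single sample explains a set of positive pools."""
    target = tuple(sorted(positive_pools))
    return [s for s, sig in enumerate(signatures) if sig == target]

# 10 samples fit into 5 pools, since C(5, 2) = 10 distinct signatures exist.
pools, sigs = pooling_design(10, pools_per_sample=2)
```

Sequencing 5 pools instead of 10 individual samples is where the time and cost savings come from, at the price of a decoding step.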
|
128 |
In Vivo Channel Characterization and Energy Efficiency Optimization and Game Theoretical Approaches in WBANs. Liu, Yang, 05 April 2017
This dissertation presents several novel accomplishments in the research area of Wireless Body Area Networks (WBANs), including in vivo channel characterization, optimization and game theoretical approaches for energy efficiency in WBANs.
First, we performed in vivo path loss simulations with an HFSS human body model, built a phenomenological model for the distance- and frequency-dependent path loss, and also investigated the angle-dependent path loss of the in vivo wireless channel. Simulation data were produced over the 0.4-6 GHz frequency range, a wide range of distances, and different angles. Based on these data, we produce mathematical models for the in-body, on-body, and out-of-body regions. The results show that our proposed models fit the simulated data well. Based on our research, a comparison of in vivo and ex vivo channels is summarized.
Next, we proposed two algorithms for energy efficiency optimization in WBANs and evaluated their performance. In next-generation wireless networks, where heterogeneous devices and sensors coexist in the same geographical area, creating possible collisions and mutual interference, battery power needs to be used efficiently. The first algorithm, Cross-Layer Optimization for Energy Efficiency (CLOEE), enables us to carry out a cross-layer resource allocation that addresses the rate and reliability trade-off in the PHY layer, as well as frame size optimization and transmission efficiency for the MAC layer. The second algorithm, Energy Efficiency Optimization of Channel Access Probabilities (EECAP), studies the case where nodes access the medium in a probabilistic manner and jointly determines the optimal access probability and payload frame size for each node. These two algorithms address the problem from an optimization perspective; both are computationally efficient and extensible to 5G/IoT networks.
Finally, in order to move from a centralized method to a distributed optimization method, we study the energy efficiency optimization problem from a game-theoretical point of view. We created a game-theoretical model for energy efficiency in WBANs and investigated the best response and Nash equilibrium of the single-stage, non-cooperative game. Our results show that cooperation is necessary for the efficiency of the entire system. We then used two approaches, correlated equilibrium and repeated games, to improve the overall efficiency and enable some level of cooperation in the game.
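A toy version of a single-stage, non-cooperative access game of the kind studied above (the utility function and all parameters are invented for illustration, not taken from the dissertation): sequential best-response dynamics converge to a Nash equilibrium.

```python
def best_response(p_other, cost):
    """Best response in a toy two-node access game: node i maximizes
    u_i(p_i) = log(1 + p_i * (1 - p_other)) - cost * p_i over p_i in [0, 1]
    (log term: reward for a collision-free transmission; linear term: energy cost)."""
    g = 1.0 - p_other
    if g <= 0.0:
        return 0.0
    p = 1.0 / cost - 1.0 / g          # stationary point of the concave utility
    return min(1.0, max(0.0, p))

def nash_by_iteration(cost, iters=100, p1=0.5, p2=0.5):
    """Sequential best-response dynamics; a fixed point is a Nash equilibrium."""
    for _ in range(iters):
        p1 = best_response(p2, cost)
        p2 = best_response(p1, cost)
    return p1, p2

p1, p2 = nash_by_iteration(cost=0.8)
```

With these parameters the dynamics settle at an asymmetric equilibrium in which one node stays silent, a simple illustration of why non-cooperative play can be inefficient and why mechanisms such as correlated equilibria or repeated games are brought in to restore cooperation.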
|
129 |
Practical Dynamic Thermal Management on Intel Desktop Computer. Liu, Guanglei, 12 July 2012
Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging/cooling costs and adversely affects the performance and reliability of a computing system. In addition, it also reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design.
Extensive theoretical research has been conducted to study the DTM problem. However, most of it is based on theoretically idealized assumptions or simplified models. While these models and assumptions greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond them.
The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations under the practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint. We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors, based on task migration and dynamic voltage and frequency scaling. The significance of our research lies in the fact that it complements the current extensive theoretical research in dealing with increasingly critical thermal problems and enables the continuous evolution of high performance computing systems.
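The reactive throttling idea can be sketched on a lumped thermal model (all constants below are hypothetical, not measurements from the Intel platform): step the temperature model forward, throttle down when the peak-temperature limit is exceeded, and throttle up when there is clear headroom.

```python
def simulate_reactive_dtm(levels=(1.0, 1.6, 2.2, 2.8),
                          t_limit=55.0, t_amb=35.0,
                          heat=5.0, cool=0.5, steps=400):
    """Reactive DTM on a lumped thermal model: per step,
    T += heat * f - cool * (T - t_amb); throttle down when T exceeds the
    peak-temperature limit, throttle up when T is well below it."""
    level = len(levels) - 1           # start at the fastest speed level
    temp = t_amb
    work = 0.0                        # accumulated throughput proxy
    for _ in range(steps):
        f = levels[level]
        temp += heat * f - cool * (temp - t_amb)
        if temp > t_limit and level > 0:
            level -= 1                # reactive throttling
        elif temp < t_limit - 5.0 and level < len(levels) - 1:
            level += 1
        work += f
    return work, temp

work, temp = simulate_reactive_dtm()
```

In this toy run the controller overshoots briefly, steps down until it finds the fastest thermally sustainable level, and then holds it, which is the basic behavior a reactive policy trades against the proactive techniques developed later in the dissertation.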
|
130 |
Capacity and Throughput Optimization in Multi-cell 3G WCDMA Networks. Nguyen, Son, 12 1900 (has links)
User modeling enables the computation of the traffic density in a cellular network, which can be used to optimize the placement of base stations and radio network controllers, as well as to analyze the performance of resource management algorithms towards the final goal: the calculation and maximization of network capacity and throughput for different data rate services. An analytical model is presented for approximating the user distributions in multi-cell third-generation wideband code division multiple access (WCDMA) networks using 2-dimensional Gaussian distributions, determining the means and the standard deviations of the distributions for every cell. This model allows for the calculation of the inter-cell interference and the reverse-link capacity of the network. An analytical model for optimizing capacity in multi-cell WCDMA networks is presented, where capacity is optimized for different spreading factors and for perfect and imperfect power control. Numerical results show that the SIR threshold for the received signals is decreased by 0.5 to 1.5 dB due to imperfect power control. The results also show that the estimated parameters of the 2-dimensional Gaussian model match well with traditional methods for modeling user distribution. A call admission control algorithm is designed that maximizes the throughput in multi-cell WCDMA networks; numerical results are presented for different spreading factors and for several mobility scenarios. Our methods of optimizing capacity and throughput are computationally efficient, accurate, and can be implemented in large WCDMA networks.
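Fitting the 2-dimensional Gaussian user distribution of a cell reduces to estimating per-axis means and standard deviations from observed user positions. A sketch with synthetic data (all coordinates and parameters are hypothetical, and the axis-aligned form is a simplification of the model above):

```python
import math
import random

def fit_gaussian_2d(points):
    """Fit an axis-aligned 2-D Gaussian to user positions in one cell:
    returns (mean_x, mean_y, std_x, std_y)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sx = math.sqrt(sum((p[0] - mx) ** 2 for p in points) / n)
    sy = math.sqrt(sum((p[1] - my) ** 2 for p in points) / n)
    return mx, my, sx, sy

# Hypothetical hotspot: users clustered around (200 m, -100 m).
rng = random.Random(7)
users = [(rng.gauss(200.0, 60.0), rng.gauss(-100.0, 40.0)) for _ in range(2000)]
mx, my, sx, sy = fit_gaussian_2d(users)
```

Per-cell parameters estimated this way feed the inter-cell interference and reverse-link capacity calculations described in the abstract.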
|