111 |
Problemas inversos em física experimental: a secção de choque fotonuclear e radiação de aniquilação elétron-pósitron / Inverse problems in experimental physics: the photonuclear cross section and electron-positron annihilation radiation. Takiya, Carlos, 27 June 2003 (has links)
Methods for solving inverse problems applied to experimental data (regularization, Bayesian, and maximum entropy) were reviewed. A Least-Squares procedure with Minimum Variance Regularization (LS-MVR) was developed for solving linear inverse problems and applied to: a) simulated one-dimensional spectra; b) determination of the ³⁴S(γ, xn) cross section from the bremsstrahlung yield; c) analysis of electron-positron annihilation radiation in aluminium from a coincidence experiment with two semiconductor detectors. The results are compared with those obtained by other methods.
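A minimal sketch of regularized least squares for a linear inverse problem (illustrative only: the minimum-variance regularization of the thesis is not specified here, so a generic Tikhonov penalty and a made-up smearing kernel stand in for it):

```python
import numpy as np

# Toy linear inverse problem: recover a spectrum x from y = A x + noise,
# where A is an ill-conditioned response (smearing) matrix.
rng = np.random.default_rng(0)
n = 50
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.zeros(n)
x_true[20:30] = 1.0                       # a simple spectral feature
y = A @ x_true + rng.normal(0, 0.05, n)   # noisy measurement

# Unregularized inversion amplifies noise; a Tikhonov penalty
# lam * ||x||^2 stabilizes the solution of the normal equations.
lam = 0.1
x_naive = np.linalg.solve(A.T @ A + 1e-12 * np.eye(n), A.T @ y)
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

The regularization strength lam trades bias against variance; a minimum-variance criterion such as the one named in the abstract is one principled way of choosing it.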
|
112 |
Análise de dados utilizando a medida de tempo de consenso em redes complexas / Data analysis using the consensus time measure for complex networks. Lopez, Jean Pierre Huertas, 30 March 2011 (has links)
Networks are powerful representations for many complex systems, where nodes represent elements of the system and edges represent connections between them. Complex networks can be defined as large-scale graphs with a non-trivial distribution of connections. An important topic in complex networks is community detection. Although community detection has produced good results in data clustering analysis with clusters of various shapes, there are still difficulties in representing a data set as a network. Another recent topic is the characterization of simplicity in complex networks. Few studies have been reported in this area, yet the topic is highly relevant, since it allows one to analyze the simplicity of the connection structure of a region of nodes, or of the entire network. Moreover, by analyzing the simplicity of dynamic networks over time, it is possible to understand how the network evolves in terms of simplicity. Considering the network as a coupled dynamical system of agents, we propose a distance measure based on the consensus time in the presence of a leader in a coupled network. Using this distance measure, we propose a community detection method for data clustering analysis and a method for simplicity analysis in complex networks. In addition, we propose a technique for building sparse networks for data clustering. The methods have been tested on artificial and real data, with promising results.
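A bare-bones sketch of the consensus-time distance (an assumption-laden toy: standard leader-pinned Laplacian consensus on a small undirected graph, which may differ from the exact dynamics used in the thesis):

```python
import numpy as np

def consensus_distances(adj, leader, tol=1e-3, dt=0.05, max_steps=200000):
    """Per-node 'distance' from a leader: the first time each node's state
    comes within tol of the pinned leader's value under x' = -L x dynamics."""
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    x = np.zeros(n)
    x[leader] = 1.0
    times = np.full(n, np.inf)
    times[leader] = 0.0
    for step in range(1, max_steps + 1):
        x = x - dt * (laplacian @ x)
        x[leader] = 1.0                     # the leader holds its state
        arrived = (np.abs(x - 1.0) < tol) & np.isinf(times)
        times[arrived] = step * dt
        if not np.isinf(times).any():
            break
    return times

# Two triangles joined by a single bridge edge: nodes in the leader's own
# community reach the leader's state sooner than nodes across the bridge.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

print(consensus_distances(adj, leader=0))   # smaller times for nodes 1-2
```

Clustering nodes by such arrival times is one way a consensus-based distance can expose community structure.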
|
113 |
The distinction of simulated failure data by the likelihood ratio test. Drayer, Darryl D., January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
114 |
Detection of outliers in failure data. Gallup, Donald Robert, January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
115 |
Data analysis techniques useful for the detection of B-mode polarisation of the Cosmic Microwave Background. Wallis, Christopher, January 2016 (has links)
Asymmetric beams can create significant bias in estimates of the power spectra from cosmic microwave background (CMB) experiments. With the temperature power spectrum many orders of magnitude stronger than the B-mode power spectrum, any systematic error that couples the two must be carefully controlled and/or removed. In this thesis, I derive unbiased estimators for the CMB temperature and polarisation power spectra taking into account general beams and scan strategies. I test my correction algorithm on simulations of two temperature-only experiments and demonstrate that it is unbiased. I also develop a map-making algorithm that removes beam-asymmetry bias at the map level, and I demonstrate its implementation using simulations. I present two new map-making algorithms that create polarisation maps clean of temperature-to-polarisation leakage systematics due to differential gain and pointing between a detector pair. Where a half-wave plate is used, I show that the spin-2 systematic due to differential ellipticity can also be removed using my algorithms. The first algorithm is designed to work with scan strategies that have a good range of crossing angles for each map pixel, and the second with scan strategies that have a limited range of crossing angles. I demonstrate both algorithms using simulations of time-ordered data with realistic scan strategies and instrumental noise. I investigate the role that a scan strategy can have in mitigating certain common systematics by averaging systematic errors down with many crossing angles. I present approximate analytic forms for the error on the recovered B-mode power spectrum that would result from these systematic errors, and I use these analytic predictions to search the parameter space of common satellite scan strategies to identify the features of a scan strategy that have the most impact in mitigating systematic effects.
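To illustrate the crossing-angle averaging argument with a toy calculation (a sketch of the general idea, not the estimators derived in the thesis): a spin-2 systematic picked up at crossing angle psi enters a pixel through a factor e^{2i psi}, so averaging over many well-spread crossing angles suppresses it, while a narrow range of angles leaves it almost intact.

```python
import numpy as np

rng = np.random.default_rng(1)

def spin2_residual(crossing_angles):
    """Residual amplitude of a unit spin-2 systematic after averaging
    its e^{2i*psi} factor over a pixel's crossing angles."""
    return np.abs(np.mean(np.exp(2j * np.asarray(crossing_angles))))

wide = rng.uniform(0.0, np.pi, 100)     # well-spread crossing angles
narrow = rng.uniform(0.0, 0.2, 100)     # nearly constant crossing angle

print("wide-angle residual:  ", spin2_residual(wide))    # small
print("narrow-angle residual:", spin2_residual(narrow))  # close to 1
```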
|
116 |
Problemas inversos em física experimental: a secção de choque fotonuclear e radiação de aniquilação elétron-pósitron / Inverse problems in experimental physics: the photonuclear cross section and electron-positron annihilation radiation. Carlos Takiya, 27 June 2003 (has links)
Methods for solving inverse problems applied to experimental data (regularization, Bayesian, and maximum entropy) were reviewed. A Least-Squares procedure with Minimum Variance Regularization (LS-MVR) was developed for solving linear inverse problems and applied to: a) simulated one-dimensional spectra; b) determination of the ³⁴S(γ, xn) cross section from the bremsstrahlung yield; c) analysis of electron-positron annihilation radiation in aluminium from a coincidence experiment with two semiconductor detectors. The results are compared with those obtained by other methods.
|
117 |
Determinants and consequences of working capital management. Supatanakornkij, Sasithorn, January 2015 (has links)
Well-managed working capital plays an important role in running a sound and successful business, as it has a direct influence on liquidity and profitability. Working capital management (WCM) has recently received increased focus from businesses and has been regarded as a key managerial intervention to maintain solvency, especially during the global financial crisis when external financing was less available (PwC, 2012). This thesis contains a comprehensive analysis of the determinants and consequences of WCM. For the determinants of WCM, the results suggest that the nature of a firm’s WCM is determined by a combination of firm characteristics, economic conditions, and country-level variables. Sources of financing, firm size, and levels of profitability and investment in long-term assets play a vital role in the management of working capital. The financial downturn has also put increased pressure on firms to operate with a lower level of working capital. In addition, country-level variables (i.e., legal environment and culture) have a significant influence on a firm’s WCM as well as its determinants. For the consequences of WCM, the findings highlight the importance of higher efficiency in WCM in terms of its potential contribution to enhancing profitability. In particular, firms operating with lower accounts receivable, inventory, and accounts payable periods are associated with higher profitability. Firms can also enhance their profitability further by ensuring a proper “fit” among these components of working capital. Finally, achieving higher efficiency in inventory management can be a source of profitability improvements during a financial crisis. Overall, the thesis contributes to the accounting and finance literature in two distinct ways: research design and new findings. A more extensive data set (in terms of country coverage and time frame), a new estimation technique (dynamic panel generalised method of moments (GMM) estimation, producing more consistent and reliable results), and substantive robustness tests (conspicuous by their absence in prior studies) were applied, resulting in several new empirical findings. First, a firm’s WCM is influenced not only by internal factors but also by external factors such as country setting, legal environment, and culture. Second, a comprehensive measure of WCM (i.e., the cash conversion cycle (CCC)) does not represent a useful surrogate for the effects of WCM on corporate profitability; an examination of the individual components of the CCC gives more pronounced and valid results. Third, by managing working capital correctly, firms can enhance their profitability even further, at different levels, and through different components of profitability (including profit margin and asset productivity).
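For reference, the cash conversion cycle discussed above is conventionally assembled from three component periods (a standard textbook formulation with hypothetical figures, not the thesis’ exact variable construction):

```python
def cash_conversion_cycle(inventory, cogs, receivables, revenue, payables,
                          days=365):
    """CCC = DIO + DSO - DPO (all in days)."""
    dio = days * inventory / cogs        # days inventory outstanding
    dso = days * receivables / revenue   # days sales outstanding
    dpo = days * payables / cogs         # days payables outstanding
    return dio + dso - dpo

# Hypothetical firm: ~40 inventory days + ~35 receivable days - ~30 payable
# days gives a CCC of roughly 45 days.
print(cash_conversion_cycle(inventory=110, cogs=1000, receivables=96,
                            revenue=1000, payables=82))
```

A shorter CCC means cash is tied up in operations for less time, which is the channel through which the individual components above relate to profitability.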
|
118 |
Comparing Event Detection Methods in Single-Channel Analysis Using Simulated Data. Dextraze, Mathieu Francis, 16 October 2019 (has links)
With more states revealed and more reliable rates inferred, mechanistic schemes for ion channels have increased in complexity over the history of single-channel studies. At the forefront of single-channel studies, we are faced with a temporal barrier delimiting the briefest event that can be detected in single-channel data. Despite improvements in single-channel data analysis, the use of existing methods remains sub-optimal: because these methods have not been quantified, the optimal conditions for data analysis are unknown. Here we present a modular single-channel data simulator with two engines: a Hidden Markov Model (HMM) engine and a sampling engine. The simulator is a tool that provides the a priori information necessary to quantify and compare existing methods in order to optimize analytic conditions. We demonstrate the utility of our simulator by providing a preliminary comparison of two event detection methods in single-channel data analysis: Threshold Crossing and Segmental k-means with Hidden Markov Modelling (SKM-HMM).
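A bare-bones sketch of half-amplitude threshold crossing on a synthetic trace (idealized assumptions throughout; the simulator and the SKM-HMM comparison in the thesis are far more involved):

```python
import numpy as np

def threshold_crossings(trace, threshold, min_samples=2):
    """Return (start, end) sample-index pairs for runs above threshold that
    last at least min_samples; briefer runs fall below the detection limit."""
    above = np.r_[False, trace > threshold, False]   # pad so every run closes
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]           # paired rise/fall edges
    keep = (ends - starts) >= min_samples
    return list(zip(starts[keep], ends[keep]))

# Synthetic single-channel record: closed level ~0, open level ~2, with noise.
rng = np.random.default_rng(2)
state = np.zeros(2000)
state[300:450] = 2.0     # long opening
state[900:905] = 2.0     # brief opening, below the temporal detection limit
state[1400:1700] = 2.0   # long opening
trace = state + rng.normal(0.0, 0.3, state.size)

# Half-amplitude threshold between the closed (0) and open (2) levels.
print(threshold_crossings(trace, threshold=1.0, min_samples=10))
```

The 5-sample opening is deliberately shorter than min_samples, mimicking the temporal barrier the abstract describes.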
|
119 |
A review of "longitudinal study" in developmental psychology. Finley, Emily H., 01 January 1972 (has links)
The purpose of this library research thesis is to review the "longitudinal study" in terms of its problems and present use. A preliminary search of the literature on the longitudinal method revealed problems centering around two areas: (1) definition of "longitudinal study" and (2) practical problems of the method itself. The purpose of this thesis, then, is to explore, through a search of books and journals, the following questions:
1. How can “longitudinal study” be defined?
2. What problems are inherent in the study of the same individuals over time and how can these problems be solved?
A third question which emerges from these two is:
3. How is “longitudinal study” being used today?
This thesis differentiates traditional longitudinal study from other methods of study: the cross-sectional study, the time-lag study, the experimental study, the retrospective study, and the study from records. Each of these methods of study is reviewed according to its unique problems and best uses and compared with the longitudinal study. Finally, the traditional longitudinal study is defined as the study: (1) of individual change under natural conditions not controlled by the experimenter, (2) which proceeds over time from the present to the future by measuring the same individuals repeatedly, and (3) which retains individuality of data in analyses.
Some problem areas of longitudinal study are delineated which are either unique to this method or especially difficult. The following problems related to planning the study are reviewed: definition of study objectives, selection of method of study, statistical methods, cost, post hoc analysis and replication of the study, time factor in longitudinal study, and the problem of allowing variables to operate freely. Cultural shift and attrition are especially emphasized. The dilemma is examined which is posed by sample selection with its related problems of randomization and generalizability of the study, together with the problems of repeated measurements and selection of control groups. These problems are illustrated with studies from the literature.
Not only are these problems delineated, but considerable evidence is shown that we have already started to accumulate data that will permit their solution. This paper presents a number of studies which have considered these problems separately or as a side issue of a study on some other topic. Some recommendations for further research in problem areas are suggested.
At the same time that this thesis notes differentiation of the longitudinal study from other studies, it also notes integration of results of longitudinal studies with results of other studies. The tenet adopted here is: scientific knowledge is cumulative and not dependent on one crucial experiment.
Trends in recent longitudinal studies are found to be toward more strict observance of scientific protocols and toward limitation of time and objectives of the study. When objectives of the study are well defined and time is limited to only enough for specified change to take place, many of the problems of longitudinal study are reduced to manageable proportions.
Although modern studies are of improved quality, longitudinal method is not being sufficiently used today to supply the demand for this type of data. Longitudinal study is necessary to answer some of the questions in developmental psychology. We have no alternative but to continue to develop this important research tool.
|
120 |
Hypothesis Testing for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits. Xu, Chao, January 2018 (has links)
Extreme phenotype sampling (EPS) is a widely used design for identifying candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in the extreme phenotypic samples within the top and bottom percentiles, EPS can boost study power compared with random sampling at the same sample size. Existing statistical methods for EPS data test variants/regions individually. However, many disorders are caused by multiple genetic factors, so it is critical to model the effects of genetic factors simultaneously, which may increase the power of current genetic studies and identify novel disease-associated genetic factors in EPS. The challenge of such simultaneous analysis is that the number of genetic factors (p ~ 10,000) is typically greater than the sample size (n ~ 1,000) in a single study. The standard linear model is inappropriate for this p > n problem due to the rank deficiency of the design matrix. An alternative is to apply a penalized regression method: the least absolute shrinkage and selection operator (LASSO).
LASSO can deal with this high-dimensional (p > n) problem by forcing certain regression coefficients to be exactly zero. Although the application of LASSO in genetic studies under random sampling has been widely studied, its statistical inference and testing under EPS remain unexplored. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS, based on a decorrelated score function, to investigate genetic associations, including gene expression and rare-variant analyses. Comprehensive simulations show that EPS-LASSO outperforms existing methods, with superior power when the effects are large and with stable type I error and FDR control. Together with a real-data analysis of a genetic study of obesity, our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors.
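A minimal sketch of the p > n LASSO idea on simulated data (ordinary LASSO via scikit-learn; the EPS sampling scheme and the decorrelated-score test proposed here are not reproduced):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 200, 2000                     # far more predictors than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, -1.0, 0.5]   # a handful of true signals
y = X @ beta + rng.normal(size=n)

# The L1 penalty forces most coefficients to exactly zero, which is what
# makes the rank-deficient p > n problem estimable.
model = Lasso(alpha=0.1)
model.fit(X, y)
print("nonzero coefficients at indices:", np.flatnonzero(model.coef_))
```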
|