171 |
Adapting Sensing and Transmission Times to Improve Secondary User Throughput in Cognitive Radio Ad Hoc Networks (January 2012)
abstract: Cognitive Radios (CR) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize the bandwidth efficiently. With a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play an especially important role because of the decentralized nature of the network, and they have a direct impact on throughput. Because of the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, ns-2, the network simulator, is used with a CRAHN extension. The CRAHN extension module implements the required Primary User (PU), Secondary User (SU), and other CR functionalities to simulate a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to the throughput, to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. Instantaneous throughput of the network is introduced in the new model, which helps determine how the parameters should adapt based on the current throughput. Along with instantaneous throughput, checks are made for interference with the PUs and their transmission power before modifying these CR parameters. Simulation results demonstrate that the throughput of the CRAHN with adaptive sensing and transmission times is significantly higher than with non-adaptive parameters. / Dissertation/Thesis / M.S. Computer Science 2012
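Below is a minimal sketch, in Python, of the adaptive idea the abstract describes: sensing and transmission times are nudged up or down from the instantaneous throughput, after checking for primary-user activity. It is not the thesis's ns-2 CRAHN module; the function name, step size, and bounds are illustrative assumptions.

```python
# Minimal sketch of adaptive sensing/transmission times (not the thesis's
# ns-2 CRAHN module). All names, step sizes, and bounds are illustrative.

def adapt_times(t_sense, t_tx, throughput, prev_throughput, pu_active,
                step=0.1, t_sense_bounds=(0.5, 5.0), t_tx_bounds=(5.0, 50.0)):
    """Return updated (t_sense, t_tx) in milliseconds."""
    if pu_active:
        # A primary user is transmitting: sense longer and transmit less
        # to avoid interfering with the PU.
        t_sense = min(t_sense + step, t_sense_bounds[1])
        t_tx = max(t_tx - step, t_tx_bounds[0])
    elif throughput >= prev_throughput:
        # Throughput improved: shift time from sensing to transmission.
        t_sense = max(t_sense - step, t_sense_bounds[0])
        t_tx = min(t_tx + step, t_tx_bounds[1])
    else:
        # Throughput dropped with no PU detected: back off the last change.
        t_sense = min(t_sense + step, t_sense_bounds[1])
        t_tx = max(t_tx - step, t_tx_bounds[0])
    return t_sense, t_tx
```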
|
172 |
[en] DATA ANALYSIS VIA GLIM: MODELLING THE RESIDENTIAL ELECTRICITY CONSUMPTION OF JUIZ DE FORA, MINAS GERAIS / [pt] SOBRE A ANÁLISE DE DADOS GLIM: MODELAGEM DO CONSUMO RESIDENCIAL DE ENERGIA ELÉTRICA EM JUIZ DE FORA, MINAS GERAIS. JOSE ANTONIO DA SILVA REIS (19 July 2006)
[pt] The objective of this dissertation is to model the monthly consumption of residential electrical appliances in Juiz de Fora with a generalized linear model and to compare the results with those of a classical regression model. To estimate the parameters in both cases, the software GLIM (Generalized Linear Interactive Models) was used. For this purpose, a data matrix extracted from a survey of 593 households in the municipality of Juiz de Fora, MG, was employed; some households were excluded for presenting outlying values that could distort the results of the model and lead to erroneous conclusions. The body of this dissertation opens with an introduction to the problems posed by the growing demand for electrical energy in Brazil and worldwide, and to the solutions proposed to address them through demand-side management. Some literature exists on such studies; the best known in Brazil was developed by LINS, Marcos Estelita Pereira in 1989 with 10818 households across the country. That study assumed that monthly residential consumption, the response variable of the model, follows a normal distribution, and used ordinary least squares to estimate the parameters. This dissertation instead uses a generalized linear model, which, when the response variable is taken to be normal, is equivalent to a classical linear regression procedure. The results of the deviance function and the generalized Pearson statistic indicated that a gamma probability distribution should be used for consumption, since the data exhibit a slight positive skewness. Because the data matrix has several columns with many null values for some appliances, it is recommended that the process be repeated with a more complete matrix. The differences found between a normal and a gamma distribution were significant only for the values of the deviance function and the generalized Pearson statistic; the coefficients of determination in the two cases are practically equal under similar conditions, probably because the positive skewness of the response variable is very small. The dissertation concludes by recommending generalized linear models, as they are more flexible than classical models, and the GLIM software for implementing their parameter estimation. / [en] The purpose of this study is to estimate a model of residential consumption of electrical energy in Juiz de Fora, MG, using the computer program GLIM, and to compare the results with those obtained when a classical regression model is employed. A data set representing the findings of a survey of 593 dwellings in the city of Juiz de Fora was used; some observations were excluded as outliers that could produce misleading results and bias the conclusions. The dissertation includes an introduction to the problems related to the consumption of electrical energy in Brazil and the world, and to the solutions proposed to address them through demand-side management (DSM). Consumption was assumed to have a gamma distribution. The data set was divided into four ranges of consumption and one model was estimated for each range. A generalized linear model was used, which can be considered a classical regression model when consumption is assumed to have a normal distribution. The results from the deviance function and the generalized Pearson statistic pointed towards the use of a gamma probability function for consumption, due to a slight positive skewness shown by the data. The data matrix contained columns with many null values for a number of appliances; repeating the process with a denser matrix is recommended. The differences found between a normal and a gamma distribution were significant only for the values of the deviance function and the generalized Pearson statistic. The coefficients of determination in both cases were practically equal under similar conditions, perhaps due to the very slight positive skewness of the response variable. The dissertation concludes by recommending the use of generalized linear models for their greater flexibility compared with classical models; the GLIM software is also recommended for estimating the model parameters.
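As a rough illustration of the comparison described above, the sketch below fits the same log-linear predictor under a normal and a gamma family with Python's statsmodels (standing in for GLIM) and prints the deviance and generalized Pearson statistic for each. The data are simulated with mild positive skew; they are not the Juiz de Fora survey, and the covariates and coefficients are assumptions.

```python
# Fit normal vs. gamma GLMs on simulated consumption data and compare
# deviance and the generalized Pearson statistic (statsmodels, not GLIM).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 593
X = sm.add_constant(rng.uniform(0, 1, size=(n, 3)))  # stand-in appliance covariates
mu = np.exp(X @ np.array([4.0, 0.8, 0.5, 0.3]))      # mean monthly consumption (kWh)
y = rng.gamma(5.0, mu / 5.0)                         # slight positive skew

normal_fit = sm.GLM(y, X, family=sm.families.Gaussian(sm.families.links.Log())).fit()
gamma_fit = sm.GLM(y, X, family=sm.families.Gamma(sm.families.links.Log())).fit()

for name, fit in [("normal", normal_fit), ("gamma", gamma_fit)]:
    print(f"{name}: deviance={fit.deviance:.1f}, Pearson chi2={fit.pearson_chi2:.1f}")
```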
|
173 |
Time-Resolved Crystallography using X-ray Free-Electron Laser (January 2015)
abstract: Photosystem II (PSII) is a large protein-cofactor complex. The first step in
photosynthesis involves the harvesting of light energy from the sun by the antenna (made
of pigments) of the PSII trans-membrane complex. The harvested excitation energy is
transferred from the antenna complex to the reaction center of the PSII, which leads to a
light-driven charge separation event, from water to plastoquinone. This phenomenal
process has been producing the oxygen that maintains the oxygenic environment of our
planet for the past 2.5 billion years.
The oxygen molecule formation involves the light-driven extraction of 4 electrons
and protons from two water molecules through a multistep reaction, in which the Oxygen
Evolving Center (OEC) of PSII cycles through 5 different oxidation states, S0 to S4.
Unraveling the water-splitting mechanism remains a grand challenge in the field of
photosynthesis research. This requires the development of an entirely new capability, the
ability to produce molecular movies. This dissertation advances a novel technique, Serial
Femtosecond X-ray crystallography (SFX), into a new realm whereby such time-resolved
molecular movies may be attained. The ultimate goal is to make a “molecular movie” that
reveals the dynamics of the water splitting mechanism using time-resolved SFX (TR-SFX)
experiments and the uniquely enabling features of X-ray Free-Electron Laser
(XFEL) for the study of biological processes.
This thesis presents the development of SFX techniques, including development of
new methods to analyze millions of diffraction patterns (~100 terabytes of data per XFEL
experiment) with the goal of solving the X-ray structures in different transition states.
The research comprises significant advancements to XFEL software packages (e.g.,
Cheetah and CrystFEL). Initially these programs could successfully evaluate only 8-10% of
all the data acquired. This research demonstrates that, with manual optimizations,
the evaluation success rate was enhanced to 40-50%. These improvements have enabled
TR-SFX, for the first time, to examine the double-excited state (S3) of PSII at 5.5 Å resolution. This
breakthrough demonstrated the first indication of conformational changes between the
ground (S1) and the double-excited (S3) states, a result fully consistent with theoretical
predictions.
The power of the TR-SFX technique was further demonstrated with proof-of-principle
experiments on Photoactive Yellow Protein (PYP) microcrystals, showing that high
temporal (10 ns) and spatial (1.5 Å) resolution structures could be achieved.
In summary, this dissertation research heralds the development of the TR-SFX
technique, protocols, and associated data analysis methods that will usher into practice a
new era in structural biology for the recording of ‘molecular movies’ of any biomolecular
process. / Dissertation/Thesis / Doctoral Dissertation Chemistry 2015
|
174 |
Handling Sparse and Missing Data in Functional Data Analysis: A Functional Mixed-Effects Model Approach (January 2016)
abstract: This paper investigates a relatively new analysis method for longitudinal data in the framework of functional data analysis. This approach treats longitudinal data as so-called sparse functional data. The first section of the paper introduces functional data and the general ideas of functional data analysis. The second section discusses the analysis of longitudinal data in the context of functional data analysis, considering the unique characteristics of longitudinal data, in particular sparseness and missing data. The third section introduces functional mixed-effects models that can handle these characteristics. The next section discusses a preliminary simulation study conducted to examine the performance of a functional mixed-effects model under various conditions. An extended simulation study was then carried out to evaluate the estimation accuracy of a functional mixed-effects model; specifically, the accuracy of the estimated trajectories was examined under various conditions, including different types of missing data and varying levels of sparseness. / Dissertation/Thesis / Masters Thesis Psychology 2016
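The sketch below illustrates, under simplifying assumptions, the kind of sparse functional data the simulation study targets: each subject is observed at only a handful of irregular time points, and the pooled observations are used to recover the population mean curve. It uses a plain polynomial least-squares fit rather than the paper's functional mixed-effects estimator, and every quantity (number of subjects, noise levels, basis degree) is illustrative.

```python
# Simulate sparse functional data and recover the mean curve by pooled
# least squares. Stand-in for (not a copy of) the paper's FME estimator.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, max_obs = 100, 5

def mean_fn(t):
    return np.sin(2 * np.pi * t) + 2 * t

rows = []
for i in range(n_subjects):
    n_i = rng.integers(2, max_obs + 1)       # sparse: 2-5 observations each
    t_i = np.sort(rng.uniform(0, 1, n_i))
    b_i = rng.normal(0, 0.3)                 # subject-specific random shift
    y_i = mean_fn(t_i) + b_i + rng.normal(0, 0.2, n_i)
    rows.append((t_i, y_i))

t_all = np.concatenate([t for t, _ in rows])
y_all = np.concatenate([y for _, y in rows])

# Pooled least-squares fit of the mean curve on a polynomial basis.
degree = 5
coef, *_ = np.linalg.lstsq(np.vander(t_all, degree + 1), y_all, rcond=None)
grid = np.linspace(0, 1, 50)
mean_hat = np.vander(grid, degree + 1) @ coef
print("max abs error of estimated mean:", np.max(np.abs(mean_hat - mean_fn(grid))))
```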
|
175 |
Searching for exoplanets using artificial intelligence. Pearson, Kyle A., Palafox, Leon, Griffith, Caitlin A. (February 2018)
In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labour intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects that, unlike current methods, uses a neural network. Neural networks, also called 'deep learning' or 'deep nets', are designed to give a computer perception into a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time series data with a greater accuracy than a least-squares method. Deep nets are highly generalizable allowing data to be evaluated from different time series after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analysis of large astronomy data sets.
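A minimal sketch of a 1-D convolutional classifier for transit-like dips, written with Keras, is shown below. The published network's architecture, inputs, and training setup differ; every layer size, hyperparameter, and the toy box-shaped transit injection here are assumptions made for illustration.

```python
# Toy 1-D CNN that classifies light curves as transit / no-transit.
# Illustrative only; not the architecture of the published deep net.
import numpy as np
from tensorflow import keras

def make_model(n_points=256):
    return keras.Sequential([
        keras.layers.Input(shape=(n_points, 1)),
        keras.layers.Conv1D(16, 7, activation="relu", padding="same"),
        keras.layers.MaxPooling1D(2),
        keras.layers.Conv1D(32, 5, activation="relu", padding="same"),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(transit)
    ])

# Toy training data: flat noisy light curves, half with a box-shaped dip.
rng = np.random.default_rng(0)
n, n_points = 2000, 256
X = 1.0 + rng.normal(0, 1e-3, (n, n_points))
labels = rng.integers(0, 2, n)
for i in np.flatnonzero(labels):
    c = rng.integers(40, n_points - 40)
    X[i, c - 10:c + 10] -= 5e-3               # injected transit depth
X = X[..., None].astype("float32")

model = make_model(n_points)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```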
|
176 |
Effiziente Datenanalyse [Efficient Data Analysis]. Hahne, Hannes, Schulze, Frank (03 April 2018)
The ability to analyze large volumes of data and to extract important insights from them has become a decisive competitive advantage in the modern business world. It is therefore all the more important to work in a way that is traceable, reproducible, and efficient.
This contribution presents script-based data analysis as one instrument for meeting these requirements.
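A toy example of the script-based workflow the contribution advocates: the entire analysis lives in one re-runnable script, so every step is documented and the result is reproducible. The data, file, and column names are invented for illustration.

```python
# One re-runnable script: load, transform, write. Reproducible by design.
import pandas as pd

# Toy input; in practice this would come from pd.read_csv("orders.csv", ...).
df = pd.DataFrame({
    "order_date": pd.to_datetime(["2018-01-03", "2018-01-17", "2018-02-09"]),
    "region": ["north", "south", "north"],
    "revenue": [120.0, 80.5, 99.9],
})
monthly = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby(["month", "region"], as_index=False)["revenue"].sum()
)
monthly.to_csv("monthly_revenue_by_region.csv", index=False)  # versionable output
print(monthly)
```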
|
177 |
Understanding extreme quasar optical variability with CRTS – I. Major AGN flares. Graham, Matthew J., Djorgovski, S. G., Drake, Andrew J., Stern, Daniel, Mahabal, Ashish A., Glikman, Eilat, Larson, Steve, Christensen, Eric (October 2017)
There is a large degree of variety in the optical variability of quasars, and it is unclear whether this is all attributable to a single (set of) physical mechanism(s). We present the results of a systematic search for major flares in active galactic nuclei (AGN) in the Catalina Real-time Transient Survey as part of a broader study into extreme quasar variability. Such flares are defined in a quantitative manner as excursions above the normal, stochastic variability of quasars. We have identified 51 events from over 900 000 known quasars and high-probability quasar candidates, typically lasting 900 d and with a median peak amplitude of Δm = 1.25 mag. Characterizing the flare profile with a Weibull distribution, we find that nine of the sources are well described by a single-point single-lens model. This supports the proposal by Lawrence et al. that microlensing is a plausible physical mechanism for extreme variability. However, we attribute the majority of our events to explosive stellar-related activity in the accretion disc: superluminous supernovae, tidal disruption events and mergers of stellar mass black holes.
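As a sketch of the profile characterization mentioned above, the code below fits a Weibull-shaped flare on a constant baseline to simulated photometry with SciPy. The paper's exact parameterization is not reproduced here; the functional form, starting values, and bounds are assumptions.

```python
# Fit a Weibull-profile flare to simulated photometry (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def weibull_flare(t, amp, scale, shape, t0, base):
    """Constant baseline plus a Weibull-profile flare starting at t0."""
    x = (t - t0) / scale
    out = np.full_like(t, base, dtype=float)
    pos = x > 0
    out[pos] += amp * (shape / scale) * x[pos] ** (shape - 1) * np.exp(-x[pos] ** shape)
    return out

# Simulated photometry: a single ~900-day flare on a flat baseline.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 3000.0, 300))        # observation epochs (days)
flux = weibull_flare(t, 700.0, 500.0, 1.8, 1000.0, 10.0)
flux += rng.normal(0.0, 0.05, t.size)

p0 = [500.0, 400.0, 1.5, 900.0, 10.0]             # rough starting guesses
bounds = ([0.0, 1.0, 1.0, 0.0, 0.0], [np.inf, np.inf, 10.0, 3000.0, np.inf])
popt, _ = curve_fit(weibull_flare, t, flux, p0=p0, bounds=bounds)
print("fitted timescale (d) and shape:", popt[1], popt[2])
```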
|
178 |
Texture and Microstructure in Two-Phase Titanium Alloys. Mandal, Sudipto (01 August 2017)
This work explores the processing-microstructure-property relationships in two-phase titanium alloys such as Ti-6Al-4V and Ti-5Al-5V-5Mo-3Cr that are used for aerospace applications. For this purpose, an Integrated Computational Materials Engineering approach is used. Microstructure and texture of titanium alloys are characterized using optical microscopy, electron backscatter diffraction and X-ray diffraction. To model their properties, three-dimensional synthetic digital microstructures are generated based on experimental characterization data. An open source software package, DREAM.3D, is used to create heterogeneous two-phase microstructures that are statistically representative of two-phase titanium alloys. Both mean-field and full-field crystal plasticity models are used for simulating uniaxial compression at different loading conditions. A viscoplastic self-consistent model is used to match the stress-strain response of the Ti-5553 alloy based on uniaxial compression tests. A physically-based Mechanical Threshold Stress (MTS) model is designed to cover wide ranges of deformation conditions. Uncertainties in the parameters of the MTS model are quantified using canonical correlation analysis, a multivariate global sensitivity analysis technique. An elasto-viscoplastic full-field model based on the fast Fourier transform algorithm is used to simulate the deformation response at both the microscopic and continuum levels. The probability distribution of stresses and strains for both phases in the two-phase material is examined statistically. The effect of changing HCP phase volume fraction and morphology is explored with the intent of explaining the flow softening behavior in titanium alloys.
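The sketch below illustrates a canonical-correlation-style sensitivity analysis of the kind described above, using scikit-learn's CCA: one block holds sampled model parameters, the other holds summary features of the simulated response. The toy response surface stands in for the MTS model, whose actual parameters and equations are not reproduced here.

```python
# Canonical correlation between sampled parameters and response features.
# The response surface below is a toy assumption, not the MTS model.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(0, 1, (n, 4))        # sampled parameter sets (normalized)
# Toy response features: a yield-like level and a hardening-like slope.
Y = np.column_stack([
    2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, n),
    1.5 * X[:, 2] - 0.3 * X[:, 1] + rng.normal(0, 0.1, n),
])

cca = CCA(n_components=2).fit(X, Y)
U, V = cca.transform(X, Y)
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.3f}")
print("parameter loadings (x_weights):\n", cca.x_weights_)
```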
|
179 |
Impact of Visualization on Engineers – A Survey. Shah, Dhaval Kashyap (29 June 2016)
In recent years, there has been tremendous growth in data. Numerous research efforts and technologies have been proposed and developed in the field of Visualization to cope with the associated data analytics. Despite these new technologies, people's capacity to perform data analysis has not kept pace with the requirement. Past literature has hinted at various reasons behind this disparity. The purpose of this research is to examine the usage of Visualization specifically in the field of engineering. We conducted the research with the help of a survey identifying the places where Visualization educational shortcomings may exist. We conclude by asserting that there is a need for creating awareness and formal education about Visualization for Engineers.
|
180 |
Visual exploratory analysis of large data sets: evaluation and application. Lam, Heidi Lap Mun (November 2008)
Large data sets are difficult to analyze. Visualization has been proposed to assist exploratory data analysis (EDA) as our visual systems can process signals in
parallel to quickly detect patterns. Nonetheless, designing an effective visual
analytic tool remains a challenge.
This challenge is partly due to our incomplete understanding of how common
visualization techniques are used by human operators during analyses, either in
laboratory settings or in the workplace.
This thesis aims to further understand how visualizations can be used to support EDA. More specifically, we studied techniques that display multiple levels of visual information resolutions (VIRs) for analyses using a range of methods.
The first study is a summary synthesis conducted to obtain a snapshot of
knowledge in multiple-VIR use and to identify research questions for the thesis:
(1) low-VIR use and creation; (2) spatial arrangements of VIRs. The next two
studies are laboratory studies to investigate the visual memory cost of image
transformations frequently used to create low-VIR displays and overview use
with single-level data displayed in multiple-VIR interfaces.
For a more well-rounded evaluation, we needed to study these techniques in
ecologically-valid settings. We therefore selected the application domain of web
session log analysis and applied our knowledge from our first three evaluations
to build a tool called Session Viewer. Taking the multiple coordinated view
and overview + detail approaches, Session Viewer displays multiple levels of
web session log data and multiple views of session populations to facilitate data
analysis from the high-level statistical to the low-level detailed session analysis
approaches.
Our fourth and last study for this thesis is a field evaluation conducted at
Google Inc. with seven session analysts using Session Viewer to analyze their
own data with their own tasks. Study observations suggested that displaying
web session logs at multiple levels using the overview + detail technique helped bridge between high-level statistical and low-level detailed session analyses, and
the simultaneous display of multiple session populations at all data levels using
multiple views allowed quick comparisons between session populations. We also
identified design and deployment considerations to meet the needs of diverse
data sources and analysis styles. / Science, Faculty of / Computer Science, Department of / Graduate
|