241

Predictive reliabilities for electronic components

Nagarur, Nagendra N. January 1988 (has links)
A reliability model to study the behavior of an electronic component subject to several failure mechanisms is developed. The mechanisms considered for the analysis are of degradation type, where the number of defects for a mechanism increases with time, eventually causing the failure of the component. The failure pattern of the component subject to a single mechanism with given initial and final numbers of defects is modelled as a pure birth process. Failure time for this mechanism is expressed as the first passage time of the birth process to state k from initial state l. The first passage time distribution is derived for different forms of transition rates. When the initial and final states of the process are considered as random, the failure time is expressed as the mixture distribution obtained from the conditional first passage time distributions. The mixture distributions are well represented by a Weibull distribution. A computer program is developed to compute the parameters of the Weibull distribution iteratively by the method of matching moments. The approximation results are statistically validated. The results for a single mechanism are extended to the case of multiple mechanisms. Extreme-value theory and competing risk theory are applied to analyze the simultaneous effects of multiple mechanisms. It is shown that the aggregate failure time distribution has a Weibull form under both theories. The model explains the influence of the physical and chemical properties of the component and the operating conditions on the failure times. It can be used for accelerated testing and for incorporating reliability at the product design stage. / Ph. D.
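The Weibull moment-matching step described in the abstract can be sketched as follows; this is a minimal illustration with hypothetical sample moments of simulated first-passage times, not the thesis's original program:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def weibull_matching_moments(sample_mean, sample_var):
    """Solve for the Weibull shape k and scale lam whose first two moments
    match the sample mean and variance (method of matching moments)."""
    cv2 = sample_var / sample_mean**2                    # squared coefficient of variation
    # For a Weibull, CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1, monotone in k,
    # so the shape parameter is found by a one-dimensional root search.
    f = lambda k: gamma(1 + 2.0 / k) / gamma(1 + 1.0 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.1, 50.0)
    lam = sample_mean / gamma(1 + 1.0 / k)               # scale then follows from the mean
    return k, lam

# Hypothetical moments of first-passage times to the critical defect count.
shape, scale = weibull_matching_moments(sample_mean=1.8e4, sample_var=6.5e7)
print(shape, scale)
```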
242

Quantification of Effect of Solar Storms on TEC over U.S. sector Using Machine Learning

Sardana, Disha 26 June 2018 (has links)
A study of large solar storms in the equinox periods of solar cycles 23 and 24 is presented to quantify their effects on the total electron content (TEC) in the ionosphere. We study the dependence of TEC over the contiguous US on various storm parameters, including the onset time of the storm, the duration of the storm, its intensity, and the rate of change of the ring current response. These parameters are inferred autonomously and compared to TEC values obtained from the CORS network of GPS stations. To quantify the effects we examine the difference between the storm-time TEC value and an average from 5 quiet days during the same month. These values are studied over a grid with 1 deg x 1 deg spatial resolution in latitude and longitude over the US sector. Correlations between storm parameters and the quantified delta TEC values are studied using machine learning techniques to identify the most important controlling variables. The weights inferred by the algorithm for each input variable show their importance to the resultant TEC change. The results of this work are compared to recent TEC studies to investigate the effects of large storms on the distribution of ionospheric density over large spatial and temporal scales. / MS / This study analyzes the impact of geomagnetic storms on the electrical properties of the upper atmosphere at altitudes where satellites routinely fly. The storms are caused by bursts of charged particles from the sun entering the Earth’s atmosphere at high latitudes, leading to phenomena like the aurora. These fluctuations in the atmospheric electrical properties can potentially have serious consequences for the electrical power grid, the communications infrastructure, and various technological systems. Given the risks solar storms can pose, it is important to predict how strong the impact of a given storm is likely to be. The current study applies machine learning techniques to model one particular parameter that relates to the electrified atmosphere over the contiguous US sector. We quantify the strength of the fluctuations as a function of various storm parameters, including onset time and duration. This enables us to autonomously infer which storm parameters have the most significant influence on the resultant atmospheric changes, and compare our results to other recent studies.
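A minimal sketch of the quiet-day baseline subtraction described above; the array names, shapes, and values are placeholders rather than the study's actual data layout:

```python
import numpy as np

# Placeholder TEC maps on a 1 deg x 1 deg latitude/longitude grid over the US sector.
# quiet_tec: TEC for five quiet days of the same month; storm_tec: the storm-time map.
rng = np.random.default_rng(42)
quiet_tec = rng.uniform(5.0, 40.0, size=(5, 25, 58))   # (day, lat, lon), in TEC units
storm_tec = rng.uniform(5.0, 60.0, size=(25, 58))      # (lat, lon)

quiet_baseline = quiet_tec.mean(axis=0)   # average quiet-day TEC per grid cell
delta_tec = storm_tec - quiet_baseline    # storm-time departure, the quantity related
                                          # to storm parameters in the study
print(delta_tec.shape)
```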
243

A scoping review to identify the techniques frequently used when analysing qualitative visual data

Smith, S.K., Mountain, Gail, Hawkins, R.J. 30 September 2015 (has links)
No / Challenges were encountered when attempting to analyse video-based data during a project exploring touch screen computer technology with people living with dementia. In order to inform the analytic process, a scoping review of published evidence was undertaken. Results of the scoping review illustrated the use of various techniques when analysing visual data, the most common of which was the transcription of video into text, which was then analysed using conversation analysis. Three additional issues emerged in the course of the review. First, there is an absence of detail in describing the ethical implications of utilising visual methods in research. Second, limited priority is given to providing a clear rationale for utilising visual methods when audio or field notes may have been a viable alternative. Third, only 40% of the reviewed articles clearly stated a chosen methodology. The conclusions of the review illustrate a lack of consistency across studies in the overall reporting of research methods and recommend that authors be explicit in their reporting of methodological issues across the research process. / The PhD is funded by the ESRC as part of the White Rose University Consortium.
244

Using Neural Networks to Classify Discrete Circular Probability Distributions

Gaumer, Madelyn 01 January 2019 (has links)
Given the rise in the application of neural networks to all sorts of interesting problems, it seems natural to apply them to statistical tests. This senior thesis studies whether neural networks built to classify discrete circular probability distributions can outperform a class of well-known statistical tests for uniformity of discrete circular data that includes the Rayleigh Test [1], the Watson Test [2], and the Ajne Test [3]. Each neural network used is relatively small, with no more than 3 layers: an input layer taking in discrete data sets on a circle, a hidden layer, and an output layer outputting probability values between 0 and 1, with 0 mapping to uniform and 1 mapping to nonuniform. In evaluating performance, I compare the accuracy, type I error, and type II error of this class of statistical tests and of the neural networks built to compete with them.
[1] Jammalamadaka, S. Rao and SenGupta, A. Topics in Circular Statistics. Series on Multivariate Analysis, 5. World Scientific Publishing Co., River Edge, NJ, 2001. ISBN 981-02-3778-2.
[2] Watson, G. S. Goodness-of-fit tests on a circle. II. Biometrika 49 (1962), 57–63.
[3] Ajne, B. A simple test for uniformity of a circular distribution. Biometrika 55 (1968), 343–354.
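One of the baseline uniformity tests named above, the Rayleigh Test, can be written down in a few lines; the following is a generic sketch for a discrete circular sample, not the thesis's own implementation:

```python
import numpy as np

def rayleigh_statistic(angles):
    """Rayleigh test of circular uniformity: Z = n * Rbar^2, where Rbar is the
    mean resultant length; large Z argues against uniformity."""
    n = len(angles)
    c, s = np.cos(angles).sum(), np.sin(angles).sum()
    rbar = np.hypot(c, s) / n              # mean resultant length
    z = n * rbar**2
    p_approx = np.exp(-z)                  # first-order large-sample p-value
    return z, p_approx

# Discrete circular data: angles restricted to m equally spaced points on the circle.
m = 12
rng = np.random.default_rng(0)
sample = 2 * np.pi * rng.integers(0, m, size=200) / m
print(rayleigh_statistic(sample))
```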
245

Multiple Calibrations in Integrative Data Analysis: A Simulation Study and Application to Multidimensional Family Therapy

Hall, Kristin Wynn 01 January 2013 (has links)
A recent advancement in statistical methodology, Integrative Data Analysis (IDA; Curran & Hussong, 2009), has led researchers to employ a calibration technique so as not to violate an independence assumption. This technique uses a randomly selected subset of a whole data set with a simplified correlational structure, or calibration, in a preliminary stage of analysis. However, a single calibration estimator suffers from instability, low precision, and loss of power. To overcome this limitation, a multiple calibration (MC; Greenbaum et al., 2013; Wang et al., 2013) approach has been developed to produce better estimators while still removing a level of dependency in the data so that the independence assumption is not violated. The MC method is conceptually similar to multiple imputation (MI; Rubin, 1987; Schafer, 1997), so MI estimators were borrowed for comparison. A simulation study was conducted to compare the MC and MI estimators, as well as to evaluate the operating characteristics of the methods in a cross-classified design of data characteristics. The estimators were tested in the context of assessing change over time in a longitudinal data set. Multiple calibrations consisting of a single measurement occasion per subject were drawn from a repeated measures data set, analyzed separately, and then combined by the rules set forth by each method to produce the final results. The data characteristics investigated were effect size, sample size, and the number of repeated measures per subject. Additionally, a real data application of an MC approach in an IDA framework was conducted on data from three completed, randomized controlled trials studying the treatment effects of Multidimensional Family Therapy (MDFT; Liddle et al., 2002) on substance use trajectories for adolescents at a one-year follow-up. The simulation study provided empirical evidence of how the MC method performs, as well as how it compares to the MI method, in a total of 27 hypothetical scenarios. There were strong asymptotic tendencies observed for the bias, standard error, mean square error, and relative efficiency of an MC estimator to approach the whole-data-set estimators as the number of calibrations approached 100. The MI combination rules proved inappropriate to borrow for the MC case because the standard error formulas were too conservative and performance with respect to power was not robust. As a general suggestion, 5 calibrations are sufficient to produce an estimator with about half the bias of a single calibration estimator and at least some indication of significance, while 20 calibrations are ideal. After 20 calibrations, the contribution of an additional calibration to the combined estimator greatly diminished. The MDFT application demonstrated a successful implementation of a 5-calibration approach in an IDA on real data, as well as the risk of missing treatment effects when analysis is limited to a single calibration's results. Additionally, results from the application provided evidence that MDFT interventions reduced the trajectories of substance use involvement at a 1-year follow-up to a greater extent than any of the active control treatment groups, overall and across all gender and ethnicity subgroups. This paper will aid researchers interested in employing an MC approach in an IDA framework, or whenever a level of dependency in a data set needs to be removed for an independence assumption to hold.
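A schematic sketch of the calibration-drawing idea described above, assuming simple averaging as the combination step (the actual MC combination rules from Greenbaum et al., 2013 are not reproduced here) and entirely synthetic data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical long-format repeated-measures data: one row per subject per occasion.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(200), 4),
    "time": np.tile(np.arange(4), 200),
})
df["y"] = 0.5 * df["time"] + rng.normal(size=len(df))   # true slope of 0.5 plus noise

def one_calibration_estimate(data, seed):
    # A calibration keeps a single, randomly chosen measurement occasion per subject,
    # so the retained observations are independent across subjects.
    cal = data.groupby("subject").sample(n=1, random_state=seed)
    # Estimate change over time by simple least squares on that calibration.
    return np.polyfit(cal["time"], cal["y"], 1)[0]

# Draw several calibrations, estimate on each, and combine by averaging.
estimates = [one_calibration_estimate(df, seed) for seed in range(20)]
print(np.mean(estimates), np.std(estimates, ddof=1))
```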
246

Novel statistical models for ecological momentary assessment studies of sexually transmitted infections

He, Fei 18 July 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The research ideas included in this dissertation are motivated by a large sexually transmitted infections (STIs) study (the IU Phone study), an ecological momentary assessment (EMA) study implemented by Indiana University from 2008 to 2013. EMA, as a group of methods used to collect subjects' up-to-date behaviors and status, can increase the accuracy of this information by allowing a participant to self-administer a survey or diary entry, in their own environment, as close to the occurrence of the behavior as possible. The IU Phone study's high reporting level shows one of the benefits gained from introducing EMA into an STI study. As a prospective study lasting 84 days, the IU Phone study has participants undergo STI testing and complete EMA forms with project-furnished cellular telephones according to predetermined schedules. At pre-selected eight-hour intervals, participants respond to a series of questions to identify sexual and non-sexual interactions with specific partners, including partner name, relationship satisfaction and sexual satisfaction with this partner, time of each coital event, and condom use for each event. STI lab results of all the participants are collected weekly as well. We are interested in several variables related to the risk of infection and sexual or non-sexual behaviors, especially the relationships among the longitudinal processes of those variables. New statistical models and applications are established to deal with data with complex dependence and sampling structures. The methodology covers a variety of statistical areas, including generalized mixed models, multivariate models, and autoregressive and cross-lagged models in longitudinal data analysis; misclassification adjustment in imperfect diagnostic tests; and variable-domain functional regression in functional data analysis. The contribution of our work is that we bridge methods from different areas with the EMA data in the IU Phone study and build a novel understanding of the associations among the variables of interest from different perspectives, based on the characteristics of the data. Besides all the statistical analyses included in this dissertation, a variety of data visualization techniques also provides informative support in presenting the complex EMA data structure.
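As one concrete illustration of the misclassification-adjustment theme mentioned above, a standard correction for prevalence estimated from an imperfect diagnostic test is the Rogan-Gladen estimator; this is a general-purpose sketch with hypothetical numbers, not the dissertation's own model:

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Adjust an observed test-positive proportion for imperfect sensitivity/specificity.

    The observed proportion satisfies p_obs = Se * p_true + (1 - Sp) * (1 - p_true),
    which is solved here for p_true and clipped to the valid range [0, 1].
    """
    p_true = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(p_true, 0.0), 1.0)

# Hypothetical values: 18% positive tests, 90% sensitivity, 95% specificity.
print(rogan_gladen(0.18, 0.90, 0.95))
```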
247

Implementation of Advanced Analytics on Customer Satisfaction Process in Comparison to Traditional Data Analytics

Akula, Venkata Ganesh Ashish 06 September 2019 (has links)
No description available.
248

Comparing Communities & User Clusters in Twitter Network Data

Bhowmik, Kowshik January 2019 (has links)
No description available.
249

An Examination of Relationships Between Exposure to Sexually Explicit Media Content and Risk Behaviors: A Case Study of College Students

Stana, Alexandru 20 December 2013 (has links)
No description available.
250

WebQDA: a collaborative web tool to support qualitative data analysis

Rique, Thiago Pereira 29 March 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The importance of collaborative environments in a globalized world is undeniable, whether to enable the sharing of information or to make interaction among people in different locations possible. The need to carry out tasks and solve problems collaboratively is also a fact of today's society. As an example, one can cite qualitative research which, when performed with the aid of computers, can make use of CAQDAS (Computer Assisted Qualitative Data Analysis Software) applications. Although it is possible to perform qualitative analysis in CAQDAS applications in isolation, study or work performed by a group has the advantage of enabling interaction among members of a team and allowing the expression of different points of view and opinions, besides being more open to comments and criticisms that help improve the quality of the work. Thus, this document presents WebQDA, a collaborative tool that provides the basic features of qualitative data analysis, with the aim of illustrating how new Web 2.0 concepts, such as social networks, can affect productivity in qualitative research when working cooperatively.
