341

Sistema supervisório de unidades de microgeração de energia elétrica: o caso da geração de eletricidade com o biogás [Supervisory system for electric power microgeneration units: the case of electricity generation from biogas]

Otto, Rodrigo Bueno 10 March 2015 (has links)
The generating units form a distributed power generation system fuelled by biogas produced on small rural properties from the residues of agricultural and livestock activity. The supervisory system for the biogas generation units monitors local and remote environmental, electrical, and mechanical process variables and presents the data in a user-friendly interface for analysis and decision making. Beyond providing a user interface, the collected data are stored to build a process history and analysis database that will serve as a source of study and a basis for new lines of research related to this topic.
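To make the data flow concrete, here is a minimal sketch of the kind of polling-and-logging loop such a supervisory system performs, assuming a hypothetical read_sensor function, illustrative channel names, and a local SQLite history table (the thesis does not specify its acquisition hardware or storage layer):

```python
import sqlite3
import time
import random

# Hypothetical stand-in for reading one process variable (e.g. generator
# voltage) from a field device; a real system would query a PLC or a data
# acquisition board instead of returning a random value.
def read_sensor(channel: str) -> float:
    return random.uniform(0.0, 240.0)

def poll_and_store(db_path: str = "supervisory.db", cycles: int = 3) -> None:
    """Collect readings periodically and append them to a history table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS history (ts REAL, channel TEXT, value REAL)"
    )
    for _ in range(cycles):
        for channel in ("voltage", "biogas_flow", "engine_temp"):
            con.execute(
                "INSERT INTO history VALUES (?, ?, ?)",
                (time.time(), channel, read_sensor(channel)),
            )
        con.commit()
        time.sleep(1.0)  # polling interval; tuned per process in practice
    con.close()

if __name__ == "__main__":
    poll_and_store()
```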
342

Survival analysis of listed firms in Hong Kong.

January 2007 (has links)
Li, Li. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 34-36). Abstracts in English and Chinese. Contents: Chapter One, Introduction (p.1); Chapter Two, Methodology (p.5); Chapter Three, Data (p.9), with 3.1 Data Description (p.9) and 3.2 Selection of Covariate (p.13); Chapter Four, Empirical Analysis (p.20), with 4.1 General Survival Analysis by Cox PH Model (p.20), 4.2 Competing Risk Analysis of Listed Firms (p.24), and 4.3 Robustness Check (p.28); Chapter Five, Conclusion (p.30); Appendix I (p.32); Appendix II (p.33); Reference (p.34); Tables (p.37); Figures (p.58).
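For readers unfamiliar with the Cox PH model listed in the contents, the sketch below shows how a firm-survival analysis of this kind can be set up with the lifelines library; the data frame, covariates, and column names are invented for illustration and are not the thesis's actual data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative firm-level data: years listed on the exchange, a delisting
# indicator (1 = delisted, 0 = still listed / censored), and two made-up
# financial covariates.  The thesis's actual covariates are not reproduced.
firms = pd.DataFrame({
    "years_listed": [3.2, 7.5, 1.8, 10.0, 4.4, 6.1],
    "delisted":     [1,   0,   1,   0,    1,   0],
    "leverage":     [0.8, 0.3, 0.9, 0.2,  0.7, 0.4],
    "log_assets":   [5.1, 7.9, 4.3, 8.8,  5.6, 7.2],
})

# Fit a Cox proportional hazards model; coefficients are log hazard ratios.
cph = CoxPHFitter()
cph.fit(firms, duration_col="years_listed", event_col="delisted")
cph.print_summary()
```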
343

Time Series Decomposition Using Singular Spectrum Analysis

Deng, Cheng 01 May 2014 (has links)
Singular Spectrum Analysis (SSA) is a method for decomposing and forecasting time series that has seen major recent developments but is not yet routinely included in introductory time series courses. An international conference on the topic was held in Beijing in 2012. The basic SSA method decomposes a time series into trend, seasonal component, and noise. There are also more advanced extensions and applications of the method, such as change-point detection and the treatment of multivariate time series. The purpose of this work is to understand the basic SSA method through its application to the monthly average sea temperature at a point on the coast of South America, near where the "El Niño" phenomenon originates, and to artificial time series simulated using harmonic functions. The output of the basic SSA method is then compared with that of other decomposition methods, such as classic seasonal decomposition, X-11 decomposition using moving averages, and seasonal decomposition by Loess (STL), that are included in some time series courses.
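A minimal sketch of the basic SSA steps described above (embedding into a trajectory matrix, singular value decomposition, and diagonal averaging), applied to a simulated monthly series; the window length and the grouping of components into trend and seasonality are illustrative choices, not the thesis's settings:

```python
import numpy as np

def ssa_decompose(x: np.ndarray, window: int) -> np.ndarray:
    """Basic SSA: embed, decompose with SVD, and return one reconstructed
    series per elementary component (rows of the result)."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j holds x[j : j + window].
    traj = np.column_stack([x[j:j + window] for j in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    components = np.zeros((len(s), n))
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])  # rank-one piece of the matrix
        # Diagonal (Hankel) averaging turns the matrix back into a series.
        for d in range(n):
            vals = [elem[r, d - r]
                    for r in range(max(0, d - k + 1), min(window, d + 1))]
            components[i, d] = np.mean(vals)
    return components

# Toy series: linear trend + annual cycle + noise, monthly sampling.
t = np.arange(240)
x = 0.01 * t + np.sin(2 * np.pi * t / 12) + 0.3 * np.random.randn(240)
comps = ssa_decompose(x, window=60)
trend_estimate = comps[0]                 # leading component usually carries trend
seasonal_estimate = comps[1] + comps[2]   # paired components capture the cycle
```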
344

Journal of Mental Health Counseling (JMHC) Publication Pattern Review: A Meta-study of Author and Article Characteristics from 1994-2009

Byrd, Rebekah J., Erford, Bradley 01 January 2012 (has links)
Patterns of articles published in the Journal of Mental Health Counseling (JMHC) from 1994 through 2009 were reviewed. Characteristics of authors (e.g., sex, employment setting, nation of domicile) and articles (e.g., topic, type, design, sample, sample size, participant type, statistical procedures and sophistication) are described and analyzed for trends over time.
345

Career Development Quarterly (CDQ) Publication Pattern Review: A Meta-Study of Author and Article Characteristics.

Crockett, Stephanie, Byrd, Rebekah J., Erford, Bradley 01 December 2014 (has links)
Patterns of articles published in The Career Development Quarterly (CDQ) from 1990 to 2011 were reviewed in this metastudy. Author characteristics (e.g., gender, employment setting, nation of domicile) and article characteristics (e.g., topic, type, design, sample, sample size, participant type, statistical procedures and sophistication) were described and analyzed for trends over time. Significant changes were noted in increased proportions of female authors, international contributors, research articles, more sophisticated research designs, and decreased numbers of practitioner-authors. These trends highlight a robust journal that continues to evolve to address changing career development and counseling challenges.
346

Bi-filtration and stability of TDA mapper for point cloud data

Bungula, Wako Tasisa 01 August 2019 (has links)
TDA mapper is an algorithm used to visualize and analyze big data. TDA mapper is applied to a dataset X equipped with a filter function f from X to R. The output of the algorithm is an abstract graph (or simplicial complex) that captures topological and geometric information about the underlying space of X. One question of interest in TDA mapper is whether a mapper graph is stable: if a dataset X is perturbed by a small value, with the perturbed dataset denoted X∂, we would like to compare the TDA mapper graph of X to the TDA mapper graph of X∂. Given a topological space X, Tamal Dey, Facundo Memoli, and Yusu Wang proved that, if the cover of the image of f satisfies certain conditions, the TDA mapper is stable; that is, the mapper graph of X differs from the mapper graph of X∂ by a small value measured via homology. The goal of this thesis is three-fold. The first is to introduce a modified TDA mapper algorithm. The fundamental difference is that the modified version avoids the use of a filter function. In comparing the mapper graph outputs, the proposed modified mapper is shown to capture more geometric and topological features. We discuss the advantages and disadvantages of the modified mapper. Tamal Dey, Facundo Memoli, and Yusu Wang showed that a filtration of covers induces a filtration of simplicial complexes, which in turn induces a filtration of homology groups. While they focused on TDA mapper's application to topological spaces, the second goal of this thesis is to show that DBSCAN clustering gives a filtration of covers when TDA mapper is applied to a point cloud. Hence, DBSCAN gives a filtration of mapper graphs (simplicial complexes) and homology groups. More importantly, DBSCAN gives a filtration of covers, mapper graphs, and homology groups in three parameter directions: bin size, epsilon, and MinPts, so there is a multi-dimensional filtration of covers, mapper graphs, and homology groups. We also note that single-linkage clustering is a special case of DBSCAN clustering, so the results proved for DBSCAN also hold for single-linkage. However, complete-linkage does not give a filtration of covers in the bin direction, so no filtration of simplicial complexes and homology groups exists when complete-linkage is used to cluster a dataset. In general, the results hold for any clustering algorithm that gives a filtration of covers. The third (and last) goal of this thesis is to prove that two multi-dimensional persistence modules (one with respect to the original dataset X, the other with respect to the ∂-perturbation of X) are 2∂-interleaved; in other words, the mapper graphs of X and X∂ differ by a small value as measured by homology.
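For orientation, the following is a compact sketch of the classical filter-based mapper construction with DBSCAN as the clustering step; the thesis's modified, filter-free variant is not reproduced here, and the bin count, overlap, and DBSCAN parameters are arbitrary illustrative values:

```python
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN

def mapper_graph(X, f, n_bins=8, overlap=0.3, eps=0.5, min_samples=5):
    """Minimal mapper: cover the filter range with overlapping intervals,
    cluster each preimage with DBSCAN, and connect clusters that share points."""
    fmin, fmax = f.min(), f.max()
    width = (fmax - fmin) / n_bins
    graph, clusters = nx.Graph(), []
    for i in range(n_bins):
        lo = fmin + i * width - overlap * width
        hi = fmin + (i + 1) * width + overlap * width
        idx = np.where((f >= lo) & (f <= hi))[0]
        if len(idx) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for lab in set(labels) - {-1}:            # -1 marks DBSCAN noise points
            members = set(idx[labels == lab])
            node = len(clusters)
            clusters.append(members)
            graph.add_node(node, size=len(members))
    # Nerve of the cover: an edge whenever two clusters share a data point.
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            if clusters[a] & clusters[b]:
                graph.add_edge(a, b)
    return graph

# Toy point cloud: a noisy circle, with the x-coordinate as filter function.
theta = np.random.uniform(0, 2 * np.pi, 500)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * np.random.randn(500, 2)
g = mapper_graph(X, f=X[:, 0], eps=0.3, min_samples=5)
print(g.number_of_nodes(), g.number_of_edges())
```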
347

Investigating Post-Earnings-Announcement Drift Using Principal Component Analysis and Association Rule Mining

Schweickart, Ian R. W. 01 January 2017 (has links)
Post-Earnings-Announcement Drift (PEAD) is commonly accepted in the fields of accounting and finance as evidence of stock market inefficiency. Less accepted are the numerous explanations for this anomaly. This project aims to investigate the cause of PEAD by harnessing machine learning algorithms, namely Principal Component Analysis (PCA) and a rule-based learning technique, applied to large stock market data sets. Based on the notion that the market is consumer driven, the analysis uncovers repeated occurrences of irrational behavior exhibited by traders in response to news events such as earnings reports. The project produces findings in support of the PEAD anomaly using methods drawn from neither accounting nor finance. In particular, it finds evidence of a delayed price response exhibited in trader behavior, a common manifestation of the PEAD phenomenon.
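As a rough illustration of the dimensionality-reduction step, the sketch below applies PCA to a matrix of post-announcement return paths; the data here is simulated noise purely to show the mechanics, not the project's stock market data set:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix of post-announcement abnormal returns: one row per
# earnings event, one column per trading day after the announcement.
rng = np.random.default_rng(0)
abnormal_returns = rng.normal(scale=0.01, size=(1000, 60))

# PCA compresses the 60-day return paths into a few dominant patterns;
# a persistent drift component would load with one sign across most days.
pca = PCA(n_components=5)
scores = pca.fit_transform(abnormal_returns)
print(pca.explained_variance_ratio_)
print(pca.components_[0][:10])   # loadings of the leading pattern
```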
348

Improved Standard Error Estimation for Maintaining the Validities of Inference in Small-Sample Cluster Randomized Trials and Longitudinal Studies

Tanner, Whitney Ford 01 January 2018 (has links)
Data arising from Cluster Randomized Trials (CRTs) and longitudinal studies are correlated, and generalized estimating equations (GEE) are a popular analysis method for correlated data. Previous research has shown that analyses using GEE could result in liberal inference due to the use of the empirical sandwich covariance matrix estimator, which can yield negatively biased standard error estimates when the number of clusters or subjects is not large. Many techniques have been presented to correct this negative bias; however, use of these corrections can still result in biased standard error estimates and thus test sizes that are not consistently at their nominal level. Therefore, there is a need for an improved correction such that nominal type I error rates will consistently result. First, GEEs are becoming a popular choice for the analysis of data arising from CRTs. We study the use of recently developed corrections for empirical standard error estimation and the use of a combination of two popular corrections. In an extensive simulation study, we find that nominal type I error rates can be consistently attained when using an average of two popular corrections developed by Mancl and DeRouen (2001, Biometrics 57, 126-134) and Kauermann and Carroll (2001, Journal of the American Statistical Association 96, 1387-1396) (AVG MD KC). Use of this new correction was found to notably outperform the use of previously recommended corrections. Second, data arising from longitudinal studies are also commonly analyzed with GEE. We conduct a simulation study, finding two methods that attain nominal type I error rates more consistently than other methods in a variety of settings: first, a recently proposed method by Westgate and Burchett (2016, Statistics in Medicine 35, 3733-3744) that specifies both a covariance estimator and degrees of freedom, and second, AVG MD KC with degrees of freedom equaling the number of subjects minus the number of parameters in the marginal model. Finally, stepped wedge trials are an increasingly popular alternative to traditional parallel cluster randomized trials. Such trials often utilize a small number of clusters and numerous time intervals, and these components must be considered when choosing an analysis method. A generalized linear mixed model containing a random intercept and fixed time and intervention covariates is the most common analysis approach. However, the sole use of a random intercept applies assumptions that will be violated in practice. We show, using an extensive simulation study based on a motivating example and a more general design, that alternative analysis methods are preferable for maintaining the validity of inference in small-sample stepped wedge trials with binary outcomes. First, we show that the use of generalized estimating equations, with an appropriate bias correction and a degrees of freedom adjustment dependent on the study setting type, will result in nominal type I error rates. Second, we show that the use of a cluster-level summary linear mixed model can also achieve nominal type I error rates for equal cluster size settings.
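A small sketch of a GEE analysis of simulated CRT-like data, assuming statsmodels' GEE implementation and its built-in bias-reduced sandwich covariance option; the averaged MD/KC correction and the degrees-of-freedom adjustments studied in the thesis are not implemented here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated CRT-like data: a modest number of clusters, a cluster-level
# treatment indicator, and a binary outcome (purely illustrative).
rng = np.random.default_rng(1)
n_clusters, cluster_size = 20, 30
cluster = np.repeat(np.arange(n_clusters), cluster_size)
treat = np.repeat(rng.integers(0, 2, n_clusters), cluster_size)
u = np.repeat(rng.normal(scale=0.4, size=n_clusters), cluster_size)
p = 1 / (1 + np.exp(-(-0.5 + 0.4 * treat + u)))
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

# GEE with an exchangeable working correlation.  With only 20 clusters the
# uncorrected robust (sandwich) standard errors tend to be too small, which
# is the problem the corrections studied in the thesis address.
model = sm.GEE.from_formula(
    "y ~ treat", groups="cluster", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
robust = model.fit()                            # uncorrected sandwich covariance
corrected = model.fit(cov_type="bias_reduced")  # one available small-sample correction
print(robust.bse["treat"], corrected.bse["treat"])
```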
349

Prediction of DNA-Binding Proteins and their Binding Sites

Pokhrel, Pujan 01 May 2018 (has links)
DNA-binding proteins play an important role in various essential biological processes such as DNA replication, recombination, repair, gene transcription, and expression. The identification of DNA-binding proteins and the residues involved in the contacts is important for understanding the DNA-binding mechanism in proteins. Moreover, it has been reported in the literature that mutations of some DNA-binding residues on proteins are associated with some diseases. The identification of these proteins and their binding mechanism generally requires experimental techniques, which makes large-scale study extremely difficult. Thus, the prediction of DNA-binding proteins and their binding sites from sequence alone is one of the most challenging problems in the field of genome annotation. Since the start of the human genome project, many attempts have been made to solve the problem with different approaches, but the accuracy of these methods is still not suitable for large-scale annotation of proteins. Rather than relying solely on existing machine learning techniques, I sought to combine them using a novel "stacking" technique and problem-specific architectures to solve the problem with better accuracy than existing methods. This thesis presents a possible solution to the DNA-binding protein prediction problem that performs better than state-of-the-art approaches.
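The general stacking idea referenced above can be illustrated with scikit-learn's StackingClassifier; the synthetic features and the choice of base and meta learners below are placeholders rather than the thesis's problem-specific architecture:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in features: real DNA-binding predictors derive features from the
# protein sequence (e.g. evolutionary profiles); here the data is synthetic.
X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           random_state=0)

# Stacking: base learners produce out-of-fold predictions that a meta-learner
# combines, which is the general idea the thesis builds on.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```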
350

An Automated Analysis Of Single Particle Tracking Data For Proteins That Exhibit Multi Component Motion.

Ali, Rehan 01 January 2018 (has links)
Neurons are polarized cells with dendrites and an axon projecting from their cell body. Due to this polarized structure, a major challenge for neurons is the transport of material to and from the cell body. The transport that occurs between the cell body and the axon is called axonal transport. Axonal transport has three major components: molecular motors, which act as vehicles; microtubules, which serve as tracks on which these motors move; and microtubule-associated proteins, which regulate the transport of material. Axonal transport maintains the integrity of a neuron, and its dysfunction is linked to neurodegenerative diseases such as Alzheimer's disease, frontotemporal dementia linked to chromosome 17, and Pick's disease. Therefore, understanding the process of axonal transport is extremely important. Single particle tracking is one method by which axonal transport is studied. It involves fluorescent labelling of molecular motors and microtubule-associated proteins and tracking their position in time. Single particle tracking has shown that both molecular motors and microtubule-associated proteins exhibit motion with multiple components. These components are directed, in which motion proceeds in one direction; diffusive, in which motion is random; and static, in which there is no motion. Moreover, molecular motors and microtubule-associated proteins also switch between these different components within a single instance of motion. We have developed a MATLAB program, called MixMAs, which specializes in analyzing the data provided by single particle tracking. MixMAs uses a sliding window approach to analyze trajectories of motion. It is capable of distinguishing between the different components of motion exhibited by molecular motors and microtubule-associated proteins, and it identifies the transitions that take place between them. Most importantly, it is not limited by the number of transitions or the number of components present in a single trajectory. The analysis results provided by MixMAs include all the parameters required for a complete characterization of a particle's motion: the number of transitions between the different components of motion, the dwell times of the components, the velocity of the directed component, and the diffusion coefficient of the diffusive component. We have validated MixMAs by simulating the motion of particles that show all three components of motion with all the possible transitions between them. The simulations are performed for different values of the error in localizing the position of a particle and confirm that MixMAs accurately calculates the parameters of motion over a range of localization errors. Finally, we show an application of MixMAs to experimentally obtained single particle data for the Kinesin-3 motor.
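A simplified sketch of a sliding-window classifier in the spirit of the approach described above, labelling each window as directed, diffusive, or static from the slope of the mean squared displacement on a log-log scale; the window length, thresholds, and toy trajectory are illustrative, and this is not the MixMAs implementation (which is in MATLAB):

```python
import numpy as np

def classify_windows(track: np.ndarray, window: int = 20, max_lag: int = 5):
    """Slide a window along a 2-D trajectory and label each window as
    'directed', 'diffusive', or 'static' from the slope of log MSD vs log lag."""
    labels = []
    for start in range(len(track) - window):
        seg = track[start:start + window]
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean(np.sum((seg[lag:] - seg[:-lag]) ** 2, axis=1))
                        for lag in lags])
        if msd[-1] < 1e-4:                      # essentially no displacement
            labels.append("static")
            continue
        alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]  # anomalous exponent
        if alpha > 1.4:
            labels.append("directed")
        elif alpha > 0.6:
            labels.append("diffusive")
        else:
            labels.append("static")
    return labels

# Toy track: a diffusive segment followed by a directed run.
rng = np.random.default_rng(2)
diffusive = np.cumsum(rng.normal(scale=0.05, size=(100, 2)), axis=0)
directed = diffusive[-1] + np.cumsum(np.tile([0.1, 0.0], (100, 1))
                                     + rng.normal(scale=0.01, size=(100, 2)), axis=0)
track = np.vstack([diffusive, directed])
print(classify_windows(track)[:5], classify_windows(track)[-5:])
```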
