1. A Study of the Cure Rate Model with Case Weights and Time-Dependent Weights

Datta, Aditi 14 March 2013
The proportional hazards (PH) cure rate model and the marginal structural Cox model (MSCM) are two broad approaches used in analysing survival models with longitudinal data. Cure rate models were introduced to deal with survival data in the presence of a cure fraction, and marginal structural models were introduced to adjust for time-dependent confounders through time-dependent weighting in longitudinal studies. However, few studies have tried to combine these two areas in building cure rate models in the presence of time-dependent covariates and time-dependent confounders. This thesis proposes an extension of the maximum likelihood estimation procedure for the PH cure rate model by incorporating (i) case weights, (ii) time-dependent covariates, and (iii) time-dependent weights in the presence of time-dependent covariates and time-dependent confounders into the model. Further, this thesis compares the performance of the PH cure rate model with case weights to the standard unweighted PH cure rate model through simulation studies. Results of these studies suggest that adding case weights in the PH cure rate model improves the estimation of the latency parameter when the sample size is relatively small.
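For reference, the two standard ingredients being combined can be sketched as follows: the textbook mixture-form PH cure rate model and a case-weighted log-likelihood. The notation and link functions are the conventional ones and may differ from the thesis:

```latex
S_{\text{pop}}(t \mid \mathbf{x}, \mathbf{z})
  = \pi(\mathbf{x}) + \{1 - \pi(\mathbf{x})\}\, S_0(t)^{\exp(\mathbf{z}^\top \boldsymbol{\gamma})},
\qquad
\ell_w(\boldsymbol{\theta}) = \sum_{i=1}^{n} w_i\, \ell_i(\boldsymbol{\theta})
```

Here π(x) is the cure probability (typically logistic in x), S_0 is the baseline survival of the susceptibles, and ℓ_i is the individual log-likelihood contribution. In the MSCM spirit, replacing the fixed case weights w_i with time-dependent inverse-probability weights w_i(t) gives the adjustment for time-dependent confounders.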
2. Estimation of mortality rates in stage-structured zooplankton populations

Wood, S. N. January 1989
No description available.
3. Economic Scenario Generator Analysis (short rates)

Šára, Michal January 2012
The thesis is concerned with a detailed examination of the most familiar short-rate models. Furthermore, it contains some of the author's own derivations of formulas for prices of interest rate derivatives, and some relationships between certain discretizations of these short-rate models. These formulas are then used for calibration of certain chosen models to actual market data. All the calculations are performed in R using the author's own functions, which are placed in the appendix along with the other more involved derivations.
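The abstract does not name the models covered; the Vasicek model is a representative member of the short-rate family such a survey typically includes. A minimal sketch of an Euler-Maruyama simulation and a Monte Carlo zero-coupon bond price (Python here for consistency with the other examples; the thesis's own calculations are in R, and all parameter values below are illustrative):

```python
import numpy as np

def simulate_vasicek(r0, a, b, sigma, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of the Vasicek model dr = a*(b - r)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.empty((n_paths, n_steps + 1))
    r[:, 0] = r0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        r[:, k + 1] = r[:, k] + a * (b - r[:, k]) * dt + sigma * dW
    return r

# Monte Carlo zero-coupon bond price P(0, T) = E[exp(-integral of r dt)],
# which can be checked against the Vasicek closed-form price.
T, n_steps = 5.0, 600
paths = simulate_vasicek(r0=0.02, a=0.5, b=0.03, sigma=0.01,
                         T=T, n_steps=n_steps, n_paths=20000)
price = np.exp(-paths[:, :-1].sum(axis=1) * (T / n_steps)).mean()
```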
4. Cure Rate and Destructive Cure Rate Models under Proportional Hazards Lifetime Distributions

Barui, Sandip 11 1900
Cure rate models are widely used to model time-to-event data in the presence of long-term survivors. Since their introduction by Boag (1949), cure rate models have gained significance over time due to remarkable advancements in the drug industry resulting in cures for a number of diseases. In this thesis, cure rate models are considered under a competing risk scenario in which the initial number of competing causes is described by a Conway-Maxwell-Poisson (COM-Poisson) distribution, under the assumption of proportional hazards (PH) lifetimes for the susceptibles. This provides a natural extension of the work of Balakrishnan & Pal (2013), who considered independently and identically distributed (i.i.d.) lifetimes in this setup. By linking covariates to the lifetime through the PH assumption, we obtain a flexible cure rate model. First, the baseline hazard is assumed to be of the Weibull form. Parameter estimation is carried out using the EM algorithm, and the standard errors are estimated using Louis' method. The performance of the estimation is assessed through a simulation study. A model discrimination study is performed using likelihood-based and information-based criteria, since the COM-Poisson model includes the geometric, Poisson and Bernoulli distributions as special cases. The details are covered in Chapter 2. As a natural extension of this work, we next approximate the baseline hazard with a piecewise linear approximation (PLA) and estimate it non-parametrically for the COM-Poisson cure rate model under the PH setup. The corresponding simulation study and model discrimination results are presented in Chapter 3. Lastly, we consider a destructive cure rate model, introduced by Rodrigues et al. (2011), and study it under the PH assumption for the lifetimes of susceptibles. Here, the initial number of competing causes is modeled by a weighted Poisson distribution. We then focus on three special cases, viz., the destructive exponentially weighted Poisson, destructive length-biased Poisson and destructive negative binomial cure rate models; all corresponding results are presented in Chapter 4. / Thesis / Doctor of Philosophy (PhD)
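For orientation, the COM-Poisson competing-cause construction usually takes the standard form below (a sketch of the usual formulation; the thesis's exact parametrization may differ):

```latex
P(M = m) = \frac{\lambda^m}{(m!)^{\nu}\, Z(\lambda, \nu)}, \qquad
Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^j}{(j!)^{\nu}}, \qquad
S_{\text{pop}}(t) = \sum_{m=0}^{\infty} P(M = m)\, S(t)^m
             = \frac{Z(\lambda\, S(t), \nu)}{Z(\lambda, \nu)}
```

The cured fraction is P(M = 0) = 1/Z(λ, ν), and the dispersion parameter ν generates the special cases used for model discrimination: ν = 1 recovers the Poisson, ν → ∞ the Bernoulli, and ν = 0 (with λ < 1) the geometric distribution.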
5. Transit Bus Load-Based Modal Emission Rate Model Development

Feng, Chunxia 06 April 2007
Heavy-duty diesel vehicle (HDDV) operations are a major source of pollutant emissions in major metropolitan areas. Accurate estimation of heavy-duty diesel vehicle emissions is essential in air quality planning efforts because highway and non-road heavy-duty diesel emissions account for a significant fraction of the oxides of nitrogen (NOx) and particulate matter (PM) emissions inventories. Yet, major deficiencies in the current MOBILE6 modeling approach for heavy-duty diesel vehicles have been widely recognized for more than ten years. While the most recent MOBILE6.2 model integrates marginal improvements to various internal conversion and correction factors, fundamental flaws inherent in the modeling approach remain. The major effort of this research is to develop a new heavy-duty vehicle load-based modal emission rate model that overcomes some of the limitations of existing models and emission rate prediction methods. This model is part of the proposed Heavy-Duty Diesel Vehicle Modal Emission Modeling (HDDV-MEM) framework developed by the Georgia Institute of Technology. HDDV-MEM first predicts second-by-second engine power demand as a function of vehicle operating conditions and then applies brake-specific emission rates to these activity predictions. To provide better estimates at the microscopic level, this modeling approach is designed to predict second-by-second emissions from on-road vehicle operations. This research statistically analyzes the database provided by the EPA and yields a model for predicting emissions at the microscopic level based on engine power demand and driving mode. The results demonstrate the explanatory power of engine power demand for emissions and the importance of simulating engine power in real-world applications. The modeling approach provides a significant improvement in HDDV emissions modeling compared to the current average-speed, cycle-based emissions models.
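HDDV-MEM's two-step logic (second-by-second engine power demand from operating conditions, then brake-specific emission rates applied to that demand) can be illustrated with a standard road-load power calculation. This is a conceptual sketch only: the vehicle coefficients and the emission rate below are illustrative placeholders, not HDDV-MEM's calibrated values:

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO_AIR = 1.2   # air density, kg/m^3

def engine_power_kw(v, a, mass=15000.0, grade=0.0, cd=0.6, frontal_area=8.0,
                    c_rr=0.008, drivetrain_eff=0.9):
    """Second-by-second engine power demand (kW) from road-load physics.
    v: speed (m/s); a: acceleration (m/s^2); grade: road grade (rise/run).
    All default coefficients are illustrative, not calibrated values."""
    inertia = mass * a * v                          # acceleration load
    climb = mass * G * grade * v                    # grade load
    aero = 0.5 * RHO_AIR * cd * frontal_area * v ** 3
    rolling = c_rr * mass * G * v
    return max(inertia + climb + aero + rolling, 0.0) / drivetrain_eff / 1000.0

# Apply a brake-specific emission rate (g/kWh) to each second of activity.
NOX_RATE = 8.0                      # g/kWh, illustrative placeholder
speeds = [10.0, 12.0, 12.5, 12.5]   # m/s, a short second-by-second trace
accels = [0.5, 0.5, 0.0, 0.0]       # m/s^2
nox_grams = sum(engine_power_kw(v, a) * NOX_RATE / 3600.0
                for v, a in zip(speeds, accels))
```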
6. Noise-induced reversals in bistable visual perception

García Rodríguez, Pedro Ernesto 06 May 2012
In this thesis, a set of prevailing rate-based models for bistable perception is considered in order to establish the implications of the novel results reported in Pastukhov & Braun (2011). These authors quantified not only salient aspects of bistable perception (mean and dispersion of dominance distributions), but also hysteresis effects that had been ignored until now. Extensive computational simulations of the different models rigorously demonstrate that the history-dependence of the perceptual process shown by Pastukhov & Braun (2011) effectively constrains the region of parameter space able to replicate the empirical data: only small regions residing inside a bistable, two-attractor region of the whole parameter space are actually adequate to reproduce the experimental results, both for binocular rivalry (BR) and kinetic depth effect (KDE) displays. Remarkably, the results remain valid across all the classes of models considered, regardless of the details of the neuronal implementation. The biological plausibility of the parameter region found for each of the models considered is further assessed with respect to the widely known Levelt's propositions. To that end, we make use of weighted sums across the parameter regions computed for each subject in the first part of this thesis, an algorithm that constitutes an important improvement on the methodology proposed by Shpiro et al. (2007) for fitting behavioral data with rate-based models. It is shown how different neuronal mechanisms clearly differ in their suitability to replicate Levelt's propositions. For instance, models with a slow fatiguing process given by spike-frequency adaptation (Wilson, 2003; Shpiro et al., 2007), whether described by linear (Shpiro et al., 2007) or nonlinear (Curtu et al., 2008) functions of the activity, replicate Levelt's second law quite well. In contrast, a notable discrepancy between model and empirical results is found when the negative feedback is described as long-term depression affecting the synapses between the competing neurons representing the two alternative interpretations (Laing & Chow, 2002; Shpiro et al., 2007). The present work finishes with a study of the capability of the mentioned models to reproduce the resonance effects that occur when varying external modulation frequencies, as shown by Kim et al. (2006). Importantly, a resonance with respect to the noise dispersion (i.e., a true stochastic resonance) is clearly demonstrated here for the first time. Previous estimates of the noise dispersion (20-30% of the input) and its locus (adaptation variables) are questioned, by demonstrating that increased sensitivity to even weak signals of less than 10% of the input can be obtained with the models considered, with the noise simply entering as part of the net input feeding the neuron.
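A minimal member of the rate-model family studied here: two mutually inhibiting populations with slow spike-frequency adaptation and additive input noise, in the spirit of Shpiro et al. (2007). This is a sketch with illustrative parameters, not one of the thesis's calibrated models:

```python
import numpy as np

def simulate_rivalry(T=100.0, dt=1e-3, tau=0.02, tau_a=1.5, beta=1.1,
                     phi=0.7, stim=1.0, sigma=0.03, seed=0):
    """Two-unit rate model: tau*du/dt = -u + f(stim - beta*u_other - phi*a),
    with slow adaptation a and additive noise; returns perceptual switch times."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))   # firing-rate function
    u = np.array([0.6, 0.4])
    a = np.zeros(2)
    dominant, switches = 0, []
    for step in range(int(T / dt)):
        drive = stim - beta * u[::-1] - phi * a            # cross-inhibition + fatigue
        u += dt / tau * (-u + f(drive)) + sigma * np.sqrt(dt) * rng.normal(size=2)
        a += dt / tau_a * (-a + u)
        if u[1 - dominant] > u[dominant]:                  # dominance switch
            dominant = 1 - dominant
            switches.append(step * dt)
    return np.array(switches)

durations = np.diff(simulate_rivalry())   # dominance durations, seconds
```

Hysteresis, Levelt-style input manipulations, and stochastic resonance experiments all amount to sweeping stim, beta, phi, and sigma over such a model and examining the distribution of durations.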
7. Cure Rate Models with Nonparametric Form of Covariate Effects

Chen, Tianlei 02 June 2015
This thesis focuses on the development of spline-based hazard estimation models for cure rate data. Such data can be found in survival studies with long-term survivors. Consequently, the population consists of susceptible and non-susceptible sub-populations, with the latter termed "cured". The modeling of both the cure probability and the hazard function of the susceptible sub-population is of practical interest. Here we propose two smoothing-spline-based models falling respectively into the popular classes of two-component mixture cure rate models and promotion time cure rate models. Under the framework of the two-component mixture cure rate model, Wang, Du and Liang (2012) developed a nonparametric model where the covariate effects on both the cure probability and the hazard component are estimated by smoothing splines. Our first development falls under the same framework but estimates the hazard component based on the accelerated failure time model, instead of the proportional hazards model used in Wang, Du and Liang (2012). Our new model has better interpretability in practice. The promotion time cure rate model, motivated by a simplified biological interpretation of cancer metastasis, was first proposed only a few decades ago. Nonetheless, it has quickly become a competitor to the mixture models. Our second development aims to provide a nonparametric alternative to the existing parametric or semiparametric promotion time models. / Ph. D.
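For contrast with the mixture form sketched earlier in this listing, the promotion time model postulates a Poisson number N of latent metastasis-competent cells whose activation times have distribution function F, which gives the standard formulation (sketched here with a typical log-linear link):

```latex
S_{\text{pop}}(t \mid \mathbf{x}) = \exp\{-\theta(\mathbf{x})\, F(t)\},
\qquad
\theta(\mathbf{x}) = \exp(\mathbf{x}^\top \boldsymbol{\beta}),
\qquad
\lim_{t \to \infty} S_{\text{pop}}(t \mid \mathbf{x}) = e^{-\theta(\mathbf{x})}
```

Unlike the mixture model, the cure fraction e^{-θ(x)} shares its regression structure with short-term survival, and the population hazard θ(x)f(t) has a proportional hazards form. A nonparametric alternative of the kind proposed here replaces parametric choices of F or of the covariate effects with flexible estimates such as splines.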
8. Interlimb transfer of sensorimotor adaptation: predictive factors and underlying processes

Lefumat, Hannah 11 May 2016
Motor adaptation refers to the capacity of our nervous system to produce accurate movements while the properties of our body and our environment continuously change. Interlimb transfer is a process that directly stems from motor adaptation: it occurs when knowledge gained through training with one arm changes the performance of the opposite arm. Interlimb transfer of adaptation is an intricate process. Numerous studies have investigated the patterns of transfer, with conflicting results. The aim of my PhD project was to identify which factors and processes favor interlimb transfer of adaptation and may thus explain the discrepancies found in the literature. The first two experiments investigated whether paradigmatic or idiosyncratic features influence performance in interlimb transfer. The third experiment provided insights into the processes allowing interlimb transfer by using the dual-rate model of adaptation put forth by Smith et al. (2006). Our results show that inter-individual differences may be a key factor to consider when studying interlimb transfer of adaptation. Also, the study of the different sub-processes of adaptation seems helpful for understanding how interlimb transfer works and how it relates to other behaviors such as the expression of motor memory.
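The dual-rate model referenced here decomposes adaptation into a fast state (learns quickly, forgets quickly) and a slow state (learns slowly, retains well); their sum is the motor output. A minimal sketch, with retention and learning parameters set to illustrative values in the range reported by Smith et al. (2006):

```python
import numpy as np

def dual_rate(perturbations, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Two-state adaptation model (Smith et al., 2006).
    Each trial: error = perturbation - net adaptation; each state retains a
    fraction (A) of its value and learns a fraction (B) of the error, with
    A_f < A_s (fast forgets faster) and B_f > B_s (fast learns faster)."""
    x_f = x_s = 0.0
    net = np.empty(len(perturbations))
    for i, p in enumerate(perturbations):
        err = p - (x_f + x_s)
        x_f = A_f * x_f + B_f * err
        x_s = A_s * x_s + B_s * err
        net[i] = x_f + x_s
    return net

# Illustrative schedule: 200 trials of a unit perturbation, then 20 reversed
# trials; the slow state's persistence produces spontaneous-recovery effects.
schedule = np.concatenate([np.ones(200), -np.ones(20), np.zeros(50)])
output = dual_rate(schedule)
```

Fitting the fast and slow states separately for each subject is what allows a study like this one to ask which sub-process carries over to the untrained arm.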
9. Available bandwidth estimation in IEEE 802.11n wireless networks: an experimental analysis of the effects of its new mechanisms on active techniques

AZEVEDO, Diego Cananéa Nóbrega de 08 September 2016
The estimation of capacity and available bandwidth in computer networks has been the subject of several studies in recent years, mainly because of evolving technologies and the increasing use of the Internet. Such techniques are of great importance for making better use of data transmission, avoiding both underutilization and resource exhaustion. There are numerous uses for bandwidth estimation techniques, among them multimedia streaming, peer-to-peer applications, routing protocols based on available bandwidth, quality of service, and end-to-end transport protocols. With the growing interest in these types of applications, especially video and audio transmission, bandwidth estimation has been of great interest to researchers. The estimation of available bandwidth in wireless networks is in itself a challenge, and the new IEEE 802.11n standard has brought new mechanisms to optimize the transmission of data and thus achieve higher rates. However, these factors cause existing techniques to encounter problems when trying to estimate the available bandwidth, producing results that fall short. This research experimentally demonstrates the influence of the factors added by the new standard, such as frame and channel aggregation, on active available-bandwidth estimation techniques. All of the techniques show reduced accuracy, causing estimation errors. We found that even in the simplest scenario, where the new mechanisms are disabled, most of the methods analyzed performed below expectations, supporting the claim that the wireless network environment is a major challenge for the development of this type of technique. The YAZ estimation technique proved more robust than the others, approaching the reference values in almost all proposed scenarios, with the exception of frame aggregation. In a more specific analysis of its algorithm, we demonstrate why the error in the available bandwidth estimate occurs in this context.
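A common building block of the active techniques examined here is the probe gap model: cross traffic queued between two probe packets expands their spacing, and the expansion yields an available-bandwidth estimate. A minimal conceptual sketch (tools such as YAZ layer probing strategies and filtering on top of this, and 802.11n frame aggregation violates exactly the per-packet spacing assumption made below):

```python
def probe_gap_estimate(gap_in, gap_out, capacity):
    """Probe gap model: estimate available bandwidth (same units as capacity).
    gap_in / gap_out: probe-pair spacing at sender / receiver (seconds)."""
    if gap_out <= gap_in:
        return capacity                       # no queuing observed on the path
    cross_traffic = capacity * (gap_out - gap_in) / gap_in  # competing traffic rate
    return max(capacity - cross_traffic, 0.0)

# Example: 54 Mbit/s nominal link; probes sent 1 ms apart arrive 1.4 ms apart.
print(probe_gap_estimate(gap_in=1e-3, gap_out=1.4e-3, capacity=54e6))  # ~32.4 Mbit/s
```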
10. Development of a Simulation Model for Fluidized Bed Mild Gasifier

Mazumder, AKM Monayem Hossain 17 December 2010
A mild gasification method has been developed to provide an innovative clean coal technology. The objective of this study is to develop a numerical model to investigate the thermal-flow and gasification process inside a specially designed fluidized-bed mild gasifier using the commercial CFD solver ANSYS/FLUENT. The Eulerian-Eulerian method is employed to calculate both the primary phase (air) and the secondary phase (coal particles). The Navier-Stokes equations and seven species transport equations are solved with three heterogeneous (gas-solid) and two homogeneous (gas-gas) global gasification reactions. Development of the model starts from simulating single-phase turbulent flow and heat transfer to understand the thermal-flow behavior, followed by progressively adding the five global gasification reactions, one at a time. Finally, the particles are introduced with the heterogeneous reactions. The simulation model has been successfully developed. The results are reasonable but require future experimental data for verification.
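The abstract does not list the specific reaction set; a typical choice for three heterogeneous and two homogeneous global reactions in fluidized-bed coal gasification models is sketched below (illustrative, not necessarily the exact set used in the thesis):

```latex
\begin{aligned}
\text{heterogeneous (gas-solid):}\quad
  & \mathrm{C} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{CO}, \qquad
    \mathrm{C} + \mathrm{CO_2} \rightarrow 2\,\mathrm{CO}, \qquad
    \mathrm{C} + \mathrm{H_2O} \rightarrow \mathrm{CO} + \mathrm{H_2}, \\
\text{homogeneous (gas-gas):}\quad
  & \mathrm{CO} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{CO_2}, \qquad
    \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2} \rightarrow \mathrm{H_2O}.
\end{aligned}
```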
