About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
881

General conditional linear models with time-dependent coefficients under censoring and truncation

Teodorescu, Bianca 19 December 2008 (has links)
In survival analysis, interest often lies in the relationship between the survival function and a number of covariates. It usually happens that for some individuals the event of interest cannot be observed, due to right censoring and/or left truncation. A typical example is a retrospective medical study of the time between birth and death from a certain disease. Patients who die of the disease at an early age will rarely have entered the study before death and are therefore left truncated. On the other hand, for patients who are still alive at the end of the study, only a lower bound on the true survival time is known; these patients are right censored. For censored and/or truncated responses, many models in the literature describe the relationship between the survival function and the covariates (the proportional hazards or Cox model, the log-logistic model, the accelerated failure time model, the additive risks model, etc.). In these models the regression coefficients are usually assumed to be constant over time. In practice, however, the structure of the data may be more complex, and it may be better to allow coefficients that vary over time. In the example above, certain covariates (e.g. age at diagnosis, type of surgery, extension of tumor) can have a relatively high impact on survival at early ages but a lower influence at higher ages. This has motivated a number of authors to extend the Cox model to time-dependent coefficients, or to consider other time-dependent coefficient models such as the additive hazards model. In practice it is very useful to have at hand a method for checking the validity of these models.

First, we consider a very general model with time-dependent coefficients, which includes the above-mentioned models (Cox model, additive model, log-logistic model, linear transformation models, etc.) as special cases, and study parameter estimation by means of a least squares approach. The response is allowed to be subject to right censoring and/or left truncation. Second, we propose an omnibus goodness-of-fit test of whether this general time-dependent model fits the data. A bootstrap version, used to approximate the critical values of the test, is also proposed. For each proposed method, the finite-sample performance is evaluated in a simulation study, and the method is then applied to a real data set.
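The bootstrap approximation of critical values mentioned in the abstract can be sketched in miniature. The example below is a hedged, generic illustration (not the dissertation's actual estimator): it tests a constant-coefficient least-squares fit against a time-varying alternative using a sup-norm statistic, with a residual bootstrap under the null supplying the critical value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: the true coefficient varies with time,
# while the null model assumes it is constant.
n = 200
t = np.sort(rng.uniform(0, 1, n))        # observation times
x = rng.normal(size=n)                   # covariate
beta_t = 1.0 + 0.5 * t                   # time-varying true coefficient
y = beta_t * x + rng.normal(scale=0.3, size=n)

def sup_stat(y, x):
    """Sup-norm distance between the global least-squares coefficient
    and crude rolling-window local coefficient estimates."""
    beta_hat = (x @ y) / (x @ x)         # global least squares
    w = 40                               # window size (illustrative)
    local = np.array([(x[i:i+w] @ y[i:i+w]) / (x[i:i+w] @ x[i:i+w])
                      for i in range(len(y) - w)])
    return np.max(np.abs(local - beta_hat))

T_obs = sup_stat(y, x)

# Residual bootstrap under the null (constant coefficient).
beta_hat = (x @ y) / (x @ x)
resid = y - beta_hat * x
B = 200
T_boot = np.empty(B)
for b in range(B):
    y_b = beta_hat * x + rng.choice(resid, size=n, replace=True)
    T_boot[b] = sup_stat(y_b, x)

crit = np.quantile(T_boot, 0.95)         # bootstrap 5%-level critical value
reject = T_obs > crit
```

The dissertation's setting additionally handles right censoring and left truncation, which this uncensored toy omits; the bootstrap logic — refit under the null, resample residuals, recompute the statistic — is the same.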
882

Multivariate methods for the joint analysis of neuroimaging and genetic data

Le Floch, Edith 28 September 2012 (has links) (PDF)
Brain imaging is attracting growing interest, as an intermediate phenotype, for understanding the complex path that links genes to a behavioural or clinical phenotype. In this context, a first objective is to propose methods capable of identifying the part of the genetic variability that explains a part of the variability observed in neuroimaging. Classical univariate approaches ignore the joint effects that may exist between several genes, as well as the potential covariation between brain regions.

Our first contribution was to improve the sensitivity of the univariate approach by taking advantage of the multivariate nature of the genetic data at a local level. We adapt cluster-level inference from neuroimaging to single-nucleotide polymorphism (SNP) data, searching for 1D clusters of adjacent SNPs associated with the same imaging phenotype. We then extend this idea and combine voxel clusters with SNP clusters, using a simple "4D cluster"-level test that jointly detects strongly associated brain and genomic regions. We obtain promising preliminary results on both simulated and real data.

Our second contribution was to use exploratory multivariate methods to improve the detection power of imaging-genetics studies, by modelling the potentially multivariate nature of the associations at a larger scale, on both the imaging and the genetic side. Partial Least Squares regression and canonical correlation analysis have recently been proposed for the analysis of genetic and transcriptomic data; here we propose to transfer this idea to the analysis of genetic and imaging data. In addition, we study different regularisation and dimension-reduction strategies, combined with PLS or canonical analysis, to cope with the overfitting caused by the very high dimensionality of the data. We present a comparative study of these strategies on simulated data and on real functional MRI and SNP data. Univariate filtering appears necessary; however, it is the combination of univariate filtering and L1-regularised PLS that detects a significant and generalisable association on the real data, suggesting that discovering associations in imaging genetics requires a multivariate approach.
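The "univariate filtering + PLS" combination referred to above can be illustrated with a hedged, numpy-only sketch (not the thesis's code): keep the SNPs with the largest cross-covariance with the imaging variables, then extract the first PLS component as the dominant singular pair of the filtered cross-covariance matrix. All dimensions and effect sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy imaging-genetics data: n subjects, p SNPs, q imaging phenotypes.
n, p, q = 100, 500, 50
X = rng.normal(size=(n, p))              # SNP matrix (continuous, illustrative)
W = np.zeros((p, q))
W[:5, :3] = 0.8                          # a few SNPs drive a few imaging variables
Y = X @ W + rng.normal(size=(n, q))

Xc = X - X.mean(0)
Yc = Y - Y.mean(0)

# Univariate filtering: keep the 50 SNPs with the largest maximal
# absolute cross-covariance with any imaging variable.
cov = np.abs(Xc.T @ Yc) / (n - 1)
keep = np.argsort(cov.max(axis=1))[-50:]
Xf = Xc[:, keep]

# First PLS component: dominant singular vectors of the cross-covariance.
U, s, Vt = np.linalg.svd(Xf.T @ Yc, full_matrices=False)
u, v = U[:, 0], Vt[0]                    # SNP-side and imaging-side weights
score_x, score_y = Xf @ u, Yc @ v
assoc = np.corrcoef(score_x, score_y)[0, 1]
```

The L1-regularised PLS studied in the thesis would additionally sparsify `u` and `v`; the plain SVD step here is the unpenalised special case.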
883

Harmonization of SACU Trade Policies in the Tourism & Hospitality Service Sectors.

Masuku, Gabriel Mthokozisi Sifiso. January 2009 (has links)
The general objective of the proposed research is to conduct a needs analysis for the tourism and hospitality industries of South Africa, Botswana, Namibia, Lesotho and Swaziland. This will be followed by an alignment of these industries with the provisions of the General Agreement on Trade in Services, commonly known as GATS, so that a Tourism and Hospitality Services Charter may be moulded for uniform use throughout SACU. The specific objectives of the research are: to analyze impact assessment reports and studies conducted on the tourism and hospitality industries of all five SACU member states, with the aim of harmonizing standards, costs and border procedures; to recognize SACU member states' schedules of GATS commitments, especially in the service sectors being investigated, by improving market access, and to recommend minimal infrastructural development levels to be attained in support of these sectors; to distill the challenges faced by the said industries into a working document; to calibrate uniform trade standards in these sectors for use by the SACU membership; and to ensure that the template is flexible enough for SACU to adopt and use easily, for example in ongoing bilateral negotiations.
884

Special and differential treatment for trade in agriculture: does it answer the quest for development in African countries?

Fantu Farris Mulleta January 2009 (has links)
The research paper seeks to investigate the possible ways in which African countries can maximise their benefit from the existing special and differential treatment clauses for trade in agriculture, and then to make recommendations on the potential bargaining position of African countries in future negotiations on agricultural trade.
885

Analysis of 2 x 2 x 2 Tensors

Rovi, Ana January 2010 (has links)
The question of how to determine the rank of a tensor has been widely studied in the literature. However, analytical methods for computing tensor decompositions are much less developed, even for low-rank tensors. In this report we present analytical methods for finding real and complex PARAFAC decompositions of 2 x 2 x 2 tensors before computing the actual rank of the tensor. These methods are also implemented in MATLAB. We also consider how the best lower-rank approximation problem gives rise to degeneracy, and give analytical explanations for these issues.
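As a hedged numpy sketch (the report's own implementation is in MATLAB), the objects involved can be made concrete: a 2 x 2 x 2 tensor assembled as a sum of two rank-one terms, i.e. a rank-2 PARAFAC decomposition, followed by a classical slice-based rank diagnostic. The factor values are illustrative choices, not examples from the report.

```python
import numpy as np

# Factor matrices of a rank-2 PARAFAC decomposition (illustrative values);
# column r of each matrix holds the r-th factor vector.
A = np.eye(2)                        # a1 = e1, a2 = e2
B = np.eye(2)                        # b1 = e1, b2 = e2
C = np.array([[1.0, 1.0],
              [1.0, 2.0]])           # c1 = (1, 1), c2 = (1, 2)

# T = a1 (x) b1 (x) c1 + a2 (x) b2 (x) c2, assembled with einsum.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# A classical diagnostic for 2 x 2 x 2 tensors uses the two frontal slices:
# if T1 is invertible and T2 @ inv(T1) has real, distinct eigenvalues,
# the tensor has real rank 2.
T1, T2 = T[:, :, 0], T[:, :, 1]
eigvals = np.linalg.eigvals(T2 @ np.linalg.inv(T1))   # here: 1 and 2
```

Complex eigenvalues of the slice pencil would instead indicate a tensor of real rank 3 but complex rank 2, which is where the real/complex distinction in the report matters.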
886

Simulation Of Conjugate Heat Transfer Problems Using Least Squares Finite Element Method

Goktolga, Mustafa Ugur 01 October 2012 (has links) (PDF)
In this thesis study, a least-squares finite element method (LSFEM) based conjugate heat transfer solver was developed. In this solver, fluid flow and heat transfer computations are performed separately: the velocity values calculated in the flow part are exported to the heat transfer part, where they are used in the convective term of the energy equation. Incompressible Navier-Stokes equations were used in the flow simulations. Conjugate heat transfer computations require calculating the heat transfer in both the flow field and the solid region. In this study, conjugate behavior was accomplished in a fully coupled manner, i.e., the energy equation for the fluid and solid regions was solved simultaneously and no boundary conditions were defined on the fluid-solid interface. To verify that the developed solver works properly, lid-driven cavity flow, backward-facing step flow and thermally driven cavity flow problems were simulated in three dimensions, and the findings compared well with the available data from the literature. Couette flow and thermally driven cavity flow with conjugate heat transfer were modeled in two dimensions to further validate the solver. Finally, a microchannel conjugate heat transfer problem was simulated. In the flow solution part of the microchannel problem, conservation of mass was not achieved. This was expected, since the LSFEM has known mass conservation problems, especially in high-aspect-ratio channels. To overcome this problem, the weight of the continuity equation was increased by multiplying it by a constant. The weighting worked for the microchannel problem and the mass conservation issue was resolved. The results obtained for the microchannel heat transfer problem were generally in good agreement with previous experimental and numerical works.

In the first computations with the solver, quadrilateral and triangular elements were tried for two-dimensional problems, and hexahedral and tetrahedral elements for three-dimensional problems. However, since only the quadrilateral and hexahedral elements gave satisfactory results, they were used in all the simulations mentioned above.
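The continuity-weighting idea described above can be shown with a hedged toy example (plain weighted least squares, not an actual finite element assembly): scaling one residual equation by a weight before the least-squares solve drives that equation's residual down at the expense of the others, which is exactly the trade-off exploited for mass conservation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdetermined toy system A x = b standing in for discretized
# momentum + continuity residuals (illustrative, not a real discretization).
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
cont = 0                                # pretend row 0 is the continuity equation

def continuity_residual(weight):
    Aw, bw = A.copy(), b.copy()
    Aw[cont] *= weight                  # emphasize the continuity equation
    bw[cont] *= weight
    x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    return abs(A[cont] @ x - b[cont])   # unweighted continuity residual

r1, r100 = continuity_residual(1.0), continuity_residual(100.0)
# Heavier weighting shrinks the continuity residual (mass loss).
```

In the actual LSFEM functional the same effect appears as a multiplicative constant on the continuity term before the element-level least-squares minimization.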
887

Incompressible Flow Simulations Using Least Squares Spectral Element Method On Adaptively Refined Triangular Grids

Akdag, Osman 01 September 2012 (has links) (PDF)
The main purpose of this study is to develop a flow solver that employs triangular grids to solve two-dimensional, viscous, laminar, steady, incompressible flows. The flow solver is based on the Least Squares Spectral Element Method (LSSEM). It has p-type adaptive mesh refinement/coarsening capability and supports p-type nonconforming element interfaces. To validate the developed flow solver, several benchmark problems are studied and successful results are obtained. The performance of two different triangular nodal distributions, namely the Lobatto distribution and the Fekete distribution, is compared in terms of accuracy and implementation complexity. The accuracies provided by triangular and quadrilateral grids of equal computational size are compared. Adaptive mesh refinement studies are conducted using three different error indicators, including a novel one based on elemental mass loss. The effect of modifying the least-squares functional by multiplying the continuity equation by a weight factor is investigated with regard to mass conservation.
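An error indicator of the kind mentioned above (per-element mass loss driving p-type refinement) can be sketched as follows. This is a hedged illustration: the element count, indicator values, and refinement threshold are invented assumptions, not the thesis's actual indicator or criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-element mass loss: magnitude of the integral of div(u) over each
# element, which is zero for an exactly divergence-free solution.
n_elem = 30
mass_loss = np.abs(rng.normal(scale=1e-3, size=n_elem))
mass_loss[[4, 17]] = 5e-2             # two badly resolved elements (illustrative)

p_order = np.full(n_elem, 2)          # current polynomial order per element

# p-refine where the indicator exceeds a fraction of the maximum
# (assumed threshold rule, for illustration only).
threshold = 0.5 * mass_loss.max()
refine = mass_loss > threshold
p_order[refine] += 1                  # raise the polynomial order
```

Raising p only on flagged elements is what makes the nonconforming element interfaces mentioned in the abstract necessary: neighbouring elements may end up with different polynomial orders.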
888

Aerodynamic Parameter Estimation Of A Missile In Closed Loop Control And Validation With Flight Data

Aydin, Gunes 01 September 2012 (has links) (PDF)
Aerodynamic parameter estimation from closed-loop data has developed into a research area of its own, since control and stability augmentation systems have become mandatory for aircraft. This thesis focuses on aerodynamic parameter estimation of an air-to-ground missile from closed-loop data using separate surface excitations. A design procedure is proposed for designing the separate surface excitations. The effect of the excitation signals on the system is also analyzed by examining the autopilot's disturbance rejection performance. Aerodynamic parameters are estimated using two different estimation techniques, ordinary least squares and complex linear regression, and the results are compared with each other and with the aerodynamic database. An application of the studied techniques to a real system is also given to validate that they are directly applicable to real-life systems.
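The ordinary-least-squares step can be sketched in a few lines. The linear pitch-moment model, coefficient values, and signal ranges below are illustrative assumptions for the sketch, not the missile's actual aerodynamic database.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed linear aerodynamic model:
#   Cm = Cm_alpha * alpha + Cm_delta * delta + Cm_q * qhat
theta_true = np.array([-0.8, -1.5, -0.3])     # illustrative derivatives

n = 500
alpha = rng.uniform(-0.1, 0.1, n)             # angle of attack [rad]
delta = rng.uniform(-0.2, 0.2, n)             # fin deflection [rad] (excitation)
qhat = rng.uniform(-0.05, 0.05, n)            # normalized pitch rate

X = np.column_stack([alpha, delta, qhat])     # regressor matrix
Cm = X @ theta_true + rng.normal(scale=0.005, size=n)  # noisy measurements

theta_hat, *_ = np.linalg.lstsq(X, Cm, rcond=None)     # OLS estimate
```

In closed-loop flight the regressors are correlated through the feedback law, which is exactly why the thesis injects separate surface excitations: they decorrelate the columns of `X` enough for the least-squares estimate to be well conditioned.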
890

Corruption in Sweden: Exploring Danger Zones and Change.

Andersson, Staffan January 2002 (has links)
In this dissertation I study corruption in the public sector in Sweden, a country which the literature regards as having few corruption problems. Sweden is therefore classified as a “least corrupt” case, and such countries are seldom studied in corruption research. My work is thus an effort to fill a gap in the literature. This research is also motivated by a conviction that such a case provides a fertile ground for studying danger zones for corruption. For example, this work allows me to explore how institutional and contextual changes impact on corruption and danger zones. Though the main focus of this work is on Sweden, I also have comparative ambitions. First, I locate Sweden in a cross-national context. I then study corruption in Sweden using a comparative methodology and with an eye to international comparisons. I apply a combined theoretical approach and a multi-method investigation based on several empirical sources and both quantitative and qualitative techniques. This research strategy enables me to capture a phenomenon (corruption) that is more difficult to identify in countries with relatively few obvious corruption scandals than it is in countries in which the phenomenon has traditionally been studied. Regarding danger zones for corruption, the results show that some of the zones identified in the international literature, such as public procurement, are also important in Sweden. For the Swedish case, my empirical research also identifies the types of corruption that occur, perceptions of danger zones and corruption, how corruption changes over time, and how corruption is fought. With regard to the latter, one conclusion is that ingrained (male) sub-cultures can be problematic and may need to be opened up using a combination of measures like promoting a more heterogeneous group of politicians, creating more transparent proceedings in decision groups and conducting more effective audits. 
The research also highlights the importance of adapting control measures to existing structures of delegation. For example, if delegation arrangements are changed to improve efficiency and cut costs, new accountability measures may be necessary. In general, delegation and control structures should be designed so as to make the cost of shirking high. Finally, based on the results of this multi-method investigation, I conclude that one avenue for further corruption research is to connect our knowledge of danger zones to what we know about the mechanisms driving corrupt behaviour, and then to apply this to discussions of new models of the politics of management in multi-level governance.
