
Study of injection moulded long glass fibre-reinforced polypropylene and the effect on the fibre length and orientation distribution

Parveen, Bushra, Caton-Rose, Philip D., Costa, F., Jin, X., Hine, P. 02 1900 (has links)
Long glass fibre (LGF) composites are extensively used in manufacturing to produce components with enhanced mechanical properties. Long fibres with lengths of 12 to 25 mm are added to a thermoplastic matrix. However, severe fibre breakage can occur during the injection moulding process, resulting in a shorter fibre length distribution (FLD). The majority of this breakage occurs because the melt experiences extreme shear stress during the preparation and injection stages. Care should be taken to ensure that the longer fibres make it through the injection moulding process without their length being significantly degraded. This study is based on a commercial 12 mm long glass fibre-reinforced polypropylene (PP) and a short glass fibre nylon. Due to the semi-flexible behaviour of long glass fibres, the fibre orientation distribution (FOD) will differ from the orientation distribution of short glass fibres in an injection moulded part. To investigate the effect that a change in fibre length has on the fibre orientation distribution, and vice versa, FOD data were measured using a 2D section image analyser. The overall purpose of the research is to show how the orientation distribution changes in an injection moulded centre-gated disc and an end-gated plaque geometry, and to compare these data against fibre orientation predictions obtained from Autodesk Moldflow Simulation Insight.
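For readers unfamiliar with FOD measures, the second-order orientation tensor that 2D section image analysis typically yields can be sketched as follows; this is an illustrative computation on made-up in-plane angles, not the thesis's measurement pipeline:

```python
import numpy as np

def orientation_tensor_2d(angles_rad):
    """Second-order orientation tensor a_ij from measured in-plane fibre angles.

    a_ij = <p_i p_j>, with p = (cos theta, sin theta) the unit vector along
    each fibre; a11 close to 1 means strong alignment with the flow (x) axis.
    """
    p1 = np.cos(angles_rad)
    p2 = np.sin(angles_rad)
    return np.array([[np.mean(p1 * p1), np.mean(p1 * p2)],
                     [np.mean(p1 * p2), np.mean(p2 * p2)]])

# hypothetical angles (degrees) for fibres mostly aligned with the flow direction
angles = np.deg2rad([5.0, -3.0, 10.0, 0.0, -7.0])
A = orientation_tensor_2d(angles)
```

By construction the tensor trace is 1, so a11 alone summarises how strongly the measured fibres follow the flow direction.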

LATENT VARIABLE MODELS GIVEN INCOMPLETELY OBSERVED SURROGATE OUTCOMES AND COVARIATES

Ren, Chunfeng 01 January 2014 (has links)
Latent variable models (LVMs) are commonly used when the outcome of main interest is an unobservable measure that is associated with multiple observed surrogate outcomes and affected by potential risk factors. This thesis develops an approach for efficiently handling missing surrogate outcomes and covariates in two- and three-level latent variable models; statistical methodology and computational software for efficiently analyzing LVMs whose surrogate outcomes and covariates are subject to missingness have been lacking. We analyze two-level LVMs for longitudinal data from the National Growth and Health Study, where surrogate outcomes and covariates are subject to missingness at any of the levels. A conventional method for efficient handling of missing data is to re-express the desired model as a joint distribution of the variables, including the surrogate outcomes that are subject to missingness, conditional on the completely observed covariates; estimate the joint model by maximum likelihood; and then transform the estimates back to the desired model. The joint model, however, generally identifies more parameters than desired. An over-identified joint model produces biased estimates of the LVM, so it is necessary to describe how to impose constraints on the joint model so that it has a one-to-one correspondence with the desired model, yielding unbiased estimation. The constrained joint model handles missing data efficiently under the assumption of ignorable missingness and is estimated by a modified application of the expectation-maximization (EM) algorithm.
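The joint-model-plus-EM strategy can be illustrated on the simplest possible case: a bivariate normal in which one "surrogate outcome" is missing at random. This is a minimal sketch under ignorable missingness, not the thesis's constrained multilevel estimator:

```python
import numpy as np

def em_bivariate_normal(y1, y2, n_iter=200):
    """EM estimates of (mu, Sigma) for a bivariate normal where some y2 are NaN,
    assuming the missingness is ignorable (missing at random)."""
    y1 = np.asarray(y1, float)
    y2 = np.asarray(y2, float)
    miss = np.isnan(y2)
    # initialise from complete cases
    mu = np.array([y1.mean(), np.nanmean(y2)])
    Sigma = np.cov(y1[~miss], y2[~miss])
    for _ in range(n_iter):
        # E-step: conditional mean/variance of the missing y2 given y1
        beta = Sigma[0, 1] / Sigma[0, 0]
        cvar = Sigma[1, 1] - beta * Sigma[0, 1]
        y2_hat = np.where(miss, mu[1] + beta * (y1 - mu[0]), y2)
        # M-step: update from expected sufficient statistics
        mu = np.array([y1.mean(), y2_hat.mean()])
        d1, d2 = y1 - mu[0], y2_hat - mu[1]
        Sigma = np.array([
            [np.mean(d1 * d1), np.mean(d1 * d2)],
            [np.mean(d1 * d2), np.mean(d2 * d2) + miss.mean() * cvar],
        ])
    return mu, Sigma

rng = np.random.default_rng(0)
n = 2000
x = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y1, y2 = x[:, 0], x[:, 1].copy()
y2[rng.random(n) < 0.3] = np.nan  # ~30% of the surrogate outcome missing at random
mu_hat, Sigma_hat = em_bivariate_normal(y1, y2)
```

The added `miss.mean() * cvar` term in the variance update is what distinguishes EM from naive single imputation: it restores the variability lost by plugging in conditional means.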

A theory of human error causation in structural design: error prediction and control via the soft system approach

Adegoke, Israel Oludotun January 2016 (has links)
No description available.

Knowledge sharing for sustainable development : a mixed-method study of an international civil engineering consultancy

Meese, Nicholas January 2012 (has links)
Sustainable development (SD) is a pressing global issue that is becoming increasingly prominent on clients' and governing bodies' agendas. In order to survive, organisations are seeking ways to mitigate their detrimental environmental impacts. This is no easy feat: SD is both complex and dynamic. To be successful, organisations need to leverage and expand their most valuable asset: knowledge. Civil engineering plays a significant role in SD, as it shapes our environment and governs our interaction with it. However, extant research asserts that civil engineering related disciplines have been slow to adopt SD oriented practices, a possible result of their complex and fragmented organisational environments. The literature suggests that effective knowledge sharing (KS) can overcome these barriers, thus driving enhanced SD performance. Consequently, this research aims to investigate how the civil engineering sector can improve its intra-organisational sharing of SD knowledge, using an international civil engineering consultancy as an exemplar. Whilst there has been much research surrounding KS and SD, there has been limited research investigating KS for SD, and this thesis contributes to that limited body of knowledge. Mixed-method research was used to address the abovementioned aim. An increasingly popular approach, it is widely believed to generate greater value through complementary integration of quantitative and qualitative research paradigms. This approach lends itself also to the ethnographic inclinations of the reported research: the author was embedded within the case organisation, and sought a rich and reliable understanding of the study phenomena. An initial set of semi-structured interviews suggested that the case organisation's members exhibit positive attitudes towards KS and SD, yet are often constrained by a number of common KS barriers, namely: a lack of organisational slack (i.e. time); a silo mentality; and poor SD ICT systems.
These socio-cultural and technical barriers were subsequently investigated and contested using social network analysis techniques and an intranet acceptance model. A number of observations are made on the relationships between the findings from the research activities. It is believed the organisation often exhibits a reactive approach to KS for SD, which is deemed undesirable. This signals the need for greater senior management support to cultivate a culture where KS for SD is the norm and is integrated with work practices. A series of recommendations are provided to help the case organisation understand how such change could be cultivated. Several implications follow from this work. The mixed-method approach revealed a number of contradictions between the findings of each research activity. It is therefore postulated that mixed-method designs can provide a richer understanding, thus reducing misconceptions of KS phenomena. Following from this, the research contends that it may be too easy for researchers to identify with ubiquitous KS barriers as the reported research suggests that these may be perceived rather than actual. The research also reinforces the need for senior management support. These individuals govern the systems in which organisational members operate and thus have the ability to enhance KS for SD. Finally, the research demonstrates that SD ICT systems have little impact unless they are embedded in receptive contexts. Thus, an action research approach to KS system development is advocated to ensure systems are shaped to meet user expectations and drive desired KS behaviours. This research is presented in five peer-reviewed articles.

Analyse combinatoire de données : structures et optimisation / Logical Analysis of Data : Structures and Optimization

Darlay, Julien 19 December 2011 (has links)
This thesis focuses on data mining problems from an operations research point of view. Data mining is the process of learning new knowledge from large datasets. The problems in this field are close to those encountered in operations research: large instances, complex objectives and algorithmic difficulty. Moreover, learning knowledge from a dataset can be viewed as a particular optimization problem with a partially known objective function. This thesis is divided into two main parts. The first part starts with an introduction to data mining, then presents a specific method from the field of discrete optimization known as Logical Analysis of Data (LAD). In this part, an original medical application and an extension of LAD to survival analysis are presented. Survival analysis is the modeling of time to an event (typically death or relapse). The proposed heuristics are derived from classical operations research methods such as integer programming, problem decomposition and greedy algorithms. The second part is more theoretical and focuses on two combinatorial problems encountered while solving practical data mining problems. The first is a problem of graph partitioning into dense subgraphs for unsupervised learning. We establish the algorithmic complexity of this problem and give a polynomial algorithm, based on dynamic programming, for the case where the graph is a tree. This algorithm relies on classical results in matching theory. The second problem is a generalization of the test cover problem for feature selection. The rows of a binary matrix are bicoloured. The objective is to find a minimum subset of columns such that any pair of rows with different colours remains distinct when the matrix is restricted to that subset of columns. We give complexity results and tight bounds on the size of optimal solutions for various matrix structures.
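The bicoloured test-cover problem described here is a set-cover-type problem, so a natural baseline is a greedy heuristic that repeatedly picks the most discriminating column. The sketch below is illustrative only and is not one of the thesis's algorithms:

```python
from itertools import combinations

def greedy_test_cover(matrix, colors):
    """Greedy heuristic: repeatedly pick the column that separates the most
    not-yet-separated pairs of differently coloured rows."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    pairs = {(i, j) for i, j in combinations(range(n_rows), 2)
             if colors[i] != colors[j]}
    chosen = []
    while pairs:
        best_col, best_sep = None, set()
        for c in range(n_cols):
            if c in chosen:
                continue
            sep = {(i, j) for (i, j) in pairs if matrix[i][c] != matrix[j][c]}
            if len(sep) > len(best_sep):
                best_col, best_sep = c, sep
        if best_col is None:  # some pair cannot be separated by any column
            break
        chosen.append(best_col)
        pairs -= best_sep
    return chosen

# 4 bicoloured rows: no single column separates every mixed-colour pair
matrix = [[0, 0], [0, 1], [1, 0], [1, 1]]
colors = [0, 1, 1, 0]
chosen = greedy_test_cover(matrix, colors)
```

On this example both columns are needed: each column alone separates only two of the four mixed-colour pairs.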

Survival Analysis using Bivariate Archimedean Copulas

Chandra, Krishnendu January 2015 (has links)
In this dissertation we solve the nonidentifiability problem of Archimedean copula models based on dependent censored data (see [Wang, 2012]). We give a set of identifiability conditions for a special class of bivariate frailty models, and our simulation results show that our proposed model is identifiable under these conditions. We use the EM algorithm to estimate the unknown parameters, and the proposed estimation approach can be applied to fit dependent censored data when the dependence is of research interest. The marginal survival functions can be estimated using the copula-graphic estimator (see [Zheng and Klein, 1995] and [Rivest and Wells, 2001]) or the estimator proposed by [Wang, 2014]. We also propose two model selection procedures for Archimedean copula models, one for uncensored data and the other for right censored bivariate data. Our simulation results are similar to those of [Wang and Wells, 2000] and suggest that both procedures work quite well. The idea of our proposed model selection procedure originates from the procedure proposed by [Wang and Wells, 2000] for right censored bivariate data, which uses the L2 norm corresponding to the Kendall distribution function. A suitable bootstrap procedure is yet to be suggested for our method. We further propose a new parameter estimator and a simple goodness-of-fit test for Archimedean copula models when the bivariate data are subject to fixed left truncation. Our simulation results suggest that our procedure needs to be improved so that it can be more powerful, reliable and efficient. In our strategy, to obtain estimates for the unknown parameters, we heavily exploit the concept of truncated tau (a measure of association established by [Manatunga and Oakes, 1996] for left truncated data). The idea of our goodness-of-fit test originates from the test for Archimedean copula models proposed by [Wang, 2010] for right censored bivariate data.
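The role Kendall's tau plays in fitting Archimedean copulas can be illustrated with the Clayton family, for which tau = theta / (theta + 2). The sketch below uses complete (uncensored, untruncated) data and is only a stand-in for the dissertation's censored- and truncated-data estimators:

```python
import numpy as np

def sample_clayton(theta, n, rng):
    """Draw n pairs from a Clayton copula by conditional inversion."""
    u1 = rng.random(n)
    v = rng.random(n)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u1, u2

def kendall_tau(x, y):
    """Sample Kendall's tau via pairwise sign comparison (no ties expected)."""
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return (sx * sy).sum() / (n * (n - 1))

rng = np.random.default_rng(1)
u1, u2 = sample_clayton(theta=2.0, n=800, rng=rng)  # true tau = 2 / (2 + 2) = 0.5
tau_hat = kendall_tau(u1, u2)
theta_hat = 2 * tau_hat / (1 - tau_hat)  # invert tau = theta / (theta + 2)
```

Moment-matching on tau like this is the complete-data analogue of the truncated-tau strategy the abstract mentions for left truncated data.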

Marginal Screening on Survival Data

Huang, Tzu Jung January 2017 (has links)
This work develops a marginal screening test to detect the presence of significant predictors for a right-censored time-to-event outcome under a high-dimensional accelerated failure time (AFT) model. Establishing a rigorous screening test in this setting is challenging, not only because of the right censoring, but also due to the post-selection inference. The oracle property in such situations fails to ensure adequate control of the family-wise error rate, and this raises questions about the applicability of standard inferential methods. McKeague and Qian (2015) constructed an adaptive resampling test to circumvent this problem under ordinary linear regression. To accommodate right censoring, we develop a test statistic based on a maximally selected Koul-Susarla-Van Ryzin estimator from a marginal AFT model. A regularized bootstrap method is used to calibrate the test. Our test is more powerful and less conservative than the Bonferroni correction and other competing methods. The proposed method is evaluated in simulation studies and applied to two real data sets.
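The max-type screening statistic and its resampling calibration can be sketched, for an uncensored linear outcome, roughly as follows. Permutation calibration stands in here for the regularized bootstrap, and this is not the Koul-Susarla-Van Ryzin construction used in the thesis:

```python
import numpy as np

def max_marginal_stat(X, y):
    """Maximally selected absolute marginal correlation over the columns of X."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.abs(r).max()

def screening_pvalue(X, y, n_boot, rng):
    """Resampling-calibrated p-value for H0: no column of X predicts y
    (permutations of y stand in for a null-calibrating bootstrap)."""
    obs = max_marginal_stat(X, y)
    null = [max_marginal_stat(X, rng.permutation(y)) for _ in range(n_boot)]
    return (1 + sum(s >= obs for s in null)) / (1 + n_boot)

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)  # only the first predictor is active
pval = screening_pvalue(X, y, n_boot=200, rng=rng)
```

Calibrating the maximum over all columns jointly, rather than testing each column and applying Bonferroni, is what makes max-type screening less conservative.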

Modelling Weather Index Based Drought Insurance For Provinces In The Central Anatolia Region

Evkaya, Ozan Omer 01 August 2012 (has links) (PDF)
Drought, an important consequence of climate change, is one of the most serious natural hazards globally. It is widely agreed that drought has adverse impacts on agricultural production, which plays a major role in the economy of a country. Studies have shown that drought directly affects crop yields, and this negative impact is expected to continue. Moreover, much research has revealed that Turkey will be affected by climate change in many respects; in particular, agricultural production will encounter dry seasons following rapid changes in precipitation amounts. Insurance is a well-established method used by people and organizations to share the risk of natural disasters. A newer way of insuring against weather shocks is to design index-based insurance, which has gained special attention in many developing countries. In this study, our aim is to model a weather-index-based drought insurance product, under different models, to help smallholder farmers in the Central Anatolia Region. First, time series techniques were applied to forecast wheat yield from past data. Then, AMS (AgroMetShell) software outputs, NDVI (Normalized Difference Vegetation Index) values, and SPI values for distinct time steps were chosen to develop a basic threshold-based drought insurance for each province. Linear regression equations were used to calculate trigger points for the weather index; based on these trigger levels, pure premium and indemnity calculations were made for each province separately. In addition, panel data analysis was used to construct an alternative linear model for drought insurance, which can help in understanding the direct effects of the selected weather-index measures on wheat yield and in reducing the basis risk of the constructed contracts.
A simple ratio was generated to compare the basis risk of the different index-based insurance contracts.
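A threshold-based payout of the kind described, and its pure premium as the expected indemnity over historical index values, can be sketched as follows; the trigger, exit and limit values are hypothetical, with SPI as the index:

```python
import numpy as np

def indemnity(index_value, trigger, exit_level, limit):
    """Linear payout: zero at/above the trigger, full limit at/below the exit."""
    if index_value >= trigger:
        return 0.0
    if index_value <= exit_level:
        return limit
    return limit * (trigger - index_value) / (trigger - exit_level)

def pure_premium(index_history, trigger, exit_level, limit):
    """Pure premium = expected indemnity over historical index values."""
    return float(np.mean([indemnity(v, trigger, exit_level, limit)
                          for v in index_history]))

# hypothetical SPI history and contract terms
spi_history = [0.3, -1.0, -2.5, 0.8, -0.2]
premium = pure_premium(spi_history, trigger=-0.5, exit_level=-2.0, limit=100.0)
```

Because the payout depends only on the index, never on the individual farm's loss, any gap between index-triggered payouts and actual yield losses is the basis risk the abstract refers to.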

Increasing The Accuracy Of Vegetation Classification Using Geology And DEM

Domac, Aysegul 01 December 2004 (has links) (PDF)
The difficulty of gathering information in the field and the coarse resolution of Landsat images necessitate the use of ancillary data in vegetation mapping. The aim of this study is to increase the accuracy of species-level vegetation classification by incorporating environmental variables in the Amanos region. In the first part of the study, a coarse vegetation classification is obtained using the maximum likelihood method with the help of forest management maps. Canonical correspondence analysis is used to explore the relationships between the environmental variables and vegetation classes. Discriminant analysis is used in the second part of the study in two stages. First, Fisher's linear equations are calculated for each of the nine previously defined groups, and each pixel is assigned to one of these groups according to the probability of that pixel belonging to the group. In the second stage, the distance raster of the maximum likelihood classification is used: distance raster pixels with a value less than one are treated as misclassified and replaced with the first-stage result for that pixel. As a result of this study, a 19.6% increase in overall accuracy is obtained by using the relationships between environmental variables and vegetation distribution.
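The maximum likelihood classification stage can be sketched as a generic Gaussian ML classifier; the class statistics and pixels below are made up, and this is not the exact software workflow used in the study:

```python
import numpy as np

def ml_classify(pixels, class_means, class_covs):
    """Gaussian maximum likelihood classification: assign each pixel to the
    class with the highest log-likelihood (equal priors assumed)."""
    scores = []
    for mu, cov in zip(class_means, class_covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)  # squared Mahalanobis distance
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# made-up spectral statistics for two classes and three pixels
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
pixels = np.array([[0.1, -0.2], [5.2, 4.9], [4.0, 4.5]])
labels = ml_classify(pixels, means, covs)
```

The Mahalanobis distance computed here plays the same role as the "distance raster" in the abstract: pixels far from every class mean are the candidates for reassignment in the second stage.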

Employing a secure Virtual Private Network (VPN) infrastructure as a global command and control gateway to dynamically connect and disconnect diverse forces on a task-force-by-task-force basis

Kilcrease, Patrick N. January 2009 (has links) (PDF)
Thesis (M.S. in Information Technology Management), Naval Postgraduate School, September 2009. Thesis Advisor: Barreto, Albert. Description based on title screen as viewed on 6 November 2009. Author's subject terms: Virtual Private Network, GHOSTNet, maritime interdiction operations, internet protocol security, encapsulating security protocol, data encryption standard. Includes bibliographical references (p. 83-84). Also available in print.
