21

An improved engineering design flood estimation technique: removing the need to estimate initial loss / by Theresa Michelle Heneker.

Heneker, Theresa Michelle January 2002 (has links)
"May 2002" / Includes list of papers published during this study / Errata slip inserted inside back cover of v. 1 / Includes bibliographical references (leaves 331-357) / 2 v. : ill. (some col.), col. maps ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Develops an alternative design flood estimation methodology. Establishing a relationship between catchment characteristics and the rainfall excess frequency duration proportions enables the definition of these proportions for generic catchment types, increasing the potential for translation to catchments with limited data but similar hydrographic properties, thereby improving design process. / Thesis (Ph.D.)--University of Adelaide, Dept. of Civil and Environmental Engineering, 2002
22

A new approach to the train algorithm for distributed garbage collection.

Lowry, Matthew C. January 2004 (has links)
This thesis describes a new approach to achieving high quality distributed garbage collection using the Train Algorithm. This algorithm has been investigated for its ability to provide high quality collection in a variety of contexts, including persistent object systems and distributed object systems. Prior literature on the distributed Train Algorithm suggests that safe, complete, asynchronous, and scalable collection can be attained; however, an approach that achieves this combination of behaviour has yet to emerge. The mechanisms and policies described in this thesis are unique in their ability to exploit the distributed Train Algorithm in a manner that displays all four desirable qualities. Further, the mechanisms allow any number of mutator and collector threads to operate concurrently within a site; this is also a unique property amongst train-based mechanisms (distributed or otherwise). Confidence in the quality of the approach promoted in this thesis is obtained via a top-down approach. Firstly, a concise behavioural model is introduced to capture fundamental requirements for safe and complete behaviour from train-based collection mechanisms. The model abstracts over the techniques previously introduced under the banner of the Train Algorithm. It serves as a self-contained template for correct train-based collection that is independent of a target object system for deployment of the algorithm. Secondly, a means to instantiate the model in a distributed object system is described. The instantiation includes well-established techniques from prior literature, and via the model these are correctly refined and reorganised with new techniques to achieve asynchrony, scalability, and support for concurrency. The result is a flexible approach that allows a distributed system to exhibit a variety of local collection mechanisms and policies, while ensuring their interaction is safe, complete, asynchronous, and scalable regardless of the local choices made by each site. Additional confidence in the properties of the new approach is obtained from implementation within a distributed object system simulation. The implementation provides some insight into the practical issues that arise through the combination of distribution, concurrent execution within sites, and train-based collection. Executions of the simulation system are used to verify that safe collection is observed at all times, and obtain evidence that asynchrony, scalability, and concurrency can be observed in practice. / Thesis (Ph.D.)--School of Computer Science, 2004.
24

Concept-Oriented Model and Nested Partially Ordered Sets

Savinov, Alexandr 24 April 2014 (has links)
The concept-oriented model of data (COM) has recently been defined syntactically by means of the concept-oriented query language (COQL). In this paper we propose a formal embodiment of this model, called nested partially ordered sets (nested posets), and demonstrate how it is connected with its syntactic counterpart. A nested poset is a novel formal construct that can be viewed either as a nested set with a partial order relation established on its elements or as a conventional poset whose elements can themselves be posets. An element of a nested poset is defined as a couple consisting of one identity tuple and one entity tuple. We formally define the main operations on nested posets and demonstrate their usefulness in solving typical data management and analysis tasks such as logic navigation, constraint propagation, inference and multidimensional analysis.
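To make the construct concrete, the sketch below models a nested-poset element as a pair of an identity tuple and an entity tuple, with nested child elements and a separately declared partial order; the class name, fields and example data are illustrative assumptions, not part of COQL or the paper's formalism.

```python
# Illustrative sketch (not the paper's formalism): a nested-poset element
# pairs an identity tuple with an entity tuple and may contain a nested
# set of child elements; the partial order is given as explicit pairs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Element:
    identity: tuple                      # identity tuple, e.g. a key
    entity: tuple                        # entity tuple, e.g. attribute values
    children: frozenset = frozenset()    # nested elements (hypothetical field)


def leq(a: Element, b: Element, order: set) -> bool:
    """Reflexive-transitive check of the declared partial order,
    given as a set of (lesser_identity, greater_identity) pairs."""
    if a.identity == b.identity:
        return True
    frontier, seen = {a.identity}, set()
    while frontier:
        x = frontier.pop()
        seen.add(x)
        for lo, hi in order:
            if lo == x:
                if hi == b.identity:
                    return True
                if hi not in seen:
                    frontier.add(hi)
    return False


# Example: two product elements nested inside a category element.
p1 = Element(identity=("prod", 10), entity=("Poset Theory", 35.0))
p2 = Element(identity=("prod", 11), entity=("Query Languages", 42.0))
cat = Element(identity=("cat", 1), entity=("Books",), children=frozenset({p1, p2}))

order = {(p1.identity, p2.identity)}     # declare p1 < p2 within the category
print(leq(p1, p2, order))                # True
print(leq(p2, p1, order))                # False
```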
25

Ab initio studies of a pentacyclo-undecane cage lactam

Singh, Thishana January 2003 (has links)
Thesis (M.Tech.: Chemistry)--Dept. of Chemistry, Durban Institute of Technology, 2003 / ix, 70 leaves + 1 computer laser optical disc / The purpose of this study is to utilize computational techniques in the determination of the mechanistic pathways for the one-pot conversion of a pentacyclo-undecane (PCU) dione 1.1 to a pentacyclo-undecane cage lactam 1.2.
27

Simulations and data-based models for electrical conductivities of graphene nanolaminates

Rothe, Tom 13 August 2021 (has links)
Graphene-based conductor materials (GCMs) consist of stacked and decoupled layers of graphene flakes and could potentially transfer graphene's outstanding material properties, like its exceptional electrical conductivity, to the macro scale, where alternatives to heavy and expensive metallic conductors are desperately needed. To reach super-metallic conductivity, however, a systematic electrical conductivity optimization with respect to the structural and physical input parameters is required. A recent trend in process and material optimization is the use of data-based models, which utilize data science methods to quickly identify and abstract information and relationships from the available data. In this work such data-based models for the conductivity of a real GCM thin-film sample are built on data generated with an especially improved and extended version of the network simulation approach by Rizzi et al. [1, 2, 3]. Appropriate methods to create data-based models for GCMs are thereby introduced and typical challenges during the modelling process are addressed, so that data-based models for other properties of GCMs can easily be created as soon as sufficient data is accessible. Combined with experimental measurements by Slawig et al. [4], the created data-based models allow for a coherent and comprehensive description of the thin-films' electrical parameters across several length scales. / Contents:
List of Figures
List of Tables
Symbol Directory
List of Abbreviations
1 Introduction
2 Simulation approaches for graphene-based conductor materials
2.1 Traditional simulation approaches for GCMs
2.1.1 Analytical model for GCMs
2.1.2 Finite element method simulations for GCMs
2.2 A network simulation approach for GCMs
2.2.1 Geometry generation
2.2.2 Electrical network creation
2.2.3 Contact and probe setting
2.2.4 Conductivity computation
2.2.5 Results obtained with the network simulation approach
2.3 An improved implementation for the network simulation
2.3.1 Rizzi's implementation of the network simulation approach
2.3.2 A network simulation tool for parameter studies
2.3.3 Extending the network simulation approach for anisotropy investigations and multilayer flakes
3 Data-based material modelling
3.1 Introduction to data-based modelling
3.2 Data-based modelling in material science
3.3 Interpretability of data-based models
3.4 The data-based modelling process
3.4.1 Preliminary considerations
3.4.2 Data acquisition
3.4.3 Preprocessing the data
3.4.4 Partitioning the dataset
3.4.5 Training the model
3.4.6 Model evaluation
3.4.7 Real-world applications
3.5 Regression estimators
3.5.1 Mathematical introduction to regression
3.5.2 Regularization and ridge regression
3.5.3 Support Vector Regression
3.5.4 Introducing non-linearity through kernels
4 Data-based models for a real GCM thin-film
4.1 Experimental measurements
4.2 Simulation procedure
4.3 Data generation
4.4 Creating data-based models
4.4.1 Quadlinear interpolation as benchmark model
4.4.2 KR, KRR and SVR
4.4.3 Enlarging the dataset
4.4.4 KR, KRR and SVR on the enlarged training dataset
4.5 Application to the GCM sample
5 Conclusion and Outlook
5.1 Conclusion
5.2 Outlook
Acknowledgements
Statement of Authorship
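Since the thesis evaluates kernel ridge regression (KRR) and support vector regression (SVR) among its data-based models, a minimal sketch of that workflow may help; it uses scikit-learn on synthetic stand-in data, and the feature names and data-generating function are assumptions, not the thesis' actual dataset.

```python
# Minimal sketch of the data-based modelling workflow described above: fit
# kernel ridge regression (KRR) and support vector regression (SVR) to
# simulated conductivity data.  The synthetic generator and feature names
# (flake size, overlap, doping) are placeholders, not the thesis' dataset.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 3))      # e.g. flake size, overlap, doping
sigma = 2.0 * X[:, 0] + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=n)  # stand-in conductivity

X_train, X_test, y_train, y_test = train_test_split(X, sigma, test_size=0.2, random_state=0)

krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0).fit(X_train, y_train)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

print("KRR R^2:", krr.score(X_test, y_test))
print("SVR R^2:", svr.score(X_test, y_test))
```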
28

AI-Based Transport Mode Recognition for Transportation Planning Utilizing Smartphone Sensor Data From Crowdsensing Campaigns

Grubitzsch, Philipp, Werner, Elias, Matusek, Daniel, Stojanov, Viktor, Hähnel, Markus 11 May 2023 (has links)
Utilizing smartphone sensor data from crowdsensing (CS) campaigns for transportation planning (TP) requires highly reliable transport mode recognition. To address this, we present our RNN-based AI model MovDeep, which works on GPS, accelerometer, magnetometer and gyroscope data. It was trained on 92 hours of labeled data. MovDeep predicts six transportation modes (TM) on one-second time windows. A novel postprocessing step further improves the prediction results. We present a validation methodology (VM), which simulates unknown context, to get a more realistic estimation of real-world performance (RWP). We explain why existing work reports overestimated prediction quality when applied to CS data and why its results are not comparable with each other. With the introduced VM, MovDeep still achieves a 99.3 % F1-score on six TM. We confirm the very good RWP of our model on unknown context with the Sussex-Huawei Locomotion data set. For future model comparison, both publicly available data sets can be used with our VM. Finally, we compare MovDeep to a deterministic approach, as a baseline for an average-performing model (82-88 % RWP recall), on a CS data set of 540 k tracks to show the significant negative impact of even small prediction errors on TP.
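As a rough illustration of the kind of model described (a recurrent classifier over one-second sensor windows), here is a hedged PyTorch sketch; the layer sizes, channel count, sampling rate and label set are assumptions and do not reflect the actual MovDeep architecture.

```python
# Hypothetical sketch of an RNN classifier over one-second sensor windows,
# in the spirit of the transport-mode recognition described above.
import torch
import torch.nn as nn

MODES = ["walk", "bike", "car", "bus", "tram", "train"]   # assumed label set

class ModeRNN(nn.Module):
    def __init__(self, n_channels=10, hidden=64, n_classes=len(MODES)):
        super().__init__()
        # n_channels: e.g. 3-axis accelerometer + gyroscope + magnetometer + GPS speed
        self.rnn = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, timesteps, channels)
        _, h = self.rnn(x)                 # h: (1, batch, hidden)
        return self.head(h.squeeze(0))     # logits per transport mode

# One-second windows at an assumed 50 Hz -> 50 timesteps of 10 channels.
model = ModeRNN()
windows = torch.randn(32, 50, 10)          # a batch of dummy sensor windows
pred = model(windows).argmax(dim=1)        # predicted mode index per window
print(pred.shape)                          # torch.Size([32])
```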
29

Efficient Bayesian methods for mixture models with genetic applications

Zuanetti, Daiane Aparecida 14 December 2016 (has links)
We propose Bayesian methods for selecting and estimating different types of mixture models which are widely used in Genetics and Molecular Biology. We specifically propose data-driven selection and estimation methods for a generalized mixture model, which accommodates the usual (independent) and the first-order (dependent) models in one framework, and QTL (quantitative trait locus) mapping models for independent and pedigree data. For clustering genes through a mixture model, we propose three nonparametric Bayesian methods: a marginal nested Dirichlet process (NDP), which is able to cluster distributions, and a predictive recursion clustering scheme (PRC) and a subset nonparametric Bayesian (SNOB) clustering algorithm for clustering big data. We analyze and compare the performance of the proposed methods and traditional procedures of selection, estimation and clustering in simulated and real datasets. The proposed methods are more flexible, improve the convergence of the algorithms and provide more accurate estimates in many situations. In addition, we propose methods for estimating non-observable QTL genotypes and missing parents, and for improving the Mendelian probability of inheritance of non-founder genotypes using conditional independence structures. We also suggest applying diagnostic measures to check the goodness of fit of QTL mapping models.
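For readers unfamiliar with Bayesian mixture machinery, the sketch below shows a bare-bones Gibbs sampler for a finite univariate Gaussian mixture with known variance; it is a generic illustration of the building blocks such methods rest on, not the NDP, PRC or SNOB procedures proposed in the thesis, and the priors and data are invented.

```python
# Generic finite-mixture Gibbs sampler (known variance), for illustration only.
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
K, n = 2, len(data)
sigma2, mu0, tau2 = 1.0, 0.0, 10.0      # known variance, Normal prior on means
alpha = np.ones(K)                      # Dirichlet prior on weights

mu = rng.normal(size=K)
weights = np.full(K, 1.0 / K)

for it in range(200):
    # 1) sample cluster assignments given means and weights
    logp = (np.log(weights)[None, :]
            - 0.5 * (data[:, None] - mu[None, :]) ** 2 / sigma2)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=row) for row in p])

    # 2) sample mixture weights from their Dirichlet full conditional
    counts = np.bincount(z, minlength=K)
    weights = rng.dirichlet(alpha + counts)

    # 3) sample each mean from its conjugate Normal full conditional
    for k in range(K):
        var_k = 1.0 / (counts[k] / sigma2 + 1.0 / tau2)
        mean_k = var_k * (data[z == k].sum() / sigma2 + mu0 / tau2)
        mu[k] = rng.normal(mean_k, np.sqrt(var_k))

print("posterior draw of means:", np.sort(mu))      # roughly [-2, 3]
print("posterior draw of weights:", weights)
```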
30

Clustering Uncertain Data with Possible Worlds

Lehner, Wolfgang, Volk, Peter Benjamin, Rosenthal, Frank, Hahmann, Martin, Habich, Dirk 16 August 2022 (has links)
The topic of managing uncertain data has been explored in many ways. Different methodologies for data storage and query processing have been proposed. As the availability of management systems grows, research on analytics of uncertain data is gaining in importance. Similar to the challenges faced in the field of data management, data mining algorithms for uncertain data also suffer a large performance degradation compared to their counterparts on certain data. To overcome this performance degradation, the MCDB approach was developed for uncertain data management based on the possible-worlds scenario. As this methodology shows significant performance and scalability enhancements, we adopt it for mining uncertain data. In this paper, we introduce a clustering methodology for uncertain data and illustrate current issues with this approach within the field of clustering uncertain data.
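To give a feel for the possible-worlds idea in a clustering setting, here is a hedged sketch: each uncertain tuple is modelled as a Gaussian over its attributes, a number of possible worlds are sampled, each world is clustered with k-means, and co-clustering frequencies are aggregated across worlds. The data model and numbers are invented and this is not the paper's actual algorithm.

```python
# Illustrative possible-worlds clustering in the MCDB spirit described above:
# sample worlds from per-tuple distributions, cluster each world, and record
# how often pairs of tuples end up in the same cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Uncertain dataset: per-tuple mean and standard deviation (assumed model).
means = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
stds = np.full_like(means, 0.3)
n, n_worlds, k = len(means), 50, 2

co_cluster = np.zeros((n, n))
for _ in range(n_worlds):
    world = rng.normal(means, stds)               # one sampled possible world
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(world)
    co_cluster += labels[:, None] == labels[None, :]

co_cluster /= n_worlds                            # probability of co-clustering
print(np.round(co_cluster, 2))                    # tuples 0,1 and 2,3 pair up
```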
