  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Efficient, Practical, Portable Mapping Technique on Computational Grids

Phinjaroenphan, Panu, s2118294@student.rmit.edu.au January 2007 (has links)
Grid computing provides a powerful, virtual parallel system known as a computational Grid on which users can run parallel applications to solve problems quickly. However, users must be careful to allocate tasks to nodes properly because improper allocation of only one task could result in lengthy executions of applications, or even worse, applications could crash. This allocation problem is called the mapping problem, and an entity that tackles this problem is called a mapper. In this thesis, we aim to develop an efficient, practical, portable mapper. To study the mapping problem, researchers often make unrealistic assumptions such as that nodes of Grids are always reliable, that execution times of tasks assigned to nodes are known a priori, or that detailed information of parallel applications is always known. As a result, the practicality and portability of mappers developed under such conditions are uncertain. Our review of related work suggested that a more efficient tool is required to study this problem; therefore, we developed GMap, a simulator researchers/developers can use to develop practical, portable mappers. The fact that nodes are not always reliable led to the development of an algorithm for predicting the reliability of nodes and a predictor for identifying reliable nodes of Grids. Experimental results showed that the predictor reduced the chance of failures in executions of applications by half. The facts that execution times of tasks assigned to nodes are not known a priori and that detailed information of parallel applications is not always known led to the evaluation of five nearest-neighbour (nn) execution time estimators: k-nn smoothing, k-nn, adaptive k-nn, one-nn, and adaptive one-nn. Experimental results showed that adaptive k-nn was the most efficient one. We also implemented the predictor and the estimator in GMap.
Using GMap, we could reliably compare the efficiency of six mapping algorithms: Min-min, Max-min, Genetic Algorithms, Simulated Annealing, Tabu Search, and Quick-quality Map, with none of the preceding unrealistic assumptions. Experimental results showed that Quick-quality Map was the most efficient one. As a result of these findings, we achieved our goal in developing an efficient, practical, portable mapper.
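The nearest-neighbour execution-time estimation evaluated in this thesis can be sketched minimally as follows. This is an illustrative toy, not GMap's actual estimator: the feature vector (problem size, node speed) and the history values are invented for demonstration.

```python
import math

def knn_estimate(history, query, k=3):
    """Estimate a task's execution time as the mean runtime of the k
    historical tasks whose feature vectors are closest to the query.
    `history` is a list of (features, runtime) pairs."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(history, key=lambda h: dist(h[0], query))[:k]
    return sum(r for _, r in nearest) / len(nearest)

# Toy history: (problem size, node speed) -> observed runtime.
history = [((100, 1.0), 10.0), ((200, 1.0), 20.0),
           ((100, 2.0), 5.0),  ((400, 1.0), 40.0)]
print(knn_estimate(history, (150, 1.0), k=2))  # -> 15.0 (mean of 10.0 and 20.0)
```

An adaptive variant, as evaluated in the thesis, would additionally tune k per query (e.g. by leave-one-out error on the history) rather than fixing it.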
2

MAPPING ALGORITHM FOR AUTONOMOUS NAVIGATION OF LAWN MOWER USING SICK LASER

Baichbal, Shashidhar 07 May 2012 (has links)
No description available.
3

Évaluation de modèles de régression linéaire pour la cartographie de l'équivalent en eau de la neige dans la province de Québec avec le capteur micro-ondes passives AMSR-E

Comtois-Boutet, Félix January 2007 (has links)
Résumé: Measuring snow water equivalent (SWE) in the field makes it possible to predict the quantity of water released by snowmelt. Passive-microwave remote sensing offers the potential to estimate SWE and can complement these observations synoptically over the whole territory. A global SWE mapping product based on the AMSR-E sensor was developed by the NSIDC. This instrument, launched in 2002, has improved resolution compared to earlier sensors. The SWE estimate is based on the difference between a channel weakly affected (19 GHz) and a channel strongly affected (37 GHz) by volume scattering in the snowpack. The accuracy of this product was evaluated for the province of Quebec for the winters of 2003 and 2004, which had a mean SWE of 170 mm. Substantial underestimation was revealed, along with some difficulty in detecting the presence of snow. Regional linear regression models were developed for Quebec. Corrections for water and forest fraction were applied to the T19V-37V combination and improved the results. These corrections are based on air temperature from the GEM model. The best results were for the taiga snow class in winter 2003, with a relative error of 24%, while the relative error is about 40% for the maritime region. The high errors in the taiga class were attributed to snow covers deeper than the penetration capacity of the microwaves, and the errors in the maritime class to high forest fractions and wet snow. The large snow amounts and dense forest of the province of Quebec complicate the estimation of SWE in Quebec with a regression model. || Abstract: Snow water equivalent (SWE) measurements in the field allow estimation of the quantity of water released by the melting of snow.
This is useful for predicting the water reserve available for hydroelectric production. Microwave remote sensing can estimate SWE and complement those observations synoptically for whole territories. A SWE mapping product was developed by the NSIDC based on the AMSR-E sensor, launched in 2002 with improved resolution compared to previous sensors. SWE estimation is based on the difference between a channel weakly affected (19 GHz) and a channel strongly affected (37 GHz) by volume scattering. The accuracy of this product was evaluated for the province of Quebec in winter 2003 and winter 2004, with a mean SWE of 170 mm. Substantial underestimation, and some difficulty in detecting snow, were revealed. Regional linear regression models were developed for the province of Quebec. Corrections for forest and water fraction were applied to the T19V-37V combination and improved the results. Those corrections were based on air temperature from the GEM model. The best results were found for the taiga snow class in winter 2003, with a relative error of 28%, and approximately 40% for the maritime snow class. High errors in the taiga region were attributed to snow depths greater than the microwave penetration depth, and errors in the maritime region to high forest density and wet snow. The large snow amounts and high-density forest of the province of Quebec hamper the estimation of SWE with a regression model.
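The spectral-difference regression described in this abstract has the general shape below. The coefficients and the forest-fraction correction here are illustrative placeholders in the style of classic passive-microwave SWE retrievals, not the thesis's fitted regional models:

```python
def swe_estimate(t19v, t37v, forest_frac=0.0, a=4.8, b=0.0):
    """Regression-style SWE estimate (mm) from the 19/37 GHz vertical-pol
    brightness-temperature difference, with a simple forest-fraction
    correction. Coefficients a and b are illustrative, not fitted values."""
    dt = t19v - t37v            # volume scattering by dry snow lowers 37 GHz
    swe = a * dt + b
    # Forest masks the snow signal; rescale by the unforested fraction.
    if 0.0 <= forest_frac < 1.0:
        swe /= (1.0 - forest_frac)
    return max(swe, 0.0)

print(swe_estimate(250.0, 230.0))                    # open terrain estimate (mm)
print(swe_estimate(250.0, 230.0, forest_frac=0.5))   # forest correction raises it
```

The abstract's observation that deep snowpacks saturate the 19-37 GHz signal is exactly the failure mode such a linear form cannot capture, which is why the regional models needed per-class calibration.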
4

On a Ductile Void Growth Model with Evolving Microstructure Model for Inelasticity

Tjiptowidjojo, Yustianto 13 December 2014 (has links)
The objective of this work is to develop an evolution equation for the ductile growth of a spherical void in a highly strain-rate- and temperature-dependent material. The material considered in this work is stainless steel 304L at 982 °C. The material is characterized by a physically-based internal state variable model derived within consistent kinematics and thermodynamics, the Evolving Microstructure Model for Inelasticity. Through this formulation, the degradation of the elastic moduli due to damage is naturally acquired. An elastoviscoplasticity user material subroutine has also been developed and implemented in the commercially available finite element software ABAQUS. The subroutine utilizes a return mapping algorithm, where a purely elastic trial state (elastic predictor) is followed by a plastic corrector phase (return mapping). An unconditionally stable fully-implicit scheme, derived from the backward Euler integration method, has been employed to calculate the values of the internal state variables in the elastoviscoplasticity integration routine. A repeating unit cell problem is set up by introducing a spherical void inside a matrix material that simulates a periodic array of voids in a component. Using finite element analysis, a database is generated by recording the responses of the unit cell under various combinations of loading conditions, porosity, and state variables. Functional forms of the void growth equations are constructed by utilizing normalization techniques to collapse all the data into master curves. The evolution equations are converted to a form consistent with the continuum damage variable in the complete thermal-elastic-plastic-damage version of the physically-based internal state variable model.
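The elastic-predictor/plastic-corrector return mapping named in this abstract can be sketched in its simplest 1D form with linear isotropic hardening. This is a textbook sketch, not the thesis's internal-state-variable model; the material constants below are generic steel-like placeholders:

```python
def return_map_1d(eps, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """One backward-Euler step of a 1D return-mapping update with linear
    isotropic hardening: elastic predictor, then plastic corrector.
    Units: MPa. Returns (stress, plastic strain, hardening variable)."""
    sigma_trial = E * (eps - eps_p)                 # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:                              # trial state admissible: elastic
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                      # plastic corrector (closed form in 1D)
    sign = 1.0 if sigma_trial > 0 else -1.0
    sigma = sigma_trial - E * dgamma * sign         # return to the yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma

sigma, eps_p, alpha = return_map_1d(eps=0.002, eps_p=0.0, alpha=0.0)
print(sigma)  # capped near the hardened yield stress instead of E*eps = 400 MPa
```

In the viscoplastic setting of the thesis the corrector is a nonlinear system in the internal state variables, solved implicitly at each integration point rather than in closed form.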
5

Design of a Mapping Algorithm for Delay Sensitive Virtual Networks

Ivaturi, Karthikeswar 01 January 2012 (has links) (PDF)
In this era of constant evolution of the Internet, network virtualization is a powerful platform for the existence of heterogeneous and customized networks on a shared infrastructure. Virtual network embedding is a pivotal step in network virtualization and also enables the usage of virtual network mapping techniques. The existing state-of-the-art mapping techniques address the issues relating to bandwidth, processing capacity, and location constraints very effectively. But due to the advancement of real-time and delay-sensitive applications on the Internet, there is a need to address the issue of delay in virtual network mapping techniques. Since none of the existing state-of-the-art mapping algorithms address this issue, in this thesis we address it using VHub-Delay and other mapping algorithms. Based on this study and our observations, we designed a new mapping technique that addresses the issue of delay, and the effectiveness of the technique is validated by extensive simulations.
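A delay-aware check of the kind this thesis motivates can be sketched as follows: route each virtual link over the minimum-delay substrate path and accept the mapping only if the delay bound holds. This is an illustrative fragment with an invented substrate topology, not the VHub-Delay algorithm itself:

```python
import heapq

def min_delay_path(graph, src, dst):
    """Dijkstra over link delays; graph: {node: [(neighbor, delay), ...]}."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def map_virtual_link(graph, placement, vlink, max_delay):
    """Accept a virtual link only if the minimum-delay substrate path
    between the hosts of its endpoints meets the delay bound."""
    a, b = vlink
    return min_delay_path(graph, placement[a], placement[b]) <= max_delay

substrate = {"s1": [("s2", 5.0)], "s2": [("s1", 5.0), ("s3", 7.0)], "s3": [("s2", 7.0)]}
placement = {"v1": "s1", "v2": "s3"}            # virtual nodes already placed
print(map_virtual_link(substrate, placement, ("v1", "v2"), max_delay=15.0))  # True: 12 <= 15
```

A full embedder would interleave this check with node placement and with the bandwidth and capacity constraints the abstract mentions.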
6

An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach

Kartal Koc, Elcin 01 September 2012 (has links) (PDF)
In high dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a well-known nonparametric regression technique to approximate the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear basis functions which are separated from each other by breaking points (knots) for function estimation. The model estimating function is generated in a two-step procedure: forward selection and backward elimination. In the first step, a general model including many basis functions and knot points is generated; in the second, the basis functions contributing least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from the set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points by using a self-organizing-map-based approach which transforms the original data points to a lower dimensional space. Thus, fewer knot points need to be evaluated for model building in the forward selection of the MARS algorithm. The results obtained from simulated datasets and six real-world datasets show that the proposed method is time efficient in model construction without degrading the model accuracy and prediction performance. In this study, the proposed approach is implemented in the MARS and CMARS methods as an alternative to their forward step to improve them by decreasing their computing time.
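The core idea of shrinking the candidate knot set can be sketched as below. For brevity a simple quantile thinning stands in for the thesis's self-organizing-map projection (a deliberate simplification); the hinge pair is the standard MARS piecewise-linear basis:

```python
def hinge_pair(x, knot):
    """MARS piecewise-linear basis pair for one knot: max(0, x-t), max(0, t-x)."""
    return max(0.0, x - knot), max(0.0, knot - x)

def reduced_knots(xs, m):
    """Reduce the candidate knot set from every distinct data point to about
    m prototypes. Quantile thinning here is an illustrative stand-in for the
    SOM-based reduction proposed in the thesis."""
    xs = sorted(set(xs))
    step = max(1, len(xs) // m)
    return xs[::step][:m]

data = [0.1, 0.4, 0.5, 0.9, 1.3, 2.0, 2.2, 3.1, 3.3, 4.0]
knots = reduced_knots(data, 3)
print(knots)                       # 3 candidate knots instead of 10
print(hinge_pair(2.5, knots[1]))   # hinge pair evaluated at x = 2.5
```

The forward step then only scores basis functions built on the reduced candidates, which is where the reported speed-up comes from.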
7

Economic Evaluation of Percutaneous Coronary Intervention in Stable Coronary Artery Disease: Studies in Utilities and Decision Modeling

Wijeysundera, Harindra Channa 29 February 2012 (has links)
The initial treatment options for patients with stable coronary artery disease include optimal medical therapy alone, or coronary revascularization with optimal medical therapy. The most common revascularization modality is percutaneous coronary intervention (PCI) with either bare metal stents (BMS) or drug-eluting stents (DES). PCI is believed to reduce recurrent angina and thereby decrease the need for additional procedures compared to optimal medical therapy alone. It remains unclear if these benefits are sufficient to offset the increased costs and small increase in adverse events associated with PCI. The objectives of this thesis were to determine the degree of angina relief afforded by PCI and develop a tool to provide contemporary estimates of the impact of angina on quality of life. In addition, we sought to develop a comprehensive state-transition model, calibrated to real world costs and outcomes, to compare the cost-effectiveness of initial medical therapy versus PCI with either BMS or DES in patients with stable coronary artery disease. We performed a systematic search and meta-analysis of the published literature. Although PCI was associated with an overall benefit on angina relief (odds ratio [OR] 1.69; 95% Confidence Interval [CI] 1.24-2.30), this benefit was largely attenuated in contemporary studies (OR 1.13; 95% CI 0.76-1.68). Our meta-regression analysis suggests that this observation was related to greater use of evidence-based medications in more recent trials. Using simple linear regression, we were able to create a mapping tool that could accurately estimate utility weights from data on the Seattle Angina Questionnaire, the most common descriptive quality of life instrument used in the cardiovascular literature. In our economic evaluation, we found that an initial strategy of PCI with a BMS was cost-effective compared to medical therapy, with an incremental cost-effectiveness ratio (ICER) of $13,271 per quality adjusted life year gained.
In contrast, DES had a greater cost and lower survival than BMS and was therefore a dominated strategy.
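The ICER and dominance logic used in this evaluation can be written out directly. The numbers below are illustrative, not the thesis's results:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    A strategy that costs more and delivers less is 'dominated'."""
    d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"
    return d_cost / d_qaly

# Illustrative comparison, not the thesis's data.
print(icer(20000.0, 10.5, 15000.0, 10.1))   # about $12,500 per QALY gained
print(icer(25000.0, 10.4, 20000.0, 10.5))   # 'dominated': costlier and less effective
```

The second case mirrors the abstract's finding for DES versus BMS: greater cost with lower survival makes the strategy dominated, so no ICER is reported for it.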
9

A Constitutive Model for Crushable Polymer Foams Used in Sandwich Panels: Theory and FEA Application

Tong, Xiaolong 25 August 2020 (has links)
No description available.
10

Surface-based Synthesis of 3D Maps for Outdoor Unstructured Environments

Melkumyan, Narek January 2009 (has links)
Doctor of Philosophy (PhD) / This thesis is concerned with the theoretical and practical development of a surface-based mapping algorithm for reliable and robust localization and mapping in previously unknown and unstructured environments. A surface-based map consists of a set of compressed surfaces, processed and represented without geometrical modelling. Each surface in the surface-based map represents an object in the environment. The ability to represent the exact shapes of objects via individual surfaces during the mapping process makes the surface-based mapping algorithm valuable in a number of navigation applications, such as mapping of previously unknown indoor and outdoor unstructured environments, target tracking, path planning and collision avoidance. The ability to unify representations of the same object taken from different viewpoints into a single surface makes the algorithm capable of working in multi-robot mapping applications. A surface-based map of the environment is built incrementally by acquiring the 3D range image of the scene, extracting the objects' surfaces from the 3D range image, aligning the set of extracted surfaces relative to the map, and unifying the aligned set of surfaces with surfaces in the map. In the surface unification process the surfaces representing the same object are unified to make a single surface. The thesis introduces the following new methods, which are used in the surface-based mapping algorithm: the extraction of surfaces from 3D range images based on a scanned-surface continuity check; homogenization of the representation of non-homogeneously sampled surfaces; the alignment of a surface set relative to a large set of surfaces based on a surface-based alignment algorithm; evaluation of the correspondence between two surfaces based on their overlap area; unification of two surfaces belonging to the same object; and surface unification for a large set of surfaces.
The theoretical contributions of this thesis are demonstrated with a series of practical implementations in different outdoor environments.
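The continuity check used for surface extraction can be sketched in one dimension: split a range scan into segments wherever adjacent readings jump by more than a threshold. The thesis applies the idea to full 3D range images; the scan values below are invented:

```python
def extract_surfaces(ranges, jump=0.5):
    """Split a range scan into surface segments at depth discontinuities:
    consecutive readings differing by more than `jump` are assumed to
    belong to different objects. 1D sketch of the continuity check."""
    surfaces, current = [], [ranges[0]]
    for prev, r in zip(ranges, ranges[1:]):
        if abs(r - prev) > jump:
            surfaces.append(current)   # discontinuity: close the current surface
            current = []
        current.append(r)
    surfaces.append(current)
    return surfaces

scan = [2.0, 2.1, 2.1, 5.0, 5.1, 5.0, 2.2]
print(extract_surfaces(scan))  # -> [[2.0, 2.1, 2.1], [5.0, 5.1, 5.0], [2.2]]
```

In the full pipeline each extracted segment becomes a candidate surface that is then aligned against the map and unified with any surface representing the same object.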
