951

Dealing with Geographic Information in Location-Based Search Engines

Mr Saeid Asadi Unknown Date (has links)
No description available.
952

Model-based strategies for automated segmentation of cardiac magnetic resonance images

Lin, Xiang, 1971- January 2008 (has links)
Segmentation of the left and right ventricles is vital to clinical magnetic resonance imaging studies of cardiac function. A single cardiac examination results in a large amount of image data. Manual analysis by experts is time-consuming and also susceptible to intra- and inter-observer variability. This leads to the urgent requirement for efficient image segmentation algorithms to automatically extract clinically relevant parameters. Present segmentation techniques typically require at least some user interaction or editing, and do not deal well with the right ventricle. This thesis presents mathematical model-based methods to automatically localize and segment the left and right ventricular endocardium and epicardium in 3D cardiac magnetic resonance data without any user interaction. An efficient initialization algorithm was developed which used a novel temporal Fourier analysis to determine the size, orientation and position of the heart. Quantitative validation on a large dataset containing 330 patients showed that the initialized contours had an average error of only ~5 pixels (modified Hausdorff distance) in the middle short-axis slices. A model-based graph-cuts algorithm was investigated and achieved good results on the midventricular slices, but was not found to be robust on other slices. Instead, automated segmentation of both the left and right ventricular contours was performed using a new framework, called SMPL (Simple Multi-Property Labelled) atlas-based registration. This framework was able to integrate boundary, intensity and anatomical information. A comparison of similarity measures showed that the sum of squared differences was most appropriate in this context. The method improved the average contour errors of the middle short-axis slices to ~1 pixel. The detected contours were then used to update the 3D model using a new feature-based 3D registration method. These techniques were iteratively applied to both short-axis and long-axis slices, resulting in a 3D segmentation of the patient's heart. This automated model-based method showed good agreement with expert observers, giving average errors of ~1–4 pixels on all slices.
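The temporal Fourier initialization lends itself to a brief illustration: over a cardiac cycle, pixels inside the beating heart vary periodically at the heart rate, so the magnitude of the first temporal harmonic highlights the ventricles. The following Python sketch is a minimal, hypothetical reconstruction of that idea (the function names and the top-quantile thresholding are illustrative assumptions, not the thesis's actual initializer):

```python
import numpy as np

def first_harmonic_map(frames):
    """Per-pixel magnitude of the first temporal Fourier harmonic.

    frames: array of shape (T, H, W), one slice imaged over the cardiac
    cycle. Pixels inside the beating heart vary strongly at the heart
    rate, so the first harmonic lights up the ventricles.
    """
    spectrum = np.fft.fft(frames, axis=0)   # FFT along the time axis
    return np.abs(spectrum[1])              # first (fundamental) harmonic

def rough_heart_location(frames, quantile=0.99):
    """Centroid of the strongest first-harmonic responses: a crude
    stand-in for estimating the heart's position from periodic motion."""
    h1 = first_harmonic_map(frames)
    mask = h1 >= np.quantile(h1, quantile)  # keep the strongest 1% of pixels
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()
```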
953

Design and evaluation of software obfuscations

Majumdar, Anirban January 2008 (has links)
Software obfuscation is a protection technique for making code unintelligible to automated program comprehension and analysis tools. It works by performing semantics-preserving transformations that increase the difficulty of automatically extracting the computational logic from code. Obfuscating transforms in the existing literature have been designed with the ambitious goal of being resilient against all possible reverse engineering attacks. Even though some of the constructions are based on intractable computational problems, we do not know, in practice, how to generate hard instances of obfuscated problems such that all forms of program analysis would fail. In this thesis, we address the problem of software protection by developing a weaker notion of obfuscation under which it is not required to guarantee absolute black-box security. Using this notion, we develop provably-correct obfuscating transforms using dependencies existing within program structures and indeterminacies in communication characteristics between programs in a distributed computing environment. We show how several well-known static analysis tools can be used for reverse engineering obfuscating transforms that derive resilience from computationally hard problems. In particular, we restrict ourselves to one common and potent static analysis tool, the static slicer, and use it as our attack tool. We show the use of derived software engineering metrics to indicate the degree of success or failure of a slicer attack on a piece of obfuscated code. We address the issue of proving correctness of obfuscating transforms by adapting existing proof techniques for functional program refinement and communicating sequential processes. The results of this thesis could be used for future work in two ways: first, future researchers may extend our proposed techniques to design obfuscations using a wider range of dependencies that exist between dynamic program structures. Our restricted attack model using one static analysis tool can also be relaxed, and obfuscations capable of withstanding a broader class of static and dynamic analysis attacks could be developed based on the same principles. Second, our obfuscatory strength evaluation techniques could guide anti-malware researchers in the development of tools to detect obfuscated strains of polymorphic viruses. / Whole document restricted, but available by request; use the feedback form to request access.
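To make "semantics-preserving transformation" concrete, the sketch below shows a textbook opaque predicate in Python: a condition that always evaluates to the same value at runtime but is awkward for an automated analyser to resolve. This is a generic illustration of the technique, not one of the thesis's own constructions:

```python
def opaque_true(x: int) -> bool:
    # x*(x+1) is a product of consecutive integers and hence always even,
    # so this predicate is always True; proving that requires arithmetic
    # reasoning that a naive program-comprehension tool lacks.
    return (x * (x + 1)) % 2 == 0

def credit(balance: int, amount: int) -> int:
    """Semantically identical to `return balance + amount`."""
    if opaque_true(amount):
        return balance + amount   # the branch that is always taken
    return balance - amount       # dead code inserted to mislead analysis
```

A conservative static slicer, the attack tool the thesis fixes on, cannot resolve the predicate and must keep both branches in a slice on the return value; that imprecision is what such transforms exploit.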
954

Accelerating classifier training using AdaBoost within cascades of boosted ensembles : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand

Susnjak, Teo January 2009 (has links)
This thesis seeks to address current problems encountered when training classifiers within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is the inordinate classifier training runtime. In some cases, it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train a classifier. The protracted training runtimes are an obstacle to the wider use of this framework (Brubaker et al., 2006). They also hinder the process of producing effective object detection applications and make the testing of new theories and algorithms, as well as the verification of others' research, a considerable challenge (McCane and Novins, 2003). An additional shortcoming of the CoBE framework is its limited ability to train classifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from the beginning using the combined new and old datasets. This process is inefficient: it lacks scalability and discards valuable information learned in previous training. To deal with these challenges, this thesis extends the research of Barczak et al. (2008) and presents alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude over common CoBE frameworks and introduce additional tractability to the process. They achieve this while preserving the generalization ability of their classifiers. This research also introduces a new framework for incrementally training CoBE classifiers and shows how this can be done without re-training classifiers from the beginning. However, the incremental framework for CoBEs has some limitations: although it is able to improve the positive detection rates of existing classifiers, it is currently unable to lower their false detection rates.
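For context on the machinery whose training the thesis accelerates, the Python sketch below shows the standard discrete AdaBoost weight update and the early-rejection evaluation of a cascade, in the style of Viola and Jones (2004). It is generic background, not the thesis's accelerated or incremental frameworks, and all names are illustrative:

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred):
    """One round of discrete AdaBoost.

    weights: current example weights, shape (n,), summing to 1
    y_true, y_pred: labels in {-1, +1}, shape (n,)
    Returns the weak learner's vote alpha and the re-weighted examples.
    """
    err = np.sum(weights * (y_pred != y_true))   # weighted error rate
    err = np.clip(err, 1e-10, 1 - 1e-10)         # guard the log below
    alpha = 0.5 * np.log((1 - err) / err)        # weak learner's vote
    weights = weights * np.exp(-alpha * y_true * y_pred)
    return alpha, weights / weights.sum()        # renormalise

def cascade_predict(stages, x):
    """Evaluate a cascade of boosted ensembles on one window.

    stages: list of (weak_learners, alphas, threshold) triples.
    A window is rejected as soon as any stage's boosted score falls
    below that stage's threshold, which is what makes detection fast;
    it is the training of these stages that the thesis speeds up.
    """
    for learners, alphas, threshold in stages:
        score = sum(a * h(x) for h, a in zip(learners, alphas))
        if score < threshold:
            return -1                            # early rejection
    return +1                                    # accepted by every stage
```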
955

Development of a framework for evaluating the quality of instructional design ontologies : a thesis presented in partial fulfilment of the requirements for the degree of Master of Management in Information Systems at Massey University, Wellington, New Zealand

Li, Xin January 2009 (has links)
An Instructional Design (ID) ontology can be used to formally represent knowledge about the teaching and learning process, which contributes to the automatic construction of personalised eLearning experiences. While ID ontologies have been continuously improved and developed over recent years, there are concerns regarding what makes a quality ID ontology. This study proposes a framework for evaluating the quality of an ID ontology by synthesising the crucial elements considered in the ID ontologies developed to date. The framework would allow a more precise evaluation of different ID ontologies by demonstrating the quality of each ontology with respect to the set of crucial elements that arise from the ontology. This study also gives an overview of the literature on ID ontologies, as well as the implications for future research in this area.
957

Interaction between existing social networks and information and communication technology (ICT) tools : evidence from rural Andes

Diaz Andrade, Antonio January 2007 (has links)
This exploratory and interpretive research examines the anticipated consequences of information and communication technology (ICT) on six remote rural communities, located in the northern Peruvian Andes, which were provided with computers connected to the Internet. Instead of looking for economic impacts of the now-available technological tools, this research investigates how local individuals use (or do not use) computers, and analyses the mechanisms by which computer-mediated information, obtained by those who use computers, is disseminated through their customary face-to-face interactions with their compatriots. A holistic multiple-case study design was the basis for the data collection process. Data were collected during four and a half months of fieldwork. Grounded theory informed both the method of data analysis and the technique for theory building. As a result of an inductive thinking process, two intertwined core themes emerged. The first theme, individuals' exploitation of ICT, relates to how some individuals overcome difficulties and try to make the most of the now-available ICT tools. The second theme, complementing existing social networks through ICT, reflects the interaction between the new ICT-mediated information and virtual networks and the existing local social networks. However, these two themes were not evenly distributed across the communities studied. The evidence revealed that dissimilarities in social cohesion among the communities and, to some extent, disparities in physical infrastructure are contributing factors that explain the unevenness. But social actors – termed 'activators of information' – are the key triggers of the process of disseminating fresh and valuable ICT-mediated information throughout their communities. These findings were compared with the relevant literature to produce theoretical generalisations. In conclusion, it is suggested that any ICT intervention in a developing country requires at least three elements to be effective: a tolerable physical infrastructure, a strong degree of social texture and an activator of information.
