41

Algebraic Constructions Applied to Theories

Tran, Minh Quang 10 1900
<p>MathScheme is a long-range research project being conducted at McMaster University with the aim of developing a mechanized mathematics system in which formal deduction and symbolic computation are integrated from the lowest level. The novel notion of a biform theory, a combination of an axiomatic theory and an algorithmic theory, is used to integrate formal deduction and symbolic computation into a uniform theory. A major focus of the project is currently on building a library of formalized mathematics called the MathScheme Library. The MathScheme Library is based on the little theories method, in which a portion of mathematical knowledge is represented as a network of biform theories interconnected via theory morphisms. In this thesis, we give a systematic explanation of the underlying techniques that have been used for the construction of the MathScheme Library. We then describe several algebraic constructions that can derive new useful machinery by leveraging the information extracted from a theory. For instance, we show a construction that can reify the term algebra of a (possibly multi-sorted) theory as an inductive data type.</p> / Master of Science (MSc)
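The term-algebra construction mentioned at the end of the abstract can be sketched concretely. The following is not MathScheme code; the one-sorted monoid signature (`e`, `mul`), the class names, and the integer interpretation are all illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical one-sorted signature for monoids: a constant `e` and a
# binary operation `mul`, plus variables.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class E:            # the constant e
    pass

@dataclass(frozen=True)
class Mul:          # the binary operation mul(left, right)
    left: "Term"
    right: "Term"

# The term algebra reified as an inductive data type: a term is a variable,
# the constant, or an application of the binary operation.
Term = Union[Var, E, Mul]

def eval_term(t: Term, env: dict) -> int:
    """Interpret terms in a concrete model, here the monoid (int, +, 0)."""
    if isinstance(t, Var):
        return env[t.name]
    if isinstance(t, E):
        return 0
    return eval_term(t.left, env) + eval_term(t.right, env)
```

A term such as `mul(x, mul(e, y))` becomes an ordinary value of the inductive type, which generic machinery (printers, evaluators, provers) can then traverse.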
42

Identification and Documentation of Environmental Assumptions for the PACEMAKER System

WANG, Vivien You 04 1900
<p>An interest has been established in the identification, documentation, and classification of the environmental assumptions that are missing from the original PACEMAKER System Specification. This thesis addresses that challenge and documents the procedure used to identify, classify, and document these missing environmental assumptions.</p> <p>In summary, this thesis answers the following questions:</p> <ol> <li>What can be done to improve the original PACEMAKER System Specification with respect to environmental assumptions?</li> <li>Why is it beneficial, in terms of enhancing software quality, to include the documentation of environmental assumptions – which are sometimes (wrongfully) perceived as being collateral and optional – as part of the software requirements document?</li> <li>How should such environmental assumptions be documented?</li> </ol> <p>More specifically, this thesis:</p> <ul> <li>Presents an abstract model for the PACEMAKER system.</li> <li>Identifies system boundaries and interfaces in the PACEMAKER model.</li> <li>Identifies environmental assumptions for the PACEMAKER system.</li> <li>Presents a classification system for the identified environmental assumptions based on the proposed model.</li> <li>Proposes a process for identifying environmental assumptions.</li> </ul> <p>Furthermore, the research findings presented in this thesis are not limited to the PACEMAKER system. The documentation convention proposed here is meant to be general and can be extended to address similar documentation needs posed by other kinds of software systems. Additionally, the process of environmental-assumption elicitation described in this thesis provides a useful reference for conducting similar assumption-identification projects. Lastly, the classification system presented for the environmental assumptions exhibits one facet of a grander conceptual system – one that incorporates multiple ‘views’ of the same set of assumptions, with each view distinguished by a unique set of classification criteria.</p> / Master of Applied Science (MASc)
43

INTERSECTION STATE VISUALIZATION FOR REALTIME SIMULATIONS

Roth, Justin L. 10 1900
<p>Driving simulators have existed since the beginning of the 20<sup>th</sup> century. From their roots, they have been a technology used primarily to train drivers, to test and prototype new technology, and to improve the safety of automobile users. As technology has progressed, so has the quality of driving simulation and, alongside it, the complexity of the experiments performed. The McMaster motion simulation system combines the latest software with state-of-the-art psychology techniques to analyze the driving experience in new and unique ways. To accommodate the wide range of plausible experiments, a robust software system was developed that allows for custom driving scenarios. The software system comprises several sub-components, including content generation, scenario management, visualization, and artificial intelligence. This thesis details the development of a traffic light system and its incorporation into the existing simulation system. A variety of challenges were encountered, including real-time constraints, adapting flight software to driving simulation, inter-system communication, and interoperability of multiple APIs. A secondary objective was documentation: this thesis records the methodology used to overcome these challenges in an attempt to facilitate future work in this field.</p> / Master of Applied Science (MASc)
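A traffic light controller of the kind described above can be sketched as a timed state machine driven by simulation time. The states, phase durations, and class structure below are illustrative assumptions, not the thesis's actual implementation:

```python
# Illustrative phase durations in seconds (assumptions, not measured values).
DURATIONS = {"green": 20.0, "yellow": 3.0, "red": 23.0}
NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

class TrafficLight:
    """A single signal head cycling green -> yellow -> red on a fixed timer."""

    def __init__(self, state="green"):
        self.state = state
        self.elapsed = 0.0   # time spent in the current phase

    def tick(self, dt):
        """Advance the light by dt seconds of simulation time.

        The loop handles dt larger than one phase, which matters when a
        real-time simulation drops frames and must catch up.
        """
        self.elapsed += dt
        while self.elapsed >= DURATIONS[self.state]:
            self.elapsed -= DURATIONS[self.state]
            self.state = NEXT[self.state]
        return self.state
```

Such a controller is typically stepped from the simulator's main loop with the frame's delta time, which is one way to meet the real-time constraints the abstract mentions.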
44

Probabilistic Graphical Models for Prognosis and Diagnosis of Breast Cancer

KHADEMI, MAHMOUD 04 1900
<p>One in nine women is expected to be diagnosed with breast cancer during her life. In 2013, an estimated 23,800 Canadian women will be diagnosed with breast cancer and 5,000 will die of it. Making decisions about the treatment for a patient is difficult, since it depends on various clinical features, genomic factors, and the pathological and cellular classification of the tumor.</p> <p>In this research, we propose a probabilistic graphical model for prognosis and diagnosis of breast cancer that can help medical doctors make better decisions about the best treatment for a patient. Probabilistic graphical models are suitable for making decisions under uncertainty from big data with missing attributes and noisy evidence.</p> <p>Using the proposed model, we may enter the results of different tests (e.g. the estrogen and progesterone receptor tests and the HER2/neu test), microarray data, and clinical traits (e.g. the woman's age, general health, menopausal status, stage of cancer, and size of the tumor) into the model and answer the following questions. How likely is it that the cancer will spread in the body (distant metastasis)? What is the chance of survival? How likely is it that the cancer will come back (local or regional recurrence)? How promising is a treatment? For example, how likely are metastasis and recurrence for a new patient if a certain treatment, e.g. surgical removal, radiation therapy, hormone therapy, or chemotherapy, is applied? We can also classify various types of breast cancer using this model.</p> <p>Previous work mostly relied on clinical data. In our opinion, since cancer is a genetic disease, the integration of genomic (microarray) and clinical data can improve the accuracy of the model for prognosis and diagnosis. However, increasing the number of variables may lead to poor results due to the curse of dimensionality and the small-sample-size problem. The microarray data is high dimensional, consisting of around 25,000 variables per patient. Moreover, structure learning and parameter learning for probabilistic graphical models require a significant amount of computation, and the number of possible structures is super-exponential in the number of variables: for instance, there are more than 10^18 possible structures with just 10 variables.</p> <p>We address these problems by applying manifold learning and dimensionality reduction techniques to improve the accuracy of the model. Extensive experiments using real-world data sets such as METRIC and NKI show the accuracy of the proposed method for classification and for predicting certain events, like recurrence and metastasis.</p> / Master of Science (MSc)
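The claim that the number of possible structures is super-exponential can be checked directly: the number of labeled DAGs on n nodes satisfies Robinson's inclusion-exclusion recurrence. This standalone sketch is not part of the thesis's method, only a verification of the count:

```python
from math import comb

def count_dags(n: int) -> int:
    """Number of labeled directed acyclic graphs on n nodes,
    via Robinson's recurrence:
    a(m) = sum_{k=1..m} (-1)^(k+1) * C(m,k) * 2^(k(m-k)) * a(m-k)."""
    a = [1]  # a(0) = 1: the empty graph
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

print(count_dags(10) > 10**18)  # True: well over 10^18 structures for 10 variables
```

Scoring every structure is therefore hopeless even for small models, which motivates the dimensionality reduction the abstract describes.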
45

Privacy Preserving Distributed Data Mining

Lin, Zhenmin 01 January 2012
Privacy preserving distributed data mining aims to design secure protocols which allow multiple parties to conduct collaborative data mining while protecting data privacy. My research focuses on the design and implementation of privacy preserving two-party protocols based on homomorphic encryption. I present new results in this area, including new secure protocols for basic operations and two fundamental privacy preserving data mining protocols. I propose a number of secure protocols for basic operations in the additive secret-sharing scheme based on homomorphic encryption. I derive a basic relationship between a secret number and its shares, with which I develop efficient protocols for secure comparison and for secure division with a public divisor. I also design a secure inverse square root protocol based on Newton's iterative method and hence propose a solution to the secure square root problem. In addition, I propose a secure exponential protocol based on Taylor series expansions. All these protocols are implemented using secure multiplication and can be used to develop privacy preserving distributed data mining protocols. In particular, I develop efficient privacy preserving protocols for two fundamental data mining tasks: multiple linear regression and EM clustering. Both protocols work for arbitrarily partitioned datasets. The two-party privacy preserving linear regression protocol is provably secure in the semi-honest model, and the EM clustering protocol discloses only the number of iterations. I provide a proof-of-concept implementation of these protocols in C++, based on the Paillier cryptosystem.
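The additive homomorphism of the Paillier cryptosystem, which the protocols above rely on, can be demonstrated with a toy implementation. The tiny primes below are for illustration only and offer no security; a real deployment uses primes of over a thousand bits:

```python
from math import gcd
import random

# Toy parameters (assumption: real deployments use large random primes).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2) mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for a random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n
```

Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, `decrypt(encrypt(a) * encrypt(b) % n2) == a + b`, which is what lets one party compute on values it cannot read.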
46

AUTOMATIC PERFORMANCE LEVEL ASSESSMENT IN MINIMALLY INVASIVE SURGERY USING COORDINATED SENSORS AND COMPOSITE METRICS

Taha Abu Snaineh, Sami 01 January 2013
Skills assessment in Minimally Invasive Surgery (MIS) has long been a challenge for training centers. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias. The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer from limitations and influence the surgeon’s motion. Thus, evaluating the skills of MIS surgeons and trainees objectively has become an increasing concern. In this work, we integrate and coordinate multiple camera sensors to assess the performance of MIS trainees and surgeons. This study aims at developing an objective, data-driven assessment that takes advantage of multiple coordinated sensors. The technical framework for the study is a synchronized network of sensors that captures large sets of measures from the training environment. The measures are then processed to produce a reliable set of individual and composite metrics, coordinated in time, that suggest patterns of skill development. The sensors are non-invasive, real-time, and coordinated over many cues, such as eye movement, external shots of the body and instruments, and internal shots of the operative field. The platform is validated by a case study of 17 subjects and 70 sessions. The results show that the platform output is highly accurate and reliable in detecting patterns of skill development and predicting the skill level of trainees.
47

Improving a Particle Swarm Optimization-based Clustering Method

Shahadat, Sharif 19 May 2017
This thesis discusses clustering-related work with an emphasis on Particle Swarm Optimization (PSO) principles. Specifically, we review in detail the PSO clustering algorithm proposed by Van Der Merwe & Engelbrecht, the particle swarm clustering (PSC) algorithm proposed by Cohen & de Castro, Szabo’s modified PSC (mPSC), and Georgieva & Engelbrecht’s Cooperative-Multi-Population PSO (CMPSO). In this thesis, an improvement over Van Der Merwe & Engelbrecht’s PSO clustering is proposed and tested on standard datasets. The improvements observed in those experiments vary from slight to moderate, both in terms of minimizing the cost function and in terms of run time.
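For reference, Van Der Merwe & Engelbrecht's algorithm encodes a full set of K centroids in each particle and minimizes the quantization error. The gbest-PSO sketch below follows that general scheme, but the parameter values and code structure are assumptions, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantization_error(centroids, X):
    """Mean, over non-empty clusters, of the average distance from each
    point to its nearest centroid (the fitness used in PSO clustering)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    errs = [d[assign == k, k].mean()
            for k in range(len(centroids)) if (assign == k).any()]
    return float(np.mean(errs))

def pso_cluster(X, k=2, n_particles=10, iters=50, w=0.72, c1=1.49, c2=1.49):
    """Gbest PSO over particles that each encode k centroids."""
    dim = X.shape[1]
    pos = rng.uniform(X.min(0), X.max(0), size=(n_particles, k, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_error(p, X) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([quantization_error(p, X) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

The inertia weight `w` and acceleration coefficients `c1`, `c2` are the values commonly used in the PSO literature; any improvement over this baseline would modify the update rule or the fitness.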
48

Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

Stuart, Morgan S 01 January 2016
Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one that has the potential to interfere with other applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
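The offloading idea can be sketched abstractly: fetch a sequential batch of blocks from primary storage once, then serve the migration's subsequent reads from a side buffer so the primary volume sees far fewer requests. The names and structure below are hypothetical, not the thesis's code:

```python
class MigrationReadBuffer:
    """Illustrative sketch: batch-prefetch blocks so that a migration's
    sequential reads mostly bypass the interference-sensitive primary store."""

    def __init__(self, primary_read, prefetch_window=64):
        self._read = primary_read        # callable: block_id -> bytes
        self._window = prefetch_window   # blocks fetched per batch
        self._buf = {}

    def read(self, block_id):
        if block_id not in self._buf:
            # One sequential batch against primary storage...
            for b in range(block_id, block_id + self._window):
                self._buf[b] = self._read(b)
        # ...then serve (and evict) from the buffer.
        return self._buf.pop(block_id)
```

For a migration that scans blocks in order, primary storage is touched once per window rather than once per block, which is the interference reduction the abstract describes in spirit.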
49

Data Exploration Interface for Digital Forensics

Dontula, Varun 17 December 2011
The fast capacity growth of cheap storage devices presents an ever-growing problem of scale for digital forensic investigations. One aspect of the scale problem in the forensic process is the need for new approaches to visually presenting and analyzing large amounts of data. The current generation of tools universally employs three basic GUI components—trees, tables, and viewers—to present all relevant information. This approach is not scalable, as increasing the size of the input data leads to a proportional increase in the amount of data presented to the analyst. We present an alternative approach, which leverages data visualization techniques to provide a more intuitive interface for exploring the forensic target. We use tree visualization techniques to give the analyst both a high-level view of the file system and an efficient means to drill down into the details. Further, we provide means to search for keywords and to filter the data by time period.
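A drill-down tree view of a file system needs a weight (typically cumulative size) at every node. A minimal sketch of that bottom-up aggregation using `os.walk`; this is illustrative, not the interface's actual code:

```python
import os

def directory_sizes(root):
    """Map each directory under root to its cumulative size in bytes,
    the per-node weight a treemap-style tree visualization needs."""
    sizes = {}
    # topdown=False visits children before parents, so each directory's
    # subdirectory totals are already available when it is reached.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        total = sum(os.path.getsize(os.path.join(dirpath, f))
                    for f in filenames
                    if os.path.isfile(os.path.join(dirpath, f)))
        total += sum(sizes.get(os.path.join(dirpath, d), 0) for d in dirnames)
        sizes[dirpath] = total
    return sizes
```

The resulting map gives the high-level view (root weight) and supports drilling down (each subtree's weight) without re-reading the disk.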
50

An empirical study of semantic similarity in WordNet and Word2Vec

Handler, Abram 18 December 2014
This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others -- returning more hypernyms and synonyms than hyponyms or holonyms. It also estimates the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the still poorly understood Word2Vec and helps to benchmark new semantic tools built from word vectors.
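The core measurement in such a comparison is cosine similarity between word vectors. A minimal sketch with made-up three-dimensional vectors (real Word2Vec embeddings have hundreds of dimensions, and the words and values here are illustrative assumptions):

```python
import numpy as np

# Toy stand-ins for trained Word2Vec vectors.
vecs = {
    "dog":    np.array([0.9, 0.1, 0.0]),
    "canine": np.array([0.8, 0.2, 0.1]),   # intended as a WordNet hypernym of "dog"
    "car":    np.array([0.0, 0.1, 0.9]),   # intended as unrelated
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_distance(a, b):
    return 1.0 - cosine_similarity(a, b)
```

Bucketing word pairs by `cosine_distance` and checking each bucket against WordNet relations is the kind of analysis the abstract describes.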
