1051

Ultra-high precision grinding of BK7 glass

Onwuka, Goodness Raluchukwu January 2016 (has links)
With the growing application of ultra-precision manufactured parts, and with little published research on the ultra-high precision grinding of optical glasses despite their high industrial demand, it becomes imperative to gain a full understanding of how these precision optics are produced with this technology. A single-point inclined-axes grinding configuration and a Box-Behnken experimental design were developed and applied to the ultra-high precision grinding of BK7 glass. A high-sampling-rate acoustic emission monitoring system was implemented to monitor the process, acoustic emission having proven to be an effective sensing technique for grinding processes. Response surface methodology was adopted to analyze the effect of the interaction between the machining parameters (feed, speed and depth of cut) and the generated surface roughness. Furthermore, a back-propagation artificial neural network was also implemented through a careful feature extraction and selection process. The proposed models are aimed at creating a database guide to the ultra-high precision grinding of precision optics.
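As an illustration of the response-surface step described above, the sketch below fits a second-order model of surface roughness against feed, speed and depth of cut over a Box-Behnken layout. The factor levels and roughness values are invented for demonstration; they are not the thesis data.

```python
# Illustrative sketch (not the thesis dataset): fitting a second-order
# response surface Ra = f(feed, speed, depth of cut), as used in RSM.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical coded factor levels from a three-factor Box-Behnken design
# (columns: feed, wheel speed, depth of cut; 12 edge points + 3 centre points)
X = np.array([
    [-1, -1,  0], [ 1, -1,  0], [-1,  1,  0], [ 1,  1,  0],
    [-1,  0, -1], [ 1,  0, -1], [-1,  0,  1], [ 1,  0,  1],
    [ 0, -1, -1], [ 0,  1, -1], [ 0, -1,  1], [ 0,  1,  1],
    [ 0,  0,  0], [ 0,  0,  0], [ 0,  0,  0],
])
# Hypothetical measured surface roughness Ra (micrometres)
y = np.array([0.42, 0.55, 0.38, 0.61, 0.35, 0.52, 0.40, 0.58,
              0.37, 0.44, 0.41, 0.50, 0.33, 0.34, 0.32])

# Full quadratic model: main effects, two-factor interactions, squared terms
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 on design points:", model.score(quad.fit_transform(X), y))
```

In a study of this kind the fitted coefficients indicate which parameters and interactions dominate the roughness response; an ANN trained on the same runs plays a complementary, non-parametric role.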
1052

Computational approaches to predicting drug induced toxicity

Marchese Robinson, Richard Liam January 2013 (has links)
Novel approaches and models for predicting drug induced toxicity in silico are presented. Typically, these were based on Quantitative Structure-Activity Relationships (QSAR). The following endpoints were modelled: mutagenicity, carcinogenicity, inhibition of the hERG ion channel and the associated arrhythmia - Torsades de Pointes. A consensus model was developed based on Derek for Windows™ and Toxtree and used to filter compounds as part of a collaborative effort resulting in the identification of potential starting points for anti-tuberculosis drugs. Based on the careful selection of data from the literature, binary classifiers were generated for the identification of potent hERG inhibitors. These were found to perform competitively with, or better than, those computational approaches previously presented in the literature. Some of these models were generated using Winnow, in conjunction with a novel proposal for encoding molecular structures as required by this algorithm. The Winnow models were found to perform comparably to models generated using the Support Vector Machine and Random Forest algorithms. These studies also emphasised the variability in results which may be obtained when applying the same approaches to different train/test combinations. Novel approaches to combining chemical information with Ultrafast Shape Recognition (USR) descriptors are introduced: Atom Type USR (ATUSR) and a combination between a proposed Atom Type Fingerprint (ATFP) and USR (USR-ATFP). These were applied to the task of predicting protein-ligand interactions - including the prediction of hERG inhibition. Whilst, for some of the datasets considered, either ATUSR or USR-ATFP was found to perform marginally better than all other descriptor sets to which they were compared, most differences were statistically insignificant. Further work is warranted to determine the advantages which ATUSR and USR-ATFP might offer with respect to established descriptor sets. The first attempts to construct QSAR models for Torsades de Pointes using predicted cardiac ion channel inhibitory potencies as descriptors are presented, along with the first evaluation of experimentally determined inhibitory potencies as an alternative, or complement to, standard descriptors. No (clear) evidence was found that 'predicted' ('experimental') 'IC-descriptors' improve performance. However, their value may lie in the greater interpretability they could confer upon the models. Building upon the work presented in the preceding chapters, this thesis ends with specific proposals for future research directions.
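The hERG classification work described above can be sketched roughly as a binary classifier (here a Random Forest, one of the algorithms the abstract names) trained on molecular fingerprint vectors. The fingerprints and labels below are randomly generated placeholders, not the curated literature data used in the thesis.

```python
# Illustrative sketch only: a binary hERG-inhibition classifier of the kind
# described above, trained on hypothetical binary fingerprint vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 1024))   # stand-in molecular fingerprints
y = rng.integers(0, 2, size=200)           # 1 = potent hERG inhibitor (placeholder)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# Repeating this over different train/test splits illustrates the
# variability in results that the thesis emphasises.
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc"))
```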
1053

Bayesian methods for gravitational waves and neural networks

Graff, Philip B. January 2012 (has links)
Einstein’s general theory of relativity has withstood 100 years of testing and will soon be facing one of its toughest challenges. In a few years we expect to be entering the era of the first direct observations of gravitational waves. These are tiny perturbations of space-time that are generated by accelerating matter and affect the measured distances between two points. Observations of these using the laser interferometers, which are the most sensitive length-measuring devices in the world, will allow us to test models of interactions in the strong field regime of gravity and eventually general relativity itself. I apply the tools of Bayesian inference for the examination of gravitational wave data from the LIGO and Virgo detectors. This is used for signal detection and estimation of the source parameters. I quantify the ability of a network of ground-based detectors to localise a source position on the sky for electromagnetic follow-up. Bayesian criteria are also applied to separating real signals from glitches in the detectors. These same tools and lessons can also be applied to the type of data expected from planned space-based detectors. Using simulations from the Mock LISA Data Challenges, I analyse our ability to detect and characterise both burst and continuous signals. The two seemingly different signal types will be overlapping and confused with one another for a space-based detector; my analysis shows that we will be able to separate and identify many signals present. Data sets and astrophysical models are continuously increasing in complexity. This will create an additional computational burden for performing Bayesian inference and other types of data analysis. I investigate the application of the MOPED algorithm for faster parameter estimation and data compression. I find that its shortcomings make it a less favourable candidate for further implementation. The framework of an artificial neural network is a simple model for the structure of a brain which can “learn” functional relationships between sets of inputs and outputs. I describe an algorithm developed for the training of feed-forward networks on pre-calculated data sets. The trained networks can then be used for fast prediction of outputs for new sets of inputs. After demonstrating capabilities on toy data sets, I apply the network to classifying handwritten digits from the MNIST database and to measuring ellipticities of galaxies in the Mapping Dark Matter challenge. The power of neural networks for learning and rapid prediction is also useful in Bayesian inference where the likelihood function is computationally expensive. The new BAMBI algorithm is detailed, in which our network training algorithm is combined with the nested sampling algorithm MULTINEST to provide rapid Bayesian inference. Using samples from the normal inference, a network is trained on the likelihood function and eventually used in its place. This is able to provide a significant increase in the speed of Bayesian inference while returning identical results. The trained networks can then be used for extremely rapid follow-up analyses with different priors, obtaining orders-of-magnitude speed increases. Learning how to apply the tools of Bayesian inference for the optimal recovery of gravitational wave signals will provide the most scientific information when the first detections are made. Complementary to this, the improvement of our analysis algorithms to provide the best results in less time will make analysis of larger and more complicated models and data sets practical.
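A toy sketch of the BAMBI idea described above: a feed-forward network is trained on (parameter, log-likelihood) pairs gathered during sampling and then used as a fast surrogate for the expensive likelihood. The Gaussian toy likelihood and the network settings are stand-ins, not the thesis's gravitational-wave models or the actual MULTINEST coupling.

```python
# Toy sketch of a neural-network likelihood surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

def log_likelihood(theta):
    # Stand-in target: a simple Gaussian log-likelihood centred at 1
    return -0.5 * np.sum((theta - 1.0) ** 2, axis=-1)

rng = np.random.default_rng(1)
thetas = rng.uniform(-5, 5, size=(2000, 3))   # samples a sampler would produce
logL = log_likelihood(thetas)

# Feed-forward network trained on the pre-calculated (theta, logL) pairs
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000,
                   random_state=1).fit(thetas, logL)

# Once trained, the network replaces the expensive likelihood call
test = rng.uniform(-5, 5, size=(5, 3))
print(net.predict(test))
print(log_likelihood(test))
```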
1054

Harmonic analysis of the brushless doubly-fed machine including single-phase operation

Logan, Thomas George January 2012 (has links)
No description available.
1055

Automatic Extraction Of Machining Primitives for Process Planning

Nagaraj, H S 12 1900 (has links) (PDF)
No description available.
1056

A Location-Aware Social Media Monitoring System

Ji, Liu January 2014 (has links)
Social media users generate a large volume of data, which can contain meaningful and useful information. One such example is information about locations, which may be useful in applications such as marketing and security monitoring. There are two types of locations: location entities mentioned in the text of the messages and the physical locations of users. Extracting the first type of locations is not trivial because the location entities in the text are often ambiguous. In this thesis, we implement a sequential classification model with conditional random fields followed by a rule-based disambiguation model, apply them to Twitter messages (tweets), and show that they handle the ambiguous location entities in our dataset reasonably well. Only very few users disclose their physical locations; in order to automatically detect their locations, many approaches have been proposed using various types of information, including the tweets posted by the users. It is not easy to infer the original locations from text data, because text tends to be noisy, particularly in social media. Recently, deep learning techniques have been shown to reduce the error rate of many machine learning tasks, due to their ability to learn meaningful representations of input data. We investigate the potential of building a deep-learning architecture to infer the location of Twitter users based merely on their tweets. We find that stacked denoising auto-encoders are well suited for this task, with results comparable to state-of-the-art models. Finally, we combine the two models above with a third-party sentiment analysis tool to obtain an intelligent social media monitoring system. We present a demo of the system and show that it is able to predict and visualize the locations and sentiments contained in a stream of tweets related to mobile phone brands - a typical real-world e-business application.
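One way to picture the stacked denoising auto-encoder approach mentioned above is a single denoising layer trained to reconstruct clean feature vectors from corrupted copies; stacking such layers and adding a classifier on top gives the investigated architecture. The sparse "tweet" vectors below are random placeholders for real tweet features, and the single-layer network is only a sketch of one stage.

```python
# Minimal sketch of one denoising auto-encoder layer using an MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = (rng.random((500, 300)) < 0.05).astype(float)    # sparse stand-in "tweet" vectors
X_noisy = X * (rng.random(X.shape) > 0.3)            # masking (dropout) noise

dae = MLPRegressor(hidden_layer_sizes=(100,), activation="relu",
                   max_iter=500, random_state=0)
dae.fit(X_noisy, X)                                   # learn to reconstruct clean input

# The hidden activations would feed the next stacked layer or a location classifier
hidden = np.maximum(0, X @ dae.coefs_[0] + dae.intercepts_[0])
print(hidden.shape)
```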
1057

An Automatically Generated Lexical Knowledge Base with Soft Definitions

Scaiano, Martin January 2016 (has links)
There is a need for methods that understand and represent the meaning of text for use in Artificial Intelligence (AI). This thesis demonstrates a method to automatically extract a lexical knowledge base from dictionaries for the purpose of improving machine reading. Machine reading, a term coined by Etzioni et al. (2006), refers to a process by which a computer processes natural language text into a representation that supports inference or inter-connection with existing knowledge (Clark and Harrison, 2010). There are a number of linguistic ideas associated with representing and applying the meaning of words which are unaddressed in current knowledge representations. This work draws heavily from the linguistic theory of frame semantics (Fillmore, 1976). A word is not a strictly defined construct; instead, it evokes our knowledge and experiences, and this information is adapted to a given context by human intelligence. This can often be seen in dictionaries, as a word may have many senses, but some are only subtle variations of the same theme or core idea. A further unaddressed issue is that sentences may have multiple reasonable and valid interpretations (or readings). This thesis postulates that there must be algorithms that work with symbolic representations which can model how words evoke knowledge and then contextualize that knowledge. I attempt to answer this previously unaddressed question, “How can a symbolic representation support multiple interpretations, evoked knowledge, soft word senses, and adaptation of meaning?” Furthermore, I implement and evaluate the proposed solution. This thesis proposes the use of a knowledge representation called Multiple Interpretation Graphs (MIGs), and a lexical knowledge structure called auto-frames to support contextualization. A MIG is used to store a single auto-frame, the representation of a sentence, or an entire text. MIGs and auto-frames are produced from dependency parse trees using an algorithm I call connection search. MIG supports representing multiple different interpretations of a text, while auto-frames combine multiple word senses and information related to the word into one representation. Connection search contextualizes MIGs and auto-frames, and reduces the number of interpretations that are considered valid. In this thesis, as proof of concept and evaluation, I extracted auto-frames from the Longman Dictionary of Contemporary English (LDOCE). I take the point of view that a word’s meaning depends on what it is connected to in its definition. I do not use a predetermined set of semantic roles; instead, auto-frames focus on the connections or mappings between a word’s context and its definitions. Once I have extracted the auto-frames, I demonstrate how they may be contextualized. I then apply the lexical knowledge base to reading comprehension. The results show that this approach can produce good precision on this task, although more research and refinement is needed. The knowledge base and source code are made available to the community at http://martin.scaiano.com/Auto-frames.html or by contacting martin@scaiano.com.
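The connection-search idea can be caricatured as graph search between a word's senses and the words of the surrounding context, with shorter connections indicating a better-fitting sense. The miniature dictionary and scoring rule below are invented for illustration and are far simpler than the auto-frame construction actually described in the thesis.

```python
# Toy caricature of scoring word senses by graph connections to the context.
import networkx as nx

# Invented mini-dictionary: (headword, sense) -> content words of its definition
definitions = {
    ("bank", 1): ["land", "beside", "river"],
    ("bank", 2): ["organisation", "keeps", "money"],
    ("river", 1): ["water", "flowing", "land"],
    ("money", 1): ["coins", "notes", "used", "buy"],
}

g = nx.Graph()
for (word, sense), body in definitions.items():
    for w in body:
        g.add_edge((word, sense), w)     # sense node -> definition word
    g.add_edge(word, (word, sense))      # surface word -> its senses

def sense_scores(word, context_word):
    """Shorter paths between a sense and a context word = stronger connection."""
    scores = {}
    for node in g:
        if isinstance(node, tuple) and node[0] == word:
            try:
                scores[node] = nx.shortest_path_length(g, node, context_word)
            except nx.NetworkXNoPath:
                scores[node] = float("inf")
    return scores

print(sense_scores("bank", "water"))   # the "river bank" sense connects more closely
```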
1058

Learning the Sub-Conceptual Layer: A Framework for One-Class Classification

Sharma, Shiven January 2016 (has links)
In the realm of machine learning research and application, binary classification algorithms, i.e. algorithms that attempt to induce discriminant functions between two categories of data, reign supreme. Their fundamental property is the reliance on the availability of data from all known categories in order to induce functions that can offer acceptable levels of accuracy. Unfortunately, data from so-called "real-world" domains sometimes do not satisfy this property. In order to tackle this, researchers focus on methods such as sampling and cost-sensitive classification to make the data more conducive for binary classifiers. However, as this thesis shall argue, there are scenarios in which even such explicit methods to rectify distributions fail. In such cases, one-class classification algorithms become a practical alternative. Unfortunately, if the domain is inherently complex, the advantage that they offer over binary classifiers becomes diminished. The work in this thesis addresses this issue, and builds a framework that allows one-class algorithms to build efficient classifiers. In particular, this thesis introduces the notion of learning along the lines of sub-concepts in the domain; the complexity in domains arises due to the presence of sub-concepts, and by learning over them explicitly rather than over the entire domain as a whole, we can produce powerful one-class classification systems. The level of knowledge regarding these sub-concepts will naturally vary by domain, and thus we develop three distinct frameworks that take the amount of domain knowledge available into account. We demonstrate these frameworks over three real-world domains. The first domain we consider is that of biometric authentication via a user's swipe on a smartphone. We identify sub-concepts based on a user's motion, and given that modern smartphones employ sensors that can identify motion, sub-concepts can be identified explicitly during learning as well as application, and novel instances can be processed by the appropriate one-class classifier. The second domain is that of invasive isotope detection via gamma-ray spectra. The sub-concepts are based on environmental factors; however, the hardware employed cannot detect such concepts, and the precise source that creates these sub-concepts is difficult to ascertain. To remedy this, we introduce a novel framework in which we employ a sub-concept detector by means of a multi-class classifier, which pre-processes novel instances in order to send them to the correct one-class classifier. The third domain is that of compliance verification of the Comprehensive Test Ban Treaty (CTBT) through xenon isotope measurements. This domain presents the worst case, where sub-concepts are not known. To this end, we employ a generic version of our framework in which we simply cluster the domain and build classifiers over each cluster. In all cases, we demonstrate that learning in the context of domain concepts greatly improves the performance of one-class classifiers.
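A minimal sketch of the generic framework described above (the case where sub-concepts are unknown): cluster the target-class data, fit one one-class classifier per cluster, and accept a new instance if any of them accepts it. The Gaussian blobs, cluster count and classifier settings are assumptions for illustration, not the thesis's data or tuning.

```python
# Sketch: per-cluster one-class classification of a single target class.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Target class made of two sub-concepts (e.g. two environmental regimes)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(6, 1, (200, 5))])

k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
classifiers = [OneClassSVM(gamma="scale", nu=0.1).fit(X[labels == c])
               for c in range(k)]

def accept(x):
    """Accept if any per-cluster one-class classifier predicts +1 (inlier)."""
    return any(clf.predict(x.reshape(1, -1))[0] == 1 for clf in classifiers)

# Points near either sub-concept should be accepted; points between them rejected
print(accept(np.zeros(5)), accept(np.full(5, 6.0)), accept(np.full(5, 3.0)))
```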
1059

Exploring Mediatoil Imagery: A Content-Based Approach

Saroop, Sahil January 2016 (has links)
The future of Alberta’s bitumen sands, also known as “oil sands” or “tar sands,” and their place in Canada’s energy future has become a topic of much public debate. Within this debate, the print, television, and social media campaigns of those who both support and oppose developing the oil sands are particularly visible. As such, campaigns around the oil sands may be seen as influencing audience perceptions of the benefits and drawbacks of oil sands production. There is consequently a need to study the media materials of various tar sands stakeholders and explore how they differ. In this setting, it is essential to gather documents and identify content within images, which requires the use of an image retrieval technique such as a content-based image retrieval (CBIR) system. In a CBIR system, images are represented by low-level features (i.e. specific structures in the image such as points, edges, or objects), which are used to distinguish pictures from one another. The oil sands domain has to date not been mapped using CBIR systems. The research thus focuses on creating an image retrieval system, namely Mediatoil-IR, for exploring documents related to the oil sands. Our aim is to evaluate various low-level representations of the images within this context. To this end, our experimental framework employs LAB color histograms (LAB) and speeded-up robust features (SURF) to typify the imagery. We further use machine learning techniques to improve the quality of retrieval (in terms of both accuracy and speed). To achieve this aim, the extracted features from each image are encoded in the form of vectors and used as a training set for learning classification models to organize pictures into different categories. Different algorithms were considered, such as Linear SVM, Quadratic SVM, Weighted KNN, Decision Trees, Bagging, and Boosting on trees. It was shown that the Quadratic SVM algorithm trained on SURF features is a good approach for building a CBIR system, and it is used in building Mediatoil-IR. Finally, with the help of the created CBIR system, we were able to retrieve similar documents and explore the different types of imagery used by different stakeholders. Our experimental evaluation shows that our Mediatoil-IR system is able to accurately explore the imagery used by different stakeholders.
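A rough sketch of one representation-plus-classifier pairing evaluated above: LAB colour histograms fed to a quadratic-kernel SVM. File names, labels and parameters below are placeholders; the SURF pipeline (available in opencv-contrib) is omitted for brevity.

```python
# Sketch: LAB colour-histogram features + quadratic-kernel SVM classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

def lab_histogram(path, bins=8):
    """8x8x8 LAB colour histogram, L1-normalised, as a 512-d feature vector."""
    img = cv2.imread(path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    hist = cv2.calcHist([lab], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, norm_type=cv2.NORM_L1).flatten()

# Hypothetical campaign images and stakeholder labels
paths = ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg", "img_0004.jpg"]
labels = ["supporter", "opponent", "supporter", "opponent"]

X = np.array([lab_histogram(p) for p in paths])
clf = SVC(kernel="poly", degree=2).fit(X, labels)   # quadratic-kernel SVM
print(clf.predict(X[:1]))
```

In a retrieval setting, the same feature vectors can also be compared directly (e.g. by histogram distance) to rank images by similarity to a query image.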
1060

Virtual Machine Management for Dynamic Vehicular Clouds

Refaat, Tarek January 2017 (has links)
Vehicular clouds involve a dynamic environment where virtual machines are hosted on moving vehicles, leading to frequent changes in the data center network topology. Such topology changes include fluctuations in connectivity, signal strength and quality, and they require frequent virtual machine migrations in order to meet the service level agreements with cloud users. Few studies address vehicles as potential virtual machine hosts, even though there is a significant opportunity in capitalizing on their underutilized resources. Due to the rapidly changing environment of a vehicular cloud, hosts frequently change or leave coverage. As such, virtual machine management and migration schemes are necessary to ensure cloud subscribers have a satisfactory level of access to the resources. This thesis addresses the need for virtual machine management for the vehicular cloud. First, a mobility model is proposed and utilized to test a set of novel Vehicular Virtual Machine Migration (VVMM) schemes: VVMM-U (Uniform), VVMM-LW (Least Workload), VVMM-MA (Mobility Aware) and MDWLAM (Mobility and Destination Workload Aware Migration). Their performance is evaluated with respect to a set of metrics through simulations with varying levels of vehicular traffic congestion, virtual machine sizes and load restriction levels. The most advanced scheme (MDWLAM) takes into account the workload and mobility of the original host as well as those of the potential destinations. By doing so, a valid destination will both have time to receive the workload and be able to migrate the new load onward when necessary. The behavior of the various algorithms is compared, and MDWLAM has been shown to demonstrate the best performance, exhibiting migration drop rates that are negligibly small. Finally, an integer linear program formulation based on a modified single-source shortest path problem is presented, tested and successfully shown to serve as a benchmark for comparison with the proposed heuristics.
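A highly simplified sketch of mobility- and workload-aware destination selection in the spirit of MDWLAM: prefer candidate hosts that will remain in coverage long enough to receive the virtual machine and that have spare capacity to take on its workload. The fields and thresholds are illustrative assumptions, not the thesis's model or parameters.

```python
# Toy destination-selection rule combining mobility and workload awareness.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    time_in_coverage: float   # seconds the vehicle is expected to remain covered
    workload: float           # current utilisation, 0..1

def select_destination(candidates, migration_time, max_load=0.8):
    """Pick the least-loaded candidate that can complete the migration in time."""
    feasible = [v for v in candidates
                if v.time_in_coverage > migration_time and v.workload < max_load]
    return min(feasible, key=lambda v: v.workload) if feasible else None

candidates = [Vehicle("v1", 30.0, 0.6), Vehicle("v2", 120.0, 0.4),
              Vehicle("v3", 300.0, 0.9)]
print(select_destination(candidates, migration_time=45.0))  # -> v2
```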
