891

Non-deterministic communication complexity of regular languages

Ada, Anil. January 2007 (has links)
The notion of communication complexity was introduced by Yao in his seminal paper [Yao79]. In [BFS86], Babai, Frankl and Simon developed a rich structure of communication complexity classes to understand the relationships between various models of communication complexity. This made it apparent that communication complexity was a self-contained mini-world within complexity theory. In this thesis, we study the place of regular languages within this mini-world. In particular, we are interested in the non-deterministic communication complexity of regular languages. / We show that a regular language has either O(1) or Ω(log n) non-deterministic complexity. We obtain several linear lower bound results which cover a wide range of regular languages having linear non-deterministic complexity. These lower bound results also imply a result in semigroup theory: we obtain sufficient conditions for a language not to lie in the positive variety Pol(Com). / To obtain our results, we use algebraic techniques. In the study of regular languages, the algebraic point of view pioneered by Eilenberg ([Eil74]) has led to many interesting results. Viewing a semigroup as a computational device that recognizes languages has proven fruitful from both the semigroup theory and formal language perspectives. In this thesis, we provide further instances of such mutualism.
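For readers unfamiliar with the measure, here is the standard rectangle-cover formulation of non-deterministic communication complexity, stated in textbook notation (this is the conventional definition, not necessarily the thesis's exact notation):

```latex
% Rectangle-cover definition of non-deterministic communication complexity:
% C^1(f) is the least number of 1-monochromatic rectangles needed to cover
% f^{-1}(1); the non-deterministic complexity is its logarithm.
\[
  N^{1}(f) \;=\; \big\lceil \log_2 C^{1}(f) \big\rceil
\]
% The dichotomy above then reads: for a regular language $L$ (with $L_n$
% denoting its restriction to inputs of length $n$),
\[
  N^{1}(L_n) = O(1) \quad \text{or} \quad N^{1}(L_n) = \Omega(\log n).
\]
```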
892

Applying ant colony optimization to solve the single machine total tardiness problem

Bauer, Andreas, Bullnheimer, Bernd, Hartl, Richard F., Strauß, Christine January 1999 (has links) (PDF)
Ant Colony Optimization is a relatively new meta-heuristic that has proven its quality and versatility on various combinatorial optimization problems such as the traveling salesman problem, the vehicle routing problem and the job shop scheduling problem. The paper introduces an Ant Colony Optimization approach to the Single Machine Total Tardiness Problem (SMTTP): determining a job sequence that minimizes the overall tardiness of a given set of jobs to be processed on a single, continuously available machine. We experiment with various forms of heuristic information as well as with variants of local search. Experiments on 250 benchmark problems with 50 and 100 jobs illustrate that Ant Colony Optimization is an adequate method to tackle the SMTTP. (author's abstract) / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
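To make the approach concrete, here is a minimal Python sketch of ant colony optimization applied to the SMTTP. The parameter names and the due-date-based heuristic are illustrative assumptions, not the exact scheme used in the paper:

```python
import random

def total_tardiness(seq, proc, due):
    """Total tardiness of a job sequence on a single machine."""
    t, tard = 0, 0
    for j in seq:
        t += proc[j]
        tard += max(0, t - due[j])
    return tard

def aco_smttp(proc, due, n_ants=10, n_iter=200, alpha=1.0, beta=2.0, rho=0.1):
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]              # pheromone: job j at position i
    eta = [1.0 / (due[j] + 1.0) for j in range(n)]   # earliest-due-date style heuristic
    best_seq = list(range(n))
    best_val = total_tardiness(best_seq, proc, due)
    for _ in range(n_iter):
        for _ in range(n_ants):
            unvisited, seq = list(range(n)), []
            for pos in range(n):
                w = [(tau[pos][j] ** alpha) * (eta[j] ** beta) for j in unvisited]
                j = random.choices(unvisited, weights=w, k=1)[0]
                seq.append(j)
                unvisited.remove(j)
            val = total_tardiness(seq, proc, due)
            if val < best_val:
                best_seq, best_val = seq, val
        # evaporate pheromone everywhere, then reinforce the best sequence found
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for pos, j in enumerate(best_seq):
            tau[pos][j] += 1.0 / (1.0 + best_val)
    return best_seq, best_val
```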
893

An evaluation system for intelligent smart badges

Liu, Yi January 2006 (has links)
In this thesis we develop and test a software algorithm for an electronic smart badge system. The smart badge system we have developed is able to infer the interests of people who wear the badge, using time and position information collected by the badge. The badge can also present feedback to the wearer, so that users may be guided towards people with similar interests and thus have more effective conversations. The smart badge system is based on an inference system that uses a Bayesian network. Evaluating the system was challenging because no completed badges were available. To overcome this, we developed a simulation of crowd behaviour in a conference setting. We tuned the parameters of the model using several test situations, and the final simulated behaviour appeared realistic. Compared to other smart badge systems, our work is unique in that it enhances conversation through real-time inference of the common ideas or interests of the conversation participants.
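As an illustration of the kind of Bayesian-network inference such a badge could perform, here is a toy Python sketch that updates the probability that two wearers share an interest from an observed co-location duration; the prior and conditional probabilities are invented for illustration, not taken from the thesis:

```python
# Conditional probability table: P(dwell-time bucket | shared interest).
# Toy assumption: pairs with a shared interest tend to linger longer.
CPT = {
    True:  {"short": 0.1, "medium": 0.3, "long": 0.6},
    False: {"short": 0.6, "medium": 0.3, "long": 0.1},
}

def posterior_shared(bucket, prior=0.2):
    """Posterior P(shared interest | observed dwell-time bucket) by Bayes' rule."""
    num = CPT[True][bucket] * prior
    den = num + CPT[False][bucket] * (1.0 - prior)
    return num / den

# A long co-location raises a 0.2 prior to 0.6:
# (0.6 * 0.2) / (0.6 * 0.2 + 0.1 * 0.8) = 0.12 / 0.20 = 0.6
print(posterior_shared("long"))
```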
894

Semi-supervised lexical acquisition for wide-coverage parsing

Thomforde, Emily Jane January 2013 (has links)
State-of-the-art parsers suffer from incomplete lexicons, as evidenced by the fact that they all contain built-in methods for dealing with out-of-lexicon items at parse time. Since new labelled data is expensive to produce and no amount of it will conquer the long tail, we attempt to address this problem by leveraging the enormous amount of raw text available for free, and expanding the lexicon offline, with a semi-supervised word learner. We accomplish this with a method similar to self-training, where a fully trained parser is used to generate new parses with which the next generation of parser is trained. This thesis introduces Chart Inference (CI), a two-phase word-learning method with Combinatory Categorial Grammar (CCG), operating on the level of the partial parse as produced by a trained parser. CI uses the parsing model and lexicon to identify the CCG category type for one unknown word in a context of known words by inferring the type of the sentence using a model of end punctuation, then traversing the chart from the top down, filling in each empty cell as a function of its mother and its sister. We first specify the CI algorithm, and then compare it to two baseline word-learning systems over a battery of learning tasks. CI is shown to outperform the baselines in every task, and to function in a number of applications, including grammar acquisition and domain adaptation. This method performs consistently better than self-training, and improves upon the standard POS-backoff strategy employed by the baseline StatCCG parser by adding new entries to the lexicon. The first learning task establishes lexical convergence over a toy corpus, showing that CI’s ability to accurately model a target lexicon is more robust to initial conditions than either of the baseline methods. We then introduce a novel natural language corpus based on children’s educational materials, which is fully annotated with CCG derivations. We use this corpus as a testbed to establish that CI is capable in principle of recovering the whole range of category types necessary for a wide-coverage lexicon. The complexity of the learning task is then increased using the CCGbank corpus, a CCG version of the Penn Treebank, showing that CI improves as its initial seed corpus grows. The next experiment uses CCGbank as the seed and attempts to recover missing question-type categories in the TREC question answering corpus. The final task extends the coverage of the CCGbank-trained parser by running CI over the raw text of the Gigaword corpus. Where appropriate, a fine-grained error analysis is also undertaken to supplement the quantitative evaluation of parser performance with deeper analysis of the linguistic properties of the lexicon and parsing model.
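A toy Python sketch of the top-down inference step described above: inferring the category of an unknown daughter from the mother cell and the known sister by inverting CCG function application. The string encoding of categories is an illustrative simplification, not StatCCG's actual representation:

```python
def infer_daughter(mother, sister, sister_side):
    """Invert CCG function application to recover an unknown daughter.

    Forward application:  X/Y  Y   => X  (unknown left daughter is mother/sister)
    Backward application: Y    X\\Y => X  (unknown right daughter is mother\\sister)
    """
    if sister_side == "right":
        # known sister is the argument of an unknown forward functor on the left
        return f"({mother}/{sister})"
    else:
        # known sister is the argument of an unknown backward functor on the right
        return f"({mother}\\{sister})"

# Sentence category S with a known subject NP on the left: the unknown
# word to the right must act as S\NP (a verb phrase).
print(infer_daughter("S", "NP", sister_side="left"))   # prints (S\NP)
```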
895

LFG-DOT : a hybrid architecture for robust MT

Way, Andrew January 2001 (has links)
No description available.
896

Process capability modelling for manufacturing process selection in an integrated simultaneous engineering workstation

Naish, Jane Catherine January 1999 (has links)
No description available.
897

Learning matrix and functional models in high-dimensions

Balasubramanian, Krishnakumar 27 August 2014 (has links)
Statistical machine learning methods provide us with a principled framework for extracting meaningful information from noisy high-dimensional data sets. A notable feature of such procedures is that the inferences made are statistically significant, computationally efficient and scientifically meaningful. In this thesis we make several contributions to such statistical procedures. Our contributions are two-fold. We first address prediction and estimation problems in non-standard situations. We show that even when given no access to labeled samples, one can still consistently estimate the error rate of predictors and train predictors with respect to a given (convex) loss function. We next propose an efficient procedure for predicting with large output spaces, that scales logarithmically in the dimensionality of the output space. We further propose an asymptotically optimal procedure for sparse multi-task learning when the tasks share a joint support. We show consistency of the proposed method and derive rates of convergence. We next address the problem of learning meaningful representations of data. We propose a method for learning sparse representations that takes into account the structure of the data space and demonstrate how it enables one to obtain meaningful features. We establish sample complexity results for the proposed approach. We then propose a model-free feature selection procedure and establish its sure-screening property in the high dimensional regime. Furthermore we show that with a slight modification, the approach previously proposed for sparse multi-task learning enables one to obtain sparse representations for multiple related tasks simultaneously.
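As one concrete instance of the joint-support sparse multi-task setting mentioned above, here is a minimal proximal-gradient (ISTA-style) sketch in Python with a group-lasso penalty that zeroes out feature rows across all tasks together; the step size and penalty strength are illustrative assumptions, not the thesis's actual procedure:

```python
import numpy as np

def group_soft_threshold(W, t):
    """Shrink each row of W toward zero by t in Euclidean norm (group prox)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def multitask_group_lasso(X, Y, lam=0.1, lr=0.01, n_iter=500):
    """X: (n, d) shared design; Y: (n, k), one column of responses per task.

    Rows of the returned W (one row per feature, one column per task) are
    zeroed out jointly, so all tasks end up sharing a common support.
    """
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n          # least-squares gradient
        W = group_soft_threshold(W - lr * grad, lr * lam)
    return W
```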
898

Modeling Mobile User Behavior for Anomaly Detection

Buthpitiya, Senaka 01 April 2014 (has links)
As ubiquitous computing (ubicomp) technologies reach maturity, smart phones and context-based services are gaining mainstream popularity. A smart phone accompanies its user throughout (nearly) all aspects of his life, becoming an indispensable assistant the busy user relies on to help navigate his life, using map applications to navigate the physical world, email and instant messaging applications to keep in touch, media player applications to be entertained, etc. As a smart phone is capable of sensing the physical and virtual context of the user with an array of “hard” sensors (e.g., GPS, accelerometer) and “soft” sensors (e.g., email, social network, calendar), it is well-equipped to tailor the assistance it provides to the user. Over the life of a smart phone, it is entrusted with an enormous amount of personal information, everything from context information sensed by the phone to contact lists to call logs to passwords. Based on this rich set of information it is possible to model the behavior of the user, and to use the models to detect anomalies (i.e., significant variations) in the user’s behavior. Anomaly detection capabilities enable a variety of application domains such as device theft detection, improved authentication mechanisms, impersonation prevention, physical emergency detection, remote elder-care monitoring, and other proactive services. There has been extensive prior research on anomaly detection in various application domains (e.g., fraud detection, intrusion detection). Yet these approaches cannot be used in ubicomp environments because 1) they are very application-specific and not versatile enough to learn the complex day-to-day behavior of users, 2) they work with a small number of information sources producing a relatively uniform stream of information (unlike sensor data from mobile devices), and 3) most approaches require labeled or semi-labeled data about anomalies (in ubicomp environments, it is very costly to create labeled datasets). Existing work on anomaly detection in ubicomp environments is quite sparse. Most of it focuses on using a single sensor information stream (GPS in most cases) to detect anomalies in the user’s behavior. There exists a somewhat richer vein of prior work on modeling user behavior with the goal of behavior prediction, but this is again limited mostly to a single sensor stream or a single type of prediction (mostly location). This dissertation presents the notion of modeling mobile user behavior as a collection of models, each capturing an aspect of the user’s behavior such as indoor mobility, typing patterns, or calling patterns. A novel mechanism, CobLE, is developed for combining these models, which operate on asynchronous information sources from the mobile device, taking into consideration how well each model is estimated to perform in the current context. These ideas are concretely implemented in an extensible framework, McFAD. Evaluations carried out on this framework using real-world datasets show that, in contrast to prior work, the framework for detecting anomalous behavior 1) vastly reduces the training data requirement, 2) increases coverage, and 3) dramatically increases performance.
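A toy sketch of the model-combination idea: each behavior model emits an anomaly score, and the scores are blended with weights reflecting how well each model is estimated to perform in the current context. The weighting scheme and names below are illustrative assumptions, not the actual CobLE algorithm or McFAD API:

```python
def combined_anomaly_score(scores, reliability, context):
    """Context-weighted blend of per-model anomaly scores.

    scores:      {model_name: anomaly score in [0, 1]}
    reliability: {model_name: {context: estimated performance weight}}
    """
    total_w = sum(reliability[m].get(context, 0.0) for m in scores)
    if total_w == 0.0:
        return 0.0
    weighted = sum(scores[m] * reliability[m].get(context, 0.0) for m in scores)
    return weighted / total_w

# Hypothetical per-model scores and context-specific reliability estimates.
models = {"mobility": 0.9, "typing": 0.2, "calling": 0.4}
trust = {
    "mobility": {"indoors": 0.8, "driving": 0.1},
    "typing":   {"indoors": 0.6, "driving": 0.0},
    "calling":  {"indoors": 0.3, "driving": 0.7},
}
if combined_anomaly_score(models, trust, "indoors") > 0.5:
    print("behavior flagged as anomalous")
```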
899

Development of a blown tubular film take-off system

Pierce, Hugh A. January 1975 (has links)
This creative project has investigated the engineering principles relevant to the design and construction of a blown tubular film take-off system. The study has also made a careful analysis of the equipment necessary to construct a blown tubular film extrusion line. In addition, the creative project has discussed alternate methods of producing polyolefin films and has suggested possible solutions for troubleshooting, which can provide valuable assistance in the successful production of quality blown tubular film.
900

Improved sequence-read simulation for (meta)genomics

September 2014 (has links)
There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data for metagenomics applications following empirical read-length distributions and quality profiles based on empirically-derived information from actual sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
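To illustrate the empirical (rather than parametric) modeling idea, here is a minimal Python sketch that samples simulated read lengths directly from the length distribution of a real run; the names and data are illustrative, and this is not BEAR's actual implementation:

```python
import random
from collections import Counter

def empirical_length_sampler(observed_lengths):
    """Build a sampler over the empirical read-length distribution of a run."""
    counts = Counter(observed_lengths)
    lengths = list(counts)
    weights = [counts[L] for L in lengths]
    def sample(k=1):
        # draw k lengths in proportion to their observed frequencies
        return random.choices(lengths, weights=weights, k=k)
    return sample

# Hypothetical lengths extracted from a real sequencing run.
real_run = [98, 100, 100, 101, 150, 100, 99, 150, 100]
sample = empirical_length_sampler(real_run)
print(sample(k=5))   # e.g. [100, 100, 150, 99, 100]
```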
