  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

VLSI design methodology

Mhar, Javeed I. January 1990
The development of FIRST was a significant step in the field of silicon compilation. With FIRST, bit-serial signal processing systems could be rapidly implemented in silicon by high-level designers without requiring layout expertise. This thesis explores extensions to the compiler, but the methodology and techniques are not specific to FIRST and could be used in the more general VLSI arena. One major theme is the use of process-independent layout, allowing a cell library to be rapidly updated to current state-of-the-art process rules. After a survey of other layout strategies, one particular layout style, gate matrix, was evaluated through the manual layout of a bit-serial, two's complement multiplier utilising novel architectural features. The operation and architectural features of the multiplier are described, as these features were to be incorporated as options in newly generated cell libraries. SECOND, a full-span silicon compiler that takes the high-level input description of FIRST but synthesizes layout to a process-independent form (gate matrix), was developed using ideas gained from the manual assembly procedure. SECOND maintains and extends the hierarchy of FIRST, using different assembly strategies for different levels of hierarchy in the synthesis procedure. The hierarchy is described, and the placement, routing and assembly procedures of the new elements of the hierarchy are covered. The automation tools used to generate the gate matrix layout of the lowest hierarchy level of SECOND are covered in a separate chapter. Using the same concepts of hierarchy, a tool, ENGEN, which transforms FIRST intermediate code to a gate-level network description in HILO, is also described as an alternative to SECOND in the search for process independence. The thesis ends with the suggestion of a bit-serial/bit-parallel framework for encouraging the wider acceptance of bit-serial systems.
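The bit-serial style that FIRST targets can be made concrete with a short sketch: operands stream through the datapath one bit per clock, LSB first, with only a carry flip-flop of state. The Python model below of an LSB-first serial adder for two's complement operands is purely illustrative (the function names and bit-list encoding are my own, not taken from FIRST or the thesis); it shows the one-bit-per-clock dataflow on which bit-serial multipliers are built.

```python
def serial_add(a_bits, b_bits):
    """Bit-serial two's complement addition, LSB first.

    a_bits, b_bits: equal-length lists of 0/1 bits, least significant first.
    Models a serial full adder: one sum bit per 'clock', carry held in state.
    """
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)
        carry = s >> 1
    return out  # final carry discarded, as in fixed-width hardware

def to_bits(x, width):
    """Encode an integer as a two's complement bit list, LSB first."""
    return [(x >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Decode an LSB-first two's complement bit list back to an integer."""
    x = sum(b << i for i, b in enumerate(bits))
    if bits[-1]:  # sign bit set
        x -= 1 << len(bits)
    return x
```

Adding -3 and 5 at eight-bit precision streams out the bits of 2; overflow wraps exactly as it would in two's complement hardware.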

Non-linear adaptive equalization based on a multi-layer perceptron architecture

Siu, Sammy January 1991
The subject of this thesis is an original study of the application of the multi-layer perceptron architecture to channel equalization in digital communications systems. Both theoretical analyses and simulations were performed to explore the performance of the perceptron-based equalizer (including the decision feedback equalizer). Topics covered include the factors that affect the performance of these structures: the parameters of the learning algorithm (learning gain and momentum), the network topology (input dimension, number of neurons and number of hidden layers), and the power metrics of the error cost function. Based on a geometric hyperplane analysis of the multi-layer perceptron, the results offer valuable insight into the properties and complexity of the network. Comparisons of the bit error rate performance and the dynamic behaviour of the decision boundary of the perceptron-based equalizer with both the optimal non-linear equalizer and the optimal linear equalizer are provided. Through these comparisons, some asymptotic results for the performance of the perceptron-based equalizer are obtained. Furthermore, a comparison of the performance of the perceptron-based equalizer (including the decision feedback equalizer) with the least mean squares linear transversal equalizer (including its decision feedback variant) indicates that the former offers a significant reduction in the bit error rate. This is because it has the ability to form highly non-linear decision regions, in contrast with the linear equalizer, which can only form linear decision regions; this linearity limits the performance of the conventional linear equalizer.
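A minimal sketch of a perceptron-based equalizer of the kind studied here, assuming a toy linear channel y[n] = x[n] + 0.5·x[n-1] plus noise and BPSK (±1) symbols; the network size, learning gain and momentum values are illustrative choices, not those of the thesis.

```python
import numpy as np

def train_mlp_equalizer(received, sent, hidden=8, lr=0.05, momentum=0.5, epochs=30):
    """Toy one-hidden-layer perceptron equalizer, trained by stochastic
    backpropagation with a momentum term. Inputs are (y[n], y[n-1]);
    the target is the transmitted symbol x[n] in {-1, +1}."""
    rng = np.random.default_rng(0)
    X = np.column_stack([received[1:], received[:-1]])
    t = sent[1:]
    W1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = 0.0
    for _ in range(epochs):
        for x, y in zip(X, t):
            h = np.tanh(x @ W1 + b1)           # hidden layer
            o = np.tanh(h @ W2 + b2)           # output neuron
            do = (y - o) * (1 - o ** 2)        # output delta (MSE cost)
            dh = do * W2 * (1 - h ** 2)        # back-propagated hidden delta
            vW2 = momentum * vW2 + lr * do * h;            W2 += vW2
            vb2 = momentum * vb2 + lr * do;                b2 += vb2
            vW1 = momentum * vW1 + lr * np.outer(x, dh);   W1 += vW1
            vb1 = momentum * vb1 + lr * dh;                b1 += vb1
    def equalize(y0, y1):
        h = np.tanh(np.array([y0, y1]) @ W1 + b1)
        return 1.0 if np.tanh(h @ W2 + b2) >= 0 else -1.0
    return equalize
```

On a simulated burst of 400 symbols through the assumed channel, the trained network recovers the transmitted sequence with high accuracy; the decision boundary it forms need not be linear, which is the point of the architecture.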

A methodology for automated service level agreement compliance prediction

Yassin Kassab, Rouaa January 2013
Service Level Agreement (SLA) specification languages express monitorable contracts between service providers and consumers. It is of interest to determine whether predictive models can be derived for SLAs expressed in such languages, ideally in a fashion that is as automated as possible. Assuming that the service developer or user employs an SLA specification language during the service development or deployment process, the Service level agreement Compliance Prediction (SlaCP) methodology is proposed as a general engineering methodology for predicting SLA compliance. This methodology helps contractual parties to assess the probability of SLA compliance, as automatically as is feasible, by mapping an existing SLA onto a stochastic model of the service and using existing numerical solution algorithms or discrete event simulation to solve the model. The SlaCP methodology is generic, but in this thesis it is mostly described assuming the use of the Web Service Level Agreement (WSLA) language and the Stochastic Discrete Event Systems (SDES) formalism. The approach taken is first to associate formal semantics with WSLA elements so that they can be interpreted with mathematical precision. A five-step mapping between the source and target formalisms is then conducted, covering: model primitives, reward metrics, expressions for functions of these metrics, the time at which the prediction occurs, and the ultimate probability of SLA compliance. The proposed methodology is implemented in a software tool that automates most of its steps using Möbius and SPNP. The methodology is evaluated using a case study which shows its feasibility and limitations in both theoretical and practical terms.
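The core idea — solving a stochastic model of the service to obtain a compliance probability — can be sketched with a Monte Carlo simulation. The example below assumes an M/M/1 queue as the service model and an SLA clause bounding the mean response time over a window of requests; both are simple stand-ins for the WSLA/SDES machinery of the thesis, and the function name is hypothetical.

```python
import random

def sla_compliance_probability(arrival_rate, service_rate, threshold,
                               window=100, runs=2000, seed=0):
    """Monte Carlo estimate of P(mean response time over a window <= threshold)
    for an M/M/1 queue standing in for the stochastic service model."""
    rng = random.Random(seed)
    compliant = 0
    for _ in range(runs):
        t = 0.0; depart = 0.0; total = 0.0
        for _ in range(window):
            t += rng.expovariate(arrival_rate)        # next arrival time
            start = max(t, depart)                    # wait if server busy
            depart = start + rng.expovariate(service_rate)
            total += depart - t                       # response time of this job
        if total / window <= threshold:
            compliant += 1
    return compliant / runs
```

With arrival rate 1 and service rate 2, the long-run mean response time is 1/(μ-λ) = 1 second, so a 2-second SLA bound is almost always met while a 0.3-second bound almost never is.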

Trust models for mobile content-sharing applications

Quercia, D. January 2009
Using recent technologies such as Bluetooth, mobile users can share digital content (e.g., photos, videos) with other users in proximity. However, to reduce the cognitive load on mobile users, it is important that only appropriate content is stored and presented to them. This dissertation examines the feasibility of having mobile users filter out irrelevant content by running trust models. A trust model is a piece of software that keeps track of which devices are trusted (for sending quality content) and which are not. Unfortunately, existing trust models are not fit for purpose. Specifically, they lack the ability to: (1) reason about ratings other than binary ratings in a formal way; (2) rely on the trustworthiness of stored third-party recommendations; (3) aggregate recommendations to make accurate predictions of whom to trust; and (4) reason across categories without resorting to ontologies that are shared by all users in the system. We overcome these shortcomings by designing and evaluating algorithms and protocols with which portable devices are able automatically to maintain information about the reputability of sources of content and to learn from each other’s recommendations. More specifically, our contributions are:

1. An algorithm that formally reasons on generic (not necessarily binary) ratings using Bayes’ theorem.
2. A set of security protocols with which devices store ratings in (local) tamper-evident tables and are able to check the integrity of those tables through a gossiping protocol.
3. An algorithm that arranges recommendations in a “Web of Trust” and that makes predictions of trustworthiness that are more accurate than existing approaches by using graph-based learning.
4. An algorithm that learns the similarity between any two categories by extracting similarities between the two categories’ ratings rather than by requiring a universal ontology. It does so automatically by using Singular Value Decomposition.
We combine these algorithms and protocols and, using real-world mobility and social network data, we evaluate the effectiveness of our proposal in allowing mobile users to select reputable sources of content. We further examine the feasibility of implementing our proposal on current mobile phones by examining the storage and computational overhead it entails. We conclude that our proposal is both feasible to implement and performs better across a range of parameters than a number of current alternatives.
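The first contribution — formal reasoning about generic, non-binary ratings via Bayes’ theorem — can be sketched with a Dirichlet-multinomial model: each rating level accumulates a count, the posterior is the counts plus a uniform prior, and trust is the posterior-expected score. This is an illustrative stand-in, not the dissertation's exact algorithm.

```python
def trust_from_ratings(counts, levels=None):
    """Bayesian trust from generic ratings (Dirichlet-multinomial sketch).

    counts[i] is the number of ratings observed at level i; the rating scale
    need not be binary. Returns the posterior-expected quality in [0, 1].
    """
    k = len(counts)
    if levels is None:
        levels = [i / (k - 1) for i in range(k)]  # evenly spaced level scores
    alpha = [c + 1 for c in counts]               # uniform Dirichlet prior
    total = sum(alpha)
    # Expected score under the posterior over rating-level probabilities.
    return sum(a / total * s for a, s in zip(alpha, levels))
```

A device rated ten times at the top of a three-level scale earns high trust; ten bottom-level ratings earn low trust; with a binary scale the formula reduces to the familiar (positives + 1)/(total + 2) rule.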

ROAR: increasing the flexibility and performance of distributed search

Raiciu, C. January 2011
Search engines are a fundamental building block of the web. Be they general purpose web search engines, product search engines for online catalogues or people search in online networks, search engines provide easy access to a huge amount of information. To cope with large amounts of information, search engines use many distributed servers to perform their functionality. For instance, to search the web quickly, search engines partition the web index over many machines, and consult every partition when answering a query. To increase throughput, replicas are added for each of these machines. The key parameter of these search algorithms is the trade-off between replication and partitioning: increasing the partitioning level typically improves query completion time since more servers handle the query. However, partitioning too much also has drawbacks: startup costs for each sub-query are not negligible, and will decrease total throughput. Finding the right operating point and adapting to it can significantly improve performance and reduce costs. In this thesis we propose that the trade-off between partitioning and replication should be easily configurable. To this end we introduce Rendezvous On a Ring (ROAR), a novel distributed algorithm that enables on-the-fly re-configuration of the partitioning level. ROAR can add and remove servers without stopping the system, cope with server failures, and provide good load-balancing even with a heterogeneous server pool. We experimentally show that it is possible to dynamically adjust the partitioning level to cope with different loads while meeting target query delays, and in doing so the system can reduce its power consumption significantly. To test ROAR we introduce Privacy Preserving Search (PPS): a particular search application that allows users to store encrypted data online while being able to easily search that data.
Our contributions include novel protocols that allow PPS for numeric values, as well as a proof of concept implementation of PPS running on top of ROAR and allowing users to match as many as 5 million files in well under 1s.
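The partitioning/replication trade-off at the heart of ROAR can be captured in a back-of-the-envelope cost model: with partition level p, each query fans out to p sub-queries, each paying a fixed startup cost plus 1/p of the work, so latency falls with p while the total server-seconds consumed per query (and hence lost throughput) grows. The formulas below are an illustrative model, not ROAR's actual implementation.

```python
def roar_tradeoff(n_servers, total_work, startup_cost, p):
    """Latency/throughput model for partition level p (replication = n/p).

    Each of the p parallel sub-queries pays `startup_cost` seconds plus its
    1/p share of `total_work` seconds of index-scanning work.
    """
    latency = startup_cost + total_work / p           # sub-queries run in parallel
    busy_time = p * (startup_cost + total_work / p)   # server-seconds per query
    throughput = n_servers / busy_time                # queries/sec at saturation
    return latency, throughput
```

Raising p from 2 to 50 on a 100-server pool with 10 s of work and 0.1 s startup cuts latency from 5.1 s to 0.3 s but also cuts saturation throughput, which is exactly the operating-point choice ROAR makes reconfigurable at run time.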

Private and censorship-resistant communication over public networks

Rogers, M. J. January 2011
Society’s increasing reliance on digital communication networks is creating unprecedented opportunities for wholesale surveillance and censorship. This thesis investigates the use of public networks such as the Internet to build robust, private communication systems that can resist monitoring and attacks by powerful adversaries such as national governments. We sketch the design of a censorship-resistant communication system based on peer-to-peer Internet overlays in which the participants only communicate directly with people they know and trust. This ‘friend-to-friend’ approach protects the participants’ privacy, but it also presents two significant challenges. The first is that, as with any peer-to-peer overlay, the users of the system must collectively provide the resources necessary for its operation; some users might prefer to use the system without contributing resources equal to those they consume, and if many users do so, the system may not be able to survive. To address this challenge we present a new game theoretic model of the problem of encouraging cooperation between selfish actors under conditions of scarcity, and develop a strategy for the game that provides rational incentives for cooperation under a wide range of conditions. The second challenge is that the structure of a friend-to-friend overlay may reveal the users’ social relationships to an adversary monitoring the underlying network. To conceal their sensitive relationships from the adversary, the users must be able to communicate indirectly across the overlay in a way that resists monitoring and attacks by other participants. We address this second challenge by developing two new routing protocols that robustly deliver messages across networks with unknown topologies, without revealing the identities of the communication endpoints to intermediate nodes or vice versa. 
The protocols make use of a novel unforgeable acknowledgement mechanism that proves that a message has been delivered without identifying the source or destination of the message or the path by which it was delivered. One of the routing protocols is shown to be robust to attacks by malicious participants, while the other provides rational incentives for selfish participants to cooperate in forwarding messages.

The role of goal relevance in the occurrence of systematic slip errors in routine procedural tasks

Ament, M. G. A. January 2011
Slip errors can have severe consequences but are notoriously difficult to reduce. Training, visual cues and increasing motivation are generally not effective in eliminating these slips. Instead, the approach this work takes is to identify which steps in a routine task are most error-prone, so that these can be designed out of device interactions. In particular, device- and task-oriented steps are investigated. Device-oriented steps are "extra" steps imposed by the device that do not directly contribute towards the task goal. Conversely, task-oriented steps directly bring the user closer to their goal. The main hypothesis addressed in this work is that device-oriented steps are more problematic than task-oriented ones. The concepts of device- and task-oriented steps are investigated more closely by analysing the literature on routine action and mental representations of different steps. The core difference between the steps is found to be how relevant a step is to the goal. This is further supported by two qualitative studies. A series of experimental studies investigates the cognitive mechanisms underlying device- and task-oriented steps. This is addressed through six experiments that measure error rates, step times, proportion of omissions and sensitivity to working memory load. Participants learned one of three routine tasks, with several carefully controlled device- and task-oriented steps. The results show that on device-oriented steps, error rates are higher, step times are longer, the proportion of omissions is greater, and working memory load has an increased effect. These findings support the hypothesis that activation levels are lower on device-oriented steps. The thesis concludes that a step's relevance to the task goal plays an important role in the occurrence of errors. This work has implications both for our understanding of routine procedural action and for the design of devices.

Structured sparsity with convex penalty functions

Morales, J. M. January 2012
We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in Machine Learning, Statistics and Signal Processing. It is well known that a linear regression can benefit from knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components of this vector can be “relaxed” by regularising the squared error with a convex penalty function like the ℓ1 norm. However, in many applications, additional conditions on the structure of the regression vector and its sparsity pattern are available. Incorporating this information into the learning method may lead to a significant decrease of the estimation error. In this thesis, we present a family of convex penalty functions, which encode prior knowledge on the structure of the vector formed by the absolute values of the regression coefficients. This family subsumes the ℓ1 norm and is flexible enough to include different models of sparsity patterns, which are of practical and theoretical importance. We establish several properties of these penalty functions and discuss some examples where they can be computed explicitly. Moreover, for solving the regularised least squares problem with these penalty functions, we present a convergent optimisation algorithm and a proximal method. Both are useful numerical techniques tailored for different kinds of penalties. Extensive numerical simulations highlight the benefit of structured sparsity and the advantage offered by our approach over the Lasso method and other related methods, such as using other convex optimisation penalties or greedy methods.
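The ℓ1 relaxation and the proximal method mentioned above can be illustrated with the simplest member of the family: proximal gradient descent (ISTA) for the Lasso, where the proximal operator of the ℓ1 norm is coordinate-wise soft-thresholding. This is a generic textbook sketch, not the structured penalties developed in the thesis.

```python
import numpy as np

def ista_lasso(A, y, lam, step=None, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Alternates a gradient step on the smooth squared-error term with the
    proximal operator of lam*||.||_1, i.e. soft-thresholding.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                    # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x
```

On a noiseless random design with a 2-sparse ground truth, the iterates recover both the support and (up to the small ℓ1 bias) the coefficient values.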

Development and application of model selection methods for investigating brain function

Duarte Rosa, M. J. January 2012
The goal of any scientific discipline is to learn about nature, usually through the process of evaluating competing hypotheses for explaining observations. Brain research is no exception. Investigating brain function usually entails comparing models, expressed as mathematical equations, of how the brain works. The aim of this thesis is to provide and evaluate new model comparison techniques that facilitate this research. In addition, it applies existing comparison methods to disambiguate between hypotheses of how neuronal activity relates to blood flow, a topic known as neurovascular coupling. In neuroimaging, techniques such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) make it possible to image the brain routinely, whilst statistical frameworks, such as statistical parametric mapping (SPM), make it possible to identify regionally specific responses, or brain activations. In this thesis, SPM is first used to address the problem of neurovascular coupling and to compare different putative coupling functions, which relate fMRI signals to different features of the EEG power spectrum. These inferences are made using linear models and a model selection approach based on F-tests. Although valid, this approach is restricted to nested models. The thesis then focuses on the development of a Bayesian technique to construct posterior model probability maps (PPMs) for group studies. PPMs are analogous to F-tests but not limited to nested hypotheses. The work then returns to neurovascular coupling, this time from a mechanistic perspective not afforded by linear models. For this purpose, a detailed biophysical framework is used to explore the contribution of synaptic and spiking activity to the generation of hemodynamic signals in visual cortex, using simultaneous EEG-fMRI. This approach is a special case of brain connectivity models.
Finally, using fMRI data, this thesis validates a recently proposed Bayesian approach for quickly comparing large numbers of connectivity models based on inverting a single model.
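The model-comparison arithmetic behind PPMs can be sketched in a few lines: given log model evidences, Bayes' rule with a prior over models yields posterior model probabilities, computed with the usual max-subtraction for numerical stability. This is the generic computation only, not the specific group-level PPM scheme developed in the thesis.

```python
import math

def posterior_model_probs(log_evidences, priors=None):
    """Posterior model probabilities from log model evidences via Bayes' rule.

    A flat prior over models is assumed unless `priors` is supplied.
    Subtracting the max log evidence avoids overflow in exp().
    """
    n = len(log_evidences)
    if priors is None:
        priors = [1.0 / n] * n
    m = max(log_evidences)
    weighted = [math.exp(le - m) * p for le, p in zip(log_evidences, priors)]
    z = sum(weighted)
    return [w / z for w in weighted]
```

For two models whose evidences differ by a Bayes factor of 3, the flat-prior posterior splits 0.25 / 0.75; a PPM plots such posterior probabilities voxel by voxel.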

Representations and completions for ordered algebraic structures

Egrot, R. E. L. January 2013
The primary concerns of this thesis are completions and representations for various classes of poset expansion, and a recurring theme will be that of axiomatizability. By a representation we mean something similar to the Stone representation whereby a Boolean algebra can be homomorphically embedded into a field of sets. So, in general we are interested in order-embedding posets into fields of sets in such a way that existing meets and joins are interpreted naturally as set-theoretic intersections and unions respectively. Our contributions in this area are an investigation into the ostensibly second-order property of whether a poset can be order-embedded into a field of sets in such a way that arbitrary meets and/or joins are interpreted as set-theoretic intersections and/or unions respectively. Among other things we show that, unlike Boolean algebras, which have such a ‘complete’ representation if and only if they are atomic, the classes of bounded, distributive lattices and posets with complete representations have no first-order axiomatizations (though they are pseudoelementary). We also show that the class of posets with representations preserving arbitrary joins is pseudoelementary but not elementary (a dual result also holds). We discuss various completions relating to the canonical extension, whose classical construction is related to the Stone representation. We claim some new results on the structure of classes of poset meet-completions which preserve particular sets of meets, in particular that they form a weakly upper semimodular lattice. We make explicit the construction of \Delta_{1}-completions using a two stage process involving meet- and join-completions. Linking our twin topics we discuss canonicity for the representation classes we deal with, and by building representations using a meet-completion construction as a base we show that the class of representable ordered domain algebras is finitely axiomatizable.
Our method has the advantage of representing finite algebras over finite bases.
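The kind of set representation discussed here can be made concrete with the classic principal-downset embedding: map each element x to {a : a ≤ x}. This is an order embedding, and any existing binary meet is sent to an intersection (since a ≤ x∧y iff a ≤ x and a ≤ y); joins, by contrast, need not be preserved, which is part of what makes the representation questions above subtle. A small illustrative check, using the divisibility order on {1, 2, 3, 6}:

```python
def downset_embedding(elements, leq):
    """Embed a finite poset into a field of sets via principal downsets.

    leq(a, b) should return True iff a <= b in the poset. Each element x is
    mapped to the set {a : a <= x}; the map is an order embedding and sends
    existing binary meets to set intersections.
    """
    return {x: frozenset(a for a in elements if leq(a, x)) for x in elements}
```

Under divisibility, 1 = 2 ∧ 3 (the gcd), and indeed the downset of 1 is the intersection of the downsets of 2 and 3, while 2 ≤ 6 and 2 ≰ 3 are reflected exactly by set inclusion.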
