241

Quarc : an architecture for efficient on-chip communication

Moadeli, Mahmoud January 2010 (has links)
The exponential downscaling of feature size has forced a paradigm shift from computation-based design to communication-based design in system-on-chip development. Buses, the traditional communication architecture in systems on chip, are incapable of addressing the increasing bandwidth requirements of future large systems. Networks on chip have emerged as an interconnection architecture offering unique solutions to the technological and design issues related to communication in future systems on chip. The transition from buses as a shared medium to networks on chip as a segmented medium has given rise to new challenges in the system-on-chip realm. By leveraging the shared nature of the communication medium, buses have been highly efficient at delivering multicast communication. The segmented nature of networks, however, prevents networks on chip from delivering multicast messages as efficiently. Building on extensive research into multicast communication in parallel computers, several network-on-chip architectures offer mechanisms to perform the operation while conforming to the resource constraints of the network-on-chip paradigm. In the majority of these networks on chip, multicast communication is implemented by establishing a connection between the source and all multicast destinations before message transmission commences. Establishing these connections incurs an overhead and is therefore undesirable, particularly for latency-sensitive services such as cache coherence. To address high-performance multicast communication, this research presents Quarc, a novel network-on-chip architecture. The Quarc architecture targets an area-efficient, low-power, high-performance implementation. The thesis covers a detailed presentation of the building blocks of the architecture, including the topology, router and network interface. A cost and performance comparison of the Quarc architecture against other network-on-chip architectures reveals that Quarc is a highly efficient architecture. Moreover, the thesis introduces novel performance models of complex traffic patterns, including multicast and quality-of-service-aware communication.
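A toy cost model makes the motivation concrete. The sketch below assumes a unidirectional ring topology and counts link traversals; it is an illustration of why native multicast support matters in an on-chip network, not the Quarc mechanism itself.

```python
# Toy comparison: multicast delivered as repeated unicasts versus a single
# path-based multicast that drops off copies along its route.
# Assumes a unidirectional ring of n nodes; cost = link traversals.

def hops(src: int, dst: int, n: int) -> int:
    """Hop count from src to dst on a unidirectional ring of n nodes."""
    return (dst - src) % n

def unicast_replication(src: int, dests: list[int], n: int) -> int:
    """One separate copy per destination: the hop counts add up."""
    return sum(hops(src, d, n) for d in dests)

def path_multicast(src: int, dests: list[int], n: int) -> int:
    """A single copy circulates and is duplicated at each destination, so the
    cost is only the distance to the farthest destination."""
    return max(hops(src, d, n) for d in dests)

if __name__ == "__main__":
    n, src, dests = 16, 0, [2, 5, 9, 12]
    print(unicast_replication(src, dests, n))  # 28 link traversals
    print(path_multicast(src, dests, n))       # 12 link traversals
```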
242

Electrochemical sensor system architecture using the CMOS-MEMS technology for cytometry applications

Piechocinski, Marek January 2012 (has links)
This thesis presents the development process of an integrated sensor-system-on-chip for recording the parameters of blood cells. The CMOS-based device consists of two flow-through sensor arrays, stacked one on top of the other. The sensors are able to detect a biological cell in terms of its physical size and the surface charge on its membrane. The development of the measurement system was divided into several stages: to design and implement the two sensor arrays, complemented with readout circuitry, on a single CMOS chip; to create an on-chip membrane with embedded flow-through micro-channels by CMOS-compatible post-processing techniques; to encapsulate and hermetically package the device for liquid chemistry experiments; to test and characterise the two sensor arrays together with the readout electronics; to develop control and data acquisition software; and to detect biological cells using the complete measurement system. The cytometry and haematology fields are closely related to the presented work, hence it is envisaged that the developed technology will enable further integration and miniaturisation of biomedical instrumentation. The two vertically stacked 4 x 4 flow-through sensor arrays, embedded into an on-chip membrane, were implemented in a single silicon chip together with readout circuitry for each of the sensor sets. To develop the CMOS-MEMS device, the design and fabrication were carried out using a commercial process design kit (0.35 µm, 4-metal, 2-poly CMOS) as well as the foundry service. Thereafter the device was post-processed in-house to develop the on-chip membrane and open the sensing micro-apertures. The two types of sensor were integrated on the silicon die for multi-parametric characterisation of the analyte. To read the cell membrane charge, an ion-sensitive field effect transistor (ISFET) was utilised, and for cell size (volume) detection an impedance sensor (Coulter counter) was used. Both sensors rely on a flow-through mode of operation, hence a constant flow of the analyte sample could be maintained. The Coulter counter metal electrode was exposed to the solution, while the ISFET floating gate electrode maintained contact with the analyte through a charge-sensitive membrane constructed of a dielectric material (silicon dioxide) lining the inside of the micro-pore. The outside size of each of the electrodes was 100 µm x 100 µm and the inside varied from 20 µm x 20 µm to 58 µm x 58 µm. The sense aperture size also varied, from 10 µm x 10 µm to 16 µm x 16 µm. The two stacked micro-electrode arrays were laid out on an area of 500 µm x 500 µm. The CMOS-MEMS device was fitted into a custom printed circuit board (PCB) chip carrier, then insulated and hermetically packaged. Microfluidic ports were attached to the packaged module so that the analyte can be introduced and drained in a flow-through mode of operation. The complete microfluidic system and packaging was assembled and thereafter evaluated for correct operation. Undisturbed flow of the analyte solution is essential for sensor operation: the electrochemical response of both sensors depends on the analyte flowing through the sense micro-apertures, so any aggregation of the sample within the microfluidic system would clog the micro-pores. The on-chip electronic circuitry was characterised and, after comparison with the simulated results, found to agree within an error margin that enables reliable sensor signal readout.
The measurement system is automated by software control so that the bias parameters can be set precisely, which also aided error debugging. Analogue signals from the two sensor arrays were acquired, then processed and stored by a data acquisition system. Both the control and data capture systems are implemented in a high-level programming language. Furthermore, both are integrated and operated from a single window-based graphical user interface (GUI). The fully functional measurement system was used as a flow-through cytometer for the detection of living cells. The measurement results showed that the system is capable of single-cell detection and on-the-fly data display.
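As an illustration of the kind of processing such a data acquisition system performs, the sketch below detects cell-transit pulses in a Coulter-counter-style impedance trace by simple thresholding. The synthetic trace, threshold and gap parameters are assumptions made for the example; this is not the software developed in the thesis.

```python
import numpy as np

def detect_cell_events(trace: np.ndarray, threshold: float, min_gap: int = 5):
    """Return (peak index, peak height) for each excursion above `threshold`.

    Excursions closer together than `min_gap` samples are treated as one cell
    transit, a simplifying assumption for this sketch.
    """
    above = trace > threshold
    events = []
    i = 0
    while i < len(trace):
        if above[i]:
            j = i
            while j < len(trace) and above[j]:
                j += 1                                   # end of this pulse
            peak_idx = i + int(np.argmax(trace[i:j]))
            events.append((peak_idx, float(trace[peak_idx])))
            i = j + min_gap                              # skip the merge window
        else:
            i += 1
    return events

# Synthetic baseline noise with two cell-transit pulses.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 2000)
trace[500:520] += 0.3    # first cell
trace[1400:1425] += 0.5  # second cell
print(detect_cell_events(trace, threshold=0.1))
```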
243

Implicit feedback for interactive information retrieval

White, Ryen William January 2004 (has links)
Searchers can find the construction of query statements for submission to Information Retrieval (IR) systems a problematic activity. These problems are confounded by uncertainty about the information they are searching for, or an unfamiliarity with the retrieval system being used or collection being searched. On the World Wide Web these problems are potentially more acute as searchers receive little or no training in how to search effectively. Relevance feedback (RF) techniques allow searchers to directly communicate what information is relevant and help them construct improved query statements. However, the techniques require explicit relevance assessments that intrude on searchers’ primary lines of activity and as such, searchers may be unwilling to provide this feedback. Implicit feedback systems are unobtrusive and make inferences of what is relevant based on searcher interaction. They gather information to better represent searcher needs whilst minimising the burden of explicitly reformulating queries or directly providing relevance information. In this thesis I investigate implicit feedback techniques for interactive information retrieval. The techniques proposed aim to increase the quality and quantity of searcher interaction and use this interaction to infer searcher interests. I develop search interfaces that use representations of the top-ranked retrieved documents such as sentences and summaries to encourage a deeper examination of search results and drive the information seeking process. Implicit feedback frameworks based on heuristic and probabilistic approaches are described. These frameworks use interaction to identify needs and estimate changes in these needs during a search. The evidence gathered is used to modify search queries and make new search decisions such as re-searching the document collection or restructuring already retrieved information. The term selection models from the frameworks and elsewhere are evaluated using a simulation-based evaluation methodology that allows different search scenarios to be modelled. Findings show that the probabilistic term selection model generated the most effective search queries and learned what was relevant in the shortest time. Different versions of an interface that implements the probabilistic framework are evaluated to test it with human subjects and investigate how much control they want over its decisions. The experiment involved 48 subjects with different skill levels and search experience. The results show that searchers are happy to delegate responsibility to RF systems for relevance assessment (through implicit feedback), but not more severe search decisions such as formulating queries or selecting retrieval strategies. Systems that help searchers make these decisions are preferred to those that act directly on their behalf or await searcher action.
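To make the idea of inferring interests from interaction concrete, the sketch below derives query expansion terms from the summaries and sentences a searcher has viewed, using a simple frequency heuristic over assumed inputs. It is a generic illustration, not the heuristic or probabilistic term selection models developed in the thesis.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "on", "that", "with", "from"}

def expansion_terms(viewed_texts: list[str], original_query: str, k: int = 3) -> list[str]:
    """Pick the k most frequent content terms from text the searcher has viewed
    (summaries, sentences), excluding terms already in the query."""
    query_terms = set(original_query.lower().split())
    counts = Counter()
    for text in viewed_texts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS and term not in query_terms and len(term) > 2:
                counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

viewed = [
    "Implicit feedback infers interest from interaction with summaries.",
    "Summaries of top-ranked documents drive query expansion.",
]
print(expansion_terms(viewed, "implicit feedback"))  # e.g. ['summaries', ...]
```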
244

Efficient algorithms for bipartite matching problems with preferences

Sng, Colin Thiam Soon January 2008 (has links)
Matching problems involve a set of participants, where each participant has a capacity and a subset of the participants rank a subset of the others in order of preference (strictly or with ties). Matching problems are motivated in practice by large-scale applications, such as automated matching schemes, which assign participants together based on their preferences over one another. This thesis focuses on bipartite matching problems in which there are two disjoint sets of participants (such as medical students and hospitals). We present a range of efficient algorithms for finding various types of optimal matchings in the context of these problems. Our optimality criteria involve a diverse range of concepts that are alternatives to classical stability. Examples include so-called popular and Pareto optimal matchings, and also matchings that are optimal with respect to their profile (the number of participants obtaining their first choice, second choice and so on). The first optimality criterion that we study is the notion of a Pareto optimal matching, a criterion that economists regard as a fundamental property to be satisfied by an optimal matching. We present the first algorithmic results on Pareto optimality for the Capacitated House Allocation problem (CHA), which is a many-to-one variant of the classical House Allocation problem, as well as for the Hospitals-Residents problem (HR), a generalisation of the classical Stable Marriage problem. For each of these problems, we obtain a characterisation of Pareto optimal matchings, and then use this to obtain a polynomial-time algorithm for finding a maximum Pareto optimal matching. The next optimality criterion that we study is the notion of a popular matching. We study popular matchings in CHA and present a polynomial-time algorithm for finding a maximum popular matching or reporting that none exists, given any instance of CHA. We extend our findings to the case in CHA where preferences may contain ties (CHAT) by proving the extension of a well-known result in matching theory to the capacitated bipartite graph case, and using this to obtain a polynomial-time algorithm for finding a maximum popular matching, or reporting that none exists. We next study popular matchings in the Weighted Capacitated House Allocation problem (WCHA), which is a variant of CHA where the agents have weights assigned to them. We identify a structure in the underlying graph of the problem that singles out those edges that cannot belong to a popular matching. We then use this to construct a polynomial-time algorithm for finding a maximum popular matching or reporting that none exists, for the case where preferences are strict. We then study popular matchings in a variant of the classical Stable Marriage problem with Ties and Incomplete preference lists (SMTI), where preference lists are symmetric. Here, we provide the first characterisation results on popular matchings in the bipartite setting where preferences are two-sided, which can either lead to a polynomial-time algorithm for solving the problem or help establish that it is NP-complete. We also provide the first algorithm for testing if a matching is popular in such a setting. The remaining optimality criteria that we study involve profile-based optimal matchings. We define three versions of what it means for a matching to be optimal based on its profile, namely so-called greedy maximum, rank-maximal and generous maximum matchings. 
We study each of these in the context of CHAT and the Hospitals-Residents problem with Ties (HRT). For each problem model, we give polynomial-time algorithms for finding a greedy maximum, a rank-maximal and a generous maximum matching.
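As a small, self-contained illustration of one of the optimality notions discussed, the sketch below computes a Pareto optimal matching for a one-to-one House Allocation instance by serial dictatorship, a standard greedy scheme in which agents pick in turn. The instance is invented, and this is not the thesis's algorithm for finding a maximum Pareto optimal matching.

```python
def serial_dictatorship(prefs: dict[str, list[str]]) -> dict[str, str]:
    """Assign each agent, in the given order, their most-preferred house that is
    still free. The result is Pareto optimal, though not necessarily of maximum
    size: no agent can be made better off without making another worse off."""
    taken: set[str] = set()
    matching: dict[str, str] = {}
    for agent, ranking in prefs.items():
        for house in ranking:
            if house not in taken:
                matching[agent] = house
                taken.add(house)
                break
    return matching

prefs = {
    "a1": ["h2", "h1"],
    "a2": ["h2", "h3"],
    "a3": ["h1"],
}
print(serial_dictatorship(prefs))  # {'a1': 'h2', 'a2': 'h3', 'a3': 'h1'}
```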
245

Composing graphical user interfaces in a purely functional language

Finnie, Sigbjorn O. January 1998 (has links)
This thesis is about building interactive graphical user interfaces in a compositional manner. Graphical user interface applications hold out the promise of providing users with an interactive, graphical medium by which they can carry out tasks more effectively and conveniently. The application aids the user in solving some task. Conceptually, the user is in charge of the graphical medium, controlling the order and the rate at which individual actions are performed. This user-centred nature of graphical user interfaces has considerable ramifications for how software is structured. Since the application now services the user rather than the other way around, it has to be capable of responding to the user's actions whenever and in whatever order they might occur. This transfer of overall control towards the user places a heavy burden on programming systems, a burden that many systems do not bear well. Why? Because the application now has to be structured so that it is responsive to whatever action the user may perform at any time. The main contribution of this thesis is to present a compositional approach to constructing graphical user interface applications in a purely functional programming language. The thesis is concerned with the software techniques used to program graphical user interface applications, and not directly with their design. A starting point for the work presented here was to examine whether an approach based on functional programming could improve how graphical user interfaces are built. Functional programming languages, and Haskell in particular, contain a number of distinctive features, such as higher-order functions, polymorphic type systems, lazy evaluation, and systematic overloading, that together pack quite a punch, at least according to proponents of these languages. A secondary contribution of this thesis is to present a compositional user interface framework called Haggis, which makes good use of current functional programming techniques. The thesis evaluates the properties of this framework by comparing it to existing systems.
246

Design and implementation of an array language for computational science on a heterogeneous multicore architecture

Keir, Paul January 2012 (has links)
The packing of multiple processor cores onto a single chip has become a mainstream solution to fundamental physical issues relating to the microscopic scales employed in the manufacture of semiconductor components. Multicore architectures provide lower clock speeds per core, while aggregate floating-point capability continues to increase. Heterogeneous multicore chips, such as the Cell Broadband Engine (CBE) and modern graphics chips, also address the related issue of an increasing mismatch between high processor speeds and huge latency to main memory. Such chips tackle this memory wall by the provision of addressable caches, increased bandwidth to main memory, and fast thread context switching. An associated cost is often the reduced functionality of the individual accelerator cores, and the increased complexity involved in their programming. This dissertation investigates the application of a programming language supporting the first-class use of arrays, and capable of automatically parallelising array expressions, to the heterogeneous multicore domain of the CBE, as found in the Sony PlayStation 3 (PS3). The language is a pre-existing and well-documented proper subset of Fortran, known as the ‘F’ programming language. A bespoke compiler, referred to as E, is developed to support this aim, and written in the Haskell programming language. The output of the compiler is in an extended C++ dialect known as Offload C++, which targets the PS3. A significant feature of this language is its use of multiple, statically typed address spaces. By focusing on generic, polymorphic interfaces for both the generated and hand-constructed code, a number of interesting design patterns relating to memory locality are introduced. A suite of medium-sized (100-700 lines), real-world benchmark programs is used to evaluate the performance, correctness, and scalability of the compiler technology. Absolute speedup values, well in excess of one, are observed for all of the programs. The work ultimately demonstrates that an array language can significantly reduce the effort expended to utilise a parallel heterogeneous multicore architecture, while retaining high performance. A substantial related advantage of using standard ‘F’ is that any Fortran compiler can create debuggable and competitively performing serial programs.
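The key property an array language exposes is that a whole-array expression has no dependencies between elements, so its evaluation can be split across cores without programmer intervention. The sketch below mimics that data parallelism in Python with numpy and threads; it is a conceptual illustration only, not the E compiler or its Offload C++ output.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# A whole-array expression such as  c = a*b + 2.0*a  ('F'/Fortran array style)
# can be evaluated chunk by chunk on separate workers, because each element of
# the result depends only on the corresponding elements of the inputs.

def expr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return a * b + 2.0 * a

def parallel_eval(a: np.ndarray, b: np.ndarray, workers: int = 4) -> np.ndarray:
    chunks_a = np.array_split(a, workers)
    chunks_b = np.array_split(b, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(expr, chunks_a, chunks_b))
    return np.concatenate(parts)

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones_like(a)
assert np.allclose(parallel_eval(a, b), expr(a, b))  # same result as serial evaluation
```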
247

Understanding the performance of Internet video over residential networks

Ellis, Martin January 2012 (has links)
Video streaming applications are now commonplace among home Internet users, who typically access the Internet using DSL or Cable technologies. However, the effect of these technologies on video performance, in terms of degradations in video quality, is not well understood. To enable continued deployment of applications with improved quality of experience for home users, it is essential to understand the nature of network impairments and develop means to overcome them. In this dissertation, I demonstrate the type of network conditions experienced by Internet video traffic, by presenting a new dataset of the packet level performance of real-time streaming to residential Internet users. Then, I use these packet level traces to evaluate the performance of commonly used models for packet loss simulation, and finding the models to be insufficient, present a new type of model that more accurately captures the loss behaviour. Finally, to demonstrate how a better understanding of the network can improve video quality in a real application scenario, I evaluate the performance of forward error correction schemes for Internet video using the measurements. I show that performance can be poor, devise a new metric to predict performance of error recovery from the characteristics of the input, and validate that the new packet loss model allows more realistic simulations. For the effective deployment of Internet video systems to users of residential access networks, a firm understanding of these networks is required. This dissertation provides insights into the packet level characteristics that can be expected from such networks, and techniques to realistically simulate their behaviour, promoting development of future video applications.
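A classic example of the commonly used loss models referred to above is the two-state Gilbert (Gilbert-Elliott) model, which alternates between a good and a bad state to produce bursty packet loss. The sketch below simulates it with assumed transition probabilities; it is a baseline illustration, not the improved model the dissertation proposes after finding such models insufficient.

```python
import random

def gilbert_loss(n_packets: int, p_gb: float, p_bg: float,
                 loss_in_bad: float = 1.0, seed: int = 1) -> list[bool]:
    """Simulate per-packet loss with a two-state Gilbert model.

    p_gb: probability of moving from the Good to the Bad state after a packet.
    p_bg: probability of moving from Bad back to Good.
    loss_in_bad: loss probability while in the Bad state (Good state is loss-free).
    Returns a list of booleans, True meaning the packet was lost.
    """
    rng = random.Random(seed)
    state_bad = False
    lost = []
    for _ in range(n_packets):
        if state_bad:
            lost.append(rng.random() < loss_in_bad)
            if rng.random() < p_bg:
                state_bad = False
        else:
            lost.append(False)
            if rng.random() < p_gb:
                state_bad = True
    return lost

trace = gilbert_loss(100_000, p_gb=0.01, p_bg=0.3)
print(sum(trace) / len(trace))  # overall loss rate, roughly p_gb / (p_gb + p_bg)
```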
248

Probabilistic reasoning and inference for systems biology

Vyshemirsky, Vladislav January 2007 (has links)
One of the important challenges in Systems Biology is reasoning and performing hypotheses testing in uncertain conditions, when available knowledge may be incomplete and the experimental data may contain substantial noise. In this thesis we develop methods of probabilistic reasoning and inference that operate consistently within an environment of uncertain knowledge and data. Mechanistic mathematical models are used to describe hypotheses about biological systems. We consider both deductive model based reasoning and model inference from data. The main contributions are a novel modelling approach using continuous time Markov chains that enables deductive derivation of model behaviours and their properties, and the application of Bayesian inferential methods to solve the inverse problem of model inference and comparison, given uncertain knowledge and noisy data. In the first part of the thesis, we consider both individual and population based techniques for modelling biochemical pathways using continuous time Markov chains, and demonstrate why the latter is the most appropriate. We illustrate a new approach, based on symbolic intervals of concentrations, with an example portion of the ERK signalling pathway. We demonstrate that the resulting model approximates the same dynamic system as traditionally defined using ordinary differential equations. The advantage of the new approach is quantitative logical analysis; we formulate a number of biologically significant queries in the temporal logic CSL and use probabilistic symbolic model checking to investigate their veracity. In the second part of the thesis, we consider the inverse problem of model inference and testing of alternative hypotheses, when models are defined by non-linear ordinary differential equations and the experimental data is noisy and sparse. We compare and evaluate a number of statistical techniques, and implement an effective Bayesian inferential framework for systems biology based on Markov chain Monte Carlo methods and estimation of marginal likelihoods by annealing-melting integration. We illustrate the framework with two case studies, one of which involves an open problem concerning the mediation of ERK phosphorylation in the ERK pathway.
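As a minimal illustration of the Bayesian machinery involved, the sketch below applies a random-walk Metropolis sampler to infer a single rate constant of a toy decay model from noisy synthetic data. The model, data, prior and proposal width are all assumptions made for the example; the thesis's framework additionally handles non-linear ODE systems, model comparison, and marginal likelihood estimation by annealing-melting integration.

```python
import numpy as np

# Toy model: dx/dt = -k*x with x(0) = 1, so x(t) = exp(-k*t), observed with
# Gaussian noise. We sample the posterior of k with random-walk Metropolis.

rng = np.random.default_rng(42)
t = np.linspace(0, 5, 20)
k_true, sigma = 0.8, 0.05
y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)   # synthetic observations

def log_posterior(k: float) -> float:
    if k <= 0:
        return -np.inf                                   # flat prior on k > 0
    resid = y - np.exp(-k * t)
    return -0.5 * np.sum((resid / sigma) ** 2)           # Gaussian log-likelihood

samples, k = [], 0.5
logp = log_posterior(k)
for _ in range(20_000):
    k_prop = k + rng.normal(0, 0.05)                     # random-walk proposal
    logp_prop = log_posterior(k_prop)
    if np.log(rng.random()) < logp_prop - logp:          # Metropolis accept test
        k, logp = k_prop, logp_prop
    samples.append(k)

print(np.mean(samples[5_000:]))  # posterior mean after burn-in, close to k_true
```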
249

Probabilistic symmetry reduction

Power, Christopher January 2012 (has links)
Model checking is a technique used for the formal verification of concurrent systems. A major hindrance to model checking is the so-called state-space explosion problem, where the number of states in a model grows exponentially as variables are added. This means even trivial systems can require millions of states to represent and are often too large to verify feasibly. Fortunately, models often exhibit underlying replication which can be exploited to aid verification. Exploiting this replication is known as symmetry reduction and has yielded considerable success in non-probabilistic verification. The main contribution of this thesis is to show how symmetry reduction techniques can be applied to explicit-state probabilistic model checking. In probabilistic model checking the need for such techniques is particularly acute, since it requires not only an exhaustive state-space exploration, but also a numerical solution phase to compute probabilities or other quantitative values. The approach we take enables the automated detection of arbitrary data and component symmetries from a probabilistic specification. We define new techniques to exploit the identified symmetry and provide efficient generation of the quotient model. We prove the correctness of our approach, and demonstrate its viability by implementing a tool to apply symmetry reduction to an explicit-state model checker.
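The core idea of exploiting component symmetry is to map every state onto a canonical representative of its orbit before it is stored, so that only one state per orbit is explored. The sketch below does this for full symmetry among identical processes (any permutation of local states is equivalent, so sorting gives the representative); in the probabilistic setting the transition probabilities into each orbit would additionally be summed. It is a textbook illustration under an assumed toy system, not the detection or quotient-generation algorithms of the thesis.

```python
def canonical(state) -> tuple[int, ...]:
    """Canonical representative of a state under full symmetry of identical
    processes: every permutation of local states maps to the sorted tuple."""
    return tuple(sorted(state))

def reachable_quotient(initial, successors) -> set:
    """Explicit-state exploration that stores only canonical representatives,
    shrinking the state space while preserving reachability."""
    start = canonical(initial)
    seen = {start}
    frontier = [start]
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            c = canonical(t)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

# Toy system: three identical processes, each with local state 0, 1 or 2,
# where one process at a time may increment its local state modulo 3.
def successors(state):
    for i in range(len(state)):
        nxt = list(state)
        nxt[i] = (nxt[i] + 1) % 3
        yield tuple(nxt)

print(len(reachable_quotient((0, 0, 0), successors)))  # 10 orbits instead of 27 states
```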
250

An investigation into error detection and recovery in UK National Health Service screening programmes

Chozos, Nick January 2009 (has links)
The purpose of this thesis is to gain an understanding of the problems that may impede detection and recovery of NHS laboratory screening errors. This is done by developing an accident analysis technique that isolates and further analyzes error handling activities, and applying it in four case studies: four recent incidents where laboratory errors in NHS screening programmes resulted in multiple misdiagnoses over months or even years. These errors produced false yet plausible test results, and so were masked and almost impossible to detect in isolated cases. The technique is based on a theoretical framework that draws upon cognitive science and systems engineering in order to explore the impact of plausibility on the entire process of error recovery. The four analyses are then integrated and compared in order to produce a set of conclusions and recommendations. The main output of this work is the “Screening Error Recovery Model”, a model which captures and illustrates the different kinds of activities that took place during the organizational incident response to these four incidents. The model can be used to analyze and design error recovery procedures in complex, inter-organizational settings, such as the NHS and its Primary/Secondary care structure.
