121

Applications of signal detection theory to the performance of imaging systems, human observers and artificial intelligence in radiography

Manning, David J. January 1998 (has links)
An investigation was carried out to evaluate diagnostic performance in medical radiology. A critical review of methods available for the assessment of image quality, in terms of physical objective measurements and quantitative observer performance, was followed by a series of experiments which applied Receiver Operating Characteristic (ROC) techniques to radiographic problems. An appraisal of the performance of six currently available imaging systems designed for chest radiography was performed using expert observers and an anthropomorphic phantom. Results showed a range of abilities to demonstrate pulmonary nodules (ROC areas of 0.866 to 0.961). The ROC outcomes for the imaging systems were shown to correlate well with signal-to-noise ratio (SNR) measurements for the images (0.78, p < 0.05), although comparisons of ROC and threshold detection indices (HT) gave a lower level of agreement (0.6, p < 0.05). The SNR method of image evaluation could probably be used as an alternative to ROC techniques in routine quality assurance procedures when time is short. Observers from a group of undergraduate radiography students were tested in an ROC study of their ability to detect pulmonary lesions in chest images. Their ROC areas (Az) ranged from 0.616 to 0.857 (mean 0.74), compared with an expert mean score of 0.872. The low score for the students was investigated in terms of the cognitive task and their search strategy training. Their Az scores showed no significant correlation with simple psychometric tests. A neural network was tested against radiologists, radiographers and student radiographers in its ability to identify fractures in wrist radiographs. All observers performed to a similar level of ROC Az score, but the artificial intelligence showed higher specificity values. This attribute was used to filter some of the normals from the test population and resulted in changes to the mean Az score for the human observers.
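As an illustration of the figure of merit reported above, the following is a minimal sketch (not from the thesis) of how an ROC area (Az) might be estimated from observer confidence ratings and ground truth using a trapezoidal ROC; the ratings shown are hypothetical.

```python
# Minimal sketch: estimating an ROC area (Az) from observer ratings.
# The ratings and truth labels below are hypothetical illustrative data,
# not results from the thesis.

def roc_area(truth, scores):
    """Trapezoidal area under the ROC curve for binary truth labels
    (1 = lesion present, 0 = absent) and observer confidence scores."""
    # Sort cases by decreasing confidence and sweep the decision threshold.
    pairs = sorted(zip(scores, truth), reverse=True)
    pos = sum(truth)
    neg = len(truth) - pos
    tp = fp = 0
    area = 0.0
    prev_fpr = prev_tpr = 0.0
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / pos, fp / neg
        area += (fpr - prev_fpr) * (tpr + prev_tpr) / 2   # trapezoid strip
        prev_fpr, prev_tpr = fpr, tpr
    return area

truth = [1, 1, 0, 1, 0, 0, 1, 0]                      # hypothetical ground truth
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]    # observer confidence ratings
print(f"Az = {roc_area(truth, scores):.3f}")          # Az = 0.750
```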
122

A structural approach to matching problems with preferences

McDermid, Eric J. January 2011 (has links)
This thesis is a study of a number of matching problems that seek to match together pairs or groups of agents subject to the preferences of some or all of the agents. We present a number of new algorithmic results for five specific problem domains. Each of these results is derived with the aid of some structural properties implicitly embedded in the problem. We begin by describing an approximation algorithm for the problem of finding a maximum stable matching for an instance of the stable marriage problem with ties and incomplete lists (MAX-SMTI). Our polynomial-time approximation algorithm provides a performance guarantee of 3/2 for the general version of MAX-SMTI, improving upon the previous best approximation algorithm, which gave a performance guarantee of 5/3. Next, we study the sex-equal stable marriage problem (SESM). We show that SESM is W[1]-hard, even if the men's and women's preference lists are both of length at most three. This improves upon the previously known hardness results. We contrast this with an exact, low-order exponential time algorithm. This is the first non-trivial exponential time algorithm known for this problem, or indeed for any hard stable matching problem. Turning our attention to the Hospitals/Residents problem with couples (HRC), we show that HRC is NP-complete, even if very severe restrictions are placed on the input. By contrast, we give a linear-time algorithm to find a stable matching with couples (or report that none exists) when stability is defined in terms of the classical Gale-Shapley concept. This result represents the most general polynomial-time solvable restriction of HRC that we are aware of. We then explore the three-dimensional stable matching problem (3DSM), in which we seek to find stable matchings across three sets of agents, rather than two (as in the classical case). We show that under two natural definitions of stability, finding a stable matching for a 3DSM instance is NP-complete. These hardness results resolve some open questions in the literature. Finally, we study the popular matching problem (POP-M) in the context of matching a set of applicants to a set of posts. We provide a characterization of the set of popular matchings for an arbitrary POP-M instance in terms of a new structure called the switching graph, which is exploited to yield efficient algorithms for a range of associated problems, extending and improving upon the previously best-known results for this problem.
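For background, the classical Gale-Shapley (deferred acceptance) algorithm referred to above, for the basic stable marriage problem without ties or incomplete lists, can be sketched as follows; this is a textbook illustration, not the thesis's approximation algorithm for MAX-SMTI or its HRC algorithm.

```python
# Sketch of the classical Gale-Shapley algorithm for the basic stable
# marriage problem (complete, strictly ordered lists). Background for the
# variants studied in the thesis, not the thesis's own algorithms.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs / women_prefs: dicts mapping each agent to an ordered
    preference list over the other side. Returns a man-optimal stable
    matching as a dict {man: woman}."""
    free_men = list(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}          # next woman each man proposes to
    rank = {w: {m: i for i, m in enumerate(prefs)}     # each woman's ranking of the men
            for w, prefs in women_prefs.items()}
    engaged_to = {}                                    # woman -> current partner

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                          # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])             # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free_men.append(m)                         # w rejects m; he proposes again later

    return {m: w for w, m in engaged_to.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))   # {'m2': 'w1', 'm1': 'w2'}
```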
123

Bigraphs with sharing and applications in wireless networks

Sevegnani, Michele January 2012 (has links)
Bigraphs are a fully graphical process algebraic formalism, capable of representing both the position in space of agents and their inter-connections. However, they assume a topology based on sets of trees and thus cannot represent spatial locations that are shared among several entities in a simple or intuitive way. This is a problem, because shared locations are often a requirement, for example, when modelling scenarios in the physical world or in modern complex computer systems such as wireless networks and spatial-aware applications in ubiquitous computing. We propose bigraphs with sharing, a generalisation of the original definition of bigraphs, to allow for overlapping topologies. The new locality model is based on directed acyclic graphs. We demonstrate the new formalism can be defined in the general framework of bigraphical theories and wide reactive systems, as originally devised by Robin Milner. We do so by defining a categorical interpretation of bigraphs with sharing, an axiomatisation derived from the equations of a bialgebra over finite ordinals, and a normal form to express bigraphical terms. We illustrate how sharing is essential for modelling overlapping localities by presenting two example case studies in the field of wireless networking. We show that bigraphs with sharing can be used realistically in a production environment by describing the implementation of an efficient matching algorithm and a software tool for the definition, simulation, visualisation and analysis of bigraphical reactive systems.
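To make the shift from trees to directed acyclic graphs concrete, the sketch below is an informal illustration only (not the categorical definition used in the thesis): it represents a place graph in which one location is shared by two parents, and checks that the parent structure remains acyclic.

```python
# Informal sketch of a "place graph with sharing": locations form a DAG
# rather than a forest, so a node may have several parents. This only
# illustrates the idea of the locality model, not the formal bigraphical
# definition used in the thesis.

from collections import defaultdict

# Hypothetical example: two wireless cells whose coverage areas overlap,
# with one device located in both cells at once.
parents = {
    "cell_A": ["network"],
    "cell_B": ["network"],
    "device": ["cell_A", "cell_B"],   # shared location: two parents
}

def is_acyclic(parents):
    """Check that the parent relation is a DAG using depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def visit(node):
        colour[node] = GREY
        for p in parents.get(node, []):
            if colour[p] == GREY:                      # back edge -> cycle
                return False
            if colour[p] == WHITE and not visit(p):
                return False
        colour[node] = BLACK
        return True

    return all(colour[n] == BLACK or visit(n) for n in parents)

shared = [n for n, ps in parents.items() if len(ps) > 1]
print("shared locations:", shared)       # ['device']
print("acyclic:", is_acyclic(parents))   # True
```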
124

Part of speech N-grams for information retrieval

Lioma, Christina Amalia January 2008 (has links)
The increasing availability of information on the World Wide Web (Web), and the need to access relevant aspects of this information, provide an important impetus for the development of automatic intelligent Information Retrieval (IR) technology. IR systems convert human-authored language into representations that can be processed by computers, with the aim of providing humans with access to knowledge. Specifically, IR applications locate and quantify informative content in data, and make statistical decisions on the topical similarity, or relevance, between different items of data. The wide popularity of IR applications in recent decades has driven intensive research and development into theoretical models of information and relevance, and their implementation in usable applications, such as commercial search engines. The majority of IR systems today typically rely on statistical manipulations of individual lexical frequencies (i.e., single word counts) to estimate the relevance of a document to a user request, on the assumption that such lexical statistics can be sufficiently representative of informative content. Such estimations implicitly assume that words occur independently of each other, and as such ignore the compositional semantics of language. This assumption, however, is not entirely true, and can cause several problems, such as ambiguity in understanding textual information, misinterpreting or falsifying the original informative intent, and limiting the semantic scope of text. These problems can hinder the accurate estimation of relevance between texts, and hence harm the performance of an IR application. This thesis investigates the use of non-lexical statistics by IR models, with the goal of enhancing the estimation of relevance between a document and a user request. These non-lexical statistics consist of part of speech information. The parts of speech are the grammatical classes of words (e.g., noun, verb). Part of speech statistics are modelled in the form of part of speech (POS) n-grams, which are contiguous sequences of parts of speech extracted from text. The distribution of POS n-grams in language is statistically analysed. It is shown that there exists a relationship between the frequency and informative content of POS n-grams. Based on this, different applications of POS n-grams to IR technology are described and evaluated with state-of-the-art systems. Experimental results show that POS n-grams can assist the retrieval process.
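As a rough illustration of the kind of feature described above (not the thesis's own implementation), POS n-grams can be extracted from tagged text along the following lines; this assumes the NLTK library with its standard tokeniser and tagger models.

```python
# Rough sketch of extracting part-of-speech (POS) n-grams from text.
# Assumes NLTK is installed and its tokeniser/tagger models have been
# downloaded; illustrative only, not the thesis's implementation.

import nltk
# First run only: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def pos_ngrams(text, n=3):
    """Return the contiguous sequences of n POS tags extracted from `text`."""
    tokens = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    return [tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)]

doc = "Information retrieval systems rank documents by estimated relevance."
for gram in pos_ngrams(doc, n=3):
    print(gram)
# e.g. ('NN', 'NN', 'NNS'), ('NN', 'NNS', 'VBP'), ...
```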
125

A specialised constraint approach for stable matching problems

Unsworth, Chris January 2008 (has links)
Constraint programming is a generalised framework designed to solve combinatorial problems. This framework is made up of a set of predefined independent components and generalised algorithms. This is a very versatile structure which allows a variety of rich combinatorial problems to be represented and solved relatively easily. Stable matching problems consist of a set of participants wishing to be matched into pairs or groups in a stable manner. A matching is said to be stable if there is no pair or group of participants that would rather make a private arrangement to improve their situation and thus undermine the matching. There are many important "real life" applications of stable matching problems across the world, some of which include the Hospitals/Residents problem, in which a set of graduating medical students, also known as residents, needs to be assigned to hospital posts. Some authorities assign children to schools as a stable matching problem. Many other such problems are also tackled as stable matching problems. A number of classical stable matching problems have efficient specialised algorithmic solutions. Constraint programming solutions to stable matching problems have been investigated in the past. These solutions have been able to match the theoretically optimal time complexities of the algorithmic solutions. However, empirical evidence has shown that in reality these constraint solutions run significantly slower than the specialised algorithmic solutions. Furthermore, their memory requirements prohibit them from solving problems which the specialised algorithmic solutions can solve in a fraction of a second. My contribution investigates the possibility of modelling stable matching problems as specialised constraints. The motivation behind this approach was to find solutions to these problems which maintain the versatility of the constraint solutions, whilst significantly reducing the performance gap between constraint and specialised algorithmic solutions. To this end, specialised constraint solutions have been developed for the stable marriage problem and the Hospitals/Residents problem. Empirical evidence has been presented which shows that these solutions can solve significantly larger problems than previously published constraint solutions. For these larger problem instances it was seen that the specialised constraint solutions came within a factor of four of the time required by algorithmic solutions. It has also been shown that, through further specialisation, these constraint solutions can be made to run significantly faster. However, these improvements came at the cost of versatility. As a demonstration of the versatility of these solutions it is shown that, by adding simple side constraints, richer problems can be easily modelled. These richer problems add additional criteria and/or an optimisation requirement to the original stable matching problems. Many of these problems have been proven to be NP-hard and some have no known algorithmic solutions. Included with these models are results from empirical studies which show that these are indeed feasible solutions to the richer problems. Results from the studies also provide some insight into the structure of these problems, some of which have had little or no previous study.
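The stability condition used throughout can be illustrated directly: a matching is stable when no man-woman pair would both prefer each other to their assigned partners. The following is a simple checker for that condition, an illustration of the definition rather than one of the specialised constraint propagators developed in the thesis.

```python
# Simple check of the stability condition for a stable marriage matching:
# a matching is unstable if some (man, woman) pair both prefer each other
# to their current partners (a "blocking pair"). Illustrative only; the
# thesis encapsulates this reasoning inside specialised constraints.

def blocking_pairs(matching, men_prefs, women_prefs):
    """matching: dict {man: woman}. Preference lists are ordered best-first.
    Returns the list of blocking pairs (empty iff the matching is stable)."""
    partner_of_woman = {w: m for m, w in matching.items()}
    blocking = []
    for m, prefs in men_prefs.items():
        current = matching[m]
        # Every woman m strictly prefers to his current partner...
        for w in prefs[:prefs.index(current)]:
            her_partner = partner_of_woman[w]
            her_prefs = women_prefs[w]
            # ...forms a blocking pair if she also prefers m to her partner.
            if her_prefs.index(m) < her_prefs.index(her_partner):
                blocking.append((m, w))
    return blocking

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(blocking_pairs({"m1": "w2", "m2": "w1"}, men, women))  # [('m1', 'w1')] -> unstable
print(blocking_pairs({"m1": "w1", "m2": "w2"}, men, women))  # [] -> stable
```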
126

The Web Engineering Security (WES) methodology

Glisson, William Bradley January 2008 (has links)
The World Wide Web has had a significant impact on basic operational economic components in global, information-rich civilizations. This impact is forcing organizations to provide justification for security from a business-case perspective and to focus on security from a web application development environment perspective. This increased focus on security was the basis of a business case discussion and led to the acquisition of empirical evidence, gathered from a high-level Web survey and more detailed industry surveys, to analyse security in the Web application development environment. Along with this information, a collection of evidence from the relevant literature was also gathered. Individual aspects of the data gathered in the previously mentioned activities contributed to the proposal of the Essential Elements (EE) and the Security Criteria for Web Application Development (SCWAD). The Essential Elements present the idea that there are essential, basic organizational elements that need to be identified, defined and addressed before examining security aspects of a Web Engineering development process. The Security Criteria for Web Application Development identifies criteria that need to be addressed by a secure Web Engineering process. Both the EE and SCWAD are presented in detail along with relevant justification of these two elements to Web Engineering. SCWAD is utilized as a framework to evaluate the security of a representative selection of recognized software engineering processes used in Web Engineering application development. The software engineering processes appraised by SCWAD include the Waterfall Model, the Unified Software Development Process (USD), the Dynamic Systems Development Method (DSDM) and eXtreme Programming (XP). SCWAD is also used to assess existing security methodologies, comprising the Orion Strategy; Survivable/Viable IS approaches; the Comprehensive Lightweight Application Security Process (CLASP); and Microsoft's Trustworthy Computing Security Development Lifecycle. The synthesis of information provided by both the EE and SCWAD was used to develop the Web Engineering Security (WES) methodology. WES is a proactive, flexible, process-neutral security methodology with customizable components that is based on empirical evidence and used to explicitly integrate security throughout an organization's chosen application development process. In order to evaluate the practical application of the EE, SCWAD and the WES methodology, two case studies were conducted during the course of this research. The first case study describes the application of both the EE and SCWAD to the Hunterian Museum and Art Gallery's Online Photo Library (HOPL) Internet application project. The second case study presents the commercial implementation of the WES methodology within a Global Fortune 500 financial service sector organization. The assessment of the WES methodology within the organization consisted of an initial survey establishing current security practices, a follow-up survey after changes were implemented, and an overall analysis of the security conditions assigned to projects throughout the life of the case study.
127

Contextual mobile adaptation

Hall, Malcolm January 2008 (has links)
Ubiquitous computing (ubicomp) involves systems that attempt to fit in with users' context and interaction. Researchers agree that system adaptation is a key issue in ubicomp because it can be hard to predict changes in contexts, needs and uses. Even with the best planning, it is impossible to foresee all uses of software at the design stage. In order for software to continue to be helpful and appropriate it should, ideally, be as dynamic as the environment in which it operates. Changes in user requirements, contexts of use and system resources mean software should also adapt to better support these changes. An area in which adaptation is clearly lacking is in ubicomp systems, especially those designed for mobile devices. By improving techniques and infrastructure to support adaptation it is possible for ubicomp systems to not only sense and adapt to the environments they are running in, but also retrieve and install new functionality so as to better support the dynamic context and needs of users in such environments. Dynamic adaptation of software refers to the act of changing the structure of some part of a software system as it executes, without stopping or restarting it. One of the core goals of this thesis is to discover if such adaptation is feasible, useful and appropriate in the mobile environment, and how designers can create more adaptive and flexible ubicomp systems and associated user experiences. Through a detailed study of existing literature and experience of several early systems, this thesis presents design issues and requirements for adaptive ubicomp systems. This thesis presents the Domino framework, and demonstrates that a mobile collaborative software adaptation framework is achievable. This system can recommend future adaptations based on a history of use. The framework demonstrates that wireless network connections between mobile devices can be used to transport usage logs and software components, with such connections made either in chance encounters or in designed multi-user interactions. Another aim of the thesis is to discover if users can comprehend and smoothly interact with systems that are adapting. To evaluate Domino, a multiplayer game called Castles has been developed, in which game buildings are in fact software modules that are recommended and transferred between players. This evaluation showed that people are comfortable receiving semi-automated software recommendations; these complement traditional recommendation methods such as word of mouth and online forums, with the system's support freeing users to discuss more in-depth aspects of the system, such as tactics and strategies for use, rather than forcing them to discover, acquire and integrate software by themselves.
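As a loose illustration of recommending adaptations from a history of use (hypothetical, and much simpler than Domino's actual mechanism), modules that frequently co-occur in collected usage logs with the ones a user already runs can be suggested by counting co-occurrences:

```python
# Loose illustration of history-based recommendation of software modules:
# suggest modules that frequently co-occur, in collected usage logs, with
# the modules this user already runs. Hypothetical data; far simpler than
# the Domino framework's actual recommendation mechanism.

from collections import Counter
from itertools import combinations

usage_logs = [                         # each log: modules one user ran together
    {"map", "chat", "compass"},
    {"map", "compass", "tracker"},
    {"chat", "tracker"},
    {"map", "tracker", "compass"},
]

def recommend(installed, logs, top_n=2):
    """Rank modules not yet installed by how often they co-occur with
    the installed ones across the usage logs."""
    cooccur = Counter()
    for log in logs:
        for a, b in combinations(log, 2):
            cooccur[frozenset((a, b))] += 1
    scores = Counter()
    for candidate in {m for log in logs for m in log} - installed:
        for have in installed:
            scores[candidate] += cooccur[frozenset((candidate, have))]
    return [m for m, _ in scores.most_common(top_n)]

print(recommend({"map"}, usage_logs))   # ['compass', 'tracker']
```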
128

Quarc : an architecture for efficient on-chip communication

Moadeli, Mahmoud January 2010 (has links)
The exponential downscaling of the feature size has forced a paradigm shift from computation-based design to communication-based design in system-on-chip development. Buses, the traditional communication architecture in systems on chip, are incapable of addressing the increasing bandwidth requirements of future large systems. Networks on chip have emerged as an interconnection architecture offering unique solutions to the technological and design issues related to communication in future systems on chip. The transition from buses as a shared medium to networks on chip as a segmented medium has given rise to new challenges in the system-on-chip realm. By leveraging the shared nature of the communication medium, buses have been highly efficient in delivering multicast communication. The segmented nature of networks, however, prevents multicast messages from being delivered as efficiently by networks on chip. Drawing on extensive research on multicast communication in parallel computers, several network-on-chip architectures have offered mechanisms to perform the operation while conforming to the resource constraints of the network-on-chip paradigm. Multicast communication in the majority of these networks on chip is implemented by establishing a connection between the source and all multicast destinations before message transmission commences. Establishing the connections incurs an overhead and is therefore undesirable, in particular in latency-sensitive services such as cache coherence. To address high-performance multicast communication, this research presents Quarc, a novel network-on-chip architecture. The Quarc architecture targets an area-efficient, low-power, high-performance implementation. The thesis covers a detailed presentation of the building blocks of the architecture, including the topology, the router and the network interface. The cost and performance comparison of the Quarc architecture against other network-on-chip architectures reveals that Quarc is a highly efficient architecture. Moreover, the thesis introduces novel performance models of complex traffic patterns, including multicast and quality-of-service-aware communication.
129

Electrochemical sensor system architecture using the CMOS-MEMS technology for cytometry applications

Piechocinski, Marek January 2012 (has links)
This thesis presents the development process of an integrated sensor-system-on-chip for recording the parameters of blood cells. The CMOS-based device consists of two flow-through sensor arrays, stacked one on top of the other. The sensors are able to detect a biological cell in terms of its physical size and the surface charge on the cell's membrane. The development of the measurement system was divided into several stages: to design and implement the two sensor arrays, complemented with readout circuitry, on a single CMOS chip; to create an on-chip membrane with embedded flow-through micro-channels by CMOS-compatible post-processing techniques; to encapsulate and hermetically package the device for liquid chemistry experiments; to test and characterise the two sensor arrays together with the readout electronics; to develop control and data acquisition software; and to detect biological cells using the complete measurement system. The cytometry and haematology fields are closely related to the presented work, hence it is envisaged that the developed technology will enable further integration and miniaturisation of biomedical instrumentation. The two vertically stacked 4 x 4 flow-through sensor arrays, embedded in an on-chip membrane, were implemented in a single silicon chip together with readout circuitry for each of the sensor sets. To develop the CMOS-MEMS device, the design and fabrication were carried out using a commercial process design kit (0.35 µm, 4-metal, 2-poly CMOS) as well as the foundry service. Thereafter the device was post-processed in-house to develop the on-chip membrane and open the sensing micro-apertures. The two types of sensor were integrated on the silicon die for multi-parametric characterisation of the analyte. To read the cell membrane charge, an ion-sensitive field-effect transistor (ISFET) was utilised, and for cell size (volume) detection an impedance sensor (Coulter counter) was used. Both sensors rely on a flow-through mode of operation, hence a constant flow of the analyte sample could be maintained. The Coulter counter metal electrode was exposed to the solution, while the ISFET floating-gate electrode maintained contact with the analyte through a charge-sensitive membrane constructed of a dielectric material (silicon dioxide) lining the inside of the micro-pore. The outside size of each of the electrodes was 100 µm x 100 µm and the inside varied from 20 µm x 20 µm to 58 µm x 58 µm. The sense aperture size also varied, from 10 µm x 10 µm to 16 µm x 16 µm. The two stacked micro-electrode arrays were laid out on an area of 500 µm x 500 µm. The CMOS-MEMS device was fitted into a custom printed circuit board (PCB) chip carrier, then insulated and hermetically packaged. Microfluidic ports were attached to the packaged module so that the analyte could be introduced and drained in a flow-through mode of operation. The complete microfluidic system and packaging were assembled and thereafter evaluated for correct operation. Undisturbed flow of the analyte solution is essential for sensor operation. This is related to the fact that the electrochemical response of both sensors depends on the analyte flow through the sense micro-apertures, thus any aggregation of the sample within the microfluidic system would cause clogging of the micro-pores. The on-chip electronic circuitry was characterised and, after comparison with the simulated results, found to be within an error margin that enables reliable sensor signal readout.
The measurement system is automated by software control so that the bias parameters can be set precisely; this also helped with debugging. Analogue signals from the two sensor arrays were acquired, then processed and stored by a data acquisition system. Both the control and data capture systems are implemented in a high-level programming language. Furthermore, both are integrated and operated from a single window-based graphical user interface (GUI). The fully functional measurement system was used as a flow-through cytometer for the detection of living cells. The measurement results showed that the system is capable of single-cell detection and on-the-fly data display.
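As an illustration of the kind of downstream signal processing such a system performs (not the acquisition software developed in the thesis), single-cell events in a Coulter-type impedance trace can be detected with a simple threshold-crossing pass over the sampled signal; the samples below are hypothetical.

```python
# Illustrative sketch of detecting single-cell events in a Coulter-type
# impedance trace by threshold crossing: each excursion above the baseline
# threshold is counted as one cell, and its peak height relates to cell
# volume. Hypothetical samples; not the thesis's acquisition software.

def detect_cells(samples, baseline, threshold):
    """Return (count, peak_heights) for pulses exceeding baseline + threshold."""
    peaks = []
    in_pulse = False
    peak = 0.0
    for value in samples:
        height = value - baseline
        if height > threshold:
            in_pulse = True
            peak = max(peak, height)
        elif in_pulse:                 # pulse just ended: record its peak
            peaks.append(peak)
            in_pulse, peak = False, 0.0
    if in_pulse:
        peaks.append(peak)
    return len(peaks), peaks

trace = [1.0, 1.1, 1.0, 2.4, 3.1, 1.2, 1.0, 1.9, 2.8, 2.6, 1.1]  # hypothetical samples
count, heights = detect_cells(trace, baseline=1.0, threshold=0.5)
print(count, heights)   # 2 pulses, peak heights ~[2.1, 1.8]
```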
130

Implicit feedback for interactive information retrieval

White, Ryen William January 2004 (has links)
Searchers can find the construction of query statements for submission to Information Retrieval (IR) systems a problematic activity. These problems are compounded by uncertainty about the information they are searching for, or an unfamiliarity with the retrieval system being used or with the collection being searched. On the World Wide Web these problems are potentially more acute, as searchers receive little or no training in how to search effectively. Relevance feedback (RF) techniques allow searchers to directly communicate what information is relevant and help them construct improved query statements. However, the techniques require explicit relevance assessments that intrude on searchers' primary lines of activity and, as such, searchers may be unwilling to provide this feedback. Implicit feedback systems are unobtrusive and make inferences of what is relevant based on searcher interaction. They gather information to better represent searcher needs whilst minimising the burden of explicitly reformulating queries or directly providing relevance information. In this thesis I investigate implicit feedback techniques for interactive information retrieval. The techniques proposed aim to increase the quality and quantity of searcher interaction and use this interaction to infer searcher interests. I develop search interfaces that use representations of the top-ranked retrieved documents, such as sentences and summaries, to encourage a deeper examination of search results and drive the information seeking process. Implicit feedback frameworks based on heuristic and probabilistic approaches are described. These frameworks use interaction to identify needs and estimate changes in these needs during a search. The evidence gathered is used to modify search queries and make new search decisions such as re-searching the document collection or restructuring already retrieved information. The term selection models from the frameworks and elsewhere are evaluated using a simulation-based evaluation methodology that allows different search scenarios to be modelled. Findings show that the probabilistic term selection model generated the most effective search queries and learned what was relevant in the shortest time. Different versions of an interface that implements the probabilistic framework are evaluated to test it with human subjects and investigate how much control they want over its decisions. The experiment involved 48 subjects with different skill levels and search experience. The results show that searchers are happy to delegate responsibility to RF systems for relevance assessment (through implicit feedback), but not more severe search decisions such as formulating queries or selecting retrieval strategies. Systems that help searchers make these decisions are preferred to those that act directly on their behalf or await searcher action.
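A heuristic flavour of such implicit query modification (a simplified sketch, not the heuristic or probabilistic frameworks evaluated in the thesis) is to weight terms by their frequency in the document representations a searcher chose to view and append the top-scoring ones to the query:

```python
# Simplified sketch of heuristic implicit feedback: terms occurring often in
# the summaries/sentences a searcher chose to view are taken as evidence of
# interest and appended to the query. Illustration only; not the frameworks
# evaluated in the thesis.

from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "a", "to", "in", "for", "is", "on", "with",
             "as", "such", "from"}

def expand_query(query, viewed_texts, n_terms=3):
    """Append the n_terms most frequent non-stopword terms from viewed text."""
    counts = Counter()
    for text in viewed_texts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS and term not in query.split():
                counts[term] += 1
    return query + " " + " ".join(t for t, _ in counts.most_common(n_terms))

viewed = [
    "Implicit feedback infers interest from searcher interaction.",
    "Interaction evidence such as viewed summaries indicates relevance.",
]
print(expand_query("implicit feedback", viewed))
# e.g. "implicit feedback interaction infers interest"
```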
