171

Evolutionary algorithms and machine learning techniques for information retrieval

Ibrahim, Osman Ali Sadek January 2017 (has links)
In Artificial Intelligence research, Evolutionary Algorithms and Machine Learning (EML) techniques play a fundamental role in optimising Information Retrieval (IR). However, many studies have overlooked a key limitation of applying EML when an IR system is first established, while others have compared EML techniques only on overall final results, without analysing important experimental settings such as the training or evolving run-times against the IR effectiveness obtained. Furthermore, most papers on EML techniques in the IR domain do not consider the memory requirements of applying such techniques. This thesis addresses some of these research gaps. It also proposes applying a (1+1)-Evolutionary Strategy ((1+1)-ES), with and without a gradient step-size, to improve IR systems. The thesis starts by identifying the limitation of applying EML techniques at the beginning of an IR system's life: all IR test collections are only partially judged, for only some user queries. This means that the majority of documents in IR test collections have no relevance labels for any user query, yet these labels are needed to check the quality of the evolved solution at each iteration of an EML technique. The thesis therefore introduces a mathematical approach, instead of an EML technique, for the early stage of establishing an IR system, and shows the impact of the pre-processing procedure on this approach. The heuristic limitations of IR processes such as pre-processing motivate the use of EML techniques to optimise IR systems once relevance labels have been gathered. The thesis proposes a (1+1)-Evolutionary Gradient Strategy ((1+1)-EGS) to evolve Global Term Weights (GTW) in IR documents. A GTW is a value assigned to each index term to indicate how well it characterises the topics of documents; it reflects the term's power to discriminate between documents in the same collection. The (1+1)-EGS technique is applied in two ways, as fully and partially evolved procedures; of the two, the partially evolved method outperformed the mathematical model (Term Frequency-Average Term Occurrence, TF-ATO), the probabilistic model (Okapi BM25) and the fully evolved method. The evaluation metrics for these experiments were Mean Average Precision (MAP), Average Precision (AP) and Normalised Discounted Cumulative Gain (NDCG). Another important IR process is supervised Learning to Rank (LTR) on fully judged datasets, after relevance labels have been gathered from user interaction. The relevance labels indicate the degree to which each document is relevant or irrelevant to a user query. LTR is a current IR problem that attracts attention from researchers; it is mainly concerned with ranking the retrieved documents in search engines, question answering and product recommendation systems. A number of LTR approaches come from the EML area, but most are too slow, insufficiently effective, or present too large a problem size. This thesis investigates a new application of a (1+1)-Evolutionary Strategy with three initialisation techniques, resulting in three algorithm variations (ES-Rank, IESR-Rank and IESVM-Rank), to tackle the LTR problem.
Experimental results comparing the proposed methods to fourteen EML techniques from the literature show that IESR-Rank achieves the best overall performance. The experiments used ten datasets (the MSLR-WEB10K dataset, the LETOR 4 datasets and the LETOR 3 datasets) and five performance metrics evaluated over the top ten query-document pairs retrieved: Mean Average Precision (MAP), Root Mean Square Error (RMSE), Precision (P@10), Reciprocal Rank (RR@10) and Normalised Discounted Cumulative Gain (NDCG@10). Finally, the thesis presents the benefits of using ES-Rank to optimise an online click model that simulates user click interactions. Overall, the contribution of this thesis is an effective and efficient EML method for tackling various processes within IR, and it advances the understanding of how EML techniques can be applied to improve IR systems.
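As a rough sketch of the core loop behind the (1+1)-ES family of methods used throughout this thesis, the following Python fragment evolves a weight vector against a fitness function. The fixed Gaussian mutation step and the stand-in fitness are illustrative assumptions only; the thesis's actual systems score candidates with IR metrics such as MAP and also explore gradient-informed step sizes.

```python
import random

def one_plus_one_es(fitness, initial, sigma=0.1, iterations=1000):
    """Minimal (1+1)-Evolutionary Strategy: keep a single parent vector,
    mutate it with Gaussian noise, and accept the child only if it scores
    at least as well under the fitness function (in the IR setting, a
    retrieval-effectiveness measure over training queries)."""
    parent = list(initial)
    parent_fit = fitness(parent)
    for _ in range(iterations):
        child = [w + random.gauss(0.0, sigma) for w in parent]
        child_fit = fitness(child)
        if child_fit >= parent_fit:  # maximise effectiveness
            parent, parent_fit = child, child_fit
    return parent, parent_fit

# Toy usage: evolve three term weights against a stand-in fitness.
target = [0.2, 0.7, 0.5]
fitness = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
weights, score = one_plus_one_es(fitness, [0.5, 0.5, 0.5])
print(weights, score)
```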
172

Scaleable audio for collaborative environments

Radenkovic, Milena January 2002 (has links)
This thesis is concerned with supporting natural audio communication in collaborative environments across the Internet. Recent experience with Collaborative Virtual Environments, for example to support large on-line communities and highly interactive social events, suggests that in the future there will be applications in which many users speak at the same time. Such applications will generate large and dynamically changing volumes of audio traffic that can cause congestion, and hence packet loss, in the network, seriously impairing audio quality. This thesis reveals that no current approach to audio distribution can combine support for large numbers of simultaneous speakers with TCP-fair responsiveness to congestion. A model for audio distribution called Distributed Partial Mixing (DPM) is proposed that dynamically adapts both to varying numbers of active audio streams in collaborative environments and to congestion in the network. Each DPM component adaptively mixes subsets of its input audio streams into one or more mixed streams, which it then forwards to the other components along with any unmixed streams. DPM minimises the amount of mixing performed, so that end users receive as many separate audio streams as possible within prevailing network resource constraints. This is important in order to allow maximum flexibility of audio presentation (especially spatialisation) at the end user. A distributed partial mixing prototype is realised as part of the audio service in MASSIVE-3. A series of experiments over a single network link demonstrates that DPM gracefully manages the trade-off between preserving stable audio quality, being responsive to congestion, and achieving fairness towards competing TCP traffic. The problem of large-scale deployment of DPM over heterogeneous networks is also addressed. The thesis proposes that a shared tree of DPM servers and clients, in which the nodes of the tree can perform distributed partial mixing, is an effective basis for wide-area deployment. Two models for realising this in contrasting situations are then explored in more detail: a static, centralised, subscription-based DPM service suitable for fully managed networks, and a fully distributed self-organising DPM service suitable for unmanaged networks (such as the current Internet).
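A minimal sketch of the partial mixing decision described above, assuming equal-length sample blocks and a policy of folding all excess streams into a single mix; real DPM components may produce several mixes and adapt the budget to congestion feedback, so the names and policy here are illustrative assumptions.

```python
def partial_mix(streams, capacity):
    """Forward as many streams unmixed as the outgoing link allows,
    and fold the remainder into one additively mixed stream.
    `streams` is a list of per-stream sample blocks of equal length;
    `capacity` is how many streams the link can carry."""
    if len(streams) <= capacity:
        return streams, None                 # no mixing needed
    unmixed = streams[:capacity - 1]         # reserve one slot for the mix
    to_mix = streams[capacity - 1:]
    mixed = [sum(samples) for samples in zip(*to_mix)]  # naive additive mix
    return unmixed, mixed

# Example: five active speakers, link budget of three streams.
streams = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
unmixed, mixed = partial_mix(streams, 3)
print(unmixed, mixed)  # two forwarded unmixed, three folded into one mix
```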
173

Visualizing set relations and cardinalities using Venn and Euler diagrams

Micallef, Luana January 2013 (has links)
In medicine, genetics, criminology and various other areas, Venn and Euler diagrams are used to visualize data set relations and their cardinalities. The data sets are represented by closed curves, and the data set relationships are depicted by the overlaps between these curves. Both the sets and their intersections are easily visible, as the closed curves are preattentively processed and form common regions that have a strong perceptual grouping effect. Besides set relations such as intersection, containment and disjointness, the cardinality of the sets and their intersections can also be depicted in the same diagram (referred to as area-proportional) through the size of the curves and their overlaps. Size is a preattentive feature, so similarities, differences and trends are easily identified. Thus, such diagrams facilitate data analysis and reasoning about the sets. However, drawing these diagrams manually is difficult, often impossible, and current automatic drawing methods do not always produce appropriate diagrams. This dissertation presents novel automatic drawing methods for different types of Euler diagrams, together with a user study of how such diagrams can help probabilistic judgement. The main drawing algorithms are: eulerForce, which uses a force-directed approach to lay out Euler diagrams, and eulerAPE, which draws area-proportional Venn diagrams with ellipses. The user study evaluated the effectiveness of area-proportional Euler diagrams, glyph representations, Euler diagrams with glyphs, and text+visualization formats for Bayesian reasoning, and a method, eulerGlyphs, was devised to automatically and accurately draw the assessed visualizations for any Bayesian problem. Additionally, analytic algorithms that instantaneously compute the overlapping areas of three general intersecting ellipses are provided, together with an evaluation of the effectiveness of ellipses in drawing accurate area-proportional Venn diagrams for 3-set data and of the characteristics of the data that can be depicted accurately with ellipses.
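The dissertation's eulerAPE works with three general ellipses; as a simpler worked illustration of the area-proportional idea, this sketch finds the centre distance between two circles whose overlap (lens) area must match a target cardinality, using the standard circle-circle intersection formula and bisection. All names are illustrative assumptions.

```python
import math

def lens_area(d, r1, r2):
    """Intersection area of two circles with radii r1, r2 and centre
    distance d (standard circle-circle lens formula)."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2    # smaller circle contained
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def distance_for_overlap(target, r1, r2, tol=1e-9):
    """Bisect for the centre distance giving the target overlap area;
    lens_area is monotone decreasing in d on this interval."""
    lo, hi = abs(r1 - r2), r1 + r2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if lens_area(mid, r1, r2) > target:
            lo = mid                         # too much overlap: move apart
        else:
            hi = mid
    return (lo + hi) / 2

# Sets A and B with |A| = 10, |B| = 6, |A ∩ B| = 3 (areas in arbitrary units).
r1, r2 = math.sqrt(10 / math.pi), math.sqrt(6 / math.pi)
d = distance_for_overlap(3.0, r1, r2)
print(round(d, 4), round(lens_area(d, r1, r2), 4))
```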
174

FPGA-based high throughput regular expression pattern matching for network intrusion detection systems

Modi, Bala January 2015 (has links)
Network speeds and bandwidths have improved over time, but the frequency of network attacks and illegal accesses has grown with them. Such attacks can compromise the privacy and confidentiality of resources belonging to even the most secure networks. General-purpose, processor-based software solutions for detecting network attacks have become inadequate at current network speeds. Hardware-based platforms are designed to cope with rising network speeds measured in several gigabits per second (Gbps) and can detect several attacks at once; a good candidate is the Field-Programmable Gate Array (FPGA), a hardware platform that can perform deep packet inspection of network packet contents at high speed. This thesis therefore focuses on designs implemented with FPGAs. All the FPGA-based designs studied in this thesis attempt to sustain steady growth in throughput and throughput efficiency, where throughput efficiency is defined as the concurrent throughput of a regular expression matching engine circuit divided by the average number of look-up tables (LUTs) utilised by each state of the engine's automata. The implemented FPGA-based design is built on the concept of equivalence classification, which reduces the overall size of the input tables needed to drive the various Nondeterministic Finite Automaton (NFA) matching engines. Compared with other approaches, the design sustained a throughput of up to 11.48 Gbps and reduced the number of pattern matching engines required by up to 75%; the overall memory required by the design was reduced by about 90% when synthesised on the target FPGA platform.
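A minimal sketch of the input equivalence classification idea, written in Python for clarity rather than in the thesis's hardware-description setting: bytes that every pattern's character classes treat identically are merged into one class, so NFA transition tables can be indexed by a small class id instead of all 256 byte values. Names are illustrative assumptions.

```python
def byte_equivalence_classes(char_sets):
    """Partition the byte alphabet by how each pattern's character
    classes see each byte. `char_sets` is a list of sets of byte values
    drawn from the patterns; returns a 256-entry class map and the
    number of classes."""
    signatures = {}
    class_of = [0] * 256
    for b in range(256):
        sig = tuple(b in s for s in char_sets)   # how each set sees byte b
        if sig not in signatures:
            signatures[sig] = len(signatures)
        class_of[b] = signatures[sig]
    return class_of, len(signatures)

# Example over [0-9] and [a-f]: three classes emerge (bytes in neither
# set, the digits, and the letters 'a'-'f').
digits = set(range(ord('0'), ord('9') + 1))
hexl = set(range(ord('a'), ord('f') + 1))
class_of, n = byte_equivalence_classes([digits, hexl])
print(n, class_of[ord('5')], class_of[ord('b')], class_of[ord('z')])
```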
175

Awareness & perception in rapid serial visual presentation

Gootjes-Dreesbach, Ellis Luise January 2015 (has links)
This thesis explores the subjective experience of targets in rapid serial visual presentation (RSVP), an experimental paradigm in which visual stimuli are displayed in rapid succession. In RSVP, items appear on the screen so briefly that not every item in the stream can be encoded reliably; it therefore allows observation of conscious experience at the fringe of perception. The Attentional Blink (AB) - an effect in which an RSVP target is likely to be missed if it follows a fully processed first target - has been used to manipulate the accuracy of item identification by varying the target separation and presentation speed. The main focus of studies using RSVP to make inferences about conscious perception has been the question of whether conscious perception is all-or-none or gradual. We initially present some thoughts on the suitability of the AB paradigm for answering this question: not much is known about how different variables in the paradigm affect subjective experience, and it is possible that AB mechanisms affect experience quite differently from other paradigms, limiting the generalisability of findings derived from work within the AB paradigm. On this basis, we follow two lines of evidence. First, we explore the possibility of finding gradations in the subjective visibility of targets, measured on rating scales and in the electroencephalogram response, using a simple single-target RSVP. Second, we investigate the effect of target separation and perceived order on this subjective experience in the AB paradigm. Our results indicate that items in single-target RSVP can be perceived in a graded manner, with possible indications of a non-linear jump in brain activity between not-seen and seen items. Regarding subjective experience when the separation of two targets is varied, we find a disconnect between accuracy and visibility of the second target when it is in close proximity to the first, with relatively low subjective experience given the high report accuracy. Target separation also affects the number of order confusions, which we find to reduce target visibility under specific conditions. These results add to our understanding of how targets are perceived in RSVP and have implications for research into conscious perception.
176

Trust in virtual reality

Salanitri, Davide January 2018 (has links)
The current era has seen unrestrained technological progress. New technologies are replacing common work practices and processes in several fields, such as industry, healthcare and commerce. The main reasons for using these technologies are the reduction of time to develop products, increased quality of products and processes, and improvements in security and communication. This thesis focuses on Virtual Reality (VR). VR is currently replacing older systems and modifying practices and processes in fields such as automotive, healthcare, training and psychological therapies. However, when applying technologies it is fundamental to study the interaction between the technology and the end users. This thesis considers one aspect of human-computer interaction: trust. Trust has been seen as fundamental in technologies such as e-commerce, e-marketing, autonomous systems and social networks, because trust has been found to be associated with the intention to use a technology, and a lack of trust could deter users from adopting it. This concept is particularly important for VR, which is only now gaining widespread adoption. However, studies of users' trust in VR systems are limited in the literature, and there is uncertainty regarding the factors which could influence end-user trust. This research aimed to develop a model for investigating trust in VR. The goal was to identify the factors which have a theoretical influence on trust in VR through an analysis of the literature on trust in VR and trust in technology in general. This permitted the creation of a framework with usability, technology acceptance and presence as possible predictors of trust in VR. To validate this framework, six user experiments were conducted, investigating the relationships among the factors identified in the literature and their influence on trust. The first study was designed to explore possible methodological issues. The next three studies, conducted in collaboration with researchers at the University of Nottingham, further analysed the relationships between usability and trust, and between technology acceptance and presence and trust. The fifth experiment specifically explored the influence of presence on trust. The last study looked at all the factors and validated the framework, demonstrating that technology acceptance and presence are predictors of trust in VR, and that usability has an indirect effect on trust, as it is a strong predictor of technology acceptance. This research generated a model which includes well-studied factors in human-computer interaction and human factors and could be applied to study trust in VR across different systems. The model adds to the body of knowledge on VR from both an academic and an industrial point of view. In addition, guidelines based on the model were generated to inform the evaluation of existing VR systems and the design of new ones.
177

The worker-wrapper transformation : getting it right and making it better

Hackett, Jennifer L. P. January 2017 (has links)
A program optimisation must have two key properties: it must preserve the meaning of programs (correctness) while also making them more efficient (improvement). An optimisation's correctness can often be rigorously proven using formal mathematical methods, but improvement is generally considered harder to prove formally and is thus typically demonstrated with empirical techniques such as benchmarking. The result is a conspicuous "reasoning gap" between correctness and efficiency. In this thesis, we focus on a general-purpose optimisation: the worker/wrapper transformation. We develop a range of theories for establishing correctness and improvement properties of this transformation that all share a common structure. Our development culminates in a single theory that can be used to reason about both correctness and efficiency in a unified manner, thereby bridging the reasoning gap.
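The thesis works in a formal setting (the worker/wrapper transformation originates in Haskell program optimisation); purely as an illustrative analogue of the transformation's shape, this Python sketch splits a quadratic list reversal into a thin wrapper around a linear-time worker, with the stated invariant playing the role of the correctness condition the theory asks us to discharge.

```python
def reverse_slow(xs):
    """Direct definition: correct but quadratic (repeated concatenation)."""
    if not xs:
        return []
    return reverse_slow(xs[1:]) + [xs[0]]

def reverse(xs):
    """Wrapper: adapts the original interface to the worker's richer one."""
    return reverse_worker(xs, [])

def reverse_worker(xs, acc):
    """Worker: linear-time computation on a more efficient representation
    (list plus accumulator). Invariant:
        reverse_worker(xs, acc) == reverse_slow(xs) + acc
    which is what makes the factorisation meaning-preserving."""
    if not xs:
        return acc
    return reverse_worker(xs[1:], [xs[0]] + acc)

assert reverse([1, 2, 3]) == reverse_slow([1, 2, 3]) == [3, 2, 1]
```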
178

Putting trajectories to work : translating a HCI framework into design practice

Velt, Raphael January 2018 (has links)
One major challenge for the academic Human-Computer Interaction (HCI) research community is the adoption of its findings and theoretical output by the interaction design practitioners whose work they are meant to support. To address this “research-practice gap”, this thesis takes the example of trajectories, an HCI conceptual framework derived from studies of mixed-reality performances spanning complex spaces, timeframes, participant roles and interface ecologies. Trajectories' authors have called for their work to be used to inform the design of a broader variety of experiences. This thesis explores what is required to fulfil this ambition, with a specific focus on using the framework to improve the experience of live events, and on professional design practitioners as the users of the framework. This exploration follows multiple approaches, led both by researchers and by practitioners. The thesis starts by reviewing past uses of the trajectories framework - including for design purposes - and by discussing work that has previously tried to bridge the research-practice gap. In a first series of studies, the thesis identifies live events - such as music festivals and running races - as a rich setting where trajectories may be used both to study existing experiences and to design new ones. This leads to a series of design guidelines grounded both in knowledge about the setting and in trajectories. The thesis then discusses multiple approaches through which HCI researchers and practitioners at a large media company have joined forces to try to use trajectories in industrial design and production processes. Finally, the last strand of work returns to live events, with a two-year-long Research through Design study in which trajectories were used to improve the experience of a local music festival and to develop a mobile app to support it. This last study provides first-hand insight into the integration of theoretical concerns into design. This thesis provides three major classes of contributions. First, extensions to the original trajectories framework, which include refined definitions for the set of concepts that the framework comprises, as well as considerations for open-ended experiences where control is shared between stakeholders and participants. Second, a model describing the use of trajectories throughout design and production processes, offering a blueprint for practitioners willing to use the framework. Finally, a discussion of the different ways trajectories have been translated into practice, leading to a model for locating translations of HCI knowledge with regard to the gap between academic research and design practice, and the gap between theoretical knowledge and design artefacts.
179

Interval type-2 Atanassov-intuitionistic fuzzy logic for uncertainty modelling

Eyoh, Imo January 2018 (has links)
This thesis investigates a new paradigm for uncertainty modelling, employing a new class of type-2 fuzzy logic system that utilises fuzzy sets with membership and non-membership functions that are intervals. Fuzzy logic systems employing type-1 fuzzy sets, which mark a shift from computing with numbers towards computing with words, have made remarkable impacts in the field of artificial intelligence. Fuzzy logic systems of type-2, a generalisation of type-1 fuzzy logic systems that utilise type-2 fuzzy sets, have created tremendous advances in uncertainty modelling. The key feature of type-2 fuzzy logic systems, with particular reference to interval type-2 fuzzy logic systems, is that the membership functions of interval type-2 fuzzy sets are themselves fuzzy. This gives interval type-2 fuzzy logic systems an advantage over their type-1 counterparts, which have precise membership functions. Whilst interval type-2 fuzzy logic systems are effective in modelling uncertainty, they cannot adequately handle an indeterminate/neutral characteristic of a set, because interval type-2 fuzzy sets are specified only by membership functions, with an implicit assertion that the non-membership functions are complements of the (lower or upper) membership functions. In a real-life scenario, it is not necessarily the case that the non-membership function of a set is complementary to the membership function: there may be some degree of hesitation, arising from ignorance or a complete lack of interest concerning a particular phenomenon. The Atanassov intuitionistic fuzzy set, another generalisation of the classical fuzzy set, captures this thought process by simultaneously defining a fuzzy set with membership and non-membership functions such that their sum is less than or equal to 1. In this thesis, the advantages of both worlds (interval type-2 fuzzy sets and Atanassov intuitionistic fuzzy sets) are explored, and a new and enhanced class of interval type-2 fuzzy set, namely the interval type-2 Atanassov intuitionistic fuzzy set, which enables hesitation, is introduced. The corresponding fuzzy logic system, namely the interval type-2 Atanassov intuitionistic fuzzy logic system, is rigorously and systematically formulated.
To assess the viability and efficacy of the developed framework, the optimisation of the parameters of this class of fuzzy system is rigorously examined. First, the parameters of the developed model are optimised using one of the most popular fuzzy logic optimisation algorithms, the gradient descent (first-order derivative) algorithm, and evaluated on publicly available benchmark datasets from diverse domains with diverse characteristics. It is shown that the new interval type-2 Atanassov intuitionistic fuzzy logic system handles uncertainty well, minimising the error of the system compared with other approaches on the same problem instances and performance criteria. Secondly, the parameters of the proposed framework are optimised using a decoupled extended Kalman filter (second-order derivative) algorithm, in order to address the shortcomings of the first-order gradient descent method. It is shown statistically that the performance of this new framework, with fuzzy membership and non-membership functions, is significantly better than that of classical interval type-2 fuzzy logic systems, which have only fuzzy membership functions, and of its type-1 counterpart, which is specified by single membership and non-membership functions. The model is also assessed using a hybrid learning scheme combining the decoupled extended Kalman filter and gradient descent methods. The proposed framework with the hybrid learning algorithm is evaluated by comparing it with existing approaches reported in the literature on the same problem instances and performance metrics. The simulation results demonstrate the potential benefits of using the proposed framework in uncertainty modelling. Overall, the fusion of these two concepts (interval type-2 fuzzy logic systems and Atanassov intuitionistic fuzzy logic systems) provides a synergistic capability for dealing with imprecise and vague information.
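A loose illustrative sketch, not the thesis's formulation: one way to encode the four degrees that an interval type-2 Atanassov intuitionistic fuzzy set attaches to an input, with a Gaussian primary membership of uncertain width and a non-membership interval scaled to leave explicit room for hesitation. The class name, scaling constants and the 50/50 split for the lower non-membership bound are all arbitrary assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class IT2IFSPoint:
    """Degrees attached to one input x: a membership interval
    [mu_lo, mu_hi] and a non-membership interval [nu_lo, nu_hi],
    respecting the Atanassov constraint mu + nu <= 1."""
    mu_lo: float
    mu_hi: float
    nu_lo: float
    nu_hi: float

    def hesitation(self):
        # The degree committed to neither membership nor
        # non-membership, as an interval.
        return (max(0.0, 1.0 - self.mu_hi - self.nu_hi),
                1.0 - self.mu_lo - self.nu_lo)

def gaussian(x, centre, sigma):
    return math.exp(-0.5 * ((x - centre) / sigma) ** 2)

def evaluate(x, centre, sigma_lo, sigma_hi, nu_scale=0.8):
    """Primary membership is Gaussian with an uncertain width (sigma
    between sigma_lo and sigma_hi), giving the membership interval;
    non-membership is scaled below the complement of mu_hi so that
    mu_hi + nu_hi <= 1, leaving explicit room for hesitation."""
    mu_lo = gaussian(x, centre, sigma_lo)
    mu_hi = gaussian(x, centre, sigma_hi)
    nu_hi = nu_scale * (1.0 - mu_hi)
    nu_lo = 0.5 * nu_hi   # arbitrary illustrative lower bound
    return IT2IFSPoint(mu_lo, mu_hi, nu_lo, nu_hi)

p = evaluate(1.5, centre=0.0, sigma_lo=0.8, sigma_hi=1.2)
print(p, p.hesitation())
```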
180

Generating vague geographic information through data mining of passive web data

Brindley, Paul January 2016 (has links)
Vagueness is an inherent property of geographic data. This thesis develops a geocomputational method demonstrating that vague information can be incorporated within GIS in a straightforward manner. The method applies vagueness to the elements of place - types, names and spatial boundaries - generating vague geographic objects by extracting and filtering the differing opinions and perceptions held within web-derived data. The aim of the research is threefold: (1) to investigate an approach for automatically generating vague, probabilistic geographical information concerning place by mining differing perspectives from passive web data; (2) to assure the quality of the vague information produced and to test the hypothesis that its results are indistinguishable from directly surveying public opinion; and (3) to demonstrate the value of integrating vague information into geospatial applications via examples of its use. To achieve the first aim, the thesis develops methods to extract differing perspectives of place from web data, constructing (i) vague place-type settlement classifications and (ii) vague place names and boundaries for ‘neighbourhood’-level units. The methods developed are automated, suitable for generating output at a national scale, and use a wide range of source data to collect the differing opinions. The second aim assesses the quality of the data produced, determining whether output extracted from the web is representative of that obtained by asking people directly. Statistical analysis of regression models demonstrates that the data were representative of those collected by asking people directly, both for vague settlement classifications and for vague urban locale boundaries. Importantly, the validation data, drawn from public opinion, also supported the notion that vagueness is omnipresent within geographic information concerning place. The third aim was addressed through case studies demonstrating the added value of such data and the subsequent integration of vague geographic objects with other socio-economic data. Critically, the incorporation of vagueness within place models not only adds value to geographic data but also improves the accuracy of real-world representations within GIS.
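As a hedged illustration of how a vague ‘neighbourhood’ boundary might be derived from passive web data, this sketch smooths geotagged mentions of a place name into a normalised membership surface, so each grid cell gets a probability-like degree of belonging rather than a crisp in/out label. The place name, kernel choice and threshold are assumptions for illustration, not the thesis's method.

```python
import math

def membership_surface(mentions, grid, bandwidth=1.0):
    """Smooth geotagged mentions with a Gaussian kernel into a [0, 1]
    membership surface over the grid. `mentions` and `grid` are lists
    of (x, y) coordinates."""
    raw = []
    for gx, gy in grid:
        score = sum(math.exp(-((gx - mx) ** 2 + (gy - my) ** 2)
                             / (2 * bandwidth ** 2))
                    for mx, my in mentions)
        raw.append(score)
    peak = max(raw) or 1.0
    return [s / peak for s in raw]           # normalise to [0, 1]

# Mentions of a hypothetical neighbourhood clustering near (2, 2).
mentions = [(2, 2), (2.2, 1.9), (1.8, 2.1), (5, 5)]
grid = [(x, y) for x in range(6) for y in range(6)]
surface = membership_surface(mentions, grid)
core = [cell for cell, m in zip(grid, surface) if m > 0.5]  # vague core
print(core)
```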
