21 |
Multi-dimensional clustering in user profiling. Cufoglu, Ayse, January 2012.
User profiling has attracted a large number of technological methods and applications. With the increasing range of products and services, user profiling creates opportunities to capture the attention of the user as well as to achieve high user satisfaction. Providing users with what they want, when and how they want it, depends largely on understanding them. The user profile is the representation of the user and holds information about the user; these profiles are the outcome of user profiling. Personalization is the adaptation of services to meet the user’s needs and expectations, so knowledge about the user leads to a personalized user experience. In user profiling applications the major challenge is to build and manage user profiles. In the literature there are two main user profiling methods: collaborative and content-based. Apart from these traditional profiling methods, a number of classification and clustering algorithms have been used to classify user-related information to create user profiles. However, the profiling achieved in these works is lacking in accuracy, because all information within the profile has the same influence during profiling even though some of it is irrelevant. A primary aim of this thesis is to provide insight into the concept of user profiling. For this purpose a comprehensive background study of the literature was conducted and is summarized in this thesis. Furthermore, existing user profiling methods as well as classification and clustering algorithms were investigated, and, as one of the objectives of this study, the use of these algorithms for user profiling was examined. A number of classification and clustering algorithms, such as Bayesian Networks (BN) and Decision Trees (DTs), were simulated using user profiles and their classification accuracy was evaluated. Additionally, a novel clustering algorithm for user profiling, namely Multi-Dimensional Clustering (MDC), is proposed. MDC is a modified version of the Instance Based Learner (IBL) algorithm. In IBL every feature has an equal effect on the classification regardless of its relevance. MDC differs from IBL by assigning weights to feature values to distinguish the effect of each feature on clustering. Existing feature weighting methods, for instance Cross Category Feature (CCF), have also been investigated. In this thesis, three feature-value weighting methods are proposed for MDC: the MDC weight method by Cross Clustering (MDC-CC), the MDC weight method by Balanced Clustering (MDC-BC) and the MDC weight method by changing the Lower-limit to Zero (MDC-LZ). All of these weighted MDC algorithms have been tested and evaluated. Additional simulations were carried out with existing weighted and non-weighted IBL algorithms (i.e. K-Star and Locally Weighted Learning (LWL)) in order to benchmark the performance of the proposed methods. Furthermore, a real-life scenario was implemented to show how MDC can be used for user profiling to improve personalized service provisioning in mobile environments. The experiments presented in this thesis were conducted using user profile datasets that reflect the user’s personal information, preferences and interests. The simulations with existing classification and clustering algorithms (e.g. Bayesian Networks (BN), Naïve Bayesian (NB), Lazy learning of Bayesian Rules (LBR), Iterative Dichotomiser 3 (ID3)) were performed on the WEKA (version 3.5.7) machine learning platform. WEKA serves as a workbench providing a collection of popular learning schemes implemented in Java. In addition, MDC-CC, MDC-BC and MDC-LZ were implemented as Java applications on NetBeans IDE 6.1 Beta and in MATLAB. Finally, the real-life scenario was implemented as a Java Mobile Application (Java ME) on NetBeans IDE 7.1. All simulation results were evaluated in terms of error rate and accuracy.
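To make the weighting idea concrete, the sketch below shows how per-feature weights can change an IBL-style (nearest-neighbour) distance over nominal profile features. It is an illustration only: the feature names, weight values and cluster centres are invented, and the thesis derives its weights via the MDC-CC, MDC-BC and MDC-LZ methods rather than fixing them by hand as done here.

```python
# Minimal sketch of feature-value weighting in an IBL-style distance.
# Hypothetical feature names and weights; not the MDC implementation itself.

def weighted_distance(a, b, weights):
    """Weighted mismatch count over nominal features: a mismatch on a
    heavily weighted feature costs more than one on a lightly weighted one."""
    return sum(w * (a[f] != b[f]) for f, w in weights.items())

def assign_to_cluster(profile, centres, weights):
    """Assign a user profile to the nearest cluster centre under the
    weighted distance."""
    return min(range(len(centres)),
               key=lambda i: weighted_distance(profile, centres[i], weights))

# Unweighted IBL behaviour corresponds to all weights being equal.
weights = {"interest": 0.8, "age_band": 0.2}      # 'interest' dominates
centres = [{"interest": "sport", "age_band": "26-35"},
           {"interest": "music", "age_band": "18-25"}]
user = {"interest": "sport", "age_band": "18-25"}
print(assign_to_cluster(user, centres, weights))  # -> 0 (matches on 'interest')
```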
|
22 |
Online and verification problems under uncertainty. Charalambous, George, January 2016.
In the under-uncertainty setting we study problems with imprecise input data for which precise data can be obtained. There exists an underlying problem with a feasible solution, but the solution is computable only if the input is precise enough. We are interested in measuring how much of the imprecise input data has to be updated in order for the input to be precise enough, and we look at this question for both the online and the offline (verification) cases. In the verification-under-uncertainty setting an algorithm is given imprecise input data together with an assumed set of precise input data. The aim of the algorithm is to update the smallest number of input data such that, if the updated input data match the corresponding assumed input data (i.e. are verified), a solution for the underlying problem can be computed. In the online adaptive under-uncertainty setting the task is similar, except that the assumed set of precise data is not given to the algorithm, and the performance of the algorithm is measured by comparing the number of input data that have been updated against the result obtained in the verification setting of the same problem. We have studied these settings for several geometric and graph problems. Geometric problems include several variations of the maximal points problem where, in its classical form, given a set of points in the plane we want to compute the set of all points that are maximal; the uncertain element here is the actual location of each point. Graph problems include a few variations of the graph diameter problem where, in its classical form, given a graph we want to compute a farthest pair of vertices; the uncertain element is the weight of each edge.
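For readers unfamiliar with the classical (precise-input) maximal points problem referred to above, the following sketch computes the maximal set for known coordinates. It illustrates only the underlying problem; the uncertainty regions and update strategies studied in the thesis are not modelled here.

```python
# Classical maximal points over precise coordinates: a point is maximal if no
# other point is at least as large in both coordinates. This sketch ignores
# the uncertainty aspect (imprecise locations that must be updated), which is
# the actual subject of the thesis.

def maximal_points(points):
    maximal = []
    best_y = float("-inf")
    # Sweep from largest x to smallest; a point is maximal iff its y exceeds
    # the y of every point with larger (or equal) x seen so far.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            maximal.append((x, y))
            best_y = y
    return maximal

print(maximal_points([(1, 4), (2, 3), (3, 1), (2, 5)]))  # -> [(3, 1), (2, 5)]
```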
|
23 |
Low-overhead fault-tolerant logic for field-programmable gate arrays. Davis, James, January 2015.
While allowing for the fabrication of increasingly complex and efficient circuitry, transistor shrinkage and count-per-device expansion have major downsides: chiefly increased variation, degradation and fault susceptibility. For this reason, design-time consideration of faults will have to be given to increasing numbers of electronic systems in the future to ensure yields, reliabilities and lifetimes remain acceptably high. Many mathematical operators commonly accelerated in hardware are suited to modification resulting in datapath error detection and correction capabilities with far lower area, performance and/or power consumption overheads than those incurred through the utilisation of more established, general-purpose fault tolerance methods such as modular redundancy. Field-programmable gate arrays are uniquely placed to allow further area savings to be made thanks to their dynamic reconfigurability. The majority of the technical work presented within this thesis is based upon a benchmark hardware accelerator - a matrix multiplier - that underwent several evolutions in order to detect and correct faults manifesting along its datapath at runtime. In the first instance, fault detectability in excess of 99% was achieved in return for 7.87% additional area and 45.5% extra latency. In the second, the ability to correct errors caused by those faults was added at the cost of 4.20% more area, while 50.7% of this - and 46.2% of the previously incurred latency overhead - was removed through the introduction of partial reconfiguration in the third. The fourth demonstrates further reductions in both area and performance overheads - of 16.7% and 8.27%, respectively - through systematic data width reduction by allowing errors of less than ±0.5% of the maximum output value to propagate.
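The thesis concerns FPGA datapaths, but the general flavour of low-overhead error detection for a matrix multiplier can be illustrated in software with a checksum (algorithm-based fault tolerance) scheme. The sketch below is a generic illustration of that idea, not the circuitry or the particular detection mechanism developed in the thesis.

```python
# Generic checksum-style (ABFT) error detection for matrix multiplication,
# shown in software purely to illustrate low-overhead datapath checking;
# this is not the FPGA design described in the thesis.
import numpy as np

def checked_matmul(a, b, tol=1e-9):
    # Augment A with a column-checksum row and B with a row-checksum column.
    a_aug = np.vstack([a, a.sum(axis=0)])
    b_aug = np.hstack([b, b.sum(axis=1, keepdims=True)])
    full = a_aug @ b_aug
    c = full[:-1, :-1]                      # the ordinary product A @ B
    # If no error occurred, the extra row/column hold the column/row sums of C.
    ok = (np.allclose(full[-1, :-1], c.sum(axis=0), atol=tol) and
          np.allclose(full[:-1, -1], c.sum(axis=1), atol=tol))
    return c, ok

a, b = np.random.rand(4, 4), np.random.rand(4, 4)
c, ok = checked_matmul(a, b)
print(ok)  # True unless a fault corrupted the computation
```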
|
24 |
Supporting user appropriation of public displays. Clinch, Sarah, January 2013.
Despite their prevalence, public engagement with pervasive public displays is typically very low. One method for increasing the relevance of displayed content (and therefore hopefully improving engagement) is to allow the viewer themselves to affect the content shown on displays they encounter – for example, personalising an existing news feed or invoking a specific application on a display of their choosing. We describe this process as viewer appropriation of public displays. This thesis aims to provide the foundations for appropriation support in future ‘open’ pervasive display networks. Our architecture combines three components: Yarely, a scheduler and media player; Tacita, a system for allowing users to make privacy-preserving appropriation requests; and Mercury, an application store for distributing content. Interface points between components support integration with third-party systems; a prime example is the provision of Content Descriptor Sets (CDSs) to describe the media items and constraints that determine what is played at each display. Our evaluation of the architecture is both quantitative and qualitative and includes a mixture of user studies, surveys, focus groups, performance measurements and reflections. Overall we show that it is feasible to construct a robust open pervasive display network that supports viewer appropriation. In particular, we show that Yarely’s thick-client approach enables the development of a signage system that provides continuous operation even in periods of network disconnection yet is able to respond to viewer appropriation requests. Furthermore, we show that CDSs can be used as an effective means of information exchange in an open architecture. Performance measures indicate that demanding personalisation scenarios can be satisfied, and our qualitative work indicates that both display owners and viewers are positive about the introduction of appropriation into future pervasive display systems.
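The abstract does not spell out the CDS format, so the sketch below is purely hypothetical: it illustrates the kind of information a content descriptor set might carry (references to media items plus the constraints that govern when they play). All field names are invented for the example and do not reflect the actual Yarely/Tacita/Mercury schema.

```python
# Hypothetical sketch only: invented field names, not the real CDS schema.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    uri: str                    # where the media item can be fetched
    media_type: str             # e.g. "image/png" or "video/mp4"
    duration_seconds: int = 10  # how long to display the item

@dataclass
class ContentDescriptorSet:
    items: list = field(default_factory=list)       # media items to schedule
    constraints: dict = field(default_factory=dict)  # e.g. time-of-day rules
    priority: int = 0           # relative weight when items compete for screen time

cds = ContentDescriptorSet(
    items=[ContentItem("http://example.org/news.png", "image/png")],
    constraints={"weekdays": "09:00-17:00"},
)
print(len(cds.items))  # -> 1
```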
|
25 |
An integrated architecture analysis framework for component-based software development. Admodisastro, Novia, January 2011.
The importance of architecture in reuse-driven development is widely recognized. Architecture provides a framework for establishing a match between available components and the system context. It is a key part of the system documentation, enforces the integrity of component composition and provides a basis for managing change. However, one of the most difficult problems in component-based system development (CBD) is ensuring that the software architecture provides an acceptable match with its intended application, business and evolutionary context. Unlike custom development, where architectural design relies solely on a detailed requirements specification and where deficiencies in application context can be corrected by ‘tweaking’ the source code, in component-based system development the typical unit of development is often a black-box component whose source code is inaccessible to the developer. Getting the architecture right is therefore key to ensuring quality in a component-based system. Architecture analysis in CBD provides the developer with a means to expose interface mismatches, assess configurations with respect to specific structural and behavioural constraints and verify the adequacy of compositions with respect to quality constraints. However, support for key component-based system design issues is still patchy in most architecture analysis approaches. My solution has been to develop the Component-based Software Architecture analysis FramEwork (CSAFE), a scenario-driven architecture analysis approach that combines and extends the strengths of current approaches using pluggable analysis. CSAFE is process-pluggable and recognises that negotiation (trade-off analysis) is central to black-box software development. However, while CSAFE is primarily intended to support black-box development, we recognise that there may be aspects of the system for which a black-box solution is not feasible. CSAFE supports such situations by treating abstract components as placeholders for custom development. CSAFE is supported by an extensible toolset.
|
26 |
A socio-cognitive theory of information systems and initial applications. Hemingway, Christopher John, January 1999.
Much has been written in the academic literature about designing information systems (IS) to satisfy organizational, rather than purely technical, objectives. The design of systems to address the requirements of end-users has also received considerable attention. Little has been said, however, about the relationship between these two facets of "best practice" and how they might be reconciled. This is of concern because the relationship is fundamental to the success of organizational systems, the value of which is ultimately realized through the activities of individuals and workgroups. The practical benefit of achieving an integrated approach is clear. Systems can be developed in light of the relationship between worker and organization, rather than as a result of a compromise between two 'competing' viewpoints. An integrated theory would also reduce the conceptual distance between current conceptions of individuals, which tend to downplay their status as social beings, and of social organizations, which often overestimate the influence of social organization on individuals' actions. A process of conceptual analysis and theory development that addresses this disjunction is presented in this thesis. As the main contribution of this research, the socio-cognitive theory of information systems is a first attempt at providing an integrated treatment of IS phenomena. The theory is developed using a dialectic research method by drawing upon existing work in human-computer interaction, information systems, psychology and sociology. Following a consideration of dialectic as a research method, it is applied to existing conceptions of the individual and of social organization in these disciplines. The theory is then constructed to provide an explanation of information systems phenomena in socio-cognitive, rather than social and cognitive, terms. Having presented the theory, its potential contribution to realizing the practical benefits of integrated approaches to IS development is illustrated through the development of a systems development lifecycle and an evaluation methodology. Recognizing that IS development is primarily concerned with the relationship between individuals and social organizations, the lifecycle model focuses attention on addressing skills issues during the development process. Extending the focus on skills and intersubjective communication, the evaluation methodology outlines a method, consistent with the socio-cognitive theory, for analysing working practices and assessing the impacts upon them of IS-related change.
|
27 |
Practical fault-tolerant quantum computing. Nickerson, Naomi, January 2015.
Quantum computing has the potential to transform information technology by offering algorithms for certain tasks, such as quantum simulation, that are vastly more efficient than what is possible with any classical device. But experimentally implementing practical quantum information processing is a very difficult task. Here we study two important, and closely related, aspects of this challenge: architectures for quantum computing, and quantum error correction. Exquisite quantum control has now been achieved in small ion traps, in nitrogen-vacancy centres and in superconducting qubit clusters, but the challenge remains of how to scale these systems to build practical quantum devices. In Part I of this thesis we analyse one approach to building a scalable quantum computer by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. Here we describe a method by which even these error-prone cells can perform quantum error correction. Groups of cells generate and purify shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (10% error rate) we find that our protocol can succeed provided that all intra-cell error rates are below 0.8%. Furthermore, we show that, with some adjustments, the protocols we employ can also be made robust against high levels of loss in the network interconnects. We go on to analyse the potential running speed of such a device. Using levels of fidelity that either are already achievable in experimental systems or will be in the near future, we find that employing a surface code approach in a highly noisy and lossy network architecture can result in kilohertz computer clock speeds. In Part II we consider the question of quantum error correction beyond the surface code. We consider several families of topological codes, and determine the minimum requirements to demonstrate proof-of-principle error suppression in each type of code. A particularly promising code is the gauge color code, which admits a universal transversal gate set. Furthermore, a recent result of Bombin shows that the gauge color code supports an error-correction protocol that achieves tolerance to noisy measurements without the need for repeated measurements, so-called single-shot error correction. Here, we demonstrate the promise of single-shot error correction by designing a decoder and investigating its performance. We simulate fault-tolerant error correction with the gauge color code, and estimate a sustainable error rate, i.e. the threshold for the long-time limit, of ~0.31% for a phenomenological noise model using a simple decoding algorithm.
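For context on the quoted error rates, the threshold (and, in the sustained setting, the sustainable error rate) has the standard meaning sketched below; this is the textbook notion rather than a result derived in the thesis.

```latex
% Standard notion of an error-correction threshold (textbook definition, not a
% derivation from this thesis): p is the physical error rate, d the code
% distance, and p_L(p, d) the resulting logical error rate.
\[
  p_{\mathrm{th}} \;=\; \sup\bigl\{\, p : \lim_{d \to \infty} p_{\mathrm{L}}(p, d) = 0 \,\bigr\}.
\]
% A sustainable error rate of ~0.31% thus means that, for physical error rates
% below 0.31%, the logical error rate can be suppressed arbitrarily by
% increasing the code distance, in the long-time limit of repeated correction.
```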
|
28 |
A study of an abstract associative processor in a set-manipulation environment. Pollard, Richard, January 1977.
No description available.
|
29 |
Application of Boolean difference concept to fault diagnosis. Nanda, Navnit Kumar, January 1977.
No description available.
|
30 |
A case study of balance and integration in worth-focused research through design. George, Jennifer, January 2016.
Understandings of, and objectives for, Interaction Design have been extended over the last few decades. Firstly, a single user-centred focus for Interaction Design is no longer regarded as adequate, and indeed any single central focus for design is now questioned. Post-centric approaches such as Balanced, Integrated and Generous (BIG) Design propose to achieve a broadened, worth-focused content scope for Interaction Design, where worth is the balance of increasing benefits over reducing costs and generosity of choice. Secondly, there has been a broadened scope for disciplinary values in Human-Computer Interaction research, with the initial engineering and human science values of User-Centred Design and Human-Computer Interaction now complemented by the rapidly maturing creative field of Research through Design (RtD). Thirdly, RtD as a form of creative reflective practice does not have a sequential process, but needs parallel activities that can achieve total iteration potential (i.e., no restrictions on iteration sequences). Structured reflective tools such as the Working to Choose Framework may reveal this potential. An important opportunity remained to carry out a complete, challenging case study that integrated this domain focus (worth) with these tools (RtD and structured reflection). The case study addressed the challenging social issues associated with supporting care circles of individuals with disabilities. It is original in completely tracking the combination of RtD with worth-focused Interaction Design, supported by established user-centred practices. The resulting research has made contributions, through the tracking of the RtD process, to: worth-focused design and evaluation resources; structured reflection; the demonstration of innovative parallel, balanced and integrated forms of iteration; and future social innovation for disability support.
|