91. Intelligent assembly of wind turbine hubs. Deters, Christian. January 2014.
The fast-growing wind turbine industry is expected to play a major role in meeting our future energy needs. At present, turbine manufacturing is performed mostly manually. Due to market growth, the economic tipping point has been reached at which an automated assembly concept can be introduced. This thesis focuses on the assembly of the wind turbine hub, in particular on the bolt tightening process for the wind turbine bearing assembly, contributing to the EU project COSMOS. Within this industrial research project, bolt tightening has been identified as an important research problem, and the control strategies derived in this PhD thesis contributed to the activities of COSMOS. A wind turbine hub has three bearings which are assembled using multiple bolts (in current wind turbines, up to 128 bolts). With the need to conform to stringent safety requirements and the aim of producing long-lasting systems, the desired clamping force between the nut and a counteracting flange needs to be achieved accurately and reliably as a result of the tightening process. This thesis analyses the bolt tightening process as a sequence of tightening stages, with each stage addressing particular control and safety problems. The introduced fuzzy control architecture uses membership functions combined with linguistic rules to set the control targets (specific torque and angle levels for the investigated wind turbine assembly process) and to ensure that the desired clamping force is reached successfully and accurately. The control results (step response of the final control values and final clamping force) have been compared to more traditional control paradigms, including the proportional-integral-derivative (PID) controller. Experiments have shown that accuracy improved: the standard deviation of the fuzzy controller is more than four times lower than that achieved using the PID controller. The bolt system has been further analysed and a numerical state-space model has been identified using an experimental identification method. The identified model has been used to derive suitable control gains for a proportional-integral (PI) control strategy, which were then fine-tuned using an online learning process based on a genetic algorithm (GA). Error detection and avoidance is another important aspect when assembling safety-critical systems such as wind turbines. This PhD study introduces an error detection mechanism that is active during the bolt tightening process and integrated with the fuzzy control architecture used for bolt tightening. This is achieved by defining additional membership functions and linguistic rules for error detection. The error detection mechanism takes a logic-based approach, terminating the tightening process when critical control parameters are exceeded.
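
As a rough illustration of the control style described above, the sketch below implements a single fuzzy tightening step in Python: triangular membership functions fuzzify the measured torque and angle (as fractions of their targets), linguistic rules map them to a motor speed, and an extra rule aborts when critical levels are exceeded. The breakpoints, rule set and abort threshold are illustrative assumptions, not the thesis's actual parameters.

    def tri(x, a, b, c):
        # Triangular membership function: rises from a, peaks at b, falls to c.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def tightening_step(torque_frac, angle_frac):
        # One control step: torque/angle as fractions of target -> speed in [0, 1].
        # Error detection as extra rules: abort beyond an assumed safety band.
        if torque_frac > 1.15 or angle_frac > 1.15:
            raise RuntimeError("critical tightening parameter exceeded - abort")
        far  = tri(torque_frac, -1.0, 0.0, 0.8)        # far from target torque
        near = tri(torque_frac, 0.6, 0.85, 1.0)        # approaching target
        done = min(tri(torque_frac, 0.9, 1.0, 1.2),
                   tri(angle_frac, 0.9, 1.0, 1.2))     # both levels reached
        # Linguistic rules with singleton outputs: far -> fast, near -> slow, done -> stop.
        rules = [(far, 1.0), (near, 0.25), (done, 0.0)]
        total = sum(w for w, _ in rules)
        return sum(w * s for w, s in rules) / total if total else 0.0

    print(tightening_step(0.70, 0.60))   # ~0.43: moderate speed mid-tightening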

92. Practical zero-knowledge protocols based on the discrete logarithm assumption. Bayer, S. G. M. January 2014.
Zero-knowledge proofs were introduced by Goldwasser, Micali, and Rackoff. A zero-knowledge proof allows a prover to demonstrate knowledge of some information, for example that they know an element which is a member of a list, or one which is not a member of a list, without disclosing any further information about that element. Existing constructions of zero-knowledge proofs which can be applied to all languages in NP are impractical due to their communication and computational complexity. However, it has been known since Guillou and Quisquater's identification protocol from 1988 and Schnorr's identification protocol from 1991 that practical zero-knowledge protocols for specific problems exist. Because of this, much work has been undertaken over recent decades to find practical zero-knowledge proofs for various other specific problems, and in recent years many protocols have been published which improve communication and computational complexity. Nevertheless, finding more problems which have an efficient and practical zero-knowledge proof system, and which can be used as building blocks for other protocols, remains an ongoing challenge of modern cryptography. This work addresses that challenge and constructs zero-knowledge arguments with sublinear communication complexity and achievable computational demands. The security of our protocols is based solely on the discrete logarithm assumption. Polynomial evaluation arguments are proposed for univariate polynomials, for multivariate polynomials, and for a batch of univariate polynomials. Furthermore, the polynomial evaluation argument is applied to construct practical membership and non-membership arguments. Finally, an efficient method for proving the correctness of a shuffle is proposed. The proposed protocols have been tested against current state-of-the-art alternatives to verify their practicality in terms of run-time and communication cost. We observe that the performance of our protocols is fast enough to be practical for medium-range parameters. Furthermore, all our verifiers have better asymptotic behavior than earlier verifiers independent of the parameter range, and in real-life settings our provers perform better than the provers of existing protocols. The analysis of the results shows that the communication cost of our protocols is very small; therefore, our new protocols compare very favorably to the current state of the art.
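
For readers unfamiliar with the building blocks mentioned above, the sketch below runs an honest execution of Schnorr's identification protocol, the classic practical zero-knowledge protocol based on the discrete logarithm assumption that the abstract cites. The toy group parameters are chosen only so the arithmetic is checkable by hand and are far too small for real security.

    import secrets

    # Toy subgroup: g = 2 has order q = 11 modulo p = 23;
    # real deployments use a ~256-bit q.
    p, q, g = 23, 11, 2

    x = secrets.randbelow(q)            # prover's secret
    y = pow(g, x, p)                    # public value the proof refers to

    r = secrets.randbelow(q)            # prover: random nonce
    t = pow(g, r, p)                    # prover -> verifier: commitment
    c = secrets.randbelow(q)            # verifier -> prover: challenge
    s = (r + c * x) % q                 # prover -> verifier: response

    # Verifier accepts iff g^s == t * y^c (mod p); the transcript reveals
    # nothing about x beyond what y already does.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p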

93. Methods for proving non-termination of programs. Nimkar, K. N. January 2015.
The search for reliable and scalable automated methods for finding counterexamples to termination, or alternatively proving non-termination, is still wide open. This thesis studies the problem of proving non-termination of programs and presents new methods for doing so. It also provides a thorough comparison of the new methods with previous ones. In the first method, we show how the problem of proving non-termination can be reduced to a question of underapproximation search guided by a safety prover. This reduction leads to new non-termination proving implementation strategies based on existing tools for safety proving. Furthermore, our approach easily supports programs with unbounded non-determinism. In the second method, we show how Max-SMT-based invariant generation can be exploited for proving non-termination of programs. The construction of the proof of non-termination is guided by the generation of quasi-invariants: properties such that if they hold at a location during execution once, then they will continue to hold at that location from then onwards. The check that quasi-invariants can indeed be reached is then performed separately. Our technique produces more generic witnesses of non-termination than existing methods. Moreover, it can handle programs with unbounded non-determinism and is more likely to converge than previous approaches. When proving non-termination using known techniques, abstractions that overapproximate the program's transition relation are unsound. In the third method, we introduce live abstractions, a natural class of abstractions that can be combined with the concept of closed recurrence sets to soundly prove non-termination. To demonstrate the practical usefulness of this new approach, we show how programs with non-linear, non-deterministic, and heap-based commands can be proved non-terminating using linear overapproximations. All three methods introduced in this thesis have been implemented in different tools. We also provide experimental results which show significant performance improvements over existing methods.
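
The quasi-invariant idea can be made concrete with a toy example. For the loop "while x != 0: x = x + 1", the property Q(x): x >= 1 is a quasi-invariant: once it holds at the loop head it continues to hold, and it implies the loop guard, so any execution reaching a Q-state never terminates. The sketch below checks these two conditions by sampling; a real tool of the kind described would discharge them with Max-SMT and verify reachability of Q separately. The example program is invented for illustration.

    def guard(x):  return x != 0        # loop: while x != 0: x = x + 1
    def body(x):   return x + 1
    def Q(x):      return x >= 1        # candidate quasi-invariant

    # (1) Q is inductive under the loop body (sampled here, not proved):
    assert all(Q(body(x)) for x in range(-100, 100) if Q(x) and guard(x))
    # (2) Q implies the guard, so the loop can never exit from a Q-state:
    assert all(guard(x) for x in range(-100, 100) if Q(x))
    # (3) Reachability is checked separately: any initial x >= 1 already
    # satisfies Q at the loop head, witnessing non-termination.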

94. Pushing the limits of indoor localization in today's Wi-Fi networks. Xiong, J. January 2015.
Wireless networks are ubiquitous nowadays and play an increasingly important role in our everyday lives. Many emerging applications, including augmented reality, indoor navigation and human tracking, rely heavily on Wi-Fi, thus requiring an even more sophisticated network. One key component for the success of these applications is accurate localization. While we have GPS in the outdoor environment, indoor localization at a sub-meter granularity remains challenging due to a number of factors, including the presence of strong wireless multipath reflections indoors and the burden of deploying and maintaining any additional location service infrastructure. On the other hand, Wi-Fi technology has developed significantly in the last 15 years, evolving from 802.11b/a/g to the latest 802.11n and 802.11ac standards. Single-user multiple-input, multiple-output (SU-MIMO) technology was adopted in 802.11n, while multi-user MIMO was introduced in 802.11ac to increase throughput. In Wi-Fi's development, one interesting trend is the increasing number of antennas attached to a single access point (AP). Another trend is the presence of frequency-agile radios and larger bandwidths in the latest 802.11n/ac standards. These opportunities can be leveraged to increase the accuracy of indoor wireless localization significantly, as the two systems proposed in this thesis demonstrate. ArrayTrack employs multi-antenna APs to obtain angle-of-arrival (AoA) information and localize clients accurately indoors. It is the first indoor Wi-Fi localization system able to achieve below half-meter median accuracy. An innovative multipath identification scheme is proposed to handle the challenging multipath issue in indoor environments. ArrayTrack is robust in terms of signal-to-noise ratio, collisions and device orientation. ArrayTrack does not require any offline training and its computational load is small, making it a strong candidate for real-time location services. With six 8-antenna APs, ArrayTrack is able to achieve a median error of 23 cm indoors in the presence of strong multipath reflections in a typical office environment. ToneTrack is a fine-grained indoor localization system employing a time-difference-of-arrival (TDoA) scheme. ToneTrack uses a novel channel combination algorithm to increase effective bandwidth without increasing the radio's sampling rate, yielding higher-resolution time-of-arrival (ToA) information. A new spectrum identification scheme is proposed to retrieve useful information from a ToA profile even when the overall profile is mostly inaccurate. The triangle inequality property is then applied to detect and discard APs whose direct path is completely blocked. With a combination of only three 20 MHz channels in the 2.4 GHz band, ToneTrack is able to achieve below one meter median error, significantly outperforming traditional super-resolution ToA schemes.
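
A minimal sketch of the angle-of-arrival principle underlying ArrayTrack, assuming an ideal two-antenna array at half-wavelength spacing and a single direct path: the bearing is recovered from the phase difference between antennas. ArrayTrack itself uses larger arrays and the multipath identification scheme described above, neither of which is shown here.

    import numpy as np

    freq = 2.4e9                        # Wi-Fi carrier frequency (Hz)
    wavelength = 3e8 / freq
    d = wavelength / 2                  # antenna spacing

    def aoa_from_phase(phase_delta):
        # Bearing (radians) from the inter-antenna phase difference.
        return np.arcsin(phase_delta * wavelength / (2 * np.pi * d))

    # Simulate a direct path arriving at 30 degrees and recover the angle.
    true_theta = np.deg2rad(30)
    phase_delta = 2 * np.pi * d * np.sin(true_theta) / wavelength
    print(np.rad2deg(aoa_from_phase(phase_delta)))   # ~30.0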

95. Improving tool support for Personal Task Management (PTM). Kamsin, A. January 2014.
Personal Task Management (PTM) describes the planning, prioritising and list-making of tasks employed by an individual user. There are hundreds of commercial electronic PTM tools on the market from which users can choose. However, there has been little attempt to develop a framework for describing people's task management behaviour, making it difficult to determine the extent to which these tools meet users' needs. The aims of this thesis were therefore to understand how academics manage their tasks, to identify the conceptual gaps between them and the existing electronic tools, and to establish requirements for guiding the design and evaluation of PTM tools. The research adopts a user-centred design methodology, combining empirical and analytical approaches across four different studies. Firstly, a semi-structured interview study develops a PTM framework describing the components of PTM (i.e. the underlying activities and contextual factors). Secondly, a member-checking study tests the accuracy of the framework. Thirdly, a video-diary study examines the inconsistencies discovered between the interview and member-checking studies. The findings extend the PTM framework to include other aspects of users (e.g. challenges, context awareness, etc.), broadening the understanding of the complexity of PTM behaviours. The data gathered in the user studies was analysed using a grounded theory (GT) approach, and the findings were then used to build personas of academics. Finally, an in-depth expert analytical evaluation of a set of existing tools using CASSM identifies the conceptual misfits between users and the existing tools. The contributions of this thesis are: a development of the PTM framework, describing the key factors that influence academics in managing their tasks; a development of personas, explaining the characteristics of different groups of academics and the PTM strategies that they employ over time; and an evaluation of existing PTM tools, determining their strengths and limitations and providing recommendations.

96. Face recognition in uncontrolled environments. Fu, Y. January 2015.
This thesis concerns face recognition in uncontrolled environments, in which the images used for training and testing are collected from the real world instead of laboratories. Compared with controlled environments, images from uncontrolled environments contain more variation in pose, lighting, expression, occlusion, background, image quality, scale, and makeup. Therefore, face recognition in uncontrolled environments is much more challenging than in controlled conditions. Moreover, many real-world applications require good recognition performance in uncontrolled environments; examples include social networking, human-computer interaction and electronic entertainment. Consequently, researchers and companies have shifted their interest from controlled to uncontrolled environments over the past seven years. In this thesis, we divide the history of face recognition into four stages and list the main problems and algorithms at each stage. We find that face recognition in unconstrained environments is still an unsolved problem, although many face recognition algorithms have been proposed in the last decade. Existing approaches have two major limitations. First, many methods do not perform well when tested on uncontrolled databases, even when all the faces are close to frontal. Second, most current algorithms cannot handle large pose variation, which has become a bottleneck for improving performance. In this thesis, we investigate Bayesian models for face recognition. Our contributions extend Probabilistic Linear Discriminant Analysis (PLDA) [Prince and Elder 2007]. In PLDA, images are described as a sum of signal and noise components, where each component is a weighted combination of basis functions. We first investigate the effect of the degree of localization of these basis functions and find that better performance is obtained when the signal is treated more locally and the noise more globally. We call this new algorithm multi-scale PLDA, and our experiments show it can handle lighting variation better than PLDA but fails for pose variation. We then analyze three existing Bayesian face recognition algorithms and combine the advantages of PLDA and the Joint Bayesian Face algorithm [Chen et al. 2012] to propose Joint PLDA. We find that our new algorithm improves performance compared to existing Bayesian face recognition algorithms. Finally, we propose the Tied Joint Bayesian Face algorithm and Tied Joint PLDA to address large pose variations in the data, which drastically decrease the performance of most existing face recognition algorithms. To provide sufficient training images with large pose differences, we introduce a new database called the UCL Multi-pose database. We demonstrate that our Bayesian models improve face recognition performance when the pose of the face images varies.
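
A short sketch of the PLDA generative model that these contributions extend: an image is modeled as a mean plus a signal component F·h shared by all images of one identity and a per-image noise component G·w plus residual. The dimensions and random bases below are illustrative, and the EM-style fitting of F and G is omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    D, d_sig, d_noise = 100, 8, 16      # image dim, signal/noise subspace dims

    mu = rng.normal(size=D)             # mean face
    F = rng.normal(size=(D, d_sig))     # signal basis (between-identity)
    G = rng.normal(size=(D, d_noise))   # noise basis (pose, lighting, expression)

    def sample_identity(n_images):
        # All images of one identity share h; w and the residual vary per image.
        h = rng.normal(size=d_sig)
        return [mu + F @ h + G @ rng.normal(size=d_noise)
                + 0.1 * rng.normal(size=D) for _ in range(n_images)]

    faces = sample_identity(3)          # three images of one synthetic identity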

97. User behaviour in personal data disclosure. Malheiros, M. January 2014.
Organisations see the collection and use of data about their customers, citizens or employees as necessary to enable value-adding activities such as personalised service or targeted advertising. At the same time, the increased efficiency and cost-effectiveness of information systems have removed most economic disincentives to the widespread collection of personal data. HCI privacy research has mainly focused on identifying features of information systems or organisational practices that lead to privacy invasions, and on making recommendations on how to address them. This approach fails to consider that the organisations deploying these systems may have a vested interest in potentially privacy-invasive features. This thesis approaches the problem from a utilitarian perspective and posits that organisational data practices construed as unfair or invasive by individuals can lead them to engage in privacy protection behaviours that have a negative impact on the organisation's data quality. The main limitations of past privacy research are (1) overreliance on self-reported data; (2) difficulty in explaining the dissonance between privacy attitudes and privacy practice; and (3) excessive focus on specific contexts and a resulting lack of generalisation. This thesis addresses these limitations by proposing a context-neutral model of personal data disclosure behaviour that identifies factors influencing individuals' perception of data requests from organisations and links those perceptions to actual disclosure decisions. This model synthesises existing research with findings from a series of interviews, questionnaires, and experiments on privacy perceptions of (1) loan application forms; (2) serious games; (3) the UK census of 2011; and (4) targeted advertising. Results in this thesis show that individuals' decisions on whether to comply with organisations' data collection efforts depend largely on the same factors regardless of context. In particular, a validation field experiment on online disclosure with 320 participants showed that perceptions of unfair data requests or of the expected use of the data lead to lower response rates and increased falsification of answers. Both these outcomes negatively impact organisations' data quality and ability to make informed decisions, suggesting that more privacy-conscious data collection procedures may lead to increased utility for both organisations and individuals.

98. Volare: mobile context-aware adaptation for the Cloud. Papakos, P. January 2014.
As the proliferation and use of mobile devices grows explosively, more web service providers are moving their offerings to the Cloud under the Software as a Service (SaaS) model. Mobile environments present new challenges that service discovery methods developed for non-mobile environments cannot address. The requirements a mobile client device has of internet services may change, even at runtime, due to variable context, which may include hardware resources, environmental variables (such as network availability) and user preferences. Binding to a discovered service whose QoS levels differ from those imposed by current context and policy requirements may lead to low application performance, excessive consumption of mobile resources such as battery life, and service disruption, especially for long-lasting foreground applications like media streaming or navigation. This thesis presents the Volare approach to parameter adaptation for service requests to Cloud services in a SaaS architecture. For this purpose, we introduce an adaptive mobile middleware solution that performs context-aware QoS parameter adaptation. When service discovery is initiated, the middleware calculates the optimal QoS levels for the service request under the current context, policy requirements and goals, and adapts the service request accordingly. At runtime, it can trigger dynamic service rediscovery following significant context changes, to ensure optimal binding. The adaptation logic is expressed in the declarative, domain-specific Volare Adaptation Policy Specification Language (APSL). Key characteristics of this approach include two-level policy support (providing both device-specific and application-specific adaptation), integration of a User Preferences Model, and high behavioral (parameter adaptation) variability achieved by allowing multiple weighted adaptation rules to influence each QoS variable. The Volare approach also supports unanticipated quantitative long-term performance goals (LTPGs) with finite horizons. A use case and a proof-of-concept implementation have been developed for cloud service discovery through a cloud service provider, together with a case study which demonstrates significant savings in battery consumption, provider data usage and monetary cost compared to unadapted QoS service bindings, while consistently avoiding service disruptions caused by QoS levels that the device cannot support. In addition, adaptation policies written in the Volare approach tend to grow mostly linearly in size, rather than combinatorially as in more conventional situation-action approaches.
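
One plausible reading of the weighted-rule mechanism is sketched below: several context-triggered rules each pull a requested QoS variable (here, a video bitrate) toward a value with a given weight, and the request uses their weighted average. The rule set, context fields and numbers are invented for illustration; actual Volare policies are written declaratively in APSL rather than in code like this.

    def adapt_bitrate(context):
        # Each rule: (fires under this context?, weight, suggested kbps).
        rules = [
            (context["battery"] < 0.2,          3.0,  300),   # conserve battery
            (context["network"] == "cellular",  2.0,  500),   # limit data cost
            (context["user_pref"] == "quality", 1.0, 2000),   # user preference
            (True,                              0.5, 1000),   # default level
        ]
        fired = [(w, v) for cond, w, v in rules if cond]
        return sum(w * v for w, v in fired) / sum(w for w, _ in fired)

    # Low battery on cellular outweighs the quality preference: ~677 kbps.
    print(adapt_bitrate({"battery": 0.15, "network": "cellular",
                         "user_pref": "quality"}))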

99. In touch with the wild: exploring real-time feedback for learning to play the violin. Johnson, R. M. G. January 2014.
Real-time feedback has great potential for enhancing the learning of complex motor skills by enabling people to correct their mistakes as they go. Multimodal real-time cues can provide reinforcement, informing players whether they are making correct or incorrect movements at a given time. However, little is known about how best to communicate information in real time so that people can readily perceive it and apply it to improving their movement while learning complex motor skills. This thesis addresses this gap in knowledge by investigating how real-time feedback can enhance learning to play the violin. It explores how haptic and visual feedback are perceived, understood and acted upon in real time when a learner is engaged in the primary task of playing the violin. Prototypes were built with sensors to measure movement, and with either vibrations on the body or visual signals as feedback. Three in-the-wild user studies were conducted: one comparing visual and vibrotactile feedback for individual practice; one investigating shared feedback at a musical summer school; and one examining real-time feedback as part of a programme of learning at a high school. In-the-wild studies investigate users interacting with technology in a naturalistic setting, with all the demands that this entails. The findings show that real-time feedback is effective at improving violin technique and can support learning in other ways, such as encouraging mutual support between learners. The positive learning outcomes, however, need to be understood with respect to the complex interplay between the technology, the demands of the setting and the characteristics of individual learners. A conceptual framework is provided that outlines these interdependent factors. The findings are discussed with regard to their applicability to learning other physical skills, and to the challenges and insights of using an in-the-wild methodology. The contribution of this thesis is to demonstrate empirically and theoretically how real-time vibrotactile and visual feedback can enhance learning a complex motor skill.
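
A hypothetical sketch of the kind of real-time cue loop such prototypes implement: a sensed bowing angle is compared against a target, no cue is given while the movement is correct, and a corrective cue grows with the error. The sensor quantity, threshold and mapping are all assumptions for illustration; the thesis's prototypes and parameters differ.

    TARGET_ANGLE, TOLERANCE = 90.0, 5.0     # degrees; illustrative values

    def feedback_cue(sensed_angle):
        # Cue strength in [0, 1]; 0 means the movement is within tolerance.
        error = abs(sensed_angle - TARGET_ANGLE)
        if error <= TOLERANCE:
            return 0.0                       # correct movement: no corrective cue
        return min((error - TOLERANCE) / 20.0, 1.0)

    # e.g. drive a vibration motor each sensor frame (hypothetical calls):
    # motor.set_intensity(feedback_cue(read_bow_angle()))
    print(feedback_cue(97.3))                # ~0.12: gentle corrective cue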

100. Coherent dependence cluster. Islam, S. S. January 2014.
This thesis introduces coherent dependence clusters and shows their relevance to areas of software engineering such as program comprehension and maintenance. All statements in a coherent dependence cluster depend upon the same set of statements and affect the same set of statements; a coherent cluster's statements have 'coherent' shared backward and forward dependence. We introduce an approximation to efficiently locate coherent clusters and show that its precision significantly improves over previous approximations. Our empirical study also finds that, despite their tight coherence constraints, coherent dependence clusters are to be found in abundance in production code. Studying patterns of clustering in several open-source and industrial programs reveals that most contain multiple significant coherent clusters. A series of case studies reveals that large clusters map to logical functionality and program structure. Cluster visualisation also reveals subtle deficiencies of program structure and identifies potential candidates for refactoring efforts. Supplementary studies of inter-cluster dependence are presented, where identification of coherent clusters can help in deriving hierarchical system decompositions for reverse engineering purposes. Furthermore, studies of program faults find no link between the existence of coherent clusters and software bugs. Rather, a longitudinal study of several systems finds that coherent clusters represent the core architecture of programs during system evolution. Due to the inherent conservativeness of static analysis, it is possible for unreachable code, and for code implementing cross-cutting concerns such as error handling and debugging, to link clusters together. This thesis studies their effect on dependence clusters by using coverage information to remove unexecuted and rarely executed code. Empirical evaluation reveals that this code reduction yields smaller slices and clusters.
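
The defining property above translates directly into a grouping computation: statements fall into the same coherent cluster exactly when they share both their backward-slice set and their forward-slice set. The sketch below does this for an invented toy dependence graph; real tools compute slices over a system dependence graph rather than this simple transitive closure.

    from collections import defaultdict

    # Toy program dependence graph: statement -> statements it depends on.
    deps = {1: set(), 2: {1, 3}, 3: {1, 2}, 4: {2, 3}}

    def backward_slice(n):
        seen, stack = set(), [n]
        while stack:
            m = stack.pop()
            if m not in seen:
                seen.add(m)
                stack.extend(deps[m])
        return frozenset(seen)

    def forward_slice(n):
        return frozenset(m for m in deps if n in backward_slice(m))

    clusters = defaultdict(list)
    for n in deps:
        clusters[(backward_slice(n), forward_slice(n))].append(n)

    # Statements 2 and 3 are mutually dependent, so they share both slices:
    print([c for c in clusters.values() if len(c) > 1])   # [[2, 3]]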