101

A quality evaluation model for service-oriented software

Owrak, Ali January 2008 (has links)
No description available.
102

User quality judgements of interactive systems

Hartmann, Jan January 2008 (has links)
Understanding the complexities of user judgements of quality is an essential prerequisite for advancing interactive system research, evaluation, and design. Human-Computer Interaction (HCI) has led the shift from a functional system-centric definition of quality towards a user-centred view by conceptualising usability, which has established itself as the widely accepted definition of quality within HCI. With the move of computing systems from work domains into leisure space, a whole range of aspects beyond usability, such as pleasure, fun, and the 'user experience', are recognised as important for a system's success. As the traditional view of usability as the sole predictor for overall preference is increasingly challenged, a common understanding of how to incorporate such quality criteria beyond usefulness and usability into HCI is yet to be formulated. Aesthetics has thereby been receiving the most prominent share of attention in initial empirical work in the emerging user experience (UX) community, culminating in the claim that 'what is beautiful is usable'. However, empirical work is scarce and results are often conflicting. This thesis is anchored in the current discussion in HCI and UX towards a deeper understanding of the complexities of user judgement of interactive systems' quality. The thesis employs a theoretically grounded empirical investigation based on five studies of user judgement of websites and a mobile content service. It explores the relationships of individual quality criteria and overall preference, user biases in judgement, the relative importance of evaluation criteria for overall preference, and personal and contextual influences on user judgement, towards integrating qualities beyond usefulness and usability in an understanding of contextually situated user judgement of interactive systems' quality. The results demonstrate that while aesthetics can be a more powerful influence on overall judgements and preferences than traditional usability, the relationships between usability, aesthetics, and other evaluation criteria are significantly more complex. The thesis advances current understanding in HCI and UX by showing that while functional and non-functional attributes can both contribute to overall judgement, their relative importance depends on a personal prioritisation of the individual evaluation criteria for overall preference. It further shows that the relative importance of evaluation criteria does not only depend on interpersonal differences, but also that users' prioritisations change according to the task at hand. The thesis thus significantly extends the currently accepted norm and argues for understanding user quality judgements as highly personally and contextually dependent. This reinforces the well-known advice to 'know your audience' and augments it: know your audience's preferences, prioritisations, and expectations.
103

Digital watermark technology in security applications

Xu, Xin January 2008 (has links)
With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity documents. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as other competing technologies in supporting identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so that watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences. The advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which have the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively, and comparisons are made between the orthogonal and non-orthogonal codes used in this watermarking method. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity, and robustness. From two different viewpoints, the conclusions are not the same. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity performance is also significant. From the power-ratio point of view, however, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations. Synchronisation of the watermark is first provided by high-contrast frames around the watermarked image. Edge detection filters are used to detect the high-contrast borders of the captured image; by scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frame, the slope of the regression function giving the rotation angle of the rotated frame. Scaling is corrected by re-sampling the upright image to its original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented: using invariant transforms and the "symmetric phase-only matched filter", the captured image can be corrected accurately to its original geometry. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array's elements reveal rotation, translation, and scaling information through two filtering processes.
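To make the orthogonalisation idea concrete, here is a minimal sketch of generating a binary spreading sequence quasi-orthogonal to a cover vector. The greedy sign-selection heuristic is an illustrative stand-in rather than the thesis's actual "Sorting and Cancelling" algorithm, and the function name and interface are assumptions:

```python
import numpy as np

def quasi_orthogonal_sequence(cover, seed=0):
    """Illustrative greedy pass (not the thesis's algorithm): choose
    each +/-1 chip so the running inner product with the cover vector
    is pulled back toward zero, visiting the largest-magnitude cover
    samples first."""
    rng = np.random.default_rng(seed)
    cover = np.asarray(cover, dtype=float)
    seq = np.zeros(cover.size)
    running = 0.0
    for i in np.argsort(-np.abs(cover)):
        if cover[i] == 0.0 or running == 0.0:
            seq[i] = rng.choice([-1.0, 1.0])
        else:
            seq[i] = -np.sign(running) * np.sign(cover[i])
        running += seq[i] * cover[i]
    return seq, running  # residual correlation with the cover

# cover = np.random.default_rng(1).normal(size=256)
# seq, residual = quasi_orthogonal_sequence(cover)  # |residual| << ||cover||
```

A single greedy pass keeps the residual correlation small in practice but offers no tight bound; the point of the thesis's methods is precisely to achieve a reliably high level of orthogonality.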
104

Cost-driven autonomous mobility

Deng, Xiao Yan January 2007 (has links)
Developments in distributed system technology facilitate the sharing of resources, even at a global level. This thesis explores sharing computational resources using mobile computations, agents, and autonomic techniques. We propose autonomous mobile programs (AMPs), which are aware of their resource needs and sensitive to the environment in which they execute. AMPs periodically use a cost model to decide where to execute in a network. Unusually, this form of autonomous mobility affects only where the program executes and not what it does. We present a generic AMP cost model, together with a validated instantiation and comparative performance results for four AMPs. We demonstrate that AMPs are able to dynamically relocate themselves to minimise execution time in the presence of varying network resources. Collections of AMPs effectively perform decentralised dynamic load balancing. Experiments on small LANs show that collections of AMPs quickly obtain and maintain optimal or near-optimal balance. The advantages of our decentralised approach are that it has the potential to scale to very large and dynamic networks, to achieve improved balance, and to offer guarantees that limit overheads under reasonable assumptions. In an autonomous mobile program, the program must contain explicit control of self-aware mobile coordination. To encapsulate this for common patterns of computation over collections, autonomous mobility skeletons (AMSs) are proposed. These are akin to algorithmic skeletons in being polymorphic higher-order functions, but where algorithmic skeletons abstract over parallel coordination, AMSs abstract over autonomous mobile coordination. AMS cost models have been built over collection iterations. The automap, autofold and AutoIterator AMSs are presented, together with performance measurements for Jocaml, Java Voyager, and JavaGo implementations on LANs. An AMS considers only the cost of the current collective computation, but it is more useful to know the cost of the entire program. We have extended our AMS cost models to be parameterised on the cost of the remainder of the program. A cost calculus to estimate the costs for the remainder of a computation at arbitrary points has been built. An automatic Jocaml cost analyser based on the calculus produces cost equations parameterised on program variables in context, and may be used to find both the cost of higher-order functions and the cost of the remainder of the program. Costed autonomous mobility skeletons (CAMSs) have been built, which not only encapsulate common patterns of autonomous mobility but also take additional cost parameters to provide costs for the remainder of the program. Potential performance improvements are assessed by comparing CAMS with AMS programs. The results show that CAMS programs perform more effectively than AMS programs because they have more accurate cost information: a CAMS program may move to a faster location when the corresponding AMS program does not.
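The relocation decision at the heart of an AMP can be sketched as follows. This is a hypothetical rendering of the generic cost model's core comparison (the thesis's implementations are in Jocaml, Java Voyager, and JavaGo, not Python), and all names and parameters are illustrative:

```python
def should_move(remaining_work, here_speed, there_speed, move_cost):
    """Core AMP relocation rule (illustrative): move only if finishing
    remotely, including the one-off cost of moving, beats finishing
    where the program currently runs."""
    return move_cost + remaining_work / there_speed < remaining_work / here_speed

def best_location(remaining_work, here, speeds, move_cost):
    """Pick the location minimising predicted completion time.
    `speeds` maps location names to observed processing rates."""
    def predicted(name):
        penalty = 0.0 if name == here else move_cost
        return penalty + remaining_work / speeds[name]
    return min(speeds, key=predicted)

# e.g. an AMP halfway through its work, polling the network:
# best_location(5e8, "hostA", {"hostA": 2e6, "hostB": 5e6}, 30.0)
```

Because each AMP applies this rule independently and pays the move cost itself, a collection of AMPs spreads across the network without any central coordinator, which is the decentralised load balancing the abstract describes.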
105

Object recognition in infrared imagery using appearance-based methods

Wang, Xun January 2008 (has links)
Object recognition in infrared imagery has important applications, for example in security, defence, and surveillance, due to the passive night-time and bad-weather capabilities of infrared sensors. The objective of this thesis is to find a preferred method for the identification of static targets in single infrared images, concentrating on appearance-based methods. This has included thermal modelling of infrared signatures and the identification of images of different objects with variation in pose and thermal state. The thesis reviews several popular approaches to object recognition in visible and infrared imagery, concentrating on the appearance-based approach. Using principal component analysis (PCA), the variances among the images are extracted and represented in a low-dimensional feature Eigenspace. Any new image can be projected into the Eigenspace by taking an inner product with the basis. The object of interest can then be recognised by a nearest-neighbour classification rule, made more accurate by over-sampling the surface manifold through B-spline surface fitting, and made more efficient by a k-d tree search algorithm. To address the problems of recognising targets in noisy and cluttered images, we have also employed a random sampling approach based on the principle of high-breakdown-point estimation. As a final step, a probabilistic framework is employed to improve the recognition rate and give a confidence measure for the result. The probability is determined by two factors: the distance from the Eigenspace and the distance within the Eigenspace. Using this probabilistic framework, we set an 'image window' on the test image and adjust the position of the window according to the recognition result expressed as a probability. The 'image window' method enables the system to tolerate small in-plane transformations of the object in the test image and to recognise poorly segmented test images. We also discuss the possibility of using a non-linear dimensionality reduction method, Isomap, to replace PCA as the basis of data decomposition in the appearance-based method. Results show that although Isomap has some advantages in separating poses, it does not improve the recognition result compared to PCA when sufficient basis vectors are used; therefore, we retain PCA as the basis for dimensionality reduction. A new way of modelling thermal change has been proposed within the framework of an appearance-based method. Possible thermal state changes of an object are modelled by several single-component changes and their combinations. Hence, we build an Eigenspace model in which each object is represented by several lines (or vectors) in the Eigenspace, each line representing one pose and one thermal state change. Using this model, it is possible to predict the subspace projection of changes in thermal state and to recognise new, unseen thermal images. Using a recognition algorithm that measures scene-to-model similarity by the distance between the unknown point and the learnt linear object representation, we are able to show an improvement in recognition accuracy over the conventional appearance-based approach. We have made extensive use of simulated data for both learning and recognising targets by appearance. As we have two degrees of freedom in viewpoint, azimuth and elevation, and several further degrees of freedom in allowing thermal state changes on different parts of the object, we have used as many as 33,700 thermal images for a single object in the most extreme case. Hence, it is not feasible to both control the thermal state and acquire infrared data for the complete set of objects and viewpoints in the learning phase. In the recognition phase, we have used simulated data to test the algorithms, but have also embedded simulated vehicles within real infrared image data, a practice which is common in the literature on IR target recognition reviewed in Chapter 2. Although the simulation package, CameoSim, has been evaluated in comparison with real data, this is less than ideal, but necessary in the circumstances to evaluate and test the approach.
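The core projection-and-matching step of the appearance-based method can be sketched as below. This minimal version assumes vectorised greyscale images and omits the thesis's refinements (B-spline manifold over-sampling, k-d tree search, robust random sampling, and the probabilistic framework); the function names are illustrative:

```python
import numpy as np

def learn_eigenspace(train_images, k):
    """Stack vectorised training images, keep the top-k principal
    directions, and store each image's low-dimensional projection."""
    X = np.stack([im.ravel().astype(float) for im in train_images])
    mean = X.mean(axis=0)
    # SVD of the centred data yields the Eigenspace basis
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k]                   # k orthonormal basis vectors
    coords = (X - mean) @ basis.T    # training projections
    return mean, basis, coords

def recognise(image, mean, basis, coords, labels):
    """Project a new image into the Eigenspace (inner products with
    the basis) and classify by the nearest training projection."""
    q = (image.ravel().astype(float) - mean) @ basis.T
    dists = np.linalg.norm(coords - q, axis=1)
    nearest = int(np.argmin(dists))
    return labels[nearest], dists[nearest]
```

The two distances in the thesis's probabilistic framework map onto this sketch directly: the reconstruction error of `image` against the basis is the distance from the Eigenspace, while `dists[nearest]` is the distance within it.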
106

Wavefront coding for alleviation of aberrations in incoherent imaging systems

Mezouari, Samir January 2003 (has links)
No description available.
107

Geometric decomposition tools for parallel computing

Obiala, Renata January 2007 (has links)
This thesis describes new geometric decomposition tools for parallel computing. A new complete process of model preparation for parallel analysis is proposed and investigated. The process focuses on applying geometrical entities rather than mesh elements to the decomposition problem. The study starts with an exploration of different geometrical representations in order to select the most suitable representation for the purpose of this research. Next, the model is orthogonalised and cut into blocks to create a decomposed orthogonal model. The blocks composing the model are allocated to a given number of processors using weight factors determined by a grid mesh generated for the model. Finally, the decomposed orthogonal model is mapped back into its original shape while preserving the relationship between the mesh elements and geometrical entities. A number of different methodologies are successfully applied to perform the whole process. Fuzzy Logic and Genetic Algorithms are used to orthogonalise the original model. The Genetic Algorithms are also used for a graph partitioning problem, where a weighted graph is designed to represent the decomposed model. Additionally, the Extreme Vertices Model inspired the model representation required for decomposition. Each part of the whole process presented in this thesis is followed by examples and discussion.
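As an illustration of the block-to-processor allocation step, here is a minimal greedy sketch. The thesis uses a Genetic Algorithm over a weighted graph, which also accounts for adjacency between blocks; this hypothetical stand-in balances weights only:

```python
import heapq

def allocate_blocks(block_weights, n_processors):
    """Illustrative greedy allocation: assign each block (weight =
    mesh cells it carries) to the currently lightest-loaded
    processor, heaviest blocks first. Ignores block adjacency, so it
    balances load but not communication."""
    loads = [(0.0, p) for p in range(n_processors)]
    heapq.heapify(loads)
    assignment = {}
    for block, w in sorted(block_weights.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)
        assignment[block] = p
        heapq.heappush(loads, (load + w, p))
    return assignment

# allocate_blocks({"B1": 120, "B2": 80, "B3": 60, "B4": 40}, 2)
# -> {"B1": 0, "B2": 1, "B3": 1, "B4": 0}   (loads 160 vs 140)
```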
108

Cross-modal semantic-associative labelling, indexing and retrieval of multimodal data

Zhu, Meng January 2010 (has links)
No description available.
109

Boosting image annotation

Lin, Wei-Chao January 2010 (has links)
No description available.
110

Unified field multiplier for GF(p) and GF(2^n) with novel digit encoding

Au, Lai Sze January 2004 (has links)
In recent years, there has been an increase in demand for unified field multipliers for Elliptic Curve Cryptography in the electronics industry, because they provide flexibility for customers to choose between prime (GF(p)) and binary (GF(2^n)) Galois fields. Also, the ability to carry out arithmetic over both GF(p) and GF(2^n) in the same hardware makes it possible to perform any cryptographic operation that requires the use of both fields. A unified field multiplier is relatively future-proof compared with multipliers that only perform arithmetic over a single chosen field. The security provided by the architecture is also very important. It is known that the longer the key length, the more susceptible the system is to differential power attacks, due to the increased amount of data leakage. Therefore, it is beneficial to design hardware that is scalable, so that more data can be processed per cycle. Another advantage of designing a multiplier capable of dealing with long word lengths is improved performance in terms of delay, because fewer cycles are needed. This is very important because typical elliptic curve cryptography involves key sizes of 160 bits. A novel unified field radix-4 multiplier using Montgomery multiplication over GF(p) and GF(2^n) is proposed. The design makes use of the unexploited state in the number representation for operation in GF(2^n), where all carries are suppressed. Addition is carried out using a modified (4:2) redundant adder to accommodate the extra 1* state. The proposed adder and partial product generator are capable of radix-4 operation, which reduces the number of computation cycles required. The proposed adder is also more scalable than existing designs.
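To illustrate how one dataflow can serve both fields, here is a minimal bit-serial (radix-2) Montgomery multiplication sketch. The thesis's design is radix-4 with a modified (4:2) redundant adder and a novel digit encoding, none of which this Python sketch models; it only shows the shared structure of the two field operations:

```python
def mont_mul_gfp(a, b, p, n):
    """Bit-serial Montgomery multiplication over GF(p): returns
    a*b*2^(-n) mod p for odd p, with a, b < p and n >= p.bit_length()."""
    s = 0
    for i in range(n):
        s += ((a >> i) & 1) * b   # conditionally accumulate b
        if s & 1:                 # make s divisible by 2 ...
            s += p
        s >>= 1                   # ... then divide by the radix
    return s - p if s >= p else s

def mont_mul_gf2n(a, b, f, n):
    """The same dataflow over GF(2^n): addition becomes XOR (all
    carries suppressed) and reduction uses the field polynomial f
    (with constant term 1) instead of the prime p."""
    s = 0
    for i in range(n):
        s ^= ((a >> i) & 1) * b
        if s & 1:
            s ^= f
        s >>= 1
    return s
```

Comparing the two loops line by line shows why a unified datapath is feasible: the GF(2^n) case is the GF(p) case with carry propagation masked off, which is exactly the carry suppression the abstract exploits.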
