Investigating adoption of, and success factors for, agile software development in Malaysia

Asnawi, Ani Liza January 2012 (has links)
Agile methods are sets of software practices that can produce products faster while delivering what customers want. Despite these benefits, however, few studies can be found from the Southeast Asian region, particularly Malaysia. Furthermore, many software processes were developed and produced in the US and European countries, so they are tailored to those cultures, and most empirical evidence comes from those countries. In this research, the perception of Agile methods, the challenges to their adoption, and how the methods can be used successfully (their impact/benefits) were investigated from the perspective of Malaysian software practitioners. Consequently, the research introduces two models of the interaction and causality among the factors, which can help software practitioners in Malaysia determine and understand the aspects important for successful Agile adoption. Agile focuses on the 'people aspect'; therefore, cultural differences need to be addressed. Malaysia is a country with three ethnic groups (Malay, Chinese and Indian), and its first language is Malay. English is the country's second language and the standard language of business, including the software business. This study started by investigating the awareness of software practitioners in Malaysia regarding Agile methods. Low awareness was identified and, interestingly, language and organisational structure/culture were found to have a significant association with awareness of Agile methods: those using the English language were more aware of Agile methods. Adoption of Agile methods in the country appears to be low, although this may be changing over time. Issues faced by early adopters were qualitatively investigated (with seven organisations and 13 software practitioners) to understand Agile adoption in Malaysia. Customers' education, mind set, people and management were found to be important in these interviews.
The initial results and findings served as background for further investigating the factors important to the adoption of Agile methods from the Malaysian perspective. The study continued with a survey and further interviews involving seven organisations (three local and four multinational companies) and 14 software practitioners. The survey received 207 responses; language was found to be significant for Agile usage and Agile beliefs. Agile usage was also significant across organisation types (government/non-government), indicating a lack of adoption in the government sector. In addition, all factors investigated were found to be significant for obtaining the impact and benefits of Agile. The strongest relationship was with the organisational aspect, followed by knowledge and involvement from all parties. The qualitative investigation supported and explained the survey results, and from it the top factors for adoption and success in applying Agile were found to be involvement from all parties, which requires both the organisation and its people to make it happen. The most important factors (or dimensions) identified for both groups (Agile users and non-users) were organisational and people-related aspects (including customers). Finally, the study introduces two models capturing causal relationships that predict the impact and benefits (success) of Agile methods. Based on this empirical investigation, the study suggests that Agile methods must be adjusted to the organisation and its people to obtain involvement from all parties. Agile is more easily adopted in organisations with low power distance and low uncertainty avoidance. In addition, multinational companies and the private sector were found to facilitate Agile methods; in these organisations, employees were proficient in English.
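The reported association between language and Agile awareness is the kind of result a chi-square test of independence on a contingency table would establish. A minimal sketch in plain Python; the counts below are made up for illustration and are not the thesis data:

```python
# Chi-square test of independence for a 2x2 contingency table.
# The counts below are illustrative, NOT the survey data from the thesis.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = English / Malay as working language,
# columns = aware / not aware of Agile methods.
table = [[40, 10],
         [15, 35]]
stat = chi_square_2x2(table)
CRITICAL_95_DF1 = 3.841  # chi-square critical value, df = 1, alpha = 0.05
print(stat > CRITICAL_95_DF1)  # True: association significant at 5% level
```

Comparing the statistic against the df = 1 critical value is equivalent to checking p < 0.05 without needing a chi-square CDF implementation.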

On the analysis of structure in texture

Waller, Ben January 2014 (has links)
Until now, texture has largely been viewed through a statistical or holistic paradigm: textures are described as a whole and by summary statistics. In this thesis it is assumed that there is a structure underlying texture, leading to models, reconstruction and scale-based analysis. Local Binary Patterns (LBPs) are used throughout as the basis functions for texture, and methods have been developed to reconstruct texture images from arrays of their LBP codes. The reconstructed images contain texture properties identical to the original, yielding the same array of LBP codes. An evidence-gathering approach has been developed to provide a model for each texture class based on the spatial structure of these elements throughout the image. This method, called Evidence Gathering Texture Segmentation, gives good segmentation results with smooth boundaries and minimal over-segmentation compared with existing methods. Analysing micro- and macro-structures confers the ability to include scale in texture analysis. A novel combination of lowpass and highpass filters produces images devoid of structures at certain scales, allowing both the micro- and macro-structures to be analysed without occlusion by other scales of texture within the image. A two-stage training process is used to learn the optimum filter sizes and to produce model histograms for each known texture class. The process, called Accumulative Filtering, gives superior results compared to the best multiresolution LBP configuration and to analysis using only lowpass filters. By reconstruction, by evidence gathering and by analysis of micro- and macro-structures, new capabilities are described for exploiting structure within the analysis of texture.
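The LBP codes the thesis builds on can be sketched briefly: each pixel is assigned an 8-bit code by thresholding its eight neighbours against the centre value. A minimal illustration of the standard operator in plain Python (the thesis's reconstruction and segmentation methods, which operate on arrays of such codes, are not shown):

```python
# Basic 8-neighbour Local Binary Pattern (LBP) code for one pixel.

def lbp_code(image, r, c):
    """LBP code of pixel (r, c): threshold the 8 neighbours against the
    centre value and read the resulting bits as an 8-bit number
    (clockwise from the top-left neighbour)."""
    centre = image[r][c]
    neighbours = [
        image[r - 1][c - 1], image[r - 1][c], image[r - 1][c + 1],
        image[r][c + 1], image[r + 1][c + 1], image[r + 1][c],
        image[r + 1][c - 1], image[r][c - 1],
    ]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:  # neighbour at least as bright as centre -> bit set
            code |= 1 << bit
    return code

patch = [[9, 8, 1],
         [5, 6, 2],
         [7, 3, 4]]
print(lbp_code(patch, 1, 1))  # → 67 (bits 0, 1 and 6 set: values 9, 8, 7)
```

A full LBP image is just this code computed at every interior pixel; texture descriptors are then histograms of those codes.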

Intelligent agents for mobile location services

McInerney, James January 2014 (has links)
Understanding human mobility patterns is a significant research endeavour that has recently received considerable attention. Developing the science to describe and predict how people move from one place to another during their daily lives promises to address a wide range of societal challenges: from predicting the spread of infectious diseases and improving urban planning to devising effective emergency response strategies. Individuals also stand to benefit from this area of research, as mobile devices will be able to analyse their mobility patterns and offer context-aware assistance and information. For example, a service could warn about travel disruptions before the user is likely to encounter them, or provide recommendations and mobile vouchers for local services likely to be of high value to the user, based on their predicted future plans. More ambitiously, control systems for home heating and electric vehicle charging could be enhanced with knowledge of when the user will be home. In this thesis, we focus on such anticipatory computing. Some aspects of the vision of context-awareness have been pursued for many years, resulting in mature research in the area of ubiquitous systems. However, the combination of the surprisingly rapid consumer adoption of advanced mobile devices and the broad acceptance of location-based apps has surfaced not only new opportunities but also a number of pressing challenges. In more detail, these challenges are the (i) prediction of future mobility, (ii) inference of features of human location behaviour, and (iii) use of prediction and inference to make decisions about timely information or control actions. Our research brings together, for the first time, the entire workflow that a mobile location service needs to follow in order to achieve an understanding of mobile user needs and to act on that understanding effectively.
This framing of the problem highlights the shortcomings of existing approaches, which we seek to address. In the current literature, prediction is only considered for established users, which implicitly assumes that new users will continue to use an initially inaccurate prediction system long enough for it to improve over time. Additionally, inference of user behaviour is mostly concerned with interruptibility, which neglects the constructive role of intelligent location services, going beyond simply avoiding interrupting the user at inopportune times (e.g., in a meeting, or while driving). Finally, no principled decision framework for intelligent location services has been provided that takes into account the results of prediction and inference. To address these shortcomings, we make three main contributions to the state of the art. Firstly, we provide a novel Bayesian model that relates the location behaviour of new and established users, allowing the reuse of structure learnt from rich mobility data. This model shows a factor of 2.4 improvement over the state-of-the-art baseline in held-out data likelihood in experiments using the Nokia Lausanne dataset. Secondly, we give new tools for the analysis and prediction of routine in mobility, a latent feature of human behaviour that informs the service about the user's availability to follow up on any information provided. Thirdly, we provide a fully worked example of an intelligent mobile location service (a crowdsourced package delivery service) that performs decision-making using predictive densities of current and future user mobility. Simulations using real mobility data from the Orange Ivory Coast dataset indicate an 81.3% improvement in service efficiency compared with the next best (non-anticipatory) approach.
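As a far simpler illustration of next-place prediction than the hierarchical Bayesian model described above, a first-order Markov model over visited places can be fitted from a location trace. This toy stand-in is an assumption for illustration only, not the thesis's model:

```python
from collections import defaultdict

# First-order Markov model of place transitions: a deliberately simple
# stand-in for next-place prediction (the thesis uses a richer Bayesian
# model that shares structure between new and established users).

def fit_transitions(trace):
    """Count observed transitions between consecutive places."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(trace, trace[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, place):
    """Most frequently observed successor of `place`, or None if unseen."""
    successors = counts[place]
    return max(successors, key=successors.get) if successors else None

trace = ["home", "work", "cafe", "work", "home", "work", "cafe", "home"]
model = fit_transitions(trace)
print(predict_next(model, "work"))  # → cafe (observed twice vs home once)
```

The cold-start problem the thesis tackles is visible even here: with no trace for a new user, the counts are empty and no prediction is possible.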

Large-scale reordering models for statistical machine translation

Alrajeh, Abdullah January 2015 (has links)
In state-of-the-art phrase-based statistical machine translation (SMT) systems, modelling phrase reordering is important for enhancing the naturalness of translated output, particularly when the grammatical structures of the language pair differ significantly. The challenge in developing machine learning methods for machine translation can be summarised in two points. First is the ability to characterise language features such as morphology, syntax and semantics. Second is adapting complex learning algorithms to process large corpora. Posing phrase movement as a classification problem, we exploit recent developments in solving large-scale Support Vector Machines (SVMs), multiclass SVMs and multinomial logistic regression. Using dual coordinate descent methods for learning, we provide a mechanism to shrink the amount of training data required for each iteration, producing significant savings in time and memory while preserving the accuracy of the models. These efficient classifiers allow us to build large-scale discriminative reordering models. We also explore a generative learning approach, namely naive Bayes. Our Bayesian model is shown to be superior to the widely used lexicalised reordering model: it is fast to train, and its storage requirement is many times smaller. Although discriminative models might achieve higher accuracy than naive Bayes, the absence of iterative learning is a critical advantage for very large corpora. Our reordering models are fully integrated with the Moses machine translation system, which is widely used in the community. Evaluated on large-scale translation tasks, our models have proved successful for two very different language pairs: Arabic-English and German-English.
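The naive Bayes approach to reordering can be sketched as a generative classifier over orientation classes (monotone, swap, discontinuous are the classes standardly used in lexicalised reordering). The toy features and data below are assumptions for illustration; the thesis's actual feature set and corpora are not reproduced:

```python
import math
from collections import defaultdict

# Naive Bayes orientation classifier for phrase reordering: count how
# often features co-occur with each orientation class, then pick the
# class maximising log P(class) + sum of log P(feature | class).

class NaiveBayesReordering:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # add-alpha smoothing
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, examples):
        for features, orientation in examples:
            self.class_counts[orientation] += 1
            for f in features:
                self.feature_counts[orientation][f] += 1
                self.vocab.add(f)

    def predict(self, features):
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for cls, count in self.class_counts.items():
            score = math.log(count / total)  # log prior
            denom = (sum(self.feature_counts[cls].values())
                     + self.alpha * len(self.vocab))
            for f in features:
                score += math.log((self.feature_counts[cls][f] + self.alpha)
                                  / denom)
            if score > best_score:
                best, best_score = cls, score
        return best

# Toy German-English phrase-pair features (hypothetical).
examples = [
    ({"src=der", "tgt=the"}, "monotone"),
    ({"src=nicht", "tgt=not"}, "swap"),
    ({"src=der", "tgt=a"}, "monotone"),
    ({"src=nicht", "tgt=never"}, "swap"),
]
nb = NaiveBayesReordering()
nb.train(examples)
print(nb.predict({"src=der"}))  # → monotone
```

Training is a single counting pass over the corpus, which is why the abstract notes the absence of iterative learning as an advantage at very large scale.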

Bayesian learning for multi-agent coordination

Allen-Williams, Mair January 2009 (has links)
Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, decentralisation, openness, dynamism and uncertainty. As work in these fields develops, such systems face increasing challenges. Two particular challenges are decision making in uncertain and partially observable environments, and coordination with other agents in such environments. Although uncertainty and coordination have been tackled as separate problems, formal models for an integrated approach are typically restricted to simple classes of problem and do not scale to problems with tens of agents and millions of states. We improve on these approaches by extending a principled Bayesian model into more challenging domains, using Bayesian networks to visualise specific cases of the model and thereby aid in deriving the update equations for the system. One approach which has been shown to scale well for networked offline problems uses finite state machines to model other agents. We use this insight to develop an approximate, scalable algorithm applicable to our general model, in combination with adaptations of a number of existing approximation techniques, including state clustering. We examine the performance of this approximate algorithm on several cases of an urban rescue problem with respect to differing problem parameters. Specifically, we first consider scenarios where agents are aware of the complete situation but are uncertain about the behaviour of others; that is, our model with all elements but the actions observable. Secondly, we examine the more complex case where agents can see the actions of others but cannot see the full state, and thus are unsure about the beliefs of others. Finally, we look at the performance of the partially observable state model when the system is dynamic or open.
We find that our best response algorithm consistently outperforms a handwritten strategy for the problem, more noticeably as the number of agents and the number of states involved in the problem increase.
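The core of modelling other agents can be illustrated by a Bayesian update over a small set of candidate behaviour models, each assigning probabilities to observed actions. This is a minimal sketch of that idea only; the thesis's algorithm (finite-state-machine models, state clustering, and scale to many agents) is not reproduced, and the models below are hypothetical:

```python
# Bayesian belief update over candidate models of another agent.
# Observing an action multiplies each model's likelihood into the
# belief, which is then renormalised.

def update_belief(belief, likelihoods, action):
    """belief: {model: prob}; likelihoods: {model: {action: prob}}."""
    posterior = {m: p * likelihoods[m].get(action, 1e-9)
                 for m, p in belief.items()}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

# Two hypothetical behaviour models in an urban-rescue-like setting.
likelihoods = {
    "rescuer":   {"search": 0.7, "wait": 0.3},
    "bystander": {"search": 0.1, "wait": 0.9},
}
belief = {"rescuer": 0.5, "bystander": 0.5}
for observed in ["search", "search", "wait"]:
    belief = update_belief(belief, likelihoods, observed)
print(max(belief, key=belief.get))  # → rescuer
```

A best-response strategy then chooses the agent's own action against the expected behaviour under this posterior, which is what the benchmark above measures at scale.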

Person detection using wide angle overhead cameras

Ahmed, Imran January 2014 (has links)
In cluttered environments, the overhead view is often preferred because looking down can afford better visibility and coverage. However, detecting people in this or any other extreme view can be challenging, as there is significant variation in a person's appearance depending only on their position in the image. The Histogram of Oriented Gradients (HOG) algorithm, a standard algorithm for pedestrian detection, does not perform well here, especially where the image quality is poor: on the SCOVIS dataset, on average, nine false detections occur per image. We propose a new algorithm in which the image patch containing a person is transformed to remove positional dependency before the HOG algorithm is applied; this eliminates 98% of the spurious detections in the noisy images from our industrial assembly line and detects people with 95% efficiency. The algorithm is demonstrated as part of a simple but effective tracking-by-detection system, which uses simple motion detection to highlight regions of the image that might contain people; these regions are then searched with our algorithm. Evaluated on a number of SCOVIS sequences, it correctly tracks people approximately 99% of the time; by comparison, the example algorithms in OpenCV are less than approximately 50% efficient. Finally, we show our algorithm's potential for generalisation across different scenes. A classifier trained on the SCOVIS dataset achieves a detection rate of 96% when applied to new overhead data recorded at Southampton. Using the output from this stage to generate labelled 'true positive' data, we train a new model that achieves a detection rate of 98%. Both results compare favourably with the performance of a model trained on manually labelled images, which achieves a detection rate of greater than 99%.
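The heart of the HOG descriptor is a per-cell histogram of gradient orientations weighted by gradient magnitude. A simplified sketch of that core step in plain Python (the full algorithm adds block normalisation and overlapping blocks, and the thesis applies its positional-dependency transform first; neither is shown here):

```python
import math

# Orientation histogram for one HOG cell: accumulate gradient magnitude
# into bins over the unsigned orientation range 0-180 degrees.

def hog_cell(patch, bins=9):
    """Unsigned gradient-orientation histogram for a 2D intensity patch."""
    hist = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]  # central difference, x
            gy = patch[r + 1][c] - patch[r - 1][c]  # central difference, y
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / 180.0 * bins) % bins] += magnitude
    return hist

# A patch with a vertical edge: the gradient points horizontally
# (angle ~ 0 degrees), so all the mass falls into the first bin.
patch = [[0, 0, 9, 9]] * 4
hist = hog_cell(patch)
print(hist.index(max(hist)))  # → 0
```

A detector concatenates such histograms over a grid of cells and feeds the resulting vector to a classifier, typically a linear SVM.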

Design and experimental evaluation of iterative learning controllers on a multivariable test facility

Dinh Van, Thanh January 2013 (has links)
Iterative learning control (ILC) algorithms are employed in many applications, especially those involving single-input, single-output plants undertaking repeated tasks over a finite time interval. ILC is applicable to systems executing a repeated trajectory-tracking task, and uses data recorded over previous trials in constructing the next control input. The objective is to sequentially improve tracking accuracy as the trial number increases. This method has been shown to operate well in the presence of significant modelling uncertainty and exogenous disturbances. However, for MIMO (multiple-input, multiple-output) systems, far fewer applications are reported in the literature, and minimal benchmarking and evaluation studies have been undertaken. To tackle this shortcoming, this thesis focuses on designing an electromechanical test-bed which can expose the weaknesses and advantages of various ILC methods on a purpose-built platform. The system has two inputs and two outputs and enables variation of the interaction between inputs and outputs through simple and rapid parameter modification. This interaction variation permits the control problem to be modified, allowing the challenge presented to the ILC controller to be stipulated. The system is made up of two back-to-back differential gearboxes with mass-spring-damper components to increase the system order and control difficulty. In its standard configuration, two motors provide torque to the two input ports and the two outputs are measured using encoders. This work enables a comparative summary of ILC approaches for MIMO systems, together with modifications for improved performance and robustness, and the development of new control schemes incorporating input and output constraints and point-to-point tracking capability. The system can also be configured in a variety of other arrangements, varying the number of inputs and outputs, and allowing noise to be injected using a DC motor.
Models of the system are derived using a lumped-parameter representation, as well as purely from experimental input and output data. Simple-structure controllers such as proportional-type ILC, derivative-type ILC and phase-lead ILC are then applied to test the combined performance of the controller and the MIMO system, and to establish its efficacy as a benchmarking platform. Advanced controllers are then derived and applied, and experimental data are used to confirm theoretical findings concerning the links between interaction and convergence rate, input norm and robustness.
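Proportional-type ILC, the simplest of the controllers mentioned, updates each trial's input with the next-step tracking error: u_{k+1}(t) = u_k(t) + L·e_k(t+1). A single-input toy simulation of that update law on an assumed first-order plant (the thesis platform is two-input, two-output; this sketch only shows the trial-to-trial error convergence):

```python
# Proportional-type ILC on a toy first-order discrete-time plant.

def run_trial(u, a=0.5, b=1.0):
    """Simulate x(t+1) = a*x(t) + b*u(t) from x(0) = 0; return the states."""
    x = [0.0]
    for t in range(len(u)):
        x.append(a * x[-1] + b * u[t])
    return x

def ilc(reference, trials=30, gain=0.5):
    """Repeat the trial, applying u_{k+1}(t) = u_k(t) + gain * e_k(t+1).
    Returns the peak tracking error recorded on each trial."""
    u = [0.0] * (len(reference) - 1)
    errors = []
    for _ in range(trials):
        x = run_trial(u)
        e = [r - xi for r, xi in zip(reference, x)]
        errors.append(max(abs(v) for v in e[1:]))
        u = [ui + gain * e[t + 1] for t, ui in enumerate(u)]  # P-type update
    return errors

reference = [0.0, 1.0, 2.0, 1.0, 0.0]
errors = ilc(reference)
print(errors[-1] < 1e-3 < errors[0])  # error shrinks over the trials
```

With this plant and gain the trial-to-trial error contracts by roughly a factor |1 - L·b| = 0.5 per trial, which is the kind of link between plant parameters and convergence rate the test facility is built to probe experimentally.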

An artificial experimenter for automated response characterisation

Lovell, Christopher James January 2011 (has links)
Biology exhibits information processing capabilities, such as parallel processing and context sensitivity, which go far beyond those of modern conventional electronic computation. The interactions of proteins such as enzymes are particularly interesting, as they appear to act as efficient biomolecular computers. Harnessing proteins as biomolecular computers is currently not possible, as little is understood about their interactions outside a physiological context. Understanding these interactions can only come through experimentation. However, the size and dimensionality of the available experiment parameter spaces far outsize the resources typically available to investigate them, restricting the knowledge acquisition possible. To address this restriction, new tools are required to enable the development of biomolecular computation. One such tool is autonomous experimentation: a union of machine learning and computer-controlled laboratory equipment within a closed-loop machine. Both the machine learning and the experiment platform can be designed to address the resource problem. The machine learning element provides techniques for intelligent experiment selection and effective data analysis that reduce the number of experiments required to learn from, whilst resource-efficient automated experiment platforms, such as lab-on-chip technology, minimise the volume of reactants per experiment. Here the machine learning aspect of autonomous experimentation is considered. These machine learning techniques must act as an artificial experimenter, mimicking the processes of successful human experimenters by developing hypotheses and selecting the experiments to perform. Using this biological domain as motivation, an investigation of learning from a small set of noisy and sometimes erroneous observations is presented.
Presented is a principled multiple-hypotheses technique, motivated by the philosophy of science and by machine learning, for producing potential response characteristics, combined with active learning techniques that provide a robust method for hypothesis separation, and a Bayesian surprise method for managing the exploration-exploitation trade-off between discovering new features and disproving hypotheses. The techniques are validated through a laboratory trial in which successful biological characterisation is demonstrated.
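Hypothesis separation through active learning can be sketched as choosing the next experiment where the candidate hypotheses disagree most, so that the outcome is maximally informative about which to discard. The toy response characteristics below are assumptions for illustration, and the Bayesian surprise component is omitted:

```python
# Select the next experiment at the input where candidate hypotheses
# about the response characteristic disagree most.

def disagreement(hypotheses, x):
    """Spread of the hypotheses' predicted responses at input x."""
    predictions = [h(x) for h in hypotheses]
    return max(predictions) - min(predictions)

def next_experiment(hypotheses, candidates):
    """Candidate input whose outcome best separates the hypotheses."""
    return max(candidates, key=lambda x: disagreement(hypotheses, x))

# Two toy response characteristics: linear vs saturating.
hypotheses = [
    lambda x: 0.5 * x,            # hypothesis 1: linear response
    lambda x: min(0.5 * x, 1.0),  # hypothesis 2: response saturates at 1.0
]
candidates = [0.0, 1.0, 2.0, 3.0, 4.0]
print(next_experiment(hypotheses, candidates))  # → 4.0
```

Running the chosen experiment and comparing the observation against each prediction then disproves (or weakens) whichever hypothesis fits worst, which is exactly where a small experiment budget is best spent.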

Towards a framework and model for acceptable user experiences in e-government physical and virtual identity access management systems

Alotaibi, Sara Jeza January 2013 (has links)
The proliferation of services on the internet has aggravated the issue of maintaining multiple identities, such as virtual identities based on specific login credentials like usernames, passwords and PINs. Multiple physical identities also prove difficult to maintain, since different sources require the presence of different smart cards, mobile devices or other proofs of identity. The modern world is therefore populated with so many virtual and physical Identity Access Management Systems (IAMS) that individuals are required to maintain multiple passwords and login credentials. The tedious task of remembering these can be minimised through an innovative approach: single sign-on mechanisms. In recent times, several systems have been developed to provide physical and virtual IAMS; however, most have not been very successful according to specific criteria. Furthermore, alongside increasing the level of awareness of the need to deploy interoperable physical and virtual IAMS, there is an immediate need to establish clear guidelines for the successful integration of the two media. The importance of, and motivation for, integrating the two media is discussed in this thesis with respect to three perspectives: security, which includes identity; user experience, comprising usability; and acceptability, containing accessibility. Few frameworks and models abide by the guidelines for all of these perspectives; the thesis therefore addresses the immediate need to establish a framework and a model of acceptable user experience for the successful integration of the two media for public services within the e-government domain. The IAMS framework is based on attributes from the researched theories of the three perspectives and on expert evaluations of the unique nine themes.
Regarding the user evaluation testing the proposed Unified Theory of Acceptance and Use of Technology (UTAUT) model, there is an indirect effect on behavioural intention to use a new prototype system (the Ubiquitous Identity Access Management System, "UbIAMS") through performance expectancy, effort expectancy and social influence, and through items pertaining to acceptability and user experience.

Expressive and efficient bounded model checking of concurrent software

Morse, Jeremy January 2015 (has links)
To improve automated verification techniques for ANSI-C software, I examine temporal logics for describing program properties, and techniques for increasing the speed of program verification, for both single-threaded and concurrent programs, based on the model checker ESBMC. A technique for evaluating LTL formulae over finite program traces is proposed and evaluated on a piece of industrial software and a suite of benchmarks, with favourable results. Efficient formulations of the model checking problem for SMT solvers are evaluated, and the performance of different solvers compared. Finally, a number of optimisations for concurrent program verification not previously applied to symbolic software model checking are evaluated, resulting in an order-of-magnitude performance improvement over ESBMC's prior, already internationally competitive performance.
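Bounded model checking unrolls a program's transition relation up to a depth k and searches for a property violation within that bound. ESBMC does this symbolically, encoding the unrolling as an SMT formula; the brute-force sketch below only illustrates the bounded-unrolling idea on a toy transition system:

```python
# Bounded model checking by explicit unrolling: enumerate all runs of a
# small transition system up to bound k and search for a bad state.
# (A symbolic checker would encode this search as one SMT query.)

def bmc(init, transitions, bad, k):
    """Return a counterexample trace reaching a bad state within k steps,
    or None if no violation exists up to the bound."""
    frontier = [[s] for s in init]
    for _ in range(k + 1):
        next_frontier = []
        for trace in frontier:
            if bad(trace[-1]):
                return trace  # counterexample found within the bound
            for succ in transitions(trace[-1]):
                next_frontier.append(trace + [succ])
        frontier = next_frontier
    return None

# Toy system: a counter that increments by 1 or 2; "bad" state is 5.
init = [0]
transitions = lambda s: [s + 1, s + 2] if s < 5 else []
bad = lambda s: s == 5
trace = bmc(init, transitions, bad, k=5)
print(trace is not None and trace[-1] == 5)  # a violation within the bound
```

The characteristic limitation of BMC is visible here too: `bmc(..., k=1)` would report no violation, saying nothing about deeper behaviours beyond the bound.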
