81

Compositional software verification based on game semantics

Dimovski, Aleksandar January 2007 (has links)
One of the major challenges in computer science is to put programming on a firmer mathematical basis, in order to improve the correctness of computer programs. Automatic program verification is acknowledged to be a very hard problem, but current work is reaching the point where at least the foundational aspects of the problem can be addressed and it is becoming a part of industrial software development. This thesis presents a semantic framework for verifying safety properties of open sequential programs. The presentation is focused on an Algol-like programming language that embodies many of the core ingredients of imperative and functional languages and incorporates data abstraction in its syntax. Game semantics is used to obtain a compositional, incremental way of generating accurate models of programs. Model-checking is made possible by giving certain kinds of concrete automata-theoretic representations of the model. A data-abstraction refinement procedure is developed for model-checking safety properties of programs with infinite integer types. The procedure starts by model-checking the most abstract version of the program. If no counterexample is found, or a genuine one is, the procedure terminates. Otherwise, it uses the spurious counterexample to refine the abstraction for the next iteration. Abstraction refinement, assume-guarantee reasoning and the L* algorithm for learning regular languages are combined to yield a procedure for compositional verification. Construction of a global model is avoided using assume-guarantee reasoning and the L* algorithm, by learning assumptions for arbitrary subprograms. An implementation based on the FDR model checker for the CSP process algebra demonstrates the practicality of the methods.
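The refinement procedure described in this abstract follows a counterexample-guided loop. As a minimal sketch, assuming hypothetical model_check, is_genuine and refine callables in place of the game-semantic model construction and FDR-based checking used in the thesis:

```python
from typing import Any, Callable, Optional

def verify(program: Any,
           abstraction: Any,
           model_check: Callable[[Any, Any], Optional[Any]],
           is_genuine: Callable[[Any, Any], bool],
           refine: Callable[[Any, Any], Any]):
    """Data-abstraction refinement loop (illustrative sketch only)."""
    while True:
        cex = model_check(program, abstraction)   # check the most abstract model first
        if cex is None:
            return "safe"                         # no counterexample: terminate
        if is_genuine(program, cex):
            return ("unsafe", cex)                # genuine counterexample: terminate
        abstraction = refine(abstraction, cex)    # spurious: refine for next iteration
```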
82

Creative software development : an empirical modelling framework

Ness, Paul Edward January 1997 (has links)
The commercial success of software development depends on innovation [Nar93a]. However, conventional approaches inhibit the development of innovative products that embody novel concepts. This thesis argues that this limitation of conventional software development is largely due to its use of analytical artefacts, and that other activities, notably Empirical Modelling and product design, avoid the same limitation by using creative artefacts. Analytical artefacts promote the methodical representation of familiar subjects, whereas creative artefacts promote the exploratory representation of novel subjects. The subjects, constraints, environments and knowledge associated with a design activity are determined by the nature of its artefacts. The importance of artefacts was discovered by examining the representation of different kinds of lift system in respect of Empirical Modelling, product design and software development. The artefacts were examined by identifying creative properties, as characterized in the theory of creative cognition [FWS92], together with their analytical counterparts. The processes of construction were examined by identifying generative and exploratory actions. It was found that, in software development, the artefacts were analytical and the processes transformational, whereas, in Empirical Modelling and product design, the artefacts were both creative and analytical, and the processes exploratory. A creative approach to software development using both creative and analytical artefacts is proposed for the development of innovative products. This new approach would require a radical departure from the established ideas and principles of software development. The existing paradigm would be replaced by a framework based on Empirical Modelling. Empirical Modelling can be thought of as a situated approach to modelling that uses the computer in exploratory ways to construct artefacts. The likelihood of the new paradigm being adopted is assessed by considering how it addresses the topical issues in software development.
83

Trust-based social mechanism to counter deceptive behaviour

Lim Choi Keung, Sarah Niukyun January 2011 (has links)
The actions of an autonomous agent are driven by its individual goals and its knowledge and beliefs about its environment. As agents can be assumed to be self-interested, they strive to achieve their own interests and therefore their behaviour can sometimes be difficult to predict. However, some behaviour trends can be observed and used to predict the future behaviour of agents, based on their past behaviour. This is useful for agents to minimise the uncertainty of interactions and ensure more successful transactions. Furthermore, uncertainty can originate from malicious behaviour, in the form of collusion, for example. Agents need to be able to cope with this to maximise their benefits and reduce poor interactions with collusive agents. This thesis provides a mechanism to support countering deceptive behaviour by enabling agents to model their agent environment, as well as their trust in the agents they interact with, while using the data they already gather during routine agent interactions. As agents interact with one another to achieve the goals they cannot achieve alone, they gather information for modelling the trust and reputation of interaction partners. The main aim of our trust and reputation model is to enable agents to select the most trustworthy partners to ensure successful transactions, while gathering a rich set of interaction and recommendation information. This rich set of information can be used for modelling the agents' social networks. Decentralised systems allow agents to control and manage their own actions, but at the cost of limiting the agents' view to only local interactions. However, the representation of the social networks helps extend an agent's view and thus extract valuable information from its environment. This thesis presents how agents can build such a model of their agent networks and use it to extract information for analysis on the issue of collusion detection.
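As an illustration of how past interaction outcomes can feed a trust estimate, here is a toy sketch; the beta-reputation-style formula and the TrustModel class are illustrative assumptions, not the mechanism developed in the thesis:

```python
from collections import defaultdict

class TrustModel:
    """Toy direct-trust model built from routine interaction outcomes."""

    def __init__(self):
        # partner id -> [successful interactions, failed interactions]
        self.outcomes = defaultdict(lambda: [0, 0])

    def record(self, partner, success):
        self.outcomes[partner][0 if success else 1] += 1

    def trust(self, partner):
        s, f = self.outcomes[partner]
        return (s + 1) / (s + f + 2)   # Laplace-smoothed success rate

    def most_trustworthy(self, candidates):
        return max(candidates, key=self.trust)   # partner selection
```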
84

An approach to source-code plagiarism detection investigation using latent semantic analysis

Cosma, Georgina January 2008 (has links)
This thesis looks at three aspects of source-code plagiarism. The first aspect of the thesis is concerned with creating a definition of source-code plagiarism; the second aspect is concerned with describing the findings gathered from investigating the Latent Semantic Analysis information retrieval algorithm for source-code similarity detection; and the final aspect of the thesis is concerned with the proposal and evaluation of a new algorithm that combines Latent Semantic Analysis with plagiarism detection tools. A recent review of the literature revealed that there is no commonly agreed definition of what constitutes source-code plagiarism in the context of student assignments. This thesis first analyses the findings from a survey carried out to gain an insight into the perspectives of UK Higher Education academics who teach programming on computing courses. Based on the survey findings, a detailed definition of source-code plagiarism is proposed. Secondly, the thesis investigates the application of an information retrieval technique, Latent Semantic Analysis, to derive semantic information from source-code files. Various parameters drive the effectiveness of Latent Semantic Analysis. The performance of Latent Semantic Analysis using various parameter settings and its effectiveness in retrieving similar source-code files when optimising those parameters are evaluated. Finally, an algorithm for combining Latent Semantic Analysis with plagiarism detection tools is proposed and a tool is created and evaluated. The proposed tool, PlaGate, is a hybrid model that allows for the integration of Latent Semantic Analysis with plagiarism detection tools in order to enhance plagiarism detection. In addition, PlaGate has a facility for investigating the importance of source-code fragments with regard to their contribution towards proving plagiarism. PlaGate provides graphical output that indicates the clusters of suspicious files and source-code fragments.
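A minimal sketch of the LSA pipeline for source-code similarity, using scikit-learn; the toy corpus and parameter values are illustrative, not the settings evaluated in the thesis or the PlaGate tool:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: each string stands for one student's source-code file.
files = [
    "int total = 0; for (int i = 0; i < n; i++) total += marks[i];",
    "int sum = 0; for (int j = 0; j < n; j++) sum += marks[j];",
    "printf(\"done\"); return 0;",
]

# Build a term-document matrix, then project it into a low-rank
# semantic space; weighting and dimensionality are the tunable parameters.
tfidf = TfidfVectorizer().fit_transform(files)
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)

# High off-diagonal scores flag pairs of suspiciously similar files.
print(cosine_similarity(lsa))
```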
85

Ontology engineering for ICT systems using semantic relationship mining and statistical social network analysis

Ma, Xiao January 2011 (has links)
In information science, an ontology is a formal representation of knowledge as a set of concepts within a domain, and the relationships between those concepts. It is used to reason about the entities within that domain, and may be used to describe the domain (Wikipedia, 2011). This research takes two case study ICT applications in engineering and medicine, and evaluates the applications and supporting ontology to identify the main requirements for ontology in ICT systems. A study of existing ontology engineering methodologies revealed difficulties in generating sufficient breadth and depth in domain concepts that contain rich internal relationships. These restrictions usually arise because of a heavy dependence on human experts in these methodologies. This research has developed a novel ontology engineering methodology – SEA – which economically, quickly and reliably generates ontology for domains, providing the breadth and depth of coverage required for automated ICT systems. Normally SEA requires only three pairs of keywords from a domain expert. Through an automated snowballing mechanism that retrieves semantically related terms from the Internet, ontology can be generated relatively quickly. This mechanism also enhances and enriches the binary relationships in the generated ontology to form a network structure, rather than a traditional hierarchy. The network structure can then be analysed through a series of statistical network analysis methods. These enable concept investigation to be undertaken from multiple perspectives, with fuzzy matching and enhanced reasoning through directional weight-specified relationships. The SEA methodology was used to derive medical and engineering ontology for two existing ICT applications. The derived ontology was quicker to generate, relied less on expert contribution, and provided richer internal relationships. The methodology potentially has the flexibility and utility to be of benefit in a wide range of applications. SEA also exhibits "reliability" and "generalisability" as an ontology engineering methodology. It appears to have application potential in areas such as machine translation, semantic tagging and knowledge discovery. Future work needs to confirm its potential for generating ontology in other domains, and to assess its operation in semantic tagging and knowledge discovery.
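A sketch of the snowballing idea, assuming a hypothetical related_terms callable in place of the web retrieval SEA actually performs:

```python
import networkx as nx

def snowball(seed_pairs, related_terms, depth=2):
    """Grow an ontology network from seed keyword pairs (sketch only).

    related_terms(term) is a hypothetical callable returning terms
    semantically related to `term`; SEA retrieves these from the Internet.
    """
    g = nx.DiGraph()
    frontier = {term for pair in seed_pairs for term in pair}
    for _ in range(depth):
        next_frontier = set()
        for term in frontier:
            for rel in related_terms(term):
                if rel not in g:
                    next_frontier.add(rel)
                g.add_edge(term, rel)   # binary relationship: term -> rel
        frontier = next_frontier
    return g   # a network, not a hierarchy: amenable to nx centrality analysis
```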
86

A multiple-SIMD architecture for image and tracking analysis

Kerbyson, Darren James January 1992 (has links)
The computational requirements for real-time image-based applications are such as to warrant the use of a parallel architecture. Commonly used parallel architectures conform to the classifications of Single Instruction Multiple Data (SIMD) or Multiple Instruction Multiple Data (MIMD). Each class of architecture has its advantages and disadvantages. For example, SIMD architectures can be used on data-parallel problems, such as the processing of an image, whereas MIMD architectures are more flexible and better suited to general-purpose computing. Both types of processing are typically required for the analysis of the contents of an image. This thesis describes a novel massively parallel heterogeneous architecture, implemented as the Warwick Pyramid Machine. Both SIMD and MIMD processor types are combined within this architecture. Furthermore, the SIMD array is partitioned into smaller SIMD sub-arrays, forming a Multiple-SIMD array. Thus, local data-parallel, global data-parallel, and control-parallel processing are supported. After describing the present options available in the design of massively parallel machines and the nature of the image analysis problem, the architecture of the Warwick Pyramid Machine is described in some detail. The performance of this architecture is then analysed, both in terms of peak available computational power and in terms of representative applications in image analysis and numerical computation. Two tracking applications are also analysed to show the performance of this architecture. In addition, they illustrate the possible partitioning of applications between the SIMD and MIMD processor arrays. Load-balancing techniques are then described which have the potential to increase the utilisation of the Warwick Pyramid Machine at run-time. These include mapping techniques for image regions across the Multiple-SIMD arrays, and for the compression of sparse data. It is envisaged that these techniques may be found useful in other parallel systems.
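The mapping of image regions onto SIMD sub-arrays is essentially a load-balancing problem. A toy sketch using a standard greedy heuristic; the thesis describes its own mapping techniques, so this is illustrative only:

```python
def map_regions(region_costs, n_subarrays):
    """Assign image regions to SIMD sub-arrays, heaviest region first,
    always to the currently least-loaded sub-array (greedy LPT heuristic)."""
    loads = [0] * n_subarrays
    assignment = {}
    for region, cost in sorted(region_costs.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))   # least-loaded sub-array
        assignment[region] = target
        loads[target] += cost
    return assignment

# Example: four regions of differing estimated cost across two sub-arrays.
print(map_regions({"sky": 10, "road": 40, "car": 25, "tree": 15}, 2))
```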
87

Swarm-inspired solution strategy for the search problem of unmanned aerial vehicles

Li, Xingbo January 2010 (has links)
Learning from the emergent behaviour of social insects, this research studies the influence of the environment on the collective problem-solving of insect societies and distributed intelligent systems. A literature survey was conducted to understand the emergent paradigms of social insects, and to investigate current research and development of distributed intelligent systems. On the basis of this investigation, the environment is considered to have a significant impact on the effectiveness and efficiency of collective problem-solving. A framework of collective problem-solving is developed in an interdisciplinary context to describe the influence of the environment on insect behaviour and on problem-solving in distributed intelligent systems. The environment's roles and responsibilities are transformed into, and deployed as, a problem-solving mechanism for distributed intelligent systems. A swarm-inspired search strategy is proposed as a behaviour-based cooperative search solution. It is applied to the cooperative search problem of Unmanned Aerial Vehicles (UAVs), with a series of experiments implemented for evaluation. The search environment represents the specification and requirements of the search problem; it defines the tasks to be achieved and maintained; and it is where targets are locally observable and accessible to UAVs. Therefore, the information provided through the search environment is used to define rules of behaviour for the UAVs. The initial detection of a target signal leads to modified configurations of the search environment, which mediate local communications among UAVs and are used as a means of coordination. The experimental results indicate that the swarm-inspired search strategy is a valuable alternative to current approaches to the cooperative UAV search problem. In the proposed search solution, a diagonal formation of two UAVs produces better performance than a triangular formation of three UAVs in terms of average detection time and the number of targets located within the maximum time limit.
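A toy sketch of environment-mediated (stigmergic) search on a grid, where the only coordination between UAVs is the shared record of visited cells; all rules and parameters here are illustrative assumptions, not the strategy evaluated in the thesis:

```python
import random

def swarm_search(grid_size, targets, n_uavs=2, max_steps=500, seed=0):
    """UAVs wander a toroidal grid, preferring cells the swarm has not
    yet visited; the shared 'environment' (visited set) coordinates them."""
    rng = random.Random(seed)
    targets = set(targets)
    visited = set()
    uavs = [(rng.randrange(grid_size), rng.randrange(grid_size))
            for _ in range(n_uavs)]
    found, step = set(), 0
    while step < max_steps and found != targets:
        for i, (x, y) in enumerate(uavs):
            moves = [((x + dx) % grid_size, (y + dy) % grid_size)
                     for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0))]
            fresh = [m for m in moves if m not in visited] or moves
            uavs[i] = rng.choice(fresh)      # prefer unvisited cells
            visited.add(uavs[i])
            if uavs[i] in targets:
                found.add(uavs[i])           # target locally observed
        step += 1
    return found, step

print(swarm_search(20, [(5, 5), (12, 3)]))
```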
88

A complexity theory of parallel computation

Parberry, Ian January 1984 (has links)
Parallel complexity theory is currently one of the fastest growing fields of theoretical computer science. This rapid growth has led to a proliferation of parallel machine models and theoretical frameworks. Our aim is to construct a unified theory of parallel computation based on a network model. We claim that the network paradigm is fundamental to the understanding of parallel computation, and support this claim by providing new and improved theoretical results, and new approaches to old questions concerning "reasonable" and "practical" models. This thesis is made up of eight chapters. Chapter 1 contains the introduction. In chapter 2 we define the basic model, and justify our choice of a unit-cost measure of time, a uniform assignment of programs to processors, and simultaneous processor activation. Chapter 3 compares the network model to a variety of others, including fixed-structure networks and shared-memory machines. We explore the concepts of "reasonableness" and "practicality" in parallel machine models, and show that even "reasonable" parallel computers are much faster than sequential ones. Chapter 4 is devoted to programming techniques for a "practical" network model (which we call a feasible network), covering interconnection patterns, useful algorithms, and some processor-saving theorems. In chapter 5 we find efficient simulations of the general network model on more practical machines, including a universal feasible network, and uniform circuits. Chapter 6 extends the network model, and defines a new resource, that of arity. Although increasing arity increases computing power, some efficient constant-arity universal machines are found. Chapter 7 takes a final look at universal networks, concentrating on lower bounds and the conditions under which they hold. Chapter 8 contains the conclusion.
89

Resource allocation for multi-sensory virtual environments

Doukakis, Efstratios January 2016 (has links)
Fidelity is of key importance if virtual environments are to be used as authentic representations of real-life scenarios. However, simulating the multitude of senses that comprise the human sensory system is a computationally challenging task. With limited computational resources, it is essential to distribute them carefully in order to deliver the best possible perceptual experience for the user. This thesis investigates this balance of resources across multiple scenarios where combined audio, visual and olfactory cues are delivered to the user. Starting with bi-modal virtual environments where audio and visual stimuli are delivered to the users, a subjective experimental study, denoted E1, was undertaken in which participants (N = 35) allocated five fixed resource budgets between the quality of the displayed graphics and acoustics stimuli. In this experiment, increasing the quality of one of the sensory stimuli decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget is increased, the allocation ratio between graphics and acoustics decreases significantly. Based on the results, an audio-visual quality prediction model is proposed and successfully validated against previously untested budgets and scenarios. The introduction of realistic olfactory stimuli is considered necessary if multisensory virtual environments are to be used as genuine representations of real-life experiences. The estimation and delivery of smell impulses involves many challenges and differs significantly from the methods used for computing and displaying auditory and visual cues. Furthermore, the absence of a quality metric for assessing olfactory stimuli makes the introduction of an olfactory quality scale in the resource allocation framework significantly challenging. The work presented in this thesis investigates whether finer spatial discretisation of the computational domain, a frequently used technique for increasing numerical stability in fluid transport simulations, can be used as a smell quality metric that has a perceptual impact on the users of the virtual environment. In this context, finer spatial discretisation levels are evaluated against an estimate of the Just Noticeable Difference (JND) threshold for smell intensity, obtained through an experimental study (N = 20) carried out in two phases. This experiment is denoted E2 throughout this thesis. Findings demonstrate that the JND threshold is larger than the concentration differences given by progressively more accurate smell transport simulations. This outcome enables computational savings by avoiding exhaustive smell transport simulations that provide no perceptual benefit to the user. Having considered the limitations associated with assessing smell impulses in terms of quality, a third experimental study, denoted E3, is proposed and exploited for resource allocation in tri-modal virtual environments. The experimental layout of E3 (N = 25) builds on the framework proposed in E1 and includes the delivery of physically accurate smell impulses to the user. The display of the smell bursts is implemented in a binary fashion (two levels, or ON/OFF smell), alongside the quality levels for the senses of vision and hearing as selected in E1. The smell concentration level presented in this experiment follows from the results of the JND threshold estimation for odour concentration in the psychophysics study E2.
In conclusion, the research presented shows that human preference criteria can be fully exploited in the design and delivery of multi-sensory virtual experiences. Experimental data can be used to derive computationally inexpensive prediction models that direct resource allocation in rendering pipelines where diverse sensory stimuli are simulated and delivered to the users.
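As a sketch of how such a prediction model might be derived, the snippet below fits a simple log-linear predictor of the graphics/acoustics split as a function of budget; the calibration numbers are placeholders only, standing in for the E1 measurements:

```python
import numpy as np

# Placeholder calibration points (budget, fraction allocated to graphics);
# the real values would come from the E1 experimental data.
budgets = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
graphics_fraction = np.array([0.80, 0.74, 0.68, 0.62, 0.58])

# Fit fraction = a * ln(budget) + b, mirroring the observed trend that
# the graphics share decreases as the budget grows.
a, b = np.polyfit(np.log(budgets), graphics_fraction, 1)

def allocate(budget):
    """Split a computational budget between graphics and acoustics."""
    frac = float(np.clip(a * np.log(budget) + b, 0.0, 1.0))
    return {"graphics": frac * budget, "acoustics": (1.0 - frac) * budget}

print(allocate(6.0))
```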
90

On reducing the data sparsity in collaborative filtering recommender systems

Guan, Xin January 2017 (has links)
A recommender system is one of the most common software tools and techniques for generating personalized recommendations. Collaborative filtering, as an effective recommender system approach, predicts a user's preferences (ratings) on an item based on the previous preferences of other users. However, collaborative filtering suffers from the data sparsity problem: the users' preference data on items are usually too few to reveal the users' true preferences, which makes the recommendation task difficult. This thesis focuses on approaches to reducing the data sparsity in collaborative filtering recommender systems. Active learning algorithms are effective in reducing the sparsity problem for recommender systems by requesting users to give ratings to some items when they come in. However, this process focuses on new users and is often based on the assumption that a user can provide ratings for any queried items, which is unrealistic and costly. Take movie recommendation as an example: to rate a movie that is selected by an active learning strategy, a user has to watch it. On the other hand, the user may be frustrated when asked to rate a movie that he/she has not watched. This could lower the customer's confidence in, and expectations of, the recommender system. Instead, an ESVD algorithm is proposed which combines classic matrix factorization algorithms with ratings completion inspired by active learning, allowing the system to 'add' ratings automatically through learning. This general framework can be incorporated with different SVD-based algorithms, such as SVD++, by proposing the ESVD++ method. The proposed ESVD model is further explored by presenting the MESVD approach, which learns the model iteratively to obtain more precise prediction results. Two variants of the ESVD model, IESVD and UESVD, are also proposed to handle imbalanced datasets that contain more users than items or more items than users, respectively. These algorithms can be seen as pure collaborative filtering algorithms, since they do not require human effort to give ratings. Experimental results show a reduction in prediction error when compared with collaborative filtering algorithms (matrix factorization). Secondly, traditional active learning methods evaluate each user or item independently and consider only the benefits of the elicitations to new users or items, paying less attention to the effects on the system as a whole. In this thesis, the traditional methods are extended by proposing a novel generalized system-driven active learning framework. Specifically, it focuses on the elicitations of past users instead of new users and considers a more general scenario where users repeatedly come back to the system, rather than only during the sign-up process. In the proposed framework, ratings are elicited by combining user-focused active learning with item-focused active learning, for the purpose of improving the performance of the whole system. A variety of active learning strategies are evaluated on the proposed framework. Experimental results demonstrate its effectiveness in reducing the sparsity, which in turn enables improvements in system performance. Thirdly, traditional recommender systems suggest items belonging to a single domain; therefore existing research on active learning only applies and evaluates elicitation strategies in a single-domain scenario.
Cross-domain recommendation utilizes the knowledge derived from the auxiliary domain(s) with sufficient ratings to alleviate the data sparsity in the target domain. A special case of cross-domain recommendation is multi-domain recommendation, which utilizes the shared knowledge across multiple domains to alleviate the data sparsity in all domains. A multi-domain active learning framework is proposed by combining active learning with a cross-domain collaborative filtering algorithm (RMGM) in the multi-domain scenario, in which the sparsity problem can be further alleviated by sharing knowledge among multiple sources, along with the data acquired from users. The proposed algorithms are evaluated on real-world recommender system datasets, and the experimental results confirm their effectiveness.
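A toy sketch of the ESVD idea: factorize the observed ratings, machine-'add' predicted ratings for a selected subset of entries, then factorize again. The selection rule, hyperparameters and plain-SGD factorization below are illustrative assumptions; ESVD/ESVD++/MESVD as developed in the thesis differ in their details:

```python
import numpy as np

def mf(R, mask, k=2, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Plain matrix factorization fitted by SGD on observed entries only.

    R is the (users x items) rating matrix; mask is a boolean matrix
    marking which entries of R are observed.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(0, 0.1, (n_users, k))
    Q = rng.normal(0, 0.1, (n_items, k))
    users, items = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

def esvd_like(R, mask, n_fill=2):
    """ESVD-style densification sketch: predict, 'add' ratings, refit."""
    P, Q = mf(R, mask)
    popular = np.argsort(-mask.sum(axis=0))[:n_fill]   # most-rated items
    R2, mask2 = R.copy(), mask.copy()
    for i in popular:
        missing = ~mask[:, i]
        R2[missing, i] = P[missing] @ Q[i]             # machine-added ratings
        mask2[:, i] = True
    return mf(R2, mask2)                               # refit on densified data
```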
