31. Learning failure-free PRISM programs. Alsanie, Waleed. January 2012.
First-order logic can be used to represent relations amongst objects. Probabilistic graphical models encode uncertainty over propositional data. To combine the advantages of both representations, probabilistic logic programs encode uncertainty over relational data. PRISM is a probabilistic logic programming formalism based on the distribution semantics, and it supports learning the parameters of a program when the program itself is known. This thesis proposes algorithms to learn failure-free PRISM programs, combining ideas from inductive logic programming and from learning Bayesian networks. The learned PRISM programs generalise dynamic Bayesian networks by defining a halting distribution over the sampling process. A dynamic Bayesian network models either an infinite sequential generative process or a sequential generative process of fixed length; in both cases, only sequences of a fixed length can be sampled. The PRISM programs considered in this thesis, by contrast, represent self-terminating functions from which sequences of different lengths can be obtained. The effectiveness of the proposed algorithms is demonstrated by learning five programs.
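To make the contrast between fixed-length and self-terminating sampling concrete, here is a minimal Python sketch (not taken from the thesis; the two-symbol alphabet and the halting probability are illustrative assumptions):

```python
import random

def sample_fixed(length, symbols=("a", "b")):
    """Fixed-length sampling, as in a dynamic Bayesian network unrolled for `length` steps."""
    return [random.choice(symbols) for _ in range(length)]

def sample_self_terminating(halt_prob=0.3, symbols=("a", "b")):
    """Self-terminating sampling: a halting switch is drawn after every symbol,
    so the same program yields sequences of different lengths."""
    sequence = []
    while True:
        sequence.append(random.choice(symbols))
        if random.random() < halt_prob:  # halting distribution over the sampling process
            return sequence

print(sample_fixed(5))                                     # always length 5
print([len(sample_self_terminating()) for _ in range(5)])  # varying lengths
```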
32. Extraction of linguistic resources from multilingual corpora and their exploitation. Shahid, Ahmad. January 2012.
The increasing availability of on-line and off-line multilingual resources, along with developments in automatic tools that can process this information, such as GIZA++ (Och & Ney 2003), has made it possible to build new multilingual resources for NLP/IR tasks. Lexicon generation is one such task; done by hand, it is expensive in both human effort and capital. Generation of multilingual lexicons can now be automated, as is done in this research work. Wikipedia, an on-line multilingual resource, was employed to automatically build multilingual lexicons using simple search strategies. The Europarl parallel corpus (Koehn 2002) was used to create multilingual sets of synonyms, which were later used to carry out Word Sense Disambiguation (WSD) on the original corpus from which they were derived. A theoretical analysis of the methodology validated our approach. The multilingual sets of synonyms were then used to learn unsupervised models of word morphology in the individual languages. Our experiments, together with another unsupervised technique, were evaluated against a gold standard. Our results compared very favorably with the other approach, and the combination of the two approaches gave even better results.
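As a rough illustration of how word alignments can be grouped into multilingual synonym sets, the Python sketch below clusters source words that share a translation; the toy English-to-German alignments and the function name are illustrative assumptions, not the pipeline used in the thesis.

```python
from collections import defaultdict

def synonym_sets(alignments):
    """Group source words that share an aligned translation in another language.

    `alignments` maps each source word to the set of target words it is aligned to,
    e.g. as produced by a word aligner such as GIZA++ over a parallel corpus.
    """
    by_target = defaultdict(set)
    for source_word, targets in alignments.items():
        for target in targets:
            by_target[target].add(source_word)
    # Each target word induces a candidate set of (near-)synonymous source words.
    return [words for words in by_target.values() if len(words) > 1]

# Toy English-to-German alignments.
toy = {"buy": {"kaufen"}, "purchase": {"kaufen", "erwerben"}, "acquire": {"erwerben"}}
print(synonym_sets(toy))  # e.g. [{'buy', 'purchase'}, {'purchase', 'acquire'}]
```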
33. Formal analysis of concurrent programs. Armstrong, Alasdair. January 2015.
In this thesis, extensions of Kleene algebras are used to develop algebras for rely-guarantee style reasoning about concurrent programs. In addition to these algebras, detailed denotational models are implemented in the interactive theorem prover Isabelle/HOL, and formal soundness proofs link the algebras to their models. This follows a general algebraic approach for developing correct-by-construction verification tools within Isabelle. In this approach, the algebras provide inference rules and abstract principles for reasoning about the control flow of programs, while the concrete models provide laws for reasoning about data flow. This yields a rapid, lightweight approach to constructing verification and refinement tools. These tools are used to develop a popular example from the literature, via refinement, within a general-purpose interactive theorem proving environment.
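One standard model of a Kleene algebra interprets programs as binary relations on states, with choice as union, sequential composition as relational composition, and iteration as reflexive-transitive closure. The Python sketch below illustrates only that textbook model; it is not the thesis's Isabelle/HOL development, and the function names are illustrative.

```python
from itertools import product

def choice(r, s):
    """Nondeterministic choice: the + of the algebra is union of relations."""
    return r | s

def seq(r, s):
    """Sequential composition: the multiplication of the algebra is relational composition."""
    return {(a, c) for (a, b1), (b2, c) in product(r, s) if b1 == b2}

def star(r, states):
    """Finite iteration: the Kleene star is reflexive-transitive closure."""
    result = {(x, x) for x in states}  # skip, the unit of composition
    while True:
        bigger = result | seq(result, r)
        if bigger == result:
            return result
        result = bigger

step = {(0, 1), (1, 2)}
print(star(step, {0, 1, 2}))  # reflexive pairs plus (0, 1), (1, 2) and (0, 2)
```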
34. Improving software remodularisation. Hall, Mathew J. January 2013.
Maintenance is estimated to be the most expensive stage of the software development lifecycle. While documentation is widely considered essential to reducing the cost of maintaining software, it is commonly neglected. Automated reverse engineering tools present a potential solution to this problem by allowing documentation, in the form of models, to be produced cheaply. State machines, module dependency graphs (MDGs), and other software models may be extracted automatically from software using reverse engineering tools. However, these models are typically large and complex due to a lack of abstraction. Solutions to this problem use transformations (for state machines) or “remodularisation” (for MDGs) to enrich the diagram with a hierarchy that uncovers the system’s structure. The task is complicated by the subjectivity of the problem. Automated techniques aim to optimise the structure, either through design quality metrics or by grouping elements according to the limited number of available features. Both approaches can lead to a mismatch between the algorithm’s output and the developer’s intentions. This thesis addresses the problem from two perspectives: first, improving automated hierarchy generation as far as possible, and then augmenting it with additional expert knowledge in a refinement process. The investigation begins with the application of remodularisation to the state machine hierarchy generation problem, which is shown to be feasible due to the common underlying graph structure present in both MDGs and state machines. Following this success, genetic programming is investigated as a means of improving on this result and is found to produce hierarchies that better optimise a quality metric at higher levels. The disparity between metric-maximising performance and human-acceptable performance is then examined, resulting in the SUMO algorithm, which incorporates domain knowledge to interactively refine a modularisation. The thesis concludes with an empirical user study of 35 participants, showing that, while its performance is highly dependent on the individual user, SUMO allows a modularisation of a 122-file component to be refined in a short period of time (within an hour for most participants).
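The kind of design quality metric such automated techniques optimise can be illustrated with a small Python sketch that scores a candidate modularisation of an MDG by rewarding intra-cluster dependencies and penalising inter-cluster ones. The metric, the toy dependency graph, and the names are illustrative assumptions, not the measures or the SUMO algorithm from the thesis.

```python
def modularisation_quality(edges, assignment):
    """Score a candidate modularisation of a module dependency graph (MDG).

    `edges` is a set of (source, target) dependencies between files and
    `assignment` maps each file to a cluster. Intra-cluster edges (cohesion)
    raise the score, inter-cluster edges (coupling) lower it; a search-based
    remodularisation algorithm would try to maximise such a score.
    """
    if not edges:
        return 0.0
    intra = sum(1 for a, b in edges if assignment[a] == assignment[b])
    inter = len(edges) - intra
    return (intra - inter) / len(edges)

deps = {("ui.c", "core.c"), ("core.c", "db.c"), ("ui.c", "util.c")}
clusters = {"ui.c": 0, "util.c": 0, "core.c": 1, "db.c": 1}
print(modularisation_quality(deps, clusters))  # 1/3: two intra-cluster edges, one inter-cluster edge
```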
35. Information systems: operationalization of agile software development, 2003–2007. Shams, Siamak. January 2007.
No description available.
36. Paralysis: an extensible multi-tiered guidance environment for program parallelization and analysis. McCool, Stuart. January 2017.
GPU computing is a relatively nascent technology. Nonetheless, its potential to greatly accelerate program performance is well known. However, so too is its complexity. The GPU is a specialized processor with niche applications: a program's suitability for GPU-based execution can only truly be determined following extensive prerequisite analysis. Furthermore, the porting and tuning process is highly involved, requiring an intimate understanding of the target hardware. Consequently, development times can become prolonged. For organisations with extensive legacy codes, the challenge is greater still. They cannot necessarily afford to rewrite their software using the latest Application Programming Interfaces or Domain Specific Languages, no matter how simplified these might be. Such organisations might consider the likes of NVIDIA's OpenACC compiler, theoretically enabling them to quickly leverage the benefits of GPU computing today by way of program annotations. But how can such a compiler enable their serial programmers to prepare for the parallel programming of tomorrow? The answer: as a black box, it cannot. But what if there were a development environment that provided the benefits of such a compiler - acceleration in a timely, cost-effective manner - and, at the same time, gave programmers the tools to understand how the accelerated program was derived? Programmers could then learn "on the job" and become the experts that their organisations will need tomorrow. This thesis describes the inception, development and evaluation of such an environment, namely Paralysis - an extensible guidance environment for program PARALlelization and analYSIS, tiered for varying programmer competencies. Ultimately, Paralysis achieves in minutes what took months to achieve when coding manually. Furthermore, Paralysis is not only found to provide better insight than NVIDIA's OpenACC compiler; it also outperforms it in three of nine evaluation case studies and achieves comparable performance in the remaining six.
37. The grounded incident fault theories (GIFTs) method. Naqvi, Syed Asad Ali. January 2014.
Accidents, and incidents of faults and failures, are an unavoidable reality for even moderately complex systems. Accidents, though unfortunate events, also provide an opportunity to uncover vulnerabilities and latent errors in systems. In this vein, accident and incident analysis plays an important role in improving system dependability and robustness. Incidents, when analysed individually, often seem to have isolated causes. However, when incidents are analysed in the context of other incidents in the broader domain, patterns begin to emerge between them. These patterns may indicate basic, underlying reasons for incidents, known as root causes. The practice of analysing a number of incidents together is called multi-incident analysis. The state of the art in multi-incident analysis is dominated by quantitative methods that mostly use statistical analysis to find correlations between concepts. These methods are limited in their ability to identify systemic reasons for accidents, faults, and failures. To overcome this shortcoming, qualitative methods are sometimes used in incident analysis, in an effort to acquire a better understanding of the incident space. However, these methods do not provide any methodological support to guide the qualitative analysis towards the discovery of root causes. This thesis presents the Grounded Incident Fault Theories (GIFTs) method for multi-incident analysis. GIFTs is a qualitative multi-incident analysis method that provides methodological support for identifying root causes and mitigation strategies by analysing past incidents in a particular domain. GIFTs is a synthesis of two methods: the Incident Fault Tree (IFT), a method for incident analysis and documentation, and the Grounded Theory Method (GTM), a qualitative method for building theories and discovering insights about phenomena through the aggregation of data. GIFTs merges these two methods in such a way that the whole is greater than the sum of its parts. In GIFTs, the Incident Fault Tree guides the Grounded Theory process to efficiently identify the concepts most important to understanding and mitigating faults and failures.
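For contrast with GIFTs' qualitative approach, the quantitative baseline it improves upon can be sketched in a few lines of Python: each incident is reduced to a set of causal-factor tags and recurrence is simply counted. The incident data and tags are invented for illustration, and the thesis's point is precisely that such counting alone misses systemic root causes.

```python
from collections import Counter

# Each past incident report reduced to the causal factors it mentions (toy data).
incidents = [
    {"fatigue", "unclear procedure"},
    {"unclear procedure", "alarm ignored"},
    {"fatigue", "alarm ignored", "unclear procedure"},
]

# A quantitative multi-incident baseline: count how often each factor recurs.
recurring = Counter(factor for factors in incidents for factor in factors)
print(recurring.most_common())  # e.g. [('unclear procedure', 3), ('fatigue', 2), ...]
```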
38. Improvements to multivariable systems under computer control. Jumah, M. M. A. January 1976.
No description available.
39. Abstraction and refinement of process actions. Streader, David. January 2000.
No description available.
40. Web-based simulation: the three-phase worldview and Java. Cassel, Ricardo Augusto. January 2000.
No description available.