151.
Analysing the security properties of object-capability patterns
Murray, Toby. January 2010.
No description available.
152.
Genericity, extensibility and type-safety in the VISITOR pattern
Oliveira, Bruno César dos Santos. January 2007.
A software component is, in a general sense, a piece of software that can be safely reused and flexibly adapted by some other piece of software. The safety can be ensured by a type system that guarantees the right usage of the component; the flexibility stems from the fact that components are parametrizable over different aspects affecting their behaviours. Component-oriented programming (COP), a programming style in which software would be built out of several independent components, has for a long time eluded the software industry. Several reasons have been put forward over time, but one that is consistently pointed out is the inadequacy of existing programming languages for the development of software components.

Generic Programming (GP) usually manifests itself as a kind of parametrization. By abstracting from the differences between what would otherwise be separate but similar specific programs, one can develop a single unified generic program. Instantiating the parameter in various ways retrieves the various specific programs (and ideally some new ones too). Instances of GP include the generics mechanism (parametrization by types) found in recent versions of Java and C#, and Datatype-Generic Programming (DGP) (parametrization by shape). Both mechanisms allow novel ways to parametrize programs that can greatly increase their flexibility.

Software components and GP, in particular DGP, are clearly related: GP and DGP provide novel ways to parametrize software, while software components benefit from parametrization in order to be flexible. However, DGP and COP have mostly been studied in isolation, the former being a research topic among some functional programming communities and the latter being studied mostly within the object-oriented communities.

In this thesis we will argue for the importance in COP of the parametrization mechanisms provided by GP, and in particular DGP. We will show that many design patterns can be captured as software components when using such kinds of parametrization. As evidence for this we will, using DGP techniques, develop a component library for the VISITOR pattern that is generic (i.e. it can be used on several concrete visitors); extensible (i.e. concrete visitors may be extended); and type-safe (i.e. its usage is statically type-checked). A second aspect of this thesis concerns the adaptation of functional DGP techniques to object-oriented languages. We argue that parametrization by datatypes should be replaced by parametrization by visitors, since visitors can be viewed as encodings of datatypes and, through those encodings, the functional techniques translate naturally into an OO setting.
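The closing idea, that a visitor interface encodes a datatype, can be sketched in a few lines. The following Python toy is not from the thesis (which develops a statically checked, generic and extensible library in a richer type system); it only illustrates the correspondence: each visitor method mirrors one constructor, so a concrete visitor is a function over the encoded datatype.

```python
from typing import Generic, TypeVar

R = TypeVar("R")

# The visitor interface encodes a datatype: one method per constructor.
class ListVisitor(Generic[R]):
    def nil(self) -> R: ...
    def cons(self, head: int, tail: "IntList") -> R: ...

# The datatype is characterised by its ability to accept any visitor.
class IntList:
    def accept(self, v: ListVisitor[R]) -> R: ...

class Nil(IntList):
    def accept(self, v: ListVisitor[R]) -> R:
        return v.nil()

class Cons(IntList):
    def __init__(self, head: int, tail: IntList):
        self.head, self.tail = head, tail

    def accept(self, v: ListVisitor[R]) -> R:
        return v.cons(self.head, self.tail)

# A concrete visitor is just a function over the encoded datatype.
class Sum(ListVisitor[int]):
    def nil(self) -> int:
        return 0

    def cons(self, head: int, tail: IntList) -> int:
        return head + tail.accept(self)

print(Cons(1, Cons(2, Nil())).accept(Sum()))  # prints 3
```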
153.
Generative templates for formal metamodel design
Wu, Nicolas. January 2010.
No description available.
154.
Visualising variable requirements and inter-dependencies within software product lines
Sellier, David H. January 2008.
No description available.
155.
Analysing the resolution of security bugs in software maintenance
Saleem, Saad. January 2015.
Security bugs in software systems are often reported after incidents of malicious attacks, and developers need to resolve them quickly in order to maintain the security of such systems. Bug resolution includes two kinds of activity: triaging confirms that the bugs are indeed security problems, after which fixing involves making changes to the code. It is reported in the literature that, statistically, security bugs are reopened more often than others, which poses two new research questions: (a) Are developers "rushing" to triage security bugs too soon under the pressure of deadlines? (b) Do developers need to spend more time fixing security bugs to avoid frequent reopening? This thesis explores these questions in order to determine whether security bug resolution should take a higher priority than that of other bugs, so that vulnerabilities are fixed before malicious attackers can exploit them.

In this thesis a quantitative approach has been adopted, conducting statistical empirical studies to observe the behaviour of software developers engaged in dealing with security bugs. Firstly, the concept of "rush" has been borrowed from the time-management literature to refer to the behaviour of people delivering work under the pressure of deadlines. By observing how developers deliver bug resolutions before release deadlines, the degree of rush has been measured as the ratio between the actual time developers spend on triaging and the theoretical time available to them if fixes were delayed until the next regular release. The results suggest that delaying bug assignment helps find the right developer and gives that developer more time to handle the same workload under more relaxed planning constraints. Secondly, to analyse the complexity of security bug fixes, the fan-in complexity of the functions relevant to security bugs has been measured, rather than simply the time developers spend fixing such bugs.

The first null hypothesis is tested using the Mann-Whitney test on five software case studies: Samba, Mozilla Firefox, Red Hat, FreeBSD and Mozilla. The second null hypothesis is tested by comparing the results of fixing security and non-security bugs in the Samba and Mozilla Firefox case studies. Statistically significant results suggest that security bugs are triaged in a rush compared to non-security bugs for Red Hat, FreeBSD and Mozilla. In terms of fan-in, the results of the Samba and Mozilla Firefox case studies suggest that security bugs are more complex to fix than non-security bugs.
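A minimal sketch of the statistical machinery the abstract describes, combining the degree-of-rush ratio with a Mann-Whitney test on triage times. The numbers here are hypothetical stand-ins; the thesis mines the actual values from the case-study projects' bug trackers.

```python
from scipy.stats import mannwhitneyu

# Hypothetical triage times in days for two groups of bugs.
security_bugs = [0.5, 1.0, 0.2, 0.8, 1.5, 0.3, 0.9]
other_bugs = [2.0, 3.5, 1.8, 4.0, 2.5, 3.0, 2.2]

# Two-sided Mann-Whitney U test of the null hypothesis that both samples
# are drawn from the same distribution.
stat, p = mannwhitneyu(security_bugs, other_bugs, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")

# Degree of rush for a single bug: actual triage time over the theoretical
# time available before the next regular release (illustrative values).
actual_days, days_to_release = 0.5, 14.0
print(f"degree of rush = {actual_days / days_to_release:.3f}")
```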
156.
Illustrative non-photorealistic rendering techniques for GPU architectures
Sayeed, Rezwan. January 2010.
No description available.
157.
Detecting Prolog programming techniques using abstract interpretation
Bowles, Andrew W. January 1992.
There have been a number of attempts at developing intelligent tutoring systems (ITSs) for teaching students various programming languages. An important component of such an ITS is a debugger capable of recognizing errors in the code the student writes and possibly suggesting ways of correcting such errors. The debugging process involves a wealth of knowledge about the programming language, the student and the individual problem at hand, and an automated debugging component makes use of a number of tools which apply this knowledge. Successive ITSs have incorporated a wider range of knowledge and more powerful tools, and the research described in this thesis should be seen as continuing this succession. Specifically, we attempt to enhance an existing Prolog ITS (PITS) debugger called APROPOS2, developed by Looi. The enhancements take the form of a richer language with which to describe Prolog code, and more powerful tools with which constructs in this language may be detected in Prolog code.

The richer language is based on the notion of programming techniques: common patterns in code which capture, in some sense, an expert's understanding of Prolog. The tools are based on Prolog abstract interpretation, a program analysis method for inferring dynamic properties of code. Our research makes contributions to both these areas. We develop a language for describing classes of Prolog programming techniques that manipulate data structures, and we define classes in this language for common Prolog techniques such as accumulator pairs and difference structures. We use abstract interpretation to infer the non-syntactic features with which techniques are described. We develop a general framework for abstract interpretation which is itself described in Prolog, leading directly to an implementation. Finally, we develop two abstract domains, one which infers general data-flow information about the code and one which infers particularly detailed type information, and describe the implementation of the former.
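To make "abstract interpretation" concrete for readers outside logic programming: the method evaluates a program over descriptions of values rather than the values themselves. The thesis's framework and domains are for Prolog (data flow and types of terms); the Python toy below uses the classic sign domain purely to illustrate the general idea, and is not the thesis's framework.

```python
# Toy abstract interpretation over the sign domain: concrete integers are
# abstracted to their sign, and arithmetic is redefined on those signs.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"  # TOP = sign unknown

def alpha(n: int) -> str:
    """Abstraction function: map a concrete integer to its sign."""
    return NEG if n < 0 else ZERO if n == 0 else POS

def abs_add(a: str, b: str) -> str:
    """Abstract addition: adding zero preserves a sign; mixing signs loses it."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP

def abs_mul(a: str, b: str) -> str:
    """Abstract multiplication: zero dominates, equal signs give pos."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Without touching concrete numbers, the analyser derives that the product
# of two negatives is positive, and that pos + pos stays positive.
print(abs_mul(alpha(-3), alpha(-7)))    # pos
print(abs_add(POS, abs_mul(NEG, NEG)))  # pos
```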
158.
Nonmonotonic Reasoning with Description Logics
Ke, Peihong. January 2011.
No description available.
159.
Probabilistic uncertainty in an interoperable framework
Williams, Matthew William. January 2011.
This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
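As a small illustration of the realisation-based propagation described above, here is a Python sketch. The quantities, values and processing step are hypothetical, and the snippet deliberately does not reproduce the UncertML schema or INTAMAP services; it only shows the idea of pushing a distribution's realisations through a processing step.

```python
import numpy as np

# Input uncertainty: a sensor-observed temperature in Celsius, quantified
# as a Gaussian distribution (one representation UncertML can encode).
rng = np.random.default_rng(42)
mean, variance = 19.2, 0.64
realisations = rng.normal(mean, np.sqrt(variance), size=10_000)

# A processing step a WPS might expose, here a simple unit conversion;
# uncertainty is propagated by pushing every realisation through the step.
fahrenheit = realisations * 9 / 5 + 32

# Output uncertainty, summarised as statistics that a response document
# could carry alongside the converted values.
print(f"mean = {fahrenheit.mean():.2f} F, variance = {fahrenheit.var():.3f}")
```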
160.
Lexical database enrichment through semi-automated morphological analysis
Richens, Thomas Martin. January 2011.
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required; WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although prepositions account for 40% of the most frequent words, they have been largely ignored by computational linguists, so the addition of prepositions was also required.

The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Existing databases and existing algorithms can both capture regular morphological relations, but cannot capture exceptions correctly, and neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes, and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Where multiple rules are applicable to an input suffix, their precedence must be established. The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes.

The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and by reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon.

The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
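To illustrate what a rule specified as a character substitution looks like, and how a lexical validity requirement curbs overgeneration, here is a toy Python sketch; the rule, word list and validity check are simplified stand-ins, not the thesis's rule set or lexicon.

```python
# A toy derivational rule expressed as a character substitution rather than
# plain segmentation: adjective + "-ness", with final "y" changing to "i"
# (so "happiness" is not wrongly segmented as "happi" + "ness").
def apply_ness_rule(stem: str) -> str:
    if stem.endswith("y"):
        return stem[:-1] + "iness"
    return stem + "ness"

# Lexical validity requirement: a candidate derivation is accepted only if
# it is attested in the lexicon, which curbs overgeneration.
lexicon = {"happiness", "kindness", "darkness"}

for stem in ["happy", "kind", "dark", "table"]:
    derived = apply_ness_rule(stem)
    status = "accepted" if derived in lexicon else "rejected"
    print(f"{stem} -> {derived}: {status}")
```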