About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Kodgenerering i CASE-verktyg : En undersökning hur CASE-verktyg uppfyller experters kodgenereringskrav / Code generation in CASE tools: An investigation of how CASE tools meet experts' code generation requirements

Andersson, Martin January 2001
This report examines requirements, drawn from a framework for evaluating CASE tools in a contextual setting, in two representative CASE tools. The framework uses a model proposed by Lundell and Lings to extract both the requirements and the expectations that an organization (www.it.volvo.com) had regarding what a CASE tool is and can do. The framework extracts requirements in an organizational context; that is, the evaluation was carried out before the evaluated tool was used in the organization. This means that the requirements are not tied to a specific tool, and that it is not certain that CASE tools support them. The report's finding is that some semantic loss occurred when transforming between code and models.
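The report's conclusion about semantic loss can be illustrated in miniature. The following toy round trip (invented for illustration; it is not taken from the report) generates code from a small class model and reverse engineers it back, and the association multiplicity does not survive:

```python
# Toy model-to-code-to-model round trip showing semantic loss.
model = {"class": "Order", "assoc": {"target": "Item", "multiplicity": "1..*"}}

def generate(model):
    # Forward engineering keeps only what the target language expresses
    # directly: a "1..*" association becomes a plain collection field.
    return f'class {model["class"]} {{ List<{model["assoc"]["target"]}> items; }}'

def reverse(code):
    # Reverse engineering sees a plain list; whether the original model
    # said "1..*" or "0..*" is no longer recoverable from the code.
    name = code.split()[1]
    target = code.split("List<")[1].split(">")[0]
    return {"class": name, "assoc": {"target": target, "multiplicity": "0..*"}}

print(reverse(generate(model)) == model)  # False: the multiplicity was lost
```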
162

Learning gene interactions from gene expression data using dynamic Bayesian networks

Sigursteinsdottir, Gudrun January 2004
Microarray experiments generate vast amounts of data that evidently reflect many aspects of the underlying biological processes. A major challenge in computational biology is to extract, from such data, significant information and knowledge about the complex interplay between genes/proteins. An analytical approach that has recently gained much interest is reverse engineering of genetic networks. This is a very challenging approach, primarily due to the dimensionality of the gene expression data (many genes, few time points) and the potentially low information content of the data. Bayesian networks (BNs) and their extension, dynamic Bayesian networks (DBNs), are statistical machine learning approaches that have become popular for reverse engineering. In the present study, a DBN learning algorithm was applied to gene expression data produced from experiments that aimed to study the etiology of necrotizing enterocolitis (NEC), a gastrointestinal (GI) inflammatory disease that is the most common GI emergency in neonates. The data sets were particularly challenging for the DBN learning algorithm in that they contain gene expression measurements for relatively few time points, between which the sampling intervals are long. The aim of this study was, therefore, to evaluate the applicability of DBNs when learning genetic networks for the NEC disease, i.e. from the above-mentioned data sets, and to use biological knowledge to assess the hypothesized gene interactions. From the results, it was concluded that the NEC gene expression data sets were not informative enough for effective derivation of genetic networks for the NEC disease with DBNs and Bayesian learning.
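For readers unfamiliar with the approach, the sketch below shows the shape of first-order DBN structure learning over time-course data: each gene at time t+1 is scored against candidate parent sets at time t, here with a linear-Gaussian BIC. This is a minimal illustration with invented data, not the algorithm used in the thesis:

```python
import numpy as np
from itertools import combinations

def bic_score(child, parents, data):
    """BIC of a linear-Gaussian model for gene `child` at time t+1
    given `parents` at time t; data has shape (timepoints, genes)."""
    y = data[1:, child]                                 # child at t+1
    X = np.column_stack([np.ones(len(y)), data[:-1, list(parents)]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n + 1e-12) + k * np.log(n)  # lower is better

def learn_dbn(data, max_parents=2):
    """Exhaustively pick the best-scoring parent set for each gene.
    Feasible only because max_parents and the gene count are tiny."""
    genes = range(data.shape[1])
    candidates = [ps for r in range(max_parents + 1)
                  for ps in combinations(genes, r)]
    return {g: min(candidates, key=lambda ps: bic_score(g, ps, data))
            for g in genes}

# Toy data: gene 1 tracks gene 0 with a one-step delay.
rng = np.random.default_rng(0)
data = np.zeros((30, 2))
data[:, 0] = rng.standard_normal(30)
data[1:, 1] = 0.9 * data[:-1, 0] + 0.1 * rng.standard_normal(29)
print(learn_dbn(data))   # expect gene 1's parent set to be (0,)
```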
163

Multiple representation databases for topographic information

Dunkars, Mats January 2004
No description available.
164

Dynamic Application Level Security Sensors

Rathgeb, Christopher Thomas 01 May 2010
The battle for cyber supremacy is a cat and mouse game: evolving threats from internal and external sources make it difficult to protect critical systems. With the diverse and high-risk nature of these threats, there is a need for robust techniques that can quickly adapt and address this evolution. Existing tools such as Splunk, Snort, and Bro help IT administrators defend their networks by actively parsing through network traffic or system log data. These tools have been thoroughly developed and have proven to be a formidable defense against many cyberattacks. However, they are vulnerable to zero-day attacks, slow attacks, and attacks that originate from within. Should an attacker or some form of malware make it through these barriers and onto a system, the next layer of defense lies on the host. Host-level defenses include system integrity verifiers, virus scanners, and event log parsers. Many of these tools work by seeking specific attack signatures or looking for anomalous events. The defenses at the network and host levels are similar in nature: first, sensors collect data from the security domain; second, the data is processed; and third, a response is crafted based on the processing. The application-level security domain lacks this three-step process. Application-level defenses focus instead on secure coding practices and vulnerability patching, which by themselves are not enough. The work presented in this thesis uses a technique commonly employed by malware, dynamic-link library (DLL) injection, to develop dynamic application-level security sensors that can extract fine-grained data at runtime. This data can then be processed to provide a stronger application-level defense by shrinking the vulnerability window. Chapters 5 and 6 present proof-of-concept sensors and describe their development in detail.
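The injection mechanism the thesis builds on is the classic CreateRemoteThread/LoadLibrary pattern. The sketch below shows that mechanism via Python's ctypes, purely as an illustration of how a sensor DLL reaches the target's address space; it is Windows-specific, omits error handling, and is not the thesis's code:

```python
import ctypes
from ctypes import wintypes

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
# Declare signatures so pointer-sized values survive on 64-bit Windows.
k32.OpenProcess.restype = wintypes.HANDLE
k32.VirtualAllocEx.restype = wintypes.LPVOID
k32.VirtualAllocEx.argtypes = [wintypes.HANDLE, wintypes.LPVOID,
                               ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD]
k32.WriteProcessMemory.argtypes = [wintypes.HANDLE, wintypes.LPVOID,
                                   wintypes.LPVOID, ctypes.c_size_t,
                                   wintypes.LPVOID]
k32.GetModuleHandleW.restype = wintypes.HMODULE
k32.GetProcAddress.restype = wintypes.LPVOID
k32.GetProcAddress.argtypes = [wintypes.HMODULE, wintypes.LPVOID]
k32.CreateRemoteThread.restype = wintypes.HANDLE
k32.CreateRemoteThread.argtypes = [wintypes.HANDLE, wintypes.LPVOID,
                                   ctypes.c_size_t, wintypes.LPVOID,
                                   wintypes.LPVOID, wintypes.DWORD,
                                   wintypes.LPVOID]

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE = 0x3000        # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04

def inject_sensor(pid: int, dll_path: str) -> None:
    """Map a sensor DLL into the process identified by `pid`."""
    proc = k32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    path = dll_path.encode() + b"\x00"
    # Allocate a buffer in the target and copy the DLL path into it.
    buf = k32.VirtualAllocEx(proc, None, len(path),
                             MEM_COMMIT_RESERVE, PAGE_READWRITE)
    k32.WriteProcessMemory(proc, buf, path, len(path), None)
    # kernel32 loads at the same base in every process, so the local
    # address of LoadLibraryA is also valid inside the target.
    loadlib = k32.GetProcAddress(k32.GetModuleHandleW("kernel32.dll"),
                                 b"LoadLibraryA")
    # The remote thread runs LoadLibraryA(buf): the DLL's entry point then
    # installs whatever hooks the sensor uses to collect runtime data.
    thread = k32.CreateRemoteThread(proc, None, 0, loadlib, buf, 0, None)
    k32.WaitForSingleObject(thread, 0xFFFFFFFF)
    k32.CloseHandle(thread)
    k32.CloseHandle(proc)
```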
165

Unit testing database applications using SpecDB: A database of software specifications

Mikhail, Rana Farid 01 June 2006
In this dissertation I introduce SpecDB, a database created to represent and host software specifications in a machine-readable format. The specifications represented in SpecDB are for the purpose of unit testing database operations. A structured representation aids in the processes of both automated software testing and software code generation, based on the actual software specifications. I describe the design of SpecDB, the underlying database that can hold the specifications required for unit testing database operations. Specifications can be fed directly into SpecDB, or, if available, formal specifications can be translated to the SpecDB representation. An algorithm that translates formal specifications to the SpecDB representation is described, with the Z formal specification language chosen as an example; its outcome is a set of machine-readable formal specifications. To demonstrate the use of SpecDB, two automated tools are presented. The first automatically generates database constraints from business rules represented in SpecDB. This constraint generator has the advantage of enforcing some business rules at the database level for better data quality. The second automated application of SpecDB is a reverse engineering tool that logs the actual execution of the program from the code. By automatically comparing the output of this tool to the specifications in SpecDB, errors of commission are highlighted that might otherwise not be identified. Some errors of commission, such as coding unspecified behavior alongside a correct implementation of the specifications, cannot be discovered through black-box testing techniques, since these techniques cannot observe what other modifications or outputs have happened in the background. For example, black-box functional testing cannot identify an error if the software under test produced the correct specified output but, in addition, sent classified data to insecure locations. Accordingly, the decision of whether a software application passed a test depends on whether it implemented all the specifications, and only the specifications, for that unit. Automated tools using the reverse engineering application introduced in this dissertation can thus automatically decide whether the software passed a test based on the provided specifications.
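As a rough picture of what the first tool does, the sketch below renders a single range-style business rule as a database CHECK constraint. The rule schema shown is hypothetical, invented for this example rather than taken from SpecDB:

```python
# Hypothetical rule record; SpecDB's actual representation differs.
rule = {
    "name": "chk_discount_range",   # constraint name to create
    "table": "orders",
    "column": "discount",
    "low": 0.0,                     # inclusive lower bound
    "high": 0.25,                   # inclusive upper bound
}

def rule_to_check(rule):
    """Render one range-style rule as an ALTER TABLE ... CHECK statement."""
    return (f"ALTER TABLE {rule['table']} ADD CONSTRAINT {rule['name']} "
            f"CHECK ({rule['column']} BETWEEN {rule['low']} AND {rule['high']});")

print(rule_to_check(rule))
# -> ALTER TABLE orders ADD CONSTRAINT chk_discount_range
#    CHECK (discount BETWEEN 0.0 AND 0.25);
```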
166

Μοντελοποίηση εφαρμογών Παγκόσμιου Ιστού μέσω τεχνικών αντίστροφης μηχανίκευσης / Modeling web applications through reverse engineering techniques

Μποβίλας, Κώστας 24 November 2014
This thesis studies reverse engineering methods and techniques for web applications and evaluates them, drawing useful conclusions about the current state and future directions of this research area. We first survey the web application modeling methods proposed by the research community and the design patterns defined on top of them. We then present the basic concepts of reverse engineering and specific techniques developed to achieve it. Finally, we state the conclusions drawn from comparing and evaluating the proposed reverse engineering techniques.
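At its most basic, reverse engineering a web application starts from primitives like the one sketched below: recovering candidate navigation edges from a page's anchors. This is an invented minimal illustration, not a technique evaluated in the thesis:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class NavExtractor(HTMLParser):
    """Collect href targets: the raw material for a navigation model."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<a href="/catalog">Catalog</a> <a href="/cart">Cart</a>'
parser = NavExtractor()
parser.feed(page)
print([urljoin("http://example.com/", h) for h in parser.links])
# ['http://example.com/catalog', 'http://example.com/cart']
```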
167

Feature Model Synthesis

She, Steven 29 August 2013
Variability provides the ability to adapt and customize a software system's artifacts for a particular context or circumstance. Variability enables code reuse, but its mechanisms are often tangled within a software artifact or scattered over multiple artifacts. This makes the system harder to maintain for developers, and harder to understand for users who configure the software. Feature models provide a centralized source for describing the variability in a software system. A feature model consists of a hierarchy of features—the common and variable system characteristics—with constraints between features. Constructing a feature model, however, is an arduous and time-consuming manual process. We developed two techniques for feature model synthesis. The first, Feature-Graph-Extraction, is an automated algorithm for extracting a feature graph from a propositional formula in either conjunctive normal form (CNF) or disjunctive normal form (DNF). A feature graph describes all feature diagrams that are complete with respect to the input. We evaluated our algorithms against related synthesis algorithms and found that our CNF variant was significantly faster than the previous comparable technique, and that the DNF algorithm performed similarly to a comparable but newer technique, except on several models where our algorithm was faster. The second, Feature-Tree-Synthesis, is a semi-automated technique for building a feature model given a feature graph. This technique uses both logical constraints and text to address the most challenging part of feature model synthesis—constructing the feature hierarchy—by ranking potential parents of a feature with a textual similarity heuristic. We found that the procedure effectively reduced a modeler's choices from thousands to five or fewer when synthesizing the Linux and eCos variability models. Our third contribution is the analysis of Kconfig—a language similar to feature modeling that is used to specify the variability model of the Linux kernel. While large feature models are reportedly used in industry, these models have not been available to the research community for benchmarking feature model analysis and synthesis techniques. We compare Kconfig to feature modeling, reverse engineer its formal semantics, and translate 12 open-source Kconfig models—including the Linux model, with over 6000 features—to propositional logic.
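The core of feature-graph extraction is computing which feature-to-feature implications the input formula entails, since a child feature must imply its ancestors. The brute-force sketch below (an invented toy, not the thesis's algorithm) recovers those implication edges from a tiny CNF model:

```python
from itertools import product

def entails_implication(cnf, a, b, n):
    """Brute-force check that `cnf` entails (a -> b). Literals are signed
    1-indexed ints; feasible only for toy numbers of variables."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf):
            if bits[a - 1] and not bits[b - 1]:
                return False    # found a model where a holds but b doesn't
    return True

# Toy variability model: 1=car, 2=engine, 3=gearbox
cnf = [[-1, 2],      # car -> engine
       [-3, 1]]      # gearbox -> car
edges = [(a, b) for a in (1, 2, 3) for b in (1, 2, 3)
         if a != b and entails_implication(cnf, a, b, 3)]
print(edges)  # [(1, 2), (3, 1), (3, 2)]: each child implies its ancestors
```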
168

Reverse Engineering of Temporal Gene Expression Data Using Dynamic Bayesian Networks And Evolutionary Search

Salehi, Maryam 17 September 2008
Capturing the mechanism of gene regulation in a living cell is essential to predict the behavior of the cell in response to intercellular or extracellular factors. Such prediction capability can potentially lead to the development of improved diagnostic tests and therapeutics [21]. Among the reverse engineering approaches that aim to model gene regulation are Dynamic Bayesian Networks (DBNs). DBNs are of particular interest as these models are capable of discovering causal relationships between genes while dealing with noisy gene expression data. At the same time, the problem of discovering the optimal DBN model makes structure learning of DBNs a challenging topic, mainly because the high dimensionality of the search space makes exhaustive search strategies for identifying the best DBN structure impractical. In this work, the application of a covariance-based evolutionary search algorithm is proposed for the first time for structure learning of DBNs. In addition, the convergence time of the proposed algorithm is improved compared to previously reported covariance-based evolutionary search approaches, achieved by keeping a fixed number of good sample solutions from previous iterations. Finally, the proposed approach, M-CMA-ES, unlike gradient-based methods, has a high probability of converging to a global optimum. To assess the efficiency of this approach, a temporal synthetic dataset was developed. The proposed approach was then applied to this dataset as well as to the Brainsim dataset, a well-known simulated temporal gene expression dataset [58]. The results indicate that the proposed method is quite efficient in reconstructing the networks in both the synthetic and Brainsim datasets. Furthermore, it outperforms other algorithms in terms of both the predicted structure accuracy and the mean square error of the reconstructed time series of gene expression data. For validation purposes, the proposed approach was also applied to a biological dataset composed of 14 cell-cycle regulated genes in the yeast Saccharomyces cerevisiae. Considering the KEGG pathway as the target network, the efficiency of the proposed reverse engineering approach significantly improves on the results of two previous studies of yeast cell cycle data in terms of capturing the correct interactions. / Thesis (Master, Computing) -- Queen's University, 2008
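To make the search concrete, the sketch below runs a much-simplified evolutionary search over real-valued weight matrices that are thresholded into network structures, keeping an elite of good samples across generations as the abstract describes. It omits the covariance adaptation that defines M-CMA-ES and is an invented illustration, not the author's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(w, data, thresh=0.5):
    """Score a candidate network: negative squared error of a first-order
    linear model restricted to edges whose |weight| exceeds `thresh`."""
    adj = np.where(np.abs(w) > thresh, w, 0.0)
    return -float(np.sum((data[1:] - data[:-1] @ adj) ** 2))

def evolve(data, pop=40, elite=10, gens=100, sigma=0.3):
    g = data.shape[1]
    mean, survivors = np.zeros((g, g)), []
    for _ in range(gens):
        offspring = [mean + sigma * rng.standard_normal((g, g))
                     for _ in range(pop)]
        # Keep a fixed number of good samples across generations -- the
        # speed-up idea the abstract mentions -- and recombine them
        # into the new search mean.
        ranked = sorted(offspring + survivors,
                        key=lambda w: fitness(w, data), reverse=True)
        survivors = ranked[:elite]
        mean = np.mean(survivors, axis=0)
    return np.abs(mean) > 0.5        # thresholded adjacency estimate
```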
169

Inférence statique et par contraintes des relations de composition dans des programmes Java / Static and constraint-based inference of composition relations in Java programs

Habti, Norddin January 2009
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
170

A model-based approach for extracting business rules out of legacy information systems

Cosentino, Valerio 18 December 2013
Today's business world is very dynamic, and organizations have to quickly adjust their internal policies to follow market changes. Such adjustments must be propagated to the business logic embedded in the organization's information systems, which are often legacy applications not designed to represent and operationalize the business logic independently from the technical aspects of the programming language employed. Consequently, the business logic buried in the system must be discovered and understood before being modified. Unfortunately, these activities slow down the adaptation of the system to new requirements set out in the organization's policies and threaten the consistency and coherence of the organization's business. In order to simplify these activities, we provide a model-based approach to extract and represent the business logic, expressed as a set of business rules, from the behavioral and structural parts of information systems. We implement this approach for Java, COBOL and relational database management systems. The proposed approach is based on Model Driven Engineering, which provides a generic and modular solution adaptable to different languages by offering an abstract and homogeneous representation of the system.
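As a crude caricature of the extraction step, the sketch below lifts guarded assignments out of a Java-like snippet into condition/action pairs. The real approach works on models derived from the source rather than on raw text; everything here is invented for illustration:

```python
import re

# Hypothetical legacy snippet; in the thesis the input would be a model
# obtained from real Java or COBOL sources, not raw text.
JAVA_SRC = """
if (customer.getAge() >= 65) { discount = 0.15; }
if (order.getTotal() > 1000.0) { shipping = 0.0; }
"""

# Lift each guarded assignment into a (condition, action) pair -- a crude
# textual stand-in for the platform-specific-to-business-model step.
RULE_RE = re.compile(r"if\s*\((?P<cond>.*)\)\s*\{\s*(?P<action>[^;]*);")

for m in RULE_RE.finditer(JAVA_SRC):
    print(f"WHEN {m.group('cond').strip()} THEN {m.group('action').strip()}")
# WHEN customer.getAge() >= 65 THEN discount = 0.15
# WHEN order.getTotal() > 1000.0 THEN shipping = 0.0
```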
