About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
271

Metodologias de suporte a verificação e análise de modelos de plataformas em alto nível de abstração / Analysis and verification support methodologies for high abstractions level platforms

Albertini, Bruno de Carvalho, 1980- 10 June 2011
Orientadores: Sandro Rigo, Guido Costa Souza de Araújo / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: The increasing complexity of high-level hardware descriptions has motivated the creation of development methodologies for several years, the most recent level of abstraction being represented by platform-based design and so-called Electronic System Level (ESL) design. In this scenario, simultaneously exploring different architectural models, such as Systems-on-Chip (SoC), is the key to achieving a good hardware-software partitioning and improving the performance of both hardware and software. This requires a platform simulation infrastructure able to simulate both software and hardware quickly and at a high level of abstraction. SystemC has emerged as one of the most widely adopted description languages and, together with Transaction Level Modeling (TLM), is widely recognized as the most suitable technique for ESL development. One of the most striking features of TLM is the possibility of reusing the entire platform infrastructure for the simulation of hardware and software [12]. Integrating verification into the design flow is a key point in a TLM-based methodology. One well-known verification technique is stimulus injection, used to guide the simulation toward corner cases; this kind of functionality is useful for increasing verification coverage. The tools currently available for SystemC descriptions do not allow stimulus injection without modifying the model or without using a simulation core specially crafted for this task. For debugging, we could not find any open-source tool, although there are good commercial tools built specifically for debugging SystemC models. This thesis proposes three methodologies to improve the support for introspection, debugging, and analysis of hardware models described at a high level of abstraction. The first is a computational-reflection methodology applicable to SystemC modules through the insertion of inspection modules that we call ReFlexBoxes. The second technique, called SignalReplay, is an evolution of the first, focused on the capture, analysis, and injection of the data collected by reflection. The last proposed methodology, called Platform Dataflow Analysis (PDFA), aims at extracting metadata through reflection on overloaded types, allowing the designer to apply compiler techniques to hardware analysis. The results are presented as experiments, implemented as case studies.
These experiments allowed us to evaluate the effectiveness of the proposed techniques, which, unlike related work, adhere to six principles we consider fundamental: (1) they are not intrusive with respect to the model modifications that may be needed to implement introspection; (2) they require no changes to the simulation environment, compilers, or libraries, including our target language, SystemC; (3) they add little simulation-time overhead; (4) they provide observability and controllability; (5) they are extensible, allowing adaptation to similar work with little or no change to the methodologies; and (6) they protect the intellectual property of the module under verification.
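The introspection idea behind ReFlexBox and SignalReplay can be illustrated with a small sketch: wrap a signal so that every access is logged (observability) and reads can be forced to a chosen value (controllability), without touching the module's own code. The thesis targets SystemC/C++ modules; the Python below is only a conceptual analogue with invented names.

```python
# Conceptual sketch of signal introspection: a wrapper records every access
# (observability) and can force reads to a chosen value (controllability),
# while the wrapped module stays unmodified. The thesis does this for
# SystemC modules; class and signal names here are invented.

class InspectedSignal:
    def __init__(self, name, value=0):
        self.name = name
        self._value = value
        self.log = []            # (operation, value) pairs
        self._injected = None    # forced value, if any

    def read(self):
        value = self._injected if self._injected is not None else self._value
        self.log.append(("read", value))
        return value

    def write(self, value):
        self.log.append(("write", value))
        self._value = value

    def inject(self, value):
        """Force subsequent reads to observe `value` (stimulus injection)."""
        self._injected = value

    def release(self):
        self._injected = None


# A toy computation reads its input through the wrapper, unaware that it
# is being inspected.
sig = InspectedSignal("data_in", 5)
out = sig.read() * 2     # normal behavior: 10
sig.inject(42)           # drive the simulation toward a corner case
out = sig.read() * 2     # injected stimulus: 84
sig.release()
print(sig.log)           # full record of reads and writes
```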
272

Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs / Combining techniques of Bounded Model Checking and constraint programming to aid for error localization : exploration of CSP capacities for error localization

Bekkouche, Mohammed 11 December 2015
A model checker can produce a counterexample trace for an erroneous program, which is often difficult to exploit to locate the errors in the source code. In this thesis, we propose an error localization algorithm based on counterexamples, named LocFaults, which combines Bounded Model Checking (BMC) with constraint satisfaction problems (CSPs). This algorithm analyzes the paths of the Control Flow Graph (CFG) of the erroneous program to compute the subsets of suspicious statements whose correction would fix the program. To do so, we generate a constraint system for the CFG paths on which at most k conditional statements may be faulty, and then compute the Minimal Correction Sets (MCSs) of bounded size on each of these paths. Removing one of these constraint sets yields a maximal satisfiable subset, in other words a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to handle programs with numerical instructions more efficiently. This approach has been experimentally evaluated on a set of academic and realistic programs.
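To make the notion of an MCS concrete, the following deliberately naive Python sketch enumerates the smallest correction sets of a toy inconsistent constraint system by brute force. LocFaults computes bounded-size MCSs per CFG path using an extension of Liffiton and Sakallah's algorithm; nothing below is from the thesis itself.

```python
from itertools import combinations

# Naive illustration of Minimal Correction Sets (MCSs): minimal sets of
# constraints whose removal makes the remaining ones satisfiable. The
# constraints are predicates over two integer variables, checked by
# exhaustive search over a small domain. This brute force only reports the
# smallest MCSs; LocFaults bounds their size and works per CFG path.

def satisfiable(constraints, domain):
    return any(all(c(x, y) for c in constraints)
               for x in domain for y in domain)

constraints = [
    lambda x, y: x + y == 4,  # c0
    lambda x, y: x - y == 2,  # c1
    lambda x, y: x == 0,      # c2: inconsistent with c0 and c1 together
]
domain = range(-5, 6)

mcses, k = [], 1
while not mcses and k <= len(constraints):
    for removed in combinations(range(len(constraints)), k):
        rest = [c for i, c in enumerate(constraints) if i not in removed]
        if satisfiable(rest, domain):
            mcses.append(set(removed))
    k += 1

print(mcses)  # [{0}, {1}, {2}]: removing any single constraint suffices
```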
273

Automated Debugging and Bug Fixing Solutions: A Systematic Literature Review and Classification

Shafiq, Hafiz Adnan, Arshad, Zaki January 2013
Context: Bug fixing is the process of ensuring correct source code and is done by developers. Automated debugging and bug fixing solutions minimize human intervention and hence minimize the chance of introducing new bugs into the corrected program. Scope and Objectives: In this study we performed a detailed systematic literature review. The scope of the work is to identify all solutions that correct software automatically or semi-automatically. Solutions for automatic correction of software need no human intervention, while semi-automatic solutions assist a developer in fixing a bug. We aim to gather all such solutions for fixing bugs in design, i.e., code, UML design, algorithms, and software architecture. Automated detection, isolation, and localization of bugs are out of scope, and we consider only software bugs, excluding the hardware and networking domains. Methods: A detailed systematic literature review (SLR) was performed. A number of bibliographic sources were searched, including Inspec, IEEE Xplore, the ACM digital library, Scopus, SpringerLink, and Google Scholar. Inclusion/exclusion, study quality assessment, data extraction, and synthesis were performed in depth according to the guidelines for performing SLRs. Grounded theory was used to analyze the literature data, and Kappa analysis was used to check the level of agreement between the two researchers. Results: Through the SLR we identified 46 techniques, classified into automated and semi-automated debugging and bug fixing. The strengths and weaknesses of each are identified, along with the types of bugs each can fix and the languages in which each can be implemented. Finally, a classification is performed that yields a list of approaches, techniques, tools, frameworks, methods, and systems; alongside this classification and categorization, we separated bug fixing and debugging on the basis of search algorithms. Conclusion: The result is a catalog of the automated and semi-automated debugging and bug fixing solutions available in the literature, together with their strengths/benefits and weaknesses/limitations, the types of bugs they can fix, and the programming languages in which they can be implemented.
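Since the review checks inter-rater agreement with Kappa analysis, a small worked example may help. The sketch below computes Cohen's kappa over hypothetical include/exclude decisions by two reviewers; the data are invented and the formula is the textbook one, not the authors' tooling.

```python
# Cohen's kappa for inter-rater agreement, as used to check agreement
# between two researchers during study selection. Hypothetical data: each
# list holds one include/exclude decision per candidate paper.

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

a = ["inc", "inc", "exc", "exc", "inc", "exc", "inc", "exc"]
b = ["inc", "exc", "exc", "exc", "inc", "exc", "inc", "inc"]
print(round(cohen_kappa(a, b), 3))  # 0.5, i.e. "moderate" agreement
```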
274

Execution trace management to support dynamic V&V for executable DSMLs / Gestion de traces d'exécution pour permettre la vérification et la validation pour des langages de modélisation dédiés exécutables

Bousse, Erwan 03 December 2015
Dynamic verification and validation (V&V) techniques are required to ensure the correctness of executable models. Most of these techniques rely on the concept of an execution trace, a sequence containing information about an execution. Therefore, to enable dynamic V&V of executable models conforming to any executable domain-specific modeling language (xDSML), it is crucial to provide efficient facilities to construct and manipulate all kinds of execution traces. To that end, we first propose a scalable model cloning approach to conveniently construct generic execution traces using model clones. Using a random metamodel generator, we show that this approach is scalable in memory with little manipulation overhead. We then present a generative approach to define multidimensional and domain-specific execution trace metamodels, which consists in creating the execution trace data structure specific to an xDSML. Thereby, execution traces of models conforming to this xDSML can be efficiently captured and manipulated in a domain-specific way. We apply this approach to two existing dynamic V&V techniques, namely semantic differencing and omniscient debugging, and show that such a generated execution trace metamodel provides good usability and scalability for early dynamic V&V support for any xDSML. Our work has been implemented and integrated within the GEMOC Studio, a language and modeling workbench resulting from the eponymous international initiative.
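The clone-based trace construction at the heart of the first contribution can be conveyed in a few lines: after every execution step, clone the model state and append the clone to the trace. The Python sketch below uses an invented toy model and naive deep copies; the actual work generates domain-specific trace metamodels and scales by sharing unchanged state between clones.

```python
import copy

# Minimal sketch of clone-based execution tracing: after every execution
# step, the executed model's state is cloned and appended to the trace.
# The toy model and naive deepcopy are illustrative only.

class PetriNetModel:
    """Toy executable model: a two-place Petri net with one transition."""
    def __init__(self):
        self.tokens = {"p1": 2, "p2": 0}

    def step(self):  # fire the transition p1 -> p2
        if self.tokens["p1"] > 0:
            self.tokens["p1"] -= 1
            self.tokens["p2"] += 1

class Trace:
    def __init__(self):
        self.states = []

    def capture(self, model):
        self.states.append(copy.deepcopy(model))  # clone the current state

model, trace = PetriNetModel(), Trace()
trace.capture(model)
while model.tokens["p1"] > 0:
    model.step()
    trace.capture(model)

# Dynamic V&V over the trace, e.g. omniscient stepping back in time:
print([s.tokens for s in trace.states])
# [{'p1': 2, 'p2': 0}, {'p1': 1, 'p2': 1}, {'p1': 0, 'p2': 2}]
```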
275

Fostering User Involvement in Ontology Alignment and Alignment Evaluation

Ivanova, Valentina January 2017
The abundance of data at our disposal empowers data-driven applications and decision making. The knowledge captured in the data, however, has not been utilized to its full potential, as it is only accessible through human interpretation and the data are distributed across heterogeneous repositories. Ontologies are a key technology for unlocking the knowledge in the data, providing means to model the world around us and to infer knowledge implicitly captured in the data. As data are hosted by independent organizations, we often need to use several ontologies and discover the relationships between them in order to support data and knowledge transfer. Broadly speaking, while ontologies provide formal representations and thus the basis, ontology alignment supplies integration techniques and thus the means to turn the data kept in distributed, heterogeneous repositories into valuable knowledge. While many automatic approaches for creating alignments have already been developed, user input is still required for obtaining the highest-quality alignments. This thesis focuses on supporting users during the cognitively intensive alignment process and makes several contributions. We have identified front- and back-end system features that foster user involvement during the alignment process and have investigated their support in existing systems through user interface evaluations and literature studies. We have further narrowed our investigation down to features connected to the arguably most cognitively demanding task from the users' perspective, manual validation, and have also considered the level of user expertise by assessing the impact of user errors on alignment quality. As developing and aligning ontologies is an error-prone task, we have focused on the benefits of integrating ontology alignment and debugging. We have enabled interactive comparative exploration and evaluation of multiple alignments at different levels of detail by developing a dedicated visual environment, Alignment Cubes, which allows alignments to be evaluated even in the absence of reference alignments. Inspired by the latest technological advances, we have investigated and identified three promising directions for the application of large, high-resolution displays in the field: improving navigation in the ontologies and their alignments, supporting reasoning, and fostering collaboration between users.
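To make the split between automatic matching and manual validation concrete, here is a minimal sketch that proposes candidate mappings by label similarity and defers uncertain pairs to the user, the validation step the thesis studies. Real alignment systems combine lexical, structural, and semantic matchers; the ontologies and thresholds below are invented.

```python
from difflib import SequenceMatcher

# Toy illustration of the automatic step that precedes manual validation:
# suggest mappings between two ontologies' concept labels by string
# similarity, then hand low-confidence pairs to the user.

onto_a = ["Author", "Paper", "Conference"]
onto_b = ["Writer", "Article", "Paper", "Meeting"]

def similarity(x, y):
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

suggestions = sorted(
    ((a, b, round(similarity(a, b), 2)) for a in onto_a for b in onto_b),
    key=lambda t: -t[2],
)

for a, b, score in suggestions:
    if score >= 0.9:
        print(f"accept   {a} = {b}  ({score})")
    elif score >= 0.4:
        print(f"ask user: {a} = {b}? ({score})")  # manual validation step
```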
276

Assessment of spectrum-based fault localization for practical use / Avaliação de localização de defeitos baseada em espectro para uso prático

Higor Amario de Souza 17 April 2018
Debugging is one of the most time-consuming activities in software development. Several fault localization techniques have been proposed in recent years, aiming to reduce development costs. A promising approach, called Spectrum-based Fault Localization (SFL), comprises techniques that provide a list of suspicious program elements (e.g., statements, basic blocks, methods) more likely to be faulty. Developers are expected to inspect such a suspiciousness list to search for faults. However, these fault localization techniques are not yet used in practice. They are based on assumptions about the developer's behavior when inspecting such lists that may not hold in practice. A developer is supposed to inspect an SFL list from the most to the least suspicious program element until reaching the faulty one. This assumption leads to some implications: the techniques are assessed only by the position of a bug in a list, and a bug is deemed found when the faulty element is reached. To be useful in practice, SFL techniques should pinpoint the faulty program elements among the first picks. Most techniques use ranking metrics to assign suspiciousness values to the program elements executed by the tests. These ranking metrics have presented similarly modest results, which indicates the need for different strategies to improve the effectiveness of SFL. Moreover, most techniques use only control-flow spectra due to the high execution costs associated with other spectra, such as data flow. Also, little research has investigated the use of SFL techniques by practitioners. Understanding how developers use SFL may help to clarify the theoretical assumptions about their behavior, which in turn can contribute to the proposal of techniques that are more feasible for practical use. Therefore, user studies are a valuable tool for the development of the area. The goal of this thesis research was to propose strategies to improve spectrum-based fault localization, focusing on its practical use. This thesis presents the following contributions. First, we investigate strategies to provide contextual information for SFL; these strategies helped to reduce the amount of code that must be inspected before reaching the faults. Second, we carried out a user study to understand how developers use SFL in practice; the results show that developers can benefit from SFL to locate bugs. Third, we explore the use of data-flow spectra for SFL; data-flow spectra single out faults significantly better than control-flow spectra, improving fault localization effectiveness.
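For readers unfamiliar with SFL ranking metrics, the sketch below computes Ochiai suspiciousness, one commonly used metric, from a toy control-flow spectrum; the coverage data are invented. The element covered by all failing tests and no passing ones ends up ranked first, which is exactly the "first picks" property discussed above.

```python
from math import sqrt

# Spectrum-based fault localization with the Ochiai ranking metric.
# coverage[i][e] is True if test i executed element e; results[i] is True
# if test i failed. Elements are ranked by suspiciousness.

coverage = [
    {"s1": True,  "s2": True,  "s3": False},  # test 1 (passing)
    {"s1": True,  "s2": False, "s3": True},   # test 2 (failing)
    {"s1": True,  "s2": True,  "s3": True},   # test 3 (failing)
]
results = [False, True, True]  # True = failing test
total_failed = sum(results)

def ochiai(element):
    failed_cov = sum(1 for cov, fail in zip(coverage, results)
                     if fail and cov[element])
    passed_cov = sum(1 for cov, fail in zip(coverage, results)
                     if not fail and cov[element])
    denom = sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

ranking = sorted(coverage[0], key=ochiai, reverse=True)
for e in ranking:
    print(e, round(ochiai(e), 3))  # s3 1.0, s1 0.816, s2 0.5
```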
277

Réponses manquantes : Débogage et Réparation de requêtes / Query Debugging and Fixing to Recover Missing Query Results

Tzompanaki, Aikaterini 14 December 2015
With the increasing amount of available data and data transformations, typically specified by queries, the need to understand them also increases. "Why are there medicine books in my sales report?" or "Why are there not any database books?" For the first question we need to find the origins, or provenance, of the result tuples in the source data, a problem studied intensively for about twenty years. However, reasoning about missing query results, expressed by Why-Not questions such as the second one above, has until recently received little attention. Why-Not questions can be answered by providing explanations for the missing tuples. These explanations identify why and how data pertinent to the missing tuples were not properly combined by the query. Essentially, the causes lie either in the input data (e.g., erroneous or incomplete data) or at the query level (e.g., a query operator like join). Assuming that the source data contain all the necessary relevant information, we can identify the responsible query operators, forming query-based explanations. This information can then be used to propose query refinements that modify the responsible operators of the initial query such that the refined query result contains the expected data. This thesis proposes a framework for SQL query debugging and fixing to recover missing query results, based on query-based explanations and query refinements. Our contribution to query debugging consists of two different approaches. The first is tree-based: we provide the formal framework around Why-Not questions, missing from the state of the art; we review the state of the art in detail, showing how it can lead to inaccurate explanations or fail to provide an explanation at all; and we propose the NedExplain algorithm, which computes correct explanations for SPJA queries and unions thereof, thus considering more operators (aggregation) than the state of the art. We show experimentally that NedExplain outperforms the state of the art in both time performance and explanation quality. However, this approach yields explanations that differ across equivalent query trees, thus providing incomplete information about what is wrong with the query. We address this issue by introducing a more general notion of explanation using polynomials. The polynomial captures all the combinations in which the query conditions should be fixed in order for the missing tuples to appear in the result. This method targets conjunctive queries with inequalities. We propose two algorithms: Ted, which naively interprets the definition of polynomial explanations, and the optimized Ted++. We show that Ted does not scale well with the size of the database, whereas Ted++ efficiently computes the polynomial, relying on schema and data partitioning and on advantageously replacing expensive database evaluations with mathematical calculations. Finally, we experimentally evaluate the quality of the polynomial explanations and the efficiency of Ted++, including a comparative evaluation. For query fixing, we propose a new approach for refining a query by leveraging polynomial explanations: based on the input data, we adjust the constant values of the selection conditions pinpointed by the explanations, and in the case of joins we introduce a novel type of query refinement using outer joins. We devise the techniques to compute query refinements in the FixTed algorithm and discuss how our method has the potential to be more efficient and effective than related work. Finally, we have implemented both Ted++ and FixTed in a system prototype. The query debugging and fixing platform, EFQ for short, allows users to interactively debug and fix their queries when they have Why-Not questions.
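The intuition behind polynomial explanations can be shown on a toy relation: each source tuple compatible with the missing answer contributes the conjunction of the conditions it violates, and the disjunction over all such tuples describes every way to recover the answer. The brute-force sketch below, with invented data, only conveys this idea and is not the Ted/Ted++ algorithms.

```python
# Toy sketch of polynomial-style Why-Not explanations. For each source
# tuple compatible with the missing answer, collect the query conditions
# it violates; each such set is one way the query "loses" the answer.
# Ted/Ted++ encode all combinations in a single polynomial and avoid
# enumerating expensive intermediate results; all data here are invented.

books = [
    {"title": "DB Handbook", "topic": "databases", "price": 80, "stock": 3},
    {"title": "DB Primer",   "topic": "databases", "price": 30, "stock": 0},
]

# Query: SELECT title FROM books WHERE price < 50 AND stock > 0
conditions = {
    "price<50": lambda t: t["price"] < 50,
    "stock>0":  lambda t: t["stock"] > 0,
}

# Why-Not question: why is there no 'databases' book in the result?
compatible = [t for t in books if t["topic"] == "databases"]

for t in compatible:
    violated = [name for name, pred in conditions.items() if not pred(t)]
    print(t["title"], "is rejected by:", " AND ".join(violated))
# DB Handbook is rejected by: price<50
# DB Primer is rejected by: stock>0
# Explanation "polynomial": fix price<50 (for one tuple) OR stock>0 (for
# the other); either refinement makes a databases book appear.
```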
278

Comparing Mobile Applications' Energy Consumption

Wilke, Claas, Richly, Sebastian, Piechnick, Christian, Götz, Sebastian, Püschel, Georg, Aßmann, Uwe 17 January 2013
As mobile devices are nowadays used regularly and everywhere, their energy consumption has become a central concern for their users. However, mobile applications often do not take energy requirements into account, and users have to install and try them to reveal information about their energy behavior. In this paper, we compare mobile applications from two domains and show that applications exhibit different energy consumption while providing similar services. We define microbenchmarks for emailing and web browsing and evaluate applications from these domains. We show that non-functional features such as web page caching can, but do not necessarily, have a positive influence on applications' energy consumption.
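A minimal harness in the spirit of the paper's microbenchmarks might look as follows, with wall-clock time standing in for energy (the paper measures actual energy consumption on devices) and invented workloads playing the role of two applications offering the same service.

```python
import time, statistics

# Minimal microbenchmark harness: run the same workload on competing
# implementations and compare the median cost. Wall-clock time is only a
# stand-in for the energy measurements used in the paper.

def workload_a():
    sorted(range(10_000), key=lambda x: -x)

def workload_b():
    list(reversed(range(10_000)))

def benchmark(fn, runs=20):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

for fn in (workload_a, workload_b):
    print(fn.__name__, f"{benchmark(fn) * 1e3:.2f} ms")
# Same service, different cost: the paper's core observation for apps.
```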
279

DebEAQ - debugging empty-answer queries on large data graphs

Lehner, Wolfgang, Vasilyeva, Elena, Heinze, Thomas, Thiele, Maik 12 January 2023
The large volume of freely available graph data sets makes analyzing them difficult for users. To do so, users typically pose many pattern matching queries and study their answers. Without deep knowledge about the data graph, users can create ‘failing’ queries, which deliver empty answers. Analyzing the causes of these empty answers is a time-consuming and complicated task, especially for graph queries. To help users debug these ‘failing’ queries, there are two common approaches: one focuses on discovering missing subgraphs of the data graph; the other tries to rewrite the queries such that they deliver some results. In this demonstration, we combine both approaches and give users the opportunity to discover why their queries delivered empty results. To this end, we propose DebEAQ, a debugging tool for pattern matching queries that allows users to compare both approaches and also provides functionality to debug queries manually.
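The query-rewriting side of the demonstration can be sketched naively: when a pattern query returns no matches, drop one edge constraint at a time and re-evaluate, reporting the constraint that caused the empty answer. The data graph, pattern, and matching routine below are invented toys; DebEAQ is far more targeted.

```python
from itertools import permutations

# Naive sketch of empty-answer debugging by query relaxation: if a graph
# pattern has no match, remove one edge constraint at a time until some
# result appears, and report the constraint that made the query fail.

edges = {("alice", "bob"), ("bob", "carol")}       # tiny data graph
pattern = [("x", "y"), ("y", "z"), ("z", "x")]     # triangle: fails here

def matches(pattern, nodes=("alice", "bob", "carol")):
    found = []
    for assign in permutations(nodes, 3):
        binding = dict(zip("xyz", assign))
        if all((binding[a], binding[b]) in edges for a, b in pattern):
            found.append(binding)
    return found

if not matches(pattern):
    for i in range(len(pattern)):
        relaxed = pattern[:i] + pattern[i + 1:]
        if matches(relaxed):
            print("empty answer caused by edge", pattern[i])
            print("relaxed query matches:", matches(relaxed))
            break
```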
280

Statistical Estimation of Software Reliability and Failure-causing Effect

Shu, Gang 02 September 2014
No description available.
