481

Étude de signaux laser speckle : méthodes pour la mesure de paramètres hémodynamiques de la microcirculation et de la macrocirculation / Methods for hemodynamic parameters measurement using the laser speckle effect in macro and microcirculation

Vaz, Pedro Guilherme 12 December 2016 (has links)
Laser speckle is an interference effect that has long been considered a drawback in the use of coherent light sources. For a specific set of applications, however, this effect can become a source of information; the biomedical field is one of them. Laser speckle has been used for decades to monitor microvascular blood flow, and only now is it starting to be considered as a method for extracting macrocirculation parameters as well. This work first aims at demonstrating that laser speckle can be used for macrocirculation assessment with good reliability, using the same technique as the one employed in microcirculation assessment. Using the same methods could lead to a rapid inclusion of this new evaluation in existing devices. Furthermore, one of the most important laser speckle issues, which prevents a fully quantitative analysis, is the effect of static scatterers. This type of scatterer strongly influences the speckle contrast, leading to a wrong interpretation of the data. The second objective of this work is therefore to study the effect of static scatterers on laser speckle correlation and contrast. Our results show that laser speckle is an interesting phenomenon for extracting hemodynamic parameters in the macrocirculation. This work also demonstrates that laser speckle correlation can estimate the ratio between static and dynamic scatterers with good reliability. Moreover, the temporal speckle contrast achieved very good performance in discerning dynamic scatterers with different velocities.
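As a rough illustration of the speckle-contrast quantity this abstract refers to (not the author's implementation), the sketch below computes spatial speckle contrast K = sigma / mean over a sliding window; the window size and the simulated input frame are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(intensity, window=7):
        """Spatial speckle contrast K = std / mean over a sliding window.

        Low K indicates dynamic scatterers blurring the speckle during the
        exposure; K close to 1 indicates mostly static scatterers.
        """
        img = intensity.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img ** 2, size=window)
        variance = np.clip(mean_sq - mean ** 2, 0.0, None)
        return np.sqrt(variance) / np.maximum(mean, 1e-12)

    # Simulated fully developed speckle frame (exponential intensity statistics).
    frame = np.random.default_rng(0).gamma(shape=1.0, scale=1.0, size=(256, 256))
    print(speckle_contrast(frame).mean())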
482

Identification and Analysis of Combined Quality Assurance Approaches

Nha, Vi Tran Ngoc January 2010 (has links)
Context: Due to the increasing size and complexity of software today, the amount of effort for software quality assurance (QA) is growing and becoming more and more expensive. Many techniques can lead to improvements in software QA. Static analysis can obtain very good coverage while analyzing a program without executing it, but it suffers from imprecision in the form of false errors. In contrast, dynamic analysis can obtain only partial coverage due to the large number of possible test cases, but the reported errors are more precise. Static and dynamic analyses can complement each other by providing valuable information that would be missed when either analysis technique is used in isolation. Although many studies investigate QA approaches that combine static and dynamic QA techniques, it is unclear what we have learned from these studies, because no systematic synthesis exists to date. Method: This thesis is intended to provide the basic key concepts for combined QA approaches. A major part of this thesis presents a systematic review that gives a detailed discussion of the state of the art of approaches combining static and dynamic QA techniques. The systematic review is aimed at identifying existing combined QA approaches, how to classify them, their purposes and inputs, as well as which combinations are available. Result: The results show that there are two relations in the combination of static and dynamic techniques: integration and separation. In addition, the objectives of combined QA approaches are presented with respect to QA process quality and product quality. The most common inputs for combined approaches are also discussed. Moreover, we identify which combinations of static and dynamic techniques should or should not be used, as well as potential combinations for further research.
483

Intelligent Code Inspection using Static Code Features : An approach for Java

Moriggl, Irene January 2010 (has links)
Effective defect detection is still a hot issue when it comes to software quality assurance. Static source code analysis plays an important role here, since it offers the possibility of automated defect detection in early stages of development. As detecting defects can be seen as a classification problem, machine learning has recently been investigated for this purpose. This study presents a new model for automated defect detection by means of machine learners based on static Java code features. The model comprises the extraction of the necessary features as well as the application of suitable classifiers to them. It is realized by a prototype for the feature extraction and a study of the prototype's output in order to identify the most suitable classifiers. Finally, the overall approach is evaluated using an open source project. The suitability study and the evaluation show that several classifiers are suitable for the model and that the Rotation Forest, Multilayer Perceptron and JRip classifiers make the approach most effective, detecting defects with an accuracy higher than 96%. Although the approach comprises only a prototype, it shows the potential to become an effective alternative to today's defect detection methods.
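Rotation Forest and JRip are Weka classifiers with no direct scikit-learn equivalents, so the hedged sketch below uses a multilayer perceptron and a random forest as stand-ins, trained on a synthetic placeholder feature matrix rather than on real static Java code features; it only illustrates the classification step, not the author's prototype.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder data: one row per Java class with static code features
    # (e.g. lines of code, complexity, coupling); y marks defective classes.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 2, size=200)

    classifiers = {
        "MultilayerPerceptron": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500)),
        "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy = {scores.mean():.3f}")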
484

Information Visualization and Machine Learning Applied on Static Code Analysis

Kacan, Denis, Sidlauskas, Darius January 2008 (has links)
Software engineers will possibly never see perfect source code in their lifetime, but they are seeing much better analysis tools for finding defects in software. The approaches used in static code analysis have evolved from simple code crawling to the use of statistical and probabilistic frameworks. This work presents a new technique that incorporates machine learning and information visualization into static code analysis. The technique learns patterns in a program's source code using the normalized compression distance and applies them to classify code fragments as faulty or correct. Since the classification is frequently imperfect, the training process plays an essential role. A visualization element is used in the hope that it lets the user better understand the inner state of the classifier, making the learning process transparent. An experimental evaluation is carried out in order to prove the efficacy of an implementation of the technique, the Code Distance Visualizer. The outcome of the evaluation indicates that the proposed technique is reasonably effective in learning to differentiate between faulty and correct code fragments, and that the visualization element enables the user to discern when the tool is correct in its output and when it is not, and to take corrective action (further training or retraining) interactively until the desired level of performance is reached.
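For reference, the normalized compression distance mentioned above has a standard definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length. The minimal sketch below computes it with zlib as the compressor; the choice of zlib and the example fragments are assumptions, not details taken from the thesis.

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: values near 0 mean the inputs are similar."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Two hypothetical code fragments; a classifier could compare a new fragment
    # against sets of known-faulty and known-correct fragments and pick the closer set.
    a = b"for (i = 0; i < n; i++) sum += a[i];"
    b = b"for (j = 0; j < m; j++) total += b[j];"
    print(ncd(a, b))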
485

Static Code Features for a Machine Learning based Inspection : An approach for C

Tribus, Hannes January 2010 (has links)
Delivering fault-free code is the clear goal of every developer; however, the best method to achieve this aim is still an open question. Although several approaches have been proposed in the literature, there is no overall best way. One possible solution proposed recently is to combine static source code analysis with the discipline of machine learning. An approach in this direction has been defined within this work, implemented as a prototype and subsequently validated. It shows a possible translation of a piece of source code into a machine learning algorithm's input and, furthermore, its suitability for the task of fault detection. In the context of the present work, two prototypes have been developed to show the feasibility of the presented idea. The output they generated on open source projects has been collected and used to train and rank various machine learning classifiers in terms of accuracy, false positive and false negative rates. The best among them have subsequently been validated again on an open source project. In the first study, at least six classifiers, including "MultiLayerPerceptron", "Ibk" and "ADABoost" on a "BFTree", performed convincingly. All except the latter, which failed completely, could be validated in the second study. Although it is only a prototype, it shows the suitability of some machine learning algorithms for static source code analysis.
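The accuracy and false positive / false negative rates used to rank the classifiers in this abstract come straight from a confusion matrix. As a small illustration, with hypothetical counts rather than figures from the thesis, they could be computed like this:

    def rates(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Accuracy, false positive rate and false negative rate from a confusion matrix."""
        return {
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
            "false_positive_rate": fp / (fp + tn),  # correct code flagged as faulty
            "false_negative_rate": fn / (fn + tp),  # faulty code the classifier missed
        }

    # Hypothetical counts for one classifier on a validation project.
    print(rates(tp=120, fp=30, tn=800, fn=50))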
486

Design and Implementation of a Source Code Profiling Toolset for Embedded System Analysis

Qin, An January 2010 (has links)
The market demand for embedded and mobile devices has exploded in the last few years. Customers demand devices that not only have a high capacity for managing various complex jobs, but can also do so fast. Manufacturers are therefore looking for a new class of processors that fits the special needs of the embedded market, for example low power consumption and high integration of most components, while still providing the ability to handle different use cases. Traditional ASICs satisfied the market with great performance per watt but limited scalability. ASIP processors, on the other hand, address the new market with the ability to perform high-speed optimized general computing while their energy efficiency is only slightly lower than that of ASICs. One essential problem in ASIP design is how to find the algorithms that can be accelerated. Hardware engineers used to optimize the instruction set manually, but with the toolset introduced in this thesis, design automation can be achieved through program profiling and the development cycle can be trimmed, thereby reducing cost. Profiling is the process of exposing critical parts of a certain program via static code analysis or dynamic performance analysis. This thesis introduces a code profiler that targets the discovery of repetitive sections of a program through static and dynamic analysis. The profiler also measures the payload of each loop and provides a profiling report through a user-friendly GUI client.
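As a toy stand-in for the dynamic side of such a profiler (not the thesis toolset, which targets embedded code), the sketch below counts how often each source line of a traced function executes, which makes hot loops stand out:

    import sys
    from collections import Counter

    def hot_lines(func, *args, top=5):
        """Crude dynamic profiling: count how often each source line of `func` executes."""
        counts = Counter()

        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is func.__code__:
                counts[frame.f_lineno] += 1
            return tracer

        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return counts.most_common(top)

    def toy(n):
        total = 0
        for i in range(n):      # this loop body dominates the line counts
            total += i * i
        return total

    print(hot_lines(toy, 1000))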
487

Code Profiling : Static Code Analysis

Borchert, Thomas January 2008 (has links)
Capturing the quality of software and detecting sections within it that warrant further scrutiny are of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and to initiate appropriate improvement actions, and teachers need to identify students who require extra attention and help with certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality. In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and applied to a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the measures produced, quickly and without high expense. The results of this study indicate that code profiling can be a useful technique for quick program comparisons and continuous quality observation, with several application scenarios in both industry and education.
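A code profile in this sense is simply a set of measures summarising a program. The sketch below computes a few trivial ones for a single source file; the specific measures and the file name are illustrative assumptions, not the measurement suite used in the study.

    from pathlib import Path

    def profile_source(path: str) -> dict:
        """A toy code profile: a few simple measures summarising one source file."""
        lines = Path(path).read_text(errors="ignore").splitlines()
        stripped = [ln.strip() for ln in lines]
        return {
            "total_lines": len(lines),
            "blank_lines": sum(1 for ln in stripped if not ln),
            "comment_lines": sum(1 for ln in stripped if ln.startswith(("//", "/*", "*"))),
            "longest_line": max((len(ln) for ln in lines), default=0),
        }

    # Hypothetical usage: profiles of two programs can then be compared side by side.
    # print(profile_source("Example.java"))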
488

A comparison of helium dilution and plethysmography in measuring static lung volumes

Guldbrand, Anna January 2008 (has links)
In order to examine the usefulness of the multi-breath helium dilution method (MB), it was compared to the single-breath helium dilution method (SB) and body plethysmography (BP). Residual volume (RV), total lung capacity (TLC) and vital capacity (VC) were measured in seventeen subjects with obstructive (11) or restrictive (6) lung disease and four normal subjects. With information from professional literature and current periodicals, the advantages and disadvantages of all three methods were compared. ANOVA and Student's t-test were performed on the measurement results. The results of the statistical tests show that there are differences among the methods in the group of obstructive patients. They also reveal a notable difference between the MB and SB methods when measuring the same parameter. In addition, it was noted that none of the existing sets of prediction equations fulfils the requirements established for high-quality lung function testing. Although a thorough evaluation of the reproducibility of the MB method is still required, it appears to be a viable alternative to body plethysmography. We claim that measuring the above-mentioned static lung volumes with only the single-breath helium dilution method cannot be considered satisfactory practice.
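Both dilution methods rest on the same conservation principle: helium is essentially not taken up by the blood, so the amount of helium before and after equilibration is equal. A minimal sketch of that computation, with hypothetical numbers rather than data from the study:

    def frc_helium_dilution(v_spirometer_l: float, he_initial: float, he_final: float) -> float:
        """Functional residual capacity from the closed-circuit helium dilution principle.

        Conservation of helium: v_spirometer * he_initial = (v_spirometer + FRC) * he_final.
        Concentrations are fractions (0.10 = 10 %), volumes in litres.
        """
        return v_spirometer_l * (he_initial - he_final) / he_final

    # Hypothetical numbers: 3 L of gas in the circuit, helium diluted from 10 % to 6 %.
    print(frc_helium_dilution(3.0, 0.10, 0.06))  # -> 2.0 L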
489

Static Code Analysis: A Systematic Literature Review and an Industrial Survey

Ilyas, Bilal, Elkhalifa, Islam January 2016 (has links)
Context: Static code analysis is a software verification technique that refers to the process of examining code without executing it in order to capture defects early, avoiding costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry, and in turn preventing feedback from industry that could improve, guide and orient research. Studies emphasize rigor and relevance as important criteria to assess the quality and realism of research. Rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, the empirical evaluations in this field are few in number and do not take rigor and relevance into consideration. Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. This study has two main objectives. First, developing guidelines for researchers by exploring the existing research work in static code analysis to identify the current status, shortcomings, rigor and industrial relevance of the research, and the reported benefits/limitations of different static code analysis techniques, and finally giving recommendations to researchers to help make future research more industrially oriented. Second, developing guidelines for practitioners by investigating the adoption of different static code analysis techniques in industry and identifying the benefits/limitations of these techniques as perceived by industrial professionals, then cross-analyzing the findings of the SLR and the survey to draw final conclusions, and finally giving recommendations to professionals to help them decide which techniques to adopt. Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (systematic literature review) followed by the collection and analysis of quantitative data (survey), has been used to conduct this research. In order to achieve the first objective, a thorough systematic literature review was conducted using the Kitchenham guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry in order to collect their responses regarding the usage of different static code analysis techniques, as well as their benefits and limitations. The quantitative data obtained were subjected to statistical analysis for further interpretation and for drawing results. Results: In static code analysis research, inspections and static analysis tools have received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research work in the static code analysis field significantly lacks rigor and relevance, and the reasons behind this have been identified. Recommendations are developed outlining how to improve static code analysis research and make it more industrially oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used. The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified along with the influential factors. Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that such a technique is better than the others. Experiments have been used widely as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improving relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in practice in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type and software life-cycle model do influence professionals' perception of the benefits and limitations of different static code analysis techniques.
490

Static code metrics vs. process metrics for software fault prediction using Bayesian network learners

Stanic, Biljana January 2015 (has links)
Software fault prediction (SFP) has an important role in the process of improving software product quality by identifying fault-prone modules. Constructing quality models includes the usage of metrics that describe real-world entities by numbers or attributes. Examining the nature of machine learning (ML), researchers have proposed its algorithms as suitable for fault prediction, with the information contained in software metrics serving as the statistical data needed to build models for a given ML algorithm. One of the most used ML algorithms is the Bayesian network (BN), which is represented as a graph with a set of variables and the relations between them. This thesis focuses on the usage of process and static code metrics with BN learners for SFP. First, we provide an informal review of non-static code metrics. Furthermore, we create models containing different combinations of process and static code metrics and use them to conduct an experiment. The results of the experiment are statistically analyzed using a non-parametric test, the Kruskal-Wallis test. The informal review reports that non-static code metrics are beneficial for the prediction process and that their usage is highly recommended for industrial projects. Finally, the experimental results do not provide a conclusion as to which process metric gives a statistically significant result; therefore, further investigation is needed.
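For illustration only: the Kruskal-Wallis test mentioned above compares whether several samples come from the same distribution without assuming normality. A minimal sketch using scipy, with hypothetical scores standing in for the experiment's actual results:

    from scipy.stats import kruskal

    # Hypothetical AUC scores of a Bayesian network learner trained on three metric sets
    # (static code metrics, process metrics, their combination) over repeated runs.
    auc_static = [0.71, 0.69, 0.73, 0.70, 0.72]
    auc_process = [0.74, 0.76, 0.73, 0.75, 0.77]
    auc_combined = [0.75, 0.74, 0.78, 0.76, 0.77]

    statistic, p_value = kruskal(auc_static, auc_process, auc_combined)
    print(f"H = {statistic:.3f}, p = {p_value:.4f}")
    # A p-value above the chosen significance level means no statistically significant
    # difference between the metric sets can be claimed.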
