51

Information Visualization and Machine Learning Applied on Static Code Analysis

Kacan, Denis, Sidlauskas, Darius January 2008 (has links)
Software engineers will possibly never see perfect source code in their lifetime, but they are seeing much better analysis tools for finding defects in software. The approaches used in static code analysis have evolved from simple code crawling to the use of statistical and probabilistic frameworks. This work presents a new technique that incorporates machine learning and information visualization into static code analysis. The technique learns patterns in a program's source code using a normalized compression distance and applies them to classify code fragments as faulty or correct. Since the classification is frequently imperfect, the training process plays an essential role. A visualization element is used in the hope that it lets the user better understand the inner state of the classifier, making the learning process transparent. An experimental evaluation is carried out to assess the efficacy of an implementation of the technique, the Code Distance Visualizer. The outcome of the evaluation indicates that the proposed technique is reasonably effective in learning to differentiate between faulty and correct code fragments, and that the visualization element enables the user to discern when the tool's output is correct and when it is not, and to take corrective action interactively (further training or retraining) until the desired level of performance is reached.
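The normalized compression distance used here has a standard definition, even though the abstract gives no implementation details. The Python sketch below illustrates it with zlib as the compressor; the nearest-neighbour classification rule and all names are illustrative assumptions, not the actual design of the Code Distance Visualizer.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length. Values near 0 suggest the
    fragments share structure; values near 1 suggest they do not.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(fragment: str, faulty: list[str], correct: list[str]) -> str:
    """Label a code fragment by its smallest NCD to labelled training
    fragments (a nearest-neighbour rule assumed here for illustration)."""
    f = fragment.encode()
    d_faulty = min(ncd(f, t.encode()) for t in faulty)
    d_correct = min(ncd(f, t.encode()) for t in correct)
    return "faulty" if d_faulty < d_correct else "correct"
```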
52

Combinational Watermarking for Medical Images

Chakravarthy Chinna Narayana Swamy, Thrilok 01 January 2015 (has links)
Digitization of medical data has become a very important part of the modern healthcare system. Data can be transmitted easily at any time to anywhere in the world over the Internet to get the best possible diagnosis for a patient. This digitized medical data must be protected at all times to preserve doctor-patient confidentiality, and watermarking can be used as an effective tool to achieve this. In this research project, image watermarking is performed in both the spatial domain and the frequency domain to embed a shared image, along with patient data that includes the patient identification number, into the medical image data. For the proposed system, the Structural Similarity (SSIM) index is used to measure the quality of the watermarking process instead of the Peak Signal to Noise Ratio (PSNR), since SSIM accounts for the visual perception of the images whereas PSNR relies only on intensity levels. The system response under ideal conditions as well as under the influence of noise was measured and the results were analyzed.
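For concreteness, the sketch below contrasts the two quality indices named in the abstract using scikit-image; the LSB embedding and the random images are stand-ins for illustration only, not the project's watermarking scheme or data.

```python
# Compare SSIM and PSNR as quality indices for a watermarked image.
# The LSB embedding is a simple spatial-domain stand-in, not the
# proposed system's actual embedding method.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in host image
mark = rng.integers(0, 2, size=(256, 256), dtype=np.uint8)     # binary watermark bits

# Spatial-domain embedding: replace the least significant bit of each pixel.
watermarked = (cover & 0xFE) | mark

print("PSNR:", peak_signal_noise_ratio(cover, watermarked))
print("SSIM:", structural_similarity(cover, watermarked))
```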
53

Does the Pareto Distribution of Hurricane Damage Inherit its Fat Tail from a Zipf Distribution of Assets at Hazard?

Hernandez, Javiera I 02 July 2014 (has links)
Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fitted to the most damaging 10% of US hurricane seasons described their impacts well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model that represents a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes whose sizes and intensities are generated through a Monte Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to Maximum Potential Intensity, landfall rates, and building structure vulnerability. The Z-CAT results demonstrate that a statistically significant difference in damage is found only when changes in the parameters create a doubling of damage.
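As a rough illustration of the idea (not the Z-CAT model itself), the toy Monte Carlo below scatters Zipf-distributed assets, damages a random subset of them per simulated storm, and estimates the Pareto tail exponent of the resulting damage; the hit probability, damage fractions, and sample sizes are arbitrary assumptions.

```python
# Toy Monte Carlo in the spirit of the Z-CAT idea: Zipf-distributed assets
# at hazard, random storm footprints and damage fractions, and a Hill
# estimate of the Pareto tail exponent of seasonal damage.
import numpy as np

rng = np.random.default_rng(1)
n_places, n_storms = 1000, 5000

# Zipf-distributed asset values: the k-th ranked place has value ~ 1/k.
assets = 1.0 / np.arange(1, n_places + 1)

damages = []
for _ in range(n_storms):
    hit = rng.random(n_places) < 0.02   # which places this storm strikes (assumed rate)
    frac = rng.random()                 # damage fraction (assumed uniform)
    damages.append(frac * assets[hit].sum())
damages = np.array(damages)

# Hill estimator of the Pareto tail exponent on the most damaging 10% of storms.
tail = np.sort(damages)[-len(damages) // 10:]
alpha = 1.0 / np.mean(np.log(tail / tail.min()))
print("estimated tail exponent:", round(alpha, 2))
```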
54

Information Retrieval by Identification of Signature Terms in Clusters

Muppalla, Sesha Sai Krishna Koundinya 24 May 2022 (has links)
No description available.
55

Production and Inefficiency

Bhattacharyya, Arunava 01 May 1990 (has links)
The overall purpose of this three-part dissertation is to specify and estimate various components of inefficiency in the production and profit-generating processes. Flexibility in inefficiency-measurement techniques is introduced using stochastic functional forms to overcome the restrictions of the simplifying assumptions used in previous studies. In addition, the profit function approach is used to measure firm-specific inefficiency and to view profit inefficiency in a multiple-output context. An empirical application of each approach is also attempted. The inefficiency measures of the first two essays are applied to data from Indian agriculture. The multiple-output model of the third essay is applied to data on U.S. unit banks taken from the Functional Cost Analysis program of the Federal Reserve banking system. In the first essay, a quasi-translog production function is introduced and allocative, technical, and scale inefficiencies are estimated for Indian agriculture with large and small farm divisions. The results obtained contradict earlier conclusions regarding the efficiency of Indian farms. In the second essay, a Normalized Restricted Profit function is used to estimate allocative, scale, and profit inefficiency for the same set of farms. The empirical results confirm the conclusions of the first essay. Technical inefficiency cannot be isolated in this case because its impact is confounded in the measure of profit inefficiency. In the third essay, a translog profit function is used to estimate profit and allocative inefficiency in U.S. banking operations.
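The translog specification named in the third essay has a standard generic form; the display below shows that generic form for a profit function in output prices p_i and fixed factors z_k, with notation assumed for illustration rather than taken from the dissertation.

```latex
\ln \pi = \alpha_0 + \sum_i \alpha_i \ln p_i
        + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
        + \sum_k \beta_k \ln z_k
        + \tfrac{1}{2}\sum_k \sum_l \delta_{kl} \ln z_k \ln z_l
        + \sum_i \sum_k \phi_{ik} \ln p_i \ln z_k,
\qquad \gamma_{ij} = \gamma_{ji}.
```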
56

Digital Soil Mapping Using Landscape Stratification for Arid Rangelands in the Eastern Great Basin, Central Utah

Fonnesbeck, Brook B. 01 May 2015 (has links)
Digital soil mapping typically involves inputs of digital elevation models, remotely sensed imagery, and other spatially explicit digital data as environmental covariates to predict soil classes and attributes over a landscape using statistical models. Digital imagery from Landsat 5, a digital elevation model, and a digital geology map were used as environmental covariates in a 67,000-ha study area of the Great Basin west of Fillmore, UT. A “pre-map” was created for selecting sampling locations. Several indices were derived from the Landsat imagery, including a normalized difference vegetation index and normalized difference ratios from bands 5/2, 5/7, 4/7, and 5/4. Slope, topographic curvature, inverse wetness index, and area solar radiation were calculated from the digital elevation model. The greatest variation across the study area was captured by calculating the Optimum Index Factor of the covariates, choosing band 7, the normalized difference ratio of bands 5/2, the normalized difference vegetation index, slope, profile curvature, and area solar radiation. A 20-class ISODATA unsupervised classification of these six data layers was reduced to 12 classes. Comparing the 12-class map to a geologic map, 166 sites were chosen, weighted by areal extent; 158 sites were visited. Twelve points were added using case-based reasoning, for a total of 170 points for model training. A validation set of 50 sites was selected using conditioned Latin Hypercube Sampling. Density plots of the sample sets compared to the raw data produced comparable results. Geology was used to stratify the study area into areas above and below the Lake Bonneville highstand shoreline. Raster data were subset to these areas, and predictions were made on each area. Spatial modeling was performed with three different models: random forests, support vector machines, and bagged classification trees. A set of covariates selected by random forests variable importance and the set of Optimum Index Factor covariates were used in the models. The Optimum Index Factor covariates produced the best classification using random forests, with a classification accuracy of 45.7%. The predictive rasters may not be useful for soil map unit delineation, but a hybrid method that uses the pre-map and standard sampling techniques to guide further sampling can produce a reasonable soil map.
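A minimal sketch of the random-forest step described above is given below, assuming a site-by-covariate table; the file names, column names, and hyperparameters are placeholders, not the study's actual setup.

```python
# Train a random forest on field sites, validate on the cLHS holdout set,
# and inspect variable importance as used for covariate selection.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

covariates = ["band7", "ndr_5_2", "ndvi", "slope", "profile_curv", "solar_rad"]

train = pd.read_csv("training_sites.csv")    # hypothetical table of 170 training sites
valid = pd.read_csv("validation_sites.csv")  # hypothetical table of 50 validation sites

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train[covariates], train["soil_class"])

pred = rf.predict(valid[covariates])
print("classification accuracy:", accuracy_score(valid["soil_class"], pred))

# Variable importance, the basis for choosing a reduced covariate set.
print(sorted(zip(rf.feature_importances_, covariates), reverse=True))
```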
57

Parallel Processing For Adaptive Optics Optical Coherence Tomography (AO-OCT) Image Registration Using GPU

Do, Nhan Hieu 08 July 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive Optics Optical Coherence Tomography (AO-OCT) is a high-speed, high-resolution ophthalmic imaging technique offering detailed 3D analysis of retinal structure in vivo. However, AO-OCT volume images are sensitive to involuntary eye movements that occur even during steady fixation, including tremor, drifts, and micro-saccades. To correct eye motion artifacts within a volume and to stabilize a sequence of volumes acquired over the same retinal area, we propose a stripe-wise 3D image registration algorithm based on phase correlation. In addition, using several techniques (a coarse-to-fine approach, spike noise filtering, pre-computation caching, and parallel processing on a GPU), our approach can register a volume of size 512 x 512 x 512 in less than 6 seconds, a 33x speedup over an equivalent CPU version in MATLAB. Moreover, our 3D registration approach is reliable even in the presence of large motions (micro-saccades) that distort the volumes; such motion was an obstacle for a previous en face approach based on 2D projected images. The thesis also investigates GPU implementations of 3D phase correlation and 2D normalized cross-correlation, which could be useful for other image processing algorithms.
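The core operation being accelerated, shift estimation by phase correlation, can be sketched on the CPU with NumPy as below; the function name and test volume are illustrative assumptions, not code from the thesis.

```python
# CPU reference sketch of 3D phase correlation for rigid shift estimation.
import numpy as np

def phase_correlation_shift(vol_a: np.ndarray, vol_b: np.ndarray) -> tuple:
    """Estimate the integer 3D shift that aligns vol_b onto vol_a."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, discard magnitude
    corr = np.fft.ifftn(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Example: shift a random volume by a known amount and recover the shift.
rng = np.random.default_rng(0)
a = rng.random((32, 32, 32))
b = np.roll(a, shift=(3, -5, 2), axis=(0, 1, 2))
print(phase_correlation_shift(b, a))   # expected to recover roughly (3, -5, 2)
```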
58

Image-based Flight Data Acquisition

Bassie, Abby L 04 May 2018 (has links)
Flight data recorders (FDRs) play a critical role in determining the root causes of aviation mishaps. Some aircraft record only limited amounts of information during flight (e.g., the T-1A Jayhawk), while others carry no FDR at all (e.g., the B-52 Stratofortress). This study explores the use of image-based flight data acquisition to overcome a lack of available digitally recorded FDR data. In this work, images of cockpit gauges were unwrapped vertically, and 2-D cross-correlation was performed on each image of the unwrapped gauge versus a template of the unwrapped gauge needle. Points of high correlation between the unwrapped gauge and the needle template were used to locate the gauge needle, and interpolation and extrapolation based on the locations of gauge tick marks were performed to quantify the value to which the needle pointed. Results suggest that image-based flight data acquisition could provide key support to mishap investigations when aircraft lack sufficient FDR data.
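A sketch of the needle-location step is given below, assuming scikit-image for the normalized cross-correlation; the file names, tick positions, and gauge values are placeholders rather than values from the study.

```python
# Locate the needle in an unwrapped gauge image by normalized 2-D
# cross-correlation, then map its column position to a gauge reading.
import numpy as np
from skimage.feature import match_template
from skimage.io import imread

gauge = imread("unwrapped_gauge.png", as_gray=True)    # hypothetical unwrapped gauge strip
needle = imread("needle_template.png", as_gray=True)   # hypothetical unwrapped needle template

corr = match_template(gauge, needle)                   # normalized cross-correlation surface
row, col = np.unravel_index(np.argmax(corr), corr.shape)

# Interpolate between known tick-mark columns and their values to get a
# reading (np.interp clamps outside the tick range rather than extrapolating).
tick_cols = np.array([40, 120, 200, 280, 360])         # placeholder tick locations (pixels)
tick_vals = np.array([0, 25, 50, 75, 100])             # placeholder gauge values
reading = np.interp(col, tick_cols, tick_vals)
print("estimated gauge reading:", reading)
```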
59

Effect of Carbon Steel Composition and Microstructure on CO2 Corrosion

Akeer, Emad S. 22 September 2014 (has links)
No description available.
60

Iron Carbide Development and its Effect on Inhibitor Performance

Al-Asadi, Akram A. January 2014 (has links)
No description available.
