About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Evaluating the Effectiveness of Certain Metrics in Measuring the Quality of End User Documentation

Morrison, Ronald 01 January 1993 (has links)
Traditional methods of evaluating quality in computer end user documentation have been subjective in nature and have not been widely used in practice. Attempts to quantify quality and more narrowly define the essential features of quality have been limited, leaving the issue of quality largely up to the writer of the user manual. Quantifiable measures from the literature, especially Velotta (1992) and Brockman (1990), have been assembled into a set of uniformly weighted metrics for the measurement of document quality. This measure has been applied to the end user documentation of eighty-two personal computer packages. End user documentation is defined in terms of paper documents only. The research examined only those manuals titled "user guide," "training manual," "tutorial," or a similar title. The research examined six categories of software: applications, graphics, utilities, spreadsheets, databases, and word processing. Following the recommendation of Duffy (1985), a panel of experts was assembled and asked to evaluate several of the 82 end user manuals in order to determine what correlation exists between the set of metrics and the subjective opinion of experts. The eighty-two documents in the sample were scored by the metrics using a convenient random sampling technique, selected based on the consistency of the material in commercial software manuals and the methods of Velotta (1992). Data from the metrics suggest that there is little correlation between quality and category, price, page length, version number, or experience. On a scale of 0.0 to 1.0, the minimum total score from the metrics was .2 and the maximum .83; the mean total score was .70 and the median .697, with a standard deviation of .093. The distribution is slightly skewed and leptokurtic (steeper than a normal curve). The metrics further suggest a declining score as the integration of sentences into chapters and chapters into the document progresses. Two of the metrics consistently had lower scores: those relating to the transitions between sections of the document, and those relating to the reference tools provided. Though not conclusive, the analysis of data from the panel of experts compared with the model results suggests only a moderate correlation. However, by varying the weighting scheme, it is possible to improve model performance, essentially by "tuning" the model to match the sample data from the panelists. Further research would be required to verify whether these weights have more global application.
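The scoring and correlation steps described above lend themselves to a short sketch: uniformly weighted metric scores combine into a total in [0.0, 1.0], and the totals are correlated with expert ratings. The metric values and expert ratings below are hypothetical, the actual metric definitions (drawn from Velotta and Brockman) are not reproduced, and the optional weight vector merely stands in for the "tuning" step the author mentions.

```python
import statistics

def total_score(metric_scores, weights=None):
    """Combine per-metric scores (each in [0, 1]) into a total document score.

    With no weights given, metrics are uniformly weighted, as in the
    original study; a custom weight vector models the "tuning" step.
    """
    if weights is None:
        weights = [1.0 / len(metric_scores)] * len(metric_scores)
    return sum(w * s for w, s in zip(weights, metric_scores))

def pearson_r(xs, ys):
    """Pearson correlation between metric totals and expert ratings."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: metric totals for five manuals vs. expert ratings.
metric_totals = [0.62, 0.71, 0.70, 0.55, 0.80]
expert_ratings = [3.0, 4.0, 3.5, 2.5, 4.5]
print(f"correlation with experts: {pearson_r(metric_totals, expert_ratings):.3f}")
```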
32

Mantis: A Predictive Driving Directions Recommendation System

Hoover, Christopher 01 June 2013 (has links) (PDF)
This thesis presents Mantis, a system designed to evaluate possible driving routes and recommend the optimal route based on current and predicted travel conditions. The system uses the Bing Maps REST service to obtain a set of routes. Traffic data from the California Department of Transportation’s Performance Measurement System (PeMS) is then used to estimate travel times for these routes. In addition to simple travel time estimation based on instantaneous traffic conditions, Mantis can use historic data to predict traffic speeds at future times. This allows Mantis to more effectively account for regularly repeating traffic patterns such as rush hour, increasing the accuracy of its travel time estimates. Mantis is also capable of monitoring traffic incidents reported by the California Highway Patrol and identifying incidents that will be encountered along a route’s path.
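The prediction step can be illustrated with a sketch. This is not Mantis's actual algorithm or the PeMS data model, and the route segments and hourly speeds below are invented: travel time accumulates segment by segment, looking up the historical speed for the hour at which each segment is predicted to be entered, which is how a rush-hour slowdown further along the route can affect the estimate.

```python
from datetime import datetime, timedelta

# Hypothetical route: (segment_length_miles, historical average speed in mph
# indexed by hour of day, 0-23). Real PeMS data provides measured speeds
# per detector station; these numbers are made up.
route = [
    (12.0, [65] * 7 + [38, 32, 45] + [62] * 14),
    (8.5,  [60] * 7 + [30, 28, 40] + [58] * 14),
]

def predicted_travel_time(route, depart: datetime) -> timedelta:
    """Estimate travel time by stepping through segments, using the
    historical speed for the hour at which each segment is entered."""
    clock = depart
    for length, speeds_by_hour in route:
        speed = speeds_by_hour[clock.hour]
        clock += timedelta(hours=length / speed)
    return clock - depart

print(predicted_travel_time(route, datetime(2013, 6, 3, 8, 0)))
```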
33

A Unified Resource Platform for the Rapid Development of Scalable Web Applications

Palmiter, Russell 01 January 2009 (has links) (PDF)
This thesis presents the Web Utility Kit (WUT): a platform that simplifies the process of creating modern web applications. It addresses the need to simplify the web development process through a hosted service that provides access to a unified set of resources. The resources are made available through a variety of protocols and formats to simplify their consumption, and a uniform model across all resources makes multi-resource development an easier and more familiar task. WUT saves the time and cost associated with deploying, maintaining, and hosting the hardware and software on which the resources depend. It has a relatively low overhead, averaging 123 ms per request, and has been shown capable of linear scaling, with each application server handling 120+ requests per minute. This ability to scale seamlessly with a developer's needs helps to eliminate an otherwise expensive scaling process. Initial users of the platform have found it extremely easy to use and have paved the way for future developments.
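The abstract does not document the WUT API itself, so the client below is entirely hypothetical (names, URL, and request shape are all invented); it only illustrates the idea of a uniform request/response model shared by all hosted resources, so that a developer consumes each new resource the same way as the last.

```python
import json
import urllib.request

class UnifiedClient:
    """Hypothetical client for a hosted, unified resource platform.

    Every resource (mail, storage, search, ...) is reached through the
    same base URL and the same request/response shape, so learning one
    resource teaches the pattern for all of them.
    """
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def call(self, resource: str, action: str, **params):
        body = json.dumps({"action": action, "params": params}).encode()
        req = urllib.request.Request(
            f"{self.base_url}/{resource}",
            data=body,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {self.api_key}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

# Same calling convention regardless of resource (all hypothetical):
# client = UnifiedClient("https://wut.example.com/api", "KEY")
# client.call("mail", "send", to="a@b.com", subject="hi")
# client.call("storage", "put", key="report.pdf", data="...")
```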
34

Analysis of the SLO Bay Microbiome from a Network Perspective

Nguyen, Lien Viet 01 July 2021 (has links) (PDF)
Microorganisms are key players in ecosystem functioning. In this thesis, we developed a framework to preprocess raw microbiome data, build a correlation network, and analyze co-occurrence patterns between microbes. We then applied this framework to a marine microbiome dataset. The dataset used in this study comes from a year-long time series characterizing the microbial communities in the coastal waters off the Cal Poly Pier. In analyzing this dataset, we were able to observe and confirm previously discovered patterns of interactions and generate hypotheses about new patterns. The analysis of co-occurrences between prokaryotic and eukaryotic taxa is relatively novel and can provide new insight into how marine microbial communities are structured and interact.
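A minimal sketch of the kind of co-occurrence network construction described above, assuming relative-abundance normalization, Spearman correlation, and arbitrary strength/significance thresholds; the thesis's actual preprocessing and cutoffs may differ, and the abundance table here is random stand-in data.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

# Hypothetical abundance table: rows = samples (weekly time points),
# columns = taxa (prokaryotic and eukaryotic).
rng = np.random.default_rng(0)
abundances = rng.poisson(lam=20, size=(52, 6)).astype(float)
taxa = [f"taxon_{i}" for i in range(abundances.shape[1])]

# Relative abundance per sample, a common normalization step.
rel = abundances / abundances.sum(axis=1, keepdims=True)

# Pairwise Spearman correlations; keep strong, significant edges.
graph = nx.Graph()
graph.add_nodes_from(taxa)
for i in range(len(taxa)):
    for j in range(i + 1, len(taxa)):
        rho, p = spearmanr(rel[:, i], rel[:, j])
        if abs(rho) >= 0.6 and p < 0.05:
            graph.add_edge(taxa[i], taxa[j], weight=rho)

print(graph.number_of_edges(), "co-occurrence edges")
```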
35

Acceleration of Jaccard's Index Algorithm for Training to Tag Damage on Post-earthquake Images

Mulligan, Kyle John 01 June 2018 (has links) (PDF)
There are ongoing efforts to use supervised neural networks (NNs) to automatically label damage on images of above-ground infrastructure (buildings made of concrete) taken after an earthquake. The goal of a supervised NN is to classify raw input data according to the patterns learned from an input training set. This training set is usually supplied by experts in the field; in the case of this project, structural engineers carefully and mostly manually label these images for different types of damage. The level of expertise of the professionals labeling the training set varies widely, and some data sets contain pictures that different people have labeled in different ways when in reality the label should have been the same. Therefore, we need several experts to evaluate the same data set; the bigger the ground truth/training set, the more accurate the NN classifier will be. Evaluating these variations among experts, which amounts to evaluating the quality of each expert using probabilistic theory, first requires a tool able to compare images classified by different experts and apply a certainty level to the experts' tagged labels. This master's thesis implements this comparative tool. We also implemented the tool using parallel programming paradigms, since we foresee that it will be used to train many young engineering students/professionals or even novice citizen volunteers ("trainees") during after-earthquake meetings and workshops. The implementation involves selecting around 200 photographs tagged by an expert with proven accuracy ("ground truth") and comparing them to files tagged by the trainees, who are then given instantaneous feedback on the accuracy of their damage assessment. Evaluating trainee results against the expert is not as simple as finding differences between two sets of image files: each trainee will select a slightly different sized area for the same occurrence of damage, and some damage-structure pairs are more difficult to recognize and tag. Results show that we can compare 500 files in 1.5 seconds, a 2x speedup over the sequential implementation.
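A sketch of the core comparison, assuming tagged damage regions are rasterized to binary masks: the Jaccard index (intersection over union) scores a trainee's region against the expert's, and pairs are scored in parallel. The masks below are random stand-ins, and the thesis's parallel implementation may use a different paradigm than the process pool shown here.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def jaccard_index(expert_mask: np.ndarray, trainee_mask: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary damage masks.

    A trainee who outlines roughly the same region as the expert scores
    near 1.0; disjoint tags score 0.0.
    """
    intersection = np.logical_and(expert_mask, trainee_mask).sum()
    union = np.logical_or(expert_mask, trainee_mask).sum()
    return float(intersection / union) if union else 1.0

def score_pair(pair):
    return jaccard_index(*pair)

if __name__ == "__main__":
    # Hypothetical data: 500 pairs of 256x256 binary masks.
    rng = np.random.default_rng(1)
    pairs = [(rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5)
             for _ in range(500)]
    # Score all pairs in parallel rather than one at a time.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_pair, pairs))
    print(f"mean Jaccard: {sum(scores) / len(scores):.3f}")
```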
36

Defect Localization using Dynamic Call Tree Mining and Matching and Request Replication: An Alternative to QoS-aware Service Selection

Yousefi, Anis 04 1900 (has links)
This thesis is concerned with two separate subjects: (i) defect localization using tree mining and tree matching, and (ii) quality-of-service-aware service selection; it is divided into two parts accordingly.

In the first part of this thesis we present a novel technique for defect localization that can localize call-graph-affecting defects using tree mining and tree matching techniques. In this approach, given a set of successful executions and a failing execution, a series of analyses generates an extended report of suspicious method calls. The proposed defect localization technique is implemented as a prototype and evaluated using four subject programs of various sizes, developed in Java or C. Our experiments show results comparable to similar defect localization tools, but unlike most of its counterparts, our technique does not require multiple failing executions to localize a defect. We believe this is a major advantage, since it is often the case that only a single failing execution is available. Potential risks of the proposed technique are also investigated.

In the second part of this thesis we present an alternative strategy for service selection in service-oriented architecture, which provides better quality services for less cost. The proposed Request Replication technique replicates a client's request over a number of cheap, low-quality services to attain the required quality of service. Following this approach, we also present a number of recommendations about how service providers should advertise the non-functional properties of their services. / Doctor of Philosophy (PhD)
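A drastically simplified illustration of the first part's idea, assuming call trees are reduced to caller-callee edges (the thesis mines and matches whole subtrees, which this sketch does not attempt): edges that appear in the failing execution but rarely or never in passing ones are reported as suspicious. All program data below is invented.

```python
from collections import Counter

# Hypothetical dynamic call trees, encoded as caller -> callee edges.
# Real call trees come from instrumented executions; these are made up.
passing_runs = [
    {("main", "parse"), ("main", "render"), ("parse", "tokenize")},
    {("main", "parse"), ("main", "render"), ("parse", "tokenize")},
]
failing_run = {("main", "parse"), ("parse", "tokenize"),
               ("parse", "recover"), ("main", "abort")}

def suspicious_calls(passing_runs, failing_run):
    """Rank call edges by how specific they are to the failing execution.

    Edges seen in the failing run but in few (or no) passing runs are
    reported first -- a simplified stand-in for the thesis's tree mining
    and matching analyses.
    """
    support = Counter()
    for run in passing_runs:
        support.update(run)
    ranked = sorted(failing_run,
                    key=lambda edge: support[edge] / len(passing_runs))
    return [(caller, callee, support[(caller, callee)])
            for caller, callee in ranked]

for caller, callee, seen in suspicious_calls(passing_runs, failing_run):
    print(f"{caller} -> {callee}: seen in {seen} passing run(s)")
```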
37

ANALYZING THE GEO-DEPENDENCE OF HUMAN FACE APPEARANCE AND ITS APPLICATIONS

Islam, Mohammad T. 01 January 2016 (has links)
Human faces have been a subject of study in computer science for decades. The rich set of features from human faces has been used to solve various problems in computer vision, including person identification, facial expression analysis, and attribute classification. In this work, I explore the human facial features that depend on geo-location using a data-driven approach. I analyze millions of public domain images to extract geo-dependent human facial features and explore their applications. Using various machine learning and statistical techniques, I show that the geo-dependent features of human faces can be used to solve the image geo-localization task: given an image, predict where it was taken. Deep convolutional neural networks (CNNs) have recently been shown to excel at image classification; I have used CNNs to geo-localize images using the human face as a cue. I also show that the facial features used in image localization can be used to solve other problems, such as ethnicity, gender, and age estimation.
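A minimal sketch of geo-localization framed as classification over discrete geographic cells, written with PyTorch; the architecture, input size, and cell count are illustrative assumptions rather than the thesis's actual model.

```python
import torch
import torch.nn as nn

class FaceGeoNet(nn.Module):
    """Minimal CNN that classifies a face crop into one of N geographic
    cells (geo-localization as classification)."""
    def __init__(self, num_geo_cells: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 64x64 input pooled twice -> 16x16 spatial map with 64 channels.
        self.classifier = nn.Linear(64 * 16 * 16, num_geo_cells)

    def forward(self, x):  # x: (batch, 3, 64, 64) face crops
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

model = FaceGeoNet()
logits = model(torch.randn(4, 3, 64, 64))  # 4 hypothetical face crops
pred_cells = logits.argmax(dim=1)          # most likely geo-cell per face
```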
38

An Introduction to the Theory and Applications of Bayesian Networks

Jaitha, Anant 01 January 2017 (has links)
Bayesian networks are a means of studying data. A Bayesian network gives structure to a problem by modeling its variables as nodes in a directed graph and attaching to each variable a probability distribution conditioned on its parents in the graph. Statistical inference over these distributions then draws meaning from the data, making Bayesian networks an efficient way to explore large datasets. A number of real-world applications already exist and are being actively researched. This paper discusses the theory and applications of Bayesian networks.
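A small worked example of the factorization a Bayesian network encodes, using the textbook Sprinkler network (a standard illustration, not an example taken from the paper): the joint distribution factors along the graph, and inference by enumeration answers a query from the conditional tables.

```python
# Sprinkler network: Rain -> Sprinkler, and both -> GrassWet.
# The joint factorizes along the graph:
#   P(R, S, W) = P(R) * P(S | R) * P(W | R, S)
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # given rain
               False: {True: 0.4, False: 0.6}}    # given no rain
P_wet = {(True, True): 0.99, (True, False): 0.8,  # given (rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def joint(r: bool, s: bool, w: bool) -> float:
    p_w = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[r][s] * (p_w if w else 1 - p_w)

# Inference by enumeration: P(Rain | GrassWet) via Bayes' rule.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(rain | wet grass) = {num / den:.3f}")
```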
39

Android Memory Capture and Applications for Security and Privacy

Sylve, Joseph T 17 December 2011 (has links)
The Android operating system is quickly becoming the most popular platform for mobile devices. As Android's use increases, so does the need for both forensic and privacy tools designed for the platform. This thesis presents the first methodology and toolset for acquiring full physical memory images from Android devices, a proposed methodology for forensically securing both volatile and non-volatile storage, and details of a vulnerability discovered by the author that allows the bypass of the Android security model and enables applications to acquire arbitrary permissions.
40

Volatile Memory Message Carving: A "per process basis" Approach

Ali-Gombe, Aisha Ibrahim 01 December 2012 (has links)
The pace at which data and information transfer and storage has shifted from PCs to mobile devices is of great concern to the digital forensics community. Android is fast becoming the operating system of choice for these hand-held devices, so the need to develop better forensic techniques for data recovery cannot be over-emphasized. This thesis analyzes the volatile memory of Motorola Android devices, with a shift from traditional physical memory extraction to carving residues of data on a "per process basis". Each Android application runs in a separate process within its own Dalvik Virtual Machine (DVM) instance, hence the proposed "per process basis" approach. To extract messages, we first extract the runtime memory of the MotoBlur application, then carve and reconstruct both deleted and undeleted messages (emails and chat messages). An experimental study covering two Android phones is also presented.
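A much-simplified sketch of message carving, assuming plain-text residue in a per-process dump; real carving in the thesis targets structured message records rather than bare regular-expression matches, and the dump path shown is hypothetical.

```python
import re

# Hypothetical signatures for message residue in a process memory dump.
EMAIL = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")
SUBJECT = re.compile(rb"Subject: [^\r\n\x00]{1,200}")

def carve(dump_path: str):
    """Scan a per-process memory dump for residue of email messages,
    returning (offset, text) hits sorted by offset."""
    with open(dump_path, "rb") as f:
        data = f.read()
    hits = []
    for pattern in (EMAIL, SUBJECT):
        for match in pattern.finditer(data):
            hits.append((match.start(),
                         match.group().decode(errors="replace")))
    return sorted(hits)

# for offset, text in carve("motoblur_process.dump"):  # hypothetical file
#     print(f"{offset:#010x}  {text}")
```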
