61

Data Exploration Interface for Digital Forensics

Dontula, Varun 17 December 2011 (has links)
The fast capacity growth of cheap storage devices presents an ever-growing problem of scale for digital forensic investigations. One aspect of the scale problem in the forensic process is the need for new approaches to visually presenting and analyzing large amounts of data. The current generation of tools universally employs three basic GUI components (trees, tables, and viewers) to present all relevant information. This approach is not scalable, as increasing the size of the input data leads to a proportional increase in the amount of data presented to the analyst. We present an alternative approach that leverages data visualization techniques to provide a more intuitive interface for exploring the forensic target. We use tree visualization techniques to give the analyst both a high-level view of the file system and an efficient means to drill down into the details. Further, we provide the means to search for keywords and to filter the data by time period.
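As an illustration of the aggregation and drill-down idea described above (not the thesis's actual tool), the following minimal Python sketch summarizes a file-system subtree into the kind of node hierarchy a tree visualization can render, with optional keyword and modification-time filters. The function name, node layout, and example path are hypothetical.

```python
import os

def summarize(root, keyword=None, start=None, end=None):
    """Aggregate a directory into a node: total size, keyword hits, child nodes."""
    node = {"name": os.path.basename(root) or root, "size": 0, "hits": 0, "children": []}
    try:
        entries = list(os.scandir(root))
    except OSError:
        return node  # unreadable directory: keep an empty node so the tree stays intact
    for entry in entries:
        if entry.is_dir(follow_symlinks=False):
            child = summarize(entry.path, keyword, start, end)
            node["size"] += child["size"]
            node["hits"] += child["hits"]
            node["children"].append(child)
        elif entry.is_file(follow_symlinks=False):
            st = entry.stat()
            if (start is not None and st.st_mtime < start) or \
               (end is not None and st.st_mtime > end):
                continue  # outside the selected time period
            node["size"] += st.st_size
            if keyword and keyword.lower() in entry.name.lower():
                node["hits"] += 1
    return node

# Example: summarize /tmp, counting files whose names contain "log"
tree = summarize("/tmp", keyword="log")
print(tree["size"], tree["hits"], len(tree["children"]))
```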
62

Improving Table Scans for Trie Indexed Databases

Toney, Ethan 01 January 2018 (has links)
We consider a class of problems characterized by the need for a string-based identifier that reflects the ontology of the application domain. We present rules for string-based identifier schemas that facilitate fast filtering in databases used for this class of problems. We provide a runtime analysis of our schema and experimentally compare it with another solution. We also discuss the performance of our solution in a game engine. The string-based identifier schema can be used in additional scenarios such as cloud computing. An identifier schema adds metadata about an element, so the solution relies on additional memory; but as long as queries operate only on the included metadata, there is no need to load the element from disk, which leads to large performance gains.
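A minimal sketch of the general idea (the concrete schema and names here are hypothetical, not the thesis's rules): hierarchical, dot-separated string identifiers encode ontology metadata in the key itself, so a sorted or trie-indexed store can answer category queries with a prefix scan and never touch the element payload on disk.

```python
# Hypothetical identifiers of the form "domain.category.type.instance".
store = {
    "game.unit.infantry.rifleman.001": "<payload on disk>",
    "game.unit.infantry.rifleman.002": "<payload on disk>",
    "game.unit.vehicle.tank.001": "<payload on disk>",
    "game.building.barracks.001": "<payload on disk>",
}

def prefix_filter(keys, prefix):
    """Filter by ontology prefix using only the identifier metadata."""
    return [k for k in keys if k.startswith(prefix)]

infantry = prefix_filter(sorted(store), "game.unit.infantry.")
print(infantry)  # the matching payloads are never loaded from disk
```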
63

THE I: A CLIENT-BASED POINT-AND-CLICK PUZZLE GAME

Lewis, Aldo 01 June 2014 (has links)
Given mobile devices’ weak computational power, game programmers must learn to create games with simple graphics that are engaging and easy to play. Though seldom created for phones and tablets, puzzle games are a perfect fit. In recent years, the genre has gained a following and even won some acclaim. Games like Myst, The Seventh Guest, and Portal all engage gamers with challenging puzzles and then reward them with story components upon task fulfillment. Few such games have been created for mobile devices, in part due to the difficulty of developing for devices with different operating systems. Android, iOS, and Windows Phone all have different software development kits that produce a final product incompatible with operating systems other than the one it was developed for. One promising solution is to use browser technology to deliver games, since all devices are geared to interact with the Web through browsers such as Internet Explorer, Mozilla Firefox, and Google Chrome. The aim of this project was to build a puzzle game that can run on any digital device. The project can be accessed without any plug-ins and was created using web technologies such as jQuery, Touch Punch, local storage, and WebGL. jQuery provides drag-and-drop functionality, and Touch Punch allows jQuery functionality intended for a mouse to work on a touch interface. Local storage provides storage on a user’s device, as opposed to a server, and WebGL enables graphics processing on a user’s tablet or phone through web commands.
64

INVESTIGATING SMOKE EXPOSURE AND CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD) WITH A CALIBRATED AGENT BASED MODEL (ABM) OF IN VITRO FIBROBLAST WOUND HEALING.

Ratti, James A 01 January 2018 (has links)
COPD is characterized by tissue inflammation and impaired remodeling, which suggests that fibroblast maintenance of structural homeostasis is dysregulated. Thus, we performed in vitro wound healing experiments on normal and diseased human lung fibroblasts and developed an ABM of fibroblasts closing a scratched monolayer, using NetLogo, to evaluate differences due to COPD or cigarette smoke condensate exposure. The ABM consists of a rule-set governing the healing response, accounting for cell migration, proliferation, death, activation, and senescence rates, along with the effects of heterogeneous activation, phenotypic changes, serum deprivation, and exposure to cigarette smoke condensate or bFGF. Simulations were performed to calibrate parameter-sets for each cell type against in vitro data on scratch-induced migration, viability, senescence-associated beta-galactosidase, and alpha-smooth muscle actin expression. Parameter sensitivities around each calibrated parameter-set were analyzed. This model is the prototype of a tool designed to explore fibroblast functions in the pathogenesis of COPD and to evaluate potential therapies.
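The thesis's model is built in NetLogo and calibrated against experimental data; purely as an illustration of the rule-set style it describes, here is a minimal, uncalibrated Python sketch of a scratch-wound ABM in which edge cells may die, migrate into, or divide into empty neighboring sites. All grid dimensions and rates are made-up placeholders, not the calibrated parameter-sets.

```python
import random

WIDTH, HEIGHT, SCRATCH = 60, 40, range(25, 35)        # hypothetical grid and scratch band
P_MIGRATE, P_PROLIFERATE, P_DEATH = 0.5, 0.05, 0.01   # illustrative rates, not calibrated values

def make_monolayer():
    """Confluent monolayer with a vertical scratch (an empty band of columns)."""
    return {(x, y) for x in range(WIDTH) for y in range(HEIGHT) if x not in SCRATCH}

def neighbors(x, y):
    return [((x + dx) % WIDTH, (y + dy) % HEIGHT)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(cells):
    """One tick: each cell may die; edge cells may divide or migrate into empty sites."""
    new = set(cells)
    for cell in list(cells):
        if random.random() < P_DEATH:
            new.discard(cell)
            continue
        empty = [n for n in neighbors(*cell) if n not in new]
        if not empty:
            continue
        target = random.choice(empty)
        if random.random() < P_PROLIFERATE:
            new.add(target)              # division fills a neighboring empty site
        elif random.random() < P_MIGRATE:
            new.discard(cell)
            new.add(target)              # migration into the wound
    return new

cells = make_monolayer()
for _ in range(100):
    cells = step(cells)
print(f"occupancy after 100 steps: {len(cells) / (WIDTH * HEIGHT):.2%}")
```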
65

A Machine Learning Approach to Predicting Alcohol Consumption in Adolescents From Historical Text Messaging Data

Bergh, Adrienne 28 May 2019 (has links)
Techniques based on artificial neural networks represent the current state of the art in machine learning, owing to the availability of improved hardware and large data sets. Here we employ doc2vec, an unsupervised neural network, to capture the semantic content of text messages sent by adolescents during high school and to encode this semantic content as numeric vectors. These vectors effectively condense the text message data into highly leverageable inputs to a logistic regression classifier in a matter of hours, as compared to the tedious and often quite lengthy task of manually coding the data. Using our machine learning approach, we are able to train a logistic regression model to predict adolescents' engagement in substance abuse during distinct life phases with accuracy ranging from 76.5% to 88.1%. We show the effects of grade level and text message aggregation strategy on the efficacy of document embedding generation with doc2vec. Additional examination of the vectorizations of specific terms extracted from the text message data adds quantitative depth to this analysis. We demonstrate the ability of the method used herein to overcome traditional natural language processing concerns related to unconventional orthography. These results suggest that the approach described in this thesis is a competitive and efficient alternative to existing methodologies for predicting substance abuse behaviors. This work reveals the potential for applying machine learning-based manipulation of text messaging data to the development of automatic intervention strategies against substance abuse and other adolescent challenges.
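A minimal sketch of the embedding-plus-classifier pipeline described above, using gensim's Doc2Vec and scikit-learn's LogisticRegression; the toy messages and labels below are invented placeholders, not data from the study, and the hyperparameters are illustrative only.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# Hypothetical data: (tokenized messages for one adolescent, substance-use label)
corpus = [(["lol", "see", "u", "after", "practice"], 0),
          (["party", "tonight", "bring", "drinks"], 1)]

# Train doc2vec on the message documents, one tag per document.
docs = [TaggedDocument(words=tokens, tags=[i]) for i, (tokens, _) in enumerate(corpus)]
model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

# Encode each document as a numeric vector and fit a logistic regression classifier.
X = [model.infer_vector(tokens) for tokens, _ in corpus]
y = [label for _, label in corpus]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict([model.infer_vector(["studying", "for", "finals", "tonight"])]))
```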
66

Nuclei/Cell Detection in Microscopic Skeletal Muscle Fiber Images and Histopathological Brain Tumor Images Using Sparse Optimizations

Su, Hai 01 January 2014 (has links)
Nuclei/cell detection is usually a prerequisite procedure in many computer-aided biomedical image analysis tasks. In this thesis we propose two automatic nuclei/cell detection frameworks: one for nuclei detection in skeletal muscle fiber images and the other for brain tumor histopathological images. For skeletal muscle fiber images, the major challenges include: i) shape and size variations of the nuclei, ii) overlapping nuclear clumps, and iii) a series of z-stack images with out-of-focus regions. We propose a novel automatic detection algorithm consisting of the following components: 1) the original z-stack images are first converted into one all-in-focus image; 2) a sufficient number of hypothetical ellipses are then generated for each nucleus contour; 3) next, a set of representative training samples and discriminative features are selected by a two-stage sparse model; 4) a classifier is trained using the refined training data; 5) final nuclei detection is obtained by mean-shift clustering based on inner distance. The proposed method was tested on a set of images containing over 1500 nuclei, and the results outperform current state-of-the-art approaches. For brain tumor histopathological images, the major challenges are handling significant variations in cell appearance and splitting touching cells. The proposed automatic cell detection consists of: 1) sparse reconstruction for splitting touching cells, and 2) adaptive dictionary learning for handling cell appearance variations. The method was extensively tested on a data set with over 2000 cells, and the results outperform other state-of-the-art algorithms with an F1 score of 0.96.
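The final clustering step can be illustrated with standard Euclidean mean-shift from scikit-learn (the thesis uses an inner-distance-based variant, which is not shown here): each accepted ellipse hypothesis contributes one candidate center, and clustering merges duplicate hypotheses into one detection per nucleus. The candidate coordinates and bandwidth below are invented.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical candidate centers from accepted ellipse hypotheses (x, y in pixels).
candidates = np.array([[10.2, 14.9], [11.0, 15.3],
                       [40.5, 22.1], [41.1, 21.8], [41.0, 22.5]])

ms = MeanShift(bandwidth=5.0)       # bandwidth plays the role of a typical nucleus radius
labels = ms.fit_predict(candidates)
centers = ms.cluster_centers_       # one detected nucleus per cluster
print(labels, centers, sep="\n")
```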
67

Towards the Prediction of Mutations in Genomic Sequences

Martinez, Juan Carlos 15 November 2013 (has links)
Bio-systems are inherently complex information processing systems. Furthermore, the physiological complexities of biological systems limit both the formation of hypotheses about behavior and the ability to test them. More importantly, the identification and classification of mutations in patients are central topics in today’s cancer research. Next-generation sequencing (NGS) technologies can provide genome-wide coverage at single-nucleotide resolution and at reasonable speed and cost. The unprecedented molecular characterization provided by NGS offers the potential for an individualized approach to treatment. These advances in cancer genomics have enabled scientists to interrogate cancer-specific genomic variants and compare them with the normal variants in the same patient. Analysis of these data provides a catalog of somatic variants present in the tumor genome but not in the normal tissue DNA. In this dissertation, we present a new computational framework for the problem of predicting the number of mutations on a chromosome for a given patient, which is a fundamental problem in clinical and research settings. We begin this dissertation with the development of a framework capable of utilizing published data from a longitudinal study of patients with acute myeloid leukemia (AML), whose DNA from both normal and malignant tissues was subjected to NGS analysis at various points in time. By processing the sequencing data available at the time of cancer diagnosis using the components of our framework, we tested it by predicting the genomic regions to be mutated at the time of relapse and, later, by comparing our results with the actual regions that showed mutations (discovered at relapse time). We demonstrate that this coupling of the algorithm pipeline can drastically improve the predictive ability of the search for a reliable molecular signature. Arguably, the most important result of our research is its superior performance compared to other methods such as Radial Basis Function Networks, Sequential Minimal Optimization, and Gaussian Processes. In the final part of this dissertation, we present a detailed significance, stability, and statistical analysis of our model, along with a performance comparison of the results. This work lays a solid foundation for future research on other types of cancer.
68

The Effects of Latency on 3D Interactive Data Visualizations

Korenevsky, Allen 01 June 2016 (has links)
Interactive data visualizations must respond fluidly to user input to be effective, or so we assume. In fact, it is unknown exactly how fast a visualization must run to present every facet within a dataset. An engineering team with limited resources is left with intuition and estimates to determine whether their application performs sufficiently well. This thesis studies how latency affects users' comprehension of data visualizations, specifically 3D geospatial visualizations with large data sets. Subjects used a climate visualization showing temperatures spanning from the 19th to the 21st century to answer multiple-choice questions. Metrics such as their eye movements, time per question, and test score were recorded. Unbeknownst to the participants, the latency was toggled between questions, constraining frame rendering times to intervals between 33 1/3 and 200 milliseconds. Analysis of eye movements and of question completion time and accuracy fails to show that latency has an impact on how users explore the visualization or comprehend the data presented. User fixation times on overlaid 2D visualization tools, however, are affected by latency, although fixation times over 3D elements do not differ significantly. The finding speaks to how resilient users are in navigating and understanding virtual 3D environments, a conclusion supported by previous studies about video game latency.
69

Laff-O-Tron: Laugh Prediction in TED Talks

Acosta, Andrew D 01 October 2016 (has links)
Did you hear where the thesis found its ancestors? They were in the "parent-thesis"! This joke, whether you laughed at it or not, contains a fascinating and mysterious quality: humor. Humor is something so incredibly human that if you squint, the two words can even look the same. As such, humor is not often considered something that computers can understand. But that doesn't mean we won't try to teach it to them. In this thesis, we propose the system Laff-O-Tron, which attempts to predict when the audience of a public speech will laugh by looking only at the text of the speech. To do this, we create a corpus of over 1700 TED Talks retrieved from the TED website. We then adapt various techniques used by researchers to identify humor in text, and we also investigate features specific to our public speaking environment. Using supervised learning, we try to classify whether a chunk of text would cause the audience to laugh based on these features. We examine the effects of each feature, the classifier, and the size of the text chunk provided. On a balanced data set, we are able to predict laughter with up to 75% accuracy under our best conditions; medium-level conditions yield around 70% accuracy, while our worst conditions result in 66% accuracy. Computers with humor recognition capabilities would be useful in the fields of human-computer interaction and communications. Humor can make a computer easier to interact with and can function as a tool to check whether humor was properly used in an advertisement or speech.
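A minimal sketch of the chunk-classification setup, using a TF-IDF bag-of-n-grams with logistic regression in scikit-learn rather than the specific humor features engineered in the thesis; the example chunks and laugh labels below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical transcript chunks, labeled 1 if the audience laughed after them.
chunks = ["so I asked my cat to review my slides",
          "the data were collected over a ten year period",
          "and that is when the robot started doing stand-up",
          "this figure shows the mean error per trial"]
laughs = [1, 0, 1, 0]

# TF-IDF over unigrams and bigrams feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(chunks, laughs)

print(clf.predict(["then my toaster filed a patent"]))
```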
70

Complementary Companion Behavior in Video Games

Scott, Gavin 01 June 2017 (has links)
Companion characters are present in many video games across genres, serving the role of the player's partner. Their goal is to support the player's strategy and to immerse the player by providing a believable companion. These companions often perform only rigidly scripted actions and fail to adapt to an individual player's play-style, detracting from their usefulness. Such behavior can also become frustrating if the companions become more of a hindrance than a benefit. Other work, including this project's precursor, focused on building companions that mimic the player. These strategies customize the companion's actions to each player, but they are limited: an ideal companion would further the player's strategy by finding complementary actions rather than blindly emulating the player. We propose a game-development framework that adds complementary (rather than mimicking) companions to a video game. For the purposes of this framework, a "complementary" action is defined as any action that furthers the player's strategy both in the immediate future and in the long term. This is determined through a combination of player-action and game-state prediction processes, while allowing the companion to experiment with actions the player hasn't tried. We use a new method to determine the location of companion actions based on a dynamic set of regions customized to the individual player. A user study of game-development students showed promising results, with seventeen out of twenty-five participants reacting positively to the companion behavior and nineteen saying that they would consider using the framework in future games.
