About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

HIGH-ORDER INTEGRAL EQUATION METHODS FOR QUASI-MAGNETOSTATIC AND CORROSION-RELATED FIELD ANALYSIS WITH MARITIME APPLICATIONS

Pfeiffer, Robert 01 January 2018 (has links)
This dissertation presents techniques for high-order simulation of electromagnetic fields, particularly for problems involving ships with ferromagnetic hulls and active corrosion-protection systems. A set of numerically constrained hexahedral basis functions for volume integral equation discretization is presented in a method-of-moments context. Test simulations demonstrate the accuracy achievable with these functions as well as the improvement brought about in system conditioning when compared to other basis sets. A general method for converting between a locally-corrected Nyström discretization of an integral equation and a method-of-moments discretization is presented next. Several problems involving conducting and magnetic-conducting materials are solved to verify the accuracy of the method and to illustrate both the reduction in number of unknowns and the effect of the numerically constrained bases on the conditioning of the converted matrix. Finally, a surface integral equation derived from Laplace’s equation is discretized using the locally-corrected Nyström method in order to calculate the electric fields created by impressed-current corrosion protection systems. An iterative technique is presented for handling nonlinear boundary conditions. In addition, we examine different approaches for calculating the magnetic field radiated by the corrosion protection system. Numerical tests show the accuracy achievable by higher-order discretizations and validate the iterative technique presented. Various methods for magnetic field calculation are also applied to basic test cases.
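The dissertation's high-order hexahedral and Nyström machinery is not reproduced here, but the method-of-moments idea it builds on fits in a few lines. Below is a minimal low-order sketch, assuming the classic textbook problem of a thin straight wire held at 1 V: pulse basis functions, point matching, and an analytic self-term. All parameters are illustrative, not from the dissertation.

```python
import numpy as np

# Method-of-moments (MoM) toy problem: line charge density on a thin
# straight wire at 1 V, pulse basis + point matching.
eps0 = 8.854e-12
L, a, N = 1.0, 1e-3, 101          # wire length (m), radius (m), segments
dx = L / N
x = (np.arange(N) + 0.5) * dx     # segment midpoints (match points)

# Impedance matrix: potential at x[m] due to unit charge density on
# segment n, with an analytic self-term to handle the kernel singularity.
X = x[:, None] - x[None, :]
Z = dx / np.sqrt(X**2 + a**2)
np.fill_diagonal(Z, 2.0 * np.arcsinh(dx / (2.0 * a)))
Z /= 4.0 * np.pi * eps0

v = np.ones(N)                    # 1 V boundary condition at each match point
sigma = np.linalg.solve(Z, v)     # charge density (C/m) per segment

print("capacitance (pF):", sigma.sum() * dx * 1e12)
```

The same fill-and-solve pattern underlies the volume and surface formulations described in the abstract; higher-order bases simply replace the pulse functions and midpoint quadrature with richer expansions and local corrections.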
122

Learning to Map the Visual and Auditory World

Salem, Tawfiq 01 January 2019 (has links)
The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Billions of images that capture this complex relationship are uploaded to social-media websites every day, often with precise time and location metadata. This rich source of data can be used to improve our understanding of the globe. In this work, we propose a general framework that uses these publicly available images to construct dense maps of different ground-level attributes from overhead imagery. In particular, we use well-defined probabilistic models and a weakly-supervised, multi-task training strategy to estimate the expected visual and auditory ground-level attributes: the types of scenes, objects, and sounds a person can experience at a location. Through a large-scale evaluation on real data, we show that our learned models can be used for applications including mapping, image localization, image retrieval, and metadata verification.
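As a rough, hypothetical illustration of the weakly-supervised multi-task idea (not the dissertation's actual architecture), the sketch below pairs a shared image encoder with separate scene and sound heads and trains against soft label distributions. Layer sizes, label counts, and the KL-divergence loss are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverheadMapper(nn.Module):
    """Shared encoder for an overhead patch; two task-specific heads."""
    def __init__(self, n_scenes=16, n_sounds=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.scene_head = nn.Linear(64, n_scenes)
        self.sound_head = nn.Linear(64, n_sounds)

    def forward(self, x):
        z = self.encoder(x)
        return self.scene_head(z), self.sound_head(z)

model = OverheadMapper()
patch = torch.randn(4, 3, 64, 64)            # batch of overhead patches
scene_logits, sound_logits = model(patch)

# Weak supervision: targets are soft distributions derived from geotagged
# ground-level media, so train both heads against them jointly.
scene_target = F.softmax(torch.randn(4, 16), dim=1)   # stand-in targets
sound_target = F.softmax(torch.randn(4, 8), dim=1)
loss = (F.kl_div(F.log_softmax(scene_logits, 1), scene_target, reduction="batchmean")
        + F.kl_div(F.log_softmax(sound_logits, 1), sound_target, reduction="batchmean"))
loss.backward()
```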
123

ESTIMATING THE RESPIRATORY LUNG MOTION MODEL USING TENSOR DECOMPOSITION ON DISPLACEMENT VECTOR FIELD

Kang, Kingston 01 January 2018 (has links)
Modern big data often emerge as tensors. Standard statistical methods are inadequate for datasets of large volume, high dimensionality, and complex structure, so it is important to develop algorithms such as low-rank tensor decomposition for data compression, dimensionality reduction, and approximation. With the advancement of technology, high-dimensional images are becoming ubiquitous in the medical field. In lung radiation therapy, the respiratory motion of the lung introduces variability during treatment as the tumor inside the lung moves, which makes the precise delivery of radiation to the tumor challenging. Several approaches to quantifying this uncertainty propose a model that formulates the motion as a mathematical function over time. [Li et al., 2011] uses principal component analysis (PCA) to build one such model, treating each image as a long vector. However, the images come as multidimensional arrays, and vectorization breaks their spatial structure. Driven by the need for low-rank tensor decompositions, and given 4DCT and Displacement Vector Field (DVF) data, we introduce two tensor decompositions, Population Value Decomposition (PVD) and Population Tucker Decomposition (PTD), to estimate the respiratory lung motion with high accuracy and high data compression. The first algorithm generalizes PVD [Crainiceanu et al., 2011] to higher-order tensors. The second generalizes the concept of PVD using the Tucker decomposition. Both algorithms are tested on clinical and phantom DVFs. New metrics for measuring model performance are developed in our research, and the results of the two new algorithms are compared to those of the PCA algorithm.
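PVD and PTD themselves are not reproduced here, but a plain HOSVD-style Tucker decomposition in numpy illustrates the kind of low-rank compression involved. The tensor below is synthetic (built to have exact Tucker rank) and the ranks are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from each unfolding, then a core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = T
    for m, Um in enumerate(U):   # project T onto each factor subspace
        G = np.moveaxis(np.tensordot(Um.T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, U

def reconstruct(G, U):
    T = G
    for m, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T

rng = np.random.default_rng(0)
core = rng.standard_normal((10, 10, 5))
A, B, C = (rng.standard_normal((30, 10)), rng.standard_normal((30, 10)),
           rng.standard_normal((10, 5)))
T = np.einsum('ijk,ai,bj,ck->abc', core, A, B, C)   # stand-in for a DVF volume

G, U = hosvd(T, (10, 10, 5))
err = np.linalg.norm(T - reconstruct(G, U)) / np.linalg.norm(T)
ratio = T.size / (G.size + sum(u.size for u in U))
print(f"relative error {err:.2e}, compression ratio {ratio:.1f}x")
```

Storing the small core plus three thin factor matrices instead of the full array is what buys the compression; the population-level methods in the abstract share factors across subjects to compress a whole collection at once.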
124

Big Networks: Analysis and Optimal Control

Nguyen, Hung The 01 January 2018 (has links)
The study of networks has seen tremendous growth in research due to the wide spectrum of practical problems that involve networks, ranging from detecting functionally correlated proteins in biology to finding people to give discounts so as to maximize the popularity of a product in economics. Understanding, and ultimately being able to manipulate or control, the development and evolution of networks has therefore become a critical task for network scientists. Despite the vast research effort put toward these studies, the present state of the art largely either lacks high-quality solutions or requires excessive amounts of time under real-world `Big Data' requirements. This research aims to boost modern algorithmic efficiency to meet practical requirements: to develop a class of algorithms that simultaneously provide provably good solution quality and low time and space complexity. Specifically, I target important yet challenging problems in three main areas:

Information Diffusion: analyzing and maximizing influence in networks and extending the results to different variations of the problem (see the sketch below).

Community Detection: finding communities from multiple sources of information.

Security and Privacy: assessing organizational vulnerability to targeted cyber attacks via social networks.
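As a point of reference for the Information Diffusion thread, here is the classic greedy baseline for influence maximization under the independent cascade model. The dissertation's contribution is precisely to scale far beyond this Monte Carlo greedy loop; the toy graph and parameters are illustrative.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run; returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(graph, k, runs=200):
    """Pick k seeds, each time adding the node with best marginal gain."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(len(simulate_ic(graph, seeds | {v}))
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

toy = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [0]}
print(greedy_im(toy, k=2))
```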
125

User-Centric Privacy Preservation in Mobile and Location-Aware Applications

Guo, Mingming 10 April 2018 (has links)
The mobile and wireless community has brought significant growth of location-aware devices, including smartphones, connected vehicles, and IoT devices. The combination of location-aware sensing, data processing, and wireless communication in these devices has led to the rapid development of mobile and location-aware applications, and with it user privacy has become an indispensable concern. These applications, which collect data from mobile sensors carried by users or vehicles, return valuable data-collection services (e.g., health condition monitoring, traffic monitoring, and natural disaster forecasting) in real time. The sequential spatial-temporal queries users send reveal their location trajectories. Location trajectory information not only contains users’ movement patterns, but also reveals sensitive attributes such as personal habits, preferences, and home and work addresses. By exploiting this information, attackers can extract and sell user profile data, degrade subscribed data services, and even jeopardize personal safety. This research stems from the realization that user privacy is being lost with the popular usage of emerging location-aware applications, and its outcome seeks to relieve these location and trajectory privacy problems. First, we develop a pseudonym-based anonymity zone generation scheme against a strong adversary model in continuous location-based services. Based on a geometric transformation algorithm, this scheme generates distributed anonymity zones with personalized privacy parameters to conceal users’ real location trajectories. Second, based on historical query data analysis, we introduce a query-feature-based probabilistic inference attack and propose query-aware randomized algorithms that preserve user privacy by distorting the probabilistic inference conducted by attackers. Finally, we develop a privacy-aware mobile sensing mechanism that helps vehicular users reduce the number of queries sent to adversarial servers; in this mechanism, mobile vehicular users can selectively query nearby nodes in a peer-to-peer way for privacy protection in vehicular networks.
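A hypothetical sketch of the cloaking idea behind the first contribution: report a fresh pseudonym and an anonymity-zone center offset from the true position, with a personalized radius, so consecutive queries do not trace the real trajectory. The uniform-in-disc transform and all names below are assumptions, not the dissertation's algorithm.

```python
import math
import random

def anonymity_zone(lat, lon, radius_m, rng=random):
    """Return a zone center offset uniformly within radius_m meters."""
    r = radius_m * math.sqrt(rng.random())    # sqrt: uniform over the disc
    theta = rng.uniform(0.0, 2.0 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320.0  # meters to degrees latitude
    dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Each query epoch gets a fresh pseudonym and a fresh zone center, with a
# per-user (personalized) privacy radius.
track = [(38.03, -84.50), (38.04, -84.49), (38.05, -84.48)]
for i, (lat, lon) in enumerate(track):
    zlat, zlon = anonymity_zone(lat, lon, radius_m=500)
    print(f"pseudonym u{random.getrandbits(16):04x} zone ({zlat:.5f}, {zlon:.5f})")
```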
126

A Geospatial Based Decision Framework for Extending MARSSIM Regulatory Principles into the Subsurface

Stewart, Robert Nathan 01 August 2011 (has links)
The Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) is a regulatory guidance document regarding compliance evaluation of radiologically contaminated soils and buildings (USNRC, 2000). Compliance is determined by comparing radiological measurements to established limits using a combination of hypothesis testing and scanning measurements. Scanning allows investigators to identify localized pockets of contamination missed during sampling and to assess radiological exposure at different spatial scales. Scale is important in radiological dose assessment, as regulatory limits can vary with the size of the contaminated area and sites are often evaluated at more than one scale (USNRC, 2000). Unfortunately, scanning is not possible in the subsurface, and direct application of MARSSIM breaks down. This dissertation develops a subsurface decision framework called the Geospatial Extension to MARSSIM (GEM) to provide multi-scale subsurface decision support in the absence of scanning technologies. Based on geostatistical simulations of radiological activity, the GEM recasts the decision rule as a multi-scale, geospatial decision rule called the regulatory limit rule (RLR). The RLR requires simultaneous compliance with all scales and depths of interest at every location throughout the site. The RLR is accompanied by a compliance test called the stochastic conceptual site model (SCSM). For those sites that fail compliance, a remedial design strategy called the Multi-scale Remedial Design Model (MrDM) spatially indicates volumes requiring remedial action, and an accompanying sample design strategy, the Multi-scale Remedial Sample Design Model (MrsDM), refines this remedial action volume through careful placement of new sample locations. Finally, a new sample design called “check and cover” is presented that can support early sampling efforts by directly using prior knowledge about where contamination may exist. This dissertation demonstrates how these tools are used within an environmental investigation and situates the GEM within existing regulatory methods, with an emphasis on the Environmental Protection Agency’s Triad method, which recognizes and encourages the use of advanced decision methods. The GEM is implemented within the Spatial Analysis and Decision Assistance (SADA) software and applied to a hypothetical radiologically contaminated site.
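A toy version of the multi-scale, probabilistic flavor of the RLR/SCSM can be sketched as follows: given Monte Carlo realizations of activity on a grid, a cell fails if, at any scale, its probability of exceeding that scale's limit passes a threshold. Grid size, the per-scale limits, and the threshold alpha are invented for illustration.

```python
import numpy as np

def block_means(field, w):
    """Mean activity over w-by-w moving windows centered on each cell."""
    pad = w // 2
    padded = np.pad(field, pad, mode="edge")
    out = np.zeros_like(field, dtype=float)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = padded[i:i + w, j:j + w].mean()
    return out

rng = np.random.default_rng(1)
n_real, n = 100, 32
# Stand-in for geostatistical simulations of radiological activity.
realizations = rng.lognormal(mean=0.0, sigma=0.5, size=(n_real, n, n))

limits = {1: 3.0, 5: 2.0}   # regulatory limit per window size (scale)
alpha = 0.05
fail = np.zeros((n, n), dtype=bool)
for w, limit in limits.items():
    exceed = np.stack([block_means(r, w) > limit for r in realizations])
    fail |= exceed.mean(axis=0) > alpha   # exceedance probability per cell

print(f"{fail.sum()} of {n * n} cells require further action")
```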
127

Using Statistical Methods to Determine Geolocation Via Twitter

Wright, Christopher M. 01 May 2014 (has links)
With the ever-expanding usage of social media websites such as Twitter, it is possible to use statistical inquiry to estimate a person's geographic location solely from the content of their tweets. In a study done in 2010, Zhiyuan Cheng was able to locate a Twitter user to within 100 miles of their actual location 51% of the time. While this may seem like an already significant finding, the study was done while Twitter was still finding its footing: in 2010, Twitter had 75 million registered users, whereas as of March 2013 it has around 500 million. In this thesis, I collected my own dataset and, using Excel macros, compared my results to Cheng’s to see whether the results have changed in the three years since his study. If Cheng’s 51% can be achieved more efficiently using a simpler methodology, this could have a significant impact on homeland security and cybersecurity measures.
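Cheng's actual estimator is not reproduced here, but a toy version of content-based geolocation conveys the statistical idea: score candidate cities by how strongly a tweet's words associate with each city, then take the argmax. The tiny word lists below are invented.

```python
from collections import Counter
from math import log

# Invented per-city training text; a real system would learn these
# word-location distributions from millions of geotagged tweets.
corpus = {
    "Boston":  "wicked pahk the cah sox fenway chowder snow".split(),
    "Houston": "rodeo astros humidity tacos nasa bayou refinery".split(),
    "Seattle": "rain coffee ferry rainier seahawks drizzle kayak".split(),
}
vocab = {w for words in corpus.values() for w in words}
counts = {city: Counter(words) for city, words in corpus.items()}

def guess_city(tweet, smoothing=1.0):
    """Naive Bayes style scoring with add-one smoothed word probabilities."""
    scores = {}
    for city, c in counts.items():
        total = sum(c.values()) + smoothing * len(vocab)
        scores[city] = sum(log((c[w] + smoothing) / total)
                           for w in tweet.lower().split() if w in vocab)
    return max(scores, key=scores.get)

print(guess_city("so much rain but the coffee is great"))   # -> Seattle
```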
128

Colormoo: An Algorithmic Approach to Generating Color Palettes

Rael, Joshua 01 January 2014 (has links)
Selecting one color can be done with relative ease, but the task becomes more difficult with each subsequent color. Colormoo is an online tool aimed at solving this problem. We implement three algorithms for generating color palettes based on a starting color, and we collect data for each palette that is generated. Our analysis reveals that two of the algorithms are preferred, but under different circumstances. Furthermore, we find that users prefer palettes containing colors that are compatible but not too similar. With refined heuristics, we believe these techniques can be extended and applied beyond the field of graphic design alone.
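Colormoo's three algorithms are not specified in the abstract; as a generic illustration of palette generation from a starting color, the sketch below rotates the start color's hue by fixed offsets (analogous, complementary, triadic), a common heuristic that is only an assumption here.

```python
import colorsys

def palette(start_hex, scheme="triadic"):
    """Generate a small palette by rotating the start color's hue."""
    offsets = {"analogous": (0, 30, -30), "complementary": (0, 180),
               "triadic": (0, 120, 240)}[scheme]
    r, g, b = (int(start_hex[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    out = []
    for off in offsets:
        ri, gi, bi = colorsys.hsv_to_rgb((h + off / 360.0) % 1.0, s, v)
        out.append(f"#{round(ri*255):02x}{round(gi*255):02x}{round(bi*255):02x}")
    return out

print(palette("#3366cc", "triadic"))   # ['#3366cc', '#cc3366', '#66cc33']
```

Keeping saturation and value fixed while rotating hue is one way to get colors that are "compatible but not too similar": they share intensity but sit apart on the color wheel.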
129

Analysis of Eye-Tracking Data in Visualization and Data Space

Alam, Sayeed Safayet 12 May 2017 (has links)
Eye-tracking devices can tell us where on the screen a person is looking. Researchers frequently analyze eye-tracking data manually, examining every frame of a visual stimulus used in an eye-tracking experiment in order to match the 2D screen coordinates provided by the eye-tracker to the objects and content within the stimulus. Such a task requires significant manual effort and is not feasible when analyzing data collected from many users, long experimental sessions, or heavily interactive and dynamic visual stimuli. In this dissertation, we present a novel analysis method: we instrument visualizations that have open source code and leverage real-time information about the layout of the rendered visual content to automatically relate gaze samples to the visual objects drawn on the screen. Since the visual objects shown in a visualization stand for data, the method allows us to detect the data that users focus on, their Data of Interest (DOI). This dissertation makes two contributions. First, we demonstrate the feasibility of collecting DOI data for real-life visualizations in a reliable way, which is not self-evident. Second, we formalize the process of collecting and interpreting DOI data and test whether automated DOI detection can lead to research workflows and insights not possible with traditional, manual approaches.
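A minimal sketch of the automatic gaze-to-data mapping: an instrumented visualization exposes the bounding boxes of the marks it renders, so each gaze sample can be hit-tested against the data item drawn there. The data structures and tolerance below are invented for the sketch.

```python
from collections import Counter

# (data_id, x, y, width, height) as reported by an instrumented scatterplot
layout = [
    ("item_A", 100, 120, 12, 12),
    ("item_B", 300, 180, 12, 12),
    ("item_C", 150, 400, 12, 12),
]

def hit_test(gx, gy, tolerance=10):
    """Map one gaze sample to the data item(s) rendered under it, if any."""
    return [d for d, x, y, w, h in layout
            if x - tolerance <= gx <= x + w + tolerance
            and y - tolerance <= gy <= y + h + tolerance]

# Gaze samples in screen coordinates; the last one misses every mark.
gaze_samples = [(104, 125), (105, 127), (301, 185), (700, 20), (103, 124)]
doi = Counter(d for g in gaze_samples for d in hit_test(*g))
print(doi.most_common())   # e.g. [('item_A', 3), ('item_B', 1)]
```

Because the layout comes from the running visualization rather than from frame-by-frame video coding, the same loop works unchanged for interactive and dynamic stimuli: the layout list is simply refreshed whenever the view changes.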
130

Feature Selection and Classification Methods for Decision Making: A Comparative Analysis

Villacampa, Osiris 01 January 2015 (has links)
The use of data mining methods in corporate decision making has been increasing in the past decades. Their popularity can be attributed to improved data mining algorithms, increased computing performance, and results that can be measured and applied directly to decision making. The effective use of data mining methods to analyze various types of data has shown great advantages in many application domains. Some data sets need little preparation to be mined, whereas others, in particular high-dimensional data sets, must be preprocessed before mining because of the complexity and inefficiency of mining high-dimensional data directly. Feature selection, or attribute selection, is one of the techniques used for dimensionality reduction, and previous research has shown that data mining results can be improved in accuracy and efficacy by selecting the most significant attributes. This study analyzes vehicle service and sales data from multiple car dealerships. Its purpose is to find a model that better classifies existing customers as new-car buyers based on their vehicle service histories. Six different feature selection methods, including Information Gain, Correlation-Based Feature Selection, Relief-F, Wrapper, and Hybrid methods, were used to reduce the number of attributes in the data sets, and the resulting attribute sets were compared. The data sets with the selected attributes were run through three popular classification algorithms, Decision Trees, k-Nearest Neighbor, and Support Vector Machines, and the results were compared and analyzed. The study concludes with a comparative analysis of feature selection methods and their effects on different classification algorithms within the domain. As a baseline for comparison, the same procedures were run on a standard data set from the financial institution domain.
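The dealership data are proprietary, but the comparison pattern generalizes. The sketch below uses scikit-learn with synthetic data, approximating information gain with mutual information and scoring the three classifiers named above by cross-validation; the dataset, k, and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the dealership records: 40 attributes, 8 informative.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN":          KNeighborsClassifier(n_neighbors=5),
    "SVM":           SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    # Filter-style selection first, then scaling and the classifier.
    pipe = make_pipeline(
        SelectKBest(mutual_info_classif, k=10),   # keep the 10 best attributes
        StandardScaler(),
        clf,
    )
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:14s} mean CV accuracy = {acc:.3f}")
```

Putting the selector inside the pipeline matters: it is refit on each training fold, so the cross-validated accuracy is not inflated by selecting features on data the classifier is later tested on.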
