21

Communication complexity and information complexity

Pankratov, Denis 21 August 2015 (has links)
Information complexity enables the use of information-theoretic tools in communication complexity theory. Prior to the results presented in this thesis, information complexity was mainly used for proving lower bounds and direct-sum theorems in the setting of communication complexity. We present three results that demonstrate new connections between information complexity and communication complexity.

In the first contribution we thoroughly study the information complexity of the smallest nontrivial two-party function: the AND function. While computing the communication complexity of AND is trivial, computing its exact information complexity presents a major technical challenge. In overcoming this challenge, we reveal that information complexity gives rise to rich geometrical structures. Our analysis of information complexity relies on new analytic techniques and new characterizations of communication protocols. We also uncover a connection of information complexity to the theory of elliptic partial differential equations. Once we compute the exact information complexity of AND, we can compute the exact communication complexity of several related functions on n-bit inputs with some additional technical work. Previous combinatorial and algebraic techniques could only prove bounds of the form Θ(n), i.e., tight only up to constant factors. Interestingly, this level of precision is typical in the area of information theory, so our result demonstrates that this meta-property of precise bounds carries over to information complexity and, in certain cases, even to communication complexity. Our result not only strengthens the lower bound on the communication complexity of disjointness by making it exact, but also shows that information complexity provides the exact upper bound on communication complexity. In fact, this result is more general and applies to a whole class of communication problems.

In the second contribution, we use self-reduction methods to prove strong lower bounds on the information complexity of two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product mod 2 (IP). In our first result we affirm the conjecture that the information complexity of GHD is linear even under the uniform distribution. This strengthens the Ω(n) bound shown by Kerenidis et al. (2012) and answers an open problem posed by Chakrabarti et al. (2012). We also prove that the information complexity of IP is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound proved by Braverman and Weinstein (2011). More importantly, our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous results in the past used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.

In the third contribution we consider the roles that private and public randomness play in the definition of information complexity. In communication complexity, private randomness can be trivially simulated by public randomness. Moreover, the communication cost of simulating public randomness with private randomness is well understood due to Newman's theorem (1991). In information complexity, the roles of public and private randomness are reversed: public randomness can be trivially simulated by private randomness. However, the information cost of simulating private randomness with public randomness is not understood. We show that protocols that use only public randomness admit a rather strong compression. In particular, efficient simulation of private randomness by public randomness would imply a version of a direct sum theorem in the setting of communication complexity. This establishes yet another connection between the two areas. (Abstract shortened by UMI.)
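For context, a standard formulation of internal information complexity, which the abstract refers to, can be sketched as follows; the notation is the usual one from the literature, not taken from the thesis text itself:

    % Internal information cost of a protocol \pi on inputs (X, Y) ~ \mu,
    % where \Pi denotes the protocol transcript:
    \mathrm{IC}_{\mu}(\pi) = I(\Pi ; X \mid Y) + I(\Pi ; Y \mid X)
    % Information complexity of a function f at permitted error \epsilon:
    \mathrm{IC}_{\mu}(f, \epsilon) =
        \inf_{\pi \,:\, \pi \text{ computes } f \text{ with error} \le \epsilon} \mathrm{IC}_{\mu}(\pi)

The relevance to exact communication bounds comes from the known result that information complexity equals the amortized communication cost of solving many independent copies of a problem (Braverman and Rao, 2011); roughly, this is one way a precise value for AND can translate into precise bounds for n-bit functions such as disjointness.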
22

Retrieving quantifiable social media data from human sensor networks for disaster modeling and crisis mapping

Aulov, Oleg 29 October 2014 (has links)
This dissertation presents a novel approach that uses quantifiable social media data as a human-aware, near real-time observing system, coupled with geophysical predictive models, for improved response to disasters and extreme events. It shows that social media data have the potential to significantly improve disaster management beyond informing the public, and it emphasizes the importance of the different roles that social media can play in the management, monitoring, modeling, and mitigation of natural and human-caused extreme disasters.

In the proposed approach, social media users are viewed as "human sensors" that are "deployed" in the field, and their posts are treated as "sensor observations"; taken together, different social media outlets form a human sensor network. We used these "human sensor" observations as boundary-value forcings to show improved geophysical model forecasts of extreme disaster events when combined with other scientific data such as satellite observations and sensor measurements. Several recent extreme disasters are presented as use-case scenarios.

In the case of the Deepwater Horizon oil spill disaster of 2010, which devastated the Gulf of Mexico, the research demonstrates how social media data from Flickr can be used as a boundary forcing condition of the GNOME oil spill plume forecast model, yielding an order-of-magnitude forecast improvement. In the case of Hurricane Sandy's NY/NJ landfall in 2012, we demonstrate how model forecasts, when combined with social media data in a single framework, can be used for near real-time forecast validation, damage assessment, and disaster management. Owing to inherent uncertainties in the weather forecasts, the NOAA operational surge model only forecasts the worst-case scenario for flooding from any given hurricane. Geolocated and time-stamped Instagram photos and tweets allow near real-time assessment of the surge levels at different locations, which can validate model forecasts, give timely views of the actual surge levels, and provide an upper bound beyond which the surge did not spread.

Additionally, we developed AsonMaps, a crisis-mapping tool that combines dynamic model forecast outputs with social media observations and physical measurements to define the regions of event impacts.
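As a rough illustration of the surge-validation idea described above, the sketch below (hypothetical code, not from the dissertation; the Observation record, water-level fields, and the flat worst-case forecast are invented for the example) checks how often a worst-case surge forecast actually bounded what geotagged "human sensor" posts reported:

    # Hypothetical sketch: validating a worst-case surge forecast against
    # geotagged "human sensor" observations (all names and numbers invented).
    from dataclasses import dataclass

    @dataclass
    class Observation:
        lat: float
        lon: float
        est_level_m: float   # surge level estimated from a geotagged photo or tweet

    def validate_forecast(observations, forecast_level_m):
        """Return the fraction of observations bounded by the forecast, plus the misses."""
        misses = [o for o in observations
                  if o.est_level_m > forecast_level_m(o.lat, o.lon)]
        return 1.0 - len(misses) / len(observations), misses

    # Toy usage: a flat 2.5 m worst-case forecast everywhere.
    obs = [Observation(40.58, -73.96, 1.9), Observation(40.60, -74.00, 2.8)]
    coverage, misses = validate_forecast(obs, lambda lat, lon: 2.5)
    print(coverage, misses)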
23

Exploring a Threat-Focused Acquisition Methodology Using Multi-Criteria Decision Methods to Increase Delivery-Cycles for Information and Cyber-Related Capabilities

Wilson, Jeffery Dwane 22 March 2018 (has links)
The use of information has changed dramatically over the past decade. In addition to traditional cyber-attacks, there has been an increase in the use of social media to spread misinformation and opinion through the internet. The United States Department of Defense (DOD) is actively developing capabilities to defend against these cyberspace and information threats. Unfortunately, one of the principal dilemmas of this challenge is the speed at which these threats can be introduced. Adversaries are not constrained by U.S. regulations and can deploy new threats simply by modifying an available commercial product or application. The DOD needs similar agility to deliver counter-threat capabilities while remaining compliant with regulations. This research centers on increasing the government's delivery speed by adding a transformational threat-focused acquisition method to complement the traditional capability-based development process. The DOD's current resourcing, programming, and acquisition processes are studied to offer a pragmatic, synergistic, threat-focused approach to continuously resource, research, and provide the latest available technology to the information environment. The recommended approach offers an institutional method to acquire technology targeting delivery within months while maintaining the inherent ability to transition these products annually into the traditional process for sustainability. To enhance the speed of the research-and-refresh capability, the use of multi-criteria decision models (MCDM) is evaluated to accelerate the assessment of alternatives and engineering trade-offs. A mathematical model is provided to assess the selection of decision criteria and its impact on the final alternative selection. Sensitivity analysis techniques are then used to identify the points where outcome probabilities are subject to change given different criteria. Ultimately, a stable methodology to accelerate the DOD's acquisition of cyber-unique capabilities is provided.
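As a hedged illustration of the MCDM and sensitivity-analysis step, and not the dissertation's actual model, the following sketch scores three hypothetical acquisition alternatives with a weighted-sum model (criteria names, weights, and scores are invented) and checks whether the top-ranked alternative changes when each criterion weight is perturbed:

    # Hypothetical weighted-sum MCDM with a one-at-a-time weight sensitivity check.
    import numpy as np

    criteria = ["speed_to_field", "cost", "compliance_risk", "mission_impact"]
    weights  = np.array([0.40, 0.20, 0.15, 0.25])          # sums to 1
    scores   = np.array([[0.9, 0.4, 0.6, 0.8],             # alternative A
                         [0.5, 0.8, 0.9, 0.6],             # alternative B
                         [0.7, 0.6, 0.7, 0.9]])            # alternative C

    def rank(w):
        return np.argsort(scores @ w)[::-1]                # best alternative first

    baseline = rank(weights)
    # Sensitivity: perturb each weight by +/-10% (renormalized) and see if the winner changes.
    for i, name in enumerate(criteria):
        for delta in (-0.1, 0.1):
            w = weights.copy(); w[i] *= 1 + delta; w /= w.sum()
            if rank(w)[0] != baseline[0]:
                print(f"winner flips when {name} weight shifts by {delta:+.0%}")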
24

Emergency Responders as Inventors: An Action Research Examination of Public Information Work

St. Denis, Lise Ann 31 December 2015 (has links)
The development of information and communication technologies (ICTs) has expanded the ways that people communicate and share information with one another. In the context of disaster, this has disrupted and reshaped the nature of the communication of emergency information and of public participation in the emergency response process itself. Members of the public have been much quicker than emergency response organizations at adapting and improvising solutions in this new communication ecology. This difference in adoption reflects key differences between the formal constraints and responsibilities faced by emergency responders and the public sphere's ability to improvise and organize more fluidly. My research focuses on the design and ongoing development of sociotechnical solutions within a community of emergency responders interested in integrating social media into emergency response practices. I look at both the solutions emerging across this community and the sociotechnical arrangements that support ongoing communication and the evolution of new ideas in a continual process of invention. My research spans four years, starting with an initial case study and progressing over time into a collaborative role that leverages my skills and knowledge of crisis informatics in the joint exploration of data analysis and communication strategies.
25

Information Weighted Consensus for Distributed Estimation in Vision Networks

Kamal, Ahmed Tashrif 28 December 2013 (has links)
Due to their high fault tolerance, ease of installation, and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms work by exchanging information between sensors that are communication neighbors. Since most cameras are directional sensors, it is often the case that neighboring sensors are not observing the same target. Sensors that do not have information about a target are termed "naive" with respect to that target. State-of-the-art distributed state estimation algorithms in the sensor networks community, such as the Kalman Consensus Filter (KCF), are not directly applicable to tracking applications in camera networks mainly because of this naivety issue. In our work, we propose generalized distributed algorithms for state estimation in a sensor network that take the naivety issue into account.

For multi-target tracking, alongside the tracking framework, a data association step is necessary in which the measurements in each camera's view are associated with the appropriate targets' tracks. At first, under the assumption that the data association is given, we develop distributed state estimation algorithms addressing the naivety issue. We first propose the Generalized Kalman Consensus Filter (GKCF), where an information-weighting scheme is used to account for naivety. Next, we propose the Information-weighted Consensus Filter (ICF), which can achieve optimal centralized performance while also accounting for naivety; this is the core contribution of this thesis. We then introduce multi-target tracking, where a probabilistic data association scheme is incorporated into the distributed tracking scheme, resulting in the Multi-Target Information Consensus (MTIC) algorithm. The incorporation of the probabilistic data association mechanism makes the MTIC algorithm very robust to false measurements and clutter.

The aforementioned algorithms are derived under the assumption that the measurements are related to the state variables through a linear relationship. In general, however, this is not true for many sensors, including cameras. Thus, to account for the nonlinearity in the observation model, we propose nonlinear extensions of the previous algorithms, which we denote the Extended ICF (EICF) and the Extended MTIC (EMTIC) algorithms. In-depth theoretical and experimental analyses are provided to compare these algorithms with existing ones.
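The following is a simplified sketch of the information-weighted consensus idea, assuming a linear Gaussian observation model and toy numbers; it is not the thesis implementation, but it shows how nodes that keep their estimates in information form can average them with neighbors so that naive nodes, which carry little measurement information, do not drag the consensus toward uninformative priors:

    # Simplified information-form consensus sketch (toy example, not the thesis code).
    import numpy as np

    def icf_consensus_step(V_list, v_list, neighbors, rate=0.4):
        """One average-consensus iteration over information pairs (V, v).
        rate must be < 1 / (max node degree) for convergence."""
        V_new, v_new = [], []
        for i, (V, v) in enumerate(zip(V_list, v_list)):
            dV = sum(V_list[j] - V for j in neighbors[i])
            dv = sum(v_list[j] - v for j in neighbors[i])
            V_new.append(V + rate * dV)
            v_new.append(v + rate * dv)
        return V_new, v_new

    # Toy usage: 3 nodes on a line graph; node 2 is "naive" (no measurement).
    H, R = np.eye(2), 0.1 * np.eye(2)                      # linear observation model
    x_true = np.array([1.0, -2.0])
    V = [np.eye(2) * 1e-3 for _ in range(3)]               # weak priors (information matrices)
    v = [np.zeros(2) for _ in range(3)]                    # information vectors
    for i in (0, 1):                                       # only nodes 0 and 1 measure
        z = x_true + np.random.randn(2) * 0.1
        V[i] = V[i] + H.T @ np.linalg.inv(R) @ H
        v[i] = v[i] + H.T @ np.linalg.inv(R) @ z
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(30):
        V, v = icf_consensus_step(V, v, neighbors)
    print([np.linalg.solve(Vi, vi) for Vi, vi in zip(V, v)])   # all nodes approach x_true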
26

Information management and animal welfare in crisis: The role of collaborative technologies and cooperative work in emergency response

White, Joanne Isobel 11 June 2015 (has links)
When making decisions about what to do in a disaster, people consider the welfare of their animals. Most people consider their pets to be "part of the family." There are more than 144 million pet dogs and cats in homes around the US, and Colorado is home to a $3 billion livestock industry. In emergency response, supporting the human-animal bond is one important way to help people make good decisions about evacuation and to improve their ability to recover after the emergency period is over. There is an opportunity to leverage social computing tools to support the information needs of people concerned with animals in disasters. This research uses three major studies to examine the information management and cooperative work done around animals in this domain: first, an online study of the response of animal advocates in the 2012 Hurricane Sandy event; second, a study bridging the online and offline response of equine experts following the 2013 Colorado floods; and third, an extended 22-month ethnographic study of the work done at animal evacuation sites, beginning with on-the-ground participant observation at two fairground evacuation sites during the Black Forest Fire in Southern Colorado in 2013 and including the design of two information support tools. The research provides lessons about how information online, information offline, and the bridging of information across those arenas both support and limit the potential for innovation in addressing the unusual, emergent, ill-structured problems that are hallmarks of disaster response. Two contributions of this work are an account of the role of expertise as a vital resource in emergency response and recommendations for policy improvements that support the conscious inclusion of spontaneous volunteers.
27

An Electroencephalogram (EEG) Based Biometrics Investigation for Authentication: A Human-Computer Interaction (HCI) Approach

Rodriguez, Ricardo J. 09 October 2015 (has links)
Electroencephalogram (EEG) devices are one of the active research areas in human-computer interaction (HCI). They provide a unique brain-machine interface (BMI) for interacting with a growing number of applications. EEG devices interface with computational systems, including traditional desktop computers and, more recently, mobile devices. These computational systems can be targeted by malicious users. There is clearly an opportunity to leverage EEG capabilities to increase the effectiveness of access control mechanisms, which are the first line of defense in any computational system.

Access control mechanisms rely on a number of authenticators, including "what you know", "what you have", and "what you are". The "what you are" authenticator, formally known as a biometric authenticator, is increasingly gaining acceptance. It uses an individual's unique features, such as fingerprints and facial images, to properly authenticate users. An emerging approach in physiological biometrics is cognitive biometrics, which measures the brain's response to stimuli. These stimuli can be measured by a number of devices, including EEG systems.

This work shows an approach to authenticating users interacting with their computational devices through the use of EEG devices. The results demonstrate the feasibility of using a unique, hard-to-forge trait as an absolute biometric authenticator by exploiting the signals generated by different areas of the brain when exposed to visual stimuli. The outcome of this research highlights the importance of the prefrontal cortex and temporal lobes in capturing unique responses to images that trigger emotional responses.

Additionally, logarithmic band power processing combined with LDA as the machine learning algorithm provides higher accuracy than common spatial patterns or windowed-means processing combined with GMM and SVM machine learning algorithms. These results further validate the value of logarithmic band power processing and LDA when applied to oscillatory processes.
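A minimal sketch of the kind of pipeline the abstract names, logarithmic band-power features followed by LDA, is given below; the sampling rate, channel count, frequency bands, and the random stand-in data are all assumptions for illustration, not details taken from the dissertation:

    # Hypothetical sketch: log band-power features + LDA (parameters invented).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    FS = 256                                    # assumed sampling rate (Hz)
    BANDS = [(4, 8), (8, 13), (13, 30)]         # theta, alpha, beta

    def log_band_power(epoch):
        """epoch: (channels, samples) -> log mean power per channel per band."""
        freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / FS)
        psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
        feats = [np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1))
                 for lo, hi in BANDS]
        return np.concatenate(feats)

    # Toy data: epochs from the "enrolled user" (label 1) vs. impostors (label 0).
    rng = np.random.default_rng(0)
    X = np.array([log_band_power(rng.standard_normal((8, 2 * FS)) * (1.0 + 0.3 * lbl))
                  for lbl in (0, 1) for _ in range(40)])
    y = np.repeat([0, 1], 40)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", clf.score(X, y))   # authentication = predicting label 1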
28

Towards a New Model of Information Validation: Modeling the Information Validation Process of Police Investigators

Nizich, Michael P. 03 November 2015 (has links)
This study explores the information validation process of police investigators. The purpose of the research was to create a formal process model of the information validation process of a group of professional investigators. In this study I argue that the existence of such a model will help researchers in various disciplines by providing a baseline against which the validation processes of other groups of information seekers can be tested and compared.

The study subjects consisted of 45 police investigators, and data were collected using four distinct methods: semi-structured interviews, talk-aloud sessions, a controlled experiment, and a Joint Application Design (JAD) session. The research culminated in a new process model of the information validation process of police investigators. The study also provides a new research framework for the future study of the information validation processes of various groups of information seekers.

Several new discoveries emerged from the study. Among them is the finding that, when validating new information, police investigators consider disparities between the behavioral, physical, visual, evidentiary, and potentially audible forms of information surrounding the information source and the investigator's own personal knowledge base and experiential database. Another is that police investigators use their knowledge base and experiential database to create a virtual descriptive scenario, or predisposition, of what they expect to find before the validation process begins. They then use an abductive questioning and information-exchange process to test the details of their own scenario, moving toward the best possible explanation of their observations.

In summary, the study provides a new model of information validation illustrating the entities, processes, and decisions that comprise the process, as well as the relationships, interdependencies, and constraints that govern it. Using professional investigators as study subjects lends merit to the model as a baseline against which the information validation processes of other information seekers can now be studied and compared.
29

The association and probability of a predictive relationship between cyber security incidents and type of internet connectivity: A quantitative study

Lagrule, Carlos Manuel 07 May 2015 (has links)
Research has estimated the cost of information security (IS) breaches to organizations to be in the billions of dollars. Extant research has linked human error to about 65% of data breaches, which involve economic losses of more than $20 billion to US companies. Researchers concur, and further add, that end users' behaviors contribute to internal security breaches in organizations, and that these behaviors include employee negligence and non-compliance with policies. Research has also shown that individuals' self-efficacy in strengthening information security efforts starts at home; this behavior at home creates the foundation for Internet users to continue applying security behaviors at work. This study investigated the association, and the probability of a predictive relationship, between the independent variable (IV), type of Internet connectivity, and the dependent variable (DV), cyber security incidents, among adult users of the Internet in the U.S.A. Findings from a Chi-square test indicated that no statistically significant association and no statistically significant predictive relationship existed between the IV and the DV. The Chi-square test's results supported the results of the binomial logistic regression.
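For readers unfamiliar with the two tests named above, the sketch below shows a chi-square test of independence and a binomial logistic regression on a small contingency table; the counts and the 0/1 coding of connection type are invented for illustration and are not the study's data:

    # Hypothetical sketch: chi-square test of independence and binomial logistic
    # regression on an invented contingency table (not the study's data).
    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.linear_model import LogisticRegression

    # Rows: connection type (0 = wired, 1 = wireless); columns: incident (no, yes).
    table = np.array([[120, 35],
                      [110, 42]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")   # p > 0.05 -> no significant association

    # Expand the table into one row per respondent and fit the logistic model.
    X = np.repeat([[0], [0], [1], [1]], table.ravel(), axis=0)   # connection type
    y = np.repeat([0, 1, 0, 1], table.ravel())                   # incident indicator
    model = LogisticRegression().fit(X, y)
    print("estimated odds ratio:", np.exp(model.coef_[0][0]))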
30

A correlational study on the absence of incentives to share knowledge in a virtual community

Strickland, Vhondy 30 August 2014 (has links)
Extrinsic motivation may affect knowledge sharing in a virtual community. As virtual communities have become ubiquitous, understanding knowledge sharing in virtual communities has become very important. Knowledge sharing is one of the factors that allow virtual communities to remain viable. This study sought to observe knowledge sharing in a virtual community that does not use extrinsic motivation techniques as incentives to share knowledge. This correlational study used a framework that included the elements of social capital and outcome expectations. The study found that, over time, extrinsic rewards appear not to be important in knowledge sharing. The long-term effect may be that extrinsic rewards are much less important than the design of the virtual community and the internal motivation of its members. One hundred thirty-three persons participated in the study.
