About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Manikarnika : Proactive Crowd-Sourcing for Location Based Services

Vaidyanathan, NA January 2010 (has links)
This thesis presents the design and evaluation of a system that uses the location of a cell phone user to enable more effective performance monitoring. One end-use I propose is emergency management, by means of a framework that distributes its functionality between establishing data-set characteristics relevant to the problem and a visual tool for evaluating resource-scheduling proposals. Manikarnika is a modular framework, realized in a prototype for Reverse 111. The first step was to establish whether the parameters I hypothesized as useful actually were. Using a statistically significant number of traces obtained from real calls placed on the network, the utility of the location metric was established. To investigate a second metric, reputation, a benchmark for evaluating ideas from social network research was proposed, in order to move from arbitrary testing to a more systematic environment. This dissertation details the measurement, design and evaluation of an end-to-end, modular framework for emergency management, in which functionality is distributed so as to easily incorporate changing parameters: the sources of information, the emergency events, the resource requirements of those events, and the identification of callers who might provide better insight into an essentially dynamic situation. My work tries to bridge the chasm between research proposals for various end-uses and their application to real life.
By incorporating core electrical engineering measurement and simulation, and by extending a tool originally built for training emergency responders so that it can analyze various resource-scheduling agents across a diversity of administrative domains, I lay the groundwork for one possible solution, Reverse 111, which proposes proactive crowd-sourcing for emergency response, with straightforward extensions to commercial location-based applications.
2

Tattle - "Here's How I See It" : Crowd-Sourced Monitoring and Estimation of Cellular Performance Through Local Area Measurement Exchange

Liang, Huiguang 01 May 2015 (has links)
The operating environment of cellular networks can be in a constant state of change due to variations and evolutions of technology, subscriber load, and physical infrastructure. One cellular operator we interviewed described two key difficulties. First, they are unable to monitor the performance of their network in a scalable, fine-grained manner. Second, they find it difficult to monitor the service quality experienced by each user equipment (UE), and are consequently unable to effectively diagnose performance impairments on a per-UE basis. They currently expend considerable manual effort monitoring their network through controlled, small-scale drive testing. If this is not performed satisfactorily, they risk losing subscribers, as well as possible penalties from regulators. In this dissertation, we propose Tattle¹, a distributed, low-cost participatory sensing framework for the collection and processing of UE measurements. Tattle is designed to solve three problems: coverage monitoring (CM), service quality monitoring (QM), and per-device service quality estimation and classification (QEC). In Tattle, co-located UEs exchange uncertain location information and measurements using local-area broadcasts, preserving the context of co-location of these measurements. This allows us to develop U-CURE, as well as its delay-adjusted variant, to discard erroneously-localized samples and to reduce localization errors, respectively. It allows operators to generate timely, high-resolution and accurate monitoring maps, and then to make informed, expedient network-management decisions, from adjusting base-station parameters to making long-term infrastructure investments. We also propose a comprehensive statistical framework that allows an individual UE to estimate and classify its own network performance. In our approach, each UE monitors its recent measurements, together with those reported by co-located UEs.
Then, through our framework, UEs can automatically determine whether any observed impairment is endemic among other co-located devices. Subscribers that experience isolated impairments can then take limited remedial steps, such as rebooting their devices. We demonstrate Tattle's effectiveness by presenting key results using up to millions of real-world measurements, collected systematically using current generations of commercial off-the-shelf (COTS) mobile devices. For CM, we show that in urban built-up areas, GPS locations reported by UEs may have significant uncertainties and can sometimes be several kilometers away from their true locations. We describe how U-CURE can take into account reported location uncertainty and the knowledge of measurement co-location to remove erroneously-localized readings. This allows us to retain measurements with very high location accuracy, and in turn derive accurate, fine-grained coverage information. Operators can then react to specific areas with coverage issues in a timely manner. Using our approach, we showcase high-resolution results of actual coverage conditions in selected areas of Singapore. For QM, we show that localization performance in COTS devices may exhibit non-negligible correlation with network round-trip delay, resulting in localization errors of up to 605.32 m per 1,000 ms of delay. Naïve approaches that blindly accept measurements with their reported locations will therefore admit grossly mis-localized data points, affecting the fidelity of any geo-spatial monitoring information derived from these data sets. We demonstrate that the popular localization approach of combining the Global Positioning System with network-assisted localization may result in a median root-mean-square (RMS) error increase of over 60%, compared to simply using the Global Positioning System on its own.
We propose a network-delay-adjusted variant of U-CURE to cooperatively improve the localization performance of COTS devices. We show improvements of up to 70% in median RMS location error, even under uncertain real-world network delay conditions, with just 3 participating UEs. This allows us to refine the purported locations of delay measurements and, as a result, derive accurate, fine-grained and actionable cellular quality information. Using this approach, we present accurate cellular network delay maps of much higher spatial resolution than those naïvely derived from raw data. For QEC, we report on the delay performance characteristics of co-located devices subscribed to 2 particular cellular network operators in Singapore. We describe the results of applying our proposed approach to the QEC problem on real-world measurements of over 443,500 data points. We illustrate examples where "normal" and "abnormal" performances occur in real networks, and report instances where a device can experience complete outage while none of its neighbors are affected. We give quantitative results on how well our algorithm can detect an "abnormal" time series, with effectiveness increasing with the number of co-located UEs: with just 3 UEs we achieve a median detection accuracy of just under 70%, and with 7 UEs a median detection rate of just under 90%.
¹ Tattle, as a verb, means to gossip idly. By letting devices communicate their observations with one another, we explore the kinds of insights that can be elicited from this peer-to-peer exchange.
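The QEC step in this abstract, a UE judging whether an observed impairment is endemic among co-located peers or isolated to itself, can be sketched as a robust outlier test on recent delay series. This is an illustrative simplification under assumed names and a MAD-based statistic, not the statistical framework the thesis actually proposes:

```python
import statistics

def is_abnormal(own_delays, peer_delays, threshold=3.0):
    """Flag a device's recent delay samples (ms) as abnormal relative
    to co-located peers, via a robust z-score of medians.
    `peer_delays` is a list of delay series, one per co-located UE."""
    own_median = statistics.median(own_delays)
    peer_medians = [statistics.median(series) for series in peer_delays]
    center = statistics.median(peer_medians)
    # median absolute deviation: robust estimate of peer spread
    mad = statistics.median(abs(m - center) for m in peer_medians)
    if mad == 0:
        return own_median != center
    return abs(own_median - center) / mad > threshold
```

With more co-located peers the spread estimate stabilizes, which mirrors the reported trend of detection accuracy improving from 3 to 7 UEs.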
3

The Architecture of Mass Collaboration: How Open Source Commoning Will Change Everything

Gardner, Alec J. 11 October 2013 (has links)
No description available.
4

My4Sight: A Human Computation Platform for Improving Flu Predictions

Akupatni, Vivek Bharath 17 September 2015 (has links)
While many human computation (human-in-the-loop) systems exist in the field of Artificial Intelligence (AI) to solve problems that can't be solved by computers alone, comparatively few platforms exist for collecting human knowledge and for evaluating techniques that harness human insight to improve forecasting models for infectious diseases such as influenza and Ebola. In this thesis, we present the design and implementation of My4Sight, a human computation system developed to harness human insights and intelligence to improve forecasting models. This web-accessible system simplifies the collection of human insights through the careful design of two tasks: (i) asking users to rank system-generated forecasts in order of likelihood; and (ii) allowing users to improve upon an existing system-generated prediction. The structured output collected from querying human computers can then be used to build better forecasting models. My4Sight is designed to be a complete end-to-end analytical platform, and provides access to data collection features and statistical tools that are applied to the collected data. The results are communicated to the user, wherever applicable, in the form of visualizations for easier data comprehension. With My4Sight, this thesis makes a valuable contribution to the field of epidemiology by providing the data and infrastructure platform needed to improve forecasts in real time by harnessing the wisdom of the crowd. / Master of Science
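The first task above collects user rankings of system-generated forecasts. One plausible way to fuse many such rankings into a consensus order is a Borda count; the abstract does not specify My4Sight's actual aggregation rule, so the sketch below is an assumption:

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """Borda count: in a ranking of k forecasts, the first-ranked
    forecast earns k-1 points, the second k-2, and so on.
    Returns forecast identifiers sorted by total score, best first."""
    scores = defaultdict(int)
    for ranking in rankings:
        k = len(ranking)
        for position, forecast in enumerate(ranking):
            scores[forecast] += k - 1 - position
    return sorted(scores, key=scores.get, reverse=True)
```

A consensus rule like this turns noisy individual judgments into a single ordering that a forecasting pipeline can consume.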
5

Techniques et métriques non intrusives pour caractériser les réseaux Wi-Fi / Metrics and non-intrusive techniques to characterize Wi-Fi networks

Molina Troconis, Laudin Alessandro 05 July 2018 (has links)
Nowadays, mobile devices are present worldwide, with over 4.40 billion devices globally. These devices enable users to access the Internet via wireless networks. Different actors (e.g., home users, enterprises) install WiFi networks everywhere, without central coordination, creating chaotic deployments. As a result, WiFi networks are widely deployed all over the world, with high access-point (AP) density in urban areas. In this context, end-users and operators are trying to exploit these dense network deployments to obtain ubiquitous Internet connectivity, and possibly other services. However, taking advantage of these deployments requires strategies to gather and provide information about the available networks. In this dissertation, we first study the network discovery process within the context of these deployments. Then, we present the Wireless Measurements Sharing Platform, a collaborative information system to which mobile stations send simple network measurements that they have collected. By gathering and processing network measurements from many different users, the platform provides access to valuable characteristics of the deployment. We evaluate the usefulness of this collaborative platform through two applications: (1) the minimal access-point set, to reduce the energy needed to offer WiFi coverage in a given area; and (2) the optimization of scanning parameters, to reduce the time a mobile station needs for network discovery. Finally, we describe a method to identify whether an AP operates in a saturated channel, by passively monitoring the beacon arrival distribution.
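The final contribution, passively inferring channel saturation from the beacon arrival distribution, can be illustrated with a toy heuristic. APs target a fixed beacon interval (typically 102.4 ms); under heavy contention beacons are deferred or lost, inflating inter-arrival jitter and gaps. The function name and thresholds below are illustrative assumptions, not the thesis's method:

```python
def channel_saturated(beacon_times, interval=0.1024,
                      jitter_frac=0.25, loss_frac=0.1):
    """Guess whether a channel is saturated from the arrival times
    (seconds) of one AP's beacons. A beacon is 'late' if its gap
    exceeds the nominal interval by jitter_frac; a gap near a
    multiple of the interval suggests missed beacons."""
    gaps = [b - a for a, b in zip(beacon_times, beacon_times[1:])]
    late = sum(1 for g in gaps if g > interval * (1 + jitter_frac))
    missed = sum(round(g / interval) - 1 for g in gaps if g > 1.5 * interval)
    expected = len(gaps) + missed
    return late / len(gaps) > loss_frac or missed / expected > loss_frac
```

Because it only listens to beacons the AP already transmits, a probe like this stays fully passive, matching the "non-intrusive" goal in the title.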
6

Essays on Data Driven Insights from Crowd Sourcing, Social Media and Social Networks

Velichety, Srikar January 2016 (has links)
The beginning of this decade has seen a phenomenal rise in the amount of data generated in the world. While this increase provides us with opportunities to understand various aspects of human behavior and the mechanisms behind new phenomena, the technologies, statistical techniques and theories required to gain an in-depth and comprehensive understanding haven't progressed at an equal pace. As recently as five years ago, we dealt with problems where there is insufficient prior social science or economic theory and the interest is only in predicting the outcome, or where there is an appropriate social science or economic theory and the interest is in explaining a given phenomenon. Today, we deal with problems where there is insufficient social science or economic theory but the interest is in explaining a given phenomenon. This creates a significant challenge whose solution is of equal interest to academics and practitioners. In my research, I contribute towards addressing these challenges by building exploratory frameworks that leverage a variety of techniques, including social network analysis, text and data mining, econometrics, statistical computing and visualization. My three-essay dissertation focuses on understanding the antecedents of the quality of user-generated content, and on the subscription and un-subscription behavior of users of lists on social media. Using a data science approach on population-sized samples from Wikipedia and Twitter, I demonstrate the power of customized exploratory analyses in uncovering facts that social science or economic theory does not dictate, and show how metrics from these analyses can be used to build prediction models with higher accuracy. I also demonstrate a method for combining exploration, prediction and explanatory modeling, and propose to extend this methodology to provide causal inference.
This dissertation has general implications for building better predictive and explanatory models and for mining text efficiently in social media.
7

Mobile Crowd Sensing in Edge Computing Environment

January 2019 (has links)
Mobile crowdsensing (MCS) applications leverage user data to derive useful information through data-driven evaluation of innovative user contexts and the gathering of information at a high data rate. Such access to context-rich data can potentially enable computationally intensive crowd-sourcing applications, such as tracking a missing person or capturing a highlight video of an event. Using snippets and pictures captured from multiple mobile phone cameras with specific contexts can improve the data acquired in such applications. These MCS applications require efficient processing and analysis to generate results in real time. A human user, a mobile device and their interactions cause changes in context on the mobile device, affecting the quality of the contextual data that is gathered. Using MCS data in real-time mobile applications is challenging due to the complex inter-relationship between: a) the availability of context (context is available on the mobile phones, not in the cloud); b) the cost of data transfer to remote cloud servers, in terms of both communication time and energy; and c) the availability of local computational resources on the mobile phone (computation may lead to rapid battery drain or increased response time). Resource-constrained mobile devices therefore need to offload some of their computation. This thesis proposes ContextAiDe, an end-to-end architecture for data-driven distributed applications, aware of human-mobile interactions, using edge computing. Edge processing supports real-time applications by reducing communication costs. The goal is to optimize the quality and the cost of acquiring the data using a) modeling and prediction of mobile user contexts; b) efficient strategies for scheduling application tasks on heterogeneous devices, including multi-core devices such as GPUs; and c) power-aware scheduling of virtual machine (VM) applications in cloud infrastructure, e.g., elastic VMs.
The ContextAiDe middleware is integrated into the mobile application via an Android API. The evaluation consists of overhead and cost analysis in the scenario of a "perpetrator tracking" application on the cloud, fog servers, and mobile devices. LifeMap data sets containing actual sensor data traces from mobile devices are used to simulate the application run for large-scale evaluation. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
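The offloading trade-off sketched in this abstract, weighing transfer cost against local compute cost across phone, edge and cloud, can be captured by a toy linear cost model. This is a sketch of the general decision, not ContextAiDe's actual scheduler; all parameter names and the model itself are assumptions:

```python
def choose_execution_site(task_cycles, input_bytes,
                          local_cps, edge_cps, cloud_cps,
                          edge_bps, cloud_bps):
    """Pick where to run a task: local execution costs compute time
    only; offloading to edge or cloud adds input-transfer time.
    *_cps are compute rates (cycles/s), *_bps are link rates (bytes/s)."""
    costs = {
        "local": task_cycles / local_cps,
        "edge":  input_bytes / edge_bps + task_cycles / edge_cps,
        "cloud": input_bytes / cloud_bps + task_cycles / cloud_cps,
    }
    return min(costs, key=costs.get)
```

Even this crude model reproduces the intuition that edge servers win when cloud links are slow but the phone's processor is slower still.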
8

WiSDM: a platform for crowd-sourced data acquisition, analytics, and synthetic data generation

Choudhury, Ananya 15 August 2016 (has links)
Human behavior is a key factor influencing the spread of infectious diseases. Individuals adapt their daily routine and typical behavior during the course of an epidemic; the adaptation is based on their perception of the risk of contracting the disease and of its impact. As a result, it is desirable to collect behavioral data before and during a disease outbreak. Such data can help in creating better computer models that can, in turn, be used by epidemiologists and policy makers to better plan for and respond to infectious disease outbreaks. However, traditional data collection methods are not well suited to acquiring human behavioral information, especially as it pertains to epidemic planning and response. Internet-based methods are an attractive complementary mechanism for collecting behavioral information; systems such as Amazon Mechanical Turk (MTurk) and online survey tools provide simple ways to collect it. This thesis explores new methods for information acquisition, especially of behavioral information, that leverage this recent technology. We present the design and implementation of a crowd-sourced surveillance data acquisition system, WiSDM. WiSDM is a web-based application that can be used by anyone with access to the Internet and a browser. Furthermore, it is designed to leverage online survey tools and MTurk; WiSDM can be embedded within MTurk in an iFrame. WiSDM has a number of novel features, including (i) the ability to support a model-based abductive reasoning loop: a flexible and adaptive information acquisition scheme driven by causal models of epidemic processes; (ii) question routing: an important feature to increase data acquisition efficacy and reduce survey fatigue; and (iii) integrated surveys: interactive surveys that provide additional information on the survey topic and improve user motivation. We evaluate the framework's performance using Apache JMeter and present our results.
We also discuss three other extensions of WiSDM: the API Adapter, the Synthetic Data Generator, and WiSDM Analytics. The API Adapter is an ETL extension of WiSDM that enables extracting data from disparate data sources and loading it into the WiSDM database. The Synthetic Data Generator allows epidemiologists to build synthetic survey data using NDSSL's Synthetic Population as agents. WiSDM Analytics empowers users to perform analysis on the data by writing simple Python code using Versa APIs. We also propose a data model that is conducive to survey data analysis. / Master of Science
9

Targeted feedback collection for data source selection with uncertainty

Cortés Ríos, Julio César January 2018 (has links)
The aim of this dissertation is to contribute to research on pay-as-you-go data integration through an approach for targeted feedback collection (TFC), which aims to improve the cost-effectiveness of feedback collection, especially when there is uncertainty associated with the characteristics of the integration artefacts. In particular, this dissertation focuses on the data source selection task in data integration. It shows how the impact of uncertainty about the evaluation of the characteristics of candidate data sources, also known as data criteria, can be reduced in a cost-effective manner, thereby improving solutions to the data source selection problem. It also shows how alternative approaches, such as active learning and simple heuristics, have drawbacks that throw light on the pursuit of better solutions. The dissertation describes the resulting TFC strategy and reports on its evaluation against alternative techniques. The evaluation scenarios range from synthetic data sources with a single criterion and reliable feedback to real data sources with multiple criteria and unreliable feedback (such as can be obtained through crowdsourcing). The results confirm that the proposed TFC approach is cost-effective and leads to improved solutions for data source selection by seeking feedback that reduces uncertainty about the data criteria of the candidate data sources.
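A minimal flavor of targeting feedback where uncertainty is highest is uncertainty sampling: model each source's data criterion as a Beta posterior over positive and negative feedback received so far, and ask the crowd about the pair whose estimate is most uncertain. The dissertation's TFC strategy is more sophisticated; the function and data shape below are assumptions for illustration:

```python
def next_feedback_target(counts):
    """Pick the (source, criterion) pair to request feedback on next.
    `counts` maps (source, criterion) -> (positives, negatives).
    Each criterion estimate is a Beta(pos+1, neg+1) posterior
    (uniform prior); we query the pair with the largest variance."""
    def beta_variance(pos, neg):
        a, b = pos + 1, neg + 1
        n = a + b
        return (a * b) / (n * n * (n + 1))
    return max(counts, key=lambda k: beta_variance(*counts[k]))
```

A source with no feedback yet has the widest posterior, so it is queried first, which is the cost-effectiveness intuition behind targeting.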
10

群眾外包策略探究-以台灣流行服飾業者為例 / The crowd sourcing strategy- A case study on Fashion industry in Taiwan

林于涵 Unknown Date (has links)
The Industrial Revolution of the eighteenth century made mass production the norm across industries and brought society great wealth. With the spread of the Internet and social networking sites, however, mass production can no longer cope with an increasingly complex market environment, and new business models are needed. As the concept of online communities has risen, crowdsourcing has come to be applied widely across industries, no longer limited to open-source software but extending to T-shirts, technology, magazines and more; Threadless.com and InnoCentive in the United States are successful examples. Crowdsourcing here means providing a platform on which the crowd contributes creative ideas, with new products developed through a voting mechanism rather than the traditional route of in-house development and manufacturing. This study examines how crowdsourcing is applied to the commercialization process of fashion apparel firms in Taiwan, in order to identify key success strategies. To clarify the complexities of crowdsourced commercialization and the details of interaction with the crowd, a multiple-case study method was adopted: in-depth interviews with industry experts trace the firms' development and, from the case firms' perspective, explore the key strategies of crowdsourcing. The study finds four checkpoints that determine whether a crowdsourcing strategy is feasible: the crowd reward mechanism, the intermediary web platform, the selection mechanism, and production and operations. A firm must first build a community united by a shared interest, provide a platform for free creative expression, select products for manufacture through a fair and efficient appraisal mechanism, and produce efficiently to bring those products to market.
