221

Implementation of Citizens’ Observations in Urban Pluvial Flood Modelling / Implementering av Medborgarobservationer i Urban Skyfallsmodellering

Schück, Fredrik January 2021 (has links)
Damage caused by urban pluvial floods is expected to increase due to climate change and urbanization, as more citizens are affected in densely populated cities and extreme rainfall occurs more frequently and with higher intensity. To prepare cities for these calamities, urban pluvial flood models are created to provide knowledge about how an extreme rainfall event could inundate the studied city. However, due to the scarcity of observation data from these rainfall events, flood models are seldom calibrated, even though calibration is necessary to ensure their accuracy. To improve the feasibility of calibration, an emerging data source was tested: crowdsourced images from citizens. Citizens' observations have become increasingly available due to the spread of mobile phones and the development of social media, which enable citizens to document their observations and share them publicly. Researchers could use these observations as an unconventional data source to calibrate models and reduce the knowledge gap regarding urban floods. The aim of this study was to explore and increase our understanding of how citizens' observations can be used to calibrate an urban pluvial flood model. A case study of the 2014 cloudburst in Malmö was conducted to study this topic. During that event, more than 100 mm of rain fell over a period of 6 hours in the city and caused 60 million euros in damage. A total of 297 images depicting the flood caused by the cloudburst were gathered from social media platforms, newspaper archives, and by asking citizens directly. Images were screened and analysed: water levels were estimated in 66 images and were then used to calibrate a 2D flood model. Furthermore, a sensitivity analysis of the calibrated results was conducted by calculating the RMSE for different subsets and comparing it with the RMSE for the full dataset of citizens' observations. This was done to study how different characteristics, such as timestamp, source, sample size, and location of the images, influence the calibration procedure. After the model was calibrated, the importance of spatial variability in the rainfall input was tested by comparing the flood model output between the spatially varied observed rainfall and a Chicago Design Storm rainfall, which lacks spatial variability. It was concluded that images from citizens can be used to calibrate an urban pluvial flood model, but the procedure is time-consuming. However, it was also evident that images requested directly from citizens reduced the time needed, as their local knowledge could be integrated. The calibration procedure was also sensitive to the quality of the observations, especially to when the images were taken relative to the rainfall event. Even though the study had limitations, it demonstrates new possibilities for calibrating urban pluvial flood models. / The consequences of flooding from cloudbursts in cities, so-called pluvial floods, are expected to increase due to urbanization and climate change. This is because more people are affected by floods in densely built cities and because cloudbursts are expected to increase in both intensity and frequency. Cloudburst models can, however, improve the understanding of how extreme rainfall floods cities. With this knowledge, measures to minimize the consequences, such as blue-green infrastructure, can be implemented. However, there is a lack of observation data from pluvial floods, which means that these models are very rarely calibrated. Calibration is important to ensure reliable models.
To improve the possibility of calibrating these models, the use of observations from citizens was investigated. These observations are a relatively untested method but have become increasingly available thanks to ever-better mobile phone cameras and the development of social media, which make it easy for citizens to document their observations and share them publicly. The aim of this study is therefore to increase the understanding of how images from citizens can be used to enable the calibration of flood models. A case study of a cloudburst in Malmö in 2014 is used to evaluate this method. During this cloudburst, more than 100 mm of rain fell, causing roughly 600 million SEK in damage. A total of 297 images depicting the flood caused by the cloudburst were collected. The images were gathered from social media, newspaper photo archives, and by asking citizens for images. Water levels were estimated in 66 images, and these were then used to calibrate a 2D cloudburst model. In addition to the calibration, a sensitivity analysis of the calibrated results was carried out by comparing the mean error for different subsets of the images against the mean error for all images. This was done to study how different characteristics, such as when the image was taken and its origin, as well as the sample size and location of the images, affect the calibration process. After the model had been calibrated, the importance of spatial variation in the rainfall was also tested by comparing the simulated water levels between the spatially varied historical rainfall and a synthetic CDS (Chicago Design Storm) rainfall, which lacks such variation. From this it was concluded that images from citizens can be used to calibrate a cloudburst model, but the method is time-consuming. However, it was clear that images requested directly from citizens reduced the workload, since their local knowledge could be included. The calibration was also sensitive to the data quality of the observations, especially to when the images were taken relative to the rainfall. Even though the study had limitations, it shows that there are great possibilities for calibrating cloudburst models with observations from citizens.
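To make the sensitivity analysis described above concrete, the sketch below computes the RMSE between citizen-estimated and simulated water levels for the full set of observations and for subsets grouped by source. It is a minimal illustration of the kind of comparison the thesis describes, not its actual calibration code; the observation values, sources, and subset choice are hypothetical.

    import math

    def rmse(observed, simulated):
        """Root-mean-square error between observed and simulated water levels (metres)."""
        assert observed and len(observed) == len(simulated)
        return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

    # Hypothetical citizen observations: estimated water level, modelled level, image source.
    observations = [
        {"obs": 0.32, "sim": 0.28, "source": "social_media"},
        {"obs": 0.55, "sim": 0.61, "source": "newspaper"},
        {"obs": 0.20, "sim": 0.18, "source": "direct_inquiry"},
        {"obs": 0.47, "sim": 0.39, "source": "social_media"},
    ]

    full = rmse([o["obs"] for o in observations], [o["sim"] for o in observations])
    print(f"RMSE, all observations: {full:.3f} m")

    # Sensitivity check: recompute the RMSE on subsets (here, grouped by source)
    # and compare each against the RMSE of the full dataset.
    for src in sorted({o["source"] for o in observations}):
        subset = [o for o in observations if o["source"] == src]
        sub = rmse([o["obs"] for o in subset], [o["sim"] for o in subset])
        print(f"RMSE, {src}: {sub:.3f} m (difference vs. full set: {sub - full:+.3f} m)")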
222

Designing Security Defenses for Cyber-Physical Systems

Foruhandeh, Mahsa 04 May 2022 (has links)
Legacy cyber-physical systems (CPSs) were designed without considering cybersecurity as a primary design tenet, especially given their evolving operating environment. Examples of legacy systems include automotive control, navigation, transportation, and industrial control systems (ICSs), to name a few. To make matters worse, the cost of designing and deploying defenses in existing legacy infrastructure can be overwhelming, as millions or even billions of legacy CPSs are already in use. This economic angle prevents the use of defenses that are not backward compatible. Moreover, any protection has to operate efficiently in resource-constrained environments that are dynamic in nature. Hence, existing approaches that require expensive additional hardware, propose a new protocol from scratch, or rely on complex numerical operations such as strong cryptographic solutions are less likely to be deployed in practice. In this dissertation, we explore a variety of lightweight solutions for securing different existing CPSs without requiring any modifications to the original system design at the hardware or protocol level. In particular, we use fingerprinting, crowdsourcing, and deterministic models as alternative backwards-compatible defenses for securing vehicles, global positioning system (GPS) receivers, and a class of ICSs called supervisory control and data acquisition (SCADA) systems, respectively. We use fingerprinting to address the deficiencies in automotive cybersecurity from the angle of controller area network (CAN) security. The CAN protocol is the de facto bus standard commonly used in the automotive industry for connecting electronic control units (ECUs) within a vehicle. The broadcast nature of this protocol, along with the lack of authentication or integrity guarantees, creates a foothold for adversaries to perform arbitrary data injection or modification and impersonation attacks on the ECUs. We propose SIMPLE, a single-frame-based physical-layer identification scheme for intrusion detection and prevention on such networks. Physical-layer identification, or fingerprinting, is a method that takes advantage of the manufacturing inconsistencies in the hardware components that generate the analog signal for the CPS of interest. It translates the manifestation of these inconsistencies, which appears in the analog signals, into unique features called fingerprints, which can later be used for authentication. Our solution is resilient to variations in ambient temperature and supply voltage, and to aging. Next, we use fingerprinting and crowdsourcing in two separate protection approaches, leveraging two different perspectives on securing GPS receivers against spoofing attacks. GPS is the predominant non-authenticated navigation system. The security issues inherent in civilian GPS are exacerbated by the fact that its design and implementation are public knowledge. To address this problem, we first introduce Spotr, a GPS spoofing detection approach based on device fingerprinting that determines the authenticity of signals from their physical-layer similarity to signals known to have originated from GPS satellites. More specifically, we can detect spoofing activity and track genuine signals across different times and locations and under propagation effects related to environmental conditions.
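As a rough illustration of the fingerprinting idea described above, the sketch below reduces an analog voltage trace to a few statistical features and accepts a frame only if those features stay close to an enrolled fingerprint. This is a toy example of the general technique, not the SIMPLE or Spotr pipeline; the feature set, sample values, and distance threshold are all hypothetical.

    import statistics

    def extract_fingerprint(samples):
        """Reduce an analog voltage trace to a small feature vector.
        Real systems use richer time- and frequency-domain features; this is illustrative."""
        return (
            statistics.mean(samples),
            statistics.pstdev(samples),
            max(samples) - min(samples),  # peak-to-peak amplitude
        )

    def matches(candidate, enrolled, threshold=0.05):
        """Accept the frame if the candidate fingerprint is close to the enrolled one."""
        distance = sum((a - b) ** 2 for a, b in zip(candidate, enrolled)) ** 0.5
        return distance <= threshold

    # Enrolment: a trace recorded from the legitimate transmitter (values are made up).
    enrolled = extract_fingerprint([2.51, 2.49, 2.52, 2.50, 2.48, 2.51])

    # Runtime: a frame whose analog characteristics deviate is flagged as suspicious.
    suspect = extract_fingerprint([2.71, 2.69, 2.74, 2.70, 2.68, 2.72])
    print("frame accepted" if matches(suspect, enrolled) else "possible impersonation")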
In a different approach, at a higher level, we put forth Crowdsourcing GPS, a complete solution for GPS spoofing detection, recovery, and attacker localization. Crowdsourcing is a method where multiple entities share their observations of the environment and come together as a whole to make a more accurate or reliable decision on the status of the system. Crowdsourcing has the advantage of deployment with less complexity and distributed cost; however, its functionality depends on the adoption rate among users. Here, we have two methods for implementing Crowdsourcing GPS. In the first method, the users in the crowd are aware of their approximate distance from other users via Bluetooth. They cross-validate this approximate distance with the GPS-derived distance and, in case of any discrepancy, report ongoing spoofing activity. This method is a strong candidate when the users in the crowd are sparsely distributed. It is also very effective when tackling multiple coordinated adversaries. In the second method, we exploit the angular dispersion of the users with respect to the direction from which the adversarial signal is transmitted. As a result, users who are not facing the attacker remain safe, because the human body consists mostly of water and absorbs the weak adversarial GPS signal. The safe users help the spoofed users discover that there is an ongoing attack and recover from it. Additionally, the angular information is used to localize the adversary. This method is slightly more complex and shows the best performance in dense areas. It is also designed on the assumption that the spoofing attack is terrestrial. Finally, we propose a tandem IDS to secure SCADA systems. SCADA systems play a critical role in most safety-critical ICS infrastructures. The evolution of communications technology has rendered modern SCADA systems and their connected actuators and sensors vulnerable to malicious attacks on both the physical and application layers. Conventional IDSs built for securing SCADA systems focus on a single layer of the system. With the tandem IDS we break this habit and propose a strong multi-layer solution that is able to expose a wide range of attacks. To be more specific, the tandem IDS comprises two parts: a traditional network IDS and a shadow replica. We design the shadow replica as a deterministic IDS. It performs a workflow analysis and makes sure that the logical flow of events in the SCADA controller and its connected devices maintains the expected states. Any deviation indicates malicious activity or a reliability issue. To model the application-level events, we leverage finite state machines (FSMs) to compute the anticipated states of all of the devices. This is feasible because in many existing ICSs the flow of traffic and the resulting states and actions of the connected devices are deterministic in nature. Consequently, this leads to a reliable solution that is free of uncertainty. Aside from detecting traditional network attacks, our approach bypasses the attacker if it succeeds in taking over the devices, and it maintains continuous service if the SCADA controller is compromised. / Doctor of Philosophy / Our lives are entangled with cyber-physical systems (CPSs) on a daily basis. Examples of these systems are vehicles, navigation systems, transportation systems, industrial control systems, etc.
CPSs are mostly legacy systems built with a focus on performance; security was not considered in their design, and yet they are now used throughout our everyday lives. After numerous demonstrations of cyber attacks, the necessity of protecting CPSs from adversarial activity is no longer in question. Many advanced cryptographic techniques are far too complex to be implemented in existing CPSs such as cars, satellites, etc. In this dissertation, we attempt to secure such resource-constrained systems using simple, backward-compatible techniques. We design cheap, lightweight solutions with no modifications to the original system. In part of our research, we use fingerprinting as a technique to keep passenger cars from being hacked and GPS receivers from being spoofed. For a brief description of fingerprinting, consider two identical T-shirts with the same size and design. They will always have subtle differences between them, no matter how hard the tailor tries to make them identical; no two T-shirts are exactly alike. This idea, when applied to analog signalling in electronic devices, is called fingerprinting. Here, we fingerprint the small computers inside a car, which enables us to identify these computers and prevent hacking. We also use signal levels to design fingerprints for GPS signals and use these fingerprints to distinguish counterfeit GPS signals from those originating from genuine satellites. This summarizes two major contributions of the dissertation. Our earlier contribution to GPS security was effective, but it was heavily dependent on the underlying hardware, requiring extensive training for each radio receiver it was protecting. To remove this dependence on training for the specific underlying hardware, we design and implement the next framework using defenses that require application-layer access. Thus, we propose two methods that leverage crowdsourcing approaches to defend against GPS spoofing attacks and, at the same time, improve localization accuracy for commodity mobile devices. Crowdsourcing is a method where several devices agree to share their information with each other. In this work, GPS users share their location and direction information and, in case of any discrepancy, they conclude that they are under attack and cooperate to recover from it. Last, we shift gears to industrial control systems (ICSs) and propose a novel IDS to protect them against various cyber attacks. Unlike conventional IDSs, which focus on one layer of the system, our IDS comprises two main components: a conventional component that exposes traditional attacks, and a second component called a shadow replica. The replica mimics the behavior of the system and compares it with that of the actual system in real time. From any deviation between the two, it detects attacks that target the logical flow of events in the system. Note that such attacks are more sophisticated and difficult to detect because they do not leave any obvious footprints behind. Upon detection of attacks on the original controller, our replica takes over the responsibilities of the original ICS controller and provides service continuity.
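The first crowdsourcing method above amounts to a simple consistency check between two independent distance estimates. The sketch below shows that check with made-up coordinates and a hypothetical discrepancy threshold; it is an illustration of the idea, not the dissertation's implementation.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two GPS fixes."""
        r = 6_371_000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def spoofing_suspected(gps_self, gps_peer, bluetooth_range_m, tolerance_m=30.0):
        """Flag spoofing when the GPS-derived distance disagrees with the Bluetooth-derived one."""
        gps_distance = haversine_m(*gps_self, *gps_peer)
        return abs(gps_distance - bluetooth_range_m) > tolerance_m

    # Two nearby users: Bluetooth ranging says ~10 m apart, but spoofed GPS places them ~2 km apart.
    print(spoofing_suspected((55.6050, 13.0038), (55.5870, 13.0100), bluetooth_range_m=10.0))  # True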
223

Supporting Open Source Investigative Journalism with Crowdsourced Image Geolocation

Kohler, Rachel 10 August 2017 (has links)
Journalists rely on image and video verification to support their investigations and often utilize open-source tools to verify user-generated content, but current practice requires that experts be involved in every step of the process. Additionally, lacking custom tools to support verification efforts, experts are often limited to the utility of existing, openly available tools, which may or may not support the interactions and information gathering they require. We aim to support the process of geolocating images and videos through crowdsourcing. By enabling crowd workers to participate in the geolocation process, we can provide investigative journalists with efficient and complete verification of image locations. Parallelizing the search speeds up the verification process and provides more extensive coverage, all while allowing the expert to follow up on other leads or investigative work. We produced a software prototype called GroundTruth, which enables crowd workers to support investigative journalists in geolocating visual media quickly and accurately. Additionally, this work contributes experimental results demonstrating how the crowd can be utilized to support complex sensemaking tasks. / Master of Science
224

Smartphone Privacy in Citizen Science

Roth, Hannah Michelle 18 July 2017 (has links)
Group signature schemes enable anonymous-yet-accountable communications. Such a capability is extremely useful for modern applications such as smartphone-based crowdsensing and citizen science. A prototype named GROUPSENSE was developed to support anonymous-yet-accountable crowdsensing with SRBE on Android devices. From this prototype, an Android crowdsensing application was implemented to support privacy in citizen science. In this thesis, we evaluate the usability of our privacy-preserving crowdsensing application for citizen science projects. An in-person user study with 22 participants was performed, showing that participants understood the importance of privacy in citizen science and were willing to install privacy-enhancing applications, yet over half of the participants did not understand the privacy guarantee. Based on these results, modifications to the crowdsensing application were made with the goal of improving participants' understanding of the privacy guarantee. / Master of Science
225

Essays on the Management of Online Platforms: Bayesian Perspectives

Gupta, Debjit 06 August 2020 (has links)
This dissertation presents three essays that focus on various aspects pertaining to the management of online platforms, defined as "digital services that facilitate interactions between two or more distinct, but interdependent sets of users (whether firms or individuals) who interact through the service via the Internet" (OECD, 2019). The interactions benefit both the users and the platform. Managing online platforms involves developing strategies for one or more of three value-adding functions: (a) lowering search costs for the parties connecting through the platform; (b) providing a technology infrastructure that facilitates transactions at scale by sharing both demand- and supply-side costs; and (c) locating other audiences or consumers for the output that results from the transaction. The platform manager must manage these value-adding functions. Thus, one important management task is to recognize potential asymmetries in the economic and/or psychological motivations of the transacting parties connected through the platform. In this dissertation, I empirically examine these issues in greater detail. The first essay, "Incentivizing User-Generated Content—A Double-Edged Sword: Evidence from Field Data and a Controlled Experiment," addresses the conundrum faced by online platform managers interested in crowdsourcing user-generated content (UGC) in prosocial contexts. The dilemma stems from the fact that offering monetary incentives to stimulate UGC contributions also has a damping effect on peer approval, which is an important source of non-monetary recognition valued by UGC contributors in prosocial contexts. The second essay, "Matching and Making in Matchmaking Platforms: A Structural Analysis," examines matchmaking platforms, focusing specifically on the problem of misaligned incentives between the platform and the agents. Based on data from the Ultimate Fighting Championship (UFC) on fighter characteristics and pay-per-view revenues associated with specific bouts, we identify the potential for conflicts of interest and examine strategies that may be used to mitigate such problems. The third essay, "Matching and Making in Matching Markets: A Managerial Decision Calculus," extends the empirical model and analytical work to a class of commonly encountered one-sided matching market problems. It provides the conceptual outline of a decision calculus that allows managers to explore the revenue and profitability implications of adaptive changes to the tier structures and matching algorithms. / Doctor of Philosophy / The 21st century has witnessed the rise of the platform economy. Consumers routinely interact with online platforms in various ways in their day-to-day activities. For instance, they interact with platforms such as Quora, StackOverflow, Uber, and Airbnb, to name only a few. Such platforms address a variety of needs, ranging from providing users with answers to a variety of questions to matching them with a range of service providers (e.g., for travel and dining needs). However, the rapid growth of the platform economy has created a knowledge gap for both consumers and platforms. The three essays in this dissertation attempt to contribute to the literature in this area. The first essay, "Incentivizing User-Generated Content—A Double-Edged Sword: Evidence from Field Data and a Controlled Experiment," examines how crowdsourcing contests influence the quantity and quality of user-generated content (UGC).
Analyzing data from the popular question and answer website Quora, we find that offering monetary incentives to stimulate UGC contributions increases contributions but also has a simultaneous damping effect on peer endorsement, which is an important source of non-monetary recognition for UGC contributors in prosocial contexts. The second essay, "Matching and Making in Matchmaking Platforms: A Structural Analysis," examines matchmaking platforms, focusing on the problem of misaligned incentives between the platform and the agents. Based on data from the Ultimate Fighting Championship (UFC) on fighter characteristics, and pay-per-view revenues associated with specific bouts, we identify the potential for conflicts of interest and examine strategies that may be used to mitigate such problems. The third essay, "Matching and Making in Matching Markets: A Managerial Decision Calculus," extends the empirical model and analytical work to a class of commonly encountered one-sided matching market problems. It provides the conceptual outline of a decision calculus that allows managers to explore the revenue and profitability implications of adaptive changes to the tier structures and matching algorithms.
226

Designing Human-AI Collaborative Systems for Historical Photo Identification

Mohanty, Vikram 30 August 2023 (has links)
Identifying individuals in historical photographs is important for preserving material culture, correcting historical records, and adding economic value. Historians, antiques dealers, and collectors often rely on manual, time-consuming approaches. While Artificial Intelligence (AI) offers potential solutions, it is not widely adopted due to a lack of specialized tools and inherent inaccuracies and biases. In my dissertation, I address this gap by combining the complementary strengths of human intelligence and AI. I introduce Photo Sleuth, a novel person identification pipeline that combines crowdsourced expertise with facial recognition, supporting users in identifying unknown portraits from the American Civil War era (1861--65). Despite successfully identifying numerous unknown photos, users often face the `last-mile problem' --- selecting the correct match(es) from a shortlist of high-confidence facial recognition candidates while avoiding false positives. To assist experts, I developed Second Opinion, an online tool that employs a novel crowdsourcing workflow, inspired by cognitive psychology, effectively filtering out up to 75% of facial recognition's false positives. Yet, as AI models continually evolve, changes in the underlying model can potentially impact user experience in such crowd--expert--AI workflows. I conducted an online study to understand user perceptions of changes in facial recognition models, especially in the context of historical person identification. Our findings showed that while human-AI collaborations were effective in identifying photos, they also introduced false positives. To reduce these misidentifications, I built Photo Steward, an information stewardship architecture that employs a deliberative workflow for validating historical photo identifications. Building on this foundation, I introduced DoubleCheck, a quality assessment framework that combines community stewardship and comprehensive provenance information, for helping users accurately assess photo identification quality. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that foster accurate decision-making in the context of historical photo identification. / Doctor of Philosophy / Identifying historical photos offers significant cultural and economic value; however, the identification process can be complex and challenging due to factors like poor source material and limited research resources. In my dissertation, I address this problem by leveraging the complementary strengths of human intelligence and Artificial Intelligence (AI). I built Photo Sleuth, an online platform that helps users identify unknown portraits from the American Civil War era. This platform employs a novel person identification workflow that combines crowdsourced human expertise and facial recognition. While AI-based facial recognition is effective at quickly scanning thousands of photos, it can sometimes present challenges. Specifically, it provides the human expert with a shortlist of highly similar-looking candidates from which the expert must discern the correct matches; I call this the `last-mile problem' of person identification. To assist experts in navigating this challenge, I developed Second Opinion, a tool that employs a novel crowdsourcing workflow inspired by cognitive psychology, named seed-gather-analyze.
Further, I conducted an online study to understand the influence of changes in the underlying facial recognition models on the downstream person identification tasks. While these tools enabled numerous successful identifications, they also occasionally led to misidentifications. To address this issue, I introduced Photo Steward, an information stewardship architecture that encourages deliberative decision-making while identifying photos. Building upon the principles of information stewardship and provenance, I then developed DoubleCheck, a quality assessment framework that presents pertinent information, aiding users in accurately evaluating the quality of historical photo IDs. Through my dissertation, I explore the design and deployment of human-AI collaborative tools, emphasizing the creation of sustainable online communities and workflows that encourage accurate decision-making in the context of historical photo identification.
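The shortlist-then-filter pattern described above can be illustrated with a small sketch: a facial-recognition step ranks gallery photos by embedding similarity, and a crowd-vote step keeps only candidates that enough workers endorse. This is a generic illustration of the human-AI pattern, not Photo Sleuth's or Second Opinion's actual pipeline; the embeddings, vote counts, and thresholds are hypothetical.

    def shortlist(query_embedding, gallery, top_k=5, min_similarity=0.6):
        """Facial-recognition step: rank gallery photos by cosine similarity to the query."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
            return dot / norm if norm else 0.0
        scored = [(name, cosine(query_embedding, emb)) for name, emb in gallery.items()]
        scored = [(n, s) for n, s in scored if s >= min_similarity]
        return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

    def crowd_filter(candidates, votes, approval_ratio=0.75):
        """Crowd step: keep only candidates that enough workers judged to be the same person."""
        kept = []
        for name, score in candidates:
            yes, total = votes.get(name, (0, 0))
            if total and yes / total >= approval_ratio:
                kept.append((name, score))
        return kept

    # Hypothetical embeddings and crowd votes (yes-votes, total votes) for each candidate.
    gallery = {"soldier_A": [0.9, 0.1, 0.3], "soldier_B": [0.8, 0.2, 0.4], "soldier_C": [0.1, 0.9, 0.2]}
    query = [0.85, 0.15, 0.35]
    candidates = shortlist(query, gallery)
    print(crowd_filter(candidates, votes={"soldier_A": (9, 10), "soldier_B": (3, 10)}))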
227

Leveraging Linguistic Insights for Uncertainty Calibration of ChatGPT and Evaluating Crowdsourced Annotations

Venkata Divya Sree Pulipati (18469230) 09 July 2024 (has links)
The quality of crowdsourced annotations has always been a challenge due to the variability in annotators' backgrounds, task complexity, the subjective nature of many labeling tasks, and various other reasons. Hence, it is crucial to evaluate these annotations to ensure their reliability. Traditionally, human experts evaluate the quality of crowdsourced annotations, but this approach has its own challenges. This paper therefore proposes to leverage large language models such as ChatGPT-4 to evaluate the existing crowdsourced MAVEN dataset and explores this as an alternative solution. However, due to the stochastic nature of LLMs, it is important to discern when to trust and when to question LLM responses. To address this, we introduce a novel approach that applies Rubin's framework for identifying and using linguistic cues within LLM responses as indicators of the LLM's certainty level. Our findings reveal that ChatGPT-4 successfully identified 63% of the incorrect labels, highlighting the potential for improving data label quality through human-AI collaboration on these identified inaccuracies. This study underscores the promising role of LLMs in evaluating crowdsourced data annotations, offering a way to enhance the accuracy and fairness of crowdsourced annotations while saving time and costs.
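As a rough illustration of how linguistic cues might be read as certainty signals, the toy sketch below counts hedging and boosting words in a model response and only trusts the judgement when the resulting score clears a threshold. The cue lists, weights, and threshold are hypothetical and are not the thesis's implementation of Rubin's framework.

    # Hypothetical cue lists; a real implementation would use a validated lexicon and weights.
    HEDGES = {"might", "may", "possibly", "perhaps", "likely", "appears", "seems", "unclear"}
    BOOSTERS = {"definitely", "certainly", "clearly", "undoubtedly", "always"}

    def certainty_score(response: str) -> float:
        """Crude certainty estimate in [0, 1] based on hedging vs. boosting cues."""
        tokens = [t.strip(".,;:!?").lower() for t in response.split()]
        hedges = sum(t in HEDGES for t in tokens)
        boosters = sum(t in BOOSTERS for t in tokens)
        # Start neutral, push down for each hedge and up for each booster.
        return max(0.0, min(1.0, 0.5 - 0.15 * hedges + 0.15 * boosters))

    def should_trust(response: str, threshold: float = 0.5) -> bool:
        """Only accept the model's label judgement when its wording sounds sufficiently certain."""
        return certainty_score(response) >= threshold

    print(should_trust("The label is clearly incorrect; the span marks a Transport event."))  # True
    print(should_trust("This might be mislabeled, but the context is unclear."))              # False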
228

群眾外包平台的服務參與良性循環之研究 / A Study of Virtuous Cycle of Service Participation on Crowdsourcing Platforms

李欣穎, Lee, Agnes H.Y. Unknown Date (has links)
With the rapid development of information and communication technology, we have entered an era of cloud, social, and mobile computing. Taking advantage of the convenience of the Internet, people have created all kinds of platforms that effectively connect everyone, from individuals, enterprises, and communities to governments and nations. In recent years, both academia and industry have made extensive use of the concept of crowdsourcing, launching a platform revolution on the Internet and using more flexible ways to solve all kinds of problems and get virtually anything done. However, for a platform that runs on the power of the crowd, what matters beyond the existing external technological environment (technology) and appealing products and services (core value) is a set of responsive supporting measures (strategic deployment and operational mechanisms). Only then can a crowdsourcing platform keep growing with its flow of users, generate rich information flows, and in turn drive cash flows, allowing the platform to keep expanding. The main purpose of this study is to explore and understand how successful crowdsourcing platforms are operated: how they induce and stimulate the motivations of product and service providers (the supply side) and requesters (the demand side) so that the platform can maintain a virtuous operating cycle and keep growing. This study adopts a qualitative case study approach and, drawing on a limited body of literature and a large amount of case material from the Internet, newspapers, and other media, proceeds in two phases. In the first phase, based on the collected data, (1) we establish four types of crowdsourcing platforms: Information, Labor, Online-plus-Offline, and Creation; and (2) we identify users' motivations for service participation and the key factors for managing the balance of supply and demand on a platform, together with a conceptual framework. In the second phase, we focus on four labor-type crowdsourcing platforms in Taiwan and conduct expert interviews to verify and revise the motivations for service participation and the key factors behind a platform's virtuous cycle. This deepens our understanding that (1) providers on crowdsourcing platforms are mostly driven by intrinsic motivations, whereas requesters are mostly driven by extrinsic motivations, in their service participation; and (2) when operating a crowdsourcing platform, the platform is shaped by multiple key factors across three dimensions, namely the technological environment, strategic deployment, and operational mechanisms, and the factors in these three dimensions deserve attention and reinforcement if the platform is to sustain a positive operating cycle. In short, this study helps crowdsourcing platform operators in both academia and industry better understand crowdsourcing platforms and the practices behind their successful operation. / Crowdsourcing is a phenomenon that is receiving attention both inside and outside of academia. With the rapid development of ICT and the prevalence of the Internet, the crowdsourcing platform business model has had a dynamic impact on the market. Crowdsourcing offers a good transactional environment in which to fulfill people's needs and wants, capturing value from products and/or services that are provided and/or requested in more flexible ways to solve problems and accomplish virtually anything in recent years. However, it is important to understand why participants (on both the supply and demand sides) join platforms to provide and request products and/or services. In addition, the operation of such platforms using the power of crowds includes three dimensions – technology assistance, strategy deployment, and operational mechanisms – to constantly attract and balance the flow of crowds, to generate information flows, and to stimulate the cash flows that allow the platform to continue to grow. The objectives of this research are as follows: (1) to explore what drives people to deliver and capture value by providing and requesting products or services on crowdsourcing platforms and (2) to investigate how to manage a successful crowdsourcing platform by motivating its virtuous cycles. This is a qualitative multiple case study. There are two phases to this research. First, based on information gathered across industries and academia, we categorized crowdsourcing platforms into four major types: Information, Labor, Online plus Offline (OplusO), and Creation. We then revealed platform participants' motivations for service participation and developed a conceptual framework to manage a virtuous cycle of service participation on crowdsourcing platforms. Second, we focused on Labor crowdsourcing platforms in Taiwan by conducting expert interviews to verify and revise our results from the first phase. This research provides a broader view of crowdsourcing platforms and their types in academia. Providers are typically motivated more by intrinsic motivations (autonomy, safety, and trust), whereas requestors are motivated by extrinsic motivations (such as finding practical (alternative) solutions, building reputation, and creating monetary wealth) in the crowdsourcing platform context. Moreover, this research provides practitioners with realistic management references in the technology, strategy, and operational dimensions to ready the platform and to meet the demand in the market. Technology comes first in adopting crowdsourcing platforms and should be scalable. Scalability of the crowdsourcing platform involves constructing an ecosystem for a good transactional environment that facilitates growth, such that strategy deployment and operational mechanisms play important supporting roles.
229

Au code, citoyens : mise en technologies des problèmes publics / Armed with code : from public problems to technologies of participation

Ermoshina, Kseniia 28 November 2016 (has links)
The thesis studies so-called civic applications for mobile and web, developed in response to a variety of public problems and based on the principle of crowdsourcing. It examines the design of these devices, their uses, and the ways in which these tools transform communication among citizens and between citizens and public authorities. It explores new formats of innovation, such as civic hackathons, and questions the use of programming code as a new instrument of collective action. The thesis mobilizes a methodology that draws on the repertoires of STS, the sociology of public problems, political science, and information and communication sciences. Based on a study of civic applications in France and Russia, it asks several questions: how are public problems translated into programming code; what do these applications do and make people do; and how do they transform civic participation? The research shows that application interfaces shape and standardize participation on the basis of reference documents: laws and normative and technical regulations. However, standardization has its limits: by focusing on moments of failure and trial, such as tests, updates, and the debugging of applications, the inquiry makes visible the workarounds and improvisations put in place by users, who go beyond the framing imposed by the interfaces and take part both in rewriting the applications and in redefining public problems. The comparison between applications developed by public administrations and projects led by civil society makes it possible to distinguish two ways of communicating: short chains and long chains. Rather than opposing them, the thesis positions itself 'in-between' and analyzes the articulations and arrangements of these socio-technical networks. / The PhD dissertation studies new digital participative technologies called "civic apps", applications for mobile and web developed in response to a large scope of public problems and based on the principle of crowdsourcing. The research focuses on the conception of these tools, their usages, and the way these tools transform communication among citizens and between citizens and public administrations. It also explores new formats of civic tech innovation, such as civic hackathons, and questions the use of programming code as a new tool of collective action. The thesis calls upon the methodologies of the sociology of science and technology, the sociology of public problems, political science, and information and communication sciences. Based on a case study of several civic apps in France and Russia, the inquiry addresses the following questions: how does the translation of public problems into programming code occur, and how do these applications transform civic participation? The research shows that the interfaces standardize and format the practices of participation, using documents such as laws, technical norms, and standards. However, this standardization has its limits. Focusing on the moments of failure and trial, such as tests, updates, or debugging of applications, the inquiry highlights the practices of bricolage and détournement deployed by users in order to overcome the framing by design and participate in rewriting the applications. The thesis compares civic applications developed by civil society with the applications developed by public administrations and distinguishes two models of communication called the "long chains" and the "short chains". However, instead of opposing administrative and civic initiatives, the thesis proposes to think from "in-between", analyzing the articulations and arrangements of these socio-technical networks.
230

CORE-MM: um modelo de crowdsourcing para cidades inteligentes baseado em gamificação / CORE-MM: A Crowdsourcing Model for Smart Cities Based on Gamification

Orrego, Rodrigo Barbosa Sousa 31 July 2017 (has links)
The emergence of cities that apply state-of-the-art technology concepts in various areas has been made possible by advances in the development of information and communication technology systems. Advances in wireless communication technologies, and in information and communication technology in general, offer opportunities to create a gamification-based crowdsourcing model for smart cities, used to register and update a city's resources, aiming to broaden the independence of people who depend on those resources and to improve citizens' quality of life. This dissertation addresses the problem of resource management for smart cities using crowdsourcing combined with gamification. A model called CORE-MM is proposed, which allows crowdsourcing techniques to be used so that city resources are managed by the interested citizens themselves, without necessarily depending on an organization or the public administration, and gamification techniques to encourage this participatory behavior in the resource management process. CORE-MM proposes the use of crowdsourcing integrated with gamification to manage the resources of a smart city, with two interdependent objectives: to motivate users to use the system, and to encourage their participation in sharing and managing information. The name CORE-MM stands for COllaborative REsource Management Model. / The emergence of cities that use state-of-the-art technology concepts in various areas has been made possible by advances in the development of information and communication technology systems. Advances in wireless communication technologies and information and communication technologies in general offer opportunities for creating a crowdsourcing model, based on gamification for smart cities, to manage city resources, aiming to broaden the independence of the people who need those resources and to improve citizens' quality of life. This study addresses the problem of resource management for smart cities using crowdsourcing combined with gamification. A model called CORE-MM is proposed, which allows the use of crowdsourcing techniques so that city resources are managed by the interested citizens themselves, without necessarily relying on an organization or public administration, and gamification techniques to encourage this participatory behavior in the resource management process. CORE-MM proposes the use of crowdsourcing integrated with gamification to manage the resources of a smart city, with two interdependent objectives: to motivate the use of the system by users, and to encourage their participation in the sharing and management of information. The name CORE-MM stands for COllaborative REsource Management Model.
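To make the crowdsourcing-plus-gamification idea concrete, the sketch below keeps a crowdsourced registry of city resources and awards points for registering, updating, or confirming entries. It is only a toy illustration of the kind of mechanism the abstract describes; the actions, point values, and data model are hypothetical and are not CORE-MM's specification.

    from collections import defaultdict

    # Hypothetical point values for participation actions; CORE-MM's actual scheme is not specified here.
    POINTS = {"register_resource": 10, "update_resource": 5, "confirm_resource": 2}

    class ResourceBoard:
        """Crowdsourced registry of city resources with a simple gamified leaderboard."""

        def __init__(self):
            self.resources = {}             # resource id -> latest description
            self.scores = defaultdict(int)  # citizen -> accumulated points

        def report(self, citizen: str, action: str, resource_id: str, description: str = "") -> int:
            """Record a citizen's contribution, update the registry, and award points."""
            if action == "register_resource":
                self.resources[resource_id] = description
            elif action in ("update_resource", "confirm_resource") and resource_id in self.resources:
                if description:
                    self.resources[resource_id] = description
            else:
                raise ValueError(f"unknown action or unregistered resource: {action}, {resource_id}")
            self.scores[citizen] += POINTS[action]
            return self.scores[citizen]

    board = ResourceBoard()
    board.report("alice", "register_resource", "ramp-042", "wheelchair ramp, Main St. station")
    board.report("bob", "confirm_resource", "ramp-042")
    print(sorted(board.scores.items(), key=lambda kv: -kv[1]))  # leaderboard: [('alice', 10), ('bob', 2)]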
