121

[en] SMART HOMES PROJECTS AND DESIGN THINKING: GENERATION AND SELECTION OF CONCEPTIONS BASED ON INNOVATIVE TECHNOLOGICAL SOLUTIONS / [pt] PROJETOS DE CASAS INTELIGENTES E DESIGN THINKING: GERAÇÃO E SELEÇÃO DE CONCEPÇÕES BASEADAS EM SOLUÇÕES TECNOLÓGICAS INOVADORAS

FLAVIO DE OLIVEIRA COELHO MARTINS 21 February 2018 (has links)
[en] In recent decades, several socioeconomic factors have stimulated research on smart homes and their relationship with their residents. Among these factors are climate change and the growing concern with environmental issues; the increasing longevity of the world's population; the more efficient use of natural resources and energy; and new habits and ways of managing daily routine and leisure. In this context, the objective of this dissertation is to propose and demonstrate a model for generating and selecting smart home conceptions based on innovative technological solutions, adopting the Design Thinking approach and integrating several innovation management tools, including crowdsourcing and a combination of multicriteria decision support methods. The research can be classified as descriptive, methodological and participative. Its methodology comprised a bibliographic and documentary review of the central themes; the modeling of a Design Thinking-based process for generating and selecting smart home conceptions; and an empirical study to demonstrate the applicability of the model in the context of a smart home project in Brazil (the NO.V.A. Project). The main contributions are a conceptual model developed from a more empathic perspective, which places people at the center of the development of smart home projects, and the best smart home conception based on innovative technological solutions for the NO.V.A. Project, obtained as the main output of a Design Thinking process supported by a collaborative digital platform involving about 35 thousand people from several countries and by the hybrid multicriteria decision-making method AHP-TOPSIS.
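The hybrid AHP-TOPSIS ranking mentioned above can be illustrated with a minimal sketch: AHP-style pairwise comparisons yield criterion weights, which TOPSIS then uses to rank alternative conceptions by closeness to an ideal solution. The criteria, pairwise judgments and decision matrix below are hypothetical, not those of the NO.V.A. Project.

    import numpy as np

    # Hypothetical pairwise comparison matrix for 3 criteria (AHP step).
    # pairwise[i, j] = how much more important criterion i is than criterion j.
    pairwise = np.array([[1.0, 3.0, 5.0],
                         [1/3, 1.0, 2.0],
                         [1/5, 1/2, 1.0]])
    # Approximate AHP weights via the normalized principal eigenvector.
    eigvals, eigvecs = np.linalg.eig(pairwise)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = w / w.sum()

    # Hypothetical decision matrix: 4 alternative conceptions x 3 criteria
    # (all criteria treated as benefit criteria here).
    X = np.array([[7.0, 5.0, 8.0],
                  [6.0, 9.0, 4.0],
                  [8.0, 6.0, 6.0],
                  [5.0, 7.0, 9.0]])

    # TOPSIS: normalize, weight, then measure distance to ideal / anti-ideal points.
    R = X / np.sqrt((X ** 2).sum(axis=0))          # vector normalization
    V = R * weights                                 # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)             # higher = better conception

    print("AHP weights:", np.round(weights, 3))
    print("Ranking (best first):", np.argsort(-closeness))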
122

Compréhension de contenus visuels par analyse conjointe du contenu et des usages / Combining content analysis with usage analysis to better understand visual contents

Carlier, Axel 30 September 2014 (has links)
This thesis focuses on the problem of understanding visual contents, which can be images, videos or 3D contents. By understanding we mean the ability to infer semantic information about the visual content. The goal of our work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) analysis of how humans interact with the content (in other words, usage analysis). We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the dominant approach aimed at fully automatic image understanding; today it gives way to various forms of human intervention, whether through the constitution of annotated training datasets, through interactive problem solving (e.g. detection or segmentation), or through the implicit collection of information gathered from content usage. The rich and complex links between human supervision of automatic algorithms and the shaping of human contributions by automatic algorithms raise modern research questions: how to motivate human contributors? How to design interactive scenarios whose interactions contribute to understanding the manipulated content? How to check or ensure the quality of human contributions? How to aggregate usage data? How to fuse inputs obtained from usage analysis with the traditional outputs of content analysis? Our literature review addresses these questions and positions the contributions of this thesis, which are organized in two parts. In the first part we revisit the detection of important (or salient) regions through implicit feedback from users who either view or capture visual content. In 2D, we design several interactive video interfaces (in particular zoomable video) in order to coordinate content analysis and usage analysis. We generalize these results to 3D by introducing a new detector of salient regions built upon simultaneous video recordings of the same public artistic performance (dance or singing shows, etc.) by multiple users. The second contribution of our work aims at a semantic understanding of still images. With this goal in mind, we use data gathered through a game, Ask'nSeek, that we created. Elementary interactions (such as clicks) together with the textual data entered by players are, as before, combined with automatic image analysis. In particular, we show the usefulness of interactions that help reveal spatial relations between the different objects detectable in a scene. After studying the detection of objects of interest in a scene, we also address the more ambitious problem of segmentation.
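The underlying idea, turning implicit viewing behavior into an estimate of salient regions, can be sketched as follows; the zoom-log format, grid resolution and sample events are illustrative assumptions, and the detectors developed in the thesis are considerably more elaborate.

    import numpy as np

    def saliency_from_zoom_logs(zoom_events, frame_w, frame_h, grid=32):
        """Accumulate the regions users zoomed into as a coarse saliency map.

        zoom_events: list of (x0, y0, x1, y1) view rectangles in pixel coordinates,
                     one per zoom interaction (hypothetical log format).
        Returns a grid x grid map normalized to [0, 1].
        """
        heat = np.zeros((grid, grid))
        for x0, y0, x1, y1 in zoom_events:
            # Map the zoomed rectangle onto grid cells and vote for them.
            c0, c1 = int(grid * x0 / frame_w), int(np.ceil(grid * x1 / frame_w))
            r0, r1 = int(grid * y0 / frame_h), int(np.ceil(grid * y1 / frame_h))
            heat[r0:r1, c0:c1] += 1.0
        return heat / heat.max() if heat.max() > 0 else heat

    # Toy usage: three users zooming on roughly the same area of a 1280x720 frame.
    events = [(400, 200, 800, 500), (420, 220, 780, 480), (390, 210, 810, 520)]
    saliency = saliency_from_zoom_logs(events, 1280, 720)
    print("most salient cell:", np.unravel_index(saliency.argmax(), saliency.shape))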
123

Mercado preditivo: um método de previsão baseado no conhecimento coletivo / Prediction market: a forecasting method based on the collective knowledge

Ivan Roberto Ferraz 08 December 2015 (has links)
A Prediction Market (PM) is a tool that uses the market price mechanism to aggregate information scattered across a large group of people in order to generate forecasts about matters of interest. It is a low-cost method, able to generate forecasts continuously, and it does not require random samples. These markets have several applications, one of the main ones being the prognosis of election outcomes. This study analyzed empirical evidence on the effectiveness of a Prediction Market in Brazil, created to forecast the outcomes of the 2014 general elections, economic indicators and the results of Brazilian Championship soccer games. The research had two main purposes: i) to develop and evaluate the performance of a PM in the Brazilian context, comparing its predictions to alternative methods; ii) to explain what motivates people to participate in a PM, especially when there is little or no interaction among participants and when trades are made with a virtual currency (play money). The study was made feasible through the creation of a prediction exchange named Bolsa de Previsões (BPrev), an online marketplace that operated for 61 days, from September to November 2014, and was open to any Internet user in Brazil. The 147 participants enrolled in BPrev made a total of 1,612 trades: 760 on the election markets, 270 on economics and 582 on soccer. Two online surveys were also used to collect demographic data and users' perceptions: the first was applied to potential participants before the launch of BPrev (302 valid answers) and the second only to registered users after two months of experience with the tool (71 valid answers). Regarding the first purpose, the results suggest that Prediction Markets are feasible in the Brazilian context. On the election markets, the mean absolute error of the PM predictions on the eve of the elections was 3.33 percentage points, against 3.31 for the opinion polls. Considering the whole period in which BPrev was running, the performance of the two methods was also similar (mean absolute error of 4.20 percentage points for the PM and 4.09 for the polls). Contract prices were also found not to be a simple reflection of poll results, indicating that the market is able to aggregate information from different sources. There is scope for the use of PMs in Brazilian elections, mainly as a complement to more traditional forecasting methodologies, although some limitations of the tool and possible legal restrictions may hinder adoption. On the markets about economic indicators, the errors were slightly higher than those obtained with alternative methods; a PM open to the general public, as BPrev was, therefore proved more suitable for electoral than for economic forecasts. On the soccer markets, the PM predictions were better than chance, but not significantly different from another forecasting method based on the statistical analysis of historical data.
As for the second purpose, the analysis of participation indicates that intrinsic motivations are more important than extrinsic motivations in explaining use. In descending order of relevance, the main factors influencing initial adoption of the tool are: perceived enjoyment, perceived learning, perceived usefulness, interest in the theme of the predictions, perceived ease of use, perceived altruism and perceived reward. Individuals who perform better in the market are more likely to keep participating, which suggests that, over time, the average skill level of participants tends to increase, making PM forecasts better and better. The results also indicate that the practice of including entertainment markets to encourage participation in other subjects is largely ineffective. Overall, the PM proved to be a promising forecasting technique in a variety of research fields.
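The head-to-head comparison reported above reduces to a mean absolute error between forecast and realized vote shares; the sketch below shows that computation with made-up numbers rather than the study's actual data.

    # Hypothetical vote-share forecasts (in percentage points) on the eve of an
    # election, compared against the official results, for three candidates.
    official = {"A": 41.6, "B": 33.6, "C": 21.3}
    market_prices = {"A": 44.0, "B": 31.0, "C": 23.0}   # PM contract prices
    poll_numbers = {"A": 40.0, "B": 36.0, "C": 20.0}    # last published poll

    def mean_absolute_error(forecast, truth):
        """Average absolute deviation, in percentage points, across candidates."""
        return sum(abs(forecast[c] - truth[c]) for c in truth) / len(truth)

    print("PM MAE  :", round(mean_absolute_error(market_prices, official), 2))
    print("Poll MAE:", round(mean_absolute_error(poll_numbers, official), 2))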
124

L'utilisation du Crowdsourcing dans les entreprises de grande distribution / Crowdsourcing and Retail Companies

Ataollah, Homayoun 01 July 2016 (has links)
This research explains how the concept of crowdsourcing can be used to generate an innovative business model. The field study combines primary and secondary data: the primary data come from a French-Iranian hypermarket retailer operating in Iran, while secondary data used for comparison come from five other hypermarket companies worldwide (one American, one Japanese and three French). On this basis, an innovative business model canvas was developed for the French-Iranian retailer and the contributions of crowdsourcing were identified for each type of retailer. The research question addressed is: how can retail companies use crowdsourcing to become more innovative and thereby enhance their competitive advantage?
125

Mechanism Design For Strategic Crowdsourcing

Nath, Swaprava 17 December 2013 (has links) (PDF)
This thesis looks into the economics of crowdsourcing using game-theoretic modeling. The art of aggregating information and expertise from a diverse population has been in practice for a long time. The Internet and the revolution in communication and computational technologies have made this task easier and given birth to a new era of online resource aggregation, which is now popularly referred to as crowdsourcing. Two important features of this aggregation technique are: (a) crowdsourcing is always human driven, hence the participants are rational and intelligent, and they have a payoff function that they aim to maximize, and (b) the participants are connected over a social network, which helps to reach out to a large set of individuals. To understand the behavior and the outcome of such a strategic crowd, we need to understand the economics of a crowdsourcing network. In this thesis, we consider the following three major facets of the strategic crowdsourcing problem. (i) Elicitation of the true qualities of the crowd workers: as the crowd is often unstructured and unknown to the designer, it is important to ensure that the crowdsourced job is indeed performed at the highest quality, and this requires elicitation of the true qualities, which are typically the participants' private information. (ii) Resource-critical task execution ensuring the authenticity of both the information and the identity of the participants: owing to diverse geographical, cultural and socio-economic factors, crowdsourcing entails certain manipulations that are unusual in the classical theory, and the design has to be robust enough to handle fake identities or incorrect information provided by the crowd while running crowdsourcing contests. (iii) Improving the productive output of the crowdsourcing network: as the designer's goal is to maximize a certain measurable output of the crowdsourcing system, an interesting question is how one can design the incentive scheme and/or the network so that the system performs at an optimal level, taking into account the strategic nature of the individuals. In the thesis, we design novel mechanisms to solve the problems above using game-theoretic modeling. Our investigation helps in understanding certain limits of achievability, and provides design protocols in order to make crowdsourcing more reliable, effective, and productive.
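To make facet (i) concrete, the sketch below illustrates the general principle behind incentive-compatible quality elicitation with a strictly proper (quadratic/Brier) scoring rule: a worker's expected score is maximized by reporting their true success probability. This is a textbook illustration under assumed parameters, not one of the specific mechanisms designed in the thesis.

    import random

    def quadratic_score(report, outcome):
        """Brier-style strictly proper scoring rule for a binary outcome.
        report: claimed probability of success; outcome: 1 if the task was done correctly."""
        return 1.0 - (outcome - report) ** 2

    def expected_payment(true_quality, report, trials=200_000, seed=0):
        """Monte Carlo estimate of the expected score when the worker's actual
        success probability is true_quality but they report `report`."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            outcome = 1 if rng.random() < true_quality else 0
            total += quadratic_score(report, outcome)
        return total / trials

    q = 0.7  # hypothetical true (private) quality of a crowd worker
    for r in (0.5, 0.7, 0.9):
        print(f"report={r:.1f} -> expected score {expected_payment(q, r):.4f}")
    # Truthful reporting (r == q) yields the highest expected score.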
126

Environnements numériques et PME : figures du chaos et nouveaux usages / Digital environments and SMEs : figures of chaos and new uses

Choquet, Isabelle 04 June 2015 (has links)
Has the so-called Web 2.0 revolution really taken hold in SMEs? Has the networking it enables, by putting the user at the centre of the organization's processes, produced a change of communication paradigm in Kuhn's sense? Against the background of an increasingly fluid society marked by uncertainty and complexity, this thesis focuses on the transversal adjustments connected with Web 2.0 and on how SMEs take them into account. These adjustments can be a source of disorder within the organization, which the SME may be tempted to regulate; nevertheless, some SMEs choose to use them as a springboard. They build on existing skills to improve the efficiency of the company (exploitation activity) while also exploring completely new fields (exploration activity), of which collective intelligence and crowdsourcing are examples. Web 2.0 tools mark the passage from a "tool" technology considered stable to a "social" technology characterized by instability. Web 2.0 has a catalytic effect that both triggers and facilitates transversal adjustments; it can be seen as a lever for steering the organization toward one more centred on individuals and groups, which raises a number of challenges for SMEs. The sciences of chaos and complexity offer an interesting lens for understanding the balance to be struck between order and disorder. The thesis is interdisciplinary in intent: it shows the value of combining management science with information and communication sciences when these disciplines must take into account the complexity of transversal relationships as well as the co-construction between prosumers and SMEs, and between management and employees. The research is based, among other things, on a field study of 93 SMEs audited by the author between 2010 and 2014.
127

Frontiers in Crowdsourced Data Integration

Braunschweig, Katrin, Eberius, Julian, Thiele, Maik, Lehner, Wolfgang 26 November 2020 (has links)
There is an ever-increasing amount and variety of open web data available that is insufficiently examined or not considered at all in decision-making processes. This is due to the lack of end-user-friendly tools that help to reuse this public data and to create knowledge out of it. We therefore propose a schema-optional data repository that provides the flexibility necessary to store and gradually integrate heterogeneous web data. Based on this repository, we propose a semi-automatic schema enrichment approach that efficiently augments the data in a "pay-as-you-go" fashion. Because ambiguities inherently appear in this process, we further propose a crowd-based verification component that resolves such conflicts in a scalable manner.
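A minimal sketch of the pay-as-you-go idea described above: heterogeneous records are stored as-is, attribute names are matched against a target schema only when needed, and low-confidence matches are queued for crowd verification. The matching heuristic, the threshold and the sample records are illustrative assumptions, not the repository's actual implementation.

    from difflib import SequenceMatcher

    # Heterogeneous web records stored schema-optionally (as-is).
    records = [
        {"company": "ACME", "hq_city": "Dresden", "rev_2020": "12M"},
        {"name": "Globex", "city": "Berlin", "revenue": "40M"},
    ]
    target_schema = ["company_name", "city", "revenue"]

    def match_attribute(raw_key, schema, threshold=0.6):
        """Map a raw attribute to the target schema by string similarity.
        Returns (best_match, needs_crowd_check)."""
        scored = [(SequenceMatcher(None, raw_key, s).ratio(), s) for s in schema]
        score, best = max(scored)
        return best, score < threshold   # low confidence -> ask the crowd

    crowd_queue = []
    for rec in records:
        integrated = {}
        for key, value in rec.items():
            target, uncertain = match_attribute(key, target_schema)
            integrated[target] = value
            if uncertain:
                crowd_queue.append((key, target))   # mapping to be verified by workers
        print(integrated)

    print("attribute mappings sent to the crowd:", crowd_queue)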
128

Analyse der Geschäftsmodellelemente von Crowdsourcing-Marktplätzen / Analysis of the business model elements of crowdsourcing marketplaces

Ickler, Henrik, Baumöl, Ulrike January 2011 (has links)
No description available.
129

Digitala och tekniska lösningar inom sista mil-leveranser : En studie om hur leveransföretag kan bemöta konsumenters krav på sista mil-leveranser / Digital and technical solutions within last mile deliveries : A study of how delivery companies can meet consumers' demands for last-mile deliveries

Sörbom, Josefine, Bjurlemark, Adelaide January 2020 (has links)
E-commerce is in a growing phase in which consumers are becoming more aware, picky and comfortable when it comes to the delivery of their packages. This case study investigates the requirements that consumers have regarding last-mile deliveries, as well as the various digital logistics solutions that may meet these requirements. For the study, relevant people at three different delivery companies were selected to gain a deeper insight into how companies view consumers' new requirements, how they work to meet them, and how they see the future of last-mile deliveries. Consumers now expect a faster and more flexible service in which they can shape the last-mile delivery on their own terms, e.g. time and location. The demands emerging with the expansion of e-commerce include faster, cheaper and more flexible deliveries. Transparency about environmental issues and work processes has also become an important factor for many consumers today and may be decisive for their willingness to buy. Collaboration with other actors enables e-commerce companies to streamline their delivery alternatives through outsourcing, either in cooperation with other logistics players or in the form of crowdsourcing. E-logistics denotes a system covering the various parts of a logistics process in which the exchange takes place with the help of technology and the Internet; in combination with a functioning digital infrastructure, e-logistics can make delivery processes more efficient. New technologies such as parcel locks, drones and delivery robots are analyzed as potential means of streamlining last-mile deliveries.
130

On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

Elias, Melanie 15 June 2021 (has links)
The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfall events that can cause nearly unpredictable flash floods. Consequently, monitoring rivers at high spatial and temporal resolution becomes increasingly important. Water gauging stations measure water levels continuously and precisely, but they are expensive to purchase and maintain and are preferably installed at water bodies relevant for water management; small catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and allow water levels to be observed at virtually any time, even at the smallest rivers. A possible solution is a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section in order to determine the water level. Today's smartphones integrate high-resolution cameras, a variety of sensors, powerful processors and mass storage; however, they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. To investigate the potential for mobile measurement applications, the doctoral project therefore examined the smartphone as a photogrammetric measurement instrument. These studies address the geometric stability of smartphone cameras with respect to device-internal temperature changes and the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the camera should be calibrated at the time of measurement. The sensor investigations reveal considerable inaccuracies in the measured rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by the global navigation satellite system (GNSS) receivers built into smartphones: according to the literature, positional accuracies of about 5 m are possible under the best conditions, otherwise errors of several tens of metres are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged.
Taking these results into account, the water gauging app Open Water Levels (OWL) was developed; its methodological development and implementation constitute the core of the thesis. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, from which a spatio-temporal texture is computed that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the scene is created from existing 3D data of the river section under investigation. The necessary approximations of the image orientation parameters are measured by smartphone sensors and GNSS. Matching the camera image to the synthetic image allows the interior and exterior orientation parameters to be determined by space resection and, finally, the image-measured 2D water line to be transferred into 3D object space to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL shows an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image content and that the water line can be reliably detected. The dissertation discusses the related geometric and radiometric problems in depth and, in its synthesis, presents possible solutions based on advancing developments in smartphone technology and image processing as well as on the increasing availability of 3D reference data. The app Open Water Levels, currently available as a beta version and tested on selected devices, provides a basis that, with continued development, aims at a final release for crowdsourcing water levels, supporting the establishment of new monitoring networks and the expansion of existing ones.
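The space-resection step described above, recovering the camera pose from correspondences between the smartphone image and the synthetic image and then projecting the detected water line into object space, can be sketched with OpenCV as follows. The point coordinates, camera intrinsics, simulated pose and water-plane height are made-up values, not OWL's actual data or code.

    import numpy as np
    import cv2

    # Hypothetical 3D points taken from the synthetic image / river model (object space, metres).
    object_points = np.array([[10.0, 2.0, 105.3],
                              [14.5, 3.1, 102.8],
                              [12.2, 8.7, 108.1],
                              [ 9.4, 7.5, 104.0],
                              [13.0, 5.0, 106.7],
                              [11.0, 4.0, 103.9]], dtype=np.float64)

    # Assumed (pre-calibrated) smartphone camera intrinsics, distortion already corrected.
    K = np.array([[1500.0, 0.0, 960.0],
                  [0.0, 1500.0, 540.0],
                  [0.0,    0.0,   1.0]])
    dist = np.zeros(5)

    # For a self-contained sketch, simulate an arbitrary "true" camera pose and
    # project the 3D points to obtain matching 2D image measurements.
    rvec_true = np.array([[0.1], [-0.2], [0.05]])
    tvec_true = np.array([[-11.0], [-4.0], [20.0]])
    image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
    image_points = image_points.reshape(-1, 2)

    # Space resection: recover the exterior orientation from the 2D-3D correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    camera_center = (-R.T @ tvec).ravel()
    print("recovered camera centre:", np.round(camera_center, 2))

    # Project an image-measured water-line pixel back into object space by
    # intersecting its viewing ray with a hypothetical local water plane z = 104.6 m.
    u, v = 640.0, 700.0
    ray_world = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    t = (104.6 - camera_center[2]) / ray_world[2]
    print("water-line point:", np.round(camera_center + t * ray_world, 2))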
