551

Hydrologic Data Sharing Using Open Source Software and Low-Cost Electronics

Sadler, Jeffrey Michael 01 March 2015 (has links) (PDF)
While it is generally accepted that environmental data are critical to understanding environmental phenomena, improvements are still needed in their consistent collection, curation, and sharing. This thesis describes two research efforts to improve two different aspects of hydrologic data collection and management.

First described is a recipe for the design, development, and deployment of a low-cost environmental data logging and transmission system for environmental sensors and its connection to an open source data-sharing network. The hardware is built from several low-cost, open-source, mass-produced components. The system automatically ingests data into HydroServer, a standards-based server in the open source Hydrologic Information System (HIS) created by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). A recipe for building the system is provided along with results from several test deployments.

Second, a connection between HydroServer and HydroShare is described. While the CUAHSI HIS is intended to empower the hydrologic sciences community with better data storage and distribution, it lacks support for the kind of “Web 2.0” collaboration and social-networking capabilities that are accelerating scientific discovery in other fields. The design, development, and testing of a software system that integrates CUAHSI HIS with the HydroShare social hydrology architecture is presented. The resulting system supports efficient archival, discovery, and retrieval of data; extensive creator and science metadata; assignment of a persistent digital identifier such as a Digital Object Identifier (DOI); and scientific discussion and collaboration around the data, along with other basic social-networking features. In this system, HydroShare provides the functionality for social interaction and collaboration, while the existing HIS provides the distributed data management and web services framework. The system is expected to enable scientists, for the first time, to access and share both national- and research-lab-scale hydrologic time series in a standards-based web services architecture combined with a social network developed specifically for the hydrologic sciences.

Together, these two research projects address, and provide solutions for, significant challenges in the automatic collection, curation, and feature-rich sharing of hydrologic data.
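As an illustration of the kind of automated ingestion the abstract describes, the following hedged sketch shows a logger posting a single observation to a HydroServer-style web endpoint. The URL, payload fields, and API key are invented placeholders, not the actual CUAHSI HIS interface described in the thesis.

```python
# Hypothetical sketch: a low-cost logger pushing one observation to a
# HydroServer-style ingestion endpoint. The URL, payload fields, and API
# key are illustrative assumptions, not the thesis's actual interface.
import datetime
import requests

INGEST_URL = "https://example-hydroserver.org/api/values"  # hypothetical endpoint
API_KEY = "REPLACE_ME"                                      # hypothetical credential

def post_observation(site_code: str, variable_code: str, value: float) -> bool:
    """Send a single time-stamped sensor value; return True on success."""
    payload = {
        "site": site_code,               # e.g. logger deployment site
        "variable": variable_code,       # e.g. water temperature
        "value": value,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    resp = requests.post(INGEST_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         timeout=10)
    return resp.ok

if __name__ == "__main__":
    # A reading from a hypothetical temperature sensor, in degrees Celsius.
    post_observation(site_code="LOGGER_01", variable_code="TEMP_C", value=14.3)
```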
552

Mobile Framework for Real-Time Database Management

Jansson, Simon, Sandström, Theodor January 2017 (has links)
The primary purpose of this thesis is to explore what issues may arise during development of a framework for handling and displaying streamed real-time data. In addition, it investigates how the display of different types of data, along with a change of execution platform, impacts execution time. Through two case studies, each split into a developmental and an experimental phase, the thesis works through the development of such a real-time data handling framework. The framework was developed in both stationary and mobile forms, and the developmental issues encountered along each of these paths are highlighted. The results gathered from performance tests run on each framework version were then compared, to determine whether the handling and display of different data types, along with a change in execution platform, had an impact on the framework's execution time.

The developmental observations revealed that the most commonly encountered issues related to program latency, typically caused by sub-optimal program architecture together with connectivity issues encountered during data streaming. The second most common issue concerned the choice of an appropriate display method for communicating changes in the displayed data and correlations between several tracked data points. The experimental comparisons revealed that while the impact on execution time caused by the use of calculated data, as opposed to raw data values, was marginal at most, a change of execution platform affected that time drastically: porting the framework to the mobile platform increased the execution times of the measured processes by between 2405% and 15860%.

The authors recommend that the framework be developed towards the ability to connect to any given relational database, and to handle and display the data therein, so that it has application areas beyond use as a test instrument. They also recommend that additional tests be run on the framework using a wider variety of stationary and mobile devices, to determine whether the conclusions drawn in the thesis hold up in the face of greater hardware variety.
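The reported slowdowns are relative increases in execution time. The following minimal sketch, not taken from the thesis, shows one way such a comparison between raw and calculated data handling could be timed; the data source and the derived metric (a moving average) are assumptions.

```python
# Minimal sketch of a timing comparison between handling raw values and
# derived ("calculated") values. The data and the derived metric are
# illustrative assumptions, not the thesis's actual framework.
import random
import statistics
import time

def process_raw(samples):
    # Simulate handling raw values for display (no derivation).
    return [s for s in samples]

def process_calculated(samples, window=10):
    # Simulate a derived metric: a moving average over a small window.
    return [statistics.fmean(samples[max(0, i - window):i + 1])
            for i in range(len(samples))]

def time_it(func, *args):
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start

samples = [random.random() for _ in range(100_000)]
t_raw = time_it(process_raw, samples)
t_calc = time_it(process_calculated, samples)
print(f"raw: {t_raw:.4f}s  calculated: {t_calc:.4f}s  "
      f"relative increase: {(t_calc / t_raw - 1) * 100:.0f}%")
```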
553

Detecting, Tracking, And Recognizing Activities In Aerial Video

Reilly, Vladimir 01 January 2012 (has links)
In this dissertation, we address the problem of detecting humans and vehicles, tracking them in crowded scenes, and finally determining their activities in aerial video. Even though this is a well explored problem in the field of computer vision, many challenges still remain when one is presented with realistic data. These challenges include large camera motion, strong scene parallax, fast object motion, large object density, strong shadows, and insufficiently large action datasets. Therefore, we propose a number of novel methods based on exploiting scene constraints from the imagery itself to aid in the detection and tracking of objects. We show, via experiments on several datasets, that superior performance is achieved with the use of the proposed constraints.

First, we tackle the problem of detecting moving, as well as stationary, objects in scenes that contain parallax and shadows. We do this on both regular aerial video and the new and challenging domain of wide area surveillance. This problem poses several challenges: large camera motion, strong parallax, a large number of moving objects, a small number of pixels on target, single-channel data, and low video frame rate. We propose a method for detecting moving and stationary objects that overcomes these challenges, and evaluate it on the CLIF and VIVID datasets. To find moving objects, we use median background modelling, which requires few frames to obtain a workable model and is very robust when a large number of objects are moving in the scene while the model is being constructed. We then remove false detections caused by parallax and registration errors using gradient information from the background image.

Relying merely on motion to detect objects in aerial video may not be sufficient to provide complete information about the observed scene. First, objects that are permanently stationary may be of interest as well, for example to determine how long a particular vehicle has been parked at a certain location. Second, moving vehicles that are being tracked through the scene may sometimes stop and remain stationary at traffic lights and railroad crossings. These prolonged periods of non-motion make it very difficult for the tracker to maintain the identities of the vehicles. Therefore, there is a clear need for a method that can detect stationary pedestrians and vehicles in UAV imagery. This is a challenging problem due to the small number of pixels on target, which makes it difficult to distinguish objects from background clutter and results in a much larger search space. We propose a method for constraining the search based on a number of geometric constraints obtained from the metadata. Specifically, we obtain the orientation of the ground plane normal, the orientation of the shadows cast by out-of-plane objects in the scene, and the relationship between object heights and the size of their corresponding shadows. We utilize this information in a geometry-based shadow and ground-plane-normal blob detector, which provides an initial estimate of the locations of shadow-casting out-of-plane (SCOOP) objects in the scene. These SCOOP candidate locations are then classified as either human or clutter using a combination of wavelet features and a Support Vector Machine. Additionally, we combine regular SCOOP and inverted SCOOP candidates to obtain vehicle candidates. We show impressive results on sequences from the VIVID and CLIF datasets, and provide comparative quantitative and qualitative analysis. We also show that we can extend the SCOOP detection method to automatically estimate the orientation of the shadow in the image without relying on metadata. This is useful in cases where metadata is either unavailable or erroneous.

Simply detecting objects in every frame does not provide sufficient understanding of the nature of their existence in the scene. It may be necessary to know how the objects have travelled through the scene over time and which areas they have visited. Hence, there is a need to maintain the identities of the objects across different time instances. The task of object tracking can be very challenging in videos that have low frame rate, high density, and a very large number of objects, as is the case in the WAAS data. Therefore, we propose a novel method for tracking a large number of densely moving objects in aerial video. To keep the complexity of the tracking problem manageable when dealing with a large number of objects, we divide the scene into grid cells, solve the tracking problem optimally within each cell using bipartite graph matching, and then link the tracks across the cells. Besides tractability, grid cells also allow us to define a set of local scene constraints, such as road orientation and object context. We use these constraints as part of the cost function for the tracking problem; this allows us to track fast-moving objects in low-frame-rate videos.

In addition to moving through the scene, the humans that are present may be performing individual actions that should be detected and recognized by the system. A number of different approaches exist for action recognition in both aerial and ground-level video. One requirement of most of these approaches is a sizeable dataset of examples of a particular action from which a model of the action can be constructed. Such a luxury is not always possible in aerial scenarios, since it may be difficult to fly a large number of missions to observe a particular event multiple times. Therefore, we propose a method for recognizing human actions in aerial video from as few examples as possible (a single example in the extreme case). We use the bag-of-words action representation and a 1-vs-all multi-class classification framework. We assume that most of the classes have many examples, and construct Support Vector Machine models for each class. We then use the Support Vector Machines trained for classes with many examples to improve the decision function of the Support Vector Machine trained from few examples, via late weighted fusion of decision values.
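To illustrate the median background modelling step mentioned above, here is a minimal, hedged sketch on synthetic frames; the real pipeline additionally handles registration, parallax and shadow suppression, and the threshold value is an assumption.

```python
# Minimal sketch of median background modelling for moving-object detection,
# in the spirit of the approach described above. Frame data and threshold
# are illustrative assumptions.
import numpy as np

def detect_moving_pixels(frames: np.ndarray, threshold: float = 25.0) -> np.ndarray:
    """frames: (N, H, W) stack of registered grayscale frames.
    Returns a per-frame boolean mask of pixels far from the median background."""
    background = np.median(frames, axis=0)          # robust even with many movers
    diff = np.abs(frames.astype(np.float32) - background)
    return diff > threshold                         # (N, H, W) foreground mask

# Example on synthetic data: a bright "vehicle" moving across a static scene.
rng = np.random.default_rng(0)
frames = rng.normal(100, 5, size=(15, 64, 64)).astype(np.float32)
for t in range(15):
    frames[t, 30:34, 4 * t:4 * t + 4] += 120        # moving bright blob
masks = detect_moving_pixels(frames)
print("foreground pixels in last frame:", int(masks[-1].sum()))
```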
554

Deep Understanding of Technical Documents: Automated Generation of Pseudocode from Digital Diagrams & Analysis/Synthesis of Mathematical Formulas

Gkorgkolis, Nikolaos January 2022 (has links)
No description available.
555

Building a Semantic Web of Comics: Publishing Linked Data in HTML/RDFa Using a Comic Book Ontology and Metadata Application Profiles

Petiya, Sean 01 December 2014 (has links)
No description available.
556

Crowdsourcing cultural heritage metadata through social media gaming

Paraschakis, Dimitris January 2013 (has links)
Crowdsourcing has been used in the cultural heritage domain for a variety of tasks, one of which is the generation of descriptive metadata for digital archives. Gamification offers citizens a more entertaining way to interact with digital collections and to generate useful metadata as a side effect of gameplay. The rise of social gaming on Facebook in recent years opens new horizons for cultural heritage institutions to leverage the capabilities of social networking platforms and gain immediate access to millions of potential contributors. In this work, we explore the integration of social networks with crowdsourcing games for generating archival metadata. We studied crowdsourcing, gamification and social dynamics from the perspective of cultural heritage and combined their features in a metadata game prototype on the Facebook platform. We tested the prototype and evaluated its results by analysing participation, contribution and player feedback. The two-week testing phase showed promising results in terms of user engagement and produced metadata: almost 3000 tags were added, 90% of which were valid dictionary terms. We conclude that deploying metadata games on social networking platforms is a feasible method for digital archives to harness human intelligence from large shared spaces.
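As a rough illustration of the dictionary-validity figure quoted above, the following sketch counts how many crowdsourced tags appear in a word list; the word list and the tag sample are placeholders, not the study's actual data or validation method.

```python
# Hedged sketch of a tag-validity check: the share of crowdsourced tags
# that are valid dictionary terms. Word list and tags are placeholders.
valid_words = {"castle", "portrait", "harbor", "market", "soldier"}  # stand-in dictionary

def tag_validity(tags):
    """Return (valid_count, total_count, share_valid) for a list of tags."""
    cleaned = [t.strip().lower() for t in tags if t.strip()]
    valid = [t for t in cleaned if t in valid_words]
    share = len(valid) / len(cleaned) if cleaned else 0.0
    return len(valid), len(cleaned), share

tags_from_players = ["Castle", "hrbr", "portrait", "market ", "xyz"]
print(tag_validity(tags_from_players))   # e.g. (3, 5, 0.6)
```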
557

Describing data patterns / a general deconstruction of metadata standards

Voß, Jakob 07 August 2013 (has links)
Many methods, technologies, standards, and languages exist to structure and describe data. The aim of this thesis is to find common features in these methods to determine how data is actually structured and described. Existing studies are limited to notions of data as recorded observations and facts, or they require given structures to build on, such as the concept of a record or the concept of a schema. These presumed concepts are deconstructed in this thesis from a semiotic point of view, by analysing data as signs communicated in the form of digital documents. The study was conducted using a phenomenological research method: conceptual properties of data structuring and description, such as encodings, identifiers, formats, schemas, and models, were first collected and examined critically. The analysis resulted in six prototypes that categorize data methods by their primary purpose. The study further revealed five basic paradigms that deeply shape how data is structured and described in practice. The third result is a pattern language of data structuring. The patterns document problems and solutions which occur over and over again in data, independent of particular technologies. Twenty general patterns were identified and described, each with its benefits, consequences, pitfalls, and relations to other patterns. The results can help to better understand data and its actual forms, both for consumption and creation of data. Particular domains of application include data archaeology and data literacy.
558

Metadatenbasierte Kontextualisierung architektonischer 3D-Modelle

Blümel, Ina 18 December 2013 (has links)
Digital 3D models from the domain of architecture have gradually replaced analogue paper-based drawings as well as physical scale models over the last five decades. The main challenges in integrating 3D models into digital libraries and archives are the usually sparse metadata annotation provided by the authors and the fact that much of the information is only implicitly available in the models. This has recently led to an increased interest in content-based indexing, both through networked user groups (social tagging) and through automated approaches. Computer-based approaches usually rely on methods from artificial intelligence, including machine learning, for automated categorization based on geometric and structural properties according to a given classification scheme. The partially automated recognition of model-inherent semantics increases the number of discrete and semantically distinguishable entities. As content on the World Wide Web, 3D models can be interlinked with each other as well as with other textual and non-textual objects, becoming part of aggregated documents. These aggregations and the model context, along with the inherent entities, require instruments of organization in order to provide real added value for users searching for information, especially when searching text-based for information about a 3D model and its context. In this work, a metadata model is developed for the specific structuring of information obtained from 3D architectural models. Using this structure, a model can be linked to further information. The application of established ontologies and the use of URIs not only make the information explicit but also carry semantic information about the relations themselves, so that interoperability with other available data is guaranteed in accordance with the principles of the Linked Data approach.
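The Linked Data idea sketched in the abstract can be illustrated with a short, hypothetical example using rdflib: a 3D model is identified by a URI, described with an established vocabulary (Dublin Core terms), and linked to related resources. The URIs and the vocabulary term used for the model class are invented for illustration and are not the thesis's actual metadata model.

```python
# Hypothetical sketch of describing a 3D architectural model as Linked Data.
# URIs and the "Architectural3DModel" class are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

EX = Namespace("http://example.org/vocab#")          # hypothetical vocabulary
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)
g.bind("ex", EX)

model = URIRef("http://example.org/models/office-building-42")   # hypothetical URI
architect = URIRef("http://example.org/persons/jane-doe")        # hypothetical URI

g.add((model, RDF.type, EX.Architectural3DModel))
g.add((model, DCTERMS.title, Literal("Office building, competition entry")))
g.add((model, DCTERMS.creator, architect))
g.add((architect, RDF.type, FOAF.Person))
g.add((architect, FOAF.name, Literal("Jane Doe")))
g.add((model, DCTERMS.relation,
       URIRef("http://example.org/drawings/floorplan-42")))      # contextual link

print(g.serialize(format="turtle"))
```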
559

De l'usage des métadonnées dans l'objet sonore / The use of sound objects metadata

Debaecker, Jean 09 October 2012 (has links)
Emotion recognition in music is an industrial and academic challenge. In the age of exploding multimedia content, we aim to design structured sets of terms, concepts and metadata that facilitate the organization of, and access to, knowledge. Our research question is the following: can we have a priori knowledge of an emotion with a view to eliciting it? In other words, to what extent can the emotions felt while listening to a musical work be recorded as metadata, and can a formal algorithmic structure be built that isolates the mechanism that triggers emotions? Can we know the emotion a listener will feel before the song is heard? After listening, can that emotion be elicited? Can an emotion be formalized in order to save and share it? We give an overview of existing work and of the application context, and reflect on the epistemological issues intrinsic to indexing emotion itself. Through a psychological, physiological and philosophical approach, we propose a conceptual framework of five demonstrations showing that emotion cannot be measured with a view to its elicitation. Having argued within this theoretical framework that it is formally impossible to index emotions, we then examine the indexing mechanics nevertheless proposed by industry and academia. Through the analysis of quantitative and qualitative surveys, we define an algorithm that produces listening recommendations for musical works.
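As a purely illustrative sketch, and not the author's algorithm, the following example shows one simple way survey-derived emotion profiles could drive listening recommendations via cosine similarity; the emotion axes, ratings, and song names are invented.

```python
# Hedged sketch: recommending songs whose survey-derived emotion profile is
# closest to a requested one. All data and axes are invented placeholders.
import math

# Hypothetical mean survey ratings per song on three emotion axes:
# (joy, sadness, tension), each in [0, 1].
profiles = {
    "Song A": (0.9, 0.1, 0.2),
    "Song B": (0.2, 0.8, 0.3),
    "Song C": (0.7, 0.2, 0.4),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target_emotion, k=2):
    """Rank songs by similarity of their emotion profile to the requested one."""
    ranked = sorted(profiles.items(),
                    key=lambda item: cosine(item[1], target_emotion),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(recommend((1.0, 0.0, 0.3)))   # e.g. ['Song A', 'Song C']
```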
560

AutoEduMat: ferramenta de apoio a autoria de metadados de objetos de aprendizagem para o domínio de ensino de matemática

Xavier, Ana Carolina 16 July 2010 (has links)
This dissertation presents research on tools that support the use of learning objects on digital platforms, and more specifically on tools that support the authoring of these objects, in particular of their metadata. The contextualization of the research problem, its theoretical foundations, and related work are presented first. The main characteristics of the proposed system, AutoEduMat (a metadata authoring tool for mathematics learning objects), are then described. AutoEduMat assists the object designer in creating and editing the metadata of learning objects. The main innovation of this work is the combination of agent-oriented software engineering and ontology engineering technologies to build a multiagent system that offers intelligent support for metadata creation, interacting with users in terms drawn from their own professional and educational context. The work proposes the Onto-EduMat ontology, which incorporates the mathematical and pedagogical domain knowledge necessary to assist metadata generation. Both the authoring tool and its ontological model are validated through experiments described at the end of the work.
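The idea of ontology-assisted metadata authoring can be illustrated with a small, hypothetical sketch; the vocabulary, field names, and matching rule below stand in for, and are far simpler than, the multiagent system and Onto-EduMat ontology described in the abstract.

```python
# Hypothetical sketch: suggesting LOM-style metadata fields for a mathematics
# learning object from a tiny stand-in vocabulary. Not Onto-EduMat itself.
MATH_TOPICS = {
    "fractions": {"discipline": "Arithmetic", "typical_level": "elementary"},
    "quadratic equations": {"discipline": "Algebra", "typical_level": "secondary"},
    "derivatives": {"discipline": "Calculus", "typical_level": "higher education"},
}

def suggest_metadata(title: str, description: str) -> dict:
    """Propose metadata values by matching known topics in the object's text."""
    text = f"{title} {description}".lower()
    suggestions = {"general.title": title,
                   "educational.context": None,
                   "classification.discipline": None}
    for topic, info in MATH_TOPICS.items():
        if topic in text:
            suggestions["classification.discipline"] = info["discipline"]
            suggestions["educational.context"] = info["typical_level"]
            break
    return suggestions

print(suggest_metadata("Solving quadratic equations",
                       "Interactive applet for factoring and the quadratic formula"))
```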
