About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Policy Merger System for P3P in a Cloud Aggregation Platform

Olurin, Olumuyiwa January 2013 (has links)
The need to aggregate privacy policies arises in a variety of application areas today. In traditional client/server models, websites host services along with their policies in separate private domains. In a cloud-computing platform where aggregators can merge multiple services, however, users often face complex decisions when choosing the right services from service providers. In this computing paradigm, the ability to aggregate policies as well as services is useful and more effective for users who are privacy conscious about their sensitive or personal information. This thesis studies problems associated with the Platform for Privacy Preferences (P3P) language and current issues with communicating and understanding it. Furthermore, it discusses efficient strategies and algorithms for the matching and merging processes, and elaborates on privacy policy conflicts that may occur after merging policies. Lastly, the thesis presents a tool for matching and merging P3P policies. If successful, the merge produces an aggregate policy that is consistent with the policies of all participating service providers.
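The thesis's own algorithms are not reproduced here, but the matching-and-merging idea can be illustrated with a minimal sketch. It assumes a heavy simplification of real P3P statements: each policy is reduced to a mapping from data categories to sets of allowed purposes (all names below are illustrative). The merge keeps only purposes every participating provider allows, and categories with no common purpose surface as conflicts.

```python
# Hypothetical sketch of aggregate policy merging (not the thesis's code).
# A policy is modeled as {data_category: set of allowed purposes}.

def merge_policies(*policies):
    merged, conflicts = {}, []
    categories = set().union(*(p.keys() for p in policies))
    for cat in categories:
        # Categories stated by only some providers pass through unchanged;
        # a stricter merge might also treat those as conflicts.
        purpose_sets = [p[cat] for p in policies if cat in p]
        common = set.intersection(*purpose_sets)
        if common:
            merged[cat] = common
        else:
            conflicts.append(cat)
    return merged, conflicts

a = {"contact": {"admin", "marketing"}, "location": {"navigation"}}
b = {"contact": {"admin"}, "location": {"advertising"}}
merged, conflicts = merge_policies(a, b)
# "contact" keeps the shared purpose; "location" has none in common.
```

The aggregate is consistent with every input policy by construction, which mirrors the consistency requirement stated in the abstract.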
62

Ekonomická analýza efektu fúze na praktickém příkladu společnosti Rudolf Jelínek a.s. / Economic analysis of the effect of the merger on the practical example of Rudolf Jelinek a.s.

Šticha, David January 2015 (has links)
This thesis presents an economic analysis of the merger of the Czech company Rudolf Jelinek a.s. The analysis is prepared on the basis of a financial analysis, a strategic analysis, an analysis of value drivers, a financial plan, and a valuation based on the discounted cash flow method. The thesis concludes by evaluating the effect of the merger using appropriately selected methods.
63

Demo: Freeway Merge Assistance System Using DSRC

Ahmed, Md Salman, Hoque, Mohammad A., Rios-Torres, Jackeline, Khattak, Asad 16 October 2017 (has links)
This paper presents the development of a novel decentralized freeway merge assistance system using Dedicated Short Range Communication (DSRC) technology. The system provides visual advisories on a Google map through a smartphone application. To the best of our knowledge, this is the first implementation of a DSRC-based freeway merging assistance system, integrated with a smartphone application via Bluetooth, that has been tested in the real world on an interstate highway in an uncontrolled environment. Results from field operational tests indicate that the system can successfully guide drivers towards a collaborative and smooth merging experience at typical "diamond" interchanges.
64

Creating a Customizable Component Based ETL Solution for the Consumer / Skapandet av en anpassningsbar komponentbaserad ETL-lösning för konsumenten

Retelius, Philip, Bergström Persson, Eddie January 2021 (has links)
In today's society, an enormous amount of data is created and stored in various databases. Since the data is in many cases spread across different databases, organizations with large amounts of data demand the ability to merge separated data and extract value from this resource. Extract, Transform and Load (ETL) systems have made it possible to merge different databases with relative ease. However, the ETL market is dominated by large actors such as Amazon and Microsoft, and the solutions offered are completely owned by these actors, leaving the consumer with little ownership of the solution. This thesis therefore proposes a framework for creating a component-based ETL that gives consumers the opportunity to own and develop their own ETL solution, customized to their needs. The result of the thesis is a prototype ETL solution built around configurability and customization; it accomplishes this by being independent of inflexible external libraries and by a level of modularity that makes adding and removing components easy. The results are verified with a test that shows how two different files containing data can be combined.
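The component-based idea described above can be sketched in a few lines. This is not the thesis's prototype; it is a hedged illustration in which each ETL stage is a plain callable, so stages can be added, removed, or swapped independently, and the verification test (combining two files of data) is mimicked by joining two CSV sources on a shared key.

```python
# Illustrative component-based ETL sketch; stage names are assumptions.
import csv
import io

def extract(csv_text):
    # Extract: parse CSV text into a list of row dicts.
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform_join(rows_a, rows_b, key):
    # Transform: merge rows from two sources on a shared key column.
    index = {r[key]: r for r in rows_b}
    return [{**r, **index[r[key]]} for r in rows_a if r[key] in index]

def load(rows):
    # Load: here, simply render the merged rows back to CSV text.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

users = "id,name\n1,Ada\n2,Linus\n"
emails = "id,email\n1,ada@example.org\n"
combined = load(transform_join(extract(users), extract(emails), "id"))
```

Because every stage only depends on plain lists of dicts, replacing `extract` with a database reader or `load` with a warehouse writer requires no changes to the other components, which is the ownership and customizability argument the thesis makes.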
65

Evaluating the Bluespot model with the August 2021 flood in Gävle, Sweden

Björklund, Oskar January 2023 (has links)
Floods are among the most common natural disasters; they annually affect vast numbers of people and cause severe economic losses. While fluvial, coastal, and flash floods are well studied, pluvial (rain-related) floods have received comparatively modest attention from researchers and decision-makers. There are several reasons for this: they have long been considered a problem already solved by infrastructure and other engineered solutions, and they are generally undramatic and small in scale. However, as cities expand, the environment's ability to retain and dispose of excess water is inhibited, and as the frequency of extreme weather events is expected to increase due to climate change, the risk associated with pluvial floods has become increasingly recognized. Commercial and open-source urban pluvial flood models tend to require advanced modelling expertise, considerable computational power, and large amounts of input data, and are often expensive. Consequently, flood inundation caused by pluvial floods is less well understood than that of other flood types. This thesis investigates the Bluespot model, which aims to provide an approachable tool for generating an overview of the effects of pluvial floods in urban areas. The model requires little input data and is relatively simple to run. Results from the model are compared to the August 2021 flood event in Gävle, Sweden. The study finds that results ranged from accurate to over- and underestimated, with slope and incoming water affecting the outcome most. Blue spots with a distinct slope and without the influence of streams or other waterways were mapped accurately and consistently even at coarser resolutions; consequently, underpasses in the road network were mapped with especially good consistency. Blue spots close to areas of large flow accumulation, however, were underestimated, and accuracy tended to decrease with coarser resolution.
The model cannot account for water outside blue spots; when large volumes of water accumulate and spread beyond these borders, it generates poor results. Such areas were found to be efficiently indicated by generating a heatmap from high-flow-accumulation points, thus flagging low confidence and where a hydraulic flood model should be used instead. Depending on the scope, a 1-3 m resolution is recommended for investigating effects on property, and a 5-10 m resolution is sufficient for investigating underpasses, although a finer resolution will generate more accurate results.
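Blue-spot mapping rests on identifying terrain depressions in a digital elevation model. One common way to do this (assumed here for illustration; the thesis's exact algorithm may differ) is priority-flood depression filling: grow inward from the DEM's edges, always expanding from the lowest cell on the frontier, so that every interior cell is raised to the spill level of its surroundings. Cells raised above their original elevation are blue spots, and the difference is the potential ponding depth.

```python
# Sketch of priority-flood depression filling on a small DEM grid.
import heapq

def fill_depressions(dem):
    rows, cols = len(dem), len(dem[0])
    filled = [[None] * cols for _ in range(rows)]
    heap = []
    # Seed the priority queue with all edge cells at their own height;
    # water can always drain off the edge of the grid.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                filled[r][c] = dem[r][c]
                heapq.heappush(heap, (dem[r][c], r, c))
    # Grow inward from the lowest frontier cell; an interior cell can
    # never drain lower than the spill level of the path reaching it.
    while heap:
        h, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and filled[nr][nc] is None:
                filled[nr][nc] = max(dem[nr][nc], h)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

dem = [[5, 5, 5],
       [5, 1, 5],
       [5, 5, 4]]
filled = fill_depressions(dem)
depth = filled[1][1] - dem[1][1]  # ponding depth at the central pit
```

The central pit is surrounded by cells of height 5, so it fills to that spill level, a ponding depth of 4. This also illustrates the thesis's finding about flow accumulation: the fill says nothing about how much water actually arrives, only where it could pond.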
66

Non-parametric Clustering and Topic Modeling via Small Variance Asymptotics with Local Search

Singh, Siddharth January 2013 (has links)
No description available.
67

臺灣華語的口語詞彙辨識歷程: 從雙音節詞來看 / Spoken word recognition in Taiwan Mandarin: evidence from isolated disyllabic words

錢昱夫, Chien, Yu Fu Unknown Date (has links)
This study investigated the importance of different segments and of tone in spoken word recognition in Taiwan Mandarin, using isolated disyllabic words. The Cohort model (1978) emphasized the absolute importance of word-initial information; in contrast, Merge (2000) proposed that the overall match between the input and the phonological representation is the most crucial factor. This study therefore examined the importance of different segments, and of onsets versus offsets, in the processing of Mandarin spoken words. Since tone was not addressed in the earlier models, its importance was also investigated, along with word-frequency effects. Three experiments were designed, with the same fifteen subjects participating in all of them. Experiment 1 tested the importance of different segments: in 12 high-frequency and 12 low-frequency disyllabic words, each segment of each word was replaced in turn by hiccup noise. Experiment 2 tested the importance of onsets and offsets: in another 12 high-frequency and 12 low-frequency disyllabic words, the CV of the first syllable or the VG/N of the second syllable was replaced by hiccup noise. Experiment 3 tested the importance of Mandarin tones: the tones of 24 high-frequency and 24 low-frequency disyllabic words were leveled to 100 Hz. In all three experiments, subjects listened to the manipulated stimuli and identified them; reaction time and accuracy were measured with E-Prime. The results indicate that the traditional Cohort model cannot be fully supported, because words could still be correctly recognized when word-initial information was disrupted. The Merge model, which holds that the overall match between the input and the lexical representation is most important, was more compatible with the results, although it needs to include prosody nodes to account for the processing of tones in Taiwan Mandarin. The study also showed that the first vowel of a disyllabic word is the most crucial segment and the second vowel the next most influential, since vowels carry the most information, including tone. The results of Experiment 2 showed that onsets and offsets are almost equally important in Mandarin, and that the vowel is the most influential segment for the perception of Mandarin tones. Finally, a frequency effect appeared throughout the processing of Mandarin words. Keywords: spoken word recognition, Taiwan Mandarin, Mandarin tones, segments, Cohort, Merge
68

Distributed Collaboration on Versioned Decentralized RDF Knowledge Bases

Arndt, Natanael 30 June 2021 (has links)
The aim of this thesis is to support the development of RDF knowledge bases in a distributed collaborative setup. A new methodology for distributed collaborative knowledge engineering, called Quit, is presented. It follows the premise that it is necessary to express dissent throughout a collaboration process and to provide individual workspaces for each collaborator. The approach is inspired by and based on the Git methodology for collaboration in software engineering; the state-of-the-art analysis shows that no existing system consistently transfers the Git methodology to knowledge engineering. The key features of the Quit methodology are independent workspaces for each user and a shared distributed workspace for collaboration. Throughout the collaboration process, data provenance plays an important role. To support the methodology, the Quit Stack is implemented as a collection of microservices that integrate the Semantic Web data structure and standard interfaces with the distributed collaborative process. To complement distributed data authoring, appropriate methods to support the data-management process are researched, in particular the creation and authoring of data as well as its publication and exploration. The application of the methodology is shown in various use cases for distributed collaboration on organizational data and research data, and the implementation is quantitatively compared to related work. It can be concluded that the consistent approach followed by the Quit methodology enables a wide range of distributed Semantic Web knowledge engineering scenarios.
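The Git-style idea behind Quit can be conveyed with a toy model (this is not the Quit Stack; class and predicate names are illustrative). An RDF graph is treated as a set of triples, a commit is a frozen snapshot, and a three-way merge applies both collaborators' additions and removals relative to a common base, so independent workspaces can be reconciled.

```python
# Toy illustration of versioned, mergeable RDF graphs, Git-style.
# Triples are plain 3-tuples; history is kept linear for simplicity.

class GraphRepo:
    def __init__(self):
        self.commits = []

    def commit(self, triples):
        # A commit is an immutable snapshot of the whole graph.
        self.commits.append(frozenset(triples))
        return len(self.commits) - 1  # commit id

    def diff(self, a, b):
        """Triples added and removed between two commits."""
        added = self.commits[b] - self.commits[a]
        removed = self.commits[a] - self.commits[b]
        return added, removed

    def merge(self, base, ours, theirs):
        """Three-way merge: apply both sides' changes to the base."""
        ours_add, ours_del = self.diff(base, ours)
        theirs_add, theirs_del = self.diff(base, theirs)
        merged = (self.commits[base] | ours_add | theirs_add) \
                 - ours_del - theirs_del
        return self.commit(merged)

repo = GraphRepo()
base = repo.commit({("ex:alice", "foaf:knows", "ex:bob")})
ours = repo.commit({("ex:alice", "foaf:knows", "ex:bob"),
                    ("ex:alice", "foaf:name", '"Alice"')})
# The other collaborator removed the knows-triple and added a name.
theirs = repo.commit({("ex:bob", "foaf:name", '"Bob"')})
merged = repo.merge(base, ours, theirs)
```

Because triples are atomic, set differences give a natural change granularity; the real system additionally has to handle blank nodes, provenance, and concurrent branches, which this sketch deliberately ignores.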
69

Codage multi-vues multi-profondeur pour de nouveaux services multimédia / Multiview video plus depth coding for new multimedia services

Mora, Elie-Gabriel 04 February 2014 (has links)
This PhD thesis deals with improving coding efficiency in 3D-HEVC. We propose both constrained approaches aimed at standardization and more innovative approaches based on optical flow. In the constrained category, we first propose a method that predicts the depth Intra modes using those of the texture; the inheritance is driven by a criterion measuring how well the two are expected to match. Second, we propose two simple ways to improve inter-view motion prediction in 3D-HEVC: the first adds an inter-view disparity vector candidate to the Merge list, and the second modifies the derivation process of this disparity vector. Third, an inter-component tool is proposed in which the link between the texture and depth quadtree structures is exploited to save both runtime and bits through a joint coding of the quadtrees. In the more innovative category, we propose two methods based on dense motion vector field estimation using optical flow. The first computes such a field on a reconstructed base view, then warps it to a dependent view, where it is inserted as a dense candidate in the Merge list of the prediction units in that view. The second method improves the view synthesis process: four fields are computed at the level of the left and right reference views using a past and a future temporal reference; these are then warped to the synthesized view, corrected using an epipolar constraint, and the four corresponding predictions are blended together. Both methods bring significant coding gains, which confirms the potential of such innovative solutions.
70

L'iconographie des divinités féminines hindoues au Bengale de la préhistoire au XIIᵉ siècle / The iconography of Hindu goddesses in Bengal from the prehistory period up to the 12th century

Chamoret, Suzanne 14 December 2017 (has links)
The production in Bengal of stone stelae and stone and metal statues representing Hindu goddesses, dated from prehistory up to the twelfth century, was assembled into a collection of more than three hundred pieces drawn from museums in India, Bangladesh, and Western countries, as well as from catalogues and other scholarly publications. The purpose of this doctoral dissertation is the analysis of that collection. The first part of the research takes a chronological approach. Between the third century B.C. and the second century A.D. there was an important production of terracotta plaques with feminine figurines, but it is difficult to say whether they were modeled for decoration or for cult purposes. Later, apart from some beautiful terracotta statues representing Mahiṣāsuramardinī and snake goddesses dated around the fifth century, there is a paucity of images until the eighth century.
The pieces in the collection dating from the ninth to the twelfth century are almost all images of the Goddess, Śiva's śakti and wife, and the stelae are almost all narrative and dedicated to orthodox cults. The second part of the research is a more detailed analysis of the fearsome forms of the Goddess: Durgā siṃhavāhinī, Mahiṣāsuramardinī, and Cāmuṇḍā; the snake goddesses, although incorporated within the Śaiva pantheon, keep a specific role. Stylistic elements allow the identification of several schools of sculpture, with, by the eleventh and twelfth centuries, a substantial difference between the abundance of decorative elements on the stelae from north-west Bengal and the bare style of those conceived in the area of Dhaka. From a religious point of view, an evolution from narrative to esoteric tantric images reveals different types of beliefs and śākta cults: orthodox, non-dualist kaula and Trika, and perhaps Nātha, it being understood that whichever path is chosen, the goal remains the same: mokṣa and merging with the Supreme Goddess.
