  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Kolektivní paměť násilně vystěhovaných obyvatel: Životní příběhy o druhé světové válce z Neveklovska / The collective memory of the forcibly ejected inhabitants: The oral history about the 2nd World War from Neveklov and its neighbourhoods

Štěpánková, Jana January 2019 (has links)
This dissertation contributes to the study of the oral history of the inhabitants forcibly displaced from Neveklov and its surroundings during the Second World War. Authentic testimony about this region survives, and the territory remains fixed in collective memory. Oral history is a highly valued source; its value lies in its authenticity, which is marked by selectivity and offers a different point of view. The thesis provides a unique opportunity to become acquainted with witnesses of the wartime period, whose testimony is absent from, or being forgotten in, official documents. It follows up on the research of Jaromír Jech from the middle of the last century; comparing the results of the two investigations shows the significance of the forced displacement of Czech inhabitants in the demarcated region. The outcome of the work is an analysis of the collected material using modern approaches based on the oral-history method, a part of qualitative research, with an emphasis on general objectives and context. The thesis brings new testimonies of events that have not yet been presented or published, and at the same time maps the current state, i.e. the reflection of the wartime forced persecution in the present day and...
152

MIDB : um modelo de integração de dados biológicos / MIDB: a model for biological data integration

Perlin, Caroline Beatriz 29 February 2012 (has links)
In bioinformatics there is a huge volume of data related to biomolecules and to nucleotide and amino acid sequences, residing almost entirely in several Biological Data Bases (BDBs). For a given sequence there are several informational classifications: genomic data, evolutionary data, structural data, and others. Some BDBs store only one or a few of these classifications. These BDBs are hosted on different sites and servers, under different database management systems with different data models; in addition, their instances and schemas may exhibit semantic heterogeneity. In this scenario, the objective of this project is to propose a biological data integration model, MIDB, that adopts new schema-integration and instance-integration techniques. The proposed model has a special schema-integration mechanism and another mechanism that performs instance integration (supported by a dictionary), allowing conflict resolution in attribute values; a clustering algorithm is used to group similar entities, and a domain specialist participates in managing the resulting clusters. The model was validated through a case study focusing on schema and instance integration of nucleotide sequence data from organisms of the genus Actinomyces, captured from four different data sources. About 97.91% of the attributes were correctly categorized in the schema integration, and the instance integration identified that about 50% of the clusters created need support from a specialist, thus avoiding errors in instance resolution. Further contributions include the attribute categorization, the clustering algorithm, the proposed distance functions, and the MIDB model itself.
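The instance-integration step described above, clustering similar records from different sources with a distance function and flagging ambiguous clusters for a domain specialist, can be illustrated with a minimal sketch. The record fields, thresholds and string-similarity measure below are illustrative assumptions, not the actual MIDB implementation.

```python
# Minimal sketch of instance clustering for biological-record integration.
# Record fields, thresholds and the similarity measure are illustrative
# assumptions, not the actual MIDB model.
from difflib import SequenceMatcher

def similarity(a: dict, b: dict) -> float:
    """Average string similarity over the attributes shared by two records."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    scores = [SequenceMatcher(None, str(a[k]), str(b[k])).ratio() for k in keys]
    return sum(scores) / len(scores)

def cluster(records, merge_at=0.85, review_at=0.60):
    """Greedy clustering: join a record to the most similar existing cluster;
    clusters whose best match falls in the grey zone are flagged for a specialist."""
    clusters = []  # each item: {"members": [...], "needs_review": bool}
    for rec in records:
        best, best_score = None, 0.0
        for c in clusters:
            score = max(similarity(rec, m) for m in c["members"])
            if score > best_score:
                best, best_score = c, score
        if best is not None and best_score >= review_at:
            best["members"].append(rec)
            best["needs_review"] |= best_score < merge_at
        else:
            clusters.append({"members": [rec], "needs_review": False})
    return clusters

# Example: the same gene reported by two hypothetical sources with small variations.
sources = [
    {"organism": "Actinomyces naeslundii", "gene": "fimA", "length": "1602"},
    {"organism": "Actinomyces naeslundi",  "gene": "fimA", "length": "1602"},
    {"organism": "Actinomyces oris",       "gene": "srtC", "length": "930"},
]
for c in cluster(sources):
    print(len(c["members"]), "record(s)", "-> specialist review" if c["needs_review"] else "")
```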
153

Repérer, reconnaître et prévenir les risques psychosociaux : une analyse institutionnelle et économique du cas français / Identify, recognize and prevent psychosocial risks : an institutional and economic analysis of the French case

Gaillard, Aurélie 08 December 2017 (has links)
In France, psychosocial risks (PSR) have become a major concern for society because of what is at stake in terms of public health and of costs for companies and workers. In the late 2000s the Ministries of Labour and Health took up these challenges by initiating surveys, data collection and scientific work. Despite the growing knowledge about PSR and their consequences, the integration of these new risks into public and managerial policies remains very modest. Through an economic, institutional and empirical analysis, the main objective of this thesis is to contribute to a better understanding of the consequences of exposure to PSR for individuals and for firms, and to analyse the role of the current prevention bodies in reducing perceived levels of PSR and preserving workers' health. The empirical analyses carried out reveal that workers' exposure to PSR leads to a deterioration of their mental health and to more sickness absence and presenteeism. It therefore seems necessary to put prevention measures in place to limit these harmful consequences. An institutional and economic analysis of the French prevention framework establishes the important role of the Health, Safety and Working Conditions Committee (CHSCT), despite the limited means of action available to it.
154

Oralidade e escrita no processo civil / Oralité et écriture dans le procès civil / Orality and writing in civil procedure

Alexandre Miura Iura 02 May 2012 (has links)
The overriding objective of this dissertation is to examine orality and writing in civil procedure from a case-management perspective. It is argued that orality is not a formative principle of civil procedure but rather a technical choice available to the court in order to make proceedings more efficient. The usefulness of oral hearings and oral evidence is questioned, and it is emphasized that the role of conciliation is to improve access to justice, not to reduce public expenditure. On the other hand, it is maintained that the guarantee of a fair trial is compatible with a written procedure. In conclusion, orality and writing cannot be treated exclusively as a matter of principle: with the consent of the parties, the judge may customize the hearings and the taking of evidence so as to make the civil procedure more efficient.
155

Machine Learning and Rank Aggregation Methods for Gene Prioritization from Heterogeneous Data Sources

Laha, Anirban January 2013 (has links) (PDF)
Gene prioritization involves ranking genes by possible relevance to a disease of interest. This is important in order to narrow down the set of genes to be investigated biologically, and over the years, several computational approaches have been proposed for automatically prioritizing genes using some form of gene-related data, mostly using statistical or machine learning methods. Recently, Agarwal and Sengupta (2009) proposed the use of learning-to-rank methods, which have been used extensively in information retrieval and related fields, to learn a ranking of genes from a given data source, and used this approach to successfully identify novel genes related to leukemia and colon cancer using only gene expression data. In this work, we explore the possibility of combining such learning-to-rank methods with rank aggregation techniques to learn a ranking of genes from multiple heterogeneous data sources, such as gene expression data, gene ontology data, protein-protein interaction data, etc. Rank aggregation methods have their origins in voting theory, and have been used successfully in meta-search applications to aggregate webpage rankings from different search engines. Here we use graph-based learning-to-rank methods to learn a ranking of genes from each individual data source represented as a graph, and then apply rank aggregation methods to aggregate these rankings into a single ranking over the genes. The thesis describes our approach, reports experiments with various data sets, and presents our findings and initial conclusions.
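The idea of aggregating per-source gene rankings can be illustrated with a simple Borda-count sketch. The gene identifiers and the three source rankings below are made-up examples; the thesis itself uses graph-based learning-to-rank together with more elaborate aggregation schemes.

```python
# Minimal Borda-count sketch for aggregating gene rankings from heterogeneous sources.
# Gene identifiers and the three source rankings are illustrative assumptions.
from collections import defaultdict

def borda_aggregate(rankings):
    """Each ranking is a list of genes, best first. A gene scores (n - position)
    points per ranking; genes missing from a ranking score zero for it."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, gene in enumerate(ranking):
            scores[gene] += n - pos
    return sorted(scores, key=scores.get, reverse=True)

expression_rank = ["BRCA1", "TP53", "MYC", "EGFR"]   # e.g. from gene expression data
ontology_rank   = ["TP53", "BRCA1", "EGFR", "KRAS"]  # e.g. from gene ontology data
ppi_rank        = ["TP53", "MYC", "BRCA1"]           # e.g. from protein-protein interactions

print(borda_aggregate([expression_rank, ontology_rank, ppi_rank]))
# ['TP53', 'BRCA1', 'MYC', 'EGFR', 'KRAS'] -- genes ranked highly across sources rise to the top
```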
156

Význam zdrojů likvidity centrální banky v průběhu finanční krize. Vývoj pozice věřitele poslední instance / Importance of sources of central bank liquidity during the financial crisis. The development of the lender of last resort function

Laga, Václav January 2014 (has links)
The aim of this thesis is to document the importance of central banks' liquidity provision during banking panics and financial crises and to analyse the development of the lender-of-last-resort (LLR) function. We examine three historical episodes: the banking panic of 1866, the Great Depression and the recent financial crisis, focusing on the interaction between the demand for liquidity on the one hand and the supply of liquidity by central banks on the other. Against this broad historical background we also analyse how the LLR function has changed. We show that a restrictive monetary policy during financial market distortions may lead to further disturbances and cause a serious recession. The analysis of the BoE in 1866 and of the FED between 2007 and 2009 shows, on the contrary, that an expansionary stance and a largely endogenous supply of liquidity can reduce financial market distortions and mitigate a possible recession. The analysis of the FED's reaction also indicates that if the LLR is to remain effective, central banks must expand their portfolio of instruments.
157

Nástroje pro automatizaci workflow procesů / Tools for Automating the Workflow Processes

Vančura, Tomáš January 2008 (has links)
The thesis deals with tools for workflow process automation. It describes in general terms what workflow is and briefly covers tools such as MS BizTalk Server, SAP NetWeaver, IBM WebSphere and ORACLE BPEL. The main part deals with Windows Workflow Foundation, which is described in detail together with its parts: the workflow runtime, workflow instances and workflow activities. Part of the thesis is an application that demonstrates the capabilities of Windows Workflow Foundation.
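The relationship between the three parts named above can be sketched in a language-agnostic way: activities are units of work, a runtime executes them, and each running workflow is tracked as an instance. The Python below only mirrors that structure under those assumptions; Windows Workflow Foundation itself is a .NET API with a different surface.

```python
# Language-agnostic sketch of workflow activities, a runtime and tracked instances.
# This mirrors the concepts only; it is not the Windows Workflow Foundation API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Activity = Callable[[dict], None]  # an activity reads and updates the shared workflow state

@dataclass
class WorkflowInstance:
    workflow_id: str
    activities: List[Activity]
    state: dict = field(default_factory=dict)
    completed: bool = False

class WorkflowRuntime:
    """Executes the activities of an instance in order and tracks completion."""
    def __init__(self):
        self.instances: Dict[str, WorkflowInstance] = {}

    def start(self, workflow_id: str, activities: List[Activity]) -> WorkflowInstance:
        instance = WorkflowInstance(workflow_id, activities)
        self.instances[workflow_id] = instance
        for activity in instance.activities:
            activity(instance.state)
        instance.completed = True
        return instance

# Hypothetical approval workflow with two activities.
def receive_request(state): state["request"] = "purchase order #42"
def approve(state): state["approved"] = True

runtime = WorkflowRuntime()
result = runtime.start("approval-1", [receive_request, approve])
print(result.state, result.completed)
```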
158

Vyhledávání vzorů v dynamických datech / Pattern Finding in Dynamical Data

Budík, Jan January 2009 (has links)
The first chapter provides basic information about pattern learning. The second chapter covers approaches to pattern recognition and the use of artificial intelligence, together with fundamentals of statistics and chaos theory. The third chapter focuses on time series, their types and their preprocessing, with attention to time series in the financial sector. The fourth chapter discusses pattern recognition problems and prediction. The last chapter describes the software developed for the thesis and its components.
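Pattern finding in a time series can be illustrated with a minimal window-matching sketch. The synthetic series, the window length and the z-normalised Euclidean distance below are illustrative assumptions, not the method implemented in the thesis software.

```python
# Minimal sketch of window-based pattern matching in a time series.
# The series, window length and distance measure are illustrative assumptions.
import numpy as np

def znorm(window):
    """Normalise a window to zero mean and unit variance so only its shape matters."""
    std = window.std()
    return (window - window.mean()) / std if std > 0 else window - window.mean()

def best_match(series, pattern):
    """Slide a window over the series and return the start index whose
    z-normalised shape is closest to the query pattern."""
    w = len(pattern)
    q = znorm(np.asarray(pattern, dtype=float))
    best_i, best_d = -1, np.inf
    for i in range(len(series) - w + 1):
        d = np.linalg.norm(znorm(series[i:i + w]) - q)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

prices = np.cumsum(np.random.default_rng(0).normal(size=200))  # synthetic "price" series
query = prices[50:60]                                           # a shape we want to find again
print(best_match(prices, query))                                # index 50 matches itself, distance ~0
```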
159

Instance Segmentation on depth images using Swin Transformer for improved accuracy on indoor images / Instans-segmentering på bilder med djupinformation för förbättrad prestanda på inomhusbilder

Hagberg, Alfred, Musse, Mustaf Abdullahi January 2022 (has links)
The Simultaneous Localisation And Mapping (SLAM) problem is an open fundamental problem in autonomous mobile robotics, and instance segmentation is one of the most actively researched techniques used to enhance SLAM methods. In this thesis we implement an instance segmentation system using a Swin Transformer backbone combined with two state-of-the-art instance segmentation methods, Cascade Mask R-CNN and Mask R-CNN. Instance segmentation is a technique that simultaneously solves object detection and semantic segmentation. We show that adding depth information improves average precision (AP) by approximately 7%, and that the Swin Transformer backbone works well with depth images. Our results also show that Cascade Mask R-CNN outperforms Mask R-CNN. However, the results should be interpreted with caution because of the small size of the NYU-Depth V2 dataset; most instance segmentation research uses the COCO dataset, which has roughly a hundred times more images than NYU-Depth V2 but lacks depth information.
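The AP figures quoted above are built on mask intersection-over-union between predicted and ground-truth instances. The sketch below shows that underlying computation on toy boolean masks; the actual evaluation on NYU-Depth V2 in the thesis uses full instance-segmentation tooling.

```python
# Minimal sketch of the mask IoU and matching step that underlies the AP metric.
# The masks are toy boolean arrays; thresholds and matching rule are illustrative.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

def matches_at_threshold(pred_masks, gt_masks, thr=0.5):
    """Greedy matching: a prediction is a true positive if it overlaps an
    unmatched ground-truth instance with IoU above the threshold."""
    unmatched = list(range(len(gt_masks)))
    tp = 0
    for pm in pred_masks:
        ious = [(mask_iou(pm, gt_masks[j]), j) for j in unmatched]
        if ious:
            best_iou, best_j = max(ious)
            if best_iou >= thr:
                tp += 1
                unmatched.remove(best_j)
    return tp, len(pred_masks) - tp, len(unmatched)  # TP, FP, FN

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:6, 2:6] = True
print(mask_iou(pred, gt))                  # 0.75
print(matches_at_threshold([pred], [gt]))  # (1, 0, 0)
```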
160

AI-based Quality Inspection for Short-Series Production : Using synthetic dataset to perform instance segmentation for quality inspection / AI-baserad kvalitetsinspektion för kortserieproduktion : Användning av syntetiska dataset för att utföra instanssegmentering för kvalitetsinspektion

Russom, Simon Tsehaie January 2022 (has links)
Quality inspection is an essential part of almost any industrial production line. However, designing customized defect-detection solutions for every product can be costly for the production line. This is especially the case for short-series production, where the production time is limited, because collecting and manually annotating the training data takes time. Therefore, a possible method for detecting geometrical defects using only synthetic training data is proposed in this thesis work. The method is partially inspired by previous related work and makes use of an instance segmentation model and a pose estimator; this thesis work focuses on the instance segmentation part while using a pre-trained pose estimator for demonstration purposes. The synthetic data was automatically generated from a 3D model of a given object using different data augmentation techniques. Mask R-CNN was primarily used as the instance segmentation model and was compared with a rival model, HTC. The trials show promising results towards a trainable general-purpose defect detection pipeline using only synthetic data.
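Generating synthetic training pairs by augmenting a single rendered template can be illustrated with a minimal sketch. The toy template, the transforms and their parameters below are illustrative assumptions, not the thesis pipeline, which renders views of a 3D model of the inspected part.

```python
# Minimal sketch of generating synthetic (image, instance mask) training pairs
# from one rendered template by random augmentation. Template and transforms
# are illustrative assumptions, not the actual thesis pipeline.
import numpy as np

rng = np.random.default_rng(42)

def make_template(size=64):
    """A toy 'rendered part': a bright square on a dark background, plus its mask."""
    img = np.full((size, size), 0.1)
    mask = np.zeros((size, size), dtype=bool)
    img[20:44, 20:44] = 0.9
    mask[20:44, 20:44] = True
    return img, mask

def augment(img, mask):
    """Random 90-degree rotation, optional horizontal flip, and pixel noise (image only)."""
    k = int(rng.integers(0, 4))
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        img, mask = np.fliplr(img), np.fliplr(mask)
    img = np.clip(img + rng.normal(0, 0.05, img.shape), 0.0, 1.0)
    return img, mask

template_img, template_mask = make_template()
dataset = [augment(template_img, template_mask) for _ in range(100)]
print(len(dataset), dataset[0][0].shape, dataset[0][1].dtype)  # 100 synthetic (image, mask) pairs
```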
