301 |
Kritická analýza jazykových ideologií v českém veřejném diskurzu / Critical Analysis of Language Ideologies in Czech Public Discourse. Dufek, Ondřej. January 2018 (has links)
The thesis deals with language ideologies in Czech public discourse. After introducing its topic, motivation and structure in the opening chapter, it devotes the second chapter to a thorough analysis of the research field of language ideologies. It presents various ways of defining them, two different approaches to them, and a few key features that characterize language ideologies. The relation of language ideologies to other related notions is outlined, and possibilities and ways of investigating them are surveyed. Some remarks focus on existing lists or glossaries of language ideologies. The core of this chapter is an original, comprehensive definition of language ideologies grounded in a critical reflection of approaches to date. The third chapter summarizes relevant existing findings and, on that basis, formulates the main aim of the thesis: to contribute to knowledge of the foundations and ways of conceptualizing language in Czech public discourse. The fourth chapter elaborates the methodological frame of the thesis. Critical discourse analysis is chosen as the basis: its fundamentals are summarized, the main critical comments are considered, and partial solutions are proposed through the use of corpus-linguistics tools. Another part of this chapter concerns keyness as one of the dominant principles used...
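The abstract does not say which keyness statistic the thesis adopts; as an illustration of the principle only, here is a minimal sketch of one common choice, Dunning's log-likelihood (G²), which scores how strongly a word's frequency in a focus corpus deviates from a reference corpus (all counts in the example are invented):

```python
import math

def keyness_g2(freq_focus, size_focus, freq_ref, size_ref):
    """Dunning's log-likelihood (G2) keyness of a word occurring
    freq_focus times in a focus corpus of size_focus tokens and
    freq_ref times in a reference corpus of size_ref tokens."""
    total_freq = freq_focus + freq_ref
    total_size = size_focus + size_ref
    # Expected frequencies under the null hypothesis of equal rates.
    expected_focus = size_focus * total_freq / total_size
    expected_ref = size_ref * total_freq / total_size
    g2 = 0.0
    for observed, expected in ((freq_focus, expected_focus),
                               (freq_ref, expected_ref)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2.0 * g2

# Invented example: a word with 120 hits in a 1M-token focus corpus
# versus 400 hits in a 100M-token reference corpus.
print(round(keyness_g2(120, 1_000_000, 400, 100_000_000), 2))
```

The higher the score, the more "key" the word is for the focus corpus; ranking all words by this score yields the keyword lists that keyness-based discourse studies typically start from.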
|
302 |
ScaleSem : model checking et web sémantique / ScaleSem : model checking and semantic web. Gueffaz, Mahdi. 11 December 2012 (has links)
The increasing development of networks, and especially the Internet, has greatly widened the gap between heterogeneous information systems. A review of studies on the interoperability of heterogeneous information systems shows that work in this area tends to focus on solving problems of semantic heterogeneity. The W3C (World Wide Web Consortium) proposes standards for representing semantics through ontologies.
Ontology is becoming an indispensable support for the interoperability of information systems, particularly for their semantics. The structure of an ontology is a combination of concepts, properties and relations; this combination is also called a semantic graph. Several languages have been developed in the context of the Semantic Web, and most of them use XML (eXtensible Markup Language) syntax. OWL (Web Ontology Language) and RDF (Resource Description Framework) are the most important Semantic Web languages, and both are based on XML. RDF is the first W3C standard for enriching Web resources with detailed descriptions, and it makes automatic processing of Web resources easier. Descriptions may be characteristics of resources, such as the author or the content of a website; such descriptions are metadata. Enriching the Web with metadata enables the development of the so-called Semantic Web. RDF is also used to represent semantic graphs corresponding to specific knowledge models. RDF files are typically stored in a relational database and manipulated using SQL or derived languages such as SPARQL. This solution is well suited to small RDF graphs, but unfortunately not to large ones. These graphs evolve rapidly, and adapting them to change may introduce inconsistencies. Driving the application of changes while maintaining the consistency of a semantic graph is a crucial task, costly in terms of time and complexity, so an automated process is essential. For these large RDF graphs, we propose a new approach based on formal verification, namely model checking. Model checking is a verification technique that explores all possible states of a system; in this way, one can show that a model of a given system satisfies a given property. This thesis provides a new method for checking and querying semantic graphs. We propose an approach called ScaleSem, which transforms semantic graphs into graphs understood by a model checker (the verification tool of the model-checking method). This requires software tools that translate a graph described in one formalism into the same graph (or an adaptation of it) described in another formalism.
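To make the translation idea concrete, the sketch below parses a toy RDF graph and emits a NuSMV-like state-machine description, treating each RDF node as a state and each triple as a transition. The rdflib library, the Turtle data and the output format are illustrative assumptions, not the thesis's actual tool chain (which, among other things, would also need to preserve predicate labels and handle literals):

```python
from rdflib import Graph  # pip install rdflib

TTL = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Collect every node (subject or object) of the semantic graph.
nodes = sorted({str(s) for s, _, _ in g} | {str(o) for _, _, o in g})

def sym(uri):
    # Shorten URIs into model-checker-friendly identifiers.
    return uri.rsplit("/", 1)[-1]

# Each node becomes a state; each triple becomes a transition
# (this simple sketch drops the predicate label).
print("MODULE main")
print("VAR state : {" + ", ".join(sym(n) for n in nodes) + "};")
print("TRANS")
transitions = [f"(state = {sym(str(s))} & next(state) = {sym(str(o))})"
               for s, _, o in g]
print("  " + " |\n  ".join(transitions) + ";")
```

Once the graph lives in the model checker's formalism, properties of the semantic graph (reachability, consistency constraints) can be stated in temporal logic and verified automatically.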
|
303 |
Návrh dílčí strategie při propagaci elektronického obchodu firmy / Proposal of a Partial Strategy for the Promotion of a Company's E-shop. Ženatová, Eva. January 2014 (has links)
This thesis defines the e-commerce terms that matter for a properly functioning e-shop. It analyzes the current state of an existing e-shop and, based on the identified weaknesses, proposes a partial strategy for its further promotion.
|
304 |
Metody sumarizace textových dokumentů / Methods of Text Document Summarization. Pokorný, Lubomír. January 2012 (has links)
This thesis deals with single-document summarization of text data. Part of it is devoted to data preparation, mainly normalization: several stemming algorithms are listed and lemmatization is described. The main part is devoted to Luhn's summarization method and its extension using the WordNet dictionary. The Oswald summarization method is described and applied as well. The designed and implemented application automatically generates abstracts using these methods. A set of experiments was developed that verified the correct functionality of the application, including the extension of Luhn's summarization method.
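As orientation for the reader, here is a minimal sketch of the classic Luhn scoring on its own, without the WordNet or Oswald extensions described in the thesis; the stopword list, regular expressions and parameter values are illustrative assumptions:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

def luhn_summary(text, num_sentences=2, min_freq=2, gap=4):
    """Keep the sentences with the highest Luhn significance score."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    significant = {w for w, c in freq.items() if c >= min_freq}

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        positions = [i for i, t in enumerate(tokens) if t in significant]
        if not positions:
            return 0.0
        best, cluster = 0.0, [positions[0]]
        for pos in positions[1:] + [None]:
            # Grow the cluster while significant words stay close together.
            if pos is not None and pos - cluster[-1] - 1 <= gap:
                cluster.append(pos)
            else:
                span = cluster[-1] - cluster[0] + 1
                best = max(best, len(cluster) ** 2 / span)  # Luhn's formula
                if pos is not None:
                    cluster = [pos]
        return best

    top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                 reverse=True)[:num_sentences]
    return " ".join(sentences[i] for i in sorted(top))
```

The WordNet extension mentioned in the abstract would, for instance, let synonyms count toward the same significant-word frequency, which is exactly where a plain frequency count falls short.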
|
305 |
Implementace detektoru klíčových slov do mobilního telefonu (Symbian 60) / Keyword Spotting Implementation for a Mobile Phone (Symbian 60). Cipr, Tomáš. Unknown Date (has links)
Keyword spotting is one of the many applications of automatic speech recognition. Its purpose is to determine the spots in a given utterance at which any of a set of specified words was spoken. Keyword spotting has great potential to enhance both new and existing applications; an example is voice control of a mobile phone. With the arrival of the Symbian OS on the market, it is even possible for an end user to implement keyword spotting on a mobile phone on his or her own. The thesis describes the theoretical prerequisites for keyword spotting and its implementation. First, the Symbian OS is presented with respect to the given task. Second, each step of the keyword spotting process is described. Finally, the object design of the keyword spotter is presented, followed by a description of the implementation. The thesis concludes with a review of the results and notes on possible improvements.
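The abstract does not spell out the recognizer used, so the following is only a generic illustration of what "determining spots" means computationally: a template-matching sketch that slides a keyword template over an utterance and scores each window with dynamic time warping (DTW) over feature frames such as MFCCs. All names and thresholds are assumptions:

```python
import numpy as np

def dtw_cost(template, segment):
    """Length-normalized DTW alignment cost between two feature
    sequences (one row per frame, e.g. MFCC vectors)."""
    n, m = len(template), len(segment)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - segment[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def spot_keyword(template, utterance, threshold=1.0, step=5):
    """Report frame offsets where the window's DTW cost against the
    keyword template falls below the threshold."""
    hits, win = [], len(template)
    for start in range(0, len(utterance) - win + 1, step):
        cost = dtw_cost(template, utterance[start:start + win])
        if cost < threshold:
            hits.append((start, cost))
    return hits
```

A production spotter, such as the HMM-based systems common at the time, replaces template matching with statistical acoustic models, but the input/output contract is the same: feature frames in, time-stamped keyword hits out.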
|
306 |
Barns rättigheter inom kolloverksamheten: fokus på delaktighet / Children's Rights in Camp Activities: A Focus on Participation. Malmgren, Lisa. January 2024 (links)
This study aims to analyze children's rights within camp activities, especially in terms of participation. More specifically, it analyzes activities conducted on Barnens Ö with Stiftelsen Barnens Dag (the Foundation) as organizer. Through a keyword analysis, the Foundation's governing documents are linked to the relevant articles of the United Nations Convention on the Rights of the Child (CRC); in this part of the study, children's rights are analyzed across the activities in general. The next part focuses on Article 12: children have the right to express their views and be heard in all matters affecting them, and the views of the child should be given due weight in accordance with the child's age and maturity. To find out how children's views are taken into account in camp practice, and not only in policy documents, camp staff are interviewed. Children's participation is analyzed using Roger Hart's ladder of participation and Harry Shier's model of pathways to participation. The results of the study indicate that the Foundation does solid work regarding children's rights, sometimes going beyond the CRC. One observation is that the way children's participation is handled differs among the farms on Barnens Ö. All farms work hard to ensure that children express their opinions and that those opinions are considered; it is less common for children to be involved in actual decision-making, but it does happen occasionally.
|
307 |
A Probabilistic Formulation of Keyword Spotting. Puigcerver I Pérez, Joan. 18 February 2019 (links)
Keyword Spotting, applied to handwritten text documents, aims to retrieve the documents, or parts of them, that are relevant for a user's query within a large collection of documents. The topic has attracted considerable interest over the last 20 years among Pattern Recognition researchers, as well as digital libraries and archives.
This thesis first defines the goal of Keyword Spotting from a Decision Theory perspective. The problem is then tackled following a probabilistic formulation. More precisely, Keyword Spotting is presented as a particular instance of Information Retrieval in which the content of the documents is unknown but can be modeled by a probability distribution. In addition, the thesis proves that, under the correct probability distributions, the framework provides the optimal solution under many of the evaluation measures traditionally used in the field.
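In this formulation, the relevance of a query to a document image is obtained by marginalizing over the unknown transcript. The equation below is a sketch of this kind of score, with notation assumed here rather than quoted from the thesis:

```latex
% Relevance of query word q to a line image x: the probability that q
% occurs in the unknown transcript t (notation illustrative).
P(R = 1 \mid q, x) \;=\; \sum_{t \,:\, q \in t} P(t \mid x)
```

Ranking images by decreasing relevance probability is then the decision rule whose optimality, under the correct distributions, the thesis proves.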
Later, different statistical models are used to represent the probability distribution over the content of the documents. These models, Hidden Markov Models or Recurrent Neural Networks, are estimated from training data, and the corresponding distributions over the transcripts of the images can be efficiently represented using Weighted Finite State Transducers.
In order to make the framework practical for large collections of documents, this thesis presents several algorithms to build probabilistic word indexes, using both lexicon-based and lexicon-free models. These indexes are very similar to the ones used by traditional search engines.
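A probabilistic word index of this kind can be pictured as an ordinary inverted index whose posting lists carry posterior probabilities instead of mere occurrence flags. The sketch below, with invented pages and scores, shows the idea:

```python
from collections import defaultdict

# Toy recognizer output: per page, candidate word hits with posterior
# probabilities. All values here are invented for illustration.
page_hits = {
    "page_001": [("letter", 0.92), ("king", 0.41), ("letter", 0.30)],
    "page_002": [("king", 0.87), ("council", 0.66)],
}

def build_index(page_hits, threshold=0.3):
    """Probabilistic word index: word -> [(page, probability), ...],
    keeping every hit whose posterior clears the threshold."""
    index = defaultdict(list)
    for page, hits in page_hits.items():
        for word, prob in hits:
            if prob >= threshold:
                index[word].append((page, prob))
    for word in index:  # rank postings by decreasing probability
        index[word].sort(key=lambda entry: -entry[1])
    return dict(index)

print(build_index(page_hits)["king"])  # [('page_002', 0.87), ('page_001', 0.41)]
```

Querying then reduces to a dictionary lookup, which is what makes the approach fast enough for collections of tens of thousands of pages.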
Furthermore, we study the relationship between the presented formulation and other seminal approaches in the field of Keyword Spotting, highlighting some limitations of the latter. Finally, all the contributions are evaluated experimentally, not only on standard academic benchmarks but also on collections comprising tens of thousands of pages of historical manuscripts. The results show that the proposed framework and algorithms make it possible to build very accurate and very fast Keyword Spotting systems with a solid underlying theory. / Puigcerver I Pérez, J. (2018). A Probabilistic Formulation of Keyword Spotting [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/116834
|
308 |
Etude de valorisation des rejets des usines à zinc de Kolwezi, République démocratique du Congo / Recovery Study of Valuable Metals from Kolwezi Zinc Plant Residues, Democratic Republic of Congo. Ngenda Banka, Richard. 28 April 2010 (links)
Residues from the Kolwezi Zinc Plant (Usines à Zinc de Kolwezi, UZK) essentially contain zinc in a refractory (ferrite) form that is difficult to recover by conventional hydrometallurgical methods. They also contain "heavy" metals that make them hazardous to the environment in which they are currently stored. Most of these metals are valuable, which makes the UZK residues a genuine secondary deposit. It is therefore imperative to develop an appropriate recovery process, hence the theme of the present thesis: "Recovery study of valuable metals from Kolwezi Zinc Plant residues, DRC". Using modern characterization techniques (physico-chemical, mineralogical and morphological), we selected, adapted and justified a technique for efficiently recovering the valuable minerals present. The minerals contained in the UZK residues were sulphated by digestion and, after roasting, selectively dissolved. Sulphation proved to be the decisive step of the process, and particular attention was devoted to it through a detailed kinetic study. The data and information collected throughout this research enabled a simulation of the process with the Aspen Plus software, which allowed us to propose a draft industrial flowsheet. The latter proved flexible with respect to other feed materials, such as calcines of copper-zinc sulphide concentrates.
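The abstract does not name the kinetic model that was fitted; purely for orientation, a rate law commonly used in such digestion studies is the shrinking-core model under chemical-reaction control, with an Arrhenius temperature dependence (illustrative, not necessarily the model adopted in the thesis):

```latex
% Shrinking-core model, reaction control: X is the converted fraction
% at time t; k follows Arrhenius' law (illustrative only).
1 - (1 - X)^{1/3} = k\,t, \qquad k = A\, e^{-E_a / (R T)}
```

Fitting k at several temperatures and plotting ln k against 1/T yields the activation energy E_a, the usual outcome of a detailed kinetic study.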
|
309 |
Jak vytvořit samostatně motivované vzdělávání: Případová studie Coursera & Khan Academy 2014 / How to Create Self-Driven Education: The Social Web & Social Sciences, Coursera & Khan Academy 2014 Case Study. Růžička, Jakub. January 2015 (links)
This diploma thesis is concerned with the possibilities of employing social web data in the social sciences. Its theoretical part describes changes in education in the context of the dynamics of contemporary society along three fundamental, interrelated dimensions: technology (the cause of and/or the tool for the change), work (new models of collaboration), and economics (the sustainability of free and open-source business models). The main methodological part of the thesis focuses on the issues of sampling, sample representativeness, validity and reliability assessment, ethics, and data collection in the emerging field of social web research in the social sciences. The research part includes illustrative social web analyses and the conclusions of the author's 2014 "Coursera & Khan Academy on the Social Web" study, and provides the full research report in its attachment so that its results can be compared with the theoretical part, offering a "naive" answer (as derived from social web mentions and networks) to the fundamental question: "How to Create Self-Driven Education?"
|