31

Projeto da arquitetura de hardware para binarização e modelagem de contextos para o CABAC do padrão de compressão de vídeo H.264/AVC / Hardware architecture design for binarization and context modeling for CABAC of H.264/AVC video compression

Martins, André Luis Del Mestre January 2011 (has links)
Context-based Adaptive Binary Arithmetic Coding (CABAC), adopted in the H.264/AVC Main profile and above, is the state of the art in bit-rate efficiency. However, CABAC takes 9.6% of the total encoding time and its throughput is limited by bit-level data dependencies (LIN, 2010). Meeting real-time requirements with a pure software CABAC encoder is therefore difficult at the highest levels of the H.264/AVC standard, so CABAC must be accelerated through hardware implementations. The CABAC hardware architectures found in the literature focus on the Binary Arithmetic Encoder (BAE), while Binarization and Context Modeling (BCM) is treated as a secondary issue or not presented at all. Together, the BCM and the BAE constitute the CABAC. This dissertation describes in detail the set of algorithms that compose the BCM of the H.264/AVC standard and then presents a hardware architecture designed specifically for the BCM. The proposed design is described in VHDL, and synthesis results show that it reaches sufficient performance, in FPGA and ASIC, to process video in real time at level 5 of the H.264/AVC standard. The proposed architecture is 13.3% faster than, and as area-efficient as, the best related works.
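For readers unfamiliar with the BCM, the binarization step maps each syntax element onto a string of bins before arithmetic coding. Below is a minimal Python sketch of two binarization schemes used by H.264/AVC CABAC, truncated unary and k-th order Exp-Golomb; the function names and the UEG-style cutoff in the usage example are illustrative and not taken from this dissertation.

```python
def truncated_unary(value, c_max):
    """Truncated unary: 'value' ones, then a terminating zero
    unless value == c_max (the terminator is dropped)."""
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins

def exp_golomb_k(value, k):
    """k-th order Exp-Golomb bin string (used as the suffix of UEGk schemes)."""
    bins = []
    while value >= (1 << k):          # unary-like prefix, doubling the step
        bins.append(1)
        value -= 1 << k
        k += 1
    bins.append(0)                    # prefix terminator
    for i in range(k - 1, -1, -1):    # k-bit binary suffix
        bins.append((value >> i) & 1)
    return bins

# Illustrative UEG-style concatenation: truncated-unary prefix up to a
# cutoff, Exp-Golomb (order 3) suffix for the remainder.
cutoff, v = 9, 11
bins = truncated_unary(min(v, cutoff), cutoff)
if v >= cutoff:
    bins += exp_golomb_k(v - cutoff, 3)
print(bins)
```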
33

Engineering of Context Dependent Quality of Service (QoS)

Farooq, Khan H M January 2006 (has links)
The service-oriented computing paradigm is a new model for distributed computing; thanks to its simplicity and powerful features, it has become popular and is implemented in a wide variety of scenarios. An already built system, implemented entirely with Grid technology, is studied in detail. The idea of the current work is how the same functionality can be implemented in a non-real-time environment. The basic idea is to merge web services and grid services to formulate the concept of context-dependent quality of service (QoS) for both real-time and non-real-time solutions. When these different types of services are merged, the main focus is defining the service level agreement (SLA) under which these differently qualified services interact with each other. This document discusses and formulates the theoretical aspects which, in future work, can be considered for practical implementation.
34

Context Matters : A Qualitative Study of the Teaching of English Vocabulary at the Swedish Upper Secondary Level

Nagy, Victor, Robild, Henrik January 2017 (has links)
The purpose of this essay is to identify effective ways of teaching English vocabulary at the upper secondary level in Sweden. We answered three research questions: (1) In what ways do local English teachers at the upper secondary level teach vocabulary? (2) How do local students at the upper secondary level perceive their acquisition and learning of English vocabulary? (3) According to research on vocabulary acquisition, what is necessary to include when constructing an effective module for teaching vocabulary? We gathered the data needed to answer these questions by conducting interviews with both teachers and students, as well as an independent study. The independent study investigated which of six pre-selected vocabulary teaching methods gave the best results in a single English 6 class and was the most preferred by those students. Both the interviews and the independent study indicated that one of the most effective ways of teaching vocabulary is through context-based exercises. Our conclusion is that students' retention of new vocabulary may be directly connected to the amount of context in which the words are taught. Based on our findings, we have constructed frameworks for a series of lessons that focus on teaching vocabulary.
35

Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques / Machine Learning for Simplifying the Use of Cardiac Image Databases

Margeta, Ján 14 December 2015 (has links)
The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent to the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable, and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and interpretation and visualisation of images are improved by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed and classification forests are trained to automatically segment the main cardiac structures (left ventricle and left atrium) from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe the hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac image databases, establishing distances and neighbourhoods between images. One application is retrieval of the images most similar to that of a new patient.
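As a rough illustration of the "classification forest" idea used for view recognition, the sketch below trains a random forest on placeholder feature vectors; the features, labels and parameters are invented for illustration and are not those of the thesis, which works on real image-derived features and also uses convolutional networks.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 64))                         # 300 slices, 64 features each (placeholder)
y = rng.choice(["SAX", "2CH", "4CH"], size=300)   # acquisition-view labels (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:250], y[:250])                         # train on most of the data
print(clf.score(X[250:], y[250:]))                # held-out accuracy (chance level on random data)
```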
36

Utveckling av enzymatisk bioremediering av PET : Användnin av ett kontextbaserat lärande i implementering av ett miljöperspektiv i gymnasieskolan / The development of enzymatic bioremediation of PET : The use of context-based learning in implementing an environmental perspective in secondary education.

Jakobsson, Jessika January 2021 (has links)
Plastic pollution is one of the biggest threats, if not the biggest, to earth's ecosystems. Almost 400 million tons of plastic are produced every year, and most of it is discarded outside of recycling systems. Marine ecosystems are especially exposed because of microplastics, plastic particles smaller than 5 mm. The most common type of plastic is PET. Plastic in general is chemically very stable and hard to degrade, but scientists have found a bacterium named I. sakaiensis that can degrade PET with a two-enzyme system consisting of PETase and MHETase. Being exo-enzymes, they have to be secreted to function, and their thermostability is very low, so most research has focused on increasing their thermal stability together with their enzyme activity. This report focuses on which structures are important for the PET-degrading ability of MHETase and PETase and how they can be applied to cleaning marine ecosystems. A key to solving environmental issues is creating environmentally aware students through the education system. Studies of context-based education have indicated that it sparks motivation and interest in students and makes lessons seem more relevant. This report is also about how context-based education can be used to create an environmental perspective in secondary education.
37

Kompresní metody založené na kontextovém modelování / Compression Methods Based on Context Modelling

Kozák, Filip January 2013 (has links)
The purpose of this thesis is to describe context-based compression methods and their application to multimedia data. It describes the principle of arithmetic coding and the prediction by partial matching (PPM) method, including the creation of the probability model. Multimedia data and the basic principles of their compression are also described. The next section presents the compression methods that I implemented in this work and their results.
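To make the context-modelling idea concrete, here is a small sketch of an adaptive order-1 byte model that produces the probabilities an arithmetic coder would consume; it is a simplified stand-in, not the PPM implementation developed in the thesis.

```python
from collections import defaultdict

class Order1Model:
    """Adaptive order-1 context model: symbol probabilities are estimated
    from counts of which byte followed each one-byte context so far."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def probability(self, context, symbol):
        ctx = self.counts[context]
        total = sum(ctx.values()) + 256       # Laplace smoothing over all byte values
        return (ctx[symbol] + 1) / total

    def update(self, context, symbol):
        self.counts[context][symbol] += 1

model = Order1Model()
prev = 0
for byte in b"abracadabra":
    p = model.probability(prev, byte)         # an arithmetic coder would narrow
    model.update(prev, byte)                  # its interval by this probability
    prev = byte
print(round(p, 4))                            # probability of the final 'a' after 'r'
```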
38

Bezeztrátová komprese obrazu / Lossless Image Compression

Vondrášek, Petr January 2011 (has links)
The aim of this master's thesis was to design, develop and test a method for lossless image compression. The theoretical part includes a description of selected existing methods such as RLE, MTF, adaptive arithmetic coding, the color models used in LOCO-I and JPEG 2000, the MED and GAP predictors, and the Laplacian pyramid. The conclusion includes a comparison of various combinations of the chosen approaches and their overall efficiency compared with PNG and JPEG-LS.
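For reference, the MED predictor mentioned above (the median edge detector used in LOCO-I/JPEG-LS) is simple enough to sketch; the surrounding residual coding and context modelling are omitted.

```python
def med_predict(a, b, c):
    """MED prediction from the causal neighbours of pixel x:
    a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)        # edge detected: take the lower neighbour
    if c <= min(a, b):
        return max(a, b)        # edge detected: take the higher neighbour
    return a + b - c            # smooth region: planar estimate

# The encoder stores the residual x - med_predict(a, b, c) for each pixel.
print(med_predict(100, 120, 110))   # -> 110 (planar estimate)
```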
39

Conception et réalisation d'un consultant basé sur le contexte : application en histopathologie pour la gradation du cancer du sein / Design and implementation of a context-based consultant : application on histopathology for breast cancer gradation

Aroua, Anissa 13 June 2014 (has links)
Breast cancer diagnosis is a human activity that is context-dependent. The context contains a large number of elements, which strongly limits any possibility of complete automation. Recently, the digitization of slides (the reasoning support) has prompted pathologists to migrate from slide analysis under the microscope to slide image analysis on the screen. This migration offers the possibility of at least partial proceduralization of their analysis methods. In this thesis, we are interested in the activity of slide image analysis by a pathologist, which is modeled in the Contextual-Graphs formalism with the goal of proposing a solution to support pathologists in their diagnosis. Our Consultant belongs to the class of Context-based Intelligent Assistant Systems. The main tool of the Consultant is based on the simulation of expert practices described in a contextual graph. Starting from an image to analyze, the simulator develops the practice that is most adapted to the working context. The output of the simulation is the resulting practice and all information about how it was developed. The Consultant then proposes to the user a visualization of the results of the simulations, allowing them to be analyzed and compared.
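To give a concrete feel for how a practice is simulated from a contextual graph, here is a toy sketch in which contextual nodes branch on the value of a contextual element and action nodes accumulate the practice; the node contents are invented for illustration and do not come from the thesis.

```python
# Toy contextual graph: ("action", label, next) performs a step,
# ("context", element, {value: next}) branches on the working context.
graph = {
    "start":         ("action", "load slide image", "stain?"),
    "stain?":        ("context", "staining", {"HE": "count_mitoses", "IHC": "score_markers"}),
    "count_mitoses": ("action", "count mitoses in 10 fields", "end"),
    "score_markers": ("action", "score marker intensity", "end"),
}

def simulate(graph, context, node="start"):
    """Walk the graph under the given working context and return the
    resulting practice (the ordered list of actions performed)."""
    practice = []
    while node != "end":
        kind, payload, nxt = graph[node]
        if kind == "action":
            practice.append(payload)
            node = nxt
        else:                        # contextual node: pick the branch
            node = nxt[context[payload]]
    return practice

print(simulate(graph, {"staining": "HE"}))
# -> ['load slide image', 'count mitoses in 10 fields']
```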
40

Contextualizing Observational Data For Modeling Human Performance

Trinh, Viet 01 January 2009 (has links)
This research focuses on the ability to contextualize observed human behaviors in an effort to automate the process of tactical human performance modeling through learning from observations. This effort to contextualize human behavior is aimed at minimizing the role and involvement of the knowledge engineers required to build intelligent Context-based Reasoning (CxBR) agents. More specifically, the goal is to automatically discover the context in which a human actor is situated when performing a mission, in order to facilitate the learning of such CxBR models. This research is derived from the contextualization problem left behind in Fernlund's research on using the Genetic Context Learner (GenCL) to model CxBR agents from observed human performance [Fernlund, 2004]. To accomplish context discovery, this research proposes two contextualization algorithms: Contextualized Fuzzy ART (CFA) and Context Partitioning and Clustering (COPAC). The former is a more naive approach utilizing the well-known Fuzzy ART strategy, while the latter is a robust algorithm developed on the principles of CxBR. Using Fernlund's original five drivers, the CFA and COPAC algorithms were tested and evaluated on their ability to effectively contextualize each driver's individualized set of behaviors into well-formed and meaningful context bases, as well as to generate high-fidelity agents through integration with Fernlund's GenCL algorithm. The resultant set of agents was able to capture and generalize each driver's individualized behaviors.
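A minimal sketch of the Fuzzy ART step underlying the CFA algorithm is given below, with complement coding, the choice function, the vigilance test and fast learning; the parameter values and toy data are illustrative, not those used in the research.

```python
import numpy as np

def fuzzy_art(samples, rho=0.75, alpha=0.001, beta=1.0):
    """Cluster rows of 'samples' (features scaled to [0, 1]) with Fuzzy ART."""
    weights, labels = [], []
    for x in samples:
        i = np.concatenate([x, 1.0 - x])                     # complement coding
        scores = [np.minimum(i, w).sum() / (alpha + w.sum()) for w in weights]
        chosen = None
        for j in np.argsort(scores)[::-1]:                   # best category first
            if np.minimum(i, weights[j]).sum() / i.sum() >= rho:   # vigilance test
                chosen = int(j)
                break
        if chosen is None:                                   # no match: new category
            weights.append(i.copy())
            chosen = len(weights) - 1
        else:                                                # learning rule
            weights[chosen] = (beta * np.minimum(i, weights[chosen])
                               + (1 - beta) * weights[chosen])
        labels.append(chosen)
    return labels

# Toy example: two well-separated groups of 2-D points.
pts = np.array([[0.10, 0.10], [0.12, 0.09], [0.90, 0.85], [0.88, 0.90]])
print(fuzzy_art(pts))    # -> [0, 0, 1, 1]
```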
