51

Towards automatic asset management for real-time visualization of urban environments

Olsson, Erik January 2017 (has links)
This thesis describes a pipeline for reconstructing an urban environment from terrestrial laser scans and photogrammetric 3D maps of Norrköping, visualized in first person and in real time. Carried out together with Linköping University (LiU) and the city planning office of Norrköping, the project was a preliminary study to estimate how much work is needed and with what accuracy a few buildings can be recreated. The visualization is intended to demonstrate a new way of exploring the city in virtual reality and to present geometrical and textural detail at a higher quality than the 3D map the Municipality of Norrköping uses today. Until now, that map has only been intended for display from a bird's-eye view and has poor resolution at closer range. To improve the resolution, HDR photos were used to texture the laser-scanned model and to cover a particular area of the low-resolution 3D map. The thesis explains which method was used to process a point-based environment for texturing and how an environment was set up in Unreal using both the 3D map and the laser-scanned model.
52

Backward compatible approaches for the compression of high dynamic range videos / Approches rétro-compatibles pour la compression de vidéos à grande gamme dynamique

Le Pendu, Mikaël 17 March 2016 (has links)
In recent years, display technologies have been evolving rapidly. From 3D television to Ultra High Definition, the trend is now towards High Dynamic Range (HDR) displays that can reproduce a luminance range far beyond the capabilities of conventional displays. The emergence of this technology calls for new standardization effort in the field of video compression. For large-scale content distribution, the question of backward compatibility is critical. While the next generation of television displays will be adapted to this new format, older equipment must still be able to decode and display a version of the same content whose dynamic range has been reduced beforehand by a process called "tone mapping". This thesis explores backward-compatible HDR compression schemes. In a first approach, a tone mapping operator specified by the encoder is applied to the HDR image. The resulting image, called Low Dynamic Range (LDR), can then be encoded and decoded in a conventional format. The encoder additionally transmits a small amount of information enabling an HDR-capable decoder to invert the tone mapping operator and retrieve the HDR version. The study of these schemes is directed towards the definition of tone mapping operators optimized for compression performance.
We then focus on scalable approaches, where both versions are given to the encoder without prior knowledge of the tone mapping operator used. The producer thus keeps full control over the LDR content creation process. This LDR version is compressed as a first layer. The reconstructed image is used by the scalable encoder to compress the HDR layer efficiently through inter-layer prediction. Thanks to a local and non-linear approach, the proposed schemes improve coding performance over existing scalable methods, especially when a complex tone mapping is used to generate the LDR version.
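As a rough illustration of the inter-layer prediction idea described above (not the thesis' actual codec design, which uses a local, non-linear predictor inside a video coder), the Python sketch below fits a simple per-block linear inverse tone-mapping model to a decoded LDR block and predicts the co-located HDR block, so that only the residual and the model parameters would need to be coded. The block size, the toy tone mapping, and the linear model are illustrative assumptions.

```python
import numpy as np

def interlayer_predict(ldr_block, hdr_block):
    """Fit a per-block linear inverse tone-mapping model HDR ~ a*LDR + b and
    return the prediction, the residual, and the model parameters (a, b)."""
    x = ldr_block.astype(np.float64).ravel()
    y = hdr_block.astype(np.float64).ravel()
    a, b = np.polyfit(x, y, 1)            # least-squares slope and offset
    prediction = a * ldr_block + b         # inter-layer prediction of the HDR block
    residual = hdr_block - prediction      # only this (plus a, b) would be coded
    return prediction, residual, (a, b)

# Toy usage: an HDR block tone-mapped with a crude global power law (an assumption,
# not the operator studied in the thesis), then predicted back from its LDR version.
rng = np.random.default_rng(0)
hdr = rng.uniform(0.01, 4000.0, size=(8, 8))      # linear luminance values
ldr = 255.0 * (hdr / hdr.max()) ** (1.0 / 2.2)    # toy tone mapping to an 8-bit range
pred, res, params = interlayer_predict(ldr, hdr)
print(params, float(np.abs(res).mean()))
```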
53

Computational Verification of Illumination

Bheemeswara Aravind, Poolla January 2021 (has links)
Background: Automobile lighting is a major function on any vehicle: it illuminates the road so that drivers and other road users can see ahead, and it serves a variety of other purposes. It is also increasingly a design feature, with each manufacturer developing its own characteristic lamps that can be recognized from a considerable distance, and striving for homogeneous light output. As a result, it is critical to check and assess a lamp's homogeneity during product development to identify potential flaws. Objectives: This research presents an HDR image encoding for visualizing and verifying luminance data in image format, with false colour used for intuitive and subjective evaluation. Secondly, a mean-filter technique is used to validate an internal Volvo Cars lit-surface homogeneity requirement and to automate this time-consuming process. Lastly, an iso-contour approach is proposed and implemented as a simple yet effective verification method for distributed-light homogeneity. Methods: The research methods used in this study are a literature review and experiments. The literature review covers HDR image encoding of luminance data as well as existing light measurement and evaluation approaches. The appropriate approaches are then combined and implemented to produce a verification method that uses the homogeneity requirement to verify lit surfaces automatically. The thesis also presents iso-contour lines as a way of evaluating distributed light. Results: The findings demonstrate that it is possible to verify and evaluate luminance data obtained from simulation software and photometers without relying on any licensed light-evaluation software. The resulting methods are: HDR image encoding for visualization, false-colour evaluation of light, iso-contour lines for distributed-light verification, and an automatic homogeneity verification approach for lit surfaces that makes the illumination verification process efficient. Conclusions: The experiments provided a means of visualizing luminance data of both virtual and physical prototypes, verifying distributed light, and automatically verifying lit-surface homogeneity, while the literature review assisted in gathering information to better comprehend light evaluation methods.
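The abstract does not disclose the internal Volvo Cars requirement, so the Python sketch below only illustrates the general shape of a mean-filter homogeneity check on a luminance image: each pixel is compared against its local mean, and the surface fails if the relative deviation exceeds a threshold. The window size and threshold are placeholder assumptions, not the values or the exact criterion used in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def homogeneity_check(luminance, window=15, max_rel_dev=0.2):
    """Mean-filter homogeneity check on a luminance image (e.g. cd/m^2).
    `window` and `max_rel_dev` are illustrative placeholders."""
    lum = np.asarray(luminance, dtype=np.float64)
    local_mean = uniform_filter(lum, size=window)              # local mean luminance
    rel_dev = np.abs(lum - local_mean) / np.maximum(local_mean, 1e-6)
    return bool(rel_dev.max() <= max_rel_dev)                  # True if homogeneous
```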
54

Akcelerace HDR tone-mappingu na platformě Xilinx Zynq / HDR Tone-Mapping Acceleration on Xilinx Zynq Platform

Nosko, Svetozár January 2016 (has links)
This diploma thesis focuses on high-level synthesis (HLS). The first part deals with the theory and methods used in HLS tools, followed by a description of the Vivado HLS synthesis tool, which is used for the implementation. The second part briefly introduces high dynamic range (HDR) imaging and tone mapping. The third part is dedicated to the design and implementation of an application that applies tone mapping methods to HDR images. The methods are implemented in Vivado HLS in C++. The application targets the Xilinx Zynq platform and uses a multi-exposure camera for capturing HDR images; the images are transmitted to the FPGA for tone-mapping processing.
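The abstract does not name the specific tone mapping methods implemented, so as one widely used global operator the sketch below shows a Reinhard-style mapping in Python. The thesis itself implements its operators in C++ for Vivado HLS; an HLS version would additionally need fixed-point arithmetic and streaming interfaces.

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Global Reinhard-style tone mapping: scale the scene by its log-average
    luminance, then compress with L / (1 + L). Returns values in [0, 1)."""
    lum = np.asarray(luminance, dtype=np.float64)
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average ("key") of the scene
    scaled = key * lum / log_avg                     # map the scene key to `key`
    return scaled / (1.0 + scaled)                   # compress highlights
```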
55

Uživatelské rozhraní systému pro práci s HDR obrazem / User Interface for HDR Tone Mapping System

Jedlička, Jan January 2021 (has links)
The goal of this thesis is to improve the graphical user interface of the Tone Mapping Studio (TMS) program, which is being developed at the Faculty of Information Technology (FIT), Brno University of Technology (BUT) by doc. Ing. Martin Čadík, PhD. The current program uses the Qt3 framework, which is old and not compatible with modern libraries, so it has to be rewritten to support the current version, Qt5. I will analyze other programs for working with High Dynamic Range (HDR) images and video, propose changes to improve the interface, and carry out UX tests. The second part compares the plug-ins for converting images to grayscale that already exist in TMS.
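As a point of reference for the grayscale-conversion plug-ins mentioned above, the Python sketch below shows the simplest luminance-based conversion using Rec. 709 weights; the plug-ins actually compared in TMS may implement different, more sophisticated decolorization algorithms.

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Baseline colour-to-grayscale conversion using Rec. 709 luminance weights.
    Expects an (H, W, 3) array; an alpha channel, if present, is ignored."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    return np.asarray(rgb, dtype=np.float64)[..., :3] @ weights
```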
56

Compounds screening for the identification of novel drug to improve the Knock in efficiency mediated by CRISPR-Cas9

Anagnostou, Evangelia January 2023 (has links)
Genome editing is an exciting field that allows for the precise modification of an organism's DNA. One of the most advanced tools in this area is CRISPR/Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated protein 9), which creates a DSB (double-strand break) at a specific location in the genome. This break can then be repaired by the cell using one of two pathways: NHEJ (non-homologous end joining) or HDR (homology-directed repair). HDR leads to more precise repair and is used to create KI (knock-in) modifications by introducing a homologous piece of DNA carrying the desired changes. However, HDR is a rare event that competes with the error-prone NHEJ pathway, limiting its efficiency, and it mainly occurs in the S and G2 phases of the cell cycle, making it a challenge to control and target. To improve KI efficiency, researchers have used strategies such as inhibiting NHEJ or activating HDR. This study focuses on identifying direct and indirect activators of HDR through a library screening assay. We established a robust method for screening compounds in HEK293 cells that relies on plasmid-based delivery of Cas9, gRNA (guide RNA), and synthetic ssDNA (single-stranded DNA). Out of 3,000 compounds screened, 1% showed a higher signal than the positive control, and approximately 10% presented a higher signal than untreated cells. The top 5 compounds were further validated in dose-response experiments. Our system opens new avenues for improving the efficiency of KI modifications.
57

Structural characterization of plant derived HDR enzymes in the MEP pathway

Idman, Lukas January 2023 (has links)
No description available.
58

Identification of novel active Cas9 orthologs from metagenomic data

Demozzi, Michele 12 April 2022 (has links)
CRISPR-Cas is the state-of-the-art biological tool that allows precise and fast manipulation of the genetic information of cellular genomes. The translation of CRISPR-Cas technology from in vitro studies into clinical applications has highlighted a variety of limitations: the currently available systems are limited by their off-target activity, the requirement for a Cas-specific PAM sequence next to the target, and the size of the Cas protein. In particular, despite high levels of activity, the size of the CRISPR-SpCas9 editing machinery is not compatible with an all-in-one AAV delivery system, and the genomic sequences that can be targeted are limited by the 3′-NGG PAM dependency of the SpCas9 protein. To further expand the CRISPR tool repertoire we turned to metagenomic data of the human microbiome to search for uncharacterized CRISPR-Cas9 systems, and we identified a set of novel small Cas9 orthologs derived from the analysis of reconstructed bacterial metagenomes. In this thesis study, ten candidates were chosen according to their size (less than 1100 aa). The PAM preference of all ten orthologs was established by exploiting a bacterial-based and an in vitro platform. We demonstrated that three of them are active nucleases in human cells and that two of the three showed robust editing levels at endogenous loci, outperforming SpCas9 at particular targets. We expect these new variants to be very useful in expanding the available genome editing tools both in vitro and in vivo. Knock-out-based Cas9 applications are very efficient, but in many cases precise control of the repair outcome through HDR-mediated gene targeting is required. To address this issue, we also developed an MS2-based reporter platform to measure the frequency of HDR events and evaluate novel HDR-modulating factors. The platform was validated and could allow the screening of protein libraries to assess their influence on the HDR pathway.
59

Evaluation of a MapCHECK2™ Diode Array for High Dose Rate Brachytherapy Quality Assurance

Macey, Nathaniel J. January 2015 (has links)
No description available.
60

Development of High Speed High Dynamic Range Videography

Griffiths, David John 09 February 2017 (has links)
High speed video has been a significant tool for the quantitative and qualitative assessment of phenomena that are too fast to readily observe. It was first used in 1852 by William Henry Fox Talbot to settle a dispute with reference to the synchronous position of a horse's hooves while galloping. Since that time private industry, government, and enthusiasts have been measuring dynamic scenarios with high speed video. One challenge that faces the high speed video community is the dynamic range of the sensors. The dynamic range of a sensor is constrained by the bit depth of the analog-to-digital converter, the deep well capacity of the sensor site, and the baseline noise. A typical high speed camera natively spans a 60 dB dynamic range, 1000:1. More recently the dynamic range has been extended to about 80 dB utilizing different pixel acquisition methods. This dissertation presents and demonstrates a method that extends the dynamic range of a high speed camera system to over 170 dB, about 31,000,000:1. The proposed formation methodology is adaptable to any camera combination and almost any needed dynamic range. The dramatic increase in dynamic range is made possible through an adaptation of current high dynamic range image formation methodologies. Because of the high cost of high speed cameras, a minimum number of cameras is desired to form a high dynamic range high speed video system. With a reduced number of cameras spanning a significant range, the errors in the formation process compound significantly relative to a normal high dynamic range image. The increase in uncertainty stems from the lack of relevant correlated information for final image formation, necessitating the development of a new formation methodology. In the text that follows, the problem statement and background information are reviewed in depth, and the development of a new weighting function, a stochastic image formation process, a tone mapping methodology, and an optimized multi-camera design is presented. The proposed methodologies' effectiveness is compared to current methods throughout the text and a final demonstration is presented. / Ph. D. / High speed video is a tool that has been developed to capture events that occur faster than a human can observe. The use and prevalence of high speed video is rapidly expanding as cost drops and ease of use increases. It is currently used in private and government industries for quality control, manufacturing, and test evaluation, and in the entertainment industry for movie making and sporting events. Due to the specific hardware requirements of capturing high speed video, the dynamic range, the ratio of the brightest measurement to the darkest measurement the camera can acquire, is limited. The dynamic range limitation can be seen in a video as a white or black region with no discernible detail where there should be some; this is referred to as oversaturation or undersaturation. Presented in this document is a new method to capture high speed video utilizing multiple commercially available high speed cameras. An optimized camera layout is presented and a mathematical algorithm is developed for the formation of a video that will never be over- or under-saturated using a minimum number of cameras. This was done to reduce the overall cost and complexity of the setup while retaining an accurate image.
The concept is demonstrated with several examples of both controlled tests and explosive tests filmed up to 3,300 times faster than a standard video, with a dynamic range spanning over 310,000 times the capability of a standard high speed camera. The technology developed in this document can be used in the previously mentioned industries whenever the content being filmed oversaturates the imager. It was developed to be scalable, in order to capture extremely large dynamic range scenes; cost efficient, to broaden applicability; and accurate, to allow for a fragment-free final image.
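For context, the Python sketch below shows the classic weighted-average HDR merge from multiple exposures that the dissertation's formation methodology builds on; the hat-shaped weighting function and the assumption of linear pixel values in [0, 1] are illustrative defaults, not the new weighting function or stochastic formation process developed in the dissertation.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Baseline weighted-average HDR merge from linear images captured at
    different exposure times; returns a relative radiance map."""
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for im, t in zip(images, exposure_times):
        im = np.asarray(im, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * im - 1.0)    # hat weight, assuming values in [0, 1]
        num += w * (im / t)                 # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-6)      # weighted-average radiance
```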
