  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Operational data extraction using visual perception

Shunmugam, Nagarajan January 2021 (has links)
The information era has led truck manufacturers and logistics solution providers to incline towards software-as-a-service (SaaS) based solutions. With advancements in software technologies like artificial intelligence and deep learning, the domain of computer vision has achieved performance boosts significant enough to compete with hardware-based solutions. Firstly, data is collected from a large number of sensors, which can increase production costs and the environmental carbon footprint. Secondly, certain useful physical quantities/variables are impossible to measure, or measuring them turns out to be very expensive. In this dissertation, we therefore investigate the feasibility of providing a similar solution using a single sensor (a dashboard camera) to measure multiple variables. This remains a sustainable solution even when scaled up to huge fleets. The video frames collected from the visual perception of the truck (i.e., its on-board camera) are processed by deep learning techniques, and operational data are extracted. Techniques such as image classification and semantic segmentation were experimented with, and their outputs show potential to replace costly hardware counterparts such as lidar- or radar-based solutions.
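As a hedged illustration of the idea (the class labels and the "free-road ratio" variable below are hypothetical, not taken from the thesis), an operational signal can be derived per frame from a semantic segmentation mask by simple pixel statistics:

```python
# Hypothetical sketch: derive an operational variable (here, a "free-road
# ratio") from per-pixel semantic segmentation labels of dashcam frames.
# Class IDs are assumed for illustration: 0 = road, 1 = vehicle, 2 = other.

ROAD, VEHICLE, OTHER = 0, 1, 2

def free_road_ratio(mask):
    """Fraction of pixels labeled as drivable road in one frame's mask."""
    pixels = [label for row in mask for label in row]
    return pixels.count(ROAD) / len(pixels)

# Two toy 2x4 "frames": in the second, a vehicle occludes part of the road.
frame_a = [[ROAD, ROAD, ROAD, OTHER],
           [ROAD, ROAD, ROAD, OTHER]]
frame_b = [[ROAD, VEHICLE, VEHICLE, OTHER],
           [ROAD, VEHICLE, ROAD, OTHER]]

signal = [free_road_ratio(m) for m in (frame_a, frame_b)]
print(signal)  # [0.75, 0.375]
```

In practice the mask would come from a segmentation network run on each video frame; the point is only that a single camera plus per-pixel labels already yields a measurable time series.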
112

Analytický a syntetický přístup k řešení metrických úloh v prostoru / Analytic and synthetic approach to metrical tasks in space solving

Kreslová, Iva January 2019 (has links)
The diploma thesis deals with metric tasks in space, using synthetic and analytical geometry. In addition to explaining the different approaches, it contains a set of examples to practice on. The solutions of the examples are part of the Portal of High School Mathematics (Portál středoškolské matematiky), where analytical solutions, synthetic numerical solutions and synthetic constructional solutions can be found.
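As a small illustration of the analytic approach to a metric task in space (a generic textbook computation, not an example taken from the thesis), the distance from a point to a plane follows directly from the plane's general form:

```python
import math

def point_plane_distance(p, plane):
    """Distance from point p = (x, y, z) to the plane ax + by + cz + d = 0,
    given as plane = (a, b, c, d)."""
    a, b, c, d = plane
    x, y, z = p
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Distance from the point (1, 2, 3) to the plane 2x + 2y + z - 5 = 0:
print(point_plane_distance((1, 2, 3), (2, 2, 1, -5)))  # 4/3 = 1.333...
```

The synthetic approach would instead construct the foot of the perpendicular geometrically; the analytic route reduces the whole task to one formula.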
113

Trigonometry: Applications of Laws of Sines and Cosines

Su, Yen-hao 02 July 2010 (has links)
Chapter 1 presents the definitions and basic properties of trigonometric functions, including the Sum Identities, Difference Identities, Product-Sum Identities and Sum-Product Identities. These formulas provide effective tools for solving problems in trigonometry. Chapter 2 handles the two most important theorems in trigonometry, the laws of sines and cosines, and shows how they can be applied to derive many well-known theorems, including Ptolemy's theorem, the Euler Triangle Formula, Ceva's theorem, Menelaus's Theorem, the Parallelogram Law, Stewart's theorem and Brahmagupta's Formula. Moreover, formulas for computing a triangle's area, like Heron's formula and Pick's theorem, are also discussed. Chapter 3 deals with the method of superposition, inverse trigonometric functions, polar forms and De Moivre's Theorem.
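The two central theorems can be checked numerically (a generic illustration, not code from the thesis): the law of cosines recovers a triangle's third side, and Heron's formula then gives its area from the three sides:

```python
import math

def third_side(a, b, gamma):
    """Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(gamma)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

def heron_area(a, b, c):
    """Heron's formula: area from the three side lengths."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# A 3-4-5 right triangle: gamma = 90 degrees between the sides 3 and 4.
c = third_side(3, 4, math.pi / 2)
print(c)                    # 5.0 (up to floating point)
print(heron_area(3, 4, c))  # 6.0 (up to floating point)
```

For the right angle the law of cosines degenerates to Pythagoras, and Heron's area agrees with the elementary (1/2)*base*height = 6.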
114

Characterization of carotid artery plaques using noninvasive vascular ultrasound elastography

Li, Hongliang 09 1900 (has links)
Atherosclerosis is a complex vascular disease that affects artery walls (by thickening) and lumens (by plaque formation). The rupture of a carotid artery plaque may also induce ischemic stroke and complications. Despite the use of several medical imaging modalities to evaluate the stability of a plaque, they present limitations such as irradiation, invasiveness, low clinical availability and high cost. Ultrasound is a safe imaging method with real-time capability for the assessment of biological tissues. It is clinically used for early screening and diagnosis of carotid artery plaques. However, current vascular ultrasound technologies only identify the morphology of a plaque in terms of echo brightness, or the impact of the vessel narrowing on flow properties, which may not be sufficient for optimum diagnosis. Noninvasive vascular elastography (NIVE) has been shown to be of interest for determining the stability of a plaque. Specifically, NIVE can determine the strain field of the moving vessel wall of a carotid artery caused by the natural cardiac pulsation. Due to Young's modulus differences among different vessel tissues, different components of a plaque can be detected as they present different strains, thereby potentially helping to characterize the plaque stability. Currently, sub-optimum performance and computational efficiency limit the clinical acceptance of NIVE as a fast and efficient method for the early diagnosis of vulnerable plaques. Therefore, there is a need to further develop NIVE as a non-invasive, fast and low computational cost imaging tool to better characterize plaque vulnerability. The procedure to perform NIVE analysis consists of image formation and image post-processing steps.
This thesis aimed to systematically improve the accuracy of these two aspects of NIVE to facilitate predicting carotid plaque vulnerability. The first effort of this thesis was targeted at improving the image formation (Chapter 5). Transverse oscillation beamforming was introduced into NIVE. The performance of transverse oscillation imaging coupled with two model-based strain estimators, the affine phase-based estimator (APBE) and the Lagrangian speckle model estimator (LSME), was evaluated. For all simulations and in vitro studies, the LSME without transverse oscillation imaging outperformed the APBE with transverse oscillation imaging. Nonetheless, comparable or better principal strain estimates could be obtained with the LSME using transverse oscillation imaging in the case of complex and heterogeneous tissue structures. During the acquisition of ultrasound signals for image formation, out-of-plane motions perpendicular to the two-dimensional (2-D) scan plane exist. The second objective of this thesis was to evaluate the influence of out-of-plane motions on the performance of 2-D NIVE (Chapter 6). For this purpose, we designed an in vitro experimental setup to simulate out-of-plane motions of 1 mm, 2 mm and 3 mm. The in vitro results showed more strain estimation artifacts for the LSME with increasing magnitudes of out-of-plane motion. Even so, robust strain estimations were nevertheless obtained with 2.0 mm of out-of-plane motion (correlation coefficients higher than 0.85). For a clinical dataset of 18 participants with carotid artery stenosis, we proposed to use two datasets of scans of the same carotid plaque, one cross-sectional and the other in a longitudinal view, to deduce the out-of-plane motions (estimated to range from 0.25 mm to 1.04 mm).
Clinical results showed that strain estimations remained reproducible for all motion magnitudes, since inter-frame correlation coefficients were higher than 0.70 and normalized cross-correlations between radiofrequency images were above 0.93, which indicated that confident motion estimations can be obtained when analyzing clinical datasets of carotid plaques using the LSME. Finally, regarding the image post-processing component of NIVE, whose algorithms estimate strains of vessel walls from reconstructed images with the objective of identifying soft and hard tissues, we developed a strain estimation method with pixel-wise resolution as well as high computational efficiency (Chapter 7). We proposed a sparse model strain estimator (SMSE) for which the dense strain field is parameterized with Discrete Cosine Transform descriptions, thereby deriving affine strain components (axial and lateral strains and shears) without mathematical derivative operations. Compared with the LSME, the SMSE reduced estimation errors in simulations, in vitro and in vivo tests. Moreover, the sparse implementation of the SMSE reduced the processing time by a factor of 4 to 25 compared with the LSME in simulations and in the in vitro and in vivo results, suggesting a possible real-time implementation of NIVE.
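The parameterization idea behind derivative-free strain estimation can be sketched in one dimension (a simplified illustration under assumed notation, not the SMSE itself): when a displacement field is represented by a few cosine coefficients, its spatial derivative, the strain, follows analytically with no finite-difference operation:

```python
import math

# Hypothetical 1-D sketch of a DCT-style parameterization: displacement
# u(x) = sum_k c_k * cos(pi*k*x/L), so the strain du/dx follows
# analytically as -sum_k c_k * (pi*k/L) * sin(pi*k*x/L).

L = 1.0
coeffs = {1: 0.02, 3: -0.005}  # assumed sparse set of cosine coefficients

def displacement(x):
    return sum(c * math.cos(math.pi * k * x / L) for k, c in coeffs.items())

def strain(x):
    return -sum(c * (math.pi * k / L) * math.sin(math.pi * k * x / L)
                for k, c in coeffs.items())

# Sanity check against a central finite difference at x = 0.3:
x, h = 0.3, 1e-6
numeric = (displacement(x + h) - displacement(x - h)) / (2 * h)
print(abs(strain(x) - numeric) < 1e-6)  # True
```

A sparse coefficient set also hints at the speed gain: the whole dense field is described by a handful of numbers rather than one estimate per pixel.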
115

Analýza a získávání informací ze souboru dokumentů spojených do jednoho celku / Analysis and Data Extraction from a Set of Documents Merged Together

Jarolím, Jordán January 2018 (has links)
This thesis deals with mining relevant information from documents and with the automatic splitting of multiple documents merged together. Moreover, it describes the design and implementation of software for data mining from documents and for the automatic splitting of merged documents. Methods for acquiring textual data from scanned documents, named entity recognition, document clustering, their supporting algorithms, and metrics for the automatic splitting of documents are described. Furthermore, the algorithm of the implemented software is explained and the tools and techniques it uses are described. Lastly, the success rate of the implemented software is evaluated, and possible extensions and further development are discussed.
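One simple metric for the splitting step (a generic bag-of-words sketch with an assumed threshold, not the thesis's implementation) is the cosine similarity between consecutive pages; a sharp drop suggests a boundary between two merged documents:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def split_points(pages, threshold=0.2):
    """Indices where a new document likely starts (threshold is assumed)."""
    bags = [Counter(p.lower().split()) for p in pages]
    return [i for i in range(1, len(bags))
            if cosine(bags[i - 1], bags[i]) < threshold]

pages = [
    "invoice number 42 total amount due",
    "invoice number 42 payment terms amount",
    "curriculum vitae education work experience",
]
print(split_points(pages))  # [2]
```

Real systems would combine such a lexical signal with layout cues and named-entity overlap, but the thresholded similarity already captures the core idea.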
116

Reprezentace řešení autonomních lineárních diskrétních systémů a jejich aplikace v teorii řízení / Representations of Solutions to Autonomous Linear Discrete Systems and Their Applications in the Control Theory

Mencáková, Kristýna January 2020 (has links)
The dissertation deals with a system of linear discrete equations with delay and solves the Cauchy problem with a given initial condition using matrix functions defined therein. The derived formula is then applied to the problem of relative controllability of this system. A controllability criterion for the system is proved, the set of all control functions is found, and the minimal function satisfying the given problem is determined.
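The kind of system studied can be sketched as follows (a scalar toy case with assumed coefficients, not the thesis's matrix formulation): a linear discrete equation with delay m, stepped forward from its initial data:

```python
# Hypothetical scalar sketch of a linear discrete equation with delay m:
#   x(k+1) = a*x(k) + b*x(k - m) + u(k),  k = 0, 1, 2, ...
# with initial data x(-m), ..., x(0) given. The thesis treats the matrix
# analogue via specially defined matrix functions; here we only iterate.

def simulate(a, b, m, history, u, steps):
    """history = [x(-m), ..., x(0)]; u maps k -> control input."""
    x = list(history)  # x[-1] is the current state x(k)
    for k in range(steps):
        x.append(a * x[-1] + b * x[-1 - m] + u(k))
    return x[m:]       # [x(0), x(1), ..., x(steps)]

# Delay m = 2, zero control, constant initial history:
traj = simulate(a=0.5, b=0.25, m=2, history=[1.0, 1.0, 1.0],
                u=lambda k: 0.0, steps=3)
print(traj)  # [1.0, 0.75, 0.625, 0.5625]
```

Relative controllability then asks whether some choice of u(k) can steer x to a prescribed value in finitely many steps; the closed-form solution formula is what makes that question tractable.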
117

Komprese dat / Data compression

Krejčí, Michal January 2009 (has links)
This thesis deals with lossless and lossy methods of data compression and their possible applications in measurement engineering. The first part of the thesis is a theoretical elaboration which introduces the basic terminology, the reasons for data compression, the use of data compression in common practice, and the classification of compression algorithms. The practical part of the thesis deals with the implementation of compression algorithms in Matlab and LabWindows/CVI.
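A minimal lossless example of the kind such a toolbox might contain (a generic run-length coding sketch in Python rather than Matlab, not code from the thesis):

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

def rle_decode(runs):
    """Invert rle_encode back to the original sequence."""
    return [value for value, count in runs for _ in range(count)]

# Measurement-style signal with long constant runs compresses well:
signal = [0, 0, 0, 5, 5, 1, 0, 0]
encoded = rle_encode(signal)
print(encoded)                        # [(0, 3), (5, 2), (1, 1), (0, 2)]
print(rle_decode(encoded) == signal)  # True
```

Run-length coding is lossless (the round trip is exact); a lossy method would instead discard signal detail, e.g. by quantization, trading fidelity for ratio.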
118

Výukový video kodek / Educational video codec

Dvořák, Martin January 2012 (has links)
The first goal of the diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, on the basis of this information about compression tools, to implement the individual compression tools in the Matlab programming environment and to assemble a simple model of a video codec. The diploma thesis contains a description of the three basic blocks, namely interframe coding, intraframe coding and variable-length coding, according to the MPEG-2 standard.
119

Wireless Networking in Future Factories: Protocol Design and Evaluation Strategies

Naumann, Roman 17 January 2020 (has links)
As smart factory trends gain momentum, there is a growing need for robust information transmission protocols that make available the sensor information gathered by individual machines. Wireless transmission provides the required flexibility for industry adoption, but poses challenges for timely and reliable information delivery in challenging industrial environments. This work focuses on protocol design and evaluation aspects for industrial applications. We first introduce the industrial use case, identify requirements and derive concrete design principles that protocols should implement. We then propose mechanisms that implement these principles for different types of protocols and that retain compatibility with existing networks and hardware to varying degrees: we show that use-case-tailored prioritization at the source is a powerful tool to implement robustness against challenged connectivity, by conveying an accurate preview of information from the production process. We also derive precise bounds for the quality of that preview. Moving parts of the computational work into the network, we show that reordering queues in accordance with our prioritization scheme improves fairness among machines. We also demonstrate that network coding can benefit our use case by introducing specialized encoding and decoding mechanisms. Last, we propose a novel architecture and evaluation technique that allows incorporating possibly proprietary networking protocol implementations into modern discrete event network simulators, rendering, among other things, the adaptation of protocols to specific industrial use cases more cost-efficient. We demonstrate that our approach provides sufficient performance and improves the validity of evaluation results over the state of the art.
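The queue-reordering idea can be sketched as follows (a generic round-robin interleaving over per-machine backlogs, an assumed simplification rather than the protocols' actual algorithm): instead of serving packets in arrival order, the queue is reordered so each machine's data advances in turn:

```python
from collections import deque, defaultdict

def fair_reorder(packets):
    """Reorder (machine_id, payload) packets round-robin by machine,
    preserving per-machine order, so no single machine hogs the queue."""
    backlog = defaultdict(deque)
    order = []                       # machines in first-seen order
    for machine, payload in packets:
        if machine not in backlog:
            order.append(machine)
        backlog[machine].append(payload)
    out = []
    while any(backlog[m] for m in order):
        for m in order:
            if backlog[m]:
                out.append((m, backlog[m].popleft()))
    return out

# Machine A bursts three packets before B and C get a turn:
arrivals = [("A", 1), ("A", 2), ("A", 3), ("B", 1), ("B", 2), ("C", 1)]
print(fair_reorder(arrivals))
# [('A', 1), ('B', 1), ('C', 1), ('A', 2), ('B', 2), ('A', 3)]
```

In the work itself, the reordering follows the prioritization scheme and runs partly on nodes inside the network; the sketch only shows why interleaving improves fairness under bursty senders.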
120

Studying the effectiveness of dynamic analysis for fingerprinting Android malware behavior / En studie av effektivitet hos dynamisk analys för kartläggning av beteenden hos Android malware

Regard, Viktor January 2019 (has links)
Android is the second most targeted operating system among malware authors, and to counter the development of Android malware, more knowledge about its behavior is needed. There are mainly two approaches to analyzing Android malware, namely static and dynamic analysis. In 2017, a study and well-labeled dataset named AMD (Android Malware Dataset), consisting of over 24,000 malware samples, was released. It is divided into 135 varieties based on similar malicious behavior, retrieved through static analysis of the file classes.dex in the APK of each malware sample, whereas the labeled features were determined by manual inspection of three samples in each variety. However, static analysis is known to be weak against obfuscation techniques, such as repackaging or dynamic loading, which can be exploited to avoid the analysis. In this study the second approach is utilized, and all malware in the dataset is analyzed at run-time in order to monitor its dynamic behavior. However, analyzing malware at run-time has known weaknesses as well, as it can be avoided through, for instance, anti-emulator techniques. Therefore, the study aimed to explore the available sandbox environments for dynamic analysis, study the effectiveness of fingerprinting Android malware using one of the tools, and investigate whether the static features from AMD and the dynamic analysis correlate, for instance by attempting to classify the samples based on similar dynamic features and calculating the Pearson correlation coefficient (r) for all combinations of features from AMD and the dynamic analysis. The comparison of tools for dynamic analysis showed a need for development, as the most popular tools were released a long time ago and their common factor is a lack of continuous maintenance. As a result, the sandbox environment chosen for this study was Droidbox, because of aspects like ease of use and installation and easy adaptability to large-scale analysis.
Based on the dynamic features extracted with Droidbox, it could be shown that Android malware samples are most similar to the varieties to which they belong. The best metric for classifying samples into varieties, out of four investigated metrics, turned out to be cosine similarity, which achieved an accuracy of 83.6% for the entire dataset. The high accuracy indicated a correlation between the dynamic features and the static features on which the varieties are based. Furthermore, the Pearson correlation coefficient confirmed that the manually extracted features used to describe the varieties and the dynamic features are correlated to some extent, which could be partially confirmed by a manual inspection at the end of the study.
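The correlation step can be illustrated directly (a generic computation on made-up feature counts, not the study's data): the Pearson correlation coefficient r between a static feature and a dynamic feature across samples:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up example: a static feature (e.g. flagged API count per variety)
# against a dynamic feature (e.g. observed network operations per variety).
static_feature = [1, 2, 3, 4, 5]
dynamic_feature = [2, 4, 6, 8, 10]
print(pearson_r(static_feature, dynamic_feature))  # ~1.0 (perfectly correlated)
```

Computing r for every pair of static and dynamic features, as the study does, then surfaces which manually labeled behaviors actually show up at run-time.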
