111 |
Monitorovací systém pro zjištění motility a polohy laboratorních zvířat po anestézii / Monitoring system for detecting the motility and position of laboratory animals after anesthesia
Enikeev, Amir. January 2019 (has links)
This diploma thesis, entitled "Monitoring system for the detection of motility and position of laboratory animals after anesthesia", focuses on the design and implementation of contactless detection of the position of a rat or mouse in an enclosure with a transparent cover. The aim of this semester project is to find suitable methods for contactless detection of the position of a laboratory rat or mouse. The automatic localization of the animal will serve as the basis for controlling an irradiator in the follow-up work, which will "shadow" the animal's movement and aim at the scar on the animal's back. The rat inside the enclosure is either moving normally or is still dazed after anesthesia. In this work I first survey automatic monitoring systems for detecting the position of animals in an enclosure. Then, in the practical part, I test three types of cameras for image-based detection of the rat's position. The evaluation software for motion analysis will largely be developed in the follow-up diploma thesis. The project is implemented as monitoring and detection software based on OpenCV.
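The core of such motility detection can be sketched with simple frame differencing. This is an illustrative sketch, not the thesis's code: a real implementation would use OpenCV (`cv2.absdiff`, `cv2.threshold`) on camera frames, and the threshold value here is an arbitrary assumption.

```python
# Hypothetical sketch of a motility measure via frame differencing over
# grayscale frames. OpenCV would normally do this (cv2.absdiff); plain
# nested lists keep the idea visible without dependencies.

def motility_score(prev_frame, curr_frame, threshold=25):
    """Fraction of pixels whose intensity changed by more than `threshold`."""
    changed = total = 0
    for row_a, row_b in zip(prev_frame, curr_frame):
        for a, b in zip(row_a, row_b):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total if total else 0.0

if __name__ == "__main__":
    still = [[10] * 4 for _ in range(4)]
    moved = [[10] * 4 for _ in range(4)]
    moved[1][1] = moved[1][2] = 200   # the animal shifted
    print(motility_score(still, still))   # 0.0 -> no movement
    print(motility_score(still, moved))   # 0.125 -> 2 of 16 pixels changed
```

A score near zero over many consecutive frames would indicate an animal still dazed after anesthesia.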
|
112 |
Hierarchické techniky pro výpočet osvětlení / Hierarchical Techniques in Lighting Computation
Ligmajer, Jiří. January 2012 (has links)
This master's thesis describes hierarchical techniques for global illumination computation. It explains the importance of hierarchical techniques in lighting computation and shows how they can be used in real-time radiosity and its extension to dynamic area lighting. These two techniques are described in detail in the first part of the thesis. The second part covers the design and implementation of an application for dynamic area-light computation.
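The radiosity equation that such techniques accelerate can be sketched directly. This is a minimal illustration under invented numbers, not the thesis's hierarchical solver: it iterates B = E + ρFB for two patches with made-up form factors, which is the system hierarchical methods solve faster by clustering patches.

```python
# Minimal sketch of the radiosity gathering iteration B = E + rho * F * B.
# The form factors F are invented values for two facing patches, not
# computed from geometry; a hierarchical method would cluster patches
# to avoid the O(n^2) form-factor matrix.

def solve_radiosity(emission, reflectance, form_factors, iters=100):
    n = len(emission)
    b = list(emission)
    for _ in range(iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

if __name__ == "__main__":
    E = [1.0, 0.0]          # patch 0 is a light source
    rho = [0.5, 0.5]        # diffuse reflectance
    F = [[0.0, 0.4],        # fraction of energy leaving i that reaches j
         [0.4, 0.0]]
    print(solve_radiosity(E, rho, F))
```

The fixed point here is B0 = 1/0.96 and B1 = 0.2/0.96: the dark patch receives bounced light from the emitter.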
|
113 |
An Efficient Framework for Processing and Analyzing Unstructured Text to Discover Delivery Delay and Optimization of Route Planning in Realtime / Un framework efficace pour le traitement et l'analyse des textes non structurés afin de découvrir les retards de livraison et d'optimiser la planification de routes en temps réel
Alshaer, Mohammad. 13 September 2019 (has links)
L'Internet des objets, ou IdO (en anglais Internet of Things, ou IoT) conduit à un changement de paradigme du secteur de la logistique. L'avènement de l'IoT a modifié l'écosystème de la gestion des services logistiques. Les fournisseurs de services logistiques utilisent aujourd'hui des technologies de capteurs telles que le GPS ou la télémétrie pour collecter des données en temps réel pendant la livraison. La collecte en temps réel des données permet aux fournisseurs de services de suivre et de gérer efficacement leur processus d'expédition. Le principal avantage de la collecte de données en temps réel est qu’il permet aux fournisseurs de services logistiques d’agir de manière proactive pour éviter des conséquences telles que des retards de livraison dus à des événements imprévus ou inconnus. De plus, les fournisseurs ont aujourd'hui tendance à utiliser des données provenant de sources externes telles que Twitter, Facebook et Waze, parce que ces sources fournissent des informations critiques sur des événements tels que le trafic, les accidents et les catastrophes naturelles. Les données provenant de ces sources externes enrichissent l'ensemble de données et apportent une valeur ajoutée à l'analyse. De plus, leur collecte en temps réel permet d’utiliser les données pour une analyse en temps réel et de prévenir des résultats inattendus (tels que le délai de livraison, par exemple) au moment de l’exécution. Cependant, les données collectées sont brutes et doivent être traitées pour une analyse efficace. La collecte et le traitement des données en temps réel constituent un énorme défi. La raison principale est que les données proviennent de sources hétérogènes avec une vitesse énorme. La grande vitesse et la variété des données entraînent des défis pour effectuer des opérations de traitement complexes telles que le nettoyage, le filtrage, le traitement de données incorrectes, etc. 
La diversité des données - structurées, semi-structurées et non structurées - favorise les défis dans le traitement des données à la fois en mode batch et en temps réel. Parce que, différentes techniques peuvent nécessiter des opérations sur différents types de données. Une structure technique permettant de traiter des données hétérogènes est très difficile et n'est pas disponible actuellement. En outre, l'exécution d'opérations de traitement de données en temps réel est très difficile ; des techniques efficaces sont nécessaires pour effectuer les opérations avec des données à haut débit, ce qui ne peut être fait en utilisant des systèmes d'information logistiques conventionnels. Par conséquent, pour exploiter le Big Data dans les processus de services logistiques, une solution efficace pour la collecte et le traitement des données en temps réel et en mode batch est essentielle. Dans cette thèse, nous avons développé et expérimenté deux méthodes pour le traitement des données: SANA et IBRIDIA. SANA est basée sur un classificateur multinomial Naïve Bayes, tandis qu'IBRIDIA s'appuie sur l'algorithme de classification hiérarchique (CLH) de Johnson, qui est une technologie hybride permettant la collecte et le traitement de données par lots et en temps réel. SANA est une solution de service qui traite les données non structurées. Cette méthode sert de système polyvalent pour extraire les événements pertinents, y compris le contexte (tel que le lieu, l'emplacement, l'heure, etc.). En outre, il peut être utilisé pour effectuer une analyse de texte sur les événements ciblés. IBRIDIA a été conçu pour traiter des données inconnues provenant de sources externes et les regrouper en temps réel afin d'acquérir une connaissance / compréhension des données permettant d'extraire des événements pouvant entraîner un retard de livraison. 
Selon nos expériences, ces deux approches montrent une capacité unique à traiter des données logistiques / Internet of Things (IoT) is leading to a paradigm shift within the logistics industry. The advent of IoT has been changing the logistics service management ecosystem. Logistics service providers today use sensor technologies such as GPS or telemetry to collect data in realtime while the delivery is in progress. The realtime collection of data enables the service providers to track and manage their shipment process efficiently. The key advantage of realtime data collection is that it enables logistics service providers to act proactively to prevent outcomes such as delivery delay caused by unexpected/unknown events. Furthermore, providers today tend to use data stemming from external sources such as Twitter, Facebook, and Waze, because these sources provide critical information about events such as traffic, accidents, and natural disasters. Data from such external sources enrich the dataset and add value to the analysis. Moreover, collecting them in real-time provides an opportunity to use the data for on-the-fly analysis and to prevent unexpected outcomes (such as delivery delay) at run-time. However, the data are collected raw and need to be processed for effective analysis. Collecting and processing data in real-time is an enormous challenge. The main reason is that the data stem from heterogeneous sources at very high speed. The high speed and variety of the data make complex processing operations such as cleansing, filtering, and handling incorrect data challenging. The variety of data – structured, semi-structured, and unstructured – also complicates processing in both batch style and real-time, because different types of data may require different processing techniques. A technical framework that enables the processing of heterogeneous data is highly challenging to build and is not currently available.
In addition, performing data processing operations in real-time is heavily challenging; efficient techniques are required to carry out the operations on high-speed data, which cannot be done using conventional logistics information systems. Therefore, in order to exploit Big Data in logistics service processes, an efficient solution for collecting and processing data in both realtime and batch style is critically important. In this thesis, we developed and experimented with two data processing solutions: SANA and IBRIDIA. SANA is built on a multinomial Naïve Bayes classifier, whereas IBRIDIA relies on Johnson's hierarchical clustering (HCL) algorithm; it is a hybrid technology that enables data collection and processing in both batch style and realtime. SANA is a service-based solution that deals with unstructured data. It serves as a multi-purpose system to extract relevant events, including the context of each event (such as place, location, time, etc.). In addition, it can be used to perform text analysis over the targeted events. IBRIDIA was designed to process unknown data stemming from external sources and cluster them on-the-fly in order to gain knowledge/understanding of the data, which assists in extracting events that may lead to delivery delay. According to our experiments, both approaches show a unique ability to process logistics data. However, SANA proved more promising, since its underlying technology (the Naïve Bayes classifier) outperformed IBRIDIA in our performance measurements. SANA generates a knowledge graph from events immediately as they arrive in realtime, without any need to wait, thus extracting maximum benefit from these events, whereas IBRIDIA is most valuable within the logistics domain for identifying the most influential category of events affecting delivery.
IBRIDIA, however, must wait for a minimum number of events to arrive and always faces a cold start. Since we are interested in re-optimizing the route on the fly, we adopted SANA as our data processing framework.
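The classifier family behind SANA can be sketched compactly. This is a hedged illustration, not the thesis's implementation: a multinomial Naïve Bayes text classifier with Laplace smoothing, trained on a tiny invented set of "delay" vs. "normal" event texts.

```python
# Sketch of a multinomial Naive Bayes text classifier with Laplace
# smoothing (the technique SANA is described as building on). The
# training examples are invented for illustration only.
import math
from collections import Counter

class NaiveBayes:
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)                 # class frequencies
        self.counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(doc.lower().split())
        self.vocab = set(w for c in self.classes for w in self.counts[c])
        self.n = len(labels)

    def predict(self, doc):
        def log_score(c):
            total = sum(self.counts[c].values())
            s = math.log(self.prior[c] / self.n)
            for w in doc.lower().split():
                # add-one (Laplace) smoothing over the shared vocabulary
                s += math.log((self.counts[c][w] + 1) /
                              (total + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)

if __name__ == "__main__":
    docs = ["heavy traffic accident on highway",
            "truck stuck accident delay",
            "package delivered on time",
            "smooth route delivery done"]
    labels = ["delay", "delay", "normal", "normal"]
    nb = NaiveBayes()
    nb.fit(docs, labels)
    print(nb.predict("accident and traffic ahead"))  # delay
```

In a logistics pipeline, the predicted label would feed the route re-optimization step for shipments whose incoming event texts classify as "delay".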
|
114 |
Generating Procedural Environments using Masks : Layered Image Document to Real-time environment
Eldstål, Emil. January 2019 (has links)
This paper explores the possibilities of using an automated, self-made procedural tool to create real-time environments based on simple image masks. The purpose is to enable a concept artist or level designer to quickly get results into a game engine and explore ideas. The goal of this thesis was to better understand how simple ideas and shapes can be broken down into more complex details and assets. In the first part of this thesis, I go over the traditional workflow of creating a real-time environment. I then break down my tool, what it does and how it works: I start with a Photoshop file, build tools in Houdini, and then use those in Unreal for the end result. I also discuss the time-saving possibilities of these tools. From the work, I conclude that these kinds of tools save a lot of time for repetitive tasks and the creation of similar environments.
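The mask-to-environment step can be sketched in miniature. This is an illustrative stand-in, not the thesis's Houdini tool: it turns a binary layer mask (like one layer of the Photoshop file) into jittered world-space placement points, the way a scatter node might.

```python
# Illustrative sketch: convert a binary "tree" mask into placement points
# for assets, with a deterministic jitter per cell. Cell size and the
# mask itself are invented; a real pipeline would read the PSD layer and
# scatter in Houdini.
import random

def scatter_from_mask(mask, cell_size=1.0, seed=42):
    rng = random.Random(seed)       # fixed seed -> repeatable layouts
    points = []
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value:               # place one asset per filled mask cell
                jx = rng.uniform(0.0, cell_size)
                jy = rng.uniform(0.0, cell_size)
                points.append((x * cell_size + jx, y * cell_size + jy))
    return points

if __name__ == "__main__":
    tree_mask = [[0, 1, 0],
                 [1, 1, 0],
                 [0, 0, 1]]
    pts = scatter_from_mask(tree_mask, cell_size=10.0)
    print(len(pts))  # 4 assets, one per filled cell
```

Because the jitter is seeded, re-running the tool after editing the mask only moves assets whose cells changed their iteration order, which is the kind of repeatability a level designer needs when iterating.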
|
115 |
連續性審計理論分析與系統技術探討-以物件式雛型系統為例
周濟群. Unknown Date (has links)
傳統的財務報表審計服務,其查核作業的執行時點主要是以會計期間為基準,每季、每半年或一年才針對企業財務揭露,進行事後的交易查核。然而,近年來企業普遍地應用網際網路和全球資訊網技術之後,在沒有任何第三者稽核的情況下,企業的財務資訊幾乎可選擇於任何時點在網路上公開揭露,再加上投資者藉由網路瀏覽器等工具蒐集或分析這些資訊的成本降低,更使得此類資訊被使用的機率大幅增加。由資訊效率的觀點來看,市場投資者使用這些未經查核的資訊,將可能導致因資訊信賴度低而產生的資訊不效率,故傳統期間性的審計服務,顯然亦即當即時性網路資訊環境逐漸形成時,不但已不能提昇資本市場網路時代的資訊品質,同時也會逐漸喪失審計專業一直所強調的審計品質和權威性,故為了因應此一即時性資訊揭露市場的來臨,審計專業該當尋求更符合即時性資訊環境的審計方法。
改善資訊效率的方法之一,即所謂連續性審計(Continuous Auditing)的觀念,其目的乃是希望擴充即時線上資訊系統診斷機能至外部審計服務,以達成「交易結束後,立即進行查核;財務報表發布後,立即出具審計意見報告」的目標。但此一課題目前仍屬新興階段,不論是理論或技術架構皆存在許多未盡之處。例如連續性審計是否在任何經濟環境下均較具效率?或是某些經濟條件必須符合時,才適合應用連續性審計方法?此等重要的經濟適用性問題,均無任何研究曾明確地交代。此外,如何有效率地整合各種資訊技術,來實地發展連續性審計技術,以對目前網路財務揭露系統進行連續性審計?而完整的一般化系統架構、技術指引與系統發展方法論又如何建立?這些問題,在相關的文獻中,均皆未能提出適切的答案。
本研究即針對上述各項議題分別提出解決方案,首先從資訊經濟學理論的角度,探討在連續性財務資訊揭露的環境下,連續性審計的必要性,並以較嚴謹的定義,來建立連續性審計的理論架構,並討論可能影響連續性審計效率性的各種經濟條件;確認連續性審計的重要性後,其次將整合應用審計專業知識、連續性審計觀念架構與相關的資訊技術,以發展出適用於連續性審計的一般化技術架構;最後則依照連續性審計一般化的技術架構,實地設計出一個應用物件技術的連續性審計雛型系統,以驗證連續性審計理論與技術架構的可行性。 / Over the years, regulatory bodies have consistently emphasized the importance of timely accounting information in their formal statements. Despite this emphasis, professional accountants could not achieve timeliness due to the lack of realtime disclosure technology. In the past few years, however, the situation has been changing dramatically. Innovative technologies for both the production (such as on-line transaction processing and on-line analytical processing) and the dissemination (such as Internet distributed-object technology and World-Wide-Web technology) of real-time accounting information have made timeliness feasible. In fact, there is strong evidence to believe that more and more public companies will post timely financial or operating information on the Internet in the near future. Although the increasing provision of timely accounting numbers on the web is expected to strengthen the quality of accounting information, the information asymmetry problem behind this web-disclosure behavior will still trouble both reporting companies and information users, since these web releases usually remain unaudited.
Recently, a joint report by the AICPA and CICA emphasized the importance of continuous auditing as a solution to this emerging web-release problem. Unfortunately, beyond raising awareness, the official reports provide little insight into either the theoretical or the technical framework of continuous auditing. For instance, what exactly are the economic definition and implications of continuous auditing? Would continuous auditing be the most efficient way to audit real-time information, and how can this be proved? Technically, how can this new approach be conducted successfully, and on what kind of information technology can it best be implemented?
From a research design standpoint, the above problems cannot appropriately be answered through an empirical approach, since no continuous auditing practice yet exists. Accordingly, the most pressing research task is to construct a complete theoretical and technical framework for continuous auditing. Motivated by the need to examine both frameworks, this thesis addresses the following issues. First, formal modeling tools are adopted to analyze different auditing approaches and to show why, from an economic view, continuous auditing dominates the alternatives along several dimensions. Secondly, we derive a generic technical framework from the continuous auditing concepts to guide the implementation. Finally, an object-oriented prototype system implemented in Java is developed to support the proposed theoretical and generic technical framework.
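The central idea of continuous auditing — checking each transaction against audit rules the moment it occurs, rather than sampling after period end — can be sketched briefly. This is a toy illustration in Python rather than the thesis's Java prototype, and the rules and transaction fields are invented.

```python
# Toy sketch of the continuous-auditing loop: audit rules fire on each
# transaction as it arrives, so exceptions surface immediately instead of
# at period end. Rule names and transaction fields are invented.

def continuous_audit(transactions, rules):
    """Yield (transaction, violated_rule_names) the moment a rule fires."""
    for tx in transactions:
        violated = [name for name, rule in rules.items() if not rule(tx)]
        if violated:
            yield tx, violated

if __name__ == "__main__":
    rules = {
        "positive_amount": lambda tx: tx["amount"] > 0,
        "approved": lambda tx: tx.get("approver") is not None,
    }
    stream = [
        {"id": 1, "amount": 120.0, "approver": "kim"},
        {"id": 2, "amount": -50.0, "approver": None},
        {"id": 3, "amount": 9000.0, "approver": "lee"},
    ]
    for tx, violations in continuous_audit(stream, rules):
        print(tx["id"], violations)
```

Because the check is a generator over the transaction stream, an audit opinion can in principle be refreshed as soon as the last transaction of a reporting period clears, which is the "audit immediately after the transaction" goal described above.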
|
116 |
Entwicklung alternativer Auswerteverfahren für Mikrowellendopplersignale bei der Geschwindigkeitsbestimmung im Bahnverkehr / New Methods for the Analysis of Doppler-Radar-Signals in Train Speed Measurement
Kakuschke, Chris. 24 June 2004 (has links) (PDF)
To measure the speed of a vehicle, the revolution of a wheel or a rigid axle is traditionally used. This introduces non-correctable systematic errors caused by slip, spin, and the change of wheel diameter due to fretting. Train control and traction systems require new methods of speed measurement that are both robust and precise. Because of their physical properties, Doppler-radar sensors attached to the vehicle and measuring ground speed are the first choice for this range of applications. Currently used sensors cannot fulfil the high demands under all operating conditions, because they are unable to completely compensate for the various interferences and systematic deviations.
This is the starting point of this dissertation. Two independently derived methods, each optimised for reliability and accuracy, must be used together to meet all requirements. The limited resources of the embedded digital signal processor system under real-time conditions have to be taken into account. Within these boundary conditions, the introductory chapters critically discuss the frequency analysis methods currently used and identify starting points for further development. This leads to the design of a new, robust, wide-band spectral analysis which combines techniques of the dyadic wavelet transformation with the fast Fourier transformation. At the same time, a new frame procedure and a general model for the estimation of motion parameters are developed which feature short delays. The disadvantages of block-based discrete spectral analysis compared with continuous approaches are largely compensated. The block structure of the spectral data enables the selective use of new knowledge-based spectral filters to suppress the remaining strong interferences which are typical of this kind of application. / Die Fahrzeuggeschwindigkeitsmessung über die Drehzahl eines Rades weist in Schlupf- und Schleuderzuständen erhebliche systematische Abweichungen auf. Deshalb erfordern Zugbeeinflussungs- und Antriebssysteme neue gleichzeitig robuste und präzise Geschwindigkeitsmessmethoden. Die Mikrowellensensorik unter Nutzung des Dopplereffekts zwischen Fahrzeug und Gleisbett wird wegen ihrer physikalischen Eigenschaften für dieses Einsatzgebiet favorisiert. Bisherige Sensorapplikationen erfüllen aber die hohen Ansprüche nicht in allen Betriebszuständen.
Hier setzt die in dieser Arbeit beschriebene Sensorentwicklung auf. Zwei getrennt hergeleitete und nach Zuverlässigkeit und Genauigkeit optimierte neue Verfahren können bei gleichzeitiger Anwendung die gestellten Anforderungen erfüllen. Dabei müssen auch die beschränkten Ressourcen des eingebetteten digitalen Signalverarbeitungssystems unter Echtzeitbedingungen berücksichtigt werden. Entsprechend dieser Randbedingungen findet einleitend eine kritische Betrachtung bestehender Frequenzanalysemethoden statt und Ansätze für die Weiterentwicklung werden herausgearbeitet. Einerseits führt dies zur Konstruktion einer neuen störunempfindlichen Weitbereichsspektralzerlegung, welche Ansätze der dyadischen Wavelettransformation mit der Diskreten Fourier-Transformation verbindet. Andererseits wird ein neues Rahmenverfahren für die verzögerungsarme Schätzung der Bewegungsparameter des Fahrzeuges aufgrund seines physikalischen Bewegungsmodells hergeleitet und mit einem hochgenauen Frequenzauswerteverfahren kombiniert. Beide Verfahren basieren auf blockweisen diskreten Spektralzerlegungen, deren prinzipielle Nachteile gegenüber kontinuierlichen Ansätzen weitgehend kompensiert werden können. Durch die Blockorganisation lassen sich neuartige wissensbasierte Spektralfilter selektiv zur Unterdrückung starker bahnanwendungstypischer Störeinflüsse einsetzen.
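The Doppler principle underlying the sensor can be illustrated numerically. This is a hedged sketch, not the dissertation's method: it finds the dominant frequency of a synthetic sensor signal with a naive DFT (where the thesis develops a far more robust wavelet/FFT hybrid), then converts the Doppler shift to ground speed via v = f_d · c / (2 f_tx cos α). The radar frequency, mounting angle, and sample rate are invented example values.

```python
# Numerical sketch of Doppler speed estimation: locate the dominant
# spectral peak of a sampled signal (naive DFT, for clarity only) and
# convert the Doppler shift to speed. f_tx, alpha and fs are assumptions.
import cmath, math

def dominant_frequency(signal, sample_rate):
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):          # positive-frequency bins only
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * sample_rate / n

def doppler_speed(f_doppler, f_tx=24.125e9, alpha_deg=45.0):
    """v = f_d * c / (2 * f_tx * cos(alpha)), alpha = beam mounting angle."""
    c = 299_792_458.0
    return f_doppler * c / (2 * f_tx * math.cos(math.radians(alpha_deg)))

if __name__ == "__main__":
    fs, n, f_d = 8000.0, 256, 1000.0     # synthetic 1 kHz Doppler tone
    sig = [math.cos(2 * math.pi * f_d * t / fs) for t in range(n)]
    est = dominant_frequency(sig, fs)
    print(est, doppler_speed(est))
```

Real signals add slip-like broadband interference and vibration harmonics on top of this tone, which is exactly why the dissertation replaces the plain peak search with knowledge-based spectral filtering.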
|
117 |
Techniques for automated and interactive note sequence morphing of mainstream electronic music
Wooller, René William. January 2007 (has links)
Note sequence morphing is the combination of two note sequences to create a ‘hybrid transition’, or ‘morph’. The morph is a ‘hybrid’ in the sense that it exhibits properties of both sequences. The morph is also a ‘transition’, in that it can segue between them. An automated and interactive approach allows manipulation in realtime by users who may control the relative influence of source or target and the transition length. The techniques that were developed through this research were designed particularly for popular genres of predominantly instrumental electronic music which I will refer to collectively as Mainstream Electronic Music (MEM). The research has potential for application within contexts such as computer games, multimedia, live electronic music, interactive installations and accessible music or “music therapy”. Musical themes in computer games and multimedia can morph adaptively in response to parameters in realtime. Morphing can be used by electronic music producers as an alternative to mixing in live performance. Interactive installations and accessible music devices can utilise morphing algorithms to enable expressive control over the music through simple interface components.
I have developed a software application called LEMorpheus which consists of software infrastructure for morphing and three alternative note sequence morphing algorithms: parametric morphing, probabilistic morphing and evolutionary morphing. Parametric morphing involves converting the source and target into continuous envelopes, interpolation, and converting the interpolated envelopes back into note sequences. Probabilistic morphing involves converting the source and target into probability matrices and seeding them on recent output to generate the next note. Evolutionary morphing involves iteratively mutating the source into multiple possible candidates and selecting those which are judged as more similar to the target, until the target is reached.
I formally evaluated the probabilistic morphing algorithm by extracting qualitative feedback from participants in a live electronic music situation, benchmarked against a live, professional DJ. The probabilistic algorithm was competitive, being favoured particularly for long morphs. The evolutionary morphing algorithm was formally evaluated using an online questionnaire, benchmarked against a human composer/producer. For particular samples, the morphing algorithm was competitive and occasionally seen as innovative; however, the morphs created by the human composer typically received more positive feedback, due to coherent, large scale structural changes, as opposed to the forced continuity of the morphing software.
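The probabilistic morphing idea can be sketched in miniature. This is a hedged illustration, not LEMorpheus itself: first-order note-transition tables are built from the source and target sequences, blended by a morph index, and the next note is sampled from the blend. The note sequences and the linear weighting are invented for illustration.

```python
# Sketch of probabilistic note-sequence morphing: blend two first-order
# Markov transition tables by a morph index and sample the next note.
# LEMorpheus is considerably more elaborate; this only shows the core idea.
import random
from collections import defaultdict

def transition_table(notes):
    table = defaultdict(lambda: defaultdict(float))
    for a, b in zip(notes, notes[1:]):
        table[a][b] += 1.0
    return table

def sample_next(prev, source_t, target_t, morph_index, rng):
    """morph_index 0.0 -> pure source, 1.0 -> pure target."""
    weights = defaultdict(float)
    for table, w in ((source_t, 1.0 - morph_index), (target_t, morph_index)):
        row = table.get(prev)
        if w <= 0.0 or not row:
            continue
        total = sum(row.values())
        for note, count in row.items():
            weights[note] += w * count / total
    if not weights:              # prev unseen in both sequences: hold it
        return prev
    choices = sorted(weights)
    return rng.choices(choices, weights=[weights[n] for n in choices])[0]

if __name__ == "__main__":
    source = [60, 62, 64, 62, 60, 62, 64]    # MIDI pitches, C major motif
    target = [67, 69, 71, 69, 67, 69, 71]    # same shape a fifth up
    s_t, t_t = transition_table(source), transition_table(target)
    rng, note, morphed = random.Random(1), 60, []
    for i in range(8):
        note = sample_next(note, s_t, t_t, morph_index=i / 7, rng=rng)
        morphed.append(note)
    print(morphed)
```

Early in the morph the output stays in the source's pitch set and drifts toward the target's as the index rises; seeding the generator on recent output, as described above, is what keeps the transition locally coherent.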
|
118 |
Studies On The Viability Of The Boundary Element Method For The Real-Time Simulation Of Biological Organs
Kirana Kumara, P. 22 August 2016 (links) (PDF)
Realistic and real-time computational simulation of biological organs (e.g., human kidneys, human liver) is a necessity when one tries to build a quality surgical simulator that can simulate surgical procedures involving these organs. Currently, deformable models, spring-mass models, or finite element models are widely used to achieve realistic simulations and/or real-time performance. It is widely agreed that continuum mechanics based numerical techniques are preferred over deformable models or spring-mass models, but those techniques are computationally expensive, and hence their higher accuracy comes at the expense of speed. There is therefore a need to study the speed of different numerical techniques while keeping an eye on their accuracy. Such studies are available for the Finite Element Method (FEM) but rarely for the Boundary Element Method (BEM). Hence the present work aims to conduct a study on the viability of BEM for the real-time simulation of biological organs; the study is justified by the fact that BEM is considered inherently efficient compared to mesh-based techniques like FEM, and a significant portion of the literature on the real-time simulation of biological organs suggests the use of BEM to achieve better simulations. Simulating a biological organ requires its geometry, which is often not readily available; hence there is a need to extract the three dimensional (3D) geometry of biological organs from a stack of two dimensional (2D) scanned images. Software packages that can readily reconstruct 3D geometry of biological organs from 2D images are expensive. Hence, a novel procedure that requires only a few free software packages to obtain the geometry of biological organs from 2D image sequences is presented.
The geometry of a pig liver is extracted from CT scan images for illustration purposes. Next, the three dimensional geometry of the human kidney (left and right kidneys of a male, and left and right kidneys of a female) is obtained from the Visible Human Dataset (VHD). The novel procedure presented in this work can be used to obtain patient-specific organ geometry from patient-specific images, without requiring any of the many commercial software packages that can readily do the job. To carry out studies on the speed and accuracy of BEM, a BEM source code is needed. Since no BEM code for 3D elasticity is readily available, a BEM code that can solve 3D linear elastostatic problems without accounting for body forces is developed from scratch. The code comes in three varieties: a MATLAB version, a sequential Fortran version, and a parallelized Fortran version. This is the first free and open-source BEM code for 3D elasticity. The developed code is used to carry out studies on the viability of BEM for the real-time simulation of biological organs, and a few representative problems involving kidneys and the liver are found to give accurate solutions. The present work demonstrates that it is possible to simulate linear elastostatic behaviour in real-time using BEM without resorting to any type of precomputation, on a computer cluster, by fully parallelizing the simulations and by performing simulations on different numbers of processors and for different block sizes. Since it is possible to get a complete solution in real-time, there is no need to separately prove that every type of cutting, suturing, etc. can be simulated in real-time. Future work could involve incorporating nonlinearities into the simulations. Finally, a BEM-based simulator may be built, after taking into account details like rendering.
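A characteristic of collocation BEM relevant to the speed studies above is that it reduces the problem to a dense N×N system over boundary nodes only, whose direct solve dominates runtime. The following is an illustrative stand-in, not the thesis's Fortran/MATLAB code: the 3×3 "influence matrix" is invented, and only the dense direct solve (Gaussian elimination with partial pivoting) is shown.

```python
# Illustrative sketch: the dense direct solve H u = g that dominates a
# collocation-BEM step. The matrix values are made up; real BEM assembles
# H and G from boundary-integral kernels over surface elements.

def solve_dense(a, b):
    n = len(b)
    a = [row[:] for row in a]       # work on copies
    b = b[:]
    for col in range(n):            # forward elimination, partial pivoting
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                   # back substitution
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

if __name__ == "__main__":
    H = [[4.0, 1.0, 0.5],           # invented dense influence matrix
         [1.0, 3.0, 1.0],
         [0.5, 1.0, 5.0]]
    g = [1.0, 2.0, 3.0]
    print(solve_dense(H, g))
```

Because N counts only surface nodes, the dense O(N³) solve can remain tractable where an FEM volume mesh of the same organ would have far more unknowns — the efficiency argument the thesis tests at scale on a cluster.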
|
119 |
Monitorovací systém pro zjištění motility a polohy laboratorních zvířat po anestézii / Monitoring system for detecting the motility and position of laboratory animals after anesthesia
Enikeev, Amir. January 2019 (links)
This diploma thesis, entitled "Monitoring System for Determination of Motility and Position of Laboratory Animals After Anesthesia", focuses on the design and implementation of contactless detection of the position of a rat or mouse in an enclosure with a transparent cover. The aim of the semester work is to find suitable methods for contactless detection of the rat or mouse position and to automatically determine and display the average speed or other movement characteristics. The assignment arose from the need to monitor animals after a curative intervention, and also as a necessary utility for future "shadowing" of animal movement (automatic targeting of the scar on the animal's back). The rat inside the enclosure is either moving normally or is dazed after anesthesia. In this work I first survey automatic monitoring systems for the detection of animals in an enclosure. In the practical part, three types of cameras are tested for visual detection of the rat's position, and a script for automatic detection and analysis of rat movement is designed. The system works like a camera eye: in real time it finds the area of a black box in its field of view, limits the detection region to the size of this box, and then automatically detects the animal's center of gravity and computes its speed. The obtained speed is compared with an average calculated from a test of 10 unstressed mice, and the mouse's status over the previous ten seconds is announced on screen. The project is implemented as monitoring and detection software based on OpenCV.
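The center-of-gravity step described above can be sketched directly. This is a hedged illustration, not the thesis's script: OpenCV would normally supply the centroid via `cv2.moments`, and the frame size, threshold, pixel scale, and frame interval here are invented values.

```python
# Hypothetical sketch of centroid tracking: find the centre of gravity of
# dark (animal) pixels in a thresholded frame and derive a speed from the
# centroids of consecutive frames. cv2.moments would do this in practice.
import math

def centroid(frame, dark_threshold=50):
    xs = ys = count = 0
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if pixel < dark_threshold:     # animal is darker than the floor
                xs += x
                ys += y
                count += 1
    if count == 0:
        return None
    return (xs / count, ys / count)

def speed(c_prev, c_curr, dt=0.1, px_per_cm=4.0):
    dist_px = math.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])
    return dist_px / px_per_cm / dt        # cm per second

if __name__ == "__main__":
    f1 = [[255] * 5 for _ in range(5)]; f1[2][1] = 0   # dark animal pixel
    f2 = [[255] * 5 for _ in range(5)]; f2[2][3] = 0   # moved 2 px right
    c1, c2 = centroid(f1), centroid(f2)
    print(c1, c2)           # (1.0, 2.0) (3.0, 2.0)
    print(speed(c1, c2))    # 5.0 cm/s: 2 px in 0.1 s at 4 px/cm
```

Averaging this speed over a ten-second window and comparing it with a baseline for unstressed animals gives the on-screen status described above.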
|
120 |
Ray-tracing s knihovnou IPP / Ray-tracing Using IPP Library
Kukla, Michal. January 2010 (links)
This master's thesis deals with the design and implementation of ray-tracing and path-tracing using the IPP library. The theoretical part discusses current trends in the acceleration of the selected algorithms and the possibilities of parallelization. The design of the ray-tracing and path-tracing algorithms and the form of parallelization are described in the proposal, which also covers the implementation of adaptive sampling and importance sampling with the Monte Carlo method to accelerate the path-tracing algorithm. The next part deals with the particular steps in implementing the selected rendering methods with regard to the IPP library; the implementation of the network interface using the Boost library is also discussed. Finally, the implemented methods are subjected to performance and quality tests. The final product of this thesis is a server application, capable of handling multiple connections, which provides visualisation, and a client application which implements ray-tracing and path-tracing.
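The primitive operation that such a renderer parallelizes can be shown in a few lines. This is a minimal sketch, not the thesis's IPP code: real implementations vectorize this intersection test over ray packets with IPP/SIMD, while scalar Python keeps the geometry readable.

```python
# Minimal sketch of the core ray-tracing primitive: ray-sphere
# intersection via the quadratic t^2 + b*t + c = 0 (unit direction
# assumed). An IPP-based tracer would run this over many rays at once.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                        # ray misses the sphere
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:                       # ignore hits behind the origin
            return t
    return None

if __name__ == "__main__":
    print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
    print(intersect_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # None
```

Path-tracing repeats this test for every bounce of every sample ray, which is why per-ray cost — and hence SIMD batching with a library like IPP — dominates overall performance.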
|