231 |
FPGA-based Speed Limit Sign Detection
Tallawi, Reham, 19 July 2017 (has links)
This thesis presents a new hardware-accelerated approach using image processing and detection algorithms to implement a fast and robust traffic sign detection system, with a focus on speed limit sign detection. The proposed system targets reconfigurable integrated circuits, particularly Field Programmable Gate Array (FPGA) devices. The work proposes a fully parallelized and pipelined system architecture to exploit the high performance and flexibility of FPGA devices. The thesis is divided into two phases. The first phase is a software prototype implementation of the proposed system. The software was designed and developed in C++ with the OpenCV library on a general-purpose CPU. The prototype is used to explore and investigate segmentation and detection algorithms that are feasible to design and implement in hardware-accelerated environments. These algorithms include RGB colour conversion, colour segmentation through thresholding, noise reduction through a median filter, morphological operations through erosion and dilation, and sign detection through template matching. In the second phase, a hardware-based design of the system was developed using the same algorithms as the software design. The hardware design is composed of 20 image processing components, each designed to be fully parallelized and pipelined. The hardware implementation was developed in VHDL as the hardware description language, targeting a Xilinx Virtex-6 FPGA XC6VLX240T device. The development environment was Xilinx ISE Design Suite version 14.3. A set of 20 test images of 640x480 pixels was used for the verification and testing of this work. The images were captured with a smartphone camera in various weather and lighting conditions. The software implementation delivered speed limit detection results with a success rate of 75%. The hardware implementation was only simulated, using the Xilinx ISE Simulator (ISim), with an overall system latency of 12964 clock cycles. According to the place-and-route report, the maximum operating frequency of the proposed hardware design is 71.2 MHz. The design utilized only 2% of the slice registers, 4% of the slice look-up tables (LUTs), and 11% of the block memory. The thesis concludes with an analysis of the software and hardware implementations and their performance, and the conclusions chapter provides recommendations and future work for possible extensions of the project.
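As a rough sketch of how such a prototype pipeline can be assembled with OpenCV (the library named above), the following C++ fragment chains the listed steps. The colour space, threshold values, template file and acceptance score are illustrative assumptions, not the thesis' actual parameters.

```cpp
// Minimal sketch of the software-prototype pipeline described above (OpenCV, C++).
// Colour space, thresholds and the template are assumptions for illustration only.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat bgr = cv::imread("frame.png");               // 640x480 test image (assumed path)
    if (bgr.empty()) return 1;

    // 1) Colour conversion and segmentation by thresholding (red sign border, assumed HSV ranges)
    cv::Mat hsv, maskLo, maskHi, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 80, 60),   cv::Scalar(10, 255, 255),  maskLo);
    cv::inRange(hsv, cv::Scalar(170, 80, 60), cv::Scalar(180, 255, 255), maskHi);
    cv::bitwise_or(maskLo, maskHi, mask);

    // 2) Noise reduction with a median filter
    cv::medianBlur(mask, mask, 5);

    // 3) Morphological clean-up: erosion followed by dilation
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::erode(mask, mask, kernel);
    cv::dilate(mask, mask, kernel);

    // 4) Detection by template matching against a binarised sign template (assumed file)
    cv::Mat tmpl = cv::imread("speed_limit_template.png", cv::IMREAD_GRAYSCALE);
    cv::threshold(tmpl, tmpl, 127, 255, cv::THRESH_BINARY);
    cv::Mat result;
    cv::matchTemplate(mask, tmpl, result, cv::TM_CCOEFF_NORMED);
    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
    if (maxVal > 0.6)                                     // assumed acceptance threshold
        std::cout << "Sign candidate at " << maxLoc << " score " << maxVal << "\n";
    return 0;
}
```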
|
232 |
Advanced visualization and modeling of tetrahedral meshes
Frank, Tobias, 07 April 2006 (has links)
Tetrahedral meshes are becoming more and more important for geo-modeling applications. The presented work introduces new algorithms for efficient visualization and modeling of tetrahedral meshes. Visualization is addressed by a generic framework that includes the extraction of geological information such as stratigraphic columns and fault block boundaries, simultaneous co-rendering of different attributes, and Boolean Constructive Solid Geometry operations with constant complexity. Modeling can be classified into geometric and implicit modeling. Geometric modeling addresses local mesh refinement to increase the numerical resolution of a given mesh. Implicit modeling covers the definition and manipulation of implicitly defined models. A new surface reconstruction method was developed to reconstruct complex, multi-valued surfaces from noisy and sparse data sets as they occur in geological applications. The surface can be bounded and may have discontinuities. Further, this work proposes a new and innovative algorithm for rapid editing of implicitly defined shapes such as horizons, based on the GeoChron parametrization. The editing is performed interactively on the 3D volumetric model, and geological constraints are respected automatically.
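For readers unfamiliar with implicitly defined models, the following C++ sketch shows the standard way Boolean CSG operations are expressed on implicit functions (pointwise min/max of scalar fields). It is a generic illustration under assumed shapes and a sample query point, not the thesis' constant-complexity algorithm.

```cpp
// Illustrative sketch: Boolean CSG on implicitly defined shapes. A shape is a scalar
// field f(x); f < 0 means "inside", so union/intersection/difference become pointwise
// min/max of the fields. Evaluating the combined field at a tetrahedron's vertices
// decides on which side of the CSG surface each vertex lies.
#include <algorithm>
#include <array>
#include <cmath>
#include <functional>
#include <iostream>

using Vec3 = std::array<double, 3>;
using Implicit = std::function<double(const Vec3&)>;

Implicit sphere(Vec3 c, double r) {
    return [=](const Vec3& p) {
        double dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
        return std::sqrt(dx * dx + dy * dy + dz * dz) - r;   // signed distance to the sphere
    };
}
Implicit csgUnion(Implicit a, Implicit b)        { return [=](const Vec3& p) { return std::min(a(p), b(p)); }; }
Implicit csgIntersection(Implicit a, Implicit b) { return [=](const Vec3& p) { return std::max(a(p), b(p)); }; }
Implicit csgDifference(Implicit a, Implicit b)   { return [=](const Vec3& p) { return std::max(a(p), -b(p)); }; }

int main() {
    // Arbitrary example: one sphere with a smaller sphere cut away.
    Implicit shape = csgDifference(sphere({0, 0, 0}, 1.0), sphere({0.5, 0, 0}, 0.6));
    Vec3 query{0.2, 0.0, 0.0};
    double f = shape(query);
    std::cout << "field value: " << f << (f < 0 ? " (inside)\n" : " (outside)\n");
    return 0;
}
```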
|
233 |
Increasing the Position Precision of a Navigation Device by a Camera-based Landmark Detection Approach
Jumani, Kashif Rashid, 24 September 2018 (has links)
The main objective of this work is to present a platform that can provide accurate position information to moving objects such as cars under poor environmental conditions, where GPS signals cannot be used. The approach integrates imaging sensor data into an inertial navigation system. Navigation systems are becoming smarter and more accurate, but errors still accumulate over long distances and prevent the exact location from being determined. To increase the accuracy, the front camera of a car is proposed as an additional sensor for the navigation system. Previously, this problem has been addressed with the help of an extended Kalman filter, but small errors remain. To determine the exact location, landmarks are detected in the real-time environment and matched against landmarks already stored in a database. Detection is challenging in an open environment, in which recognition must be invariant to illumination, pose, and scale, so selecting algorithms according to these requirements is important. SIFT is a feature descriptor that creates a description of features in an image and is known as the more accurate algorithm. Speeded-Up Robust Features (SURF) is another computer vision algorithm, considered faster but less accurate than SIFT. Often the problem is not the algorithm itself: a feature is not detected or matched because of changes in illumination, scale, or pose. Under such conditions, the use of filters and other techniques is important for better results, meaning that the required information must be easy to extract from the images; this part is achieved with the help of computer vision and image processing. After matched image data has been created, it is passed to the navigation data calculation so that an exact location can be produced from the matched images and the time calculation. The navigation data calculation unit is connected to the landmark database, so the navigation system can confirm that a landmark is present at a given point and has been matched, ensuring that the reported location is accurate. In this way, accuracy, safety, and security can be ensured.
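A minimal sketch of the landmark-matching step described above, using SIFT with OpenCV in C++ (OpenCV 4.4 or later). The image files, Lowe ratio threshold and match-count decision rule are illustrative assumptions.

```cpp
// Sketch: SIFT features from a live camera frame are matched against a stored landmark image.
#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat frame    = cv::imread("camera_frame.png",      cv::IMREAD_GRAYSCALE);
    cv::Mat landmark = cv::imread("landmark_db_entry.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty() || landmark.empty()) return 1;

    // Detect keypoints and compute SIFT descriptors in both images
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpFrame, kpLandmark;
    cv::Mat descFrame, descLandmark;
    sift->detectAndCompute(frame,    cv::noArray(), kpFrame,    descFrame);
    sift->detectAndCompute(landmark, cv::noArray(), kpLandmark, descLandmark);

    // k-NN matching with Lowe's ratio test to reject ambiguous matches
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descFrame, descLandmark, knn, 2);
    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) ++good;

    // Simple (assumed) decision rule: enough good matches -> landmark recognised, and its
    // known position can be handed to the navigation data calculation.
    std::cout << "good matches: " << good
              << (good > 25 ? " -> landmark matched\n" : " -> no reliable match\n");
    return 0;
}
```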
|
234 |
A Framework for example-based Synthesis of Materials for Physically Based Rendering
Rudolph, Carsten, 14 February 2019 (has links)
In computer graphics, textures are used to create detail along geometric surfaces. They are less computationally expensive than geometry, but this efficiency is traded for greater memory demands, especially at large output resolutions. Research has shown that textures can be synthesized from low-resolution exemplars, reducing overall runtime memory cost and enabling applications like remixing existing textures to create new, visually similar representations.
In many modern applications, textures are not limited to simple images, but rather represent geometric detail in different ways that describe how light interacts with a surface at a certain point. Physically Based Rendering (PBR) is a technique that employs complex lighting models to create effects like self-shadowing, realistic reflections, or subsurface scattering. A set of multiple textures is used to describe what is called a material.
In this thesis, example-based texture synthesis is extended to physical lighting models to create a physically based material synthesizer. It introduces a framework that is capable of utilizing multiple texture maps to synthesize new representations from existing material exemplars. The framework is then tested with multiple exemplars from different texture categories to assess synthesis performance in terms of quality and computation time.
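As a rough illustration of the multi-map idea, and not the thesis' hierarchical appearance-space synthesizer, the following C++/OpenCV sketch performs naive per-pixel neighborhood matching in which the exemplar is a stack of material maps. File names, neighborhood size and output resolution are assumptions.

```cpp
// Simplified illustration: pixel-based neighborhood matching where the exemplar is a whole
// material (several texture maps). The neighborhood distance is accumulated over all maps,
// and the winning exemplar pixel is copied into every output map, keeping the maps consistent.
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>

// Squared distance between the causal (already synthesized) neighborhoods of an output
// pixel and an exemplar pixel, accumulated over all maps.
static double neighborhoodDist(const std::vector<cv::Mat>& ex, const std::vector<cv::Mat>& out,
                               cv::Point ep, cv::Point op, int r) {
    double d = 0.0;
    for (size_t m = 0; m < ex.size(); ++m)
        for (int dy = -r; dy <= 0; ++dy)
            for (int dx = -r; dx <= r; ++dx) {
                if (dy == 0 && dx >= 0) break;                 // only the causal half of the window
                cv::Point e(ep.x + dx, ep.y + dy), o(op.x + dx, op.y + dy);
                if (e.x < 0 || e.y < 0 || e.x >= ex[m].cols || e.y >= ex[m].rows) continue;
                if (o.x < 0 || o.y < 0 || o.x >= out[m].cols || o.y >= out[m].rows) continue;
                const cv::Vec3b& a = ex[m].at<cv::Vec3b>(e);
                const cv::Vec3b& b = out[m].at<cv::Vec3b>(o);
                for (int c = 0; c < 3; ++c) { double t = double(a[c]) - double(b[c]); d += t * t; }
            }
    return d;
}

int main() {
    // Exemplar material: albedo, normal and roughness maps (assumed file names, equal sizes).
    std::vector<cv::Mat> exemplar = { cv::imread("albedo.png"), cv::imread("normal.png"),
                                      cv::imread("roughness.png") };
    for (const auto& m : exemplar) if (m.empty()) return 1;

    const int R = 2, outW = 128, outH = 128;
    std::vector<cv::Mat> output;
    for (const auto& m : exemplar) output.push_back(cv::Mat(outH, outW, m.type()));
    for (auto& m : output) cv::randu(m, cv::Scalar::all(0), cv::Scalar::all(255));  // noise seed

    // Scanline-order synthesis: pick the best-matching exemplar pixel for each output pixel.
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            double best = std::numeric_limits<double>::max();
            cv::Point bestP(0, 0);
            for (int ey = R; ey < exemplar[0].rows; ++ey)
                for (int exX = R; exX < exemplar[0].cols - R; ++exX) {
                    double d = neighborhoodDist(exemplar, output, {exX, ey}, {x, y}, R);
                    if (d < best) { best = d; bestP = {exX, ey}; }
                }
            for (size_t m = 0; m < exemplar.size(); ++m)       // copy from ALL maps at once
                output[m].at<cv::Vec3b>(y, x) = exemplar[m].at<cv::Vec3b>(bestP);
        }
    cv::imwrite("albedo_out.png", output[0]);
    return 0;
}
```

Copying the winning exemplar pixel into every output map at once is what keeps albedo, normal and roughness consistent with each other; the thesis' synthesizer pursues the same goal far more efficiently with a hierarchical, parallel appearance-space search (see chapter 5.3).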
The synthesizer works in uv space, which enables re-using the same exemplar material at runtime with different uv maps, reducing memory cost whilst increasing visual variety and minimizing repetition artifacts. The thesis shows that this can be done effectively, without introducing inconsistencies such as seams or discontinuities under dynamic lighting scenarios.
1. Context and Motivation
2. Introduction
2.1. Terminology: What is a Texture?
2.1.1. Classifying Textures
2.1.2. Characteristics and Appearance
2.1.3. Advanced Analysis
2.2. Texture Representation
2.2.1. Is there a theoretical Limit for Texture Resolution?
2.3. Texture Authoring
2.3.1. Texture Generation from Photographs
2.3.2. Computer-Aided Texture Generation
2.4. Introduction to Physically Based Rendering
2.4.1. Empirical Shading and Lighting Models
2.4.2. The Bi-Directional Reflectance Distribution Function (BRDF)
2.4.3. Typical Texture Representations for Physically Based Models
3. A brief History of Texture Synthesis
3.1. Algorithm Categories and their Developments
3.1.1. Pixel-based Texture Synthesis
3.1.2. Patch-based Texture Synthesis
3.1.3. Texture Optimization
3.1.4. Neural Network Texture Synthesis
3.2. The Purpose of example-based Texture Synthesis Algorithms
4. Framework Design
4.1. Dividing Synthesis into subsequent Stages
4.2. Analysis Stage
4.2.1. Search Space
4.2.2. Guidance Channel Extraction
4.3. Synthesis Stage
4.3.1. Synthesis by Neighborhood Matching
4.3.2. Validation
5. Implementation
5.1. Modules and Components
5.2. Image Processing
5.2.1. Image Representation
5.2.2. Filters and Guidance Channel Extraction
5.2.3. Search Space and Descriptors
5.2.4. Neighborhood Search
5.3. Implementing Synthesizers
5.3.1. Unified Synthesis Interface
5.3.2. Appearance Space Synthesis: A Hierarchical, Parallel, Per-Pixel Synthesizer
5.3.3. (Near-) Regular Texture Synthesis
5.3.4. Extended Appearance Space: A Physical Material Synthesizer
5.4. Persistence
5.4.1. Codecs
5.4.2. Assets
5.5. Command Line Sandbox
5.5.1. Providing Texture Images and Material Dictionaries
6. Experiments and Results
6.1. Test Setup
6.1.1. Metrics
6.1.2. Result Visualization
6.1.3. Limitations and Conventions
6.2. Experiment 1: Analysis Stage Performance
6.2.1. Influence of Exemplar Resolution
6.2.2. Influence of Exemplar Maps
6.3. Experiment 2: Synthesis Performance
6.3.1. Influence of Exemplar Resolution
6.3.2. Influence of Exemplar Maps
6.3.3. Influence of Sample Resolution
6.4. Experiment 3: Synthesis Quality
6.4.1. Influence of Per-Level Jitter
6.4.2. Influence of Exemplar Maps and Map Weights
7. Discussion and Outlook
7.1. Contributions
7.2. Further Improvements and Research
7.2.1. Performance Improvements
7.2.2. Quality Improvements
7.2.3. Methodology
7.2.4. Further Problem Fields
|
235 |
Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation
KARAR, Mohamed Esmail Abdel Razek Hassan, 26 January 2012 (has links)
The development of image-guided interventional systems has grown rapidly in recent years. These new systems are becoming an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and is typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed.
The developed assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area of valve implantation is automatically estimated from these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, the landmarks, and the target area of implantation onto the fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter when no contrast agent is present. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in the fluoroscopic images to guide the surgeon in placing the prosthesis appropriately within the estimated target area of implantation. An interactive graphical user interface for the surgeon was developed to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed.
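A much-simplified sketch of the template-based tracking idea, in C++ with OpenCV: the pigtail catheter is located by template matching in each frame, and the resulting displacement is applied to the overlay. The file names, the pure 2D translation model and the omitted blending step are assumptions; the actual system additionally uses intensity-based registration and prosthesis tracking as described above.

```cpp
// Sketch: approximate the aortic root motion from pigtail-catheter motion and shift the overlay.
#include <opencv2/opencv.hpp>

static cv::Point2f trackCatheter(const cv::Mat& frame, const cv::Mat& catheterTemplate) {
    cv::Mat score;
    cv::matchTemplate(frame, catheterTemplate, score, cv::TM_CCOEFF_NORMED);
    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(score, nullptr, &maxVal, nullptr, &maxLoc);
    return cv::Point2f(float(maxLoc.x), float(maxLoc.y));      // top-left corner of the best match
}

int main() {
    cv::VideoCapture fluoro("fluoro_sequence.avi");             // assumed input sequence
    cv::Mat tmpl    = cv::imread("pigtail_template.png",     cv::IMREAD_GRAYSCALE);
    cv::Mat overlay = cv::imread("aortic_mesh_overlay.png",  cv::IMREAD_UNCHANGED);
    if (!fluoro.isOpened() || tmpl.empty() || overlay.empty()) return 1;

    cv::Mat frame, gray;
    if (!fluoro.read(frame)) return 1;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Point2f refPos = trackCatheter(gray, tmpl);              // reference position at overlay init

    while (fluoro.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Point2f shift = trackCatheter(gray, tmpl) - refPos;

        // Translate the overlay (mesh model, landmarks, target area) by the approximated motion.
        cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, shift.x, 0, 1, shift.y);
        cv::Mat moved;
        cv::warpAffine(overlay, moved, M, frame.size());
        // ... blend 'moved' onto 'frame' and display/record the guidance view (omitted) ...
    }
    return 0;
}
```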
Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room. The maximum displacement errors were small, both for the dynamic overlay of the aortic mesh models and for the tracking of the prosthesis, and lay within clinically accepted ranges. High success rates of the developed assistance system were obtained for all tested patient datasets.
The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure. / The development of image-guided interventional systems has grown rapidly in recent years. These new systems are increasingly becoming an essential part of the technical equipment of modern minimally invasive surgical procedures. This development applies in particular to cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a newly developed surgical technique for treating severe aortic valve stenosis in elderly and high-risk patients. The placement of the aortic valve prosthesis is crucial and is usually performed under live 2D fluoroscopic imaging. To support the placement of the prosthesis during the surgical intervention, a new fluoroscopy-based TAVI assistance system was developed in this work.
The developed assistance system overlays a 3D geometry of the aortic mesh model and anatomical landmarks onto live 2D fluoroscopic images. The 3D aortic mesh model and the landmarks are reconstructed from interventional angiography and fluoroscopy using a C-arm CT system. Using these aortic mesh models, the target area of valve implantation is estimated automatically. With the help of a template-matching-based tracking approach, the visualized 3D aortic mesh model, the computed landmarks, and the target area of implantation are correctly overlaid onto the fluoroscopic images. The aortic root motion is compensated by tracking the motion of a pigtail catheter in image sequences without contrast agent. A rigid intensity-based registration method was used to continuously detect the aortic root motion in image sequences with contrast agent. The aortic valve prosthesis is displayed in the fluoroscopic images and serves the surgeon as a guide for the correct placement of the real prosthesis. An interactive user interface for the surgeon was developed to initialize the system algorithms, control the visualization, and manually correct any overlay errors.
Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room. High success rates of the developed assistance system were achieved for all tested patient datasets.
The results show that the developed surgical assistance system provides a helpful tool for the surgeon in placing the prosthesis during the TAVI procedure.
|
236 |
3D interferometric shape measurement technique using coherent fiber bundles
Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen, 13 August 2019 (has links)
In-situ 3D shape measurements with submicron shape uncertainty of fast-rotating objects in a cutting lathe are required; they can be achieved by simultaneous distance and velocity measurements. Conventional tactile methods such as coordinate measurement machines only support ex-situ measurements. Optical measurement techniques such as triangulation and conoscopic holography provide only the distance, so the absolute diameter cannot be retrieved directly. In comparison, laser Doppler distance (P-LDD) sensors enable simultaneous, in-situ distance and velocity measurements for monitoring the cutting process in a lathe. In order to achieve shape measurement uncertainties below 1 µm, a P-LDD sensor with dual-camera-based scattered-light detection has been investigated. Coherent fiber bundles (CFB) are employed to relay the scattered light to the cameras, which will enable a compact and passive sensor head in the future. Compared with a photodetector-based sensor, the dual-camera-based sensor decreases the measurement uncertainty by about one order of magnitude. As a result, the total shape uncertainty of absolute 3D shape measurements can be reduced to about 100 nm.
|
237 |
Design and investigation of a test rig based on AI smart vision sensors for automated component inspection of press-hardened car body components
Simon, Fabio; Werner, Thomas; Weidemann, Andreas; Guilleaume, Christina; Brosius, Alexander, 28 November 2023 (has links)
Defects such as cracks, overlaps, and impressions occur during the production of press-hardened car body components. At present, these types of defects are counteracted in the industrial environment by costly visual inspections carried out by humans. Due to the poor efficiency of visual inspection compared to automated inspection, and the risk of defects going undetected, the use of AI-based smart vision sensors is evaluated in order to enable an automated component inspection process. For the realisation of the test, the most relevant defect types, namely deformation, crack, and overlap, are identified using a Pareto analysis.
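To make the selection step concrete, a minimal C++ sketch of a Pareto analysis over recorded defect frequencies is given below; the defect counts and the 80% cut-off are placeholder assumptions, not data from the paper.

```cpp
// Sketch: rank defect types by frequency and keep the smallest set covering ~80% of occurrences.
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::pair<std::string, int>> defects = {
        {"deformation", 120}, {"crack", 80}, {"overlap", 45},
        {"impression", 20},   {"scratch", 10}               // assumed counts, not measured data
    };
    std::sort(defects.begin(), defects.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    int total = 0;
    for (const auto& d : defects) total += d.second;

    double cumulative = 0.0;
    for (const auto& d : defects) {
        cumulative += 100.0 * d.second / total;
        std::cout << d.first << ": cumulative " << cumulative << "%\n";
        if (cumulative >= 80.0) break;                       // Pareto cut-off (assumed 80%)
    }
    return 0;
}
```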
|
238 |
Design and investigation of a test rig based on AI smart vision sensors for the automated component inspection of press-hardened car body components
Simon, Fabio; Werner, Thomas; Weidemann, Andreas; Guilleaume, Christina; Brosius, Alexander, 28 November 2023 (has links)
Defects such as cracks, overlaps, and impressions occur during the production of press-hardened car body components. At present, these types of defects are counteracted in the industrial environment by costly visual inspections carried out by humans. Due to the poor efficiency of visual inspection compared to automated inspection, and the risk of defects going undetected, the use of AI-based smart vision sensors is evaluated in order to enable an automated component inspection process.
|
239 |
Data Augmentation GUI Tool for Machine Learning Models
Sharma, Sweta, 30 October 2023 (has links)
The industrial production of semiconductor assemblies is subject to high requirements. As a result, several tests are needed to ensure component quality. In the long run, manual quality assurance (QA) is often connected with higher expenditures. Using a technique based on machine learning, some of these tests may be carried out automatically. Deep neural networks (NN) have proven to be very effective in a diverse range of computer vision applications. Especially convolutional neural networks (CNN), a subset of NN, are an effective tool for image classification. Deep NNs have the disadvantage of requiring a significant quantity of training data to reach excellent performance. When the dataset is too small, a phenomenon known as overfitting can occur. Massive amounts of data cannot be supplied in certain contexts, such as the production of semiconductors, especially given the relatively low number of rejected components in this field. In order to prevent overfitting, a variety of image augmentation methods may be used to artificially create additional training images. However, not all of these methods are applicable in every field. For this thesis, Infineon Technologies AG provided images of a semiconductor component acquired with an ultrasonic microscope. The dataset contains a sufficient number of good components and only a minority of rejected components, where good components are those that passed quality control and rejected components contain a defect and did not pass quality control.
The accomplishment of the project, the efficacy with which it is carried out, and its level of quality may depend on a number of factors; however, selecting the appropriate tools is one of the most important of these factors, because it enables significant time and resource savings while also producing the best results. We demonstrate a data augmentation graphical user interface (GUI) tool of the kind widely used in the domain of image processing. Using this approach, the dataset size has been increased while maintaining the accuracy-time trade-off and improving the robustness of deep learning models. The purpose of this work is to develop a user-friendly tool that incorporates traditional, advanced, and smart data augmentation, image processing, and machine learning (ML) approaches. More specifically, the techniques used include zooming, rotation, flipping, cropping, GANs, fusion, histogram matching, autoencoders, image restoration, and compression. The work focuses on implementing and designing a MATLAB GUI for data augmentation and ML models. The thesis was carried out for Infineon Technologies AG in order to address a challenge that all semiconductor industries experience. The key objective is not only to create an easy-to-use GUI, but also to ensure that its users do not need advanced technical experience to operate it. The GUI can run as a standalone application and may be deployed anywhere for the purposes of data augmentation and classification. The aim is to streamline the working process and make it easy to complete the quality assurance job even for those who are not familiar with data augmentation, machine learning, or MATLAB. In addition, the thesis investigates the benefits of data augmentation and image processing, as well as the possibility that these factors contribute to an improvement in the accuracy of AI models.
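The tool itself is a MATLAB GUI; purely as an illustration of the classical augmentation operations listed above (zooming, rotation, flipping, cropping), the following C++/OpenCV sketch derives several variants from a single input image, with file names and parameters chosen arbitrarily.

```cpp
// Sketch: a few classical augmentation operations applied to one (assumed) scan image.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("component_scan.png");      // placeholder file name
    if (img.empty()) return 1;

    // Zoom: scale up and crop back to the original size
    cv::Mat zoomed;
    cv::resize(img, zoomed, cv::Size(), 1.2, 1.2, cv::INTER_LINEAR);
    zoomed = zoomed(cv::Rect(0, 0, img.cols, img.rows)).clone();

    // Rotation about the image centre (15 degrees, assumed)
    cv::Mat rot = cv::getRotationMatrix2D(cv::Point2f(img.cols / 2.0f, img.rows / 2.0f), 15.0, 1.0);
    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot, img.size());

    // Horizontal flip and a fixed crop
    cv::Mat flipped, cropped;
    cv::flip(img, flipped, 1);
    cropped = img(cv::Rect(img.cols / 8, img.rows / 8, img.cols / 2, img.rows / 2)).clone();

    cv::imwrite("aug_zoom.png", zoomed);
    cv::imwrite("aug_rot.png",  rotated);
    cv::imwrite("aug_flip.png", flipped);
    cv::imwrite("aug_crop.png", cropped);
    return 0;
}
```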
|
240 |
Influence of CT image processing on the predicted impact of pores on fatigue of additively manufactured Ti6Al4V and AlSi10Mg
Gebhardt, Ulrike; Schulz, Paul; Raßloff, Alexander; Koch, Ilja; Gude, Maik; Kästner, Markus, 04 April 2024 (has links)
Pores are inherent to additively manufactured components and are especially critical in technical components. Since they reduce the component's fatigue life, a reliable identification and description of pores is vital to ensure the component's performance. X-ray computed tomography (CT) is an established, non-destructive testing method for investigating internal defects. The CT scanning process can induce noise and artefacts in the resulting images, which afterwards have to be reduced through image processing. To reconstruct the internal defects of a component, the images need to be segmented into defect regions and bulk material by applying a threshold. The application of the threshold as well as the preceding image processing alter the geometry and size of the identified defects. This contribution aims to quantify the influence of selected commercial image processing and segmentation methods on the identified pores in several additively manufactured components made of AlSi10Mg and Ti6Al4V, as well as in an artificial CT scan. To that end, gray value histograms and characteristic parameters derived from them are compared for different image processing tools. After segmentation of the processed images, particle characteristics are compared. The influence of image processing and segmentation on the predicted fatigue life of the material is evaluated through the change of the largest pore in each set of data, applying Murakami's empirical √area parameter model.
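For orientation, a minimal C++ sketch of how the largest segmented pore enters Murakami's √area parameter model is given below. It uses the commonly cited form of the model, with an assumed location constant, hardness and pore sizes rather than values from this contribution.

```cpp
// Hedged sketch: fatigue limit estimate from the Vickers hardness HV and the largest
// defect size sqrt(area) in micrometers, sigma_w = C * (HV + 120) / (sqrt(area))^(1/6),
// with the constant C depending on defect location (assumed here: 1.56 for internal pores).
#include <cmath>
#include <iostream>

double murakamiFatigueLimit(double hardnessHV, double sqrtAreaMicrometer,
                            double locationConstant = 1.56) {
    return locationConstant * (hardnessHV + 120.0) / std::pow(sqrtAreaMicrometer, 1.0 / 6.0);
}

int main() {
    // Suppose segmentation methods A and B report different largest-pore sizes for the same
    // CT scan; the difference propagates directly into the fatigue-limit prediction.
    double hv = 110.0;            // assumed Vickers hardness (placeholder)
    double sqrtAreaA = 180.0;     // largest pore after method A, in µm (placeholder)
    double sqrtAreaB = 240.0;     // largest pore after method B, in µm (placeholder)

    std::cout << "method A: " << murakamiFatigueLimit(hv, sqrtAreaA) << " MPa\n";
    std::cout << "method B: " << murakamiFatigueLimit(hv, sqrtAreaB) << " MPa\n";
    return 0;
}
```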
|