41

Integration of Local Positioning System & Strapdown Inertial Navigation System for Hand-Held Tool Tracking

Parnian, Neda 24 September 2008 (has links)
This research concerns the development of a smart sensory system for tracking a hand-held moving device to millimeter accuracy, for slow or nearly static applications over extended periods of time. Since different operators in different applications may use the system, the proposed design should provide accurate position, orientation, and velocity of the object without relying on knowledge of its operation and environment, based purely on the motion that the object experiences. This thesis proposes the integration of a low-cost Local Positioning System (LPS) and a low-cost StrapDown Inertial Navigation System (SDINS), combined with a modified EKF, to determine the 3D position and 3D orientation of a hand-held tool within the required accuracy. A hybrid LPS/SDINS combines and complements the best features of two different navigation systems, providing a unique solution to track and localize a moving object more precisely. SDINS provides continuous estimates of all components of a motion, but it loses accuracy over time because of inertial sensor drift and inherent noise. LPS has the advantage that it can provide absolute position and velocity independent of operation time; however, it is not highly robust, is computationally expensive, and has a low measurement rate. This research consists of three major parts: developing a multi-camera vision system as a reliable and cost-effective LPS, developing an SDINS for a hand-held tool, and developing a Kalman filter for sensor fusion. Developing the multi-camera vision system includes mounting the cameras around the workspace, calibrating the cameras, capturing images, applying image processing algorithms and feature extraction to every frame from each camera, and estimating the 3D position from 2D images. In this research, a specific configuration for setting up the multi-camera vision system is proposed to reduce the loss of line of sight as much as possible. The number of cameras, the position of the cameras with respect to each other, and the position and orientation of the cameras with respect to the center of the world coordinate system are the crucial characteristics of this configuration. The proposed multi-camera vision system is implemented with four CCD cameras that are fixed in the navigation frame, with their lenses placed on a semicircle. All cameras are connected to a PC through a frame grabber, which includes four parallel video channels and is able to capture images from the four cameras simultaneously. As a result of this arrangement, a wide circular field of view is obtained with less loss of line of sight. However, the calibration is more difficult than for a monocular or stereo vision system. Calibration of the multi-camera vision system includes precise camera modeling, single-camera calibration for each camera, stereo calibration for each pair of neighboring cameras, defining a unique world coordinate system, and finding the transformation from each camera frame to the world coordinate system. Aside from the calibration procedure, digital image processing must be applied to the images captured by all four cameras in order to localize the tool tip. In this research, the digital image processing includes image enhancement, edge detection, boundary detection, and morphological operations.
After detecting the tool tip in each image captured by each camera, a triangulation procedure and an optimization algorithm are applied to find its 3D position with respect to the known navigation frame. In the SDINS, inertial sensors are mounted rigidly and directly to the body of the tracked object, and the inertial measurements are transformed computationally to the known navigation frame. Usually, three gyros and three accelerometers, or a three-axis gyro and a three-axis accelerometer, are used to implement an SDINS. The inertial sensors are typically integrated in an inertial measurement unit (IMU). IMUs commonly suffer from bias drift, scale-factor error owing to non-linearity and temperature changes, and misalignment as a result of minor manufacturing defects. Since all these errors lead to SDINS drift in position and orientation, a precise calibration procedure is required to compensate for them. The precision of the SDINS depends not only on the accuracy of the calibration parameters but also on the common motion-dependent errors, which refer to the errors caused by vibration, coning motion, sculling, and rotational motion. Since the inertial sensors provide the full range of heading changes, turn rates, and applied forces that the object experiences along its movement, accurate 3D kinematics equations are developed to compensate for the common motion-dependent errors. Therefore, obtaining complete knowledge of the motion and orientation of the tool tip involves significant computational complexity and challenges relating to the resolution of specific forces, attitude computation, gravity compensation, and corrections for common motion-dependent errors. The Kalman filter is a powerful method for improving the output estimation and reducing the effect of sensor drift. In this research, a modified EKF is proposed to reduce the position estimation error. The proposed multi-camera vision system data, in cooperation with the modified EKF, assists the SDINS in dealing with the drift problem. This configuration guarantees real-time position and orientation tracking of the instrument. As a result of the proposed Kalman filter, the effect of the gravitational force in the state-space model is removed and the error resulting from an inaccurate gravitational force is eliminated. In addition, the resulting position is smooth and ripple-free. The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. If the sampling rate of the vision system decreases from 20 fps to 5 fps, the errors are still acceptable for many applications.
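As a rough illustration of the loosely coupled vision/SDINS fusion described in this abstract, the sketch below runs one Kalman cycle in which IMU-driven predictions are corrected by a 3D position fix from the multi-camera LPS. It is a minimal sketch, not the author's modified EKF: the reduced position/velocity state, the noise values, and the helper names (`predict`, `update`) are assumptions for illustration only.

```python
import numpy as np

# Reduced state: position (3) and velocity (3) in the navigation frame.
# A full SDINS error state would also carry attitude and sensor-bias terms.
dt = 1.0 / 100.0                       # assumed IMU rate of 100 Hz
F = np.eye(6)
F[0:3, 3:6] = dt * np.eye(3)           # p_k+1 = p_k + v_k * dt
Q = np.diag([1e-6] * 3 + [1e-3] * 3)   # assumed process noise
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # the LPS measures position only
R = (0.5e-3) ** 2 * np.eye(3)          # assumed 0.5 mm camera triangulation noise

def predict(x, P, accel_nav):
    """Propagate with the specific force already rotated to the nav frame and
    gravity-compensated (handled by the full SDINS mechanization)."""
    x = F @ x + np.hstack([0.5 * dt ** 2 * accel_nav, dt * accel_nav])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, p_lps):
    """Correct the prediction with a 3D position fix from the multi-camera LPS."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (p_lps - H @ x)         # innovation: vision fix minus predicted position
    P = (np.eye(6) - K @ H) @ P
    return x, P

# One fusion cycle: several IMU predictions per (slower) vision update.
x, P = np.zeros(6), np.eye(6) * 1e-4
for _ in range(5):                      # e.g. 100 Hz IMU against a 20 fps vision fix
    x, P = predict(x, P, accel_nav=np.zeros(3))
x, P = update(x, P, p_lps=np.array([0.001, 0.0, 0.0]))
print(x[:3])                            # corrected tool-tip position estimate
```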
43

Vamzdyne skraidančių bepiločių aparatų akustinės regos sistemos kūrimas ir tyrimas / Development and Research of an Acoustic Vision System for an Unmanned Air Vehicle (UAV) Flying in a Pipeline

Nazaras, Paulius 29 June 2007 (has links)
This thesis develops an acoustic vision system for an unmanned air vehicle (UAV) intended to operate in a confined space (a pipeline). Existing and emerging UAV designs are reviewed, suitable components are selected, and a schematic electrical diagram of the acoustic vision system is composed. A control algorithm for the system is presented and the construction of the control system is analyzed. The closing part formulates conclusions and lists the sources used; the graphical part covers component tables, functional schemes of the control system, and its schematic electrical diagrams.
44

Sistema de visão artificial para a diagnose nutricional de ferro, boro, zinco e cobre em plantas de milho / Artificial vision system for the nutritional diagnosis of iron, boron, zinc and copper in maize plants

Mário Antonio Marin 14 December 2012 (has links)
The research aimed to evaluate the methodology of the Tree Vis project for determining the iron, boron, zinc and copper nutrition of maize plants subjected to different doses of these nutrients. The treatments consisted of omission, 1/5, 2/5 and the full dose of each element, with four replicates at each collection stage (V4, V7 and R1). The experiments were conducted in a greenhouse under hydroponic cultivation, in pots with nutrient solution. The dry mass production of the shoots and roots was determined, as well as the nutrient content in the leaves indicative of the phenological stage at each harvest time. At each stage, images of the indicative and new leaves were captured with a scanner for the artificial vision analyses. Increasing nutrient doses promoted higher dry mass production in the shoots and roots, while the highest dose reduced production. The artificial vision system showed promise in identifying deficiencies, with accuracies of 77.5% for iron, 81.7% for boron, 81.0% for zinc and 57.2% for copper, identifying the deficiencies with good reliability.
45

Automatic Waterjet Positioning Vision System

Dziak, Damian, Jachimczyk, Bartosz, Jagusiak, Tomasz January 2012 (has links)
The goal of this work is the design and implementation of a new vision system integrated with a waterjet machine. The system combines two commercial webcams mounted on a dedicated industrial platform. The main purpose of the vision system is to detect the position and rotation of a workpiece placed on the machine table. The object recognition algorithm consists of edge detection, standard mathematical processing functions and noise filters. The Hough transform is used to extract the lines of a workpiece and their intersections. A metric rectification method is used to obtain a top view of the workspace and to align the image coordinate system with the waterjet machine coordinates. In situ calibration procedures for both webcams are developed and implemented. Experimental results with the proposed vision system prototype confirm the required performance and precision of element detection.
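As a hedged sketch of the pipeline outlined above (noise filtering, edge detection, Hough line extraction and line intersections), the snippet below uses OpenCV; the input file name, the thresholds and the commented-out rectification step are placeholders rather than values from the thesis.

```python
import itertools
import cv2
import numpy as np

img = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)   # hypothetical camera frame
assert img is not None, "expected a sample image of the machine table"
img = cv2.GaussianBlur(img, (5, 5), 0)                # simple noise filter
edges = cv2.Canny(img, 50, 150)                       # edge detection (assumed thresholds)

# Extract straight workpiece edges as (rho, theta) pairs.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
lines = [] if lines is None else lines[:10]           # keep the strongest few lines

def intersection(l1, l2):
    """Intersect two Hough lines given in (rho, theta) form; None if near-parallel."""
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:
        return None
    return np.linalg.solve(A, np.array([r1, r2]))     # pixel coordinates (x, y)

corners = [p for l1, l2 in itertools.combinations(lines, 2)
           if (p := intersection(l1, l2)) is not None]
print(len(corners), "candidate corner points")

# The pixel corners could then be mapped to machine-table coordinates with a
# homography from the rectification/calibration step, e.g.
# H = cv2.getPerspectiveTransform(pixel_pts, machine_pts)   # four known reference points
```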
46

Etude et conception d'un réseau sur puce dynamiquement adaptable pour la vision embarquée / Dynamically adaptable Network-on-Chip for embedded vision systems

Ngan, Nicolas 09 December 2011 (has links)
Modern portable vision systems include several types of image sensors, such as colour, low-light or infrared sensors. Such a system has to support heterogeneous image sources with different spatial resolutions, pixel granularities and working frequencies. This trend to multiply sensors is motivated by the need to complement sensor sensitivities through image fusion techniques, or sensor positions in the system (panoramic views, wider fields of view). Moreover, portable vision systems implement image applications that require one or several image sources and have growing computational complexity. To face these challenges in integrating such a variety of functionalities, the embedded computing system has to adapt permanently in order to preserve application timing performance in latency and processing, while respecting area and low-power constraints.
In this thesis, we propose a new Network-on-Chip (NoC) adapted for a System-on-Chip (SoC) dedicated to image applications. This NoC can manage several pixel streams in parallel by dynamically adapting the datapath between processing elements and memories. The new packet header structure enables adaptation mechanisms in the routers by combining instructions and data in the same packet. To manage efficiently the frame storage required by an application, we propose a frame buffer system with indirect frame addressing, which is able to manage several frames from different sensors. It features a hardware abstraction layer in charge of collecting read and write requests according to specific frame indicators, such as the image source ID, the temporal index and the last operation performed. The NoC has been validated in a complete processing architecture called the Multi Data Flow Ring (MDFR), based on a ring topology. The MDFR performance in time and area has been demonstrated on an FPGA target.
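To make the packet idea more tangible, the sketch below models a packet that carries an instruction together with pixel data and the frame indicators mentioned in the abstract (image source, temporal index, last operation). The field names, widths and opcode are purely hypothetical; only the combination of instruction, data and frame indicators comes from the abstract.

```python
import struct
from dataclasses import dataclass

@dataclass
class MDFRPacket:
    """Illustrative packet combining an instruction, frame indicators and pixel data.

    Only the idea of carrying routing information, an instruction for the target
    processing element and data in one packet comes from the abstract; the field
    names and widths below are assumptions.
    """
    dest_id: int       # target processing element or memory on the ring
    instruction: int   # opcode interpreted by the routers / PE
    source_id: int     # image sensor the pixels come from (frame indicator)
    frame_index: int   # temporal index of the frame (frame indicator)
    last_op: int       # last operation applied to the frame (frame indicator)
    payload: bytes     # pixel data

    def pack(self) -> bytes:
        header = struct.pack("<BBBHB", self.dest_id, self.instruction,
                             self.source_id, self.frame_index, self.last_op)
        return header + self.payload

# Example: send a 2x2 block of 8-bit pixels to PE 3 with a hypothetical opcode.
pkt = MDFRPacket(dest_id=3, instruction=0x12, source_id=1,
                 frame_index=42, last_op=0, payload=bytes([10, 20, 30, 40]))
print(pkt.pack().hex())
```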
47

A Vision and Differential Steering System for a Mobile Robot Platform / En vision och differentierad Styrsystem för en mobil robot Plattform

Siddiqui, Abujawad Rafid January 2010 (has links)
Context: Effective vision processing is an important study area for mobile robots that use vision to detect objects. The problem of detecting small coloured objects (e.g. Lego bricks) with no texture information can be solved using either the colour or the contours of the objects. The shape of such objects does not help much in detection because of the poor quality of the picture and the small size of the object in the image. In such cases the use of hybrid techniques can benefit overall object detection, especially combining keypoint-based methods with colour-based techniques. Robot motion also plays a vital role in the completion of autonomous tasks. Mobile robots have different configurations for locomotion; the most important is differential steering, because of its application in sensitive areas such as military tanks and security robot platforms. The kinematic design of a robotic platform is usually based on the number of wheels and their movement. There can be several wheel configurations, for example differential drives, car-like designs, omni-directional drives, and synchro drives. Differential drive systems use the speed on the individual channels to determine the combined speed and trajectory of the robot. Accurate movement of the robot is very important for correct completion of its activities.
Objectives: A vision solution is developed that is capable of detecting small coloured objects in the environment. It is compared with other shape detection techniques for performance evaluation, and the effect of distance on detection is investigated for the participating techniques. The precise motion of a four-wheel differential drive system is also investigated. The target robot platform uses a differential drive steering system, and the main focus of this study is accurate position and orientation control based upon sensor data.
Methods: For object detection, a novel hybrid method, 'HistSURF', is proposed and compared with other vision processing techniques. This method combines the results of colour histogram comparison and detection by the SURF algorithm. A solution for differential steering using a gyro for rotational speed measurement is compared with a solution using a speed model and control outputs without feedback (i.e. dead reckoning).
Results: The results from the vision experiment rank the newly proposed method highest among the participating techniques. The distance experiment indicates an inverse relation between the distance and the number of detected SURF features; the results also indicate that distance affects the detection rate of the proposed technique. For robot control, the differential drive solution using a speed model has a lower error rate than the one using a gyro for angle measurement. It is also clear from the results that the greater the difference of speeds between the channels, the less smooth the angular movement.
Conclusions: The results indicate that by combining a keypoint-based technique with colour segmentation, the false positive rate can be reduced and hence object recognition performance improves. It has also become clear that the improved accuracy of the proposed technique is limited to small distances, and its performance decreases rapidly with increasing distance to the target objects. For robot control, the results indicate that a gyro alone cannot improve the movement accuracy of the robotic system, due to a variable drift exhibited by the gyro during rotation. However, a gyro can be effective if used in combination with a magnetometer and some form of estimation mechanism such as a Kalman filter, which can correct the error in the gyro using the output of the magnetometer, resulting in a good estimate.
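The dead-reckoning speed model compared above corresponds to standard differential-drive kinematics; the sketch below shows such an odometry update. The wheel base, speeds and time step are made-up values, and the function is a generic model rather than the thesis implementation.

```python
import math

def dead_reckon(x, y, theta, v_left, v_right, wheel_base, dt):
    """Standard differential-drive odometry update (speed model, no gyro feedback)."""
    v = (v_right + v_left) / 2.0             # linear speed of the robot centre
    omega = (v_right - v_left) / wheel_base  # turn rate from the channel speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: a small speed difference between the channels, applied for 2 s.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(200):                         # 10 ms control steps (assumed)
    x, y, theta = dead_reckon(x, y, theta,
                              v_left=0.20, v_right=0.22,   # m/s, made-up speeds
                              wheel_base=0.30, dt=0.01)    # 30 cm wheel base, assumed
print(round(x, 3), round(y, 3), round(math.degrees(theta), 1))
```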
48

Kvalitetssäkring av fläckanalys för hypoidväxlar / Quality Assurance of Contact Area Spot in Hypoid Gears

Engdahl, Philip, Aspelin, Jesper January 2020 (has links)
The American company Meritor is a globally leading supplier of drivetrain, braking and mobility solutions for both industrial and commercial vehicles. Meritor HVS AB's factory in Lindesberg is primarily focused on the assembly of complete wheel axles and on the machining of components, including gears for hypoid gear sets, for heavy vehicles such as buses and trucks. Hypoid gears are used in rear axles because they handle high torque while offering high durability and relatively quiet operation. One step in the gear assembly process at Meritor is to fit the pinion and ring gear, two types of gears, so that they have good contact (mesh) between the teeth. A good contact surface is necessary so as not to shorten the life span of the gears and to ensure quiet operation. The contact is verified through ocular contact spot analysis by the assembly worker; this assessment can be difficult and may differ between assembly workers, which creates a risk of incorrect assessment. The aim of this project was to investigate how quality assurance of the contact spot analysis can be improved in Meritor's gear assembly. Vision systems are a method considered suitable and of particular interest to investigate. During the work, the current situation was mapped through observations, interviews and internal documents such as operation instructions, and quality assurance solutions were investigated through, among other things, a study visit. Five solution proposals were developed, and a combination of some of them would provide more complete quality assurance. In order to determine exactly which solution is most suitable, a requirements specification needs to be drawn up, which in turn may require several tests to establish the needs and requests.
49

Conception mixte d’un capteur d’images intelligent intégré à traitements locaux massivement parallèles / Mixed co-design for an integrated smart image sensor with massively parallel local image processing

Le hir, Juliette 14 December 2018 (has links)
Smart sensors allow embedded systems to analyse their environment without any transmission of raw data, which consumes a lot of power. This thesis therefore presents an image sensor integrating image processing tasks. Two figures of merit are introduced in order to classify the state of the art of smart imagers with respect to their versatility and their preservation of photosensitive area. This highlights a trade-off that this work aims to improve by using a macropixel approach: by sharing processing elements (PEs) between several pixels, processing tasks are both massively parallel and potentially more versatile for a given photosensitive area. An adaptation of spatial and temporal filtering matching such an architecture is proposed (downsampling by 3x3 and 2x2 pixels, respectively, for each processing task) and functionally validated. An architecture of asymmetric macropixels is then presented. The designed PE is an analog switched-capacitor circuit controlled by digital electronics outside the matrix. The sizing of the PE is discussed with respect to the trade-off between accuracy and area, and an approximate computing approach is adopted in our case. The proposed matrix of pixels and PEs is simulated using post-layout extracted views and shows good results on computed images for edge detection and temporal difference, with a 28% fill factor.
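To make the macropixel operations more concrete, the sketch below computes a spatial edge measure on a 3x3-downsampled image and a temporal difference on a 2x2-downsampled image, as in the adaptation described above. The block sizes follow the abstract; the block-averaging subsampling and the gradient formula are assumptions, and the frames are synthetic.

```python
import numpy as np

def block_mean(img, k):
    """Average k x k pixel blocks: one value per macropixel (assumed subsampling)."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def spatial_gradient(img):
    """Simple edge measure on the subsampled grid (stand-in for the spatial filter)."""
    gx = np.abs(np.diff(img, axis=1))[:-1, :]
    gy = np.abs(np.diff(img, axis=0))[:, :-1]
    return gx + gy

rng = np.random.default_rng(0)
frame_prev = rng.integers(0, 256, (120, 160)).astype(float)   # synthetic sensor frames
frame_curr = rng.integers(0, 256, (120, 160)).astype(float)

edges = spatial_gradient(block_mean(frame_curr, 3))                     # 3x3 macropixels
motion = np.abs(block_mean(frame_curr, 2) - block_mean(frame_prev, 2))  # 2x2 macropixels
print(edges.shape, motion.shape)   # macropixel-resolution edge and motion maps
```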
50

Vision-based Testbeds for Control System Applications

Sivilli, Robert 01 January 2012 (has links)
In the field of control systems, testbeds are a pivotal step in the validation and improvement of new algorithms for different applications. They provide a safe, controlled environment, typically with a significantly lower cost of failure than the final application. Vision systems provide non-intrusive methods of measurement that can be easily implemented for various setups and applications. This work presents methods for modeling, removing distortion, calibrating, and rectifying single- and two-camera systems, as well as two very different applications of vision-based control system testbeds: deflection control of shape memory polymers and trajectory planning for mobile robots. First, a testbed for the modeling and control of shape memory polymers (SMP) is designed. Red-green-blue (RGB) thresholding is used to assist in the webcam-based 3D reconstruction of points of interest. A PID-based controller is designed and shown to work with SMP samples, while state-space models are identified from step input responses. The models are used to develop a linear quadratic regulator that is shown to work in simulation. A simple-to-use graphical interface is also designed for fast and simple testing of a series of samples. Second, a robot testbed is designed to test new trajectory planning algorithms. A template-based predictive search algorithm is investigated to process the images obtained through a low-cost webcam vision system, which is used to monitor the testbed environment. A user-friendly graphical interface is also developed so that the functionalities of the webcam, robots, and optimizations are automated. The testbeds are used to demonstrate a wavefront-enhanced, B-spline-augmented virtual motion camouflage algorithm for single or multiple robots to navigate through an obstacle-dense and changing environment, while considering inter-vehicle conflicts, obstacle avoidance, nonlinear dynamics, and different constraints. In addition, it is expected that this testbed can be used to test different vehicle motion planning and control algorithms.
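As a hedged sketch of the RGB-thresholding step used to pick out points of interest before 3D reconstruction, the snippet below finds the centroid of a coloured marker with OpenCV. The colour bounds, the file name and the reference to `cv2.triangulatePoints` are illustrative assumptions, not the testbed's actual code.

```python
import cv2
import numpy as np

def marker_centroid(frame_bgr, lower_rgb, upper_rgb):
    """Threshold an RGB colour range and return the blob centroid in pixels."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    mask = cv2.inRange(rgb, lower_rgb, upper_rgb)             # binary mask of the colour
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:                                         # marker not visible
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]           # image coordinates (u, v)

# Hypothetical bounds for a red marker; tune per webcam and lighting.
lower = np.array([150, 0, 0], dtype=np.uint8)
upper = np.array([255, 80, 80], dtype=np.uint8)

frame = cv2.imread("webcam_frame.png")                        # placeholder file name
if frame is not None:
    print(marker_centroid(frame, lower, upper))
# Matching (u, v) detections from two calibrated webcams can then be triangulated
# (e.g. with cv2.triangulatePoints) to recover the 3D point of interest.
```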
