About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Technologies informatiques pour l'étude du comportement expérimental et numérique d'un assemblage poutre-poteau en béton armé / Information technologies for the study of the experimental and numerical behavior of a reinforced concrete beam-column joint

Iskef, Alaa Eddin 08 April 2016 (has links)
The behavior of reinforced concrete beam-column joints, and their influence on the strength of the overall structure under cyclic or seismic loading, has been the subject of several investigations in recent years. However, the behavior of this part of the structure remains far from mastered, owing to the complexity of an assembly that involves several physical phenomena, and to the lack of exhaustive experimental data. This work aims to build and provide a reliable and dense experimental database intended to serve as an experimental benchmark for the modeling and validation of the behavior of these assemblies.
162

Improved Stereo Vision Methods for FPGA-Based Computing Platforms

Fife, Wade S. 28 November 2011 (has links) (PDF)
Stereo vision is a useful yet challenging technology for a wide variety of applications. One of the greatest challenges is meeting the computational demands of stereo vision applications that require real-time performance. The FPGA (Field Programmable Gate Array) is a readily available technology that allows many stereo vision methods to be implemented while meeting the strict real-time performance requirements of some applications. Some of the best results have been obtained using non-parametric stereo correlation methods, such as the rank and census transforms. Yet relatively little work has been done to study these methods or to propose new algorithms based on the same principles for improved stereo correlation accuracy or reduced resource requirements. This dissertation describes the sparse census and sparse rank transforms, which significantly reduce the cost of implementation while maintaining, and in some cases improving, correlation accuracy. It also proposes the generalized census and generalized rank transforms, which open up a new class of stereo vision transforms and allow the stereo system to be further optimized, often reducing the hardware resource requirements. The proposed stereo methods are analyzed, providing both quantitative and qualitative results for comparison with existing algorithms. These results show that the computational complexity of local stereo methods can be significantly reduced while maintaining very good correlation accuracy. A hardware architecture for implementing the proposed algorithms is also described, and the actual resource requirements for the algorithms are presented. These results confirm that dramatic reductions in hardware resource requirements can be achieved while maintaining high stereo correlation accuracy.
This work also proposes the multi-bit census, which provides improved pixel discrimination compared to the census transform and leads to improved correlation accuracy in some stereo configurations. A rotation-invariant census transform is also proposed for applications where image rotation is possible.
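For readers unfamiliar with the census transform, here is a minimal NumPy sketch of the standard dense form and its Hamming-distance matching cost — this is a textbook illustration, not the dissertation's FPGA implementation, and the window size and bit ordering are arbitrary assumptions:

```python
import numpy as np

def census_transform(img, win=3):
    """Dense census transform: each pixel becomes a bit string encoding
    whether each neighbor in a win x win window is darker than the center."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode='edge')
    for dy in range(win):
        for dx in range(win):
            if dy == r and dx == r:
                continue  # skip the center pixel itself
            neighbor = padded[dy:dy + h, dx:dx + w]
            out = (out << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Per-pixel matching cost between two census images: Hamming distance."""
    x = np.bitwise_xor(c1, c2)
    bits = np.unpackbits(x.view(np.uint8).reshape(*x.shape, 8), axis=-1)
    return bits.sum(axis=-1)
```

The sparse variants described in the dissertation reduce cost by comparing only a subset of the window's pixels, shortening the bit strings and hence the comparison hardware.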
163

TYFLOS: A Wearable Navigation Prototype for the Blind and Visually Impaired; Design, Modelling and Experimental Results

Dakopoulos, Dimitrios 27 July 2009 (has links)
No description available.
164

An evaluation of the Amblyopia and Strabismus Questionnaire using Rasch analysis

Vianya-Estopa, Marta, Elliott, David B., Barrett, Brendan T. 01 May 2010 (has links)
PURPOSE. To evaluate whether the Amblyopia and Strabismus Questionnaire (A&SQ) is a suitable instrument for the assessment of vision-related quality of life (VR-QoL) in individuals with strabismus and/or amblyopia. METHODS. The A&SQ was completed by 102 individuals, all of whom had amblyopia, strabismus, or both. Rasch analysis was used to evaluate the usefulness of individual questionnaire items (i.e., questions); the response-scale performance; how well the items targeted VR-QoL; whether individual items showed response bias depending on factors such as whether strabismus was present; and dimensionality. RESULTS. Items relating to concerns about the appearance of the eyes were applicable only to those with strabismus, and many items showed large ceiling effects. The response scale showed disordered responses and underused response options, which improved after the number of response options was reduced from five to three. This change improved the discriminative ability of the questionnaire (the person separation index increased from 1.98 to 2.11). Significant bias was found between strabismic and nonstrabismic respondents. Separate Rasch analyses conducted for subjects with and without strabismus indicated that all A&SQ items seemed appropriate for individuals with strabismus (Rasch infit values between 0.60 and 1.40), but several items fitted the model poorly in amblyopes without strabismus. The A&SQ was not found to be unidimensional. CONCLUSIONS. The findings highlight the limitations of the A&SQ instrument in the assessment of VR-QoL in subjects with strabismus, and especially in those with amblyopia alone. The results suggest that separate instruments are needed to quantify VR-QoL in amblyopes with and without strabismus.
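For readers unfamiliar with Rasch analysis, a minimal sketch of the polytomous (Andrich rating-scale) model that underlies this kind of item analysis — a generic illustration, not the study's own fitting procedure; the threshold values are invented for demonstration:

```python
import math

def rating_scale_probs(theta, b, taus):
    """Andrich rating-scale model: probability of each response category for
    a person of ability theta on an item of difficulty b, given category
    thresholds taus (len(taus) + 1 response categories)."""
    logits = [0.0]
    for tau in taus:
        # cumulative logit for reaching the next category
        logits.append(logits[-1] + (theta - b - tau))
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Collapsing a 5-point scale to 3 points leaves fewer thresholds that
# must be properly ordered — one motivation for the change reported above.
five_point = rating_scale_probs(0.0, 0.0, [-1.5, -0.5, 0.5, 1.5])
three_point = rating_scale_probs(0.0, 0.0, [-1.0, 1.0])
```

Disordered thresholds (taus not increasing) are one symptom of the underused response options the study describes.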
165

Multilevel Datenfusion konkurrierender Sensoren in der Fahrzeugumfelderfassung / Multilevel data fusion of competing sensors for vehicle environment perception

Haberjahn, Mathias 21 November 2013 (has links)
This thesis contributes to increasing the accuracy and reliability of sensor-based recognition and tracking of objects in a vehicle's surroundings. Based on a detection system consisting of a stereo camera and a multi-layer laser scanner, newly developed procedures are introduced for the whole processing chain of the sensor data. In addition, a new framework is introduced for the fusion of heterogeneous sensor data; by combining the fusion results from the different processing levels, object detection can be improved. After a short description of the sensor setup, the developed procedures for calibration and mutual orientation of the sensor pair are presented. For the segmentation of the spatial point data, existing procedures are extended by incorporating the measurement accuracy and measurement characteristics of each sensor. In the subsequent object tracking, a new computation-optimized approach for associating object hypotheses is presented, along with a model for adaptive determination and tracking of an object reference point that exceeds classical tracking of the object center in track accuracy. The introduced fusion framework makes it possible to merge the sensor data at any of three processing levels (point, object, and track level). A sensor-independent approach for the low-level fusion of point data is presented, which delivers the most precise object description in comparison with the other fusion levels and the single sensors. For the higher fusion levels, new procedures were developed that exploit the competing sensor information to detect and reduce detection and processing errors. Finally, it is described how the error-reducing procedures of the upper fusion levels can be combined with the optimal object description of the lower fusion level for an optimal overall object determination. The effectiveness of the developed methods was verified by simulation or in real measurement scenarios.
166

Dense Stereo Reconstruction in a Field Programmable Gate Array

Sabihuddin, Siraj 30 July 2008 (has links)
Estimation of depth within an imaged scene can be formulated as a stereo correspondence problem. Software solutions tend to be too slow for high-frame-rate (i.e., >30 fps) performance, while hardware solutions can yield marked improvements. This thesis explores one such hardware implementation that generates dense binocular disparity estimates at frame rates of over 200 fps using a dynamic programming formulation (DPML) developed by Cox et al. A highly parameterizable field programmable gate array implementation of this architecture demonstrates equivalent accuracy while executing at significantly higher frame rates than current approaches. Existing hardware implementations for dense disparity estimation often use sum of squared differences, sum of absolute differences, or similar algorithms that typically perform poorly in comparison with DPML. The presented system runs at 248 fps at a resolution of 320 x 240 pixels and a disparity range of 128 pixels, a performance of 2.477 billion DPS.
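As a back-of-envelope check on the quoted throughput, disparity evaluations per second (DPS) for a local stereo pipeline are roughly pixels per frame times candidate disparities times frame rate; the naive product lands slightly below the quoted 2.477 billion, the gap presumably reflecting pipeline details this simple model ignores:

```python
def disparity_evals_per_second(width, height, disparities, fps):
    """Naive throughput model: one cost evaluation per pixel,
    per candidate disparity, per frame."""
    return width * height * disparities * fps

# 320 x 240 pixels, 128-pixel disparity range, 248 fps
print(disparity_evals_per_second(320, 240, 128, 248))  # 2437939200, ~2.44 billion
```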
168

Medical Image Registration and Stereo Vision Using Mutual Information

Fookes, Clinton Brian January 2003 (has links)
Image registration is a fundamental problem found in a diverse range of fields within the research community. It is used in areas such as engineering, science, medicine, robotics, computer vision and image processing, which often require a spatial mapping between sets of data. Registration plays a crucial role in the medical imaging field, where continual advances in imaging modalities, including MRI, CT and PET, allow the generation of 3D images that explicitly outline detailed in vivo information of not only human anatomy but also human function. Mutual Information (MI) is a popular entropy-based similarity measure which has found use in a large number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications, as it does not assume the existence of any specific relationship between image intensities; it assumes only a statistical dependence. The basic concept behind any approach using MI is to find a transformation which, when applied to an image, will maximise the MI between the two images. This thesis presents research using MI in three major topics encompassed by the computer vision and medical imaging fields: rigid image registration, stereo vision, and non-rigid image registration. In the rigid domain, a novel gradient-based registration algorithm (MIGH) is proposed that uses Parzen windows to estimate image density functions and Gauss-Hermite quadrature to estimate the image entropies. This quadrature technique provides an effective and efficient way of estimating entropy while bypassing the need to draw a second sample of image intensities (a procedure required in previous Parzen-based MI registration approaches). The MIGH algorithm achieves results identical to those of current state-of-the-art MI-based techniques.
These results are achieved using half the previously required sample sizes, thus doubling the statistical power of the registration algorithm. Furthermore, the MIGH technique improves algorithm complexity by up to an order of N, where N represents the number of samples extracted from the images. In stereo vision, a popular passive method of depth perception, new extensions have been proposed to increase the robustness of MI-based stereo matching algorithms. Firstly, prior probabilities are incorporated into the MI measure to considerably increase the statistical power of the matching windows. The statistical power, directly related to the number of samples, can become too low when small matching windows are utilised. These priors, which are calculated from the global joint histogram, are tuned to a two-level hierarchical approach. A 2D match surface, in which the match score is computed for every possible combination of template and matching windows, is also utilised to enforce left-right consistency and uniqueness constraints. These additions to MI-based stereo matching significantly enhance the algorithm's ability to detect correct matches while decreasing computation time and improving accuracy, particularly when matching across multi-spectral stereo pairs. MI has also recently found use in the non-rigid domain, driven by the need to compute multimodal non-rigid transformations. The viscous fluid algorithm is perhaps the best method for recovering large local mis-registrations between two images. However, this model can only be used on images from the same modality, as it assumes similar intensity values between images. Consequently, a hybrid MI-fluid algorithm is proposed for multimodal non-rigid registration.
MI is incorporated via a block matching procedure that generates a sparse deformation field to drive the viscous fluid algorithm. This algorithm is compared to two other popular local registration techniques, namely Gaussian convolution and the thin-plate spline warp, and is shown to produce comparable results. An improved block matching procedure is also proposed, whereby a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler is used to optimally locate grid points of interest. These grid points have a larger concentration in regions of high information and a lower concentration in regions of low information; previous methods utilise only a uniform distribution of grid points throughout the image.
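To make the similarity measure concrete, here is a minimal sketch of computing MI from a joint intensity histogram — the common textbook formulation, not the thesis's Parzen-window/Gauss-Hermite estimator; the bin count is an arbitrary assumption:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in nats) between two intensity arrays, estimated from their joint
    histogram: sum over bins of p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```

Registration then amounts to searching over transformations T for the one that maximises `mutual_information(fixed, warp(moving, T))`.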
169

Bearing-only SLAM : a vision-based navigation system for autonomous robots

Huang, Henry January 2008 (has links)
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in that map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is its panoramic view of all the landmarks in the scene, and placing landmarks in a lawn to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating it. In the motion model, the error in the estimates of object positions accumulates, due mainly to wheel slippage, so quantifying the uncertainty of object positions accurately is a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of a landmark's position should be uniform along the observed bearing; existing methods that approximate the PDF with a Gaussian estimation do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are: 1. A bearing-only SLAM method not requiring odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry), so the uncertainty of the estimated landmark positions depends on the vision error only, instead of a combination of odometry and vision errors. 2. The transformation of the spatial uncertainty of objects.
This thesis introduces a novel method for translating the spatial uncertainty of objects estimated in a moving frame attached to the robot into the global frame attached to the static landmarks in the environment. 3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to a PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework such as a particle filtering system. The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The proposed system can be adopted in other domestic robots as well, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
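As an illustration of the geometry involved — not the thesis's own estimator — a landmark position can be recovered from two bearing observations by intersecting the corresponding rays; the function name and frame conventions are illustrative assumptions:

```python
import numpy as np

def triangulate_from_bearings(p1, theta1, p2, theta2):
    """Intersect two bearing rays, observed from robot positions p1 and p2
    with global-frame bearing angles theta1 and theta2, to estimate a 2D
    landmark position. Raises for (near-)parallel bearings."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 == p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * d1
```

With noisy bearings each intersection is only one sample of the landmark position; the thesis's point is that along each observed bearing the landmark PDF should remain uniform in range, rather than being collapsed into a Gaussian.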
170

Vision-based moving pedestrian recognition from imprecise and uncertain data / Reconnaissance de piétons par vision à partir de données imprécises et incertaines

Zhou, Dingfu 05 December 2014 (has links)
Vision-based Advanced Driver Assistance Systems (ADAS) are a complex and challenging task in real-world traffic scenarios. An ADAS aims at perceiving and understanding the surrounding environment of the ego-vehicle and providing necessary assistance to the driver in emergency situations. In this thesis, we focus on detecting and recognizing moving objects, because their dynamics make them more unpredictable, and therefore more dangerous, than static ones. Detecting these objects, estimating their positions and recognizing their categories are important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving object detection and recognition based on vision sensors alone. The proposed approach can detect any kind of moving object based on two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF), defined as the residual image changes caused by moving objects once camera motion has been compensated. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and disparity computation are also considered. The main steps of the algorithm are the following: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is calculated using a first-order error propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the segmentation of the moving objects, while the bounding boxes of the moving objects are generated from the U-disparity map.
After obtaining the bounding box of a moving object, we want to classify it as a pedestrian or not. Compared to supervised classification algorithms (such as boosting and SVMs), which require a large amount of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled instances. First, the labeled instances are used to estimate probabilistic class labels for the unlabeled instances using Gaussian Mixture Models, after a dimension reduction step performed via Principal Component Analysis. Then, a boosting strategy is applied to decision stumps trained on the resulting soft-labeled instances. The performance of the proposed method is evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem. Finally, both the moving object detection and recognition algorithms are tested on the public KITTI dataset, and the experimental results show that the proposed methods achieve good performance in different urban driving scenarios.
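To illustrate the soft-labeling step, a minimal sketch using one diagonal Gaussian per class in place of full Gaussian Mixture Models — a simplification of the approach described above, with all names illustrative:

```python
import numpy as np

def soft_labels(X_lab, y_lab, X_unl):
    """Estimate probabilistic class labels for unlabeled samples X_unl from
    per-class diagonal Gaussians fitted on the small labeled set."""
    classes = np.unique(y_lab)
    log_liks = []
    for c in classes:
        Xc = X_lab[y_lab == c]
        mu = Xc.mean(axis=0)
        var = Xc.var(axis=0) + 1e-6            # regularize to avoid div-by-zero
        ll = -0.5 * (((X_unl - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
        log_liks.append(ll)
    L = np.stack(log_liks, axis=1)
    L -= L.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(L)
    return p / p.sum(axis=1, keepdims=True)    # per-class posterior per sample
```

These posteriors can then serve as instance weights when training the weak learners (decision stumps) inside a boosting loop.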
