141

Motion and shape from apparent flow.

January 2013
Determining general camera motion and reconstructing the depth map of the imaged scene from a captured video is important for computer vision and various robotics tasks including visual control 
and autonomous navigation. A camera (or a cluster of cameras) is usually mounted on the end-effector of a robot arm when performing the above tasks. Determining the relative geometry between the camera frame and the end-effector frame, commonly referred to as hand-eye calibration, is essential to proper operation in visual control. Similarly, determining the relative geometry of multiple cameras is important to various applications requiring a multi-camera rig. / The relative motion between an observer and the imaged scene generally induces apparent flow in the video. The difficulty of the problem lies mainly in that the flow pattern directly observable in the video is generally not the full flow field induced by the motion, but only the partial component of it that is orthogonal to the iso-brightness contour of the spatial image intensity profile. This partial flow field is known as the normal flow field. This thesis addresses several important problems in computer vision: determining camera motion, recovering the depth map, and performing hand-eye calibration directly from the apparent flow (normal flow) pattern in the video data, not from the full flow interpolated from it. This approach makes a number of significant contributions. It does not require interpolating the flow field and in turn does not demand that the imaged scene be smooth. In contrast to optical flow, no sophisticated optimization procedures that handle flow discontinuities are required; such techniques are generally computationally expensive. It also breaks the classical chicken-and-egg problem between scene depth and camera motion: no prior knowledge about the locations of the discontinuities is required for motion determination. In this thesis, several direct methods are proposed to determine camera motion using three different types of imaging systems, namely a monocular camera, a stereo camera, and a multi-camera rig. 
/ This thesis begins with the Apparent Flow Positive Depth (AFPD) constraint to determine the motion parameters using all observable normal flows from a monocular camera. The constraint presents itself as an optimization problem to estimate the motion parameters. An iterative process in a constrained dual coarse-to-fine voting framework on the motion parameter space is used to exploit the constraint. / Due to the finite video sampling rate, the extracted normal flow field is generally more accurate in its direction component than in its magnitude. This thesis proposes two constraints: one related to the direction component of the normal flow field - the Apparent Flow Direction (AFD) constraint, and the other to the magnitude component of the field - the Apparent Flow Magnitude (AFM) constraint, to determine motion. The first constraint presents itself as a system of linear inequalities to bound the direction of the motion parameters; the second uses the globality of the rotational magnitude across all image positions to constrain the motion parameters further. A two-stage iterative process in a coarse-to-fine framework on the motion parameter space is used to exploit the two constraints. / Yet even without an interpolation step, normal flow is only raw information extracted locally, and it generally suffers from extraction error arising from the finite image resolution and video sampling rate. This thesis explores a remedy to the problem, which is to increase the visual field of the imaging system by fixing a number of cameras together to form an approximate spherical eye. With a substantially widened visual field, the normal flow data points are available in much greater number, which can be used to combat the local flow extraction error at each image point. 
More importantly, the directions of the translation and rotation components in general motion can be separately estimated with the use of the novel Apparent Flow Separation (AFS) and Extended Apparent Flow Separation (EAFS) constraints. / Instead of using a monocular camera or a spherical imaging system, stereo vision contributes another visual cue to determine the magnitude of translation and the depth map without the ambiguity of arbitrary scaling. The conventional approach in stereo vision is to determine feature correspondences across the two input images. However, establishing correspondences is often difficult. This thesis explores two direct methods to recover the complete camera motion from the stereo system without explicit point-to-point correspondence matching. The first method extends the AFD and AFM constraints to a stereo camera, and provides a robust geometrical method to determine the translation magnitude. The second method, which requires the stereo image pair to have a largely overlapping field of view, provides a closed-form solution requiring no iterative computation. Once the motion parameters are determined, the depth map can be reconstructed without difficulty. The depth map resulting from normal flows is generally sparse in nature. We can interpolate the depth map and then utilize it as an initial estimate in a conventional TV-L₁ framework. The result is not only better reconstruction performance but also faster computation. / Calibration of hand-eye geometry is usually based on feature correspondences. This thesis presents an alternative method that uses normal flows generated from an active camera system to perform self-calibration. In order to make the method more robust to noise, the strategy is to first recover the direction part of the hand-eye geometry using the direction component of the flow field, which is more noise-immune. 
Outliers are then detected using some intrinsic properties of the flow field together with the partially recovered hand-eye geometry. The final solution is refined using a robust method. The method can also be used to determine the relative geometry of multiple cameras without demanding overlap in their visual fields. / Hui, Tak Wai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 159-165). / Abstracts in English and Chinese. / Acknowledgements --- p.i / Abstract --- p.ii / Lists of Figures --- p.xiii / Lists of Tables --- p.xix / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Motivation --- p.4 / Chapter 1.3 --- Research Objectives --- p.6 / Chapter 1.4 --- Thesis Outline --- p.7 / Chapter Chapter 2 --- Literature Review --- p.10 / Chapter 2.1 --- Introduction --- p.10 / Chapter 2.2 --- Recovery of Optical Flows --- p.10 / Chapter 2.3 --- Egomotion Estimation Based on Optical Flow Field --- p.14 / Chapter 2.3.1 --- Bilinear Constraint --- p.14 / Chapter 2.3.2 --- Subspace Method --- p.15 / Chapter 2.3.3 --- Partial Search Method --- p.16 / Chapter 2.3.4 --- Fixation --- p.17 / Chapter 2.3.5 --- Region Alignment --- p.17 / Chapter 2.3.6 --- Linearity and Divergence Properties of Optical Flows --- p.18 / Chapter 2.3.7 --- Constraint Lines and Collinear Points --- p.18 / Chapter 2.3.8 --- Multi-Camera Rig --- p.19 / Chapter 2.3.9 --- Discussion --- p.21 / Chapter 2.4 --- Determining Egomotion Using Direct Methods --- p.22 / Chapter 2.4.1 --- Introduction --- p.22 / Chapter 2.4.2 --- Classical Methods --- p.23 / Chapter 2.4.3 --- Pattern Matching --- p.24 
/ Chapter 2.4.4 --- Search Subspace Method --- p.25 / Chapter 2.4.5 --- Histogram-Based Method --- p.26 / Chapter 2.4.6 --- Multi-Camera Rig --- p.26 / Chapter 2.4.7 --- Discussion --- p.27 / Chapter 2.5 --- Determining Egomotion Using Feature Correspondences --- p.28 / Chapter 2.6 --- Hand-Eye Calibration --- p.30 / Chapter 2.7 --- Summary --- p.31 / Chapter Chapter 3 --- Determining Motion from Monocular Camera Using Merely the Positive Depth Constraint --- p.32 / Chapter 3.1 --- Introduction --- p.32 / Chapter 3.2 --- Related Works --- p.33 / Chapter 3.3 --- Background --- p.34 / Chapter 3.3 --- Apparent Flow Positive Depth (AFPD) Constraint --- p.39 / Chapter 3.4 --- Numerical Solution to AFPD Constraint --- p.40 / Chapter 3.5 --- Constrained Coarse-to-Fine Searching --- p.40 / Chapter 3.6 --- Experimental Results --- p.43 / Chapter 3.7 --- Conclusion --- p.47 / Chapter Chapter 4 --- Determining Motion from Monocular Camera Using Direction and Magnitude of Normal Flows Separately --- p.48 / Chapter 4.1 --- Introduction --- p.48 / Chapter 4.2 --- Related Works --- p.50 / Chapter 4.3 --- Apparent Flow Direction (AFD) Constraint --- p.51 / Chapter 4.3.1 --- The Special Case: Pure Translation --- p.51 / Chapter 4.3.1.1 --- Locus of Translation Using Full Flow as a Constraint --- p.51 / Chapter 4.3.1.2 --- Locus of Translation Using Normal Flow as a Constraint --- p.53 / Chapter 4.3.2 --- The Special Case: Pure Rotation --- p.54 / Chapter 4.3.2.1 --- Locus of Rotation Using Full Flow as a Constraint --- p.54 / Chapter 4.3.2.2 --- Locus of Rotation Using Normal Flow as a Constraint --- p.54 / Chapter 4.3.3 --- Solving the System of Linear Inequalities for the Two Special Cases --- p.55 / Chapter 4.3.5 --- Ambiguities of AFD Constraint --- p.59 / Chapter 4.4 --- Apparent Flow Magnitude (AFM) Constraint --- p.60 / Chapter 4.5 --- Putting the Two Constraints Together --- p.63 / Chapter 4.6 --- Experimental Results --- p.65 / Chapter 4.6.1 --- Simulation --- p.65 / 
Chapter 4.6.2 --- Video Data --- p.67 / Chapter 4.6.2.1 --- Pure Translation --- p.67 / Chapter 4.6.2.2 --- General Motion --- p.68 / Chapter 4.7 --- Conclusion --- p.72 / Chapter Chapter 5 --- Determining Motion from Multi-Cameras with Non-Overlapping Visual Fields --- p.73 / Chapter 5.1 --- Introduction --- p.73 / Chapter 5.2 --- Related Works --- p.75 / Chapter 5.3 --- Background --- p.76 / Chapter 5.3.1 --- Image Sphere --- p.77 / Chapter 5.3.2 --- Planar Case --- p.78 / Chapter 5.3.3 --- Projective Transformation --- p.79 / Chapter 5.4 --- Constraint from Normal Flows --- p.80 / Chapter 5.5 --- Approximation of Spherical Eye by Multiple Cameras --- p.81 / Chapter 5.6 --- Recovery of Motion Parameters --- p.83 / Chapter 5.6.1 --- Classification of a Pair of Normal Flows --- p.84 / Chapter 5.6.2 --- Classification of a Triplet of Normal Flows --- p.86 / Chapter 5.6.3 --- Apparent Flow Separation (AFS) Constraint --- p.87 / Chapter 5.6.3.1 --- Constraint to Direction of Translation --- p.87 / Chapter 5.6.3.2 --- Constraint to Direction of Rotation --- p.88 / Chapter 5.6.3.3 --- Remarks about the AFS Constraint --- p.88 / Chapter 5.6.4 --- Extension of Apparent Flow Separation Constraint (EAFS) --- p.89 / Chapter 5.6.4.1 --- Constraint to Direction of Translation --- p.90 / Chapter 5.6.4.2 --- Constraint to Direction of Rotation --- p.92 / Chapter 5.6.5 --- Solution to the AFS and EAFS Constraints --- p.94 / Chapter 5.6.6 --- Apparent Flow Magnitude (AFM) Constraint --- p.96 / Chapter 5.7 --- Experimental Results --- p.98 / Chapter 5.7.1 --- Simulation --- p.98 / Chapter 5.7.2 --- Real Video --- p.103 / Chapter 5.7.2.1 --- Using Feature Correspondences --- p.108 / Chapter 5.7.2.2 --- Using Optical Flows --- p.108 / Chapter 5.7.2.3 --- Using Direct Methods --- p.109 / Chapter 5.8 --- Conclusion --- p.111 / Chapter Chapter 6 --- Motion and Shape from Binocular Camera System: An Extension of AFD and AFM Constraints --- p.112 / Chapter 6.1 --- Introduction --- p.112 / 
Chapter 6.2 --- Related Works --- p.112 / Chapter 6.3 --- Recovery of Camera Motion Using Search Subspaces --- p.113 / Chapter 6.4 --- Correspondence-Free Stereo Vision --- p.114 / Chapter 6.4.1 --- Determination of Full Translation Using Two 3D Lines --- p.114 / Chapter 6.4.2 --- Determination of Full Translation Using All Normal Flows --- p.115 / Chapter 6.4.3 --- Determination of Full Translation Using a Geometrical Method --- p.117 / Chapter 6.5 --- Experimental Results --- p.119 / Chapter 6.5.1 --- Synthetic Image Data --- p.119 / Chapter 6.5.2 --- Real Scene --- p.120 / Chapter 6.6 --- Conclusion --- p.122 / Chapter Chapter 7 --- Motion and Shape from Binocular Camera System: A Closed-Form Solution for Motion Determination --- p.123 / Chapter 7.1 --- Introduction --- p.123 / Chapter 7.2 --- Related Works --- p.124 / Chapter 7.3 --- Background --- p.125 / Chapter 7.4 --- Recovery of Camera Motion Using a Linear Method --- p.126 / Chapter 7.4.1 --- Region-Correspondence Stereo Vision --- p.126 / Chapter 7.3.2 --- Combined with Epipolar Constraints --- p.127 / Chapter 7.4 --- Refinement of Scene Depth --- p.131 / Chapter 7.4.1 --- Using Spatial and Temporal Constraints --- p.131 / Chapter 7.4.2 --- Using Stereo Image Pairs --- p.134 / Chapter 7.5 --- Experiments --- p.136 / Chapter 7.5.1 --- Synthetic Data --- p.136 / Chapter 7.5.2 --- Real Image Sequences --- p.137 / Chapter 7.6 --- Conclusion --- p.143 / Chapter Chapter 8 --- Hand-Eye Calibration Using Normal Flows --- p.144 / Chapter 8.1 --- Introduction --- p.144 / Chapter 8.2 --- Related Works --- p.144 / Chapter 8.3 --- Problem Formulation --- p.145 / Chapter 8.3 --- Model-Based Brightness Constraint --- p.146 / Chapter 8.4 --- Hand-Eye Calibration --- p.147 / Chapter 8.4.1 --- Determining the Rotation Matrix R --- p.148 / Chapter 8.4.2 --- Determining the Direction of Position Vector T --- p.149 / Chapter 8.4.3 --- Determining the Complete Position Vector T --- p.150 / Chapter 8.4.4 --- Extrinsic 
Calibration of a Multi-Camera Rig --- p.151 / Chapter 8.5 --- Experimental Results --- p.151 / Chapter 8.5.1 --- Synthetic Data --- p.151 / Chapter 8.5.2 --- Real Image Data --- p.152 / Chapter 8.6 --- Conclusion --- p.153 / Chapter Chapter 9 --- Conclusion and Future Work --- p.154 / Related Publications --- p.158 / Bibliography --- p.159 / Appendix --- p.166 / Chapter A --- Apparent Flow Direction Constraint --- p.166 / Chapter B --- Ambiguity of AFD Constraint --- p.168 / Chapter C --- Relationship between the Angle Subtended by any two Flow Vectors in Image Plane and the Associated Flow Vectors in Image Sphere --- p.169
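The normal flow on which this thesis builds has a direct local expression: from the brightness constancy constraint, only the flow component along the spatial intensity gradient (orthogonal to the iso-brightness contour) is recoverable at each pixel. A minimal sketch of that computation, assuming simple finite-difference gradients and an illustrative gradient threshold (not the thesis's actual extraction pipeline):

```python
import numpy as np

def normal_flow(prev_frame, next_frame, eps=1e-3):
    """Per-pixel normal flow from two grayscale frames.

    The brightness constancy constraint Ix*u + Iy*v + It = 0 only
    determines the flow component along the gradient (Ix, Iy):
        n = -It * (Ix, Iy) / (Ix^2 + Iy^2).
    Pixels with a near-zero gradient carry no flow information
    and are masked out.
    """
    Ix = np.gradient(prev_frame, axis=1)   # spatial derivative, x
    Iy = np.gradient(prev_frame, axis=0)   # spatial derivative, y
    It = next_frame - prev_frame           # temporal derivative
    g2 = Ix**2 + Iy**2
    valid = g2 > eps                       # mask flat regions
    safe = np.where(valid, g2, 1.0)        # avoid division by ~0
    u = np.where(valid, -It * Ix / safe, 0.0)
    v = np.where(valid, -It * Iy / safe, 0.0)
    return u, v, valid
```

On a synthetic intensity ramp translated by one pixel, the recovered normal flow equals the true shift, since the motion there lies entirely along the gradient.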
142

Styrning av visionkameror för positionsbestämning samt programmering av användargränssnitt : Försvarets materielverk FMV, Test & Evaluering / Control of vision cameras for positioning and programming of a user interface : Swedish Defence Materiel Administration FMV, Test and Evaluation

Vedin, Markus, Wik, Jonathan January 2018
This thesis was carried out at the Swedish Defence Materiel Administration (FMV) in Karlsborg together with the University of Skövde. The 30-credit thesis falls within the main area of automation technology. FMV in Karlsborg performs various tests of military systems. For one of these tests, industrial cameras are used to determine the positions of objects. To operate these cameras, FMV uses demo software controlled from laptops. The demo software can be improved: it contains several settings that are never used, and it is adapted to control only one camera. The hardware that controls the cameras can also be improved, since the requested frame rate is not achieved. The purpose of this thesis is to improve the existing hardware for collecting the images from the cameras and to develop a new graphical user interface for the cameras. To make better use of the existing hardware, information about the different parts of the computers was collected to identify bottlenecks. This was done by studying books and technical reports and by testing the different computers. Software to control two cameras has been written, and a recommended platform has been described to FMV. The performance of the cameras has been improved with the new software, and controlling the cameras is now easier.
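The frame-rate shortfall that motivated the bottleneck analysis can be bounded with simple arithmetic: an uncompressed video stream needs resolution × frame rate × bytes per pixel of link bandwidth. A sketch with illustrative figures (not the actual FMV cameras):

```python
def required_bandwidth_mbps(width, height, fps, bytes_per_pixel):
    """Raw (uncompressed) video bandwidth in megabits per second."""
    return width * height * fps * bytes_per_pixel * 8 / 1e6

# Illustrative: a 1920x1080 8-bit mono camera at 100 fps needs
# about 1659 Mbit/s, which exceeds Gigabit Ethernet (1000 Mbit/s),
# so the link itself would cap the achievable frame rate.
rate = required_bandwidth_mbps(1920, 1080, 100, 1)
```

A calculation like this quickly tells whether the camera interface, rather than the CPU or disk, is the component limiting the achieved frame rate.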
143

Cidade vigiada: segurança e controle em tempos de biopoder / City surveillance: security and control in times of biopower

Oliveira, Ludmilla Alves de 23 September 2013 (has links)
The contemporary world lives in a culture of fear, which has culminated in a society under constant surveillance. Surveillance cameras are part of the urban scenario, deemed necessary for the exercise of control and security. This work aims to identify how the individual is constituted as a subject in the face of surveillance, insecurity, fear, and the social conflicts shaped by contemporary capitalism. This is a qualitative, exploratory study that sought to understand the current universe of control and surveillance established in the contemporary world. The study of the use of surveillance cameras in Goiânia adopts discourse analysis (DA), based on the work of Eni P. Orlandi (2005), as its theoretical and methodological framework. All the notions and theories evoked, such as power, knowledge, subjectivity and discourse, in the view of authors such as Foucault (2009; 2008; 2007; 2006; 1999a; 1999b; 1987), Guattari and Rolnik (2011; 1992), Deleuze (2005; 2001; 1992) and Agamben (2009), are developed in the theoretical chapters and revisited during the analysis. 
Passers-by and traders present in the monitored areas, as well as representatives of the central monitoring station, were interviewed. Throughout the interviews and analysis, attention was directed to the subject's processes of subjectivation, their processes of subjection (acceptance and resistance/biopower), and the forms of knowledge and power present in the relation between the subject and the surveillance cameras. From these categories of analysis and the interviews, the presence of a hegemonic discourse was observed, in which the subject is constituted by the relations of force and the modes of coercion and control that involve them, developing a behavior characteristic of biopower. However, this same subject, constituted according to the environment in which they live, is also one who was not characterized by a singular behavior, by a way of making themselves a subject of their own; they become, therefore, the mass of a hegemonic discourse, absolutely controllable by a security system that is sold only as something that benefits society. Keywords: Subjectivity. Surveillance. Cameras. Insecurity. Urban space.
144

Assessment Of Using A Life-Logging Wearable Camera As A Tool For Determining Dietary Intake In Free Living Non-Communicative Individuals

Cress, Eileen M., Wooliver, O. G., Evans, L. T., DePaoli, C. M., Stafford, J. M., Clark, W. Andrew 17 October 2015
No description available.
145

Conception et réalisation de caméras plénoptiques pour l'apport d'une vision 3D à un imageur infrarouge mono plan focal / Design and implementation of cooled infrared cameras with single focal plane array depth estimation capability

Cossu, Kevin 23 November 2018
For a few years now, infrared cameras have been following the same miniaturization trend as visible cameras. 
Today, this miniaturization is nearing a physical limit, leading the community to take a different approach called functionalization: bringing advanced imaging capabilities to the system. For infrared cameras, one of the most desired functions is 3D vision. This could be used to give soldiers a passive telemetry tool or to help UAVs navigate a complex environment, even at night. However, high-performance infrared cameras are expensive, so multiplying the number of cameras would not be an acceptable way to bring 3D vision to these systems. That is why this work focuses on bringing 3D vision to cooled infrared cameras using only a single focal plane array. During this PhD, I first identified plenoptic technology as the most suitable for our need: 3D vision with a single cooled infrared sensor. I showed that integrating a microlens array inside the dewar could bring this function to the infrared region. I then developed a complete design model for such a camera and used it to design and build a cooled infrared plenoptic camera. Finally, I created a method to characterize the camera and integrated it into the image processing algorithms needed to generate refocused images and derive the distance of objects in the scene.
146

POLICE OFFICER PERCEPTIONS OF ORGANIZATIONAL JUSTICE AND BODY-WORN CAMERAS: A CIVILIZING EFFECT?

Naoroz, Carolyn, Ph.D. 01 January 2018
This research sought to understand the potential association between officer perceptions of organizational justice and officer perceptions of body-worn cameras (BWCs). A questionnaire was administered to a convenience sample of 362 officers from the 750 sworn personnel of the Richmond Police Department in Richmond, VA, yielding a response rate of 91% and representing 44% of the department’s sworn employees. This study extends prior work by partially replicating a previous BWC survey conducted by leading body-worn camera scholars, utilizing a large sample from an urban mid-Atlantic police department. It also extends prior work on officer perceptions of organizational justice by examining officer perceptions of personal behavior modifications motivated by BWCs. Findings indicate that officers had positive general perceptions of BWCs but did not perceive that their own behavior would change due to wearing a BWC. Officers reported high perceptions of self-legitimacy and mixed perceptions of organizational justice; for example, although three quarters of respondents (74.6%) felt that command staff generally treats employees with respect, less than a third felt that command staff explained the reasons for their decisions (29.1%) and that employees had a voice in agency decisions (29.7%), indicating areas for improvement in agency communication. Exploratory factor analysis yielded three separate organizational justice factors: procedural justice, distributive justice, and interactional justice. Regression analyses indicated that only procedural justice had a significant association with officers’ general perceptions of BWCs after controlling for officer demographics and perceptions of self-legitimacy (β = .20, p < .001), and there were no significant correlations between officer perceptions of organizational justice constructs and their perceptions of personal behavior modification motivated by BWCs. 
Policy recommendations include quarterly command staff attendance at precinct roll calls to improve internal department communication and an evaluation of the promotion process to improve officer perceptions of organizational justice. Practitioner/researcher partnerships are recommended to realize the full potential of BWC video data in improving department training and policies.
147

The Social World Through Infants’ Eyes : How Infants Look at Different Social Figures

Schmitow, Clara A. January 2012
This thesis aims to study how infants actively look at different social figures: parents and strangers. To study infants’ looking behavior in “live” situations, new methods to record looking behavior were tested. Study 1 developed a method to record looking behavior in “live” situations: a head-mounted camera. This method was calibrated for a number of angles and then used to measure how infants look at faces and objects in two “live” situations, a conversation and a joint action. High reliability was found for the head-mounted camera in horizontal positions, and it proved usable in a number of “live” situations with infants from 6 to 14 months of age. In Study 2, the head-mounted camera and a static camera were used in a “live” ambiguous situation to study infants’ preferences to refer to and to use the information from parents and strangers. The results from Experiment 1 of Study 2 showed that if no information is provided in ambiguous situations in the lab, infants at 10 months of age look more at the experimenter than at the parent. Further, Experiment 2 of Study 2 showed that the infants also used more of the emotional information provided by the experimenter than by the parent to regulate their behavior. In Study 3, looking behavior was analyzed in detail when infants looked at pictures of their parents’ and strangers’ emotional facial expressions. Corneal eye tracking was used to record looking. In this study, the influence of identity, gender, emotional expressions and parental leave on looking behavior was analyzed. The results indicated that identity and the experience of looking at others influence how infants discriminate emotions in pictures of facial expressions. Fourteen-month-old infants who had been on parental leave with both parents discriminated more emotional expressions in strangers than infants who had only one parent on leave. 
Further, they reacted with larger pupil dilation toward the parent who was currently on parental leave than toward the parent who was not. Finally, fearful facial expressions were scanned more broadly than neutral or happy ones. The results of these studies indicate that infants discriminate between mothers’, fathers’, and strangers’ emotional facial expressions and use other people’s expressions to regulate their behavior. In addition, a new method, the head-mounted camera, was shown to capture infants’ looking behavior in “live” situations.
148

Robust Servo Tracking with Divergent Trinocular Cameras

Chang, Chin-Kuei 30 July 2007 (has links)
It is well known that the architecture of insect compound eyes gives insects an outstanding capability for precise and efficient observation of moving objects. If this principle can be transferred to engineering applications, significant improvement in the visual tracking of moving objects can be expected. The apparent motion of brightness patterns in a sequence of images, caused by the relative velocity between the camera and the environment, is called optical flow. The advantage of optical-flow-based visual servo methods is that the features of the moving object do not have to be known in advance, so they can be applied to general positioning and tracking tasks. The purpose of this thesis is to develop a visual servo system with trinocular cameras. To mimic the configuration of insect compound eyes, a divergent arrangement of the three cameras is adopted. To overcome possible difficulties with unknown or uncertain parameters, an image servo technique using a robust discrete-time sliding-mode control algorithm is developed to track an object moving in 2D space.
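The brightness-constancy idea behind optical flow can be made concrete with a minimal sketch. This is a generic Lucas-Kanade-style least-squares estimator, not the estimator used in the thesis; the window size and test scene are illustrative assumptions:

```python
import numpy as np

def lucas_kanade_flow(I0, I1, x, y, win=7):
    """Estimate the optical flow (u, v) at pixel (x, y) by solving the
    brightness-constancy equation Ix*u + Iy*v + It = 0 in least squares
    over a small window (the classic Lucas-Kanade formulation)."""
    half = win // 2
    # Spatial gradients (central differences) and temporal difference.
    Iy_full, Ix_full = np.gradient(I0.astype(float))
    It_full = I1.astype(float) - I0.astype(float)
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    # Stack the per-pixel constraints of the window into one system.
    A = np.stack([Ix_full[sl].ravel(), Iy_full[sl].ravel()], axis=1)
    b = -It_full[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels per frame

# Synthetic check: a Gaussian blob shifted by one pixel in x.
X, Y = np.meshgrid(np.arange(40), np.arange(40))
I0 = np.exp(-((X - 20)**2 + (Y - 20)**2) / 50.0)
I1 = np.exp(-((X - 21)**2 + (Y - 20)**2) / 50.0)
u, v = lucas_kanade_flow(I0, I1, x=23, y=20)  # u near 1, v near 0
```

Because such flow estimates need no prior object model, a servo loop can feed them directly into a control law, which is what motivates pairing flow estimation with a robust controller.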
149

Autonomous Morphometrics using Depth Cameras for Object Classification and Identification / Autonom Morphometri med Djupkameror för Objektklassificering och Identifiering

Björkeson, Felix January 2013 (has links)
Identification of individuals has been approached with many different solutions around the world, using either biometric data or external means of verification such as ID cards or RFID tags. The advantage of biometric measurements is that they are directly tied to the individual and are usually unalterable. Acquiring dependable measurements is, however, challenging when the individuals are uncooperative. A dependable system should be able to deal with this and still produce reliable identifications. The system proposed in this thesis can autonomously classify uncooperative specimens from depth data. The data were acquired from a depth camera mounted in an uncontrolled environment, where it was allowed to record continuously for two weeks. This requires stable data-extraction and normalization algorithms to produce good representations of the specimens. Robust descriptors can then be extracted from each sample of a specimen and, together with different classification algorithms, used to train or validate the system. Even with as many as 138 different classes, the system achieves high recognition rates. Inspired by the research field of face recognition, the best classification algorithm, the method of fisherfaces, was able to recognize 99.6% of the validation samples, followed by two variations of the method of eigenfaces, which achieved recognition rates of 98.8% and 97.9%, respectively. These results affirm that the capabilities of the system are adequate for a commercial implementation.
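The fisherfaces idea (a PCA step to keep the within-class scatter invertible, then LDA, then nearest-centroid classification) can be sketched minimally. This is a textbook-style illustration, not the thesis's pipeline; the synthetic data and dimensions are assumptions:

```python
import numpy as np

def fisherfaces_fit(X, y, dim=None):
    """Fisherfaces-style projection: PCA to at most n - c components so the
    within-class scatter Sw is invertible, then LDA maximizing between-class
    over within-class scatter. X is (n_samples, n_features), y int labels."""
    classes = np.unique(y)
    n, c = len(y), len(classes)
    dim = dim or (c - 1)            # LDA yields at most c - 1 directions
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Wpca = Vt[: max(n - c, 1)].T    # PCA basis, (features, n - c)
    P = Xc @ Wpca
    # Scatter matrices in the PCA subspace.
    Sw = np.zeros((P.shape[1], P.shape[1]))
    Sb = np.zeros_like(Sw)
    gmean = P.mean(axis=0)
    for k in classes:
        Pk = P[y == k]
        mk = Pk.mean(axis=0)
        Sw += (Pk - mk).T @ (Pk - mk)
        d = (mk - gmean)[:, None]
        Sb += len(Pk) * (d @ d.T)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:dim]
    W = Wpca @ evecs[:, order].real  # combined projection
    cents = {k: ((X[y == k] - mean) @ W).mean(axis=0) for k in classes}
    return mean, W, cents

def fisherfaces_predict(x, mean, W, cents):
    """Classify by nearest class centroid in the discriminant subspace."""
    z = (x - mean) @ W
    return min(cents, key=lambda k: np.linalg.norm(z - cents[k]))

# Two synthetic "descriptor" classes with separated means plus noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (10, 20)),
               rng.normal(3.0, 0.5, (10, 20))])
y = np.array([0] * 10 + [1] * 10)
mean, W, cents = fisherfaces_fit(X, y)
pred = fisherfaces_predict(np.full(20, 0.3), mean, W, cents)
```

Nearest-centroid classification in the discriminant subspace is what makes the method cheap at recognition time, which matters for a continuously recording commercial system.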
150

High-Speed Imaging of Polymer Induced Fiber Flocculation

Hartley, William H. 22 March 2007 (has links)
This study presents quantitative results on how fiber flocculation affects individual fiber length. Flocculation was induced by a cationic polyacrylamide (cPAM). A high-speed camera recorded 25-second video clips; the videos were image-analyzed, and the fiber length and the amount of fiber in each sample were measured. Prior to the flocculation process, fibers were fractionated into short and long fibers. Trials were conducted using the unfractionated, short, and long fiber, and the short and long fibers were mixed in several trials to study the effect of fiber length. The concentration of cPAM was varied, as was the motor speed of the impeller (RPM). It was found that the average fiber length decreased more rapidly with increasing motor speed, and that increasing the concentration of cPAM also led to a greater decrease in average fiber length. A key finding was that a plateau was reached beyond which further increasing the amount of cPAM had no effect: fibers below a critical length resisted flocculation even if the chemical dose or shear was increased. This critical length was related to the initial length of the fiber.
