171

Methods for Calibration, Registration, and Change Detection in Robot Mapping Applications

January 2016 (has links)
abstract: Multi-sensor fusion is a fundamental problem in robot perception. For a robot to operate in a real-world environment, multiple sensors are often needed, so accurately fusing data from those sensors is vital for robot perception. The first part of this thesis investigates the problem of fusing information from a LIDAR, a color camera, and a thermal camera to build RGB-Depth-Thermal (RGBDT) maps. An algorithm that solves a non-linear optimization problem to compute the relative pose between the cameras and the LIDAR is presented. The relative pose estimate is then used to find the color and thermal texture of each LIDAR point. Next, the various sources of error that can cause the mis-coloring of a LIDAR point after cross-calibration are identified. Theoretical analyses of these errors reveal that the coloring errors due to noisy LIDAR points, errors in the estimation of the camera matrix, and errors in the estimation of the translation between the sensors diminish with distance, but errors in the estimation of the rotation between the sensors cause the coloring error to grow with distance. On a robot (vehicle) with multiple sensors, sensor fusion algorithms allow the data to be represented in the vehicle frame. But data acquired over time in the vehicle frame must be registered in a global frame to obtain a map of the environment. Mapping techniques based on the Iterative Closest Point (ICP) algorithm and the Normal Distributions Transform (NDT) assume that a good initial estimate of the transformation between the 3D scans is available, which restricts the ability to stitch together maps acquired at different times. Mapping becomes more flexible if maps acquired at different times can be merged later. To this end, the second part of this thesis develops an automated algorithm that fuses two maps by finding a congruent set of five points forming a pyramid. Mapping also has application domains beyond robot navigation.
The third part of this thesis considers a unique application domain where the surface displacements caused by an earthquake are to be recovered using pre- and post-earthquake LIDAR data. A technique to recover the 3D surface displacements is developed and the results are presented on real earthquake datasets: the El Mayor-Cucapah earthquake (Mexico, 2010) and the Fukushima earthquake (Japan, 2011). / Dissertation/Thesis / Doctoral Dissertation Engineering Science 2016
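The distance behavior of the two calibration error sources can be sketched with a toy pinhole-projection model; the focal length, error magnitudes, and geometry below are invented for illustration and are not the thesis's values.

```python
import numpy as np

# Compare the pixel mis-coloring error caused by a small translation error
# versus a small rotation error in the camera-LIDAR extrinsics, as a function
# of the range d of the LIDAR point. All numbers are hypothetical.
f = 500.0  # assumed focal length in pixels

def pixel_error(d, t_err=0.0, rot_err_rad=0.0):
    p = np.array([0.0, 0.0, d])              # point straight ahead at range d
    p_t = p + np.array([t_err, 0.0, 0.0])    # translation-perturbed point
    c, s = np.cos(rot_err_rad), np.sin(rot_err_rad)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # small rotation about y
    p_r = R @ p                              # rotation-perturbed point
    u = lambda q: f * q[0] / q[2]            # pinhole projection (x pixel only)
    return abs(u(p_t) - u(p)), abs(u(p_r) - u(p))

near_t, near_r = pixel_error(2.0, t_err=0.05, rot_err_rad=np.deg2rad(1))
far_t, far_r = pixel_error(50.0, t_err=0.05, rot_err_rad=np.deg2rad(1))
# translation-induced pixel error shrinks with range; rotation-induced error
# stays constant here (f * tan(rot_err)), matching the analysis above
```

This mirrors the abstract's conclusion: a 5 cm translation error matters close up but washes out at range, while a 1° rotation error never does.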
172

Computer Vision from Spatial-Multiplexing Cameras at Low Measurement Rates

January 2017 (has links)
abstract: In settings such as UAV surveillance and parking-lot monitoring, it is typical to first collect an enormous number of pixels using conventional imagers, then employ expensive methods to compress the data by throwing away redundancy, and finally transmit the compressed data to a ground station. The past decade has seen the emergence of novel imagers called spatial-multiplexing cameras, which offer compression at the sensing level itself by providing arbitrary linear measurements of the scene instead of pixel-based sampling. In this dissertation, I discuss various approaches for effective information extraction from spatial-multiplexing measurements and present the trade-offs between reliability of performance and the computational/storage load of the system. In the first part, I present a reconstruction-free approach to high-level inference in computer vision, considering the specific case of activity analysis, and show that using correlation filters one can perform effective action recognition and localization directly from a class of spatial-multiplexing cameras, called compressive cameras, even at measurement rates as low as 1%. In the second part, I outline a deep-learning-based, non-iterative, real-time algorithm to reconstruct images from compressively sensed (CS) measurements, which can outperform traditional iterative CS reconstruction algorithms in both reconstruction quality and time complexity, especially at low measurement rates. To overcome the limitations of compressive cameras, which operate with random measurements not tuned to any particular task, in the third part of the dissertation I propose a method to design spatial-multiplexing measurements that are tuned to facilitate the easy extraction of features useful in computer vision tasks like object tracking.
The work presented in this dissertation provides sufficient evidence that high-level inference in computer vision is feasible at extremely low measurement rates, and hence invites a rethinking of present-day imaging and vision systems. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
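The measurement model behind spatial-multiplexing cameras can be sketched in a few lines; the scene, the matrix, and the minimum-norm recovery below are illustrative stand-ins, not the dissertation's reconstruction algorithm.

```python
import numpy as np

# A compressive camera records y = Phi @ x: m linear measurements of an
# n-pixel scene, with m << n (the "measurement rate" is m/n).
rng = np.random.default_rng(0)
n = 64                                           # flattened scene pixels
m = 16                                           # measurements -> 25% rate
x = rng.random(n)                                # hypothetical scene
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # what the camera records

# Minimum-norm estimate consistent with the measurements; a real CS solver
# would exploit sparsity (or, per the dissertation, a learned network).
x_hat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
```

The reconstruction-free approaches in the first part operate on `y` directly, skipping the `x_hat` step entirely.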
173

Perdas gasosas de nitrogênio em sistema de produção de arroz irrigado em várzea tropical / Gaseous nitrogen losses in a rice production system in a tropical lowland

Carvalho, Glaucilene Duarte 31 March 2015 (has links)
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG / The objective of this study was to characterize the dynamics of nitrous oxide fluxes and estimate the loss of nitrogen, in the form of nitrous oxide and ammonia, derived from nitrogen fertilization in irrigated rice cultivation in a tropical lowland. The experiment was conducted in the experimental area of Embrapa Rice and Beans, at Palmital Farm, in the municipality of Goianira, Goiás, Brazil. Soil N2O fluxes alternated between positive (emission) and negative (influx), ranging from -83.67 to 470.84 μg N-N2O m-2 h-1, -168.01 to 113.46 μg N-N2O m-2 h-1, and -103.54 to 290.08 μg N-N2O m-2 h-1 in the 2011/2012 season, the off-season, and the 2012/2013 season, respectively. N losses by ammonia volatilization from the use of nitrogen fertilizer totaled 210 and 203 mg N-NH3 m-2 in T1 and T2, respectively. In the off-season, losses averaged 65.08 mg N-NH3 m-2, and in 2012/2013 they totaled 218.25, 244.80, and 233.78 mg N-NH3 m-2 in T0, T1, and T2, respectively. The emission factors found for N-NH3 and N-N2O (max. EF = 0.3%) were below the IPCC default values.
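The emission factor the study compares against the IPCC default is simply the fertilizer-derived N loss expressed as a percentage of the N applied; the applied-N figure in this sketch is invented for illustration and is not the experiment's value.

```python
def emission_factor_percent(n_lost_mg_m2, n_applied_mg_m2):
    # EF (%) = 100 * (N lost as gas from the fertilized treatment) / (N applied)
    # A full accounting would subtract the unfertilized-control loss first.
    return 100.0 * n_lost_mg_m2 / n_applied_mg_m2

# 210 mg N-NH3 m-2 is the T1 loss reported above; the applied N is assumed.
ef = emission_factor_percent(n_lost_mg_m2=210.0, n_applied_mg_m2=90000.0)
```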
174

Biomechanics of the feeding process of broiler chicks / Biomecânica do processo de alimentação de pintos de corte

Neves, Diego Pereira, 1983- 25 August 2018 (has links)
Advisor: Irenilza de Alencar Nääs / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola / Abstract: Broiler chickens may exhibit different biomechanical motion patterns of the body parts in relation to the physical properties of feed (size, shape, and hardness) while feeding. Anatomical limitations related to age, gender, and breed may also affect the mechanics of feeding. To determine the significance of these parameters, measurements of the biomechanical motions of body parts are required. In particular, the trajectory, dimensions, and temporal effects of the chicken's beak and head movements should be considered. However, determining this information manually from video is tedious and error-prone for a human operator. This thesis assesses the impact of three different feed types on the feeding biomechanics of broiler chicks.
A total of 19 male broiler chicks were recorded while feeding at 3 and 4 days of age using a high-speed camera with an acquisition rate of 250 fps (frames per second). The feed types considered were fine mash (F1), coarse mash (F2), and crumbled (F3), whose geometric mean diameters (and geometric standard deviations) were 476 µm (2.54), 638 µm (2.56), and 1243 µm (2.43), respectively. The birds' weight and beak morphometrics (length and width) were measured after the recordings. The birds' head displacement during the mouthful and mandibulation phases and the maximum beak gape were measured through computational image analysis. The mouthful phase consisted of an uninterrupted head movement towards the feed, in an oblique or vertical direction, until the feed particle was grasped. The mandibulation phase consisted of one cycle of opening and closing of the beak, within which there is a maximum beak gape. These phases were manually classified as follows: mouthfuls as 'normal' or 'fail', and mandibulations as catch-and-throw (CT) or slide-and-glue (SG). A normal mouthful was one in which the bird successfully grasped the feed; a fail mouthful was one in which the bird missed it. Catch-and-throw is when the feed is repositioned within the beak tip before transport into the oral cavity begins. Slide-and-glue consists of the tongue sliding up to the beak tip to glue the feed particles with the aid of sticky saliva and carry them into the oral cavity. The results indicated significant but weak correlations between weight, beak morphometrics, and the biomechanical variables, as well as a correlation between maximum beak gape and head displacement. Head displacement was higher in a normal mouthful (0.439 mm ± 0.002) than in a fail mouthful (0.371 mm ± 0.005). Furthermore, head displacement was largest in F3 (0.526 mm ± 0.005), followed by F2 (0.519 mm ± 0.004) and F1 (0.431 mm ± 0.003).
Head displacement was also significantly higher for the CT technique (0.245 mm ± 0.001) than for SG (0.114 mm ± 0.000). Considering the different feed types, head displacement for CT was highest in F3, then F1 and F2, while for SG it was highest in F3, then F2 and F1. The maximum beak gape was also higher for CT (0.245 mm ± 0.001) than for SG (0.114 mm ± 0.000). Moreover, for CT it was higher in F3 and F1 than in F2, while for SG it was highest for F1, then F3 and F2. Thus, the different feed particle sizes (granulometry) were potentially the key factor in the chicks' motion while feeding, although the relation was not proportional to granulometry, as shown by the higher values for F3 and F1. The occurrence of fail mouthfuls was 18.0% for F3, 11.2% for F2, and 6.6% for F1. For the mandibulation classification, a higher frequency of CT was observed in F3 (26.1%), followed by F1 (24.9%) and F2 (17.9%). This suggests that the chicks grasped particles in the beak tip in a manner more suitable for swallowing with the 638 µm granulometry (F2) than with 476 µm (F1) or 1243 µm (F3), as reflected in less motion and less need to reposition the feed particles. Overall, the high-speed camera technology combined with computational image analysis adopted in this experiment was an effective method for motion analysis. A better understanding of the mechanical limitations of the birds' jaw apparatus while feeding is desirable in order to determine the relationship between different feed types and the biomechanical patterns displayed by the birds / Doutorado / Construções Rurais e Ambiencia / Doutor em Engenharia Agrícola
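The per-frame displacement measurement behind figures like "0.439 mm ± 0.002" can be sketched from tracked 2D head coordinates; the pixel coordinates and the mm-per-pixel calibration below are invented, not the thesis's calibration.

```python
import math

FPS = 250                      # acquisition rate reported in the abstract
MM_PER_PX = 0.05               # assumed spatial calibration, mm per pixel

def displacement_mm(p0, p1):
    # Euclidean distance between tracked head positions in two frames,
    # converted from pixels to millimetres via the assumed calibration
    return MM_PER_PX * math.dist(p0, p1)

d = displacement_mm((120.0, 80.0), (123.0, 84.0))   # 5 px apart -> 0.25 mm
frame_interval_s = 1.0 / FPS                         # 4 ms between frames
```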
175

Examining Variation in Police Discretion: The Impact of Context and Body-Worn Cameras on Officer Behavior

January 2020 (has links)
abstract: Discretion is central to policing. The way officers use their discretion is influenced by situational, officer, and neighborhood-level factors. Concerns that discretion could be used differentially across neighborhoods have resulted in calls for increased police transparency and accountability. Body-worn cameras (BWCs) have been promoted to further these goals through increasing oversight of police-citizen encounters. The implication is that BWCs will increase officer self-awareness and result in more equitable outcomes. Prior researchers have largely evaluated the direct impact of BWCs. Researchers have yet to examine the potential for BWCs to moderate the influence of neighborhood context in individual incidents. To address this gap, I use Phoenix Police Department data collected as part of a three-year randomized-controlled trial of BWCs to examine variation in police discretion. These data include over 1.5 million police-citizen contacts nested within 826 officers and 388 neighborhoods. I examine two research questions. First, how do proactivity, arrests, and use of force vary depending on situational, officer, and neighborhood contexts? This provides a baseline for my next research question. Second, examining the same contexts and outcomes, do BWCs moderate the influence of neighborhood factors on police behavior? As such, I examine the untested, though heavily promoted, argument that BWCs will reduce the influence of extralegal factors on officer behavior. Using cross-classified logistic regression models, I found that situational, officer, and neighborhood factors all influenced proactivity, arrest, and use of force. BWCs were associated with a lower likelihood of proactivity, but an increased likelihood of arrest and use of force. Officers were more proactive and were more likely to conduct arrests in immigrant and Hispanic neighborhoods. 
The moderating effects suggest that officers were even more likely to proactively initiate contacts and conduct arrests in immigrant and Hispanic neighborhoods when BWCs were activated. However, after BWCs were deployed, use of force was significantly less likely to occur in black neighborhoods. Given that high-profile police use of force incidents involving black suspects are often cited as a major impetus for the adoption of BWCs in American police agencies, this finding is a key contribution to the literature. / Dissertation/Thesis / Doctoral Dissertation Criminology and Criminal Justice 2020
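Moderation of this kind corresponds to an interaction term in a logistic model: the BWC effect is allowed to differ by neighborhood context. The coefficients in this sketch are invented for illustration and are not the study's estimates.

```python
import math

# Hypothetical log-odds coefficients: intercept, BWC main effect, Hispanic-
# neighborhood main effect, and the BWC x neighborhood interaction.
b0, b_bwc, b_hisp, b_interact = -2.0, -0.10, 0.30, 0.25

def p_proactive(bwc, hispanic_nbhd):
    # probability of a proactive contact under a logistic link
    logit = b0 + b_bwc * bwc + b_hisp * hispanic_nbhd \
            + b_interact * bwc * hispanic_nbhd
    return 1.0 / (1.0 + math.exp(-logit))

# With a positive interaction, turning the BWC on shifts the outcome more in
# Hispanic neighborhoods than elsewhere -- the moderation pattern described.
effect_hisp = p_proactive(1, 1) - p_proactive(0, 1)
effect_other = p_proactive(1, 0) - p_proactive(0, 0)
```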
176

Bobcat Abundance and Habitat Selection on the Utah Test and Training Range

Muncey, Kyle David 01 December 2018 (has links)
Remote cameras have become a popular tool for monitoring wildlife. We used remote cameras to estimate bobcat (Lynx rufus) population abundance on the Utah Test and Training Range during two sample periods between 2015 and 2017. We used two statistical methods, closed capture-mark-recapture (CMR) and mark-resight Poisson log-normal (PNE), to estimate bobcat abundance within the study area. We used the mean maximum distance moved (MMDM) method to calculate the effective sample area for estimating density. Additionally, we captured bobcats and estimated home range using minimum convex polygon (MCP) and kernel density estimation (KDE) methods. Bobcat abundance on the UTTR was 35-48 in 2017, and density was 11.95 bobcats/100 km2 using CMR and 16.69 bobcats/100 km2 using PNE. The North Range of the study area experienced a decline of 36-44 percent in density between sample periods. Density declines could be explained by natural predator-prey cycles, by habituation to attractants, or by an increase in home range area. We recommend that bobcat abundance and density be estimated regularly to establish population trends. To improve the management of bobcats on the Utah Test and Training Range (UTTR), we investigated bobcat habitat use. We determined habitat-use points by capturing bobcats in remote camera images. Use and random points were intersected with remotely sensed data in a geographic information system. Habitat variables were evaluated at the capture-point scale and the home-range scale, with home range size calculated using the MMDM method. Scales and habitat variables were compared within generalized linear mixed-effects models. Our top model (AICc weight = 1) included a measure of terrain ruggedness, mean aspect, and land cover variables related to prey availability and human avoidance.
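A closed-population CMR abundance estimate of the kind reported here can be sketched with the classic Lincoln-Petersen estimator; the capture counts and the MMDM-buffered effective area below are invented, not the study's data.

```python
def lincoln_petersen(n1, n2, m2):
    # N_hat = (n1 * n2) / m2, where n1 animals are marked in the first sample,
    # n2 are caught in the second, and m2 of those are recaptures
    return n1 * n2 / m2

n_hat = lincoln_petersen(n1=20, n2=18, m2=9)   # hypothetical counts -> 40

# Density follows by dividing by the effective sample area (here the camera
# grid buffered by MMDM); the area value is assumed for illustration.
effective_area_km2 = 335.0
density_per_100km2 = 100.0 * n_hat / effective_area_km2
```

In practice the study used model-based CMR and PNE estimators rather than this closed-form version, but the abundance-to-density step is the same.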
177

Automated Discovery of Real-Time Network Camera Data from Heterogeneous Web Pages

Ryan Merrill Dailey (8086355) 14 January 2021 (has links)
<div>Reduction in the cost of Network Cameras along with a rise in connectivity enables entities all around the world to deploy vast arrays of camera networks. Network cameras offer real-time visual data that can be used for studying traffic patterns, emergency response, security, and other applications. Although many sources of Network Camera data are available, collecting the data remains difficult due to variations in programming interface and website structures. Previous solutions rely on manually parsing the target website, taking many hours to complete. We create a general and automated solution for indexing Network Camera data spread across thousands of uniquely structured webpages. We analyze heterogeneous webpage structures and identify common characteristics among 73 sample Network Camera websites (each website has multiple web pages). These characteristics are then used to build an automated camera discovery module that crawls and indexes Network Camera data. Our system successfully extracts 57,364 Network Cameras from 237,257 unique web pages. </div>
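One ingredient of such a discovery module, pulling candidate camera stream URLs out of heterogeneous HTML, can be sketched with a simple pattern match; the sample page and the .jpg/.mjpg heuristic are illustrative and are not the paper's actual parser.

```python
import re

# A toy page mixing camera image links with ordinary navigation links.
SAMPLE_HTML = '''
<img src="http://cam.example.org/live/feed1.jpg">
<a href="/about.html">About</a>
<img src="http://cam.example.org/live/feed2.mjpg">
'''

def find_camera_urls(html):
    # Heuristic: absolute img sources ending in .jpg or .mjpg are candidate
    # camera snapshots or MJPEG streams; a real crawler would verify each one.
    return re.findall(r'src="(http[^"]+\.(?:jpg|mjpg))"', html)

urls = find_camera_urls(SAMPLE_HTML)   # -> the two candidate stream URLs
```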
178

Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

Kong, Lingchao 01 October 2019 (has links)
No description available.
179

Système de caméras intelligentes pour l’étude en temps-réel de personnes en mouvement / Smart Camera System for Kinetic Behavior Study in Real-time.

Burbano, Andres 06 June 2018 (has links)
We propose a system for detecting and tracking people moving through large spaces. Our solution is based on a network of smart cameras that extract spatio-temporal information about the observed people. Each camera combines a 3D sensor, an embedded processing system, and a communication and power-supply system. We showed the efficacy of placing the 3D sensors in an overhead (zenithal) position to reduce occlusions and scale variation. Processing runs in real time (~20 fps), detecting fast movements with accuracy of up to 99% and supporting parametric filtering of unwanted targets such as children or shopping carts. Finally, we studied the use of space and performed a global analysis of the trajectories recovered by our system and others able to track people in large, complex spaces, and assessed the technological viability of the results for large spaces, making the solution suitable for industrialization.
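The parametric filtering of unwanted targets can be sketched as a simple height threshold, since an overhead 3D sensor makes target height directly observable; the detections and the 1.30 m cutoff are invented for illustration.

```python
# Hypothetical overhead-sensor detections with their estimated heights.
detections = [
    {"id": 1, "height_m": 1.75},   # adult
    {"id": 2, "height_m": 1.10},   # child
    {"id": 3, "height_m": 0.95},   # shopping cart
]

def filter_targets(dets, min_height_m=1.30):
    # keep only detections tall enough to be the targets of interest
    return [d for d in dets if d["height_m"] >= min_height_m]

kept = filter_targets(detections)   # only the adult remains
```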
180

Software Systems for Large-Scale Retrospective Video Analytics

Tiantu Xu (10706787) 29 April 2021 (has links)
<p>Pervasive cameras are generating videos at an unprecedented pace, making videos the new frontier of big data. As the processors, e.g., CPU/GPU, become increasingly powerful, the cloud and edge nodes can generate useful insights from colossal video data. However, as the research in computer vision (CV) develops vigorously, the system area has been a blind spot in CV research. With colossal video data generated from cameras every day and limited compute resource budgets, how to design software systems to generate insights from video data efficiently?</p><p><br></p><p>Designing cost-efficient video analytics software systems is challenged by the expensive computation of vision operators, the colossal data volume, and the precious wireless bandwidth of surveillance cameras. To address above challenges, three software systems are proposed in this thesis. For the first system, we present VStore, a data store that supports fast, resource-efficient analytics over large archival videos. VStore manages video ingestion, storage, retrieval, and consumption and controls video formats through backward derivation of configuration: in the opposite direction along the video data path, VStore passes the video quantity and quality expected by analytics backward to retrieval, to storage, and to ingestion. VStore derives an optimal set of video formats, optimizes for different resources in a progressive manner, and runs queries as fast as 362x of video realtime. For the second system, we present a camera/cloud runtime called DIVA that supports querying cold videos distributed on low-cost wireless cameras. DIVA is built upon a novel zero-streaming paradigm: to save wireless bandwidth, when capturing video frames, a camera builds sparse yet accurate landmark frames without uploading any video data; when executing a query, a camera processes frames in multiple passes with increasingly more expensive operators. 
On diverse queries over 15 videos, DIVA runs at more than 100x realtime and outperforms competitive alternatives remarkably. For the third system, we present Clique, a practical object re-identification (ReID) engine that builds upon two unconventional techniques. First, Clique assesses target occurrences by clustering unreliable object features extracted by ReID algorithms, with each cluster representing the general impression of a distinct object to be matched against the input. Second, to search across camera videos, Clique samples cameras to maximize spatiotemporal coverage and incrementally adds cameras for processing on demand. In an evaluation on 25 hours of traffic video from 25 cameras, Clique reaches a recall@5 of 0.87 across 70 queries and runs at 830x video realtime while achieving high accuracy.</p>
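A recall-at-5 figure like the one reported for Clique is the fraction of queries whose true match appears among the top 5 ranked results; the ranked lists below are invented for illustration.

```python
def recall_at_k(results_per_query, k=5):
    # each item is (ranked candidate list, ground-truth identity)
    hits = sum(1 for ranked, truth in results_per_query if truth in ranked[:k])
    return hits / len(results_per_query)

queries = [
    (["a", "b", "c", "d", "e", "f"], "c"),   # hit: truth ranked 3rd
    (["x", "y", "z", "q", "r", "s"], "s"),   # miss: truth ranked 6th
]
r5 = recall_at_k(queries)   # -> 0.5
```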
