  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Photographic zoom fisheye lens design for DSLR cameras

Yan, Yufeng, Sasian, Jose 27 September 2017 (has links)
Photographic fisheye lenses with fixed focal length for cameras with different sensor formats have been well developed for decades. However, photographic fisheye lenses with variable focal length are rare on the market due in part to the greater design difficulty. This paper presents a large aperture zoom fisheye lens for DSLR cameras that produces both circular and diagonal fisheye imaging for 35-mm sensors and diagonal fisheye imaging for APS-C sensors. The history and optical characteristics of fisheye lenses are briefly reviewed. Then, a 9.2- to 16.1-mm F/2.8 to F/3.5 zoom fisheye lens design is presented, including the design approach and aberration control. Image quality and tolerance performance analysis for this lens are also presented. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
22

Automatic juxtaposition of source files

Davis, Samuel 11 1900 (has links)
Previous research has found that programmers spend a significant fraction of their time navigating between different source code locations and that much of that time is spent returning to previously viewed code. Other work has identified the ability to juxtapose arbitrary pieces of code as cognitively important. However, modern IDEs have inherited a user interface design in which, usually, only one source file is displayed at a time, with the result that users must switch back and forth from one file to another. Taking advantage of the increasing availability of large displays, we propose a new interaction paradigm in which an IDE presents parts of multiple source files side by side, using the Mylyn degree-of-interest function to dynamically allocate screen space on the basis of interest to the current development task. We demonstrate the feasibility of this paradigm with a prototype implementation built on the Eclipse IDE and note that it was used by the author over a period of months in the development of the prototype itself. Additionally, we present two case studies which quantify the potential reduction in navigation and demonstrate the simplicity of the approach and its ability to capture complete concerns on screen. These case studies suggest that the approach has the potential to reduce the time that programmers spend navigating by as much as 50%. / Science, Faculty of / Computer Science, Department of / Graduate
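The screen-space allocation driven by degree-of-interest can be sketched as follows; this is a minimal illustration assuming a simple proportional policy, not Mylyn's actual DOI formula or the prototype's code:

```python
def allocate_screen_lines(doi, total_lines):
    """Split the available editor lines among code fragments
    proportionally to their degree-of-interest scores.

    doi: dict mapping fragment id -> non-negative interest score.
    Returns a dict mapping fragment id -> number of lines to show."""
    total = sum(doi.values())
    if total == 0:
        # Nothing is interesting yet: share the space evenly.
        n = len(doi)
        return {frag: total_lines // n for frag in doi}
    # Each visible fragment gets at least one line so it stays on screen.
    return {frag: max(1, int(total_lines * score / total))
            for frag, score in doi.items()}
```

As interest scores decay for untouched fragments and grow for edited ones, re-running the allocation reclaims space for the current task.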
23

Using active learning for semi-automatically labeling a dataset of fisheye distorted images for object detection

Bourghardt, Olof January 2022 (has links)
Self-driving vehicles have become a hot topic in industry in recent years, and companies all around the globe are attempting to solve the complex task of developing vehicles that can safely navigate roads and traffic without the assistance of a driver. As deep learning and computer vision become more streamlined, and with the possibility of using fisheye cameras as a cheap alternative to external sensors, some companies have begun researching assisted driving for vehicles such as electric scooters, aiming to prevent injuries and accidents by detecting dangerous situations and to promote a sustainable infrastructure. However, training such a model requires gathering large amounts of data, which must be labeled by a human annotator. This process is expensive, time-consuming, and requires extensive quality checking, which can be difficult for companies to afford. This thesis presents an application for semi-automatically labeling a dataset with the help of a human annotator and an object detector. The application trains an object detector within an active learning framework on a small amount of labeled data sampled from the WoodScape dataset of fisheye-distorted images, then uses the knowledge of the trained model, with a human annotator as assistance, to label more data. This thesis examines the labels produced with the application and compares their quality with that of the annotations in the WoodScape dataset. Results show that the model could not produce quality annotations compared to the WoodScape dataset: the human annotator had to label all of the data, and the model achieved an accuracy of 0.00099 mAP.
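The labeling loop described here follows the standard uncertainty-sampling pattern of active learning. The sketch below is illustrative only; the function names and the confidence-based query rule are assumptions, not the thesis implementation:

```python
def uncertainty_sample(confidence, k):
    """Return the ids of the k pool examples with the lowest model confidence."""
    return sorted(confidence, key=confidence.get)[:k]

def active_learning_round(labeled, pool, train, predict, annotate, k=10):
    """One iteration of the loop: train on the labeled set, score the
    unlabeled pool, and route the k most uncertain examples to the
    human annotator for ground-truth labels."""
    model = train(labeled)
    confidence = {i: predict(model, i) for i in pool}
    for i in uncertainty_sample(confidence, k):
        labeled[i] = annotate(i)   # human provides the label
        pool.discard(i)
    return model
```

Repeating this round lets the detector's confident predictions stand while human effort concentrates on the examples the model finds hardest.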
24

Supporting Spatial Collaboration: An Investigation of Viewpoint Constraint and Awareness Techniques

Schafer, Wendy A. 28 April 2004 (has links)
Spatial collaboration refers to collaboration activities involving physical space. It occurs every day as people work together to solve spatial problems, such as rearranging furniture or communicating about an environmental issue. In this work, we investigate how to support spatial collaboration when the collaborators are not co-located. We propose using shared, interactive representations of the space to support distributed, spatial collaboration. Our study examines viewpoint constraint techniques, which determine how the collaborators individually view the representation, and awareness techniques, which enable the collaborators to maintain an understanding of each other's work efforts. Our work consists of four phases, in which we explore a design space for interactive representations and examine the effects of different viewpoint constraint and awareness techniques. We consider situations where the collaborators use the same viewpoints, different viewpoints, and have a choice in viewpoint constraint techniques. In phase 1, we examine current technological support for spatial collaboration and design two early prototypes. Phase 2 compares various two-dimensional map techniques, with the collaborators using identical techniques. Phase 3 focuses on three-dimensional virtual environment techniques, comparing similar and different frames of reference. The final phase reuses the favorable techniques from the previous studies and presents a novel prototype that combines both two-dimensional and three-dimensional representations. Each phase of this research is limited to synchronous communication activities and non-professional users working together on everyday tasks. Our findings highlight the advantages and disadvantages of the different techniques for spatial collaboration solutions. Also, having conducted multiple evaluations of spatial collaboration prototypes, we offer a common set of lessons with respect to distributed, spatial collaboration activities.
This research also highlights the need for continued study to improve on the techniques evaluated and to consider additional spatial collaboration activities. / Ph. D.
25

Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses /

Castanheiro, Letícia Ferrari January 2020 (has links)
Orientador: Antonio Maria Garcia Tommaselli / Resumo: A combinação de duas lentes com FOV hiper-hemisférico em posições opostas pode gerar um sistema omnidirecional (FOV 360°) leve, compacto e de baixo custo, como Ricoh Theta S e GoPro Fusion. Entretanto, apenas algumas técnicas e modelos matemáticos para a calibração de um sistema com duas lentes hiper-hemisféricas são apresentadas na literatura. Nesta pesquisa, é avaliado e definido um modelo geométrico para calibração de sistemas omnidirecionais compostos por duas lentes hiper-hemisféricas e apresenta-se algumas aplicações com esse tipo de sistema. A calibração das câmaras foi realizada no programa CMC (calibração de múltiplas câmeras) utilizando imagens obtidas a partir de vídeos feitos com a câmara Ricoh Theta S no campo de calibração 360°. A câmara Ricoh Theta S é composta por duas lentes hiper-hemisféricas fisheye que cobrem 190° cada uma. Com o objetivo de avaliar as melhorias na utilização de pontos em comum entre as imagens, dois conjuntos de dados de pontos foram considerados: (1) apenas pontos no campo hemisférico, e (2) pontos em todo o campo de imagem (isto é, adicionar pontos no campo de imagem hiper-hemisférica). Primeiramente, os modelos ângulo equisólido, equidistante, estereográfico e ortogonal combinados com o modelo de distorção Conrady-Brown foram testados para a calibração de um sensor da câmara Ricoh Theta S. Os modelos de ângulo-equisólido e estereográfico apresentaram resultados melhores do que os outros modelos. Portanto, esses dois modelos de projeção for... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can yield a lightweight, compact and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and GoPro Fusion. However, only a few techniques to calibrate a dual-fisheye system are presented in the literature.
In this research, a geometric model for dual-fisheye system calibration was evaluated, and some applications with this type of system are presented. The calibrating bundle adjustment was performed in CMC (calibration of multiple cameras) software by using the Ricoh Theta video frames of the 360° calibration field. The Ricoh Theta S system is composed of two hyper-hemispherical fisheye lenses with 190° FOV each. In order to evaluate the improvement of applying points in the hyper-hemispherical image field, two data sets of points were considered: (1) observations only in the hemispherical field, and (2) points in the whole image field, i.e. adding points in the hyper-hemispherical image field. First, one sensor of the Ricoh Theta S system was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal models combined with the Conrady-Brown distortion model. Results showed that the equisolid-angle and stereographic models provide better solutions than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration, in which both Ricoh Theta sensors were considered i... (Complete abstract click electronic access below) / Mestre
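For reference, the four projection models compared in this calibration relate the radial image distance $r$ to the incidence angle $\theta$ of the incoming ray and the focal length $f$. These are the standard textbook forms, onto which the Conrady-Brown distortion terms are then added:

```latex
\begin{align*}
  r &= f\,\theta                  && \text{(equidistant)}\\
  r &= 2f\,\sin(\theta/2)         && \text{(equisolid-angle)}\\
  r &= 2f\,\tan(\theta/2)         && \text{(stereographic)}\\
  r &= f\,\sin\theta              && \text{(orthogonal)}
\end{align*}
```

Note that only the equidistant, equisolid-angle and stereographic models remain finite as $\theta$ approaches and exceeds 90°, which matters for hyper-hemispherical (190° FOV) lenses.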
26

Vision-based multi-sensor people detection system for heavy machines / Étude d'un système de détection multi-capteurs pour la détection de risques de collision : applications aux manoeuvres d'engins de chantier

Bui, Manh-Tuan 27 November 2014 (has links)
Ce travail de thèse a été réalisé dans le cadre de la coopération entre l’Université de Technologie de Compiègne (UTC) et le Centre Technique des Industries Mécaniques (CETIM). Nous présentons un système de détection de personnes pour l’aide à la conduite dans les engins de chantier. Une partie du travail a été dédiée à l’analyse du contexte de l’application, ce qui a permis de proposer un système de perception composé d’une caméra monoculaire fisheye et d’un Lidar. L’utilisation des caméras fisheye donne l’avantage d’un champ de vision très large avec en contrepartie, la nécessité de gérer les fortes distorsions dans l’étape de détection. À notre connaissance, il n’y a pas eu de recherches dédiées au problème de la détection de personnes dans les images fisheye. Pour cette raison, nous nous sommes concentrés sur l’étude et la quantification de l’impact des distorsions radiales sur l’apparence des personnes dans les images et nous avons proposé des approches adaptatives pour gérer ces spécificités. Nos propositions se sont inspirées de deux approches de l’état de l’art pour la détection des personnes : les histogrammes de gradient orientés (HOG) et le modèle des parties déformables (DPM). Tout d’abord, en enrichissant la base d’apprentissage avec des imagettes fisheye artificielles, nous avons pu montrer que les classificateurs peuvent prendre en compte les distorsions dans la phase d’apprentissage. Cependant, adapter les échantillons d’entrée, n’est pas la solution optimale pour traiter le problème de déformation de l’apparence des personnes dans les images. Nous avons alors décidé d’adapter l’approche de DPM pour prendre explicitement en compte le modèle de distorsions. Il est apparu que les modèles déformables peuvent être modifiés pour s’adapter aux fortes distorsions des images fisheye, mais ceci avec un coût calculatoire supérieur. Dans cette thèse, nous présentons également une approche de fusion Lidar/caméra fisheye. 
Une architecture de fusion séquentielle est utilisée et permet de réduire les fausses détections et le coût calculatoire de manière importante. Un jeu de données en environnement de chantier a été construit et différentes expériences ont été réalisées pour évaluer les performances du système. Les résultats sont prometteurs, à la fois en termes de vitesse de traitement et de performance de détection. / This thesis has been carried out in the framework of the cooperation between the Compiègne University of Technology (UTC) and the Technical Centre for Mechanical Industries (CETIM). In this work, we present a vision-based multi-sensor people detection system for safety on heavy machines. A perception system composed of a monocular fisheye camera and a Lidar is proposed. The use of fisheye cameras provides the advantage of a wide field of view but yields the problem of handling the strong distortions in the detection stage. To the best of our knowledge, no research works have been dedicated to people detection in fisheye images. For that reason, we focus on investigating and quantifying the impact of strong radial distortions on people's appearance and on proposing adaptive approaches to handle that specificity. Our propositions are inspired by two state-of-the-art people detection approaches: the Histogram of Oriented Gradients (HOG) and the Deformable Parts Model (DPM). First, by enriching the training data set, we prove that the classifier can take the distortions into account. However, fitting the training samples to the model is not the best solution to handle the deformation of people's appearance. We then decided to adapt the DPM approach to handle the problem properly. It turned out that the deformable models can be modified to be even better adapted to the strong distortions of the fisheye images. Still, such an approach has the drawback of high computational cost and complexity. 
In this thesis, we also present a framework that allows the fusion of the Lidar modality to enhance the vision-based people detection algorithm. A sequential Lidar-based fusion architecture is used, which directly addresses the problem of reducing false detections and computation cost in a vision-only system. A heavy-machine dataset has also been built, and different experiments have been carried out to evaluate the performance of the system. The results are promising, both in terms of processing speed and detection performance.
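The data-enrichment step, generating artificial fisheye training patches, amounts to a radial remapping of ideal pinhole image coordinates. A minimal sketch under an equidistant-projection assumption (one common fisheye model; the thesis does not specify its exact warp):

```python
import numpy as np

def fisheye_warp_coords(x, y, f):
    """Map ideal pinhole image coordinates (x, y) to equidistant-fisheye
    coordinates. Illustrative only: the equidistant model r' = f * theta
    is one common choice, not necessarily the warp used in the thesis."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.hypot(x, y)               # radial distance from the image center
    theta = np.arctan2(r, f)         # incidence angle of the viewing ray
    scale = np.ones_like(r)
    nz = r > 0
    scale[nz] = (f * theta[nz]) / r[nz]   # compression grows toward the edge
    return x * scale, y * scale
```

Resampling upright pedestrian patches through such a warp (at varying image positions) yields the curved, compressed silhouettes the classifier must learn to tolerate.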
27

Road scene perception based on fisheye camera, LIDAR and GPS data combination / Perception de la route par combinaison des données caméra fisheye, Lidar et GPS

Fang, Yong 24 September 2015 (has links)
La perception de scènes routières est un domaine de recherche très actif. Cette thèse se focalise sur la détection et le suivi d’objets par fusion de données d’un système multi-capteurs composé d’un télémètre laser, une caméra fisheye et un système de positionnement global (GPS). Plusieurs étapes de la chaîne de perception sont étudiées : le calibrage extrinsèque du couple caméra fisheye / télémètre laser, la détection de la route et enfin la détection et le suivi d’obstacles sur la route. Afin de traiter les informations géométriques du télémètre laser et de la caméra fisheye dans un repère commun, une nouvelle approche de calibrage extrinsèque entre les deux capteurs est proposée. La caméra fisheye est d’abord calibrée intrinsèquement. Pour cela, trois modèles de la littérature sont étudiés et comparés. Ensuite, pour le calibrage extrinsèque entre les capteurs, la normale au plan du télémètre laser est estimée par une approche de RANSAC couplée à une régression linéaire à partir de points connus dans le repère des deux capteurs. Enfin une méthode des moindres carrés basée sur des contraintes géométriques entre les points connus, la normale au plan et les données du télémètre laser permet de calculer les paramètres extrinsèques. La méthode proposée est testée et évaluée en simulation et sur des données réelles. On s’intéresse ensuite à la détection de la route à partir des données issues de la caméra fisheye et du télémètre laser. La détection de la route est initialisée à partir du calcul de l’image invariante aux conditions d’illumination basée sur l’espace log-chromatique. Un seuillage sur l’histogramme normalisé est appliqué pour classifier les pixels de la route. Ensuite, la cohérence de la détection de la route est vérifiée en utilisant les mesures du télémètre laser. La segmentation de la route est enfin affinée en exploitant deux détections de la route successives. 
Pour cela, une carte de distance est calculée dans l’espace couleur HSI (Hue, Saturation, Intensity). La méthode est expérimentée sur des données réelles. Une méthode de détection d’obstacles basée sur les données de la caméra fisheye, du télémètre laser, d’un GPS et d’une cartographie routière est ensuite proposée. On s’intéresse notamment aux objets mobiles apparaissant flous dans l’image fisheye. Les régions d’intérêt de l’image sont extraites à partir de la méthode de détection de la route proposée précédemment. Puis, la détection dans l’image du marquage de la ligne centrale de la route est mise en correspondance avec un modèle de route reconstruit à partir des données GPS et cartographiques. Pour cela, la transformation IPM (Inverse Perspective Mapping) est appliquée à l’image. Les régions contenant potentiellement des obstacles sont alors extraites puis confirmées à l’aide du télémètre laser. L’approche est testée sur des données réelles et comparée à deux méthodes de la littérature. Enfin, la dernière problématique étudiée est le suivi temporel des obstacles détectés à l’aide de l’utilisation conjointe des données de la caméra fisheye et du télémètre laser. Pour cela, les résultats de détection d’obstacles précédemment obtenus sont exploités ainsi qu’une approche de croissance de région. La méthode proposée est également testée sur des données réelles. / Road scene understanding is one of the key research topics of intelligent vehicles. This thesis focuses on detection and tracking of obstacles by multi-sensor data fusion and analysis. The considered system is composed of a lidar, a fisheye camera and a global positioning system (GPS). Several steps of the perception scheme are studied: extrinsic calibration between fisheye camera and lidar, road detection, and obstacle detection and tracking. Firstly, a new method for extrinsic calibration between fisheye camera and lidar is proposed. 
For intrinsic modeling of the fisheye camera, three models of the literature are studied and compared. For extrinsic calibration between the two sensors, the normal to the lidar plane is first estimated based on the determination of "known" points. The extrinsic parameters are then computed using a least-squares approach based on geometrical constraints, the lidar plane normal and the lidar measurements. The second part of this thesis is dedicated to road detection exploiting both fisheye camera and lidar data. The road is first coarsely detected using the illumination-invariant image. Then the normalised-histogram-based classification is validated using the lidar data. The road segmentation is finally refined exploiting two successive road detection results and a distance map computed in HSI color space. The third step focuses on obstacle detection, especially in case of motion blur. The proposed method combines previously detected road, map, GPS and lidar information. Regions of interest are extracted from the previous road detection. Then road central lines are extracted from the image and matched with a road shape model extracted from a 2D SIG map. Lidar measurements are used to validate the results. The final step is object tracking, still using the fisheye camera and lidar. The proposed method is based on previously detected obstacles and a region-growing approach. All the methods proposed in this thesis are tested, evaluated and compared to state-of-the-art approaches using real data acquired with the IRTES-SET laboratory experimental platform.
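The lidar-plane normal estimation can be sketched with a minimal RANSAC plane fit. This is an illustrative numpy sketch of the generic technique, without the linear-regression refinement and geometric constraints used in the thesis:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, tol=0.01, rng=None):
    """Estimate the unit normal of the dominant plane in an (N, 3) point set.
    Minimal RANSAC: sample 3 points, form their plane, count inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_normal, best_inliers = None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, skip it
        n = n / norm
        d = -n @ p0
        inliers = np.sum(np.abs(points @ n + d) < tol)
        if inliers > best_inliers:
            best_inliers, best_normal = inliers, n
    return best_normal
```

In the calibration setting, the consensus plane would come from lidar returns on the calibration target, and the surviving inliers would then feed the least-squares stage.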
28

The enigma of imaging in the Maxwell fisheye medium

Sahebdivan, Sahar January 2016 (has links)
The resolution of optical instruments is normally limited by the wave nature of light. Circumventing this limit, known as the diffraction limit of imaging, is of tremendous practical importance for modern science and technology. One method, super-resolved fluorescence microscopy, was recognized with the Nobel Prize in Chemistry in 2014, but there is plenty of room for alternatives and complementary methods, such as the pioneering work of Prof. J. Pendry on the perfect lens based on negative refraction, which started the entire research area of metamaterials. In this thesis, we have used analytical techniques to solve several important challenges that have arisen in the discussion of the microwave experimental demonstration of absolute optical instruments and the controversy surrounding perfect imaging. Attempts to overcome or circumvent Abbe's diffraction limit of optical imaging have traditionally been greeted with controversy. In this thesis, we have investigated the role of interacting sources and detectors in perfect imaging. We have established limitations and prospects that arise from interactions and resonances inside the lens. The crucial role of detection becomes clear in Feynman's argument against the diffraction limit: “as Maxwell's electromagnetism is invariant upon time reversal, the electromagnetic wave emitted from a point source may be reversed and focused into a point with point-like precision, not limited by diffraction.” However, for this, the entire emission process must be reversed, including the source: a point drain must sit at the focal position, in place of the point source; otherwise, without getting absorbed at the detector, the focused wave will rebound, and the superposition of the focusing and the rebounding wave will produce a diffraction-limited spot. The time-reversed source, the drain, is the detector that takes the image of the source. 
In 2011-2012, experiments with microwaves confirmed the role of detection in perfect focusing. The emitted radiation was actively time-reversed and focused back at the point of emission, where the time-reverse of the source sits. Absorption in the drain localizes the radiation with a precision much better than the diffraction limit. Absolute optical instruments may perform the time reversal of the field with perfectly passive materials and send the reversed wave to a different spatial position than the source. Perfect imaging with absolute optical instruments is hampered by a restriction: so far it has only worked for a single–source single–drain configuration and near the resonance frequencies of the device. In chapters 6 and 7 of the thesis, we have investigated the imaging properties of mutually interacting detectors. We found that an array of detectors can image a point source with arbitrary precision. However, for this, the radiation has to be at resonance. Our analysis has become possible thanks to a theoretical model for mutually interacting sources and drains that we developed after considerable work and several failed attempts. Modelling such sources and drains analytically had been a major unsolved problem, and full numerical simulations have been difficult due to the large difference in the scales involved (the field localization near the sources and drains versus the wave propagation in the device). In our opinion, nobody was able to reliably reproduce the experiments because of the numerical complexity involved. Our analytic theory draws from a simple, 1-dimensional model we developed in collaboration with Tomas Tyc (Masaryk University) and Alex Kogan (Weizmann Institute). This model was the first to explain the experimental data, the characteristic dips in the transmission of displaced drains, which establish the grounds for realistic super-resolution of absolute optical instruments. 
As the next step, in Chapter 7 we developed a Lagrangian theory that agrees with the simple and successful 1-dimensional model. Inspired by the Lagrangian of the electromagnetic field interacting with a current, we have constructed a Lagrangian that has the advantage of being extendable to higher dimensions, in our case two, where imaging takes place. Our Lagrangian theory represents a device-independent, idealized model independent of numerical simulations. To conclude, Feynman objected to Abbe's diffraction limit, arguing that as Maxwell's electromagnetism is time-reversal invariant, the radiation from a point source may very well become focused in a point drain. Absolute optical instruments such as the Maxwell Fisheye can perform the time reversal and may image with perfect resolution. However, the sources and drains in previous experiments were interacting with each other, as if Feynman's drain would act back on the source in the past. Different ways of detection might circumvent this feature. The mutual interaction of sources and drains does ruin some of the promising features of perfect imaging. Arrays of sources are not necessarily resolved with arrays of detectors, but it also opens interesting new prospects in scanning near-fields from far-field distances. To summarise the novel ideas of the thesis:
• We have discovered and understood the problems with the initial experimental demonstration of the Maxwell Fisheye.
• We have solved the long-standing challenge of modelling mutually interacting sources and drains.
• We understand the imaging properties of the Maxwell Fisheye in the wave regime.
Let us add one final thought. It has taken the scientific community a long time of investigation and discussion to understand the different ingredients of the diffraction limit. Abbe's limit was initially attributed to the optical device only. 
Rather, all three processes of imaging, namely illumination, transfer and detection, make an equal contribution to the total diffraction limit. Therefore, we think that violating the diffraction limit requires considering all three factors together. Of course, one might circumvent the limit and achieve a better resolution by focusing on one factor, but that does not necessarily imply the violation of a fundamental limit. One example is STED microscopy, which focuses on the illumination; another is near-field scanning microscopy, which circumvents the diffraction limit by focusing on detection. Other methods and strategies in sub-wavelength imaging, such as negative refraction, time-reversal imaging and absolute optical instruments, concentrate on the faithful transfer of the optical information. In our opinion, the most significant, and naturally the most controversial, part of our findings in the course of this study was elucidating the role of detection. Maxwell's Fisheye transmits the optical information faithfully, but this is not enough. To have a faithful image, it is also necessary to extract the information at the destination. In our last two papers, we report our new findings on the contribution of detection. We find that in absolute optical instruments, such as the Maxwell Fisheye, embedded sources and detectors are not independent. They are mutually interacting, and this interaction influences the imaging properties of the system.
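For context, the Maxwell fisheye named in the title is the spherically symmetric gradient-index medium, in its standard textbook form with device radius $R$ and maximal index $n_0$ at the centre:

```latex
n(r) = \frac{n_0}{1 + (r/R)^2}
```

In this profile every ray trajectory is a circle, and all rays leaving one point reconverge at the antipodal point, which is why the medium qualifies as an absolute optical instrument in the geometrical-optics sense discussed above.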
29

Generování realistických snímků obloh / Generation of realistic skydome images

Špaček, Jan January 2020 (has links)
We aim to generate realistic images of the sky with clouds using generative adversarial networks (GANs). We explore two GAN architectures, ProGAN and StyleGAN, and find that StyleGAN produces significantly better results. We also propose a novel architecture, SuperGAN, which aims to generate images at very high resolutions that cannot be efficiently handled by state-of-the-art architectures.
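Both architectures train a generator $G$ and a discriminator $D$ against the standard adversarial objective, shown here for reference (ProGAN and StyleGAN differ in network structure, not in this minimax game):

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

$D$ is rewarded for telling real sky images $x$ from generated ones $G(z)$, while $G$ is rewarded for fooling it.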
30

Sensors and wireless networks for monitoring climate and biology in a tropical region of intensive agriculture : methods, tools and applications to the case of the Mekong Delta of Vietnam / Réseaux de capteurs sans fil pour l’observation du climat et de la biologie dans une région tropicale d’agriculture intensive : méthodes, outils et applications pour le cas du Delta du Mékong, Vietnam

Lam, Bao Hoai 26 January 2018 (has links)
Les changements climatiques ont des impacts considérables sur le temps, les océans et les rivages, la vie sauvage. Ils amènent des problèmes désormais considérés comme majeurs par les gouvernements et organisations internationales. Ces efforts ont fourni un cadre à cette thèse, qui propose de procéder en boucle fermée de l’observation d’insectes ravageurs, avec des centaines de capteurs en réseau ("light traps"), au système d’information, et enfin à des décisions de lutte, manuelles ou automatiques. Le point d’appui pratique est la conception d’un système de comptage d’insectes proliférant dans les cultures de riz (BPH). L’abstraction que nous développons est celle d’une machine environnementale de grande taille, distribuée, qui capte et synthétise l’information, élabore des connaissances, et prend des décisions. Autour de cette abstraction, nous avons élaboré un système de vision "fisheye" effectuant le comptage des insectes. Nous proposons un système d’information géographique directement connecté au réseau de capteurs. Le couplage direct, "cyber-physique", entre les systèmes d’information et l’observation de l’environnement à échelle régionale est une nouveauté transposable, qui permet de comprendre et contrôler quantité d’évolutions. / Climate changes bring problems related to nature evolutions. Global warming has an impact on sea level, weather patterns, and wild life. A number of national and international organizations are developing research programs in these directions, including threats on cultures and insect proliferation. Monitoring these phenomena, observing consequences, elaborating counteracted strategies are critical for the economy and society.The initial motivation of this work was the understanding of change impacts in the Mekong Delta region. 
From there, automatic observation tools were designed with a real-time information system able to integrate environmental measures and then to support knowledge production. Tracking environmental evolutions is a distributed sensing problem, which can be addressed by the association of efficient sensors and radio communications, operated under the control of an information system. Sensing insects is very complex due to their diversity and dispersion. However, it is feasible in the case of intensive agricultural production, as is the case for rice, which has a small number of pests. An automatic vision observatory is proposed to observe the main threats to the rice, as an evolution of manual light traps. Radio communication weaves these observatories into a network connected to databases storing measures and possible counteractions. An example observatory has a fisheye camera and insect-counting algorithms for the BPH practical case in Vietnam. By considering the observation system as an input for an abstract machine, and considering the decisions and actions taken as a possible control on the environment, we obtain a framework for knowledge elaboration that can be useful in many other situations.
