101

Three dimensional object analysis and tracking by digital holography microscopy

Schockaert, Cédric 26 February 2007 (has links)
Digital Holography Microscopy (DHM) is a 3D measurement technique made practical by charge-coupled devices (CCD cameras) capable of recording high-resolution images numerically. This opens a new door on the theory of holography discovered in 1949 by Gabor: the door to digital hologram processing. A hologram is an ordinary image, except that it encodes the complex amplitude of the light in the intensities recorded by the camera. The complex amplitude combines, for each point of the image, the energy of the light (the squared modulus of the amplitude) with its propagation angle (the phase of the amplitude). When the hologram is digital, this dual information, together with a diffractive model of light propagation, makes it possible to investigate numerically the planes in front of and behind the plane recorded by the imaging system. Three-dimensional information can therefore be captured by a single CCD camera, and the acquisition rate of this volume information is limited only by the frame rate of that one camera. For each digital hologram, this numerical investigation of the surrounding planes serves to refocus objects that appear out of focus in the plane originally acquired by the CCD.

This thesis develops general and robust algorithms to automate the analysis, in 3D space and in time, of objects present in a volume studied by an imaging system that records holograms. Manually processing a huge number of holograms is not realistic, so the task has to be automated by software implementing precise algorithms. Here, the imaging system is a Mach-Zehnder interferometer working in transmission, and the studied objects are either biological (crystals, vesicles, cancer cells) or latex particles.

We propose and test focus criteria, based on a single focus metric, for both amplitude and phase objects. These criteria determine the best focus plane of an object during the numerical investigation, with a precision finer than the depth of field of the microscope. Building on this refocusing theory, we develop object detection algorithms that construct a synthetic image in which objects appear bright on a dark background. This detection map is the first step towards a fully automatic analysis of the objects present in a hologram. Combining the detection algorithm with the focus criteria yields precise measurements of the 3D position of each object, together with other relevant characteristics such as the object's surface in its focus plane or its convexity. These additional measurements rely on a segmentation algorithm adapted to the objects studied in this thesis (opaque objects, and transparent objects in an environment of uniform refractive index). The last algorithm investigated in this work associates objects in time, from hologram to hologram, to extract 3D trajectories using predictive Kalman filtering.

These algorithms are the building blocks of two software packages: a DHM Object Detection and Analysis package and a Kalman Tracking package. The first is designed for both opaque and transparent objects; since no further assumption is made about what an "object" is, the developed algorithms are very general and can be applied to a wide range of objects studied in transmission by DHM.
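As an illustration of the numerical refocusing described in this abstract, here is a minimal sketch of angular-spectrum propagation combined with a focus-metric sweep. The propagation step follows the standard angular-spectrum method; the amplitude-variance focus metric, the function names and all parameter values are illustrative assumptions, not the criteria actually developed in the thesis.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, dz):
    """Propagate a complex optical field by a distance dz using the
    angular-spectrum method (all lengths in the same units)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # kz/(2*pi) = sqrt(1/lambda^2 - fx^2 - fy^2); evanescent waves are discarded.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def best_focus(field, wavelength, pixel_size, depths):
    """Sweep candidate planes and keep the one maximising a simple
    amplitude-variance focus metric (a placeholder for the thesis's criteria)."""
    scored = []
    for dz in depths:
        refocused = angular_spectrum_propagate(field, wavelength, pixel_size, dz)
        scored.append((np.var(np.abs(refocused)), dz, refocused))
    _, dz, refocused = max(scored, key=lambda s: s[0])
    return dz, refocused

# Hypothetical usage: a 633 nm laser, 10 um pixels, scanning +/- 1 mm
# around the recorded plane of a complex field `hologram_field`.
# dz, plane = best_focus(hologram_field, 633e-9, 10e-6, np.linspace(-1e-3, 1e-3, 81))
```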
The tracking software is adapted to the dynamic applications of the thesis, namely flows of objects. Performance and results are presented in a dedicated chapter.
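In the same spirit, here is a minimal sketch of frame-to-frame data association with a constant-velocity Kalman filter in 3D. The noise covariances, the gating threshold and the greedy nearest-neighbour association are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

class Track3D:
    """Constant-velocity Kalman filter over state [x, y, z, vx, vy, vz]."""

    def __init__(self, xyz, dt=1.0, q=1e-3, r=1e-2):
        self.x = np.hstack([np.asarray(xyz, float), np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)  # motion model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we observe position only
        self.Q = q * np.eye(6)   # process noise (assumed)
        self.R = r * np.eye(3)   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]  # predicted position, used to gate candidate detections

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

def associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour assignment of detections to predicted tracks;
    returns the leftover detections (candidates for new tracks)."""
    preds = [t.predict() for t in tracks]
    free = list(range(len(detections)))
    for t, pred in zip(tracks, preds):
        if not free:
            break
        j = min(free, key=lambda i: np.linalg.norm(detections[i] - pred))
        if np.linalg.norm(detections[j] - pred) < gate:
            t.update(detections[j])
            free.remove(j)
    return [detections[i] for i in free]
```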
102

Control of distributed objects

Hamid, Tariq Parwaiz January 1994 (has links)
No description available.
103

An object-oriented approach towards a collaborative environment for computer-aided engineering (C.A.E.)

Iosifidis, Philip January 1996 (has links)
No description available.
104

Acceleration techniques in ray tracing for dynamic scenes

Samothrakis, Stavros Nikolaou January 1998 (has links)
No description available.
105

Digital image based surface modelling

Eberhardt, Joerg January 1998 (has links)
No description available.
106

Management of long-running high-performance persistent object stores

Printezis, Antonios January 2000 (has links)
No description available.
107

Applications of sequence geometry to visual motion

Clarke, John Christopher January 1997 (has links)
No description available.
108

The effects of age of acquisition in processing people's faces and names

Moore, Viviene M. January 1998 (has links)
Word frequency and age of acquisition (AoA) influence word and object recognition and naming: high-frequency and early-acquired items are processed faster than low-frequency and/or late-acquired items. The high correlation between word frequency and AoA makes these effects difficult to distinguish, but this difficulty can be avoided by investigating the effects of AoA in the domain of recognising and naming famous faces and names. Face processing is a suitable domain because functional models of face processing were developed by analogy to word and object processing models. Nine experiments on the effects of AoA on face and name processing are reported. Experiment 1 investigated the influence of a set of variables on naming famous faces; the variables were regressed on the speed and accuracy of face naming, and only familiarity and AoA significantly predicted successful naming. A factorial analysis and a full replication revealed a consistent advantage in name production for early-acquired celebrities' faces (Experiments 2 & 3), and this advantage was apparent from the first presentation (Experiment 4). Face and name recognition were faster for early-acquired than for late-acquired celebrities (Experiments 5 & 8). Early-acquired names were read aloud faster than late-acquired names (Experiment 7). Conversely, semantic classifications were made faster to late-acquired celebrities' faces (Experiment 6), but there was no such effect in the same task with written names (Experiment 9). An effect of AoA for celebrities, whose names are acquired later in life than object names, is problematic for the developmental account of AoA, and effects of AoA in recognition tasks are problematic for theorists who propose that speech output is the locus of AoA. A mechanism is proposed to account for the empirical findings. The data also present a challenge for computer modelling to simulate the combined effects of AoA and cumulative frequency.
109

Some new techniques for faint object spectroscopy in astronomy

Parry, Ian R. January 1986 (has links)
Astronomers require large amounts of spectroscopic data for faint astronomical sources if they are to successfully confront today's most important cosmological and astrophysical problems. However, until recently such data has been particularly difficult to acquire and the supply of telescope time available has fallen well short of demand. This thesis describes spectroscopic techniques of high efficiency that allow the data to be obtained using a minimum of telescope time. The technical aspects discussed include instrumentation, observing practices and data reduction. In particular, a new faint object spectrograph for the La Palma Observatory and an automated multi-fibre spectrograph coupler for the Anglo-Australian Telescope are described. It is now feasible for extensive spectroscopic surveys to be carried out at very faint magnitudes.
110

Augmenting the relational model with conceptual graphs

Bowen, Brian A. January 1995 (has links)
While the relational model for data storage is sufficient for modelling and manipulating a large number of application domains, a growing class of application domains is difficult or impossible for the relational model to handle efficiently. The realisation of this fact has led to a proliferation of data models that attempt to increase the complexity and semantic capture of the domains they can model; the development of object-oriented databases and the various semantic data models are a result of this. The idea of using logic to define, manipulate and constrain data has given rise to a large number of systems that interface, not always successfully, a database system with a logic processing system. Most such systems are based on Prolog or its derivatives. This thesis describes the development and use of an object-oriented and semantically rich form of logic, conceptual graph theory, as a system for the definition, manipulation and constraint of data. It describes a theoretical correspondence between conceptual graph theory and the relational model, and proceeds to develop an augmented, hybrid theory that is formally more expressive than, and as rigorous as, languages based on the relational algebra or calculus. The thesis also describes the design and implementation of a hybrid relational-database/conceptual-graph system that has a cleaner and more principled system of semantic capture than other (for example, Prolog-based) systems, and that is also adaptive in nature: it automatically modifies its underlying storage structures in accordance with modifications made to the structures of the application domain over time. This completely shields the user from any responsibility for database design and maintenance, so the user need only be concerned with application-domain knowledge. Although the implementation described is incomplete, it can be extended to produce a deductive, object-oriented database system based on conceptual graphs.
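To make the correspondence between the two models concrete, here is a toy sketch of how n-ary conceptual-graph relation nodes can be flattened into relational tuples. The Concept and Relation classes, the WORKS_FOR example and the to_relational mapping are illustrative assumptions chosen for exposition, not Bowen's actual formalism or implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    type_: str       # e.g. "PERSON"
    referent: str    # e.g. "john"

@dataclass(frozen=True)
class Relation:
    name: str        # e.g. "WORKS_FOR"
    args: tuple      # ordered tuple of Concept nodes

def to_relational(graph):
    """Flatten a set of relation nodes into {relation name: set of tuples},
    the shape a relational store expects."""
    tables = {}
    for rel in graph:
        tables.setdefault(rel.name, set()).add(
            tuple(c.referent for c in rel.args))
    return tables

# The conceptual graph [PERSON: john] -> (WORKS_FOR) -> [COMPANY: acme]
# becomes a single tuple in a WORKS_FOR relation.
g = {Relation("WORKS_FOR", (Concept("PERSON", "john"),
                            Concept("COMPANY", "acme")))}
print(to_relational(g))   # {'WORKS_FOR': {('john', 'acme')}}
```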
