241

Content rendering and interaction technologies for digital heritage systems

Patoli, Muhammad Zeeshan January 2011 (has links)
Existing digital heritage systems accommodate a huge amount of digital repository information; however, their content rendering and interaction components generally lack the more interesting functionality that allows better interaction with heritage contents. Many digital heritage libraries are simply collections of 2D images with associated metadata and textual content, i.e. little more than museum catalogues presented online. However, over the last few years, largely as a result of EU framework projects, some 3D representations of digital heritage objects are beginning to appear in a digital library context. In the cultural heritage domain, where researchers and museum visitors like to observe cultural objects as closely as possible and to feel their existence and use in the past, giving the user only 2D images along with textual descriptions significantly limits interaction and hence understanding of their heritage. The availability of powerful content rendering technologies, such as 3D authoring tools to create 3D objects and heritage scenes, grid tools for rendering complex 3D scenes, gaming engines to display 3D interactively, and recent advances in motion capture technologies for embodied immersion, allows the development of unique solutions for enhancing user experience and interaction with digital heritage resources and objects, giving a higher level of understanding and greater benefit to the community. This thesis describes DISPLAYS (Digital Library Services for Playing with Shared Heritage Resources), a novel conceptual framework in which five unique services are proposed for digital content: creation, archival, exposition, presentation and interaction services. These services or tools are designed to allow the heritage community to create, interpret, use and explore digital heritage resources organised as an online exhibition (or virtual museum). This thesis presents innovative solutions for two of these services or tools: content creation, where a cost-effective render grid is proposed; and an interaction service, where a heritage scenario is presented online using a real-time motion capture and digital puppeteer solution that lets the user explore their digital heritage through embodied immersive interaction.
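To make the five-service decomposition easier to picture, the following sketch expresses the DISPLAYS services as a set of interfaces. The class and method names are illustrative assumptions only; the thesis does not publish an API, and the actual framework components will differ.

```python
from abc import ABC, abstractmethod

class DisplaysService(ABC):
    """One of the five DISPLAYS services (illustrative sketch, not the thesis API)."""
    @abstractmethod
    def process(self, resource: dict) -> dict: ...

class CreationService(DisplaysService):
    """Creates 3D heritage content, e.g. by submitting scenes to a render grid."""
    def process(self, resource):
        return {**resource, "rendered": True}      # placeholder for a render-grid job

class ArchivalService(DisplaysService):
    """Stores content and metadata in the digital repository."""
    def process(self, resource):
        return {**resource, "archived": True}

class ExpositionService(DisplaysService):
    """Organises archived resources into an online exhibition (virtual museum)."""
    def process(self, resource):
        return {**resource, "exhibit": "virtual-museum"}

class PresentationService(DisplaysService):
    """Displays 3D content interactively, e.g. through a game-engine front end."""
    def process(self, resource):
        return {**resource, "presented": True}

class InteractionService(DisplaysService):
    """Couples motion-capture input to a digital puppeteer for embodied interaction."""
    def process(self, resource):
        return {**resource, "interactive": True}

# A heritage resource would pass through the services in sequence:
pipeline = [CreationService(), ArchivalService(), ExpositionService(),
            PresentationService(), InteractionService()]
```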
242

Supporting policy-based contextual reconfiguration and adaptation in ubiquitous computing

Dhomeja, Lachhman Das January 2011 (has links)
For pervasive computing systems to perform tasks that support us in everyday life without requiring attention from the users of the environment, they need to adapt themselves in response to context. This makes context-awareness in general, and context-aware adaptation in particular, an essential requirement for pervasive computing systems. Two features of context-awareness are contextual reconfiguration and contextual adaptation, in which applications adapt their behaviour in response to context. We combine both features and put forward a system, called Policy-Based Contextual Reconfiguration and Adaptation (PCRA), that provides runtime support for both. The combination of context-aware reconfiguration and context-aware adaptation provides a broad scope of adaptation and hence allows the development of diverse adaptive context-aware applications. However, another important issue is the choice of an effective means for developing, modifying and extending such applications. The main argument of this thesis is that a policy-based programming model provides such a means. The thesis also addresses important surrounding issues associated with adaptive context-aware applications, including the management of invalid bindings and the provision of seamless caching support for remote services involved in bindings, for improved performance. Bindings may become invalid due to failure conditions arising from network problems or the migration of software components; PCRA therefore integrates reconfiguration support to manage bindings and seamless caching support for remote services. This thesis also describes the design and implementation of PCRA, which enables the development of adaptive context-aware applications from policy specifications: applications are modelled by specifying binding policies and adaptation policies. The use of policies simplifies development because policies are expressed at a high level of abstraction and independently of each other. PCRA also allows the dynamic modification of applications, since policies are independent units of execution that can be dynamically loaded into and removed from the system. This is a powerful and useful capability, as applications may evolve over time, i.e. user needs and preferences may change, while restarting them is undesirable. We evaluate PCRA by comparing its features to other systems in the literature and through performance measurements.
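As an illustration of how such policy specifications might look, the sketch below gives a hypothetical binding policy and adaptation policy expressed as plain data structures. The syntax, field names and context identifiers are invented for illustration; PCRA's actual policy language is not reproduced here.

```python
# Hypothetical policy specifications in the spirit of PCRA (illustrative syntax only).

# Binding policy: (re)bind an application component to a remote service
# whenever the relevant context changes.
binding_policy = {
    "name": "bind-nearest-printer",
    "on_context": {"source": "user.location", "event": "changed"},
    "action": {
        "bind": {
            "component": "PrintProxy",
            "service_type": "PrinterService",
            "selection": "nearest(user.location)",
        }
    },
}

# Adaptation policy: change a component's behaviour in response to context.
adaptation_policy = {
    "name": "silence-during-meetings",
    "on_context": {"source": "user.activity", "equals": "in_meeting"},
    "action": {
        "adapt": {
            "component": "PhoneUI",
            "operation": "set_profile",
            "arguments": {"profile": "silent"},
        }
    },
}
```

Because each policy is an independent unit, a runtime like the one described could load or remove such specifications dynamically without restarting the application.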
243

Quality Evaluation in Fixed-point Systems with Selective Simulation / Evaluation de la qualité des systèmes en virgule fixe avec la simulation sélective

Nehmeh, Riham 13 June 2017 (has links)
Time-to-market and implementation cost are high-priority considerations in the automation of digital hardware design. Nowadays, digital signal processing applications use fixed-point architectures due to their advantages in terms of implementation cost. Thus, floating-point to fixed-point conversion is mandatory. The conversion process consists of two parts corresponding to the determination of the integer part word-length and the fractional part word-length. The refinement of fixed-point systems requires optimizing data word-lengths to prevent overflows and excessive quantization noise while minimizing implementation cost. Applications in the image and signal processing domains are tolerant to errors if their probability or their amplitude is small enough. Numerous research works focus on optimizing the fractional part word-length under an accuracy constraint. Reducing the number of bits for the fractional part leads to a small error compared to the signal amplitude. Perturbation theory can be used to propagate these errors inside the system, except for unsmooth operations, such as decision operations, for which a small error at the input can lead to a large error at the output. Likewise, optimizing the integer part word-length can significantly reduce the cost when the application is tolerant to a low probability of overflow. Overflows lead to errors of high amplitude, so their occurrence must be limited. For word-length optimization, the challenge is to evaluate efficiently the effect of overflow and unsmooth errors on the application quality metric. The high amplitude of these errors requires simulation-based approaches to evaluate their effect on quality. In this thesis, we aim at accelerating the process of quality metric evaluation. We propose a new framework using selective simulation to accelerate the simulation of overflow and unsmooth error effects. This approach can be applied to any C-based digital signal processing application. Compared to complete fixed-point simulation approaches, where all the input samples are processed, the proposed approach simulates the application only when an error occurs. Indeed, overflows and unsmooth errors must be rare events to maintain the system functionality, so selective simulation significantly reduces the time required to evaluate the application quality metric. Moreover, we focus on optimizing the integer part, which can significantly decrease the implementation cost when a slight degradation of the application quality is acceptable. Indeed, many applications are tolerant to overflows if the probability of overflow occurrence is low enough. Thus, we exploit the proposed framework in a new integer word-length optimization algorithm. The combination of the optimization algorithm and the selective simulation technique significantly reduces the optimization time.
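A minimal sketch of the selective-simulation idea follows, assuming a per-sample error detector and a reference (error-free) model; the thesis's framework instruments C applications and detects overflow and decision errors differently, so this is only a conceptual illustration.

```python
import numpy as np

def selective_quality_eval(samples, reference_model, fixed_point_model, error_detected):
    """Run the costly fixed-point simulation only for samples where an overflow
    or decision (unsmooth) error is detected; otherwise reuse the reference output."""
    outputs = []
    for x in samples:
        if error_detected(x):
            outputs.append(fixed_point_model(x))   # rare case: full fixed-point simulation
        else:
            outputs.append(reference_model(x))     # common case: reuse reference output
    return np.array(outputs)
```

Because overflows and decision errors are rare by design, the fixed-point branch executes for only a small fraction of the input samples, which is where the reduction in evaluation time comes from.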
244

A Developmental Grasp Learning Scheme For Humanoid Robots

Bozcuoglu, Asil Kaan 01 September 2012 (has links) (PDF)
While an infant is learning to grasp, two key processes lead to successful development. In the first process, infants use an intuitive approach in which the hand is moved towards the object to create an initial contact regardless of the object's properties. The contact is followed by a tactile grasping phase in which the object is enclosed by the hand. This intuitive grasping behavior leads to a grasping mechanism that utilizes visual input and incorporates it into the grasp plan. The second process is scaffolding: guidance given by stating how to accomplish the task or by modifying the infant's behavior through intervention. Infants pay attention to such guidance and understand indications of the important features of an object from 9 months of age. This supervision mechanism plays an important role in learning how to grasp certain objects in a proper way. To simulate these behavioral findings, a reaching controller and a tactile grasping controller were implemented on the iCub humanoid robot, allowing it to reach an object from different directions and enclose its fingers around the object. With these, a human-like grasp learning scheme for the iCub is proposed: the first step is an unsupervised learning phase in which the robot experiments with how to grasp objects, and the second step is a supervised learning phase in which a caregiver corrects the end-effector's position when the robot makes a mistake. Through several experiments with two different grasping styles, we observe that the proposed methodology shows a better learning rate compared to a scaffolding-only learning mechanism.
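The two-phase scheme could be summarised along the following lines; the robot and caregiver method names are hypothetical and do not correspond to the actual iCub controller interfaces used in the thesis.

```python
import random

def grasp_learning(robot, objects, caregiver=None, trials=100):
    """Sketch of the two-phase scheme: unsupervised exploration, then
    caregiver-corrected (scaffolded) trials. All method names are hypothetical."""
    experience = []
    for _ in range(trials):
        obj = random.choice(objects)
        approach = robot.sample_approach_direction()    # pick a reach direction
        robot.reach(obj, approach)                      # reaching controller
        success = robot.tactile_grasp(obj)              # enclose fingers on contact
        if not success and caregiver is not None:
            # Scaffolding: the caregiver corrects the end-effector position.
            approach = caregiver.correct(robot.end_effector_pose(), obj)
            robot.reach(obj, approach)
            success = robot.tactile_grasp(obj)
        experience.append((obj, approach, success))     # data for updating the grasp model
    return experience
```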
245

Comparison Of Histograms Of Oriented Optical Flow Based Action Recognition Methods

Ercis, Firat 01 September 2012 (has links) (PDF)
In the task of human action recognition in uncontrolled video, motion features are widely used in order to achieve subject and appearance invariance. We implemented three Histograms of Oriented Optical Flow (HOOF) based methods which share a common motion feature extraction phase. We compute an optical flow field over each frame of the video, and the flow vectors are then histogrammed according to their angle values to represent each frame with a histogram. In order to capture local motions, the bounding box of the subject is divided into grids and the angle histograms of all grids are concatenated to obtain the final motion feature vector. The motion features are supplied to three different classification alternatives: clustering combined with HMMs, clustering with k-nearest neighbours, and average histograms. All three methods are implemented and the results are evaluated on the Weizmann and KTH datasets.
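A sketch of the shared feature-extraction phase is given below using OpenCV's dense optical flow; the grid size, number of angle bins and magnitude weighting are assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def hoof_descriptor(prev_gray, curr_gray, bbox, grid=(3, 3), bins=8):
    """Histogram-of-oriented-optical-flow feature for one frame pair (sketch).
    bbox = (x, y, w, h) is the subject's bounding box."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = bbox
    roi = flow[y:y + h, x:x + w]
    angles = np.arctan2(roi[..., 1], roi[..., 0])      # flow direction per pixel
    mags = np.linalg.norm(roi, axis=-1)                # flow magnitude per pixel
    gh, gw = h // grid[0], w // grid[1]
    feature = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            a = angles[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            m = mags[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            hist, _ = np.histogram(a, bins=bins, range=(-np.pi, np.pi), weights=m)
            feature.append(hist / (hist.sum() + 1e-9))  # normalise each grid cell
    return np.concatenate(feature)
```

Per-frame descriptors of this kind are then fed to the three classifier alternatives (HMM over clustered descriptors, k-nearest neighbours, or comparison of averaged histograms).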
246

Acceleration Of Molecular Dynamics Simulation For Tersoff2 Potential Through Reconfigurable Hardware

Vargun, Bilgin 01 June 2012 (has links) (PDF)
In nanotechnology, carbon nanotube systems are studied with molecular dynamics simulation software to investigate the properties of the molecular structure. The computational load of such software is very high; three-body simulations in particular can take a couple of weeks even for a small number of atoms, and researchers use supercomputers to study more complex systems. In recent years, with the development of sophisticated Field Programmable Gate Array (FPGA) technology, researchers have designed special-purpose co-processors to accelerate their simulations. Ongoing research shows that application-specific digital circuits achieve better performance than an ordinary computer. In this thesis, a new special-purpose co-processor, called TERSOFF2, is designed and implemented. The resulting design is a low-cost, low-power and high-performance computing solution that can solve the same computation problem 1000 times faster. Moreover, an optimized library of digital elementary mathematical functions is designed and implemented as part of this thesis. The digital circuits and the architecture of the co-processor are described in the related chapters, and performance results are presented at the end of the thesis.
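For reference, the general Tersoff form (of which Tersoff-2 is a parametrization) indicates why the three-body computation dominates: the bond-order term b_ij for each pair depends on a sum over all other neighbours k. One common way of writing it is given below; parameter conventions vary between publications, so this is a hedged reminder rather than the exact variant implemented in the thesis.

```latex
E = \frac{1}{2}\sum_{i}\sum_{j\neq i} f_C(r_{ij})\left[A\,e^{-\lambda_1 r_{ij}} - b_{ij}\,B\,e^{-\lambda_2 r_{ij}}\right],
\qquad
b_{ij} = \left(1+\beta^{n}\zeta_{ij}^{\,n}\right)^{-1/(2n)},
\qquad
\zeta_{ij} = \sum_{k\neq i,j} f_C(r_{ik})\,g(\theta_{ijk}),
```

where f_C is a smooth cutoff function and g(θ) an angular penalty; evaluating ζ_ij for every pair is the part that a hardware co-processor can parallelize most profitably.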
247

Robust Watermarking Of Images

Balci, Salih Eren 01 September 2003 (has links) (PDF)
Digital image watermarking has gained great interest among researchers in the last decade. With such a large community providing a continuously growing list of proposed algorithms, the field is rapidly finding solutions to its problems; however, we are still far from complete success. Therefore, more and more people are entering the field to make the watermarking idea useful and reliable for the digital world. Of these various watermarking algorithms, some outperform others in terms of basic watermarking requirements such as robustness, invisibility and processing cost. In this thesis, we study the performance of different watermarking algorithms in terms of robustness. The algorithms are chosen as representatives of different categories, such as spatial-domain and transform-domain methods. We evaluate the performance of a selected set of 9 different methods from the watermarking literature against a selected set of attacks and distortions, and try to identify the properties that make a method vulnerable or resistant to these attacks.
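As a toy illustration of the spatial-domain category mentioned above (not one of the nine methods evaluated in the thesis), the sketch below embeds and extracts a binary watermark in the least significant bits of a grayscale image.

```python
import numpy as np

def embed_lsb(cover, watermark_bits):
    """Embed a binary watermark in the least significant bits of a grayscale
    cover image (toy spatial-domain example)."""
    flat = cover.astype(np.uint8).flatten()
    bits = np.asarray(watermark_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs with watermark bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits of the embedded watermark."""
    return stego.flatten()[:n_bits] & 1
```

LSB embedding is nearly invisible but very fragile under common distortions such as compression or filtering, illustrating the robustness trade-offs that evaluations of this kind measure across categories.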
248

Road Extraction From Satellite Images By Self-supervised Classification And Perceptual Grouping

Sahin, Eda 01 January 2013 (has links) (PDF)
Road network extraction from high-resolution satellite imagery is the most frequently utilized technique for updating and correcting geographic information system (GIS) databases, registering multi-temporal images for change detection, and automatically aligning spatial datasets. This approach is widely employed due to improvements in satellite technology, such as the development of new sensors for high-resolution imagery. To avoid the cost of human interaction, various automatic and semi-automatic road extraction methods have been developed and proposed in the literature. The aim of this study is to develop a fully automated method which can extract road networks by using the spectral and structural features of the roads. In order to achieve this goal we set several objectives and work them out one by one. The first objective is to obtain reliable road seeds, since they are crucial for determining road regions correctly in the classification step. The second objective is to find the most suitable features and classification method for road extraction. The third objective is to locate road centerlines, which define the road topology. A number of algorithms are developed and tested throughout the thesis to achieve these objectives, and the advantages of the proposed ones are explained. The final version of the proposed algorithm is tested on three-band (RGB) satellite images and the results are compared with other studies in the literature to illustrate the benefits of the proposed algorithm.
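As one possible realisation of the centerline objective (the thesis's own algorithm may differ), a classified road mask can be thinned to one-pixel-wide centerlines with morphological skeletonization:

```python
import numpy as np
from skimage.morphology import skeletonize, remove_small_objects

def road_centerlines(road_mask, min_region=100):
    """Given a binary road mask from the classification step, remove small
    spurious regions and thin the remaining roads to one-pixel centerlines."""
    mask = remove_small_objects(road_mask.astype(bool), min_size=min_region)
    return skeletonize(mask)   # boolean image of centerline pixels
```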
249

Testing Effectiveness And Effort In Software Product Lines

Coteli, Mert Burkay 01 January 2013 (has links) (PDF)
Software product lines (SPL) aim to decrease the total software development cost through reusability and variability. However, the increasing number of variations in the delivery types of products results in an increasing cost of the verification and validation process. The total testing cost of development can also be decreased by reusing test cases and scripts. The main objective of this study is to increase testing effectiveness while minimizing testing effort. Four different cases consisting of Aselsan's SPL projects have been studied. Firstly, the FIG basis path method was applied in the functional testing phase, and an increase in the testing effectiveness value was observed. The FIG basis path method is a test case sequence generation technique that uses the feature tree of the software component, and it is preferable for improving testing effectiveness in the functional verification phase. The second study was on testing effort estimation. There are two testing approaches for SPL projects, namely infrastructure-based and product-focused testing; these two techniques were compared in terms of testing effort, giving test managers an idea about selecting the proper testing technique. Thirdly, reusability techniques were evaluated. Reusability of testing artifacts can be used to decrease the total testing effort, so two reusability techniques for testing artifacts were compared in terms of the number of test cases, and the proper technique can be chosen to decrease testing effort. Finally, the selection of a reference application for platform tests was proposed, software products were grouped according to their redundancy values, and testing effectiveness values were evaluated for each test grouping.
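The abstract does not define how the testing effectiveness value is computed; one commonly used measure in the testing literature, given here only as a reference point and not necessarily the one used in this study, is the defect detection effectiveness of a test suite:

```latex
\text{Testing effectiveness} \;=\; \frac{\text{defects detected by the executed test cases}}{\text{total number of known defects}} \times 100\%
```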
250

Effects Of SPL Domain Engineering On Testing Cost And Maintainability

Senbayrak, Ziya 01 February 2013 (has links) (PDF)
A software product line (SPL) consists of a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. Besides the testing of final deliverable products developed within the SPL, called integration testing, the way individual hardware and software components in an SPL are tested and certified for use within the SPL is particularly important in this context. This study investigates specific approaches and techniques proposed in the literature for unit testing in the SPL context. Problems inherent to this issue were studied, and possible solutions aiming at systematic and effective testing of hardware and software units in SPLs have been proposed. The specific problems of SPL testing in ASELSAN were investigated in the light of these possible solutions, and their applicability as well as their benefits were quantitatively assessed.
