  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Design and Characterization of a Current Assisted Photo Mixing Demodulator for ToF Based 3D CMOS Image Sensor

Hossain, Quazi Delwar January 2010 (has links)
Due to the increasing demand for 3D vision systems, many recent efforts have concentrated on obtaining complete 3D information, analogous to human vision. Scannerless optical range imaging systems are emerging as an interesting alternative to conventional intensity imaging in a variety of applications, including pedestrian safety, biomedical devices, robotics and industrial control. To this end, several approaches to producing 3D images have been reported, based on stereovision, structured light sources and direct measurement of the object distance from the vision system, targeting high frame rate, high accuracy, wide dynamic range, low power consumption and low cost. Among the optical range measurement techniques available in the literature, one of the most important and most intensively investigated is the time-of-flight (TOF) principle. The third dimension, i.e. depth information, can be determined by correlating the modulated light signal reflected from the scene with a reference signal synchronous with the light source modulation signal. CMOS image sensors are capable of integrating the image processing circuitry on the same chip as the light-sensitive elements. Compared to other imaging technologies, they offer lower power consumption and a potentially lower price, which makes this technology a strong candidate for next-generation solid-state imaging applications, even though CMOS process technologies are developed primarily for high-performance digital circuits. Different types of photodetectors have been proposed for three-dimensional imaging. A major performance improvement has come from the adoption of inherently mixing detectors, which combine detection and demodulation in a single device. Basically, these devices use a modulated electric field to guide the photo-generated charge carriers to different collection sites in phase with a modulation signal.
One very promising CMOS photonic demodulator based on substrate current modulation has recently been proposed. In this device the electric field penetrates deep into the substrate, enhancing the charge separation and collection mechanism, so that very good sensitivity and high demodulation efficiency can be achieved. The objective of this thesis has been the design and characterization of a Current Assisted Photo-mixing Demodulator (CAPD) to be applied in a TOF-based 3D CMOS sensing system. First, an experimental investigation of the CAPD device is carried out. As a test vehicle, 10×10 pixel arrays have been fabricated in 0.18 μm CMOS technology with a 10×10 μm² pixel size. The main properties of CAPD devices, such as the charge transfer characteristic, modulation contrast, noise performance and non-linearity, have been simulated and experimentally evaluated. Experimental results demonstrate good DC charge separation efficiency and good dynamic demodulation capabilities up to 45 MHz. The influence of performance parameters such as wavelength, modulation frequency and voltage on this device is also discussed. This test device constitutes the first step towards a high-resolution TOF-based 3D CMOS image sensor. The demodulator structure, featuring a remarkably small pixel size of 10×10 μm², is then used to realize a 120×160 pixel ranging sensor fabricated in standard 0.18 μm CMOS technology. Initial results demonstrate that the demodulator structure is suitable for a real-time 3D image sensor. The prototype camera system is capable of providing real-time distance measurements of a scene through modulated-wave TOF measurements with a modulation frequency of 20 MHz. In the distance measurements, the sensor array provides a linear distance range from 1.2 m to 3.7 m with a maximum accuracy error of 3.3% and maximum pixel noise of 8.5% at 3.7 m distance.
Extensive testing of the device and of the prototype camera system has been carried out to gain insight into the characteristics of this device, which is a good candidate for integration in large arrays for time-of-flight based 3D CMOS image sensors in the near future.
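As a rough illustration of the modulated-wave TOF principle described in this abstract, the distance follows from the phase shift between the emitted and reflected modulated signals. A minimal sketch (function and variable names are illustrative, not taken from the thesis):

```python
import math

# Speed of light in m/s.
C = 299_792_458.0

def tof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the phase shift between the emitted modulated light
    and the reflected signal: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum measurable distance before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)
```

At the 20 MHz modulation frequency used by the prototype, the unambiguous range c/(2f) is about 7.5 m, comfortably covering the reported 1.2 m to 3.7 m linear measurement range.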
12

Optimal Adaptations over Multi-Dimensional Adaptation Spaces with a Spice of Control Theory

Angelopoulos, Konstantinos January 2016 (has links)
(Self-)Adaptive software systems monitor the status of their requirements and adapt when some of these requirements are failing. The baseline for much of the research on adaptive software systems is the concept of a feedback loop mechanism that monitors the performance of a system relative to its requirements, determines root causes when there is a failure, selects an adaptation, and carries it out. The degree of adaptivity of a software system critically depends on the space of possible adaptations supported (and implemented) by the system: the larger the space, the more adaptations the system is capable of. This thesis tackles the following questions: (a) How can we define multi-dimensional adaptation spaces that subsume proposals for requirements- and architecture-based adaptation spaces? (b) Given one or more failures, how can we select an optimal adaptation with respect to one or more objective functions? To answer the first question, we propose a design process for three-dimensional adaptation spaces, named the Three-Peaks Process, that iteratively elicits control and environmental parameters from the requirements, architecture and behaviours of the system-to-be. For the second question, we propose three adaptation mechanisms. The first mechanism is founded on the assumption that only qualitative information is available about the impact of changes of the system's control parameters on its goals. The absence of quantitative information is mitigated by a new class of requirements, namely Adaptation Requirements, that impose constraints on the adaptation process itself and dictate policies about how conflicts among failing requirements must be handled. The second mechanism assumes that there is quantitative information about the impact of changes of control parameters on the system's goals, and formulates the problem of finding an adaptation as a constrained multi-objective optimization problem.
The mechanism measures the degree of failure of each requirement and selects an adaptation that minimizes it along with other objective functions, such as cost. Optimal solutions are derived by exploiting OMT/SMT (Optimization Modulo Theories/Satisfiability Modulo Theories) solvers. The third mechanism operates under the assumption that the environment changes dynamically over time and that the chosen adaptation has to take such changes into account. To this end, we apply Model Predictive Control, a well-developed technique with myriad successful applications in Control Theory. In our work, we rely on state-of-the-art system identification techniques to derive the dynamic relationship between requirements and possible adaptations, and then propose the use of a controller that exploits this relationship to optimize the satisfaction of requirements relative to a cost function. This adaptation mechanism can guarantee a certain level of requirements satisfaction over time by dynamically composing adaptation strategies when necessary. Finally, each piece of our work is evaluated through experimentation using variations of the Meeting-Scheduler exemplar.
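The selection step of the second mechanism can be pictured with a toy sketch: over a small discrete adaptation space, pick the control-parameter assignment minimizing a weighted sum of requirement failure and cost. Brute-force enumeration here stands in for the OMT/SMT formulation, and all parameter names and functions are invented for illustration:

```python
import itertools

def select_adaptation(param_domains, failure_degree, cost,
                      w_fail=1.0, w_cost=0.1):
    """Enumerate every combination of control-parameter values and return
    the assignment minimizing w_fail * failure + w_cost * cost.
    (Illustrative brute force; the thesis derives optima via OMT/SMT.)"""
    best, best_val = None, float("inf")
    for point in itertools.product(*param_domains.values()):
        assignment = dict(zip(param_domains, point))
        val = w_fail * failure_degree(assignment) + w_cost * cost(assignment)
        if val < best_val:
            best, best_val = assignment, val
    return best, best_val

# Hypothetical adaptation space: number of workers and a timeout setting.
domains = {"workers": [1, 2, 4], "timeout": [10, 20]}
fail = lambda a: max(0, 3 - a["workers"])      # requirement fails if understaffed
cost = lambda a: a["workers"] + a["timeout"] / 10
best, value = select_adaptation(domains, fail, cost)
```

With these toy functions the optimizer trades a higher worker cost for zero requirement failure, which is exactly the balancing act an objective-function formulation makes explicit.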
13

Sensing Social Interactions Using Non-Visual and Non-Auditory Mobile Sources, Maximizing Privacy and Minimizing Obtrusiveness

Matic, Aleksandar January 2012 (has links)
Social interaction is one of the basic components of human life, and it impacts the thoughts, emotions, decisions, and overall wellbeing of individuals. In this regard, monitoring social activity constitutes an important factor for a number of disciplines, particularly those related to the social and health sciences. Sensor-based collection of social interaction data has been seen as a groundbreaking tool which has the potential to overcome the drawbacks of traditional self-reporting methods and to revolutionize social behavior analysis. However, monitoring social interactions typically implies a trade-off between the quality of the collected data and the levels of unobtrusiveness and privacy protection, aspects which can affect spontaneity in subjects' behavior. Despite the substantial research in the area of automatic recording of social interactions, the existing solutions remain limited: they either capture audio/video data, which may raise privacy concerns in monitored subjects and may restrict the application to very specific areas, or provide low accuracy in detecting social interactions that occur on small spatio-temporal scales. The objective of this thesis is to provide and evaluate a solution for mobile monitoring of face-to-face social interactions which maximizes privacy and minimizes obtrusiveness. In order to reliably detect social interactions that occur on small spatio-temporal scales, the proposed solution infers two types of information, namely the spatial settings between subjects and their speech activity status. The first challenge was to select appropriate sources that do not restrict application scenarios to certain areas and do not capture privacy-sensitive data, which are the drawbacks of video/audio systems. The second stage was to interpret the data acquired from non-visual and non-auditory sources and to model social interactions on small space and time scales.
The work in this thesis assesses the reliability of the proposed approach in several scenarios, demonstrating an accuracy of approximately 90% in detecting the occurrence of face-to-face social interactions. The feasibility of using the proposed approach for social interaction data collection is further evaluated with respect to the study of social psychology, which serves as the guideline for extracting the relevant features of social interactions. The evaluation has demonstrated the possibility of extracting various nonverbal behavioral cues related to the spatial organization between individuals and their vocal behavior in social interactions. By modeling social context using the extracted features, it is possible to achieve an accuracy of 81% in the automatic classification of formal versus informal social interactions. In addition, the proposed approach was applied to gather daily patterns of social activity in order to investigate their correlation with mood changes in individuals, which had so far been explored only using traditional self-reporting methods. The findings are consistent with previous studies, indicating that the proposed method of collecting social interaction data can be used for investigating the psychological effects of social activities.
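The formal-versus-informal classification step can be pictured with a minimal nearest-centroid sketch over two of the nonverbal cues mentioned above (interpersonal distance and fraction of time spent speaking). The feature values and classifier choice are illustrative assumptions, not the thesis's actual model:

```python
from math import dist
from statistics import mean

def train_centroids(samples):
    """samples: list of (features, label) pairs, where features is e.g.
    (interpersonal distance in metres, fraction of time speaking).
    Returns a label -> centroid mapping."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {lab: tuple(mean(v[i] for v in vecs) for i in range(len(vecs[0])))
            for lab, vecs in by_label.items()}

def classify(centroids, feats):
    """Assign the label whose centroid is closest in feature space."""
    return min(centroids, key=lambda lab: dist(centroids[lab], feats))

# Hypothetical training data: formal talks happen at larger distances
# with less overlapping speech than informal chats.
samples = [((1.2, 0.3), "formal"), ((1.4, 0.2), "formal"),
           ((0.6, 0.6), "informal"), ((0.5, 0.7), "informal")]
centroids = train_centroids(samples)
```

A real pipeline would use many more cues and a stronger classifier; the point is only that spatial and vocal features alone can separate the two social contexts.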
14

Dynamic Biological Modelling: a language-based approach

Romanel, Alessandro January 2010 (has links)
Systems biology investigates the interactions and relationships among the components of biological systems to understand how they work globally. The metaphor "cells as computations", introduced by Regev and Shapiro, opened the realm of biological modelling to concurrent languages. Their peculiar characteristics have led to the development of many different bio-inspired languages that make it possible to abstract and study specific aspects of biological systems. In this thesis we present a language based on the process calculi paradigm and specifically designed to account for the complexity of signalling networks. We explore a new design space for bio-inspired languages, with the aim of capturing in an intuitive and simple way the fundamental mechanisms governing protein-protein interactions. We develop a formal framework for modelling, simulating and analysing biological systems, and provide an implementation of the framework to enable in-silico experimentation.
15

Parametric Real-Time System Feasibility Analysis Using Parametric Timed Automata

Ramadian, Yusi January 2012 (has links)
Real-time applications are playing an increasingly significant role in our life. The cost and risk involved in their design lead to the need for a correct and robust model of the system before its deployment. Many approaches have been proposed to verify the schedulability of real-time task systems. A frequent limitation is that they force the task activations to follow restrictive patterns (e.g. periodic). Furthermore, the type of analysis carried out by real-time scheduling theory relies on restrictive assumptions that could make designers miss important optimization opportunities. On the other hand, the application of formal methods for the verification of timed systems typically produces a yes/no answer that does not suggest any corrective action or robustness margins for a given design. This work proposes an approach that combines the flexibility of formal methods with the production of clear feedback for designers. The key idea is to use parametric timed automata to enable the definition of flexible task activation patterns. The Parametric Verification of Temporal Properties (PTVP) algorithm proposed in this work produces a region of feasible parameters for a real-time system: every parameter valuation within this region is guaranteed to make the system respect the desired temporal behaviour. In this way developers are provided with richer information than the simple feasibility of a given design choice. The method uses symbolic model checking techniques to produce the result, which is a union of polyhedral regions in the parameter space associated with feasible parameters. It is implemented in the tool Quinq, which is based on NuSMV3. The tool also implements optimizations to speed up the search, such as using a non-parametric model checker to find counterexamples (i.e. traces) related to unfeasible choices of parameters.
Two applications of the tool and of the underlying method to real-time system examples are presented in this dissertation: periodic real-time tasks with offsets and heterogeneous distributed real-time systems. A work that applies the tool in combination with another real-time system analysis tool, the Modular Performance Analysis Toolbox, is also presented to show one of the many possible applications of the method. We also compare our approach to the state of the art in the field of sensitivity analysis of real-time systems; compared to the other tools and approaches in this field, the method offered in this work presents unique advantages in the generality of the system modelling approach and the possibility to analyse the entire feasibility region of any desired parameter in the system.
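As a much-simplified picture of what a region of feasible parameters means, consider leaving the period T of one periodic task free and keeping every valuation that respects the EDF utilization bound U ≤ 1. This is a textbook schedulability test standing in for the PTVP algorithm, and the task values are invented for illustration:

```python
def feasible_periods(fixed_tasks, c_new, candidate_periods):
    """fixed_tasks: list of (C, T) pairs (worst-case execution time, period).
    Returns the candidate periods T for a new task with execution time
    c_new such that total utilization sum(C_i / T_i) stays within the
    EDF bound U <= 1. (A one-dimensional 'feasibility region'.)"""
    u_fixed = sum(c / t for c, t in fixed_tasks)
    return [t for t in candidate_periods if u_fixed + c_new / t <= 1.0]

# Two fixed tasks use 1/4 + 1/8 = 0.375 of the processor; the new task
# needs 1 time unit per period, so its period must satisfy 1/T <= 0.625.
region = feasible_periods([(1, 4), (1, 8)], 1, [1, 2, 4, 8])
```

The PTVP algorithm generalizes this idea to symbolic polyhedral regions over several parameters at once, rather than testing a finite list of candidate values.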
16

Closing the Gap between Business Process Analysis and Service Workflow Design with the BPM-SIC Methodology

Vairetti, Carla Marina January 2016 (has links)
Nowadays companies and organizations are challenged to integrate and automate their business processes. A business process is a set of logically related tasks carried out to produce a product or service. Business processes are typically implemented using Web services, which are programmable interfaces that can be invoked through standard communication protocols. In general, the need to outsource parts of a business process results in a large number of Web services that are generally heterogeneous and distributed among various organizations and platforms. The ability to select and integrate these Web services at runtime is desirable, as it would enable Web service platforms to react quickly to changing business needs and failures, reducing implementation costs and minimizing losses caused by poor availability. The goal of dynamic and automatic Web service composition is to generate, at runtime, a composition plan (workflow) that meets a certain business goal. Semantics-based techniques exploit specialized service annotations to facilitate the discovery of the simple or composed services (matchmaking) that form part of a composition plan. Usually, the matchmaking process pays more attention to the selection of services and much less to the behavior of the composed service (workflow), which tends to be very simple. In industry, on the contrary, composed services (workflows) are manually defined and typically follow complex control flow patterns that implement elaborate business processes. Even when a dynamic and automatic service composition technique produces an executable workflow that implements a business process, the workflow must be validated against the business goal. This high-level analysis is usually performed by domain experts (BPA: Business Process Analyst) who must coordinate the implementation of the business processes with the technical experts (SA: System Architect).
The conversation between BPA and SA is a fundamental requirement for the creation cycle of an executable business process. The lack of communication between the two participants not only causes delays in development time, but also generates product failures and unnecessary cycles, often involving increases in production costs and large losses of money for organizations. In this thesis, we have developed three approaches that narrow the gap between BPA and SA and make their collaboration more effective. First, we present a Web service composition technique that is dynamic and automatic and is based on services' semantic descriptions; the composed service corresponds to an executable workflow with complex control flow, facilitating the SA's implementation task. Second, we provide a tool that allows BPAs to verify and analyze the performance of their business processes. Finally, we exploit both tools to propose a methodology that integrates both perspectives, allowing knowledge transfer in both directions. We obtained promising results that reveal inconsistencies in the development and design of business processes, as well as recommendations for best practices in both directions.
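A minimal sketch of the semantic matchmaking idea described above: annotate each service with the concepts it consumes and produces, then chain services until the goal concept becomes reachable. Real matchmakers reason over ontologies and generate far richer control flow; the service names and concepts below are invented for illustration:

```python
def forward_chain(services, available, goal):
    """services: name -> (inputs, outputs), each a set of semantic concepts.
    Greedily applies any service whose inputs are already satisfied and
    whose outputs add something new, until the goal concept is produced.
    Returns the ordered plan of service names, or None if unreachable."""
    plan, facts = [], set(available)
    progress = True
    while goal not in facts and progress:
        progress = False
        for name, (ins, outs) in services.items():
            if name not in plan and ins <= facts and not outs <= facts:
                plan.append(name)
                facts |= outs
                progress = True
    return plan if goal in facts else None

# Hypothetical annotated services: geocoding feeds a weather lookup.
services = {
    "geocode": ({"address"}, {"coords"}),
    "weather": ({"coords"}, {"forecast"}),
}
plan = forward_chain(services, {"address"}, "forecast")
```

The plan here is a simple sequence; the thesis's contribution is precisely that real business goals demand workflows with complex control flow (branches, loops, parallelism) rather than linear chains like this one.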
17

Analysis of users' psycho-physiological parameters in response to affective multimedia - A multimodal and implicit approach for user-centric multimedia tagging

Khomami Abadi, Mojtaba January 2017 (has links)
The affective state of a user during an interaction with a computer is a great source of information for the computer, which can (i) employ this information to adapt the interaction and make it flawless, leading to adaptive affective interfaces, or (ii) use the emotional responses of a user to affective multimedia content to tag that content with affective labels. The second use is very valuable for creating affective profiles of users within real-world applications for user-centric multimedia retrieval. Affective responses of users can be collected either explicitly (i.e. users directly assess their own emotions through computer interfaces) or implicitly (i.e. via sensors that collect psycho-physiological signals such as facial expressions, vocal cues, neuro-physiological signals, gestures and body postures).
The major contributions of this thesis are as follows. (i) We present (and have made publicly available) the very first multimodal dataset that includes MEG brain signals, facial videos and some peripheral physiological signals of 30 users in response to two sets of affective dynamic stimuli. The dataset was recorded with cutting-edge lab equipment in a highly controlled lab environment, facilitating proper analysis of MEG brain responses for affective neuroscience research. (ii) We then present two other multimodal datasets that we recorded using off-the-shelf, market-available sensors for the purpose of analyzing users' affective responses to video clips and computer-generated music excerpts. The stimuli were selectively chosen to evoke certain target emotions. The first of these datasets also includes the Big Five personality traits of the individuals, and we show that it is possible to infer users' personality traits from their spontaneous reactions to affective videos. Both multimodal datasets were acquired via commercial sensors that are prone to noise artifacts, which leads to some noisy uni-modal recordings; we made both datasets publicly available together with quality assessments of each signal recording. Within the research on the second dataset we present a multimodal inference system that jointly considers the quality of the signals and achieves high tolerance to signal noise. We also show that peripheral physiological signals include patterns that are similar across users, and we develop a cross-user affect recognition system that is successfully validated via a leave-one-subject-out cross-validation scheme on the second dataset. (iii) We also present a crowdsourcing protocol for the collection of time-continuous affect annotations for videos, and collect a dataset of affective annotations for 12 videos with the contribution of over 1500 crowd-workers.
We introduce algorithms to extract high-quality time-continuous affect annotations for the 12 videos from the noisy crowd annotations. We observe that, for the prediction of time-continuous affect annotations from low-level multimedia content, higher regression accuracies are achieved when the crowdsourced annotations are employed as labels than when expert annotations are used. The study suggests that expensive expert annotations for the development of large affective video corpora could be replaced by crowdsourcing annotation techniques. Finally, we discuss opportunities for future applications of our research, and conclude with a summary of our contributions to the field of affective computing.
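The leave-one-subject-out cross-validation scheme mentioned above can be sketched in a few lines: each subject in turn is held out for testing while the model is trained on everyone else, so the evaluation measures cross-user generalization rather than within-user memorization. A minimal sketch, where the data layout is an illustrative assumption:

```python
def leave_one_subject_out(data):
    """data: subject id -> list of (features, label) samples.
    Yields (held_out_subject, train_samples, test_samples) splits so that
    no subject ever appears in both the training and the test set."""
    for held_out in data:
        train = [s for subj, samples in data.items()
                 if subj != held_out for s in samples]
        yield held_out, train, list(data[held_out])

# Toy data for three hypothetical subjects.
data = {"s1": [(1, "a"), (2, "b")], "s2": [(3, "a")], "s3": [(4, "b")]}
splits = list(leave_one_subject_out(data))
```

Each split would feed a classifier in the real pipeline; the scheme itself is what guarantees that reported accuracy reflects transfer to unseen users.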
18

Human Activity Analytics Based on Mobility and Social Media Data

Paraskevopoulos, Pavlos January 2017 (has links)
The development of social networks such as Twitter, Facebook and Google+ allows users to share their beliefs, feelings, or observations with their circles of friends. Based on these data, a range of applications and techniques has been developed, aiming to provide a better quality of life to users. Nevertheless, the quality of the results of geolocation-aware applications is significantly restricted by the tiny percentage of social media data that is geotagged (about 2% for Twitter). Hence, increasing this percentage is an important and challenging problem. Moreover, information extracted from social media data can be complemented by the analysis of mobile phone usage data, in order to provide further insights into human activity patterns. In this thesis, we present a novel method for analyzing and geolocalizing non-geotagged Twitter posts. The proposed method is the first to do so at the fine grain of city neighborhoods, while being both effective and time efficient. Our method is based on the extraction of representative keywords for each candidate location, as well as the analysis of tweet volume time series. We also describe a system built on top of our method, which geolocalizes tweets and allows users to visually examine the results and their evolution over time. Our system allows the user to get a better idea of how the activity of a particular location changes and which the most important keywords are, as well as to geolocalize individual tweets of interest. Moreover, we study the activity and mobility characteristics of the users that post geotagged tweets, comparing the mobility of users who attended an event with that of a random set of users. Interestingly, the results of this analysis indicate that a very small number of users (i.e., fewer than 35 users in this study) is able to represent the mobility patterns present in the entire dataset.
Finally, we study call activity and mobility patterns, clustering the observed behaviors that exhibit similar characteristics and characterizing the anomalous behaviors. We analyzed a Call Detail Record (CDR) dataset containing (aggregated) information on the calls among mobile phones. Employing density-based algorithms and statistical analysis, we developed a framework that identifies abnormal locations as well as abnormal time intervals. The results of this work can be used for the early identification of exceptional situations, for monitoring the effects of important events, in urban and transportation planning, and in other areas.
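The "representative keywords for each candidate location" idea can be sketched as a local-versus-global frequency ratio: a keyword represents a neighborhood when it is used there far more often than across the whole city. The areas and tokens below are invented, and the scoring is a simple stand-in for the method actually developed in the thesis:

```python
from collections import Counter

def representative_keywords(docs_by_area, top_k=2):
    """docs_by_area: area name -> list of tokenized posts.
    Scores each token by (local relative frequency) / (global relative
    frequency) and keeps the top_k highest-scoring tokens per area."""
    global_counts = Counter(
        tok for docs in docs_by_area.values() for doc in docs for tok in doc)
    total = sum(global_counts.values())
    result = {}
    for area, docs in docs_by_area.items():
        local = Counter(tok for doc in docs for tok in doc)
        n = sum(local.values())
        score = {tok: (local[tok] / n) / (global_counts[tok] / total)
                 for tok in local}
        result[area] = [tok for tok, _ in
                        sorted(score.items(), key=lambda kv: -kv[1])[:top_k]]
    return result

# Hypothetical posts from two neighborhoods.
docs = {"stadium_area": [["match", "goal", "crowd"], ["goal", "match"]],
        "center": [["coffee", "shop", "crowd"], ["coffee", "tram"]]}
keywords = representative_keywords(docs)
```

Tokens that appear everywhere (like "crowd") score near 1 and drop out, while area-specific tokens float to the top, which is the intuition behind using keywords to geolocalize non-geotagged posts.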
19

Video Scene Understanding: Semantic-based representation, Temporal Variation Modeling, Multi-Task Learning

Rostamzadeh, Negar January 2017 (has links)
One of the major research topics in computer vision is automatic video scene understanding, where the ultimate goal is to build artificial intelligence systems comparable with humans in understanding video content. Automatic video scene understanding covers many applications, including (i) semantic functional complex scene categorization, (ii) human body-pose estimation in videos, (iii) human fine-grained daily living action recognition, and (iv) video retrieval and genre recognition. In this thesis, we introduce computer vision and pattern analysis techniques that outperform the state of the art in the above-mentioned applications on some publicly available datasets. Our major research contributions towards automatic video scene understanding are (i) introducing an efficient approach to combine the low- and high-level information content of videos, (ii) modeling the temporal variation of frame-based descriptors in videos, and (iii) proposing a multi-task learning framework to leverage the huge amount of unlabeled videos. The first category covers a method for enriching visual words that contain local motion information but lack information about the cause of the motion. Our proposed approach embeds the source of a generated motion in video descriptors and hence injects semantic information into the visual words employed in the pattern analysis task. Our approach is validated on traffic scene analysis as well as human body-pose estimation applications. When an already-trained, off-the-shelf model is employed on an unseen dataset, its accuracy usually drops significantly. We present an approach that considers low-level cues, such as the optical flow in the foreground of a video, to make an already-trained, off-the-shelf pictorial deformable model for body-pose estimation work well on an unseen dataset. The second category covers methods that add temporal variation information to video descriptors.
Many video descriptors are based on global video representations, where frame-based descriptors are combined into a unified video descriptor without preserving much of the temporal information content. To include the temporal information content in video descriptors, we introduce a descriptor, namely the Hard and Soft Cluster Encoding, which captures how similar frames are distributed over a video's timespan. We show that our approach yields significant improvements on the human fine-grained daily living action recognition task. The third category includes a novel Multi-Task Clustering (MTC) approach to leverage the information of unlabeled videos, applied to human fine-grained daily living action recognition. People tend to perform similar activities in similar environments, so a proper clustering approach can discover patterns of fine-grained activities during learning. Rather than clustering the data of each individual separately, our proposed MTC approach captures more generic patterns across users in the training data and hence leads to remarkable recognition rates. Finally, we discuss opportunities for future applications of our research and conclude with a summary of our contributions to video understanding.
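One simple reading of a cluster-based frame encoding is the following: assign each frame descriptor to visual clusters either hard (nearest center only) or soft (a unit vote spread over all centers by distance). This is an illustrative reconstruction of the general idea, not the thesis's exact Hard and Soft Cluster Encoding formulation:

```python
import math

def cluster_encoding(frames, centers):
    """frames: per-frame feature vectors; centers: cluster centroids.
    Hard encoding: normalized histogram of nearest-center assignments.
    Soft encoding: each frame spreads one unit of weight over all centers,
    proportionally to exp(-distance). Returns (hard, soft) vectors."""
    k, n = len(centers), len(frames)
    hard, soft = [0.0] * k, [0.0] * k
    for f in frames:
        d = [math.dist(f, c) for c in centers]
        hard[d.index(min(d))] += 1.0
        w = [math.exp(-x) for x in d]
        total = sum(w)
        for i in range(k):
            soft[i] += w[i] / total
    return [h / n for h in hard], [s / n for s in soft]

# Toy 2-D frame descriptors around two invented cluster centers.
hard, soft = cluster_encoding([(0, 0), (0.1, 0), (5, 5)], [(0, 0), (5, 5)])
```

The soft variant degrades gracefully when a frame lies between clusters, which is one motivation for combining both views of the frame distribution.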
20

Speech Adaptation Modeling for Statistical Machine Translation

Ruiz, Nicholas January 2017 (has links)
Spoken language translation (SLT) sits at one of the most challenging intersections of speech and natural language processing. While machine translation (MT) has demonstrated its effectiveness on the translation of textual data, the translation of spoken language remains a challenge, largely due to the mismatch between the training conditions of MT and the noisy signal output by an automatic speech recognition (ASR) system. In the interchange between ASR and MT, errors propagated from noisy speech recognition outputs may become compounded, rendering the speech translation unintelligible. Additionally, aspects such as stylistic differences between written and spoken registers can lead to the generation of inadequate translations. This scenario is predominantly caused by a mismatch between the training conditions of ASR and MT: due to the lack of training data that couples speech audio with translated transcripts, MT systems in the SLT pipeline must rely predominantly on textual data that does not represent well the characteristics of spoken language. Likewise, independence assumptions between sentences result in ASR and MT systems that do not yield consistent outputs. In this thesis we develop techniques to overcome the mismatch between speech and textual data by improving the robustness of the MT system. Our work can be divided into three parts. First, we analyze the effects that the difference between spoken and written registers has on SLT quality, and introduce a data analysis methodology to measure the impact of ASR errors on translation quality. Secondly, we propose several approaches to improve the MT component's tolerance of noisy ASR outputs: by adapting its models based on the bilingual statistics of each sentence's neighboring context, and by introducing a process by which textual resources can be transformed into synthetic ASR data to use when training a speech-centric MT system.
In particular, we focus on the translation from spoken English to French and German -- the two parent languages of English -- and demonstrate that information about the types and frequency of ASR errors can improve the robustness of machine translation for SLT. Finally, we introduce and motivate several challenges in spoken language translation with neural machine translation models that are specific to their modeling architecture.
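The idea of turning clean text into synthetic ASR-like training data can be sketched as controlled corruption: substitute or delete words at a target error rate, drawing substitutions from a confusion table. In the thesis this is driven by the types and frequencies of real ASR errors; the table, rates and names below are invented for illustration:

```python
import random

def inject_asr_noise(tokens, wer=0.2, confusions=None, seed=0):
    """Corrupt a clean token sequence to mimic ASR output at roughly the
    given word error rate: each token is, with probability `wer`, either
    substituted (when a confusion is known) or deleted. The confusion
    table is an illustrative stand-in for statistics gathered from a
    real ASR system's error patterns."""
    rng = random.Random(seed)
    confusions = confusions or {}
    out = []
    for tok in tokens:
        if rng.random() >= wer:
            out.append(tok)                          # recognized correctly
        elif tok in confusions:
            out.append(rng.choice(confusions[tok]))  # substitution error
        # else: deletion error, the token is dropped
    return out
```

Training the MT component on text corrupted this way exposes it to the kinds of input it will actually see at test time, which is the robustness argument the abstract makes.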
