21

Scalable action detection in video collections / Détection scalable d'actions dans des collections vidéo

Stoian, Andrei 15 January 2016 (has links)
This thesis proposes new methods for indexing video collections with varied content, such as cultural archives, based on the human actions they contain. Human actions are an important aspect of multimedia content, alongside sound, images and speech. The main technical question we address is: "How can a human action be detected and precisely localized, quickly, in a large video collection, given a few example clips of that action?" The difficulty lies in satisfying two criteria at once: detection quality and search response time. In a first part, we adapt similarity measures to the computation-time and memory constraints required for a fast action-detection system. We show that a sequence-alignment approach coupled with feature selection answers queries quickly with good result quality, and that adding a preliminary filtering stage improves performance further. In a second part, we accelerate this filtering stage to reach a search complexity that is sub-linear in the size of the collection, building on similarity-sensitive hashing and a new strategy for exploring the hash space, adapted to query-by-detector. We evaluate the proposed methods on a new large annotated video dataset for human action detection and localization, and show that our approaches produce good-quality results and scale to large collections.
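The abstract mentions similarity-sensitive hashing for a sub-linear filtering stage. As a rough, generic illustration (not the thesis's actual query-by-detector scheme), here is a minimal random-hyperplane LSH filter in Python; all names and parameters are our own assumptions:

    import numpy as np

    # Minimal random-hyperplane LSH sketch (illustrative only; the thesis
    # adapts similarity-sensitive hashing to query-by-detector, which is
    # more involved than this generic nearest-neighbour filter).
    rng = np.random.default_rng(0)

    def make_hasher(dim, n_bits):
        """Draw random hyperplanes; each sign bit becomes one hash bit."""
        planes = rng.normal(size=(n_bits, dim))
        def h(x):
            return tuple((planes @ x) > 0)
        return h

    def build_index(descriptors, h):
        """Bucket video-segment descriptors by hash code."""
        index = {}
        for i, d in enumerate(descriptors):
            index.setdefault(h(d), []).append(i)
        return index

    def candidates(index, h, query):
        """Sub-linear filtering: only segments in the query's bucket survive."""
        return index.get(h(query), [])

    # Toy usage: 10 000 random 64-d segment descriptors, one query.
    descs = rng.normal(size=(10_000, 64))
    h = make_hasher(64, n_bits=16)
    index = build_index(descs, h)
    print(len(candidates(index, h, descs[42])))  # small candidate set, contains 42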
22

Scalable Perceptual Image Coding for Remote Sensing Systems

Oh, Han, Lalgudi, Hariharan G. October 2008 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / In this work, a scalable perceptual JPEG2000 encoder that exploits properties of the human visual system (HVS) is presented. The algorithm modifies the final three stages of a conventional JPEG2000 encoder. In the first stage, the quantization step size for each subband is chosen to be the inverse of the contrast sensitivity function (CSF). In bit-plane coding, two masking effects are considered during distortion calculation. In the final bitstream formation step, quality layers are formed corresponding to desired perceptual distortion thresholds. This modified encoder exhibits superior visual performance for remote sensing images compared to conventional JPEG2000 encoders. Additionally, it is completely JPEG2000 Part-1 compliant, and therefore can be decoded by any JPEG2000 decoder.
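As a hedged illustration of the first modified stage, the sketch below picks one quantization step per wavelet decomposition level as the inverse of a contrast sensitivity function. The Mannos-Sakrison CSF form and every constant here are assumptions for illustration, not values from the paper:

    import numpy as np

    # Illustrative sketch of CSF-driven quantization: each subband gets a
    # step size proportional to the inverse of the contrast sensitivity at
    # that subband's representative spatial frequency. Constants assumed.

    def csf(f):
        """Mannos-Sakrison contrast sensitivity at frequency f (cycles/deg)."""
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    def subband_steps(levels, base_step=1.0, nyquist=32.0):
        """One step size per DWT level; deeper levels carry lower frequencies."""
        steps = {}
        for lv in range(1, levels + 1):
            f = nyquist / (2 ** lv)          # representative frequency of level lv
            steps[lv] = base_step / csf(f)   # low sensitivity -> coarser quantizer
        return steps

    print(subband_steps(5))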
23

Scalable and distributed constrained low rank approximations

Kannan, Ramakrishnan 27 May 2016 (has links)
Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) << rank(A) and A ≈ WH. These low rank factors W and H can be constrained for meaningful physical interpretation, in which case the problem is referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, CLRA can be computationally more expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of the low rank factors W and H. In this thesis, I focus on scalable/distributed CLRA algorithms for constraints such as boundedness and non-negativity, targeting large real-world matrices from text, High Definition (HD) video, social networks and recommender systems. First, I present the Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the lower rank matrix. BMA is more challenging than NMF as it imposes bounds on the product WH rather than on each of the low rank factors W and H separately. For very large input matrices, we extend our BMA algorithm to Block BMA, which scales to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, we present a general distributed HPC-NMF framework that applies HPC techniques to communication-intensive NMF operations and is suitable for a broader class of NMF algorithms.
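To make A ≈ WH with non-negative factors concrete, here is a minimal single-machine NMF sketch using the classic Lee-Seung multiplicative updates; the thesis's actual contributions (BMA's bounds on the product WH, Block BMA, CANMF, HPC-NMF) are distributed algorithms not shown here:

    import numpy as np

    # Minimal NMF via Lee-Seung multiplicative updates: both factors stay
    # non-negative because each update multiplies by a non-negative ratio.
    def nmf(A, k, iters=200, eps=1e-9):
        m, n = A.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H, keep H >= 0
            W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W, keep W >= 0
        return W, H

    A = np.abs(np.random.default_rng(1).random((100, 80)))
    W, H = nmf(A, k=10)
    print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative error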
24

A toolbox for multi-objective optimisation of low carbon powertrain topologies

Mohan, Ganesh 05 1900 (has links)
Stricter regulations and evolving environmental concerns have been exerting ever-increasing pressure on the automotive industry to produce low carbon vehicles that reduce emissions. As a result, a growing number of alternative powertrain architectures have been released into the marketplace to address this need. However, with a myriad of possible alternative powertrain configurations, which is the most appropriate type for a given vehicle class and duty cycle? To that end, comparative analyses of powertrain configurations have been widely carried out in the literature, though such analyses only considered limited types of powertrain architectures at a time. Collating the results of these studies often produced discontinuous findings, making it difficult to draw conclusions when comparing multiple types of powertrains. The aim of this research is to propose a novel methodology that practitioners can use to improve comparative analyses of different types of powertrain architectures. Contrary to what has been done so far, the proposed methodology combines an optimisation algorithm with a Modular Powertrain Structure that allows multiple types of powertrain architectures to be optimised simultaneously. The contribution to science is twofold: presenting a methodology to simultaneously select a powertrain architecture and optimise its component sizes for a given cost function, and demonstrating the use of multi-objective optimisation for identifying trade-offs between cost functions through powertrain architecture selection. Based on the results, the sizing of the powertrain components was influenced by the power and energy requirements of the drive cycle, whereas the powertrain architecture selection was mainly driven by the autonomy range requirements, vehicle mass constraints, CO2 emissions, and powertrain costs. For multi-objective optimisation, the creation of a three-dimensional Pareto front showed multiple solution points for the different powertrain architectures, reflecting the methodology's ability to evaluate those architectures concurrently. A diverging trend was observed on this front as the autonomy range increased, driven primarily by variation in powertrain cost per kilometre. Additionally, there appeared to be a trade-off in electric powertrain sizing between CO2 emissions and lowest mass. This was more evident at lower autonomy ranges, where battery efficiency was a deciding factor for CO2 emissions. The results demonstrate the contribution of the proposed methodology in the area of multi-objective powertrain architecture optimisation, thus addressing the aims of this research.
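As a small illustration of the multi-objective side, the sketch below extracts the non-dominated (Pareto) subset from a set of candidate designs scored on three minimisation objectives (say CO2, cost, mass). It is our own toy filter, not the thesis's optimisation algorithm, which searches the design space rather than filtering a fixed point set:

    import numpy as np

    # Non-dominated (Pareto) filtering for minimisation objectives:
    # a design is kept if no other design is at least as good on every
    # objective and strictly better on at least one.
    def pareto_front(points):
        pts = np.asarray(points)
        keep = []
        for i, p in enumerate(pts):
            dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
            if not dominated:
                keep.append(i)
        return pts[keep]

    designs = np.random.default_rng(0).random((200, 3))  # [CO2, cost, mass]
    print(len(pareto_front(designs)), "non-dominated designs out of 200")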
25

NETWORKING SATELLITE GROUND STATIONS USING LABVIEW

Mauldin, Kendall October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / A multi-platform network design that is automated, bi-directional, low-bandwidth, and capable of store-and-forward operations has been developed to connect multiple satellite ground stations in real time. The LabVIEW programming language has been used to develop both the server and client aspects of this network. Future plans for this project include implementing a fully operational ground network using the described concepts and using this network for real-time satellite operations. This paper describes the design requirements, RF and ground-based network configuration, software implementation, and operational testing of the ground network.
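The paper's implementation is in LabVIEW, which cannot be reproduced here; purely to illustrate the store-and-forward client/server concept, below is a toy Python socket relay. Every detail (port, message format, flush-on-connect behaviour) is an assumption of ours, not taken from the paper:

    import socket, threading, time, queue

    # Toy store-and-forward relay: the server queues each incoming frame
    # and flushes the queue to the currently connected peer.
    backlog = queue.Queue()

    def server(port=5555):
        srv = socket.create_server(("127.0.0.1", port))
        while True:
            conn, _ = srv.accept()
            frame = conn.recv(4096)
            if frame:
                backlog.put(frame)            # store the incoming frame...
            while not backlog.empty():
                conn.sendall(backlog.get())   # ...and forward the queued backlog
            conn.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the listener a moment to start

    # A "ground station" client: deposit one frame, collect the backlog.
    with socket.create_connection(("127.0.0.1", 5555)) as c:
        c.sendall(b"station-A telemetry frame")
        print(c.recv(4096))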
26

Smart Radio Control System (For Flight Test Centers)

Rubio, Pedro, Alvarez, Jesus October 2015 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / Among the rich infrastructure of a Telemetry/Ground Station Center dwells the subset dedicated to radio communications. Radios are mainly used to communicate with the aircraft under test in order to give guidance and feedback from ground specialists. Sometimes, however, radios themselves become the subject of the test, requiring a full set of them with all their features and capabilities (Military Modes, HF ALE, SELCAL, etc.). Remote control (and audio routing) of these radios becomes critical as infrastructures scale to tens of radios distributed among different test centers separated by hundreds of kilometers. The addition of a remote touch user interface, MIL COMSEC and TRANSEC modes, and automatic audio routing, together with a maintenance-free requirement, makes the whole system far more difficult to manage. Airbus Defense & Space has developed a Smart Radio Control System that provides the following benefits: an intuitive touch UI; automatic audio routing; a distributed, network-based infrastructure; autonomous, service-free operation (no one other than the FTC is needed to operate it); heterogeneous radio support (any radio can be controlled by creating a plug & play library); and special modes support (COMSEC, TRANSEC, HF ALE, and SELCAL). Future additions will include, amongst others, VoIP integration and tablet use.
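As a sketch of the "plug & play library" idea, one common driver interface that each radio model implements might look as follows; the class and method names are our invention, not Airbus's API:

    from abc import ABC, abstractmethod

    # Hypothetical common driver interface: the control system only ever
    # sees RadioDriver, so adding a new radio model means writing one class.
    class RadioDriver(ABC):
        @abstractmethod
        def set_frequency(self, hz: float) -> None: ...
        @abstractmethod
        def set_mode(self, mode: str) -> None: ...   # e.g. "HF-ALE", "SELCAL"
        @abstractmethod
        def key_ptt(self, on: bool) -> None: ...

    class ExampleVhfRadio(RadioDriver):
        """Stub driver; a real one would speak the radio's serial/IP protocol."""
        def set_frequency(self, hz): print(f"tune {hz / 1e6:.3f} MHz")
        def set_mode(self, mode):    print(f"mode {mode}")
        def key_ptt(self, on):       print("PTT", "on" if on else "off")

    radios = {"ttc-1": ExampleVhfRadio()}
    radios["ttc-1"].set_frequency(121.5e6)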
27

Representing short sequences in the context of a model organism genome

Lewis, Christopher Thomas 25 May 2009
In the post-genomics era, the sheer volume of data is overwhelming without appropriate tools for data integration and analysis. Studying genomic sequences in the context of other related genomic sequences, i.e. comparative genomics, is a powerful technique for identifying functionally interesting sequence regions, based on the principle that similar sequences tend to be either homologous or provide similar functionality.

The costs associated with full genome sequencing make it infeasible to sequence every genome of interest. Consequently, simple, smaller genomes are used as model organisms for more complex organisms, for instance Mouse/Human. An annotated model organism provides a source of annotation for transcribed sequences and other gene regions of the more complex organism based on sequence homology. For example, gene annotations from the model organism aid the interpretation of expression studies in more complex organisms.

To assist with comparative genomics research in the Arabidopsis/Brassica (Thale-cress/Canola) model-crop pair, a web-based, graphical genome browser (BioViz) was developed to display short Brassica genomic sequences in the context of the Arabidopsis model organism genome. This involved the development of graphical representations to integrate data from multiple sources and tools, and a novel user interface to provide a more interactive web-based browsing experience. While BioViz was developed for the Arabidopsis/Brassica comparative genomics context, it could be applied to comparative browsing relative to other reference genomes.

BioViz proved to be a valuable research support tool for Brassica/Arabidopsis comparative genomics. It provided convenient access to the underlying Arabidopsis annotation and allowed the user to view specific EST sequences in the context of the Arabidopsis genome and other related EST sequences. In addition, the limits to which the project pushed the SVG specification proved influential in the SVG community. The work done for BioViz inspired the definition of an open-source project to define standards for SVG-based web applications and a standard framework for SVG-based widget sets.
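To give a flavour of the kind of SVG rendering such a browser performs, here is a hypothetical snippet that draws genomic features as an SVG track; the coordinates, labels, and scale are made up, and BioViz itself is a far richer application:

    # Hypothetical SVG feature track: one <rect> per feature, with a
    # <title> child so hovering shows the feature name in a browser.
    def features_to_svg(features, scale=0.01, height=40):
        """features: [(start_bp, end_bp, label)] -> SVG string, 1 px per 100 bp."""
        rects = []
        for start, end, label in features:
            x, w = start * scale, max(1, (end - start) * scale)
            rects.append(
                f'<rect x="{x:.1f}" y="10" width="{w:.1f}" height="12">'
                f'<title>{label}</title></rect>'
            )
        body = "\n  ".join(rects)
        return (f'<svg xmlns="http://www.w3.org/2000/svg" height="{height}">'
                f'\n  {body}\n</svg>')

    print(features_to_svg([(1200, 4800, "AT1G01010"), (5600, 9100, "EST_0042")]))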
28

Error control for scalable image and video coding

Kuang, Tianbo 24 November 2003
Scalable image and video coding has been proposed for transmitting image and video signals over lossy networks, such as the Internet and wireless networks. However, scalability alone is not a complete solution, since there is a conflict between the unequal importance of the scalable bit stream and the agnostic nature of packet losses in the network. This thesis investigates three methods of combating the detrimental effects of random packet losses on scalable images and video: the error resilient method, the error concealment method, and the unequal error protection method within the joint source-channel coding framework. For the error resilient method, an optimal bit allocation algorithm is first proposed without considering the distortion caused by packet losses; the allocation algorithm is then extended to accommodate packet losses. For the error concealment method, a simple temporal error concealment mechanism is designed for video signals. For the unequal error protection method, the optimal protection allocation problem is formulated and solved. These methods are tested on the wavelet-based Set Partitioning in Hierarchical Trees (SPIHT) scalable image coder. Performance gains and losses in lossy and lossless environments are studied for both the original coder and the error-controlled coders. The results show performance advantages of all three methods over the original SPIHT coder. In particular, the unequal error protection method and the error concealment method are promising for future Internet/wireless image and video transmission: the former performs well even under heavy packet loss (a PSNR of 22.00 dB was observed at nearly 60% packet loss) and the latter does not introduce any extra overhead.
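As a toy illustration of the unequal error protection idea (stronger FEC on the more important early segments of an embedded stream), consider the brute-force allocation sketch below. The survival model, per-segment gains, and rate budget are invented for illustration; the thesis formulates and solves the allocation problem properly:

    import numpy as np

    # Toy UEP allocation for an embedded (scalable) stream: a lost segment
    # truncates decoding, so earlier segments deserve stronger protection.
    loss = 0.2                                  # channel packet-loss rate
    gain = np.array([12.0, 6.0, 3.0, 1.5])      # dB contributed by each segment
    codes = {0: 0.0, 1: 0.1, 2: 0.25}           # protection level -> rate overhead
    survive = {0: 1 - loss, 1: 1 - loss**2, 2: 1 - loss**3}  # toy FEC model

    def expected_quality(levels):
        total, alive = 0.0, 1.0
        for g, lv in zip(gain, levels):
            alive *= survive[lv]                # prefix must survive to decode
            total += alive * g
        return total

    budget = 0.5
    best = max(
        (ls for ls in np.ndindex(3, 3, 3, 3)
         if sum(codes[l] for l in ls) <= budget),
        key=expected_quality,
    )
    print("protection per segment:", best, "->",
          round(expected_quality(best), 2), "dB expected")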
29

Scalable Techniques for Anomaly Detection

Yadav, Sandeep 1985- 14 March 2013 (has links)
Computer networks are constantly being attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks exploit either network protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed for detection and/or prevention of these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis for detecting a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources; additionally, almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis that distinguish anomalous DNS traffic patterns and in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection. Alternatively, we propose a system that uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing botnet detection. We also present an approach that computes the reputation of domains in a bipartite graph of the hosts within a network and the domains accessed by them. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded with a small fraction of domains found in black and white lists. An application of this technique on an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
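To illustrate the entropy-based detection idea on groups of domain names, here is a small sketch computing the Kullback-Leibler divergence between the character distribution of a suspect domain group and a benign baseline; the sample domains and the reading of "large means suspicious" are illustrative only, not the dissertation's tuned detector:

    import math
    from collections import Counter

    # Domain-fluxing botnets use algorithmically generated names, so the
    # character distribution of a group of queried domains tends to
    # diverge from that of benign domains.
    def char_dist(domains):
        counts = Counter(c for d in domains for c in d if c.isalnum())
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}

    def kl_divergence(p, q, floor=1e-6):
        """D(p || q) over the union of characters, with smoothing."""
        keys = set(p) | set(q)
        return sum(p.get(c, floor) * math.log(p.get(c, floor) / q.get(c, floor))
                   for c in keys)

    benign = ["google", "wikipedia", "university", "weather", "library"]
    fluxy  = ["xk2q9zr", "q8wnv3t", "zzr4kqp", "m9x2vqt", "k3qz8rn"]

    baseline = char_dist(benign)
    print(kl_divergence(char_dist(fluxy), baseline))   # large -> suspicious
    print(kl_divergence(char_dist(benign), baseline))  # ~0 for the baseline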
