  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Scalable action detection in video collections / Détection scalable d'actions dans des collections vidéo

Stoian, Andrei 15 January 2016 (has links)
This thesis proposes new methods for indexing video collections with varied content, such as cultural archives, based on the human actions they contain. Human actions represent an important aspect of multimedia content, alongside sound, images and speech. The main technical question we address is: "How can human actions be detected and precisely localized in a large video collection, quickly, when the actions are given as a query through a few example clips?". The difficulty lies in satisfying two criteria at once: detection quality and search response time. In the first part, we adapt similarity measures to the computation-time and memory constraints of a fast action-detection system. We show that a sequence-alignment approach combined with feature selection answers queries quickly and with good result quality, and that adding a preliminary filtering stage improves performance further. In the second part, we accelerate this filtering stage to obtain search complexity that is sublinear in the size of the collection, building on locality-sensitive hashing and a new strategy for exploring the hash space adapted to the "query-by-detector" setting. We evaluate the proposed methods on a new large annotated video collection for the detection and localization of human actions, and show that our approaches produce good-quality results and scale well.
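The sublinear filtering stage described in this abstract rests on locality-sensitive hashing: similar feature vectors tend to fall into the same hash bucket, so a query need only score the candidates in a few buckets rather than scan the whole collection. A minimal random-hyperplane sketch of the idea (the descriptor dimension, bit count, and single-bucket probing are illustrative assumptions, not the thesis's exact scheme):

```python
import numpy as np

class CosineLSH:
    """Random-hyperplane LSH: vectors with high cosine similarity are
    likely to get the same sign pattern, hence the same bucket."""
    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def key(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def index(self, clip_id, v):
        self.buckets.setdefault(self.key(v), []).append(clip_id)

    def candidates(self, q):
        # Only the query's own bucket is probed here; exploring nearby
        # buckets as well would trade speed for recall.
        return self.buckets.get(self.key(q), [])

rng = np.random.default_rng(1)
lsh = CosineLSH(dim=64)
query = rng.normal(size=64)
lsh.index("clip-match", query.copy())       # a clip matching the query
for i in range(1000):                       # plus many unrelated clips
    lsh.index(f"clip-{i}", rng.normal(size=64))
cands = lsh.candidates(query)
print("clip-match" in cands, len(cands))    # the match survives filtering
```

In the thesis's setting the indexed vectors would be action descriptors from the video collection and the query a detector; the expensive scoring step then runs only on the short candidate list.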
22

Adaptive Error Control Schemes for Scalable Video Transmission over Wireless Internet

Lee, Chen-Wei 22 July 2008 (has links)
Driven by the rapid evolution of wireless networks and multimedia compression technologies in recent years, real-time multimedia transmission over wireless networks is the next step for contemporary communication systems. Lower bandwidth and higher loss rates make it harder to transmit multimedia content over wireless networks than over their wired counterparts. In addition, the delay constraints common to real-time multimedia transmission raise further challenges for the design of wireless communication systems. This dissertation proposes an adaptive unequal error protection (UEP) and packet-size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of the channel bit error rate on the quality of streamed scalable video. A video transmission scheme that combines adaptive packet-size assignment with unequal error protection to increase end-to-end video quality is proposed. Several distinct scalable video transmission schemes over burst-error channels are compared, and the simulation results reveal that the proposed schemes react to varying channel conditions with smaller and smoother quality degradation. Furthermore, to meet the real-time requirements of many video transmission applications, this dissertation also proposes packet-size assignment schemes of low time complexity. The test results show that although these schemes sacrifice a small amount of video quality compared to the optimized method, they adapt to a wide range of network conditions, deliver smoother quality, and greatly reduce computation time.
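The interplay between packet size and channel error rate that such an analytic model captures can be seen in toy form: longer packets amortize header overhead, but are more likely to contain an error. A hypothetical sketch under i.i.d. bit errors and a 40-byte header (the thesis's model targets burst-error channels, so these are illustrative assumptions only):

```python
def goodput(payload_bits, ber, header_bits=320):
    """Fraction of channel bits that carry payload and arrive intact,
    assuming a packet is lost if any of its bits is flipped (i.i.d. errors)."""
    total = payload_bits + header_bits
    p_ok = (1 - ber) ** total
    return (payload_bits / total) * p_ok

def best_payload(ber, choices=range(256, 12001, 256)):
    """Payload size maximizing goodput for a given bit error rate."""
    return max(choices, key=lambda n: goodput(n, ber))

# A noisier channel favours shorter packets.
clean, noisy = best_payload(1e-5), best_payload(1e-3)
print(clean > noisy)
```

This is the intuition behind adapting packet size to channel state: as the error rate rises, the optimum shifts toward shorter packets, and an adaptive scheme tracks that shift.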
23

The Expandable Display: an ad hoc grid of autonomous displays

MacDougall, James Scott 29 April 2014 (has links)
Networking multiple "smart" displays together is an affordable way of creating large high-resolution display systems. In this work I propose a new structure and data distribution paradigm for displays of this nature. I model my work on the peer-to-peer style of content distribution, as opposed to the traditional client-server model for this kind of system. In taking a peer-to-peer approach, I present a low-cost and scalable system without the inherent constraints imposed by the client-server model. I present a new class of applications specifically designed for this peer-to-peer style of display system, and provide an easy-to-use framework for developers to use in creating this type of system. / Graduate / 0984
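The grid structure can be made concrete with a small sketch: each autonomous display derives, from its own position in the ad hoc grid, the sub-rectangle of the shared scene it should render, with no server assigning regions. A hypothetical sketch (row-major display numbering and uniform tile sizes are assumptions for illustration):

```python
def viewport(display_id, grid_cols, tile_w, tile_h):
    """Sub-rectangle (x, y, w, h) of the shared scene that one autonomous
    display renders, given its position in the ad hoc grid."""
    row, col = divmod(display_id, grid_cols)
    return (col * tile_w, row * tile_h, tile_w, tile_h)

# A 3x2 grid of 1920x1080 displays forming one 5760x2160 surface:
# display 4 sits in the second row, second column.
print(viewport(4, grid_cols=3, tile_w=1920, tile_h=1080))
```

Because each peer computes its own viewport locally, displays can join or leave the grid without a central server redistributing the scene.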
24

On Causal Video Coding with Possible Loss of the First Encoded Frame

Eslamifar, Mahshad January 2013 (has links)
Multiple Description Coding (MDC) was first formulated by A. Gersho and H. Witsenhausen as a way to improve the robustness of telephony links to outages, and many studies have since been carried out in this area. Another application of MDC is the transmission of an image in different descriptions: if one of the descriptions fails because of a link outage during transmission, the image can still be reconstructed with some quality at the decoder side. In video coding, inter prediction is a way to reduce temporal redundancy. From an information-theoretic point of view, inter prediction can be modelled with Causal Video Coding (CVC). If we lose an I-frame because of a link outage, how can we reconstruct the corresponding P- or B-frames at the decoder? In this thesis we are interested in answering this question; we call this scenario causal video coding with possible loss of the first encoded frame and denote it by CVC-PL, where PL stands for possible loss. This thesis investigates CVC-PL for the first time. Although, owing to time constraints, we mostly study two-frame CVC-PL, we extend the problem to M-frame CVC-PL as well. To provide more insight into two-frame CVC-PL, we derive an outer bound to the achievable rate-distortion sets, showing that CVC-PL is a subset of the region combining CVC and peer-to-peer coding. In addition, we propose and prove a new achievable region, highlighting the fact that two-frame CVC-PL can be viewed as MDC followed by CVC. We then present the main theorem of this thesis: the minimum total rate of CVC-PL with two jointly Gaussian distributed sources X1 and X2 with normalized correlation coefficient r, for different distortion profiles (D1, D2, D3). Defining Dr = r^2(D1 - 1) + 1, we show that for small D3, i.e. D3 < Dr + D2 - 1, CVC-PL can be treated as CVC with two jointly Gaussian distributed sources; for large D3, i.e. D3 > DrD2/(Dr + D2 - DrD2), CVC-PL can be treated as two parallel peer-to-peer networks with distortion constraints D1 and D2; and for the remaining cases of D3, the minimum total rate is 0.5 log [(1 + θ)(D3 + θ) / ((Dr + θ)(D2 + θ))] + 0.5 log [Dr/(D1 D3)], where θ = (D3 - DrD2 + r[(1 - D1)(1 - D2)(D3 - Dr)(D3 - D2)]^0.5) / [Dr + D2 - (D3 + 1)]. We also determine the optimal coding scheme that achieves the minimum total rate. We conclude the thesis by comparing two-frame CVC-PL with a coding scheme in which both sources are available at the encoders, i.e. distributed source coding versus centralized source coding, and show that for small D2 or large D3 distributed source coding can perform as well as centralized source coding. Finally, we discuss future work and formulate the problem for M sources.
25

Scalable Embeddings for Kernel Clustering on MapReduce

Elgohary, Ahmed 14 February 2014 (has links)
There is an increasing demand from businesses and industries to make the best use of their data. Clustering is a powerful tool for discovering natural groupings in data. The k-means algorithm is the most commonly used data clustering method, having gained popularity for its effectiveness on various data sets and ease of implementation on different computing architectures. It assumes, however, that data are available in an attribute-value format, and that each data instance can be represented as a vector in a feature space where the algorithm can be applied. These assumptions are impractical for real data, and they hinder the use of complex data structures in real-world clustering applications. Kernel k-means is an effective data clustering method that extends the k-means algorithm to work on a similarity matrix over complex data structures. The kernel k-means algorithm is, however, computationally very complex, as it requires the complete kernel matrix to be calculated and stored. Further, the kernelized nature of the kernel k-means algorithm hinders the parallelization of its computations on modern infrastructures for distributed computing. This thesis defines a family of kernel-based low-dimensional embeddings that allows for scaling kernel k-means on MapReduce via an efficient and unified parallelization strategy. Then, three practical methods for low-dimensional embedding that adhere to our definition of the embedding family are proposed. Combining the proposed parallelization strategy with any of the three embedding methods constitutes a complete scalable and efficient MapReduce algorithm for kernel k-means. The efficiency and the scalability of the presented algorithms are demonstrated analytically and empirically.
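One member of such an embedding family can be sketched with a Nyström-style approximation (chosen here for illustration; it is not necessarily one of the thesis's three methods): a small landmark sample induces a low-dimensional feature map, and plain k-means on the embedded points then approximates kernel k-means without ever materializing the full kernel matrix.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Pairwise RBF kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_embed(X, n_landmarks=20, gamma=0.5, seed=0):
    """Low-dimensional embedding whose inner products approximate the
    RBF kernel, built from a small landmark sample (Nystrom method)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_landmarks, replace=False)
    L = X[idx]
    W = rbf(L, L, gamma)          # landmark-landmark kernel
    C = rbf(X, L, gamma)          # data-landmark kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)
    return C @ vecs @ np.diag(vals ** -0.5)   # embedding: C W^{-1/2}

def kmeans(Z, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on the embedded points.
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = ((Z[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(0)
    return labels

# Two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = kmeans(nystrom_embed(X), k=2)
print(labels[:5], labels[-5:])
```

The map phase of a MapReduce version would embed data partitions independently against the shared landmarks, which is what makes the subsequent k-means step straightforward to parallelize.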
27

Scalable Distributed Networked Haptic Cooperation

Rakhsha, Ramtin 24 April 2015 (has links)
In cooperative networked haptic systems, distributed distant users may decide to leave or join the cooperation while other users continue to manipulate the shared virtual object (SVO). Cooperative haptic systems that support interaction among a variable number of users, called scalable haptic cooperation systems herein, are the focus of this research. In this thesis, we develop distributed control strategies that provide stable and realistic force feedback to a varying number of users manipulating an SVO when connected across a computer network with imperfections (such as limited packet update rate, delay, jitter, and packet loss). We first propose the average position (AP) scheme to upper-bound the effective stiffness of the SVO coordination and thus enhance the stability of distributed multi-user haptic cooperation. For constant and small communication delays over power-domain communications, the effectiveness of the proposed AP paradigm is compared with the traditional proportional-derivative (PD) scheme via multi-rate stability and performance analyses supported by experimental verifications. Next, in a passivity-based approach, scalability is pursued by implementing the AP scheme over wave-domain communication channels along with passive simulation of the dynamics. By constructing a passive distributed SVO in closed loop with passive human users and haptic devices, we guarantee the stability of the distributed haptic cooperation system. However, energy leaks at joining/leaving instances may compromise the passivity of the SVO, so we examine the preservation of passivity of the proposed SVO scheme in such situations. A switching algorithm is then introduced to improve the performance of the cooperative haptic system. Experiments in which three users take turns leaving or joining the cooperation over a network with varying delay and packet loss support the theoretical results.
/ Graduate / 0771 / 0548 / 0537 / 0544 / rrakhsha@uvic.ca
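The average-position idea can be sketched in one dimension: the SVO is coupled to the average of the connected devices' positions, so the coordination stiffness it experiences stays bounded no matter how many users join. A minimal, hypothetical sketch (the gains, unit mass, and update loop are illustrative, not the thesis's controller):

```python
# Minimal 1-D sketch of average-position (AP) coordination for a shared
# virtual object (SVO) manipulated by a varying number of users.
K = 200.0   # coordination stiffness (illustrative value)
B = 2.0     # damping (illustrative value)
DT = 0.001  # simulation step, seconds

def svo_step(x, v, user_positions):
    """Advance the SVO one step, pulled toward the average device position."""
    if not user_positions:            # no users: the object just damps out
        f = -B * v
    else:
        avg = sum(user_positions) / len(user_positions)
        # The object sees a single spring to the average position, so the
        # effective stiffness stays K regardless of how many users connect.
        f = K * (avg - x) - B * v
    v += f * DT                       # unit mass, semi-implicit Euler
    x += v * DT
    return x, v

x, v = 0.0, 0.0
# Two users hold at 1.0 and 3.0; a third joins at 5.0 halfway through.
for step in range(20000):
    users = [1.0, 3.0] if step < 10000 else [1.0, 3.0, 5.0]
    x, v = svo_step(x, v, users)
print(round(x, 2))
```

Contrast this with summing one spring per user (a PD-style coordination), where the effective stiffness grows with the number of users and can destabilize the loop as users join.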
28

Representing short sequences in the context of a model organism genome

Lewis, Christopher Thomas 25 May 2009
<p>In the post-genomics era, the sheer volume of data is overwhelming without appropriate tools for data integration and analysis. Studying genomic sequences in the context of other related genomic sequences, i.e. comparative genomics, is a powerful technique enabling the identification of functionally interesting sequence regions, based on the principle that similar sequences tend to be either homologous or to provide similar functionality.</p> <p>The costs associated with full genome sequencing make it infeasible to sequence every genome of interest. Consequently, simple, smaller genomes are used as model organisms for more complex organisms, for instance Mouse/Human. An annotated model organism provides a source of annotation for transcribed sequences and other gene regions of the more complex organism based on sequence homology. For example, gene annotations from the model organism aid the interpretation of expression studies in more complex organisms.</p> <p>To assist with comparative genomics research in the Arabidopsis/Brassica (Thale-cress/Canola) model-crop pair, a web-based, graphical genome browser (BioViz) was developed to display short Brassica genomic sequences in the context of the Arabidopsis model organism genome. This involved the development of graphical representations to integrate data from multiple sources and tools, and a novel user interface to provide the user with a more interactive web-based browsing experience. While BioViz was developed for the Arabidopsis/Brassica comparative genomics context, it could be applied to comparative browsing relative to other reference genomes.</p> <p>BioViz proved to be a valuable research support tool for Brassica/Arabidopsis comparative genomics. It provided convenient access to the underlying Arabidopsis annotation and allowed the user to view specific EST sequences in the context of the Arabidopsis genome and other related EST sequences. In addition, the limits to which the project pushed the SVG specification proved influential in the SVG community: the work done for BioViz inspired the definition of an open-source project to define standards for SVG-based web applications and a standard framework for SVG-based widget sets.</p>
29

Error control for scalable image and video coding

Kuang, Tianbo 24 November 2003
Scalable image and video coding has been proposed for transmitting image and video signals over lossy networks, such as the Internet and wireless networks. However, scalability alone is not a complete solution, since there is a conflict between the unequal importance of the parts of a scalable bit stream and the agnostic nature of packet losses in the network. This thesis investigates three methods, within the joint source-channel coding framework, to combat the detrimental effects of random packet losses on scalable images and video: error resilience, error concealment, and unequal error protection. For the error resilient method, an optimal bit allocation algorithm is first proposed without considering the distortion caused by packet losses, and is then extended to account for them. For the error concealment method, a simple temporal error concealment mechanism is designed for video signals. For the unequal error protection method, the optimal protection allocation problem is formulated and solved. These methods are tested on the wavelet-based Set Partitioning in Hierarchical Trees (SPIHT) scalable image coder. Performance gains and losses in lossy and lossless environments are studied for both the original coder and the error-controlled coders. The results show performance advantages of all three methods over the original SPIHT coder. In particular, the unequal error protection and error concealment methods are promising for future Internet/wireless image and video transmission: the former performs well even under heavy packet loss (a PSNR of 22.00 dB was observed at nearly 60% packet loss), and the latter introduces no extra overhead.
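Unequal error protection can be illustrated with a toy greedy allocator (the layer importances, i.i.d. erasure model, and greedy rule are illustrative assumptions; the thesis formulates and solves the optimal allocation): each parity symbol in a fixed budget goes to the layer where it most reduces expected distortion, so the base layer of a scalable stream naturally ends up with the strongest protection.

```python
from math import comb

def loss_prob(n_parity, p=0.2, n_data=8):
    """Probability a layer is unrecoverable: more than n_parity of its
    n_data + n_parity packets are lost (ideal erasure code, i.i.d. losses)."""
    n = n_data + n_parity
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n_parity + 1, n + 1))

def greedy_uep(importance, budget):
    """Assign parity symbols one at a time to the layer with the largest
    drop in expected distortion (importance_i * loss_prob_i)."""
    parity = [0] * len(importance)
    for _ in range(budget):
        gains = [imp * (loss_prob(q) - loss_prob(q + 1))
                 for imp, q in zip(importance, parity)]
        parity[gains.index(max(gains))] += 1
    return parity

# The base layer matters far more than the enhancement layers.
alloc = greedy_uep(importance=[100.0, 10.0, 1.0], budget=12)
print(alloc)
```

With a SPIHT bit stream the "importance" weights would come from the distortion reduction each layer contributes, which decays rapidly along the embedded stream.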
30

Scalable Techniques for Anomaly Detection

Yadav, Sandeep 1985- 14 March 2013 (has links)
Computer networks are constantly being attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks exploit either network protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed for detecting and/or preventing these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis for detecting a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources, and by the observation that almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis that distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the Command and Control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection, proposing a system which uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing botnet detection. We also present an approach which computes the reputation of domains in a bipartite graph of hosts within a network and the domains accessed by them. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded through a small fraction of domains found in black and white lists. An application of this technique, on an HTTP-proxy dataset from a large enterprise, shows a high detection rate with low false positive rates.
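Two of the metrics named above, Kullback-Leibler divergence over character distributions and the Jaccard index over bigram sets, can be illustrated on made-up domain groups (the domain lists and thresholds are illustrative; the thesis applies these metrics to Tier-1 ISP DNS traces):

```python
from collections import Counter
from math import log2

def char_dist(domains):
    """Character unigram distribution over a group of domain labels."""
    counts = Counter(c for d in domains for c in d)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def kl_divergence(p, q, eps=1e-6):
    """D(p || q) in bits; characters unseen in q are smoothed with eps."""
    return sum(pv * log2(pv / q.get(c, eps)) for c, pv in p.items())

def bigrams(domains):
    return {d[i:i + 2] for d in domains for i in range(len(d) - 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

legit = ["google", "facebook", "wikipedia", "amazon", "twitter"]
fluxy = ["xq3vz9k", "p0walrq", "zz8nqy1", "kv7txc2", "m9qjzw4"]  # made-up DGA-like names

# Algorithmically generated domains diverge sharply from the legitimate
# character distribution and share almost no bigrams with it.
print(kl_divergence(char_dist(fluxy), char_dist(legit)))
print(jaccard(bigrams(fluxy), bigrams(legit)))
```

In practice the "groups" would be domains queried per client or per time window, and a group whose metrics exceed learned thresholds is flagged as likely domain-fluxing activity.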
