  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Error control for scalable image and video coding

Kuang, Tianbo 24 November 2003 (has links)
Scalable image and video coding has been proposed for transmitting image and video signals over lossy networks, such as the Internet and wireless networks. However, scalability alone is not a complete solution, since there is a conflict between the unequal importance of the parts of a scalable bit stream and the indiscriminate nature of packet losses in the network. This thesis investigates three methods to combat the detrimental effects of random packet losses on scalable images and video within the joint source-channel coding framework: an error resilient method, an error concealment method, and an unequal error protection method. For the error resilient method, an optimal bit allocation algorithm is first proposed without considering the distortion caused by packet losses; the allocation algorithm is then extended to accommodate packet losses. For the error concealment method, a simple temporal error concealment mechanism is designed for video signals. For the unequal error protection method, the optimal protection allocation problem is formulated and solved. These methods are tested on the wavelet-based Set Partitioning in Hierarchical Trees (SPIHT) scalable image coder. Performance gains and losses in lossy and lossless environments are studied for both the original coder and the error-controlled coders. The results show performance advantages of all three methods over the original SPIHT coder. In particular, the unequal error protection method and the error concealment method are promising for future Internet/wireless image and video transmission, because the former performs well even under heavy packet loss (a PSNR of 22.00 dB was observed at nearly 60% packet loss) and the latter introduces no extra overhead.
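The unequal error protection idea can be illustrated with a toy allocation loop (the binomial erasure model, the greedy strategy, and all names here are illustrative assumptions, not the thesis's actual formulation): parity packets are assigned one at a time to whichever layer of the embedded bitstream yields the largest gain in expected distortion reduction, so more important layers naturally receive stronger protection.

```python
import math

def loss_prob(n_data, n_parity, p):
    """Probability that a layer is unrecoverable under an erasure-code
    model: more than n_parity of its n_data + n_parity packets are lost."""
    n = n_data + n_parity
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n_parity + 1, n + 1))

def allocate_parity(layers, budget, p):
    """Greedy unequal error protection: hand out parity packets one at a
    time to the layer whose extra protection most increases the expected
    distortion reduction.  layers = [(data_packets, distortion_gain), ...],
    ordered most- to least-important (embedded stream: layer i is useful
    only if layers 0..i all survive)."""
    parity = [0] * len(layers)

    def expected_gain(par):
        total, survive = 0.0, 1.0
        for (nd, gain), npar in zip(layers, par):
            survive *= 1.0 - loss_prob(nd, npar, p)   # whole prefix must survive
            total += survive * gain
        return total

    for _ in range(budget):
        best = max(range(len(layers)),
                   key=lambda i: expected_gain(parity[:i] + [parity[i] + 1]
                                               + parity[i + 1:]))
        parity[best] += 1
    return parity
```

With a dominant base layer, e.g. `allocate_parity([(4, 30.0), (4, 6.0), (4, 2.0)], 6, 0.2)`, the base layer receives at least as much parity as the least important layer, mirroring the unequal protection principle.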
32

Representing short sequences in the context of a model organism genome

Lewis, Christopher Thomas 25 May 2009 (has links)
<p>In the post-genomics era, the sheer volume of data is overwhelming without appropriate tools for data integration and analysis. Studying genomic sequences in the context of other related genomic sequences, i.e. comparative genomics, is a powerful technique for identifying functionally interesting sequence regions, based on the principle that similar sequences tend to be homologous or to provide similar functionality.</p> <p>The costs associated with full genome sequencing make it infeasible to sequence every genome of interest. Consequently, simple, smaller genomes are used as model organisms for more complex organisms, for instance Mouse/Human. An annotated model organism provides a source of annotation for transcribed sequences and other gene regions of the more complex organism based on sequence homology. For example, gene annotations from the model organism aid the interpretation of expression studies in more complex organisms.</p> <p>To assist comparative genomics research in the Arabidopsis/Brassica (Thale-cress/Canola) model-crop pair, a web-based, graphical genome browser (BioViz) was developed to display short Brassica genomic sequences in the context of the Arabidopsis model organism genome. This involved developing graphical representations to integrate data from multiple sources and tools, and a novel user interface to give the user a more interactive web-based browsing experience. While BioViz was developed for the Arabidopsis/Brassica comparative genomics context, it could be applied to comparative browsing relative to other reference genomes.</p> <p>BioViz proved to be a valuable research support tool for Brassica/Arabidopsis comparative genomics. It provided convenient access to the underlying Arabidopsis annotation and allowed the user to view specific EST sequences in the context of the Arabidopsis genome and other related EST sequences. In addition, the limits to which the project pushed the SVG specification proved influential in the SVG community. The work done for BioViz inspired the definition of an open-source project to define standards for SVG-based web applications and a standard framework for SVG-based widget sets.</p>
33

A NETWORK PATH ADVISING SERVICE

Wu, Xiongqi 01 January 2015 (has links)
A common feature of emerging future Internet architectures is the ability for applications to select the path, or paths, their packets take between a source and destination. Unlike the current Internet architecture, where routing protocols find a single (best) path between a source and destination, future Internet routing protocols will present applications with a set of paths and allow them to select the most appropriate one. Although this enables applications to be actively involved in the selection of the paths their packets travel, the huge number of potential paths, and the need to know the current network conditions of each proposed path, will make it virtually impossible for applications to select the best set of paths, or even just the best path. To tackle this problem, we introduce a new Network Path Advising Service (NPAS) that helps future applications choose network paths. Given a set of possible paths, the NPAS service helps applications select appropriate paths based on both recent path measurements and end-to-end feedback collected from other applications. We describe the NPAS service abstraction, API calls, and a distributed architecture that achieves scalability by determining the most important things to monitor based on actual usage. By analyzing existing traffic patterns, we demonstrate that it is feasible for NPAS to monitor only a few nodes and links and yet be able to offer advice about the most important paths used by a high percentage of traffic. Finally, we describe a prototype implementation of the NPAS components as well as a simulation model used to evaluate the NPAS architecture.
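The service abstraction can be sketched as an advisor that folds application-reported end-to-end feedback into per-path scores and ranks candidate paths (the class name, methods, and EWMA weighting below are hypothetical illustrations, not the actual NPAS API):

```python
class PathAdvisor:
    """Toy path-advising service: keeps an exponentially weighted moving
    average (EWMA) of application-reported end-to-end throughput per path
    and ranks candidate paths best-first.  Unknown paths rank last."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest report
        self.scores = {}        # path id -> smoothed throughput score

    def report_feedback(self, path_id, throughput_mbps):
        """Fold one end-to-end measurement into the path's score."""
        old = self.scores.get(path_id, throughput_mbps)
        self.scores[path_id] = (1 - self.alpha) * old + self.alpha * throughput_mbps

    def advise(self, candidate_paths):
        """Order the given candidates from best to worst known score."""
        return sorted(candidate_paths,
                      key=lambda p: self.scores.get(p, 0.0),
                      reverse=True)
```

Sharing scores across applications is what lets the service advise on paths a given application has never tried, at the cost of trusting aggregated feedback.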
34

Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

Nassar, Joanna M. 10 1900 (has links)
Demand for wearable electronics is expected to at least triple by 2020, embracing all sorts of Internet of Everything (IoE) applications, such as activity tracking, environmental mapping, and advanced healthcare monitoring, with the aim of enhancing quality of life. This calls for the wide availability of free-form multifunctional sensory systems (i.e., “skin” platforms) that can conform to a variety of uneven surfaces, providing the intimate contact and adhesion with the skin necessary for localized and enhanced sensing capabilities. However, current wearable devices are bulky, rigid, and not convenient for continuous wear in everyday life, hindering their implementation in advanced and unexplored applications beyond fitness tracking. Moreover, they retail at high prices, which puts them out of reach of at least half of the world’s population. Hence, form factor (physical flexibility and/or stretchability), cost, and accessibility become the key drivers for further development. To support this need for affordable and adaptive wearables, and to drive academic developments in “skin” platforms toward practical and functional consumer devices, compatibility and integration into a high-performance yet low-power system is crucial to sustain the high data rates and large data volumes driven by IoE. Likewise, scalability becomes essential for batch fabrication and precision. Therefore, I propose to develop three distinct but necessary “skin” platforms using scalable and cost-effective manufacturing techniques. My first approach is the fabrication of a CMOS-compatible “silicon skin”, crucial for any truly autonomous and conformal wearable device, where monolithic integration between a heterogeneous material-based sensory platform and system components is a challenge yet to be addressed. 
My second approach presents an even more affordable and accessible “paper skin”, using recyclable and off-the-shelf materials, targeting environmental mapping through 3D stacked arrays, or advanced personalized healthcare through the developed “paper watch” prototype. My last approach targets a harsh-environment waterproof “marine skin” tagging system, using marine animals as allies to study the marine ecosystem. The “skin” platforms offer real-time and simultaneous monitoring while preserving high performance and robust behavior under various bending conditions, maintaining system compatibility and using cost-effective and scalable approaches for a tangible realization of a truly flexible wearable device.
35

A toolbox for multi-objective optimisation of low carbon powertrain topologies

Mohan, Ganesh January 2016 (has links)
Stricter regulations and evolving environmental concerns have been exerting ever-increasing pressure on the automotive industry to produce low carbon vehicles that reduce emissions. As a result, increasing numbers of alternative powertrain architectures have been released into the marketplace to address this need. However, with a myriad of possible alternative powertrain configurations, which is the most appropriate type for a given vehicle class and duty cycle? Comparative analyses of powertrain configurations have been widely carried out in the literature, though such analyses only considered a limited number of powertrain architecture types at a time. Collating the results from this literature often produced findings that were discontinuous, which made it difficult to draw conclusions when comparing multiple types of powertrains. The aim of this research is to propose a novel methodology that practitioners can use to improve comparative analyses of different types of powertrain architectures. In contrast to previous work, the proposed methodology combines an optimisation algorithm with a Modular Powertrain Structure that allows multiple types of powertrain architecture to be optimised simultaneously. The contribution to science is twofold: presenting a methodology to simultaneously select a powertrain architecture and optimise its component sizes for a given cost function, and demonstrating the use of multi-objective optimisation for identifying trade-offs between cost functions through powertrain architecture selection. Based on the results, the sizing of the powertrain components was influenced by the power and energy requirements of the drive cycle, whereas the powertrain architecture selection was mainly driven by the autonomy range requirements, vehicle mass constraints, CO2 emissions, and powertrain costs. 
For multi-objective optimisation, the creation of a three-dimensional Pareto front showed multiple solution points for the different powertrain architectures, which stemmed from the methodology's ability to evaluate those architectures concurrently. A diverging trend was observed on this front as the autonomy range increased, driven primarily by variation in powertrain cost per kilometre. Additionally, there appeared to be a trade-off in electric powertrain sizing between CO2 emissions and lowest mass. This was more evident at lower autonomy ranges, where battery efficiency was a deciding factor for CO2 emissions. The results demonstrate the contribution of the proposed methodology in the area of multi-objective powertrain architecture optimisation, thus addressing the aims of this research.
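The trade-off analysis rests on computing a Pareto (non-dominated) front over the objective values of candidate configurations. A minimal sketch, assuming each solution is simply a tuple of objectives to be minimised (the objective names in the comment are illustrative):

```python
def pareto_front(solutions):
    """Return the non-dominated subset, all objectives to be minimised.
    A solution is dominated if some other solution is no worse in every
    objective and differs somewhere.  Each solution is a tuple such as
    (CO2 g/km, cost per km, mass kg) for one powertrain configuration."""
    front = []
    for s in solutions:
        dominated = any(other != s and all(o <= v for o, v in zip(other, s))
                        for other in solutions)
        if not dominated:
            front.append(s)
    return front
```

This brute-force check is O(n²) in the number of candidate solutions; evolutionary multi-objective optimisers maintain such a front incrementally across generations.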
36

Scalable and distributed constrained low rank approximations

Kannan, Ramakrishnan 27 May 2016 (has links)
Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) ≪ rank(A) and A ≈ WH. These low rank factors W and H can be constrained for meaningful physical interpretation, which is referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, CLRA can be more computationally expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of the low rank factors W and H. In this thesis, I focus on scalable, distributed CLRA algorithms for constraints such as boundedness and non-negativity, applied to large real-world matrices from text, High Definition (HD) video, social networks, and recommender systems. First, I begin with Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the lower rank matrix. BMA is more challenging than NMF as it imposes bounds on the product WH rather than on each of the low rank factors W and H. For very large input matrices, we extend our BMA algorithm to Block BMA, which can scale to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, we present a general distributed HPC-NMF framework that uses HPC techniques for communication-intensive NMF operations and is suitable for a broader class of NMF algorithms.
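For readers unfamiliar with NMF, the baseline factorization mentioned above can be sketched with the classic multiplicative updates (this is the standard Lee–Seung algorithm, not the thesis's BMA or distributed variants, and it assumes NumPy is available):

```python
import numpy as np

def nmf(A, k, iters=300, eps=1e-9, seed=0):
    """Classic multiplicative-update NMF (Lee & Seung): find non-negative
    factors W (m x k) and H (k x n) with A ~= W @ H.  The updates keep
    W and H non-negative because they only multiply by non-negative ratios."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + 0.1      # strictly positive start
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

The distributed algorithms in the thesis must partition exactly these matrix products across machines, which is why the cost of communicating W and H dominates at scale.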
37

NETWORKING SATELLITE GROUND STATIONS USING LABVIEW

Mauldin, Kendall 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / An automated, bi-directional, low-bandwidth, multi-platform network design capable of store-and-forward operation has been developed to connect multiple satellite ground stations together in real time. The LabVIEW programming language has been used to develop both the server and client aspects of this network. Future plans for this project include implementing a fully operational ground network using the described concepts and using this network for real-time satellite operations. This paper describes the design requirements, RF and ground-based network configuration, software implementation, and operational testing of the ground network.
39

Scalable video coding over TCP

Sanhueza Gutiérrez, Andrés Edgardo January 2015 (has links)
Ingeniero Civil Eléctrico / Nowadays, the scale of multimedia content grows faster than the development of the technologies needed to distribute it properly over the network. New protocols are therefore needed to act as a bridge between the two, so that the content can be exploited fully even though the technology for distributing it is not yet adequate. Among the latest video compression technologies is Scalable Video Coding (SVC), which encodes several quality levels into a single bitstream capable of reproducing any of the embedded qualities depending on whether or not all the information is received. For streaming connections, where fluidity and fidelity are required at both ends, SVC has great potential for discarding a minimum of information in order to favour the fluidity of the transmission. The software used to create and manipulate these SVC bitstreams is the Joint Scalable Video Model (JSVM). In this context, a deadline algorithm is developed in Matlab that omits SVC video information according to how critical the transmission scenario is. The user's perception of fluidity is taken as the key measure, so a rate of 30 fps is always prioritised at the cost of a minimal loss of quality. The algorithm omits information according to how far the transmission is from this 30 fps deadline: when far from it, only unimportant information is omitted; when very close to it, more important information is omitted as well. The results are contrasted with plain TCP and evaluated for different RTT values, fully meeting the objective for RTTs below 150 ms, which yield differences of up to 20 s in favour of the deadline algorithm at the end of the transmission. 
This improvement in arrival time discards no essential information and only slightly degrades video quality in order to maintain the 30 fps rate. In contrast, in very adverse scenarios with RTTs of 300 ms, the omissions are large and compromise whole frames, along with widespread degradation of the video and the appearance of artifacts. The proposal therefore meets its objectives in moderately adverse environments. All simulations used a 352x288 video with motion, 150 frames long.
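The deadline-driven omission policy can be caricatured in a few lines (the thresholds and the linear keep-fraction rule are invented for illustration; the thesis's Matlab algorithm is more elaborate):

```python
def layers_to_send(layers, slack_ms, frame_budget_ms=33.3):
    """Deadline-driven omission: the closer the sender is to missing the
    30 fps frame deadline, the more layers it drops, least-important
    first.  layers are ordered most- to least-important (base first)."""
    if slack_ms >= frame_budget_ms:          # comfortably ahead: send all
        keep = len(layers)
    elif slack_ms > 0:                       # tight: keep a proportional prefix
        keep = max(1, int(len(layers) * slack_ms / frame_budget_ms))
    else:                                    # past the deadline: base layer only
        keep = 1
    return layers[:keep]
```

Because SVC layers form an importance-ordered hierarchy, truncating the list from the tail always leaves a decodable stream, which is what keeps the frame rate intact at the cost of quality.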
40

Scalable Video Compression

Soler, Luciano January 2006 (has links)
Video coding is a problem whose solution should be designed according to the needs of the intended application. This work presents a method of scalable video compression that improves on current compression formats. Scalability is the capacity to extract, from the full bitstream, efficient subsets of bits that can be decoded into images or videos that vary (scale) along a given characteristic of the image or video. The number of subsets that can be extracted from the full bitstream defines the granularity of the scalability, which may range from very fine to coarse steps. Most scalable video coding techniques use a base layer, which must always be decoded, and one or more higher layers, which allow improvements in quality (SNR), frame/sampling rate, or spatial resolution. The MPEG-4 Fine Granularity Scalable (FGS) video coding scheme is one of the most promising techniques, because it can adapt itself to the characteristics of channels (such as the Internet) or terminals that present unpredictable or unknown behavior, such as maximum access speed, bandwidth variations, channel errors, etc. Although the MPEG-4 FGS standard is a feasible solution for video streaming applications, it shows a significant loss of performance compared with non-scalable video coding, in particular the rather efficient Advanced Simple Profile defined in the MPEG-4 Visual standard. This work studies the new video coding tools introduced by the recent H.264/AVC standard and MPEG-4 Visual, developing a model that integrates the granular scalability present in MPEG-4 with the coding advances present in H.264/AVC. This scalability structure reduces the cost, in terms of coding efficiency, of scalable coding. 
The results presented in each chapter show the effectiveness of the proposed method, as well as ideas for improvements in future work.
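The extraction property that fine-granularity scalability provides can be sketched as prefix truncation over ordered layers (the layer names and byte-budget interface are illustrative, not the MPEG-4 FGS syntax):

```python
def extract(layers, bit_budget):
    """Extract a decodable substream from an embedded bitstream: keep whole
    layers (base first) while they fit, then truncate the next FGS layer,
    which by design is decodable at any prefix."""
    out, used = [], 0
    for name, bits in layers:
        if used + len(bits) <= bit_budget:
            out.append((name, bits))
            used += len(bits)
        else:
            out.append((name, bits[:bit_budget - used]))  # partial FGS layer
            break
    return out
```

Every prefix of the budget axis yields a valid stream, which is exactly what lets a server match an unknown channel rate without re-encoding.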
