221

Skalierbares und flexibles Live-Video Streaming mit der Media Internet Streaming Toolbox / Scalable and Flexible Live Video Streaming with the Media Internet Streaming Toolbox

Pranke, Nico 17 November 2009 (has links)
This thesis addresses the development and application of various concepts and algorithms for scalable live video streaming, and their implementation in the Media Internet Streaming Toolbox. The toolbox provides an extensible, platform-independent infrastructure for building all parts of a live streaming system, from video acquisition through media processing and encoding to delivery. The focus is on the flexible description of media processing and stream construction, the generation of client-specific stream formats with differing quality of service for the largest possible number of clients, and their distribution over the Internet. An integrated graph-based concept is designed that combines component encoding stream construction, the use of compresslets, and automated flow-graph construction. The parts of the flow graph relevant to stream construction are executed, decoupled from the rest, once per group of clients with identical state. This yields a maximum computational load that is independent of the number of clients, which is both shown theoretically and confirmed by concrete measurements. As examples of the toolbox's use, two wavelet-based stream formats, among others, are developed, integrated, and compared with respect to coding efficiency and scalability.
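The load-decoupling claim translates directly into a grouping pattern: clients requesting identical stream state share one execution of the stream-construction subgraph. A minimal sketch, in which `stream_state`, `encode_for_state` and `send` are hypothetical names rather than the toolbox's API:

```python
from collections import defaultdict

def construct_streams(clients, encode_for_state):
    """Run stream construction once per group of clients with identical
    state, so encoding cost grows with the number of distinct stream
    formats rather than with the number of clients."""
    groups = defaultdict(list)
    for client in clients:
        # 'stream_state' stands for everything that determines the encoded
        # output for a client: format, resolution, quality layers, ...
        groups[client.stream_state].append(client)

    for state, members in groups.items():
        packet = encode_for_state(state)   # executed once per group
        for client in members:
            client.send(packet)            # per-client work is only I/O
```

Since the number of distinct states is bounded by the set of offered stream formats, the encoding load stays constant as clients are added, matching the measured behaviour described above.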
222

An open source HPC-enabled model of cardiac defibrillation of the human heart

Bernabeu Llinares, Miguel Oscar January 2011 (has links)
Sudden cardiac death following cardiac arrest is a major killer in the industrialised world. The leading cause of sudden cardiac death is disturbance of the normal electrical activation of cardiac tissue, known as cardiac arrhythmia, which severely compromises the ability of the heart to meet the body's demand for oxygen. Ventricular fibrillation (VF) is the most deadly form of cardiac arrhythmia, and electrical defibrillation through the application of strong electric shocks to the heart is the only effective therapy against it. Over the past decades, a large body of research has dealt with the mechanisms underpinning the success or failure of defibrillation shocks. The main mechanism of shock failure involves shocks that terminate VF but leave behind the electrical substrate for new VF episodes to follow rapidly (i.e. shock-induced arrhythmogenesis). A large number of models have been developed for the in silico study of shock-induced arrhythmogenesis, ranging from single-cell models to three-dimensional ventricular models of small mammalian species. However, the results of these studies have not been extrapolated to human models of ventricular electrophysiology, mainly because of the large computational requirements associated with solving the bidomain equations of cardiac electrophysiology over large, anatomically accurate geometrical models that include representations of fibre orientation and transmembrane kinetics. In this thesis we develop simulation technology for the study of cardiac defibrillation in the human heart within the open source simulation environment Chaste. The advances include novel computational and numerical techniques for solving the bidomain equations on large-scale high-performance computing resources. More specifically, we consider effective domain decomposition, new numerical techniques for reducing communication in Chaste's finite element method (FEM) solver, and mesh-independent preconditioners for the linear system arising from the FEM discretisation of the bidomain equations. These developments have brought Chaste to the level of performance and functionality required to run bidomain simulations on large three-dimensional cardiac geometries of tens of millions of nodes, with accurate representation of fibre orientation and membrane kinetics. These advances have enabled the first in silico study of shock-induced arrhythmogenesis in the human heart, thereby bridging an important gap in cardiac defibrillation research.
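For reference, the bidomain model referred to above couples the transmembrane potential $v$ and the extracellular potential $u_e$ through the intracellular and extracellular conductivity tensors $\sigma_i$ and $\sigma_e$, which carry the fibre orientation. A standard formulation (generic notation, not necessarily the thesis's own):

```latex
\chi \left( C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}(v, \mathbf{s}) \right)
  = \nabla \cdot \bigl( \sigma_i \, \nabla (v + u_e) \bigr),
\qquad
\nabla \cdot \bigl( \sigma_i \nabla v + (\sigma_i + \sigma_e) \nabla u_e \bigr) = -I_{\mathrm{stim}},
```

where $\chi$ is the membrane surface-to-volume ratio, $C_m$ the membrane capacitance, $I_{\mathrm{ion}}$ the ionic current given by the cell-model state $\mathbf{s}$, and $I_{\mathrm{stim}}$ the extracellular stimulus (the defibrillation shock). Solving this coupled system over tens of millions of nodes is what drives the preconditioning and communication-reduction work described above.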
223

The integration of earthquake engineering resources

Lamata Martinez, Ignacio January 2014 (has links)
Earthquake engineering increasingly relies on large international collaborations to address complex problems. Recent computing advances have greatly changed the way scientific collaborations are conducted, with web-based solutions an emerging trend for managing and presenting results to the scientific community and the general public. However, collaborations in earthquake engineering lack a common interoperability framework, resulting in tedious and complex processes to integrate results, which cannot be used efficiently by third-party institutions. The work described in this thesis applies novel computing techniques to enable the interoperability of earthquake engineering resources, integrating data, distributed simulation services and laboratory facilities. This integration favours distributed approaches over centralised solutions, and has been materialised in a platform called Celestina, which supports the integration of hazard mitigation resources. The Celestina prototype has been implemented and validated within two of the largest current earthquake engineering networks, the SERIES network in Europe and the NEES network in the USA. It is divided into three sub-systems addressing different problems: (i) Celestina Data, which develops methods to define, store, integrate and share earthquake engineering experimental data. Celestina Data uses a novel approach based on Semantic Web technologies, and it has accomplished the first data integration between earthquake engineering institutions in the United States and Europe by means of a formalised infrastructure. (ii) Celestina Tools, which investigates applications built on top of the data integration to provide practical benefit for end users. (iii) Celestina Simulations, which creates efficient methods to integrate distributed testing software and to support the planning, definition and execution of the experimental workflow from a high-level perspective. Celestina Simulations has been implemented and validated by conducting distributed simulations between the Universities of Oxford and Kassel. This validation has demonstrated the feasibility of conducting both flexible, general-purpose simulations and high-performance simulations within the framework. Celestina has enabled global analysis of data requirements for the whole community, the definition of global policies for data authorship, curation and preservation, more efficient use of effort and funding, more accurate decision support systems, and more efficient sharing and evaluation of data results in scientific articles.
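The Semantic Web approach to data integration can be illustrated with a small sketch. The vocabulary below (the `eq:` namespace and its class and property names) is invented for illustration and is not Celestina's actual schema:

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary for illustration; not Celestina's real schema.
EQ = Namespace("http://example.org/earthquake-engineering#")

g = Graph()
test = URIRef("http://example.org/experiments/shake-table-42")
g.add((test, RDF.type, EQ.ShakeTableTest))
g.add((test, EQ.performedAt, Literal("University of Oxford")))
g.add((test, EQ.peakGroundAcceleration,
       Literal(0.35, datatype=XSD.double)))

# Institutions publishing metadata against a shared vocabulary can be
# queried uniformly, wherever the data actually lives.
for row in g.query("""
        PREFIX eq: <http://example.org/earthquake-engineering#>
        SELECT ?t WHERE { ?t a eq:ShakeTableTest . }"""):
    print(row.t)
```

The point of a formalised infrastructure is precisely this: once two institutions agree on the ontology, their experimental records become mutually queryable without bespoke conversion code.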
224

Algorithmes parallèles et architectures évolutives de faible complexité pour systèmes optiques OFDM cohérents temps réel / Low-Complexity Parallel Algorithms and Scalable Architectures for Real-Time Coherent Optical OFDM Systems

Udupa, Pramod 19 June 2014 (has links)
In this thesis, low-complexity algorithms and parallel architectures for CO-OFDM systems are explored. First, low-complexity algorithms for the estimation of timing and carrier frequency offset (CFO) in a dispersive channel are studied. A novel low-complexity timing synchronization algorithm, which can withstand a large amount of dispersive delay, is proposed and compared with previous proposals. Then, the problem of realizing a low-complexity parallel architecture is studied, and a generic, scalable parallel architecture that can realize any auto-correlation algorithm is proposed. It is then extended to handle multiple parallel samples from the analog-to-digital converter (ADC) and to provide outputs that match the input ADC rate. The scalability of the architecture to higher numbers of parallel outputs and to different kinds of auto-correlation algorithms is explored. An algorithm-architecture co-design approach is then applied to the entire CO-OFDM transceiver chain.

At the transmitter side, a radix-2² algorithm is chosen for the IFFT, and a parallel Multipath Delay Commutator (MDC) feed-forward (FF) architecture is designed that consumes fewer resources than radix-2/4 MDC FF architectures. At the receiver side, an efficient algorithm for integer CFO estimation is adopted and realized without the use of complex multipliers. Hardware complexity is reduced through efficient architectures for timing synchronization, the FFT and integer CFO estimation. A fixed-point analysis of the entire transceiver chain is carried out to find the fixed-point-sensitive blocks that significantly affect the bit error rate (BER), and to identify operating points that do not degrade it. The proposed algorithms are validated both through offline optical experiments, using an arbitrary waveform generator (AWG) at the transmitter and a digital storage oscilloscope (DSO) with Matlab processing after coherent detection at the receiver, and on a real-time transceiver built on FPGA platforms with data converters. BER plots are used to demonstrate the validity and performance of the integrated system.
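The auto-correlation algorithms that the scalable architecture is designed to realize share a common shape; the classic two-identical-halves timing metric is a representative instance (a generic illustration, not the thesis's exact algorithm):

```python
import numpy as np

def timing_metric(r, L):
    """Timing metric for a preamble with two identical halves of length L.
    The metric peaks near the correct symbol start."""
    N = len(r) - 2 * L
    M = np.empty(N)
    for d in range(N):
        half1 = r[d : d + L]
        half2 = r[d + L : d + 2 * L]
        P = np.sum(np.conj(half1) * half2)    # correlation of the halves
        R = np.sum(np.abs(half2) ** 2)        # received energy (normaliser)
        M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M

# Coarse timing estimate: d_hat = np.argmax(timing_metric(r, L)).
# A hardware realization computes P and R recursively, one update per
# sample, which is what makes a multi-sample-per-clock architecture
# non-trivial.
```

The architectural challenge addressed in the thesis is mapping this recursion onto several ADC samples per clock cycle while keeping resource usage low.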
225

Dielectric elastomer actuators in electro-responsive surfaces based on tunable wrinkling and the robotic arm for powerful and continuous movement

Lin, I-Ting January 2019 (has links)
Dielectric elastomer actuators (DEAs) have been used as artificial muscles for years. Recently, DEA-based deformable surfaces have demonstrated controllable microscale roughness, ease of operation, fast response, and possibilities for programmable control. DEA muscles in bioinspired robotic arms delivering large deformation and strong force have also become desirable for their efficiency, low manufacturing cost, high force-to-weight ratio, and noiseless operation. DEA-based responsive surfaces for microscale roughness control, however, exhibit limited durability due to irreversible dielectric breakdown. Lowering the device voltage to avoid this issue is hindered by an inadequate understanding of electrically induced wrinkling as a function of the deformable dielectric film's thickness. The programmable control and geometric analysis of structured surface deformation have also not yet been fully explored. Current methods of generating anisotropic wrinkles rely on mechanical pre-loading such as stretching or bending, which complicates device fabrication and operation; with a fixed mechanical pre-load, a device can only switch between the flat state and one preset wrinkled state. In this thesis, we overcome these shortcomings by demonstrating a simple method for fabricating fault-tolerant electro-responsive surfaces and for controlling surface wrinkling patterns. The DEA-based system can produce different reversible surface topographies (craters, irregular wrinkles, structured wrinkles) depending on the electrode geometry and the applied voltage. It remains functional even after multiple high-voltage breakdowns, thanks to its ability to self-insulate breakdown faults, and the breakdown punctures can be exploited to amplify local electric fields so that wrinkles form at lower applied voltages. We deepen the fundamental understanding of the system by combining analytical models with numerical simulation to discuss the mechanism and critical conditions for wrinkle formation, and compare the predictions quantitatively with experimental results on surface topography, the critical field for wrinkling in films of different thicknesses, and wrinkling patterns analysed with several disorder metrics. Based on these results, we demonstrate wide applicability in adjustable-transparency films, dynamic light-grating filters, moulding of static surface patterns, and a multi-stable mirror-diffuser-diffraction-grating device. For DEAs producing macroscopic deformation in robotic arms, the main issue undermining performance is the trade-off between strong force and large displacement, which limits the durability and the range of potential robotic and automation applications of DEA-driven devices. This thesis tackles the challenge by using DEAs in a loudspeaker configuration so that force and displacement can be scaled up independently, by developing a theoretical prediction to optimise the operation of such DEAs in a bioinspired antagonistic system that maximises the speed and power of the robotic arm, and by designing a clutch-gear-shaft mechanical system that works with the muscles to decouple displacement from output force. The trade-off between force and displacement in traditional DEA muscles is thereby resolved, and the mechanical system also converts short linear strokes into unlimited rotary motion. Combining these advantages, continuous movement with high output force can be achieved.
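A useful reference point for the critical-field discussion is the effective electrostatic pressure in the standard DEA idealisation (a textbook relation, not a result of the thesis), which makes the film-thickness dependence of the wrinkling threshold explicit:

```latex
p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left( \frac{V}{t} \right)^2
```

where $\varepsilon_0$ is the vacuum permittivity, $\varepsilon_r$ the relative permittivity of the elastomer, $V$ the applied voltage and $t$ the film thickness; wrinkles form once this compressive stress overcomes the elastic resistance of the film and any stiff layer on it.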
226

Flexible Radio Resource Management for Multicast Multimedia Service Provision: Modeling and Optimization / Allocation de ressources radio pour les services multimédias : modélisation et optimisation

Xu, Qing 29 August 2014 (has links)
The high throughputs required by multimedia multicast services (MBMS) and the limited radio resources create a strong need for efficient radio resource management (RRM) in UMTS 3G networks. Unlike existing work in this area, this thesis proposes to solve the MBMS RRM problem as a combinatorial optimization problem. The work starts with a formal model of the problem, named the Flexible Radio Resource Management Model (F2R2M), from which an in-depth analysis of the problem's complexity and fitness landscape is carried out. It is shown that, by relaxing the OVSF code constraints, the MBMS RRM problem can be approximated as a Multiple-Choice Knapsack Problem (MCKP); this makes it possible to compute theoretical solution bounds by solving the approximated MCKP. The landscape analysis further shows that the search spaces are rugged and scattered with local optima. Building on this analysis, metaheuristic algorithms are studied to solve the problem. We first show that a Greedy Local Search (GLS) and Simulated Annealing (SA) find better solutions than the existing approaches implemented in the UMTS system, but the many local optima make these algorithms unstable. A Tabu Search (TS) combined with Variable Neighborhood Search (VNS) is then developed and compared with GLS, SA and the UMTS embedded algorithms: TS outperforms all the other approaches on several scenarios. Finally, the best solutions found by TS are compared with the theoretical bounds generated by the MCKP solver and are found to be equal or very close to the theoretical optima.
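For reference, the Multiple-Choice Knapsack Problem used for the bounds has the standard form

```latex
\max \sum_{i=1}^{k} \sum_{j \in N_i} p_{ij} x_{ij}
\quad \text{s.t.} \quad
\sum_{i=1}^{k} \sum_{j \in N_i} w_{ij} x_{ij} \le c,
\qquad
\sum_{j \in N_i} x_{ij} = 1 \;\; (i = 1, \dots, k),
\qquad
x_{ij} \in \{0, 1\},
```

where each class $N_i$ contains mutually exclusive items with profits $p_{ij}$ and weights $w_{ij}$, and $c$ is the knapsack capacity. Mapped onto MBMS RRM, a class would roughly correspond to one multicast service and an item to a candidate radio-bearer configuration consuming some share of the radio resources; this mapping is our gloss on the abstract, not a detail it states.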
227

Scalable video compression with optimized visual performance and random accessibility

Leung, Raymond, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
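The distortion-scaling strategy can be stated compactly. Post-compression rate-distortion optimization embeds coding passes in decreasing order of distortion-length slope, so weighting the distortion of perceptually significant samples by a factor $w_i > 1$ raises their slope and moves them earlier in the codestream (generic notation, assumed for illustration):

```latex
\lambda_i = \frac{w_i \, \Delta D_i}{\Delta R_i},
```

where $\Delta D_i$ is the distortion reduction contributed by coding pass $i$, $\Delta R_i$ its length in bits, and $w_i$ the perceptual weight supplied by the contrast perception model and the "perceptual mappings" described above.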
228

Techniques avancées pour la compression d'images médicales / Advanced Techniques for Medical Image Compression

Taquet, Jonathan 15 December 2011 (has links) (PDF)
The compression of medical and biological images, in particular for imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and virtual slides in anatomical pathology (VS), is an important economic issue, notably for archiving and transmission. This thesis surveys the needs and the existing compression solutions and, in this context, proposes new digital compression algorithms that are efficient in comparison with the standardized reference algorithms. For CT and MRI, medico-legal constraints require very high-quality archiving, so this work focuses on lossless and near-lossless compression. It proposes to (i) merge the hierarchical interpolation-based predictive model with the adaptive DPCM predictive model in order to provide a resolution-scalable representation that is efficient for lossless and, above all, near-lossless compression, and (ii) rely on an optimization, specific to the image content, of a wavelet-packet decomposition for lossless compression. The results of these two contributions show that there is still room for improvement in the compression of the most regular and least noisy images. For virtual slides, the physical slide can be kept, so the problem concerns transfer for remote consultation more than archiving. Given their content, an approach based on learning the structural specificities of these images appears promising. The third contribution therefore targets an offline optimization of K orthonormal transforms that optimally decorrelate the training data (K-KLT). This method is applied in particular to learn post-transforms on a wavelet decomposition. Their application in a quality-scalable compression model shows that the approach can yield appreciable quality gains in terms of PSNR.
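The near-lossless principle underlying the first contribution admits a compact sketch: quantizing DPCM prediction residuals with step $2\delta + 1$ bounds every reconstructed pixel's error by $\delta$. The causal MED predictor below is a generic stand-in for the thesis's hierarchical interpolation/adaptive-DPCM hybrid:

```python
import numpy as np

def near_lossless_dpcm(img, delta):
    """Encode with a causal predictor; quantizing the residual with step
    2*delta + 1 guarantees |reconstruction - original| <= delta."""
    img = img.astype(np.int64)
    rec = np.zeros_like(img)    # decoder-side reconstruction
    q = np.zeros_like(img)      # quantized residuals (to be entropy-coded)
    step = 2 * delta + 1
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            a = rec[y, x - 1] if x > 0 else 0                # left
            b = rec[y - 1, x] if y > 0 else 0                # top
            c = rec[y - 1, x - 1] if x > 0 and y > 0 else 0  # top-left
            # MED predictor (as in JPEG-LS): edge-aware choice among
            # min(a, b), max(a, b) and the planar estimate a + b - c.
            pred = min(a, b) if c >= max(a, b) else (
                   max(a, b) if c <= min(a, b) else a + b - c)
            e = int(img[y, x]) - pred
            mag = (abs(e) + delta) // step                   # uniform quantizer
            q[y, x] = mag if e >= 0 else -mag
            rec[y, x] = pred + q[y, x] * step
    return q, rec

# delta = 0 degenerates to lossless DPCM; a small delta > 0 trades a
# bounded, visually negligible error for a significant rate reduction.
```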
229

High-Efficiency Linear RF Power Amplifiers Development

Srirattana, Nuttapong 14 April 2005 (has links)
Next-generation mobile communication systems require linear RF power amplifiers for higher data transmission rates. However, linear RF power amplifiers are inherently inefficient and usually require additional circuits or further system adjustments for better efficiency. This dissertation focuses on the development of new efficiency enhancement schemes for linear RF power amplifiers. The multistage Doherty amplifier technique is proposed to improve the performance of linear RF power amplifiers operated at low power levels. This technique advances the original Doherty amplifier scheme by improving efficiency at much lower power levels, supported by a new approach to device periphery calculation that reduces AM/AM distortion and by a further improvement of linearity through bias adaptation. A device periphery adjustment technique for efficiency enhancement of power amplifier integrated circuits is also proposed, explained in detail together with its implementation in CMOS and SiGe RF power amplifier designs. Furthermore, a linearity improvement technique based on the cancellation of nonlinear terms is proposed for the CMOS power amplifier, in combination with the efficiency enhancement technique. In addition, a scalable large-signal MOSFET model using a modified BSIM3v3 approach is proposed. A new scalable substrate network model enhances the accuracy of the BSIM3v3 model in RF and microwave applications; it simplifies the modelling of substrate coupling effects in MOS transistors and provides good accuracy for both small-signal and large-signal performance.
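As background on why back-off efficiency matters (standard idealised relations, not results from the dissertation): an ideal class-B stage's drain efficiency falls linearly as the output voltage backs off from its maximum,

```latex
\eta_{\mathrm{B}}(v_o) = \frac{\pi}{4} \cdot \frac{v_o}{V_{\max}},
\qquad
\eta_{\max} = \frac{\pi}{4} \approx 78.5\,\%.
```

A classical two-stage Doherty amplifier restores peak efficiency at roughly 6 dB of output back-off; the multistage technique proposed in this dissertation places additional efficiency peaks at still lower power levels.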
230

Robust video streaming over time-varying wireless networks

Demircin, Mehmet Umut 03 July 2008 (has links)
Multimedia services and applications have become the driving force behind the development and widespread deployment of wireless broadband access technologies and high-speed local area networks. Mobile phone service providers offer a wide range of multimedia applications over high-speed wireless data networks: people can watch live TV, stream on-demand video clips and place video-telephony calls on multimedia-capable mobile devices, which will soon support capturing and displaying high-definition video. A similar evolution is occurring in the local area domain. Video receivers and storage devices were conventionally connected to displays using cables; wireless local area networking (WLAN) technologies provide convenient, cable-free connectivity, avoiding cable clutter and giving portable TVs mobility. However, challenges remain in improving the quality of service (QoS) of multimedia applications. Conventional service architectures, network structures and protocols fail to provide a robust distribution medium, since most were not designed for the high data rates and real-time transmission requirements of digital video. In this thesis the challenges of wireless video streaming are addressed in two main categories. The first is streaming-protocol issues: we refer to the collection of network protocols that transmit compressed digital video from a source to a receiver as the streaming protocol, and the objective of streaming-protocol solutions is high-quality video transfer between two networked devices. Novel application-layer video bit-rate adaptation methods are designed to handle short- and long-term bandwidth variations of WLAN links, using both transrating and scalable video coding to make the video bit-rate flexible. Another contribution of this thesis is an error control method that dynamically adjusts the forward error correction (FEC) rate based on channel bit-error rate (BER) estimation and the video coding structure. The second category is streaming-service issues, which generally surface in large-scale systems; service-system solutions aim at system scalability and low-cost, high-quality service for consumers. Peer-to-peer assisted video streaming technologies are developed to reduce the load on video servers, and novel video file segment caching strategies are proposed for more efficient peer-to-peer collaboration.
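The dynamic FEC adjustment can be sketched as a rate selector: given an estimated channel BER, pick the lowest-overhead block code whose expected post-FEC failure probability meets a target. The code table and threshold below are illustrative values, not the thesis's parameters:

```python
import math

# Candidate (n, k) Reed-Solomon-style codes, lowest overhead first.
CODES = [(255, 239), (255, 223), (255, 191), (255, 127)]

def symbol_error_prob(ber, bits_per_symbol=8):
    """Probability that a symbol contains at least one bit error."""
    return 1.0 - (1.0 - ber) ** bits_per_symbol

def block_failure_prob(n, k, p_sym):
    """An (n, k) code corrects t = (n - k) // 2 symbol errors; the block
    fails when more than t of its n symbols are corrupted."""
    t = (n - k) // 2
    ok = sum(math.comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
             for i in range(t + 1))
    return 1.0 - ok

def select_fec(ber_estimate, target=1e-6):
    """Return the cheapest code meeting the residual-loss target."""
    p_sym = symbol_error_prob(ber_estimate)
    for n, k in CODES:                  # cheapest (weakest) code first
        if block_failure_prob(n, k, p_sym) <= target:
            return n, k
    return CODES[-1]                    # very bad channel: strongest code

# select_fec(1e-5) picks a light code; select_fec(1e-2) escalates protection.
```

Coupling the selector to the video coding structure, for example protecting reference frames more strongly than disposable ones, is the kind of refinement the thesis develops.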
