About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Scalable video communications: bitstream extraction algorithms for streaming, conferencing and 3DTV

Palaniappan, Ramanathan 19 August 2011 (has links)
This research investigates scalable video communications and its applications to video streaming, conferencing and 3DTV. Scalable video coding (SVC) is a layer-based encoding scheme that provides spatial, temporal and quality scalability. The heterogeneity of the Internet and of clients' operating environments necessitates adapting media content to ensure a satisfactory multimedia experience. SVC's layer structure allows the extraction of partial bitstreams at reduced spatial, quality and temporal resolutions, adjusting the media bitrate at a fine granularity to changes in network state. The main focus of this research is on developing such extraction algorithms in the context of SVC. Based on a combination of metadata computations and prediction mechanisms, these algorithms evaluate the quality contribution of each layer in the SVC bitstream and make extraction decisions aimed at maximizing video quality while operating within the available bandwidth resources. These techniques are applied to two-way interaction and one-way streaming of 2D and 3D content. Depending on the delay tolerance of these applications, rate-distortion optimized extraction algorithms are proposed. For conferencing applications, the extraction decisions are made over single frames and frame pairs due to tight end-to-end delay constraints. The proposed extraction algorithms for 3D content streaming maximize the overall perceived 3D quality based on human stereoscopic perception. Compared to current extraction methods, the new algorithms offer better video quality at a given bitrate while performing fewer metadata computations in the post-encoding phase. The solutions proposed for each application achieve the recurring goal of maintaining the best possible level of end-user quality of multimedia experience in spite of network impairments.
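A minimal sketch of the kind of extraction decision described above: given per-layer bitrate costs and estimated quality contributions (hypothetical numbers, not taken from the thesis), a single-pass greedy selector keeps the layers that add the most quality per bit within a bandwidth budget, respecting layer dependencies. It illustrates the general idea only, not the author's actual algorithm.

```python
def extract_layers(layers, budget_kbps):
    """Greedy sketch of rate-distortion-aware SVC extraction (illustrative only).

    Each layer is (name, bitrate_kbps, quality_gain, depends_on). Layers are
    considered once, in order of quality gained per kbps; a layer is skipped if
    its reference layer has not been selected or if it would exceed the budget.
    """
    selected, used = set(), 0.0
    for name, rate, gain, dep in sorted(layers, key=lambda l: l[2] / l[1], reverse=True):
        if dep is not None and dep not in selected:
            continue  # enhancement layers are useless without their reference
        if used + rate <= budget_kbps:
            selected.add(name)
            used += rate
    return selected, used

# Hypothetical layer table: (name, bitrate, quality contribution, dependency)
layers = [
    ("base",       400, 30.0, None),
    ("temporal+1", 200,  2.5, "base"),
    ("spatial+1",  600,  4.0, "base"),
    ("quality+1",  300,  3.0, "spatial+1"),
]
print(extract_layers(layers, budget_kbps=1200))
```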
192

Checkpointing Algorithms for Parallel Computers

Kalaiselvi, S 02 1900 (has links)
Checkpointing is a technique widely used in parallel/distributed computers for rollback error recovery. Checkpointing is defined as the coordinated saving of process state information at specified time instances. Checkpoints help in restoring the computation from the latest saved state in case of failure. In addition to fault recovery, checkpointing has applications in fault detection, distributed debugging and process migration. Checkpointing in uniprocessor systems is easy because there is a single clock and events occur with respect to this clock; there is a clear demarcation between events that happen before a checkpoint and events that happen after it. In a parallel computer, a large number of processors coordinate to solve a single problem. Since there might be multiple streams of execution, checkpoints have to be introduced along all these streams simultaneously. The absence of a global clock necessitates explicit coordination to obtain a consistent global state. Events occurring in a distributed system can be ordered partially using Lamport's happens-before relation. Lamport's happens-before relation -> is a partial ordering relation that identifies dependent and concurrent events occurring in a distributed system. It is defined as follows:
- If two events a and b happen in the same process, and a happens before b, then a -> b.
- If a is the sending event of a message and b is the receiving event of the same message, then a -> b.
- If neither a -> b nor b -> a, then a and b are said to be concurrent.
A consistent global state may have concurrent checkpoints. In the first chapter of the thesis we discuss issues regarding the ordering of events in a parallel computer, the need for coordination among checkpoints, and other aspects related to checkpointing. Checkpointing locations can be identified either statically or dynamically. The static approach assumes that a representation of the program to be checkpointed is available, with information that enables a programmer to specify the places where checkpoints are to be taken. The dynamic approach identifies the checkpointing locations at run time. In this thesis, we propose algorithms for both static and dynamic checkpointing. The main contributions of this thesis are as follows:
1. Parallel computers being built now have faster communication and hence more efficient clock synchronisation than those built a few years ago. Based on efficient clock synchronisation protocols, the clock drift in current machines can be kept within a few microseconds. We propose a dynamic checkpointing algorithm for parallel computers assuming bounded clock drifts.
2. The shared memory paradigm is convenient for programming while the message passing paradigm is easy to scale. Distributed Shared Memory (DSM) systems combine the advantages of both paradigms and can be visualized easily on top of a network of workstations. IEEE has recently proposed an interconnect standard called Scalable Coherent Interface (SCI) to configure computers as a Distributed Shared Memory system. A periodic dynamic checkpointing algorithm is proposed in the thesis for a DSM system that uses the SCI standard.
3. When information about a parallel program is available, one can make use of this knowledge to perform efficient checkpointing. A static checkpointing approach based on task graphs is proposed for parallel programs. The proposed task-graph-based static checkpointing approach has been implemented on a Parallel Virtual Machine (PVM) platform.
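The happens-before relation above is exactly what a checkpointing protocol must respect when assembling a consistent global state. As an illustration only (vector clocks are one standard way to test the relation, not necessarily the mechanism used in the thesis), the sketch below compares vector timestamps to decide whether two events are ordered or concurrent.

```python
def happened_before(u, v):
    """True if the event stamped u happened before the event stamped v
    (vector-clock test: u <= v componentwise and u != v)."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def concurrent(u, v):
    """Events are concurrent when neither happened before the other."""
    return not happened_before(u, v) and not happened_before(v, u)

# Hypothetical timestamps from a 3-process system:
e1 = (2, 1, 0)                          # event observed on process 0
e2 = (1, 2, 0)                          # event observed on process 1
print(concurrent(e1, e2))               # True: both could sit on one consistent cut
print(happened_before((1, 1, 0), e1))   # True: ordered, so not concurrent
```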
We now give a gist of the various chapters of the thesis. Chapter 2 gives a classification of existing checkpointing algorithms. The chapter surveys algorithms that have been reported in the literature for checkpointing parallel/distributed systems. A point to be noted is that most of the algorithms published for checkpointing message passing systems are based on the seminal article by Chandy & Lamport. A large number of checkpointing algorithms have been published by relaxing the assumptions made in that article and by extending its features to minimise the overheads of coordination and context saving. Checkpointing algorithms for shared memory systems primarily extend cache coherence protocols to maintain a consistent memory, and all of them assume that the main memory is safe for storing the context. Recently, algorithms have been published for distributed shared memory systems which extend the cache coherence protocols used in shared memory systems; they also include methods for storing the status of distributed memory in stable storage. Chapter 2 concludes with brief comments on the desirable features of a checkpointing algorithm.

In Chapter 3, we develop a dynamic checkpointing algorithm for message passing systems assuming that the clock drift of processors in the system is bounded. Efficient clock synchronisation protocols have been implemented on recent parallel computers owing to the fact that communication between processors is very fast; based on these protocols, clock skew can be limited to a few microseconds. The proposed algorithm uses clocks for checkpoint coordination and vector counts for identifying messages to be logged. It is a periodic, distributed algorithm. We prove the correctness of the algorithm and compare it with similar clock-based algorithms.

Distributed Shared Memory (DSM) systems provide the benefit of ease of programming in a scalable system. The recently proposed IEEE Scalable Coherent Interface (SCI) standard facilitates the construction of scalable coherent systems. In Chapter 4 we discuss a checkpointing algorithm for an SCI-based DSM system. SCI maintains cache coherence in hardware using a distributed cache directory which scales with the number of processors in the system, and recommends a two-phase transaction protocol for communication. Our algorithm is a two-phase centralised coordinated algorithm: phase one initiates checkpoints, and the checkpointing activity is completed in phase two. The correctness of the algorithm is established theoretically. The chapter concludes with a discussion of the features of SCI exploited by the proposed checkpointing algorithm.

In Chapter 5, a static checkpointing algorithm is developed assuming that the program to be executed on a parallel computer is given as a directed acyclic task graph. We assume that estimates of the time to execute each task in the task graph are given. Given the times at which checkpoints are to be taken, the algorithm identifies a set of edges where checkpointing tasks can be placed, ensuring that they form a consistent global checkpoint. The proposed algorithm eliminates coordination overhead at run time and significantly reduces the context saving overhead by taking checkpoints along edges of the task graph. The algorithm is used as a preprocessing step before scheduling the tasks to processors.
The algorithm's complexity is O(km), where m is the number of edges in the graph and k the maximum number of global checkpoints to be taken. The static algorithm is implemented on a parallel computer with a PVM environment, as PVM is widely available and portable. The task graph of a program can be constructed manually or through program development tools. Our implementation is a collection of preprocessing and run-time routines. The preprocessing routines operate on the task graph information to generate a set of edges to be checkpointed for each global checkpoint and write the information to disk. The run-time routines save the context along the marked edges. In case of recovery, the recovery algorithms read the information from stable storage and reconstruct the context. The limitation of our static checkpointing algorithm is that it can operate only on deterministic task graphs. To demonstrate the practical feasibility of the proposed approach, case studies of checkpointing some parallel programs are included in the thesis. We conclude the thesis with a summary of the proposed algorithms and possible directions for continuing research in the area of checkpointing.
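A rough sketch of the task-graph idea follows: with estimated start and finish times for each task, the edges that span a chosen checkpoint time form a cut of the DAG, and saving context along exactly those edges yields a consistent global checkpoint. The graph, timings and helper below are invented for illustration; the thesis's actual O(km) procedure is not reproduced here.

```python
def checkpoint_edges(edges, start, finish, t):
    """Return the DAG edges (u, v) spanning checkpoint time t: the producer u
    has finished by t, but the consumer v has not yet started (illustrative)."""
    return [(u, v) for (u, v) in edges if finish[u] <= t < start[v]]

# Hypothetical task graph with estimated start/finish times (arbitrary units):
edges  = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
start  = {"A": 0, "B": 6, "C": 6, "D": 12}
finish = {"A": 5, "B": 10, "C": 11, "D": 15}

for t in (5, 11):                       # two requested global checkpoint times
    print(t, checkpoint_edges(edges, start, finish, t))
```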
193

A feasibility study of building Set-top box user interfaces using Scalable Vector Graphics

Vinkvist, Fredrik January 2008 (has links)
An IPTV Set-top box enables the possibility of doing much more than decoding television content. Its Ethernet interface gives it the same possibilities to communicate with the outside world as any network device. This enables a wide range of services, from internet radio to acting as a digital media receiver in your home network. These highly interactive services increase the demands for responsive and visually attractive user interfaces. Due to the cost-sensitive market of IPTV STBs, the preferred platform to develop the user interface is the HTML browser, as it allows for fast development times and low costs. As a W3C standard it also offers high portability and hardware abstraction, making it easy to use more than one STB vendor. The cons of HTML-based GUIs are low performance and lacklustre graphics. This thesis aims to find out if SVG can be used to achieve rich, scalable and animated graphics with high performance while still keeping the attractive characteristics of HTML. To do this, much effort was put into identifying the strengths and weaknesses of SVG. The lessons learned resulted in an SVG AJAX framework called TOIXSVG, making it possible to develop SVG GUIs in the same manner as modern Rich Internet Applications, enabling component reuse to make sure development time scales preferably with the scope and complexity of the user interface. Along with the framework, several new widgets had to be developed to achieve the targeted functionality. As a proof of concept, a mock-up GUI was created with the framework and widgets.
194

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around the existing forward solvers, as long as they are appropriately equipped, for a given physical problem. Then a collection of tools, insights and numerical methods may be applied to solve the problem, and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
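For intuition about what a "complete statistical description" means, here is a sketch of the linear-Gaussian special case, where the posterior mean and covariance are available in closed form. The forward operator, noise level and prior below are invented toy values; the dissertation's framework for large-scale PDE-constrained problems is of course far more involved.

```python
import numpy as np

# Toy linear inverse problem d = G m + noise, with Gaussian prior and noise.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))            # hypothetical forward operator
m_true = rng.normal(size=5)
sigma = 0.1                             # observation noise standard deviation
d = G @ m_true + sigma * rng.normal(size=20)

prior_cov = np.eye(5)                   # assumed prior covariance
noise_prec = 1.0 / sigma**2

# Standard Gaussian conjugacy formulas for the posterior.
post_cov = np.linalg.inv(noise_prec * G.T @ G + np.linalg.inv(prior_cov))
post_mean = post_cov @ (noise_prec * G.T @ d)

print("posterior mean:", post_mean)                     # point estimate
print("posterior std :", np.sqrt(np.diag(post_cov)))    # parameter uncertainty
```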
195

Dynamic compilation for the Java programming language

Προύντζος, Δημήτριος 27 February 2009 (has links)
The Java language has become established as one of the most widely used programming languages, not only because of its excellent support for modern programming paradigms such as object-oriented and generic programming, but above all because of the easy portability of its code and the independence it gives programs from any particular hardware/operating-system platform. This capability is summed up in the slogan "Write once, run anywhere" coined by Sun, the company that originally designed the language. It is achieved by compiling a program from Java source code into an intermediate object-code representation (bytecode), which is then executed within a virtual machine. The traditional way the virtual machine executes programs follows the interpretation model, which in practice is far from efficient in terms of execution time. A different approach to executing Java bytecode is dynamic compilation (Just-In-Time compilation, or JIT compilation). Here, the first time a particular piece of code needs to be executed, the virtual machine processes it, optionally applying optimizing transformations, and generates the corresponding code for the host system on which the virtual machine itself runs. That code can then be reused, eliminating the cost of repeatedly compiling the same bytecode fragment and reducing the overall execution time. In this master's thesis we build a JIT compiler for a special-purpose virtual machine, DSJOS (Distributed Scalable Java Operating System). As its name suggests, DSJOS is essentially a distributed system that presents the programs running inside it with the abstraction of a Java virtual machine. The JIT we build uses the Hierarchical Task Graph (HTG) as its internal representation and relies on the PROMIS compilation framework of transformations and optimizations. Our implementation is organized into three main stages: the frontend, which is responsible for converting Java bytecode into the intermediate representation; the backend, which turns the intermediate representation into machine code for x86 systems; and, finally, the runtime layer, which provides the executing programs with various services required for execution (e.g. exception handling). Alongside the design of the basic compiler and its integration into DSJOS, we also design and implement a set of transformations, in both the frontend and the backend, whose purpose is to improve the quality of the generated code and reduce program execution time.
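The compile-on-first-use behaviour described above can be sketched in a few lines: a cache maps a code unit to its compiled form, a miss triggers (possibly optimizing) compilation, and every later call reuses the result. The example below is a generic Python illustration, not part of the DSJOS/PROMIS toolchain.

```python
class TinyJIT:
    """Minimal compile-on-first-use cache (illustrative, not a real bytecode JIT)."""

    def __init__(self, compile_fn):
        self.compile_fn = compile_fn    # turns a source fragment into a callable
        self.code_cache = {}            # method name -> compiled callable

    def run(self, name, source, *args):
        fn = self.code_cache.get(name)
        if fn is None:                  # first execution: pay the compilation cost
            fn = self.compile_fn(source)
            self.code_cache[name] = fn  # reused on all later invocations
        return fn(*args)

# A toy "compiler": here simply Python's eval of a lambda expression.
jit = TinyJIT(lambda src: eval(src))
print(jit.run("square", "lambda x: x * x", 7))   # compiles, then runs -> 49
print(jit.run("square", "lambda x: x * x", 9))   # cache hit -> 81
```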
196

Adaptive Multicast Live Streaming for A/V Conferencing Systems over Software-Defined Networks

Al Hasrouty, Christelle 04 December 2018 (has links)
Real-time applications, such as multi-party conferencing systems, have strong Quality of Service requirements for ensuring a decent Quality of Experience. Nowadays, most of these conferences are carried out on wireless devices, so heterogeneous mobile devices and network dynamics must be properly managed to provide a good quality of experience. In this thesis, we propose two algorithms for building and maintaining conference sessions on Software-Defined Networks that use both multicast distribution and stream adaptation. The first algorithm sets up the conference call by building multicast trees for each participant. It then optimally places the stream adaptation locations and rules inside the network in order to minimize bandwidth consumption. We have created two versions of this algorithm: the first, based on shortest-path trees, minimizes latency, while the second, based on spanning trees, minimizes bandwidth consumption. The second algorithm adapts the multicast trees according to the network changes occurring during a call. It does not recompute the trees, but only relocates the stream adaptation locations and rules. It requires very little computation at the controller, making our proposal fast and highly reactive. Extensive simulation results confirm the efficiency of our solution in terms of processing time and bandwidth savings compared to existing conferencing systems based on a Multipoint Control Unit and Application Layer Multicast.
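To make the tree-building step concrete, the sketch below computes a shortest-path multicast tree from a source to a set of participants over a weighted graph using plain Dijkstra. The topology and link weights are invented; the bandwidth-minimizing spanning-tree variant and the placement of adaptation points described in the thesis are not shown.

```python
import heapq

def shortest_path_tree(graph, source, receivers):
    """Dijkstra from `source`; return the tree edges used to reach `receivers`."""
    dist, parent = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    edges = set()
    for r in receivers:                 # walk back from each receiver to the source
        while r != source:
            edges.add((parent[r], r))
            r = parent[r]
    return edges

# Hypothetical topology: adjacency list of (neighbour, latency) pairs.
graph = {
    "src": [("s1", 1), ("s2", 4)],
    "s1":  [("src", 1), ("s2", 1), ("p1", 2)],
    "s2":  [("src", 4), ("s1", 1), ("p2", 1)],
    "p1":  [("s1", 2)],
    "p2":  [("s2", 1)],
}
print(shortest_path_tree(graph, "src", ["p1", "p2"]))
```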
197

Cu-catalyzed chemical vapour deposition of graphene : synthesis, characterization and growth kinetics

Wu, Xingyi January 2017 (has links)
Graphene is a two-dimensional carbon material whose outstanding properties have been envisaged for a variety of applications. Cu-catalyzed chemical vapour deposition (Cu-CVD) is promising for large-scale production of high-quality monolayer graphene, but the existing Cu-CVD technology is not ready for industry-level production. It still needs to be improved in several respects, three of which are: synthesizing industrially usable graphene films under safe conditions, visualizing the domain boundaries of continuous graphene, and understanding the kinetic features of the Cu-CVD process. This thesis presents research aimed at these three objectives. By optimizing the Cu pre-treatments and the CVD process parameters, continuous graphene monolayers with millimetre-scale domain sizes have been synthesized. Process safety has been ensured by carefully diluting the flammable gases. Through a novel optical microscope set-up, the spatial distributions of the domains in the continuous Cu-CVD graphene films have been directly imaged and the domain boundaries visualised. This technique is non-destructive to the graphene and hence could help manage the domain boundaries of large-area graphene. By establishing novel rate equations for graphene nucleation and growth, this study has revealed the essential kinetic characteristics of general Cu-CVD processes. For both edge-attachment-controlled and surface-diffusion-controlled growth, the rate equations for the time evolution of the domain size, the nucleation density and the coverage are solved, interpreted, and used to explain various Cu-CVD experimental results. Continuous nucleation and inter-domain competition prove to have non-trivial influences on the growth process. This work further examines the temperature dependence of the graphene formation kinetics, leading to a discovery of the internal correlations of the associated energy barriers. The complicated effects of temperature on the nucleation density are explored, and a criterion for identifying the rate-limiting step is proposed. The model also elucidates the kinetics-dependent formation of the characteristic domain outlines. By accomplishing these three objectives, this research has brought current Cu-CVD technology a large step forward towards practical implementation at the industry level and hence made high-quality graphene closer to being commercially viable.
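As a purely illustrative companion to the rate-equation discussion, the sketch below integrates a generic nucleation-and-growth model (with a JMAK-style impingement correction), tracking nucleation density and coverage over time. The rate constants and functional forms are assumptions for illustration, not the equations derived in the thesis.

```python
import math

def simulate_growth(I=0.05, G=0.3, dt=0.05, t_end=30.0):
    """Toy nucleation-and-growth integration: nucleation only on the uncovered
    fraction, circular domains growing at constant edge speed G, and coverage
    obtained from the extended area via a JMAK-style correction (illustrative)."""
    births, weights, samples = [], [], []
    t = 0.0
    while t <= t_end:
        # Extended coverage: sum of pi*(G*(t - s))^2 over nuclei born at times s.
        x_ext = sum(w * math.pi * (G * (t - s)) ** 2 for s, w in zip(births, weights))
        coverage = 1.0 - math.exp(-x_ext)      # correct for domain impingement
        dn = I * (1.0 - coverage) * dt         # new nuclei form on bare surface only
        births.append(t)
        weights.append(dn)
        samples.append((t, sum(weights), coverage))
        t += dt
    return samples

for t, n, theta in simulate_growth()[::100]:
    print(f"t={t:5.1f}  nucleation density={n:6.3f}  coverage={theta:5.2f}")
```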
198

Childhood neighborhood and the transition to parenthood in Sweden

Sabil, Ezdani Khan January 2018 (has links)
This thesis explores the association between childhood neighborhood and the timing of the transition to parenthood. It also explores the relationship between neighborhood and individual attitudes related to fertility behavior. For this purpose, two different datasets were combined. The Swedish Housing and Life Course Cohort Study (HOLK) was used to obtain longitudinal housing data, as well as individual-level attitudes and control variables for the year 2005, from the 1964 and 1974 birth cohorts. Neighborhood variables for the year 1990 were obtained from the research project ResSegr – Residential segregation in five European countries. Using the same methods as earlier research on scalable neighborhoods, five different neighborhood characteristics were identified for parishes in Sweden in 1990: elite, foreign-born, low income, high employment and social assistance. These characteristics were used as independent variables in order to explore any association that might exist between the neighborhood at age 16 and the transition to parenthood, using ordinal logistic, logistic and Cox proportional hazards models. The results indicated an association between neighborhood characteristics at age 16 and the transition to parenthood: growing up in a neighborhood characterized by high income and completed tertiary education delays the timing of the transition to parenthood. Attitudes were also observed to be affected by neighborhood characteristics at age 16, indicating that neighborhood characteristics have a long-lasting influence on an individual's attitudes even 15-25 years later.
199

A scalable back-end system for web games using a RESTful architecture

Helg, Emil, Silverhav, Kristoffer January 2016 (has links)
The objective of this thesis was to design and implement a scalable and load-efficient back-end system for web game services. This is of interest since web applications may gain a significant increase in user base overnight because of viral sharing. Designing the web application to serve an increasing number of users can therefore make or break the application with regard to keeping that user base. Because of this, testing how well the system performs under heavy load can serve as a foundation for deciding when and where to scale up the application. The system was to be generically accessible by the different game services through an Application Programming Interface (API). This was done using a RESTful architecture, with the emphasis on making the system scalable and load efficient. This thesis focuses on designing and implementing such a system, and on how load testing can be used to evaluate the system's performance for an increasing number of simultaneous clients using the web application. The results from load testing the implemented system were above expectations, considering the hardware used when running the tests and hosting the system. The conclusion of this thesis is that by following REST when designing a web service, scalability becomes a natural part of how one would design the system.
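Load testing of the kind described can be sketched with only the Python standard library: fire a batch of concurrent GET requests at an endpoint and report latency percentiles. The URL and the request/concurrency figures below are placeholders, and this is far cruder than a dedicated load-testing tool.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median, quantiles

URL = "http://localhost:8080/api/highscores"    # placeholder endpoint

def timed_get(_):
    """Issue one GET request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(total_requests=200, concurrency=20):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_get, range(total_requests)))
    p95 = quantiles(latencies, n=100)[94]        # 95th percentile
    print(f"median={median(latencies) * 1000:.1f} ms  "
          f"p95={p95 * 1000:.1f} ms  max={latencies[-1] * 1000:.1f} ms")

if __name__ == "__main__":
    load_test()
```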
