841

Measurement of interactive manual effectiveness : How do we know if our manuals are effective?

Ståhlberg, Henrik January 2018 (has links)
Multimedia learning is today part of everyday life, and learning from digital sources on the internet is probably more common than learning from printed material. The goal of this project is to determine whether measuring user interaction in an interactive manual can be used to evaluate the manual's effectiveness. Since feedback on multimedia learning materials is costly to obtain through face-to-face interaction, automatically collected interaction data might be useful for evaluating and improving the quality of such materials. In this project an interactive manual was developed for a real-world report-generating application and tested on 21 users. Using the k-nearest-neighbour machine learning algorithm, the results show that the time taken on each step and the number of views of each step did not provide a good basis for evaluating the manual. The number of faults made by the user was a good predictor of whether the user would abort the manual, and in combination with the number of acceptable interactions the usability data provided a better classification than the ZeroR baseline. The conclusions are limited by the small dataset used in this project.
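The comparison above can be sketched in a few lines: a ZeroR baseline always predicts the majority class, while k-nearest-neighbour votes among the closest training examples. The feature names, records, and labels below are invented stand-ins for the thesis's usability data, not its actual dataset.

```python
from collections import Counter

def zeror(labels):
    """ZeroR baseline: always predict the majority class of the training set."""
    return Counter(labels).most_common(1)[0][0]

def knn(train, labels, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

# Invented usability records: (faults, acceptable_interactions) -> outcome.
train = [(0, 9), (1, 8), (0, 7), (2, 6), (5, 2), (6, 1), (4, 3)]
labels = ["completed", "completed", "completed", "completed",
          "aborted", "aborted", "aborted"]

print(zeror(labels))               # -> completed (majority class)
print(knn(train, labels, (5, 1)))  # -> aborted (many faults, few good steps)
```

ZeroR serves as the floor any usability-based classifier must beat, which is why the abstract reports it as the reference point.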
842

On lowering the error-floor of low-complexity turbo-codes

Blazek, Zeljko 26 November 2018 (has links)
Turbo-codes are a popular error correction method for applications requiring bit error rates from 10⁻³ to 10⁻⁶, such as wireless multimedia applications. In order to reduce the complexity of the turbo-decoder, it is advantageous to use the simplest possible constituent codes, such as 4-state recursive systematic convolutional (RSC) codes. However, for such codes the error floor can be high, making them unable to reach the target bit error range. In this dissertation, two methods of lowering the error floor are investigated: interleaver selection and selective puncturing of data bits. Using appropriate code design criteria, various types of interleavers and various puncturing parameters are evaluated. It was found that careful selection of interleavers and puncturing parameters can achieve a substantial reduction in the error floor. Of the interleaver types investigated, the variable s-random type provided the best performance. For the puncturing parameters, puncturing both the data and parity bits of the turbo-code, as well as puncturing only the parity bits, were considered. It was found that for applications requiring BERs around 10⁻³, it is sufficient to puncture only the parity bits. However, for applications that require the full range of BER values, or where the FER is the important design parameter, puncturing some of the data bits appears to be beneficial. / Graduate
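As a rough illustration of the puncturing idea discussed above, the sketch below applies a repeating keep/drop pattern to the three output streams of a rate-1/3 turbo encoder; keeping every data bit and alternating the two parity streams yields a rate-1/2 code. The bit values and pattern are made up for illustration and do not reproduce the codes or puncturing schedules studied in the dissertation.

```python
def puncture(data, parity1, parity2, pattern):
    """Apply a repeating keep/drop mask to each output stream of a
    rate-1/3 turbo encoder. Puncturing only the parity streams raises
    the code rate while preserving every data (systematic) bit."""
    out = []
    for i in range(len(data)):
        for name, stream in (("data", data), ("parity1", parity1), ("parity2", parity2)):
            mask = pattern[name]
            if mask[i % len(mask)]:
                out.append(stream[i])
    return out

# Rate 1/3 -> rate 1/2: keep all data bits, alternate the parity streams.
pattern = {"data": [1], "parity1": [1, 0], "parity2": [0, 1]}
data, p1, p2 = [1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]
sent = puncture(data, p1, p2, pattern)
print(sent)  # 8 transmitted bits for 4 data bits -> rate 1/2
```

Puncturing data bits as well (e.g. a `"data": [1, 1, 1, 0]` mask) would raise the rate further, which is the trade-off the dissertation evaluates against the resulting error floor.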
843

Authoring interactive media : a logical & temporal approach / Une approche logico-temporelle pour la création de médias interactifs

Celerier, Jean-Michael 29 March 2018 (has links)
Interactive media design has been researched ever since computers started showing audio-visual capabilities. A common research theme is the temporal specification of interactive media objects: how is it possible to create multimedia presentations whose schedule takes into account events external to the system? This problem is shared with another research field, interactive music, and more precisely interactive scores: musical works whose performance evolves in time according to a given score. In both cases, it is necessary to specify the media and musical data orchestrated by the system. This is the subject of the first part of this thesis, which presents a model tailored for the design of multimedia applications;
the model simplifies distributed access and remote control, and addresses documentation-related problems. Once this model has been defined, we construct, drawing inspiration from well-known data-flow systems used in music programming, a computation structure able to control and orchestrate the applications defined previously, as well as to handle audio-visual input and output. Specifically, a notion of permanent environment is introduced in the data-flow model: it simplifies several use cases common when authoring interactive media and music, and improves performance compared with a purely node-based approach. A temporal graph structure is then presented: it defines which parts of the data graph are active at a given point of an interactive score; in particular, connections between nodes of the data graph are studied in both synchronous and delayed settings. A visual editing language is introduced for authoring interactive scores in a graphical model that unites the previously introduced elements. The temporal structure is then studied from the distribution point of view: we show in particular that additional expressive power can be gained by assuming concurrent execution of specific objects of the temporal structure. Finally, we show how the system can recreate a number of existing media systems, such as sequencers, live-loopers, and patchers, as well as new kinds of multimedia behaviour made possible by the approach.
844

Artistic Fusion in the Piano Concert: The Piano Recital and Concepts of Artistic Synergy: Includes two multimedia projects: Picturing Rachmaninoff & Picturing Ravel

January 2012 (has links)
abstract: This paper investigates the origins of the piano recital as invented by Franz Liszt, presents varying strategies for program design, and compares Liszt's application of the format with current trends. In addition, it examines the concepts of program music, musical ekphrasis, and Gesamtkunstwerk, and proposes a new multimedia piano concert format in which music combines with literature and the visual arts; Picturing Rachmaninoff and Picturing Ravel provide two recent examples of this format. / Dissertation/Thesis / D.M.A. Music 2012
845

Separation of educational and technical content in educational hypermedia

Hilmer, Gunter January 2009 (has links)
The creation and development of educational hypermedia by teachers and educational staff is often limited by their lack of computing skills, time, and support from their institutions. The lack of computing skills in particular is a hindrance to most of today's educational experts. The problem is to find out how these experts could be supported by computer-based tools tailored specifically to their needs, without technical limitations getting in the way. In this study the separation of technical and educational content in educational hypermedia is examined as a solution to this problem. The main hypothesis is that this separation is possible if it is based on a fine-grained structure of different teaching and learning strategies and their conversion into an authoring tool. Such an authoring tool would make the creation of educational hypermedia very easy for teachers and therefore enable them to overcome the existing obstacles. The development of a new model, the creation of a new XML language, and the implementation of a new authoring tool form the basis for a detailed investigation. The investigation comprised several research tasks: the evaluation of the XML language and the authoring tool by a group of educational experts from different knowledge domains, the practical use of the authoring tool to create educational material based on real-life scenarios, and the analysis of the results obtained. The analysis of the qualitative data showed that the separation of educational and technical content in educational hypermedia is possible and that it can be applied by educational experts with low computing skills as well as by technical experts with no educational background. Furthermore, the analysis offered additional insights into how teachers create educational material and how this process can be improved.
The main conclusion of this study is that authoring tools for educational hypermedia should separate educational and technical content on the basis of different teaching and learning strategies, which allows educational experts with low computing skills to create educational content for delivery via the World Wide Web.
846

Development of a MPEG-7 based multimedia content description and retrieval tool for internet protocol television (IPTV)

Ncube, Prince Daughing Ngqabutho January 2017 (has links)
Search and retrieval of multimedia content from open platforms such as the Internet and IPTV platforms has long been hugely inefficient. A major cause of this inefficiency is improper labeling or incomplete description of multimedia content by its creators: the lack of adequate description of video content, i.e. proper annotation with relevant metadata, leads to poor search and retrieval yields. The creation of such metadata is itself a major problem, as there are various metadata description standards users could employ. On the other hand, there are tools such as FFprobe that can extract important features of video usable in search and retrieval; the combination of such tools and metadata description standards could be the solution to the metadata problem. The Multimedia Content Description Interface (MPEG-7) is an example of a metadata description standard, and it has been adopted by TISPAN for the description of IPTV multimedia content. The MPEG-7 standard is rather complex, as it has over 1200 global Descriptors and Description Schemes which a user would have to know in order to apply the technology. This complexity is a serious obstacle given the multitude of amateur video producers: content creators who have no idea how to use the MPEG-7 standard to annotate their creations with metadata. Consequently, the IPTV platform is overloaded with content that has not been annotated in a standardized manner, making search and retrieval of the multimedia content (videos, in this instance) inefficient. It was therefore worth determining whether the use of the MPEG-7 standard could be made much easier by creating an MPEG-7-enabled tool that allows any user to annotate video content without concerning themselves with how the standard works.
In attempting to develop a tool for metadata generation, we first needed to understand the issues associated with metadata generation for users wishing to create IPTV services. An extensive literature review on IPTV standardization was carried out to determine these issues and their proposed solutions. An experimental research approach was then taken to find out whether our proposed solution to users' lack of technical expertise with the MPEG-7 standard could solve the metadata generation problem. We developed a Multimedia Content Description and Management System (MCDMS) prototype which enabled us to describe video content by annotating it with 16 different metadata elements and storing the descriptions in XML MPEG-7 format. Incremental development and re-use-oriented development were used during the development phase of this research. The MCDMS underwent functional testing: smoke testing of the individual system components and big-bang integration testing of the combined components. Our results indicate that the more descriptive metadata is attached to a video, the easier it becomes to search for and retrieve. The MCDMS hides the complexity of MPEG-7 metadata creation from users. With the effortless creation of MPEG-7-based metadata, it becomes easier to annotate videos, and consequently search and retrieval of video content becomes more efficient. It is important to note that the description of multimedia content remains complex: even with the metadata elements laid out for users, other issues such as polysemy and the semantic gap still affect metadata creation. Nevertheless, a tool that performs the MPEG-7 standardization behind the scenes when users upload a video makes standardized description of multimedia content a much easier feat to achieve.
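The core idea of hiding the standard behind a simple form can be sketched as follows: the user supplies a few plain fields, and the tool emits a standardized XML description. The function name, field set, and element layout here are illustrative assumptions, not the actual MCDMS interface, and the skeleton omits MPEG-7's namespaces and the vast majority of its description tools.

```python
import xml.etree.ElementTree as ET

def describe_video(title, creator, duration):
    """Wrap plain user-supplied fields in a minimal MPEG-7-style XML
    skeleton. Element names are a simplified nod to MPEG-7's creation
    tools; namespaces and most of the standard's Descriptors and
    Description Schemes are deliberately omitted."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    creation = ET.SubElement(desc, "CreationInformation")
    ET.SubElement(creation, "Title").text = title
    ET.SubElement(creation, "Creator").text = creator
    ET.SubElement(desc, "MediaDuration").text = duration  # ISO 8601 duration
    return ET.tostring(root, encoding="unicode")

print(describe_video("Holiday clip", "amateur_user_01", "PT2M30S"))
```

The point of such a wrapper is exactly the one the abstract makes: the uploader never sees the standard, yet every stored description is searchable in a uniform format.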
847

Black Things, White Spaces

Washington, Lindsay Amadi 01 May 2018 (has links)
This thesis paper, Black Things, White Spaces, offers an in-depth look into my journey as an artist and how my artistic practice has evolved over the years. Throughout this time of self-exploration, I have developed an interest in themes of racism, structures of power, representation, and stereotypes. In my artistic work, I explore how these themes affect the African American community, as well as myself as an African American woman. This paper utilizes the creative and theoretical frameworks of artists and scholars such as Bill Viola, Adrian Piper, bell hooks, and Frantz Fanon to support the intentions of my work. This thesis illustrates for the reader how my work approaches these themes through certain methodologies, such as tactical media, blurring the lines between art and life, and the manipulation of time and space. In this paper, I argue for the importance of placing my work within the context of African American experiences throughout history. By doing this, my work is able to reference several historical events while addressing our current moment in time. Included in this manuscript are detailed descriptions and analyses of each piece in the thesis exhibition, as it is important to speak about the development and the intentions of my art. While speaking about the work, I compare and contrast my thesis work with previous artworks I have made, as well as with other artists' works, in order to place these pieces within an art-historical framework. Finally, this thesis also addresses how my current work presented in the thesis exhibition will inform my future artistic practice. I believe that my contributions to the African American media arts practice create spaces to celebrate diversity, empower the voiceless, and, most importantly, create new avenues for change.
848

Enmity

Horton, David Christopher, 1986- 06 1900 (has links)
1 score (viii, 57 p.) / Enmity was conceived in collaboration with choreographer Liana Conyers and was premiered on May 17, 2011 in Dougherty Dance Theater at the University of Oregon, School of Music and Dance. This piece was born out of my strong belief in art as collaboration. The initial idea for this project began with my prior interest in music as it pertains to dance and the dynamic relationship between the two art forms. Having composed several works for dance, I explore the specific relationships between music and movement and how they combine to engage the viewer. The narrative of Enmity shares a social commentary that is relevant and personal to my experience as an artist. Enmity was consciously composed with the intent of movement being part of the compositional process. There is a strong influence and connection between sound and movement; often, composers are subconsciously thinking about music as it relates to movement, conceptually or physically. / Committee in charge: David Crumb, Chairperson; Robert Kyr, Member; Christian Cherry, Member
849

The distributed utility model applied to optimal admission control & QoS adaptation in multimedia systems & enterprise networks

Akbar, Md Mostofa 05 November 2018 (has links)
Allocation and reservation of resources, such as CPU cycles and I/O bandwidth of multimedia servers and link bandwidth in the network, is essential to ensure the Quality of Service (QoS) of multimedia services delivered over the Internet. We propose a Distributed Multimedia Server System (DMSS) configured out of a collection of networked multimedia servers, with multimedia data partitioned and replicated among the servers. We also introduce Utility Model-Distributed (UM-D), the distributed version of the Utility Model, for admission control and QoS adaptation of multimedia sessions, with the goal of maximizing revenue from multimedia services in the DMSS. Two control architectures, one centralized and one distributed, are proposed to solve the admission control problem formalized by the UM-D. In the centralized broker architecture, admission control in a DMSS can be mapped to the Multidimensional Multiple-choice Knapsack Problem (MMKP), a variant of the classical 0-1 Knapsack Problem. An exact solution of the MMKP, an NP-hard problem, is not applicable to the on-line admission control problem in the DMSS. We therefore developed three new heuristics, M-HEU, I-HEU, and C-HEU, for solving the MMKP for on-line real-time admission control and QoS adaptation. We present a qualitative analysis of the performance of these heuristics based on worst-case complexity analysis and experimental results from data sets of different sizes. The fully distributed admission control problem in a DMSS, on the other hand, maps to the Multidimensional Multiple-choice Multi-Knapsack Problem (MMMKP), a new variant of the Knapsack Problem. We have developed D-HEU and A-HEU, two new distributed heuristics to solve the MMMKP. D-HEU requires a large number of messages and is not suitable for an on-line admission controller; A-HEU finds a solution with fewer messages but achieves lower optimality than D-HEU.
We have applied the admission control strategy described in the UM-D to a set of media server farms providing streaming video to users. The performance of the different heuristics in the broker is discussed using simulation results. We also show an application of UM-D to distributed SLA (Service Level Agreement) controllers in enterprise networks; simulation results and a qualitative comparison of the different heuristics are provided. / Graduate
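To make the MMKP formulation above concrete, the sketch below picks exactly one QoS level per session subject to a multidimensional resource capacity, using a simple greedy upgrade rule. This is an illustration of the problem structure under invented data, not a reimplementation of the dissertation's M-HEU, I-HEU, or C-HEU heuristics.

```python
def greedy_mmkp(groups, capacity):
    """Pick one (value, resources) item from each group without exceeding
    the multidimensional capacity: start from each group's cheapest item,
    then repeatedly apply the upgrade with the best value gain per unit of
    extra aggregate resource while it stays feasible. A simplified greedy
    illustration of the MMKP, not the dissertation's M-HEU heuristic."""
    dims = range(len(capacity))
    choice = [min(range(len(g)), key=lambda i: sum(g[i][1])) for g in groups]
    used = [sum(g[c][1][d] for g, c in zip(groups, choice)) for d in dims]
    if any(u > cap for u, cap in zip(used, capacity)):
        return None  # even the cheapest selection does not fit
    while True:
        best = None
        for gi, g in enumerate(groups):
            cur_v, cur_r = g[choice[gi]]
            for ci, (v, r) in enumerate(g):
                gain = v - cur_v
                extra = [r[d] - cur_r[d] for d in dims]
                if gain <= 0 or any(used[d] + extra[d] > capacity[d] for d in dims):
                    continue
                score = gain / (1 + max(0, sum(extra)))
                if best is None or score > best[0]:
                    best = (score, gi, ci, extra)
        if best is None:
            break
        _, gi, ci, extra = best
        choice[gi] = ci
        used = [used[d] + extra[d] for d in dims]
    return choice, sum(g[c][0] for g, c in zip(groups, choice))

# Two sessions, each offering QoS levels as (revenue, (cpu, bandwidth)):
sessions = [
    [(1, (1, 1)), (3, (2, 2)), (5, (4, 4))],
    [(2, (1, 1)), (4, (3, 3))],
]
print(greedy_mmkp(sessions, (5, 5)))  # -> ([2, 0], 7)
```

The "one item per group" constraint is what distinguishes the MMKP from the plain 0-1 knapsack: rejecting a session entirely would be modelled as a zero-value, zero-resource item in its group.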
850

Adaptive Layered Multicast TCP-Friendly: analysis and experimental validation

Krob, Andrea Collin January 2009 (has links)
One of the obstacles to the widespread use of multicast on the global Internet is the development of adequate congestion control protocols. One factor that contributes to this problem is the heterogeneity of receivers' equipment, links, and access conditions, which increases the complexity of implementing and validating these protocols. Because multicast may involve thousands of receivers simultaneously, the challenge for this kind of protocol is even greater: besides the issues related to network congestion, it is necessary to consider factors such as synchronism, feedback control, and traffic fairness, among others. For these reasons, multicast congestion control protocols have been a topic of intense research in recent years. The ALMTF protocol (Adaptive Layered Multicast TCP-Friendly), which is part of the SAM (Sistema Adaptativo Multimídia) project, is one of the alternatives for multicast congestion control on the Internet. One advantage of this algorithm is its ability to infer the network congestion level, determining the most appropriate receiving rate for each receiver. In addition, the protocol manages the received bandwidth, aiming to achieve fairness with competing network traffic. ALMTF was originally developed in a Ph.D. thesis and validated in the NS-2 (Network Simulator) simulator. The goal of this work is to extend the protocol to a real network, implementing and validating its mechanisms and proposing new alternatives to adapt it to this environment, and then to compare the real results with the simulation, identifying the differences and promoting experimental research in the area.
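The receiver-driven side of layered multicast adaptation can be sketched simply: given cumulative layer rates, a receiver subscribes to as many layers as fit within its estimated TCP-friendly bandwidth. The rates below are illustrative values; the real ALMTF additionally infers congestion from network measurements and coordinates join attempts among receivers.

```python
def subscribe_layers(layer_rates, fair_rate):
    """Subscribe to the highest multicast layer whose cumulative rate
    still fits within the receiver's estimated TCP-friendly bandwidth.
    A simplified sketch of receiver-driven layered adaptation; the real
    ALMTF also infers congestion and synchronizes join attempts."""
    cumulative, layers = 0, 0
    for rate in layer_rates:  # rates in kbit/s, lowest layer first
        if cumulative + rate > fair_rate:
            break
        cumulative += rate
        layers += 1
    return layers, cumulative

# Four layers; the receiver estimates 300 kbit/s of fair bandwidth.
print(subscribe_layers([64, 64, 128, 256], 300))  # -> (3, 256)
```

Because each receiver makes this decision independently, heterogeneous receivers naturally end up at different quality levels without the sender having to adapt to the slowest one.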
