1 |
Non-standard sound synthesis with dynamic models
Valsamakis, Nikolas, January 2013
This Thesis pursues three main objectives: (i) to propose a new generalized non-standard synthesis model that provides a framework for incorporating other non-standard synthesis approaches; (ii) to explore dynamic sound modeling through the application of new non-standard synthesis techniques and procedures; and (iii) to experiment with dynamic sound synthesis for the creation of novel sound objects. To achieve these objectives, this Thesis introduces a new paradigm for non-standard synthesis based on the algorithmic assemblage of minute wave segments into sound waveforms. This paradigm, called Extended Waveform Segment Synthesis (EWSS), incorporates a hierarchy of algorithmic models for the generation of microsound structures. The concepts of EWSS are illustrated through the development and presentation of a novel non-standard synthesis system, Dynamic Waveform Segment Synthesis (DWSS). DWSS features and combines a variety of algorithmic models for direct synthesis generation: list generation and permutation, tendency masks, trigonometric functions, stochastic functions, chaotic functions and grammars. The core mechanism of DWSS is based on an extended application of cellular automata. The synthetic capabilities of DWSS are explored in a series of case studies in which a number of sound objects were generated, revealing (i) the system's capability to generate sound morphologies belonging to other non-standard synthesis approaches and (ii) its capability to generate novel sound objects with dynamic morphologies.
The introduction of EWSS and DWSS is preceded by an extensive critical overview of the concepts of microsound synthesis, algorithmic composition, the two cultures of computer music, the heretical approach in composition, non-standard synthesis and sonic emergence, along with a thorough examination of algorithmic models and their application in sound synthesis and electroacoustic composition. This Thesis also proposes (i) a new definition of “algorithmic composition”, (ii) the term “totalistic algorithmic composition”, and (iii) four discrete aspects of non-standard synthesis.
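The core idea of assembling waveforms from algorithmically generated wave segments can be sketched as follows; the mapping from cellular-automaton rows to segment amplitudes is an invented illustration of the general approach, not the actual DWSS mechanism:

```python
def ca_step(cells, rule=30):
    """One update of an elementary cellular automaton (wrap-around)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_waveform(width=16, steps=64, seg_len=8):
    """Concatenate short linear segments whose target amplitudes are read
    from successive CA generations: a non-standard construction, with no
    acoustic model behind it."""
    cells = [0] * width
    cells[width // 2] = 1                 # single-seed initial condition
    samples, prev = [], 0.0
    for _ in range(steps):
        cells = ca_step(cells)
        # map the row's bit count to an amplitude in [-1, 1]
        target = sum(cells) / width * 2.0 - 1.0
        for k in range(1, seg_len + 1):   # linear ramp to the target
            samples.append(prev + (target - prev) * k / seg_len)
        prev = target
    return samples

wave = ca_waveform()
```

Other segment generators named in the abstract (tendency masks, stochastic functions, grammars) would slot in by replacing the CA-driven `target` computation.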
|
2 |
Physical modelling of the bowed string and applications to sound synthesis
Desvages, Charlotte Genevieve Micheline, January 2018
This work outlines the design and implementation of an algorithm to simulate two-polarisation bowed string motion, for the purpose of realistic sound synthesis. The algorithm is based on a physical model of a linear string, coupled with a bow, stopping fingers, and a rigid, distributed fingerboard. In one polarisation, the normal interaction forces are based on a nonlinear impact model. In the other polarisation, the tangential forces between the string and the bow, fingers, and fingerboard are based on a force-velocity friction curve model, also nonlinear. The linear string model includes accurate time-domain reproduction of frequency-dependent decay times. The equations of motion for the full system are discretised with an energy-balanced finite difference scheme, and integrated in the discrete time domain. Control parameters are dynamically updated, allowing for the simulation of a wide range of bowed string gestures. The playability range of the proposed algorithm is explored, and example synthesised gestures are demonstrated.
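As a much-simplified illustration of time-domain finite difference string simulation (a 1D wave equation with a crude frequency-independent loss term, not the thesis's two-polarisation bowed-string model), a plucked string can be sketched as:

```python
import math

def simulate_string(f0=196.0, sr=44100, dur=0.02, loss=0.001):
    """Explicit finite difference scheme for u_tt = c^2 u_xx with simple
    loss, on a unit-length string with fixed ends."""
    k = 1.0 / sr                      # time step
    c = 2 * f0                        # wave speed for a unit-length string
    N = int(1.0 / (c * k))            # grid intervals near the stability limit
    h = 1.0 / N                       # grid spacing
    lam = c * k / h                   # Courant number, must be <= 1
    u  = [0.0] * (N + 1)              # displacement at time n
    u1 = [0.0] * (N + 1)              # displacement at time n-1
    # raised-cosine initial displacement around 1/3 of the string
    for m in range(N + 1):
        x = m / N
        if 0.2 < x < 0.45:
            u[m] = u1[m] = 0.5 * (1 - math.cos(2 * math.pi * (x - 0.2) / 0.25))
    out = []
    for _ in range(int(dur * sr)):
        u2 = [0.0] * (N + 1)          # fixed boundaries stay at zero
        for m in range(1, N):
            u2[m] = (2 * u[m] - (1 - loss) * u1[m]
                     + lam * lam * (u[m + 1] - 2 * u[m] + u[m - 1])) / (1 + loss)
        out.append(u2[N // 2])        # read displacement near the middle
        u1, u = u, u2
    return out

sig = simulate_string()
```

The thesis's scheme additionally couples bow and finger nonlinearities and enforces stability through an energy balance rather than the Courant condition alone.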
|
3 |
Human expressivity in the control and integration of computationally generated audio
Heinrichs, Christian, January 2018
While physics-based synthesis offers a wide range of benefits in the real-time generation of sound for interactive environments, it is difficult to incorporate nuanced and complex behaviour that enhances the sound in a narrative or aesthetic context. The work presented in this thesis explores real-time human performance as a means of stylistically augmenting computational sound models. Transdisciplinary in nature, this thesis builds upon previous work in sound synthesis, film sound theory and physical sound interaction. Two levels on which human performance can enhance the aesthetic value of computational models are investigated: first, in the real-time manipulation of an idiosyncratic parameter space to generate unique sound effects, and second, in the performance of physical source models in synchrony with moving images. In the former, various mapping techniques were evaluated to control a model of a creaking door based on a proposed extension of practical synthesis techniques. In the latter, audio post-production professionals with extensive experience in performing Foley were asked to perform the soundtrack to a physics-based animation using bespoke physical interfaces and synthesis engines. The generated dataset was used to gain insights into stylistic features afforded by performed sound synchronisation, and potential ways of integrating them into an interactive environment such as a game engine. Interacting with practical synthesis models that have been extended to incorporate performability enables rapid generation of unique and expressive sound effects, while maintaining a believable source-sound relationship. Performatively authoring behaviours of sound models makes it possible to enhance the relationship between sound and image (both stylistically and perceptually) in ways precluded by one-to-one mappings between physics-based parameters.
Mediation layers are required in order to facilitate performed behaviour: in the design of the model on one hand, and in the integration of such behaviours into interactive environments on the other. This thesis provides some examples of how such a system could be implemented. Furthermore, some interesting observations are made regarding the design of physical interfaces for performing environmental sound, and the creative exploitation of model constraints.
|
4 |
Normal mode for chamber ensemble and electronics
Neuman, Israel, 01 May 2010
Normal Mode is a composition for chamber ensemble and electronics that makes reference to the microtonality employed in Turkish music. In this composition I have made an attempt to expand the timbral palette of standard Western instruments by the use of electronic sounds, which were constructed through digital sound synthesis. The microtonal frequencies used in this synthesis process were derived from the Turkish tonal system. The ensemble material, on the other hand, was conceived within a Western-influenced serial pitch organization. These two distinct influences invite a dynamic discourse between the ensemble and the electronics. As a new instrument, developed specifically for this composition, the electronics initially attracts more attention. Over time a new equilibrium is established and the electronics part is integrated into the ensemble.
The electronics part of Normal Mode was created in the object-oriented programming environment Max/MSP. It is realized in a performance of the composition with the same software. Five of the chapters of this thesis discuss the compositional process of the electronic part and the system of organization that guided this process. These chapters describe how this system was incorporated into the programming of the Max/MSP patchers that generated the composition's sound library and that perform the electronics part in real time. They also describe the relationships between the ensemble and the electronics. The sixth chapter presents the composition Normal Mode. The Max/MSP patchers that perform the electronics part are included in the supplement of this thesis.
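For illustration, microtonal frequencies of the kind described can be derived from the 53-comma (Holdrian) equal division of the octave often used to theorise Turkish tuning; the comma values chosen here are arbitrary examples, not the pitch set used in Normal Mode:

```python
def comma_frequencies(base=440.0, commas=(0, 4, 8, 13, 22, 31, 35, 44, 53)):
    """Frequencies from a 53 equal-division octave (Holdrian comma).
    Each comma is 2^(1/53), about 22.6 cents."""
    return [base * 2 ** (c / 53) for c in commas]

freqs = comma_frequencies()
```

Such a table could feed oscillator frequencies directly in a synthesis patch, keeping the electronics in the microtonal system while the ensemble plays its serial material.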
|
5 |
Percussion instrument modelling in 3D : sound synthesis through time domain numerical simulation
Torin, Alberto, January 2016
This work is concerned with the numerical simulation of percussion instruments based on physical principles. Three novel modular environments for sound synthesis are presented: a system composed of various plates vibrating under nonlinear conditions, a model for a nonlinear double membrane drum and a snare drum. All are embedded in a 3D acoustic environment. The approach adopted is based on the finite difference method, and extends recent results in the field. Starting from simple models, the modular instruments can be created by combining different components in order to obtain virtual environments with increasing complexity. The resulting numerical codes can be used by composers and musicians to create music by specifying the parameters and a score for the systems. Stability is a major concern in numerical simulation. In this work, energy techniques are employed in order to guarantee the stability of the numerical schemes for the virtual instruments, by imposing suitable coupling conditions between the various components of the system. Before presenting the virtual instruments, the various components are individually analysed. Plates are the main elements of the multiple plate system, and they represent the first approximation to the simulation of gongs and cymbals. Similarly to plates, membranes are important in the simulation of drums. Linear and nonlinear plate/membrane vibration is thus the starting point of this work. An important aspect of percussion instruments is the modelling of collisions. A novel approach based on penalty methods is adopted here to describe lumped collisions with a mallet and distributed collisions with a string in the case of a membrane. Another point discussed in the present work is the coupling between 2D structures like plates and membranes with the 3D acoustic field, in order to obtain an integrated system. It is demonstrated how the air coupling can be implemented when nonlinearities and collisions are present. 
Finally, some attention is devoted to the experimental validation of the numerical simulation in the case of tom-tom drums. Preliminary results comparing different types of nonlinear models for membrane vibration are presented.
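The energy-balance idea behind such schemes can be illustrated on the simplest possible case, the lossless 1D wave equation with fixed ends: the discrete energy below is conserved by the leapfrog scheme up to rounding error, which is what guarantees boundedness and hence stability. This is a textbook sketch, not one of the thesis's percussion schemes:

```python
import math

def energy(u, u1, k, h, c):
    """Discrete energy of the leapfrog scheme for u_tt = c^2 u_xx:
    kinetic term from the backward time difference, potential term from
    the product of spatial differences at adjacent time steps."""
    kin = sum((u[m] - u1[m]) ** 2 for m in range(len(u))) * h / (2 * k * k)
    pot = sum((u[m + 1] - u[m]) * (u1[m + 1] - u1[m])
              for m in range(len(u) - 1)) * c * c / (2 * h)
    return kin + pot

def run(N=50, steps=200, c=1.0):
    k = 1.0 / (c * N)                 # time step at the stability limit
    h = 1.0 / N                       # grid spacing
    u  = [math.sin(math.pi * m / N) for m in range(N + 1)]  # fixed ends
    u1 = u[:]                          # start at rest
    e0 = energy(u, u1, k, h, c)
    lam2 = (c * k / h) ** 2
    for _ in range(steps):
        u2 = [0.0] * (N + 1)
        for m in range(1, N):
            u2[m] = 2 * u[m] - u1[m] + lam2 * (u[m + 1] - 2 * u[m] + u[m - 1])
        u1, u = u, u2
    return e0, energy(u, u1, k, h, c)

e_start, e_end = run()
```

In the thesis's modular instruments, the analogous bookkeeping extends across plate, membrane, collision and air-coupling terms, with the coupling conditions chosen so the total energy cannot grow.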
|
6 |
[en] POLYPHONIC SYNTHESIZER CONTROLLED BY MICROCOMPUTER / [pt] SINTETIZADOR POLIFÔNICO CONTROLADO POR MICROCOMPUTADOR
MARCIO DA COSTA PEREIRA BRANDAO, 06 August 2007
[pt] Este trabalho descreve um sistema destinado à síntese de quatro sons simultâneos controlado por microcomputador, utilizando a técnica de síntese subtrativa. O hardware utiliza blocos analógicos que realizam as funções de geração, filtragem e conformação, e a execução em tempo real é possível através de um teclado musical. A programação desenvolvida permite ao usuário atuar nestes blocos determinando parâmetros básicos do som, que podem ser armazenados e recuperados de discos flexíveis. O hardware permite a síntese da voz humana através de um software a ser desenvolvido, que pode utilizar uma biblioteca de fonemas em discos flexíveis, contendo as diversas funções de controle necessárias. / [en] This work describes a microcomputer-controlled system for the synthesis of four simultaneous sounds using the subtractive synthesis technique. The hardware employs analogue blocks that perform the generation, filtering and wave-shaping functions, and real-time performance is possible through a musical keyboard. The software developed allows the user to act on these blocks, setting basic sound parameters that can be stored on and retrieved from floppy disks. The hardware also allows synthesis of the human voice by means of software yet to be developed, which may use a phoneme library on floppy disks containing the various control functions required.
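A minimal software sketch of the generate/filter/shape voice chain described above (the sawtooth source, one-pole filter and linear envelope are illustrative stand-ins, not the hardware's actual analogue circuits):

```python
import math

def subtractive_note(freq=220.0, sr=44100, dur=0.25, cutoff=1200.0):
    """One voice of a generate/filter/shape subtractive chain:
    sawtooth oscillator, one-pole low-pass filter, linear decay envelope."""
    n = int(sr * dur)
    # generation: naive sawtooth in [-1, 1)
    saw = [2.0 * ((freq * t / sr) % 1.0) - 1.0 for t in range(n)]
    # filtering: one-pole low-pass, y[t] = y[t-1] + a * (x[t] - y[t-1])
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
    out, y = [], 0.0
    for t, x in enumerate(saw):
        y += a * (x - y)
        env = 1.0 - t / n            # shaping: linear decay envelope
        out.append(y * env)
    return out

voice = subtractive_note()
```

Mixing four such voices at different frequencies would correspond to the system's four simultaneous sounds.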
|
7 |
Re-Sonification of Objects, Events, and Environments
January 2013
abstract: Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds. / Dissertation/Thesis / Ph.D. Electrical Engineering 2013
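As a point of comparison for such resonator-based object models, a bank of two-pole resonators excited by an impulse gives a minimal modal sketch of a struck object; the mode frequencies, decay times and gains below are invented, and a banded waveguide would add a delay-line travelling-wave path per band:

```python
import math

def modal_strike(modes, sr=44100, dur=0.5):
    """Impulse response of a bank of two-pole resonators: a modal sketch
    of a struck object given (freq_hz, decay_t60_s, gain) triples."""
    n = int(sr * dur)
    out = [0.0] * n
    for f, t60, g in modes:
        r = 10 ** (-3.0 / (t60 * sr))            # pole radius from T60
        w = 2 * math.pi * f / sr
        b1, b2 = 2 * r * math.cos(w), -r * r     # y[t] = b1*y[t-1] + b2*y[t-2] + x[t]
        y1 = y2 = 0.0
        for t in range(n):
            x = g if t == 0 else 0.0             # unit impulse excitation
            y = b1 * y1 + b2 * y2 + x
            out[t] += y
            y2, y1 = y1, y
    return out

tone = modal_strike([(180.0, 0.4, 0.5), (421.0, 0.3, 0.3), (763.0, 0.2, 0.2)])
```

Replacing the impulse with an estimated excitation signal is where the subspace-constrained input estimation described above would enter.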
|
8 |
Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores / Intuitive control of solid-interaction sound synthesis : toward sonic metaphors
Conan, Simon, 03 December 2014
Un des enjeux actuels de la synthèse sonore est le contrôle perceptif (i.e. à partir d’évocations) des processus de synthèse. En effet, les modèles de synthèse sonore dépendent généralement d’un grand nombre de paramètres de bas niveau dont la manipulation nécessite une expertise des processus génératifs. Disposer de contrôles perceptifs sur un synthétiseur offre cependant beaucoup d’avantages en permettant de générer les sons à partir d’une description du ressenti et en offrant à des utilisateurs non-experts la possibilité de créer et de contrôler des sons intuitivement. Un tel contrôle n’est pas immédiat et se base sur des hypothèses fortes liées à notre perception, notamment la présence de morphologies acoustiques, dénommées ``invariants'', responsables de l’identification d’un évènement sonore. Cette thèse aborde cette problématique en se focalisant sur les invariants liés à l’action responsable de la génération des sons. Elle s’articule suivant deux parties. La première a pour but d’identifier des invariants responsables de la reconnaissance de certaines interactions continues : le frottement, le grattement et le roulement. Le but est de mettre en œuvre un modèle de synthèse temps-réel contrôlable intuitivement et permettant d’effectuer des transitions perceptives continues entre ces différents types d’interactions (e.g. transformer progressivement un son de frottement en un son de roulement). Ce modèle s'inscrira dans le cadre du paradigme ``action-objet'' qui stipule que chaque son résulte d’une action (e.g. gratter) sur un objet (e.g. une plaque en bois). Ce paradigme s’adapte naturellement à une approche de la synthèse par modèle source-filtre, où l’information sur l’objet est contenue dans le ``filtre'', et l’information sur l’action dans la ``source''.
Pour ce faire, diverses approches sont abordées : études de modèles physiques, approches phénoménologiques et tests perceptifs sur des sons enregistrés et synthétisés. La seconde partie de la thèse concerne le concept de ``métaphores sonores'' en élargissant la notion d’objet à des textures sonores variées. La question posée est la suivante : étant donnée une texture sonore quelconque, est-il possible de modifier ses propriétés intrinsèques pour qu’elle évoque une interaction particulière comme un frottement ou un roulement par exemple ? Pour créer ces métaphores, un processus de synthèse croisée est utilisé dans lequel la partie ``source'' est basée sur les morphologies sonores des actions précédemment identifiées et la partie ``filtre'' restitue les propriétés de la texture. L’ensemble de ces travaux ainsi que le paradigme choisi offre dès lors de nouvelles perspectives pour la constitution d’un véritable langage des sons. / Perceptual control (i.e. from evocations) of sound synthesis processes is a current challenge. Indeed, sound synthesis models generally involve a lot of low-level control parameters, whose manipulation requires a certain expertise with respect to the sound generation process. Thus, intuitive control of sound generation is interesting for users, and especially non-experts, because they can create and control sounds from evocations. Such a control is not immediate and is based on strong assumptions linked to our perception, and especially the existence of acoustic morphologies, so-called ``invariants'', responsible for the recognition of specific sound events. This thesis tackles the problem by focusing on invariants linked to specific sound generating actions. It comprises two main parts. The first aims to identify invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching and rolling.
The aim is to develop a real-time sound synthesizer with intuitive controls that enables users to morph continuously between the different interactions (e.g. progressively transform a rubbing sound into a rolling one). The synthesis model will be developed in the framework of the ``action-object'' paradigm which states that sounds can be described as the result of an action (e.g. scratching) on an object (e.g. a wood plate). This paradigm naturally fits the well-known source-filter approach for sound synthesis, where the perceptually relevant information linked to the object is described in the ``filter'' part, and the action-related information is described in the ``source'' part. To derive our generic synthesis model, several approaches are treated: physical models, phenomenological approaches and listening tests with recorded and synthesized sounds. The second part of the thesis deals with the concept of ``sonic metaphors'' by expanding the object notion to various sound textures. The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties such that it evokes a particular interaction, like rolling or rubbing for instance? To create these sonic metaphors, a cross-synthesis process is used where the ``source'' part is based on the sound morphologies linked to the actions previously identified, and the ``filter'' part renders the sound texture properties. This work, together with the chosen paradigm, offers new perspectives toward building a genuine language of sounds.
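The source-filter reading of the action-object paradigm can be sketched as follows: a sparse, irregular 'source' of noise bursts stands in for a scratching-like action, and a single resonance stands in for the 'object'. All parameters are illustrative, not the invariants identified in the thesis:

```python
import math, random

def source_filter(duration=0.3, sr=44100, res_freq=900.0, res_t60=0.08):
    """Cross a scratching-like 'source' (irregular impulses) with an
    'object' reduced to one two-pole resonance."""
    random.seed(1)
    n = int(sr * duration)
    # source: sparse impacts with random spacing and strength
    src = [0.0] * n
    t = 0
    while t < n:
        src[t] = random.uniform(0.3, 1.0)
        t += random.randint(sr // 200, sr // 40)   # 5 to 25 ms apart
    # filter: one two-pole resonance of the "object"
    r = 10 ** (-3.0 / (res_t60 * sr))              # pole radius from T60
    w = 2 * math.pi * res_freq / sr
    b1, b2 = 2 * r * math.cos(w), -r * r
    out, y1, y2 = [], 0.0, 0.0
    for x in src:
        y = b1 * y1 + b2 * y2 + x
        out.append(y)
        y2, y1 = y1, y
    return out

sound = source_filter()
```

Morphing toward rolling or rubbing would then amount to reshaping the source statistics while keeping the object filter fixed, which is the separation the paradigm is designed to exploit.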
|
9 |
Generátor hudby a zvukové efekty / Generator of Music and Sound Effects
Vaňků, Nikita, January 2018
The aim of this work is to design a digital synthesizer and modulator on an embedded system. The work surveys existing digital synthesizers and modulators on embedded systems and PCs, and uses the knowledge gained to present a possible design on a Field Programmable Gate Array.
|
10 |
Návrh virtuálního síťového kolaborativního zvukového nástroje / Design of Net-Based Virtual Collaborative Musical Instrument
Liudkevich, Denis, January 2020
The aim of this work was to create an online platform for multi-user sound creation with original sound synthesis tools. The educational context of the application was also taken into account: the controls of the sound parameters are hidden behind subconsciously familiar physical phenomena and the game-like form of the application. A substantial part of the logic and all of the instruments' graphics are written in the JavaScript programming language with its library p5.js; this code runs on the client side and communicates with a Node.js-based server via a WebSocket. The audio part runs on another server in the SuperCollider environment, is streamed via Icecast, and communicates with the main server via OSC messages. The application contains three instruments for generating sounds and one effects module. Each instrument is designed for multiple users and requires their cooperation. Acceptable transmission speeds and minimal computational demands were achieved by optimising the instruments' internal algorithms, the way graphic content is displayed, and the routing of the individual sound modules. The sound is specific to each instrument. The instruments in the application are tuned and designed so that users can both achieve interesting sound results on their own and play their role in the whole together with others. Methods such as granular synthesis, chaotic oscillators, string-instrument modelling and filter combinations are used to generate sound. Great emphasis in the development of the application was placed on the separation of roles, the simultaneous control of one instrument by several players, and communication between users through playing the instruments and through text chat. An important part is also a block for displaying descriptive information.
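OSC messages of the kind exchanged with a SuperCollider server have a simple binary layout that can be assembled by hand; the address `/instrument/pluck` and its arguments below are made-up examples, not the application's actual protocol:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message: null-padded address string, a
    ',ff...' type-tag string, then big-endian 32-bit float arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for v in floats:
        msg += struct.pack(">f", v)
    return msg

packet = osc_message("/instrument/pluck", 440.0, 0.8)
```

In practice such a packet would be sent over UDP (or wrapped for a streaming transport) to the audio server, which dispatches on the address pattern.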
|