591
Suivi de chansons par reconnaissance automatique de parole et alignement temporel [Song tracking by automatic speech recognition and temporal alignment]. Beaudette, David. January 2010 (has links)
Score following is defined as the on-computer synchronization between a known musical score and the audio signal of a performer playing that score. In the particular case of the singing voice, there is still room for improvement in existing algorithms, especially for real-time score following. The goal of this project is therefore to implement a robust, real-time score-following application using the digitized singing-voice signal and the song lyrics. The proposed software uses several features of the singing voice (energy, correspondence with vowels, and the signal's zero-crossing count) and matches them against the musical score in MusicXML format. These features, extracted for each frame, are aligned with the phonetic units of the score. In parallel with this short-term alignment, the system adds a second, more reliable level of position estimation by associating a segmentation of the signal into singing blocks with continuously sung sections of the score. The system's performance is evaluated by presenting offline alignments obtained on 3 song excerpts performed by 2 different singers, a man and a woman, in English and in French.
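The frame-level alignment described in this abstract can be illustrated with a generic dynamic-programming alignment (DTW) over per-frame features. This is only a sketch, not the author's system: the function names, frame length, and feature set are invented here, and a real score follower would align against phoneme templates derived from the MusicXML score rather than a raw reference feature sequence.

```python
import numpy as np

def frame_features(signal, frame_len=256):
    """Per-frame energy and zero-crossing count, two of the voice features
    mentioned in the abstract (vowel matching is omitted in this sketch)."""
    n = len(signal) // frame_len
    feats = []
    for i in range(n):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = float(np.sum(frame ** 2))
        zero_crossings = int(np.sum(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zero_crossings))
    return np.array(feats, dtype=float)

def dtw_align(obs, ref):
    """Dynamic time warping: aligns an observed feature sequence to a
    reference sequence; returns total cost and the warping path."""
    n, m = len(obs), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = float(np.linalg.norm(obs[i - 1] - ref[j - 1]))
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:  # backtrack along the cheapest predecessors
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(D[n, m]), path[::-1]
```

An offline alignment, as in the evaluation above, would run `dtw_align` over the whole excerpt; a real-time follower would instead use an online variant that only extends the path as frames arrive.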
592
Simulation of tribological interactions in bonded particle-solid contacts. Van Wyk, Geritza. 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: In this study, tool forces from rock cutting tests were numerically simulated through a discrete element method (DEM) using PFC3D™. Tribological interactions such as contact, shearing, fracturing, friction and wear were represented in these cutting simulations. Particle assemblies, representing Paarl granite and Sandstone-2, were created in PFC3D™ through a material-genesis procedure. The macro-properties of these particle assemblies, namely Young’s modulus, Poisson’s ratio, uniaxial and triaxial compressive strength and Brazilian tensile strength, were calibrated by modelling the uniaxial and triaxial compressive strength test and the Brazilian tensile strength test. The calibration was done through adjustment of the micro-properties of the assembly, namely the stiffness and strength parameters of the particles and bonds. The influence of particle size on the calibration was also investigated. These assemblies were used in the rock cutting tests. Results suggested that DEM can successfully reproduce damage formation during the calibration tests. From the calibration results, it was also concluded that particle size is not a free parameter but greatly influences the macro-properties.
Different rock cutting tools were simulated, namely point-attack (conical) picks, chisel-shaped tools and button-shaped tools. The numerical cutting tools were treated as rigid walls to simplify the simulation and the tool forces were not influenced by wear. In each simulation the cutting tools advanced at a constant velocity. The tool forces acting on the cutting tool, in three orthogonal directions, were recorded during the numerical simulations and the peak cutting forces were predicted by theoretical equations. The damage to the Paarl granite and Sandstone-2 assemblies was revealed as broken bonds, which merge into microscopic fractures. The mean peak cutting forces of sharp cutting tools obtained from numerical, theoretical and experimental models (from the literature) were compared. Finally the influence of factors, including wear on the tool and depth of cut, on the value of tool forces was also investigated.
The results from the rock cutting tests revealed that the correlation between the numerical and experimental models, as well as between the theoretical and experimental models, was not strong when using sharp point-attack and chisel-shaped picks. It was concluded that wear plays a substantial part in the cutting process and has to be included in the numerical simulation for the results to be accurate and verifiable. This study also found that there is a non-linear increase in tool forces with an increase in depth of cut, since the contact area increases. At larger cutting depths, chip formation also generally increased; damage to the sample as well as wear on the cutting tool will therefore be minimized at shallow cutting depths. Overall this study concludes that DEM is capable of simulating calibration methods and rock cutting processes with different cutting tools, and of producing results which are verifiable with experimental data. Numerical prediction of tool forces will therefore support the design of efficient cutting systems and operational parameters, as well as the performance prediction of excavation machines.
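The finding that particle size is not a free parameter can be illustrated with a toy one-dimensional chain of bonded particles. This is a deliberately minimal sketch, not PFC3D™'s bonded-particle model (which calibrates normal and shear stiffnesses and strengths in a 3D packing); the point is only the micro-to-macro relation that makes recalibration necessary when particle size changes.

```python
def chain_macro_stiffness(length, particle_diameter, k_bond):
    """Effective axial stiffness of a 1D chain of bonded particles.
    n_bonds identical bond springs in series give k_macro = k_bond / n_bonds,
    so halving the particle diameter roughly halves the macro stiffness
    unless the micro-stiffness is recalibrated."""
    n_bonds = int(round(length / particle_diameter)) - 1
    return k_bond / n_bonds

def calibrate_k_bond(target_k_macro, length, particle_diameter):
    """Invert the series relation: the micro (bond) stiffness that
    reproduces a measured macro stiffness at a given particle size."""
    n_bonds = int(round(length / particle_diameter)) - 1
    return target_k_macro * n_bonds
```

In the same spirit, the study's calibration loop adjusts micro-properties (stiffness and strength of particles and bonds) until the simulated uniaxial, triaxial and Brazilian tests reproduce the measured macro-properties.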
593
Experimental and numerical investigation into the destemming of grapes. Lombard, Stephanus Gerhardus. 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The removal of grape berries from the stems is an important step in the wine-making process. Various problems are experienced with the destemming machines currently available, in which the berries are mechanically removed and separated from the stems by a rotating beater shaft and drum. Not all berries are removed from the stems, and broken stems can end up with the removed berries, which can result in unwanted characters and flavours in the wine. The development of these machines is currently limited to experimental tests.

In this study, the destemming process was investigated experimentally. The ability of the Discrete Element Method (DEM) to simulate this process was also investigated. A range of experiments was designed to obtain the material properties of the grapes. These experiments included the measurement of the stem stiffness and break strength, the berry stiffness, and the force needed to remove a berry from the stem.

Experiments were conducted to gain further insight into the destemming process. Firstly, a simplified destemming machine with only a beater shaft and a single grape bunch was built. The influence of the bunch size and the speed of the beater shaft on the number of berries removed from the stems was investigated. Secondly, field tests on a commercial destemming machine were conducted and the performance of the machine was measured.

A DEM model of both the simplified and the commercial destemming machine was built. Commercial DEM software was used with linear contact and bond models. The stems were built from spherical particles bonded together, and a single spherical particle was used to represent each berry. The measured stiffnesses and break strengths were used to set the particle and bond properties. Modelling the simplified destemming machine, it was found that the DEM model could accurately predict the effect of the bunch size and the speed of the beater shaft on the number of berries removed from the stems. The model of the commercial destemming machine could accurately predict the machine’s performance in terms of the number of berries removed as well as the number of broken stems.
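A back-of-envelope check connects the measured berry-removal force to the beater-shaft speed: a berry detaches when the inertial load from rotation exceeds the force needed to pull it off the stem. This is a hypothetical illustration, not the bonded-particle DEM model of the study; the masses, radii and forces below are invented.

```python
import math

def berry_detaches(berry_mass, radius, beater_rpm, removal_force):
    """Does the inertial force on a berry spun at the beater-shaft speed
    exceed the measured berry-removal force?  F = m * omega^2 * r.
    All numbers fed to this function are illustrative placeholders."""
    omega = beater_rpm * 2.0 * math.pi / 60.0   # shaft speed in rad/s
    return berry_mass * omega ** 2 * radius > removal_force
```

For a 2 g berry at a 10 cm radius and a 1 N removal force, this predicts no detachment at 600 rpm but detachment at 900 rpm, consistent with the observed sensitivity of berry removal to beater-shaft speed.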
594
The impact of trust, risk and disaster exposure on microinsurance demand: Results of a DCE analysis in Cambodia. Fiala, Oliver; Wende, Danny. 31 May 2016 (has links) (PDF)
Natural disasters are increasing in frequency and intensity and have devastating humanitarian and economic impacts on individuals, particularly in developing countries. Microinsurance is seen as one promising instrument of disaster risk management; however, demand for such products remains low. Using behavioural games and a discrete choice experiment, this paper analyses the demand for hypothetical microinsurance products in rural Cambodia and contributes significant household-level evidence to the current research. A general preference for microinsurance can be found, with demand significantly affected by price, provider, requirements for prevention, and combinations with credit. Furthermore, financial literacy, risk aversion, levels of trust and previous disaster experience affect individual demand for flood insurance in rural Cambodia.
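Discrete choice experiments of this kind are typically analysed with a conditional (McFadden) logit model, in which each product attribute shifts utility and choice probabilities follow from a softmax over utilities. The sketch below is generic, not the paper's estimated model: the attribute matrix and coefficients are invented for illustration.

```python
import numpy as np

def choice_probabilities(X, beta):
    """Conditional (McFadden) logit: utility U_j = x_j . beta + eps_j with
    i.i.d. Gumbel errors gives P(j) = exp(x_j . beta) / sum_k exp(x_k . beta)."""
    u = X @ beta
    u = u - u.max()            # subtract max for numerical stability
    e = np.exp(u)
    return e / e.sum()

# Hypothetical attribute matrix for three insurance products plus an opt-out.
# Columns: price, trusted provider (0/1), bundled with credit (0/1).
X = np.array([
    [2.0, 1.0, 0.0],   # product A
    [1.5, 0.0, 0.0],   # product B
    [1.0, 0.0, 1.0],   # product C
    [0.0, 0.0, 0.0],   # no insurance
])
beta = np.array([-0.8, 1.2, -0.5])   # illustrative coefficients only
probabilities = choice_probabilities(X, beta)
```

Estimation would recover `beta` by maximum likelihood from observed choices; the signs here mimic the paper's qualitative findings (price lowers demand, a trusted provider raises it).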
595
Optimization approaches for designing baseball scout networks under uncertainty. Ozlu, Ahmet Oguzhan. 27 May 2016 (has links)
Major League Baseball (MLB) is a 30-team North American professional baseball league, and Minor League Baseball (MiLB) is the hierarchy of developmental professional baseball teams for MLB. Most MLB players first develop their skills in MiLB, and MLB teams employ scouts: experts who evaluate the strengths, weaknesses, and overall potential of these players. In this dissertation, we study the problem of designing a scouting network for an MLB team. We introduce the problem to the operations research literature to help teams make strategic and operational decisions when managing their scouting resources. The thesis consists of three chapters that address questions such as how scouts should be assigned to the available MiLB teams, how scouts should be routed around the country, how many scouts are needed to perform the major scouting tasks, whether there are trade-offs between the scouting objectives, and, if there are, what the outcomes and insights are. In the first chapter, we study the problem of assigning and scheduling minor league scouts for MLB teams. The problem has multiple objectives. We formulate it as an integer program; use decomposition and both column-generation-based and problem-specific heuristics to solve it; and evaluate policies on multiple objective dimensions based on 100 bootstrapped season schedules. Our approach can allow teams to improve operationally by finding better scout schedules, to understand quantitatively the strategic trade-offs inherent in scout assignment policies, and to select the assignment policy whose strategic and operational performance best meets their needs. In the second chapter, we study the problem under uncertainty. In reality, there are always disruptions to the schedules: players are injured, scouts become unavailable, games are delayed due to bad weather, etc.
We present a minor league baseball season simulator that generates random disruptions to the scouts' schedules and uses optimization-based heuristic models to recover the disrupted schedules. We evaluate the strategic benefits of different policies for team-to-scout assignment using the simulator. Our results demonstrate that the deterministic approach is insufficient for evaluating the benefits and costs of each policy, and that a simulation approach is also much more effective at determining the value of adding an additional scout to the network. The real scouting-network design instances we solve in the first two chapters have several complexities that make them hard to study, such as idle-day constraints, varying season lengths, off days for teams in the schedule, days on which some teams play and others do not, etc. In the third chapter, we analyze a simplified version of the Single Scout Problem (SSP), stripping away much of the real-world complexity that complicates SSP instances. Even for this stylized, archetypal version of SSP, we find that small instances can be computationally difficult. We show, by reduction from the Minimum-Cost Hamiltonian Path Problem, that the archetypal version of SSP is NP-complete, even without the additional complexity introduced by real scheduling and scouting operations.
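A minimal baseline for the team-to-scout assignment decision is a greedy nearest-feasible rule, which the dissertation's integer-programming and column-generation methods would improve on. This sketch is hypothetical: the coordinates, capacity model, and function names are invented, and real instances add the routing and idle-day constraints described above.

```python
import math

def greedy_assign(teams, scouts, capacity):
    """Baseline heuristic for team-to-scout assignment: each team goes to
    the nearest scout home base that still has capacity.  A simple feasible
    starting point, NOT the decomposition approach of the dissertation."""
    load = {s: 0 for s in scouts}
    assignment = {}
    for team, (tx, ty) in teams.items():
        best, best_dist = None, float("inf")
        for scout, (sx, sy) in scouts.items():
            d = math.hypot(tx - sx, ty - sy)
            if load[scout] < capacity and d < best_dist:
                best, best_dist = scout, d
        if best is None:
            raise ValueError("no scout with remaining capacity")
        assignment[team] = best
        load[best] += 1
    return assignment
```

Evaluating such a policy over many bootstrapped season schedules, as in the first chapter, separates its average-case performance from the luck of a single schedule.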
596
Energy Efficient Lighting: Consumer Preferences, Choices, and System Wide Effects. Min, Jihoon. 01 December 2014 (has links)
Lighting accounts for nearly 20% of overall U.S. electricity consumption, 14% of U.S. residential electricity consumption, and 6% of total U.S. carbon dioxide equivalent (CO2e) emissions. A transition to alternative energy-efficient technologies could reduce this energy consumption considerably. We studied three questions related to energy-efficient lighting choices and their consequences:
• Question 1: How large is the system-wide effect of a residential lighting retrofit with more efficient lighting technologies?
• Question 2: Based on stated preference (SP) data, which factors influence consumer choices for general service light bulbs? What is the effect of the new lighting efficiency label mandated by the Federal Trade Commission?
• Question 3: What can we learn about market trends and consumer choices from consumer panel data (i.e., revealed preference (RP) data) for general service light bulbs between 2004 and 2009? How can we compare the findings from SP and RP data, and which findings are robust across the two?
In Chapter 2, we focus on the issue of lighting heat replacement effects. Lighting efficiency goals have been emphasized in various U.S. energy efficiency policies. However, incandescent bulbs release up to 95% of input energy as heat, and it has been argued that replacing them with more efficient alternatives has a side effect on overall building energy consumption: it increases the heating service that needs to be provided by the heating systems and decreases the cooling service that needs to be provided by the cooling systems. We investigate the net energy consumption, CO2e emissions, and savings in energy bills for single-family detached houses across the U.S. as one moves towards more efficient lighting systems.
In some regions, these heating and cooling effects from more efficient lighting can undermine up to 40% of the originally intended primary energy savings, erode anticipated carbon savings completely, and lead to 30% less household monetary savings than intended. However, this overall effect is at most one percent of a house's total emissions or energy consumption. The size of the effect depends on various regional factors such as climate, electricity fuel mix, differences in the emission factors of the main energy sources used for heating and cooling, and electricity prices. Other tested factors, such as building orientation, insulation level, occupancy scenario, or day length, do not significantly affect the results. In Chapter 3, we focus on factors that drive consumer choices for light bulbs. We collected stated preference data from a choice-based conjoint field experiment with 183 participants. We estimate discrete choice models from the data and find that politically liberal consumers have a stronger preference for compact fluorescent lighting technology and for low energy consumption. Greater willingness-to-pay for lower energy consumption and longer life is observed in conditions where estimated operating cost information was provided. Providing estimated annual cost information to consumers reduces their implicit discount rate by a factor of five, lowering barriers to adoption of energy-efficient alternatives with higher up-front costs; however, even with cost information provided, consumers continue to use implicit discount rates of around 100%, higher than those estimated for other energy technologies. Finally, we complement the stated preference study with a revealed preference study, because stated preference data alone have limitations in explaining consumer choices, as purchases are affected by many other factors outside the experimenter's control.
We investigate consumer preferences for lighting technology based on revealed preference data from 2004 to 2009. We assess the trends in lighting sales for different lighting technologies across the country and by store type. We find that, across the period between 2004 and 2009, sales of all general service light bulbs were almost monotonically decreasing, while CFL sales peaked in 2007. Thanks to increasing adoption of CFLs during the period, newly purchased light bulbs lowered carbon emissions and electricity consumption with little sacrifice in total lumens produced. We study consumer preferences for real light bulbs by estimating choice models, from which we estimate willingness-to-pay (WTP) for light bulb attributes (wattage and type) and the implicit discount rates (IDR) consumers adopt in their purchases. We find that Wal-Mart's 2007 campaign for efficient bulbs is potentially related to the peak in CFL adoption that year, in addition to the effects of the EISA and other factors and programs around the same period. Consumers are willing to pay $1.84 more for a change from an incandescent bulb to a CFL and -$0.06 for a 10 W increase; these values also include willingness-to-pay for corresponding changes in unobserved attributes such as life and color. IDRs for four representative states range between roughly 230% and 330%, similar to the range we estimate from the choice experiment. Overall, even with energy efficiency labels, nationwide promotion of CFLs by retailers, and better availability of CFLs in the transforming residential lighting market, the barriers to energy-efficient residential lighting persist, as reflected in the high implicit discount rates observed in the models. While we can expect the EISA to be effective in lowering these barriers through regulation, it alone will not close the energy efficiency gap in the residential lighting sector.
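The implicit discount rate used throughout this abstract is the rate at which a consumer is indifferent between an efficient bulb's up-front price premium and the stream of annual energy savings over its life. The sketch below solves for that rate by bisection; it is an illustration of the concept only, not the estimation procedure of the thesis, and the numbers in the usage note are invented.

```python
def implicit_discount_rate(price_premium, annual_savings, life_years, tol=1e-6):
    """Rate r at which the up-front premium of the efficient bulb equals the
    present value of its annual energy savings:
        premium = sum_{t=1..T} savings / (1 + r)^t
    npv(r) is decreasing in r, so bisection on [0, 5000%] finds the root."""
    def npv(r):
        return sum(annual_savings / (1 + r) ** t
                   for t in range(1, life_years + 1))
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > price_premium:
            lo = mid   # savings outweigh premium -> implied rate is higher
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a $2 premium repaid by $4 of savings per year over 5 years implies a rate near 200%: only a consumer discounting the future that heavily would decline the efficient bulb, which is the sense in which the high IDRs above signal persistent adoption barriers.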
597
On the classification of integrable differential/difference equations in three dimensions. Roustemoglou, Ilia. January 2015 (has links)
Integrable systems arise in nonlinear processes and, in both their classical and quantum versions, have many applications in various fields of mathematics and physics, which makes them a very active research area. In this thesis, the problem of integrability of multidimensional equations, especially in three dimensions (3D), is explored. We investigate systems of differential, differential-difference and discrete equations, which are studied via a novel approach developed over the last few years. This approach is essentially a perturbation technique based on the so-called method of dispersive deformations of hydrodynamic reductions. This method is used to classify a variety of differential equations, including soliton equations and scalar higher-order quasilinear PDEs. As part of this research, the method is extended to differential-difference equations and consequently to purely discrete equations. The passage to discrete equations is important since, in the case of multidimensional systems, very few integrability criteria exist. Complete lists of various classes of integrable equations in three dimensions are provided, as well as partial results related to the theory of dispersive shock waves. A new definition of integrability, based on hydrodynamic reductions, is used throughout; it is a natural analogue of the generalized hodograph transform in higher dimensions. The definition is also justified by the fact that Lax pairs, the most well-known integrability criterion, are given for all classification results obtained.
598
Computational modelling of monocyte deposition in abdominal aortic aneurysms. Hardman, David. January 2011 (has links)
Abdominal aortic aneurysm (AAA) disease involves a dilation of the aorta below the renal arteries. If the aneurysm becomes sufficiently dilated and tissue strength is less than vascular pressure, rupture of the aorta occurs, entailing a high mortality rate. Despite improvements in surgical technique, the mortality rate for emergency repair remains high, and so an accurate predictor of rupture risk is required. Inflammation and the associated recruitment of monocytes into the aortic wall are critical in the pathology of AAA disease, stimulating the degradation and remodelling of the vessel wall. Areas with high concentrations of macrophages may experience an increase in tissue degradation and therefore an increased risk of rupture. Determining the magnitude and distribution of monocyte recruitment can help us understand the pathology of AAA disease and add spatial accuracy to existing rupture risk prediction models. In this study, finite element computational fluid dynamics simulations of AAA haemodynamics are seeded with monocytes to elucidate patterns of cell deposition and probability of recruitment. Haemodynamics are first simulated in simplified AAA geometries of varying diameters with a patient-averaged flow waveform inlet boundary condition. This allows a comparison with previous experimental investigations as well as determining trends in monocyte adhesion with aneurysm progression. Previous experimental investigations show a transition to turbulent flow occurring during the deceleration phase of the cardiac cycle. There has thus far been no investigation into the accuracy of turbulence models in simulating AAA haemodynamics, and so simulations are compared using RNG κ − ε, κ − ω and LES turbulence models. The RNG κ − ε model is insufficient to model secondary flows in AAA, and LES models are sensitive to inlet turbulence intensity. The probability of monocyte adhesion and recruitment depends on cell residence time and local wall shear stress.
A near-wall particle residence time (NWPRT) model is created, incorporating a wall shear stress limiter based on in vitro experimental data. Simulated haemodynamics show qualitative agreement with experimental results. Peaks of maximum NWPRT move downstream in successively larger geometries, correlating with vortex behaviour. Average NWPRT rises sharply in models above a critical maximum diameter. These techniques are then applied to patient-specific AAAs. Geometries are created from CT slices and velocity boundary conditions are taken from Phase Contrast-MRI (PC-MRI) data for 3 patients. There is no gold standard for inlet boundary conditions, so simulations using 3 velocity components, 1 velocity component and parabolic flow profiles at the inlet are compared with each other and with PC-MRI data at the AAA midsection. The general trends in flow and wall shear stress are similar between simulations with 3 and 1 components of inlet velocity, despite differences in the nature and complexity of the secondary flow. Applying parabolic velocity profiles, however, can cause significant deviations in haemodynamics. Axial velocities show average to good correlation with PC-MRI data, though the lower-magnitude radial velocities produce high levels of noise in the raw data, making comparisons difficult. Patient-specific NWPRT models show that monocyte infiltration is most likely at or around the iliac bifurcation.
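The core bookkeeping behind a residence-time model of this kind can be sketched in a few lines: accumulate time only while a tracked cell is close to the wall and the local wall shear stress is low enough for adhesion to be plausible. This is a toy illustration of the idea, not the study's NWPRT model; the thresholds are placeholders rather than the in vitro values referred to above.

```python
def nwprt(wall_distances, wall_shear, dt, near_wall_dist=0.1, wss_limit=0.5):
    """Toy near-wall particle residence time: accumulate dt for every time
    step in which the tracked monocyte is within `near_wall_dist` of the
    wall AND local wall shear stress is below `wss_limit` (the shear-stress
    limiter).  Threshold values are illustrative placeholders."""
    t = 0.0
    for dist, wss in zip(wall_distances, wall_shear):
        if dist < near_wall_dist and wss < wss_limit:
            t += dt
    return t
```

In the full model this quantity would be accumulated per wall region over many seeded particles, so that peaks in NWPRT mark likely sites of monocyte recruitment.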
599
Design and realization of switched capacitor filters. Yassine, Hatem Mahmoud. January 1985 (has links)
No description available.
600
Two Generalizations of the Filippov Operation. Eryuzlu, Menevse. 01 April 2016 (has links)
The purpose of this thesis is to generalize Filippov's operation in order to obtain more useful results. It includes two main parts: the C-Filippov operation for the finite and countable cases, and the Filippov operation with different measures. In the first chapter, we give brief information about the importance of Filippov's operation, our goal, and the ideas behind our generalizations. In the second chapter, we give the necessary background. In the third chapter, we introduce the Filippov operation, explain how to calculate the Filippov of a function, and give some of its properties. In the fourth chapter, we introduce a generalization of the Filippov operation, the C-Filippov, and give some of its properties which we need for the next chapter. In the fifth chapter, the first main part, we discuss some properties of the C-Filippov for special cases and observe the differences and common properties between the Filippov and C-Filippov operations. Finally, in the sixth chapter, we present the other generalization of the Filippov operation, the Filippov with different measures. We observe the properties of the corresponding Filippovs when the relationship between the measures is known. We finish the thesis by summarizing our work and discussing future work.
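For orientation, the operation being generalized is, in its standard form from the literature, the set-valued map obtained from a (possibly discontinuous) vector field $f$ by discarding behaviour on sets of measure zero:

```latex
% Standard Filippov set-valued map with Lebesgue measure \mu.  The thesis's
% two generalizations (not reproduced here) restrict the admissible convex
% combinations (the C-Filippov) and replace \mu by other measures.
F[f](x) \;=\; \bigcap_{\delta > 0} \;\; \bigcap_{\mu(N) = 0}
  \overline{\operatorname{co}}\, f\bigl( B_{\delta}(x) \setminus N \bigr),
```

where $B_{\delta}(x)$ is the open ball of radius $\delta$ about $x$ and $\overline{\operatorname{co}}$ denotes the closed convex hull. For example, for $f(x) = \operatorname{sgn}(x)$ this gives $F[f](0) = [-1, 1]$, so the isolated discontinuity at the origin is replaced by the full interval of limiting directions.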