31 |
Instrument Timbres and Pitch Estimation in Polyphonic Music
Loeffler, Dominik B. 14 April 2006 (has links)
In the past decade, the availability of digitally encoded, downloadable music has increased dramatically, pushed mainly by the release of the now famous MP3 compression format (Fraunhofer-Gesellschaft, 1994). Online sales of music in the US doubled in 2005, according to a recent news article (*), while the number of files exchanged on P2P platforms is much higher, but hard to estimate.
The existing and coming informational flood in digital music prompts the need for sophisticated content-based information retrieval. Query-by-Humming is a prototypical technique aimed at locating pieces of music by melody; automatic annotation algorithms seek to enable finer search criteria, such as instruments, genre, or meter. Score transcription systems strive for an abstract, compressed form of a piece of music understandable by composers and musicians.
Much research still has to be performed to achieve these goals.
This thesis connects essential knowledge about music and human auditory perception with signal processing algorithms to solve the specific problem of pitch estimation. The designed algorithm obtains an estimate of the magnitude spectrum via STFT and models the harmonic structure of each pitch contained in the magnitude spectrum with Gaussian density mixtures, whose parameters are subsequently estimated via an Expectation-Maximization (EM) algorithm.
Heuristics for EM initialization are formulated mathematically.
The system is implemented in MATLAB, featuring a GUI that provides for visual (spectrogram) and numerical (console) verification of results. The algorithm is tested using an array of data ranging from single to triple superposed instrument recordings. Its advantages and limitations are discussed, and a brief outlook over potential future research is given.
(*) "Online and Wireless Music Sales Tripled in 2005"; Associated Press; January 19, 2006
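The harmonic-mixture idea in this abstract can be sketched briefly: treat the magnitude spectrum as a weight over frequency bins and fit a one-dimensional Gaussian mixture by EM, initializing the component means at harmonic multiples of a candidate fundamental (the kind of heuristic initialization the abstract mentions). This is an illustrative Python sketch, not the thesis's MATLAB implementation; the synthetic peak shapes, initial bandwidth, and candidate F0 are assumptions.

```python
import numpy as np

def weighted_em_1d(freqs, mags, means0, n_iter=50):
    """EM for a 1-D Gaussian mixture in which each frequency bin carries a
    weight proportional to its spectral magnitude (illustrative sketch)."""
    w = mags / mags.sum()                      # normalized bin weights
    mu = np.asarray(means0, dtype=float)       # init: harmonics of candidate F0
    K = len(mu)
    sigma = np.full(K, 10.0)                   # assumed initial bandwidth (Hz)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each bin
        d = freqs[:, None] - mu[None, :]
        log_p = -0.5 * (d / sigma) ** 2 - np.log(sigma) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: magnitude-weighted parameter updates
        nk = (w[:, None] * r).sum(axis=0)
        mu = (w[:, None] * r * freqs[:, None]).sum(axis=0) / nk
        var = (w[:, None] * r * (freqs[:, None] - mu) ** 2).sum(axis=0) / nk
        sigma = np.sqrt(np.maximum(var, 1e-6))
        pi = nk / nk.sum()
    return mu, sigma, pi

# Synthetic magnitude spectrum with harmonic peaks at 440, 880, 1320 Hz;
# component means are initialized at multiples of a slightly wrong candidate F0.
freqs = np.linspace(0.0, 2000.0, 2001)
mags = sum(np.exp(-0.5 * ((freqs - f) / 8.0) ** 2) for f in (440.0, 880.0, 1320.0))
mu, sigma, pi = weighted_em_1d(freqs, mags, [435.0, 870.0, 1305.0])
```

The fitted means converge onto the harmonic peak locations, so the spacing of the recovered means yields the pitch estimate.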
|
32 |
The Impact of Income Inequality on Crime: The Empirical Study of Taiwan
Shih, Yi-Siou 14 August 2012 (has links)
This paper investigates the impact of income inequality on grand total crime, larceny and violent crime using dynamic panel data for 20 cities and counties in Taiwan from 1998 to 2010.
The empirical results show that income inequality has a significant positive impact on grand total crime, larceny and violent crime, while neither the unemployment rate nor the proportion of the population aged 15 to 64 has a significant influence on any of the three kinds of crime. Moreover, each of the other explanatory variables has a significant effect on at least one kind of crime. The results also support the view that the expected-utility theory of crime and the social anomie, disorganization, conflict and strain theories help explain criminal behavior in Taiwan.
|
33 |
The Brazilian tax collection and the ratchet effect
Guedes, Kelly Pereira 31 March 2008 (has links)
This thesis analyses the ratchet effect in the context of the performance scheme implemented by the Brazilian tax collection authority in 1988 to reward tax officials for their effort in collecting taxes and uncovering tax violations. It uses panel data on 110 tax agencies from August 1989 to April 1993 and employs the system GMM estimator. The estimates suggest the presence of a ratchet effect: the more the tax officials do today, the more they are asked to do in the future. This result undermines the credibility of the Brazilian tax authority's program as an incentive system.
|
34 |
Estimation and inference of microeconometric models based on moment condition models
Khatoon, Rabeya January 2014 (has links)
The existing estimation techniques for grouped data models can be analyzed as a class of instrumental variable-Generalized Method of Moments (GMM) estimators with the matrix of group indicators as the set of instruments. The econometric literature (e.g. Smith, 1997; Newey and Smith, 2004) shows that, in some cases of empirical relevance, GMM can have shortcomings in that the large sample behaviour of the estimator differs from its finite sample properties. Generalized Empirical Likelihood (GEL) estimators have been developed that are not sensitive to the nature and number of instruments and possess improved finite sample properties compared to GMM estimators. In this thesis, under the assumption that the data vector is iid within a group but inid across groups, we develop GEL estimators for a grouped data model whose population moment conditions require the errors in each group to have zero mean. First order asymptotic analysis shows that the estimators are √N consistent (N being the sample size) and normally distributed. The thesis explores second order bias properties that reveal the sources of bias and the differences between choices of GEL estimators. Specifically, the second order bias depends on the third moments of the group errors and the correlation between the group errors and the explanatory variables. With symmetric errors and no endogeneity, all three estimators, Empirical Likelihood (EL), Exponential Tilting (ET) and the Continuous Updating Estimator (CUE), are unbiased. A detailed simulation exercise compares the performance of EL, ET and their bias corrected estimators to the standard 2SLS/GMM estimators.
Simulation results reveal that while, with a few strong instruments, we can simply use 2SLS/GMM estimators, in the case of many and/or weak instruments, a higher degree of endogeneity, or a varied signal to noise ratio, the bias corrected EL and ET estimators dominate in terms of both lower bias and more accurate coverage of asymptotic confidence intervals, even in fairly large samples. The thesis also includes a case with within-group dependent data, to assess the consequences of violating the key within-group iid assumption. Theoretical analysis and simulation results show that ignoring this feature can result in misleading inference. The proposed estimators are used to estimate the return to an additional year of schooling in the UK using Labour Force Survey data over 1997-2009. Pooling the 13 years of data yields roughly the same estimate of an 11.27% return for British-born men aged 25-50 using any of the estimation techniques. In contrast, using 2009 LFS data only, with a relatively small sample and many weak instruments, the return for men holding a first degree is 13.88% using the bias corrected EL estimator, whereas the 2SLS estimator yields an estimate of 6.8%.
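For intuition, the EL machinery reduces, in the just-identified scalar case, to a one-dimensional root-finding problem. Below is a minimal Python sketch for iid data and a single mean moment condition, far simpler than the thesis's grouped-data setting; it is an illustration of the EL idea, not the proposed estimator.

```python
import numpy as np
from scipy.optimize import brentq

def el_statistic(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.
    EL weights are w_i = 1 / (n * (1 + lam * (x_i - mu))), with lam chosen so
    the weighted moment condition sum_i w_i * (x_i - mu) = 0 holds."""
    g = x - mu
    if g.max() <= 0 or g.min() >= 0:
        return np.inf                      # mu outside the data's convex hull
    eps = 1e-8
    lo = -1.0 / g.max() + eps              # keep all 1 + lam * g_i > 0
    hi = -1.0 / g.min() - eps
    lam = brentq(lambda l: np.sum(g / (1.0 + l * g)), lo, hi)
    # The statistic is asymptotically chi-squared with 1 degree of freedom.
    return 2.0 * np.sum(np.log(1.0 + lam * g))

x = np.arange(10, dtype=float)             # sample mean 4.5
```

At the sample mean the statistic is zero (lam = 0); it grows as the hypothesized mean moves away, which is what drives the confidence-interval coverage comparisons above.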
|
35 |
Achieving Automatic Speech Recognition for Swedish using the Kaldi toolkit / Automatisk taligenkänning på svenska med verktyget Kaldi
Mossberg, Zimon January 2016 (has links)
The meager offering of commercial online Automatic Speech Recognition services for Swedish prompts the effort to develop a speech recognizer for Swedish using the open source toolkit Kaldi and the publicly available NST speech corpus. Using a previous Kaldi recipe, several GMM-HMM models are trained and evaluated against commercial options, allowing the performance of a customized Automatic Speech Recognition solution to be compared with that of commercial services. The evaluation takes both accuracy and computational speed into consideration. Initial results indicate a systematic bias in the selected test set, confirmed by a follow-up investigative evaluation. The conclusion is that building a speech recognizer for Swedish using the NST corpus and Kaldi without expert knowledge is feasible but requires further work. / A speech recognizer for Swedish is developed with the goal of evaluating how a recognizer built with freely available tools compares with commercial speech recognition services. The tool used is the open source toolkit Kaldi, and the publicly available Swedish speech corpus from NST serves as training data. The resulting models are compared against commercially available speech recognition services for Swedish. Early results of the comparison indicate a systematic bias in the chosen test data, which is confirmed by a follow-up investigative evaluation. The conclusion of the work is that the prospects for developing a speech recognizer for Swedish are good, but that doing so requires substantial work.
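The accuracy side of such an evaluation is typically measured by word error rate (WER): the word-level Levenshtein distance divided by the reference length. Kaldi computes this internally; the sketch below is just the standard dynamic program, shown for completeness.

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + deletions + insertions) / len(ref),
    computed with the classic Levenshtein dynamic program over words."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(r)][len(h)] / len(r)
```

For example, `wer("hej jag heter kaldi", "hej jag heter kaldi")` is 0.0, while a hypothesis with one substituted and one deleted word out of four reference words scores 0.5.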
|
36 |
More on the relationship between corporate governance and firm performance in the UK: Evidence from the application of generalized method of moments estimation
Akbar, Saeed, Poletti-Hughes, J., El-Faitouri, R., Shah, S.Z.A. 12 June 2019 (has links)
Yes / This study examines the relationship between corporate governance compliance and firm performance in the UK. We develop a Governance Index and investigate its impact on corporate performance after controlling for potential endogeneity through a more robust methodology, Generalized Method of Moments (GMM) estimation. Our evidence is based on a sample of 435 non-financial publicly listed firms over the period 1999–2009. In contrast to earlier findings in the UK literature, our results suggest that compliance with corporate governance regulations is not a determinant of corporate performance in the UK. We argue that results from prior studies showing a positive impact of corporate governance on firm performance may be biased, as they fail to control for potential endogeneity. Reverse causality may also be at work in prior studies, in that changes in firms' internal characteristics may drive both corporate governance compliance and performance. Our findings are based on GMM, which controls for the effects of unobservable heterogeneity, simultaneity and dynamic endogeneity, and thus support more robust conclusions than those of previously published studies in this area.
|
37 |
Organizational non-compliance with principles-based governance provisions and corporate risk-taking
Ahmad, S., Akbar, Saeed, Halari, A., Shah, S.Z. 19 May 2022 (has links)
No / This paper examines how risk-taking is affected by non-compliance with a ‘comply or explain’ based system of corporate governance. Using system Generalized Method of Moments (GMM) estimates to control for various types of endogeneity, the results of this study show that non-compliance with the UK Corporate Governance Code is positively associated with total, systematic, and idiosyncratic risk. However, profitability moderates the impact of non-compliance on firms' risk-taking. The findings of this study further reveal that the impact of non-compliance with various provisions of the UK Corporate Governance Code is not uniform. That is, non-compliance with board independence provisions is associated with higher risk-taking. However, non-compliance with committees' chair independence is associated with lower risk-taking. These findings have implications for investors, policy makers, and corporations regarding the usefulness of compliance with a prescribed code of corporate governance.
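The three risk measures named above can be illustrated with the standard market-model decomposition, a common construction in this literature though not necessarily the paper's exact measures: regress stock returns on market returns, take beta squared times market variance as systematic risk and the residual variance as idiosyncratic risk; the two sum to total variance.

```python
import numpy as np

def decompose_risk(stock_ret, market_ret):
    """Market-model risk decomposition (illustrative):
    total variance = beta^2 * var(market) + var(residual)."""
    beta, alpha = np.polyfit(market_ret, stock_ret, 1)   # OLS slope, intercept
    resid = stock_ret - (alpha + beta * market_ret)
    systematic = beta ** 2 * market_ret.var()
    idiosyncratic = resid.var()
    return stock_ret.var(), systematic, idiosyncratic

# Simulated returns: true beta 0.8, idiosyncratic noise with std 0.5
rng = np.random.default_rng(0)
market = rng.normal(size=500)
stock = 0.8 * market + rng.normal(scale=0.5, size=500)
total, sys_risk, idio_risk = decompose_risk(stock, market)
```

Because the OLS residual is uncorrelated with the regressor, the identity total = systematic + idiosyncratic holds exactly in-sample.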
|
38 |
Classification of ADHD and non-ADHD Using AR Models and Machine Learning Algorithms
Lopez Marcano, Juan L. 12 December 2016 (has links)
As of 2016, diagnosis of ADHD in the US is controversial. Diagnosis of ADHD is based on subjective observations, and treatment is usually done through stimulants, which can have negative side-effects in the long term. Evidence shows that the probability of diagnosing a child with ADHD not only depends on the observations of parents, teachers, and behavioral scientists, but also on state-level special education policies. In light of these facts, unbiased, quantitative methods are needed for the diagnosis of ADHD. This problem has been tackled since the 1990s, and has resulted in methods that have not made it past the research stage and methods for which claimed performance could not be reproduced.
This work proposes a combination of machine learning algorithms and signal processing techniques applied to EEG data in order to classify subjects with and without ADHD with high accuracy and confidence. More specifically, the K-nearest Neighbor algorithm and Gaussian-Mixture-Model-based Universal Background Models (GMM-UBM), along with autoregressive (AR) model features, are investigated and evaluated for the classification problem at hand. In this effort, classical KNN and GMM-UBM were also modified in order to account for uncertainty in diagnoses.
Some of the major findings reported in this work include classification performance as high as, if not higher than, that of the highest-performing algorithms found in the literature. Another major finding is that activities that require attention help discriminate between ADHD and non-ADHD subjects: mixing in EEG data from periods of rest or with eyes closed leads to a loss of classification performance, to the point of approximating guessing when only resting EEG data is used. / Master of Science / Signal processing techniques are used to extract autoregressive (AR) coefficients, which carry information about brain activity and serve as “features”. The features, extracted from datasets containing ADHD and non-ADHD subjects, are used to train models that classify subjects as either ADHD or non-ADHD. Lastly, the models are tested on datasets different from those used for training, and performance is measured by how many predicted labels match the expected labels.
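The pipeline described above, AR coefficients as features and KNN as classifier, can be sketched on synthetic data; the AR orders, coefficients, and series lengths below are illustrative stand-ins for real EEG, not the thesis's configuration.

```python
import numpy as np

def ar_coeffs(x, p=2):
    """Least-squares AR(p) fit: regress x[t] on x[t-1], ..., x[t-p]."""
    n = len(x)
    X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def simulate_ar(coefs, n=500, rng=None):
    """Simulate a stable AR process, discarding a 100-sample burn-in."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.zeros(n + 100)
    for t in range(len(coefs), len(x)):
        x[t] = sum(c * x[t - 1 - j] for j, c in enumerate(coefs)) + rng.normal()
    return x[100:]

def knn_predict(train_feats, train_labels, feat, k=3):
    """Majority vote among the k nearest training features."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    votes = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Two "classes" with distinct stable AR(2) dynamics (stand-ins for the groups)
rng = np.random.default_rng(1)
classes = {0: [1.5, -0.8], 1: [0.2, 0.5]}
train_X = np.array([ar_coeffs(simulate_ar(c, rng=rng))
                    for lbl, c in classes.items() for _ in range(20)])
train_y = np.array([lbl for lbl in classes for _ in range(20)])
```

Held-out series from the two processes are then classified by their fitted AR coefficients, which cluster tightly around the true values for well-separated dynamics.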
|
39 |
Where There’s Smoke, There’s Fire: An Analysis of the Riksbank’s Interest Setting Policy
Lahlou, Mehdi, Sandstedt, Sebastian January 2017 (has links)
We analyse the interest setting policy of the Swedish central bank, the Riksbank, in a Taylor rule framework. In particular, we examine whether the Riksbank has reacted to fluctuations in asset prices during the period 1995:Q1 to 2016:Q2. This is done by estimating a forward-looking Taylor rule with interest rate smoothing, augmented with stock prices, house prices and the real exchange rate, using IV GMM. In general, we find that the Riksbank’s interest setting policy is well described by a forward-looking Taylor rule with interest rate smoothing, and that using factors derived from a PCA as instruments alleviates the weak-identification problem that tends to plague GMM. Moreover, apart from finding that the Riksbank exhibits a substantial degree of policy rate inertia and has acted to stabilize inflation and the real economy, we also find evidence that it has reacted to fluctuations in stock prices, house prices, and the real exchange rate.
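A Taylor rule with interest rate smoothing has the generic form i_t = rho * i_{t-1} + (1 - rho) * (r* + pi* + phi_pi * (pi_t - pi*) + phi_y * y_t). The sketch below uses textbook coefficient values, not the paper's estimates:

```python
def taylor_rate(i_prev, inflation, output_gap,
                rho=0.8, r_star=2.0, pi_star=2.0, phi_pi=1.5, phi_y=0.5):
    """Taylor rule with interest-rate smoothing; all parameters illustrative.
    rho is the smoothing weight on last period's rate; the bracketed target is
    the neutral rate plus reactions to the inflation gap and the output gap."""
    target = r_star + pi_star + phi_pi * (inflation - pi_star) + phi_y * output_gap
    return rho * i_prev + (1.0 - rho) * target

# With inflation on target and a closed output gap, the rate stays put;
# above-target inflation pulls the rate up, damped by the smoothing term.
on_target = taylor_rate(4.0, 2.0, 0.0)
tightening = taylor_rate(4.0, 3.0, 0.0)
```

With these values the on-target rate stays at 4.0, while one point of excess inflation raises it to about 4.3 rather than the full 5.5 target, which is the policy rate inertia the abstract refers to.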
|
40 |
Jak konkurence mezi bankami ovlivňuje finanční stabilitu / How Bank Competition Influences Financial Stability
Vildová, Romana January 2017 (has links)
This paper investigates the link between financial stability and bank competition by means of the Arellano and Bond (1991) GMM model, using annual panel data over the period 2000-2014 for 205 countries. Our data source is the new, richer and updated Global Financial Development Database available from the World Bank. The specifics of this dataset allow us to use new combinations of measures of financial stability and bank competition and to study their relationship in greater depth. We find a positive link between financial stability and bank competition. Furthermore, our results provide evidence that the choice of measures of financial stability and bank competition matters. Lastly, we ascertain that the relationship between financial stability and bank competition does not change over time.
Keywords: Financial Stability, Bank Competition, Dynamic GMM, the Arellano and Bond Estimator
Author's e-mail: VildovaRomana@gmail.com
Supervisor's e-mail: Roman.Horvath@gmail.com
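Arellano and Bond (1991) instrument the first-differenced lagged dependent variable with all available earlier levels; the single-instrument Anderson-Hsiao special case below illustrates the core idea on a simulated AR(1) panel with fixed effects (all numbers illustrative, not this paper's data).

```python
import numpy as np

def anderson_hsiao(y):
    """IV estimate of alpha in y_it = alpha * y_i,t-1 + u_i + e_it for an
    (N, T) panel: first-difference away the fixed effect u_i, then instrument
    the differenced lag with the level y_i,t-2 (valid when e_it is serially
    uncorrelated). Arellano-Bond generalizes this to all available lags."""
    dy = np.diff(y, axis=1)     # dy[:, k] = y[:, k+1] - y[:, k]
    dep = dy[:, 1:]             # Delta y_t for t = 2..T-1
    lag = dy[:, :-1]            # Delta y_{t-1}
    z = y[:, :-2]               # instrument: lagged level y_{t-2}
    return np.sum(z * dep) / np.sum(z * lag)

def simulate_panel(N=5000, T=10, alpha=0.5, seed=0):
    """AR(1) panel with unit-variance fixed effects and shocks, plus burn-in."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=N)
    y = np.zeros((N, T + 20))
    for t in range(1, T + 20):
        y[:, t] = alpha * y[:, t - 1] + u + rng.normal(size=N)
    return y[:, 20:]

alpha_hat = anderson_hsiao(simulate_panel())
```

The fixed effect makes pooled OLS on levels inconsistent; differencing removes it, and the lagged level restores a valid instrument, which is the logic behind the dynamic GMM estimator used in the paper.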
|