41

Sabermetrics - Statistical Modeling of Run Creation and Prevention in Baseball

Chernoff, Parker 30 March 2018 (has links)
The focus of this thesis was to investigate which baseball metrics are most conducive to run creation and prevention. Stepwise regression and Liu estimation were used to formulate two models for the dependent variables and were also used for cross-validation. Finally, the predicted values were fed into the Pythagorean Expectation formula to predict a team's most important goal: winning. Each model fit the data well, and collinearity amongst offensive predictors was assessed using variance inflation factors. Hits, walks, and home runs allowed, infield putouts, errors, defense-independent earned run average ratio, defensive efficiency ratio, saves, runners left on base, shutouts, and walks per nine innings were significant defensive predictors. Doubles, home runs, walks, batting average, and runners left on base were significant offensive regressors. Both models produced error rates below 3% for run prediction, and together they did an excellent job of estimating a team's per-season win ratio.
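To make the final step concrete, here is a minimal sketch (not taken from the thesis) of the classic Pythagorean Expectation with predicted runs scored and allowed as inputs; the exponent 2 is Bill James's original choice, and the thesis may calibrate it differently.

```python
# Sketch only: classic Pythagorean Expectation fed with predicted season totals.
# Exponent 2 is the traditional choice; other values (e.g. 1.83) are also used.

def pythagorean_win_ratio(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    """Estimated fraction of games won, given runs scored and runs allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Example: a team predicted to score 780 and allow 700 runs over a season.
ratio = pythagorean_win_ratio(780, 700)
print(round(ratio, 3))            # expected win ratio
print(round(ratio * 162))         # expected wins over a 162-game season
```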
42

Construction of Minimal Partially Replicated Orthogonal Main-Effect Plans with 3 Factors

朱正中, Chu, Cheng-Chung Unknown Date (has links)
Orthogonal main-effect plans (OMEP's), being able to estimate the main effects without correlation, are often employed in industrial situations for screening purposes. But experiments are expensive and time consuming, so when an economical and efficient design is desired, a minimal orthogonal main-effect plan is a good choice. Jacroux (1992) derived a sufficient condition for OMEP's to have a minimal number of runs and provided a table of minimal OMEP run numbers. Chang (1998) corrected and supplemented the table. In this paper, we try to improve the table to its perfection. A minimal OMEP with replicated runs is appreciated even more, since then the pure error can be estimated and the goodness-of-fit of the model can be tested. Jacroux (1993) and Chang (1998) gave some partially replicated orthogonal main-effect plans (PROMEP's) with a maximal number of replicated points. Here, we discuss minimal PROMEP's with 3 factors in detail. Methods of constructing minimal PROMEP's with replicated runs are provided, and the number of replicated runs is maximal in most cases.
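For readers unfamiliar with the orthogonality condition, the following sketch (an illustrative check, not the construction method of the thesis) tests whether a candidate 3-factor plan is an orthogonal main-effect plan by verifying the proportional-frequency condition for every pair of factors.

```python
# Illustrative check of the OMEP condition: for every pair of factors, each
# level combination (a, b) must occur n_i(a) * n_j(b) / N times in the plan.
from collections import Counter
from itertools import combinations

def is_omep(plan):
    """plan: list of runs, each a tuple of factor levels."""
    n = len(plan)
    k = len(plan[0])
    for i, j in combinations(range(k), 2):
        pair = Counter((row[i], row[j]) for row in plan)
        fi = Counter(row[i] for row in plan)
        fj = Counter(row[j] for row in plan)
        for a in fi:
            for b in fj:
                if pair.get((a, b), 0) * n != fi[a] * fj[b]:
                    return False
    return True

# A 3-factor, 9-run plan taken from the L9 orthogonal array (levels 0, 1, 2).
l9 = [(0,0,0), (0,1,1), (0,2,2), (1,0,1), (1,1,2), (1,2,0), (2,0,2), (2,1,0), (2,2,1)]
print(is_omep(l9))   # True: every pair of columns is balanced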
43

Digital lines, Sturmian words, and continued fractions

Uscka-Wehlou, Hanna January 2009 (has links)
In this thesis we present and solve selected problems arising from digital geometry and combinatorics on words. We consider digital straight lines and, equivalently, upper mechanical words with positive irrational slopes a<1 and intercept 0. We formulate a continued fraction (CF) based description of their run-hierarchical structure. Paper I gives a theoretical basis for the CF-description of digital lines. We define for each irrational positive slope less than 1 a sequence of digitization parameters which fully specifies the run-hierarchical construction. In Paper II we use the digitization parameters in order to get a description of runs using only integers. We show that the CF-elements of the slopes contain the complete information about the run-hierarchical structure of the line. The index jump function introduced by the author indicates for each positive integer k the index of the CF-element which determines the shape of the digitization runs on level k. In Paper III we present the results for upper mechanical words and compare our CF-based formula with two well-known methods, one of which was formulated by Johann III Bernoulli and proven by Markov, while the second one is known as the standard sequences method. Due to the special treatment of some CF-elements equal to 1 (essential 1's in Paper IV), our method is currently the only one which reflects the run-hierarchical structure of upper mechanical words by analogy to digital lines. In Paper IV we define two equivalence relations on the set of all digital lines with positive irrational slopes a<1. One of them groups into classes all the lines with the same run length on all digitization levels, the second one groups the lines according to the run construction in terms of long and short runs on all levels. We analyse the equivalence classes with respect to minimal and maximal elements. In Paper V we take another look at the equivalence relation defined by run construction, this time independently of the context, which makes the results more general. In Paper VI we define a run-construction encoding operator, by analogy with the well-known run-length encoding operator. We formulate and present a proof of a fixed-point theorem for Sturmian words. We show that in each equivalence class under the relation based on run length on all digitization levels (as defined in Paper IV), there exists exactly one fixed point of the run-construction encoding operator.
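As a small illustration of the objects studied here (the precise definitions are in the papers), the sketch below generates a prefix of the upper mechanical word with irrational slope 0 < a < 1 and intercept 0, and lists its lowest-level runs, taken here as the maximal blocks of 0s between 1s.

```python
# Assumed definitions, not copied from the papers: the upper mechanical word with
# slope a and intercept 0 is u_n = ceil((n+1)*a) - ceil(n*a); its lowest-level
# runs are taken here as the maximal blocks of 0s.
from math import ceil, sqrt

def upper_mechanical_word(a: float, length: int) -> list:
    return [ceil((n + 1) * a) - ceil(n * a) for n in range(length)]

def zero_run_lengths(word: list) -> list:
    """Lengths of the maximal blocks of 0s, read left to right."""
    runs, count = [], 0
    for letter in word:
        if letter == 0:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

a = sqrt(2) - 1                       # irrational slope with continued fraction [0; 2, 2, 2, ...]
w = upper_mechanical_word(a, 40)
print("".join(map(str, w)))
print(zero_run_lengths(w))            # only two consecutive lengths occur (here 1 and 2)
```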
44

A comparative study regarding weakly stationarity assumptions and time dependency : Signal processing of vibrational loading and its influence on fatigue life

Dahlman, Rikard, Johansson, Ebba January 2018 (has links)
Simplified calculations of fatigue life under vibrational loading are based on weak-stationarity assumptions, which make the analysis time-independent. The hypothesis of this study was motivated by the uncertainty of these assumptions, and the aim was to examine whether the analysed data fulfilled the assumptions of weak stationarity. It was determined that the assumption was not valid for most signals, so a comparison with time-dependent methods was performed to evaluate the difference from the time-independent method. Two time-dependent methods were constructed and applied to the signals based on the results of the stationarity tests. The results indicate that the two time-dependent methods may predict a shorter fatigue life for the investigated weld than the time-independent method. The method considered to produce the most accurate results was also the most constrained in terms of the amount of data that fulfilled its requirements. It was concluded that signals containing more data were necessary to achieve conclusive fatigue-life results. The hypothesis proved to be mostly true, since most of the analysed signals were found to be piecewise weakly stationary.
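To make the stationarity question concrete, here is a deliberately crude sketch (not the tests used in the thesis) that splits a load signal into segments and compares segment means and variances with the global ones; the segment count and tolerance are illustrative parameters.

```python
# Crude weak-stationarity check: constant mean and variance across segments.
# n_segments and tol are illustrative choices, not values from the thesis.
import numpy as np

def crude_stationarity_check(signal: np.ndarray, n_segments: int = 8, tol: float = 0.2) -> bool:
    """True if every segment's mean and variance stay close to the global values."""
    segments = np.array_split(signal, n_segments)
    g_mean, g_std, g_var = signal.mean(), signal.std(), signal.var()
    for seg in segments:
        if abs(seg.mean() - g_mean) > tol * g_std:
            return False
        if abs(seg.var() - g_var) > tol * g_var:
            return False
    return True

rng = np.random.default_rng(0)
stationary = rng.normal(0.0, 1.0, 4000)                           # constant mean and variance
drifting = rng.normal(0.0, 1.0, 4000) + np.linspace(0, 3, 4000)   # mean drifts over time
print(crude_stationarity_check(stationary))   # True (typically)
print(crude_stationarity_check(drifting))     # False
```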
45

Essays on coordination problems in economics

Pereira, Ana Elisa Gonçalves 24 June 2016 (has links)
There are several economic situations in which an agent's willingness to take a given action is increasing in the number of other agents who are expected to do the same. This kind of strategic complementarity often leads to multiple equilibria. Moreover, the outcome achieved by agents' decentralized decisions may be inefficient, leaving room for policy interventions. This dissertation analyzes different environments in which coordination among individuals is a concern. The first chapter analyzes how information manipulation and disclosure affect coordination and welfare in a bank-run model. There is a financial regulator who cannot credibly commit to reveal the situation of the banking sector truthfully. The regulator observes banks' idiosyncratic information (through a stress test, for example) and chooses whether to disclose it to the public or only to release a report on the health of the entire financial system. The aggregate report may be distorted at a cost: a higher cost means higher credibility. Investors are aware of the regulator's incentives to conceal bad news from the market, but manipulation may still be effective. If the regulator's credibility is not too low, the disclosure policy is state-contingent and there is always a range of states in which there is information manipulation in equilibrium. If credibility is low enough, the regulator opts for full transparency, since opacity would trigger a systemic run no matter the state; in this case only the most solid banks survive. The level of credibility that maximizes welfare from an ex ante perspective is interior. The second and third chapters study coordination problems in dynamic environments. The second chapter analyzes welfare in a setting where agents receive random opportunities to switch between two competing networks. It shows that whenever the intrinsically worse one prevails, this is efficient; in fact, a central planner would be even more inclined towards the worse option. Inefficient shifts to the intrinsically better network may occur in equilibrium. When there are two competing standards or networks of different qualities, if everyone were to opt for one of them at the same time, the efficient solution would be to choose the better one. However, when there are timing frictions and agents do not switch from one option to another all at once, the efficient solution differs from conventional wisdom. The third chapter analyzes a dynamic coordination problem with staggered decisions where agents are ex ante heterogeneous. We show there is a unique equilibrium, which is characterized by thresholds that determine the choices of each type of agent. Although payoffs are heterogeneous, the equilibrium features a great deal of conformity in behavior. Equilibrium thresholds for different types of agents partially coincide as long as there exists a set of beliefs that would make this coincidence possible. However, the equilibrium strategies never fully coincide. Moreover, we show that conformity is not inefficient: in the efficient solution, agents follow others even more often than in the decentralized equilibrium.
46

Testování slabé formy efektivnosti devizového trhu / Testing of weak-form efficiency of the exchange market

Havel, Radek January 2009 (has links)
The goal of my thesis is to verify the weak form of efficiency of the exchange market. The paper starts from the assumptions underlying efficient price movements on financial markets and applies them to the time series of exchange rates of five currency pairs. After the testing methodology is defined, the exchange-rate series are analysed with the help of correlation and autocorrelation tests, a runs test, and a test based on technical analysis. The conclusion of the thesis answers the question of whether the exchange-rate movements are consistent with the efficient market hypothesis.
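One of the tests mentioned above is the runs test; the following sketch (an assumption about the exact variant used) computes the Wald-Wolfowitz Z statistic on the signs of a return series. Under weak-form efficiency the signs should be serially random, so the observed number of runs should be close to its expectation under randomness.

```python
# Runs test sketch: compare the observed number of sign runs in a return series
# with its expectation and variance under randomness (Wald-Wolfowitz).
import math

def runs_test_z(returns):
    """Z statistic of the runs test applied to the signs of `returns`."""
    signs = [r >= 0 for r in returns if r != 0]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n_pos + n_neg
    expected = 2 * n_pos * n_neg / n + 1
    variance = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

# Toy daily returns; |z| > 1.96 would reject randomness at the 5% level.
sample = [0.004, -0.002, 0.001, 0.003, -0.005, -0.001, 0.002, -0.003, 0.001, 0.002]
print(round(runs_test_z(sample), 2))
```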
47

Programmatic Geographical Depictions in Large-Scale Jazz Ensemble Works: Major Works by Gil Evans and Chuck Owen and a New Work by Aaron Hedenstrom

Hedenstrom, Aaron 05 1900 (has links)
This dissertation explores the creative process in large-scale jazz ensemble works that are programmatic in depicting geographical locations. This is achieved through analyses of Gil Evans's Sketches of Spain, Chuck Owen's River Runs: A Concerto for Jazz Guitar, Saxophone, & Orchestra, and Aaron Hedenstrom's Sketches of Minnesota. Each work is examined using five analytical categories: orchestration, large-scale form, harmonic/melodic development, programmatic framework, and use of featured soloists. The analyses draw from musical scores, interviews, biographies, recordings, and articles to reveal more about each composer's artistic intentions. This study contributes to the broader knowledge of large-ensemble jazz works and programmatic jazz works. This research meets the need for more critical analyses of important jazz ensemble works relevant to composers, arrangers, and scholars.
48

Study of concurrency in real-time distributed systems

Balaguer, Sandie 13 December 2012 (has links) (PDF)
This thesis is concerned with the modeling and the analysis of distributed real-time systems. In distributed systems, components evolve partly independently: concurrent actions may be performed in any order without influencing each other, and the state reached after these actions does not depend on the order of execution. The time constraints in distributed real-time systems create complex dependencies between the components and the events that occur. So far, distributed real-time systems have not been deeply studied, and in particular the distributed aspect of these systems is often left aside. Our work on distributed real-time systems is based on two formalisms, time Petri nets and networks of timed automata, and is divided into two parts. In the first part, we highlight the differences between centralized and distributed timed systems. We compare the main formalisms and their extensions, with a novel approach that focuses on the preservation of concurrency. In particular, we show how to translate a time Petri net into a network of timed automata with the same distributed behavior. We then study a concurrency-related problem: shared clocks in networks of timed automata can be problematic when one considers the implementation of a model on a multi-core architecture. We show how to avoid shared clocks while preserving the distributed behavior, when this is possible. In the second part, we focus on formalizing the dependencies between events in partial-order representations of the executions of Petri nets and time Petri nets. Occurrence nets are one of these partial-order representations, and their structure directly provides the causality, conflict and concurrency relations between events. However, we show that, even in the untimed case, some logical dependencies between event occurrences are not directly described by these structural relations. After having formalized these logical dependencies, we solve the following synthesis problem: from a formula that describes a set of runs, we build an associated occurrence net. Then we study the logical relations in a simplified timed setting and show that time creates complex dependencies between event occurrences. These dependencies can be used to define a canonical unfolding for this particular timed setting.
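To illustrate the structural relations mentioned above, here is a simplified sketch (conflict is ignored, so this is not a full occurrence-net semantics): causality is the transitive closure of the direct-predecessor relation, and two events are concurrent when neither causally precedes the other.

```python
# Simplified structural relations on events of a partial-order run.
# Conflict is deliberately left out of this sketch.
def causality(direct_preds: dict) -> set:
    """All pairs (a, b) such that event a is a (possibly transitive) cause of b."""
    causes = set()
    def ancestors(event, acc):
        for pred in direct_preds.get(event, set()):
            if pred not in acc:
                acc.add(pred)
                ancestors(pred, acc)
        return acc
    for event in direct_preds:
        for anc in ancestors(event, set()):
            causes.add((anc, event))
    return causes

def concurrent(a, b, causes) -> bool:
    return a != b and (a, b) not in causes and (b, a) not in causes

# A toy partial-order run: e1 enables e2 and e3 independently; e4 needs both.
preds = {"e1": set(), "e2": {"e1"}, "e3": {"e1"}, "e4": {"e2", "e3"}}
causes = causality(preds)
print(concurrent("e2", "e3", causes))   # True: they may occur in any order
print(concurrent("e1", "e4", causes))   # False: e1 causally precedes e4
```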
49

Poisson type approximations for sums of dependent variables / Priklausomų atsitiktinių dydžių sumų aproksimavimas Puasono tipo matais

Petrauskienė, Jūratė 07 March 2011 (has links)
Our aim is to investigate Poisson type approximations to sums of dependent integer-valued random variables. In this thesis, only one type of dependence is considered, namely m-dependent random variables. The accuracy of approximation is measured in the total variation, local, uniform (Kolmogorov) and Wasserstein metrics. The results can be divided into four parts. The first part is devoted to 2-runs, when p_i = p. We generalize Theorem 5.2 from A.D. Barbour and A. Xia, "Poisson perturbations", in two directions: by estimating the second-order asymptotic expansion and the asymptotic expansion in the exponent. Moreover, lower-bound estimates are established, proving the optimality of the upper-bound estimates. Since the method of proof does not yield small constants, in certain cases we calculate asymptotically sharp constants. In the second part, we consider sums of 1-dependent random variables concentrated on the nonnegative integers and satisfying an analogue of Franken's condition. All results of this part are comparable to the known results for independent summands. In the third part, we consider Poisson type approximations for sums of 1-dependent symmetric three-point distributions. We are unaware of any Poisson-type approximation result for dependent random variables in which the symmetry of the distribution is taken into account. In the last part, we consider 1-dependent non-identically distributed Bernoulli random variables. It is shown that even for this simple... [to full text]
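As a numerical illustration of the 2-runs setting (not the thesis's proof technique), the sketch below compares the exact distribution of the number of 2-runs in n i.i.d. Bernoulli(p) trials with a Poisson law of the same mean, in the total variation metric.

```python
# Illustration only: total variation distance between the law of the number of
# 2-runs S = sum_i X_i * X_{i+1} (X_i i.i.d. Bernoulli(p)) and Poisson(E[S]).
from itertools import product
from math import exp, factorial

def two_runs_distribution(n: int, p: float) -> dict:
    dist = {}
    for bits in product((0, 1), repeat=n):
        s = sum(bits[i] * bits[i + 1] for i in range(n - 1))
        prob = p ** sum(bits) * (1 - p) ** (n - sum(bits))
        dist[s] = dist.get(s, 0.0) + prob
    return dist

def poisson_pmf(k: int, lam: float) -> float:
    return exp(-lam) * lam ** k / factorial(k)

def total_variation(n: int, p: float) -> float:
    dist = two_runs_distribution(n, p)
    lam = (n - 1) * p * p                                        # E[S]
    head = sum(abs(dist.get(k, 0.0) - poisson_pmf(k, lam)) for k in range(n))
    tail = 1.0 - sum(poisson_pmf(k, lam) for k in range(n))      # Poisson mass beyond the support of S
    return 0.5 * (head + tail)

print(round(total_variation(n=12, p=0.2), 4))   # small p: Poisson approximation is close
print(round(total_variation(n=12, p=0.6), 4))   # larger p: the approximation degrades
```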
50

Essays on banking theory and history of financial arrangements

Ferreira, Murilo Resende 27 June 2014 (has links)
This thesis contains two chapters, each one dealing with banking theory and the history of financial arrangements. In Chapter 1, we extend a Diamond and Dybvig economy with imperfect monitoring of early withdrawals and make a welfare comparison between all possible allocations, as proposed by Prescott and Weinberg (2003) [37]. This imperfect monitoring is introduced by establishing indirect communication (through a means of payment) between the agents and the machine that is an aggregate of the financial and the productive sector. The extension consists in studying allocations where a fraction of the agents can exploit imperfect monitoring and defraud the contracted arrangement by consuming more in the early period through multiple means of payment. With limited punishment in the period of late consumption, this new allocation is called a separating allocation, in contrast with pooling allocations, where the agent with the ability to commit fraud is blocked from it by a costly means of payment or by receiving enough future consumption to make fraud unattractive. The welfare comparison in the chosen range of parameters shows that separating allocations are optimal for poor economies and pooling allocations for intermediate and rich ones. We end with a possible historical context for this kind of model, which connects with the historical narrative in Chapter 2. In Chapter 2 we explore the quantitative properties of an early warning system for financial crises based on the boom-and-bust framework described in more detail in Appendix 1. The main variables are: real growth in equity and housing prices, the yield spread between the 10-year government bond and the 3-month interbank rate, and the growth in total banking system assets. These variables display a higher rate of correct signals for recent crises (1984-2008) than comparable early warning systems. Taking into account an increasing baseline risk (due to increasing rates of credit expansion, lower interest rates and the accumulation of distortions) also proves to be informative and helps signal crises in countries that did not go through a great boom in previous years.
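The early warning system described in Chapter 2 lends itself to a simple threshold-style sketch. The variable names below follow the abstract, but the thresholds and the signalling rule are placeholders for illustration, not the specification estimated in the thesis.

```python
# Placeholder early-warning rule: flag a warning when asset prices and bank
# balance sheets boom while the yield curve flattens or inverts.  All threshold
# values here are hypothetical, chosen only to make the example run.
from dataclasses import dataclass

@dataclass
class YearObs:
    equity_growth: float       # real growth in equity prices
    housing_growth: float      # real growth in housing prices
    yield_spread: float        # 10-year government bond minus 3-month interbank rate
    bank_asset_growth: float   # growth in total banking system assets

def warning_signal(obs: YearObs,
                   boom_threshold: float = 0.10,
                   spread_threshold: float = 0.0,
                   credit_threshold: float = 0.08) -> bool:
    boom = obs.equity_growth > boom_threshold or obs.housing_growth > boom_threshold
    flat_curve = obs.yield_spread < spread_threshold
    credit_expansion = obs.bank_asset_growth > credit_threshold
    return boom and flat_curve and credit_expansion

print(warning_signal(YearObs(0.18, 0.12, -0.004, 0.15)))   # True: boom plus inverted curve
print(warning_signal(YearObs(0.03, 0.02, 0.015, 0.04)))    # False: calm conditions
```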
