About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Developing High School Students' Ability to Write about their Art through the Use of Art Criticism Practices in Sketchbooks: A Case Study

Jones, Rita A. 05 August 2008 (has links)
No description available.
252

Reinforcement Learning for Multi-Agent Strategy Synthesis Using Higher-Order Knowledge

Forsell, Gustav, Gergi, Shamoun January 2023 (has links)
Imagine for a moment we are living in a distant future where autonomous robots patrol the streets as police officers. Two such robots are chasing a robber through the city streets. Fearing the thief might listen in on any transmission, both robots remain radio silent and are thus limited to a strictly visual pursuit. Since the robots cannot see the robber the entire time, they have to deduce the robber's likely location. What would the best strategy be for these robots to achieve their objective? This bachelor's thesis investigated the above example by constructing strategies through reinforcement learning. The thesis also investigated the performance of the players when they have different abilities of deduction. This was tested by creating a suitable game and a corresponding reinforcement learning algorithm, and running simulations for different degrees of knowledge. The study showed that reinforcement learning is a viable method for strategy construction, reaching nearly guaranteed victory when the agent knows everything about the environment and a slightly lower win ratio when uncertainty is introduced. The implementation yielded only a small gain in win ratio when the agents could deduce even more about each other. / Bachelor's degree project in electrical engineering 2023, KTH, Stockholm
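A minimal sketch of this kind of experiment, assuming a hypothetical one-dimensional pursuit game and plain tabular Q-learning; the thesis's actual game, state encoding and knowledge hierarchy are not reproduced here. The fully observed state corresponds to the case where the agent knows everything about the environment:

```python
import numpy as np

N_CELLS, N_ACTIONS = 10, 3                    # ring of cells; actions: left, stay, right
rng = np.random.default_rng(0)
Q = np.zeros((N_CELLS * N_CELLS, N_ACTIONS))  # tabular Q over (pursuer, robber) states

def step(p, r, a):
    """Advance one time step: pursuer applies action a, robber moves randomly."""
    p = (p + a - 1) % N_CELLS                 # a in {0,1,2} -> move -1, 0, +1
    r = (r + rng.integers(-1, 2)) % N_CELLS   # robber drifts uniformly in {-1,0,+1}
    caught = p == r
    return p, r, (1.0 if caught else -0.01), caught

alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration
for episode in range(20_000):
    p, r = rng.integers(N_CELLS), rng.integers(N_CELLS)
    for _ in range(50):
        s = p * N_CELLS + r                   # full observability: "knows everything"
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        p, r, reward, done = step(p, r, a)
        s2 = p * N_CELLS + r
        Q[s, a] += alpha * (reward + gamma * (not done) * Q[s2].max() - Q[s, a])
        if done:
            break
```

Replacing the fully observed state `s` with the pursuer's belief about the robber's location would model the lower degrees of knowledge the thesis compares.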
253

Estimating the parameters of polynomial phase signals

Farquharson, Maree Louise January 2006 (has links)
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating their parameters. Therefore it is important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by firstly modelling each component of the signal as having a polynomial phase, and secondly developing techniques for estimating the parameters of these components. Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria will be used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but also incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is promising because it provides an attractive trade-off between SNR threshold and computational complexity. For this reason, the analysis techniques developed in this thesis are derived from the CP function. A limitation of the CP function is its inability to accurately process phase orders greater than three. Therefore, the first novel contribution of this thesis develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations are performed to demonstrate the statistical advantage of the HP functions compared to the HAFs, and a first order statistical analysis of the HP functions is presented which verifies the simulation results. The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method. It is an extension of the CP function, with the extension enabling performance at lower SNRs. The improvement in SNR threshold performance is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computation of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented, and a threshold analysis of the algorithm closely predicts the thresholds observed in these results. The next original contribution of this research involves extending the LCPF method so that it is able to process multicomponent cubic phase signals and higher order phase signals. The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. To demonstrate the extension of the LCPF method for processing higher order phase signals and multicomponent cubic phase signals, some Monte Carlo simulations are presented. Finally, these estimation techniques are applied to real-world scenarios in the fields of power systems analysis, neuroethology and speech analysis.
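Since the CP function anchors these contributions, here is a minimal sketch of it, assuming a synthetic single-component cubic phase signal; all coefficients and grids are illustrative only:

```python
import numpy as np

def cp_function(s, n, omegas):
    """Cubic phase (CP) function magnitude |sum_m s[n+m] s[n-m] exp(-j w m^2)|.

    For s[k] = exp(j(a0 + a1 k + a2 k^2 + a3 k^3)) the magnitude peaks at
    w = 2*a2 + 6*a3*n, so peak locations at two instants n recover a2 and a3.
    """
    M = min(n, len(s) - 1 - n)                # largest symmetric lag about n
    m = np.arange(M + 1)
    kernel = s[n + m] * s[n - m]              # bilinear product, quadratic phase in m
    return np.abs(np.exp(-1j * np.outer(omegas, m**2)) @ kernel)

# Hypothetical cubic phase signal (coefficients chosen only for illustration).
k = np.arange(257)
a2, a3 = 2.0e-4, 1.0e-6
s = np.exp(1j * (0.3 + 0.1 * k + a2 * k**2 + a3 * k**3))

omegas = np.linspace(0.0, 2.0e-3, 4001)
n0 = 128                                      # evaluate at the midpoint
w_peak = omegas[np.argmax(cp_function(s, n0, omegas))]
# w_peak should lie near 2*a2 + 6*a3*n0 = 1.168e-3
```

Repeating the peak search at a second instant gives two linear equations in the two highest-order coefficients, which is the core of the CP function estimator.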
254

Kegelsnedes as integrerende faktor in skoolwiskunde / Conic sections as an integrating factor in school mathematics

Stols, Gert Hendrikus 30 November 2003 (has links)
Text in Afrikaans / Real empowerment of school learners requires preparing them for the age of technology. This empowerment can be achieved by developing their higher-order thinking skills, which is clearly the intention of the proposed South African FET National Curriculum Statements Grades 10 to 12 (Schools). This research shows that one method of developing higher-order thinking skills is to adopt an integrated curriculum approach. The research is based on the assumption that an integrated curriculum approach will produce learners with a more integrated knowledge structure, which will help them to solve problems requiring higher-order thinking skills. These assumptions are realistic because the empirical results of several comparative research studies show that an integrated curriculum helps to improve learners' ability to use higher-order thinking skills in solving nonroutine problems. The curriculum mentions four kinds of integration, namely integration across different subject areas, integration of mathematics with the real world, integration of algebraic and geometric concepts, and integration of dynamic geometry software into the learning and teaching of geometry. This research shows that, from a psychological, pedagogical, mathematical and historical perspective, the theme of conic sections can be used as an integrating factor in the newly proposed FET mathematics curriculum. Conics are a powerful tool for making the proposed curriculum more integrated. They can serve as an integrating factor in the FET band by means of mathematical exploration, visualisation, relating learners' experiences of various parts of mathematics to one another, relating mathematics to the rest of the learners' experiences, and applying conics to solve real-life problems. / Mathematical Sciences / D.Phil. (Wiskundeonderwys)
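A standard illustration, not drawn from the thesis itself, of the algebra/geometry integration that conics afford: a point $(x, y)$ equidistant from the focus $(0, p)$ and the directrix $y = -p$ satisfies

\[
\sqrt{x^{2} + (y - p)^{2}} \;=\; y + p
\quad\Longrightarrow\quad
x^{2} = 4py,
\]

so the purely geometric locus definition of the parabola lands directly in the quadratic functions of school algebra.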
255

The effect of using Lakatos' heuristic method to teach surface area of cone on students' learning : the case of secondary school mathematics students in Cyprus

Dimitriou-Hadjichristou, Chrysoula 02 1900 (has links)
The purpose of this study was to examine the effect of using Lakatos' heuristic method to teach the surface area of the cone (SAC) on students' learning. The Lakatos (1976) heuristic framework and the Oh (2010) model of the "enhanced conflict map" were employed as the framework for the study. The first research question examined the impact of the Lakatosian heuristic method on students' learning of the SAC, addressed in three sub-questions: the impact of the method on students' achievement, on their conceptual learning, and on their higher order thinking skills. The second question examined whether the heuristic method of teaching the SAC helped students to sustain their learning better than the traditional (Euclidean) method. The third question examined whether the heuristic method of teaching the SAC could change students' readiness level, according to Bloom's taxonomy. A pre-test and post-test quasi-experimental research design was used in the study, which involved a total of 198 Grade 11 students (98 in the experimental group and 100 in the control group) from two schools in Cyprus. The instruments used for data collection were cognitive tests, lesson observations (video-recorded), interviews and a questionnaire. Data were analysed using inferential statistics and the Oh (2010) model of the enhanced conflict map. Student achievement over time was the dependent variable and the teaching method the independent variable. Time was therefore the "within" factor, and each group was measured three times (pre-test, post-test and delayed test). The differences in students' achievement within each group over time were examined. Results indicated that the average mean achievement score of the students in the experimental group was double that of the students in the control group. Oh's (2010) model of the enhanced conflict map showed that students in both groups changed from alternative conceptions to scientific conceptions, with the experimental group showing greater improvement. It was also observed, from the post-test to the delayed test, that the Lakatosian method of teaching the SAC had a significant positive effect on students' achievement at all levels of Bloom's taxonomy, especially at the higher order thinking (HOT) levels (application and analysis-synthesis), as compared to the Euclidean method of teaching. In addition, the Lakatosian method helped the students to sustain their learning over time better than the Euclidean method did, and also helped them to change their readiness level, especially at the HOT levels. The Lakatosian method helped students to foster skills that promote active learning. Of particular importance were the experimental group's use of mathematical language and its enhanced perception, in comparison with the control group, brought about through the Lakatosian method. The results of this study are promising. It is recommended that pre-service teachers be trained in how to effectively implement the Lakatosian heuristic method in their teaching. / Mathematics Education / D. Phil. (Mathematics, Science and Technology Education (Mathematics Education))
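For reference, the formula at the centre of the lessons: unrolling the lateral surface of a cone with base radius $r$ and slant height $l$ gives a circular sector of radius $l$ and arc length $2\pi r$, so that

\[
A_{\text{lateral}} = \tfrac{1}{2}\,(2\pi r)\, l = \pi r l,
\qquad
\mathrm{SAC} = \pi r^{2} + \pi r l = \pi r\,(r + l).
\]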
256

Algebraic and multilinear-algebraic techniques for fast matrix multiplication

Gouaya, Guy Mathias January 2015 (has links)
This dissertation reviews the theory of fast matrix multiplication from a multilinear-algebraic point of view, as well as recent fast matrix multiplication algorithms based on discrete Fourier transforms over finite groups. To this end, the algebraic approach is described in terms of group algebras over groups satisfying the triple product property, and the construction of such groups via uniquely solvable puzzles. The higher order singular value decomposition is an important decomposition of tensors that retains some of the properties of the singular value decomposition of matrices. However, we have proven a novel negative result which demonstrates that the higher order singular value decomposition yields a matrix multiplication algorithm that is no better than the standard algorithm. / Mathematical Sciences / M. Sc. (Applied Mathematics)
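As a point of reference for what "fast" means here, a minimal sketch of the classic construction, Strassen's seven-multiplication scheme for 2×2 matrices; this is illustrative background only, not the dissertation's group-algebra or HOSVD analysis:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's 7 multiplications
    (naive block multiplication needs 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Quick check against the naive product.
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert strassen_2x2(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```

Applied recursively to 2×2 block partitions, the scheme multiplies n×n matrices in O(n^{log2 7}) ≈ O(n^{2.81}) operations instead of O(n^3).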
257

Consumption Euler Equation: The Theoretical and Practical Roles of Higher-Order Moments / 消費尤拉方程式:高階動差的理論與實證重要性

藍青玉, Lan, Ching-Yu Unknown Date (has links)
The theme of this thesis is to explore the importance of higher-order moments in the consumption Euler equation, both theoretically and empirically. Applying log-linearized versions of Euler equations has been a dominant approach to obtaining sensible analytical solutions, and a popular choice of model specification for estimation. By now, however, the literature contains no shortage of conflicting empirical results attributed to the use of this specific version of the Euler equation. Important yet natural questions, such as whether the higher-order moments can be safely ignored, or whether higher-order approximations offer explanations of the stylized facts, remain unanswered. The inquiries in this thesis can thus improve our understanding of consumer behaviour beyond prior studies based on the linear approximation.

1. What Do We Gain from Estimating Euler Equations with Higher-Order Approximations? Despite the importance of estimating structural parameters governing consumption dynamics, such as the elasticity of intertemporal substitution, empirical attempts to unveil these parameters using a log-linearized version of the Euler equation have produced many puzzling results. Some studies show that the approximation bias may well constitute a compelling explanation. Even so, the approximation technique continues to be useful and convenient in estimating the parameters, because noisy consumption data renders a full-fledged GMM estimation unreliable. Motivated by its potential success in reducing the bias, we investigate the economic significance and empirical relevance of higher-order approximations to the Euler equation using simulation methodology. The higher-order approximations suggest a linear relationship between expected consumption growth and its higher-order moments. Our simulation results clearly reveal that the approximation bias can be significantly reduced when the higher-order moments are introduced into the estimation, but at the cost of efficiency loss. This documents a clear trade-off between approximation bias reduction and efficiency loss in the consumption growth regression when higher-order approximations to the Euler equation are considered. A question of immediate practical interest arises: how many higher-order terms are needed? The second part of our Monte Carlo studies deals with this issue. We judge whether a particular consumption moment should be included in the regression by the criterion of mean squared error (MSE), which accounts for the trade-off between estimation bias and efficiency loss; the included moments leading to smaller MSE are regarded as needed. We also investigate the usefulness of model and moment selection criteria in providing guidance on selecting the approximation order. We find that improvements over the second-order approximated Euler equation can always be achieved simply by allowing for the higher-order moments in the consumption regression, with the approximation order selected by these criteria.

2. Uncovering Preference Parameters with the Utilization of Relations between Higher-Order Consumption Moments. Our previous attempt to deliver more desirable estimation performance with higher-order approximations to the consumption Euler equation reveals that the approximation bias can be significantly reduced when the higher-order moments are introduced into the estimation, but at the cost of efficiency loss. The latter results from the difficulty of identifying independent variation in the higher-order moments with the sets of linear instruments, mainly consisting of individual-specific characteristics, that identify variation in consumption growth variability. One major challenge in the study is thus how to obtain quality instruments capable of doing so. Using numerical analysis, we first establish the nonlinear equilibrium relation between consumption risk and the higher-order consumption moments. This nonlinear relation is then utilized to form quality instruments that better capture variation in the higher-order moments. A novelty of this chapter lies in adopting a set of nonlinear instruments to cope with this issue: they are simple moment transformations of the characteristic-related instruments, and thereby easy to obtain in practice. As expected, our simulations demonstrate that, for a comparable amount of bias corrected, applying the nonlinear instruments entails the inclusion of fewer higher-order moments in the estimation. A smaller simulated MSE, revealing the improvement over our previous estimation results, can thus be achieved.

3. Precautionary Saving and Consumption with Borrowing Constraint. This last chapter offers a theoretical underpinning for the importance of the higher-order moments in a simple environment where economic agents have a quadratic-utility preference. The resulting Euler equation gives rise to an essentially linear policy function, or a random-walk consumption rule. The twist in our theory comes from the presence of a borrowing constraint facing consumers. The analysis shows that the presence of the constraint induces precautionary motives for saving as consumers' responses to income uncertainties, even though no such motives are inherent in consumers' preferences. The corresponding value function now displays a convexity property that is otherwise associated only with preferences more general than quadratic utility. The analytical framework allows us to characterize saving behaviours that are of precautionary motive, and their responses to changes in different moments of the income process. As empirical implications, our analysis sheds new light on the causes of excess sensitivity, the consequences of sample splitting between the rich and the poor, and the relevance of the higher-order moments, specifically skewness and kurtosis, to consumption dynamics.
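In sketch form, the object under study is the Euler equation and its approximations; the following are standard CRRA/lognormal textbook expressions, with risk aversion $\gamma$ and discount rate $\delta$, rather than the thesis's exact specification:

\[
u'(C_t) = \beta\, \mathbb{E}_t\!\left[(1+r_{t+1})\, u'(C_{t+1})\right],
\qquad
\mathbb{E}_t\!\left[\Delta \ln C_{t+1}\right] \approx \frac{1}{\gamma}\bigl(\mathbb{E}_t[r_{t+1}] - \delta\bigr) + \frac{\gamma}{2}\, \mathrm{Var}_t\!\bigl(\Delta \ln C_{t+1}\bigr),
\]

with higher-order approximations appending analogous terms in the conditional skewness and kurtosis of consumption growth; the chapters ask when those extra terms matter for estimation.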
258

The Matrix Element Method at next-to-leading order QCD using the example of single top-quark production at the LHC

Martini, Till 10 July 2018 (has links)
Analyses in high energy physics aim to put the Standard Model, the commonly accepted theory, to the test. For convincing conclusions, analysis methods are needed which offer an unambiguous comparison between data and theory while allowing reliable estimates of uncertainties. The Matrix Element Method (MEM) is a maximum likelihood method especially tailored for signal searches and parameter estimation at colliders. The MEM has proven beneficial due to its optimal use of the available information and the clean statistical interpretation of its results. But it has a big drawback: in its original formulation, the likelihood calculation is intrinsically limited to the leading perturbative order in the coupling. Higher-order corrections improve the accuracy of theoretical predictions and allow for an unambiguous field-theoretical interpretation of the extracted information. In this work, the MEM incorporating corrections of next-to-leading order (NLO) in QCD, through event weights defined for the likelihood calculation, is presented for the first time. These weights also enable the generation of unweighted events following the cross section calculated at NLO accuracy. The method is demonstrated for top-quark events: the top-quark mass is determined with the MEM at NLO accuracy from the generated events, and the extracted estimators agree with the input values from the event generation. Repeating the mass determinations from the same events, without NLO corrections in the predictions, results in biased estimators. These shifts are not accounted for by estimated theoretical uncertainties, rendering the estimation of theoretical uncertainties in the leading-order analysis unreliable. The results emphasise the importance of including NLO corrections in the MEM.
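For orientation, the leading-order MEM likelihood has the schematic textbook form below; the thesis's NLO event weights generalize this, and parton distribution functions are folded into the phase-space measure here:

\[
\mathcal{P}(x \mid \alpha) \;=\; \frac{1}{\sigma(\alpha)} \int \mathrm{d}\Phi(y)\, \bigl|\mathcal{M}(y \mid \alpha)\bigr|^{2}\, W(x \mid y),
\qquad
\hat{\alpha} \;=\; \arg\max_{\alpha} \sum_{i} \ln \mathcal{P}(x_i \mid \alpha),
\]

where $y$ runs over partonic configurations, $W(x \mid y)$ is the detector transfer function mapping $y$ to the observed event $x$, and $\sigma(\alpha)$ normalizes the likelihood.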
259

Análise de componentes independentes aplicada à separação de sinais de áudio. / Independent component analysis applied to separation of audio signals.

Moreto, Fernando Alves de Lima 19 March 2008 (has links)
This work studies independent component analysis (ICA) for instantaneous mixtures, applied to audio signal (source) separation. Three instantaneous-mixture separation algorithms are considered: FastICA, PP (Projection Pursuit) and PearsonICA, which share two basic principles: the sources must be statistically independent and non-Gaussian. In order to analyse each algorithm's separation capability, two groups of experiments were carried out. In the first group, instantaneous mixtures were generated synthetically from predefined audio signals; in addition, instantaneous mixtures were generated from synthetic signals with specific characteristics, enabling analysis of the algorithms' behaviour in particular situations. In the second group, convolutive mixtures were generated in the acoustics laboratory of the LPS at EPUSP. The PP algorithm, based on the Projection Pursuit technique usually applied in exploratory and clustering settings, is proposed for the separation of multiple sources as an alternative to conventional ICA. Although the proposed PP algorithm can be applied to separate sources, it cannot be considered an ICA method and source extraction is not guaranteed. Finally, the experiments validate the studied algorithms.
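A minimal sketch of the instantaneous-mixture ICA model using scikit-learn's FastICA implementation, with synthetic waveforms and an arbitrary mixing matrix standing in for the dissertation's audio material:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.0, 1.0, 8000)               # one second at 8 kHz
s1 = np.sin(2 * np.pi * 440 * t)              # a 440 Hz tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))       # a non-Gaussian square wave
S = np.c_[s1, s2]                             # sources, shape (n_samples, 2)

A = np.array([[1.0, 0.6],                     # arbitrary instantaneous mixing matrix
              [0.4, 1.0]])
X = S @ A.T                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                  # sources recovered up to
A_hat = ica.mixing_                           # permutation and scaling
```

As the abstract notes, this instantaneous model does not cover the convolutive mixtures recorded in the laboratory, where each microphone observes filtered, delayed versions of the sources.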
260

Prise en compte de la transition laminaire / turbulent dans un code Navier-Stokes éléments finis non structurés / Automatic prediction of laminar/turbulent transition in an unstructured finite element Navier-Stokes solver

Gross, Raphaël 27 October 2015 (has links)
This thesis presents the state of the art of the laminar/turbulent transition prediction chain developed at Dassault Aviation in the unstructured finite element RANS solver AETHER. Two strategies for estimating the transition location exist: either AETHER is coupled with the ONERA boundary layer code 3C3D, or the transition location is computed directly from the RANS velocity profiles. Both methods were tested for subsonic and transonic flows. The influence of the numerical solvers, the transition onset criteria and the coupling process is studied. The use of higher-order numerical schemes is also considered.
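To illustrate what a transition onset criterion looks like in practice, a sketch using Michel's empirical flat-plate criterion with a Blasius laminar boundary layer; this classic criterion is a stand-in for illustration and is not necessarily among those implemented in AETHER:

```python
import numpy as np

def michel_transition(u_inf, nu, x):
    """Flag laminar/turbulent transition on a flat plate with Michel's
    empirical criterion, using the Blasius laminar boundary layer.

    Blasius momentum thickness:  Re_theta   = 0.664 * sqrt(Re_x)
    Michel's transition limit:   Re_theta_T = 1.174 * (1 + 22400/Re_x) * Re_x**0.46
    """
    re_x = u_inf * x / nu
    re_theta = 0.664 * np.sqrt(re_x)
    re_theta_crit = 1.174 * (1.0 + 22400.0 / re_x) * re_x**0.46
    turbulent = re_theta > re_theta_crit
    return x[turbulent][0] if turbulent.any() else None   # first transitional station

x = np.linspace(0.01, 2.0, 2000)                          # stations along the plate [m]
x_tr = michel_transition(u_inf=50.0, nu=1.5e-5, x=x)      # air at roughly sea level
```

A RANS-coupled chain replaces the Blasius profile with boundary-layer quantities extracted either from 3C3D or directly from the RANS velocity profiles, as described in the abstract.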
