  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Essays in econometric theory

Casalecchi, Alessandro Ribeiro de Carvalho 25 May 2017
The two papers in this thesis, Chapters 2 and 3, concern hypothesis testing but address different issues. Chapter 2, entitled "Improvements for external validity tests in fuzzy regression discontinuity designs", gives conditions (assumptions of continuity, strict monotonicity and pointwise convergence) under which two-sample goodness-of-fit (GOF) tests can be used to test for external validity in treatment-control models that suffer from imperfect compliance of units with respect to the assigned treatment. Under imperfect compliance, researchers can estimate treatment effects only for the subpopulation of compliers, and the validity of these estimates for other subpopulations (always-takers and never-takers) remains an open problem. Under the conditions of Chapter 2, using GOF tests in place of mean-difference tests improves on other external validity tests in the literature, since more alternative hypotheses are detectable by the test statistic. We suggest combining two GOF test statistics (one for the treated and one for the untreated) in a multiple test instead of a joint test.
Chapter 3, entitled "Higher-order UMP tests", suggests a strategy for choosing among candidate test statistics, according to a power criterion, when their power performances cannot be distinguished by the usual methods of asymptotic comparison, such as local power analysis. We propose using higher-order asymptotic expansions, such as Edgeworth expansions, to approximate the sampling densities of the candidate test statistics and to verify which of them has the monotone likelihood ratio property. By the Karlin-Rubin theorem, this property implies that the test is uniformly most powerful (UMP), at least to an order of approximation, provided the statistic is sufficient for the relevant parameter. When the statistics under study are not sufficient, we argue that they can often be made sufficient for a desired parametric family after an appropriate reparameterization. We applied the method to search for the power-optimal bandwidth of the kernel density estimator in simulated data sets, and concluded that the order of approximation we used (second order) is still too low to distinguish among bandwidths.
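To illustrate why a goodness-of-fit statistic detects more alternatives than a mean-difference test, the following sketch (not the thesis's actual procedure; the samples and parameters are made up) compares two samples with equal means but different shapes, which a mean test cannot separate but a two-sample Kolmogorov-Smirnov statistic can:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: sup |F_x - F_y| over the pooled points."""
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.abs(fx - fy).max()

rng = np.random.default_rng(0)
n = 4000
a = rng.normal(0.0, 1.0, n)          # N(0, 1)
b = rng.normal(0.0, 2.0, n)          # N(0, 4): same mean, different shape

mean_gap = abs(a.mean() - b.mean())  # near zero: a mean-difference test sees nothing
d = ks_statistic(a, b)               # clearly positive: the GOF test detects the alternative
```

The same logic carries over to any alternative that changes the distribution without moving its mean.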
282

Kegelsnedes as integrerende faktor in skoolwiskunde / Conic sections as an integrating factor in school mathematics

Stols, Gert Hendrikus 30 November 2003
Text in Afrikaans / Real empowerment of school learners requires preparing them for the age of technology. This empowerment can be achieved by developing their higher-order thinking skills, which is clearly the intention of the proposed South African FET National Curriculum Statements Grades 10 to 12 (Schools). This research shows that one method of developing higher-order thinking skills is to adopt an integrated curriculum approach. The research is based on the assumption that an integrated curriculum approach produces learners with a more integrated knowledge structure, which helps them solve problems requiring higher-order thinking skills. This assumption is realistic, because the empirical results of several comparative research studies show that an integrated curriculum helps to improve learners' ability to use higher-order thinking skills in solving nonroutine problems. The curriculum mentions four kinds of integration, namely integration across different subject areas, integration of mathematics with the real world, integration of algebraic and geometric concepts, and the integration of dynamic geometry software into the learning and teaching of geometry. This research shows that, from a psychological, pedagogical, mathematical and historical perspective, the theme of conic sections can be used as an integrating factor in the newly proposed FET mathematics curriculum. Conics are a powerful tool for making the proposed curriculum more integrated: they can serve as an integrating factor in the FET band through mathematical exploration, visualisation, relating learners' experiences of various parts of mathematics to one another, relating mathematics to the rest of the learners' experiences, and applying conics to solve real-life problems. / Mathematical Sciences / D.Phil. (Wiskundeonderwys)
283

The effect of using Lakatos' heuristic method to teach surface area of cone on students' learning : the case of secondary school mathematics students in Cyprus

Dimitriou-Hadjichristou, Chrysoula 02 1900
The purpose of this study was to examine the effect of using Lakatos' heuristic method to teach the surface area of the cone (SAC) on students' learning. The Lakatos (1976) heuristic framework and Oh's (2010) model of the "enhanced conflict map" were employed as the framework for the study. The first research question examined the impact of the Lakatosian heuristic method on students' learning of the SAC and was addressed in three sub-questions: the impact of the method on students' achievement, on their conceptual learning, and on their higher-order thinking skills. The second question examined whether the heuristic method of teaching the SAC helped students sustain their learning better than the traditional (Euclidean) method. The third question examined whether the heuristic method could change students' readiness level according to Bloom's taxonomy. A pre-test and post-test quasi-experimental research design was used, involving a total of 198 Grade 11 students (98 in the experimental group and 100 in the control group) from two schools in Cyprus. The instruments used for data collection were cognitive tests, video-recorded lesson observations, interviews and a questionnaire. Data were analysed using inferential statistics and Oh's (2010) enhanced conflict map. Student achievement over time was the dependent variable and the teaching method the independent variable; time was the "within" factor, and each group was measured three times (pre-test, post-test and delayed test). The differences in students' achievement within each group over time were examined. Results indicated that the average mean achievement score of students in the experimental group was double that of students in the control group.
The enhanced conflict map showed that students in both groups moved from alternative conceptions to scientific conceptions, with the experimental group showing greater improvement. From the post-test to the delayed test, the Lakatosian method of teaching the SAC had a significant positive effect on students' achievement at all levels of Bloom's taxonomy, especially at the higher-order thinking (HOT) levels (application and analysis-synthesis), compared with the Euclidean method. In addition, the Lakatosian method helped students sustain their learning over time better than the Euclidean method did, and helped them change their readiness level, especially at the HOT levels. The Lakatosian method also fostered skills that promote active learning. Of particular importance were the use of mathematical language and the enhanced perception observed in the experimental group, relative to the control group, through the use of the Lakatosian method. The results of this study are promising, and it is recommended that pre-service teachers be trained in how to implement the Lakatosian heuristic method effectively in their teaching. / Mathematics Education / D. Phil. (Mathematics, Science and Technology Education (Mathematics Education))
284

Algebraic and multilinear-algebraic techniques for fast matrix multiplication

Gouaya, Guy Mathias January 2015
This dissertation reviews the theory of fast matrix multiplication from a multilinear-algebraic point of view, as well as recent fast matrix multiplication algorithms based on discrete Fourier transforms over finite groups. To this end, the algebraic approach is described in terms of group algebras over groups satisfying the triple product property, and the construction of such groups via uniquely solvable puzzles. The higher-order singular value decomposition is an important decomposition of tensors that retains some of the properties of the singular value decomposition of matrices. However, we prove a novel negative result demonstrating that the higher-order singular value decomposition yields a matrix multiplication algorithm that is no better than the standard algorithm. / Mathematical Sciences / M. Sc. (Applied Mathematics)
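For context, the classical starting point that the algebraic approaches reviewed here aim to beat is Strassen's recursion, which multiplies 2x2 block matrices with seven block products instead of eight. A minimal sketch (power-of-two sizes only, with no cutoff to the standard algorithm, so it is illustrative rather than practical):

```python
import numpy as np

def strassen(A, B):
    """Multiply two 2^k x 2^k matrices using Strassen's 7-multiplication recursion."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Applying the recursion gives the O(n^log2(7)) bound; the group-theoretic and tensor methods reviewed in the dissertation generalize this search for low-rank bilinear decompositions.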
285

Le bonheur est dans l'ignorance : logiques épistémiques dynamiques basées sur l'observabilité et leurs applications / Ignorance is bliss : observability-based dynamic epistemic logics and their applications

Maffre, Faustine 23 September 2016
In epistemic logic, knowledge is usually modelled by a graph of possible worlds representing the alternatives to the current state of the world, so that edges between worlds stand for indistinguishability. To know a proposition means that the proposition is true in all possible alternatives. Theoretical computer scientists have, however, noticed that this leads to several issues, both intuitive and technical: the more ignorant an agent is, the more alternatives she must consider, and models may then become too big for system verification. They recently investigated how knowledge could be reduced to the notion of visibility. Intuitively, the basic idea is that when an agent sees something, she knows its truth value; conversely, any combination of truth values of the non-observable variables is possible for the agent. Such observability information allows us to reconstruct the standard semantics of knowledge: two worlds are indistinguishable for an agent if and only if every variable observed by her has the same value in both worlds. We aim to demonstrate that visibility-based epistemic logics provide a suitable tool for several important applications in the field of artificial intelligence.
In the current setting of these logics of visibility, every agent has a set of propositional variables that she can observe, and these visibilities are constant across the model. This comes with a strong assumption: visibilities are known to everyone, and are even common knowledge. Moreover, constructing knowledge from visibility brings about counter-intuitive validities, the most important being that the knowledge operator distributes over disjunctions of literals: if an agent knows that p or q is true, then she knows that p is true or that q is true, because she can see them. In this thesis, we propose solutions to these two problems and illustrate them on various applications, such as epistemic planning and epistemic Boolean games, and on more specific examples such as the muddy children problem and the gossip problem. We moreover study formal properties of the logics we design, providing axiomatizations and complexity results.
286

On the use of Volterra series in structural dynamics: contributions from input-output to output-only analysis and identification / Sobre o uso das séries de Volterra em dinâmica estrutural: contribuições na análise e identificação

Scussel, Oscar [UNESP] 27 March 2017
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Many recent engineering applications involve essentially nonlinear structures, for which several techniques have recently been studied and investigated. Among them, methods based on Volterra series expansions have shown powerful properties for identification and analysis. In this context, the present thesis proposes new contributions on how to use Volterra series for the characterization, identification and dynamical analysis of nonlinear systems, based on input-output signals as well as output-only signals. Initially, a methodology for the analysis of nonlinear mechanical systems through higher-order frequency response functions (HOFRFs) is presented, and the concept of extended HOFRFs based on output-only data is introduced and described in detail. Afterwards, an approach for the identification of nonlinear systems based on Volterra series expanded onto an orthonormal Kautz basis is proposed. This technique makes it possible to identify the Volterra kernels more easily and to separate the contributions of the linear and nonlinear terms using input-output as well as output-only signals. Furthermore, a methodology for the modal analysis of weakly nonlinear systems under multilevel excitation is also proposed; its contribution lies in the fact that the HOFRFs are computed simply as functions of the linear FRFs. Basically, it extends conventional experimental modal analysis methods in order to characterize and treat nonlinear effects. The results, based on numerical and experimental examples presented throughout the thesis, show the contributions, benefits and effectiveness of the proposal. / FAPESP: 2012/09135-3 / CNPq: 47058/2012-0 / CNPq: 203610/2014-8
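A discrete-time second-order Volterra model of the kind underlying this work can be sketched as follows (the kernels here are made-up illustrations; the thesis parameterizes them on a Kautz basis, which is not shown). The key structural fact is that doubling the input doubles the linear contribution but quadruples the quadratic one:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                  # memory length (illustrative)
h1 = np.exp(-0.5 * np.arange(M))       # assumed first-order (linear) kernel
h2 = 0.1 * np.outer(h1, h1)            # assumed symmetric second-order kernel

def volterra2(x, h1, h2):
    """y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]."""
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), x])   # zero-pad the pre-signal past
    y = np.empty(len(x))
    for n in range(len(x)):
        w = xp[n:n + M][::-1]                   # window x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ w + w @ h2 @ w
    return y

x = rng.standard_normal(64)
y = volterra2(x, h1, h2)
y_lin = volterra2(x, h1, np.zeros_like(h2))     # linear contribution alone
```

This homogeneity in the input amplitude is what lets the series separate linear from nonlinear contributions in the responses.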
287

Vícerozměrné bodové procesy a jejich použití na neurofyziologických datech / Multivariate point processes and their application on neurophysiological data

Bakošová, Katarína January 2018
This thesis examines a multivariate point process in time, with a focus on the mutual relations of its marginal point processes. The first chapter acquaints the reader with the theoretical background of multivariate point processes and their properties, especially higher-order cumulant-correlation measures. Later on, several models of multivariate point processes with different dependence structures are characterized, such as the random superposition model, a Poisson dependent superposition point process, a jittered Poisson dependent superposition point process, and renewal process models; simulations of each are provided. Furthermore, two statistical methods for higher-order correlations are presented: cumulant-based inference of higher-order correlations, and the extended tiling coefficient. Finally, the introduced methods are applied not only to simulated data but also to real, simultaneously recorded nerve-cell spike train data, and the results are discussed.
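The random superposition model mentioned above can be sketched as follows (rates and duration are arbitrary illustration values): each train is the union of its own Poisson process with a shared "mother" Poisson process, and the shared points induce positive correlation between the trains' binned counts.

```python
import numpy as np

rng = np.random.default_rng(2)

def superposed_trains(rate_common, rate_own, T, n_trains):
    """Random superposition: each train = own Poisson process + one shared one."""
    common = rng.uniform(0.0, T, rng.poisson(rate_common * T))
    trains = []
    for _ in range(n_trains):
        own = rng.uniform(0.0, T, rng.poisson(rate_own * T))
        trains.append(np.sort(np.concatenate([common, own])))
    return trains

T = 2000.0
t1, t2 = superposed_trains(rate_common=2.0, rate_own=3.0, T=T, n_trains=2)

# Bin the spike times and correlate the counts; the theoretical count
# correlation is rate_common / (rate_common + rate_own) = 0.4 here.
bins = np.arange(0.0, T + 1.0, 1.0)
c1, _ = np.histogram(t1, bins)
c2, _ = np.histogram(t2, bins)
rho = np.corrcoef(c1, c2)[0, 1]
```

Jittering the copied points or replacing the marginal processes with renewal processes gives the model variants listed in the thesis.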
288

Planejamento da expansão de sistemas de transmissão usando os modelos CC - CA e tecnicas de programação não-linear / Transmission systems expansion planning using DC-AC models and non-linear programming techniques

Rider Flores, Marcos Julio, 1975- 22 February 2006
Advisors: Ariovaldo Verandio Garcia, Ruben Augusto Romero Lazaro / Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / In this work, mathematical models and solution techniques are proposed to solve the power system transmission expansion planning problem through three approaches: (a) using the nonlinear (AC) model of the transmission system and a specialized constructive heuristic algorithm to solve the planning problem, together with a first attempt at allocating reactive power sources; (b) using the direct-current (DC) model and specialized nonlinear programming techniques. In this case, a relaxed version of the planning problem is used, in which the integrality of the investment variables is dropped; the resulting nonlinear programming problem, modelled in matrix form, is solved with a specialized optimization algorithm, and a constructive heuristic algorithm is again employed to solve the planning problem; (c) using the DC model and a branch-and-bound (B&B) algorithm without decomposition techniques. The so-called fathoming tests of the B&B algorithm were redefined, and at each node of the B&B tree a nonlinear programming problem is solved using the method developed in (b). Approaches (a), (b) and (c) require the solution of distinct nonlinear programming problems. A review of the main characteristics of the iterative solution of interior point methods is presented. An optimization technique based on a combination of higher-order interior point methods (HO-IPMs) was developed to solve the nonlinear programming problems in a fast, efficient and robust way. This combination aims to bring the particular strengths of each HO-IPM together in a single method and to improve computational performance relative to the individual HO-IPMs. / Doutorado / Energia Eletrica / Doutor em Engenharia Elétrica
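The DC model referred to above linearizes the power flow equations so that bus angles solve a linear system in the line susceptances. A minimal sketch on a made-up 3-bus network (susceptances and injections are illustrative, in per unit; bus 0 is the slack):

```python
import numpy as np

# Hypothetical 3-bus network: (from, to, susceptance) per line.
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 5.0)]
P = np.array([0.0, -0.6, -0.4])   # net injections; the slack supplies the balance

n = 3
B = np.zeros((n, n))              # DC susceptance (B') matrix
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix the slack angle at 0 and solve the reduced system for the others.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences: f_ij = b_ij (theta_i - theta_j).
flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
```

In the expansion planning problem, candidate lines add binary investment variables that scale these susceptances, which is what the relaxation in approach (b) and the B&B search in approach (c) handle.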
289

Análise de componentes independentes aplicada à separação de sinais de áudio. / Independent component analysis applied to separation of audio signals.

Fernando Alves de Lima Moreto 19 March 2008
This work studies independent component analysis (ICA) for instantaneous mixtures, applied to audio signal (source) separation. Three instantaneous-mixture separation algorithms are considered: FastICA, PP (Projection Pursuit) and PearsonICA, which share two basic principles: the sources must be statistically independent and non-Gaussian. In order to analyze each algorithm's separation capability, two groups of experiments were carried out. In the first group, instantaneous mixtures were generated synthetically from predefined audio signals; in addition, instantaneous mixtures were generated synthetically from signals with specific characteristics, to evaluate the behaviour of the algorithms in particular situations. In the second group, convolutive mixtures were recorded in the acoustics laboratory of the LPS at EPUSP. The PP algorithm, based on the projection pursuit technique commonly used in exploratory and clustering settings, is proposed for the separation of multiple sources as an alternative to the ICA model. Although the proposed PP method can be used for source separation, it cannot be considered an ICA method, and source extraction is not guaranteed. Finally, the experiments validate the studied algorithms.
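A minimal numpy-only sketch in the spirit of FastICA (not the exact implementations evaluated here; the mixing matrix, synthetic sources and tanh contrast are illustrative choices) separates two instantaneous mixtures by whitening followed by a symmetric fixed-point iteration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
S = np.vstack([rng.uniform(-1.0, 1.0, n),          # two independent,
               np.sign(rng.standard_normal(n))])   # non-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                         # illustrative mixing matrix
X = A @ S                                          # instantaneous mixtures

# Whitening: zero mean, identity covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)) @ E.T @ Xc

# Symmetric FastICA fixed-point iteration with the tanh contrast:
# w <- E[z g(w'z)] - E[g'(w'z)] w, then re-orthogonalize all rows at once.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / n - np.diag((1.0 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                                     # symmetric decorrelation

Y = W @ Z                                          # recovered sources
# Each row of Y should match one source, up to sign and permutation.
C = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
```

The sign/permutation ambiguity visible in `C` is intrinsic to ICA: independence constrains the sources only up to scaling and ordering.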
290

Esquemas centrais para leis de conservação em meios porosos / Central schemes for conservation laws in porous media

Tristão, Denise Schimitz de Carvalho 30 August 2013
The development of mathematical models and computational methods for the simulation of flow in porous media is of great interest because of its applications in engineering and the applied sciences. In general, in the numerical simulation of a porous-media flow model, the system of partial differential equations that composes it is decoupled. This study focuses on numerical schemes for hyperbolic conservation laws, whose approximation is non-trivial. High-resolution finite volume schemes based on the REA (Reconstruct, Evolve, Average) algorithm have been employed with considerable success for the approximation of conservation laws. Recently, high-order central schemes based on the Lax-Friedrichs and Rusanov (local Lax-Friedrichs) methods have been presented, which reduce the excessive numerical diffusion characteristic of these first-order schemes. In this dissertation we present the study and application of high-order central finite volume schemes for the hyperbolic equations that appear in the modelling of flow in porous media.
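The classical first-order Lax-Friedrichs scheme on which these central schemes build can be sketched for a scalar conservation law u_t + f(u)_x = 0 (the Burgers flux and grid parameters below are chosen purely for illustration):

```python
import numpy as np

def lax_friedrichs_step(u, dx, dt, f):
    """One Lax-Friedrichs step on a periodic grid:
    u_i^{n+1} = (u_{i+1} + u_{i-1})/2 - (dt/2dx)(f(u_{i+1}) - f(u_{i-1}))."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - 0.5 * (dt / dx) * (f(up) - f(um))

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # smooth initial data in [0.5, 1.5]
f = lambda v: 0.5 * v**2                  # Burgers flux

dt = 0.4 * dx / np.abs(u).max()           # CFL-limited time step
mass0 = u.sum() * dx
for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt, f)
# Conservation form: total mass is preserved exactly (to roundoff) on the
# periodic grid, and the monotone scheme obeys a discrete maximum principle.
```

The "excessive numerical diffusion" the dissertation refers to is visible here as smearing of the solution profile; the high-order central schemes recover sharper fronts by replacing the piecewise-constant reconstruction with higher-order ones.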
