501 |
Filter Optimization for Personal Sound Zones Systems. Molés Cases, Vicent. 02 September 2022.
Personal Sound Zones (PSZ) systems deliver different sounds to a number of listeners sharing an acoustic space through the use of loudspeakers together with signal processing techniques. These systems have attracted a lot of attention in recent years because of the wide range of applications that would benefit from the generation of individual listening zones, e.g., domestic or automotive audio applications. A key aspect of PSZ systems, at least for low and mid frequencies, is the optimization of the filters used to process the sound signals. Different algorithms have been proposed in the literature for computing those filters, each exhibiting some advantages and disadvantages. In this work, the state-of-the-art algorithms for PSZ systems are reviewed, and their performance in a reverberant environment is evaluated. Aspects such as the acoustic isolation between zones, the reproduction error, the energy of the filters, and the delay of the system are considered in the evaluations. Furthermore, computationally efficient strategies for obtaining the filters are studied, and their computational complexity is compared. The performance and computational evaluations reveal the main limitation of the state-of-the-art algorithms: the existing solutions cannot offer both low computational complexity and good performance at short system delays. Thus, a novel algorithm based on subband filtering that mitigates these limitations is proposed for PSZ systems. In addition, the proposed algorithm offers more versatility than the existing algorithms, since different system configurations, such as different filter lengths or sets of loudspeakers, can be used in each subband. The proposed algorithm is evaluated experimentally in a reverberant environment, and its efficacy in mitigating the limitations of the existing solutions is demonstrated. Finally, the effect of the target responses on the optimization is discussed, and a novel approach based on windowing the target responses is proposed. The proposed approach is evaluated experimentally in two rooms with different reverberation levels; the results reveal that an appropriate windowing of the target responses can reduce the interference level between zones. / Molés Cases, V. (2022). Filter Optimization for Personal Sound Zones Systems [Doctoral thesis]. Universitat Politècnica de València.
https://doi.org/10.4995/Thesis/10251/186111
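At low and mid frequencies, such filters are commonly obtained by solving a regularized least-squares problem per frequency bin, trading reproduction accuracy in the bright zone against leakage into the dark zone. The sketch below illustrates that generic pressure-matching formulation, not the subband algorithm proposed in the thesis; the dimensions, weighting and regularization values are invented for illustration.

```python
import numpy as np

def pressure_matching_filters(H_bright, H_dark, d, kappa=1.0, lam=1e-3):
    """Regularized least-squares source weights for one frequency bin.

    H_bright : (Mb, L) transfer functions loudspeakers -> bright-zone mics
    H_dark   : (Md, L) transfer functions loudspeakers -> dark-zone mics
    d        : (Mb,)  target pressures in the bright zone
    kappa    : weight on dark-zone (interference) energy
    lam      : Tikhonov regularization on array effort
    """
    L = H_bright.shape[1]
    A = (H_bright.conj().T @ H_bright
         + kappa * H_dark.conj().T @ H_dark
         + lam * np.eye(L))
    b = H_bright.conj().T @ d
    return np.linalg.solve(A, b)

# toy example: 8 loudspeakers, 4 microphones per zone, random acoustic paths
rng = np.random.default_rng(0)
Hb = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
Hd = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
d = np.ones(4, dtype=complex)          # flat target response in the bright zone
w = pressure_matching_filters(Hb, Hd, d)
contrast = (np.linalg.norm(Hb @ w) ** 2) / (np.linalg.norm(Hd @ w) ** 2)
print(f"acoustic contrast: {10 * np.log10(contrast):.1f} dB")
```

Raising kappa pushes energy out of the dark zone at the cost of bright-zone accuracy, which is the trade-off the abstract's evaluation criteria (isolation vs. reproduction error vs. filter energy) quantify.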
|
502 |
Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation. Vitale, Raffaele. 03 November 2017.
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest:
Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA;
Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different application scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments;
Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose;
Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented;
Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed;
Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included. / Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
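The component-selection procedure of Part III rests on permutation testing. As a rough sketch of the general idea rather than the exact algorithm developed in the thesis: a component is retained while its singular value exceeds the corresponding quantile of a null distribution obtained by permuting each column of the data independently. All names and settings below are illustrative.

```python
import numpy as np

def pca_components_by_permutation(X, n_perm=200, alpha=0.05, seed=0):
    """Estimate the number of significant PCA components by permutation.

    Permuting each column independently destroys between-variable
    correlation; a component is kept while its singular value exceeds
    the (1 - alpha) quantile of the permuted singular values.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    sv = np.linalg.svd(Xc, compute_uv=False)
    n_keep = 0
    for k in range(len(sv)):
        null = np.empty(n_perm)
        for p in range(n_perm):
            Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
            null[p] = np.linalg.svd(Xp, compute_uv=False)[k]
        if sv[k] > np.quantile(null, 1 - alpha):
            n_keep += 1
        else:
            break
    return n_keep

# toy data with 3 latent factors plus noise
rng = np.random.default_rng(1)
T, P = rng.standard_normal((100, 3)), rng.standard_normal((3, 20))
X = T @ P + 0.1 * rng.standard_normal((100, 20))
print(pca_components_by_permutation(X))  # expected: 3
```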
|
503 |
Development of general finite differences for complex geometries using immersed boundary method. Vasyliv, Yaroslav V. 07 January 2016.
In meshfree methods, partial differential equations are solved on an unstructured cloud of points distributed throughout the computational domain. In collocated meshfree methods, the differential operators are directly approximated at each grid point based on a local cloud of neighboring points. The set of neighboring nodes used to construct the local approximation is determined using a variable search radius. The variable search radius establishes an implicit nodal connectivity and hence a mesh is not required. As a result, meshfree methods have the potential flexibility to handle problem sets where the computational grid may undergo large deformations as well as where the grid may need to undergo adaptive refinement. In this work we develop the sharp interface formulation of the immersed boundary method for collocated meshfree approximations. We use the framework to implement three meshfree methods: General Finite Differences (GFD), Smoothed Particle Hydrodynamics (SPH), and Moving Least Squares (MLS). We evaluate the numerical accuracy and convergence rate of these methods by solving the 2D Poisson equation. We demonstrate that GFD is computationally more efficient than MLS and show that its accuracy is superior to a popular corrected form of SPH and comparable to MLS. We then use GFD to solve several canonical steady-state fluid flow problems on meshfree grids generated using a Poisson-disk algorithm with uniform and variable radii.
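All three collocated methods reduce, at each node, to estimating differential operators from a weighted local cloud of neighbours. The sketch below shows that shared kernel step as a first-order weighted least-squares gradient fit; the GFD stencils in the thesis extend the same Taylor-series idea to higher-order terms, and the weight function and radius used here are arbitrary choices.

```python
import numpy as np

def wls_gradient(x0, pts, f0, f, weight_radius):
    """Weighted least-squares estimate of grad f at x0 from a local cloud.

    Solves the overdetermined Taylor system  f_i - f0 ~ (x_i - x0) . g
    with Gaussian distance weights, the core step shared by GFD/MLS-type
    collocated meshfree methods.
    """
    dx = pts - x0                              # (n, 2) offsets to neighbours
    w = np.exp(-(np.linalg.norm(dx, axis=1) / weight_radius) ** 2)
    A = dx * w[:, None]                        # weighted design matrix
    b = (f - f0) * w                           # weighted function increments
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# test on f(x, y) = x^2 + 3y, whose gradient at (0.5, 0.5) is (1, 3)
rng = np.random.default_rng(2)
x0 = np.array([0.5, 0.5])
pts = x0 + 0.05 * rng.standard_normal((12, 2))
fv = pts[:, 0] ** 2 + 3 * pts[:, 1]
f0 = x0[0] ** 2 + 3 * x0[1]
print(wls_gradient(x0, pts, f0, fv, weight_radius=0.1))  # approx [1.0, 3.0]
```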
|
504 |
Non-global regression modelling. Huang, Yunkai. 21 June 2016.
In this dissertation, a new non-global regression model - the partial linear threshold regression model (PLTRM) - is proposed. Various issues related to the PLTRM are discussed.
In the first main section of the dissertation (Chapter 2), we define what is meant by the term “non-global regression model”, and we provide a brief review of the current literature associated with such models. In particular, we focus on their advantages and disadvantages in terms of their statistical properties. Because there are some weaknesses in the existing non-global regression models, we propose the PLTRM. The PLTRM combines non-parametric modelling with the traditional threshold regression models (TRMs), and hence can be thought of as an extension of the latter models. We verify the performance of the PLTRM through a series of Monte Carlo simulation experiments. These experiments use a simulated data set that exhibits partly linear and partly nonlinear characteristics, and the PLTRM outperforms several competing parametric and non-parametric models in terms of the Mean Squared Error (MSE) of the within-sample fit.
In the second main section of this dissertation (Chapter 3), we propose a method of estimation for the PLTRM. This requires estimating the parameters of the parametric part of the model; estimating the threshold; and fitting the non-parametric component of the model. An “unbalanced penalized least squares” approach is used. This involves using restricted penalized regression spline and smoothing spline techniques for the non-parametric component of the model; the least squares method for the linear parametric part of the model; together with a search procedure to estimate the threshold value. This estimation procedure is discussed for three mutually exclusive situations, which are classified according to the way in which the two components of the PLTRM “join” at the threshold. Bootstrap sampling distributions of the estimators are provided using the parametric bootstrap technique. The various estimators appear to have good sampling properties in most of the situations that are considered. Inference issues such as hypothesis testing and confidence interval construction for the PLTRM are also investigated.
In the third main section of the dissertation (Chapter 4), we illustrate the usefulness of the PLTRM, and the application of the proposed estimation methods, by modelling various real-world data sets. These examples demonstrate both the good statistical performance and the great application potential of the PLTRM.
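As a deliberately simplified illustration of the threshold idea, a two-regime linear threshold regression can be estimated by profiling the threshold over a grid with ordinary least squares within each regime. This omits the PLTRM's penalized spline component and the bootstrap inference; every name and setting is invented for the example.

```python
import numpy as np

def fit_threshold_regression(x, y, n_grid=50):
    """Grid-search estimate of a two-regime threshold regression.

    Fits separate OLS lines below and above each candidate threshold and
    keeps the split with the smallest total residual sum of squares --
    a simplified parametric cousin of the PLTRM (no spline component).
    """
    grid = np.quantile(x, np.linspace(0.1, 0.9, n_grid))
    best_rss, best_tau = np.inf, None
    for tau in grid:
        lo, hi = x <= tau, x > tau
        if lo.sum() < 3 or hi.sum() < 3:
            continue
        rss = 0.0
        for m in (lo, hi):
            X = np.column_stack([np.ones(m.sum()), x[m]])
            beta, res, *_ = np.linalg.lstsq(X, y[m], rcond=None)
            rss += res[0] if res.size else np.sum((y[m] - X @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_tau = rss, tau
    return best_tau

# simulated data with a true regime change at x = 0
rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 300)
y = np.where(x <= 0, 1 + 0.5 * x, 2.5 * x) + 0.1 * rng.standard_normal(300)
print(fit_threshold_regression(x, y))  # close to 0
```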
|
505 |
Customer perceived value: reconceptualisation, investigation and measurement. Bruce, Helen Louise. January 2013.
The concept of customer perceived value occupies a prominent position within the strategic agenda of organisations, as firms seek to maximise the value perceived by their customers as arising from their consumption, and to equal or exceed that perceived in relation to competitor propositions. Customer value management is similarly central to the marketing discipline. However, the nature of customer value remains ambiguous and its measurement is typically flawed, due to the poor conceptual foundation upon which previous research endeavours are built. This investigation seeks to address the current poverty of insight regarding the nature and measurement of customer value. The development of a revised conceptual framework synthesises the strengths of previous value conceptualisations while addressing many of their limitations. A multi-dimensional depiction of value arising from customer experience is presented, in which value is conceptualised as arising at both first-order dimension and overall, second-order levels of abstraction. The subsequent operationalisation of this conceptual framework within a two-phase investigation combines qualitative and quantitative methodologies in a study of customer value arising from subscription TV (STV) consumption. Sixty semi-structured interviews with 103 existing STV customers give rise to a multi-dimensional model of value, in which dimensions are categorised as restorative, actualising and hedonic in type, and as arising via individual, reflected or shared modes of perception. The quantitative investigation entails two periods of data collection via questionnaires developed from the qualitative findings, and the gathering of 861 responses, also from existing STV customers. A series of scales with which to measure value dimensions is developed and an index enabling overall perceived value measurement is produced. Contributions to the theory of customer value arise in the form of enhanced insights regarding its nature. At the first-order dimension level, the derived dimensions are of specific relevance to the STV industry. However, the empirically derived framework of dimension types and modes of perception has potential applicability in multiple contexts. At the more abstract, second-order level, the findings highlight that value perceptions comprise only a subset of potential dimensions. Evidence is thus presented of the need to consider value at both dimension and overall levels of perception. Contributions to knowledge regarding customer value measurement also arise, as the study produces reliable and valid scales and an index. This latter tool is novel in its formative measurement of value as a second-order construct, comprising numerous first-order dimensions of value, rather than quality as incorporated in previously derived measures. This investigation also results in a contribution to theory regarding customer experience through the identification of a series of holistic, discrete, direct and indirect value-generating interactions. Contributions to practice within the STV industry arise as the findings present a solution to the immediate need for enhanced value insight. Contributions to alternative industries are methodological, as this study presents a detailed process through which robust value insight can be derived. Specific methodological recommendations arise in respect of the need for empirically grounded research, an experiential focus and a two-stage quantitative methodology.
|
506 |
On the existence and enumeration of sets of two or three mutually orthogonal Latin squares with application to sports tournament scheduling. Kidd, Martin Philip. 03 1900.
Thesis (PhD)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: A Latin square of order n is an n×n array containing an arrangement of n distinct symbols with the property that every row and every column of the array contains each symbol exactly once. It is well known that Latin squares may be used for the purpose of constructing designs which require a balanced arrangement of a set of elements subject to a number of strict constraints. An important application of Latin squares arises in the scheduling of various types of balanced sports tournaments, the simplest example of which is a so-called round-robin tournament, a tournament in which each team opposes each other team exactly once.

Among the various applications of Latin squares to sports tournament scheduling, the problem of scheduling special types of mixed doubles tennis and table tennis tournaments using special sets of three mutually orthogonal Latin squares is of particular interest in this dissertation. A so-called mixed doubles table tennis (MDTT) tournament comprises two teams, both consisting of men and women, competing in a mixed doubles round-robin fashion, and it is known that any set of three mutually orthogonal Latin squares may be used to obtain a schedule for such a tournament. A more interesting sports tournament design, however, and one that has been sought by sports clubs in at least two reported cases, is known as a spouse-avoiding mixed doubles round-robin (SAMDRR) tournament, and it is known that such a tournament may be scheduled using a self-orthogonal Latin square with a symmetric orthogonal mate (SOLSSOM).

These applications have given rise to a number of important unsolved problems in the theory of Latin squares, the most celebrated of which is the question of whether or not a set of three mutually orthogonal Latin squares of order 10 exists. Another open question is whether or not SOLSSOMs of orders 10 and 14 exist. A further problem in the theory of Latin squares that has received considerable attention in the literature is the problem of counting the number of (essentially) different ways in which a set of elements may be arranged to form a Latin square, i.e. the problem of enumerating Latin squares and equivalence classes of Latin squares of a given order. This problem quickly becomes extremely difficult as the order of the Latin square grows, and considerable computational power is often required for this purpose. In the literature on Latin squares only a small number of equivalence classes of self-orthogonal Latin squares (SOLS) have been enumerated, namely the number of distinct SOLS, the number of idempotent SOLS and the number of isomorphism classes generated by idempotent SOLS of orders 4 ≤ n ≤ 9. Furthermore, only a small number of equivalence classes of ordered sets of k mutually orthogonal Latin squares (k-MOLS) of order n have been enumerated in the literature, namely main classes of 2-MOLS of order n for 3 ≤ n ≤ 8 and isotopy classes of 8-MOLS of order 9. No enumeration work on SOLSSOMs appears in the literature.

In this dissertation a methodology is presented for enumerating equivalence classes of Latin squares using a recursive, backtracking tree-search approach which attempts to eliminate redundancy in the search by only considering structures which have the potential to be completed to well-defined class representatives. This approach ensures that the enumeration algorithm only generates one Latin square from each of the classes to be enumerated, thus also generating a repository of class representatives of these classes. These class representatives may be used in conjunction with various well-known enumeration results from the theory of groups and group actions in order to determine the number of Latin squares in each class as well as the numbers of various kinds of subclasses of each class.

This methodology is applied in order to enumerate various equivalence classes of SOLS and SOLSSOMs of orders up to and including order 10 and various equivalence classes of k-MOLS of orders up to and including order 8. The known numbers of distinct SOLS, idempotent SOLS and isomorphism classes generated by idempotent SOLS are verified for orders 4 ≤ n ≤ 9, and in addition the numbers of isomorphism classes, transpose-isomorphism classes and RC-paratopism classes of SOLS of these orders are enumerated. The search is further extended to determine the numbers of these classes for SOLS of order 10 via a large parallelisation of the backtracking tree-search algorithm on a number of processors. The RC-paratopism class representatives of SOLS thus generated are then utilised for the purpose of enumerating SOLSSOMs, while existing repositories of symmetric Latin squares are also used for this purpose as a means of validating the enumeration results. In this way distinct SOLSSOMs, standard SOLSSOMs, transpose-isomorphism classes of SOLSSOMs and RC-paratopism classes of SOLSSOMs are enumerated, and a repository of RC-paratopism class representatives of SOLSSOMs is also produced. The known number of main classes of 2-MOLS of orders 3 ≤ n ≤ 8 is verified in this dissertation, and in addition the numbers of main classes of k-MOLS of orders 3 ≤ n ≤ 8 are determined for 3 ≤ k ≤ n − 1. Other equivalence classes of k-MOLS of order n that are enumerated include distinct k-MOLS and reduced k-MOLS of orders 3 ≤ n ≤ 8 for 2 ≤ k ≤ n − 1.

Finally, a filtering method is employed to verify whether any SOLS of order 10 satisfies two basic necessary conditions for admitting a common orthogonal mate with its transpose, and it is found via a computer search that only four of the 121 642 class representatives of RC-paratopism classes of SOLS satisfy these conditions. It is further verified that none of these four SOLS admits a common orthogonal mate with its transpose. By this method the spectrum of resolved orders in terms of the existence of SOLSSOMs is improved in that the non-existence of such designs of order 10 is established, thereby resolving a longstanding open existence question in the theory of Latin squares. Furthermore, this result establishes a new necessary condition for the existence of a set of three mutually orthogonal Latin squares of order 10, namely that such a set cannot contain a SOLS and its transpose.
a set cannot contain a SOLS and its transpose / AFRIKAANSE OPSOMMING: ’n Latynse vierkant van orde n is ’n n × n skikking van n simbole met die eienskap dat elke ry
en elke kolom van die skikking elke element presies een keer bevat. Dit is welbekend dat
Latynse vierkante gebruik kan word in die konstruksie van ontwerpe wat vra na ’n gebalanseerde
rangskikking van ’n versameling elemente onderhewig aan ’n aantal streng beperkings.
’n Belangrike toepassing van Latynse vierkante kom in die skedulering van verskeie spesiale
tipes gebalanseerde sporttoernooie voor, waarvan die eenvoudigste voorbeeld ’n sogenaamde
rondomtalietoernooi is — ’n toernooi waarin elke span elke ander span presies een keer teenstaan.
Onder die verskeie toepassings van Latynse vierkante in sporttoernooi-skedulering, is die probleem
van die skedulering van spesiale tipes gemengde dubbels tennis- en tafeltennistoernooie
deur gebruikmaking van spesiale versamelings van drie paarsgewys-ortogonale Latynse vierkante
in hierdie proefskrif van besondere belang. In sogenaamde gemengde dubbels tafeltennis (GDTT)
toernooi ding twee spanne, elk bestaande uit mans en vrouens, op ’n gemengde-dubbels rondomtalie
wyse mee, en dit is bekend dat enige versameling van drie paarsgewys-ortogonale Latynse
vierkante gebruik kan word om ’n skedule vir s´o ’n toernooi op te stel. ’n Meer interessante
sporttoernooi-ontwerp, en een wat al vantevore in minstens twee gerapporteerde gevalle deur
sportklubs benodig is, is egter ’n gade-vermydende gemengde-dubbels rondomtalie (GVGDR)
toernooi, en dit is bekend dat s´o ’n toernooi geskeduleer kan word deur gebruik te maak van ’n
self-ortogonale Latynse vierkant met ’n simmetriese ortogonale maat (SOLVSOM).
Hierdie toepassings het tot ’n aantal belangrike onopgeloste probleme in die teorie van Latynse
vierkante gelei, waarvan die mees beroemde die vraag na die bestaan van ’n versameling van
drie paarsgewys ortogonale Latynse vierkante van orde 10 is. Nog ’n onopgeloste probleem
is die vraag na die bestaan van SOLVSOMs van ordes 10 en 14. ’n Verdere probleem in die
teorie van Latynse vierkante wat aansienlik aandag in die literatuur geniet, is die bepaling
van die getal (essensieel) verskillende maniere waarop ’n versameling elemente in ’n Latynse
vierkant gerangskik kan word, m.a.w. die probleem van die enumerasie van Latynse vierkante
en ekwivalensieklasse van Latynse vierkante van ’n gegewe orde. Hierdie probleem raak vinnig
baie moeilik soos die orde van die Latynse vierkant groei, en aansienlike berekeningskrag word
dikwels hiervoor benodig. Sover is slegs ’n klein aantal ekwivalensieklasse van self-ortogonale
Latynse vierkante (SOLVe) in die literatuur getel, naamlik die getal verskillende SOLVe, die getal
idempotente SOLVe en die getal isomorfismeklasse voortgebring deur idempotente SOLVe van
ordes 4 n 9. Verder is slegs ’n klein aantal ekwivalensieklasse van geordende versamelings
van k onderling ortogonale Latynse vierkante (k-OOLVs) in die literatuur getel, naamlik die
getal hoofklasse voortgebring deur 2-OOLVs van orde n vir 3 n 8 en die getal isotoopklasse
voortgebring deur 8-OOLVs van orde 9. Daar is geen enumerasieresultate oor SOLVSOMs in
die literatuur beskikbaar nie.
In hierdie proefskrif word ’n metodologie vir die enumerasie van ekwivalensieklasse van Latynse
vierkante met behulp van ’n soekboomalgoritme met terugkering voorgestel. Hierdie algoritme
poog om oorbodigheid in die soektog te minimeer deur net strukture te oorweeg wat die potensiaal
het om tot goed-gedefinieerde klasleiers opgebou te word. Hierdie eienskap verseker dat
die algoritme slegs een Latynse vierkant binne elk van die klasse wat getel word, genereer, en
dus word ’n databasis van verteenwoordigers van hierdie klasse sodoende opgebou. Hierdie
klasverteenwoordigers kan tesame met verskeie welbekende groepteoretiese telresultate gebruik
word om die getal Latynse vierkante in elke klas te bepaal, asook die getal verskeie deelklasse
van verskillende tipes binne elke klas.
Die bogenoemde metodologie word toegepas om verskeie SOLV- en SOLVSOM-klasse van ordes
kleiner of gelyk aan 10 te tel, asook om k-OOLV-klasse van ordes kleiner of gelyk aan 8
te tel. Die getal verskillende SOLVe, idempotente SOLVe en isomorfismeklasse voortgebring
deur SOLVe word vir ordes 4 n 9 geverifieer, en daarbenewens word die getal isomorfismeklasse,
transponent-isomorfismeklasse en RC-paratoopklasse voortgebring deur SOLVe van
hierdie ordes ook bepaal. Die soektog word deur middel van ’n groot parallelisering van die
soekboomalgoritme op ’n aantal rekenaars ook uitgebrei na die tel van hierdie klasse voortgebring
deur SOLVe van orde 10. Die verteenwoordigers van RC-paratoopklasse voortgebring
deur SOLVe wat deur middel van hierdie algoritme gegenereer word, word dan gebruik om
SOLVSOMs te tel, terwyl bestaande databasisse van simmetriese Latynse vierkante as validasie
van die resultate ook vir hierdie doel ingespan word. Op hierdie manier word die getal
verskillende SOLVSOMs, standaardvorm SOLVSOMs, transponent-isomorfismeklasse voortgebring
deur SOLVSOMs asook RC-paratoopklasse voortgebring deur SOLVSOMs bepaal, en
word ’n databasis van verteenwoordigers van RC-paratoopklasse voortgebring deur SOLVSOMs
ook opgebou. Die bekende getal hoofklasse voortgebring deur 2-OOLVs van ordes 3 n 8
word in hierdie proefskrif geverifieer, en so ook word die getal hoofklasse voortgebring deur k-
OOLVs van ordes 3 n 8 bepaal, waar 3 k n−1. Ander ekwivalensieklasse voortgebring
deur k-OOLVs van orde n wat ook getel word, sluit in verskillende k-OOLVs en gereduseerde
k-OOLVs van ordes 3 n 8, waar 2 k n − 1.
Laastens word daar van ’n filtreer-metode gebruik gemaak om te bepaal of enige SOLV van
orde 10 twee basiese nodige voorwaardes om ’n ortogonale maat met sy transponent te deel
kan bevredig, en daar word gevind dat slegs vier van die 121 642 klasverteenwoordigers van
RC-paratoopklasse voortgebring deur SOLVe van orde 10 aan hierdie voorwaardes voldoen.
Dit word verder vasgestel dat geeneen van hierdie vier SOLVe ortogonale maats in gemeen
met hul transponente het nie. Die spektrum van afgehandelde ordes in terme van die bestaan
van SOLVSOMs word dus vergroot deur aan te toon dat geen sulke ontwerpe van orde 10
bestaan nie, en sodoende word ’n jarelange oop bestaansvraag in die teorie van Latynse vierkante
beantwoord. Verder bevestig hierdie metode ’n nuwe noodsaaklike bestaansvoorwaarde vir ’n
versameling van drie paarsgewys-ortogonale Latynse vierkante van orde 10, naamlik dat s´o ’n
versameling nie ’n SOLV en sy transponent kan bevat nie. / Harry Crossley Foundation / National Research Foundation
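To make the objects concrete: the classical finite-field construction yields a complete family of n − 1 mutually orthogonal Latin squares whenever n is prime, and orthogonality is checked by the defining property that superimposing two squares produces every ordered symbol pair exactly once. The sketch below is that textbook construction only; no set of even three MOLS of order 10 is known, which is precisely the open question discussed above.

```python
import numpy as np
from itertools import combinations

def mols_prime(n):
    """Construct n - 1 mutually orthogonal Latin squares of prime order n
    via L_k[i][j] = (k*i + j) mod n, the classical finite-field family."""
    i, j = np.indices((n, n))
    return [(k * i + j) % n for k in range(1, n)]

def orthogonal(A, B):
    """Two Latin squares are orthogonal iff superimposing them yields
    every ordered symbol pair exactly once."""
    n = A.shape[0]
    return len({(a, b) for a, b in zip(A.ravel(), B.ravel())}) == n * n

squares = mols_prime(5)
print(all(orthogonal(A, B) for A, B in combinations(squares, 2)))  # True
```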
|
507 |
Provable alternating minimization for non-convex learning problems. Netrapalli, Praneeth Kumar. 17 September 2014.
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets, so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables, and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. based on convex optimization) that do have guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low-rank matrix completion, phase retrieval and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to doing so randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice.
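In the matrix-completion setting the two variable sets are the low-rank factors: with one factor held fixed, updating the other is a simple ridge-regularized least-squares problem. The sketch below shows that alternating scheme; the plain SVD of the zero-filled matrix stands in for the more careful spectral initialization analysed in the thesis, and the sizes, rank and regularizer are invented.

```python
import numpy as np

def altmin_complete(M, mask, rank, n_iter=50, lam=1e-6):
    """Alternating least squares for low-rank matrix completion.

    Holds V fixed and solves a ridge-regularized least-squares problem
    for each row of U, then vice versa -- each subproblem is convex even
    though the joint problem is not.
    """
    m, n = M.shape
    U, s, Vt = np.linalg.svd(np.where(mask, M, 0.0), full_matrices=False)
    U, V = U[:, :rank] * s[:rank], Vt[:rank].T
    for _ in range(n_iter):
        for i in range(m):                      # update rows of U
            cols = mask[i]
            G = V[cols].T @ V[cols] + lam * np.eye(rank)
            U[i] = np.linalg.solve(G, V[cols].T @ M[i, cols])
        for j in range(n):                      # update rows of V
            rows = mask[:, j]
            G = U[rows].T @ U[rows] + lam * np.eye(rank)
            V[j] = np.linalg.solve(G, U[rows].T @ M[rows, j])
    return U @ V.T

# rank-3 ground truth, half the entries observed
rng = np.random.default_rng(4)
A, B = rng.standard_normal((30, 3)), rng.standard_normal((3, 20))
M = A @ B
mask = rng.uniform(size=M.shape) < 0.5
M_hat = altmin_complete(M, mask, rank=3)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # small relative error
```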
|
508 |
A systems engineering approach to metallurgical accounting of integrated smelter complexes. Mtotywa, Busisiwe Percelia; Lyman, G. J. 12 1900.
Thesis (PhD)--Stellenbosch University, 2008. / ENGLISH ABSTRACT: The growing need to improve accounting accuracy and precision and to standardise generally accepted measurement methods in the mining and processing industries has led to the joining of a number of organisations under the AMIRA International umbrella, with the purpose of fulfilling these objectives. As part of this venture, Anglo Platinum undertook a project on the material balancing around its largest smelter, the Waterval Smelter.

The primary objective of the project was to perform a statistical material balance around the Waterval Smelter using the Maximum Likelihood method with respect to platinum, rhodium, nickel, sulphur and chromium (III) oxide. Pt, Rh and Ni were selected for their significant contribution to the company's profit margin, whilst S was included because of its environmental importance. Cr2O3 was included because of the difficulties its presence poses in the smelting of PGMs.

The objective was achieved by performing a series of statistical computations: quantification of total and analytical uncertainties, detection of outliers, estimation and modelling of daily and monthly measurement uncertainties, parameter estimation and data reconciliation. Comparisons were made between the Maximum Likelihood and Least Squares methods.

Total uncertainties associated with the daily grades were determined by use of variographic studies. The estimated Pt standard deviations were within 10% relative to the respective average grades, with a few exceptions. The total uncertainties were split into their respective components by determining analytical variances from analytical replicates. The results indicated that the sampling components of the total uncertainty were generally larger than their analytical counterparts. WCM, the platinum-rich Waterval Smelter product, has an uncertainty that is worth ~R2 103 000 in its daily Pt grade. This estimated figure shows that the quality of measurements does not only affect the accuracy of metal accounting, but can have considerable implications if not quantified and managed.

The daily uncertainties were estimated using Kriging and bootstrapped to obtain estimates for the monthly uncertainties. Distributions were fitted using MLE on the distribution fitting tool of the JMP 6.0 programme and goodness-of-fit tests were performed. The data were fitted with normal and beta distributions, and there was a notable decrease in the skewness from the daily to the monthly data.

The reconciliation of the data was performed using the Maximum Likelihood method and comparing it with the widely used Least Squares method. The Maximum Likelihood and Least Squares adjustments were performed on simulated data in order to conduct a test of accuracy and to determine the extent of error reduction after the reconciliation exercise. The test showed that the two methods had comparable accuracies and error-reduction capabilities. However, it was shown that modelling of uncertainties with the unbounded normal distribution does lead to the estimation of adjustments so large that negative adjusted values result. The benefit of modelling the uncertainties with a bounded distribution, the beta distribution in this case, is that the possibility of obtaining negative adjusted values is eliminated. ML-adjusted values (beta) will always be non-negative, and therefore feasible. In a further comparison of the ML (bounded model) and LS methods in the material balancing of the Waterval Smelter complex, it was found that for all those streams whose uncertainties were modelled with a beta distribution, i.e. those whose distribution possessed some degree of skewness, the ML adjustments were significantly smaller than their LS counterparts.

It is therefore concluded that the Maximum Likelihood method (with bounded models) is a rigorous alternative to the LS method for data reconciliation, with the following benefits:
-- Better estimates, due to the fact that the nature of the data (distribution) is not assumed but determined through distribution fitting and parameter estimation
-- Adjusted values can never be negative, due to the bounded nature of the distribution

The novel contributions made in this thesis are as follows:
-- The Maximum Likelihood method was for the first time employed in the material balancing of non-normally distributed data and compared with the well-known Least Squares method
-- Geostatistical methods were integrated with data reconciliation in an original way to quantify and predict measurement uncertainties
-- For the first time, measurement uncertainties were modelled with a distribution that was non-normal and bounded in nature, leading to smaller adjustments
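The Least Squares benchmark in these comparisons is the classical reconciliation adjustment: perturb the raw measurements as little as possible, in the Mahalanobis sense, so that the balance equations hold exactly. A minimal sketch under an assumed linear balance follows; the stream values, variances and single constraint row are invented, and the thesis's ML variant differs by fitting bounded (beta) error models rather than the implicit normal one.

```python
import numpy as np

def reconcile_ls(m, V, A):
    """Least-squares data reconciliation under linear balance constraints.

    Adjusts raw measurements m (with covariance V) minimally in the
    Mahalanobis sense so that the mass-balance equations A x = 0 hold
    exactly: x = m - V A' (A V A')^{-1} A m.
    """
    AV = A @ V
    return m - V @ A.T @ np.linalg.solve(AV @ A.T, A @ m)

# toy smelter node: feed - concentrate - tailings = 0 (one balance row)
m = np.array([100.0, 63.0, 39.0])   # raw flows, slightly inconsistent
V = np.diag([1.0, 0.5, 2.0]) ** 2   # measurement variances
A = np.array([[1.0, -1.0, -1.0]])
x = reconcile_ls(m, V, A)
print(x, A @ x)                     # adjusted flows, balanced to ~0
```

Note how the least-reliable stream (largest variance) absorbs most of the adjustment; with an unbounded normal model nothing prevents such an adjustment from driving a small flow negative, which is the failure mode the bounded ML approach removes.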
|
509 |
An unstructured numerical method for computational aeroacoustics. Portas, Lance O. January 2009.
The successful application of Computational Aeroacoustics (CAA) requires high-accuracy numerical schemes with good dissipation and dispersion characteristics. Unstructured meshes have a greater geometrical flexibility than existing high-order structured mesh methods. This work investigates the suitability of unstructured mesh techniques by computing a two-dimensional linearised Euler problem with various discretisation schemes and different mesh types. The goal of the present work is the development of an unstructured numerical method with the high accuracy, low dissipation and low dispersion required to be an effective tool in the study of aeroacoustics. The suitability of the unstructured method is investigated using aeroacoustic test cases taken from CAA Benchmark Workshop proceedings. Comparisons are made with exact solutions and a high-order structured method. The high-order structured method is based upon a standard central differencing spatial discretisation. For the unstructured method a vertex-based data structure is employed. A median-dual control volume is used for the finite volume approximation with the option of using a Green-Gauss gradient approximation technique or a Least Squares approximation. The temporal discretisation used for both the structured and unstructured numerical methods is an explicit Runge-Kutta method with local timestepping. For the unstructured method, the gradient approximation technique is used to compute gradients at each vertex; these are then used to reconstruct the fluxes at the control volume faces. The unstructured mesh types used to evaluate the numerical method include semi-structured and purely unstructured triangular meshes. The semi-structured meshes were created directly from the associated structured mesh. The purely unstructured meshes were created using a commercial paving algorithm. The Least Squares method has the potential to allow high-order reconstruction. Results show that a weighted least-squares gradient approximation gives better solutions than unweighted least-squares and Green-Gauss gradient computations. The solutions are of acceptable accuracy on these problems, with the absolute error of the unstructured method approaching that of a high-order structured solution on an equivalent mesh for specific aeroacoustic scenarios.
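The dissipation and dispersion behaviour that drives these comparisons can be probed with a far smaller experiment: advect a sine wave with a second-order central flux and classical explicit Runge-Kutta time stepping (a uniform-step stand-in for the local timestepping used in the study) and measure the error after one domain traversal. The grid size, CFL number and wavenumber below are arbitrary choices.

```python
import numpy as np

def rk4_step(u, rhs, dt):
    """Classical explicit four-stage Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# dispersion test: advect a wave with 2nd-order central fluxes, periodic grid
n, c = 128, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
u = np.sin(2 * np.pi * 4 * x)                      # 4 wavelengths in the domain
rhs = lambda v: -c * (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
dt = 0.2 * dx / c                                  # CFL = 0.2
for _ in range(int(1.0 / dt)):                     # one full domain traversal
    u = rk4_step(u, rhs, dt)
err = np.linalg.norm(u - np.sin(2 * np.pi * 4 * x)) / np.sqrt(n)
print(f"RMS error after one period: {err:.3e}")    # central flux: pure dispersion
```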
|
510 |
Estimation and Testing of Higher-Order Spatial Autoregressive Panel Data Error Component Models. Badinger, Harald; Egger, Peter. 10 1900.
This paper develops an estimator for higher-order spatial autoregressive panel data error component models with spatial autoregressive disturbances, SARAR(R,S). We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define a generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators, derive their joint asymptotic distribution, and provide Monte Carlo evidence on their small sample performance.
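The regression parameters are estimated by generalized two-stage least squares. The sketch below shows only the generic 2SLS mechanics on a simulated cross-section with one endogenous regressor and an invented instrument; the paper's estimator additionally applies the transformation implied by the estimated SARAR(R,S) disturbance process and uses spatial lags of the exogenous regressors as instruments, none of which is reproduced here.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Generic 2SLS: project the regressors X on the instrument set Z
    (first stage), then regress y on the fitted values (second stage).
    Because the projection is idempotent, this equals the usual
    (X'PzX)^{-1} X'Pz y formula."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage
    return beta

# simulated endogeneity: x is correlated with the error, z is a valid instrument
rng = np.random.default_rng(5)
n = 2000
z = rng.standard_normal((n, 1))
e = rng.standard_normal(n)
x = z[:, 0] + 0.8 * e + 0.2 * rng.standard_normal(n)   # endogenous regressor
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(y, X, Z))  # close to [1.0, 2.0]; OLS would be biased
```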
|