581

LU-SGS Implicit Scheme For A Mesh-Less Euler Solver

Singh, Manish Kumar 07 1900 (has links) (PDF)
The Least Square Kinetic Upwind Method (LSKUM) belongs to the class of mesh-less methods that solve the compressible Euler equations of gas dynamics. LSKUM is a kinetic-theory-based upwind scheme that operates on any cloud of points. The Euler equations are derived from the Boltzmann equation of the kinetic theory of gases by taking suitable moments; the basic update scheme is formulated at the Boltzmann level and mapped to the Euler level by the same moments. Mesh-less solvers need only a cloud of points to solve the governing equations. For a complex configuration, one can generate a separate cloud of points around each component, resolving its geometric features adequately, and then combine the individual clouds into one set of points on which the solver operates directly. An obvious advantage of this approach is that any incremental change in geometry requires regeneration of only the small cloud of points where the change occurred. Additionally, a blanking and de-blanking strategy with an overlaid point cloud can be adopted in applications such as store separation to avoid regenerating points. Mesh-less solvers therefore have an advantage over grid-based solvers in tackling complex geometries and moving components. Conventionally, higher-order accuracy of the spatial derivative terms is achieved by a two-step defect-correction formula, which is computationally expensive. The present solver uses a low-dissipation single-step modified CIR (MCIR) scheme, which is similar to the first-order LSKUM formulation and provides spatial accuracy close to second order. With explicit time integration, the maximum time step used to march the solution in time is limited by a stability criterion, so explicit schemes take a large number of iterations to achieve convergence; popular explicit schemes such as four-stage Runge-Kutta (RK4) are slow to converge for this reason.
The above problem can be overcome by using an implicit time integration procedure. Implicit schemes are unconditionally stable, i.e. very large time steps can be used to accelerate convergence, and they offer superior robustness. The implicit Lower-Upper Symmetric Gauss-Seidel (LU-SGS) scheme is very attractive due to its low numerical complexity, moderate memory requirement, and unconditional stability for the linear wave equation. It is also more efficient than its explicit counterparts and can be implemented easily on parallel computers. The scheme is based on factorizing the implicit operator into three parts: a lower triangular part, an upper triangular part, and the diagonal terms. LU-SGS yields a matrix-free implicit framework, which is very economical compared with procedures that require expensive matrix inversion. With the implicit LU-SGS scheme, larger time steps can be used, which reduces the computational time substantially. LU-SGS has been used widely in many Finite Volume Method based solvers. The split-flux Jacobian formulation proposed by Jameson is most widely used to make the implicit procedure diagonally dominant, but when applied to mesh-less solvers it leads to a block-diagonal matrix that again requires expensive inversion. In the present work the LU-SGS procedure is adapted to the mesh-less approach so as to retain diagonal dominance, and it is implemented in 2-D and 3-D solvers in a matrix-free framework. To assess the efficacy of the implicit procedure, both explicit and implicit 2-D solvers are tested on the NACA 0012 airfoil for various flow conditions in the subsonic and transonic regimes. To study the performance of the solvers on different point distributions, two clouds of points are used: an unstructured distribution (4074 points) and a structured distribution (9600 points).
The computed 2-D results are validated against NASA experimental data and an AGARD test case. The density-residual and lift-coefficient convergence histories are presented in detail. The maximum speed-up obtained with the implicit procedure relative to the explicit one is close to 6 for the unstructured and 14 for the structured point distribution. Transonic flow over the ONERA M6 wing is a classic CFD validation case because of its simple geometry and complex flow. The wing has sweep angles of 30° at the leading edge and 15.6° at the trailing edge, a taper ratio of 0.562, and an aspect ratio of 3.8. At M∞ = 0.84 and α = 3.06°, a lambda shock appears on the upper surface of the wing. The 3-D explicit and implicit solvers are tested on the ONERA M6 wing. The computed pressure coefficients are compared with experiments at sections of 20%, 44%, 65%, 80%, 90% and 95% of the span, and the computed results match the experiments very well. The speed-up obtained from the implicit procedure exceeds 7 for the ONERA M6 wing. Determining the aerodynamic characteristics of a wing with a deflected control surface is one of the most important and challenging tasks in aircraft design and development, and many military aircraft use some form of delta wing. To demonstrate the effectiveness of the 3-D solver in handling control surfaces and small gaps, the implicit 3-D code is used to compute flow past a clipped delta wing with an aileron deflection of 6° at M∞ = 0.9 and α = 1° and 3°. The leading-edge backward sweep is 50.4°. The aileron is hinged from 56.5% to 82.9% of the semi-span and at 80% of the local chord from the leading edge. The computed results are validated against NASA experiments.
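The factorization described above can be illustrated with a minimal sketch (not the thesis solver): LU-SGS relaxation applied to the linear system from an implicit first-order upwind discretization of 1-D linear advection on a periodic domain, a hypothetical stand-in for the Euler operator. The implicit matrix A is split into its strictly lower part L, diagonal D, and strictly upper part U, and alternating forward/backward Gauss-Seidel sweeps are applied; each sweep is only a triangular solve (plain substitution in a production code), so no full matrix inverse is ever formed, which is the matrix-free property the abstract refers to.

```python
import numpy as np

def lu_sgs(A, b, n_sweeps=60):
    """Symmetric Gauss-Seidel (LU-SGS) sweeps for A x = b."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)          # strictly lower part
    U = np.triu(A, k=1)           # strictly upper part
    x = np.zeros_like(b)
    for _ in range(n_sweeps):
        # forward sweep: (D + L) x* = b - U x  (a lower-triangular solve)
        x = np.linalg.solve(D + L, b - U @ x)
        # backward sweep: (D + U) x = b - L x* (an upper-triangular solve)
        x = np.linalg.solve(D + U, b - L @ x)
    return x

# Implicit first-order upwind for u_t + a u_x = 0, periodic domain:
# (1 + lam) u_i - lam u_{i-1} = u_i^n, with lam = a*dt/dx >> 1 (large time step)
N, lam = 64, 5.0
shift = np.roll(np.eye(N), -1, axis=1)        # periodic i-1 neighbour
A = (1.0 + lam) * np.eye(N) - lam * shift     # strictly diagonally dominant
rng = np.random.default_rng(0)
u_n = rng.standard_normal(N)                  # previous time level
u_np1 = lu_sgs(A, u_n)                        # implicit update via LU-SGS
```

Because A is strictly diagonally dominant here, the sweeps converge to the exact implicit update without ever inverting A.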
582

Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation

Vitale, Raffaele 03 November 2017 (has links)
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between the academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying, and possibly extending, well-established latent-variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - and Partial Least Squares Discriminant Analysis - PLSDA) to complex problem solving, not only in the fields of manufacturing troubleshooting and optimisation but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout the chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and its justification is given, together with a brief introduction to PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection formulated by the English statistician John C. Gower, is explored and their performance is compared to that of more classical methodologies in four different application scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes, and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for this purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods such as Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for the rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
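A minimal sketch of the generic idea behind Part III (not the author's exact algorithm): decide how many principal components are "real" by comparing each singular value of the autoscaled data with its null distribution under independent permutation of every column, which destroys the correlation structure while preserving each variable's marginal distribution. The synthetic two-component data set below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_components_by_permutation(X, n_perm=100, alpha=0.05):
    """Count PCA components whose singular value beats the permutation null."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # autoscale (common in chemometrics)
    sv = np.linalg.svd(Z, compute_uv=False)
    null_sv = np.empty((n_perm, sv.size))
    for p in range(n_perm):
        # permuting each column independently keeps marginals, kills correlation
        Zp = np.column_stack([rng.permutation(col) for col in Z.T])
        null_sv[p] = np.linalg.svd(Zp, compute_uv=False)
    k = 0
    for j in range(sv.size):
        # p-value: fraction of permutations whose j-th singular value is as large
        p_val = np.mean(null_sv[:, j] >= sv[j])
        if p_val >= alpha:
            break
        k += 1
    return k

# Synthetic data with exactly two latent components plus small noise
P = np.vstack([np.ones(10),
               np.concatenate([np.ones(5), -np.ones(5)])])   # orthogonal loadings
T = rng.standard_normal((200, 2))                            # scores
X = T @ P + 0.1 * rng.standard_normal((200, 10))
k_hat = n_components_by_permutation(X)
```

The sequential stopping rule mirrors parallel-analysis-style procedures: testing stops at the first component indistinguishable from the permutation null.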
/ Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442 / TESIS
583

L'hôpital magnétique : définition, conceptualisation, attributs organisationnels et conséquences perçues sur les attitudes au travail / Magnet hospital : definition, conceptualization, organizational attributes and perceived consequences on work attitudes

Sibé, Matthieu 21 November 2014 (has links)
Many contemporary reports raise the alarm about recurring distress among hospital human resources, especially doctors and nurses, and consequently about the risk of poor quality of patient care. Adopting a more optimistic approach, American nursing scholars have, since the early 1980s, identified so-called magnet hospitals, which attract and retain staff and offer good working and care conditions. This thesis aims to deepen the Magnet Hospital concept and to clarify its definition and its scope for hospital human resource management in France. Following a hypothetico-deductive approach, the conceptualization, grounded in a review of the literature, begins with an appropriation of the synthetic Magnet Hospital model. From a psychosocial perspective, our original research model focuses on the perception, at ward level, of the managerial attributes of hospital magnetism (transformational leadership, perceived empowerment through participation, and a collegial climate between doctors and nurses) and its consequences for positive work attitudes (satisfaction, commitment, intent to stay, emotional work/non-work balance, and perceived collective efficacy). A quantitative methodology based on 8 ad hoc scales surveys a representative sample of 133 doctors, 361 nurses, and 362 auxiliary nurses in 36 French polyvalent medicine units. A series of structural equation models, estimated with the Partial Least Squares algorithm, tests the nature and intensity of the direct and indirect relationships of perceived managerial magnetism. The statistical results indicate good construct validity and good model fit. A magnetic managerial context produces its main positive effect on perceived collective efficacy. Differences exist between professional categories in how the composition of magnetism is perceived and in how its effects are transmitted through the mediation of perceived collective efficacy, indicating the contingent character of magnetism. These findings open managerial and scientific perspectives, underlining the value of positive approaches to hospital organization.
584

Unmanned ground vehicles: adaptive control system for real-time rollover prevention

Mlati, Malavi Clifford 04 1900 (has links)
Real-time rollover prevention for an Unmanned Ground Vehicle (UGV) is paramount to its reliability and survivability, especially when operating on unknown and rough terrain such as mines or other planets. This research therefore presents a method for real-time rollover prevention of UGVs using adaptive control techniques based on Recursive Least Squares (RLS) estimation of unknown parameters, enabling UGVs to adapt to unknown harsh terrain and thereby increasing their reliability and survivability. Adaptation is achieved with an indirect adaptive control technique in which the controller parameters are computed in real time from online estimates of the plant (UGV) parameters (rollover index and roll angle) and the desired UGV performance, so that the UGV's speed and suspension actuators can be adjusted appropriately to counteract vehicle rollover. A great challenge in indirect adaptive control is online parameter identification; here an RLS-based estimator is used to estimate the vehicle's rollover index and roll angle from lateral acceleration measurements and the height of the UGV's centre of gravity. RLS is suitable for online parameter identification because it updates the parameter estimates at each sample time. The performance of the adaptive control algorithms is evaluated using a Matlab Simulink® system model, with the UGV model built on the SimMechanics physical modelling platform; the whole system runs within the Simulink environment to emulate a real-world application. The simulation results of the proposed RLS-based adaptive control algorithm show that it prevents or minimizes the likelihood of vehicle rollover in real time. / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
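The per-sample update that makes RLS suitable for online identification can be sketched as follows. This is a generic RLS estimator with a forgetting factor, the building block the abstract describes; the linear regression and its three parameters are hypothetical stand-ins for the roll-dynamics quantities, not the thesis's actual vehicle model.

```python
import numpy as np

class RLS:
    """Recursive Least Squares with exponential forgetting."""

    def __init__(self, n_params, forgetting=0.99, delta=1e3):
        self.w = np.zeros(n_params)        # current parameter estimate
        self.P = delta * np.eye(n_params)  # inverse-correlation matrix (large init)
        self.lam = forgetting              # lam < 1 discounts old data

    def update(self, phi, y):
        # gain vector for this sample
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        # a-priori prediction error
        e = y - self.w @ phi
        # parameter and covariance updates
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.w

# Identify a hypothetical linear plant y = w . phi from streaming noisy samples
rng = np.random.default_rng(2)
true_w = np.array([2.0, -3.0, 0.5])
est = RLS(3)
for _ in range(300):
    phi = rng.standard_normal(3)                      # regressor at this sample
    y = true_w @ phi + 0.01 * rng.standard_normal()   # noisy measurement
    est.update(phi, y)
```

Each `update` costs only a few matrix-vector products, which is why the estimate can be refreshed at every control sample time.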
585

Monitoring Kraft Recovery Boiler Fouling by Multivariate Data Analysis

Edberg, Alexandra January 2018 (has links)
This work deals with fouling in the recovery boiler at Montes del Plata, Uruguay. Multivariate data analysis has been used to analyze the large amount of available data in order to investigate how different parameters affect the fouling problems. Principal Component Analysis (PCA) and Partial Least Squares Projection (PLS) have been used in this work: PCA to compare average values between time periods with high and low fouling problems, and PLS to study the correlation structure between the variables and thereby indicate which parameters might be changed to improve the availability of the boiler. The results show that this recovery boiler tends to have fouling problems that may depend on the distribution of air, the black-liquor pressure, or the dry-solids content of the black liquor. The results also show that multivariate data analysis is a powerful tool for analyzing these types of fouling problems.
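The period-comparison use of PCA described above can be sketched in a few lines: autoscale the process variables, project onto the principal components, and compare the average scores of a high-fouling period with a low-fouling one. The six synthetic "process variables" below (three of which shift between periods) are invented for illustration; they are not the Montes del Plata data.

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_scores(X, n_components=2):
    """PCA via SVD on autoscaled data; returns the score matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # autoscaling
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T             # projections onto the first PCs

# 60 samples from a low-fouling period and 60 from a high-fouling period;
# three hypothetical variables (e.g. air split, liquor pressure, dry solids)
# shift between the periods while the rest are unchanged.
low = rng.standard_normal((60, 6))
high = rng.standard_normal((60, 6))
high[:, :3] += 2.0
X = np.vstack([low, high])

scores = pca_scores(X)
gap = abs(scores[:60, 0].mean() - scores[60:, 0].mean())  # PC1 separates periods
```

A large gap between the period-average scores on PC1 points to the variables with high PC1 loadings as the ones associated with the fouling difference.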
586

Filter Optimization for Personal Sound Zones Systems

Molés Cases, Vicent 02 September 2022 (has links)
Personal Sound Zones (PSZ) systems deliver different sounds to a number of listeners sharing an acoustic space through the use of loudspeakers together with signal processing techniques. These systems have attracted much attention in recent years because of the wide range of applications that would benefit from the generation of individual listening zones, e.g., domestic or automotive audio applications. A key aspect of PSZ systems, at least at low and mid frequencies, is the optimization of the filters used to process the sound signals. Different algorithms for computing those filters have been proposed in the literature, each exhibiting some advantages and disadvantages. In this work, the state-of-the-art algorithms for PSZ systems are reviewed and their performance in a reverberant environment is evaluated, considering aspects such as the acoustic isolation between zones, the reproduction error, the energy of the filters, and the delay of the system. Furthermore, computationally efficient strategies for obtaining the filters are studied, and their computational complexity is compared. The evaluations reveal the main limitation of the state-of-the-art algorithms: the existing solutions cannot offer low computational complexity and, at the same time, good performance for short system delays. Thus, a novel algorithm based on subband filtering that mitigates these limitations is proposed for PSZ systems. In addition, the proposed algorithm offers more versatility than the existing algorithms, since different system configurations, such as different filter lengths or sets of loudspeakers, can be used in each subband. The proposed algorithm is experimentally evaluated in a reverberant environment, and its efficacy in mitigating the limitations of the existing solutions is demonstrated. Finally, the effect of the target responses on the optimization is discussed, and a novel approach based on windowing the target responses is proposed. This approach is experimentally evaluated in two rooms with different reverberation levels, and the results reveal that an appropriate windowing of the target responses can reduce the interference level between zones. / Molés Cases, V. (2022). Filter Optimization for Personal Sound Zones Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/186111 / TESIS
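A sketch of pressure matching, one widely used formulation for computing PSZ filters (shown here as a single-frequency, frequency-domain version; the thesis works with broadband time-domain filters, and this is not necessarily one of the exact variants it compares). Per frequency, the loudspeaker weights q minimize ||Gq - p_t||² + β||q||², where G holds the acoustic transfer functions from each loudspeaker to each control microphone and the target p_t asks for unit pressure at the bright-zone microphones and silence at the dark-zone ones. The plant matrix below is random for illustration, not a measured room response.

```python
import numpy as np

def pressure_matching(G, p_target, beta=1e-3):
    """Regularized least-squares loudspeaker weights: (G^H G + beta I) q = G^H p_t."""
    n_src = G.shape[1]
    A = G.conj().T @ G + beta * np.eye(n_src)
    return np.linalg.solve(A, G.conj().T @ p_target)

rng = np.random.default_rng(4)
n_mics, n_src = 4, 3     # 2 bright-zone mics, 2 dark-zone mics, 3 loudspeakers
# complex transfer functions loudspeakers -> microphones at one frequency
G = rng.standard_normal((n_mics, n_src)) + 1j * rng.standard_normal((n_mics, n_src))
p_t = np.array([1.0, 1.0, 0.0, 0.0], dtype=complex)   # bright, bright, dark, dark

q = pressure_matching(G, p_t)
p = G @ q                                             # reproduced pressures
residual = np.linalg.norm(p - p_t)                    # reproduction error
```

The regularization weight β trades reproduction error against filter (array-effort) energy, two of the evaluation aspects listed above.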
587

Development of general finite differences for complex geometries using immersed boundary method

Vasyliv, Yaroslav V. 07 January 2016 (has links)
In meshfree methods, partial differential equations are solved on an unstructured cloud of points distributed throughout the computational domain. In collocated meshfree methods, the differential operators are directly approximated at each grid point based on a local cloud of neighboring points. The set of neighboring points used to construct the local approximation is determined using a variable search radius. The variable search radius establishes an implicit nodal connectivity and hence a mesh is not required. As a result, meshfree methods have the potential flexibility to handle problem sets where the computational grid may undergo large deformations as well as where the grid may need to undergo adaptive refinement. In this work we develop the sharp interface formulation of the immersed boundary method for collocated meshfree approximations. We use the framework to implement three meshfree methods: General Finite Differences (GFD), Smoothed Particle Hydrodynamics (SPH), and Moving Least Squares (MLS). We evaluate the numerical accuracy and convergence rate of these methods by solving the 2D Poisson equation. We demonstrate that GFD is computationally more efficient than MLS and show that its accuracy is superior to a popular corrected form of SPH and comparable to MLS. We then use GFD to solve several canonical steady-state fluid flow problems on meshfree grids generated using uniform and variable-radius Poisson disk algorithms.
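The collocated approximation described above can be sketched for GFD: at each point, a second-order Taylor expansion is fitted by weighted least squares over the neighbors found within the search radius, and the Laplacian is read off the fitted coefficients. This is a minimal illustration, not the thesis's implementation; the point cloud, inverse-distance weights, and radius are assumptions.

```python
import numpy as np

def gfd_laplacian(points, values, i, radius):
    """Approximate u_xx + u_yy at points[i] by a weighted least-squares fit
    of a 2nd-order Taylor expansion over neighbors within `radius`."""
    d = points - points[i]
    r = np.hypot(d[:, 0], d[:, 1])
    mask = (r < radius) & (r > 0)          # local cloud, excluding the point itself
    dx, dy = d[mask, 0], d[mask, 1]
    # Unknown Taylor coefficients: [u_x, u_y, u_xx, u_yy, u_xy]; the constant
    # term drops out by differencing against the center value.
    A = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    w = 1.0 / r[mask]                      # inverse-distance weights (illustrative)
    coeffs, *_ = np.linalg.lstsq(A * w[:, None],
                                 (values[mask] - values[i]) * w, rcond=None)
    return coeffs[2] + coeffs[3]

# Scattered cloud on the unit square, plus an evaluation point at the centre
rng = np.random.default_rng(1)
pts = np.vstack([rng.random((200, 2)), [[0.5, 0.5]]])
u = pts[:, 0]**2 + pts[:, 1]**2            # exact Laplacian is 4 everywhere

lap = gfd_laplacian(pts, u, len(pts) - 1, radius=0.2)
print(f"approximate Laplacian: {lap:.6f}")
```

Because the basis contains the full second-order polynomial space, the approximation is exact for quadratic fields, which is a convenient consistency check for any GFD stencil.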
588

Non-global regression modelling

Huang, Yunkai 21 June 2016 (has links)
In this dissertation, a new non-global regression model - the partial linear threshold regression model (PLTRM) - is proposed. Various issues related to the PLTRM are discussed. In the first main section of the dissertation (Chapter 2), we define what is meant by the term “non-global regression model”, and we provide a brief review of the current literature associated with such models. In particular, we focus on their advantages and disadvantages in terms of their statistical properties. Because there are some weaknesses in the existing non-global regression models, we propose the PLTRM. The PLTRM combines non-parametric modelling with the traditional threshold regression models (TRMs), and hence can be thought of as an extension of the latter models. We verify the performance of the PLTRM through a series of Monte Carlo simulation experiments. These experiments use a simulated data set that exhibits partial linear and partial nonlinear characteristics, and the PLTRM outperforms several competing parametric and non-parametric models in terms of the Mean Squared Error (MSE) of the within-sample fit. In the second main section of this dissertation (Chapter 3), we propose a method of estimation for the PLTRM. This requires estimating the parameters of the parametric part of the model; estimating the threshold; and fitting the non-parametric component of the model. An “unbalanced penalized least squares” approach is used. This involves using restricted penalized regression spline and smoothing spline techniques for the non-parametric component of the model; the least squares method for the linear parametric part of the model; together with a search procedure to estimate the threshold value. This estimation procedure is discussed for three mutually exclusive situations, which are classified according to the way in which the two components of the PLTRM “join” at the threshold. 
Bootstrap sampling distributions of the estimators are provided using the parametric bootstrap technique. The various estimators appear to have good sampling properties in most of the situations that are considered. Inference issues such as hypothesis testing and confidence interval construction for the PLTRM are also investigated. In the third main section of the dissertation (Chapter 4), we illustrate the usefulness of the PLTRM, and the application of the proposed estimation methods, by modelling various real-world data sets. These examples demonstrate both the good statistical performance, and the great application potential, of the PLTRM. / Graduate
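The threshold-search component of the estimation procedure can be sketched in a deliberately simplified, fully parametric setting (two linear regimes rather than the PLTRM's penalized-spline components): for each candidate threshold on a grid, fit both segments by least squares and keep the threshold minimizing the total squared error. The data-generating process and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data with a regime change at x = 1 (noise sd 0.1)
n, true_tau = 400, 1.0
x = rng.uniform(-2, 4, n)
y = np.where(x <= true_tau, 1.0 + 2.0 * x, 5.0 - 1.0 * x) + 0.1 * rng.standard_normal(n)

def sse_at(tau):
    """Total squared error of separate OLS lines below and above tau."""
    total = 0.0
    for seg in (x <= tau, x > tau):
        X = np.column_stack([np.ones(seg.sum()), x[seg]])
        beta, res, *_ = np.linalg.lstsq(X, y[seg], rcond=None)
        total += float(res[0]) if res.size else float(np.sum((y[seg] - X @ beta) ** 2))
    return total

grid = np.linspace(-1.0, 3.0, 81)          # candidate thresholds
tau_hat = min(grid, key=sse_at)
print(f"estimated threshold: {tau_hat:.2f}")
```

In the actual PLTRM the segment fits would be replaced by the unbalanced penalized least squares criterion, and the search would be combined with the constraints implied by how the two components "join" at the threshold.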
589

Customer perceived value : reconceptualisation, investigation and measurement

Bruce, Helen Louise January 2013 (has links)
The concept of customer perceived value occupies a prominent position within the strategic agenda of organisations, as firms seek to maximise the value perceived by their customers as arising from their consumption, and to equal or exceed that perceived in relation to competitor propositions. Customer value management is similarly central to the marketing discipline. However, the nature of customer value remains ambiguous and its measurement is typically flawed, due to the poor conceptual foundation upon which previous research endeavours are built. This investigation seeks to address the current poverty of insight regarding the nature and measurement of customer value. The development of a revised conceptual framework synthesises the strengths of previous value conceptualisations while addressing many of their limitations. A multi-dimensional depiction of value arising from customer experience is presented, in which value is conceptualised as arising at both first-order dimension and overall, second-order levels of abstraction. The subsequent operationalisation of this conceptual framework within a two-phase investigation combines qualitative and quantitative methodologies in a study of customer value arising from subscription TV (STV) consumption. Sixty semi-structured interviews with 103 existing STV customers give rise to a multi-dimensional model of value, in which dimensions are categorised as restorative, actualising and hedonic in type, and as arising via individual, reflected or shared modes of perception. The quantitative investigation entails two periods of data collection via questionnaires developed from the qualitative findings, and the gathering of 861 responses, also from existing STV customers. A series of scales with which to measure value dimensions is developed and an index enabling overall perceived value measurement is produced. Contributions to theory of customer value arise in the form of enhanced insights regarding its nature. 
At the first-order dimension level, the derived dimensions are of specific relevance to the STV industry. However, the empirically derived framework of dimension types and modes of perception has potential applicability in multiple contexts. At the more abstract, second-order level, the findings highlight that value perceptions comprise only a subset of potential dimensions. Evidence is thus presented of the need to consider value at both dimension and overall levels of perception. Contributions to knowledge regarding customer value measurement also arise, as the study produces reliable and valid scales and an index. This latter tool is novel in its formative measurement of value as a second-order construct, comprising numerous first-order dimensions of value, rather than quality as incorporated in previously derived measures. This investigation also results in a contribution to theory regarding customer experience through the identification of a series of holistic, discrete, direct and indirect value-generating interactions. Contributions to practice within the STV industry arise as the findings present a solution to the immediate need for enhanced value insight. Contributions to alternative industries are methodological, as this study presents a detailed process through which robust value insight can be derived. Specific methodological recommendations arise in respect of the need for empirically grounded research, an experiential focus and a two-stage quantitative methodology.
590

Provable alternating minimization for non-convex learning problems

Netrapalli, Praneeth Kumar 17 September 2014 (has links)
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets, so that the problem is convex/tractable in one set if the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables, and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g. based on convex optimization) that do have guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low rank matrix completion, phase retrieval and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to initializing randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice. / text
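For the matrix completion setting, the two-part theme above (a careful initialization followed by alternating updates) can be sketched as spectral initialization plus alternating least squares: fixing the right factor, each row of the left factor solves a small least-squares problem over that row's observed entries, and then the roles swap. The dimensions, observation rate, and ridge term below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Rank-2 ground truth observed on a random ~50% of its entries
m, n, r = 30, 30, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5

def altmin_complete(M, mask, r, iters=50, ridge=1e-8):
    """Spectral initialization, then alternating least squares:
    fix V and solve each row of U on its observed entries, then swap roles."""
    p = mask.mean()
    Uo, s, Vt = np.linalg.svd(np.where(mask, M, 0.0) / p, full_matrices=False)
    U = Uo[:, :r] * s[:r]                  # top-r SVD of the rescaled observations
    V = Vt[:r].T
    for _ in range(iters):
        for i in range(M.shape[0]):
            A = V[mask[i]]                 # rows of V seen by row i of M
            U[i] = np.linalg.solve(A.T @ A + ridge * np.eye(r), A.T @ M[i, mask[i]])
        for j in range(M.shape[1]):
            A = U[mask[:, j]]              # rows of U seen by column j of M
            V[j] = np.linalg.solve(A.T @ A + ridge * np.eye(r), A.T @ M[mask[:, j], j])
    return U, V

U, V = altmin_complete(M, mask, r)
rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative error: {rel_err:.2e}")
```

Each alternating step is a convex least-squares problem, which is exactly the structure AltMin exploits; the spectral initialization plays the role of the non-random starting point that the theoretical guarantees rely on.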
