311

A Conforming to Interface Structured Adaptive Mesh Refinement for Modeling Complex Morphologies

Anand Nagarajan January 2019 (has links)
No description available.
312

Complete Equitable Decompositions

Drapeau, Joseph Paul 12 December 2022 (has links)
A well-known result in spectral graph theory states that if a graph has an equitable partition then the eigenvalues of the associated divisor graph are a subset of the graph's eigenvalues. A natural question is whether it is possible to recover the remaining eigenvalues of the graph. Here we show that if a graph has a Hermitian adjacency matrix then the graph can be decomposed into a collection of smaller graphs whose eigenvalues are, collectively, the remaining eigenvalues of the original graph. We refer to this as a complete equitable decomposition of the graph.
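As a minimal illustration of the divisor-graph result stated above (the graph, the partition and the code are a toy example of ours, not material from the thesis), the sketch below builds the divisor matrix of an equitable partition of the 4-cycle and checks that its eigenvalues appear in the adjacency spectrum:

```python
import numpy as np

# Toy check: for a graph with an equitable partition, the eigenvalues of the
# divisor (quotient) matrix are a subset of the adjacency spectrum.
# Cycle C4 with cells {0, 2} and {1, 3}: every vertex in one cell has exactly
# two neighbours in the other cell, so the partition is equitable.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
cells = [[0, 2], [1, 3]]

# Divisor matrix: entry (i, j) counts the neighbours in cell j of a vertex in cell i.
B = np.array([[A[cells[i][0], cells[j]].sum() for j in range(len(cells))]
              for i in range(len(cells))])

eig_A = np.sort(np.linalg.eigvals(A).real)
eig_B = np.sort(np.linalg.eigvals(B).real)
print(eig_A)  # [-2.  0.  0.  2.]
print(eig_B)  # [-2.  2.]  -- a subset of the spectrum of A
```

Here the divisor matrix contributes the eigenvalues -2 and 2; the complete equitable decomposition studied in the thesis is concerned with recovering the remaining eigenvalues (here 0 and 0) from a collection of smaller graphs.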
313

A Modelling approach for evaluating the ranking capability of Situational Awareness System in real time operation. Modelling, evaluating and quantifying different situational assessment in real time operation, using an analytical approach for measuring the ranking capability of SWA system

Shurrab, Orabi M.F. January 2016 (has links)
In a dynamically monitored environment the analyst team needs timely and accurate information to take proactive action in complex situations. Typically, thousands of activities are reported in a real-time operation, so steps are taken to direct the analyst's attention to the most important ones. The data fusion community has introduced the information fusion model, with multiple situational assessments, each of which lends itself to ranking the most important activities in a predetermined order. Unfortunately, the capability of a real-time system can be hindered by the knowledge limitation problem, particularly when the underlying system is processing information from multiple sensors. Consequently, the situational awareness domains may not rank the identified situations as well as the decision-making resources require. This thesis presents research carried out to evaluate the ranking capability of information from the situational awareness domains: perception, comprehension and projection. The Ranking Capability Score (RCS) is designed for evaluating the prioritisation process. The enhanced RCS addresses the knowledge representation problem in the user-system relation under a situational assessment in which the number of tracked activities shifts dynamically. Finally, the Scheduling Capability Score is designed for evaluating the scheduling capability of the situational awareness system. The proposed performance metrics have been successful in fulfilling their objectives. Furthermore, they have been validated and evaluated analytically, through a rigorous analysis of the prioritisation and scheduling processes, despite any constraints related to a domain-specific configuration.
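The abstract does not reproduce the RCS formula, so the following sketch is only a hypothetical stand-in for the general idea of scoring how closely a system's prioritisation matches an ideal ordering; the function name, the top-k overlap measure and the example data are assumptions of ours, not the metric defined in the thesis.

```python
import numpy as np

def ranking_capability(system_order, true_importance, k):
    """Toy ranking-quality score: the fraction of the k truly most important
    activities that the system placed in its own top k. Illustrative only,
    not the RCS defined in the thesis."""
    ideal_top_k = set(np.argsort(true_importance)[::-1][:k])
    system_top_k = set(system_order[:k])
    return len(ideal_top_k & system_top_k) / k

# Hypothetical example: six tracked activities with ground-truth importance,
# and the order in which the situational awareness system ranked them.
true_importance = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3])
system_order = [0, 4, 5, 2, 1, 3]   # system's ranking, most important first
print(ranking_capability(system_order, true_importance, k=3))  # 0.666...
```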
314

Matching Pursuit and Residual Vector Quantization: Applications in Image Coding

Ebrahimi-Moghadam, Abbas 09 1900 (has links)
In this thesis, novel progressive, scalable region-of-interest (ROI) image coding schemes with a rate-distortion-complexity trade-off are developed, based on residual vector quantization (RVQ) and matching pursuit (MP). RVQ and MP provide the encoder with multi-resolution signal analysis tools, which are useful for rate-distortion trade-offs and can be used to render a selected region of an image with a specific quality. An image quality refinement strategy is presented that improves the quality of the ROI progressively; the reconstructed image can mimic foveated images in the perceptual image coding context. The systems are unbalanced in the sense that the decoders have lower computational requirements than the encoders. The methods also provide an interactive way of refining information for the image regions the receiver considers highest priority: the receiver is free to select multiple regions of interest, and to change his or her mind and choose alternative regions in the middle of signal transmission. The proposed RVQ- and MP-based image coding methods raise several issues and reveal some capabilities in image coding and communication. In RVQ-based image coding, the effects of dictionary size, number of RVQ stages and image block size on the reconstructed image quality, the resulting bit rate, and the computational complexity are investigated. The progressive nature of the resulting bit-stream makes RVQ- and MP-based image coding a suitable platform for unequal error protection. Joint source-channel (JSC) coding has attracted considerable attention in recent years; in this framework, JSC decoding based on exploiting the residual redundancy of a source coder's output bit-stream is a bandwidth-efficient approach to signal reconstruction. In this thesis, we also address the JSC decoding and error concealment problem for matching-pursuit-coded images transmitted over a noisy memoryless channel. The problem is formulated on a minimum mean squared error (MMSE) estimation foundation and a suboptimal solution is devised, which yields high-quality error concealment at different levels of computational complexity. The proposed decoding and error concealment solution takes advantage of the residual redundancy that exists in neighboring image blocks as well as in neighboring MP analysis stages to improve the quality of the images with no increase in the required bandwidth. The effects of different parameters, such as MP dictionary size and number of analysis stages, on the performance of the proposed soft decoding method have also been investigated. / Thesis / Doctor of Philosophy (PhD)
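As background for the MP side of the thesis, the sketch below is a minimal, generic matching-pursuit loop (assuming a unit-norm overcomplete dictionary and a fixed number of stages); it is not the coder developed in the thesis, which adds ROI handling, progressive refinement and rate-distortion control on top of this greedy analysis.

```python
import numpy as np

def matching_pursuit(x, D, n_stages):
    """Minimal matching-pursuit sketch: greedily pick the dictionary atom most
    correlated with the residual and subtract its contribution, for a fixed
    number of analysis stages. Illustrative, not the thesis's codec."""
    residual = x.astype(float).copy()
    coeffs = []
    for _ in range(n_stages):
        correlations = D.T @ residual             # atoms are unit-norm columns of D
        k = int(np.argmax(np.abs(correlations)))  # best-matching atom
        c = correlations[k]
        coeffs.append((k, c))                     # (atom index, coefficient) per stage
        residual -= c * D[:, k]
    return coeffs, residual

# Hypothetical example: approximate a random 16-sample block with 4 atoms
# drawn from an overcomplete random dictionary of 64 unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(16)
coeffs, residual = matching_pursuit(x, D, n_stages=4)
print(len(coeffs), np.linalg.norm(residual) < np.linalg.norm(x))  # 4 True
```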
315

SHAPE CASTING HIGH STRENGTH Al-Zn-Mg-Cu ALLOYS: INTRODUCING COMPOSITION-BEHAVIOR RELATIONSHIPS

Mazahery, Ali January 2016 (has links)
This project was funded by Automotive Partnership Canada (APC), an initiative created by the Government of Canada to support significant, collaborative R&D activities that benefit the entire Canadian automotive industry. / High strength Al-Zn-Mg-Cu alloys have been increasingly employed in the transportation industry due to the increased demand for light structural components. However, their applications have been limited to relatively expensive wrought products. Shape cast Al-Zn-Mg-Cu parts have never been the focus of attention due to their poor castability and mechanical properties. Improving the casting quality is expected to increase their utilization within the automotive industry. The poor castability and mechanical properties of some alloys in this family may be effectively improved through optimized chemistry control and melt treatment, including grain refinement. The primary objective of this project is to optimize the chemistry and heat treatment of the Al-Zn-Mg-Cu alloy family so as to achieve improved strength with an acceptable level of ductility and casting quality relative to other shape cast Al alloys. The Taguchi experimental design method was used to narrow down the number of casting experiments required to meet the research objective. Three levels across four elements yielded a total of 9 Al-Zn-Mg-Cu alloys, which were cast using a tilt-pour permanent mold process. The effect of each major alloying element on the microstructure and mechanical properties was investigated. Tensile measurements were made on the 9 alloys subjected to two-step solution treatments. Mechanical properties such as yield strength (YS), ultimate tensile strength (UTS), and elongation at fracture (El.%) were experimentally measured and statistically analyzed. An ANOVA analysis was employed to quantify the percentage contribution of the alloying elements to the material properties. Grain refinement was found to play a significant role in improving the hot tearing resistance and, thereby, the casting quality. The alloying element that affected the YS and UTS to the greatest extent was Cu, followed by Zn. In contrast, the effect of Mg and Ti on YS and UTS was insignificant. Moreover, a decrease in Mg content had the greatest effect in enhancing El.%. A regression analysis was used to obtain statistical relationships (models) correlating the material properties with variations in the content of the major alloying elements. The R-squared values for YS, UTS, and El.% were 99.7 %, 98 %, and 90 %, respectively, showing that the models replicated the experimental results. Verification measurements made on a shape cast Al-6Zn-2Mg-2Cu alloy revealed that the material property model predictions were in agreement with the experimentally measured values. The results show that secondary and over-ageing treatments of the shape cast Al-Zn-Mg-Cu alloys lead to a superior combination of YS and El.%. The ongoing advances in shape casting of high-strength Al-Zn-Mg-Cu alloys will make them suitable choices for commercial load-bearing automotive components when selecting a material that meets the minimum requirements for strength, damage tolerance, cost and weight. / Thesis / Master of Applied Science (MASc)
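As an illustration of the experimental design described above, the sketch below lays out a standard Taguchi L9(3^4) orthogonal array, four factors at three levels in nine runs instead of the 81 of a full factorial, and fits a linear composition-property regression to a response; the factor-to-element mapping, the level values and the yield-strength numbers are invented for illustration and are not the thesis's data.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array: four factors at three levels (0, 1, 2),
# nine runs instead of 3**4 = 81. The columns would map to the major alloying
# elements (e.g. Zn, Mg, Cu, Ti); the responses below are hypothetical.
L9 = np.array([[0, 0, 0, 0],
               [0, 1, 1, 1],
               [0, 2, 2, 2],
               [1, 0, 1, 2],
               [1, 1, 2, 0],
               [1, 2, 0, 1],
               [2, 0, 2, 1],
               [2, 1, 0, 2],
               [2, 2, 1, 0]])

ys = np.array([310., 335., 360., 330., 350., 320., 365., 340., 355.])  # hypothetical YS (MPa)

# Least-squares fit of a linear model YS = b0 + sum(bi * level_i), analogous
# in spirit to the regression models reported in the thesis.
X = np.column_stack([np.ones(len(L9)), L9])
coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
print(dict(zip(["b0", "Zn", "Mg", "Cu", "Ti"], np.round(coef, 2))))
```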
316

Advancements in Dependability Analysis of Safety-Critical Systems : Addressing Specification Formulation and Verification Challenges / Framsteg inom tillförlitlighetsanalys av säkerhetskritiska system : Utmaningar inom specifikationsformulering och verifiering

Yu, Zelin January 2023 (has links)
Safety-critical systems have garnered increasing attention, particularly regarding their dependability analysis. These systems now comprise numerous components, making it crucial to verify that, if the lower-level components adhere to their specifications, the overall system complies with its top-level specification. However, two issues arise in this verification process. Firstly, many industrial applications lack lower-level natural-language specifications for their components, relying solely on top-level specifications. Secondly, many current verification algorithms need to explore the continuous-time evolution of the behavioral combinations of these components, and the number of combinations to be explored rises exponentially with the number of components. To address these challenges, this thesis makes two contributions. Firstly, it introduces a novel method that leverages the structure of redundant systems to create natural-language specifications for components, derived from a top-level specification. This approach facilitates a more efficient decomposition of the top-level specification and makes component behaviors easier to handle. Secondly, the proposed method is applied to Scania's brake system, leading to the decomposition of its top-level specification, which is then checked with an existing verification algorithm. The proposed method effectively addresses the exponential growth in component behavior combinations mentioned above: for the Scania brake system, the number of combinations is reduced from 27 to 13.
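The abstract does not spell out how the reduction from 27 to 13 combinations is obtained, so the snippet below is only a generic, hypothetical illustration of the combinatorial blow-up it refers to: with three behaviours per component and three components, a naive exploration visits 3^3 = 27 combinations, while exploiting the interchangeability of identical redundant components (one possible simplification assumed here, not the thesis's method) leaves only the distinct multisets of behaviours.

```python
from itertools import product, combinations_with_replacement

# Generic illustration of the blow-up described above (not the thesis's
# decomposition): with m behaviours per component and n components, a naive
# verification explores m**n combinations, while treating identical redundant
# components as interchangeable only requires the multisets of behaviours.
m_behaviours = ["nominal", "degraded", "failed"]
n_components = 3

naive = list(product(m_behaviours, repeat=n_components))
symmetric = list(combinations_with_replacement(m_behaviours, n_components))
print(len(naive), len(symmetric))   # 27 10
```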
317

Quantification of Numerical and Modeling Errors in Simulation of Fluid Flow through a Fixed Particle Bed

Volk, Annette January 2015 (has links)
No description available.
318

Friction Stir Processing of Nickel-base Alloys

Rodelas, Jeffrey M. 13 August 2012 (has links)
No description available.
319

Community Hawkes Models for Continuous-time Networks

Soliman, Hadeel 15 September 2022 (has links)
No description available.
320

Structural Shape Optimization Based On The Use Of Cartesian Grids

Marco Alacid, Onofre 06 July 2018 (has links)
Thesis by compendium / As ever more challenging designs are required in present-day industries, the traditional trial-and-error procedure frequently used for designing mechanical parts slows down the design process and yields suboptimal designs, so new approaches are needed to obtain a competitive advantage. With the rise of the Finite Element Method (FEM) in the engineering community in the 1970s, structural shape optimization emerged as a promising area of application. However, because shape optimization is iterative, the handling of large numbers of numerical models, together with the approximate character of numerical methods, may even dissuade designers from using these techniques (or prevent them from exploiting their full potential), since the development time of new products keeps getting shorter. This thesis is concerned with the formulation of a 3D methodology based on the Cartesian-grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. This methodology belongs to the category of embedded (or fictitious) domain discretization techniques, in which the key concept is to extend the structural analysis problem to an easy-to-mesh approximation domain that encloses the physical domain boundary. The use of Cartesian grids provides a natural platform for structural shape optimization because the numerical domain is separated from the physical model, which can easily be changed during the optimization procedure without altering the background discretization. Another advantage is that mesh generation becomes a trivial task, since the discretization of the numerical domain and its manipulation, in combination with an efficient hierarchical data structure, can be exploited to save computational effort. However, these advantages are challenged by several numerical issues. Essentially, the computational effort has moved from the use of expensive meshing algorithms towards the use of, for example, elaborate numerical integration schemes designed to capture the mismatch between the geometric domain boundary and the embedding finite element mesh. To do this we used a stabilized formulation to impose boundary conditions and developed novel techniques to capture the exact boundary representation of the models. To complete the implementation of a structural shape optimization method, an adjoint formulation is used to obtain the design sensitivities required by gradient-based algorithms. The derivatives are not only variables required by the process but also a powerful tool for projecting information between different designs, or even for creating h-adapted meshes without going through a full h-adaptive refinement process. The proposed improvements are reflected in the numerical examples included in this thesis. These analyses clearly show the improved behavior of the cgFEM technology as regards numerical accuracy and computational efficiency, and consequently the suitability of the cgFEM approach for shape optimization or contact problems. / Marco Alacid, O. (2017). Structural Shape Optimization Based On The Use Of Cartesian Grids [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86195 / Compendio
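As a toy illustration of the embedded-domain idea behind cgFEM, and not the thesis's implementation, the sketch below classifies the cells of a fixed Cartesian grid against an implicit geometry (here the level set of a circle); the geometry can then change during shape optimization while the background grid stays untouched. The labels mirror the usual split into interior cells, discardable exterior cells, and cut cells that need special boundary integration.

```python
import numpy as np

def classify_cells(nx, ny, level_set):
    """Label each cell of an nx-by-ny Cartesian grid over the unit square as
    'inside', 'outside' or 'cut', using the sign of the level set at the four
    cell corners. A simplified, illustrative classification only."""
    xs = np.linspace(0.0, 1.0, nx + 1)
    ys = np.linspace(0.0, 1.0, ny + 1)
    status = np.empty((nx, ny), dtype="<U7")
    for i in range(nx):
        for j in range(ny):
            corners = [level_set(xs[i + a], ys[j + b]) for a in (0, 1) for b in (0, 1)]
            if all(c < 0 for c in corners):
                status[i, j] = "inside"    # fully in the physical domain
            elif all(c > 0 for c in corners):
                status[i, j] = "outside"   # can be discarded from the analysis
            else:
                status[i, j] = "cut"       # intersected by the domain boundary
    return status

# Hypothetical geometry: a circle of radius 0.3 centred in the unit square.
circle = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.3 ** 2
status = classify_cells(8, 8, circle)
print((status == "inside").sum(), (status == "cut").sum(), (status == "outside").sum())
```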
