1

Functional design of mechanical products based on behavior-driven function-environment-structure modeling framework

Zhang, W.Y., Tor, Shu Beng, Britton, G.A., Deng, Y.M. 01 1900 (has links)
The relative significance of upstream design activity to downstream design activity is widely recognized, due to its critical role in determining the final product’s functionality. Although there are now some general methodologies dealing with functions or reasoning about functions, virtually no commercial CAD system can support functional design. In functional modeling, a design problem is represented in a hierarchy of functions and the behaviors that realize the functions. This paper presents a functional design methodology based on a behavior-driven function-environment-structure (B-FES) modeling framework to guide functional design through functional reasoning steps including causal behavioral reasoning (CBR) and functional decomposition. The proposed functional design starts from a set of design specifications including functional requirements and design constraints, and results in diverse behavioral schema corresponding to a set of design alternatives. A design example for functional design of a terminal cut-off unit in an automatic assembly system is used to provide a demonstration of the proposed functional design methodology. / Singapore-MIT Alliance (SMA)
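The B-FES framework above represents a design problem as a hierarchy of functions together with the behaviors that realize them. As a rough illustration of that representation only (the class design and the sub-functions of the terminal cut-off example are invented here, not taken from the paper), a minimal Python sketch:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal sketch of a function hierarchy in which each function
# is realized by behaviors and may be decomposed into sub-functions, as the
# B-FES abstract describes.  Class and attribute names are illustrative only.

@dataclass
class Function:
    name: str
    behaviors: List[str] = field(default_factory=list)   # behaviors realizing this function
    sub_functions: List["Function"] = field(default_factory=list)

    def decompose(self, *children: "Function") -> "Function":
        """Attach sub-functions produced by functional decomposition."""
        self.sub_functions.extend(children)
        return self

    def schemas(self, depth: int = 0) -> None:
        """Print the behavioral schema as an indented tree."""
        print("  " * depth + f"{self.name}: {', '.join(self.behaviors) or '-'}")
        for child in self.sub_functions:
            child.schemas(depth + 1)

# Invented example loosely based on the terminal cut-off unit mentioned above.
root = Function("cut off terminal", ["apply shearing force"]).decompose(
    Function("position terminal", ["clamp", "align"]),
    Function("actuate cutter", ["drive blade downward"]),
)
root.schemas()
```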
2

Functional Decomposition Techniques and Their Impact on Performance, Scalability and Maintainability

van Dreven, Jonne January 2021 (has links)
Context: The last decade has produced many proposed functional decomposition techniques to aid in developing microservice architectures. While some solutions may work, it is uncertain what their effects are on quantitative, measurable metrics; thus, the proposals require validation. Objective: The study measures the effects of various functional decomposition techniques on performance, scalability, and maintainability. Furthermore, the study compares the treatments in order to find whether a statistically significant difference exists. Method: The study uses a controlled experiment containing three functional decomposition techniques (Event Storming, Actor/Action, and Service Cutter) applied to the same use case. The use case follows the CoCoMe framework, which forms the basis of the experiment. Results: Each treatment shows similar behavior while presenting a different architectural design. The study found no statistically significant difference for performance, scalability, or maintainability. Conclusion: Evidence suggests that the convenience of an approach might be more important than the resulting architecture, since the approaches will likely lead to the same outcome. If performance issues arise, they would likely be due to the microservice architecture itself and not the functional decomposition technique; therefore, the microservice architecture might not benefit every situation or organization equally. Furthermore, the study found that service granularity might not be as relevant as some studies claim it to be, and other factors could be more crucial.
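The experiment above compares three decomposition treatments on measured metrics and tests for statistical significance. The abstract does not say which test was used; purely as an illustration, the sketch below assumes independent per-treatment latency samples and a Kruskal-Wallis test (all numbers are placeholders, not the study's data):

```python
# Illustrative only: the abstract does not state which statistical test was used.
# This sketch assumes per-treatment latency samples and a Kruskal-Wallis test
# at alpha = 0.05; the numbers below are made up, not the study's data.
from scipy.stats import kruskal

latencies_ms = {
    "Event Storming": [112, 108, 117, 121, 109],
    "Actor/Action":   [115, 111, 119, 118, 113],
    "Service Cutter": [110, 114, 116, 120, 112],
}

stat, p_value = kruskal(*latencies_ms.values())
alpha = 0.05
print(f"H = {stat:.2f}, p = {p_value:.3f}")
print("significant difference" if p_value < alpha else "no significant difference")
```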
3

Aplinkos apsaugos inspekcijos informacinės sistemos posistemių projektavimas ir programinė realizacija / Design and realization of Information System of Environmental Inspection

Černiauskaitė, Agnė 27 May 2005 (has links)
Nowadays the complexity and size of business systems are growing steadily and, as practice shows, the situation is not going to change in the near future. This creates difficulties not only for the development lifecycle of programmes but also for their maintenance and extensibility. Solving these difficulties therefore requires a broader perspective that views the system as consisting of other systems. This work examines what kinds of methods can be applied in this context. An analysis of modern technologies (the Java programming language) and modelling methods (functional decomposition and object-oriented system engineering) is presented. The suggested system decomposition approach is a synthesis of already existing ones. In this work the new method has been applied to the development of an experimental Information System of the Territorial Departments of the Environmental Inspection. A prototype of the mentioned system has been realized using J2EE (Java 2, Enterprise Edition) technologies. The developed software implements the requirements for the usage and maintenance of large distributed systems (a large number of users, security rights, compatibility, centralised administration, etc.). The suggested approach may be used when extending the existing software or developing similar applications.
4

Software Development For Multi Level Petri Net Based Design Inference Network

Coskun, Cagdas 01 August 2004 (has links) (PDF)
This thesis presents the computer implementation of a multi-resolutional, concurrent design inference network whose nodes are refined by PNDN (Petri Net Based Design Inference Network) modules. The extended design network is named N-PNDN and consists of several embedded PNDN modules that model the information flow on a functional basis to facilitate design automation at the conceptual design phase of an engineering design. Information flow in the N-PNDN occurs between parent and child PNDN modules in a hierarchical structure and is provided by the token flow between the modules. In this study, the computer implementation of the design network construction and token flow algorithms for the N-PNDN structure is restored, and the previous DNS (Design Network Simulator) is thereby adapted for the multi-layer design and decomposition of mechatronic products. The related algorithms are developed using an object-oriented, visual programming environment, and the graphical user interface is also modified. The further developed DNS has been used to apply the N-PNDN structure to the conceptual design of 5 mechatronic systems. In the course of this study, it has become clear that the further developed DNS is a powerful tool for designers from different engineering disciplines to exchange their ideas.
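The N-PNDN passes information between parent and child modules as token flow. The following minimal Petri-net firing sketch illustrates token flow in general; it is not the N-PNDN construction or token-flow algorithm itself, and the place and transition names are invented:

```python
# Minimal Petri-net firing sketch: a transition fires when all of its input
# places hold at least one token, moving tokens to its output places.  This
# illustrates token flow in general; it is not the N-PNDN algorithms, and the
# place/transition names are invented.

marking = {"parent_request": 1, "child_ready": 1, "child_result": 0}

transitions = {
    "delegate_to_child": {"in": ["parent_request", "child_ready"], "out": ["child_result"]},
}

def enabled(t):
    return all(marking[p] > 0 for p in transitions[t]["in"])

def fire(t):
    if not enabled(t):
        raise RuntimeError(f"transition {t} is not enabled")
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

fire("delegate_to_child")
print(marking)   # {'parent_request': 0, 'child_ready': 0, 'child_result': 1}
```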
5

Functional decomposition - A contribution to overcome the parameter space explosion during validation of highly automated driving

Amersbach, Christian, Winner, Hermann 29 September 2020 (has links)
Objective: Particular testing by functional decomposition of the automated driving function can potentially contribute to reducing the effort of validating highly automated driving functions. In this study, the required size of test suites for scenario-based testing and the potential to reduce it by functional decomposition are quantified for the first time. Methods: The required size of test suites for scenario-based approval of a so-called Autobahn-Chauffeur (SAE Level 3) is analyzed for an exemplary set of scenarios. Based on studies of data from failure analyses in other domains, the possible range for the required test coverage is narrowed down and suitable discretization steps, as well as ranges for the influence parameters, are assumed. Based on those assumptions, the size of the test suites for testing the complete system is quantified. The effects that lead to a reduction in the parameter space for particular testing of the decomposed driving function are analyzed and the potential to reduce the validation effort is estimated by comparing the resulting test suite sizes for both methods. Results: The combination of all effects leads to a reduction in the test suites’ size by a factor between 20 and 130, depending on the required test coverage. This means that the size of the required test suite can be reduced by 95–99% by particular testing compared to scenario-based testing of the complete system. Conclusions: The reduction potential is a valuable contribution to overcome the parameter space explosion during the validation of highly automated driving. However, this study is based on assumptions and only a small set of exemplary scenarios. Thus, the findings have to be validated in further studies.
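Two quantities in the abstract are easy to make concrete: the size of an exhaustive test suite as the product of the discretization steps per influence parameter, and the equivalence between a reduction factor k and a percentage reduction of 1 - 1/k, which is how a factor of 20 to 130 maps to 95-99%. A small sketch, with placeholder parameters and step counts that are not the paper's values:

```python
from math import prod

# The number of discrete test cases for exhaustive scenario coverage grows as
# the product of the discretization steps per influence parameter; the
# parameters and step counts below are placeholders, not the paper's values.
steps_per_parameter = {"ego speed": 10, "lead-vehicle speed": 10, "gap": 8, "road curvature": 5}
full_system_tests = prod(steps_per_parameter.values())
print(f"full-system test suite: {full_system_tests} combinations")

# A reduction of the test-suite size by a factor k corresponds to removing
# 1 - 1/k of the tests, which is how the paper's factor of 20-130 maps to 95-99 %.
for k in (20, 130):
    print(f"factor {k:>3} -> {100 * (1 - 1 / k):.1f} % fewer tests")
```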
6

Two-phase spectral wave explicit Navier-Stokes equations method for wave-structure interactions / Méthode SWENSE bi-phasique : application à l’étude des interactions houle-structure

Li, Zhaobin 27 November 2018 (has links)
This thesis proposes an efficient algorithm for simulating wave-structure interaction with two-phase Computational Fluid Dynamics (CFD) solvers. The algorithm is based on the coupling of potential wave theory and the two-phase Navier-Stokes equations. It is an extension of the Spectral Wave Explicit Navier-Stokes Equations (SWENSE) method to generalized two-phase CFD solvers with interface capturing techniques. In this algorithm, the total solution is decomposed into an incident and a complementary component. The incident solution is obtained explicitly with spectral wave models based on potential flow theory; only the complementary solution is solved with CFD solvers, representing the influence of the structure on the incident waves. The decomposition ensures the accuracy of the incident wave kinematics regardless of the mesh used by the CFD solvers. A significant reduction of the mesh size is expected in typical wave-structure interaction problems. The governing equations are given in three forms: the conservative form, the non-conservative form, and the Ghost Fluid Method (GFM) form. The three sets of governing equations are implemented in OpenFOAM and validated by a series of wave-structure interaction cases. An efficient interpolation technique to map the irregular wave solution from a Higher-Order Spectral (HOS) method onto the CFD grid is also proposed.
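The core of the SWENSE decomposition described above can be stated compactly; the notation below is illustrative rather than taken from the thesis, with subscript I for the incident fields given by the spectral potential-flow solution and subscript C for the complementary fields solved by the CFD solver:

```latex
% Notation is illustrative, not taken from the thesis: subscript I denotes the
% incident fields given explicitly by the spectral potential-flow solution,
% and subscript C the complementary fields solved by the two-phase CFD solver.
\begin{aligned}
\mathbf{u} &= \mathbf{u}_I + \mathbf{u}_C, \\
p          &= p_I + p_C,
\end{aligned}
% Only the complementary part, which carries the influence of the structure on
% the incident waves, has to be resolved on the CFD mesh, which is where the
% expected mesh-size reduction comes from.
```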
7

A process for function based architecture definition and modeling

Armstrong, Michael James 01 April 2008 (has links)
Developments in electric technologies have the potential to increase the efficiency and performance of commercial aircraft. However, without proper architecture innovation, technology developments at the subsystem level are not sufficient to ensure successful integration. Adaptations to existing architectures work well when trades are made strictly between equivalent systems which fulfill and induce the same functional requirements. However, this approach does not provide the architect with adequate flexibility to integrate technologies with differing functional and physical interfaces. Architecture redefinition is required for proper implementation of non-traditional and innovative architectural elements. A function-based process for innovative architecture design was developed to provide flexibility in the definition of candidate architectural concepts. Tools and methods were developed which facilitate the definition and exploration of a function-based architectural design space. These include functional decomposition, functional induction, dynamic morphology, adaptive functional mapping, reconfigurable mission definition, and concept level system installation. The Architecture Design Environment (ADEN) was built to integrate these tools and to facilitate the definition of physics-based models in evaluating the performance of candidate architectures. Using functions as the foundation of this process assists in mitigating assumptions which traditionally govern architecture structures and offers a promising approach to architecting through flexible conceptualization and integration. This toolset provides the framework wherein knowledge from conceptual, preliminary, and detailed design efforts can be linked in the definition of revolutionary architectures.
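Among the tools listed, dynamic morphology suggests enumerating candidate architectures by combining one candidate solution per function. As a loose, hypothetical illustration of such a morphological combination (the functions and solutions below are invented, not ADEN's actual content):

```python
from itertools import product

# Illustrative morphological combination: each function can be fulfilled by
# several candidate solutions, and a candidate architecture picks one per
# function.  Functions and solutions here are invented placeholders, not the
# thesis's actual decomposition.
options = {
    "generate electrical power": ["engine generator", "fuel cell"],
    "provide cabin pressurization": ["bleed air", "electric compressor"],
    "actuate control surfaces": ["hydraulic actuator", "electro-hydrostatic actuator"],
}

candidates = [dict(zip(options, combo)) for combo in product(*options.values())]
print(f"{len(candidates)} candidate architectures")   # 2 * 2 * 2 = 8
print(candidates[0])
```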
8

Méthodologie d’aide à l’innovation par l’exploitation des brevets et des phénomènes physiques impliqués / Innovation aid methodology through patent exploitation and physical phenomena involved

Valverde, Ulises 10 December 2015 (has links)
The aim of this thesis work is to develop a methodology for knowledge extraction from patents to assist design engineers in the industrial problem-solving phase. The methodology is based on three pillars: definition, search/analysis, and innovation. A comprehensive definition of the main function of the industrial system delimits the research field and allows the retrieval of initial keywords through a detailed analysis of what is currently available. The iterative patent search is based on functional decomposition and physical analysis. The analysis phase uses energy functional decomposition to identify the energies, the transmitted functional flows and the physical phenomena involved in the energy conversion process, in order to select potentially relevant physical effects. To delineate the exploration field, search queries are formulated from a keyword database composed of initial, physical, and technological keywords. A discovery matrix based on the intersections between these keywords allows the classification of pertinent patents. The search for innovation opportunities exploits the discovery matrix to identify the evolutionary trends followed by inventions. Opportunities are deduced from an analysis of the empty cells of the discovery matrix, an analysis of the evolution trends, and from changing the concept by substituting the energy converter. We propose evolution trends constructed from the evolution laws of TRIZ theory, design heuristics, and engineering rules of the art. An application case concerning the study of the evolution, and the proposal, of new separation systems for two-phase mixtures in deep offshore conditions highlights the method.
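The discovery matrix is the pivot of the methodology: its cells cross one keyword family with another, filled cells rank pertinent patents, and empty cells point to innovation opportunities. A hedged sketch of building such a matrix (keywords and patent snippets are invented placeholders, not the thesis's keyword database):

```python
# Hedged sketch of a keyword "discovery matrix": rows and columns are two
# keyword families, and each cell counts the patents whose text contains both
# keywords.  Keywords and patent snippets are invented placeholders; the
# thesis's actual keyword database and classification rules are not reproduced.
physical_keywords = ["centrifugal force", "gravity", "coalescence"]
technological_keywords = ["cyclone", "settling tank", "membrane"]

patents = [
    "cyclone separator using centrifugal force for two-phase mixtures",
    "gravity settling tank for deep offshore separation",
    "membrane module promoting droplet coalescence",
]

matrix = {
    (p, t): sum(1 for doc in patents if p in doc and t in doc)
    for p in physical_keywords
    for t in technological_keywords
}

# Empty cells (count 0) are the places the methodology inspects for
# innovation opportunities; non-empty cells rank pertinent patents.
for (p, t), count in matrix.items():
    print(f"{p:18s} x {t:13s}: {count}")
```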
9

An Approach For Including Business Requirements To SOA Design

Ocakturk, Murat 01 February 2010 (has links) (PDF)
In this thesis, a service-oriented decomposition approach, Use Case Driven Service Oriented Architecture (UDSOA), is introduced to close the gap between business requirements and SOA (Service Oriented Architecture) design by including business use cases and system use cases in the decomposition process. The approach is constructed upon the Service Oriented Software Engineering (SOSE) modeling technique and aims to fill its deficits at the decomposition phase. Further, it aims to involve both the business vision and Information Technology concerns in the decomposition process. The approach starts with a functional top-down decomposition of the domain. Business use cases are then used for further decomposition because of their high-level view; this connects the business requirements to the SOA design and raises the level of abstraction, allowing the architect to focus on business services. The second step of the approach uses system use cases to continue the decomposition. System use cases help in discovering technical web services and allocating them on the decomposition tree. Service-oriented analysis also helps to separate business and technical services under tightly coupled architecture conditions. Together, these two steps bring quality into both the problem and solution domains.
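The two-step decomposition can be pictured as a tree in which business use cases occupy the upper levels and system use cases attach technical web services below them. A hypothetical sketch of such a tree (node names and the service split are invented, not taken from the thesis):

```python
# Illustrative sketch of a UDSOA-style decomposition tree: business use cases
# drive the upper levels and system use cases attach technical web services
# lower down.  Node names and the service split are invented examples.
tree = {
    "Order Management (domain)": {
        "Place order (business use case)": {
            "Validate customer (system use case)": ["CustomerLookupService"],
            "Reserve stock (system use case)": ["InventoryService"],
        },
        "Invoice customer (business use case)": {
            "Generate invoice (system use case)": ["BillingService"],
        },
    }
}

def walk(node, depth=0):
    for name, child in node.items():
        print("  " * depth + name)
        if isinstance(child, dict):
            walk(child, depth + 1)
        else:  # leaf: technical web services allocated on the tree
            for service in child:
                print("  " * (depth + 1) + f"-> {service}")

walk(tree)
```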
10

Recherche de résonances se désintégrant en paire de quarks top-antitop avec l'expérience ATLAS / Search for new resonances decaying into a top-antitop quarks pair with the ATLAS experiment

Barbe, William 19 September 2019 (has links)
The Standard Model of particle physics describes three of the four fundamental interactions, and all of its predictions have been confirmed experimentally. However, there are still questions that the Standard Model cannot answer. Several theoretical models are being explored, and some predict new resonances decaying into a top-antitop quark pair that could be observed by the ATLAS detector at the LHC collider. From 2026, the LHC will restart after a major upgrade phase to increase its luminosity. It is in this context that studies were carried out on FATALIC, a chip proposed as a replacement for the front-end electronics of the ATLAS hadronic tile calorimeter. The studies showed that FATALIC is able to reconstruct the parameters of an analog signal using three gain channels and a dynamic gain switch. Simulations showed that the expected performance of FATALIC's fast channel was within the required specifications. Then, a search for new particles decaying into a top-antitop quark pair was presented, using 36.1 fb-1 of data from proton-proton collisions at 13 TeV recorded by the LHC in 2015 and 2016. This search concentrated on the semi-leptonic decay channel of the top-antitop quark pair, where the final state has a signature with exactly one lepton, hadronic jets and missing transverse energy. The estimate of the multi-jet background was presented in detail. A search in the top-antitop invariant mass spectrum was performed in the two topologies, resolved and boosted, and the compatibility of the data with the Standard Model predictions was tested. No significant deviation from the Standard Model predictions was found, and limits were set on the signal production cross sections of the benchmark models considered. The difficulties encountered in estimating the backgrounds and in profiling the systematic uncertainties for the 36.1 fb-1 analysis motivated the search for a new method for the global background estimate. The Functional Decomposition (FD) algorithm is a new method for searching for new particles in an invariant mass spectrum by separating the background contribution from the resonant contributions. FD was tested to verify its performance on pseudo-data from the top-antitop and "4t BSM" analyses. First, tests were conducted to check whether FD creates spurious signals in invariant mass spectra. The first version proved sensitive to this problem, and FD was then improved to reduce the amount of spurious signal. Finally, signal injection studies were carried out, and FD showed difficulties in modelling the signal contribution and separating it from the background for signal widths greater than 3%.
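The Functional Decomposition algorithm separates a smooth background contribution from resonant contributions in an invariant-mass spectrum. As a hedged illustration of that idea only (not of FD's actual decomposition), the sketch below fits a smooth falling background to invented pseudo-data and inspects the residuals for a resonance:

```python
# Hedged illustration of separating a smooth background from a resonant bump
# in an invariant-mass spectrum.  This is NOT the Functional Decomposition
# algorithm itself; it only shows the underlying idea on invented pseudo-data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
mass = np.linspace(500.0, 3000.0, 60)                          # GeV, invented binning
background = 1e4 * np.exp(-mass / 400.0)                       # smooth falling spectrum
signal = 300.0 * np.exp(-0.5 * ((mass - 1500.0) / 40.0) ** 2)  # narrow resonance
counts = rng.poisson(background + signal)

def bkg_model(m, n0, tau):
    return n0 * np.exp(-m / tau)

popt, _ = curve_fit(bkg_model, mass, counts, p0=(1e4, 400.0))
residual = counts - bkg_model(mass, *popt)

peak_bin = int(np.argmax(residual))
print(f"largest excess near {mass[peak_bin]:.0f} GeV")         # should sit near 1500 GeV
```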
