321 |
Entwurf und Verifikation von Petrinetzmodellen verteilter Algorithmen durch Verfeinerung unverteilter Algorithmen (Design and verification of Petri net models of distributed algorithms by refining undistributed algorithms). Wu, Bixia, 12 July 2007 (has links)
In order to make the design and verification of complicated distributed algorithms easier and more understandable, a refinement method is often used: a simple algorithm that fulfills the desired properties is refined stepwise into a complicated algorithm, and in each step the desired properties are preserved. For message-based distributed algorithms we have developed a new refinement method. We begin with an initial algorithm containing actions that describe common tasks of several agents. In each step we refine one of these actions into a net that contains only actions describing the tasks of individual agents; each step is thus a distribution of an undistributed action. The analysis of such refinement steps is carried out with the help of a new refinement notion, the distributing refinement. The crucial point is the preservation of the partial orders of the algorithm being refined, which is achieved through causalities between the actions of the agents in the local refinement net. During design, these causalities can be realized directly by message passing; during verification, the validity of a causality can be read directly off the local refinement net. The method is therefore easy to use, and its application is demonstrated on several nontrivial examples in this thesis.
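The distribution of one joint action into per-agent actions linked by a message, as described above, can be sketched minimally in Python (the transfer scenario and all names are invented for illustration, not taken from the thesis):

```python
# Hypothetical sketch: an undistributed action touching two agents is
# refined into per-agent local actions whose causal order is enforced
# by a message, preserving the joint effect.
from queue import Queue

# Undistributed view: one atomic action performed jointly by both agents.
def transfer_joint(sender_balance, receiver_balance, amount):
    return sender_balance - amount, receiver_balance + amount

# Distributed refinement: each agent performs only its own local action;
# the message channel creates the causality "debit happens before credit".
channel = Queue()

def sender_action(balance, amount):
    balance -= amount          # local step of the sending agent
    channel.put(amount)        # the message establishes the causal order
    return balance

def receiver_action(balance):
    amount = channel.get()     # the credit cannot occur before the message
    return balance + amount

s, r = transfer_joint(100, 50, 30)
s2 = sender_action(100, 30)
r2 = receiver_action(50)
assert (s, r) == (s2, r2)      # the refinement preserves the joint effect
```

The message passing plays the role of the causality in the local refinement net: the receiver's action cannot fire before the sender's, so the partial order of the undistributed algorithm is preserved.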
|
322 |
From specification through refinement to implementation : a comparative study. Van Coppenhagen, Ingrid H. M., 30 June 2002 (has links)
This dissertation investigates the role of specification, refinement and implementation in the software development cycle. Both the structured and object-oriented paradigms are examined, with particular emphasis on the role of the refinement process.
The requirements for the product (system) are determined, the specifications are drawn up, the product is designed, specified, implemented and tested. The stage between the (formal) specification of the system and the implementation of the system is the refinement stage.
The refinement process consists of data refinement, operation refinement, and operation decomposition. In this dissertation, Z, Object-Z and UML (Unified Modelling Language) are used as specification languages and C, C++, Cobol and Object-Oriented Cobol are used as implementation languages.
As an illustration a small system, The ITEM System, is specified in Z and UML and implemented in Object-Oriented Cobol. / Computing / M. Sc. (Information Systems)
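Data refinement, the first of the three refinement steps named above, can be sketched as follows (a minimal illustration in Python; the item store and all names are invented here, not drawn from the dissertation):

```python
# Illustrative sketch of data refinement: an abstract store specified as a
# set is refined to a concrete list-based implementation; the retrieve
# function maps concrete state back to abstract state.

class AbstractStore:
    """Abstract specification: a set of item codes."""
    def __init__(self):
        self.items = set()
    def add(self, code):
        self.items.add(code)

class ConcreteStore:
    """Concrete refinement: a duplicate-free sorted list."""
    def __init__(self):
        self.items = []
    def add(self, code):
        if code not in self.items:   # maintain the representation invariant
            self.items.append(code)
            self.items.sort()

def retrieve(concrete):
    """Retrieve relation: recover the abstract state from the concrete one."""
    return set(concrete.items)

# Operation refinement is correct if the diagram commutes: performing the
# concrete operation and retrieving gives the same abstract state as
# performing the abstract operation directly.
a, c = AbstractStore(), ConcreteStore()
for code in ["A10", "B20", "A10"]:
    a.add(code)
    c.add(code)
assert retrieve(c) == a.items
```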
|
324 |
Grain refinement in hypoeutectic Al-Si alloy driven by electric currents. Zhang, Yunhu, 26 February 2016 (has links) (PDF)
The present thesis investigates grain refinement in solidifying Al-7wt%Si hypoeutectic alloy driven by electric currents. The reduction of grain size produced by electric currents during solidification has been intensively investigated; however, since several effects of electric currents could in principle generate finer equiaxed grains, it is still debated which effect plays the key role in the grain refinement process. In addition, knowledge about the grain refinement mechanism under the application of electric currents remains fragmentary and inconsistent. The research objectives of this thesis therefore focus on the role of the individual electric current effects and on the grain refinement mechanism under the application of electric currents.
Chapter 1 introduces the subject of grain refinement in alloys driven by electric currents during solidification, covering the research objectives, the research motivation, a brief review of the research history, a short introduction to the effects of electric currents, and a review of the research status of the grain refinement mechanism.
Chapter 2 describes the research methods: the materials employed, the experimental setup and procedure, the analysis methods for the solidified samples, and the numerical method.
Chapter 3 focuses on the role of the electric current effects in the grain refinement process. A series of solidification experiments is performed at various effective electric currents, for both electric current pulses and direct current. The corresponding temperature and flow measurements are carried out as the effective electric current intensity increases. Meanwhile, numerical simulations reveal the details of the flow structure and the distribution of electric current density and electromagnetic force. Finally, the role of the electric current effects is discussed in order to identify the key effect in the grain refinement driven by electric currents.
Chapter 4 investigates the grain refinement mechanism driven by electric currents. It focuses mainly on the origin of the finer equiaxed grains, since this origin is central to understanding the refinement mechanism. A series of solidification experiments is carried out in Al-7wt%Si alloy and in high-purity aluminum, and the main origin of the equiaxed grains is identified from the experimental results.
Chapter 5 presents three further investigations building on the findings of chapters 3 and 4 about the role of the electric current effects and the grain refinement mechanism. Based on the insight into the key electric current effect identified in chapter 3, it presents a potential approach to promote grain refinement. In addition, the solute distribution under the influence of electric currents is examined. Moreover, the grain refinement mechanism under the application of a travelling magnetic field is investigated through a series of solidification experiments, for comparison with the electric current experiments of chapter 4.
Chapter 6 summarizes the main conclusions from the presented work.
|
325 |
Refinement of the partogram: an educational perspective. Mareka, Kedibonye Mmachere, 01 1900 (has links)
A deductive, descriptive, quantitative study was undertaken at Nyangabgwe Hospital, Francistown, Botswana, situated in the north east of the country. Its focus was on the use of the partogram by midwives.
The population consisted of 395 obstetric records covering a period of one month, from which a sample of 303 obstetric records was drawn. Data were collected by auditing the bed letters of delivered mothers and through interviews with, and observation of, midwives using the partogram in practice.
The Statistical Package for Social Sciences (SPSS) program was used to analyse the data. The findings indicate that there are problems regarding the use of the partogram by midwives, as well as factors that can negatively influence its use.
It is suggested that a supportive teaching programme for midwives be designed to reinforce the existing system of supervision in the labour ward in the use of the partogram throughout the labour process. / Health Studies / M.A. (Advanced Nursing Sciences)
|
326 |
DISTRIBUTED ARCHITECTURE FOR A GLOBAL TT&C NETWORK. Martin, Fredric W., 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / Use of top-down design principles and standard interface techniques provides the basis for a global telemetry data collection, analysis, and satellite control network with a high degree of survivability via a distributed architecture. Use of Commercial Off-The-Shelf (COTS) hardware and software minimizes costs and provides for easy expansion and adaptation to new satellite constellations. Adaptive techniques and low-cost multiplexers provide for graceful system-wide degradation and flexible data distribution.
|
327 |
Development of Effective Algorithm for Coupled Thermal-Hydraulics – Neutron-Kinetics Analysis of Reactivity Transient. Peltonen, Joanna, January 2009 (has links)
Analyses of nuclear reactor safety have increasingly required coupling of full three-dimensional neutron kinetics (NK) core models with system transient thermal-hydraulics (TH) codes. To produce results within a reasonable computing time, the coupled codes use different spatial descriptions of the reactor core: the TH code represents the core with a few TH channels, typically 5 to 20, while the NK code uses an explicit node for each fuel assembly. Therefore, a spatial mapping between the coarse TH grid and the fine NK grid is necessary. However, improper mappings may result in loss of valuable information, causing inaccurate prediction of safety parameters.
The purpose of this thesis is to study the sensitivity of the spatial coupling (channel refinement and spatial mapping) and to develop recommendations for the NK-TH mapping in simulations of safety transients: control rod drop, turbine trip, and a feedwater transient combined with stability performance (minimum pump speed of the recirculation pumps).
The research methodology consists of a spatial coupling convergence study, increasing the number of TH channels and varying the mapping approach towards the reference case, in which there is one TH channel per fuel assembly. Results have been compared under steady-state and transient conditions. The obtained results and conclusions are presented in this licentiate thesis.
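The coarse/fine mapping described above can be sketched as follows (a minimal illustration in Python with NumPy; the assembly counts, channel layout, and values are invented for illustration, not taken from the thesis):

```python
# Hypothetical sketch of an NK-TH spatial mapping: many fine NK nodes
# (one per fuel assembly) are collapsed onto a few coarse TH channels,
# and TH feedback is broadcast back onto the fine grid.
import numpy as np

n_assemblies = 12                 # fine NK grid: one node per fuel assembly
mapping = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3])  # assembly -> channel
n_channels = mapping.max() + 1    # coarse TH grid: 4 channels here

# NK -> TH: collapse assembly powers onto channels (a conservative sum,
# so no power is lost in the mapping).
assembly_power = np.ones(n_assemblies)
channel_power = np.zeros(n_channels)
np.add.at(channel_power, mapping, assembly_power)
assert channel_power.sum() == assembly_power.sum()

# TH -> NK: broadcast channel-averaged coolant density back to assemblies
# for the neutronics feedback.
channel_density = np.array([0.74, 0.71, 0.69, 0.72])
assembly_density = channel_density[mapping]
assert assembly_density.shape == (n_assemblies,)
```

An improper `mapping` array (grouping assemblies with very different power into one channel) is exactly the information loss the abstract warns about: the broadcast feedback then misrepresents the local conditions of some assemblies.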
|
328 |
Program synthesis from domain specific object models. Faitelson, David, January 2008 (has links)
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster, a domain specific, object based specification language and automatic code generator. The domain of Booster is information systems: systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing objects, and creating or destroying associations) but they are tricky to implement correctly.
The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) in which the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This differs from the traditional view that considers each class a separate ADT; the thesis argues that the Booster semantics is a better model of object-oriented systems. The second contribution is the idea of model completion, a process that augments the postconditions of methods with additional predicates that follow from the system's invariant and the method's original intention. The third contribution is a simple but effective code generation technique that interprets postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
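The weakest-precondition idea behind the code generation technique can be sketched for the simplest case, an assignment statement, where wp(x := e, Q) is Q with e substituted for x (a minimal illustration; the counter example and the naive textual substitution are my own, not from the thesis):

```python
# Illustrative sketch: weakest precondition of an assignment, obtained by
# substitution, and a check that code satisfying the wp establishes the
# postcondition after execution. The textual replace is deliberately naive
# (it would also hit substrings); a real tool substitutes syntactically.

def wp_assign(var, expr, post):
    """wp(var := expr, post) = post[expr/var], by textual substitution."""
    return post.replace(var, f"({expr})")

# Postcondition of a method that increments a counter:
post = "count == old + 1"
pre = wp_assign("count", "old + 1", post)   # "(old + 1) == old + 1"

# Any state satisfying the wp must, after executing the generated
# assignment, satisfy the postcondition:
state = {"old": 41}
assert eval(pre, {}, state)                  # wp holds before execution
state["count"] = state["old"] + 1            # generated code: count := old + 1
assert eval(post, {}, state)                 # postcondition holds after
```

Interpreting the postcondition `count == old + 1` as the executable statement `count := old + 1` is the essence of the generation step; the wp check is what guarantees the generated code refines the specification.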
|
329 |
Radiation hydrodynamic models and simulated observations of radiative feedback in star forming regions. Haworth, Thomas James, January 2013 (has links)
This thesis details the development of the radiation transport code torus for radiation hydrodynamic applications and its subsequent use in investigating problems regarding radiative feedback. The code couples Monte Carlo photoionization with grid-based hydrodynamics and has the advantage that all of the features available to a dedicated radiation transport code are at its disposal in RHD applications. I discuss the development of the code, including the hydrodynamics scheme, the adaptive mesh refinement (AMR) framework and the coupling of radiation transport with hydrodynamics. Extensive testing of the resulting code is also presented. The main application involves the study of radiatively driven implosion (RDI), a mechanism where the expanding ionized region about a massive star impacts nearby clumps, potentially triggering star formation. Firstly I investigate the way in which the radiation field is treated, isolating the relative impacts of polychromatic and diffuse field radiation on the evolution of radiation hydrodynamic RDI models. I also produce synthetic SEDs, radio, Hα and forbidden line images of the bright rimmed clouds (BRCs) resulting from the RDI models, on which I perform standard diagnostics that are used by observers to obtain the cloud conditions. I test the accuracy of the diagnostics and show that considering the pressure difference between the neutral cloud and surrounding ionized layer can be used to infer whether or not RDI is occurring. Finally I use more synthetic observations to investigate the accuracy of molecular line diagnostics and the nature of line profiles of BRCs. I show that the previously unexplained lack of dominant blue-asymmetry (a blue-asymmetry is the expected signature of a collapsing cloud) in the line profiles of BRCs can be explained by the shell of material, swept up by the expanding ionized region, that drives into the cloud. 
The work in this thesis thus helps to resolve the difficulties in understanding radiative feedback, a non-linear process that happens on small astrophysical timescales, by improving numerical models and the way in which they are compared with observations.
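A typical AMR framework of the kind mentioned above decides where to subdivide the grid with a local refinement criterion; one common choice flags cells with steep gradients, so resolution follows features such as an ionization front (a generic sketch with invented values, not the torus implementation):

```python
# Illustrative AMR refinement criterion: flag cells whose normalized
# density jump to a neighbour exceeds a threshold, so the mesh is
# subdivided where the solution varies sharply (e.g. at a front).
import numpy as np

density = np.array([1.0, 1.0, 1.0, 0.9, 0.2, 0.1, 0.1, 0.1])
threshold = 0.3

# Normalized jump between neighbouring cells:
jump = np.abs(np.diff(density)) / np.maximum(density[:-1], density[1:])
refine = jump > threshold        # True where the grid should be subdivided

# The steep drop between cells 3 and 5 (the "front") is flagged;
# the smooth regions on either side are left at the coarse resolution.
assert refine.any() and not refine[0]
```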
|
330 |
Étude des architectures de sécurité de systèmes autonomes : formalisation et évaluation en Event B / Model based safety of FDIR architectures for autonomous systems: formal specification and assessment with Event-B. Chaudemar, Jean-Charles, 27 January 2012 (has links)
The study of complex system safety requires a rigorous design process. The context of this work is the formal modeling of fault-tolerant autonomous control systems. The first objective was to provide a formal specification of a generic layered architecture that covers all the main activities of a control system and implements safety mechanisms. The second objective was to provide a method and tools to qualitatively assess safety requirements.
The formal framework for modeling and assessment relies on the Event-B formalism. The proposed Event-B modeling is original in that it takes into account the exchanges and relations between architecture layers by means of successive refinements. Safety requirements are specified with invariants and theorems; meeting these requirements depends on intrinsic properties of the system, described as axioms. The proofs that the proposed architecture principle satisfies the expected safety requirements were discharged with the proof tools of the Rodin platform. The functional properties and the properties of the fault tolerance mechanisms, modeled in Event-B, reinforce the relevance of the adopted modeling for safety analysis. This approach is then applied to a case study of an ONERA UAV.
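The refinement idea described above, an abstract safety invariant preserved by concrete events through a gluing invariant, can be sketched by brute force (the machines, layer names, and guards below are invented for illustration and are not from the thesis; in Event-B the obligation would be discharged by proof, not by testing):

```python
# Hedged sketch: an abstract invariant ("the system is never in a FAILED
# mode") is preserved by a concrete fault-tolerance event whose guard
# rejects unsafe transitions; a gluing function relates concrete state
# back to the abstract mode.

ABSTRACT_MODES = {"NOMINAL", "DEGRADED"}

def abstract_invariant(mode):
    return mode in ABSTRACT_MODES

# Concrete refinement: state is split per architecture layer; the gluing
# invariant maps any layer state back to an abstract mode.
def glue(layer_states):
    return "NOMINAL" if all(s == "ok" for s in layer_states) else "DEGRADED"

def concrete_event(layer_states, layer, status):
    """A layer reports a status; the guard only admits transitions in
    which a fault has been isolated, so safety cannot be violated."""
    if status not in ("ok", "isolated"):   # guard: faults must be isolated
        return layer_states                # event is not enabled
    new = dict(layer_states)
    new[layer] = status
    return new

state = {"decision": "ok", "execution": "ok", "safety": "ok"}
state = concrete_event(state, "execution", "isolated")
assert abstract_invariant(glue(state.values()))   # invariant preserved
state = concrete_event(state, "safety", "failed") # guard rejects this event
assert abstract_invariant(glue(state.values()))   # still preserved
```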
|