131 |
Formal language for statistical inference of uncertain stochastic systems. Georgoulas, Anastasios-Andreas. January 2016 (has links)
Stochastic models, in particular Continuous Time Markov Chains, are a commonly employed mathematical abstraction for describing natural or engineered dynamical systems. While the theory behind them is well-studied, their specification can be problematic in a number of ways. Firstly, the size and complexity of the model can make its description difficult without using a high-level language. Secondly, knowledge of the system is usually incomplete, leaving one or more parameters with unknown values, thus impeding further analysis. Sophisticated machine learning algorithms have been proposed for the statistically rigorous estimation and handling of this uncertainty; however, their applicability is often limited to systems with finite state-space, and their use on high-level descriptions has not been considered. Similarly, high-level formal languages have long been used for describing and reasoning about stochastic systems, but they require a full specification; efforts to estimate parameters for such formal models have been limited to simple inference algorithms. This thesis explores how these two approaches can be brought together, drawing ideas from the probabilistic programming paradigm. We introduce ProPPA, a process algebra for the specification of stochastic systems with uncertain parameters. The language is equipped with a semantics, allowing a formal interpretation of models written in it. This is the first time that uncertainty has been incorporated into the syntax and semantics of a formal language, and we describe a new mathematical object capable of capturing this information. We provide a series of inference algorithms which can be applied automatically to ProPPA models without the need to write extra code. As part of these, we develop a novel inference scheme for infinite-state systems, based on random truncations of the state-space. The expressive power and inference capabilities of the framework are demonstrated in a series of small examples as well as a larger-scale case study. We also present a review of the state of the art in both machine learning and formal modelling with respect to stochastic systems. We close with a discussion of potential extensions of this work, and thoughts about different ways in which the fields of statistical machine learning and formal modelling can be further integrated.
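As an illustration of the kind of inference problem this thesis addresses, the sketch below simulates a simple birth-death CTMC with Gillespie's algorithm and recovers an uncertain rate parameter by ABC rejection sampling. This is a generic, simulation-based (likelihood-free) scheme written for this listing, not the ProPPA language, its semantics, or the random-truncation algorithm developed in the thesis; the model, prior, and tolerance are illustrative assumptions.

```python
# Hypothetical sketch: inferring an uncertain rate of a simple birth-death CTMC
# via ABC rejection sampling. This illustrates simulation-based inference for
# CTMCs in general, not the ProPPA language or its semantics.
import random

def gillespie_birth_death(birth, death, x0, t_end, seed=None):
    """Simulate a birth-death CTMC (constant birth rate `birth`, per-capita
    death rate `death`) with Gillespie's algorithm; return the state at t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        total = birth + death * x
        if total == 0.0:
            return x
        t += rng.expovariate(total)
        if t > t_end:
            return x
        if rng.random() < birth / total:
            x += 1
        else:
            x -= 1

# "Observed" data generated with a true (but assumed unknown) birth rate.
true_birth, death, x0, t_end = 3.0, 0.1, 0, 10.0
observations = [gillespie_birth_death(true_birth, death, x0, t_end, seed=s)
                for s in range(20)]
obs_mean = sum(observations) / len(observations)

# ABC rejection: sample the uncertain parameter from a prior, simulate,
# keep samples whose summary statistic is close to the observed one.
accepted = []
rng = random.Random(0)
while len(accepted) < 200:
    candidate = rng.uniform(0.0, 10.0)             # uniform prior on the birth rate
    sim = [gillespie_birth_death(candidate, death, x0, t_end) for _ in range(5)]
    if abs(sum(sim) / len(sim) - obs_mean) < 2.0:  # tolerance on the mean count
        accepted.append(candidate)

print("posterior mean of the uncertain rate:", sum(accepted) / len(accepted))
```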
|
132 |
A framework for facilitating the development of systems of systems. Moro Puppi Wanderley, Gregory. 27 June 2018 (has links)
Building Systems of Systems (SoS) has gained momentum in various domains. Today, complex applications require several independently developed systems to cooperate, leading to the notion of SoS. Despite such popularity, no consensus has yet been reached on a precise definition of what SoS are. Moreover, the crux of the matter is that most applications are still handcrafted, developed in an ad hoc fashion, i.e., freely and without being constrained by a predefined structure. Handcrafting an SoS is a Herculean task for architects, requiring them to create an interwoven set of connections among the SoS constituent systems to allow cooperation. Because of the large number of interconnections, complexity and tight coupling increase in the SoS, and its evolution becomes more difficult, requiring substantial effort from architects. To sever the Gordian knot faced by SoS architects, we propose in this research a generic framework for facilitating the development of SoS from a systems engineering perspective. Our approach is based on a novel architecture we call MBA, for Memory-Broker-Agent. To test our framework we built an SoS for developing software collaboratively. Results show that our approach reduces the difficulty and effort of developing an SoS. Based on these results, we created an original method for building an SoS using our framework. We tested the potential of our method, along with the generic features of our framework, by successfully and more accurately building a new SoS in the health care domain.
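To make the Memory-Broker-Agent idea concrete, here is a speculative sketch in which constituent systems cooperate only through agents, a broker, and a shared memory, never calling each other directly. All class names, interfaces, and the collaborative-development example are assumptions made for illustration; the actual MBA architecture in the thesis may be structured quite differently.

```python
# Speculative sketch of a Memory-Broker-Agent style of decoupling: constituent
# systems never call each other directly; their agents publish to and read from
# a shared memory through a broker. Names and interfaces here are illustrative
# assumptions, not the MBA design from the thesis.
from collections import defaultdict

class SharedMemory:
    """Blackboard-style store holding the data exchanged between systems."""
    def __init__(self):
        self.entries = defaultdict(list)
    def write(self, topic, value):
        self.entries[topic].append(value)
    def read(self, topic):
        return list(self.entries[topic])

class Broker:
    """Routes notifications so agents stay unaware of one another."""
    def __init__(self, memory):
        self.memory = memory
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, agent):
        self.subscribers[topic].append(agent)
    def publish(self, topic, value):
        self.memory.write(topic, value)
        for agent in self.subscribers[topic]:
            agent.notify(topic, value)

class Agent:
    """Wraps one constituent system and adapts it to the broker."""
    def __init__(self, name, broker):
        self.name, self.broker = name, broker
    def notify(self, topic, value):
        print(f"{self.name} reacts to {topic}: {value}")

memory = SharedMemory()
broker = Broker(memory)
ci_agent = Agent("build-system", broker)
broker.subscribe("commit", ci_agent)
broker.publish("commit", {"id": "abc123", "author": "dev1"})  # build-system reacts
print(memory.read("commit"))                                  # history kept in memory
```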
|
133 |
Transaction logging and recovery on phase-change memory. Gao, Shen. 01 January 2013 (has links)
No description available.
|
134 |
The development of an integrated design system and its embedded frameworks for information handling, design space characterization and problem solving. Yang, Quangang, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. January 2007 (has links)
In today's highly competitive landscape, new product development strategies are imperative for companies to create and sustain competitive advantages. The objective of this research is to develop an integrated approach to automate, or aid, the design problem solving process. An Integrated Design System (IDS) is proposed, focusing on parametric and detail design. In this system, new design problems are generated and evaluated quickly and easily by changing the inputs to the design model. The IDS provides an integrated platform that incorporates available application programs such as CAD and FEM tools into a single system. Four major frameworks, namely information handling, problem decomposition, design space characterization, and problem solving, are proposed and embedded in it to implement the product development process. Information handling includes five aspects. A naming protocol is devised to organize historical design cases. A search algorithm is proposed to retrieve a design case. A system-generated report is used to distribute the design information. A constraint definition frame is presented to define the relationships between design parameters. Two schemas, an information matrix and a constraint tree, are developed to represent information in the IDS. A diagonal-centered decomposition scheme is developed, utilizing a Genetic Algorithm to decompose a complex design problem. In addition to the conventional genetic operators, two novel genetic operators, unequal position crossover and insertion mutation, are proposed. To characterize the design space, two methods, the Incremental Response Method (IRM) and an Artificial Neural Network (ANN), are presented. The IRM is derived from the response surface method, while the back-propagation ANN is coded to be self-evaluating. The presented problem-solving algorithm constitutes the solving mechanism of the IDS. Based on the assessment of the design objectives, all design parameters are given a priority index to facilitate parameter selection. An independent recursive method is introduced in this algorithm to handle the design constraints. Case studies are performed on two design problems: a hard disk drive actuator arm and a shaft. The results show that the system can automatically align parameter values towards the objective values in a reasonable manner, verifying the feasibility of the embedded frameworks.
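The decomposition step can be pictured with the sketch below: a small genetic algorithm reorders a parameter-dependency matrix so that couplings cluster near the diagonal, using an insertion-style mutation (removing one gene and reinserting it elsewhere). The dependency matrix, fitness function, and mutation-only evolution loop are simplifications assumed for illustration; this is not the unequal position crossover or the exact diagonal-centered scheme defined in the thesis.

```python
# Illustrative sketch of GA-based matrix reordering toward a diagonal-centered
# form: chromosomes are permutations of parameters, fitness penalizes couplings
# far from the diagonal, and mutation is an insertion move. The objective and
# operators are assumptions for illustration, not the thesis's scheme.
import random

# Dependency matrix: coupling[i][j] = 1 if parameter i depends on parameter j.
coupling = [
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
]
n = len(coupling)

def cost(perm):
    """Sum of distances of couplings from the diagonal after reordering."""
    pos = {p: i for i, p in enumerate(perm)}
    return sum(abs(pos[i] - pos[j])
               for i in range(n) for j in range(n) if coupling[i][j])

def insertion_mutation(perm, rng):
    """Remove one gene and reinsert it at a random position."""
    child = perm[:]
    gene = child.pop(rng.randrange(n))
    child.insert(rng.randrange(n), gene)
    return child

rng = random.Random(1)
population = [rng.sample(range(n), n) for _ in range(20)]
for _ in range(200):                      # simple (mutation-only) evolution loop
    population.sort(key=cost)
    survivors = population[:10]
    population = survivors + [insertion_mutation(rng.choice(survivors), rng)
                              for _ in range(10)]

best = min(population, key=cost)
print("best ordering:", best, "cost:", cost(best))
```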
|
135 |
Soil water and nitrogen dynamics of farming systems on the upper Eyre Peninsula, South Australia. Adcock, Damien Paul. January 2005 (has links)
In the semi-arid Mediterranean-type environments of southern Australia, soil and water resources largely determine crop productivity and ultimately the sustainability of farming systems within the region. The development of sustainable farming systems is a constantly evolving process, of which cropping sequences (rotations) are an essential component. This thesis focused on two important soil resources, soil water and nitrogen, and studied the effects of different crop sequences on the dynamics of these resources within current farming systems practiced on the upper Eyre Peninsula of South Australia. The hypothesis tested was that continuous cropping may alter N dynamics but will not necessarily alter water use efficiency in semi-arid Mediterranean-type environments. Continuous cropping altered N dynamics; increases in inorganic N were dependent on the inclusion of a legume in the cropping sequence. Associated with the increase in inorganic N supply was a decrease in WUE by the subsequent wheat crop. Overall, estimates of water use efficiency, a common index of the sustainability of farming systems, in this study concur with reported values for the semi-arid Murray-Mallee region of southern Australia and other semi-arid environments worldwide. The soil water balance and determination of WUE for a series of crop sequences in this thesis suggest that the adoption of continuous cropping may increase WUE and confer a yield benefit compared to crop sequences including a legume component in this environment. No differences in total water use (ET) at anthesis or maturity were measured for wheat regardless of the previous crop. Soil evaporation (Es) was significantly affected by crop canopy development, measured as LAI from tillering until anthesis in 2002; however, total seasonal Es did not differ between crop sequences. Indeed, in environments with infrequent rainfall, such as the upper Eyre Peninsula, soil evaporation may be water-limited rather than energy-limited, and the potential benefits from greater LAI and reduced Es are smaller. Greater shoot dry matter production and LAI due to an enhanced inorganic N supply for wheat after legumes, and to a lesser degree wheat after canola, relative to continuous cereal crop sequences resulted in increases in WUE calculated at anthesis, as reported by others. Nonetheless, the increase in WUE was not sustained, due to limitations on available soil water capacity caused by soil physical and chemical constraints. Access to more soil water at depth (> 0.8 m) through additional root growth was unavailable due to soil chemical limitations. More importantly, the amount of plant-available water within the 'effective rooting depth' (0-0.8 m) was significantly reduced when soil physical factors were accounted for using the integral water capacity (IWC) concept. The difference between the magnitude of the plant-available water capacity and the integral water capacity was approximately 90 mm within the 'effective rooting depth' when measured at field capacity, suggesting that the ability of the soil to store water and buffer against periodic water deficit was severely limited. The IWC concept offers a method of evaluating the physical quality of soils and the limitations that these physical properties, viz. aeration, soil strength and hydraulic conductivity, impose on the water supply capacity of the soil.
The inability of the soil to maintain a constant supply of water to satisfy maximal transpiration efficiency, combined with large amounts of N, resulted in 'haying off' and reduced grain yields. A strong negative linear relationship was established between WUE of grain production by wheat and increasing soil NO3-N at sowing in 2000 and 2002, which conflicts with results from experiments in semi-arid Mediterranean climates in other regions of the world, where applications of N increased the water use efficiency of grain. Estimates of proportional dependence on N2 fixation (%Ndfa) for annual medics and vetch from this study (43-80%) are comparable to others for environments in southern Australia (< 450 mm average annual rainfall). Such estimates of fixation are considered low (< 65%) to adequate (65-80%). Nevertheless, the amount of plant-available N present at sowing for subsequent wheat crops, and the occurrence of 'haying off', suggest that WUE is not N-limited per se, as implied by some reports, but constrained by the capacity of a soil to balance the co-limiting factors of water and nitrogen. / Thesis (Ph.D.)--School of Earth and Environmental Sciences, 2005.
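For readers unfamiliar with the quantities above, a common formulation of seasonal water use and grain water use efficiency is sketched below, where P is growing-season rainfall, ΔS the decrease in stored soil water over the season (initial minus final), D drainage, R runoff, and Y grain yield; the exact definitions and corrections applied in the thesis may differ.

```latex
\begin{align*}
  ET &= P + \Delta S - D - R \\
  \mathrm{WUE}_{\mathrm{grain}} &= \frac{Y}{ET}
\end{align*}
```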
|
136 |
Efficient Conditional Synchronization for Transactional Memory Based System. Naik, Aniket Dilip. 10 April 2006
Multi-threaded applications are needed to realize the full potential of new chip-multi-threaded machines. Such applications are very difficult to program and orchestrate correctly, and transactional memory has been proposed as a way of alleviating some of the programming difficulties. However, transactional memory can directly be applied only to critical sections, while conditional synchronization remains difficult to implement correctly and efficiently.

This dissertation describes EasySync, a simple and inexpensive extension to transactional memory that allows arbitrary conditional synchronization to be expressed in a simple and composable way. Transactional memory eliminates the need to use locks and provides composability for critical sections: atomicity of a transaction is guaranteed regardless of how other code is written. EasySync provides the same benefits for conditional synchronization: it eliminates the need to use condition variables, and it guarantees wakeup of the waiting transaction when the real condition it is waiting for is satisfied, regardless of whether other code correctly signals that change. EasySync also allows transactional memory systems to efficiently provide lock-free and condition-variable-free conditional critical regions and even more advanced synchronization primitives, such as guarded execution with arbitrary conditional or guard code.

Because EasySync informs the hardware that a thread is waiting, it allows simple and effective optimizations, such as stopping the execution of a thread until there is a change in the condition it is waiting for. Like transactional memory, EasySync is backward compatible with existing code, which we confirm by running unmodified Splash-2 applications linked with an EasySync-based synchronization library. We also rewrite some of the synchronization in three Splash-2 applications, to take advantage of better code readability and to replace spin-waiting with its more efficient EasySync equivalents.

Our experimental evaluation shows that EasySync successfully eliminates processor activity while waiting, reducing the number of executed instructions by 8.6% on average in a 16-processor CMP. We also show that these savings increase with the number of processors, and also for applications written for transactional memory systems. Finally, EasySync imposes virtually no performance overheads, and can in fact improve performance.
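EasySync itself is a hardware extension, but the programming model it enables (guarded atomic sections instead of explicit condition variables) can be sketched in software. The emulation below uses an ordinary lock and condition object internally and is purely illustrative; `atomic_when` and the bounded-buffer example are assumptions made for this listing, not the EasySync API or its implementation.

```python
# Software emulation of guarded atomic sections: the waiting side states the
# condition it needs and never signals or waits on condition variables itself.
# This only mimics the programming model; it is not EasySync.
import threading

_lock = threading.Lock()
_changed = threading.Condition(_lock)

def atomic_when(guard, body):
    """Run `body` atomically once `guard()` holds; block (without spinning)
    until some other atomic section changes shared state."""
    with _changed:
        while not guard():
            _changed.wait()
        result = body()
        _changed.notify_all()      # wake waiters whose guards may now hold
        return result

def atomic(body):
    """Run `body` atomically and wake any guarded waiters."""
    with _changed:
        result = body()
        _changed.notify_all()
        return result

# Example: a bounded buffer written with guards instead of explicit signalling.
buffer, CAPACITY = [], 4

def produce(item):
    atomic_when(lambda: len(buffer) < CAPACITY, lambda: buffer.append(item))

def consume():
    return atomic_when(lambda: len(buffer) > 0, lambda: buffer.pop(0))

threading.Thread(target=lambda: [produce(i) for i in range(8)]).start()
print([consume() for _ in range(8)])   # consumer blocks instead of spin-waiting
```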
|
137 |
Cruise Missile Mission Rehearsal. Bircan, Gokhan. 01 December 2011 (has links) (PDF)
Cruise missile mission planning is a key activity of cruise missile operations. Ground planning activities aim at low-observable missions with a high probability of success. These activities include end-game planning, route planning and launch planning. While end-game planning tries to optimize end-game parameters for maximum effectiveness, route planning tries to maximize survivability and enable navigational support by determining the waypoints from the launch zone to the target through a defended area. Lastly, the planner tries to find appropriate launch parameters that prevent the launch platform from coming into contact with enemy agents. Mission rehearsal is the execution of the planned mission in a virtual environment constructed from the data that drives the planning process. Mission rehearsal will support planners by providing possible outcomes of the planned mission. The stochastic processes involved in executing the planned mission will be incorporated in the combat simulation. Along with the platform, the cruise missile and the target, other players such as SAM sites or search radars (early warning radars) will be incorporated in the rehearsal process.
|
138 |
ARC-VM: an architecture real options complexity-based valuation methodology for military systems-of-systems acquisitions. Domercant, Jean Charles. 14 November 2011 (has links)
An Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) is developed to aid in the acquisition of military systems-of-systems (SoS). ARC-VM is suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options-based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived complexity measure provides architects with an objective measure of complexity that focuses on relevant complex system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, and programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and utility of the resulting architecture complexity and Real Options-based valuation methodology. Different portfolios of candidate system types are used to generate an array of architecture alternatives that are then evaluated using an engagement model. This performance data is combined with both measured architecture complexity and programmatic data to assign an acquisition value to each alternative. This proves useful when selecting the alternatives most likely to meet current and future capability needs.
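The Real Options side of the methodology builds on standard option-pricing machinery; a generic sketch of valuing an option to defer an investment on a binomial lattice is shown below. The parameters, payoff, and the Cox-Ross-Rubinstein lattice are illustrative assumptions, and the way ARC-VM couples such a valuation with its architecture complexity measure is not reproduced here.

```python
# Generic binomial-lattice valuation of a simple "option to invest" (a real
# option to defer), in the style of the financial option pricing that real
# options methods adapt. All numbers and the payoff are illustrative only.
from math import exp, sqrt

def deferral_option_value(V0, I, r, sigma, T, steps):
    """Value of the right (not obligation) to invest cost I in a project worth
    V0 today, exercisable any time up to T years (American-style), using a
    Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = exp(sigma * sqrt(dt))          # up factor per step
    d = 1.0 / u                        # down factor
    p = (exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = exp(-r * dt)
    # project payoffs at the final step
    values = [max(V0 * u**j * d**(steps - j) - I, 0.0) for j in range(steps + 1)]
    # backward induction: compare "invest now" with "keep the option open"
    for step in range(steps - 1, -1, -1):
        values = [
            max(V0 * u**j * d**(step - j) - I,
                disc * (p * values[j + 1] + (1 - p) * values[j]))
            for j in range(step + 1)
        ]
    return values[0]

# A project worth 100 today, costing 110 to field, 30% volatility, 5% rate:
# the flexibility to wait still carries value even though investing now would not.
print(round(deferral_option_value(100.0, 110.0, 0.05, 0.30, 3.0, 200), 2))
```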
|
139 |
Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments. Kernstine, Kemp H. 25 June 2012 (has links)
The complexities of our surrounding environments are becoming increasingly diverse, more integrated, and continuously more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations, where computational times can surpass real time and are often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections in modeling environments continues to increase and the number of external noise variables continues to multiply, these SoS simulations can no longer be explored by traditional means without significantly wasting computational resources.
This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm will be shown to improve the exploration of stochastic simulations over existing methods by increasing the global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
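As a toy illustration of adaptive sequential experimentation on a stochastic response, the sketch below places each new design point where nearby replicates disagree most and where the design is sparse. The surrogate, the selection criterion, and the one-dimensional test function are placeholders assumed for illustration, not the algorithm synthesized in this dissertation.

```python
# Generic illustration of adaptive sequential sampling of a noisy (stochastic)
# response: after each batch, the next design point is placed where nearby
# replicates disagree the most. Criterion and surrogate are simple placeholders.
import random
from statistics import mean, pvariance

def noisy_simulation(x, rng):
    """Stand-in for an expensive stochastic SoS simulation."""
    return (x - 0.3) * (x - 0.8) * 10.0 + rng.gauss(0.0, 0.2 + 0.8 * x)

rng = random.Random(0)
candidates = [i / 100.0 for i in range(101)]                        # design space
samples = {x: [noisy_simulation(x, rng)] for x in (0.0, 0.5, 1.0)}  # initial design

def local_spread(x):
    """Disagreement among the replicates of the nearest sampled points,
    plus a term favoring regions far from any existing sample."""
    nearest = sorted(samples, key=lambda s: abs(s - x))[:2]
    reps = [y for s in nearest for y in samples[s]]
    return pvariance(reps) + min(abs(s - x) for s in nearest)

for _ in range(20):                                   # sequential simulation budget
    x_next = max(candidates, key=local_spread)
    samples.setdefault(x_next, []).append(noisy_simulation(x_next, rng))

surrogate = {x: mean(ys) for x, ys in samples.items()}
print(f"{len(samples)} design points;",
      "lowest mean response at x =", min(surrogate, key=surrogate.get))
```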
|
140 |
Neuro-fuzzy system with increased accuracy suitable for hardware implementation. Govindasamy, Kannan; Wilamowski, Bogdan M. January 2009 (has links)
Thesis--Auburn University, 2009. / Abstract. Vita. Includes MATLAB code. Includes bibliography (p. 43-44).
|