131
A Framework Gene Regulatory Network Controlling Neural Crest Cell Diversification
Bosse, Kevin M., 14 December 2010
No description available.
132
A Campus Situational Awareness and Emergency Response Management System Architecture
Chigani, Amine, 29 April 2011
The history of university, college, and high school campuses is marked by man-made tragedies that have caused tremendous loss of life. Virginia Tech's April 16 shooting ignited the discussion about balancing openness and safety in open campus environments. Existing campus safety solutions address only bits and pieces of the problem. A prime example is the recent surge in demand for Electronic Notification Systems (ENS) by educational institutions following the tragedies at Virginia Tech and Northern Illinois University. Installing such systems is important, as they are an essential part of an overall solution. However, without a comprehensive understanding of the requirements for an institution-wide solution that enables effective security control and efficient emergency response, proposed solutions will always fall short.
This dissertation describes an architecture for SINERGY (campuS sItuational awareNess and Emergency Response manaGement sYstem), a Service-Oriented Architecture (SOA)-based network-centric system of systems that provides a comprehensive, institution-wide, software-based solution for balancing safety and openness in any campus environment. The SINERGY architecture addresses three main capabilities: situational awareness (SA), security control (SC), and emergency response management (ERM). A safe and open campus environment can be realized through the development of a network-centric system that creates a common operational picture (COP) of the campus environment shared by all campus entities. Having a COP of what is happening on campus at any point in time is key to putting effective SC measures in place. Finally, common SA and effective SC lay the foundation for efficient and successful ERM in the case of a man-made tragedy.
Because this research employs service-orientation principles to architect SINERGY, this dissertation also addresses a critical area of research with regard to SOA: security. Security has become a critical concern for SOA-based network-centric systems of systems due to the nature of business practices today, which emphasize dynamic sharing of information and services among independent partners. As a result, the line between internal and external organizational networks and services has been blurred, making it difficult to assess the security quality of SOA environments. To perform this evaluation effectively, a hierarchy of security indicators is developed. The proposed hierarchy is incorporated into a well-established evaluation methodology to provide a structured approach for assessing the security of an SOA-based network-centric system of systems.
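To give a flavour of how a hierarchy of security indicators can feed a structured evaluation, the following minimal Python sketch rolls weighted leaf indicators up into a single score; the indicator names, weights, and values are invented for illustration and are not taken from the dissertation.

```python
# Hypothetical hierarchy of security indicators rolled up into one score.
# Indicator names, weights, and assessed values are invented assumptions.
indicators = {
    "authentication":       (0.3, 0.8),  # (weight, assessed score in [0, 1])
    "message_integrity":    (0.4, 0.6),
    "service_availability": (0.3, 0.9),
}

assert abs(sum(w for w, _ in indicators.values()) - 1.0) < 1e-9  # weights sum to 1

score = sum(w * s for w, s in indicators.values())
print(f"aggregate SOA security score: {score:.2f}")  # 0.75
```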
Another area of focus in this dissertation is the architecting process. With the advent of potent network technology, software/system engineering has evolved from a traditional platform-centric focus into a network-centric paradigm in which the “system of systems” perspective is the norm. Under this paradigm, architecting has become a critical process in the software/system engineering life cycle, and the need for a structured description of it is undeniable. This dissertation fulfills that need by providing a structured description of the process of architecting a software-based network-centric system of systems. The architecting process is described using a set of goals specific to architecting and the associated practices that enable the realization of these goals. The description presented herein is intended to guide software/system architects. / Ph. D.
133
Interrater Agreement of Incumbent Job Specification Importance Ratings: Rater, Occupation, and Item Effects
Burnkrant, Steven Richard, 27 October 2003
Despite the importance of job specifications to much of industrial and organizational psychology, little is known about their reliability or validity. Because job specifications are developed from input provided by subject matter experts, interrater agreement is a necessary condition for their validity. The purpose of the present research is to examine the validity of job specifications by assessing the level of agreement in ratings and the effects of occupational tenure, occupational complexity, and the abstractness of rated worker requirements. Based on the existing literature, it was hypothesized that (1) agreement would fall below acceptable levels, (2) agreement would be higher among raters with longer tenure, (3) agreement would be lower in more complex occupations, (4) the effect of occupational tenure would be more pronounced in complex than in simple occupations, (5) agreement would be higher on more abstract items, and (6) agreement would be lowest for concrete KSAOs in complex occupations. These hypotheses were tested using ratings from 38,041 incumbents in 61 diverse occupations in the Federal government. Consistent with Hypothesis 1, agreement failed to reach acceptable levels in nearly every case, whether measured with the awg index or various forms of the rwg index. However, tenure, occupational complexity, and item abstractness had little effect on ratings. The most likely explanation for these null findings is that the disagreement reflected a coarse occupational classification system that overshadowed the effects of tenure, complexity, and abstractness. The existence of meaningful subgroups within a single title threatens the content validity of job specifications: the extent to which they include all relevant and predictive KSAOs. Future research must focus on the existence of such subgroups, their consequences, and ways of identifying them. / Ph. D.
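To make the agreement index concrete, the following Python sketch computes the single-item rwg of James, Demaree, and Wolf (1984) on made-up ratings; the study's actual computations, including awg and multi-item forms, are more involved.

```python
import numpy as np

def rwg(ratings, scale_points):
    """Single-item interrater agreement index rwg (James, Demaree & Wolf, 1984):
    1 minus the ratio of observed rating variance to the variance expected
    under a uniform "no agreement" null on an A-point scale, (A**2 - 1) / 12."""
    s2 = np.var(ratings, ddof=1)                  # observed rating variance
    sigma2_eu = (scale_points**2 - 1) / 12.0      # uniform-null variance
    return 1.0 - s2 / sigma2_eu

# Seven incumbents rate one KSAO's importance on a 5-point scale.
print(rwg([4, 4, 5, 4, 3, 4, 4], scale_points=5))  # ~0.83: high agreement
```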
134
A Statistical Approach to Empirical Macroeconomic Modeling with Practical Applications
Edwards, Jeffrey A., 24 April 2003
Most empirical modeling involves Ordinary Least Squares (OLS) regression, where the residuals are assumed to be normal, independent, and identically distributed. In finite samples these assumptions become critical for accurate estimation; in macroeconomics in particular, however, they are rarely tested. This study addresses the application of statistical testing methods and model respecification within the context of applied macroeconomics.
The first application is a statistical comparison of Gregory Mankiw, David Romer, and David Weil's "A Contribution to the Empirics of Economic Growth" and Nazrul Islam's "Growth Empirics: A Panel Data Approach". This analysis shows that the models in both papers are statistically misspecified. When respecified, the functional forms of Mankiw, Romer, and Weil's models change considerably, whereas Islam's retains its theoretical structure. The second application is a study of the impact of inflation on investment and growth. After instrumenting for inflation with a set of political variables, I find that between approximately 1% and 9% inflation there is a positive correlation between inflation and investment; the Mundell-Tobin effect may be a valid explanation. I extend this analysis to show that treating investment as an exogenous variable may be problematic in empirical growth models. / Ph. D.
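The kind of assumption testing the study advocates can be sketched as follows: an illustrative Python example running standard residual diagnostics with statsmodels on simulated data, not the author's actual test battery or dataset.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)

# Simulated stand-in for a macro regression (e.g., growth on investment, inflation).
n = 120
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.5, size=n)
fit = sm.OLS(y, X).fit()

# Test the residual assumptions instead of taking them on faith.
jb, jb_p, _, _ = jarque_bera(fit.resid)          # normality
dw = durbin_watson(fit.resid)                    # independence (serial correlation)
lm, lm_p, _, _ = het_breuschpagan(fit.resid, X)  # identical distribution (homoskedasticity)

print(f"Jarque-Bera p={jb_p:.3f}  Durbin-Watson={dw:.2f}  Breusch-Pagan p={lm_p:.3f}")
```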
135
A Method for Systematically Generating Tests from Object-Oriented Class Interfaces
Mungara, Mahesh Babu, 19 November 2003
This thesis describes the development and evaluation of a manual black-box testing method inspired by Zweben's test adequacy criteria, which apply white-box analogues of all-DU-pairs and all-nodes coverage to a flow graph generated from the black-box specification. The approach described herein generates tests from a matrix representation of a class interface based on the flow-graph concept. In this process, separate matrices for all-DU-pairs and all-nodes guide the generation of the required tests. The primary goal of the research is not to minimize the number of tests generated but to describe the process in a user-friendly manner so that practitioners can apply it directly, quickly, and efficiently in real-world testing.
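The flavour of the all-DU-pairs strategy can be conveyed with a small Python sketch; the interface, the def/use classification, and the pairing scheme below are illustrative assumptions rather than the thesis's actual matrix construction.

```python
from itertools import product

# Hypothetical def/use classification of a Stack-like interface, in the spirit
# of Zweben-style black-box DU analysis. Method names and the classification
# are illustrative assumptions.
DEFS = ["push", "pop", "clear"]       # methods that define (mutate) state
USES = ["peek", "is_empty", "size"]   # methods that use (observe) state

def all_du_pair_tests(defs, uses):
    """Generate one test skeleton per def-use pair: exercise the defining
    method, then check the resulting state through the using method."""
    return list(product(defs, uses))

for d, u in all_du_pair_tests(DEFS, USES):
    print(f"test: new object -> {d}() -> assert state via {u}()")
```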
The approach was evaluated to assess its effectiveness at detecting bugs. Both strategies - all-DU-pairs and all-nodes - were compared against three other testing methods: the commercial white-box testing tool Jtest, the Orthogonal Array Testing Strategy (OATS), and randomly generated test cases. The five approaches were applied to a sample of eleven Java classes selected from java.util.*. Experimental results indicate that the two strategies resulting from this research performed on par with or better than their respective equivalent approaches; the all-DU-pairs method performed better than all other approaches except the random approach, with which it compared equally. These results indicate that an automated approach based on the manual method is worth exploring. / Master of Science
136
Safety of Self-driving Cars: A Case Study on Lane Keeping Systems
Xu, Hao, 07 July 2020
Machine learning is a powerful method for tackling the self-driving problem: researchers construct a neural network and train it to drive the car. A self-driving car is a safety-critical system, yet the neural network is not necessarily reliable. Its output can easily be influenced by many factors, such as the quality of the training data and the runtime environment. Moreover, the neural network takes time to generate its output, so the self-driving car may not respond in time. Such weaknesses increase the risk of accidents. In this thesis, considering the safety of self-driving cars, we apply a delay-aware shielding mechanism to the neural network to protect the self-driving car. Our approach improves on previous research on runtime safety enforcement for general cyber-physical systems, which did not consider the delay in generating the output. Our approach contains two steps. The first is to use a formal language to specify the safety properties of the system. The second is to synthesize the specifications into a delay-aware enforcer, called the shield, which corrects violating outputs so that the specifications are satisfied throughout the delay. We use a lane keeping system, implemented as an end-to-end neural network, as a small but representative case study. Our shield supervises the outputs of the neural network and, using a prediction, verifies the safety properties over the whole delay period; it corrects the output if a violation exists. We test our approach with a 1/16-scale truck on a purpose-built curvy lane, conducting experiments both in a simulator and on a real road to evaluate the performance of the proposed safety mechanism. The results show the effectiveness of our approach in improving the safety of a self-driving car; we will consider more comprehensive driving scenarios and safety features in the future. / Master of Science / Self-driving cars are a hot topic nowadays, and machine learning is a popular method for achieving them: it constructs a neural network that imitates a human driver's behavior to drive the car. However, a neural network is not necessarily reliable. Many things can mislead it into making wrong decisions, such as insufficient training data or a complex driving environment. Thus, we need to guarantee the safety of self-driving cars. We are inspired to use a formal language to specify the safety properties of the self-driving system, which the system should always follow. The specifications are then synthesized into an enforcer called the shield: when the system's output violates the specifications, the shield modifies the output to satisfy them. However, state-of-the-art work on such shields does not account for the delay needed to compute the output, so the specifications may not be satisfied throughout that delay. To solve this problem, we propose a delay-aware shielding mechanism that continually protects the self-driving system. We use a lane keeping system as a small self-driving case study and evaluate the effectiveness of our approach on both a simulation platform and a hardware platform. The experiments show that the safety of our self-driving car is enhanced. We intend to study more comprehensive driving scenarios and safety features in the future.
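A minimal sketch of the delay-aware idea in Python follows; the dynamics model, bounds, and override rule are invented assumptions, far simpler than the enforcer the thesis synthesizes from formal specifications.

```python
# Illustrative sketch of a delay-aware shield for lane keeping. The dynamics
# model, bounds, and override rule are invented assumptions.
MAX_OFFSET = 0.3   # allowed lateral deviation from the lane centre (m)
DELAY = 0.1        # worst-case time for the network to produce output (s)

def predict_offset(offset, lateral_speed, steer, horizon):
    """Crude constant-rate prediction of lateral offset over the horizon."""
    return offset + (lateral_speed + 0.5 * steer) * horizon

def shield(nn_steer, offset, lateral_speed):
    """Pass the network's steering command through only if the lane-keeping
    property is predicted to hold over the whole delay window."""
    predicted = predict_offset(offset, lateral_speed, nn_steer, DELAY)
    if abs(predicted) <= MAX_OFFSET:
        return nn_steer                    # safe: keep the NN's output
    return -1.0 if predicted > 0 else 1.0  # override: steer back to centre

print(shield(nn_steer=0.4, offset=0.25, lateral_speed=0.5))  # -1.0: overridden
```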
137
A component-based approach to proving the correctness of the Schorr-Waite algorithm
Singh, Amrinder, 23 August 2007
This thesis presents a component-based approach to proving the correctness of programs involving pointers. Unlike previous work, our component-based approach supports modular reasoning, which is essential to the scalability of systems. Specifically, we specify the behavior of a graph-marking algorithm known as the Schorr-Waite algorithm, implement it using a component that captures both the behavior and the performance benefits of pointers, and prove that the implementation is correct with respect to the specification. Our example uses the Resolve language, an integrated programming and specification language that supports modular reasoning. The behavior of the algorithm is fully specified using custom definitions, pre- and post-conditions, and a complex loop invariant. Additional operations that preserve the accessibility of a system are introduced for the Resolve pointer component and used in the implementation of the algorithm; they simplify the proof of correctness and make the code shorter. / Master of Science
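For readers unfamiliar with the algorithm under verification, here is a plain Python rendering of classic Schorr-Waite pointer-reversal marking; it sketches the algorithm itself and is unrelated to the thesis's Resolve component or proof artifacts.

```python
class Node:
    """A two-successor cell, as in the original Schorr-Waite setting."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.marked = False   # set once the node is reached
        self.flag = False     # False: left subtree in progress; True: right

def schorr_waite(root):
    """Mark every node reachable from root using O(1) extra space:
    the traversal stack is encoded by temporarily reversing pointers."""
    p, t = None, root
    while True:
        # Descend leftward, reversing .left pointers to remember the path.
        while t is not None and not t.marked:
            t.marked = True
            t.left, p, t = p, t, t.left
        # Retreat past nodes whose right subtree is finished, restoring .right.
        while p is not None and p.flag:
            p.right, t, p = t, p, p.right
        if p is None:
            return  # back at the root with all pointers restored
        # Swing from the left to the right subtree of p.
        p.flag = True
        p.left, p.right, t = t, p.left, p.right

# Tiny cyclic example: a <-> b, a -> c.
c, b = Node(), Node()
a = Node(b, c)
b.left = a
schorr_waite(a)
print(all(n.marked for n in (a, b, c)))              # True
print(a.left is b and a.right is c and b.left is a)  # True: pointers restored
```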
138
An algebraic theory of componentised interaction
Chilton, Christopher James, January 2013
This thesis provides a specification theory with strong algebraic and compositionality properties, allowing for the systematic construction of new components out of existing ones, while ensuring that given properties continue to hold at each stage of system development. The theory shares similarities with the interface automata of de Alfaro and Henzinger, but is linear-time in the style of Dill's trace theory, and is endowed with a richer collection of operators. Components are assumed to communicate with one another by synchronisation of input and output actions, with the component specifying the allowed sequences of interactions between itself and the environment. When the environment produces an interaction that the component is unwilling to receive, a communication mismatch occurs, which can correspond to run-time error or underspecification. These are modelled uniformly as inconsistencies. A linear-time refinement preorder corresponding to substitutivity preserves the absence of inconsistency under all environments, allowing for the safe replacement of components at run-time. To build complex systems, a range of compositional operators are introduced, including parallel composition, logical conjunction and disjunction, hiding, and quotient. These can be used to examine the structural behaviour of a system, combine independently developed requirements, abstract behaviour, and incrementally synthesise missing components, respectively. It is shown that parallel composition is monotonic under refinement, conjunction and disjunction correspond to the meet and join operations on the refinement preorder, and quotient is the adjoint of parallel composition. Full abstraction results are presented for the equivalence defined as mutual refinement, a consequence of the refinement being the weakest preorder capturing substitutivity. Extensions of the specification theory with progress-sensitivity (ensuring that refinement cannot introduce quiescence) and real-time constraints on when interactions may and may not occur are also presented. These theories are further complemented by assume-guarantee frameworks for supporting component-based reasoning, where contracts (characterising sets of components) separate the assumptions placed on the environment from the guarantees provided by the components. By defining the compositional operators directly on contracts, sound and complete assume-guarantee rules are formulated that preserve both safety and progress. Examples drawn from distributed systems are used to demonstrate how these rules can be used for mechanically deriving component-based designs.
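As a toy illustration of the linear-time refinement at the heart of such a specification theory, substitutive refinement can be approximated as trace containment. The sketch below is a deliberate simplification: it omits the input/output distinction and inconsistency, which the thesis treats in full.

```python
# Toy model (not the thesis's formalism): finite components modelled as
# prefix-closed sets of traces; refinement approximated as trace containment.
def refines(impl_traces, spec_traces):
    """impl refines spec if every behaviour of impl is allowed by spec,
    so impl can safely replace spec in any environment (in this toy model)."""
    return impl_traces <= spec_traces

spec = {(), ("req",), ("req", "ack")}
impl = {(), ("req",)}          # does less, never does anything disallowed
print(refines(impl, spec))     # True
print(refines(spec, impl))     # False: spec has behaviour impl lacks
```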
139
Spatio-temporal logic for the analysis of biochemical models
Banks, Christopher Jon, January 2015
Process algebra, formal specification, and model checking are all well-studied techniques in the analysis of concurrent computer systems. More recently these techniques have been applied to the analysis of biochemical systems which, at an abstract level, exhibit patterns of behaviour similar to those of concurrent processes. Process algebraic models and temporal logic specifications, along with their associated model-checking techniques, have been used to analyse biochemical systems. In this thesis we develop a spatio-temporal logic, the Logic of Behaviour in Context (LBC), for the analysis of biochemical models. That is, we define and study the application of a formal specification language which expresses not only temporal properties of biochemical models but spatial, or contextual, properties as well. The logic can be used to express, or specify, the behaviour of a model when it is placed into the context of another model. We also explore the types of properties which can be expressed in LBC, various algorithms for model checking LBC (each an improvement on the last), the implementation of the computational tools to support model checking LBC, and a case study on the analysis of models of post-translational biochemical oscillators using LBC. We show that a number of interesting and useful properties can be expressed in LBC and that it is possible to express highly useful properties of real models in the biochemistry domain, with practical applications. Statements in LBC can be thought of as expressing computational experiments which can be performed automatically by means of the model checker. Indeed, many of these computational experiments can be higher-order, meaning that one succinct and precise specification in LBC can represent a number of experiments which can be automatically executed by the model checker.
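To convey the flavour of checking temporal properties against a simulated time course, here is a toy Python sketch. It covers only plain "eventually" and "globally" checks on a made-up trajectory; the contextual operators that distinguish LBC, and the model-checking algorithms themselves, are not represented.

```python
import math

# Toy temporal checks on a simulated time course. The damped oscillation
# stands in for a biochemical species' concentration; real LBC properties
# also involve context (composition with other models), omitted here.
trace = [math.exp(-0.1 * t) * math.cos(t) + 1.0 for t in range(100)]

def eventually(trace, pred):
    """F pred: the predicate holds at some point along the trace."""
    return any(pred(x) for x in trace)

def globally(trace, pred):
    """G pred: the predicate holds at every point along the trace."""
    return all(pred(x) for x in trace)

print(eventually(trace, lambda x: x > 1.5))  # True: the first peak exceeds 1.5
print(globally(trace, lambda x: x > 0.0))    # True: concentration stays positive
```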
140
Tjänsteupphandling i offentlig verksamhet : En studie av kravspecifikationens roll / Service procurement in the public sector : A study of the requirement specification
Siverbo, Sofia; Augustsson, Hanna, January 2012
Background: In recent years, public procurement has grown in scope and now accounts for about 17 percent of Sweden's GDP. It is important that public procurement is carried out effectively, and the requirement specification plays an important role in this work. A good requirement specification is a precondition for the purchasing authority to receive tenders that correspond to what it wants, while a deficient one can have major consequences for the outcome of the procurement. However, formulating requirements for a service is in many respects a complicated task, since a service's particular characteristics make it more difficult to define than a good. Purpose: The purpose of this study is to create an understanding of the requirement specification's role in public service procurement. More specifically, the study aims to examine the conditions for designing a good requirement specification and the consequences a deficient one can have for the outcome of the procurement. Method: With a qualitative approach and through personal interviews, purchasers' views of the importance and difficulties of the requirement specification were examined. Four semi-structured personal interviews were conducted with four experienced purchasers from four different public organizations: two county councils and two municipalities. Results: The findings show that good knowledge of the market and awareness of the organization's needs are essential for formulating a relevant requirement specification. A relevant, well-crafted requirement specification counteracts increased transaction costs and increases the likelihood that the service the organization needs is actually delivered. The findings further show that it is more important for the requirement specification to be relevant and clear than for it to be rich in detail, and that function-based requirement specifications have many advantages.