321

Theoretical and Applied Essays on the Instrumental Variable Method

Souri, Davood 26 August 2004
This dissertation is intended to provide a statistical foundation for IV models and to shed light on a number of issues related to the IV method. The first chapter shows that the theoretical Instrumental Variable model can be derived by reparameterization of a well-specified statistical model, defined on the joint distribution of the involved random variables as the actual (local) data generation process. This reveals the covariance structure of the error terms of the usual theory-driven instrumental variable model. The revealed covariance structure of the IV model has important implications, particularly for designing simulation studies. Monte Carlo simulations are used to reexamine the Nelson and Startz (1990a) findings regarding the performance of IV estimators when the instruments are weak. The results from the simulation exercises indicate that the sampling distribution of β̂_IV is concentrated around β̂_OLS. The second chapter considers the underlying joint distribution function of the instrumental variable (IV) model and presents an alternative definition of exogenous and relevant instruments. The paper extracts a system of independent and orthogonal equations that underlies a non-orthogonal structural model and argues that the estimated IV regression is well-specified if the underlying system of equations is well-specified. It proposes a new instrument relevancy measure that does not suffer from the deficiencies of the first-stage R². The third chapter addresses the application of the IV method in the estimation of models with omitted variables. The paper considers the implicit parametrization of statistical models and presents five conditions for an appropriate instrument, two of which are empirically measurable and testable. This improves on the literature by adding one more objective criterion for the selection of instruments. The chapter applies the IV method to estimate the rate of return to education in Iran. It argues that the education of two cohorts of Iranians was delayed or cut short by the Cultural Revolution; the Cultural Revolution, as an exogenous shock to the supply of education, therefore establishes year of birth as an exogenous and relevant instrument for education. Using the standard Mincerian earnings function, with controls for experience, ethnicity, location of residence, and sector of employment, the instrumental variable estimate of the return to schooling is 5.6%. The estimation results indicate that the Iranian labor market values degrees more than years of schooling. / Ph. D.
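As a hedged illustration of the weak-instrument finding reexamined in the first chapter (all parameter values below are assumptions, not taken from the dissertation), a minimal Monte Carlo sketch in Python shows the IV estimator's sampling distribution concentrating near the biased OLS estimate when the first-stage coefficient is small:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 100, 2000, 1.0
pi = 0.05  # weak first-stage coefficient (illustrative value)

iv_est, ols_est = [], []
for _ in range(reps):
    z = rng.normal(size=n)                  # instrument
    u = rng.normal(size=n)                  # structural error
    v = 0.9 * u + 0.4 * rng.normal(size=n)  # first-stage error, correlated with u
    x = pi * z + v                          # endogenous regressor
    y = beta * x + u
    iv_est.append((z @ y) / (z @ x))        # just-identified IV estimator
    ols_est.append((x @ y) / (x @ x))       # OLS estimator

# With pi this small, the median IV estimate sits near the OLS median,
# far from the true beta = 1.0.
print("median IV estimate: ", np.median(iv_est))
print("median OLS estimate:", np.median(ols_est))
```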
322

Optimizing TEE Protection by Automatically Augmenting Requirements Specifications

Dhar, Siddharth 03 June 2020
An increasing number of software systems must safeguard their confidential data and code, referred to as critical program information (CPI). Such safeguarding is commonly accomplished by isolating CPI in a trusted execution environment (TEE), with the isolated CPI becoming a trusted computing base (TCB). TEE protection incurs heavy performance costs, as TEE-based functionality is expensive to both invoke and execute. Despite these costs, projects that use TEEs tend to have unnecessarily large TCBs. Our analysis indicates that developers often put code and data into the TEE for convenience rather than for protection, thus not only compromising performance but also reducing the effectiveness of TEE protection. In order for TEEs to provide maximum benefit in protecting CPI, their usage must be systematically incorporated into the entire software engineering process, starting from Requirements Engineering. To address this problem, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by using natural language processing (NLP) to classify those software requirements that are security critical and should be isolated in a TEE. Our approach takes as input a requirements specification and outputs a list of annotated software requirements. The annotations recommend to the developer which corresponding features comprise CPI that should be protected in a TEE. Our evaluation results indicate that our approach identifies CPI with a high degree of accuracy, allowing the safeguarding of CPI to be incorporated into Requirements Engineering. / Master of Science / An increasing number of software systems must safeguard confidential data such as passwords, payment information, and personal details. This confidential information is commonly protected using a Trusted Execution Environment (TEE), an isolated environment, provided by either the existing processor or separate hardware, that interacts with the operating system to secure sensitive data and code. Unfortunately, TEE protection incurs heavy performance costs: TEEs are slower than modern processors, and frequent communication between the system and the TEE adds further overhead. We discovered that developers often put code and data into the TEE for convenience rather than for protection, thus not only hurting performance but also reducing the effectiveness of TEE protection. By thoroughly examining a project's features in the Requirements Engineering phase, which defines the project's functionalities, developers would be able to understand which features handle confidential data. To that end, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by means of Natural Language Processing (NLP) tools to categorize the project requirements that may warrant TEE protection. Our approach takes as input a project's requirements and outputs a list of categorized requirements, defining which requirements are likely to make use of confidential information. Our evaluation results indicate that our approach performs this categorization with a high degree of accuracy, allowing the confidentiality-related features to be safeguarded starting in the Requirements Engineering phase.
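A minimal sketch of the kind of NLP requirement classifier the abstract describes, assuming a simple TF-IDF plus logistic-regression pipeline and toy training data (the thesis's actual model and corpus are not specified here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled requirements: 1 = security critical (candidate CPI), 0 = not.
train_reqs = [
    "The system shall encrypt stored payment card numbers.",
    "The system shall display the current weather forecast.",
    "User passwords shall be hashed before storage.",
    "The UI shall support a dark color theme.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_reqs, labels)

new_req = "The service shall store the user's private key material."
# Annotate the requirement: does it likely warrant TEE isolation?
print("TEE candidate?", bool(clf.predict([new_req])[0]))
```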
323

Practical Digital Library Generation into DSpace with the 5S Framework

Gorton, Douglas Christopher 30 April 2007
In today's ever-changing world of technology and information, a growing number of organizations and universities seek to store digital documents in an online, accessible manner. These digital library (DL) repositories are powerful systems that allow institutions to store their digital documents while permitting interaction and collaboration among users in their organizations. Despite continual work on DL systems that can produce out-of-the-box online repositories, the installation, configuration, and customization processes of these systems are still far from straightforward. Motivated by the arduous process of designing digital library instances; installing software packages like DSpace and Greenstone; and configuring, customizing, and populating such systems, we have developed an XML-based model for specifying the nature of DSpace digital libraries. The ability to map out a digital library to be created in a straightforward, XML-based way allows for the integration of such a specification with other DL tools. To make use of DL specifications for DSpace, we create a DL generator that uses these models of digital library systems to create, configure, customize, and populate DLs as specified. We draw heavily on previous work in understanding the nature of digital libraries from the 5S framework for digital libraries, which divides the concerns of digital libraries into a complex, formal representation of the elements basic to any minimal digital library system: Streams, Structures, Spaces, Scenarios, and Societies. We reflect on this previous work and provide a fresh application of the 5S framework to practical DL systems. Given our specification and generation process, we draw conclusions towards a more general model that would be suitable for characterizing any DL platform with one specification. We present this DSpace DL specification language and generator as an aid to DL designers and others interested in easing the specification of DSpace digital libraries. We believe that our method will not only enable users to create DLs more easily, but also help them gain a greater understanding of their desired DL structure, their software, and digital libraries in general. / Master of Science
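To give a feel for the approach, here is a hedged sketch of parsing an XML DL specification and walking the structure a generator would create in DSpace. The XML schema shown is an illustrative assumption; the thesis defines its own specification language:

```python
import xml.etree.ElementTree as ET

# Illustrative specification; the element and attribute names are assumptions.
spec = """
<digitalLibrary name="ETD Repository">
  <community name="Theses">
    <collection name="Masters"/>
    <collection name="Doctoral"/>
  </community>
</digitalLibrary>
"""

root = ET.fromstring(spec)
print("DL:", root.get("name"))
for community in root.findall("community"):
    print("  community:", community.get("name"))
    for coll in community.findall("collection"):
        # A real generator would call DSpace's ingest/configuration
        # interfaces here instead of printing.
        print("    create collection:", coll.get("name"))
```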
324

A Conceptual Framework for Specification of Network-Centric System Architectures

Churbanau, Dzmitry 26 May 2010
Software-based system architecture has been recognized as a foundation that lays out the underpinnings critically important for the successful engineering of large-scale complex systems. In recent years, architecting has played an increasingly crucial role in engineering network-centric systems of systems. The software paradigm has been shifting from treating software as a product (SaaP) to treating software as a service (SaaS). SaaS is also referred to as cloud computing, where the term "cloud" is used as a metaphor for "network". As the complexity of the architecture of network-centric, software-based systems of systems has increased, the description of such architectures has posed significant technical challenges. The U.S. Department of Defense (DoD) has developed the DoD Architecture Framework [DoDAF 2009a, DoDAF 2009b] for describing system architectures. IEEE proposes a Recommended Practice for Architectural Description of Software-Intensive Systems [IEEE 2000]. SEI provides high-level guidelines for Documenting Software Architectures [Clements et al 2003]. However, all of the diagrams proposed by DoD, IEEE, and SEI are two-dimensional, static, graphical and textual representations that do not reveal the dynamic characteristics of a system architecture. This thesis presents a conceptual framework (CF) for specifying the architecture of a network-centric, software-based system of systems. The developed CF is the beginning part of a larger research effort whose main goal is to employ the automation-based software paradigm and automatically generate a visual simulation model of a system architecture, with which experiments can be conducted to assess the dynamic characteristics of that architecture. The CF developed in the research described herein enables the automatic generation of a visual simulation model representing a system architecture. The proposed CF is evaluated in half a dozen case studies to demonstrate that it provides the necessary elements for the automatic generation of a simulation model as the description of a complex system-of-systems architecture. / Master of Science
325

Functional analysis of the histidine kinase CKI1 in female gametogenesis of the liverwort Marchantia polymorpha

Bao, Haonan 25 March 2024
Kyoto University / New-system doctorate by coursework / Doctor of Philosophy (Life Sciences) / Kou No. 25448 / Seihaku No. 519 / Call number 新制||生||69 (University Library) / Division of Integrated Life Science, Graduate School of Biostudies, Kyoto University / (Chief examiner) Professor Takayuki Kohchi, Professor Takashi Araki, Professor Takeshi Nakano / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy in Life Sciences / Kyoto University / DFAM
326

Towards the formalisation of use case maps

Dongmo, Cyrille 11 1900
Formal specification of software systems has been very promising. Criticism of the end results of formal methods, that is, of their ability to produce quality software products, is certainly rare. Instead, reasons have been formulated to justify why the adoption of the technique in industry remains limited. Some of the reasons are:
• A steep learning curve: formal techniques are said to be hard to use.
• The lack of a step-by-step construction mechanism and poor guidance.
• The difficulty of integrating the technique into existing software processes.
Z is, arguably, one of the most successful formal specification techniques; it was extended to Object-Z to accommodate object-orientation. The Z notation is based on first-order logic and a strongly typed fragment of Zermelo-Fraenkel set theory. Some attempts have been made to couple Z with semi-formal notations such as UML. However, the case of coupling Object-Z (and also Z) with the Use Case Maps (UCM) notation is still to be explored. A Use Case Map (UCM) is a scenario-based visual notation facilitating the requirements definition of complex systems. A UCM may be generated either from a set of informal requirements or from use cases normally expressed in natural language. UCMs have the potential to bring more clarity into the functional description of a system, and may furthermore eliminate possible errors in the user requirements. But UCMs are not suitable for reasoning formally about system behaviour. In this dissertation, we aim to demonstrate that a UCM can be transformed into Z and Object-Z by providing a transformation framework. Through a case study, the impact of using UCM as an intermediate step in the process of producing a Z and Object-Z specification is explored. The aim is to improve the constructivity of Z and Object-Z, provide more guidance, and address the issue of integrating them into the existing software requirements engineering process. / Computer Science / M. Sc. (Computer Science)
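For readers unfamiliar with the target notation, here is a hedged sketch, not drawn from the dissertation, of the kind of Z schemas a UCM responsibility such as "record a request" might be transformed into, written in standard Z LaTeX markup (assuming a Z package such as fuzz or zed-csp):

```latex
% State schema: the system state a UCM path traverses.
\begin{schema}{RequestLog}
  pending : \seq REQUEST
\end{schema}

% Operation schema: one UCM responsibility as a state change.
\begin{schema}{RecordRequest}
  \Delta RequestLog \\
  r? : REQUEST
\where
  pending' = pending \cat \langle r? \rangle
\end{schema}
```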
327

Development and implementation of a simulator for abstract state machines with real time and model-checking of properties in a language of first-order predicate logic with time

Vassiliev, Pavel 27 November 2008
In this thesis we propose a temporal model for the abstract state machines (ASM) method. An extension of the ASM specification language corresponding to this temporal model with continuous time is developed. Extending the language with time constructs reduces the size of specifications and therefore the probability of errors. The semantics of the extended ASM language is provided; it takes into account the definitions of external functions, the values of time delays, and the method of resolving non-determinism. A subsystem for the verification of properties expressed in FOTL (First-Order Timed Logic) is developed. A simulator for timed ASMs is designed and implemented; it includes a parser for the timed ASM language, an interpreter, the verification subsystem, and a graphical user interface.
328

Automating Component-Based System Assembly

Subramanian, Gayatri 23 May 2006
Owing to advancements in component reuse technology, component-based software development (CBSD) has come a long way in developing complex commercial software systems while reducing software development time and cost. However, assembling distributed, resource-constrained, and safety-critical systems using current assembly techniques is a challenge. Within complex systems there are numerous ways to assemble the components, and unless the software architecture clearly defines how the components should be composed, determining a correct assembly that satisfies the system's assembly constraints is difficult. Component technologies like CORBA and .NET do a very good job of integrating components, but they do not automate component assembly; it is the system developer's responsibility to ensure that the components are assembled correctly. In this thesis, we first define a component-based system assembly (CBSA) technique called the "Constrained Component Assembly Technique" (CCAT), which is useful when the system has complex assembly constraints and the system architecture specifies component composition as assembly constraints. The technique poses the question: does there exist a way of assembling the components that satisfies all the connection, performance, reliability, and safety constraints of the system, while optimizing the objective constraint? To implement CCAT, we present a powerful framework called "CoBaSA". The CoBaSA framework includes an expressive language for declaratively describing component functional and extra-functional properties, component interfaces, and system-level and component-level connection, performance, reliability, safety, and optimization constraints. To perform CBSA, we first write a program (in the CoBaSA language) describing the CBSA specifications and constraints, and then an interpreter translates the CBSA program into a satisfiability and optimization problem. Solving the generated satisfiability and optimization problem is equivalent to answering the question posed by CCAT. If a satisfiable solution is found, we deduce that the system can be assembled without violating any constraints. Since CCAT and CoBaSA provide a mechanism for assembling systems that have complex assembly constraints, they can be utilized in several industries, such as avionics. We demonstrate the merits of CoBaSA by assembling an actual avionic system that could be used on board a Boeing aircraft. The empirical evaluation shows that our approach is promising and can scale to handle complex industrial problems.
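To make the satisfiability-plus-optimization formulation concrete, here is a hedged toy in Python; it is not CoBaSA, and the components, constraint bounds, and objective are invented for illustration. A real implementation would hand the encoding to a SAT/optimization solver rather than enumerate:

```python
from itertools import product

# Candidate implementations per slot: (name, latency_ms, reliability).
candidates = {
    "sensor": [("sensorA", 5, 0.999), ("sensorB", 2, 0.990)],
    "filter": [("filterA", 8, 0.995), ("filterB", 3, 0.970)],
}

def feasible(assembly):
    """System-level performance and reliability constraints (assumed bounds)."""
    total_latency = sum(latency for _, latency, _ in assembly)
    reliability = 1.0
    for _, _, r in assembly:
        reliability *= r
    return total_latency <= 10 and reliability >= 0.98

best = None
for assembly in product(*candidates.values()):
    if feasible(assembly):
        latency = sum(l for _, l, _ in assembly)
        if best is None or latency < best[0]:  # objective: minimize latency
            best = (latency, [name for name, _, _ in assembly])

print("best assembly:", best)  # sensorB + filterA under these numbers
```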
329

Using Explicit State Space Enumeration For Specification Based Regression Testing

Chakrabarti, Sujit Kumar 01 1900
Regression testing of an evolving software system may involve significant challenges. While the probability of discovering whether the latest changes to the system have broken some existing feature must be maximised, this needs to be done as economically as possible. A particularly important class of software systems is API libraries. Such libraries typically constitute a very important component of many software systems, and high quality requirements make it imperative to continually optimise their internal implementation without affecting the external interface. It is therefore preferable to guide the regression testing by some kind of formal specification of the library. The testing problem comprises three parts: computation of test data, execution of tests, and analysis of test results. Current research mostly focuses on the first part. The objective of test data computation is to maximise the probability of uncovering bugs, and to do so with as few test cases as possible. The problem of test data computation for regression testing is to select a subset of the original test suite whose execution suffices to test for bugs possibly introduced by the modifications made since the last round of testing. A variant of this problem is the regression testing of API libraries. The regression testing of an API is usually done by making function calls in such a way that the sequence of function calls satisfies a test specification, which in turn embodies some concept of completeness. In this thesis, we focus on the problem of test sequence computation for the regression testing of API libraries. At the heart of this method lies the creation of a state space model of the API library, reverse engineered by executing the system with guidance from a formal API specification. Once the state space graph is obtained, it is used to compute test sequences satisfying a test specification. We analyse the theoretical complexity of the problem of test sequence computation and provide various heuristic algorithms for it. State space explosion is a classical problem encountered whenever there is an attempt to create a finite state model of a program, and our method also faces this limitation. We explore a simple and intuitive method of ameliorating the problem: reducing the size of the state vector. We develop theoretical insights into this method and present experimental results indicating its practical effectiveness. Finally, we bring all this together in the design and implementation of a tool called Modest.
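As a hedged illustration of computing test sequences from a recovered state space graph (the toy API, states, and coverage criterion below are assumptions, not Modest's actual model), one simple strategy is a breadth-first search that yields, for each transition, a shortest call sequence exercising it:

```python
from collections import deque

# State space graph recovered by execution: state -> [(api_call, next_state)].
graph = {
    "empty":    [("push", "nonempty")],
    "nonempty": [("push", "nonempty"), ("pop", "empty"), ("peek", "nonempty")],
}

def shortest_sequence(start, edge):
    """BFS for a call sequence from `start` whose last call is `edge`."""
    src, call, _ = edge
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, seq = queue.popleft()
        if state == src:
            return seq + [call]
        for c, nxt in graph[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [c]))
    return None  # edge unreachable from start

edges = [(s, c, d) for s, outs in graph.items() for c, d in outs]
for edge in edges:  # transition coverage: one test sequence per transition
    print(edge, "->", shortest_sequence("empty", edge))
```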
330

Language and tool support for multilingual programs

Lee, Byeongcheol 12 October 2011
Programmers compose programs in multiple languages to combine the advantages of innovations in new high-level programming languages with decades of engineering effort in legacy libraries and systems. For language interoperation, language designers provide two classes of multilingual programming interfaces: (1) foreign function interfaces and (2) code generation interfaces. These interfaces embody the semantic mismatch for developers and multilingual systems builders, and their programming rules are difficult or impossible to verify. As a direct consequence, multilingual programs are full of bugs at interface boundaries, and debuggers cannot assist developers across these lines. This dissertation shows how to use composition of single-language systems and interposition to improve the safety of multilingual programs. Our compositional approach is scalable by construction because it does not require any changes to single-language systems, and it leverages their engineering efforts. We show it is effective by composing a variety of multilingual tools that help programmers eliminate bugs. We present the first concise taxonomy and formal description of multilingual programming interfaces and their programming rules. We next compose three classes of multilingual tools: (1) Dynamic bug checkers for foreign function interfaces. We demonstrate a new approach for automatically generating a dynamic bug checker by interposing on foreign function interfaces, and we show that it finds bugs in real-world applications including Eclipse, Subversion, and Java Gnome. (2) Multilingual debuggers for foreign function interfaces. We introduce an intermediate agent that wraps all the methods and functions at language boundaries. This intermediate agent is sufficient to build all the essential debugging features used in single-language debuggers. (3) Safe macros for code generation interfaces. We design a safe macro language, called Marco, that generates programs in any language, and demonstrate it by implementing checkers for SQL and C++ generators. To check the correctness of the generated programs, Marco queries single-language compilers and interpreters through code generation interfaces. Using their error messages, Marco points out the errors in program generators. In summary, this dissertation presents the first concise taxonomy and formal specification of multilingual interfaces and, based on this taxonomy, shows how to compose multilingual tools to improve safety in multilingual programs. Our results show that our compositional approach is scalable and effective for improving safety in real-world multilingual programs.
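To sketch what interposition at a foreign function interface looks like (a toy under stated assumptions, using Python's ctypes rather than the dissertation's tooling), a wrapper can check a programming rule, here "no NULL pointer across the boundary", before each foreign call:

```python
import ctypes
import ctypes.util

# Load libc and declare strlen's foreign signature (POSIX systems assumed).
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def interpose(foreign_fn, rule, message):
    """Wrap a foreign function so `rule` is enforced before crossing."""
    def wrapper(*args):
        if not rule(*args):
            raise ValueError(f"FFI rule violated: {message}")
        return foreign_fn(*args)
    return wrapper

checked_strlen = interpose(libc.strlen,
                           rule=lambda s: s is not None,
                           message="NULL pointer passed across the boundary")

print(checked_strlen(b"hello"))  # 5
# checked_strlen(None) would raise a Python error instead of risking a crash.
```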
