1 |
Towards controlling software architecture erosion through runtime conformance monitoring. de Silva, Lakshitha R. January 2014
The software architecture of a system is often used to guide and constrain its implementation. While the code structure of an initial implementation is likely to conform to its intended architecture, its dynamic properties cannot always be fully checked until deployment. Routine maintenance and changing requirements can also lead to a deployed system deviating from this architecture over time. Dynamic architecture conformance checking plays an important part in ensuring that software architectures and corresponding implementations stay consistent with one another throughout the software lifecycle. However, runtime conformance checking strategies often force changes to the software, demand tight coupling between the monitoring framework and application, impact performance, require manual intervention, and lack flexibility and extensibility, affecting their viability in practice. This thesis presents a dynamic conformance checking framework called PANDArch, which aims to address these issues. PANDArch is designed to be automated, pluggable, non-intrusive, performance-centric, extensible and tolerant of incomplete specifications. The thesis describes the concept and design principles behind PANDArch, and its current implementation, which uses an architecture description language to specify architectures and Java as the target language. The framework is evaluated using three open source software products of different types. The results suggest that dynamic architectural conformance checking with the proposed features may be a viable option in practice.
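The abstract gives no implementation details; purely to illustrate the kind of check a runtime conformance monitor performs (the module names and rule format below are invented for the example and are not PANDArch's actual specification language or API), a declared module dependency rule can be evaluated against call events observed at runtime:

```java
import java.util.*;

// Minimal sketch: check observed runtime calls against allowed module-to-module
// dependencies declared in a (hypothetical) architecture description.
public class RuntimeConformanceSketch {

    // A call observed at runtime, already mapped from classes to architectural modules.
    record CallEvent(String callerModule, String calleeModule) {}

    public static void main(String[] args) {
        // Allowed dependencies: caller module -> set of callee modules it may use.
        Map<String, Set<String>> allowed = Map.of(
                "ui",      Set.of("service"),
                "service", Set.of("persistence"));

        List<CallEvent> observed = List.of(
                new CallEvent("ui", "service"),
                new CallEvent("ui", "persistence"),   // violates the intended layering
                new CallEvent("service", "persistence"));

        for (CallEvent e : observed) {
            boolean ok = allowed.getOrDefault(e.callerModule(), Set.of())
                                .contains(e.calleeModule());
            if (!ok) {
                System.out.println("Violation: " + e.callerModule()
                        + " -> " + e.calleeModule());
            }
        }
    }
}
```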
|
2 |
Formalizing and Enforcing Purpose Restrictions. Tschantz, Michael Carl. 09 May 2012
Privacy policies often place restrictions on the purposes for which a governed entity may use personal information. For example, regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), require that hospital employees use medical information for only certain purposes, such as treatment, but not for others, such as gossip. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose restrictions to determine whether an action is for a purpose. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes (MDPs), which excludes redundant actions under a formal definition of redundancy. We argue that an action is for a purpose if and only if the action is part of a plan for optimizing the satisfaction of that purpose under the MDP model. We use this formalization to define when a sequence of actions is only for, or not for, a purpose. This semantics enables us to create and implement an algorithm for automated auditing, and to formally describe and rigorously compare previous enforcement methods. We extend this formalization to Partially Observable Markov Decision Processes (POMDPs) to answer when information is used for a purpose. To validate our semantics, we provide an example application and conduct a survey to compare our semantics to how people commonly understand the word “purpose”.
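As a rough, hypothetical illustration of the planning-based semantics rather than the thesis's actual formalization (which also handles redundant actions and richer MDP structure), the question "is action a for purpose p in state s" can be approximated by solving a small MDP whose reward encodes satisfaction of p and checking whether a attains the optimal value in s:

```java
import java.util.*;

// Sketch of the planning-based test: an action is "for" a purpose (here encoded
// as the reward) if it belongs to an optimal plan of the corresponding MDP.
public class PurposeMdpSketch {

    static final int STATES = 3, ACTIONS = 2;
    // next[s][a] = successor state (deterministic for brevity); reward[s][a] in [0,1].
    static final int[][] next = {{1, 2}, {2, 2}, {2, 2}};
    static final double[][] reward = {{0.0, 0.0}, {1.0, 0.0}, {0.0, 0.0}};
    static final double GAMMA = 0.9;

    public static void main(String[] args) {
        double[] v = new double[STATES];
        for (int iter = 0; iter < 100; iter++) {              // value iteration
            double[] nv = new double[STATES];
            for (int s = 0; s < STATES; s++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int a = 0; a < ACTIONS; a++)
                    best = Math.max(best, reward[s][a] + GAMMA * v[next[s][a]]);
                nv[s] = best;
            }
            v = nv;
        }
        // An action is part of an optimal plan for the purpose iff it attains the optimum in s.
        int s = 0;
        for (int a = 0; a < ACTIONS; a++) {
            double q = reward[s][a] + GAMMA * v[next[s][a]];
            System.out.printf("state %d, action %d: for-purpose = %b%n",
                    s, a, Math.abs(q - v[s]) < 1e-9);
        }
    }
}
```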
|
3 |
Avaliação de projeto de software em relação à dívida técnica / Software design evaluation in relation to technical debt. Silva, Andrey Estevão da. 11 September 2015
Software Technical Debt estimation aims to calculate the costs of failing to comply with quality standards in the development process, such as lack of documentation, bad development practices, and disobedience to a project's specific coding rules. One of the concerns of organizations and software engineers is to ensure such quality standards; however, the human way of doing this control allows mistakes which consequently cause Technical Debt, which in the short term is not a problem but in the long run can destroy entire software structures. Based on this problem, an approach is proposed for calculating the Design Technical Debt of applications developed in Java. The methods used for this purpose involve data mining of open-source code repositories, defect-tracking software databases, estimation of the correction time of design rule violations, and architectural compliance checking.
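The abstract describes the approach only at a high level; as a hedged sketch of the kind of aggregation such a calculation involves (the rule names and fix-time estimates below are invented, not taken from the dissertation), design debt can be approximated by weighting each detected violation with an estimated correction time:

```java
import java.util.*;

// Illustrative sketch: aggregate design technical debt as the sum of estimated
// correction times for detected design-rule and architectural-conformance violations.
public class DesignDebtSketch {

    record Violation(String rule, int occurrences) {}

    public static void main(String[] args) {
        // Hypothetical correction-time estimates (hours per occurrence).
        Map<String, Double> fixHours = Map.of(
                "GodClass", 8.0,
                "CyclicDependency", 4.0,
                "LayerViolation", 2.5);

        List<Violation> found = List.of(
                new Violation("GodClass", 3),
                new Violation("CyclicDependency", 5),
                new Violation("LayerViolation", 12));

        double totalHours = 0.0;
        for (Violation v : found)
            totalHours += fixHours.getOrDefault(v.rule(), 0.0) * v.occurrences();

        System.out.printf("Estimated design debt: %.1f person-hours%n", totalHours);
    }
}
```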
|
4 |
Fact Extraction For Ruby On Rails Platform. Tshering, Nima. January 2010
In the field of software engineering, software architecture plays an important role, particularly in areas of critical and large-scale software system development, and over the years it has evolved into an important sub-discipline within the field of software engineering. However, software architecture is still an emerging discipline, mainly owing to the lack of a standardized way for architectural representation and the lack of analysis methods that can determine whether the intended architecture translates into a correct implementation during software development [HNS00]. Architecture compliance checking [KP07] is a technique used to resolve the latter part of the problem, and Fraunhofer SAVE (Software Architecture Visualization and Evaluation) is a compliance-checking tool that uses fact extraction. This master’s thesis provides fact extraction support to Fraunhofer SAVE for systems developed using the Ruby on Rails framework by developing a fact extractor. The fact extractor was developed in Java as an Eclipse plug-in integrated with the SAVE platform; it consists of a parser that parses Ruby source code and generates an abstract syntax tree. The architectural facts are extracted by analyzing these abstract syntax trees using a visitor pattern, from which the architecture of the system is generated and represented using the internal model of the SAVE platform. The fact extractor was validated using two reference systems of differing sizes developed with the Ruby on Rails framework. A smaller reference system, which contains all the relevant Ruby language constructs, was used to evaluate the correctness and completeness of the fact extractor. The evaluation showed a correctness value of 1.0 (100%) and a completeness value of 1.0 (100%). Afterwards, a larger application with a more complex architecture was used to validate the performance and robustness of the fact extractor. It successfully extracted, analyzed and built the SAVE model of this large system, taking 0.05 seconds per component without crashing. Based on these computations, it was concluded that the performance of the fact extractor was acceptable, as it performed better than the C# fact extractor.
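Purely as an illustration of the visitor-based extraction described above (the node types and fact format are simplified placeholders, not the SAVE internal model or the plug-in's actual code), a visitor can walk an abstract syntax tree and emit class, method, and call facts:

```java
import java.util.*;

// Sketch of visitor-based fact extraction over a tiny, hand-rolled AST.
public class FactExtractionSketch {

    sealed interface Node permits ClassDef, MethodDef, Call {}
    record ClassDef(String name, List<Node> body) implements Node {}
    record MethodDef(String name, List<Node> body) implements Node {}
    record Call(String target) implements Node {}

    // Visitor that records "facts" (component definitions and call relations).
    static void visit(Node node, String owner, List<String> facts) {
        switch (node) {
            case ClassDef c -> {
                facts.add("class " + c.name());
                c.body().forEach(n -> visit(n, c.name(), facts));
            }
            case MethodDef m -> {
                facts.add("method " + owner + "#" + m.name());
                m.body().forEach(n -> visit(n, owner, facts));
            }
            case Call call -> facts.add("call " + owner + " -> " + call.target());
        }
    }

    public static void main(String[] args) {
        Node ast = new ClassDef("OrdersController",
                List.of(new MethodDef("create", List.of(new Call("Order.save")))));
        List<String> facts = new ArrayList<>();
        visit(ast, null, facts);
        facts.forEach(System.out::println);
    }
}
```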
|
5 |
Facilitating Automated Compliance Checking of Processes against Safety Standards. Castellanos Ardila, Julieth Patricia. January 2019
A system is safety-critical if its malfunctioning could have catastrophic consequences for people, property or the environment; e.g., a failure in a car's braking system could be potentially tragic. To produce such systems, special procedures and strategies that permit their safer deployment into society should be used. Therefore, manufacturers of safety-critical systems comply with domain-specific safety standards, which embody the public consensus on what is acceptably safe. Safety standards also contain a repository of expert knowledge and best practices that can, to some extent, facilitate the engineering of safety-critical systems. In some domains, the applicable safety standards establish the accepted procedures that regulate the development processes. For claiming compliance with such standards, companies should adapt their practices and provide convincing justifications regarding the processes used to produce their systems, from the initial steps of production. In particular, the planning of the development process, in accordance with the prescribed process-related requirements specified in the standard, is an essential piece of evidence for compliance assessment. However, providing such evidence can be time-consuming and error-prone since it requires that process engineers check the fulfillment of hundreds of requirements based on their process specifications. With access to suitable tool-supported methodologies, process engineers would be able to perform their job efficiently and accurately. Safety standards prescribe requirements in natural language by using notions that are subtly similar to the concepts used to describe laws. In particular, requirements in the standards introduce conditions that are obligatory for claiming compliance. Requirements also define tailoring rules, which are actions that permit compliance with the standard in an alternative way. Unfortunately, current approaches for software verification are not furnished with these notions, which could make their use in compliance checking difficult. However, existing tool-supported methodologies designed in the legal compliance context, which have also been proven in the business domain, could be exploited to define an adequate automated compliance checking approach that suits the conditions required in the safety-critical context. The goal of this Licentiate thesis is to propose a novel approach that combines: 1) process modeling capabilities for representing systems and software process specifications, 2) normative representation capabilities for interpreting the requirements of the safety standards in an adequate machine-readable form, and 3) compliance checking capabilities to provide the analysis required to conclude whether the model of a process corresponds to the model with the compliant states proposed by the standard's requirements. Our approach contributes to facilitating compliance checking by providing automatic reasoning over the requirements prescribed by the standards and the description of the process they regulate. It also contributes to cross-fertilizing two communities that were previously isolated, namely the safety-critical and legal compliance contexts. Besides, we propose an approach for mastering the interplay between highly related standards.
This approach includes the reuse capabilities provided by SoPLE (Safety-oriented Process Line Engineering), which is a methodological approach aiming at systematizing the reuse of process-related information in the context of safety-critical systems. With the addition of SoPLE, we aim at planting the seeds for the future provision of systematic reuse of compliance proofs. Hitherto, our proposed methodology has been evaluated with academic examples that show the potential benefits of its use. / AMASS
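As a loose illustration of what such a compliance analysis amounts to (the rule structure below is invented for the example; the thesis builds on formal normative representations from the legal compliance field rather than this simplification), planned process tasks can be checked against requirements that are either mandatory or satisfiable through a permitted tailoring alternative:

```java
import java.util.*;

// Sketch: check a planned process against standard requirements that are either
// mandatory or satisfiable through a permitted alternative (tailoring).
public class ProcessComplianceSketch {

    record Requirement(String id, String requiredTask, String permittedAlternative) {}

    public static void main(String[] args) {
        List<Requirement> standard = List.of(
                new Requirement("REQ-1", "Hazard analysis", null),
                new Requirement("REQ-2", "MC/DC coverage testing", "Branch coverage + rationale"));

        Set<String> plannedTasks = Set.of("Hazard analysis", "Branch coverage + rationale");

        for (Requirement r : standard) {
            boolean compliant = plannedTasks.contains(r.requiredTask())
                    || (r.permittedAlternative() != null
                        && plannedTasks.contains(r.permittedAlternative()));
            System.out.println(r.id() + ": " + (compliant ? "compliant" : "non-compliant"));
        }
    }
}
```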
|
6 |
A framework of trust in service workflows. Viriyasitavat, Wattana. January 2013
The everything-as-a-service concept enables dynamic resource provisions to be seen and delivered as services. Their proliferation nowadays leads to the creation of new value-added services composed of several sub-services in a pre-specified manner, known as service workflows. The use of service workflows appears in various domains, ranging from the basic interactions found in many e-commerce and online interactions to complex ones such as Virtual Organizations, Grids, and Cloud Computing. However, the dynamic nature of open environments makes a workflow change constantly so that it can adapt to new circumstances. How to determine suitable services has become a very important challenge. Requirements from both workflow owners and service providers play a significant role in the process of service acquisition, composition, and interoperation. From the workflow owner's viewpoint, requirements can specify properties of services to be acquired for tasks in a workflow. On the other hand, requirements from service providers affect trust-based decisions on workflow participation. The lack of formal languages to specify these requirements poses difficulties for the success of service collaborations in a workflow. It impedes: (1) workflow scalability, which tends to be limited to a certain set of trusted domains; (2) dynamicity, since each service acts in an autonomous and unpredictable manner where any change might affect existing requirements; and (3) consistency in dealing with the disparate representations of requirements, which causes high overhead for compliance checking. This thesis focuses on developing a framework to overcome, or at least alleviate, these problems. It is situated in interdisciplinary areas including logics, workflow modelling, specification languages, trust management, decision support systems, and compliance checking. Two core elements are proposed: (1) a formal logic-based requirement specification language, namely Trust Specification (TS), such that requirements can be formally and uniformly expressed; and (2) compliance checking algorithms to automatically check for the compliance of requirements in service workflows. It is worth noting that this thesis contains some proofs on logic extension, workflow modelling, the specification language, and compliance checking algorithms. These might raise concerns among readers who focus deeply on one particular area, such as logics or workflow modelling, and who might overlook the essence of the work, for example (1) the application of a formal specification language to the exclusive characteristics of service workflows, and (2) bridging the gap from high-level languages such as trust management down to lower logic-based ones. The first contribution of the framework is to allow requirements to be independently and consistently expressed by each party, where the workflow participation decision and acquisition are subject to the compliance of requirements. To increase scalability in large-scale interoperations, the second contribution centres on automatic compliance checking, where the TS language and compliance checking algorithms are the two key components. The last contribution focuses on dynamicity: the framework allows each party to modify existing requirements, and compliance checking is automatically activated to check for further compliance. As a result, it is anticipated that the solution will encourage the proliferation of service provision and consumption over the Internet.
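To make the role of formally specified requirements concrete (this is an invented, simplified stand-in, not the TS language itself), a workflow owner's requirement can be expressed as a predicate over advertised service properties and checked automatically against each candidate service:

```java
import java.util.*;
import java.util.function.Predicate;

// Sketch: requirements as predicates over service properties; a service may be
// acquired for a workflow task only if every applicable requirement holds.
public class TrustComplianceSketch {

    record Service(String name, Map<String, String> properties) {}

    public static void main(String[] args) {
        // Hypothetical owner requirements for one workflow task.
        List<Predicate<Service>> requirements = List.of(
                s -> "ISO27001".equals(s.properties().get("certification")),
                s -> Integer.parseInt(s.properties().getOrDefault("uptimePercent", "0")) >= 99);

        List<Service> candidates = List.of(
                new Service("PaymentsA", Map.of("certification", "ISO27001", "uptimePercent", "99")),
                new Service("PaymentsB", Map.of("certification", "none", "uptimePercent", "95")));

        for (Service s : candidates) {
            boolean compliant = requirements.stream().allMatch(r -> r.test(s));
            System.out.println(s.name() + ": " + (compliant ? "eligible" : "rejected"));
        }
    }
}
```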
|
7 |
SEMANTIC INTELLIGENCE FOR KNOWLEDGE-BASED COMPLIANCE CHECKING OF UNDERGROUND UTILITIES. Xin Xu (9183590). 30 July 2020
Underground utilities must comply with the requirements stipulated in utility regulations to ensure their structural integrity and avoid interferences and disruptions of utility services. Noncompliance with the regulations could cause disastrous consequences such as pipeline explosion and pipeline contamination that can lead to hundreds of deaths and huge financial loss. However, the current practice of utility compliance checking relies on manual efforts to examine lengthy textual regulations, interpret them subjectively, and check against massive and heterogeneous utility data. It is time-consuming, costly, and error-prone. There remains a critical need for an effective mechanism to help identify the regulatory non-compliances in new utility designs or existing pipelines to limit possible negative impacts. Motivated by this critical need, this research aims to create an intelligent, knowledge-based method to automate the compliance checking for underground utilities.
The overarching goal is to build semantic intelligence to enable knowledge-based, automated compliance checking of underground utilities by integrating semantic web technologies, natural language processing (NLP), and domain ontologies. Three specific objectives are: (1) designing an ontology-based framework for integrating massive and heterogeneous utility data for automated compliance checking, (2) creating a semi-automated method for utility ontology development, and (3) devising a semantic NLP approach for interpreting textual utility regulations. Objective 1 establishes the knowledge-based skeleton for utility compliance checking. Objectives 2 and 3 build semantic intelligence into the framework resulting from Objective 1 for improved performance in utility compliance checking.
Utility compliance checking is the action that examines geospatial data of utilities and their surroundings against textual utility regulations. The integration of heterogeneous geospatial data of utilities as well as textual data remains a big challenge. Objective 1 is dedicated to addressing this challenge. An ontology-based framework has been designed to integrate heterogeneous data and automate compliance checking through semantic, logic, and spatial reasoning. The framework consists of three key components: (1) four interlinked ontologies that provide the semantic schema to represent heterogeneous data, (2) two data convertors to transform data from proprietary formats into a common and interoperable format, and (3) a reasoning mechanism with spatial extensions for detecting non-compliances. The ontology-based framework was tested on a sample utility database, and the results proved its effectiveness.
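The framework itself relies on ontologies and spatial reasoning extensions; as a much-reduced illustration of what a spatial non-compliance check amounts to (the clearance value and utility layout are invented for the example), consider checking a minimum horizontal clearance between two buried lines:

```java
// Reduced illustration of a spatial compliance rule: flag pairs of utility lines
// whose horizontal separation is below a (hypothetical) required clearance.
public class ClearanceCheckSketch {

    record UtilityLine(String id, String type, double x) {}   // 1-D position for brevity

    public static void main(String[] args) {
        double requiredClearanceMeters = 3.0;                 // example value only
        UtilityLine water = new UtilityLine("W-12", "water", 0.0);
        UtilityLine gas   = new UtilityLine("G-07", "gas",   1.8);

        double separation = Math.abs(water.x() - gas.x());
        if (separation < requiredClearanceMeters) {
            System.out.printf("Non-compliance: %s and %s are %.1f m apart (min %.1f m)%n",
                    water.id(), gas.id(), separation, requiredClearanceMeters);
        }
    }
}
```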
Two supplementary methods were devised to build the semantic intelligence in the ontology-based framework. The first one is a novel method that integrates the top-down strategy and NLP to address two semantic limitations in existing ontologies for utilities: lack of compatibility with existing utility modeling initiatives and relatively small vocabulary sizes. Specifically, a base ontology is first developed by abstracting the modeling information in CityGML Utility Network ADE through a series of semantic mappings. Then, a novel integrated NLP approach is devised to automatically learn the semantics from domain glossaries. Finally, the semantics learned from the glossaries are incorporated into the base ontology to result in a domain ontology for utility infrastructure. For case demonstration, a glossary of water terms was learned to enrich the base ontology (formalized from the ADE) and the resulting ontology was evaluated to be an accurate, sufficient, and shared conceptualization of the domain.
The second one is an ontology- and rule-based NLP approach for automated interpretation of textual regulations on utilities. The approach integrates ontologies to capture both domain and spatial semantics from utility regulations, which contain a variety of technical jargon/terms and spatial constraints regarding the location and clearance of utility infrastructure. The semantics are then encoded into pattern-matching rules for extracting the requirements from the regulations. An ontology- and deontic logic-based mechanism has also been integrated to facilitate the semantic and logic-based formalization of utility-specific regulatory knowledge. The proposed approach was tested in interpreting the spatial configuration-related requirements in utility accommodation policies, and the results proved it to be an effective means for interpreting utility regulations to ensure the compliance of underground utilities.
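As a toy stand-in for the ontology- and rule-based extraction step (the regular expression and the sentence are illustrative only, not rules or text from the dissertation), a pattern-matching rule can pull a spatial clearance requirement out of a regulatory sentence:

```java
import java.util.regex.*;

// Toy pattern-matching rule: extract a minimum clearance requirement
// (subject, distance, unit, reference object) from one regulatory sentence.
public class RegulationExtractionSketch {
    public static void main(String[] args) {
        String sentence = "Water mains shall be installed at least 10 feet horizontally "
                + "from any existing sewer line.";
        Pattern rule = Pattern.compile(
                "(?<subject>[A-Za-z ]+?) shall be installed at least "
              + "(?<value>\\d+) (?<unit>feet|meters) horizontally from (?<object>.+)\\.");
        Matcher m = rule.matcher(sentence);
        if (m.matches()) {
            System.out.printf("requirement(%s, minHorizontalClearance, %s %s, %s)%n",
                    m.group("subject").trim(), m.group("value"),
                    m.group("unit"), m.group("object").trim());
        }
    }
}
```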
The main outcome of this research is a novel knowledge-based computational platform with semantic intelligence for regulatory compliance checking of underground utilities, which is also the primary contribution of this research. The knowledge-based computational platform provides a declarative way, rather than the otherwise procedural/hard-coding implementation approach, to automate the overall process of utility compliance checking, which is expected to replace the conventional costly and time-consuming skill-based practice. Utilizing this computational platform for utility compliance checking will help eliminate non-compliant utility designs at the very early stage and identify non-compliances in existing utility records for timely correction, thus leading to enhanced safety and sustainability of the massive utility infrastructure in the U.S.
|
8 |
Invariant Signatures for Supporting BIM Interoperability. Jin Wu (11187477). 27 July 2021
Building Information Modeling (BIM) serves as an important medium in supporting automation in the architecture, engineering, and construction (AEC) domain. However, with its fast development by different software companies in different applications, data exchange became labor-intensive, costly, and error-prone, which is known as the problem of interoperability. Industry Foundation Classes (IFC) are widely accepted to be the future of BIM in solving the challenge of BIM interoperability. However, there are practical limitations of the IFC standards, e.g., IFC's flexibility creates space for misuses of IFC entities. Such incorrect semantic information of an object can cause severe problems for downstream uses. To address this problem, the author proposed to use the concept of invariant signatures, which are a new set of features that capture the essence of an AEC object. Based on invariant signatures, the author proposed a rule-based method and a machine learning method for BIM-based AEC object classification, which can be used to detect potential misuses automatically. Detailed categories for beams were tested to have error-free performance. The best-performing algorithm developed by the methods achieved 99.6% precision and 99.6% recall in general building object classification. To promote automation and further improve the interoperability of BIM tasks, the author adopted invariant signature-based object classification in quantity takeoff (QTO), structural analysis, and model validation for automated building code compliance checking (ACC). Automation in such BIM tasks was enabled with high accuracy.
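As an illustrative sketch of classification from geometry-derived invariant features (the thresholds and feature set are invented, not the dissertation's actual invariant signatures), an object's sorted bounding-box dimensions can drive a simple orientation-independent rule that distinguishes beam-like from column-like elements regardless of how the authoring tool labeled them:

```java
import java.util.Arrays;

// Sketch: classify an AEC object from orientation-independent geometric features
// (sorted bounding-box dimensions), ignoring how the authoring tool labeled it.
public class InvariantSignatureSketch {

    static String classify(double dx, double dy, double dz, boolean verticalLongAxis) {
        double[] dims = {dx, dy, dz};
        Arrays.sort(dims);                         // invariant to axis ordering
        double slenderness = dims[2] / dims[1];    // longest over middle dimension
        if (slenderness < 3.0) return "slab-or-plate-like";
        return verticalLongAxis ? "column-like" : "beam-like";
    }

    public static void main(String[] args) {
        // Hypothetical elements: dimensions in meters, plus long-axis orientation.
        System.out.println(classify(6.0, 0.3, 0.5, false)); // beam-like
        System.out.println(classify(0.4, 0.4, 3.0, true));  // column-like
        System.out.println(classify(5.0, 4.0, 0.2, false)); // slab-or-plate-like
    }
}
```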
|
9 |
NATURAL LANGUAGE PROCESSING-BASED AUTOMATED INFORMATION EXTRACTION FROM BUILDING CODES TO SUPPORT AUTOMATED COMPLIANCE CHECKING. Xiaorui Xue (13171173). 29 July 2022
The traditional manual code compliance checking process is time-consuming, costly, and error-prone, and has many shortcomings (Zhang & El-Gohary, 2015). Therefore, automated code compliance checking systems have emerged as an alternative to traditional code compliance checking. However, computer software cannot directly process regulatory information in unstructured building code texts. To support automated code compliance checking, building codes need to be transformed into a computer-processable, structured format. In particular, the problem that most automated code compliance checking systems can only check a limited number of building code requirements stands out.

The transformation of building code requirements into a computer-processable, structured format is a natural language processing (NLP) task that requires highly accurate part-of-speech (POS) tagging results on building codes beyond the state of the art. To address this need, this dissertation research was conducted to provide a method to improve the performance of POS taggers by error-driven transformational rules that revise machine-tagged POS results. The proposed error-driven transformational rules fix errors in POS tagging results in two steps. First, error-driven transformational rules locate errors in POS tagging by their context. Second, error-driven transformational rules replace the erroneous POS tag with the correct POS tag that is stored in the rule. A dataset of POS-tagged building codes, namely the Part-of-Speech Tagged Building Codes (PTBC) dataset (Xue & Zhang, 2019), was published in the Purdue University Research Repository (PURR). Testing on the dataset illustrated that the method corrected 71.00% of errors in POS tagging results for building codes. As a result, the POS tagging accuracy on building codes was increased from 89.13% to 96.85%.

This dissertation research was also conducted to provide a new POS tagger that is tailored to building codes. The proposed POS tagger utilized neural network models and error-driven transformational rules. The neural network model contained a pre-trained model and one or more trainable neural layers, and was trained and fine-tuned on the PTBC (Xue & Zhang, 2019) dataset, which was published in the Purdue University Research Repository (PURR). In this dissertation research, a high-performance POS tagger for building codes using one bidirectional Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) trainable layer, a BERT-Cased-Base pre-trained model, and 50 epochs of training was discovered. This model achieved 91.89% precision without error-driven transformational rules and 95.11% precision with error-driven transformational rules, outperforming the 89.82% precision of the otherwise most advanced state-of-the-art POS tagger on building codes.

Other automated information extraction methods were also developed in this dissertation. Some automated code compliance checking systems represented building codes in logic clauses and used pattern matching-based rules to convert building codes from natural language text to logic clauses (Zhang & El-Gohary 2017). A ruleset expansion method that can expand the range of checkable building codes of such automated code compliance checking systems by expanding their pattern matching-based ruleset was developed in this dissertation research. The ruleset expansion method can guarantee: (1) the ruleset's backward compatibility with the building codes that the ruleset was already able to process, and (2) forward compatibility with building codes that the ruleset may need to process in the future. The ruleset expansion method was validated on Chapters 5 and 10 of the International Building Code 2015 (IBC 2015). Chapter 10 of IBC 2015 was used as the training dataset and Chapter 5 as the testing dataset. A gold standard of logic clauses was published in the Logic Clause Representation of Building Codes (LCRBC) dataset (Xue & Zhang, 2021). Expanded pattern matching-based rules were published in the dissertation (Appendix A). The expanded ruleset increased the precision, recall, and F1-score of logic clause generation at the predicate level by 10.44%, 25.72%, and 18.02%, to 95.17%, 96.60%, and 95.88%, respectively, compared to the baseline ruleset.

Most of the existing automated code compliance checking research focused on checking regulatory information stored in textual format in building codes. However, a comprehensive automated code compliance checking process should be able to check regulatory information stored in other parts, such as tables. Therefore, this dissertation research was conducted to provide a semi-automated information extraction and transformation method for tabular information processing in building codes. The proposed method can semi-automatically detect the layouts of tables and store the extracted information of a table in a database. Automated code compliance checking systems can then query the database for regulatory information in the corresponding table. The algorithm's initial implementation accurately processed 91.67% of the tables in the testing dataset composed of tables in Chapter 10 of IBC 2015. After iterative upgrades, the updated method correctly processed all tables in the testing dataset.
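To make the two-step rule mechanism concrete (the rule and the sentence are invented examples, not entries from the PTBC dataset), an error-driven transformational rule locates a machine-assigned tag by its context and replaces it with the stored correction:

```java
import java.util.*;

// Sketch of an error-driven transformational rule: if a token tagged VB is
// immediately preceded by a determiner (DT), retag it as NN.
public class PosCorrectionSketch {

    record Rule(String prevTag, String fromTag, String toTag) {}

    public static void main(String[] args) {
        String[] tokens = {"the", "exit", "shall", "be", "illuminated"};
        String[] tags   = {"DT",  "VB",   "MD",    "VB", "VBN"};       // "exit" mis-tagged

        Rule rule = new Rule("DT", "VB", "NN");
        for (int i = 1; i < tags.length; i++) {
            if (tags[i - 1].equals(rule.prevTag()) && tags[i].equals(rule.fromTag())) {
                tags[i] = rule.toTag();                                 // apply correction
            }
        }
        System.out.println(Arrays.toString(tokens));
        System.out.println(Arrays.toString(tags));
    }
}
```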
|