191
Ontological Foundations for Strategic Business Modeling: The Case of Value, Risk and Competition. Sales, Tiago Prince. January 2019.
To cope with increasingly dynamic and competitive markets, enterprises need carefully formulated strategies to improve their processes, develop sustainable business models and offer more attractive products and services to their customers. To make sense of this complex environment, enterprises resort to an array of strategic business analysis tools and techniques, such as SWOT and the Business Model Canvas. Most of these tools, however, are derived from informally defined social and economic concepts, which hinders their reuse by practitioners. In this thesis, we address this limitation by means of in-depth ontological analyses conducted under the principles of the Unified Foundational Ontology (UFO). In particular, we focus on the notions of value, risk and competition, as these are recurrently employed by many techniques and yet still suffer from conceptual and definitional shortcomings. One main contribution of this thesis is the Common Ontology of Value and Risk (COVER), a reference conceptual model that disentangles and clarifies several perspectives on value and risk, while demonstrating that they are ultimately two ends of the same spectrum. We demonstrate the usability and relevance of COVER by means of two applications in ArchiMate, an international standard for enterprise architecture representation. A second contribution is the Ontology of Competition, which formally characterizes competition and defines the nature of several competitive relationships arising in business markets.
192
Managing the Uncertainty of the Evolution of Requirements Models. Tran, Le Minh Sang. January 2014.
Evolution is an inevitable phenomenon during the lifetime of long-lived software systems, due to the dynamics of their working environment. Software systems thus need to evolve to meet changing demands. A key aspect of evolution is its uncertainty, since it refers to potential future changes to software artifacts such as requirements models. The selection of evolution-resilient design alternatives for such systems is therefore a significant challenge. This dissertation proposes a framework for modeling evolution and reasoning about it and its uncertainty in requirements models, to facilitate the decision-making process. The framework provides evolution rules as a means to capture requirements evolution, and a set of evolution metrics to quantify design alternatives of the system. This provides more useful information about the extent to which design alternatives can resist evolution, and thus helps decision makers to make strategic moves. Both evolution rules and evolution metrics are backed by a formal model based on a game-theoretic interpretation, which gives a formal semantics to the metrics in different scenarios. The proposed framework is supported by a series of algorithms, which automate the calculation of the metrics, and by a proof-of-concept Computer Aided Software Engineering (CASE) tool. The algorithms calculate metric values for each design alternative and enumerate the design alternatives with the best metric values, i.e., the winner alternatives. The algorithms have been designed to react incrementally and efficiently to every single change made to requirements models. The proposed framework is evaluated in a series of empirical studies, conducted over a year, that assess the modeling part of the framework. The evaluation studies used scenarios taken from industrial projects in the Air Traffic Management (ATM) domain and involved different types of participants with different expertise in the framework and the domain. The results show that the modeling approach is effective in capturing the evolution of complex systems. People supplied with appropriate knowledge (i.e., knowledge of the method for domain experts, knowledge of the domain for method experts, and knowledge of both domain and method for novices) are reasonably able to build large models and to identify possible ways for these models to evolve. Moreover, the studies show that there is a clear difference between domain experts, method experts, and students on the "baseline" (initial) model, but when it comes to modeling the changes with evolution rules, there is no significant difference. The proposed framework is not only applicable to requirements models, but also to other system models, such as risk assessment models. The framework has been adapted to deal with evolving risks in long-lived software systems at a high level of abstraction, and thus can work with many existing risk assessment methods. In summary, the contribution of this dissertation to the early phase of system development should allow system designers to improve the evolution resilience of long-lived systems.
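The abstract does not spell out the metric definitions; as an illustration of the general idea only, the following sketch scores design alternatives by the probability mass of the evolution outcomes they can still satisfy. All rule names, variants and probabilities are hypothetical and not taken from the thesis.

```python
# A minimal sketch (not the thesis' actual metric definitions): each evolution
# rule lists possible future variants of a requirement together with their
# estimated probabilities; a design alternative is scored by the probability
# mass of the future scenarios it can still satisfy.
from itertools import product

# Hypothetical evolution rules: rule -> list of (variant, probability).
evolution_rules = {
    "authn_req": [("password_only", 0.5), ("password_plus_otp", 0.5)],
    "data_storage": [("local_db", 0.3), ("cloud_db", 0.7)],
}

# Hypothetical design alternatives: which variants each alternative supports.
design_alternatives = {
    "alt_A": {"password_only", "password_plus_otp", "local_db"},
    "alt_B": {"password_plus_otp", "local_db", "cloud_db"},
}

def resilience(alternative: set[str]) -> float:
    """Probability that the alternative satisfies every evolved requirement,
    assuming the rules evolve independently (a simplifying assumption)."""
    total = 0.0
    # Enumerate every combination of rule outcomes (the future scenarios).
    for combo in product(*evolution_rules.values()):
        p = 1.0
        ok = True
        for variant, prob in combo:
            p *= prob
            ok = ok and variant in alternative
        if ok:
            total += p
    return total

if __name__ == "__main__":
    scores = {name: resilience(v) for name, v in design_alternatives.items()}
    winner = max(scores, key=scores.get)   # the "winner alternative"
    print(scores, "->", winner)
```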
193
Visual Saliency Detection and its Application to Image Retrieval. Muratov, Oleg. January 2013.
People perceive information with different levels of attention and involvement, due to the way our brain functions and to the redundancy and importance of the perceived data. This work deals with visual information, in particular with images.
Image analysis and processing often requires running computationally expensive algorithms. Knowing which parts of an image are more important than others allows the amount of data to be processed to be reduced. Besides reducing computational cost, a broad variety of applications, including image compression, quality assessment, adaptive content display and rendering, can benefit from this kind of information. The development of an accurate visual importance estimation method would provide a useful tool for the image processing domain, and that is the main goal of this work. In the following, two novel approaches to saliency detection are presented. In comparison to previous works in this field, the proposed approaches tackle saliency estimation at the object level. In addition, one of the proposed approaches addresses saliency detection by modelling 3-D spatial relationships between objects in a scene.
Moreover, a novel application of saliency to the diversification of image retrieval results is presented.
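For readers unfamiliar with saliency maps, the sketch below computes a simple pixel-level contrast saliency map (the distance of each pixel's colour from the global mean). This is only a classic baseline on a synthetic image, not the object-wise or 3-D-aware methods proposed in the thesis.

```python
# A minimal illustration of pixel-level contrast saliency; it only shows the
# kind of map that saliency detection methods produce.
import numpy as np

def contrast_saliency(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in [0, 1]; returns an H x W saliency map."""
    mean_color = image.reshape(-1, 3).mean(axis=0)          # global mean colour
    diff = image - mean_color                                # per-pixel deviation
    saliency = np.sqrt((diff ** 2).sum(axis=2))              # Euclidean distance
    lo, hi = saliency.min(), saliency.max()
    return (saliency - lo) / (hi - lo + 1e-8)                # normalise to [0, 1]

if __name__ == "__main__":
    # Synthetic test image: grey background with a red square "object".
    img = np.full((64, 64, 3), 0.5)
    img[20:40, 20:40] = [1.0, 0.1, 0.1]
    s = contrast_saliency(img)
    print("object saliency:", s[30, 30].round(2), "background:", s[5, 5].round(2))
```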
194
Semantic Language Models with Deep Neural Networks. Bayer, Ali Orkan. January 2015.
Spoken language systems (SLS) communicate with users in natural language through speech. There are two main problems related to processing the spoken input in SLS. The first is automatic speech recognition (ASR), which recognizes what the user says. The second is spoken language understanding (SLU), which understands what the user means. We focus on the language model (LM) component of SLS. LMs constrain the search space that is used in the search for the best hypothesis; therefore, they play a crucial role in the performance of SLS. It has long been discussed that an improvement in recognition performance does not necessarily yield better understanding performance. Therefore, optimizing LMs for understanding performance is crucial. In addition, long-range dependencies in language are hard to handle with statistical language models. These two problems are addressed in this thesis. We investigate two different LM structures. The first LM that we investigate enables SLS to understand better what they recognize, by searching the ASR hypotheses for the best understanding performance. We refer to these models as joint LMs; they use lexical and semantic units jointly in the LM. The second LM structure uses the semantic context of an utterance, which can also be described as "what the system understands", to search for a better hypothesis that improves both the recognition and the understanding performance. We refer to these models as semantic LMs (SELMs). SELMs use features based on a well-established theory of lexical semantics, namely the theory of frame semantics. They incorporate semantic features extracted from the ASR hypothesis into the LM and handle long-range dependencies by using the semantic relationships between words and the semantic context. ASR noise is propagated to the semantic features; to suppress this noise, we introduce the use of deep semantic encodings for semantic feature extraction. In this way, SELMs optimize both the recognition and the understanding performance.
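As a rough illustration of how recognition and understanding can be optimized jointly, the sketch below re-ranks a toy n-best list of ASR hypotheses with a weighted combination of an ASR score and a separate semantic score. The hypotheses, scores and weights are invented; the actual SELMs use frame-semantic features and deep semantic encodings rather than this simplistic interpolation.

```python
# A toy n-best rescoring sketch: it only illustrates the general idea of
# re-ranking ASR hypotheses with an additional semantic score.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    asr_score: float       # combined acoustic + baseline LM score (log domain)
    semantic_score: float  # score from a semantic model (higher = more coherent)

def rescore(nbest: list[Hypothesis], weight: float = 0.5) -> Hypothesis:
    """Pick the hypothesis maximising a weighted combination of both scores."""
    return max(nbest, key=lambda h: (1 - weight) * h.asr_score
                                    + weight * h.semantic_score)

if __name__ == "__main__":
    nbest = [
        Hypothesis("book a flight to boston", asr_score=-12.1, semantic_score=-1.0),
        Hypothesis("book a fright to boston", asr_score=-11.9, semantic_score=-6.0),
    ]
    best = rescore(nbest, weight=0.4)
    print(best.text)   # the semantically coherent hypothesis wins
```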
195
Requirements Engineering for Self-Adaptive Software: Bridging the Gap Between Design-Time and Run-Time. Qureshi, Nauman Ahmed. January 2011.
Self-Adaptive Software systems (SAS) adapt at run-time in response to changes in users' needs, operating contexts, and resource availability, requiring minimal to no involvement of system administrators. The ever-greater reliance on software with qualities such as flexibility and easy integrability, and the associated increase in design and maintenance effort, is raising interest in research on SAS. Taking the perspective of Requirements Engineering (RE), we investigate in this thesis how RE for SAS departs from more conventional RE for non-adaptive systems. The thesis has two objectives. The first is to define a systematic approach to support the analyst in engineering requirements for SAS at design-time, starting at early requirements (elicitation and analysis) and ending with the specification of the system that will satisfy those requirements. The second is to realize software that holds a representation of its requirements at run-time, thus enabling run-time adaptation in a user-oriented, goal-driven manner. To fulfill the first objective, a conceptual and theoretical framework is proposed. The framework is founded on a core ontology for RE, with revised elements needed to support RE for SAS. On this basis, a practical and systematic methodology to support the requirements engineer is defined. It exploits a new aggregate type of requirement, called adaptive requirements, together with a visual modeling language to encode requirements into a design-time artifact (the Adaptive Requirements Modeling Language, ARML). Adaptive requirements not only encompass functional and non-functional requirements but also specify properties for control-loop functionalities, such as the monitoring specification, decision criteria and adaptation actions. An experiment involving human subjects is conducted to provide a first assessment of the effectiveness of the proposed modeling concepts and approach. To support the second objective, a Continuous Adaptive RE (CARE) framework is proposed. It is based on a service-oriented architecture, mainly adopting concepts from service-based applications, to support run-time analysis and refinement of requirements by the system itself. The key contribution in achieving this objective is enabling the CARE framework to involve the end-user in the adaptation at run-time, when needed. As a validation of this framework, we perform a research case study by developing a proof-of-concept application that rests on CARE's conceptual architecture. This thesis contributes to research on requirements engineering for SAS by proposing: (1) a conceptual core ontology with the concepts and relations necessary to support the formulation of a dynamic RE problem, i.e., finding an adaptive requirements specification both at design-time and run-time; (2) a systematic methodology to support the analyst in modeling and operationalizing adaptive requirements at design-time; and (3) a framework to perform continuous requirements engineering at run-time by the system itself, involving the end-user when needed.
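As a purely illustrative sketch of the idea behind adaptive requirements, the following fragment represents one requirement together with its monitoring specification, decision criterion and adaptation actions, and runs one control-loop iteration. The field names and the toy response-time example are assumptions and do not follow the ARML notation defined in the thesis.

```python
# A hedged sketch of how an "adaptive requirement" could be represented at
# run-time; not the thesis' ARML language or CARE implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AdaptiveRequirement:
    goal: str                                   # the requirement being pursued
    monitor: Callable[[], dict]                 # monitoring specification
    decision: Callable[[dict], bool]            # decision criterion: adapt or not?
    adaptations: list[Callable[[], None]] = field(default_factory=list)

    def evaluate(self) -> None:
        """One control-loop iteration: monitor, decide, possibly adapt."""
        observation = self.monitor()
        if self.decision(observation):
            for action in self.adaptations:
                action()

if __name__ == "__main__":
    # Toy example: keep response time under a threshold by adding a replica.
    state = {"response_ms": 850, "replicas": 1}
    req = AdaptiveRequirement(
        goal="Response time below 500 ms",
        monitor=lambda: {"response_ms": state["response_ms"]},
        decision=lambda obs: obs["response_ms"] > 500,
        adaptations=[lambda: state.update(replicas=state["replicas"] + 1)],
    )
    req.evaluate()
    print(state)   # one extra replica was requested
```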
196
Cross-Layer Adaptation of Service-Based Systems. Zengin, Asli. January 2012.
One of the key features of service-based systems (SBS) is the capability to adapt in order to react to various changes in the business requirements and the application context. Given the complex layered structure and the heterogeneous, dynamic execution context of such systems, adaptation is not at all a trivial task.
The importance of the adaptation problem has been widely recognized in the community of software services and systems. Several adaptation approaches exist that aim at identifying and solving problems occurring in one of the SBS layers. A fundamental problem with most of these works is their fragmentation and isolation. While these solutions are quite effective for the specific problem they try to solve, they may be incompatible or even harmful when the whole system is taken into account: enacting an adaptation in one part of the system might trigger new problems elsewhere.
When building adaptive SBSs, precautions must be taken to consider the impact of the adaptations on the entire system. This can be achieved by properly coordinating the adaptation actions provided by the different analysis and decision mechanisms through holistic, multi-layer adaptation strategies. In this dissertation, we address this problem. We present a novel framework for Cross-Layer Adaptation Management (CLAM) that enables a comprehensive impact analysis by coordinating the adaptation and analysis tools available in the SBS.
We define a new system modeling methodology for adaptation coordination. The SBS model and the accompanying adaptation model that we propose in this thesis overcome the limitations of existing cross-layer adaptation approaches by offering: (i) genericness, to accommodate diverse SBS domains with different system elements and layers; (ii) flexibility, to allow new system artifacts and adaptation tools; and (iii) the capability to deal with the complexity of the SBS, considering the possibly huge number of problems and adaptations that might take place in the system.
Based on this model we present a tree-based coordination algorithm. On the one hand, it exploits the local adaptation and analysis facilities provided by the system; on the other hand, it harmonizes the different layers and system elements by properly coordinating the local solutions. The outcome of the algorithm is a set of alternative cross-layer adaptation strategies that are consistent with the overall system.
Moreover, we propose novel selection criteria to rank the alternative strategies and select the best one. Differently from traditional approaches, we consider as selection criteria not only the overall quality of the SBS, but also the total effort required to enact an adaptation strategy. Based on these criteria we present two possible ranking methods, one relying on simple additive weighting (a multiple-criteria decision-making technique), the other relying on fuzzy logic.
The framework is implemented and integrated in a toolkit that allows for constructing and selecting the cross-layer adaptation strategies, and is evaluated on a set of case studies.
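To make the ranking step concrete, the sketch below applies simple additive weighting to a few invented adaptation strategies, scoring them on overall quality and on the effort needed to enact them. The criteria, weights and values are hypothetical, and the fuzzy-logic variant is not shown.

```python
# A hedged sketch of ranking adaptation strategies with simple additive
# weighting (SAW); strategies and numbers do not come from the thesis.
def saw_rank(strategies: dict[str, dict[str, float]],
             weights: dict[str, float],
             cost_criteria: set[str]) -> list[tuple[str, float]]:
    """Rank strategies by the weighted sum of normalised criterion values.
    Criteria in `cost_criteria` (e.g. adaptation effort) are minimised."""
    ranked = []
    for name, values in strategies.items():
        score = 0.0
        for c, w in weights.items():
            col = [s[c] for s in strategies.values()]
            lo, hi = min(col), max(col)
            norm = 0.0 if hi == lo else (values[c] - lo) / (hi - lo)
            if c in cost_criteria:          # lower is better for cost criteria
                norm = 1.0 - norm
            score += w * norm
        ranked.append((name, round(score, 3)))
    return sorted(ranked, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    strategies = {
        "re-bind service":    {"quality": 0.7, "effort": 2.0},
        "re-plan workflow":   {"quality": 0.9, "effort": 5.0},
        "re-deploy instance": {"quality": 0.8, "effort": 3.0},
    }
    print(saw_rank(strategies, weights={"quality": 0.6, "effort": 0.4},
                   cost_criteria={"effort"}))
```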
197
Machine Learning for Tract Segmentation in dMRI Data. Thien Bao, Nguyen. January 2016.
Diffusion MRI (dMRI) data allows the 3D pathways of axons within the white matter of the brain to be reconstructed as a set of streamlines, called a tractography. A streamline is a vectorial representation of thousands of neuronal axons expressing structural connectivity. An important task is to group streamlines that belong to the same functional unit into one tract, a process called tract segmentation, which is extremely helpful for neurosurgery and for diagnosing brain diseases. However, the segmentation process is difficult and time-consuming due to the large number of streamlines (about 3 × 10⁵ in a normal brain) and the variability of brain anatomy among different subjects. In this project, the goal is to design an effective method for the tract segmentation task based on machine learning techniques, and to develop an interactive tool that assists medical practitioners in performing this task more precisely, more easily, and faster. First, we propose a design for an interactive segmentation process in which the user is presented with a clustered version of the tractography and selects some of the clusters to identify a superset of the streamlines of interest. This superset is then re-clustered at a finer scale and again the user is requested to select the relevant clusters. The process of re-clustering and manual selection is iterated until the remaining streamlines faithfully represent the desired anatomical structure of interest. To solve the computational issue of clustering a large number of streamlines under the strict time constraints required by interactive use, we present a solution that consists in embedding the streamlines into a Euclidean space (the dissimilarity representation) and then adopting a state-of-the-art scalable implementation of the k-means algorithm. The dissimilarity representation is defined by selecting a set of streamlines, called prototypes, and then mapping any new streamline to the vector of its distances from the prototypes. Second, an algorithm is proposed to find the correspondence/mapping between streamlines in the tractographies of two different samples, without requiring any transformation as traditional tractography registration usually does. In other words, we try to find a mapping between the tractographies. Mapping is very useful for studying tractography data across subjects. Last but not least, by exploring the mapping in the context of the dissimilarity representation, we also propose an algorithmic solution to build a common vectorial representation of streamlines across subjects. The core of the proposed solution combines two state-of-the-art elements: first, the recently proposed tractography mapping approach is used to align the prototypes across subjects; then the dissimilarity representation is applied to build the common vectorial representation of streamlines. Preliminary results of applying our methods in clinical use cases show evidence that the proposed algorithms are greatly beneficial (in terms of time efficiency, ease of use, etc.) for the study of white matter tractography in clinical applications.
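The following sketch illustrates the dissimilarity representation on synthetic data: streamlines are mapped to vectors of their distances from randomly chosen prototypes and then clustered with k-means. The distance function and the toy "bundles" are simplified stand-ins, and the code is not the thesis' implementation.

```python
# A hedged sketch of the dissimilarity representation for fast streamline
# clustering, using SciPy's k-means on synthetic streamlines.
import numpy as np
from scipy.cluster.vq import kmeans2

def streamline_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Simplified symmetric mean point-wise distance between two streamlines
    given as (n_points, 3) arrays resampled to the same length."""
    direct = np.linalg.norm(a - b, axis=1).mean()
    flipped = np.linalg.norm(a - b[::-1], axis=1).mean()   # orientation-invariant
    return min(direct, flipped)

def dissimilarity_embedding(streamlines, prototypes) -> np.ndarray:
    """Each streamline becomes the vector of its distances from the prototypes."""
    return np.array([[streamline_distance(s, p) for p in prototypes]
                     for s in streamlines])

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def bundle(offset: float):
        # 100 random-walk streamlines of 20 points, shifted by `offset`.
        return [np.cumsum(rng.normal(0, 1, (20, 3)), axis=0) + offset
                for _ in range(100)]

    streamlines = bundle(0.0) + bundle(50.0)                # two synthetic bundles
    idx = rng.choice(len(streamlines), 10, replace=False)   # pick 10 prototypes
    prototypes = [streamlines[i] for i in idx]
    X = dissimilarity_embedding(streamlines, prototypes)    # 200 x 10 matrix
    _, labels = kmeans2(X, 2, minit="++", seed=1)
    print("cluster sizes:", np.bincount(labels))
```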
198
Automatic Techniques for the Synthesis and Assisted Deployment of Security Policies in Workflow-based Applications. dos Santos, Daniel Ricardo. January 2017.
Workflows specify a collection of tasks that must be executed under the responsibility or supervision of human users. Workflow management systems and workflow-driven applications need to enforce security policies in the form of access control, specifying which users can execute which tasks, and authorization constraints, such as Separation/Binding of Duty, further restricting the execution of tasks at run-time. Enforcing these policies is crucial to avoid fraud and malicious use, but it may lead to situations where a workflow instance cannot be completed without violating the policy. The Workflow Satisfiability Problem (WSP) asks whether there exists an assignment of users to tasks in a workflow such that every task is executed and the policy is not violated. The run-time version of this problem amounts to answering user requests to execute tasks positively if the policy is respected and the workflow instance is guaranteed to terminate. The WSP is inherently hard, but solutions to it have a practical application in reconciling business compliance (stating that workflow instances should follow the specified policies) and business continuity (stating that workflow instances should be deadlock-free). Related problems, such as finding execution scenarios that not only satisfy a workflow but also satisfy other properties (e.g., that a workflow instance is still satisfiable even in the absence of some users), can be solved at deployment-time to help users design policies and reuse available workflow models. This thesis makes three main contributions:
1. We present a technique to synthesize monitors capable of solving the run-time version of the WSP, i.e., capable of answering user requests to execute tasks in such a way that the policy is not violated and the workflow instance is guaranteed to terminate. The technique is extended to modular workflow specifications, using components and gluing assertions. This allows us to compose synthesized monitors, reuse workflow models, and synthesize monitors for large models.
2. We introduce and present techniques to solve a new class of problems called Scenario Finding Problems, i.e., finding execution scenarios that satisfy properties of interest to users. Solutions to these problems can assist customers during the deployment of reusable workflow models with custom authorization policies.
3. We implement the proposed techniques in two tools. Cerberus integrates monitor synthesis, scenario finding, and run-time enforcement into workflow management systems. Aegis recovers workflow models from web applications using process mining, synthesizes monitors, and invokes them at run-time through a reverse proxy.
An extensive experimental evaluation shows the practical applicability of the proposed approaches on realistic and synthetic (for scalability) problem instances.
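As a small illustration of the decision problem underlying these contributions, the brute-force sketch below searches for a user-to-task assignment that satisfies an invented authorization policy and separation-of-duty constraints. The synthesized monitors in the thesis solve the run-time version incrementally and far more efficiently than this enumeration.

```python
# A hedged, brute-force sketch of the Workflow Satisfiability Problem (WSP);
# tasks, users, policy and constraints are invented for the example.
from itertools import product

tasks = ["request", "approve", "sign", "archive"]
users = ["alice", "bob", "carol"]

# Authorization policy: which users may execute which tasks.
authorized = {
    "request": {"alice", "bob"},
    "approve": {"bob", "carol"},
    "sign": {"alice", "carol"},
    "archive": {"alice", "bob", "carol"},
}

# Separation of duty: these pairs of tasks must be executed by different users.
sod = [("request", "approve"), ("approve", "sign")]

def satisfiable():
    """Return a satisfying user-to-task assignment, or None if none exists."""
    for assignment in product(users, repeat=len(tasks)):
        plan = dict(zip(tasks, assignment))
        if all(plan[t] in authorized[t] for t in tasks) and \
           all(plan[t1] != plan[t2] for t1, t2 in sod):
            return plan
    return None

if __name__ == "__main__":
    print(satisfiable())   # e.g. request/alice, approve/bob, sign/carol, ...
```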
199
Privacy-Aware Risk-Based Access Control Systems. Metoui, Nadia. January 2018.
Modern organizations collect massive amounts of data, both internally (from their employees and processes) and externally (from customers, suppliers, and partners). The increasing availability of these large datasets has been made possible by growing storage and processing capabilities. From a technical perspective, organizations are therefore now in a position to exploit these diverse datasets to create new data-driven businesses or to optimize existing processes (real-time customization, predictive analytics, etc.). However, such data often contains very sensitive information that, if leaked or misused, can lead to privacy violations. Privacy is becoming increasingly relevant for organizations and businesses, due to strong regulatory frameworks (e.g., the EU General Data Protection Regulation, GDPR, and the Health Insurance Portability and Accountability Act, HIPAA) and the increasing awareness of citizens about personal data issues. Privacy breaches and failure to meet privacy requirements can have a tremendous impact on companies (e.g., reputation loss, non-compliance fines, legal actions). Privacy violation threats are not exclusively caused by external actors gaining access through security gaps; privacy breaches can also be originated by internal actors, sometimes even by trusted and authorized ones. As a consequence, most organizations prefer to strongly limit (even internally) the sharing and dissemination of data, thereby making most of the information unavailable to decision-makers and preventing the organization from fully exploiting the power of these new data sources. In order to unlock this potential while controlling the privacy risk, it is necessary to develop novel data sharing and access control mechanisms able to support risk-based decision making and to weigh the advantages of information against privacy considerations. To achieve this, access control decisions must be based on a (dynamically assessed) estimation of expected costs and benefits compared to the risk, and not (as in traditional access control systems) on a predefined policy that statically defines which accesses are allowed and which are denied. In risk-based access control, the risk of each access request is estimated, and if the risk is lower than a given threshold (possibly related to the trustworthiness of the requester), access is granted; otherwise it is denied. The aim is to be more permissive than traditional access control systems, allowing for a better exploitation of data. Although existing risk-based access control models provide an important step towards better management and exploitation of data, they have a number of drawbacks that limit their effectiveness. In particular, most existing risk-based systems only support binary access decisions: the outcome is "allowed" or "denied", whereas in real life we often have exceptions based on additional conditions (e.g., "I cannot provide this information, unless you sign the following non-disclosure agreement." or "I cannot disclose this data, because it contains personally identifiable information, but I can disclose an anonymized version of the data."). In other words, the system should be able to propose risk mitigation measures to reduce the risk (e.g., disclose a partial or anonymized version of the requested data) instead of denying risky access requests. Alternatively, it should be able to propose appropriate trust enhancement measures (e.g., stronger authentication); once they are accepted and fulfilled by the requester, more information can be shared. The aim of this thesis is to propose and validate a novel privacy-enhancing access control approach offering adaptive and fine-grained access control for sensitive datasets. This approach enhances access to data while mitigating privacy threats originated by authorized internal actors. More in detail:
1. We demonstrate the relevance and evaluate the impact of threats from authorized actors. To this aim, we developed EPIC (Evaluating Privacy violation rIsk in Cyber security systems), a privacy threat identification methodology, and applied it in a cybersecurity use case where very sensitive information is used.
2. We present a privacy-aware risk-based access control framework that supports access control in dynamic contexts through trust enhancement mechanisms and privacy risk mitigation strategies. This allows us to strike a balance between the privacy risk and the trustworthiness of the data request. If the privacy risk is too large compared to the trust level, the framework can identify adaptive strategies that decrease the privacy risk (e.g., by removing/obfuscating part of the data through anonymization) and/or increase the trust level (e.g., by asking the requester for additional obligations).
3. We show how the privacy-aware risk-based approach can be integrated into existing access control models such as RBAC and ABAC, and that it can be realized using a declarative policy language with a number of advantages, including usability, flexibility, and scalability.
4. We evaluate our approach on several industrially relevant use cases, elaborated to meet the requirements of the industrial partner (SAP) of this industrial doctorate.
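A minimal sketch of the kind of decision logic described above follows: instead of a binary allow/deny, a request whose risk exceeds the threshold can be downgraded to an anonymized view or conditioned on an obligation. All thresholds, risk values and mitigation options are invented for illustration and do not reproduce the framework's actual policies.

```python
# A hedged sketch of a risk-based access decision with mitigation options.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str                        # "allow", "deny", or a mitigated outcome
    obligation: Optional[str] = None    # trust-enhancing condition, if any

def decide(risk: float, trust: float,
           anonymized_risk: Optional[float] = None) -> Decision:
    """Grant if the risk is acceptable for the requester's trust level;
    otherwise try to mitigate the risk or raise the trust before denying."""
    threshold = trust                   # acceptable risk grows with trust
    if risk <= threshold:
        return Decision("allow")
    if anonymized_risk is not None and anonymized_risk <= threshold:
        return Decision("allow anonymized view")
    if risk <= threshold + 0.2:         # small gap: ask for an obligation
        return Decision("allow", obligation="sign non-disclosure agreement")
    return Decision("deny")

if __name__ == "__main__":
    print(decide(risk=0.3, trust=0.5))                        # plain allow
    print(decide(risk=0.7, trust=0.5, anonymized_risk=0.4))   # mitigated access
    print(decide(risk=0.65, trust=0.5))                       # allow with obligation
    print(decide(risk=0.9, trust=0.5))                        # deny
```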
200
Distributed Computing for Large-scale Graphs. Guerrieri, Alessio. January 2015.
The last decade has seen increased attention on large-scale data analysis, caused mainly by the availability of new sources of data and the development of programming models that allow their analysis. Since many of these sources can be modeled as graphs, many large-scale graph processing frameworks have been developed, from vertex-centric models such as Pregel to more complex programming models that allow asynchronous computation, can tackle dynamism in the data, and permit the usage of different amounts of resources. This thesis presents theoretical and practical results in the area of distributed large-scale graph analysis by giving an overview of the entire pipeline. Data must first be pre-processed to obtain a graph, which is then partitioned into subgraphs of similar size. To analyze this graph, the user must choose a system and a programming model that matches her available resources, the type of data and the class of algorithm to execute. Aside from an overview of all these steps, this research presents three novel contributions. The first main contribution is dfep, a novel distributed partitioning algorithm that divides the edge set into similarly sized partitions. dfep can obtain partitions of good quality in only a few iterations. The output of dfep can then be used by etsch, a graph processing framework that uses partitions of edges as the focus of its programming model. etsch's programming model is shown to be flexible and can easily reuse classical sequential graph algorithms as part of its workflow. Implementations of etsch in Hadoop, Spark and Akka allow for a comparison of those systems and a discussion of their advantages and disadvantages. The implementation of etsch in Akka is by far the fastest and is able to process billion-edge graphs faster than competitors such as GPS, Blogel and Giraph++, while using only a few computing nodes. A final contribution is an application study of graph-centric approaches to word sense induction and disambiguation: from a large set of documents a word graph is constructed and then processed by a graph clustering algorithm, to find documents that refer to the same entities. A novel graph clustering algorithm, named tovel, uses a diffusion-based approach inspired by the cycle of water.
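To give a flavour of edge-centric partitioning, the sketch below greedily assigns each edge of a toy graph to a partition, preferring partitions that already contain one of its endpoints, and reports the partition sizes and the average vertex replication. This simple heuristic is not dfep, whose iterative distributed algorithm is described in the thesis.

```python
# A hedged sketch of greedy edge partitioning, only to illustrate what an
# edge-centric partitioner produces: similar-sized edge partitions, measured
# here by partition size and vertex replication factor.
from collections import defaultdict

def greedy_edge_partition(edges, k):
    """Assign each edge to one of k partitions, preferring partitions that
    already contain one of its endpoints and breaking ties by current size."""
    sizes = [0] * k
    vertex_parts = defaultdict(set)        # vertex -> partitions it appears in
    assignment = {}
    for u, v in edges:
        candidates = (vertex_parts[u] | vertex_parts[v]) or set(range(k))
        p = min(candidates, key=lambda i: sizes[i])
        assignment[(u, v)] = p
        sizes[p] += 1
        vertex_parts[u].add(p)
        vertex_parts[v].add(p)
    replication = sum(len(ps) for ps in vertex_parts.values()) / len(vertex_parts)
    return assignment, sizes, replication

if __name__ == "__main__":
    # Toy graph: two loosely connected triangles.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    _, sizes, replication = greedy_edge_partition(edges, k=2)
    print("partition sizes:", sizes, "avg vertex replication:", round(replication, 2))
```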