About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Effects of artificial light at night on benthic primary producers in freshwaters

Grubisic, Maja January 2017 (has links)
In recent decades, the use of artificial nocturnal illumination has rapidly increased worldwide, raising nocturnal light levels and disrupting natural cycles of light and dark that have been stable over geological and evolutionary time scales. This widespread alteration of the natural light regime by artificial light at night (ALAN) is contributing to global environmental change and raises concerns about potentially adverse effects on organisms and processes in illuminated ecosystems. Simultaneously, a global shift in outdoor lighting technologies from yellow high-pressure sodium (HPS) to white light-emitting diode (LED) light is taking place, changing the spectral composition of nocturnal illumination. Mounting evidence suggests that ALAN affects microorganisms, plants and animals in both aquatic and terrestrial ecosystems. Light is a major source of energy and an important environmental cue for primary producers that influences, and to a large extent drives, their growth, production and community structure. Freshwaters are increasingly illuminated at night, as they are often located near human population centers. Despite this, the impacts of artificial nocturnal illumination on freshwater ecosystems are still largely unknown. In particular, effects on aquatic primary producers in urban and suburban rivers and streams have hardly been addressed. This thesis aimed to investigate the effects of artificial nocturnal illumination on the biomass and community composition of benthic primary producer communities in freshwaters, the periphyton. The presented work is based on manipulative field studies performed in two contrasting freshwater systems whose periphyton communities are characterized by different species. The first study was performed in a stream-side flume system on a sub-alpine stream and the second in a lowland agricultural ditch.
I found that two to 13 weeks of exposure to LED light at night decreased periphyton biomass in both aquatic systems. In stream periphyton, the decrease in biomass was observed in early developmental stages (up to three weeks) but not in later developmental stages (four to six weeks). Effects of LED on community composition were found only in stream periphyton, where LED light increased the proportion of the dominant autotroph group, the diatoms, and decreased the proportion of cyanobacteria in early developmental stages, but indicated a decreased proportion of diatoms and an increased proportion of cyanobacteria in later developmental stages. LED light at night also altered pigment composition and quantitative taxonomic composition in stream periphyton in later developmental stages, and several diatom and chrysophyte taxa, both autotrophic and heterotrophic, responded to ALAN by increasing or decreasing in abundance in a taxon-specific manner. LED did not affect periphyton community composition in the lowland agricultural ditch, likely because the periphyton there was composed of different species. All effects of LED light differed between seasons, presumably due to seasonal differences in community composition and environmental variables. I did not find any evidence that HPS light affects either biomass or community composition of periphyton. The differential effects of the two light sources are likely a result of differences in their spectral composition, in particular the high proportion of blue light emitted by LED but not by HPS. This thesis provides, for the first time, evidence that LED light at night can profoundly affect benthic primary producers and periphyton communities in freshwater systems by reducing their biomass and altering community composition.
Based on the results presented, systems dominated by periphyton in early developmental stages, such as streams prone to physical disturbances, are likely more sensitive to ALAN than systems with stable flow conditions. Periphyton plays a fundamental role in productivity, nutrient and carbon cycling, and food supply for higher trophic levels in small, clear waters; its position at the base of aquatic ecosystems suggests that the alterations induced by ALAN may have important consequences for ecosystem functions. This should be considered when developing lighting strategies for areas close to freshwaters, in order to mitigate potentially adverse effects of nocturnal artificial illumination on aquatic ecosystems.
102

CROSS-LAYER ADAPTATION OF SERVICE-BASED SYSTEMS

Zengin, Asli January 2012 (has links)
One of the key features of service-based systems (SBS) is the capability to adapt in order to react to various changes in the business requirements and the application context. Given the complex layered structure and the heterogeneous, dynamic execution context of such systems, adaptation is not at all a trivial task. The importance of the adaptation problem has been widely recognized in the community of software services and systems. Several adaptation approaches exist which aim at identifying and solving problems that occur in one of the SBS layers. A fundamental problem with most of these works is their fragmentation and isolation. While these solutions are quite effective for the specific problem they try to solve, they may be incompatible or even harmful when the whole system is taken into account: enacting an adaptation in the system might trigger new problems. When building adaptive SBSs, precautions must be taken to consider the impact of adaptations on the entire system. This can be achieved by properly coordinating the adaptation actions provided by the different analysis and decision mechanisms through holistic, multi-layer adaptation strategies. In this dissertation, we address this problem. We present a novel framework for Cross-layer Adaptation Management (CLAM) that enables a comprehensive impact analysis by coordinating the adaptation and analysis tools available in the SBS. We define a new system modeling methodology for adaptation coordination.
The SBS model and the accompanying adaptation model that we propose in this thesis overcome the limitations of existing cross-layer adaptation approaches: (i) genericness, accommodating diverse SBS domains with different system elements and layers; (ii) flexibility, allowing new system artifacts and adaptation tools; (iii) capability to deal with the complexity of the SBS, considering the possibly huge number of problems and adaptations that might take place in the system. Based on this model we present a tree-based coordination algorithm. On the one hand it exploits the local adaptation and analysis facilities provided by the system; on the other hand it harmonizes the different layers and system elements by properly coordinating the local solutions. The outcome of the algorithm is a set of alternative cross-layer adaptation strategies that are consistent with the overall system. Moreover, we propose novel selection criteria to rank the alternative strategies and select the best one. Differently from traditional approaches, we consider as selection criteria not only the overall quality of the SBS, but also the total effort required to enact an adaptation strategy. Based on these criteria we present two possible ranking methods, one relying on simple additive weighting (a multiple-criteria decision-making technique), the other on fuzzy logic. The framework is implemented and integrated in a toolkit that allows for constructing and selecting cross-layer adaptation strategies, and is evaluated on a set of case studies.
103

Computational problems in algebra: units in group rings and subalgebras of real simple Lie algebras

Faccin, Paolo January 2014 (has links)
In the first part of the thesis I produce and implement an algorithm for obtaining generators of the unit group of the integral group ring ZG of a finite abelian group G. We use our MAGMA implementation of this algorithm to compute the unit group of ZG for G of order up to 110. In the second part of the thesis I show how to construct multiplication tables of the semisimple real Lie algebras. Next I give an algorithm, based on the work of Sugiura, to find all Cartan subalgebras of such a Lie algebra. Finally I show algorithms for finding semisimple subalgebras of a given semisimple real Lie algebra.
104

Machine Learning for Tract Segmentation in dMRI data

Thien Bao, Nguyen January 2016 (has links)
Diffusion MRI (dMRI) data makes it possible to reconstruct the 3D pathways of axons within the white matter of the brain as a set of streamlines, called a tractography. A streamline is a vectorial representation of thousands of neuronal axons expressing structural connectivity. An important task is to group streamlines belonging to the same functional structure into one tract, i.e., tract segmentation. This work is extremely helpful for neurosurgery and for diagnosing brain disease. However, the segmentation process is difficult and time consuming due to the large number of streamlines (about 3 × 10^5 in a normal brain) and the variability of brain anatomy among different subjects. In our project, the goal is to design an effective method for the tract segmentation task based on machine learning techniques, and to develop an interactive tool that assists medical practitioners in performing this task more precisely, more easily, and faster. First, we propose a design for an interactive segmentation process: the user is presented with a clustered version of the tractography and selects some of the clusters to identify a superset of the streamlines of interest. This superset is then re-clustered at a finer scale and the user is again requested to select the relevant clusters. The process of re-clustering and manual selection is iterated until the remaining streamlines faithfully represent the desired anatomical structure of interest. To solve the computational issue of clustering a large number of streamlines under the strict time constraints imposed by interactive use, we present a solution that consists in embedding the streamlines into a Euclidean space (the dissimilarity representation) and then adopting a state-of-the-art scalable implementation of the k-means algorithm. The dissimilarity representation is defined by selecting a set of streamlines called prototypes and then mapping any new streamline to the vector of its distances from the prototypes.
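The dissimilarity representation described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: `streamline_distance` is a deliberately simplified distance (the tractography literature typically uses mean-closest-point distances), the prototype selection is naive, and all names and toy data are hypothetical.

```python
import numpy as np

def streamline_distance(a, b):
    """Mean pointwise distance between two streamlines resampled to the
    same number of 3D points (a simplification of the closest-point
    distances commonly used for tractography)."""
    return np.mean(np.linalg.norm(a - b, axis=1))

def dissimilarity_embedding(streamlines, prototypes):
    """Map each streamline to the vector of its distances from the
    prototypes, giving a fixed-length Euclidean representation that a
    scalable k-means implementation can then cluster."""
    return np.array([[streamline_distance(s, p) for p in prototypes]
                     for s in streamlines])

# Toy data: 4 "streamlines" of 10 points each in 3D.
rng = np.random.default_rng(0)
streamlines = [rng.normal(size=(10, 3)) for _ in range(4)]
prototypes = streamlines[:2]      # naive prototype selection for illustration
X = dissimilarity_embedding(streamlines, prototypes)
print(X.shape)                    # (4, 2): n_streamlines × n_prototypes
```

Once streamlines live in this Euclidean space, any off-the-shelf k-means variant can cluster them within interactive time budgets.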
Second, an algorithm is proposed to find the correspondence/mapping between streamlines in the tractographies of two different subjects, without requiring any transformation as traditional tractography registration usually does. In other words, we try to find a mapping between the tractographies. Mapping is very useful for studying tractography data across subjects. Last but not least, by exploring the mapping in the context of the dissimilarity representation, we also propose an algorithmic solution to build a common vectorial representation of streamlines across subjects. The core of the proposed solution combines two state-of-the-art elements: first, the recently proposed tractography mapping approach is used to align the prototypes across subjects; then the dissimilarity representation is applied to build the common vectorial representation for streamlines. Preliminary results of applying our methods in clinical use cases show evidence that our proposed algorithms are greatly beneficial (in terms of time efficiency, ease of use, etc.) for the study of white matter tractography in clinical applications.
105

Long-term morphological response of tide dominated estuaries

Todeschini, Ilaria January 2006 (has links)
Most estuaries of the world are influenced by tides. The tidal action is a fundamental mechanism for mixing river and estuarine waters, resuspending and transporting sediments, and creating bedforms. The planform of tide-dominated estuaries is characterized by a funnel-shaped geometry with a high width-to-depth ratio: the width of the estuarine section tends to decrease rapidly upstream, following an exponential law, and the bottom slopes are generally very mild. In this thesis the long-term morphological evolution of tide-dominated estuaries is investigated through a relatively simple one-dimensional numerical model. When a reflective barrier condition is assigned at the landward boundary, the morphological evolution of an initially horizontal tidal channel is characterized by the formation of a sediment wave that migrates slowly landward until it leads to the emergence of a beach. The bottom profile reaches a dynamical equilibrium configuration, defined as the condition in which the tidally averaged sediment flux vanishes or, alternatively, the bottom elevation attains a constant value. For relatively short and weakly convergent estuaries, the beach forms at the landward end of the channel, due to the reflective barrier chosen as boundary condition, and the equilibrium length coincides with the distance of that boundary from the mouth. When this distance exceeds a threshold value, which decreases for increasing degrees of channel convergence, the beach forms within an internal section of the estuary and the final equilibrium length is much shorter and mainly governed by the convergence length. The final equilibrium length of the channel is found to depend mainly on two parameters, namely the physical length of the channel and the degree of convergence.
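The exponential width law mentioned above is commonly written as W(x) = W0 · exp(-x/Lb), with x the distance landward from the mouth and Lb the convergence length that governs the equilibrium length. A minimal sketch, with symbol names and numerical values chosen for illustration only (they are not taken from the thesis):

```python
import math

def estuary_width(x, width_mouth, convergence_length):
    """Exponentially converging width profile W(x) = W0 * exp(-x / Lb):
    the width decreases rapidly upstream, and Lb sets how fast."""
    return width_mouth * math.exp(-x / convergence_length)

# A convergent channel halves its width every Lb * ln(2) metres upstream.
W0, Lb = 500.0, 2000.0                    # hypothetical values, metres
half_width_distance = Lb * math.log(2)
print(round(estuary_width(half_width_distance, W0, Lb), 1))  # 250.0
```

A small Lb (strong convergence) means the channel narrows quickly, which is why, past the threshold discussed above, the convergence length rather than the physical channel length governs the equilibrium configuration.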
Moreover, if a condition of vanishing sediment flux from the outer sea during the flood phase is imposed, a larger scour in the seaward part of the estuary is observed, though the overall longitudinal bottom profile does not differ much from that corresponding to a sediment influx equal to the equilibrium transport capacity at the mouth. This fixed-banks model is not able to explain the typical funnel shape; furthermore, the bottom slopes obtained with this model are quite large compared with the mild slopes of real tide-dominated estuaries. For these reasons, the problem has been analysed to understand why tidal channels are convergent and to define the conditions under which the exponential law for width variation, so often observed in nature, is reproduced. The long-term evolution of the channel cross-section is investigated by allowing the width to vary with time. The one-dimensional model is extended with a simple way to take bank erosion into account. A strongly simplified approach is adopted, whereby only the effects related to flow and sediment transport processes within the tidal channel are retained, and further ingredients, like the control exerted by tidal flats or the direct effect of sea currents in the outer part of the estuary, are discarded. Lateral erosion is taken into account and computed as a function of bed shear stress, provided it exceeds a threshold value within the cross-section. The effect of different initial and boundary conditions on the widening process of the channel is tested, along with the role of the incoming river discharge. Another problem, somehow analogous to that of the cross-section evolution, is also tackled: a part of a tidal flat dissected by a series of parallel channels is considered, and the response of the system induced by a modification of the depth of the channels is studied.
In particular, the aim is to assess whether increasing the depth of one channel, starting from an equilibrium configuration, causes deposition in the others and thereby induces their closure. Evaluating the morphological effect of a depth variation in one of the channels upon the other channels is quite a relevant task, because the stability of salt marshes and lagoons is intrinsically related to the stability of the hydrodynamic functionality of the channels. A result of this analysis is the determination of a characteristic distance within which the channels mutually influence each other. It is found that this distance scales with the square root of the longitudinal length of the flat; thus, a scale-dependent spacing is expected in tidal networks.
106

Automatic Techniques for the Synthesis and Assisted Deployment of Security Policies in Workflow-based Applications

dos Santos, Daniel Ricardo January 2017 (has links)
Workflows specify a collection of tasks that must be executed under the responsibility or supervision of human users. Workflow management systems and workflow-driven applications need to enforce security policies in the form of access control, specifying which users can execute which tasks, and authorization constraints, such as Separation/Binding of Duty, further restricting the execution of tasks at run-time. Enforcing these policies is crucial to avoid fraud and malicious use, but it may lead to situations where a workflow instance cannot be completed without violating the policy. The Workflow Satisfiability Problem (WSP) asks whether there exists an assignment of users to tasks in a workflow such that every task is executed and the policy is not violated. The run-time version of this problem amounts to answering user requests to execute tasks positively only if the policy is respected and the workflow instance is guaranteed to terminate. The WSP is inherently hard, but solutions to it have a practical application in reconciling business compliance (stating that workflow instances should follow the specified policies) and business continuity (stating that workflow instances should be deadlock-free). Related problems, such as finding execution scenarios that not only satisfy a workflow but also satisfy other properties (e.g., that a workflow instance is still satisfiable even in the absence of users), can be solved at deployment-time to help users design policies and reuse available workflow models. The main contributions of this thesis are three: 1. We present a technique to synthesize monitors capable of solving the run-time version of the WSP, i.e., capable of answering user requests to execute tasks in such a way that the policy is not violated and the workflow instance is guaranteed to terminate. The technique is extended to modular workflow specifications, using components and gluing assertions.
This allows us to compose synthesized monitors, reuse workflow models, and synthesize monitors for large models. 2. We introduce and present techniques to solve a new class of problems called Scenario Finding Problems, i.e., finding execution scenarios that satisfy properties of interest to users. Solutions to these problems can assist customers during the deployment of reusable workflow models with custom authorization policies. 3. We implement the proposed techniques in two tools. Cerberus integrates monitor synthesis, scenario finding, and run-time enforcement into workflow management systems. Aegis recovers workflow models from web applications using process mining, synthesizes monitors, and invokes them at run-time by using a reverse proxy. An extensive experimental evaluation shows the practical applicability of the proposed approaches on realistic and synthetic (for scalability) problem instances.
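The WSP defined in the abstract above can be illustrated with a brute-force search over user-to-task assignments. This is only a didactic sketch of the problem statement, not the monitor-synthesis technique of the thesis (the WSP is NP-hard, so enumeration does not scale); the instance data are invented.

```python
from itertools import product

def solve_wsp(tasks, users, authorized, sod):
    """Brute-force Workflow Satisfiability Problem check: find an
    assignment of users to tasks such that every task is executed by an
    authorized user and every Separation-of-Duty constraint (a pair of
    tasks that must get distinct users) is respected."""
    for assignment in product(users, repeat=len(tasks)):
        plan = dict(zip(tasks, assignment))
        if all(plan[t] in authorized[t] for t in tasks) and \
           all(plan[t1] != plan[t2] for t1, t2 in sod):
            return plan
    return None                      # the instance is unsatisfiable

# Toy instance: three tasks, two users, t1 and t3 must be separated.
tasks = ["t1", "t2", "t3"]
users = ["alice", "bob"]
authorized = {"t1": {"alice"}, "t2": {"alice", "bob"}, "t3": {"alice", "bob"}}
sod = [("t1", "t3")]
print(solve_wsp(tasks, users, authorized, sod))
```

Note how a single authorized user plus one Separation-of-Duty constraint already makes an instance unsatisfiable, which is exactly the compliance-versus-continuity tension the thesis addresses.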
107

Privacy-Aware Risk-Based Access Control Systems

Metoui, Nadia January 2018 (has links)
Modern organizations collect massive amounts of data, both internally (from their employees and processes) and externally (from customers, suppliers, partners). The increasing availability of these large datasets was made possible by growing storage and processing capability. From a technical perspective, organizations are therefore now in a position to exploit these diverse datasets to create new data-driven businesses or to optimize existing processes (real-time customization, predictive analytics, etc.). However, this kind of data often contains very sensitive information that, if leaked or misused, can lead to privacy violations. Privacy is becoming increasingly relevant for organizations and businesses, due to strong regulatory frameworks (e.g., the EU General Data Protection Regulation GDPR, the Health Insurance Portability and Accountability Act HIPAA) and the increasing awareness of citizens about personal data issues. Privacy breaches and failure to meet privacy requirements can have a tremendous impact on companies (e.g., reputation loss, non-compliance fines, legal actions). Privacy violation threats are not exclusively caused by external actors gaining access through security gaps; privacy breaches can also originate from internal actors, sometimes even trusted and authorized ones. As a consequence, most organizations prefer to strongly limit (even internally) the sharing and dissemination of data, thereby making most of the information unavailable to decision-makers and preventing the organization from fully exploiting the power of these new data sources. In order to unlock this potential while controlling the privacy risk, it is necessary to develop novel data sharing and access control mechanisms able to support risk-based decision making and to weigh the advantages of information against privacy considerations.
To achieve this, access control decisions must be based on a (dynamically assessed) estimation of expected costs and benefits compared to the risk, and not (as in traditional access control systems) on a predefined policy that statically defines which accesses are allowed and denied. In risk-based access control, the risk of each access request is estimated; if the risk is lower than a given threshold (possibly related to the trustworthiness of the requester), access is granted, otherwise it is denied. The aim is to be more permissive than traditional access control systems, allowing for a better exploitation of data. Although existing risk-based access control models provide an important step towards better management and exploitation of data, they have a number of drawbacks which limit their effectiveness. In particular, most existing risk-based systems only support binary access decisions: the outcome is “allowed” or “denied”, whereas in real life we often have exceptions based on additional conditions (e.g., “I cannot provide this information, unless you sign the following non-disclosure agreement.” or “I cannot disclose this data, because they contain personally identifiable information, but I can disclose an anonymized version of the data.”). In other words, the system should be able to propose risk mitigation measures that reduce the risk (e.g., disclosing a partial or anonymized version of the requested data) instead of denying risky access requests. Alternatively, it should be able to propose appropriate trust enhancement measures (e.g., stronger authentication), and once they are accepted/fulfilled by the requester, more information can be shared. The aim of this thesis is to propose and validate a novel privacy-enhancing access control approach offering adaptive and fine-grained access control for sensitive datasets. This approach enhances access to data, but it also mitigates privacy threats originated by authorized internal actors.
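The non-binary decision logic argued for above can be sketched as follows. The risk model here (risk scaled by trust, a fixed mitigation factor, and the specific thresholds) is entirely hypothetical and chosen only to show the three-outcome shape — allow, mitigate, deny — rather than the framework actually proposed in the thesis.

```python
def risk_based_decision(privacy_risk, trust_level, threshold=0.5,
                        mitigation_factor=0.6):
    """Sketch of a non-binary risk-based access decision.
    Access is allowed when the risk, discounted by the requester's trust,
    stays below the threshold; otherwise a mitigation measure (e.g.
    anonymization, modelled here as a fixed risk reduction) is proposed
    before falling back to outright denial."""
    effective_risk = privacy_risk * (1.0 - trust_level)
    if effective_risk <= threshold:
        return "allow"
    if effective_risk * mitigation_factor <= threshold:
        return "allow-anonymized"    # mitigation instead of denial
    return "deny"

print(risk_based_decision(0.4, 0.5))   # allow
print(risk_based_decision(0.9, 0.1))   # allow-anonymized
print(risk_based_decision(1.0, 0.0))   # deny
```

The middle branch is the key departure from binary models: a risky request is downgraded (e.g., to an anonymized view of the data) rather than rejected outright.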
More in detail: 1. We demonstrate the relevance and evaluate the impact of authorized actors’ threats. To this aim, we developed a privacy threats identification methodology, EPIC (Evaluating Privacy violation rIsk in Cyber security systems), and applied EPIC in a cybersecurity use case where very sensitive information is used. 2. We present a privacy-aware risk-based access control framework that supports access control in dynamic contexts through trust enhancement mechanisms and privacy risk mitigation strategies. This allows us to strike a balance between the privacy risk and the trustworthiness of the data request: if the privacy risk is too large compared to the trust level, the framework can identify adaptive strategies that decrease the privacy risk (e.g., by removing/obfuscating part of the data through anonymization) and/or increase the trust level (e.g., by asking the requester for additional obligations). 3. We show how the privacy-aware risk-based approach can be integrated into existing access control models such as RBAC and ABAC, and that it can be realized using a declarative policy language with a number of advantages including usability, flexibility, and scalability. 4. We evaluate our approach on several industrially relevant use cases, elaborated to meet the requirements of the industrial partner (SAP) of this industrial doctorate.
108

Distributed Computing for Large-scale Graphs

Guerrieri, Alessio January 2015 (has links)
The last decade has seen increased attention on large-scale data analysis, caused mainly by the availability of new sources of data and the development of programming models that allow their analysis. Since many of these sources can be modeled as graphs, many large-scale graph processing frameworks have been developed, from vertex-centric models such as Pregel to more complex programming models that allow asynchronous computation, can tackle dynamism in the data, and permit the usage of different amounts of resources. This thesis presents theoretical and practical results in the area of distributed large-scale graph analysis by giving an overview of the entire pipeline. Data must first be pre-processed to obtain a graph, which is then partitioned into subgraphs of similar size. To analyze this graph the user must choose a system and a programming model that matches her available resources, the type of data and the class of algorithm to execute. Aside from an overview of all these steps, this research presents three novel contributions. The first main contribution is DFEP, a novel distributed partitioning algorithm that divides the edge set into similarly sized partitions. DFEP can obtain partitions of good quality in only a few iterations. The output of DFEP can then be used by ETSCH, a graph processing framework that uses partitions of edges as the focus of its programming model. ETSCH’s programming model is shown to be flexible and can easily reuse classical sequential graph algorithms as part of its workflow. Implementations of ETSCH in Hadoop, Spark and Akka allow for a comparison of those systems and a discussion of their advantages and disadvantages. The implementation of ETSCH in Akka is by far the fastest and is able to process billion-edge graphs faster than competitors such as GPS, Blogel and Giraph++, while using only a few computing nodes.
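The balance objective behind edge partitioning can be illustrated with a naive greedy stand-in: assign each edge to the currently smallest partition. This only mirrors DFEP's size-balance goal, not its distributed, iterative design or its partition-quality (cut-minimizing) objective; the function name and toy graph are invented for illustration.

```python
def partition_edges(edges, k):
    """Greedy illustration of edge partitioning: each edge goes to the
    currently smallest partition, so partition sizes stay within one
    edge of each other. Real partitioners like DFEP also try to keep
    partitions well-connected, which this sketch ignores."""
    parts = [[] for _ in range(k)]
    for e in edges:
        min(parts, key=len).append(e)   # always fill the smallest bucket
    return parts

# A 4-cycle with one chord, split into two edge partitions.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
parts = partition_edges(edges, 2)
print([len(p) for p in parts])   # [3, 2]
```

Partitioning edges rather than vertices is what lets frameworks like ETSCH hand each worker a subgraph of similar workload even when vertex degrees are highly skewed.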
A final contribution is an application study of graph-centric approaches to word sense induction and disambiguation: from a large set of documents a word graph is constructed and then processed by a graph clustering algorithm, to find documents that refer to the same entities. A novel graph clustering algorithm, named Tovel, uses a diffusion-based approach inspired by the water cycle.
109

Formal Proofs of Security for Privacy-Preserving Blockchains and other Cryptographic Protocols

Longo, Riccardo January 2018 (has links)
Cryptography is used to protect data and communications. The basic tools are cryptographic primitives, whose security and efficiency are widely studied. In real-life applications, however, these primitives are not used individually but are combined inside complex protocols. The aim of this thesis is to analyse various cryptographic protocols and assess their security in a formal way. In chapter 1 the concept of formal proofs of security is introduced, the main categorisation of attack scenarios and types of adversary is presented, and the protocols analysed in the thesis are briefly introduced with some motivation. Chapter 2 presents the security assumptions used in the proofs of the following chapters, distinguishing between the hardness of algebraic problems and the strength of cryptographic primitives. Once these foundations are laid, the first protocols are analysed in chapter 3, where two Attribute-Based Encryption (ABE) schemes are proven secure. First the context and motivation are introduced, presenting cloud encryption settings alongside the tools used to build ABE schemes. Then the first scheme, which introduces multiple authorities in order to improve privacy, is explained in detail and proven secure. Finally the second scheme is presented as a variation of the first, aiming to improve efficiency by performing a round of collaboration between the authorities. The next protocol analysed is a tokenization algorithm for the protection of credit cards. In chapter 4 the advantages of tokenization and the regulations required by the banking industry are presented, and a practical algorithm is proposed and proven secure and compliant with the standard. Chapter 5 focuses on the BIX Protocol, which builds a chain of certificates in order to decentralize the role of certificate authorities.
First the protocol and the structure of the certificates are introduced, then two attack scenarios are presented and the protocol is proven secure in these settings. Finally a viable attack vector is analysed and a mitigation approach is discussed. Chapter 6 presents an original approach to building a public ledger with end-to-end encryption and a one-time-access property, which make it suitable for storing sensitive data. Its security is studied in a variety of attack scenarios, giving proofs based on standard algebraic assumptions. The last protocol, presented in chapter 7, uses a proof-of-stake system to maintain the consistency of subchains built on top of the Bitcoin blockchain, using only standard Bitcoin transactions. Particular emphasis is given to the analysis of the refund policies employed, proving that the naive approach is always ineffective, whereas the chosen policy discourages attackers whose stake falls below a threshold that can be adjusted by varying the protocol parameters.
110

An Organizational Approach to the Polysemy Problem in WordNet

Freihat, Abed Alhakim January 2014 (has links)
Polysemy in WordNet corresponds to various kinds of linguistic phenomena that can be grouped into five classes. One of them is homonymy, which refers to cases where the meanings of a term are unrelated, and three of the classes refer to polysemy cases where the meanings of a term are related: specialization polysemy, metonymy, and metaphoric polysemy. The remaining class is compound noun polysemy. In this thesis, we focus on compound noun polysemy and specialization polysemy. Compound noun polysemy corresponds to the cases where we use the modified noun to refer to a compound noun. Specialization polysemy is a type of related polysemy referring to the cases when a term is used to refer to either a more general or a more specific meaning. Compound noun polysemy and specialization polysemy are considered the main reasons behind the highly polysemous nature of WordNet, which makes WordNet redundant and too fine-grained for natural language processing. Another problem in WordNet is its polysemy representation: WordNet represents polysemous terms by capturing their different meanings at the lexical level, but without giving emphasis to the polysemy classes these terms belong to. The highly polysemous nature and the polysemy representation of WordNet affect its usability as a knowledge representation resource for natural language processing applications. In fact, the polysemy problem in WordNet is a challenging problem for natural language processing applications, especially in the fields of information retrieval and semantic search. To solve this problem, many approaches have been suggested. Although the state-of-the-art approaches each solve the polysemy problem partially, they do not give a general solution for it. In this thesis, we propose a novel approach to solve the compound noun and specialization polysemy problem in WordNet in the case of nouns.
Solving the compound noun polysemy and specialization polysemy problems is an important step that enhances the usability of WordNet as a knowledge representation resource. The proposed approach is not an alternative to the existing approaches; it is a complementary solution to the state-of-the-art approaches, especially the systematic polysemy approaches.
