311

Correlating Easily and Unobtrusively Queried Computer Characteristics to Number and Severity of Vulnerabilities

Mercado, Jonathan M 01 November 2018 (has links)
Cybersecurity has become a top-of-mind concern as the threat landscape expands and organizations continue to undergo digital transformation. As the industry confronts this growth, tools designed to evaluate the security posture of a network must improve to provide better value. Current agent-based and network scanning tools are resource-intensive, expensive, and require thorough testing before implementation to ensure seamless integration. While surfacing specific vulnerability information is imperative to securing network assets, there are ways to predict the security status of a network without taking exact measurements. These methods may quickly, unobtrusively, and cost-effectively inform security professionals as to where the weakest points of the network lie. This thesis proposes a methodology for identifying correlations between host configuration and vulnerability, then specifically examines easily queried characteristics within the Microsoft Windows operating system that may be vulnerability predictors. After taking measurements of forty hosts, it was discovered that there is a strong (r > 0.80) correlation between several metrics and the total number of vulnerabilities as measured by the Tenable Nessus network scanner. Specifically, the total number of open TCP ports (r = 0.82), total number of programs installed (r = 0.90), days since last restart (r = 0.97), and days since last Windows update (r = 0.93) were found to be strong candidates for identifying high-risk machines. A significant correlation was also found when measuring the total number of logged-in users (r = 0.68). Correlations were not as strong when considering subsets of hosts in similar environments. These findings can be used in tooling which will quickly evaluate the security posture of network hosts.
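Correlations like those reported above reduce to a plain Pearson coefficient over per-host measurements. The sketch below uses hypothetical numbers (not the thesis data) and only the standard library; in practice the x-values would come from host queries and the y-values from a scanner report:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical measurements for five hosts: an easily queried metric
# (days since last restart) vs. vulnerabilities found by a scanner.
days_since_restart = [2, 10, 30, 45, 90]
vulnerability_count = [3, 8, 20, 31, 60]

r = pearson_r(days_since_restart, vulnerability_count)  # strongly positive
```

The thesis applies the same statistic across forty hosts and several candidate metrics, flagging those with r > 0.80 as predictors.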
312

Resting State and Task Triple Network Connectivity Profiles in Remitted Depressed Patients Compared with Healthy Volunteers

Lynn, Emma Kathryn 14 December 2021 (has links)
In addition to mood symptoms, major depressive disorder (MDD) is characterized by cognitive impairments that can have detrimental impacts on quality of life and daily function, and have been found to persist into remission. In particular, altered affective cognition (e.g., biased attention to negative stimuli) has been reported in MDD, and may continue into remission. Unfortunately, current pharmacotherapies do not adequately address cognitive dysfunction in acute or remitted MDD. Understanding the neurobiological underpinnings of affective cognitive dysfunction in remitted MDD may help inform the development of new interventions to address this lingering problem and the associated poorer functional outcomes. The triple network model posits that altered functioning of three key networks implicated in normal cognitive function – the default mode network (DMN), central executive network (CEN) and salience network (SN) – underlies cognitive dysfunction in a variety of psychiatric illnesses, including MDD. Though notable exceptions exist, work in acutely depressed MDD patients has found evidence of DMN hyperconnectivity, CEN hypoconnectivity and aberrant SN connectivity both at rest and during the completion of various cognitive tasks. The evidence for triple network connectivity alterations persisting into remission is less robust, and warrants further investigation. Furthermore, there is a paucity of studies examining remitted MDD connectivity during affective tasks. As such, the primary objectives of this thesis were to: 1) compare resting-state and task triple network connectivity profiles in remitted MDD patients (rMDDs) and healthy volunteers (HVs) at rest, during an affective (emotional Stroop [eStroop]) task, and during rest vs. the task, and 2) assess the relationship between DMN and CEN connectivity and measures of daily functioning, quality of life and/or negative, self-relational rumination in the rMDD cohort.
Behaviourally, there were findings of an affective attentional bias and impaired processing speed in the rMDDs vs. HVs, as revealed by a computerized cognitive test battery. However, we found no evidence of DMN hyperconnectivity or CEN hypoconnectivity in the rMDD study sample. We did find evidence of altered intrinsic CEN and CEN-SN connectivity between the rest and task conditions that seemed driven by the rMDD sample, as well as positive CEN-DMN correlations across the entire sample both at rest and during the eStroop task. Surprisingly, we also found higher intrinsic DMN connectivity during the eStroop task vs. at rest across the whole sample. Finally, we found a positive relationship between task-based CEN connectivity and hopeless rumination, and a significant negative relationship between resting state and task-based DMN connectivity and psychosocial dysfunction in the rMDD sample. These findings contribute to our understanding of large-scale intrinsic network connectivity alterations during remitted depression, and their relationship to functional outcomes.
313

Network Interdiction Model on Interdependent Incomplete Network

Xiaodan, Xie 28 September 2020 (has links)
No description available.
314

Quantifying supply chain vulnerability using a multilayered complex network perspective

Viljoen, Nadia M. 02 1900 (has links)
Today's supply chains face increasing volatility on many fronts, from the shop floor, where machines break and suppliers fail, to the boardrooms, where unanticipated price inflation erodes profitability. Turbulence is the new normal. To remain competitive and weather these (daily) storms, supply chains need to move away from an efficiency mindset towards a resilience mindset. For a little more than a decade, industry and academia have awakened to this reality. Academic literature and case studies show that there is no longer a shortage of resilience strategies and designs. Unfortunately, industry still lacks the tools with which to assess and evaluate the effectiveness of such strategies and designs. Without the ability to quantify the benefit it is impossible to motivate the cost. This thesis adds one piece to the puzzle of quantifying supply chain vulnerability. Specifically, it focussed on supply chains within urban areas. It addresses the question: "How does a supply chain's network design (internal configuration) and its dependence on the underlying road network (external circumstances) make it more or less vulnerable to disruptions of the road network?" Multilayered Complex Network Theory (CNT) held promise as a modelling approach that could capture the complexity of the dependence between a logical supply chain network and the physical road network that underpins it. This approach addressed two research gaps in complex network theory applications. In the supply chain arena CNT applications have reaped many benefits, but the majority of studies regarded single-layer networks that model only supply chain relations. No studies were found in which the dependence of supply chain layers on underlying physical infrastructure was modelled in a multilayered manner. Road network applications offered many more multilayered applications, but these primarily focussed on passenger transport, not freight transport.
The first artefact developed in the thesis was a multilayered complex network formulation representing a logical (supply chain) layer placed on a physical (road infrastructure) layer. The individual layers had predefined network characteristics and on their own could not hint at the inherent vulnerability that the system as a whole might have. From the multilayered formulation, the collection of shortest paths emerged. This is the collection of all shortest path alternatives within a network. The collection of shortest paths is the unique fingerprint of each multilayered network instance. The key to understanding vulnerability lies within the characteristics of the collection of shortest paths. Three standard supply chain network archetypes were defined, namely the Fully Connected (FC), Single Hub (SH) and Double Hub (DH) archetypes. A sample of 500 theoretical multilayered network instances was generated for each archetype. These theoretical instances were subjected to three link-based progressive targeted disruption simulations to study the vulnerability characteristics of the collection of shortest paths. Two of the simulations used relative link betweenness to prioritise the disruptions while the third used the concept of network skeletons as captured by link salience. The results from these simulations showed that the link betweenness strategies were far more effective than the link salience strategy. From these results three aspects of vulnerability were identified. Redundancy quantifies the number of alternative shortest paths available to an instance. Overlap measures to what degree the shortest path sets of an instance overlap and have road segments in common. Efficiency step-change is a measure of the magnitude of the "shock" absorbed by the shortest paths of an instance during a disruption. For each of these aspects one or more metrics were defined. This suite of vulnerability metrics is the second artefact produced by the thesis.
The design of the artefacts themselves, although novel, was not considered research. It is the insights derived during analysis of the artefacts' performance that contribute to the body of knowledge. Link-based progressive random disturbance simulations were used to assess the ability of the vulnerability metrics to quantify supply chain vulnerability. It was found that none of the defined vulnerability aspects are good stand-alone predictors of vulnerability. The multilayered nature and random disturbance protocol result in vulnerability being more multi-faceted than initially imagined. Nonetheless, the formulation of the multilayered network proved useful and intuitive, and even though the vulnerability metrics fail as predictors they still succeed in capturing shortest path phenomena that would lead to vulnerability under non-random protocols. To validate the findings from the theoretical instances, link-based random disturbance simulations were executed on 191 case study instances. These instances were extracted from real-life data in three urban areas in South Africa, namely Gauteng Province (GT), City of Cape Town (CoCT) and eThekwini Metropolitan Municipality (ET). The case study instances showed marked deviations from the assumptions underlying the theoretical instances. Despite these differences, the multilayered formulation still enables the quantification of the relationship between supply chain structure and road infrastructure. The performance of the vulnerability metrics in the case study corroborates the findings from the theoretical instances. Although the suite of vulnerability metrics was unsuccessful in quantifying or predicting vulnerability in both the theoretical and case study instances, the rationale behind their development is sound. Future work that will result in more effective metrics is outlined in this thesis. On the one hand the development of a more realistic disruption strategy is suggested.
Road network disruptions are neither completely random nor specifically targeted. Important segments with greater traffic loads are more likely to be disrupted, but the reality is that disruptions such as accidents, equipment failure or road maintenance could really occur anywhere on the network. A more realistic disruption strategy would lie somewhere on the continuum between targeted and random disruptions. Other future work suggests the refinement of both artefacts by incorporating link weights in both the logical and physical layers. An unanticipated finding from this thesis is that future research in the field may be expedited if theory-building emanates from real-life empirical networks as opposed to theoretically generated networks. Expanding the scope of the case study, characterising the true network archetypes found in practice and increasing the number of case study samples is a high priority for future work. / Thesis (PhD)--University of Pretoria, 2018. / National Research Foundation of South Africa (Grant UID: 105519). Partial funding of doctoral research. / Industrial and Systems Engineering / PhD / Unrestricted
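The betweenness-prioritised disruption idea in this abstract can be illustrated on a toy two-layer instance. The sketch below is an illustration under simplifying assumptions, not the thesis artefact: supply-chain origin-destination pairs sit on a small grid "road network", the link carried by the most shortest paths (a betweenness proxy) is removed each round, and surviving connectivity is recorded:

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path (fewest hops) from src to dst, or None if disconnected."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def targeted_disruption(adj, od_pairs, rounds):
    """Each round, remove the link used by the most shortest paths and
    record how many origin-destination pairs remain connected."""
    connected_per_round = []
    for _ in range(rounds):
        usage = {}
        for s, t in od_pairs:
            path = bfs_path(adj, s, t)
            if path:
                for a, b in zip(path, path[1:]):
                    e = frozenset((a, b))
                    usage[e] = usage.get(e, 0) + 1
        connected_per_round.append(
            sum(1 for s, t in od_pairs if bfs_path(adj, s, t)))
        if not usage:
            break
        a, b = tuple(max(usage, key=usage.get))
        adj[a].discard(b)
        adj[b].discard(a)
    return connected_per_round

def grid_adj(n):
    """Hypothetical n x n grid standing in for an urban road network."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < n and nj < n:
                    adj[(i, j)].add((ni, nj))
                    adj[(ni, nj)].add((i, j))
    return adj

adj = grid_adj(4)
od = [((0, 0), (3, 3)), ((0, 3), (3, 0)), ((0, 0), (0, 3))]
history = targeted_disruption(adj, od, rounds=6)
```

Because links are only ever removed, the connectivity count can never increase between rounds; the thesis runs analogous simulations at scale and with salience-based prioritisation as a third strategy.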
315

Generic Properties of Chemical Networks: Artificial Chemistry Based on Graph Rewriting

Benkö, Gil, Flamm, Christoph, Stadler, Peter F. 06 November 2018 (has links)
We use a Toy Model of chemistry that represents molecules in terms of usual structural formulae to generate large chemical reaction networks. An extremely simplified quantum mechanical energy calculation and a straightforward implementation of reactions as graph rewritings ensure both transparency and closeness to chemical reality, both conditions that are necessary for the analysis of generic properties of large reaction networks. We show that some chemical network graphs, e.g., those generated by repetitive Diels-Alder reactions, have the small-world property and exhibit a scale-free degree distribution. On the other hand, the Formose reaction does not fit this paradigm well.
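The small-world property combines short average path lengths with non-trivial clustering. As a rough illustration on a hypothetical hub-dominated toy graph (not the paper's reaction networks), both quantities can be computed directly:

```python
from collections import deque
from statistics import mean

def avg_path_length(adj):
    """Mean shortest-path length over all ordered connected node pairs."""
    lengths = []
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        lengths += [d for node, d in dist.items() if node != src]
    return mean(lengths)

def clustering(adj):
    """Mean local clustering coefficient (fraction of linked neighbours)."""
    coeffs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        coeffs.append(2 * links / (k * (k - 1)))
    return mean(coeffs)

# Hypothetical "reaction network": one highly connected species (the hub)
# plus a chain of neighbours -- short paths and high clustering, as in
# small-world graphs.
adj = {i: set() for i in range(12)}
for i in range(1, 12):
    adj[0].add(i); adj[i].add(0)          # hub edges
for i in range(1, 11):
    adj[i].add(i + 1); adj[i + 1].add(i)  # chain edges

L = avg_path_length(adj)  # short: no pair is more than two hops apart
C = clustering(adj)       # high: neighbours of a node tend to be linked
```

A random graph of the same size and density would have a similarly small L but a much smaller C; that gap is the usual small-world diagnostic.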
316

Network Properties of Optically Linked Planetary Satellite Systems

Pennington, Nicholas 01 May 2020 (has links)
With plans for advancing into the rest of the solar system in the coming decades, an understanding of how interlinked satellite systems behave as a network will be essential. The relatively recent development of optics as a method of space communication means that inter-satellite networks are more feasible than ever. That said, there are currently no analyses which take into account a planet-wide, largely uncoordinated, optically linked satellite network. To provide a look at the properties of such a network, the movement and connections of Earth's currently active satellites were simulated based on real-world data, and their networks modeled via graphs. Ultimately, it was found that many properties of such a network are periodic, fluctuating in sync with the orbital period of low-Earth-orbit satellites. This, among other data, suggests that the peaks of these fluctuations are caused by a meeting of satellites near the north and south poles.
317

Low-bit Quantization-aware Training of Spiking Neural Networks

Shymyrbay, Ayan 04 1900 (has links)
Deep neural networks are proven to be highly effective tools in various domains, yet their computational and memory costs restrict them from being widely deployed on portable devices. The recent rapid increase of edge computing devices has led to an active search for techniques to address the above-mentioned limitations of machine learning frameworks. The quantization of artificial neural networks (ANNs), which converts the full-precision synaptic weights into low-bit versions, emerged as one of the solutions. At the same time, spiking neural networks (SNNs) have become an attractive alternative to conventional ANNs due to their temporal information processing capability, energy efficiency, and high biological plausibility. Despite being driven by the same motivation, the simultaneous utilization of both concepts has not been fully studied. Therefore, this thesis work aims to bridge the gap between recent progress in quantized neural networks and SNNs. It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions, exploited in low-bit weight quantization in SNNs. The given quantization function demonstrates state-of-the-art performance on four popular benchmarks, CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST, for binary networks (64.05%, 95.45%, 68.71%, and 99.36% respectively) with small accuracy drops (8.03%, 1.18%, 3.47%, and 0.17% respectively) and up to 32x memory savings, outperforming the existing methods.
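The idea of a quantization function built as a linear combination of sigmoids can be sketched as follows. This is an illustrative surrogate under assumed parameters (level count, temperature), not the thesis's exact formulation: a sum of shifted, scaled sigmoids approximates the staircase of a uniform quantizer on [0, 1], while staying differentiable for gradient-based training.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_quantizer(w, n_levels=4, temperature=20.0):
    """Differentiable surrogate for an n_levels-step uniform quantizer:
    each sigmoid contributes one step of height 1/(n_levels - 1), with
    the transition at the midpoint between adjacent quantization levels.
    A larger temperature sharpens the steps toward a hard staircase."""
    step = 1.0 / (n_levels - 1)
    return sum(step * sigmoid(temperature * (w - (k + 0.5) * step))
               for k in range(n_levels - 1))

# At a high temperature, 0.6 snaps to the nearest 2-bit level, 2/3.
q = sigmoid_quantizer(0.6, n_levels=4, temperature=100.0)
```

A common training recipe with such surrogates is to anneal the temperature upward over epochs, so the network trains through a smooth function and ends close to hard low-bit quantization at inference.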
318

A structured approach to network security protocol implementation

Tobler, Benjamin January 2005 (has links)
The implementation of network security protocols has not received the same level of attention in the literature as their analysis. Security protocol analysis has successfully used inference logics, like GNY and BAN, and attack analysis, employing state space examination techniques such as model checking and strand spaces, to verify security protocols. Tools, such as the multi-dimensional analysis environment SPEAR II, exist to help automate security protocol specification and verification, however actual implementation of the specification in executable code is a task still largely left to human programmers. Many vulnerabilities have been found in implementations of security protocols such as SSL, PPTP and RADIUS that are incorporated into widely used operating system software, web servers and other network aware applications. While some of these vulnerabilities may be a result of flawed or unclear specifications, many are the result of the failure of programmers to correctly interpret and implement them. The above indicates a gap between security protocol specifications and their concrete implementations, in that there are methodologies and tools that have been established for developing the former, but not the latter. This dissertation proposes an approach to bridging this gap, describes our implementation of that approach and attempts to evaluate its success. The approach is three-fold, providing different measures to improve current ad-hoc implementation approaches: 1. From Informal to Formal Specifications: If a security protocol has been specified using informal standard notation, it can be converted, using automatic translation, to a formal specification language with well defined semantics. The formal protocol specification can then be analysed using formal techniques, to verify that the desired security properties hold. The precise specification of the protocol behaviour further serves to facilitate the concrete implementation of the protocol in code.
319

Modèles spatiaux pour la planification cellulaire / Spatial models for cellular network planning

Vu, Thanh Tung 20 September 2012 (has links)
In this thesis, we enrich and apply the theory of spatial Poisson processes to solve problems arising in the design and deployment of cellular networks. The thesis has two main parts. The first part is devoted to solving some dimensioning and coverage problems for cellular networks. We compute the overload probability of OFDMA systems using concentration inequalities and Edgeworth expansions, for which we prove explicit error bounds, and we apply the result to a dimensioning problem. We also compute the outage probability and the handover rate for a typical user. The second part is devoted to the study of different models for energy consumption in cellular networks. In the first model, the initial locations of users form a Poisson point process and each user is associated with an ON-OFF activity process. In the second model, user arrivals form a Poisson process in space and time, a dynamic known as Glauber dynamics. We also study the impact of user mobility by assuming that users move randomly during their sojourn. In all these settings, we are interested in the distribution of the energy consumed by a base station. This energy is divided into two parts: the additive part and the broadcast part. We obtain analytical expressions for the moments of the additive part as well as the mean and variance of the total consumed energy. We find an error bound for the Gaussian approximation of the additive part. We prove that user mobility has a positive impact on energy consumption: it neither increases nor decreases the mean consumed energy, but reduces its variance to zero in the high-mobility regime. We also characterize the rate of convergence as a function of user speed. / Nowadays, cellular technology is almost everywhere. It has had an explosive success over the last two decades and the volume of traffic will still increase in the near future. For this reason, it is also regarded as one cause of worldwide energy consumption, with high impact on carbon dioxide emission. On the other hand, new mathematical tools have enabled the conception of new models for cellular networks: one of these tools is stochastic geometry, or more particularly the spatial Poisson point process. In the last decade, researchers have successfully used stochastic geometry to quantify the outage probability, throughput or coverage of cellular networks by treating deployments of mobile stations or (and) base stations as Poisson point processes on a plane. These results also take into account the impact of mobility on the performance of such networks. In this thesis, we apply the theory of Poisson point processes to solve some problems of cellular networks; in particular, we analyze the energy consumption of cellular networks. This thesis has two main parts. The first part deals with some dimensioning and coverage problems in cellular networks. We use stochastic analysis to provide bounds for the overload probability of OFDMA systems thanks to concentration inequalities, and we apply them to solve a dimensioning problem. We also compute the outage probability and handover probability of a typical user. The second part introduces different models for the energy consumption of cellular networks. In the first model, the initial locations of users form a Poisson point process and each user is associated with an ON-OFF activity process. In the second model, user arrivals form a time-space Poisson point process. We also study the impact of user mobility by assuming that users move randomly during their sojourn. We focus on the distribution of the energy consumed by a base station. This consumed energy is divided into an additive part and a broadcast part. We obtain analytical expressions for the moments of the additive part as well as the mean and variance of the consumed energy. We find an error bound for the Gaussian approximation of the additive part. We prove that the mobility of users has a positive impact on energy consumption: it neither increases nor decreases the consumed energy on average, but reduces its variance to zero in the high-mobility regime. We also characterize the convergence rate as a function of user speed.
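The mean of an additive functional over a Poisson point process follows from Campbell's formula, E[Σ f(x)] = λ ∫ f(x) dx for intensity λ. The sketch below uses illustrative parameters and a stand-in path-loss cost (not the thesis's model) to check a Monte Carlo estimate of the additive part against that formula:

```python
import math
import random

random.seed(0)

def poisson(mu):
    """Knuth's algorithm for sampling a Poisson random variable."""
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def sample_ppp_disk(lam, radius):
    """Homogeneous Poisson point process of intensity lam on a disk:
    Poisson-distributed point count, each point uniform in the disk."""
    n = poisson(lam * math.pi * radius ** 2)
    pts = []
    for _ in range(n):
        r = radius * math.sqrt(random.random())  # inverse-transform radius
        theta = 2 * math.pi * random.random()
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def additive_energy(points, p0=1.0, alpha=2.0, r_min=1.0):
    """Power needed to serve each user under power-law path loss, summed
    over users -- a stand-in for the 'additive part' of the consumption."""
    return sum(p0 * max(math.hypot(x, y), r_min) ** alpha for x, y in points)

lam, radius = 0.05, 10.0
samples = [additive_energy(sample_ppp_disk(lam, radius)) for _ in range(2000)]
mc_mean = sum(samples) / len(samples)

# Campbell's formula for f(x) = max(|x|, 1)^2 integrated over the disk:
# lam * 2*pi * (1/2 + (R^4 - 1)/4)
campbell = lam * 2 * math.pi * (0.5 + (radius ** 4 - 1) / 4)
```

Higher moments and Gaussian error bounds of the kind proved in the thesis require the corresponding higher-order Campbell formulae; the simulation here only validates the first moment.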
320

Simulation study of an adaptive routing technique for packet-switched communication networks

Fuchs, Hanoch January 1974 (has links)
No description available.
