961

Average Shortest Path Length in a Novel Small-World Network

Allen, Andrea J., January 2017 (has links)
No description available.
962

Local K-Core Algorithm in Complex Networks

Lu, Chen 21 October 2013 (has links)
No description available.
963

2-arc transitive polygonal graphs of large girth and valency

Swartz, Eric Allen 02 September 2009 (has links)
No description available.
964

Graph theory applications in the energy sector : From the perspective of electric utility companies

Espinosa, Kristofer, Vu, Tam January 2020 (has links)
Graph theory is the mathematical study of objects and their pairwise relations, known as nodes and edges, respectively. The birth of graph theory is often considered to have taken place in 1736, when Leonhard Euler tried to solve a problem involving the seven bridges of Königsberg in Prussia. In more recent times, graphs have caught the attention of companies from many industries due to their power in modelling and analysing large networks. This thesis investigates the use of graph theory in the energy sector for a utility company, in particular Fortum, whose activities consist of, but are not limited to, the production and distribution of electricity and heat. The output of the thesis is a wide overview of graph-theoretic concepts and their applications, both in general technical contexts and in the energy sector in particular, as well as an evaluation of energy-related use cases in which some concepts are put to deeper analysis. The use case chosen within the scope of this thesis is feature selection for electricity price forecasting. Feature selection is a process for reducing the number of features, also known as input variables, typically applied before a regression model is built to avoid overfitting and to increase model interpretability. Five graph-based feature selection methods with different points of view are studied. Experiments are conducted on realistic data sets with many features to verify the legitimacy of the methods. One of the data sets is owned by Fortum and is used for forecasting the electricity price, among other important quantities. The obtained results look promising according to several evaluation metrics and can be used by Fortum as a support tool for developing prediction models. In general, a utility company can likely take advantage of graph theory in many ways and add value to its business with enriched mathematical knowledge.
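The abstract does not spell out the five graph-based selection methods; as a hedged illustration of the general idea only, the sketch below builds a feature-correlation graph and keeps one representative feature per connected component of highly correlated features. All names, thresholds, and data are hypothetical and not taken from the thesis.

```python
# Hypothetical illustration (not one of the thesis's five methods): build a
# feature-correlation graph and keep one representative feature per connected
# component of highly correlated features.
import numpy as np
import networkx as nx

def correlation_graph_selection(X, threshold=0.95):
    """X: (n_samples, n_features) array. Returns indices of retained features."""
    corr = np.abs(np.corrcoef(X, rowvar=False))      # |pairwise correlation|
    n_features = corr.shape[0]

    g = nx.Graph()
    g.add_nodes_from(range(n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            if corr[i, j] >= threshold:              # edge marks a redundant pair
                g.add_edge(i, j)

    # One arbitrary representative per component of mutually redundant features.
    return sorted(min(component) for component in nx.connected_components(g))

# Toy data: the fourth feature is nearly a copy of the first, so it is dropped.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
X = np.column_stack([base, base[:, 0] + 0.01 * rng.normal(size=200)])
print(correlation_graph_selection(X))                # -> [0, 1, 2]
```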
965

COMBINATORIAL OPTIMIZATION APPROACHES TO DISCRETE PROBLEMS

LIU, MIN JING 10 1900 (has links)
As stressed by the Society for Industrial and Applied Mathematics (SIAM), applied mathematics, in partnership with computational science, is essential in solving many real-world problems. Combinatorial optimization focuses on problems arising from discrete structures such as graphs and polyhedra. This thesis deals with extremal graphs and strings and focuses on two problems: Erdos' problem on the multiplicities of complete subgraphs and the maximum number of distinct squares in a string.

The first part of the thesis deals with strengthening the bounds for the minimum proportion of monochromatic t-cliques and t-cocliques over all 2-colourings of the edges of the complete graph on n vertices. Denote by k_t(G) the number of cliques of order t in a graph G. Let k_t(n) = min{k_t(G) + k_t(\overline{G})}, where \overline{G} denotes the complement of G of order n. Let c_t(n) = k_t(n) / \tbinom{n}{t} and let c_t be the limit of c_t(n) as n goes to infinity. A 1962 conjecture of Erdos stating that c_t = 2^{1-\tbinom{t}{2}} was disproved by Thomason in 1989 for all t > 3. Tighter counterexamples have been constructed by Jagger, Stovicek and Thomason in 1996, by Thomason for t < 7 in 1997, and by Franek for t = 6 in 2002. We present a computational framework to investigate tighter upper bounds for small t, yielding the following improved upper bounds for t = 6, 7 and 8: c_6 \leq 0.7445 \times 2^{1-\tbinom{6}{2}}, c_7 \leq 0.6869 \times 2^{1-\tbinom{7}{2}}, and c_8 \leq 0.7002 \times 2^{1-\tbinom{8}{2}}. The constructions are based on a large but highly regular variant of Cayley graphs for which the number of cliques and cocliques can be expressed in closed form. Considering the quantity e_t = 2^{\tbinom{t}{2}-1} c_t, the new upper bound of 0.687 for e_7 is the first bound for any e_t smaller than the lower bound of 0.695 for e_4 due to Giraud in 1979.

The second part of the thesis deals with extremal periodicities in strings: we consider the problem of the maximum number of distinct squares in a string. The importance of treating both the length n and the alphabet size d as key variables is stressed. Let a (d,n)-string denote a string of length n with exactly d distinct symbols. We investigate the function \sigma_d(n) = max{ s(x) : x is a (d,n)-string }, where s(x) denotes the number of distinct primitively rooted squares in a (d,n)-string x. We discuss a computational framework for computing \sigma_d(n) based on the notion of density and exploiting the tightness of the available lower bound. The obtained computational results substantiate the hypothesized upper bound of n - d for \sigma_d(n). The structural similarities with the approach used to investigate the Hirsch bound for the diameter of a polytope of dimension d with n facets are underlined. For example, the role played by (d,2d)-polytopes was presented in 1967 by Klee and Walkup, who showed the equivalence between the Hirsch conjecture and the d-step conjecture. / Doctor of Philosophy (PhD)
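As a small worked illustration of the quantities defined above (assumed helper code, not part of the thesis), the snippet below counts monochromatic t-cliques in one random 2-colouring of the edges of K_n by brute force and compares the resulting proportion with the conjectured value 2^{1-\tbinom{t}{2}}.

```python
# Brute-force illustration of k_t(G) + k_t(complement(G)) for one random
# 2-colouring of K_n's edges, and the corresponding proportion of t-subsets.
from itertools import combinations
from math import comb
import random

def monochromatic_cliques(n, t, colour):
    """Count t-subsets of {0, ..., n-1} whose edges all share one colour."""
    count = 0
    for subset in combinations(range(n), t):
        edge_colours = {colour(i, j) for i, j in combinations(subset, 2)}
        if len(edge_colours) == 1:          # all edges red or all blue
            count += 1
    return count

random.seed(1)
n, t = 12, 4
colouring = {e: random.randint(0, 1) for e in combinations(range(n), 2)}
mono = monochromatic_cliques(n, t, lambda i, j: colouring[(i, j)])
print(mono / comb(n, t), 2 ** (1 - comb(t, 2)))   # observed proportion vs. 2^{1-C(t,2)}
```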
966

Resting-State Functional Brain Networks in Bipolar Spectrum Disorder: A Graph Theoretical Investigation

Black, Chelsea Lynn January 2016 (has links)
Neurobiological theories of bipolar spectrum disorder (BSD) propose that the emotional dysregulation characteristic of BSD stems from disrupted prefrontal control over subcortical limbic structures (Strakowski et al., 2012; Depue & Iacono, 1989). However, existing neuroimaging research on functional connectivity between frontal and limbic brain regions remains inconclusive, and is unable to adequately characterize global functional network dynamics. Graph theoretical analysis provides a framework for understanding the local and global connections of the brain and comparing these connections between groups (Sporns et al., 2004). The purpose of this study was to investigate resting state functional connectivity in individuals at low and high risk for BSD based on moderate versus high reward sensitivity, both with and without a BSD diagnosis, using graph theoretical network analysis. Results demonstrated decreased connectivity in a cognitive control region (dorsolateral prefrontal cortex), but increased connectivity of a brain region involved in the detection and processing of reward (bilateral orbitofrontal cortex), among participants at high risk for BSD. Participants with BSD showed increased inter-module connectivity of the dorsal anterior cingulate cortex (ACC). Reward sensitivity was associated with decreased global and local efficiency, and interacted with BSD risk group status to predict inter-module connectivity. Findings are discussed in relation to neurobiological theories of BSD. / Psychology
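As a rough, hedged sketch of the kind of graph metrics named in this abstract (global and local efficiency, module structure), the snippet below thresholds a toy symmetric "connectivity" matrix into a graph with networkx; the actual fMRI preprocessing, parcellation, and statistical modelling of the study are not reproduced here.

```python
# Toy example of graph metrics used in resting-state network analyses; the
# "connectivity" matrix here is random and purely illustrative.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(42)
n_regions = 20
conn = np.abs(rng.normal(size=(n_regions, n_regions)))
conn = (conn + conn.T) / 2                    # symmetric "connectivity" matrix
np.fill_diagonal(conn, 0)

threshold = np.percentile(conn, 80)           # keep roughly the strongest 20% of edges
g = nx.from_numpy_array((conn >= threshold).astype(int))

print("global efficiency:", nx.global_efficiency(g))
print("local efficiency:", nx.local_efficiency(g))
print("number of modules:", len(greedy_modularity_communities(g)))
```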
967

Multicast Communication in Grid Computing Networks with Background Traffic

Kouvatsos, Demetres D., Mkwawa, I.M. January 2003 (has links)
Grid computing is a computational concept based on an infrastructure that integrates and coordinates the use of high-end computers, networks, databases and scientific instruments owned and managed by several organisations. It involves large amounts of data and computing, which require secure and reliable resource sharing across organisational domains. Despite its orientation towards high computing performance, communication delays between grid computing nodes are a big hurdle due to the geographical separation in a realistic grid computing environment. Communication schemes such as broadcasting, multicasting and routing should, therefore, take communication delay into consideration. Designing such communication schemes in a grid computing environment poses a great challenge due to the arbitrary nature of its topology. In this context, a heuristic algorithm for multicast communication is proposed for grid computing networks with finite capacity and bursty background traffic. The scheme facilitates inter-node communication for grid computing networks and is applicable to a single-port mode of message-passing communication. The scheme utilises a queue-by-queue decomposition algorithm for arbitrary open queueing network models, based on the principle of maximum entropy, in conjunction with an information-theoretic decomposition criterion and graph-theoretic concepts. Evidence based on empirical studies indicates the suitability of the scheme for achieving an optimal multicast communication cost, subject to system decomposition constraints.
968

Hardware Architectures for Software Security

Edmison, Joshua Nathaniel 20 October 2006 (has links)
The need for hardware-based software protection stems primarily from the increasing value of software coupled with the inability to trust software that utilizes or manages shared resources. By correctly utilizing security functions in hardware, trust can be removed from software. Existing hardware-based software protection solutions generally suffer from utilization of trusted software, lack of implementation, and/or extreme measures such as processor redesign. In contrast, the research outlined in this document proposes that substantial, hardware-based software protection can be achieved, without trusting software or redesigning the processor, by augmenting existing processors with security management hardware placed outside of the processor boundary. Benefits of this approach include the ability to add security features to nearly any processor, update security features without redesigning the processor, and provide maximum transparency to the software development and distribution processes. The major contributions of this research include the augmentation methodology, design principles, and a graph-based method for analyzing hardware-based security systems. / Ph. D.
969

Resource Allocation in Cellular Networks with Coexisting Femtocells and Macrocells

Shi, Yongsheng 18 January 2011 (has links)
Over the last decade, cellular networks have grown rapidly from circuit-switched voice-only networks to IP-based data-dominant networks, embracing not only traditional mobile phones, but also smartphones and mobile computers. The ever-increasing demands for reliable and high-speed data services have challenged the capacity and coverage of cellular networks. Research and development on femtocells seeks to provide a solution to fill coverage holes and to increase the network capacity to accommodate more mobile terminals and applications that require higher bandwidth. Among the challenges associated with introducing femtocells in existing cellular networks, interference management and resource allocation are critical. In this dissertation, we address fundamental aspects of resource allocation for cellular networks with coexisting femtocells and macrocells on the downlink side, addressing questions such as: How many additional resource blocks are required to add femtocells into the current cellular system? What is the best way to reuse resources between femtocells and macrocells? How can we efficiently assign limited resources to network users? We develop an analytical model of resource allocation based on random graphs. In this model, arbitrarily chosen communication links interfere with each other with a certain probability. Using this model, we establish asymptotic bounds on the minimum number of resource blocks required to make interference-free resource assignments for all the users in the network. We assess these bounds using a simple greedy resource allocation algorithm to demonstrate that the bounds are reasonable in finite networks of plausible size. By applying the bounds, we establish the expected impact of femtocell networks on macrocell resource allocation under a variety of interference scenarios. We proceed to compare two reuse schemes, termed shared reuse and split reuse, using three social welfare functions, denoted utilitarian fitness, egalitarian fitness, and proportionally fair fitness. The optimal resource split points, which separate resource access by femtocells and macrocells, are derived with respect to the above fitness functions. A set of simple greedy resource allocation algorithms is developed to verify our analysis and compare fitness values of the two reuse schemes under various network scenarios. We use the obtained results to assess the efficiency loss associated with split reuse, as an aid to determining whether resource allocators should use the simpler split reuse scheme or attempt to tackle the complexity and overhead associated with shared reuse. Due to the complexity of the proportionally fair fitness function, optimal resource allocation for cellular networks with femtocells and macrocells is difficult to obtain. We develop a genetic algorithm-based centralized resource allocation algorithm to yield suboptimal solutions for such a problem. The results from the genetic algorithm are used to further assess the performance loss of split reuse and provide a baseline suboptimal resource allocation. Two distributed algorithms are then proposed to give a practical solution to the resource allocation problem. One algorithm is designed for a case with no communication between base stations, and another is designed to exploit the sharing of information between base stations.
The numerical results from these distributed algorithms are then compared against those obtained by the genetic algorithm, and the performance is found to be satisfactory, typically falling within 8% of the optimum social welfare found via the genetic algorithm. The capability of the distributed algorithms to adapt to network changes is also assessed and the results are promising. All of the work described thus far is carried out under a protocol model in which interference between two links is a binary condition. Though this model makes the problem more analytically tractable, it lacks the ability to reflect additive interference as in the SINR model. Thus, in the final part of our work, we apply conflict-free resource allocations from our distributed algorithms to simulated networks and examine the allocations under the SINR model to evaluate feasibility. This evaluation study confirms that the protocol-model-based algorithms, with a small adjustment, offer reasonable performance even under the more realistic SINR model. This work was supported by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice under Award No. 2005-IJ-CX-K017 and the National Science Foundation under Grant No. 0448131. Any opinions, findings, and conclusions or recommendations expressed in this dissertation are those of the author and do not necessarily reflect the views of the National Institute of Justice or the National Science Foundation. The NSF/TEKES Wireless Research Exchange Program also contributed to this work by funding a summer study. / Ph. D.
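As a minimal, hedged sketch of the protocol-model setting described above (not the dissertation's actual algorithms), the snippet below generates a random interference graph in which any two links interfere with a given probability and assigns resource blocks by first-fit greedy colouring; all names and parameters are illustrative.

```python
# Greedy (first-fit) resource-block assignment on a random interference graph:
# each link gets the lowest-numbered block not used by an interfering neighbour.
import random

def greedy_resource_blocks(n_links, p_interfere, seed=0):
    rng = random.Random(seed)
    neighbours = {link: set() for link in range(n_links)}
    for u in range(n_links):
        for v in range(u + 1, n_links):
            if rng.random() < p_interfere:     # links u and v interfere
                neighbours[u].add(v)
                neighbours[v].add(u)

    assignment = {}
    for link in range(n_links):                # first-fit greedy colouring
        used = {assignment[nb] for nb in neighbours[link] if nb in assignment}
        block = 0
        while block in used:
            block += 1
        assignment[link] = block
    return max(assignment.values()) + 1        # number of resource blocks used

# Rough count of blocks needed for 200 links that pairwise interfere w.p. 0.1.
print(greedy_resource_blocks(n_links=200, p_interfere=0.1))
```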
970

Applications of Combinatorial Graph Theory to the Classical and Post-Quantum Security Analysis of Memory-Hard Functions and Proofs of Sequential Work

Seunghoon Lee (18431271) 26 April 2024 (has links)
Combinatorial graph theory is an essential tool in the design and analysis of cryptographic primitives such as Memory-Hard Functions (MHFs) and Proofs of Sequential Work (PoSWs). MHFs are used to design egalitarian Proofs of Work and to help protect low-entropy secrets such as user passwords against brute-force attacks in password hashing. A PoSW is a protocol for proving that one spent significant sequential computation work to validate some statement. PoSWs have many applications, including time-stamping, blockchain design, and universally verifiable CPU benchmarks. Prior work has used combinatorial properties of graphs to construct provably secure MHFs and PoSWs. However, some open problems still exist, such as improving security bounds for MHFs, finding approximation algorithms for measuring their memory hardness, and analyzing the post-quantum security of MHFs and PoSWs. This dissertation addresses these challenges in the security analysis of MHFs and PoSWs using combinatorial graph theory.

We first improve the understanding of the classical security of MHFs in the following ways. (1) We present improved security bounds for MHF candidates such as Argon2i and DRSample under plausible graph-theoretic conjectures. (2) We prove that it is Unique Games-hard to approximate the cumulative pebbling complexity of a directed acyclic graph, which is an important metric to understand the memory-hardness of data-independent MHFs. (3) We provide the first explicit construction of extremely depth-robust graphs with small indegree. Here, (extreme) depth-robustness is a crucial combinatorial tool to construct secure MHFs and PoSWs. (4) We build a new family of graphs that achieves better provable parameters for concrete depth-robustness.

Second, as we progress toward developing quantum computers, we initiate the post-quantum security analysis of MHFs and PoSWs. Specifically, we make the following contributions. (1) We introduce the parallel reversible pebbling game, which captures additional restrictions in quantum computing. We use combinatorial graph theory as a tool to analyze the space-time complexity and the cumulative pebbling complexity of MHF candidates such as Argon2i and DRSample in a reversible setting, which we call reversible space-time/cumulative pebbling cost, respectively. (2) We prove that the reversible cumulative pebbling cost is never too much larger than the classical cumulative pebbling cost, along with the separation result that, in some instances, the reversible cumulative pebbling cost is asymptotically larger than the classical one. (3) We prove that it is also Unique Games-hard to approximate the reversible cumulative pebbling cost of a directed acyclic graph. (4) Finally, we establish the post-quantum security of a PoSW from Cohen and Pietrzak (EUROCRYPT 2018) in the parallel quantum random oracle model by extending Zhandry's compressed oracle technique (CRYPTO 2019) and utilizing underlying combinatorial techniques of PoSWs.
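As an illustrative, assumed helper (not from the dissertation), the snippet below checks a candidate black-pebbling sequence on a small DAG and reports its cumulative cost, i.e. the total number of pebbles summed over all rounds; the cumulative pebbling complexity discussed above is the minimum of this cost over all legal pebbling sequences.

```python
# Verify a (parallel) black pebbling of a DAG and compute its cumulative cost.
def cumulative_pebbling_cost(parents, sequence, target):
    """parents[v]: predecessors of v; sequence: one set of pebbled nodes per round."""
    previous = set()
    for current in sequence:
        for v in current - previous:                     # newly placed pebbles
            if not set(parents.get(v, [])) <= previous:  # all parents pebbled before?
                raise ValueError(f"illegal move: parents of {v} not pebbled")
        previous = current
    if target not in sequence[-1]:
        raise ValueError("target node is not pebbled in the final round")
    return sum(len(round_pebbles) for round_pebbles in sequence)

# A path 0 -> 1 -> 2 -> 3, pebbled front to back while discarding older pebbles.
parents = {1: [0], 2: [1], 3: [2]}
sequence = [{0}, {0, 1}, {1, 2}, {2, 3}]
print(cumulative_pebbling_cost(parents, sequence, target=3))   # -> 7
```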
