551

Advances in Stochastic Geometry for Cellular Networks

Saha, Chiranjib 24 August 2020 (has links)
The mathematical modeling and performance analysis of cellular networks have seen a major paradigm shift with the application of stochastic geometry. The main purpose of stochastic geometry is to endow the locations of the base stations (BSs) and users in a network with probability distributions, which, in turn, provides an analytical handle on the performance evaluation of cellular networks. To preserve the tractability of analysis, the common practice is to assume complete spatial randomness of the network topology. In other words, the locations of users and BSs are modeled as independent homogeneous Poisson point processes (PPPs). Despite their usefulness, PPP-based network models fail to capture any spatial coupling between the users and BSs, which is dominant in a multi-tier cellular network (also known as a heterogeneous cellular network, or HetNet) consisting of macro and small cells. For instance, users tend to form hotspots or clusters at certain locations, and small cell BSs (SBSs) are deployed at higher densities at these hotspots in order to cater to the high data demand. Such user-centric deployments naturally couple the locations of the users and SBSs. Moreover, these spatial couplings are at the heart of the spatial models used in industry for system-level simulations and standardization purposes. This dissertation proposes fundamentally new spatial models based on stochastic geometry which closely emulate these spatial couplings and are conducive to a more realistic and fine-tuned performance analysis, optimization, and design of cellular networks. First, this dissertation proposes a new class of spatial models for HetNets where the locations of the BSs and users are assumed to be distributed as Poisson cluster processes (PCPs). From the modeling perspective, the proposed models can capture different spatial couplings in a network topology, such as user hotspots and the user-BS coupling that occurs due to the user-centric deployment of the SBSs. The PCP-based model is a generalization of the state-of-the-art PPP-based HetNet model, because it reduces to the PPP-based model once all spatial couplings in the network are ignored. From the stochastic geometry perspective, we have made contributions in deriving the fundamental distribution properties of the PCP, such as the distance distributions and sum-product functionals, which are instrumental in characterizing HetNet performance metrics such as coverage and rate. The focus on more refined spatial models for small cells and users brings us to the second direction of the dissertation, which is the modeling and analysis of HetNets with millimeter wave (mm-wave) integrated access and backhaul (IAB), an emerging design concept of fifth generation (5G) cellular networks. While the concept of network densification with small cells emerged in the fourth generation (4G) era, small cells can be realistically deployed with IAB, since it solves the problem of high-capacity wired backhaul for SBSs by replacing the last-mile fibers with mm-wave links. We have proposed new stochastic geometry-based models for the performance analysis of IAB-enabled HetNets. Our analysis reveals some interesting system-design insights: (1) IAB HetNets can support a maximum number of users, beyond which the data rate drops below the rate of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no rate gain is observed with the addition of more SBSs.
The third and final direction of this dissertation is the combination of machine learning and stochastic geometry to construct a new class of data-driven network models which can be used in the performance optimization and design of a network. As a concrete example, we investigate the classical problem of wireless link scheduling, where the objective is to choose from a ground set of transmitters (Tx-s) an optimal subset of simultaneously active Tx-s that maximizes the network-wide sum rate. Since the optimization problem is NP-hard, we replace the computationally expensive heuristic by inferring the point patterns of the active Tx-s in the optimal subset after training a determinantal point process (DPP). Our investigations demonstrate that the DPP is able to learn the spatial interactions of the Tx-s in the optimal subset and gives a reasonably accurate estimate of the optimal subset for any new ground set of Tx-s. / Doctor of Philosophy / The high-speed global cellular communication network is one of the most important technologies, and it continues to evolve rapidly with every new generation. This evolution greatly depends on observing the performance trends of emerging technologies on network models through extensive system-level simulations. Since these simulation models are extremely time-consuming and error-prone, complementary analytical models of cellular networks have been an area of active research for a long time. These analytical models are intended to provide crisp insights into network behavior, such as the dependence of network performance metrics (such as coverage or rate) on key system-level parameters (such as transmission powers and base station (BS) density), which serve as prior knowledge for more fine-tuned simulations. Over the last decade, the analytical modeling of cellular networks has been driven by stochastic geometry. The main purpose of stochastic geometry is to endow the locations of the base stations (BSs) and users with probability distributions and then leverage the properties of these distributions to average out the spatial randomness. This process of spatial averaging allows us to derive analytical expressions for the system-level performance metrics despite the presence of a large number of random variables (such as BS and user locations, and channel gains), under some reasonable assumptions. The simplest stochastic geometry based model of cellular networks, which is also the most tractable, is the so-called Poisson point process (PPP) based network model. In this model, users and BSs are assumed to be distributed as independent homogeneous PPPs. This is equivalent to saying that the users and BSs are distributed independently and uniformly at random over the plane. The PPP-based model turned out to be a reasonably accurate representation of yesteryear's cellular networks, which consisted of a single tier of macro BSs (MBSs) intended to provide a uniform coverage blanket over the region. However, as data-hungry devices like smartphones and tablets, and applications like online gaming, continue to flood the consumer market, the network configuration is rapidly deviating from this baseline setup, with different spatial interactions between BSs and users (also termed spatial coupling) becoming dominant. For instance, the user locations are far from homogeneous, as they are concentrated in specific areas like residential and commercial zones (also known as hotspots).
Further, the network, previously consisting of a single tier of macro BSs (MBSs), is becoming increasingly heterogeneous with the deployment of small cell BSs (SBSs) with small coverage footprints, targeted to serve the user hotspots. It is not difficult to see that the network topology with these spatial couplings is quite far from complete spatial randomness, which is the basis of the PPP-based models. The key contribution of this dissertation is to enrich the stochastic geometry-based mathematical models so that they can capture the fine-grained spatial couplings between the BSs and users. More specifically, this dissertation contributes in the following three research directions. Direction-I: Modeling Spatial Clustering. We model the locations of users and SBSs forming hotspots as Poisson cluster processes (PCPs). A PCP is a collection of offspring points located around parent points which themselves belong to a PPP. The coupling between the locations of users and SBSs (due to their user-centric deployment) can be introduced by assuming that the user and SBS PCPs share the same parent PPP. The key contribution in this direction is the construction of a general HetNet model with a mixture of PPP- and PCP-distributed BSs and users. Note that the baseline PPP-based HetNet model appears as one of the many configurations supported by this general model. For this general model, we derive analytical expressions for performance metrics like coverage probability, BS load, and rate as functions of the coupling parameters (e.g. BS and user cluster size). Direction-II: Modeling Coupling in Wireless Backhaul Networks. While the deployment of SBSs clearly enhances the network performance in terms of coverage, one might wonder: how long can network densification with tens of thousands of SBSs meet the ever-increasing data demand? It turns out that in the current network setting, where the backhaul links (i.e. the links between the BSs and the core network) are still wired, it is not feasible to densify the network beyond some limit. This backhaul bottleneck can be overcome if the backhaul links also become wireless and the backhaul and access links (the links between users and BSs) are jointly managed by an integrated access and backhaul (IAB) network. In this direction, we develop analytical models of IAB-enabled HetNets, where the key challenge is to tackle new types of couplings which exist between the rates on the wireless access and backhaul links. Such couplings exist due to the spatial correlation of the signal qualities of the two links and the number of users served by different BSs. Two fundamental insights obtained from this work are as follows: (1) IAB HetNets can support a maximum number of users, beyond which the network performance drops below that of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no performance gain is observed with the addition of more SBSs. Direction-III: Modeling Repulsion. In this direction, we focus on modeling another aspect of spatial coupling imposed by intra-point repulsion. Consider a device-to-device (D2D) communication scenario, where some users are transmitting on-demand content locally cached in their devices using a common channel. Any reasonable multiple access scheme will ensure that two nearby users are never simultaneously active, as they would cause severe mutual interference and thereby reduce the network-wide sum rate.
Thus, the active users in the network will exhibit some spatial repulsion. The locations of these users can be modeled as determinantal point processes (DPPs). The key property of the DPP is that it forms a bridge between stochastic geometry and machine learning, two otherwise non-overlapping paradigms for wireless network modeling and design. The main focus in this direction is to explore the learning framework of the DPP and bring together the advantages of stochastic geometry and machine learning to construct a new class of data-driven analytical network models.
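As an illustration of the point-process models this abstract describes, the following is a minimal simulation sketch, assuming a Thomas cluster process for the PCP and a single parent PPP shared between users and SBSs to mimic the user-SBS coupling; all densities and parameter values are illustrative, not taken from the dissertation.

```python
# Minimal sketch: homogeneous PPP (macro BSs) vs. a coupled PCP
# (users and SBSs as offspring of the SAME hotspot parents).
import numpy as np

rng = np.random.default_rng(1)
side = 1000.0                       # square observation window (m)

# Baseline: homogeneous PPP for macro BSs.
lam_mbs = 1e-5                      # illustrative MBS density (per m^2)
n_mbs = rng.poisson(lam_mbs * side**2)
mbs = rng.uniform(0, side, size=(n_mbs, 2))

# PCP with user-SBS coupling: both are offspring of one parent PPP
# (the hotspot centers), as in Direction-I.
lam_p, sigma = 5e-6, 30.0           # hotspot density, cluster spread
m_user, m_sbs = 20, 3               # mean offspring per hotspot
parents = rng.uniform(0, side, size=(rng.poisson(lam_p * side**2), 2))

def offspring(parents, mean_count, sigma):
    """Scatter Poisson-many Gaussian offspring around each parent."""
    pts = []
    for p in parents:
        k = rng.poisson(mean_count)
        pts.append(p + sigma * rng.standard_normal((k, 2)))
    return np.vstack(pts) if pts else np.empty((0, 2))

users = offspring(parents, m_user, sigma)   # clustered users (hotspots)
sbs = offspring(parents, m_sbs, sigma)      # SBSs deployed at hotspots

print(len(mbs), "MBSs,", len(users), "users,", len(sbs), "SBSs")
```

Under these assumptions, ignoring the coupling simply means drawing users and SBSs as independent PPPs, which recovers the baseline model the abstract mentions.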
552

Improving Bio-Inspired Frameworks

Varadarajan, Aravind Krishnan 05 October 2018 (has links)
In this thesis, we provide solutions to two different problems using bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve upon an existing framework, BEACON, in terms of performance. BEACON is an Ant Colony Optimization (ACO) based test generation framework. Like other ACO frameworks, BEACON has good scope for performance improvement through parallel computing. We exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we can reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a 2-dimensional factoring method for BEACON to improve the available parallelism and yield some additional speedup. The second bio-inspired algorithm we address is for deep neural networks. With the increasing prevalence of neural nets in artificial intelligence and mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural net's outputs based on their sensitivity to its inputs. This is done by generating inputs which the neural net finds difficult to classify but which remain relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Description Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has it become more difficult for formal methods due to exponential complexity from increasing path explosion, but concrete test generation frameworks also face new issues, such as the increased volume of simulations required. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available in GPGPUs and the task-level parallelism available on CPUs. The speedup achieved is extracted both from the test generation framework itself and from refactoring the hardware model for multi-threaded test generation. For this purpose, we translate the RTL Verilog into a C++ and a CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage. In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the discovery of Deep Neural Networks (DNNs). Unfortunately, apart from accuracy and FTest numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue, as DNNs are quite frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN help store the knowledge from the training inputs. Therefore, it is also difficult to infer whether a DNN has learned parameters which might cause an output neuron to misfire wrongly, i.e., a bug. An exhaustive search of the input space of the DNN is not only infeasible but also misleading.
Thus, in our work, we apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and the time needed to generate these inputs, we can differentiate between DNNs of varying quality.
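As a rough illustration of the idea this abstract describes, the following is a minimal sketch; the classifier here is a stand-in, and the search strategy (random perturbations accepted when they lower the top-class probability) is an assumption for illustration, not the thesis's exact method.

```python
# Sketch: probability-guided input generation as a robustness proxy.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Placeholder classifier: returns softmax probabilities."""
    logits = np.array([x.sum(), -x.sum(), x.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def hardness(x0, eps=0.05, max_steps=1000):
    """Perturb x0 until the predicted class flips; the steps and
    accumulated perturbation budget serve as a robustness proxy."""
    label = int(np.argmax(model(x0)))
    x, total = x0.copy(), 0.0
    for step in range(1, max_steps + 1):
        cand = x + rng.normal(0, eps, size=x.shape)
        p = model(cand)
        if int(np.argmax(p)) != label:
            return step, total + eps    # misclassification achieved
        if p[label] < model(x)[label]:  # keep moves that lower confidence
            x, total = cand, total + eps
    return max_steps, total             # robust within the budget

print(hardness(rng.normal(size=8)))
```

The harder (more steps, larger perturbation) it is to flip a prediction, the more robust the network is judged to be under this scheme.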
553

Experimental Study of Scan Based Transition Fault Testing Techniques

Jayaram, Vinay B. 19 February 2003 (has links)
The presence of delay-inducing defects is causing increasing concern in the semiconductor industry today. To test for such delay-inducing defects, scan-based transition fault testing techniques are being implemented. There exist organized techniques to generate test patterns for the transition fault model, and the two popular methods in use are the broadside delay test (launch-from-capture) and the skewed-load delay test (launch-from-shift). Each method has its own drawbacks, and many practical issues are associated with pattern generation and application. Our work focuses on the implementation and comparison of these transition fault testing techniques on multiple industrial ASIC designs. In this thesis, we present results from multiple designs and compare the two techniques with respect to test coverage, pattern volume, and pattern generation time. For both methods, we discuss the effects of multiple clock domains, tester hardware considerations, false and multi-cycle paths, and the implications of using a low-cost tester. We then consider the implications of pattern volume on testing both stuck-at and transition faults and the effects of using transition fault patterns to test stuck-at faults. Finally, we present results from our analysis of the switching activity of nets in the design while executing transition fault patterns. / Master of Science
554

RTL Functional Test Generation Using Factored Concolic Execution

Pinto, Sonal 21 July 2017 (has links)
This thesis presents a novel concolic testing methodology and CORT, a test generation framework that uses it for high-level functional test generation. The test generation effort is visualized as the systematic unraveling of the control-flow response of the design over multiple (factored) explorations. We begin by transforming the Register Transfer Level (RTL) source for the design into a high-performance compiled C++ functional simulator which is instrumented for branch coverage. An exploration begins by simulating the design with concrete stimuli. Then, we perform an interleaved, cycle-by-cycle symbolic evaluation over the concrete execution trace extracted from the Control Flow Graph (CFG) of the design. The purpose of this task is to dynamically discover means to divert the control flow of the system by mutating primary-input-stimulated control statements in this trace. We record the control-flow response as a Test Decision Tree (TDT), a new representation for the test generation effort. Successive explorations begin at system states heuristically selected from a global TDT, onto which each new decision tree resulting from an exploration is stitched. CORT succeeds at constructing functional tests for the ITC99 and IWLS-2005 benchmarks that achieve high branch coverage using the fewest input vectors, faster than existing methods. Furthermore, we achieve orders-of-magnitude speedup compared to previous hybrid concrete and symbolic simulation based techniques. / Master of Science / In recent years, the cost of verifying digital designs has outpaced the cost of development, in terms of both resources and time. The scale and complexity of modern designs have made it increasingly impractical to verify the design manually. In the process of circuit design, designers use Hardware Description Languages (HDLs) to abstract the design in a manner similar to software programming languages. This thesis presents a novel methodology for automating the testing of functional-level hardware descriptions with the aim of maximizing branch coverage. Branches indicate decision points in the design, and tests with high branch coverage are able to exercise the design thoroughly in a manner that randomly generated tests cannot. In our work, the design is simulated concretely with a random test (a sequence of input stimuli). During simulation, we analyze the flow of behavioral statements and decisions executed to construct a formulaic interpretation of the design execution in terms of syntactical elements, in order to uncover differentiating inputs that could have diverted the flow of execution to unstimulated parts of the design. This process is formally known as concolic execution. The techniques described in this thesis tightly interleave concrete and symbolic simulation (concolic execution) of hardware designs to generate tests with high branch coverage, orders of magnitude faster than previous similar work.
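To make the core concolic idea concrete, here is a minimal sketch: a toy stand-in for the compiled design is run with concrete inputs while the branch predicates along the path are recorded, and an SMT solver is then asked for inputs that flip a branch. The toy design and the choice of the z3 solver are illustrative assumptions, not CORT's actual implementation.

```python
# A minimal concolic-execution sketch (not CORT itself): simulate
# concretely, record the path's branch predicates, then solve for an
# input that negates the last branch while keeping the prefix.
from z3 import Int, Solver, Not, sat

def toy_design(a, b):
    """Stand-in for a compiled RTL model; returns the branch trace."""
    trace = []
    x, y = Int('a'), Int('b')            # symbolic mirrors of the inputs
    if a > 10:                           # branch 1
        trace.append((x > 10, True))
    else:
        trace.append((x > 10, False))
    if a + b == 20:                      # branch 2
        trace.append((x + y == 20, True))
    else:
        trace.append((x + y == 20, False))
    return trace

def flip_last(trace):
    """Negate the last branch along the recorded path and solve."""
    s = Solver()
    for cond, taken in trace[:-1]:
        s.add(cond if taken else Not(cond))
    cond, taken = trace[-1]
    s.add(Not(cond) if taken else cond)
    return s.model() if s.check() == sat else None

trace = toy_design(3, 7)                 # concrete seed stimulus
print(flip_last(trace))                  # e.g. inputs with a + b == 20
```

The actual framework repeats this flip-and-resimulate step cycle by cycle over a sequential design and organizes the outcomes into the TDT described above.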
555

Design Considerations in a Modern Land Mobile Radio System

Sprinkle, Matthew 07 July 2003 (has links)
Modern Land Mobile Radio has the potential for large growth in the near future. Current regulations have set the stage for a required transition to more spectrally efficient technologies. While several organizations are working to ease this transition, there remain many details and feature sets among which the end user must decide, and often there is no clear dividing line between these choices. This thesis provides a high-level view of the distinguishing components in modern LMR systems. Discussions related to trunked channel allocation, coverage, costs, security, and other capabilities are given. The application to, and effect on, everyday users is also considered. Several quantitative examples are provided to assist the end user in determining when a solution is viable. The discussion and analysis included reaffirm that LMR design is complex and wide-ranging. Ultimately, the designer must evaluate needs and technologies to provide a course of action which is optimal and justifiable. / Master of Science
556

Multihop clustering algorithm for load balancing in wireless sensor networks

Israr, Nauman, Awan, Irfan U. January 2007 (has links)
Yes / The paper presents a new cluster-based routing algorithm that exploits the redundancy properties of sensor networks in order to address the traditional problems of load balancing and energy efficiency in WSNs. The algorithm identifies the nodes in a sensor network whose area coverage is already covered by their neighbours and marks them as temporary cluster heads. The algorithm then forms two layers of multi-hop communication: a bottom layer involving intra-cluster communication, and a top layer involving inter-cluster communication among the temporary cluster heads. Performance studies indicate that the proposed algorithm effectively solves the problem of load balancing and is also more efficient in terms of energy consumption than LEACH and the enhanced version of LEACH.
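The following is a minimal sketch of the redundancy test just described, assuming disk-shaped sensing regions and approximating coverage by sampling points inside each node's sensing disk; the radius, node count, and sampling approach are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: mark a node as a temporary cluster head if its sensing disk
# is (approximately) covered by the union of the other nodes' disks.
import numpy as np

rng = np.random.default_rng(0)
R = 10.0                                   # sensing radius (illustrative)
nodes = rng.uniform(0, 100, size=(60, 2))  # random node positions

def is_redundant(i, nodes, r=R, samples=200):
    """True if node i's sensing disk is covered by its neighbours'
    disks; for simplicity, every other node is a candidate neighbour."""
    # sample points uniformly inside node i's sensing disk
    ang = rng.uniform(0, 2 * np.pi, samples)
    rad = r * np.sqrt(rng.uniform(0, 1, samples))
    pts = nodes[i] + np.c_[rad * np.cos(ang), rad * np.sin(ang)]
    neighbours = np.delete(nodes, i, axis=0)
    # a sample is covered if it lies within r of some neighbour
    d = np.linalg.norm(pts[:, None, :] - neighbours[None, :, :], axis=2)
    return bool(np.all(d.min(axis=1) <= r))

temp_heads = [i for i in range(len(nodes)) if is_redundant(i, nodes)]
print("temporary cluster heads:", temp_heads)
```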
557

The use of multiple mobile sinks in wireless sensor networks for large scale areas

Al-Behadili, H., AlWane, S., Al-Yasir, Yasir I.A., Ojaroudi Parchin, Naser, Olley, Peter, Abd-Alhameed, Raed 01 May 2020 (has links)
Yes / Sensing coverage and network connectivity are two of the most fundamental issues in ensuring effective environmental sensing and robust data communication in a WSN application. Random positioning of nodes in a WSN may result in random connectivity, which can cause wide variation in key parameters within the WSN. For example, issues with data latency and battery lifetime can lead to the isolation of nodes, which causes disconnection between nodes within the network. These problems can be avoided by using mobile data sinks, which travel between nodes that have connection problems. This research aims to design, test and optimise a data collection system that addresses the isolated-node problem, improves the connectivity between sensor nodes and the base station, and reduces energy consumption simultaneously. In addition, this system will help to solve several problems, such as delay imbalance and hotspot problems. The effort in this paper is focussed on the feasibility of using the proposed methodology in different applications. Ongoing experimental work will aim to provide a detailed study for advanced applications, e.g. transport systems for civil purposes. / European Union's Horizon 2020 research and innovation programme under grant agreement H2020-MSCA-ITN-2016 SECRET-722424.
558

Romania and the Russia-Ukraine War : The Discourse on Sovereignty, the Limits to Military Power and Diplomatic Alternatives

Buzoianu, Alina January 2024 (has links)
This thesis investigates Romania’s response to the Russia-Ukraine war, focusing on the discourse surrounding sovereignty, the limitations of military power, and the potential for diplomatic alternatives. The study critically analyzes Romanian media coverage and political statements to explore how sovereignty is framed in the context of the conflict and examines the efficacy of military power versus non-military diplomatic strategies. Through a qualitative content analysis of selected Romanian news articles and official communications, the research identifies key themes and narratives that shape Romania’s stance on the conflict. The findings reveal a complex relationship between national security concerns, historical ties, and international diplomatic pressures. Moreover, the study highlights the constraints and costs associated with military power projection and underscores the importance of diplomatic avenues in mitigating conflict. By providing a comprehensive understanding of Romania’s position and proposing diplomatic strategies as viable alternatives to militarism, this thesis contributes to broader discussions on international relations and conflict resolution. The study also addresses methodological limitations, including potential biases in source selection and translation challenges.
559

A content analysis of local television news in Orlando, Florida

Peterson, Erik 01 October 2003 (has links)
No description available.
560

Foreign News Coverage in Conservative and Liberal U.S. Newspapers: A Case Study of Saudi Arabia from 1932 to 2023

Huraysi, Mohammed 05 1900 (has links)
This study investigated the historical coverage of foreign issues in U.S. newspapers. The study mainly focused on four primary areas: coverage of wars, leaders, human rights, and economic issues in foreign countries. I qualitatively analyzed the data to find whether any other common topics were discussed during the time frame. These topics were then analyzed by applying framing theory to news stories about Saudi Arabia, used as a case study from September 1932 to December 2023. The Wall Street Journal (WSJ) and the New York Times (NYT) were investigated as representative of two distinct newspaper orientations: conservative and liberal ideological orientations. Finally, sentiment analysis was used to find the dominant tone for each frame. This study found that the topics discussed were leaders, wars, human rights, economics, sports, Islamic culture, terrorism, education, and natural phenomena. In the NYT, the focus was on leaders, economics, and wars; in the WSJ, the focus was on leaders, economics, and Islamic culture. In terms of applied frames, the NYT mostly applied responsibility, cooperation, and consequences frames, while the WSJ mostly applied consequences and cooperation frames. The sentiment analysis showed that the NYT mostly used negative tones, while the WSJ mostly used positive tones. This study provides a comprehensive view of U.S. newspaper coverage from past to present, leading to a model for each newspaper that explains how it covered Saudi issues in the past, interprets the present, and formulates future expectations.
