11.
A Parameterized Simulation of Doppler Lidar. Chester, David B. (01 December 2017)
Upcoming missions to explore planetary bodies in the solar system will require accurate position and velocity data during descent in order to land safely at a predesignated site. A Doppler lidar instrument could provide measurements of the altitude, attitude, and velocity of the landing vehicle to supplement the data collected by other instruments. A flexible simulation tool would aid the tasks of designing and testing the functionality of such an instrument.
LadarSIM is a robust parameterized simulation tool developed for time-of-flight lidar at Utah State University's Center for Advanced Imaging Ladar. This thesis outlines how LadarSIM was modified to include a simulation of Doppler lidar. A study is performed using LadarSIM to determine the effects of varying certain parameters of a Doppler lidar system. Point clouds of landing scenarios generated by the simulation with different scanning patterns are shown.
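As background (standard lidar physics, not a result of the thesis), the velocity measurement rests on the Doppler shift of the returned light; a sketch of the relation:

```latex
% Standard monostatic Doppler-lidar relation (general background, not taken
% from the thesis): light backscattered from a surface with line-of-sight
% velocity v_r returns with a frequency shift f_d at laser wavelength \lambda.
\[
  f_d = \frac{2\,v_r}{\lambda},
  \qquad\Longrightarrow\qquad
  v_r = \frac{\lambda\,f_d}{2}.
\]
```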
12.
Computation Methods for Parametric Analysis of Gravitational Wave Data. Patel, Heta Ajay (18 September 2019)
Gravitational waves are detected, analyzed, and matched-filtered based on an approximation to General Relativity called post-Newtonian theory. This approximation method is based on the assumption that the gravitational field is weak both inside and around the body. However, scientists cannot justify why post-Newtonian theory (meant for weak fields) works so well with the strong fields of black hole mergers when it really should have failed [C. Will 2011]. Yunes and Pretorius proposed another approach, the parameterized post-Einsteinian (ppE) framework, which makes minimal assumptions and promises to identify deviations in the parameters through post-processing tests. This thesis project proposes to develop a method for the parametric detection and testing of gravitational waves by computing ppE templates for the inspiral phase using ChirpLab. A set of templates will be generated with ppE parameters that can be used for the testing. / Master of Science / Electromagnetic waves were discovered in the 19th century and have changed our lives through various applications. Similarly, this new set of waves, gravitational waves, will potentially alter our perspective of the universe. Gravitational waves can help us understand space, time, and energy from a new and deeper perspective. Gravitational waves and black holes are among the trending topics in physics at the moment, especially with the recent release of the first image of a black hole in history. The existence of black holes was predicted a century ago by Einstein in the well-defined “Theory of General Relativity”. Current approaches model the chaotic phenomenon of a black hole pair merger by means of approximation methods. However, scientists Yunes and Pretorius [69] argue that the approximations employed skew the estimation of the physical features of the black hole system. Hence, there is a need to approach this problem with methods that do not make specific assumptions about the system itself. This thesis project proposes to develop a computational method for the parametric detection and testing of gravitational waves.
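For context, the leading-order ppE inspiral template from Yunes and Pretorius takes the following standard form (as published in the ppE literature; the thesis's exact conventions may differ):

```latex
% Leading-order ppE inspiral waveform (Yunes & Pretorius 2009). Here
% \tilde{h}_{GR} is the GR (post-Newtonian) frequency-domain template,
% u = (\pi M f)^{1/3} with total mass M and frequency f, and
% (\alpha, a, \beta, b) are the ppE deformation parameters; GR is
% recovered when \alpha = \beta = 0.
\[
  \tilde{h}_{\mathrm{ppE}}(f)
    = \tilde{h}_{\mathrm{GR}}(f)\,
      \bigl(1 + \alpha\, u^{a}\bigr)\,
      e^{\,i \beta\, u^{b}}
\]
```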
13.
Open-Source Parameterized Low-Latency Aggressive Hardware Compressor and Decompressor for Memory Compression. Jearls, James Chandler (16 June 2021)
In recent years, memory has proven to be a constraining factor in many workloads. Memory is an expensive necessity in many situations, from embedded devices with a few kilobytes of SRAM to warehouse-scale computers with thousands of terabytes of DRAM. Memory compression has existed in all major operating systems for many years. However, while faster than swapping to a disk, memory decompression adds latency to data read operations. Companies and research groups have investigated hardware compression to mitigate these problems. Still, open-source low-latency hardware compressors and decompressors do not exist; as a result, every group that studies hardware compression must re-implement them. Importantly, because the devices that can benefit from memory compression vary so widely, there is no single solution that addresses the area, latency, power, and bandwidth requirements of all devices. This work intends to address these issues. It implements hardware accelerators for three popular compression algorithms: LZ77, LZW, and Huffman encoding. Each implementation includes a compressor and a decompressor, and all designs are fully parameterized, with a total of 22 parameters across the designs. All of the designs are open-source under a permissive license. Finally, configurations of this work achieve decompression latencies under 500 nanoseconds, far closer than existing works to the 255 nanoseconds required to read an uncompressed 4 KB page, while still achieving compression ratios comparable to software compression algorithms. / Master of Science / Computer memory, the fast, temporary storage where programs and data are held, is expensive and limited. Compression allows data and programs to be held in memory in a smaller format so they take up less space. This work implements hardware designs for compression and decompression accelerators that make it faster for programs to access compressed data. It includes three hardware compressor and decompressor designs that can be easily modified and are free for anyone to use however they would like. The included designs are orders of magnitude smaller and less expensive than the existing state of the art, and they reduce decompression time by up to 6x. These smaller areas and latencies come at a relatively small cost in compression ratio: only 13% on average across the tested benchmarks.
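As general background on one of the three algorithms involved, here is a minimal software sketch of LZ77 sliding-window compression (illustrative only; the thesis implements the algorithm in hardware, and none of this code is from it):

```python
# Minimal LZ77 sketch (illustrative; the thesis does this in hardware).
# Emits (offset, length, next_char) tuples over a sliding window; a 0
# sentinel pads the final token when a match reaches the end of input.
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 18):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        # Search the window for the longest match starting at position i.
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len] if i + best_len < len(data) else 0
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    buf = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):          # copy byte-by-byte so overlapping
            buf.append(buf[-off])        # matches expand correctly
        buf.append(nxt)
    return bytes(buf)

tokens = lz77_compress(b"abcabcabcabcX")
assert lz77_decompress(tokens) == b"abcabcabcabcX"
```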
14.
SMT-based Verification of Parameterized Systems. Redondi, Gianluca (18 July 2024)
SMT-based verification analyzes reachability for transition systems represented by SMT formulae. Depending on the theories and the kinds of systems considered, various approaches have been proposed; together, they form the Verification Modulo Theories (VMT) framework. This thesis delves into SMT-based verification of parameterized systems, emphasizing the challenges and novel solutions in verifying systems with an unbounded number of components. We first introduce a general framework to model such systems. We then introduce two novel algorithms that leverage the strengths of SMT for the verification of parameterized systems, focusing on automation and on reducing the computational complexity inherent in such tasks. These algorithms are designed to improve upon existing verification methods by offering enhanced scalability and automation, making them particularly suited to the analysis of distributed systems, network protocols, and concurrent programming models where traditional approaches may fail. Moreover, we introduce an algorithm for compositional verification that advances the capability to modularly verify complex systems by decomposing the verification task into smaller, more manageable sub-tasks. Additionally, we discuss the potential and ongoing application of these algorithms in an industrial project on the design of interlocking logic. This application demonstrates the practical utility of our algorithms in a real-world setting, highlighting their effectiveness in improving the safety and reliability of critical infrastructure. The theoretical advancements proposed in this thesis are complemented by a rigorous experimental evaluation, demonstrating the applicability and effectiveness of our methods across a range of verification scenarios. Our work is implemented within an extended framework of the MathSAT SMT solver, facilitating its integration into existing verification workflows. Overall, this research contributes to the theoretical underpinnings of VMT and offers tools and methodologies for the verification community, enhancing the capability to verify complex parameterized systems with greater efficiency and reliability.
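To give a flavor of the underlying technique (our own toy example, using the z3 solver's Python API for illustration; the thesis builds on MathSAT), checking that a mutual-exclusion invariant is inductive for a simple parameterized protocol can be posed as an SMT query over an unbounded number of processes:

```python
# Toy SMT-based inductiveness check for a parameterized system (our
# illustration, not the thesis's method or code).
from z3 import (Function, IntSort, Ints, ForAll, Exists, Implies, And, Not,
                Solver, unsat)

# st / st1 map a process index to its state, before and after one step.
# States: 0 = idle, 2 = critical.
st = Function('st', IntSort(), IntSort())
st1 = Function('st1', IntSort(), IntSort())
i, j, p = Ints('i j p')

def mutex(s):
    """At most one process is critical under state function s."""
    return ForAll([i, j], Implies(i != j, Not(And(s(i) == 2, s(j) == 2))))

def trans(p):
    """Process p enters critical, allowed only if no other process is
    critical; all other processes keep their state."""
    return And(st(p) == 0,
               ForAll([j], Implies(j != p, st(j) != 2)),
               st1(p) == 2,
               ForAll([j], Implies(j != p, st1(j) == st(j))))

# Inductive step: invariant + one transition + negated invariant is unsat.
slv = Solver()
slv.add(mutex(st), Exists([p], trans(p)), Not(mutex(st1)))
print("invariant is inductive" if slv.check() == unsat else "not proved")
```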
15.
Parameterized Event Monitoring. Priyadarshini, Dande (January 2005)
Event monitoring has been employed in many applications such as network monitoring and active databases; however, only an insignificant amount of work has been done on parameterized event monitoring, a feature that is necessary in any real application. The aim of this work is to investigate solutions for parameterized event composition that are scalable and efficient; these solutions are refined from existing event monitoring algorithms. An algorithm for parameterized event composition is proposed, and an analysis of its algorithmic time complexity is performed. In addition, experiments on the prototype Solicitor, a software component in DeeDS, with simulated input events are conducted in order to validate the theoretical model and hypothesis. The experiments support the theoretical model and suggest that it is possible to build efficient and scalable parameterized event composition that is useful in real applications.
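For illustration, a toy sketch (ours, not the thesis's Solicitor code) of what parameterized event composition involves: partial matches are indexed by parameter bindings so that a composite event fires only when its constituent events agree on their parameters:

```python
# Illustrative sketch of parameterized event composition: the conjunction
# A(x) AND B(x) fires only when an A and a B share the same parameter
# binding. Indexing pending events by parameter value keeps per-event
# processing at O(1) average time.
from collections import defaultdict

class ParamConjunction:
    """Detects the composite event A(x) AND B(x) for a matching x."""
    def __init__(self):
        self.pending = {"A": defaultdict(list), "B": defaultdict(list)}

    def feed(self, etype, param, timestamp):
        other = "B" if etype == "A" else "A"
        if self.pending[other][param]:            # a partner with the same
            t0 = self.pending[other][param].pop(0)  # binding is waiting
            return ("A_and_B", param, min(t0, timestamp), max(t0, timestamp))
        self.pending[etype][param].append(timestamp)
        return None

m = ParamConjunction()
print(m.feed("A", 42, 1))   # None: no matching B(42) yet
print(m.feed("B", 7, 2))    # None: parameter differs
print(m.feed("B", 42, 3))   # composite fires: ('A_and_B', 42, 1, 3)
```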
17.
Instance compression of parametric problems and related hierarchies. Chakraborty, Chiranjit (January 2014)
We define instance compressibility ([13, 17]) for parametric problems in the classes PH and PSPACE. We observe that the problem Σ_i CIRCUITSAT of deciding satisfiability of a quantified Boolean circuit with i-1 alternations of quantifiers, starting with an existential quantifier, is complete for parametric problems in the class Σ^p_i with respect to w-reductions, and that analogously the problem QBCSAT (Quantified Boolean Circuit Satisfiability) is complete for parametric problems in PSPACE with respect to w-reductions. We show the following results about these problems:

1. If CIRCUITSAT is non-uniformly compressible within NP, then Σ_i CIRCUITSAT is non-uniformly compressible within NP, for any i ≥ 1.
2. If QBCSAT is non-uniformly compressible (or even if satisfiability of quantified Boolean CNF formulae is non-uniformly compressible), then PSPACE ⊆ NP/poly and PH collapses to the third level.

Next, we define Succinct Interactive Proofs (Succinct IP) and, by adapting the proof of IP = PSPACE ([11, 6]), show that QBCNFSAT (Quantified Boolean Formula (in CNF) Satisfiability) is in Succinct IP. Conversely, if QBCNFSAT has Succinct PCPs ([32]), the Polynomial Hierarchy (PH) collapses.

After extending the notion of instance compression to higher classes, we study the hierarchical structure of parametric problems with respect to compressibility. For that purpose, we extend the existing definition of the VC-hierarchy ([13]) to parametric problems. We then consider a long list of natural NP problems and classify them into levels of the VC-hierarchy, exhibiting several new w-reductions and a number of results, including the following.

1. CLIQUE is VC_1-complete (using the results in [14]).
2. SET SPLITTING and NAE-SAT are VC_2-complete.

We also introduce a new complexity class, VC_E, in this context and show some hardness and completeness results for it, and we compare the VC-hierarchy with other related hierarchies in the parameterized complexity domain. Next, we define the compression of counting problems and an analogous classification with respect to instance compression. We define the #VC-hierarchy for this purpose and classify a large number of natural counting problems with respect to this hierarchy, showing several hardness and completeness results. Beyond popular NP problems, we also consider interesting practical problems (e.g., #MULTICOLOURED CLIQUE, #SELECTED DOMINATING SET) and study their complexity in both decision and counting versions. We further consider a variety of circuit satisfiability problems (e.g., #MONOTONE WEIGHTED-CNFSAT, #EXACT DNF-SAT) and prove results about them with respect to the theory of instance compressibility.
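For orientation, the notion of instance compression at play can be stated as follows (paraphrasing the standard definition from the literature cited above, not quoting the thesis):

```latex
% Instance compression, stated informally (standard definition from the
% cited literature; the thesis's parametric variants generalize this).
% A parametric problem (L, \kappa) is compressible within a class C if
% there exist a polynomial-time computable map f and a language L' in C
% such that, for every instance x,
\[
  x \in L \iff f(x) \in L'
  \quad\text{and}\quad
  |f(x)| \le \operatorname{poly}(\kappa(x)),
\]
% i.e., f shrinks x to size polynomial in the parameter alone while
% preserving membership.
```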
18.
OFDMA DL Scheduling Strategies for Mobile Networks (Estratégias de escalonamento OFDMA DL para redes móveis). Nogueira, Matheus Cadori (January 2016)
The great popularity of mobile devices that provide ubiquitous broadband Internet access over radio networks, and the volume of traffic these devices generate, are increasing every year. Moreover, the frequency with which mobile users access Web-based services has grown considerably. Some of these users may be accessing services that require continuous transmission, such as interactive video, while others may just be reading e-mail, which does not demand a continuous data flow. Furthermore, users with high signal levels can achieve better transfer rates than those with lower levels. Therefore, finding the best balance between users accessing delay-sensitive services and those maximizing throughput, while remaining fair in transmission, is a relevant challenge for scheduling the resources of a wireless network. Although research on resource scheduling in wireless networks has advanced in this direction, the recent growth in traffic volume can overload the system and compromise scheduling. To face these challenges, Orthogonal Frequency Division Multiple Access (OFDMA), a key technology for multiple access in fourth-generation networks, has also been considered for the next generation of mobile radios. To deliver an effective service to users, requirements such as high transfer rates, low delay tolerance, minimized packet loss, and maximized scheduling fairness must be combined with the high user density that followed the popularization of mobile devices. Therefore, new scheduling strategies must be devised. This dissertation takes a step further in proposing a scheduler for next-generation mobile networks that seeks a better trade-off between throughput and delay, consequently yielding higher fairness in the resulting schedule. The scheduler was designed specifically to handle the high user densities inherent to modern networks, and LTE networks were used as a case study. A new optimal scheduler that provides the requirements above is modeled. In addition, a new parameterizable heuristic, based on the user's channel quality, the delay allowed by each service, and scheduling fairness, is proposed to handle overloaded scenarios. Results show that the proposed scheduling approach achieves throughput only 7.5% below the optimal values, with 25% less packet loss in overloaded scenarios. The model also guarantees that the resulting schedule scores at least 0.91 on the Jain fairness index. Finally, the results show a better trade-off between spectral efficiency and QoS metrics. / The huge popularity of mobile devices that provide ubiquitous broadband Internet access via radio networks, and the volume of traffic these devices generate at base stations, are increasing every year. Furthermore, mobile users are using Web-based services ever more frequently, including services that require high transfer rates, such as interactive video. These factors have become the main challenges for the scheduling of radio resources. In order to meet these challenges, Orthogonal Frequency Division Multiple Access (OFDMA), a key technology for multiple access in fourth-generation networks, has also been considered for use in next-generation mobile radios. To implement an effective service to users, requirements such as high transfer rates, low delay tolerance, minimal packet loss, and maximal scheduling fairness should be added to the requirements that emerged with the popularity of mobile devices. Therefore, new scheduling strategies should be devised. Despite efforts to solve the downlink (DL) scheduling problem on wireless networks, we are not aware of previous attempts that have addressed the above requirements in a single strategy. In this thesis, we take a step further in this direction, also considering the high densities in small cells inherent to modern networks. We address the radio DL resource scheduling problem for multiple users, using LTE networks as a case study. A new optimal scheduler is modeled with respect to Quality of Service (QoS) provisioning. In addition, a parameterized heuristic based on user channel quality and service delay is proposed to reach scheduling solutions for overbooked scenarios. Results demonstrate that the proposed scheduling approaches achieve throughput only 7.5% lower than the optimal and 25% lower packet losses in overloaded scenarios. Our model also ensures that the resulting scheduling is at least 0.91 on the Jain fairness index. Additionally, the obtained results show a reasonable trade-off between spectral efficiency and QoS metrics.
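As a rough illustration of the kind of parameterizable, channel- and delay-aware rule such schedulers use, here is a generic M-LWDF-style metric (our sketch; the thesis's actual heuristic and parameter choices may differ):

```python
# Generic channel/delay-aware scheduling metric in the spirit of M-LWDF-style
# OFDMA DL schedulers (illustrative only; not the thesis's exact rule).
# For each resource block, pick the user maximizing
#   metric = (r_k / R_k) * (1 + gamma * W_k / D_k)
# where r_k = instantaneous achievable rate (channel quality),
#       R_k = average served rate (proportional-fair term, fairness),
#       W_k = head-of-line packet delay, D_k = the service's delay budget,
#       gamma = tunable weight for delay sensitivity.

def pick_user(users, gamma=1.0):
    def metric(u):
        pf = u["inst_rate"] / max(u["avg_rate"], 1e-9)  # proportional fairness
        urgency = 1.0 + gamma * u["hol_delay"] / u["delay_budget"]
        return pf * urgency
    return max(users, key=metric)

users = [
    {"id": 1, "inst_rate": 8.0, "avg_rate": 4.0,   # email-like: lax budget
     "hol_delay": 5, "delay_budget": 100},
    {"id": 2, "inst_rate": 3.0, "avg_rate": 2.0,   # video-like: tight budget
     "hol_delay": 40, "delay_budget": 50},
]
print(pick_user(users)["id"])  # user 2: worse channel but far more urgent
```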
19.
Parallelism with limited nondeterminism. Finkelstein, Jeffrey (05 March 2017)
Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems.
Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation—alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization—can be applied practically throughout computer science.
20.
Representations and Parameterizations of Combinatorial Auctions. Loker, David Ryan (January 2007)
Combinatorial auctions (CAs) are an important mechanism for allocating multiple items while allowing agents to specify preferences over bundles of items. In order to communicate these preferences, agents submit bids, each consisting of one or more items and a value indicating the agent's preference for those items. The process of determining the allocation of items is known as the winner determination problem (WDP), which for CAs is NP-complete in the general case.

We consider two distinct graph representations of a CA: the bid graph and the item graph. In a bid graph, vertices represent bids, and two vertices are adjacent if and only if the corresponding bids share items in common. In an item graph, there is a vertex for each unique item, and any bid submitted by any agent must induce a connected subgraph of the item graph. We introduce a new definition of combinatorial auction equivalence by declaring two CAs equivalent if and only if their bid graphs are isomorphic.

Parameterized complexity theory can be used to further distinguish between NP-hard problems. In order to apply parameterized complexity theory to a problem, we aim to find one or more parameters describing some aspect of the problem such that, when these parameters are fixed, either the problem remains hard (fixed-parameter intractable) or it can be solved in polynomial time (fixed-parameter tractable).

We analyze WDP using bid graphs within the formal scope of parameterized complexity theory. This approach has not previously been used to analyze WDP for CAs, although it has been used to solve set packing, which is related to WDP for CAs and is discussed in detail. We investigate several parameterizations of WDP; some are shown to be fixed-parameter intractable, while others are fixed-parameter tractable. We also analyze WDP when the graph class of the bid graph is restricted.

Finally, we discuss relationships between item graphs and bid graphs. Although both graphs can represent the same problem, there is little previous work analyzing direct relationships between them. Our discussion begins with a result by Conitzer et al. [7], which focuses on the item graph representation and its treewidth, a property of a graph that measures how close the graph is to a tree. From a result by Gavril, if an item graph has treewidth one, then the bid graph must be chordal [16]. To apply the other direction of Gavril's theorem, we use our new definition of CA equivalence: with it, Gavril's result shows that if the bid graph of a CA is chordal, then we can construct an item graph of treewidth one for some equivalent CA.
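To make the bid-graph view concrete, a toy sketch (ours, not from the thesis): WDP amounts to finding a maximum-weight independent set in the bid graph, solved here by brute force for a tiny instance:

```python
# Toy winner determination via the bid-graph view (illustrative sketch).
# Two bids conflict iff they share an item, i.e., they are adjacent in the
# bid graph; a feasible allocation is an independent set, and WDP asks for
# one of maximum total value. Brute force suits toy sizes only: WDP is
# NP-complete in general.
from itertools import combinations

bids = [  # (set of items, bid value)
    ({"a", "b"}, 5),
    ({"b", "c"}, 4),
    ({"c"}, 3),
    ({"d"}, 2),
]

def conflicts(i, j):
    return bool(bids[i][0] & bids[j][0])  # adjacent in the bid graph

best_value, best_set = 0, ()
for r in range(1, len(bids) + 1):
    for subset in combinations(range(len(bids)), r):
        if all(not conflicts(i, j) for i, j in combinations(subset, 2)):
            value = sum(bids[i][1] for i in subset)
            if value > best_value:
                best_value, best_set = value, subset

print(best_set, best_value)  # (0, 2, 3) 10: bids {a,b}, {c}, {d}
```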