11

Truss decomposition in large probabilistic graphs

Daneshmandmehrabani, Mahsa 24 December 2019 (has links)
Truss decomposition is an essential problem in graph mining, which focuses on discovering dense subgraphs of a graph. Detecting trusses in deterministic graphs is extensively studied in the literature. As most real-world graphs, such as social, biological, and communication networks, are associated with uncertainty, it is of great importance to study truss decomposition in a probabilistic context; however, the problem has received much less attention in that setting. Furthermore, due to the computational challenges of truss decomposition in probabilistic graphs, state-of-the-art approaches do not scale to large graphs. Formally, given a user-defined threshold k (for truss denseness), we are interested in finding all maximal subgraphs that are a k-truss with high probability. In this thesis, we introduce a novel approach based on an asynchronous h-index updating process, which offers a significant improvement over the state of the art. Our extensive experimental results confirm the scalability and efficiency of our approach. / Graduate
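The deterministic backbone of truss decomposition — repeatedly peeling edges whose triangle support falls below the k-truss threshold — can be sketched as follows. This is an illustrative deterministic peeling decomposition, not the thesis's probabilistic h-index algorithm:

```python
def truss_decomposition(edges):
    """Peeling-based truss decomposition of a deterministic graph.

    Returns the trussness of each edge: the largest k such that the
    edge belongs to a k-truss (a subgraph where every edge lies in at
    least k - 2 triangles).
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # support = number of triangles containing each edge
    support = {frozenset(e): len(adj[e[0]] & adj[e[1]]) for e in edges}
    trussness = {}
    k = 2
    while support:
        # repeatedly remove edges whose support is below the k-threshold
        while True:
            low = [e for e, s in support.items() if s <= k - 2]
            if not low:
                break
            for e in low:
                u, v = tuple(e)
                for w in adj[u] & adj[v]:  # triangles broken by this removal
                    for other in (frozenset((u, w)), frozenset((v, w))):
                        if other in support:
                            support[other] -= 1
                adj[u].discard(v)
                adj[v].discard(u)
                trussness[e] = k
                del support[e]
        k += 1
    return trussness
```

On a 4-clique plus one pendant edge, the clique edges receive trussness 4 and the pendant edge trussness 2; the probabilistic variant in the thesis replaces the exact support counts with probabilistic ones.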
12

An Exposition of the Deterministic Polynomial-Time Primality Testing Algorithm of Agrawal-Kayal-Saxena

Anderson, Robert Lawrence 29 June 2005 (has links) (PDF)
I present a thorough examination of the unconditional, deterministic, polynomial-time algorithm of Agrawal, Kayal, and Saxena [1] for determining whether an input number is prime or composite. All cited proofs have been reworked in full detail for the sake of completeness and readability.
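The heart of the AKS test is the polynomial congruence (x + a)^n ≡ x^n + a (mod x^r − 1, n), which, for a suitably chosen r, holds for enough values of a exactly when n is prime. A naive check of that congruence might look like the sketch below; it is illustrative only, and omits the full algorithm's choice of r and range of a:

```python
def poly_pow_mod(base, n, r, mod):
    """Compute base(x)^n mod (x^r - 1, mod), polynomials as coefficient lists."""
    def mul(a, b):
        res = [0] * r
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    # exponents wrap modulo r because x^r ≡ 1
                    res[(i + j) % r] = (res[(i + j) % r] + ai * bj) % mod
        return res
    result = [1] + [0] * (r - 1)  # the constant polynomial 1
    while n:  # square-and-multiply exponentiation
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result

def aks_congruence_holds(n, r, a):
    """Check (x + a)^n ≡ x^n + a (mod x^r - 1, n), the core AKS congruence."""
    lhs = poly_pow_mod([a % n, 1] + [0] * (r - 2), n, r, n)
    rhs = [0] * r
    rhs[n % r] = 1
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs
```

For example, the congruence holds for the prime n = 7 with r = 5, a = 1, and fails for the composite n = 9 with the same r and a.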
13

Using Modeling And Simulation To Evaluate Disease Control Measures

Atkins, Tracy 01 January 2010 (has links)
This dissertation addresses several issues concerning the analysis of diseases, showing how modeling and simulation can assist in creating health policy by estimating the effects of such policies. The first question posed was how education, vaccination, and a combination of the two programs would affect a possible outbreak of meningitis on a college campus. After creating a model representative of the transmission dynamics of meningitis and establishing parameter values characteristic of the University of Central Florida main campus, the results of a deterministic model were presented in several forms. The conclusion of this model was that the combination of education and vaccination would eliminate the possibility of an epidemic on our campus. Next, we used simulation to evaluate how quarantine and treatment would affect an outbreak of influenza in the same population. A mathematical model was created specific to influenza on the UCF campus, and numerical results from this model were presented in tabular and graphical form. The results comparing the simulations for quarantine and treatment show the best course of action would be to enact a quarantine policy on the campus, reducing the maximum number of infected while increasing the time to reach this peak. Finally, we addressed the issue of performing the analysis stochastically versus deterministically. Additional models were created in which the progression of the disease occurs by chance. Statistical analysis was done on the mean of 100 stochastic simulation runs, comparing that value to the single deterministic outcome. The results of this analysis were inconclusive: the results for meningitis were comparable, while those for influenza appeared to differ.
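The deterministic-versus-stochastic comparison described above can be illustrated with a minimal chain-binomial sketch. The parameter values and compartment structure below are hypothetical, not the dissertation's actual meningitis or influenza models:

```python
import random

def sir_deterministic(s, i, r, beta, gamma, steps, n):
    """Discrete-time deterministic SIR model; returns the peak infected count."""
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i / n   # expected new infections this step
        new_rec = gamma * i          # expected new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

def sir_stochastic(s, i, r, beta, gamma, steps, n, rng):
    """Chain-binomial SIR: each transition happens by chance each step."""
    peak = i
    for _ in range(steps):
        p_inf = min(1.0, beta * i / n)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak
```

As in the dissertation, one would average a batch of stochastic runs (for example, the mean peak of 100 replicates) and compare that statistic against the single deterministic outcome.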
14

Aerodynamic Uncertainty Quantification and Estimation of Uncertainty Quantified Performance of Unmanned Aircraft Using Non-Deterministic Simulations

Hale II, Lawrence Edmond 24 January 2017 (has links)
This dissertation addresses model form uncertainty quantification, non-deterministic simulations, and sensitivity analysis of the results of these simulations, with a focus on application to the analysis of unmanned aircraft systems. The model form uncertainty quantification utilizes equation error to estimate the error between an identified model and flight test results. The errors are then related to aircraft states, and prediction intervals are calculated. This method results in uncertainty bounds that vary with the aircraft state: narrower where consistent information has been collected, and wider where data are not available. Non-deterministic simulations can then be performed to provide uncertainty-quantified estimates of the system performance. Because the model form uncertainties could be time varying, multiple sampling methods were considered. The two methods utilized were a fixed uncertainty level and a rate-bounded variation in the uncertainty level. For analysis using a fixed uncertainty level, the corner points of the model form uncertainty were sampled, reducing computational time. The second method better represents the uncertainty but requires significantly more simulations to sample it. The uncertainty-quantified performance estimates are compared to estimates based on flight tests to check the accuracy of the results. Sensitivity analysis is performed on the uncertainty-quantified performance estimates to determine which of the model form uncertainties contribute most to the uncertainty in the performance estimates. The proposed method uses the results from the fixed-uncertainty-level analysis that utilizes the corner points of the model form uncertainties. The sensitivity of each parameter is estimated based on the corner values of all the other uncertain parameters. This results in a range of possible sensitivities for each parameter, dependent on the true values of the other parameters.
/ Ph. D. / This dissertation examines a process that can be used to quantify the uncertainty associated with an identified model, the performance of the system accounting for that uncertainty, and the sensitivity of the performance estimates to the various uncertainties. The uncertainty is present in the identified model because of modeling errors and tends to increase as the states move away from locations where data have been collected. The method used here to quantify the uncertainty attempts to represent this behavior in a qualitatively correct sense. The uncertainties provide information that is used to predict the performance of the aircraft: a number of simulations are performed, with different values of the uncertain terms chosen for each simulation, producing a family of possible results. The uncertainties can be sampled in various ways; in this study they were sampled at fixed levels and at time-varying levels. Sampling at a fixed uncertainty level required fewer samples, improving computational requirements, while sampling with time-varying uncertainty better captures the nature of the uncertainty but requires significantly more simulations. The results provide a range of the expected performance based on the uncertainty. Sensitivity analysis is performed to determine which of the input uncertainties produce the greatest uncertainty in the performance estimates. To account for the uncertainty in the true parameter values, the sensitivity is predicted for a number of possible values of the uncertain parameters. This results in a range of possible sensitivities for each parameter, dependent on the true values of the other parameters. The range of sensitivities can be used to plan future testing.
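The corner-point sampling strategy — evaluating the model only at the extreme values of each uncertain parameter, and estimating each parameter's sensitivity at every corner of the others — can be sketched generically. This is an illustrative reconstruction with a toy model, not the dissertation's aircraft simulations:

```python
from itertools import product

def corner_samples(bounds):
    """Enumerate corner points of a hyper-rectangular uncertainty set.

    bounds: dict mapping parameter name -> (low, high). With d uncertain
    parameters this yields 2**d samples instead of a dense random sweep.
    """
    names = list(bounds)
    for corner in product(*(bounds[n] for n in names)):
        yield dict(zip(names, corner))

def sensitivity_ranges(bounds, model):
    """For each parameter, the spread of the model output as that parameter
    flips low -> high while the others sit at each of their corner values.

    Returns {name: (min_delta, max_delta)}: a range of possible
    sensitivities, dependent on the true values of the other parameters.
    """
    out = {}
    for p in bounds:
        others = {n: b for n, b in bounds.items() if n != p}
        deltas = []
        for fixed in (corner_samples(others) if others else [{}]):
            lo = model({**fixed, p: bounds[p][0]})
            hi = model({**fixed, p: bounds[p][1]})
            deltas.append(abs(hi - lo))
        out[p] = (min(deltas), max(deltas))
    return out
```

With two uncertain parameters only four corner evaluations are needed, which is the computational saving the fixed-uncertainty-level analysis exploits.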
15

IMPROVING REAL-TIME LATENCY PERFORMANCE ON COTS ARCHITECTURES

Bono, John, Hauck, Preston 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Telemetry systems designed to support the current needs of mission-critical applications often have stringent real-time requirements. These systems must guarantee a maximum worst-case processing and response time when incoming data is received. These real-time tolerances continue to tighten as data rates increase. At the same time, end user requirements for COTS pricing efficiencies have forced many telemetry systems to now run on desktop operating systems like Windows or Unix. While these desktop operating systems offer advanced user interface capabilities, they cannot meet the real-time requirements of many mission-critical telemetry applications. Furthermore, attempts to enhance desktop operating systems to support real-time constraints have met with only limited success. This paper presents a telemetry system architecture that offers real-time guarantees while extensively leveraging inexpensive COTS hardware and software components. This is accomplished by partitioning the telemetry system onto two processors. The first processor is a NetAcquire subsystem running a real-time operating system (RTOS). The second processor runs a desktop operating system that hosts the user interface. The two processors are connected by a high-speed Ethernet IP internetwork. This architecture affords an improvement of two orders of magnitude over the real-time performance of a standalone desktop operating system.
16

Application of deterministic 3D SN transport driven dose kernel methods for out-of-field dose assessments in clinical megavoltage radiation therapy

Huang, Mi 07 January 2016 (has links)
With the recent interest in single-fraction Stereotactic Body Radiation Therapy and the emerging prominence of the Rapid Arc radiotherapy technique, capable of delivering a fast and accurate treatment, in-field primary dose and out-of-field dose assessments are becoming increasingly important. Full-physics Monte Carlo calculations are currently regarded as the 'gold standard' for dose assessments of the target and OAR (organ at risk), but they require very long computation times. Current treatment planning methods provide shorter calculation times, but issues such as heterogeneities and model-based parameter calculations pose challenges and affect dose calculation accuracy. This thesis describes a new and fast dose estimation method leveraging parallel computing, called EDK-SN (Electron Dose Kernel-Discrete Ordinates). This method uses hybrid electron dose kernels driven by a linear Boltzmann (discrete ordinates) photon transport method to carry out dose calculations. The method has proven effective for fast and accurate out-of-field whole-body dose calculations, benchmarked against Monte Carlo with isotropic monoenergetic photon sources. This thesis adapts clinical Varian phase space data for use with general Monte Carlo codes, including MCNP, and maps accurate phase space data into the optimized EDK-SN dose calculation method with a 15-year-old patient phantom. The EDK-SN method with improved source-term modeling is demonstrated to fall within the accuracy of the measured golden beam data for a clinical water phantom.
17

Deterministic Distribution of Telemetry and Other Replicated Information

Gustin, Thomas W. 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / Discover how it is now possible to memory-link all man-in-the-loop and machine-in-the-loop elements, as global resources that share information at memory-access speeds, to provide a unified system paradigm that avows: "the data is there, on time, every time." Regardless of configuration, if your past, present, or future system consists of more than one computer, and it interactively mixes information sources and destinations (e.g. Telemetry data streams, I/O interfaces, information processors, etc.) to achieve a highly integrated system, then the critical path to real-time success mandates a high performance, reliable, and deterministic communications methodology. This softwareless, future technology is already successfully sharing information in other real-time markets and applications, and is ready for more challenging applications.
18

Regular and chaotic rotational motions of rigid bodies

Jambersi, Andreyson Bicudo. January 2016 (has links)
Advisor: Samuel da Silva / Committee: Marcio Antonio Bazani / Committee: Marcos Silveira / Abstract: The description and representation of the motion of a rigid body in space can be performed in several ways; the most popular is through the Euler angles, although these are not always the most appropriate choice. The goal of this work is to obtain a mathematical model that describes the motion of a gyroscope in space using classical Newton-Euler mechanics, to parameterize the inverse kinematics problem with both Euler angles and quaternions, and to obtain the numerical solution, as well as to analyze the behavior of this system under torques that are functions of the body's angular velocities. The results of the parameterizations are compared and the advantages of each are highlighted.
From this model, the case where the external torques are fed back from the angular velocities along the body's principal directions of inertia is also studied; in these situations the gyroscope exhibits chaos. It is noted that, for certain parameter values, the Euler equations of the gyroscope take the form of the Lorenz, Chen, and Lü-Chen systems, and strange attractors can be seen in the phase space / Master's
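Euler's equations with angular-velocity feedback torques, as studied above, can be integrated numerically with a short sketch. The gains and inertias here are illustrative placeholders; the specific parameter values that reproduce the Lorenz or Chen systems are not shown:

```python
def euler_rigid_body(omega, inertia, gains, dt, steps):
    """Integrate Euler's equations for a rigid body with linear
    angular-velocity feedback torques, using 4th-order Runge-Kutta.

    omega: initial (w1, w2, w3); inertia: principal moments (I1, I2, I3);
    gains: feedback gains (k1, k2, k3) so that torque_i = k_i * w_i.
    Returns the list of angular-velocity states along the trajectory.
    """
    I1, I2, I3 = inertia
    k1, k2, k3 = gains

    def f(w):
        # Euler's equations: I_i * dw_i/dt = (I_j - I_k) w_j w_k + torque_i
        w1, w2, w3 = w
        return (
            ((I2 - I3) * w2 * w3 + k1 * w1) / I1,
            ((I3 - I1) * w3 * w1 + k2 * w2) / I2,
            ((I1 - I2) * w1 * w2 + k3 * w3) / I3,
        )

    traj = [omega]
    w = omega
    for _ in range(steps):
        a = f(w)
        b = f(tuple(w[i] + 0.5 * dt * a[i] for i in range(3)))
        c = f(tuple(w[i] + 0.5 * dt * b[i] for i in range(3)))
        d = f(tuple(w[i] + dt * c[i] for i in range(3)))
        w = tuple(w[i] + dt / 6 * (a[i] + 2 * b[i] + 2 * c[i] + d[i])
                  for i in range(3))
        traj.append(w)
    return traj
```

With zero gains and a spherical body (equal inertias) the angular velocity stays constant, a quick sanity check; with asymmetric inertias and suitable feedback gains the trajectory can wander onto a strange attractor.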
19

Optimizing Safety Stock Placement in General Network Supply Chains

Graves, Stephen C., Lesnaia, Ekaterina 01 1900 (has links)
In this paper, we minimize the holding cost of the safety stock held in a supply chain modeled as a general network. We assume that demand is bounded by a concave function, which allows us to formulate the problem as a deterministic optimization: minimizing a concave function over a discrete polyhedron. The main goal of the paper is to describe a branch and bound algorithm that solves the problem without assuming any particular structure of the underlying supply chain. / Singapore-MIT Alliance (SMA)
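A generic branch and bound skeleton of the kind described — here specialized to binary decision vectors with a user-supplied lower-bounding function — might look like the following. This is an illustrative sketch, not the authors' algorithm for general supply-chain networks:

```python
import heapq

def branch_and_bound(n, objective, lower_bound):
    """Generic best-first branch and bound over binary decision vectors.

    Explores partial assignments (prefixes); lower_bound(prefix, n) must
    underestimate the objective of every completion of `prefix` — for a
    concave objective, evaluating at extreme completions gives such a bound.
    """
    best_val, best_x = float('inf'), None
    heap = [(lower_bound((), n), ())]
    while heap:
        bound, prefix = heapq.heappop(heap)
        if bound >= best_val:
            continue  # prune: no completion can beat the incumbent
        if len(prefix) == n:
            if objective(prefix) < best_val:
                best_val, best_x = objective(prefix), prefix
            continue
        for bit in (0, 1):  # branch on the next decision
            child = prefix + (bit,)
            b = lower_bound(child, n)
            if b < best_val:
                heapq.heappush(heap, (b, child))
    return best_val, best_x
```

For a concave objective the bound at a node can be taken as the minimum of the objective at the extreme completions, mirroring the fact that a concave function over a polyhedron attains its minimum at a vertex.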
20

A New Method for Coupling 2D and 3D Deterministic and Stochastic Radiation Transport Calculations

Kulesza, Joel Aaron 01 August 2011 (has links)
The objective of this body of work was to produce a code system capable of processing boundary angular flux data from discrete ordinates calculations, in 2D and 3D Cartesian and cylindrical geometries, into cumulative probability density functions that can be used with a Monte Carlo radiation transport code to define neutron and photon initial positions, directions, and energies. To accomplish this goal, the DISCO (DetermInistic-Stochastic Coupling Operation) code was created to interface between the DORT and TORT deterministic radiation transport codes and the MCNP stochastic radiation transport code. DISCO introduces new methods that use the boundary angular flux data, along with information about the deterministic quadrature sets and spatial mesh structure, to create cumulative probability density functions that are passed to MCNP for sampling within the source.F90 subroutine that was also generated as part of this work. Operating in concert, DISCO and the MCNP source.F90 subroutine create a source term according to the discrete ordinates angular flux information. To validate the work described herein, 24 test cases were created to exercise the different geometries and execution modes available. The results of these test cases confirm that the methodology and corresponding implementation are appropriate and functioning correctly. Furthermore, this work incorporates several novel features, such as compatibility with all 2D and 3D Cartesian and cylindrical geometries, an angular and spatial indexing scheme that reduces random sampling operations, streamlined process execution, and the ability of the resulting Monte Carlo code to operate in either serial or parallel mode.
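The core operation of collapsing binned boundary-flux weights into a cumulative distribution that a Monte Carlo code can invert by sampling a uniform variate might be sketched as follows. This is illustrative only; DISCO's actual indexing over quadrature directions, energy groups, and mesh cells is considerably more involved:

```python
import bisect
import random

def build_cdf(weights):
    """Collapse non-negative bin weights into a cumulative distribution."""
    total = float(sum(weights))
    cdf, running = [], 0.0
    for w in weights:
        running += w / total
        cdf.append(running)
    cdf[-1] = 1.0  # guard against floating-point round-off
    return cdf

def sample_bin(cdf, rng):
    """Pick a bin index by inverting the CDF with a uniform variate."""
    return bisect.bisect_left(cdf, rng.random())
```

Zero-weight bins are never selected, and binary search keeps each sample at O(log m) for m bins, the kind of saving the indexing scheme above targets.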
