261

Planning and exploring under uncertainty

Murphy, Elizabeth M. January 2010
Scalable autonomy requires a robot to be able to recognize and contend with the uncertainty in its knowledge of the world stemming from its noisy sensors and actuators. The regions it chooses to explore, and the paths it takes to get there, must take this uncertainty into account. In this thesis we outline probabilistic approaches to represent that world; to construct plans over it; and to determine which part of it to explore next. We present a new technique to create probabilistic cost maps from overhead imagery, taking into account the uncertainty in terrain classification and allowing for spatial variation in terrain cost. A probabilistic cost function combines the output of a multi-class classifier and a spatial probabilistic regressor to produce a probability density function over terrain for each grid cell in the map. The resultant cost map facilitates the discovery of not only the shortest path between points on the map, but also a distribution of likely paths between the points. These cost maps are used in a path planning technique which allows the user to trade off the risk of returning a suboptimal path for substantial increases in search speed. We precompute a probability distribution which closely approximates the true distance between any grid cell in the map and the goal cell. This distribution underpins a number of A* search heuristics we present, which can characterize and bound the risk we are prepared to take in gaining search efficiency while sacrificing optimal path length. Empirically, we report efficiency increases in excess of 70% over standard heuristic search methods. Finally, we present a global approach to the problem of robotic exploration, utilizing a hybrid of a topological data structure and an underlying metric mapping process. A ‘Gap Navigation Tree’ is used to motivate global target selection, and occluded regions of the environment (‘gaps’) are tracked probabilistically using the metric map.
In pursuing these gaps we are provided with goals to feed to the path planning process en route to a complete exploration of the environment. The combination of these three techniques represents a framework to facilitate robust exploration in a priori unknown environments.
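The risk-for-speed trade-off described above can be illustrated with plain weighted A*, in which inflating an admissible heuristic by a factor eps bounds the returned path cost by eps times the optimum. This is a generic sketch, not the thesis's precomputed distributional heuristics, and the grid costs below are invented:

```python
import heapq

def weighted_astar(grid, start, goal, eps=1.0):
    """Weighted A* on a 4-connected grid of per-cell traversal costs.

    With eps == 1 the Manhattan heuristic is admissible and the result is
    optimal; eps > 1 trades bounded suboptimality (factor eps) for speed."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(eps * h(start), 0.0, start)]
    best_g = {start: 0.0}
    expanded = 0
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g, expanded
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry
        expanded += 1
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid[nr][nc]  # cost of entering the neighbour cell
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + eps * h((nr, nc)), ng, (nr, nc)))
    return float("inf"), expanded

grid = [[1, 1, 1, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
cost_opt, exp_opt = weighted_astar(grid, (0, 0), (2, 3), eps=1.0)
cost_fast, exp_fast = weighted_astar(grid, (0, 0), (2, 3), eps=3.0)
```

The thesis goes further by bounding risk with a precomputed distribution over true distances rather than a fixed inflation factor, but the cost-versus-expansions tension is the same.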
262

Analytical model for phonon transport analysis of periodic bulk nanoporous structures

Hao, Qing, Xiao, Yue, Zhao, Hongbo 25 January 2017
Phonon transport analysis in nano- and micro-porous materials is critical to their energy-related applications. Assuming diffusive phonon scattering by pore edges, the lattice thermal conductivity can be predicted by modifying the bulk phonon mean free paths with the characteristic length of the nanoporous structure, i.e., the phonon mean free path (Lambda(pore)) for the pore-edge scattering of phonons. In previous studies (Jean et al., 2014), a Monte Carlo (MC) technique has been employed to extract geometry-determined Lambda(pore) for nanoporous bulk materials with selected periods and porosities. In other studies (Minnich and Chen, 2007; Machrafi and Lebon, 2015), simple expressions have been proposed to compute Lambda(pore). However, some divergence can often be found between lattice thermal conductivities predicted by phonon MC simulations and by analytical models using Lambda(pore). In this work, the effective Lambda(pore) values are extracted by matching the frequency-dependent phonon MC simulations with the analytical model for nanoporous bulk Si. The obtained Lambda(pore) values are usually smaller than those given by the analytical expressions. These new values are further confirmed by frequency-dependent phonon MC simulations on nanoporous bulk Ge. By normalizing the volumetric surface area A and Lambda(pore) with the period length p, the same curve can be used for bulk materials with aligned cubic or spherical pores up to a dimensionless p·A of 1.5. Available experimental data for nanoporous Si materials are further analyzed with the new Lambda(pore) values. In practice, the proposed model can be employed for the thermal analysis of various nanoporous materials and thus replace the time-consuming phonon MC simulations.
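A minimal sketch of the modeling idea, assuming Matthiessen's rule for combining scattering mechanisms and a Minnich-and-Chen-style scattering-centre estimate of Lambda(pore). The numerical values are illustrative placeholders, not the paper's fitted Si/Ge data:

```python
import math

def pore_mfp_scattering_centre(period, diameter):
    """Analytical pore-scattering mean free path in the style of
    Minnich and Chen (2007): pores are treated as scattering centres
    with number density n = 1/p^3 and geometric cross-section
    sigma = pi d^2 / 4, giving Lambda(pore) = 1 / (n * sigma)."""
    n = 1.0 / period**3                   # pores per unit volume
    sigma = math.pi * diameter**2 / 4.0   # projected area of one pore
    return 1.0 / (n * sigma)

def effective_mfp(lambda_bulk, lambda_pore):
    """Combine bulk and pore-edge scattering via Matthiessen's rule:
    1/Lambda_eff = 1/Lambda_bulk + 1/Lambda_pore."""
    return 1.0 / (1.0 / lambda_bulk + 1.0 / lambda_pore)

# Illustrative numbers in nm (hypothetical, not from the paper).
p, d = 100.0, 50.0
lam_pore = pore_mfp_scattering_centre(p, d)
lam_eff = effective_mfp(300.0, lam_pore)
```

The paper's contribution is that the effective Lambda(pore) extracted from frequency-dependent MC simulations is generally smaller than such closed-form estimates.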
263

Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications

Donachy, Shaun 01 January 2015
Networks of spiking neurons can be used not only for brain modeling but also to solve graph problems. With the use of a computationally efficient Izhikevich neuron model combined with plasticity rules, the networks possess self-organizing characteristics. Two different time-based synaptic plasticity rules are used to adjust weights among nodes in a graph, resulting in solutions to graph problems such as finding the shortest path and clustering.
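The Izhikevich model the abstract relies on is compact enough to sketch directly. The parameters below are the standard regular-spiking values from Izhikevich's published model; the forward-Euler step size is a simulation choice of this sketch, not taken from the thesis:

```python
def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, steps=4000, dt=0.25):
    """Count spikes of a single Izhikevich neuron under constant input I.

    Dynamics:  v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
    Reset:     if v >= 30 mV then v <- c and u <- u + d.
    Integrated with forward Euler (dt in ms)."""
    v, u = c, b * c       # start at the resting state
    spikes = 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:     # spike: reset membrane potential and recovery variable
            v, u = c, u + d
            spikes += 1
    return spikes
```

A sufficiently strong constant current drives repetitive firing, while the unstimulated neuron settles to rest; the thesis builds networks of such units and lets time-based plasticity rules shape the weights.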
264

The Career Path of the Female Superintendent: Why She Leaves

Robinson, Kerry 11 March 2013
This qualitative study used a phenomenological approach to determine the reasons why women leave the superintendency. This study not only illustrated the different ways a woman can leave the position of superintendent but also the reasons she would choose to leave. These reasons can be either positive or negative, but they rarely are the sole cause for why a woman leaves the position. This interview study of 20 female participants who served as superintendent in the Commonwealth of Virginia identified four main themes as to why a woman chose to leave the superintendency. These included: (a) it wasn’t the job I thought it would be; (b) the struggles with family; (c) taking care of herself; and (d) I’m not the right fit for the community. The study also identified the routes women take to leave the superintendency which include retirement, leaving for another superintendency, movement into another position within PK-12, opportunity in higher education, working as an educational consultant, or moving into a position outside of education.
265

Reciprocal Relations Between Traumatic Stress and Physical Aggression During Middle School

Thompson, Erin L 01 January 2016
There is convincing evidence that traumatic stress and aggressive behavior are highly related among adolescents. The evidence is less clear regarding the direction of this relation. The purpose of this study was to examine the reciprocal longitudinal relations between physical aggression and traumatic stress among a predominantly African American sample of middle school students. Support was found for traumatic stress predicting increased levels of physical aggression from the winter to the spring of the sixth grade for boys, and across all waves from the fall of the seventh grade to the fall of the eighth grade for both boys and girls. Conversely, physical aggression during the winter of the sixth grade predicted a decrease in traumatic stress in the spring of the sixth grade for both boys and girls. These findings suggest that interventions may need to incorporate skills aligned with trauma-informed care practices in order to reduce traumatic stress and physical aggression among adolescents.
266

XML Query Adaptation

Polák, Marek January 2011
In the presented work we study XML schema evolution, its types, and its impact on queries that reference the particular schema. The thesis contains a review of existing approaches to this problem. The approach presented in this work shows how related queries can be adapted as the schema evolves. The thesis contains a description of an algorithm which modifies queries related to the evolved schema. Finally, the work contains a number of experiments that verify the proposed algorithms and show their advantages and disadvantages.
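As a toy illustration of the problem being solved (not the thesis's algorithm), one simple kind of schema evolution, an element rename, can be propagated into the location steps of a path query. The element names below are hypothetical:

```python
def adapt_query(xpath, renames):
    """Rewrite the location steps of a simple XPath-like query after
    schema element renames. A toy stand-in for query adaptation: real
    evolutions also add, remove, and restructure elements."""
    steps = xpath.split("/")
    return "/".join(renames.get(step, step) for step in steps)

# Hypothetical evolution step: the schema renames 'author' to 'creator'.
renames = {"author": "creator"}
adapted = adapt_query("/book/author/name", renames)
```

Queries whose steps are untouched by the evolution pass through unchanged, which is the trivial case; the thesis's algorithm handles richer evolution operations over the schema.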
267

Code Clone Discovery Based on Concolic Analysis

Krutz, Daniel Edward 01 January 2013
Software is often large, complicated and expensive to build and maintain. Redundant code can make these applications even more costly and difficult to maintain. Duplicated code is often introduced into these systems for a variety of reasons, including developer churn, deficient application comprehension among developers, and lack of adherence to proper development practices. Code redundancy has several adverse effects on a software application, including an increased size of the codebase and inconsistent developer changes due to elevated program comprehension needs. A code clone is defined as multiple code fragments that produce similar results when given the same input. There are generally four recognized types of clones, ranging from simple type-1 and type-2 clones to the more complicated type-3 and type-4 clones. Numerous clone detection mechanisms are able to identify the simpler types of code clone candidates, but far fewer claim the ability to find the more difficult type-3 clones. Before CCCD, MeCC and FCD were the only clone detection techniques capable of finding type-4 clones. A drawback of MeCC is the excessive time required to detect clones and the likely exploration of an unreasonably large number of possible paths. FCD requires extensive amounts of random data and a significant period of time in order to discover clones. This dissertation presents a new process for discovering code clones known as Concolic Code Clone Discovery (CCCD). This technique discovers code clone candidates based on the functionality of the application, not its syntactical nature. This means that naming conventions and comments in the source code have no effect on the proposed clone detection process. CCCD finds clones by first performing concolic analysis on the targeted source code. Concolic analysis combines concrete and symbolic execution in order to traverse all possible paths of the targeted program.
These paths are represented by the generated concolic output. A diff tool is then used to determine if the concolic output for a method is identical to the output produced for another method. Duplicated output is indicative of a code clone. CCCD was validated against several open source applications along with clones of all four types as defined by previous research. The results demonstrate that CCCD was able to detect all types of clone candidates with a high level of accuracy. In the future, CCCD will be used to examine how software developers work with type-3 and type-4 clones. CCCD will also be applied to various areas of security research, including intrusion detection mechanisms.
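A much-simplified sketch of the underlying idea: two fragments are clone candidates when they behave identically despite different syntax. CCCD itself diffs concolic-execution output rather than running code on concrete inputs, so the check below is an analogy for functional (type-4) clone detection, not the dissertation's method:

```python
def functionally_equivalent(f, g, inputs):
    """Declare two code fragments clone candidates if they produce
    identical results on the same inputs. (CCCD instead compares the
    output of concolic analysis, which covers all program paths.)"""
    return all(f(x) == g(x) for x in inputs)

def sum_loop(n):
    """Sum 1..n with an explicit loop."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """Sum 1..n via the closed form: a type-4 clone of sum_loop,
    sharing no syntax but computing the same function."""
    return n * (n + 1) // 2

clone = functionally_equivalent(sum_loop, sum_formula, range(50))
```

Because the comparison is behavioral, renaming variables or adding comments to either fragment cannot hide the clone, which mirrors CCCD's insensitivity to syntactic detail.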
268

Accessing Geospatial Services in Limited Bandwidth Service-Oriented Architecture (SOA) Environments

Boggs, James Darrell 01 January 2013
First responders are continuously moving at an incident site, and this movement requires them to access Service-Oriented Architecture services, such as a Web Map Service, via mobile wireless networks. First responders inside a building often have problems communicating with devices outside that building due to propagation obstacles. Dynamic user geometry and the propagation conditions of communicating from inside buildings to transceivers on the outside are difficult to model reliably in network planning software. Thus, leading commercial and open-source network simulation software does not model wireless links between transceivers inside and outside of buildings; new modeling software is needed. The discrete simulation runs in this investigation were built on events in a scenario that is typical of first-responder activities at an incident site. This scenario defined the geometry and node characteristics that were used in a mobile wireless network simulation to calculate expected connectivity based on propagation modeling, transceiver characteristics, and the environment. The author implemented in software a propagation model from the United States National Institute of Standards and Technology (NIST) to simulate radio wave propagation path loss during the scenario. Modifications to the NIST model propagation path loss method were generated to improve consistency in results calculated with the same node separation distances and radio wave obstacle environments. The final set of modifications made the NIST model more generalized by using more building material characteristics than the original version. The modifications in this study to the path loss model from NIST engineers were grounded on ad hoc network connectivity data collected at the operational scenario site.
After changes in the NIST model were validated, 1,265 operational simulation runs were conducted with different numbers of deployed nodes in an operational incident-response scenario. Data were reduced and analyzed to compare measures of mobile ad hoc network effectiveness. Findings in this investigation resulted in two specific contributions to the body of knowledge in mobile wireless network design. First, data analysis indicated that specific changes to a recent path loss model from NIST produced results that were more generalized than the original model with respect to accommodating different building materials and enhancing the consistency of simulation results. Second, the results from the modified path loss model revealed an operational impact in using relay nodes to support public safety. Specifically, placing relay nodes at the entrance to a building and on odd-numbered floors improved connectivity in terms of first responders' accessing Web Services via mobile network devices, when moving through a building in an incident scenario.
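The kind of calculation involved can be sketched with a generic log-distance path loss model plus per-obstacle attenuation terms. The exponent and wall losses below are placeholders; the NIST model modified in the dissertation uses measured building-material characteristics, not these values:

```python
import math

def path_loss_db(d, n=2.0, pl0=40.0, d0=1.0, wall_losses=()):
    """Generic log-distance path loss in dB:

        PL(d) = PL(d0) + 10 n log10(d / d0) + sum of per-wall losses

    n is the path loss exponent (2.0 = free space), pl0 the reference
    loss at distance d0, and wall_losses the attenuation (dB) of each
    wall or floor the link crosses. Illustrative values only."""
    return pl0 + 10.0 * n * math.log10(d / d0) + sum(wall_losses)

free = path_loss_db(20.0)
# Hypothetical link crossing two interior partitions (e.g. 6 dB and 4 dB).
through_walls = path_loss_db(20.0, wall_losses=(6.0, 4.0))
```

In such a model each crossed wall or floor adds a fixed penalty, which is why relay placement (at the building entrance, on alternating floors) changes which links stay above the receiver sensitivity threshold.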
269

Low-Complexity Algorithms for Echo Cancellation in Audio Conferencing Systems

Schüldt, Christian January 2012
Ever since the birth of the telephony system, the problem with echoes, arising from impedance mismatch in 2/4-wire hybrids, or acoustic echoes where a loudspeaker signal is picked up by a closely located microphone, has been ever present. The removal of these echoes is crucial in order to achieve an acceptable audio quality for conversation. Today, the perhaps most common way for echo removal is through cancellation, where an adaptive filter is used to produce an estimated replica of the echo which is then subtracted from the echo-infested signal. Echo cancellation in practice requires extensive control of the filter adaptation process in order to obtain as rapid convergence as possible while also achieving robustness towards disturbances. Moreover, despite the rapid advancement in the computational capabilities of modern digital signal processors there is a constant demand for low-complexity solutions that can be implemented using low power and low cost hardware. This thesis presents low-complexity solutions for echo cancellation related to both the actual filter adaptation process itself as well as for controlling the adaptation process in order to obtain a robust system. Extensive simulations and evaluations using real world recorded signals are used to demonstrate the performance of the proposed solutions.
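The adaptive-filter core the abstract refers to can be sketched with a plain normalized LMS (NLMS) echo canceller. The delay, gain, filter length, and step size below are toy values for illustration, not the thesis's proposed low-complexity structures:

```python
import random

def nlms_echo_cancel(far_end, mic, taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter: estimate the echo path from the
    far-end (loudspeaker) signal, subtract the echo estimate from the
    microphone signal, and return the echo-cancelled error signal."""
    w = [0.0] * taps          # adaptive filter coefficients
    buf = [0.0] * taps        # recent far-end samples, newest first
    out = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))      # echo estimate
        e = d - y                                       # cancellation error
        norm = eps + sum(xi * xi for xi in buf)         # input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Toy setup: the "echo" is a delayed, attenuated copy of the far-end
# signal with no near-end talker present.
random.seed(0)
far = [random.uniform(-1.0, 1.0) for _ in range(4000)]
echo = [0.0, 0.0] + [0.6 * x for x in far[:-2]]   # 2-sample delay, gain 0.6
err = nlms_echo_cancel(far, echo)
residual = sum(e * e for e in err[-500:]) / 500
```

The practical difficulties the thesis addresses sit on top of this core: controlling mu during double-talk and echo-path changes so that adaptation is both fast and robust, at low computational cost.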
270

Sparse representations and quadratic approximations in path integral techniques for stochastic response analysis of diverse systems/structures

Psaros Andriopoulos, Apostolos January 2019
Uncertainty propagation in engineering mechanics and dynamics is a highly challenging problem that requires development of analytical/numerical techniques for determining the stochastic response of complex engineering systems. In this regard, although Monte Carlo simulation (MCS) has been the most versatile technique for addressing the above problem, it can become computationally daunting when faced with high-dimensional systems or with computing very low probability events. Thus, there is a demand for pursuing more computationally efficient methodologies. Recently, a Wiener path integral (WPI) technique, whose origins can be found in theoretical physics, has been developed in the field of engineering dynamics for determining the response transition probability density function (PDF) of nonlinear oscillators subject to non-white, non-Gaussian and non-stationary excitation processes. In the present work, the Wiener path integral technique is enhanced, extended and generalized with respect to three main aspects; namely, versatility, computational efficiency and accuracy. Specifically, the need for increasingly sophisticated modeling of excitations has led recently to the utilization of fractional calculus, which can be construed as a generalization of classical calculus. Motivated by the above developments, the WPI technique is extended herein to account for stochastic excitations modeled via fractional-order filters. To this aim, relying on a variational formulation and on the most probable path approximation yields a deterministic fractional boundary value problem to be solved numerically for obtaining the oscillator joint response PDF. Further, appropriate multi-dimensional bases are constructed for approximating, in a computationally efficient manner, the non-stationary joint response PDF. In this regard, two distinct approaches are pursued. 
The first employs expansions based on Kronecker products of bases (e.g., wavelets), while the second utilizes representations based on positive definite functions. Next, the localization capabilities of the WPI technique are exploited for determining PDF points in the joint space-time domain to be used for evaluating the expansion coefficients at a relatively low computational cost. Subsequently, compressive sampling procedures are employed in conjunction with group sparsity concepts and appropriate optimization algorithms for decreasing even further the associated computational cost. It is shown that the herein developed enhancement renders the technique capable of treating readily relatively high-dimensional stochastic systems. More importantly, it is shown that this enhancement in computational efficiency becomes more prevalent as the number of stochastic dimensions increases; thus, rendering the herein proposed sparse representation approach indispensable, especially for high-dimensional systems. Next, a quadratic approximation of the WPI is developed for enhancing the accuracy degree of the technique. Concisely, following a functional series expansion, higher-order terms are accounted for, which is equivalent to considering not only the most probable path but also fluctuations around it. These fluctuations are incorporated into a state-dependent factor by which the exponential part of each PDF value is multiplied. This localization of the state-dependent factor yields superior accuracy as compared to the standard most probable path WPI approximation where the factor is constant and state-invariant. An additional advantage relates to efficient structural reliability assessment, and in particular, to direct estimation of low probability events (e.g., failure probabilities), without possessing the complete transition PDF. 
Overall, the developments in this thesis render the WPI technique a potent tool for determining, in a reliable manner and with a minimal computational cost, the stochastic response of nonlinear oscillators subject to an extended range of excitation processes. Several numerical examples, pertaining to both nonlinear dynamical systems subject to external excitations and to a special class of engineering mechanics problems with stochastic media properties, are considered for demonstrating the reliability of the developed techniques. In all cases, the degree of accuracy and the computational efficiency exhibited are assessed by comparisons with pertinent MCS data.
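As a sketch of the MCS baseline against which the WPI results are assessed, an Euler-Maruyama simulation of a white-noise-driven linear oscillator can be checked against its known stationary displacement variance. All parameter values are illustrative, not taken from the thesis's examples:

```python
import math
import random

def mcs_oscillator_variance(zeta=0.1, omega=1.0, D=0.2,
                            n_paths=400, T=40.0, dt=0.02, seed=1):
    """Crude Monte Carlo estimate (Euler-Maruyama) of the stationary
    displacement variance of the linear oscillator

        x'' + 2 zeta omega x' + omega^2 x = xi(t),
        E[xi(t) xi(s)] = 2 D delta(t - s),

    whose exact stationary variance is D / (2 zeta omega^3)."""
    rng = random.Random(seed)
    n_steps = int(T / dt)
    finals = []
    for _ in range(n_paths):
        x, v = 0.0, 0.0
        for _ in range(n_steps):
            x += v * dt
            v += (-2.0 * zeta * omega * v - omega**2 * x) * dt \
                 + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        finals.append(x)  # sample the displacement at time T
    mean = sum(finals) / n_paths
    return sum((s - mean) ** 2 for s in finals) / (n_paths - 1)

var_mcs = mcs_oscillator_variance()
var_exact = 0.2 / (2.0 * 0.1 * 1.0**3)   # D / (2 zeta omega^3) = 1.0
```

The statistical error of such an estimate shrinks only as the square root of the number of paths, which is the computational burden that motivates WPI-type techniques for nonlinear oscillators and low-probability events.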
