131

AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR

Hamraz, Hamid 01 January 2018 (has links)
Traditional forest management relies on a small field sample and interpretation of aerial photography that not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. We detected 90% of overstory and 47% of understory trees, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble features.
In conclusion, the methods developed are steps forward to remote, accurate quantification of large natural forests at the individual tree level.
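The vertical stratification step can be illustrated with a minimal sketch (not the dissertation's actual algorithm): LiDAR return heights are split into canopy layers wherever a vertical gap larger than a threshold separates consecutive returns. The `min_gap` threshold and the sample heights below are hypothetical.

```python
def stratify(heights, min_gap=2.0):
    """Group LiDAR return heights (metres) into canopy layers, top layer first.

    A new layer starts whenever two consecutive sorted returns are separated
    by a vertical gap larger than min_gap.
    """
    layers, current = [], []
    for h in sorted(heights, reverse=True):
        if current and current[-1] - h > min_gap:
            layers.append(current)   # close the current layer at the gap
            current = []
        current.append(h)
    if current:
        layers.append(current)
    return layers

# Example: overstory returns near 25 m, understory returns near 8 m
layers = stratify([24.8, 25.3, 26.1, 7.9, 8.4, 8.0])
print(len(layers))  # 2
```

A real pipeline would apply this per tree-crown segment and handle ground normalization first; the sketch only shows the gap-based layering idea.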
132

HIGH-ORDER INTEGRAL EQUATION METHODS FOR QUASI-MAGNETOSTATIC AND CORROSION-RELATED FIELD ANALYSIS WITH MARITIME APPLICATIONS

Pfeiffer, Robert 01 January 2018 (has links)
This dissertation presents techniques for high-order simulation of electromagnetic fields, particularly for problems involving ships with ferromagnetic hulls and active corrosion-protection systems. A set of numerically constrained hexahedral basis functions for volume integral equation discretization is presented in a method-of-moments context. Test simulations demonstrate the accuracy achievable with these functions as well as the improvement brought about in system conditioning when compared to other basis sets. A general method for converting between a locally-corrected Nyström discretization of an integral equation and a method-of-moments discretization is presented next. Several problems involving conducting and magnetic-conducting materials are solved to verify the accuracy of the method and to illustrate both the reduction in the number of unknowns and the effect of the numerically constrained bases on the conditioning of the converted matrix. Finally, a surface integral equation derived from Laplace's equation is discretized using the locally-corrected Nyström method in order to calculate the electric fields created by impressed-current corrosion protection systems. An iterative technique is presented for handling nonlinear boundary conditions. In addition, we examine different approaches for calculating the magnetic field radiated by the corrosion protection system. Numerical tests show the accuracy achievable by higher-order discretizations and validate the iterative technique presented. Various methods for magnetic field calculation are also applied to basic test cases.
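The iterative treatment of a nonlinear boundary condition can be sketched in scalar form (a stand-in for the dissertation's surface formulation): at a cathodically protected surface, the impressed current must balance a linear ohmic term plus a Tafel-like exponential corrosion current, and the surface potential is found by Newton iteration. All constants (`g`, `i0`, `beta`) are illustrative, not the dissertation's values.

```python
import math

def boundary_potential(i_applied=1.0, g=2.0, i0=0.01, beta=0.1):
    """Newton iteration for the nonlinear balance
    g*phi + i0*exp(phi/beta) = i_applied."""
    phi = 0.0
    for _ in range(50):
        f = g * phi + i0 * math.exp(phi / beta) - i_applied
        df = g + (i0 / beta) * math.exp(phi / beta)   # derivative of f
        step = f / df
        phi -= step
        if abs(step) < 1e-12:
            break
    return phi

print(round(boundary_potential(), 3))  # 0.344
```

In the actual surface integral equation setting, this scalar update would be applied at each Nyström quadrature point, with the linear term replaced by the discretized integral operator.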
133

Pin-Wise Loading Optimization and Lattice-to-Core Coupling for Isotopic Management in Light Water Reactors

Hernandez Noyola, Hermilo 01 December 2010 (has links)
A generalized software capability has been developed for the pin-wise loading optimization of light water reactor (LWR) fuel lattices with the enhanced flexibility of control variables that characterize heterogeneous or blended target pins loaded with non-standard compositions, such as minor actinides (MAs). Furthermore, this study has developed the software coupling to evaluate the performance of optimized lattices outside their reflective boundary conditions and within the realistic three-dimensional core-wide environment of an LWR. The illustration of the methodologies and software tools developed helps provide a deeper understanding of the behavior of optimized lattices within a full core environment. The practical applications include the evaluation of the recycling (destruction) of "undesirable" minor actinides from spent nuclear fuel such as Am-241 in a thermal reactor environment, as well as the timely study of planting Np-237 (blended NpO2 + UO2) targets in the guide tubes of typical commercial pressurized water reactor (PWR) bundles for the production of Pu-238, a highly "desirable" radioisotope used as a heat source in radioisotope thermoelectric generators (RTGs). Both of these applications creatively stretch the potential utility of existing commercial nuclear reactors into areas historically reserved for research or hypothetical next-generation facilities. In an optimization sense, control variables include the loadings and placements of materials: U-235, burnable absorbers, and MAs (Am-241 or Np-237), while the objective functions are either the destruction (minimization) of Am-241 or the production (maximization) of Pu-238. The constraints include the standard reactivity and thermal operational margins of a commercial nuclear reactor.
Aspects of the optimization, lattice-to-core coupling, and tools herein developed were tested in a concurrent study (Galloway, 2010) in which heterogeneous lattices developed by this study were coupled to three-dimensional boiling water reactor (BWR) core simulations and showed incineration rates of Am-241 targets of around 90%. This study focused primarily upon PWR demonstrations, whereby a benchmarked reference equilibrium core was used as a test bed for MA-spiked lattices and was shown to satisfy standard PWR reactivity and thermal operational margins while exhibiting consistently high destruction rates of Am-241 and Np to Pu conversion rates of approximately 30% for the production of Pu-238.
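The shape of the optimization problem (discrete control variables, a production objective, and a reactivity constraint) can be sketched with a toy surrogate; this is not the study's lattice physics, and the coefficients, enrichment grid, and target-pin counts below are all hypothetical.

```python
import itertools

def optimize_lattice():
    """Grid-search two control variables (U-235 enrichment %, number of
    hypothetical NpO2 target pins) to maximize a surrogate Pu-238 yield
    subject to a surrogate reactivity margin on k-infinity."""
    best = None
    for enr, n_np in itertools.product([3.0, 3.5, 4.0, 4.5], range(0, 25, 4)):
        k_inf = 1.00 + 0.03 * (enr - 3.0) - 0.002 * n_np  # surrogate reactivity
        pu238 = 0.3 * n_np                                # surrogate conversion
        if 1.00 <= k_inf <= 1.08:                         # operational margin
            if best is None or pu238 > best[2]:
                best = (enr, n_np, pu238)
    return best

enr, n_targets, pu_yield = optimize_lattice()
print(enr, n_targets)  # 4.5 20
```

The real study evaluates each candidate loading with lattice physics codes rather than a closed-form surrogate, but the constrained-search structure is the same.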
134

Ontology-Guided Multimedia Information Retrieval Platform in a Peer-to-Peer Architecture

Sokhn, Maria 26 August 2011 (has links) (PDF)
Over the last decade, we have witnessed exponential growth in digital documents and multimedia resources, including a massive presence of video resources. Videos have become increasingly popular thanks to the rich audiovisual and textual content they convey. Recent technological advances have made this large quantity of multimedia resources available to users across a variety of domains, including academic and scientific ones. However, without adequate content-based techniques, this mass of valuable data is difficult to access and remains effectively unusable. This thesis explores semantic approaches to the management, browsing, and visualization of the multimedia resources generated by scientific conferences. A gap, called the semantic gap, exists between the explicit knowledge representation required by users searching for multimedia resources and the implicit knowledge conveyed throughout the life cycle of a conference. The goal of this work is to provide users with a platform that improves multimedia information retrieval for conferences by narrowing this semantic gap. The objective of this thesis is thus to provide a new approach to content-based multimedia information retrieval in the domain of scientific conferences.
135

An Empirical Investigation of a Collaborative Web Search Tool on Novices' Query Behavior

Al-Sammarraie, Mareh Fakhir 01 January 2017 (has links)
In the past decade, research efforts dedicated to studying the process of collaborative web search have been on the rise. Yet, a limited number of studies have examined the impact of collaborative information search processes on novices' query behaviors. Studying and analyzing factors that influence web search behaviors, specifically users' patterns of queries when using collaborative search systems, can help with making query suggestions for group users. Improvements in user query behaviors and system query suggestions help in reducing search time and increasing query success rates for novices. This thesis investigates the influence of collaboration between experts and novices, as well as the use of a collaborative web search tool, on novices' query behavior. We used SearchTeam as our collaborative search tool. This empirical study involves four team conditions: SearchTeam with an expert-novice team, SearchTeam with a novice-novice team, traditional search with an expert-novice team, and traditional search with a novice-novice team. We analyzed participants' query behavior in two dimensions: quantitatively (e.g., query success rate) and qualitatively (e.g., query reformulation patterns). The findings of this study reveal that the query success rate was higher in expert-novice teams that used the collaborative search tool. Expert-novice teams using the collaborative search tool also required less time to finalize all tasks than expert-novice teams using traditional search tools. Self-issued queries and chat logs were the major sources of terms for novice participants in expert-novice teams using the collaborative search tool. Novices in expert-novice pairs using the collaborative search tool employed New and Specialization more often as query reformulation patterns.
The results of this study contribute to the literature by providing a detailed investigation of the influence of utilizing a collaborative search tool (SearchTeam) in the context of software troubleshooting and development. This study highlights the possible collaborative information seeking (CIS) activities that may occur among software developer interns and their mentors. Furthermore, our study reveals that specific features offered by SearchTeam, such as awareness and built-in instant messaging (IM), can promote CIS activities among participants and help increase novices' query success rates. Finally, we believe the use of CIS tools, designed to support collaborative search actions in large software development companies, has the potential to improve novices' overall query behavior and search strategies.
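The reformulation patterns named above (New, Specialization) can be illustrated with a minimal classifier that compares the term sets of consecutive queries; this is a common coding scheme in the query-log literature, and the thesis's exact coding rules may differ.

```python
def reformulation_pattern(prev_query, next_query):
    """Classify how next_query reformulates prev_query by term-set comparison."""
    prev = set(prev_query.lower().split())
    nxt = set(next_query.lower().split())
    if not prev & nxt:
        return "New"             # no shared terms: a fresh query
    if prev < nxt:
        return "Specialization"  # terms added, none dropped
    if nxt < prev:
        return "Generalization"  # terms dropped, none added
    return "Reformulation"       # some terms swapped

print(reformulation_pattern("java error", "java null pointer error"))  # Specialization
print(reformulation_pattern("java error", "python bug"))               # New
```

Applied over a session log, counting these labels per team condition gives exactly the kind of per-pattern frequencies the study compares.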
136

Development of a Methodology that Couples Satellite Remote Sensing Measurements to Spatial-Temporal Distribution of Soil Moisture in the Vadose Zone of the Everglades National Park

Perez, Luis G 06 August 2014 (has links)
Spatial-temporal distribution of soil moisture in the vadose zone is an important aspect of the hydrological cycle that plays a fundamental role in water resources management, including modeling of water flow and mass transport. The vadose zone is a critical transfer and storage compartment, which controls the partitioning of energy and mass linked to surface runoff, evapotranspiration, and infiltration. This dissertation focuses on integrating hydraulic characterization methods with remote sensing technologies to estimate the soil moisture distribution by modeling the spatial coverage of soil moisture in the horizontal and vertical dimensions with high temporal resolution. The methodology consists of using satellite images with an ultrafine 3-m resolution to estimate soil surface moisture content, which is used as a top boundary condition in the hydrologic model SWAP to simulate transport of water in the vadose zone. To demonstrate the methodology herein developed, a number of model simulations were performed to forecast a range of possible moisture distributions in the Everglades National Park (ENP) vadose zone. Intensive field and laboratory experiments were necessary to prepare an area of interest (AOI) and characterize the soils, and a framework was developed on the ArcGIS platform for organizing and processing data, applying a simple sequential data approach in conjunction with SWAP. An error difference of 3.6% was achieved when comparing the radar backscatter coefficient (σ0) to surface volumetric water content (VWC); this result was superior to the 6.1% obtained by Piles during a 2009 NASA SPAM campaign. A registration error (RMSE) of 4% was obtained between model results and observations. These results confirmed the potential use of SWAP to simulate transport of water in the vadose zone of the ENP. Future work in the ENP must incorporate preferential flow, given the great impact of macropores on water and solute transport through the vadose zone.
Among other recommendations, there is a need to develop procedures for measuring the ENP peat shrinkage characteristics due to changes in moisture content in support of the enhanced modeling of soil moisture distribution.
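The satellite-to-surface-moisture step rests on an empirical relation between radar backscatter and volumetric water content; a minimal least-squares sketch of such a calibration is shown below. The σ0 and VWC values are illustrative stand-ins, not the dissertation's field data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

sigma0 = [-14.0, -12.0, -10.0, -8.0]  # radar backscatter coefficient, dB
vwc    = [0.10, 0.20, 0.30, 0.40]     # surface volumetric water content
slope, intercept = fit_line(sigma0, vwc)
print(round(slope, 3), round(intercept, 3))  # 0.05 0.8
```

The fitted VWC at each pixel would then feed the SWAP top boundary condition; the quoted 3.6% error difference is a measure of how well such a σ0-to-VWC mapping holds in the field.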
137

Optimization of Cooling Protocols for Hearts Destined for Transplantation

Abdoli, Abas 10 October 2014 (has links)
The design and analysis of conceptually different cooling systems for human heart preservation are numerically investigated. A heart cooling container with the required connections was designed for a normal-size human heart. A three-dimensional, high-resolution human heart geometric model obtained from CT-angio data was used for simulations. Nine different cooling designs are introduced in this research. The first cooling design (Case 1) used a cooling gelatin only outside of the heart. In the second cooling design (Case 2), the internal parts of the heart were cooled via pumping a cooling liquid inside both the heart's pulmonary and systemic circulation systems. An unsteady conjugate heat transfer analysis is performed to simulate the temperature field variations within the heart during the cooling process. Case 3 simulated the currently used cooling method, in which the coolant is stagnant. Case 4 was a combination of Case 1 and Case 2. A linear thermoelasticity analysis was performed to assess the stresses applied on the heart during the cooling process. In Cases 5 through 9, the coolant solution was used for both internal and external cooling. For external circulation in Case 5 and Case 6, two inlets and two outlets were designed on the walls of the cooling container. Case 5 used laminar flows for coolant circulations inside and outside of the heart. Effects of turbulent flow on cooling of the heart were studied in Case 6. In Case 7, an additional inlet was designed on the cooling container wall to create a jet impinging on the hot region of the heart's wall. Unsteady periodic inlet velocities were applied in Case 8 and Case 9. The average temperature of the heart in Case 5 was +5.0 °C after 1500 s of cooling. Multi-objective constrained optimization was performed for Case 5. Inlet velocities for the two internal and one external coolant circulations were the three design variables for optimization.
Minimizing the average temperature of the heart, wall shear stress and total volumetric flow rates were the three objectives. The only constraint was to keep von Mises stress below the ultimate tensile stress of the heart’s tissue.
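The unsteady cooling process can be illustrated with a one-dimensional explicit finite-difference sketch of tissue between two coolant-contact surfaces; the grid size, diffusivity, and temperatures below are illustrative, not the dissertation's conjugate heat transfer model.

```python
def cool(n=20, steps=2000, alpha=1.4e-7, dx=1e-3, dt=0.5,
         t_init=37.0, t_coolant=4.0):
    """Explicit 1-D heat conduction: tissue slab cooled from both faces.

    alpha: thermal diffusivity (m^2/s); dx: node spacing (m); dt: time step (s).
    Returns the average slab temperature after steps*dt seconds.
    """
    T = [t_init] * n
    r = alpha * dt / dx ** 2          # stability requires r <= 0.5 (here 0.07)
    for _ in range(steps):
        T[0] = T[-1] = t_coolant      # coolant-contact boundary faces
        T = [T[i] if i in (0, n - 1)
             else T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
             for i in range(n)]
    return sum(T) / n

print(round(cool(), 1))
```

The full problem adds coolant flow (convection), 3-D geometry, and the thermoelastic stress coupling, but the time-marching structure is the same.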
138

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014 (has links)
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or the sheer scale of a computational study can sometimes be encapsulated in the form of a workflow that is made up of numerous dependent components. Due to its decomposable and parallelizable nature, different components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools are utilized to help manage and deal with various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e., workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each of them. Our first contribution involves increasing scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains. We devise and implement a generic decentralization framework for orchestration of workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments. We provide generic adaptation mechanisms that are highly transparent and also substantially less intrusive with respect to the rest of the workflow in execution. Our third contribution is to improve the efficiency of orchestration of large-scale parameter-sweep workflows. By exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows.
We also discuss implementation issues and details that arise as we provide our contributions in each situation.
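The core of run-time orchestration is ordering dependent components; a minimal sketch is to execute the workflow DAG in topological "waves," where each wave of mutually independent tasks could be dispatched concurrently, possibly to different administrative domains. The task names below are hypothetical.

```python
def schedule_waves(deps):
    """deps maps task -> set of prerequisite tasks; returns topological waves,
    each wave being a batch of tasks whose prerequisites are all complete."""
    remaining, done, waves = dict(deps), set(), []
    while remaining:
        ready = sorted(t for t, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("cycle detected in workflow")
        waves.append(ready)
        done |= set(ready)
        for t in ready:
            del remaining[t]
    return waves

dag = {"fetch": set(), "clean": {"fetch"}, "simulate": {"clean"},
       "plot": {"simulate"}, "stats": {"clean"}}
print(schedule_waves(dag))  # [['fetch'], ['clean'], ['simulate', 'stats'], ['plot']]
```

A decentralized orchestrator would hand each wave (or each domain's slice of it) to a local coordinator instead of running everything from one engine, which is the scalability point the abstract makes.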
139

Reward Allocation For Maximizing Energy Savings In A Transportation System

Oduwole, Adewale O 09 July 2018 (has links)
Transportation has a major impact on our society and environment, contributing 70% of U.S. petroleum use, 28% of U.S. greenhouse gas (GHG) emissions, and over 34,000 fatalities and 2.2 million injuries in 2013. Punitive approaches used to tackle environmental issues in the transportation sector, such as congestion pricing, have been well documented, although the use of incentives or rewards lags behind in comparison. In addition to the use of more fuel-efficient, alternate-energy vehicles and various other energy reduction strategies, energy consumption can be lowered through incentivizing alternative modes of transportation. This paper focuses on modifying travelers' behavior by providing rewards to enable shifts to more energy-efficient modes (e.g., from auto to either bus or bicycle). Optimization conditions are formulated for the problem to understand solution properties, and numerical tests are carried out to study the effects of system parameters (e.g., token budget and coefficient of tokens) on the optimal solutions (i.e., energy savings). The multinomial logit model is used to formulate the full problem, comprising an objective function and a constraint of a token budget ranging from $5,000 to $10,000. The full problem is then computationally reduced by various parameterization strategies, in which the number of tokens assigned to all travelers is parameterized and made proportional to the expected energy savings. An optimization solution algorithm is applied with a global and a local solver to solve a Lagrangian sub-problem and a duo of heuristic solution algorithms of the original problem. These were determined necessary to attain an optimal and feasible solution. Input data necessary for this analysis were obtained for the Town of Amherst, MA from the Pioneer Valley Planning Commission (PVPC). The results demonstrated strong evidence of a positive correlation between the system's energy savings and the aforementioned system parameters.
The local and global solver solution algorithms reduced average energy consumption by 11.48%–19.91% and 12.79%–21.09%, respectively, over the identified token budget range relative to a base-case scenario with no tokens assigned. The duo of Lagrangian heuristic algorithms improved the full problem's solution (i.e., yielded higher energy savings) when optimized with the local solver, while the parameterized problem formulations resulted in higher energy savings than the full problem formulation under the local solver, and higher energy savings as well when compared under the global solver. The computational run time of the global and local solver algorithms for the full problem formulation was 43 hours and 24 minutes, respectively, while the local solver for the Lagrangian heuristics and the parameterized problem took 13 minutes and 7 minutes, respectively. Future research will comprise a bi-level optimization problem formulation in which a high-level optimization maximizes system-wide energy savings, while a low-level consumer surplus maximization problem is solved for each system user.
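The multinomial logit step can be sketched as follows: adding a token reward to the utility of energy-efficient modes shifts the mode shares, which lowers expected trip energy. The utility coefficients, reward term, and per-trip energy rates below are illustrative, not the thesis's calibrated values.

```python
import math

def mode_shares(utilities):
    """Multinomial logit: P(m) = exp(U_m) / sum_k exp(U_k)."""
    exps = {m: math.exp(u) for m, u in utilities.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

base   = {"auto": 0.0, "bus": -1.0, "bike": -1.5}            # baseline utilities
reward = {"auto": 0.0, "bus": -0.2, "bike": -0.7}            # +0.8 token utility on green modes
energy = {"auto": 5.0, "bus": 2.0, "bike": 0.0}              # MJ per trip, illustrative

def expected_energy(utilities):
    shares = mode_shares(utilities)
    return sum(shares[m] * energy[m] for m in shares)

print(expected_energy(base) > expected_energy(reward))  # True: rewards cut expected energy
```

The full problem then chooses how to allocate a fixed token budget across travelers so that this expected-energy reduction, summed over the population, is maximized.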
140

Rotordynamic Analysis of Theoretical Models and Experimental Systems

Naugle, Cameron R 01 April 2018 (has links)
This thesis is intended to provide fundamental information for the construction and analysis of rotordynamic theoretical models and their comparison to experimental systems. The Finite Element Method (FEM) is used to construct models using Timoshenko beam elements with viscous and hysteretic internal damping. Eigenvalues and eigenvectors of the state space equations are used to perform stability analysis, produce critical speed maps, and visualize mode shapes. Frequency domain analysis provides Bode diagrams for theoretical models and full-spectrum cascade plots for experimental data. Experimental and theoretical model analyses are used to optimize the control algorithm for an Active Magnetic Bearing on an overhung rotor.
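The idea behind a critical speed map can be illustrated with the simplest rotordynamic model, an undamped Jeffcott rotor (not the thesis's Timoshenko FEM): the shaft/disk system's first natural frequency sets the speed at which synchronous whirl occurs. The stiffness and mass values below are hypothetical.

```python
import math

def critical_speed_rpm(stiffness_n_per_m, disk_mass_kg):
    """First critical speed of an undamped Jeffcott rotor.

    The lateral natural frequency omega_n = sqrt(k/m) (rad/s) coincides with
    the synchronous whirl speed; convert to rev/min for a critical speed map.
    """
    omega_n = math.sqrt(stiffness_n_per_m / disk_mass_kg)  # rad/s
    return omega_n * 60.0 / (2.0 * math.pi)                # rev/min

print(round(critical_speed_rpm(1.0e6, 10.0)))  # 3020
```

An FEM model generalizes this by assembling stiffness, mass, damping, and gyroscopic matrices and tracking how the eigenvalues of the state space system move with spin speed.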
