
Towards a Polyalgorithm for Land Use and Land Cover Change Detection

Saxena, Rishu 23 February 2018 (has links)
Earth observation satellites (EOS) such as Landsat provide image datasets that can be immensely useful in numerous application domains. One way of analyzing satellite images for land use and land cover change (LULCC) is time series analysis (TSA). Several algorithms for time series analysis have been proposed by various groups in remote sensing; more algorithms (that can be adapted) are available in the general time series literature. However, in spite of an abundance of algorithms, the choice of algorithm to use for analyzing an image stack is presently an open question. A concurrent issue is the prohibitive size of Landsat datasets, currently of the order of petabytes and growing, which makes them computationally unwieldy in both storage and processing. An EOS image stack typically consists of multiple images of a fixed area on the Earth's surface (same latitudes and longitudes) taken at different time points. Experiments on multicore servers indicate that carrying out meaningful time series analysis on one such interannual, multitemporal stack with existing state-of-the-art codes can take several days. This work proposes using multiple algorithms to analyze a given image stack in a polyalgorithmic framework. A polyalgorithm combines several basic algorithms, each meant to solve the same problem, producing a strategy that unites the strengths and circumvents the weaknesses of the constituent algorithms. The foundation of the proposed TSA-based polyalgorithm is laid using three algorithms (LandTrendR, EWMACD, and BFAST). These algorithms are described precisely in mathematical terms and were chosen to be fundamentally distinct from each other in design and in the phenomena they capture. An analysis of results representing success, failure, and parameter sensitivity for each algorithm is presented. Scalability issues, important for real simulations, are also discussed, along with scalable implementations and speedup results. For a given pixel, the Hausdorff distance is used to compare the change times (breakpoints) obtained from two different algorithms. TimeSync validation data, a dataset based on human interpretation of Landsat time series in concert with historical aerial photography, is used for validation. The polyalgorithm yields more accurate results than EWMACD and LandTrendR alone, but, counterintuitively, not better than BFAST alone. This nascent work will be directly useful in land use and land cover change studies, of interest to terrestrial science research, especially regarding anthropogenic impacts on the environment, and in much broader applications such as health monitoring and urban transportation. / M. S. / Numerous man-made satellites circling the Earth regularly take pictures (images) of the Earth's surface from above. These images naturally provide information regarding the land cover of a given piece of land at the moment of capture (e.g., whether the land area in the picture is covered by forest, agriculture, or housing). Therefore, for a fixed land area, if a person looks at a chronologically arranged series of images, any significant changes in land use can be identified. Identifying such changes is of critical importance, especially in this era where deforestation, urbanization, and global warming are major concerns. The goal of this thesis is to investigate the design of methodologies (algorithms) that can efficiently and accurately use satellite images to answer questions regarding land cover trends and change.
Experience shows that state-of-the-art methodologies produce great results for the region they were originally designed on, but their performance on other regions is unpredictable. In this work, therefore, a ‘polyalgorithm’ is proposed. A polyalgorithm utilizes multiple simple methodologies and strategically combines them so that the outcome is better than that of the individual components. In this introductory work, three component methodologies are utilized; each is capable of capturing phenomena different from the other two. A mathematical formulation of each component methodology is presented. An initial strategy for combining the three component algorithms is proposed. The outcomes of each component methodology, as well as those of the polyalgorithm, are tested against human-interpreted data. The strengths and limitations of each methodology are also discussed. The efficiency of the codes used to implement the polyalgorithm is discussed as well; this is important because the satellite data to be processed is huge (already petabytes in size and growing). This nascent work will be directly useful especially in understanding the impact of human activities on the environment. It will also be useful in other applications such as health monitoring and urban transportation.
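
The thesis' own comparison code is not reproduced in this listing, but the per-pixel breakpoint comparison the abstract describes can be sketched as follows; the function name and example dates are hypothetical, assuming breakpoints are reported as fractional years.

```python
import numpy as np

def hausdorff_1d(breaks_a, breaks_b):
    """Symmetric Hausdorff distance between two 1-D sets of breakpoint times:
    H(A, B) = max( max_a min_b |a - b|, max_b min_a |a - b| )."""
    a = np.asarray(breaks_a, dtype=float)
    b = np.asarray(breaks_b, dtype=float)
    if a.size == 0 or b.size == 0:
        return np.inf  # no meaningful comparison if either algorithm found no change
    d = np.abs(a[:, None] - b[None, :])  # pairwise |a_i - b_j| distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Example: breakpoints (fractional years) two algorithms report for one pixel.
print(hausdorff_1d([2004.5, 2009.0], [2004.6, 2008.75, 2012.0]))  # 3.0
```

A small distance means the two algorithms agree on where the pixel changed; a large distance (here driven by the unmatched 2012 break) flags pixels where the component algorithms disagree and the combination strategy has to arbitrate.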

Objective-Driven Strategies for HPC Job Scheduling

Goponenko, Alexander V 01 January 2024 (has links) (PDF)
As High-Performance Computing (HPC) becomes increasingly prevalent and resource-intensive, there is a growing need for more efficient job schedulers, which play a crucial role in the performance of HPC clusters. This dissertation presents a comprehensive approach to this complex issue, contributing to three major components of the problem: (1) metrics of job packing efficiency and fairness, (2) advanced scheduling algorithms, and (3) job resource utilization prediction techniques. To ensure high relevance of the results, this study emphasizes scheduling objectives. Scheduling quality metrics are therefore investigated first, yielding a set of metrics for comparing alternative schedules and evaluating trade-offs between scheduling goals. This set of metrics enables the first comprehensive analysis of the effects of different scheduling-improvement approaches on several aspects of scheduling quality, covering a variety of list scheduling algorithms as well as constraint programming optimization schedulers. The contribution to the third research area covers techniques for measuring and estimating resource usage data. It reports a first-of-its-kind evaluation of how various job runtime prediction techniques improve scheduling quality, demonstrates an approach capable of estimating job parameters beyond the runtime, and explores measuring the resources consumed by a job in an HPC cluster. The dissertation concludes with a practical demonstration of these concepts through an I/O-aware scheduling prototype that measures real-time resource utilization, autonomously determines the job resource requirements the scheduler needs, and implements full-featured multi-resource backfill scheduling that accounts for the specific properties of the parallel file system bandwidth resource. The study exhibits the advantages of further reducing I/O congestion, beyond the capability of generic I/O-aware scheduling, and presents the Workload-adaptive scheduling strategy that attains such improvement. This approach features a “two-group” approximation technique to maintain efficient performance regardless of zero-throughput job availability. An evaluation conducted on a real HPC cluster demonstrates the effectiveness of the novel strategy.
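
The dissertation's scheduler itself is not reproduced here; the sketch below only illustrates the classic EASY-backfill baseline that multi-resource backfill generalizes, on a single node-count resource. The names, the Job fields, and the simplified cluster model are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int       # nodes requested
    runtime: float   # runtime estimate (e.g., predicted) used for scheduling

def easy_backfill(queue, free_nodes, running):
    """One pass of EASY backfill over `queue` (FCFS order).

    `running` is a list of (finish_time, nodes) pairs for executing jobs,
    with finish times measured from now. Returns the jobs started now.
    """
    started = []
    queue = list(queue)
    # 1. Start jobs in plain FCFS order while they fit.
    while queue and queue[0].nodes <= free_nodes:
        job = queue.pop(0)
        free_nodes -= job.nodes
        started.append(job)
    if not queue:
        return started
    # 2. The head job does not fit: compute its reservation (shadow time),
    #    the earliest instant at which enough nodes will have been released.
    head = queue.pop(0)
    avail, shadow_time = free_nodes, 0.0
    for finish, nodes in sorted(running):
        avail += nodes
        if avail >= head.nodes:
            shadow_time = finish
            break
    extra = avail - head.nodes   # nodes still spare at the shadow time
    # 3. Backfill: a later job may start now only if it cannot delay the head
    #    job, i.e. it finishes before the shadow time or uses only spare nodes.
    for job in queue:
        if job.nodes > free_nodes:
            continue
        if job.runtime <= shadow_time:
            free_nodes -= job.nodes
            started.append(job)
        elif job.nodes <= extra:
            free_nodes -= job.nodes
            extra -= job.nodes
            started.append(job)
    return started

# Example: hypothetical 100-node cluster, 70 nodes busy for another 120 s.
queue = [Job("A", 80, 600.0), Job("B", 10, 100.0), Job("C", 30, 50.0)]
print([j.name for j in easy_backfill(queue, free_nodes=20, running=[(120.0, 70)])])
# ['B'] -- B fits now and ends before A's reservation; C needs too many nodes.
```

A multi-resource variant applies the same reservation-and-fit test to each tracked resource (for example, parallel file system bandwidth), not just node counts.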

Advances in High Performance Computing Through Concurrent Data Structures and Predictive Scheduling

Lamar, Kenneth M 01 January 2024 (has links) (PDF)
Modern High Performance Computing (HPC) systems are made up of thousands of server-grade compute nodes linked through a high-speed network interconnect. Each node has tens or even hundreds of CPU cores, with counts continuing to grow on newer HPC clusters. This results in a need to make use of millions of cores per cluster. Fully leveraging these resources is difficult, and there is an active need to design software that scales and fully utilizes the hardware. In this dissertation, we address this gap with a dual approach, considering both intra-node (single-node) and inter-node (across-node) concerns. To aid intra-node performance, we propose two novel concurrent data structures: a transactional vector and a persistent hash map. These designs have broad applicability in any multi-core environment but are particularly useful in HPC, which commonly features many cores per node. For inter-node performance, we propose a metrics-driven approach to improve scheduling quality, using predicted run times to backfill jobs more accurately and aggressively. This is augmented with application input parameters to further improve the run time predictions. Improved scheduling reduces the number of idle nodes in an HPC cluster, maximizing job throughput. We find that our data structures outperform the prior state of the art while offering additional features. Our backfill technique likewise outperforms previous approaches in simulations, and our run time predictions are significantly more accurate than those of conventional approaches. Code for these works is freely available, and we plan to deploy these techniques more broadly on real HPC systems in the future.
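
As a hedged sketch of the inter-node idea, predicting run times from application input parameters so that backfill can be more aggressive, here is a simple log-log least-squares fit; the feature names, data, and safety margin are all invented, not the dissertation's actual model.

```python
import numpy as np

# Hypothetical history: (grid_size, time_steps) inputs vs. observed runtimes (s).
X_params = np.array([[64, 100], [128, 100], [128, 200], [256, 400]])
runtimes = np.array([12.0, 45.0, 90.0, 710.0])

# Fit log(runtime) = c0 + c1*log(grid_size) + c2*log(time_steps).
A = np.column_stack([np.ones(len(X_params)), np.log(X_params)])
coef, *_ = np.linalg.lstsq(A, np.log(runtimes), rcond=None)

def predict_runtime(grid_size, time_steps, margin=1.2):
    """Predict a job's runtime, padded by a safety margin so that
    backfill reservations based on the prediction are rarely violated."""
    x = np.array([1.0, np.log(grid_size), np.log(time_steps)])
    return margin * float(np.exp(x @ coef))

print(predict_runtime(256, 200))  # prediction for an unseen input combination
```

The scheduler would then use the predicted (rather than user-requested) runtime in backfill decisions such as the reservation test above.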

Synthesis of hexafluoroisopropylidene (6F) polyarylenes via interfacial polymerization of aromatic monomers with hexafluoroacetone trihydrate

Muñoz Ruiz, Gustavo Adolfo 13 August 2024 (has links) (PDF)
Fluoropolymers are well known for their exceptional thermo-oxidative stability, chemical resistance, UV-light resistance, and low surface energy, making them essential for high-performance applications across the clean energy, medical device, automotive, aerospace, electronics, and telecommunication industries. Since the discovery of poly(tetrafluoroethylene) (PTFE), numerous fluorinated thermoplastics and fluoroelastomers have entered the market, and the global fluoropolymer industry is projected to reach $18 billion by 2033. Recent advancements have focused on integrating main-chain fluorocarbon moieties, such as perfluorocyclobutyl (PFCB), perfluorocycloalkenyl (PFCA), fluoroarylene vinylene ether (FAVE), and the hexafluoroisopropylidene (6F) group, into semi-fluorinated polymers. These modifications enhance properties such as thermal stability, processability, and optical transparency while reducing water absorption, thereby improving durability. This dissertation introduces a versatile electrophilic aromatic substitution methodology for synthesizing polymers containing the 6F group, followed by a practical approach for synthesizing semi-fluorinated alcohols and diols. The research explores possibilities for creating novel materials, showcasing the utility of hexafluorohydroxyalkylation with hexafluoroacetone trihydrate (HFAH) for incorporating the 6F group. An interfacial electrophilic aromatic substitution polymerization of HFAH with aromatic monomers is developed in Chapter 2, which can generate semi-fluorinated polyaryl ethers and polyphenylenes with high regioselectivity and molecular weights up to 60 kDa. These polymers exhibit high solubility in organic solvents and excellent thermo-oxidative stability; their dielectric and optical characterization is presented. Chapter 3 extends this electrophilic substitution methodology to the preparation of random and block semi-fluorinated copolymers, as well as thermoset materials, demonstrating versatility in polymer design and application through the fluorohydroxyalkylation of aromatic compounds. Chapter 4 details the synthesis and characterization of 4,4’-bis(2-hydroxyhexafluoroisopropyl)diphenyl ether, a semi-fluorinated diol. This monomer was used to prepare the first reported polycarbonate with hexafluoroisopropoxy groups -C(CF3)2O- incorporated in the main chain via polycondensation. Polyesters and polysilyl ethers were also prepared from this diol. Finally, the dissertation explores attempts to form metallocene condensation metallopolymers by reacting the acidic and sterically hindered semi-fluorinated diol with group IVB metallocene dichlorides.

Multiscale Computational Framework for Analysis and Design of Ultra-High Performance Concrete Structural Components and Systems

El Helou, Rafic Gerges 04 November 2016 (has links)
This research develops and validates computational tools for the design and analysis of structural components and systems constructed with Ultra-High Performance Concrete (UHPC). The modeling strategy utilizes the Lattice Discrete Particle Model (LDPM) to represent UHPC material and structural member response, and extends a structural-level triaxial continuum constitutive law to account for the addition of discrete fibers. The approach is robust and general, and could be utilized by other researchers to expand the computational capability and simulate the behavior of different composite materials. The work described herein identifies the model material parameters by conducting a complete material characterization of UHPC, with and without fiber reinforcement, describing its behavior in unconfined compression, uniaxial tension, and fracture toughness. It characterizes the effects of fiber orientation and fiber-matrix interaction, and addresses the influence of multi-axial stress states on fiber pullout. The models' predictive capabilities are validated by comparing numerical simulations against material test data that were not used in the parameter identification phase. These models offer a mechanics-based shortcut to UHPC analysis that can strategically support ongoing development of material and structural design codes and standards. / Ph. D. / This research develops and validates new computer-based methods to analyze and design civil infrastructure constructed with ultra-high performance concrete (UHPC), achieved when steel fibers are combined with a finely graded cement matrix. With superior performance characteristics in comparison to regular concrete, UHPC is studied herein for its strong potential to advance the durability, efficiency, and resiliency of new and existing infrastructure. The simulation-based methods are extensively verified with novel experiments that evaluate the material limits and failure modes when compressed, bent, or stretched, considering fiber volume and orientation. The computer-based tools can be used to realistically assess the structural performance of innovative UHPC applications in buildings, bridges, and tunnels under natural hazards, leading to greater structural efficiency and resiliency across civil infrastructure.

200 MBPS TO 1 GBPS DATA ACQUISITION & CAPTURE USING RACEWAY

O’Connell, Richard 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / For many years VME has been the platform of choice for high-performance, real-time data acquisition systems. VME’s longevity has been made possible in part by timely enhancements which have expanded system bandwidth and allowed systems to support ever-increasing throughput. One of the most recent ANSI-standard extensions of the VME specification defines RACEway, a system of dynamically switched, 160 Mbyte/second board-to-board interconnects. In typical systems RACEway increases the internal bandwidth of a VME system by an order of magnitude. Since this bandwidth is both scalable and deterministic, it is particularly well suited to high-performance, real-time systems. The potential of RACEway for very high-performance (200 Mbps to 1 Gbps) real-time systems has been recognized by both the VME industry and a growing number of system integrators. This recognition has yielded many new RACEway-ready VME products from more than a dozen vendors. In fact, many significant real-time data acquisition systems that consist entirely of commercial-off-the-shelf (COTS) RACEway products are being developed and fielded today. This paper provides an overview of RACEway technology, identifies the types of RACEway equipment currently available, discusses how RACEway can be applied in high-performance data acquisition systems, and briefly describes two systems that acquire and capture real-time data streams at rates from 200 Mbps to 1 Gbps using RACEway.
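
A quick unit check (ours, not the paper's) shows why these rates are within reach of the interconnect: 1 Gbps = 1000 Mbit/s ÷ 8 bits/byte = 125 Mbyte/s, which fits within a single 160 Mbyte/second RACEway link, while the switched fabric allows several such transfers to proceed concurrently.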

Contribuição ao estudo da carbonatação em concretos e argamassas executados com e sem adição de sílica ativa / Contribution to the study of carbonation in concretes and mortars produced with and without the addition of silica fume

Silva, Valdirene Maria 08 May 2002 (has links)
The present study addresses one of the most frequent forms of deterioration in reinforced concrete structures: carbonation. An accelerated carbonation chamber was built and calibrated in order to study the carbonation process in concrete and mortar specimens made with CP V ARI Plus and CP V ARI RS cements, with and without silica fume addition. The specimens were cured in a humidity chamber for seven days and then exposed to an aggressive carbon dioxide atmosphere for 7, 14, 28, 63, and 91 days. Similar control specimens were also cast and kept in the humidity chamber for the same periods. The specimens were tested in axial compression and splitting tension to determine compressive strength, tensile strength, and carbonation depth. From these results, an experimental-theoretical model was fitted to predict carbonation depth as a function of time. For all the compositions studied, the measured carbonation depths are small. The influence of carbonation on the gain in mechanical strength of the mortars and concretes is also analyzed, as well as the effect of silica fume addition and cement type on the carbonation phenomenon. Finally, a justification of the results is presented, based on the existing database at LMABC-SET-EESC-USP.
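
The abstract does not give the fitted model's form; a common choice in the carbonation literature is x(t) = k·√t, and the sketch below shows how such a k could be fitted to accelerated-test measurements. The depth values are invented for illustration.

```python
import numpy as np

# Hypothetical accelerated-test data: exposure time (days) vs. carbonation depth (mm).
t = np.array([7.0, 14.0, 28.0, 63.0, 91.0])
x = np.array([1.1, 1.6, 2.2, 3.4, 4.0])  # invented depths

# Fit x = k * sqrt(t) by least squares (one-column design matrix).
k, *_ = np.linalg.lstsq(np.sqrt(t)[:, None], x, rcond=None)
print(f"k = {k[0]:.3f} mm/day^0.5; predicted depth at 180 days: {k[0]*np.sqrt(180):.1f} mm")
```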

Parallel programming on General Block Min Max Criterion

Lee, ChuanChe 01 January 2006 (has links)
The purpose of the thesis is to develop a parallel implementation of the General Block Min Max Criterion (GBMM). This thesis deals with two kinds of parallel overheads: Redundant Calculations Parallel Overhead (RCPO) and Communication Parallel Overhead (CPO).

Servant Leaders' Use of High Performance Work Practices and Corporate Social Performance

Preiksaitis, Michelle Kathleen Fitzgerald 01 January 2016 (has links)
Business researchers have shown that servant leaders empower, provide long-term vision, and serve their workers and followers better than do nonservant leaders. High performance work practices (HPWPs) and corporate social performance (CSP) can enhance employee and firm productivity. However, when overused or poorly managed, HPWPs and CSP can lead to the business problems of employee disengagement, overload, or anxiety. Scholars noted a gap in human resource management research regarding whether leadership styles affect HPWPs and CSP use. This study examined the relationship between leadership style and the use of HPWPs and CSP using a quantitative, nonexperimental design. U.S. business leaders (N = 287) completed a survey consisting of 3 previously published scales. A chi-square analysis calculated the servant to nonservant leader ratio in the population, finding a disproportionate ratio (1:40) of servant (n = 7) to nonservant (n = 280) leaders. Two t tests showed that no significant difference existed in how servant and nonservant leaders use HPWPs or CSP. However, a multiple linear regression model showed that a leader's self-reported characteristics of empowerment, vision, or service positively predicted CSP use; empowerment positively predicted HPWPs use; service negatively predicted HPWPs use; and vision had no effect on HPWPs use. Findings may help human resource practitioners identify leaders who use HPWPs or CSP differently. Positive social change may occur by hiring more visionary, empowering, or service-oriented leaders who can support overwhelmed or anxious employees, potentially leading to more engaged and productive workers and an increase in the use of positive CSP.

Beton unter mehraxialer Beanspruchung / Concrete under multiaxial loading conditions / A constitutive law for high-performance concretes under short-term loading

Speck, Kerstin 21 July 2008 (has links) (PDF)
This work is based on investigations of high-strength and ultra-high-strength concretes, with and without fibers, under biaxial and triaxial compressive loading. The effects of the different concrete compositions are not equally pronounced for all loading types, yet fundamental relationships could be established. From the fracture patterns, three failure mechanisms were identified: compression, splitting, and shear failure; their characteristics enter directly into the failure criterion through calibration against four specific test values. The criterion extends Ottosen's formulation so that the brittle and partly anisotropic behavior of high-performance concrete is taken into account. The observed stress-strain curves correlate with the failure modes. A constitutive law is therefore formulated separately for the compressive and tensile meridians, with parameters that change with increasing hydrostatic pressure. The initial values incorporate the concrete composition and production-related anisotropies. Load-induced anisotropy due to oriented microcracking is likewise accounted for in the presented constitutive law through direction-dependent parameters.
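
For reference, the four-parameter criterion being extended (Ottosen's formulation, quoted from the general literature rather than from this thesis) can be written as

a·J2/fc² + λ·√J2/fc + b·I1/fc − 1 = 0, with λ = λ(cos 3θ),

where I1 is the first invariant of the stress tensor, J2 the second invariant of the stress deviator, fc the uniaxial compressive strength, and θ the Lode angle; the parameters a and b and the two constants inside λ are calibrated from test values, which is where the four special test values mentioned above enter.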
