  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Mechanistic-empirical study of effects of truck tire pressure on asphalt pavement performance

Wang, Feng 28 August 2008 (has links)
Not available / text
2

A Framework for the Utilization of CFD in the Early Stages of Architectural Design

Jo, SooJeong 02 November 2021 (has links)
Computational Fluid Dynamics (CFD) refers to numerical methods for simulating the movement of fluids. Owing to its efficiency, CFD has been widely used in aerospace engineering and automotive design since the 1970s. It also holds potential for architectural design, since airflow has long played an important role in the design process. However, CFD users in the building industry tend to be researchers and engineers rather than architectural designers, because of the complexity of the simulations and the extensive knowledge required to run them. The benefit of using CFD would be maximized by applying it early, since the key design decisions are made in the early stages. In response, simulation tools specialized for the early stages of architectural design, offering more user-friendly interfaces, have recently been developed. Within this context, the present study aimed to introduce and test simulation tools for the early stages of design and to establish a framework that supports architectural designers in utilizing CFD. To this end, a mixed-method approach was employed, comprising quantitative and qualitative assessments of simulation tools, the development of a knowledge set that helps users understand simulation processes and results, an immersive case study for structuring the procedural model, and a Delphi study for evaluating and reaching consensus on the proposed framework. / Doctor of Philosophy / Computational Fluid Dynamics (CFD) is a computer simulation method that automates the calculation of the complex equations governing the flow of a fluid, such as air or water, and visualizes the results. CFD has been widely used in designing aircraft and cars since the 1970s because of its efficiency compared to physical experiments. It also holds potential for architectural design, since airflow has long played an important role in the design process.
However, CFD users in the building industry tend to be researchers and engineers rather than architectural designers, because of the complexity of the simulations and the extensive knowledge required to run them. In response, more user-friendly simulation tools for non-experts, including designers, have recently been developed. Considering this context, the present study set out to introduce and test simulation tools for designers and to develop a framework that supports architectural designers in utilizing CFD in their design processes. To this end, both quantitative and qualitative studies were conducted, including a review of relevant articles, computer simulations, a case study based on an architectural project designed by the author, and a Delphi study in which recruited experts in architectural design evaluated the proposed framework.
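At its core, the numerical simulation CFD performs amounts to time-stepping discretized flow equations on a grid. As a rough, generic illustration (not code from this thesis), a first-order upwind finite-difference scheme for one-dimensional linear advection looks like:

```python
def advect_upwind(u, c, dx, dt, steps):
    """First-order upwind scheme for du/dt + c*du/dx = 0 (c > 0).
    Stable for Courant number r = c*dt/dx <= 1."""
    r = c * dt / dx
    for _ in range(steps):
        # Each interior point is updated from its upwind neighbour.
        u = [u[0]] + [u[i] - r * (u[i] - u[i - 1]) for i in range(1, len(u))]
    return u

# A square pulse transported to the right; with r = 1 the scheme is exact,
# so after 3 steps the pulse has shifted from cells 2..4 to cells 5..7.
u0 = [1.0 if 2 <= i <= 4 else 0.0 for i in range(20)]
u1 = advect_upwind(u0, c=1.0, dx=1.0, dt=1.0, steps=3)
```

Production CFD codes solve the full Navier–Stokes equations in three dimensions with turbulence models, but the update-a-grid-per-timestep structure is the same, which is why the simulations are computationally demanding.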
3

Preference-based modelling and prediction of occupants window behaviour in non-air-conditioned office buildings

Wei, Shen January 2013 (has links)
In naturally ventilated buildings, occupants play a key role in the performance and energy efficiency of building operation, mainly through the opening and closing of windows. To include the effects of building occupants in building performance simulation, several useful models describing occupants and their window opening/closing behaviour have been developed over the past 20 years. In these models, however, occupants are classified at the level of the whole population or of sub-groups within a building, while behavioural differences between individuals are commonly ignored. This research project addresses that issue by evaluating the importance of modelling and predicting occupants' window behaviour individually, rather than grouping them into a larger population. The analysis is based on field data collected from a case study building containing a number of single-occupancy cellular offices, and focuses on the final position of windows at the end of the working day. In the survey, 36 offices and their occupants were monitored, with respect to occupants' presence and window use, in three main periods of the year: summer, winter and the transitional seasons. From the behaviour analysis, several non-environmental factors, namely season, floor level, gender and personal preference, were identified as having a statistically significant effect on the end-of-day window position in the building examined. Using these factors, occupants' window behaviour was modelled with three different classifications of building occupants: whole population, sub-groups and personal preference. The preference-based model was found to have much better predictive ability for window state than the models based on the whole population and on sub-groups.
When used in a realistic building simulation problem, the preference-based prediction of window behaviour reflects well the differences in energy performance among individual rooms caused by different window use patterns, which the other two models cannot capture. The findings from this research project will help both building designers and building managers to obtain more accurate predictions of building performance and a better understanding of what happens in actual buildings. Additionally, if the habits and behavioural preferences of occupants are well understood, this knowledge can potentially be used to increase the efficiency of building operation, either by relocating occupants within the building or by educating them to be more energy efficient.
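The contrast between population-level and preference-based classification can be sketched with a toy example (hypothetical observations, not the thesis's field data): a single opening rate fitted to everyone blurs two occupants with opposite habits, while per-occupant rates recover them.

```python
from collections import defaultdict

def fit_models(records):
    """records: (occupant_id, window_open) pairs, window_open in {0, 1}.
    Returns the whole-population opening rate and per-occupant rates."""
    population_rate = sum(open_ for _, open_ in records) / len(records)
    counts = defaultdict(lambda: [0, 0])  # occupant -> [opens, observations]
    for occupant, open_ in records:
        counts[occupant][0] += open_
        counts[occupant][1] += 1
    preference_rates = {occ: o / n for occ, (o, n) in counts.items()}
    return population_rate, preference_rates

# Occupant A leaves the window open 9 days out of 10; occupant B the reverse.
# The population model predicts 0.5 for both; the preference model does not.
data = [("A", 1)] * 9 + [("A", 0)] + [("B", 0)] * 9 + [("B", 1)]
pop, prefs = fit_models(data)
```

Averaged over many rooms the two models agree, which is why the individual differences only become visible when energy use is examined room by room.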
4

Printing conductive traces to enable high frequency wearable electronics applications

Lim, Ying Ying January 2015 (has links)
With the emergence of the Internet of Things (IoT), wireless body area networks (WBANs) are becoming increasingly pervasive in everyday life. Most WBANs currently operate under the IEEE 802.15.4 Zigbee standard. However, there is growing interest in the performance of BANs operating at higher frequencies (e.g. the millimetre-wave band), owing to the advantages they offer over lower microwave frequencies. This thesis aims to realise printed conductive traces on flexible substrates, targeted at high-frequency wearable electronics applications. Specifically, investigations were performed on the surface modification of substrates and the electrical performance of printed interconnects. Firstly, a novel methodology was proposed to characterise the dielectric properties of a non-woven fabric (Tyvek) up to 20 GHz. This approach used electromagnetic (EM) simulation to refine the analytical equations based on transmission line structures, in order to improve the accuracy of the conductor loss values in the gigahertz range. To reduce the substrate roughness, a UV-curable insulator was used to form a planarisation layer on a non-porous substrate via inkjet printing. The results demonstrated the importance of matching the surface energy of the substrate to the ink to minimise ink de-wetting, which was possible within the parameters of heating the platen. Furthermore, the substrate surface roughness was observed to affect the printed line width significantly, and a surface roughness factor was introduced into the equation of Smith et al. to predict the printed line width on a substrate with non-negligible surface roughness (Ra ≤ 1 μm).
Silver ink de-wetting was observed when overprinting silver onto the UV-cured insulator, and studies were performed to investigate the conditions for achieving electrically conductive traces using commercial ink formulations where the curing equipment may be non-optimal. In particular, different techniques were used to characterise the samples at different stages, in order to evaluate the surface properties and printability and to ascertain whether measurable resistances could be predicted. It was then demonstrated that measurable resistance could be obtained for samples cured under an ambient atmosphere, which was verified on Tyvek samples. Lastly, a methodology was proposed to model the non-ideal characteristics of printed transmission lines and so predict their high-frequency electrical performance. The methodology was validated on transmission line structures of different lengths up to 30 GHz, with good correlation between simulation and measurement results. Furthermore, the results demonstrate the significance of the paste levelling effect on the extracted DC conductivity values, and the need for accurate DC conductivity values in the modelling of printed interconnects.
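The idea of correcting a smooth-surface line-width prediction for roughness can be sketched as follows. Both the cylindrical-cap bead geometry used here and the linear roughness factor with coefficient `k` are illustrative assumptions, not the thesis's fitted model or the exact equation of Smith et al.

```python
import math

def line_width_smooth(volume_per_length, theta_deg):
    """Width of a printed line bead on a smooth surface, from the deposited
    ink volume per unit length and the contact angle theta, assuming the
    bead is a cylindrical cap (a Smith et al.-style geometry; the constants
    are illustrative)."""
    theta = math.radians(theta_deg)
    # Cross-section area of a cylindrical cap of width w and angle theta
    # is (w**2 / 8) * (2*theta/sin(theta)**2 - 2*cos(theta)/sin(theta)).
    shape = theta / math.sin(theta) ** 2 - math.cos(theta) / math.sin(theta)
    return math.sqrt(2.0 * volume_per_length / shape)

def line_width_rough(volume_per_length, theta_deg, ra_um, k=0.1):
    """Hypothetical roughness correction: the line widens with Ra.
    The linear form and the coefficient k are assumptions, applied only
    in the regime studied (Ra <= 1 um)."""
    assert 0.0 <= ra_um <= 1.0
    return line_width_smooth(volume_per_length, theta_deg) * (1.0 + k * ra_um)
```

The qualitative behaviour matches intuition: a lower contact angle spreads the ink into a wider line, and roughness widens it further.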
5

Performance optimization of geophysics stencils on HPC architectures / Optimização de desempenho de estênceis geofísicos sobre arquiteturas HPC

Abaunza, Víctor Eduardo Martínez January 2018 (has links)
A simulação de propagação de ondas é uma ferramenta crucial na pesquisa em geofísica (para a análise eficiente de terremotos, a mitigação de riscos e a exploração de petróleo e gás). Devido à sua simplicidade e eficiência numérica, o método de diferenças finitas é uma das técnicas implementadas para resolver as equações de propagação de ondas. Estas aplicações são conhecidas como estênceis porque consistem num padrão que replica a mesma computação num domínio multidimensional de dados. A Computação de Alto Desempenho é necessária para resolver este tipo de problema, como consequência do grande número de pontos envolvidos nas simulações tridimensionais do subsolo. A otimização do desempenho dos estênceis é um desafio e depende fortemente da arquitetura usada. Neste contexto, focamos nosso trabalho em duas partes. Primeiro, desenvolvemos nossa pesquisa em arquiteturas multicore: analisamos a implementação padrão em OpenMP dos modelos numéricos de transferência de calor (um estêncil Jacobi de 7 pontos) e o aplicativo Ondes3D (um simulador sísmico desenvolvido pelo Bureau de Recherches Géologiques et Minières); usamos dois algoritmos conhecidos (ingênuo e com bloqueio espacial) para encontrar correlações entre os parâmetros da configuração de entrada, em tempo de execução, e o desempenho computacional; depois, propusemos um modelo baseado em Aprendizado de Máquina para avaliar, predizer e melhorar o desempenho dos modelos de estêncil na arquitetura usada; também usamos um modelo de propagação de onda acústica fornecido pela empresa Petrobras; e predizemos o desempenho com alta precisão (até 99%) nas arquiteturas multicore.
Segundo, orientamos nossa pesquisa para as arquiteturas heterogêneas: analisamos uma implementação padrão do modelo de propagação de ondas em CUDA, para encontrar os fatores que afetam o desempenho quando o número de aceleradores aumenta; então, propusemos uma implementação baseada em tarefas para melhorar o desempenho, de acordo com a configuração em tempo de execução (algoritmo de escalonamento, tamanho e número de tarefas), e comparamos o desempenho obtido com as versões somente CPU ou somente GPU e o impacto no desempenho das arquiteturas heterogêneas; nossos resultados demonstram um speedup significativo (até 25) em comparação com a melhor implementação disponível para arquiteturas multicore. / Wave modeling is a crucial tool in geophysics, for efficient strong-motion analysis, risk mitigation, and oil & gas exploration. Due to its simplicity and numerical efficiency, the finite-difference method is one of the standard techniques implemented to solve the wave propagation equations. These applications are known as stencils because they consist of a pattern that replicates the same computation over a multi-dimensional domain. High Performance Computing is required to solve this class of problems, as a consequence of the large number of grid points involved in three-dimensional simulations of the underground. The performance optimization of stencil computations is a challenge and strongly depends on the underlying architecture. In this context, this work was directed toward a twofold aim. Firstly, we focused our research on multicore architectures and analyzed the standard OpenMP implementation of numerical kernels from the 3D heat transfer model (a 7-point Jacobi stencil) and the Ondes3D code (a full-fledged application developed by the French Geological Survey).
We considered two well-known implementations (naïve, and with space blocking) to find correlations between parameters from the input configuration at runtime and the computing performance; we then proposed a Machine Learning-based approach to evaluate, predict, and improve the performance of these stencil models on the underlying architecture. We also used an acoustic wave propagation model provided by the Petrobras company and predicted its performance with high accuracy (up to 99%) on multicore architectures. Secondly, we oriented our research toward heterogeneous architectures: we analyzed the standard CUDA implementation of a seismic wave propagation model to find which factors affect performance as the number of accelerators increases; we then proposed a task-based implementation to improve performance according to the runtime configuration (scheduling algorithm, size, and number of tasks), and compared the performance obtained with the classical CPU-only and GPU-only versions and with the results obtained on heterogeneous architectures, reaching a significant speedup (up to 25) over the best available multicore implementation.
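The two multicore variants compared above (naïve and space-blocked) can be sketched for a 7-point Jacobi stencil. This is a generic illustration in Python rather than the OpenMP kernels studied; both sweeps compute identical values, the blocked one merely reorders the traversal into tiles so that neighbouring planes stay in cache.

```python
def jacobi7_naive(a, n):
    """One sweep of a 7-point Jacobi stencil on an n*n*n grid (flat list):
    each interior point becomes the average of itself and its 6 neighbours."""
    idx = lambda i, j, k: (i * n + j) * n + k
    b = a[:]  # boundary values are carried over unchanged
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for k in range(1, n - 1):
                b[idx(i, j, k)] = (a[idx(i - 1, j, k)] + a[idx(i + 1, j, k)] +
                                   a[idx(i, j - 1, k)] + a[idx(i, j + 1, k)] +
                                   a[idx(i, j, k - 1)] + a[idx(i, j, k + 1)] +
                                   a[idx(i, j, k)]) / 7.0
    return b

def jacobi7_blocked(a, n, bs):
    """The same sweep with spatial blocking of the i/j loops (tile size bs),
    improving cache reuse without changing the arithmetic."""
    idx = lambda i, j, k: (i * n + j) * n + k
    b = a[:]
    for ii in range(1, n - 1, bs):
        for jj in range(1, n - 1, bs):
            for i in range(ii, min(ii + bs, n - 1)):
                for j in range(jj, min(jj + bs, n - 1)):
                    for k in range(1, n - 1):
                        b[idx(i, j, k)] = (a[idx(i - 1, j, k)] + a[idx(i + 1, j, k)] +
                                           a[idx(i, j - 1, k)] + a[idx(i, j + 1, k)] +
                                           a[idx(i, j, k - 1)] + a[idx(i, j, k + 1)] +
                                           a[idx(i, j, k)]) / 7.0
    return b
```

Because Jacobi writes into a separate output array, the blocked traversal is trivially correct; the tile size is exactly the kind of runtime parameter whose interaction with the architecture the thesis models with machine learning.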
6

Using uncertainty and sensitivity analysis to inform the design of net-zero energy vaccine warehouses

Pudleiner, David Burl 27 August 2014 (has links)
The vaccine cold chain is an integral part of the process of storing and distributing vaccines prior to administration. A key component of this cold chain for developing countries is the primary vaccine storage warehouse. As the starting point for the distribution of vaccines throughout a country, these buildings have a significant amount of refrigerated space and therefore consume large amounts of energy. This thesis accordingly focuses on analyzing the relative importance of parameters for the design of an energy-efficient primary vaccine storage warehouse, with the end goal of achieving Net-Zero Energy operation. A total of 31 architectural design parameters, such as roof insulation U-value and external wall thermal mass, along with 14 building control parameters, including evaporator coil defrost termination and thermostat set points, are examined. The analysis is conducted across five locations in the developing world with significant variations in climate: Buenos Aires, Argentina; Tunis, Tunisia; Asuncion, Paraguay; Mombasa, Kenya; and Bangkok, Thailand. Variations in the parameters are examined by applying a Monte Carlo-based global uncertainty and sensitivity analysis to a case study building layout. A regression-based sensitivity analysis is used to analyze both the main effects of each parameter and the interactions between parameter pairs. The results indicate that, for all climates examined, the building control parameters are more important than the architectural design parameters in determining warehouse energy consumption. This is due to the dominance of the most influential building control parameter examined, the Chilled Storage evaporator fan control strategy. The importance of building control parameters across all climates examined emphasizes the need for an integrated design method to ensure the delivery of an energy-efficient primary vaccine warehouse.
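A regression-based global sensitivity analysis of the kind described can be sketched with a toy model (the two parameters, their ranges, and the linear coefficients are hypothetical, not the warehouse model): sample the inputs, evaluate the model, then rank inputs by their standardized regression coefficients.

```python
import random

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance; cov(a, a) is the sample variance."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def standardized_regression_coefficients(samples, outputs):
    """For (near-)independent inputs, SRC_i = beta_i * sd(x_i) / sd(y),
    with beta_i from a univariate regression of y on x_i. An SRC near 1
    means the input dominates the output variance."""
    sd_y = cov(outputs, outputs) ** 0.5
    srcs = []
    for column in zip(*samples):
        col = list(column)
        beta = cov(col, outputs) / cov(col, col)
        srcs.append(beta * cov(col, col) ** 0.5 / sd_y)
    return srcs

random.seed(1)
# Monte Carlo sample of two normalized design parameters.
xs = [(random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)) for _ in range(2000)]
# Toy linear "energy model": the first parameter dominates the second.
ys = [3.0 * x1 + 0.5 * x2 for x1, x2 in xs]
s1, s2 = standardized_regression_coefficients(xs, ys)  # s1 >> s2
```

For an exactly linear model the squared SRCs sum to (approximately) one, so they partition the output variance; real building-simulation outputs are only approximately linear, which is why interaction terms are examined as well.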
7

Performance Study and Dynamic Optimization Design for Thread Pool Systems

Dongping Xu January 2004 (has links)
19 Dec 2004. / Published through the Information Bridge: DOE Scientific and Technical Information, report IS-T 2359. Also available in paper and microfiche from NTIS.
8

Applicability of climate-based daylight modelling

Brembilla, Eleonora January 2017 (has links)
This PhD thesis evaluated the applicability of Climate-Based Daylight Modelling (CBDM) as it is presently practised. The objectives were to assess applicability broadly by examining multiple aspects: (i) the way CBDM is used by expert researchers and practitioners; (ii) how state-of-the-art simulation techniques compare to each other and how they are affected by uncertainty in input factors; (iii) how simulated results compare with data measured in real occupied spaces. The answers obtained from a web-based questionnaire portrayed a variety of workflows used by different people to perform similar, if not identical, evaluations. At the same time, an inter-model comparison of the existing simulation techniques revealed significant differences in the way the sky and the sun are recreated by each technique. The results also demonstrated that some of the annual daylight metrics commonly required in building guidelines are sensitive to the choice of simulation tool, as well as to other input parameters such as climate data, orientation and material optical properties. All the analyses were carried out on four case study spaces, remodelled from existing classrooms that were the subject of a concurrent research study monitoring their interior luminous conditions. A large database of High Dynamic Range images was collected for that study, and the luminance data derived from these images were used in this work to explore a new methodology for calibrating climate-based daylight models. The results collected and presented in this dissertation illustrate that, at the time of writing, there is no single established common framework to follow when performing CBDM evaluations. Several different techniques coexist, but each is characterised by a specific domain of applicability.
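One of the annual metrics referred to above, daylight autonomy, reduces to a simple computation once hourly illuminance has been simulated or measured. A minimal sketch (the illuminance values and occupancy schedule are illustrative, not the monitored classroom data; the 300 lx default is a typical, assumed target):

```python
def daylight_autonomy(illuminance_lux, occupied, threshold_lux=300.0):
    """Fraction of occupied hours in which the daylight illuminance at a
    point meets the threshold: one common climate-based daylight metric."""
    hits = [lux >= threshold_lux
            for lux, occ in zip(illuminance_lux, occupied) if occ]
    return sum(hits) / len(hits)

# Hypothetical hourly values: 3 of the 5 occupied hours reach 300 lx,
# so daylight autonomy at this point is 0.6.
lux = [120.0, 450.0, 800.0, 310.0, 90.0, 600.0]
occ = [True, True, True, True, True, False]
da = daylight_autonomy(lux, occ)
```

The metric itself is trivial; the sensitivity discussed in the abstract lies upstream, in how each simulation tool produces the hourly illuminance series from climate data, sky models and material properties.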
10

Verification and science simulations with the Instrument Performance Simulator for JWST - NIRSpec / Vérification et simulations scientifiques avec le simulateur des performances de l’instrument JWST - NIRSpec

Dorner, Bernhard 10 May 2012 (has links)
Le télescope spatial James Webb (JWST) est le successeur du télescope spatial Hubble (HST). Il est développé en collaboration par les agences spatiales NASA, ESA et CSA. Le spectrographe proche infrarouge NIRSpec est un instrument du JWST. Le Centre de Recherche Astrophysique de Lyon (CRAL) a développé le logiciel de simulation des performances (IPS) de NIRSpec en vue de l’étude de ses performances et de la préparation de poses synthétiques réalistes. Dans cette thèse, nous vérifions certains algorithmes de l’IPS, en particulier ceux traitant des transformations de coordonnées et de la propagation en optique de Fourier. Nous présentons ensuite une interface simplifiée pour la préparation de « scènes » d’observation et un logiciel de traitement de données permettant d’extraire des spectres à partir de poses synthétiques afin de faciliter l’exploitation des simulations. Nous décrivons comment nous avons construit et validé le modèle de l’instrument par comparaison avec les données de calibration. Pour les transformations de coordonnées, le modèle final est capable de reproduire les mesures avec une précision 3 à 5 fois meilleure que celle requise pour la calibration spectrale. Pour la transmission globale notre précision est de 0–10% dans l’absolu et meilleure que 5% en relatif. Finalement, nous présentons la première simulation d’une observation de type « champ profond spectrographique » et nous explorons comment NIRSpec pourra être utilisé pour observer le transit de planètes extra-solaires. Nous déterminons en particulier la luminosité maximale des étoiles hôtes pouvant être observées et quels peuvent être les rapports signal sur bruit attendus. / The James Webb Space Telescope (JWST), a joint project by NASA, ESA, and CSA, is the successor mission to the Hubble Space Telescope. One of the four science instruments on board is the near-infrared spectrograph NIRSpec. 
To study the instrument performance and to create realistic science exposures, the Centre de Recherche Astrophysique de Lyon (CRAL) developed the Instrument Performance Simulator (IPS) software. Validating the IPS functionality, creating an accurate model of the instrument, and facilitating the preparation and analysis of simulations are key elements for the success of the IPS. In this context, we verified parts of the IPS algorithms, specifically the coordinate transform formalism and the Fourier propagation module. We also developed additional software tools to simplify scientific usage, such as a target interface to construct observation scenes and a dedicated data reduction pipeline to extract spectra from exposures. Another part of the PhD work dealt with the assembly of an as-built instrument model and its verification against measurements from a ground calibration campaign. For coordinate transforms inside the instrument, we achieved an accuracy 3–5 times better than that required for the absolute spectral calibration, and we could reproduce the total instrument throughput with an absolute error of 0–10% and a relative error of less than 5%. Finally, we present the first realistic on-sky simulations of a deep field spectroscopy scene, and we explore the capabilities of NIRSpec to study exoplanetary transit events. We determined upper brightness limits for observable host stars and give noise estimates for exemplary transit spectra.
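The kind of noise estimate mentioned for transit observations can be sketched with a photon-noise-only model (a generic back-of-the-envelope approximation, not the IPS's detailed noise model, and all input numbers are illustrative): the uncertainty on the transit depth combines the relative Poisson errors of the in-transit and out-of-transit flux measurements in quadrature.

```python
import math

def transit_snr(rate_e_per_s, depth, t_in_s, t_out_s):
    """Photon-noise-only SNR on a transit depth d = 1 - F_in/F_out.
    Valid for small depths, where the relative errors of the two flux
    measurements add in quadrature."""
    n_in = rate_e_per_s * (1.0 - depth) * t_in_s    # electrons in transit
    n_out = rate_e_per_s * t_out_s                  # electrons out of transit
    sigma_depth = math.sqrt(1.0 / n_in + 1.0 / n_out)
    return depth / sigma_depth

# A 1% transit of a star delivering 1e6 e-/s, with one hour observed
# in transit and one hour out of transit:
snr = transit_snr(1e6, 0.01, 3600.0, 3600.0)
```

Since the SNR scales with the square root of the photon rate, this kind of estimate directly yields the trade-off the abstract describes: brighter hosts give higher SNR until detector saturation sets the upper brightness limit.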
