841

Forced Convection Heat Transfer in Two-Dimensional Ribbed Channels

Mortazavi, Hamidreza 12 1900 (has links)
<p> The progress of technology in the electronic components industry has been rapid. The evolution of various techniques has made it possible for this industry to grow and diversify with market demand. Developing electronic component products over a short span of time therefore requires highly efficient tools for design and manufacturing. Advances in commercial Computational Fluid Dynamics (CFD) software and in computational power have enabled modeling to a high level of architectural detail. Computer-aided design has become an essential tool in the engineering environment: computer analysis reduces both the development cycle time and the prototyping costs in the early to intermediate design phases. The accuracy of computational predictions of heat transfer rates depends mostly on the correct choice of turbulence model. Although many turbulence models, rather than a single universal one, have been developed during the last two decades, there is usually one model that performs better than the others for a given flow condition. </p> <p> In the present research, a turbulence model is selected from amongst a few candidates, namely standard k-ε, RNG k-ε, shear stress transport (SST), and the Reynolds Stress Model (RSM), based on comparisons with experimental data and direct numerical simulation (DNS) results from previous work. The SST turbulence model shows excellent agreement with the DNS results and is therefore considered an appropriate turbulence model for thermal analysis of electronic packages whose elements have nearly equal heights. Moreover, the average Nusselt number for arrays of obstacles is obtained numerically using the commercial code ANSYS-CFX 10.0. The effects on the mean Nusselt number of parametric changes in Reynolds number, element height, element width, and element-to-element distance are compared and discussed.
Finally, the parametric study yields a set of correlations for the mean Nusselt number of arrays of wall-mounted obstacles in channel flow. </p> / Thesis / Master of Applied Science (MASc)
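The thesis's own correlations are not reproduced in the abstract, but mean-Nusselt-number correlations of this kind typically take a power-law form in the Reynolds and Prandtl numbers. As a hedged illustration only, the sketch below evaluates the classic Dittus-Boelter correlation for fully developed turbulent pipe flow; the coefficients are textbook values, not the thesis's fitted ones.

```python
# Generic power-law correlation Nu = C * Re^m * Pr^n. The coefficients are
# the classic Dittus-Boelter values for fully developed turbulent pipe flow
# (heating), used here purely as an illustration; they are NOT the thesis's
# fitted correlations for ribbed channels.

def mean_nusselt(re: float, pr: float,
                 c: float = 0.023, m: float = 0.8, n: float = 0.4) -> float:
    """Evaluate Nu = c * Re**m * Pr**n."""
    return c * re ** m * pr ** n

# Example: air (Pr ~ 0.7) at Re = 10,000.
nu = mean_nusselt(10_000, 0.7)
```

From Nu one recovers the heat transfer coefficient as h = Nu·k/D for a given fluid conductivity k and channel hydraulic diameter D.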
842

Using dimensional analysis in building statistical response models

Boycan, Nancy Weisenstein January 1966 (has links)
The method of dimensional analysis has been used for almost a century alongside experimental methods to obtain, among other things, prediction equations in the physical sciences and engineering. Only recently has the method been considered in the statistical sense. A thorough literature review is presented, including the history, method and theory, problems, and disadvantages of dimensional analysis. The dimensional analysis preliminary model is transformed into a multiple linear regression model and is compared to a quadratic regression model with respect to prediction of a single variable in some practical examples. Whereas dimensions are the main consideration in the dimensional analysis model, they are ignored in the quadratic regression model. Two sets of experimental data were used, each set on both models, and the respective residual sums of squares and multiple correlation coefficients were compared. The results were similar in both cases: the correlation coefficients of the quadratic model were higher than those of the dimensional analysis model, and the residual sums of squares were lower for the quadratic model than for the dimensional analysis model. / M.S.
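The transformation described above, from a dimensional analysis preliminary model to a multiple linear regression model, usually amounts to taking logarithms of a power-law relation in dimensionless groups. A minimal sketch, using synthetic data rather than the thesis's experimental data:

```python
import math

# A dimensional-analysis "preliminary model" typically has power-law form
# y = a * pi1^b, where pi1 is a dimensionless group. Taking logarithms turns
# it into a linear regression: log y = log a + b * log pi1.
# The data below are synthetic (generated from a = 2, b = 0.5), purely to
# illustrate the transformation; they are not from the thesis.

pi1 = [1.0, 2.0, 4.0, 8.0, 16.0]
y = [2.0 * p ** 0.5 for p in pi1]

# Ordinary least squares on the log-transformed model.
x = [math.log(p) for p in pi1]
z = [math.log(v) for v in y]
n = len(x)
xbar, zbar = sum(x) / n, sum(z) / n
b = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z)) \
    / sum((xi - xbar) ** 2 for xi in x)
log_a = zbar - b * xbar
a = math.exp(log_a)  # recover the multiplicative constant
```

With more than one dimensionless group the same idea gives a multiple linear regression in the logs of the groups.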
843

The Effect of Endwall Contouring On Boundary Layer Development in a Turbine Blade Passage

Lynch, Stephen P. 22 September 2011 (has links)
Increased efficiency and durability of gas turbine components are driven by demands for reduced fuel consumption and increased reliability in aircraft and power generation applications. The complex flow near the endwall of an axial gas turbine has been identified as a significant contributor to aerodynamic loss and increased part temperatures. Three-dimensional (non-axisymmetric) contouring of the endwall surface has been shown to reduce aerodynamic losses, but the effect of the contouring on endwall heat transfer is not well understood. This research focused on understanding the general flow physics of contouring and the sensitivity of the contouring to perturbations arising from leakage features present in an engine. Two scaled low-speed cascades were designed for spatially resolved measurements of endwall heat transfer and film cooling. One cascade was intended for flat and contoured endwall studies without considering typical engine leakage features. The other cascade modeled the gaps present between a stator and rotor and between adjacent blades on a wheel, in addition to the non-axisymmetric endwall contouring. Comparisons between a flat and a contoured endwall showed that the contour increased endwall heat transfer and turbulence in the forward portion of the passage due to displacement of the horseshoe vortex. However, the contour decreased heat transfer further into the passage, particularly in regions of high heat transfer, due to delayed development of the passage vortex and reduced boundary layer skew. Realistic leakage features such as the stator-rotor rim seal had a significant effect on the endwall heat transfer, although leakage flow from the rim seal only affected the horseshoe vortex. The contours studied were not effective at reducing the impact of secondary flows on endwall heat transfer and loss when realistic leakage features were also considered.
The most significant factor in loss generation and high levels of endwall heat transfer was the presence of a platform gap between adjacent airfoils. / Ph. D.
844

Robust Feature Screening Procedures for Mixed Type of Data

Sun, Jinhui 16 December 2016 (has links)
High dimensional data have been frequently collected in many fields of scientific research and technological development. The traditional idea of best subset selection methods, which use penalized L_0 regularization, is computationally too expensive for many modern statistical applications. A large number of variable selection approaches via various forms of penalized least squares or likelihood have been developed to select significant variables and estimate their effects simultaneously in high dimensional statistical inference. However, in modern applications in areas such as genomics and proteomics, ultra-high dimensional data are often collected, where the dimension of the data may grow exponentially with the sample size. In such problems, the regularization methods can become computationally unstable or even infeasible. To deal with the ultra-high dimensionality, Fan and Lv (2008) proposed a variable screening procedure via correlation learning to reduce dimensionality in sparse ultra-high dimensional models. Since then many authors have further developed the procedure and applied it to various statistical models. However, they all focused on a single type of predictor: the predictors are either all continuous or all discrete. In practice, we often collect mixed types of data, containing both continuous and discrete predictors. For example, in genetic studies, we can collect information on both gene expression profiles and single nucleotide polymorphism (SNP) genotypes. Furthermore, outliers are often present in the observations due to experimental errors and other reasons, and the true trend underlying the data might not follow the parametric models assumed in many existing screening procedures. Hence a screening procedure robust against outliers and model misspecification is desired. In my dissertation, I propose a robust feature screening procedure for mixed types of data.
To gain insight into screening for individual types of data, I first study feature screening procedures for a single type of data in Chapter 2, based on marginal quantities. For each type of data, new feature screening procedures are proposed and simulation studies are performed to compare their performance with existing procedures. The aim is to identify the best robust screening procedure for each type of data. In Chapter 3, I combine these best screening procedures to form the robust feature screening procedure for mixed types of data. Its performance is assessed by simulation studies, and I further illustrate the proposed procedure by the analysis of a real example. / Ph. D.
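The correlation-learning screening of Fan and Lv (2008) mentioned above can be sketched very simply: rank predictors by absolute marginal correlation with the response and keep the top d. The toy example below illustrates only this baseline idea; the dissertation's robust, mixed-type procedures are not shown.

```python
import math

# Minimal sketch of marginal (sure independence) screening in the spirit of
# Fan and Lv (2008): keep the d predictors with the largest absolute
# marginal Pearson correlation with the response. Data are synthetic.

def pearson(u, v):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def sis_screen(columns, response, d):
    """Return indices of the d columns with the largest |corr(X_j, y)|."""
    scores = [abs(pearson(col, response)) for col in columns]
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:d]

# Toy data: column 0 drives y, column 1 is noise-like, column 2 = -column 0.
x0 = [1.0, 2.0, 3.0, 4.0, 5.0]
x1 = [3.0, 1.0, 4.0, 1.0, 5.0]
x2 = [-1.0, -2.0, -3.0, -4.0, -5.0]
y = [2.1, 4.0, 6.2, 7.9, 10.1]
kept = sis_screen([x0, x1, x2], y, 2)  # keeps the two informative columns
```

A robust variant would replace the Pearson correlation here with a rank-based or quantile-based marginal statistic.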
845

Three-Dimensional Spherical Modeling of the Mantles of Mars and Ceres: Inference from Geoid, Topography and Melt History

Sekhar, Pavithra 03 April 2014 (has links)
Mars is one of the most intriguing planets in the solar system. It is the fourth terrestrial planet and is differentiated into a core, mantle and crust. The crust of Mars is divided into the Southern highlands and the Northern lowlands, and the largest volcano in the solar system, Olympus Mons, is found on the crustal dichotomy boundary. The presence of isolated volcanism on the surface suggests the importance of internal activity on the planet, and in addition to volcanism in the past, there has been evidence of present-day volcanic activity. Convective upwelling, including decompression melting, has remained an important contributing factor in the melting history of the planet. In this thesis, I investigate the production of melt in the mantle for a Newtonian rheology and compare it with the melt needed to create Tharsis. In addition to the melt production, I analyze the 3D structure of the mantle for a stagnant lithosphere. I vary different parameters in the Martian mantle to understand the production of low- or high-degree structures early on, to explain the crustal dichotomy. This isothermal structure in the mantle contributes to the geoid and topography of the planet, and I also analyze how much the internal density contributes to the surface topography and areoid of Mars. In contrast to Mars, Ceres is a dwarf planet in the asteroid belt. Ceres is an icy body, and it is as yet unclear whether it is differentiated into a core, mantle and crust. However, studies show that it is most likely a differentiated body whose mantle consists of ice and silicate. The presence of brucite and serpentine on the surface suggests internal activity. Because Ceres is a massive body believed to have existed since the beginning of the solar system, studying it will shed light on the conditions of the early solar system. Ceres has been of great interest to the scientific community, and its importance has motivated NASA to launch a mission, Dawn, to study it.
Dawn will collect data from the dwarf planet when it arrives in 2015. In my modeling studies, I implement on Ceres a technique similar to that used for Mars, focusing on the mantle convection process and the geoid and topography. The silicate-ice mixture in the mantle gives rise to a non-Newtonian rheology that depends on the grain size of the ice particles. The geoid and topography obtained for the different differentiation scenarios in my modeling can be compared with data from the Dawn mission when it arrives at Ceres in 2015. / Ph. D.
846

Efficacy of retinal disparity depth cues in three-dimensional visual displays

Miller, Robert Howard 07 November 2008 (has links)
Recent interest in three-dimensional (3-D) stereoscopic displays has prompted the need to assess the efficacy of retinal disparity depth cues. Accordingly, this study analyzed performance on two 3-D tasks under three levels of signal-to-clutter ratio as participants viewed three display formats portrayed with or without retinal disparity depth cues. Display formats included a plan view and two types of perspective formats. The two tasks assessed viewer ability to compare inter-object distances and to extrapolate object positions given a known vector within a 3-D volume. Results indicate that retinal disparity depth cues reduced the number and magnitude of errors in a course prediction task, but did not affect search times or ratings of viewer confidence. Display format affected search times as follows. In a relative distance task, search times for the perspective format were lower than for either the plan view or enhanced perspective formats. In a course prediction task, search times for the plan view and perspective formats were lower than for the enhanced perspective format. Display format did not affect error rate, error magnitude, or ratings of viewer confidence. No interaction between depth cues and display format was observed. The inclusion of retinal disparity depth cues in a visual display system is suggested when the viewer task involves predictions of object position in a 3-D volume and when reducing the number and magnitude of errors is important. Perspective display formats are suggested when fast search times are important. / Master of Science
847

A Rate of Convergence for Learning Theory with Consensus

Gregory, Jessica G. 04 February 2015 (has links)
This thesis poses and solves a distribution-free learning problem with consensus that arises in the study of estimation and control strategies for distributed sensor networks. Each node i, for i = 1, ..., n, of the sensor network collects independent and identically distributed local measurements {z^i} := {z^i_j}_{j∈N} := {(x^i_j, y^i_j)}_{j∈N} ⊆ X × Y := Z that are generated by the probability measure ρ^i on Z. Each node i constructs a sequence of estimates {f^i_k}_{k∈N} from its local measurements {z^i} and from information functionals whose values are exchanged with other nodes as specified by the communication graph G for the network. The optimal estimate of the distribution-free learning problem with consensus is cast as a saddle point problem which characterizes the consensus-constrained optimal estimate. This thesis introduces a two-stage learning dynamic wherein local estimation is carried out via local least-squares approximations based on wavelet constructions, and information exchange is associated with the Lagrange multipliers of the saddle point problem. Rates of convergence for the two-stage learning dynamic are derived from certain recent probabilistic bounds for wavelet approximation of regressor functions. / Master of Science
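The information-exchange ingredient of such a two-stage dynamic can be illustrated with the simplest possible consensus iteration, in which each node repeatedly averages with its neighbours on the communication graph G. This sketch shows distributed averaging only; the thesis's saddle-point formulation and wavelet-based local estimators are not reproduced here.

```python
# Minimal sketch of average consensus on an undirected communication graph:
# each node nudges its value toward its neighbours' values. All nodes
# converge to the global average of the initial values. This illustrates
# the information-exchange step only, not the thesis's learning dynamic.

def consensus_step(values, neighbours, step=0.25):
    """One synchronous iteration: x_i <- x_i + step * sum_j (x_j - x_i)."""
    return [x + step * sum(values[j] - x for j in neighbours[i])
            for i, x in enumerate(values)]

# Path graph on 4 nodes: 0 - 1 - 2 - 3.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [0.0, 2.0, 4.0, 10.0]
for _ in range(200):
    x = consensus_step(x, neighbours)
# All nodes approach the global average (0 + 2 + 4 + 10) / 4 = 4.0.
```

The step size must be small relative to the node degrees for the iteration to remain stable; 0.25 is safe for this graph.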
848

Envisioning the Mind: Children's Representations of Mental Processes

Rice, Rebekah R. 06 January 2004 (has links)
Inspired by writings on creativity and by Howard Gardner's theory of multiple intelligences, I conducted a series of ten "exercises" -- each of them a guided visualization followed by an opportunity to produce -- with nine- and ten-year-old students. The visualizations, which were designed to encourage the students to explore some of the many ways our minds have of knowing and learning, began with a simple relaxation exercise and proceeded to more challenging exercises involving, for instance, kinesthetic learning, sensory awareness, the logical and linguistic mind versus the spatial mind, and intra- and interpersonal intelligence. Following each visualization the students discussed what they had experienced (transcripts of the visualizations and the discussions are included in the thesis). The students responded in visual terms as well: after each visualization, each student created a two- or three-dimensional piece of art from materials such as matboard, construction and origami paper, glue, felt-tip pens, pipe cleaners, and plastic-coated wire. These visual responses have been photographed, described, and scored according to the number of materials used, the number of colors used, and the dimensionality of the piece (photos, descriptions, and scores are included in the "Gallery"). I found, surprisingly, that the visualizations in which the students were the most imaginatively engaged did not always produce the most interesting art, and that girls were much less likely than boys to create three-dimensional pieces, although girls tended to use more colors and occasionally used relief on otherwise two-dimensional pieces. / Master of Architecture
849

Spectral edge image fusion: theory and applications

Connah, David, Drew, M.S., Finlayson, G. January 2014 (has links)
No / This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is subsequently reintegrated to generate an output. Constraints on the output colours are provided by an initial RGB rendering to produce ‘naturalistic’ colours: we provide a theorem for projecting higher-D contrast onto the initial colour gradients such that they remain close to the original gradients whilst maintaining exact high-D contrast. The solution to this constrained optimisation is closed-form, allowing for a very simple and hence fast and efficient algorithm. Our approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications using the same basic algorithm. In this paper we focus on the problem of mapping N-D inputs to 3-D colour outputs. We present results in three applications: hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI Diffusion-Tensor imaging.
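The structure tensor of a high-dimensional gradient field, central to the method above, can be sketched at a single pixel: it is the 2x2 matrix J formed by summing the outer product of each channel's gradient with itself over all channels (the Di Zenzo colour-gradient construction). The sketch below computes J with central differences on toy data; the paper's constrained tensor mapping and reintegration are not shown.

```python
# Structure tensor at one pixel of an N-channel image:
#   J = sum over channels c of (grad I_c)(grad I_c)^T,
# a 2x2 matrix whose eigenstructure summarises local contrast across all
# channels. This is the quantity the fusion method matches between the
# high-D input and the low-D output; the mapping itself is not shown here.

def structure_tensor(channels, row, col):
    """2x2 structure tensor from central differences over all channels."""
    jxx = jxy = jyy = 0.0
    for c in channels:
        gx = (c[row][col + 1] - c[row][col - 1]) / 2.0
        gy = (c[row + 1][col] - c[row - 1][col]) / 2.0
        jxx += gx * gx
        jxy += gx * gy
        jyy += gy * gy
    return [[jxx, jxy], [jxy, jyy]]

# Two toy 3x3 channels: a horizontal ramp and a vertical ramp.
ramp_x = [[0.0, 1.0, 2.0] for _ in range(3)]
ramp_y = [[0.0] * 3, [1.0] * 3, [2.0] * 3]
J = structure_tensor([ramp_x, ramp_y], 1, 1)  # tensor at the centre pixel
```

For these two orthogonal ramps the per-channel gradients are unit vectors along x and y, so J comes out as the 2x2 identity.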
850

Classification of heterogeneous data based on data type impact of similarity

Ali, N., Neagu, Daniel, Trundle, Paul R. 11 August 2018 (has links)
Yes / Real-world datasets are increasingly heterogeneous, showing a mixture of numerical, categorical and other feature types. The main challenge in mining heterogeneous datasets is how to deal with the heterogeneity present in the dataset records. Although some existing classifiers (such as decision trees) can handle heterogeneous data in specific circumstances, the performance of such models may still be improved, because heterogeneity calls for specific adjustments to similarity measurements and calculations. Moreover, heterogeneous data are still treated inconsistently and in an ad-hoc manner. In this paper, we study the problem of heterogeneous data classification: our purpose is to use heterogeneity as a positive feature of the data classification effort by consistently using the similarity between data objects. We address the heterogeneity issue by studying the impact of mixing data types on the calculation of data objects' similarity. To reach our goal, we propose an algorithm that divides the initial data records, based on pairwise similarity, into classification subtasks, with the aim of increasing the quality of the data subsets, and we apply specialized classifier models to them. The performance of the proposed approach is evaluated on 10 publicly available heterogeneous datasets. The results show that the models achieve better performance for heterogeneous datasets when using the proposed similarity process.
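One standard way to compute similarity consistently across mixed numeric and categorical features is Gower's coefficient: numeric features contribute a range-normalised difference and categorical features an exact-match indicator, averaged over features. The sketch below illustrates that generic measure only; it is not necessarily the exact similarity process used in the paper.

```python
# Gower-style similarity for mixed-type records: numeric features score
# 1 - |a - b| / range, categorical features score 1 for a match and 0
# otherwise; the final similarity is the mean of the per-feature scores.
# This is a standard mixed-data measure, shown as an illustration only.

def gower_similarity(rec_a, rec_b, feature_types, ranges):
    """feature_types: 'num' or 'cat' per feature; ranges: numeric range
    of each feature over the dataset (None for categorical features)."""
    scores = []
    for a, b, t, r in zip(rec_a, rec_b, feature_types, ranges):
        if t == "num":
            scores.append(1.0 - abs(a - b) / r)
        else:
            scores.append(1.0 if a == b else 0.0)
    return sum(scores) / len(scores)

# Two hypothetical records: (age, weight, colour).
types = ["num", "num", "cat"]
ranges = [50.0, 40.0, None]
s = gower_similarity((30.0, 70.0, "red"), (40.0, 70.0, "blue"), types, ranges)
# age term: 1 - 10/50 = 0.8; weight term: 1.0; colour term: 0.0 -> mean 0.6
```

Pairwise similarities of this kind can then drive the division of records into classification subtasks described above.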
