  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Preprocessing and Postprocessing in Linear Optimization

Huang, Xin 06 1900 (has links)
This thesis gives an overall survey of preprocessing and postprocessing techniques in linear optimization (LO) and their implementation in the software package McMaster Interior Point Method (McIPM). We first review the basic concepts and theorems of LO. We then present the techniques used in preprocessing and the corresponding operations in postprocessing. Further, we discuss implementation issues in our software development. Finally, we test a series of problems from the Netlib test set and compare our results with state-of-the-art software such as LIPSOL and CPLEX. / Thesis / Master of Science (MS)
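For illustration only, a minimal sketch (not taken from the thesis) of one common presolve reduction and its postsolve counterpart: variables whose lower and upper bounds coincide are substituted out of min c@x subject to A@x = b, lo <= x <= hi, and restored after the reduced problem has been solved. All function and variable names here are hypothetical.

```python
import numpy as np

def presolve_fixed_variables(A, b, c, lo, hi):
    """Remove variables with lo[j] == hi[j] from min c@x s.t. A@x = b, lo <= x <= hi.
    Returns the reduced problem plus the data needed to undo the reduction."""
    fixed = np.isclose(lo, hi)
    free = ~fixed
    # Substituting x_j = lo_j for each fixed j moves its contribution to the right-hand side.
    b_red = b - A[:, fixed] @ lo[fixed]
    return A[:, free], b_red, c[free], lo[free], hi[free], fixed, lo[fixed]

def postsolve_fixed_variables(x_red, fixed, fixed_vals):
    """Re-insert the fixed variables to recover a solution of the original problem."""
    x = np.empty(fixed.size)
    x[~fixed] = x_red
    x[fixed] = fixed_vals
    return x

# Tiny example: x2 is fixed at 2, so the 3-variable LP shrinks to 2 variables.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([3.0, 4.0])
c = np.array([1.0, 2.0, 1.0])
lo = np.array([0.0, 2.0, 0.0]); hi = np.array([10.0, 2.0, 10.0])
A_r, b_r, c_r, lo_r, hi_r, fixed, vals = presolve_fixed_variables(A, b, c, lo, hi)
print(A_r.shape, b_r)  # (2, 2) [1. 2.]
```

Real presolve routines typically chain many such reductions (empty rows, singleton columns, duplicate rows, and so on), each paired with the postsolve step that undoes it.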
2

Improving hydrometeorologic numerical weather prediction forecast value via bias correction and ensemble analysis

McCollor, Douglas 11 1900 (has links)
This dissertation describes research designed to enhance hydrometeorological forecasts. The objective of the research is to deliver an optimal methodology for producing reliable, skillful and economically valuable probabilistic temperature and precipitation forecasts. Weather plays a dominant role for energy companies, which rely on forecasts of watershed precipitation and temperature to drive reservoir models and on temperature forecasts to meet energy demand requirements. Extraordinary precipitation events and temperature extremes involve consequential water- and power-management decisions. This research compared weighted-average, recursive, and model output statistics bias-correction methods and determined the optimal window length for calibrating temperature and precipitation forecasts. The research evaluated seven different methods for daily maximum and minimum temperature forecasts, and three different methods for daily quantitative precipitation forecasts, within a region of complex terrain in southwestern British Columbia, Canada. The research then examined ensemble prediction system design by assessing a three-model suite of multi-resolution limited-area mesoscale models. Two different economic models were employed to investigate the ensemble design that produced the highest-quality, most valuable forecasts. The best post-processing methods for temperature forecasts included moving-weighted-average methods and a Kalman filter method; the optimal window length proved to be 14 days. The best post-processing methods for achieving mass balance in quantitative precipitation forecasts were a moving-average method and the best easy systematic estimator method; the optimal window length for moving-average quantitative precipitation forecasts was 40 days. The best ensemble configuration incorporated all resolution members from all three models. A cost/loss model adapted specifically for the hydroelectric energy sector indicated that operators managing rainfall-dominated, high-head reservoirs should begin lowering their reservoir at relatively low probabilities of forecast precipitation. A reservoir-operation model based on decision theory and variable energy pricing showed that applying an ensemble-average or full-ensemble precipitation forecast provided a much greater profit than using only a single deterministic high-resolution forecast. Finally, a bias-corrected super-ensemble prediction system was designed to produce probabilistic temperature forecasts for ten cities in western North America. The system exhibited skill and value nine days into the future when using the ensemble average, and twelve days into the future when employing the full ensemble forecast. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
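As a rough illustration of the bias-correction step described above (not code from the dissertation), the sketch below applies a moving-average correction with a 14-day training window to synthetic daily temperature forecasts; the function name, window length handling, and toy data are all made up.

```python
import numpy as np

def moving_average_bias_correction(forecasts, observations, window=14):
    """Correct each day's forecast by the mean forecast-minus-observation error
    over the previous `window` days (a simple running bias estimate)."""
    forecasts = np.asarray(forecasts, dtype=float)
    observations = np.asarray(observations, dtype=float)
    corrected = forecasts.copy()
    for t in range(window, forecasts.size):
        bias = np.mean(forecasts[t - window:t] - observations[t - window:t])
        corrected[t] = forecasts[t] - bias
    return corrected

# Synthetic example: a forecast that runs about 1.5 degrees too warm.
rng = np.random.default_rng(0)
obs = 10 + rng.normal(0, 2, 120)
fcst = obs + 1.5 + rng.normal(0, 0.5, 120)
corr = moving_average_bias_correction(fcst, obs, window=14)
print(round(np.mean(fcst[14:] - obs[14:]), 2), round(np.mean(corr[14:] - obs[14:]), 2))
```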
3

Investigating Polynomial Fitting Schemes for Image Compression

Ameer, Salah 13 January 2009 (has links)
Image compression is a means of transmitting or storing visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission or storage. This research work explores and implements polynomial fitting techniques as a means of performing block-based lossy image compression. In an attempt to investigate nonpolynomial models, a region-based scheme is implemented to fit the whole image using bell-shaped functions. The idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and inferiority to many available image compression schemes. Hence, only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristic. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme performs better than JPEG2000, computationally and qualitatively. However, more experiments are needed for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme based on Weber's law is employed. It is reported that images postprocessed using this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials, using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of higher-order polynomial approximation schemes is comparable to that of the plane model. This clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modeling and hence is worth future consideration.
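For illustration, a least-squares plane fit to a single image block, showing the general first-order model behind the scheme; the thesis's multiplication- and division-free formulation and the quantization/coding stages are not reproduced here, and all names are illustrative.

```python
import numpy as np

def fit_plane_to_block(block):
    """Fit intensity ~ a + b*x + c*y to one image block by least squares and
    return the fitted parameters together with the reconstructed block."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])
    params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    recon = (A @ params).reshape(h, w)
    return params, recon

# Example: an 8x8 block with a horizontal intensity ramp plus noise.
rng = np.random.default_rng(1)
block = np.tile(np.arange(8) * 10.0, (8, 1)) + rng.normal(0, 2, (8, 8))
params, recon = fit_plane_to_block(block)
print(np.round(params, 1))  # roughly [offset, ~10 per pixel in x, ~0 in y]
```

In a block-based codec, the three fitted parameters per block would then be quantized and entropy coded instead of the raw pixel values.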
4

Effect of Autoclave Process Parameters on Mechanical Behaviors of Carbon Fiber Reinforced Polymer Composites Fabricated via Additive Manufacturing

Nguyen, Quang Hao 01 January 2023 (has links) (PDF)
Additively manufactured carbon fiber reinforced polymers (CFRP) are widely studied for their remarkable mechanical properties compared to most other 3D printed materials. Different methods have been employed to further increase the mechanical performance of 3D printed CFRP parts. The objective of this study is to investigate the effect of autoclave postprocessing on the interlaminar shear behavior between 3D printed CFRP layers. 3D printed CFRP samples were processed with nine combinations of temperature and vacuum in an autoclave. Short beam shear (SBS) tests were performed to characterize the interlaminar shear strength (ILSS) of the samples after autoclave processing. Digital image correlation (DIC) was utilized to quantify the strain and failure mode of the samples during SBS tests. From the SBS mechanical tests, a curing temperature of 170 °C and a vacuum of -90 kPa produced samples with the highest ILSS, 39 MPa, a 46% improvement over uncured samples. The observed failure modes were fracture and delamination. Little work in additive manufacturing has applied autoclaving as a post-processing step. This study aims to explore this technique and establish its viability for improving the mechanical performance of 3D printed fiber-reinforced parts.
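As a brief illustration of the SBS characterization step, the sketch below computes apparent interlaminar shear strength with the usual short-beam relation ILSS = 0.75 * Pmax / (b * h); the specimen dimensions and failure load are hypothetical, not data from the study.

```python
def interlaminar_shear_strength(max_load_N, width_mm, thickness_mm):
    """Apparent interlaminar shear strength from a short beam shear test,
    ILSS = 0.75 * Pmax / (b * h), returned in MPa (N/mm^2)."""
    return 0.75 * max_load_N / (width_mm * thickness_mm)

# Hypothetical specimen: 6.4 mm wide, 3.2 mm thick, failing at 1050 N.
print(round(interlaminar_shear_strength(1050.0, 6.4, 3.2), 1), "MPa")  # ~38.5 MPa
```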
5

Interactive Machine Learning for Refinement and Analysis of Segmented CT/MRI Images

Sarigul, Erol 07 January 2005 (has links)
This dissertation concerns the development of an interactive machine learning method for refinement and analysis of segmented computed tomography (CT) images. This method uses higher-level domain-dependent knowledge to improve initial image segmentation results. A knowledge-based refinement and analysis system requires the formulation of domain knowledge. A serious problem faced by knowledge-based system designers is the knowledge acquisition bottleneck. Knowledge acquisition is very challenging and an active research topic in the fields of machine learning and artificial intelligence. Commonly, a knowledge engineer needs a domain expert to formulate acquired knowledge for use in an expert system. That process is rather tedious and error-prone. The domain expert's verbal description can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. In many cases, domain experts prefer to demonstrate actions rather than explain their expertise. These problems motivate us to find another way to make the knowledge acquisition process less challenging. Instead of trying to acquire expertise from a domain expert verbally, we can ask the expert to show it through actions that the system can observe. If the system can learn from those actions, this approach is called learning by demonstration. We have developed a system that can learn region refinement rules automatically. The system observes the steps taken as a human user interactively edits a processed image, and then infers rules from those actions. During the system's learn mode, the user views labeled images and makes refinements through the use of a keyboard and mouse. As the user manipulates the images, the system stores information related to those manual operations and develops internal rules that can be used later for automatic postprocessing of other images. After one or more training sessions, the user places the system into its run mode. The system then accepts new images and uses its rule set to apply postprocessing operations automatically in a manner modeled after those learned from the human user. At any time, the user can return to learn mode to introduce new training information, and this will be used by the system to update its internal rule set. The system does not simply memorize a particular sequence of postprocessing steps during a training session, but instead generalizes from the image data and from the actions of the human user so that new CT images can be refined appropriately. Experimental results have shown that the developed system, IntelliPost, improves overall segmentation accuracy by applying postprocessing rules. In tests on two different CT datasets of hardwood logs, the use of IntelliPost resulted in improvements of 1.92% and 9.45%, respectively. For two different medical datasets, the use of IntelliPost resulted in improvements of 4.22% and 0.33%, respectively. / Ph. D.
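As a toy illustration of the learning-by-demonstration idea (not IntelliPost's actual rule representation), the sketch below records a user's region-level corrections as labelled examples and fits a small decision tree that can replay similar refinements on new images; the features, labels, and data are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each demonstration: region features (area, mean intensity) plus the label the
# user assigned while correcting the initial segmentation (1 = keep, 0 = remove).
demos_X = np.array([[12, 40], [15, 35], [300, 180], [280, 200], [10, 190], [320, 30]])
demos_y = np.array([0, 0, 1, 1, 0, 1])

# "Learn mode": infer a refinement rule from the observed corrections.
rule = DecisionTreeClassifier(max_depth=2).fit(demos_X, demos_y)

# "Run mode": apply the learned rule to regions of a new segmented image.
new_regions = np.array([[14, 50], [290, 170]])
print(rule.predict(new_regions))  # [0 1]: drop the small region, keep the large one
```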
6

Refinement of Automated Forest Area Estimation via Iterative Guided Spectral Class Rejection

Musy, Rebecca Forest 30 June 2003 (has links)
The goal of this project was to develop an operational Landsat TM image classification protocol for FIA forest area estimation. A hybrid classifier known as Iterative Guided Spectral Class Rejection (IGSCR) was automated using the ERDAS C Toolkit and ERDAS Macro Language. The resulting program was tested on four Landsat ETM+ images using training data collected via region-growing at 200 random points within each image. The classified images were spatially post-processed using variations on a 3x3 majority filter and a clump-and-eliminate technique. The accuracy of the images was assessed using the center land use of all plots, and using subsets containing plots with 50, 75, and 100% homogeneity. The overall classification accuracies ranged from 81.9 to 95.4%. The forest area estimates derived from all image, filter, and accuracy set combinations met the USDA Forest Service precision requirement of less than 3% per million acres of timberland. There were no consistently significant filtering effects at the 95% level; however, the 3x3 majority filter significantly improved the accuracy of the most fragmented image and did not decrease the accuracy of the other images. Overall accuracy increased with the homogeneity of the plots used in the validation set and decreased with fragmentation (estimated by % edge; R² = 0.932). We conclude that the use of random points to initiate training data collection via region-growing may be an acceptable and repeatable addition to the IGSCR protocol, provided the training data are representative of the spectral characteristics of the image. We recommend 3x3 majority filtering for all images and, if it would not bias the sample, the selection of validation data using a plot homogeneity requirement rather than plot center land use only. These protocol refinements, along with the automation of IGSCR, make IGSCR suitable for use by the USDA Forest Service in the operational classification of Landsat imagery for forest area estimation. / Master of Science
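For illustration, a generic 3x3 majority filter over a classified forest/non-forest map, the kind of spatial post-processing evaluated above; this is a sketch, not the ERDAS implementation used in the project, and the toy map is invented.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(classified, size=3):
    """Replace each pixel by the most common class in its size x size neighbourhood."""
    def most_common(window):
        counts = np.bincount(window.astype(int))
        return np.argmax(counts)
    return generic_filter(classified, most_common, size=size, mode="nearest")

# Toy 5x5 forest (1) / non-forest (0) map with two isolated "salt" pixels.
classified = np.array([
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
])
print(majority_filter(classified))  # the two isolated pixels are relabeled
```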
7

Singularly perturbed problems with characteristic layers : Supercloseness and postprocessing

Franz, Sebastian 13 August 2008 (has links) (PDF)
In this thesis, singularly perturbed convection-diffusion equations on the unit square are considered. Due to the presence of a small perturbation parameter, the solutions of these problems exhibit an exponential layer near the outflow boundary and two parabolic layers near the characteristic boundaries. Discretisation of such problems on standard meshes and with standard methods leads to numerical solutions with unphysical oscillations, unless the mesh size is of the order of the perturbation parameter, which is impracticable. Instead, we aim at uniformly convergent methods using layer-adapted meshes combined with standard methods. The meshes considered here are S-type meshes, generalisations of the standard Shishkin mesh. The domain is dissected into a non-layer part and layer parts. Inside the layer parts, the mesh may be anisotropic and non-uniform, depending on a mesh-generating function. We show that the unstabilised Galerkin finite element method with bilinear elements on an S-type mesh is uniformly convergent in the energy norm of order (almost) one. Moreover, the numerical solution shows a supercloseness property, i.e. the numerical solution is closer to the nodal bilinear interpolant than to the exact solution in the given norm. Unfortunately, the Galerkin method lacks stability, resulting in linear systems that are hard to solve. To overcome this drawback, stabilisation methods are used. We analyse different stabilisation techniques with respect to the supercloseness property. For the residual-based methods Streamline Diffusion FEM and Galerkin Least Squares FEM, the choice of parameters is additionally addressed. The modern stabilisation technique Continuous Interior Penalty FEM (penalisation of jumps of derivatives) is considered as well. All these methods are proved to possess convergence and supercloseness properties similar to those of the standard Galerkin FEM. With a suitable postprocessing operator, the supercloseness property can be used to enhance the accuracy of the numerical solution, and superconvergence of order (almost) two can be proved. We compare different postprocessing methods and prove superconvergence of the above numerical methods on S-type meshes. To recover the exact solution, we apply continuous biquadratic interpolation on a macro mesh, a discontinuous biquadratic projection on a macro mesh, and two methods to recover the gradient of the exact solution. Special attention is paid to the effects of non-uniformity introduced by the S-type meshes. Numerical simulations illustrate the theoretical results.
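As a concrete illustration of the layer-adapted meshes discussed above, the sketch below builds a one-dimensional piecewise-uniform Shishkin mesh with the usual transition point tau = min(1/2, (sigma*eps/beta)*ln N); the S-type generalisations in the thesis allow more general, non-uniform grading inside the layer part via a mesh-generating function. Parameter names and defaults here are illustrative.

```python
import numpy as np

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N intervals (N even),
    refined near the outflow boundary x = 1 where the exponential layer sits."""
    assert N % 2 == 0, "N must be even"
    tau = min(0.5, sigma * eps / beta * np.log(N))    # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)  # non-layer part
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)    # layer part
    return np.concatenate([coarse, fine[1:]])

mesh = shishkin_mesh(N=16, eps=1e-4)
print(mesh[:3], mesh[-3:])  # coarse spacing away from x = 1, very fine spacing inside the layer
```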
