51.
Selective laser melting and post-processing for lightweight metallic optical components. Maamoun, Ahmed. January 2019.
Industry 4.0 will pave the way to a new age of advanced manufacturing. Additive manufacturing (AM) is one of the leading sectors of the upcoming industrial revolution. The key advantage of AM is its ability to generate lightweight, robust, and complex shapes. AM can also customize the microstructure and mechanical properties of components according to the selected technique and process parameters. AM of metals using selective laser melting (SLM) could significantly impact a variety of critical applications. SLM is the most common technique for processing high-strength aluminum alloys, and promises to enhance the performance of lightweight critical components used in aerospace and automotive applications such as metallic optics and optomechanical components. However, surface and internal defects in the as-built parts are an obstacle to meeting product quality requirements. Consequently, post-processing of SLM-produced Al alloy parts is an essential step for homogenizing their microstructure and reducing as-built defects.
In the current research, various studies assess the optimal process mapping for high-quality SLM parts and the post-processing treatment of Al alloy parts. Ultra-precision machining with single point diamond turning or diamond micro fly-milling is also investigated for the as-built and post-processed Al parts to satisfy the optical mirror’s surface finish requirements.
The influence of the SLM process parameters on the quality of AlSi10Mg and Al6061 alloy parts is investigated. A design of experiments (DOE) is used to analyze relative density, porosity, surface roughness, dimensional accuracy, and mechanical properties according to the interaction effects between SLM process parameters. The microstructure of both materials was also characterized. The developed process map shows the range of energy densities and SLM process parameters needed for each material to achieve optimum quality of the as-built parts. This comprehensive study also strives to reduce the amount of post-processing needed.
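SLM process maps of this kind are conventionally parameterized by the volumetric energy density, a standard figure of merit combining laser power, scan speed, hatch spacing, and layer thickness. A minimal sketch; the parameter values below are illustrative assumptions, not values from the thesis:

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3,
    the usual x-axis of SLM process/quality maps."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative AlSi10Mg-like parameters (assumed, not from the thesis):
e = volumetric_energy_density(power_w=370, scan_speed_mm_s=1300,
                              hatch_mm=0.19, layer_mm=0.03)
# e is about 50 J/mm^3, a typical order of magnitude for dense Al parts
```

Sweeping any one parameter while holding the others fixed traces one axis of such a map.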
Thermal post-processing of AlSi10Mg parts is evaluated, using recycled powder, with the aim of improving the microstructure homogeneity of the as-built parts. This work is essential for the cost-effective additive manufacturing (AM) of metal optics and optomechanical systems. To achieve this goal, a full characterization of fresh and recycled powder was performed, in addition to a microstructure assessment of the as-built fabricated samples. Annealing, solution heat treatment (SHT) and T6 heat treatment (T6 HT) were applied under different processing conditions. The results demonstrated an improvement in microstructure homogeneity after thermal post-processing under specific conditions of SHT and T6 HT. A micro-hardness map was developed to help in the selection of optimal post-processing parameters for the part’s design requirements.
A study is also presented, which aims to improve the surface characteristics of the as-built AlSi10Mg parts using shot peening (SP). Different SP intensities were applied to various surface textures of the as-built samples. The SP results showed a significant improvement in the as-built surface topography and a higher value of effective depth using 22.9A intensity and Gp165 glass beads. The area near the shot-peened surface showed a significant microstructure refinement up to a specific depth, due to the dynamic precipitation of nanoscale Si particles. Surface hardening and high compressive residual stresses were generated due to severe plastic deformation.
Friction stir processing (FSP) was studied as a localized treatment on a large surface area of the as-built and hot isostatic pressed (HIPed) AlSi10Mg parts using multiple FSP tool passes. The influence of FSP on the microstructure, hardness, and residual stresses of parts was investigated. FSP transforms the microstructure of parts into an equiaxed grain structure. A consistent microstructure homogenization was achieved over the processed surface after applying a high ratio of tool pass overlap of ≥60%. A map of microstructure and hardness was prepared to assist in the selection of the optimal FSP parameters for attaining the required quality of the final processed parts.
Micromachining of the mirror surface was performed using diamond micro fly-milling and single point diamond turning, and the effect of the material properties on surface roughness after machining was investigated. The machining parameters were also tuned to meet IR mirror optical requirements. A novel mirror structure is developed using the design for additive manufacturing (DFAM) concept. This design achieved a weight reduction of 50% compared to the typical mirror structure. Moreover, the developed design improves mirror cooling performance through embedded cooling channels directed to the mirror surface.
Thesis / Doctor of Philosophy (PhD)
52.
Probabilistic Flood Forecast Using Bayesian Methods. Han, Shasha. January 2019.
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, the assessment and reduction of uncertainties associated with the forecast remain a challenging task. Therefore, this thesis focuses on the investigation of Bayesian methods for producing probabilistic flood forecasts to accurately quantify predictive uncertainty and enhance the forecast performance and reliability.
In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive performance of HUP with different hydrologic models under different flow conditions was investigated. HUP was then extended into an ensemble prediction framework, constituting the Bayesian Ensemble Uncertainty Processor (BEUP). The BEUP with bias-corrected ensemble weather inputs was then tested to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models.
Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve the deterministic forecast from the hydrologic model and produces a more accurate probabilistic forecast. Under high flow conditions, a better-performing hydrologic model yields a better probabilistic forecast after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less pronounced as lead time increases. The best short-range results are obtained by applying both bias correction and BEUP. Results also show that bias correcting each ensemble member of the weather inputs generates better flood forecasts than bias correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type, and BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event can be accurately predicted in advance, there is time to prepare, reducing its negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of the predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance.
It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
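At its core, a processor in the HUP family performs a Bayesian update: a prior on the actual flow is combined with the likelihood of the model forecast to yield a full predictive distribution rather than a single number. A minimal linear-Gaussian sketch of that idea; the coefficients are illustrative assumptions, not the thesis's fitted values:

```python
def gaussian_post_processor(prior_mean, prior_var,
                            like_slope, like_bias, like_var, forecast):
    """Linear-Gaussian Bayesian update in the spirit of a Hydrologic
    Uncertainty Processor: prior flow h ~ N(prior_mean, prior_var); the
    model forecast s given h is N(like_slope*h + like_bias, like_var).
    Returns the posterior mean and variance of h given the forecast s.
    (All coefficients here are assumed for illustration.)"""
    post_var = 1.0 / (1.0 / prior_var + like_slope**2 / like_var)
    post_mean = post_var * (prior_mean / prior_var
                            + like_slope * (forecast - like_bias) / like_var)
    return post_mean, post_var

# Climatological prior of 100 m^3/s, an unbiased forecast of 140 m^3/s:
m, v = gaussian_post_processor(prior_mean=100.0, prior_var=400.0,
                               like_slope=1.0, like_bias=0.0, like_var=100.0,
                               forecast=140.0)
# The posterior mean (132) sits between prior and forecast, and the
# posterior variance (80) is smaller than either input variance.
```

In practice HUP works in a normal-transformed space of flows, but the update step has this same structure.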
53.
Deformation monitoring using GNSS: A study on a local network with preset displacements. Mohammed, Peshawa. January 2019.
In the past two decades, the number of observations and the accuracy of satellite-based geodetic measurements like Global Navigation Satellite Systems (GNSS) have greatly increased, providing measured values of displacements and velocities of permanent geodetic stations. Establishing geodetic control networks and collecting geodetic observations in different epochs is a commonly used method for detecting displacements and, consequently, for disaster management. Selecting proper processing parameters for different types of monitoring networks is a critical factor in deformation monitoring analysis using GNSS, which is the main aim of this research. In this study, a simulation study and a controlled survey were performed using simultaneous GNSS measurements of 5 geodetic pillars established by Lantmäteriet at Gävle airport. Sensitivity analyses were performed on different types of monitoring networks using different sets of processing parameters. These scenarios consider different sets of parameters, different types of monitoring networks, and various numbers of monitoring stations to evaluate the detectable displacements and compare them with the known (simulated) millimeter displacements. The results showed that the selection of processing parameters depends on the type and size of the monitoring network and on the location of the monitoring stations. Analyses also show that online processing services can provide mm-cm level accuracy for displacement detection if sufficient observation time is available. Finally, checks were performed on two of the sample scenarios to find the minimum observation time required to recover the simulated (preset) displacements most accurately.
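Displacement detection between two GNSS epochs is typically decided with a statistical (congruence-type) test on the coordinate difference against its combined covariance. A minimal per-station sketch; the sigmas and displacement below are illustrative, not the Gävle network results:

```python
import numpy as np

def displacement_significant(x_epoch1, x_epoch2, cov1, cov2, crit=7.815):
    """Flag a station displacement between two epochs as significant when
    the quadratic form d^T (Q1+Q2)^{-1} d exceeds a chi-square critical
    value (7.815 is the 95% quantile for 3 coordinate components).
    Coordinates in meters, covariances in m^2."""
    d = np.asarray(x_epoch2, float) - np.asarray(x_epoch1, float)
    q = np.asarray(cov1, float) + np.asarray(cov2, float)
    t = float(d @ np.linalg.solve(q, d))
    return t > crit, t

# A 5 mm displacement with 1 mm coordinate sigmas per epoch (assumed):
sig2 = 1e-6  # (1 mm)^2 in m^2
ok, t = displacement_significant([0.0, 0.0, 0.0], [0.0, 0.005, 0.0],
                                 np.eye(3) * sig2, np.eye(3) * sig2)
# t = 12.5 > 7.815, so the 5 mm displacement is detectable at these sigmas
```

Shrinking the per-epoch sigmas (e.g. by longer observation sessions) is what pushes millimeter-level preset displacements above the detection threshold.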
54.
On-Board Data Processing and Filtering. Faber, Marc.
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / One of the requirements resulting from mounting pressure on flight test schedules is a reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has several ramifications: the demand to record and process not just raw measurement data but also data converted to engineering units in real time, an optimized use of the bandwidth available for the telemetry downlink, and ultimately a shortening of the procedures that disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to address these needs is to implement more intelligence and processing power directly in the on-board flight test equipment, which provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be converted directly to easier-to-handle engineering units, leading to faster extraction and analysis of the actual data content of on-board signals and buses. Another central goal is the efficient use of the available telemetry bandwidth. Real-time data reduction via intelligent filtering is one approach to this challenging objective. The filtering process should be performed simultaneously with an all-data-capture recording, and the user should be able to select the data of interest without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should generate a seamless and transparent data transmission, making quick data analysis viable.
On-board data processing and filtering has the potential to become the main path for handling the challenges of FTI data acquisition and analysis in a more comfortable and effective way.
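The kind of bandwidth-saving filtering described above can be illustrated with a simple deadband filter that forwards a sample only when it has changed meaningfully since the last transmitted value. This is a generic sketch of the idea, not the algorithm of any particular FTI vendor:

```python
def deadband_filter(samples, threshold):
    """Illustrative on-board data-reduction filter: emit a (time, value)
    sample only when the value differs from the last transmitted value
    by more than `threshold`. The all-data-capture recording would keep
    every sample; only the downlink stream is thinned."""
    out = []
    last = None
    for t, v in samples:
        if last is None or abs(v - last) > threshold:
            out.append((t, v))
            last = v
    return out

# Five engineering-unit samples, threshold 0.5: only real changes go out
reduced = deadband_filter(
    [(0, 10.0), (1, 10.1), (2, 10.2), (3, 11.0), (4, 11.05)],
    threshold=0.5)
# reduced == [(0, 10.0), (3, 11.0)] -- a 60% downlink reduction here
```

Real FTI filtering schemes are richer (per-parameter thresholds, rate limits, event triggers), but all trade on-board computation for downlink bandwidth in this way.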
55.
PTC Creo Simulate 4 Roadmap. Coronado, Jose. 22 July 2016 (PDF).
This presentation describes the enhancements in Creo Simulate 4.0 and the roadmap for future releases (5.0+).
56.
Creo Simulate: Roadmap. Coronado, Jose. 06 June 2017 (PDF).
This presentation describes the enhancements in Creo Simulate and the roadmap for the future.
57.
Methodology of structural analysis and post-processing from offshore system simulations. Gaspar, Henrique Murilo. 28 June 2007.
This work presents a methodology developed to treat the hydrodynamic analysis of an offshore system conjointly with its structural analysis; the same methodology also allows for combined post-processing of the results. Programming routines were created to enable the use of the time series of forces acting on a platform's risers and mooring lines as input data for a finite element pre-processor. Applying these forces to the finite element model, and analysing it in the time domain, made it possible to create an interface for the solver output, so that the structural results could be imported into the hydrodynamic post-processor and visualised with the same motions obtained in the hydrodynamic analysis response. TPNView, the current post-processor of the Tanque de Provas Numérico (TPN) laboratory, received the routines and interfaces created from the ideas presented in this dissertation.
Using these visualisation tools, it became possible to monitor both the hydrodynamic and the structural behaviour of a system component at once.
58.
Analysis of High Fidelity Turbomachinery CFD Using Proper Orthogonal Decomposition. Spencer, Ronald Alex. 01 March 2016.
Assessing the impact of inlet flow distortion in turbomachinery is desired early in the design cycle. This thesis introduces and validates methods based on the Proper Orthogonal Decomposition (POD) to analyze clean and 1/rev static pressure distortion simulation results at design and near-stall operating conditions. The value of POD lies in its ability to efficiently extract both quantitative and qualitative information about dominant spatial flow structures as well as about temporal fluctuations in flow properties. Observation of the modes allowed qualitative identification of shock waves and quantification of their location and range of motion. Modal coefficients revealed the location of the passage shock at a given angular location. Distortion amplification and attenuation between rotors was also identified, and a relationship was found between how the distortion manifests itself and the downstream conditions. POD provides an efficient means of extracting the most meaningful information from large CFD simulation datasets. Static pressure and axial velocity were analyzed to explore the flow physics of 3 rotors of a compressor with a distorted inlet. The analysis of the static pressure POD modes showed a decreased range of passage shock oscillation. The axial velocity POD modes revealed a separated region on the low pressure surface of the blade, most dynamic in rotor 1; the thickness of this structure decreased at the near-stall operating condition. The general conclusion is that as the fan approaches stall, the apparent effects of distortion lessen, leading to less variation in the operating condition. This is because the changed operating condition places the fan at a different position on the speedline, where distortion effects are less pronounced.
POD modes of entropy flux were used to identify three distinct levels of entropy flux in the blade row passage. The separated region was the region with the highest entropy due to the irreversibilities associated with separation.
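Snapshot POD of the kind used here is computed with a thin SVD of the mean-subtracted snapshot matrix: left singular vectors are the spatial modes, and squared singular values give each mode's energy fraction. A minimal sketch on synthetic data (the field below is synthetic, not the thesis's CFD results):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via thin SVD. Columns of `snapshots` are flow-field
    snapshots (e.g. static pressure at N grid points over M time steps).
    Returns spatial modes, temporal coefficients, and the energy
    fraction captured by each retained mode."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    coeffs = s[:n_modes, None] * vt[:n_modes]  # temporal coefficients
    return u[:, :n_modes], coeffs, energy[:n_modes]

# Synthetic field: one dominant oscillating structure plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)              # 200 "grid points"
t = np.linspace(0.0, 4 * np.pi, 50)         # 50 "time steps"
field = (np.outer(np.sin(2 * np.pi * x), np.cos(t))
         + 0.01 * rng.standard_normal((200, 50)))
modes, coeffs, energy = pod_modes(field, n_modes=2)
# energy[0] is close to 1: the first mode captures the coherent structure
```

Tracking a modal coefficient over time is how features like the range of passage shock oscillation can be quantified from the decomposition.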
59.
Fiber post-processing and atomic spectroscopy for the development of atomic-vapour photonic microcell. Zheng, Ximeng. 18 July 2017.
This thesis reports on atomic spectroscopy for the development of alkaline atomic-vapour photonic microcells (PMC). The work is motivated by reproducing the outstanding laboratory-based performances achieved in the fields of frequency standards and coherent optics in highly compact, stand-alone devices accessible to a wider scientific community or to a commercial market. In our case these future devices will be based on a hollow-core photonic crystal fiber (HC-PCF) filled with a gas-phase material to form a PMC, distinguished by an ultra-long interaction length and a micrometric modal area. However, the micrometric scale of the fiber core harboring the atoms raises several technical and scientific challenges. Among the technical challenges are the development of an efficient process for loading atoms into a long hollow fiber with a small core diameter, and the suppression or mitigation of the physico-chemical reactivity of the atoms (i.e. rubidium) with the silica inner surface of the fiber core. In parallel, the large surface-to-volume ratio of the fiber core raises questions about the coherence relaxation dynamics and about the nature and effect of the atom-surface interaction. The thesis reports on coating the inner surface of the fiber core with different materials to mitigate the physico-chemical reactions of the confined atoms with the surface, on tapering large-core inhibited-coupling Kagome HC-PCF, and on a splicing technique that ensures low splice loss and no atomic reactivity during the splicing process. In parallel, the thesis reports on a set of spectroscopy experiments to assess the relaxation dynamics of atoms inside HC-PCF and on novel sub-Doppler transparencies.
60.
Application of high pressure processing for extending the shelf-life of fresh lactic curd cheese. Daryaei, Hossein. January 2008.
Outgrowth of spoilage yeasts and moulds and post-processing acidification can limit the shelf-life of some fermented dairy products, including fresh lactic curd cheeses. The possibility of using high pressure processing (HPP) for controlling these problems was investigated in a commercially manufactured fresh lactic curd cheese (pH 4.3-4.4) and in fermented milk models (pH 4.3-6.5). The effects of HPP at 300 and 600 MPa on the inactivation of glycolytic enzymes of lactic acid bacteria were also evaluated. Fresh cheeses made from pasteurised bovine milk using a commercial Lactococcus starter preparation were treated with high pressures ranging from 200 to 600 MPa (≤22 °C, 5 min) under vacuum packaging conditions and subsequently stored at 4 °C for 8 weeks. Treatment at ≥300 MPa substantially reduced the viable count of Lactococcus and effectively prevented the outgrowth of yeasts and moulds for 6 to 8 weeks without adversely affecting the sensory and textural attributes of the product. However, it had no significant effect (p < 0.01) on the variation of titratable acidity during storage. Fermented milk models were prepared by individually growing Lactococcus lactis subsp. lactis C10, Lactococcus lactis subsp. cremoris BK5, Streptococcus thermophilus TS1, Lactobacillus acidophilus 2400 and Lactobacillus delbrueckii subsp. bulgaricus 2517 in UHT skim milk and diluting the resulting fermented milk with UHT skim milk up to pH 6.5. Pressure treatment of the milk models at pH 5.2 resulted in substantial inhibition of post-processing acidification during storage and markedly reduced the viable count of Lactococcus at both 300 and 600 MPa, and of the other bacteria only at 600 MPa.
Treatment of the milk model at 600 MPa decreased the viable counts of Candida zeylanoides and Candida lipolytica (wild-type spoilage yeasts of lactic curd cheese, added as challenge cultures) from 10^5 CFU mL^-1 to below the detection limit (log 0 CFU mL^-1) at all pH levels tested (pH 4.3-6.5) and effectively controlled their outgrowth for 8 weeks. Treatment at 300 MPa had a similar effect only on C. zeylanoides. The viable count of C. lipolytica was reduced by 2.6, 2.4 and 2.3 logs by treatment at 300 MPa at pH levels of 4.3, 5.2 and 6.5, respectively, but subsequently recovered by 2.9, 2.8 and 3.2 logs within 3 weeks. Glycolytic enzymes of the various starter bacteria responded differently to pressure treatment. The lactate dehydrogenase in L. lactis subsp. lactis and Lb. acidophilus was quite resistant to pressures up to 600 MPa, but it was almost completely inactivated in S. thermophilus at pressure levels as low as 300 MPa. The β-galactosidase in Lb. acidophilus was more pressure stable than the β-galactosidase in S. thermophilus and the phospho-β-galactosidase in L. lactis subsp. lactis. The findings of this study suggest HPP at 300-600 MPa as an effective method for controlling the outgrowth of some spoilage yeasts and moulds in fresh lactic curd cheeses. The results obtained with selected lactic acid bacteria in fermented milk models can assist in establishing HPP operating parameters for the development of a new generation of cultured dairy products with reduced acidity and extended shelf-life.
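Log reductions like those reported above are commonly summarized with a first-order (log-linear) inactivation model, where the D-value is the treatment time needed for a 1-log reduction at a given pressure. A minimal sketch; the D-value used below is an assumed illustrative number, not one measured in this study:

```python
import math

def log_linear_survivors(n0_cfu, time_min, d_value_min):
    """First-order (log-linear) microbial inactivation model often used
    for pressure and thermal processing:
        log10 N = log10 N0 - t / D
    where D is the decimal reduction time (min per 1-log kill).
    Returns the surviving count N in the same units as n0_cfu."""
    log_n = math.log10(n0_cfu) - time_min / d_value_min
    return 10 ** log_n

# A 10^5 CFU/mL yeast load and an assumed D = 1.2 min at 600 MPa:
n = log_linear_survivors(1e5, time_min=6.0, d_value_min=1.2)
# 6 min / 1.2 min per log = 5 logs, so 10^5 CFU/mL drops to about 1 CFU/mL
```

Yeast recovery during storage, as observed for C. lipolytica at 300 MPa, is one reason purely log-linear models can overstate the lasting effect of a sublethal treatment.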