841 |
Thermal Stability of Zr-Si-N Nanocomposite Hard Thin Films / Ku, Nai-Yuan January 2010 (has links)
The mechanical properties and thermal stability of Zr-Si-N films with varying silicon contents deposited on Al2O3 (0001) substrates are characterized. All films provided for characterization were deposited by reactive DC magnetron sputtering from elemental Zr and Si targets in a N2/Ar plasma at 800 °C. The hardness and microstructures of the as-deposited films and of films post-annealed at temperatures up to 1100 °C are evaluated by means of nanoindentation, X-ray diffractometry and transmission electron microscopy. The Zr-Si-N films with 9.4 at.% Si exhibit a hardness as high as 34 GPa and a strong (002) texture within which vertically elongated ZrN crystallites are embedded in a Si3N4 matrix. The hardness of these two-dimensional nanocomposite films remains stable up to an annealing temperature of 1000 °C, in contrast to ZrN films, in which hardness degradation already occurs above 800 °C. The enhanced thermal stability is attributed to the presence of Si3N4 grain boundaries, which act as efficient barriers that hinder oxygen diffusion. X-ray amorphous or nanocrystalline structures are observed in Zr-Si-N films with silicon contents > 13.4 at.%. After the annealing treatments, crystalline phases such as ZrSi2, ZrO2 and Zr2O are formed above 1000 °C in the Si-containing films, whereas zirconia crystallites are already observed at 800 °C in pure ZrN films, owing to residual oxygen present in the vacuum furnace. The structural, compositional and hardness comparison of as-deposited and annealed films reveals that the addition of silicon enhances the thermal stability compared to pure ZrN films and that the hardness degradation stems from the formation of oxides at elevated temperatures.
|
842 |
Stochastic Nested Aggregation for Images and Random Fields / Wesolkowski, Slawomir Bogumil 27 March 2007 (has links)
Image segmentation is a critical step in building a computer vision algorithm that is able to distinguish between separate objects in an image scene. Image segmentation is based on two fundamentally intertwined components: pixel comparison and pixel grouping. In the pixel comparison step, pixels are determined to be similar or different from each other. In pixel grouping, those pixels which are similar are grouped together to form meaningful regions which can later be processed. This thesis makes original contributions to both of those areas.
First, within a Markov Random Field framework, a Stochastic Nested Aggregation (SNA) framework for pixel and region grouping is presented and thoroughly analyzed using a Potts model. This framework is applicable in general to graph partitioning and discrete estimation problems where pairwise energy models are used. Nested aggregation reduces the computational complexity of stochastic algorithms such as Simulated Annealing to O(N), while at the same time allowing local deterministic approaches such as Iterated Conditional Modes to escape most local minima and thereby become global deterministic optimization methods. SNA is further enhanced by the introduction of a Graduated Models strategy, which allows an optimization algorithm to converge to the target model via several intermediate models. A well-known special case of Graduated Models is the Highest Confidence First algorithm, which merges the pixels or regions that give the highest global energy decrease. Finally, SNA allows different models to be used at different levels of coarseness. For coarser levels, a mean-based Potts model is introduced in order to compute region-to-region gradients based on region means rather than edge gradients.
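As a concrete illustration of the pairwise energy driving such aggregation, the sketch below evaluates a Potts energy over a small region adjacency graph and applies a Metropolis-style acceptance test to a candidate merge (Python, illustrative only: the toy graph, the coupling constant beta and the acceptance rule are assumptions for exposition, not the SNA implementation itself).

    import math, random

    def potts_energy(labels, edges, beta=1.0):
        # Potts pairwise model: every edge whose endpoints carry different
        # labels contributes +beta; same-label edges contribute nothing.
        return beta * sum(1 for i, j in edges if labels[i] != labels[j])

    def accept_merge(labels, edges, i, j, temperature, beta=1.0):
        # Energy change if region i adopts region j's label, accepted with the
        # Metropolis rule used by annealing-style stochastic grouping.
        trial = dict(labels)
        trial[i] = labels[j]
        delta = potts_energy(trial, edges, beta) - potts_energy(labels, edges, beta)
        return delta <= 0 or random.random() < math.exp(-delta / temperature)

    # toy region adjacency graph: four regions, one label each
    labels = {0: 0, 1: 1, 2: 2, 3: 3}
    edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
    print(accept_merge(labels, edges, 0, 1, temperature=0.5))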
Second, we develop a probabilistic framework based on hypothesis testing in order to achieve color constancy in image segmentation. We develop three new shading invariant semi-metrics based on the Dichromatic Reflection Model. An RGB image is transformed into an R'G'B' highlight invariant space to remove any highlight components, and only the component representing color hue is preserved to remove shading effects. This transformation is applied successfully to one of the proposed distance measures. The probabilistic semi-metrics show similar performance to vector angle on images without saturated highlight pixels; however, for saturated regions, as well as very low intensity pixels, the probabilistic distance measures outperform vector angle.
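The vector-angle baseline mentioned above can be written down directly; the short sketch below (illustrative only, with made-up pixel values) computes the angle between two RGB vectors, a quantity that is unchanged by a uniform intensity scaling and hence insensitive to shading.

    import math

    def vector_angle(p, q):
        # Angle between two RGB vectors; scaling either pixel by a positive
        # constant (a pure shading change) leaves the angle unchanged.
        dot = sum(a * b for a, b in zip(p, q))
        norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    print(vector_angle((120, 60, 30), (240, 120, 60)))  # ~0: same hue, doubled intensity
    print(vector_angle((120, 60, 30), (30, 60, 120)))   # larger: genuinely different colour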
Third, for interferometric Synthetic Aperture Radar image processing we apply the Potts model using SNA to the phase unwrapping problem. We devise a new distance measure for identifying phase discontinuities based on the minimum coherence of two adjacent pixels and their phase difference. As a comparison we use the probabilistic cost function of Carballo as a distance measure for our experiments.
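The closed form of this discontinuity measure is not given here, so the following is only one plausible reading (the division by the minimum coherence is an assumption, not the thesis's formula): adjacent pixels with a large wrapped phase difference and low coherence score as likely discontinuities.

    import math

    def wrap(phase):
        # wrap a phase difference into (-pi, pi]
        return math.atan2(math.sin(phase), math.cos(phase))

    def discontinuity_score(phi_i, phi_j, coh_i, coh_j, eps=1e-6):
        # hypothetical combination: wrapped phase difference, amplified when
        # the less reliable (minimum-coherence) neighbour is noisy
        return abs(wrap(phi_i - phi_j)) / (min(coh_i, coh_j) + eps)

    print(discontinuity_score(3.0, -3.0, 0.9, 0.4))  # small wrapped jump despite a raw difference of 6 rad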
|
843 |
Dynamic and Robust Capacitated Facility Location in Time Varying Demand Environments / Torres Soto, Joaquin 2009 May 1900 (has links)
This dissertation studies models for locating facilities in time varying demand
environments. We describe the characteristics of the time varying demand that motivate
the analysis of our location models in terms of total demand and the change
in value and location of the demand of each customer. The first part of the dissertation
is devoted to the dynamic location model, which determines the optimal
time and location for establishing capacitated facilities when demand and cost parameters
are time varying. This model minimizes the total cost over a discrete and
finite time horizon for establishing, operating, and closing facilities, including the
transportation costs for shipping demand from facilities to customers. The model
is solved using Lagrangian relaxation and Benders' decomposition. Computational
results from different time varying total demand structures demonstrate, empirically,
the performance of these solution methods.
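In generic form, the dynamic capacitated facility location problem described above can be written as the following mixed-integer program (a textbook-style sketch under simplifying assumptions, not the dissertation's exact formulation; the symbols are introduced here for illustration and closing costs are omitted for brevity):

    \min \sum_{t}\sum_{j} \bigl( f_{jt}\, u_{jt} + g_{jt}\, y_{jt} \bigr) + \sum_{t}\sum_{i}\sum_{j} c_{ijt}\, x_{ijt}
    \text{s.t.}\quad \sum_{j} x_{ijt} = d_{it} \;\; \forall i,t, \qquad \sum_{i} x_{ijt} \le K_{j}\, y_{jt} \;\; \forall j,t,
    \qquad u_{jt} \ge y_{jt} - y_{j,t-1} \;\; \forall j,t, \qquad x_{ijt} \ge 0, \quad u_{jt},\, y_{jt} \in \{0,1\},

where y_{jt} = 1 if facility j is open in period t, u_{jt} flags its establishment, f and g are establishment and operating costs, c_{ijt} are unit transportation costs, d_{it} are customer demands and K_j are facility capacities. With the integer location variables fixed, the remaining transportation subproblem is a linear program, which is what makes Lagrangian relaxation and Benders' decomposition natural solution methods for this structure.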
The second part of the dissertation studies two location models where relocation
of facilities is not allowed and the objective is to determine the optimal location
of capacitated facilities that will have a good performance when demand and cost
parameters are time varying. The first model minimizes the total cost for opening
and operating facilities and the associated transportation costs when demand and
cost parameters are time varying. The model is solved using Benders' decomposition. We show that in the presence of high relocation costs of facilities (opening and closing
costs), this model can be solved as a special case by the dynamic location model. The
second model minimizes the maximum regret or opportunity loss between a robust
configuration of facilities and the optimal configuration for each time period. We
implement local search and simulated annealing metaheuristics to efficiently obtain
near optimal solutions for this model.
|
844 |
Multi-Scale Approaches For Understanding Deformation And Fracture Mechanisms In Amorphous Alloys / Palla Murali, * 08 1900 (has links)
Amorphous alloys possess attractive combinations of mechanical properties (high elastic limit, ~2%; high fracture toughness, 20-50 MPa·m^1/2; etc.) and exhibit mechanical behavior that differs, in many ways, from that of crystalline metals and alloys. However, a fundamental understanding of the deformation and fracture mechanisms in amorphous alloys, which would allow for the design of better metallic glasses, has not yet been established on a firm footing. The objective of this work is to understand the deformation and fracture mechanisms of amorphous materials at various length scales and to make connections with the macroscopic properties of glasses. Various experimental techniques were employed to study the macroscopic behavior, and atomistic simulations were conducted to understand the mechanisms at the nano level.
Towards achieving these objectives, we first study the toughness of a Zr-based bulk metallic glass (BMG), Vitreloy-1, as a function of the free volume, which was varied by recourse to structural relaxation of the BMG through sub-Tg annealing treatments. Both isothermal annealing at 500 K (0.8Tg) for up to 24 h and isochronal annealing for 24 h in the temperature range of 130 K (0.65Tg) to 530 K (0.85Tg) were conducted, and the impact toughness, Γ, was measured. Results show severe embrittlement upon annealing, with losses of up to 90% in Γ. The variation in Γ with annealing time, t_a, was found to be similar to that observed in the enthalpy change at the glass transition, ΔH, with t_a, indicating that the reduction of free volume due to annealing is the primary mechanism responsible for the loss in Γ. Having established the connection between sub-atomic length scales (free volume) and macroscopic response (toughness), we further investigated the effects of relaxation on intermediate length scale behavior, namely deformation by shear bands, by employing instrumented indentation techniques. While the Vickers nano-indentation response of the as-cast and annealed glasses does not show any significant difference, the spherical indentation response shows reduced shear band activity in the annealed BMG. Further, a relatively high indentation strain was observed to be necessary for shear band initiation in the annealed glass, implying an increased resistance to the nucleation of shear bands when the BMG is annealed.
In the absence of microstructural features that allow for establishment of correlation between properties and the structure, we resort to atomistic modeling to gain further understanding of the deformation mechanisms in amorphous alloys. In particular, we focus on the micromechanisms of strain accommodation including crystallization and void formation during inelastic deformation of glasses. Molecular dynamics simulations on a single component system with Lennard-Jones-like atoms suggest that a softer short range interaction between atoms favors crystallization. Compressive hydrostatic strain in the presence of a shear strain promotes crystallization whereas a tensile hydrostatic strain was found to induce voids. The deformation subsequent to the onset of crystallization includes partial re-amorphization and recrystallization, suggesting important mechanisms of plastic deformation in glasses.
Next, a study of deformation-induced crystallization is conducted on two-component amorphous alloys through atomistic simulations. The resistance of a binary glass to deformation-induced crystallization (deformation stability) is found to increase with increasing atomic size ratio. A new parameter called “atomic stiffness” (defined by the curvature of the inter-atomic potential at the equilibrium separation distance) is introduced and examined for its role in deformation stability. The deformation stability of binary glasses is found to increase with increasing atomic stiffness. For a given composition, the internal energies of binary crystals and glasses are compared, and it is found that the energy of the glass remains approximately constant over a wide range of atomic size ratios, unlike crystals, in which the energy increases with increasing atomic size ratio. This study uncovers the similarities between the deformation and thermal stabilities of glasses and suggests new parameters for predicting highly stable glass compositions.
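To make the "atomic stiffness" parameter concrete, the sketch below evaluates the curvature of a 12-6 Lennard-Jones pair potential at its equilibrium separation, both by finite difference and analytically (reduced Lennard-Jones units; this potential and its parameters are an illustration, not the interaction model used in the thesis).

    def lj(r, eps=1.0, sigma=1.0):
        # 12-6 Lennard-Jones pair potential
        return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

    def stiffness(potential, r_eq, h=1e-5):
        # "atomic stiffness": curvature d2V/dr2 at the equilibrium separation,
        # estimated here with a central finite difference
        return (potential(r_eq + h) - 2.0 * potential(r_eq) + potential(r_eq - h)) / h ** 2

    r_eq = 2.0 ** (1.0 / 6.0)           # minimum of the 12-6 potential for sigma = 1
    print(stiffness(lj, r_eq))          # ~57.15 in units of eps / sigma^2
    print(288.0 / 2.0 ** (7.0 / 3.0))   # analytic value for comparison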
|
845 |
HeT-SiC-05: International Topical Workshop on Heteroepitaxy of 3C-SiC on Silicon and its Application to Sensor Devices, April 26 to May 1, 2005, Hotel Erbgericht Krippen / Germany - Selected Contributions - / Skorupa, Wolfgang; Brauer, Gerhard 31 March 2010 (has links) (PDF)
This report collects selected outstanding scientific and technological results obtained within the frame of the European project "FLASiC" (Flash LAmp Supported Deposition of 3C-SiC), as well as other work performed in adjacent fields. The goal of the project was the production of large-area epitaxial 3C-SiC layers grown on Si, where in an early stage of SiC deposition the SiC/Si interface is rigorously improved by energetic electromagnetic radiation from purpose-built flash lamp equipment developed at Forschungszentrum Rossendorf. The background to this work is the challenge, in areas such as microelectronics, biotechnology and biomedicine, of meeting the growing demand for high-quality electronic sensors that operate at high temperatures and under extreme environmental conditions. First results in continuation of the project work – for example, the deposition of the topical semiconductor material zinc oxide (ZnO) on epitaxial 3C-SiC/Si layers – are also reported.
|
846 |
An intelligent vertical handoff decision algorithm in next generation wireless networks / Nkansah-Gyekye, Yaw January 2010 (has links)
The objective of the thesis research is to design vertical handoff decision algorithms that enable mobile field workers and other mobile users equipped with contemporary multimode mobile devices to communicate seamlessly in the NGWN. In order to tackle this research objective, we used fuzzy logic and fuzzy inference systems to design a suitable handoff initiation algorithm that can handle imprecision and uncertainty in data and process multiple vertical handoff initiation parameters (criteria); used the fuzzy multiple attribute decision making method and context awareness to design a suitable access network selection function that can handle a tradeoff among many handoff metrics, including quality of service requirements (such as network conditions and system performance), mobile terminal conditions, power requirements, application types, user preferences, and a price model; used genetic algorithms and simulated annealing to optimise the access network selection function in order to dynamically select the optimal available access network for handoff; and focused in particular on an interesting use case: vertical handoff decision between mobile WiMAX and UMTS access networks. The implementation of our handoff decision algorithm will provide a network selection mechanism to help mobile users select the best wireless access network among all available wireless access networks, that is, one that provides always-best-connected services to users.
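As a heavily simplified illustration of the network-selection step, the snippet below scores each candidate access network by a weighted sum of normalised handoff metrics and picks the best one (a sketch only: the metric names, weights and values are made up, and a real fuzzy MADM implementation would replace the crisp weighted sum).

    def select_network(candidates, weights):
        # candidates: {network: {metric: value in [0, 1], higher is better}}
        # weights:    {metric: relative importance, summing to 1}
        def score(metrics):
            return sum(weights[m] * metrics.get(m, 0.0) for m in weights)
        return max(candidates, key=lambda name: score(candidates[name]))

    weights = {"rssi": 0.3, "bandwidth": 0.3, "battery_cost": 0.2, "price": 0.2}
    candidates = {
        "WiMAX": {"rssi": 0.7, "bandwidth": 0.9, "battery_cost": 0.5, "price": 0.6},
        "UMTS":  {"rssi": 0.8, "bandwidth": 0.4, "battery_cost": 0.7, "price": 0.5},
    }
    print(select_network(candidates, weights))  # -> WiMAX with these made-up numbers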
|
847 |
Étude fonctionnelle du cotransporteur Na+/glucose (hSGLT1) : courant de fuite, vitesse de cotransport et modélisation cinétique / Longpré, Jean-Philippe 05 1900 (has links)
Les résultats présentés dans cette thèse précisent certains aspects de la fonction du cotransporteur Na+/glucose (SGLT1), une protéine transmembranaire qui utilise le gradient électrochimique favorable des ions Na+ afin d’accumuler le glucose à l’intérieur des cellules épithéliales de l’intestin grêle et du rein.
Nous avons tout d’abord utilisé l’électrophysiologie à deux microélectrodes sur des ovocytes de xénope afin d’identifier les ions qui constituaient le courant de fuite de SGLT1, un courant mesuré en absence de glucose qui est découplé de la stoechiométrie stricte de 2 Na+/1 glucose caractérisant le cotransport. Nos résultats ont démontré que des cations comme le Li+, le K+ et le Cs+, qui n’interagissent que faiblement avec les sites de liaison de SGLT1 et ne permettent pas les conformations engendrées par la liaison du Na+, pouvaient néanmoins générer un courant de fuite d’amplitude comparable à celui mesuré en présence de Na+. Ceci suggère que le courant de fuite traverse SGLT1 en utilisant une voie de perméation différente de celle définie par les changements de conformation propres au cotransport Na+/glucose, possiblement similaire à celle empruntée par la perméabilité à l’eau passive. Dans un deuxième temps, nous avons cherché à estimer la vitesse des cycles de cotransport de SGLT1 à l’aide de la technique de la trappe ionique, selon laquelle le large bout d’une électrode sélective (~100 μm) est pressé contre la membrane plasmique d’un ovocyte et circonscrit ainsi un petit volume de solution extracellulaire que l’on nomme la trappe. Les variations de concentration ionique se produisant dans la trappe en conséquence de l’activité de SGLT1 nous ont permis de déduire que le cotransport Na+/glucose s’effectuait à un rythme d’environ 13 s-1 lorsque le potentiel membranaire était fixé à -155 mV. Suite à cela, nous nous sommes intéressés au développement d’un modèle cinétique de SGLT1. En se servant de l’algorithme du recuit simulé, nous avons construit un schéma cinétique à 7 états reproduisant de façon précise les courants du cotransporteur
en fonction du Na+ et du glucose extracellulaire. Notre modèle prédit qu’en présence d’une concentration saturante de glucose, la réorientation dans la membrane de SGLT1 suivant le relâchement intracellulaire de ses substrats est l’étape qui limite la vitesse de cotransport. / The results presented in this thesis clarify certain functional aspects of the Na+/glucose cotransporter (SGLT1), a membrane protein which uses the downhill electrochemical gradient of Na+ ions to drive the accumulation of glucose in epithelial cells of the small intestine and the kidney.
We first used two-microelectrode electrophysiology on Xenopus oocytes to identify the ionic species mediating the leak current of SGLT1, a current measured in the absence of glucose that is uncoupled from the strict 2 Na+/1 glucose stoichiometry
characterising cotransport. Our results showed that cations such as Li+, K+ and Cs+, which interact weakly with SGLT1 binding sites and are unable to generate the conformational changes that are triggered by Na+ binding, were however able to generate leak currents similar in amplitude to the one measured in the presence of Na+. This suggests that the leak current permeating through SGLT1 does so using a pathway that differs from the conformational changes associated with Na+/glucose cotransport. Moreover, it was found that the cationic leak and the passive water permeability could share a common pathway. We then sought to estimate the turnover rate of SGLT1 using the ion-trap technique, where a large-tip ion-selective electrode (~100 μm) is pushed against the oocyte plasma membrane, thus enclosing a small volume of extracellular solution referred to as the trap. The variations in ionic concentration occurring in the trap as a consequence of SGLT1 activity made it possible to assess that the turnover rate of Na+/glucose cotransport was 13 s⁻¹ when the membrane potential was clamped to -155 mV. As a last project, we focused our interest on the development of a kinetic model for SGLT1. Taking advantage of the simulated annealing algorithm, we constructed a 7-state kinetic scheme whose predictions accurately reproduced the currents of the cotransporter as a function of extracellular Na+ and glucose. According to our model, the rate-limiting step of cotransport under a saturating glucose concentration is the reorientation of the empty carrier that follows the intracellular
release of substrates.
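For readers unfamiliar with how a turnover rate is extracted from whole-cell currents, the sketch below applies the standard relation I = N · z · e · f, where z = 2 elementary charges are translocated per cycle for the 2 Na+/1 glucose stoichiometry (the current and transporter number used here are hypothetical round numbers, not measurements from this work).

    E_CHARGE = 1.602e-19  # elementary charge, in coulombs

    def turnover_rate(current_amps, n_transporters, charges_per_cycle=2):
        # I = N * (z * e) * f  =>  f = I / (N * z * e)
        return current_amps / (n_transporters * charges_per_cycle * E_CHARGE)

    # hypothetical example: 1 uA of cotransport current carried by 2.4e11 copies of SGLT1
    print(turnover_rate(1.0e-6, 2.4e11))  # ~13 cycles per second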
|
848 |
An optimisation approach to improve the throughput in wireless mesh networks through network coding / van der Merwe C. / Van der Merwe, Corna January 2011 (has links)
In this study, the effect of implementing Network Coding on the aggregated throughput in Wireless Mesh Networks was examined. Wireless Mesh Networks (WMNs) are multiple hop wireless networks, where routing through any node is possible. The implication of this characteristic is that messages flow across points where they would have been terminated in conventional wireless networks. User nodes in conventional wireless networks only transmit and receive messages from an Access Point (AP), and discard any messages not intended for them.
The result is an increase in the volume of network traffic through the links of WMNs. Additionally, the dense collection of multiple RF signals propagating through a shared wireless medium contributes to the situation where the links become saturated at levels below their capacity. The need exists to examine methods that will improve the utilisation of the shared wireless medium in WMNs.
Network Coding is a coding and decoding technique at the network level of the OSI stack, aimed at improving the effective throughput of saturated links. The technique allows the bandwidth to be shared simultaneously amongst separate message flows by combining these flows at common intermediate nodes. Network Coding decreases the number of transmissions needed to convey information through the network. The result is an improvement in the aggregated throughput.
The research approach followed in this dissertation includes the development of a model that investigates the aggregated throughput performance of WMNs. The scenario of the model followed a typical example of indoor WMN implementations. Therefore, the physical environment representation of the network elements included an indoor log–distance path loss channel model to account for effects such as power absorption through walls and shadowing.
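As an illustration of such a channel model, the sketch below computes a generic log-distance path loss with a per-wall attenuation term and log-normal shadowing (textbook form with made-up parameter values; the exponent, wall loss and shadowing deviation are not the calibration used in the dissertation).

    import math, random

    def path_loss_db(d, n=3.0, pl_d0=40.0, d0=1.0, walls=0, wall_loss_db=5.0, sigma_db=4.0):
        # PL(d) = PL(d0) + 10 n log10(d / d0) + wall penetration losses + X_sigma
        shadowing = random.gauss(0.0, sigma_db)  # log-normal shadowing, expressed in dB
        return pl_d0 + 10.0 * n * math.log10(d / d0) + walls * wall_loss_db + shadowing

    # received power for a 20 dBm transmitter 15 m away, through two walls
    tx_dbm = 20.0
    print(tx_dbm - path_loss_db(15.0, walls=2))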
Network functionality in the model was represented through a network flow programming problem.
The problem was concerned with determining the optimal amount of flow represented through the
links of the WMN, subject to constraints pertaining to the link capacities and mass balance at each
node. The functional requirements of the model stated that multiple concurrent sessions were to
be represented. This condition implied that the network flow problem had to be a multi–commodity
network flow problem.
Additionally, the model requirements stated that each session of flow should remain on a single path.
This condition implied that the network flow problem had to be an integer programming problem.
Therefore, the network flow programming problem of the model was considered mathematically equivalent to a multi–commodity integer programming problem. The complexity of multi–commodity integer programming problems is NP–hard. A heuristic solution method, Simulated Annealing, was implemented to solve the objective function represented by the network flow programming problem of the model.
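A minimal sketch of how Simulated Annealing can search over per-session path choices is given below (illustrative only: the candidate paths, the soft-capacity cost function and the cooling schedule are assumptions, not the dissertation's implementation).

    import math, random

    def link_loads(assignment, demand):
        loads = {}
        for session, path in assignment.items():
            for link in path:
                loads[link] = loads.get(link, 0.0) + demand[session]
        return loads

    def cost(assignment, demand, capacity):
        # penalise flow in excess of link capacity (infeasibility as a soft cost)
        return sum(max(0.0, load - capacity[link])
                   for link, load in link_loads(assignment, demand).items())

    def anneal(paths, demand, capacity, t0=10.0, cooling=0.95, steps=2000):
        assignment = {s: random.choice(p) for s, p in paths.items()}
        best, t = dict(assignment), t0
        for _ in range(steps):
            s = random.choice(list(paths))           # re-route one session
            trial = dict(assignment)
            trial[s] = random.choice(paths[s])
            delta = cost(trial, demand, capacity) - cost(assignment, demand, capacity)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                assignment = trial
                if cost(assignment, demand, capacity) < cost(best, demand, capacity):
                    best = dict(assignment)
            t *= cooling
        return best

    # toy instance: each path is a tuple of link names, with unit-demand sessions
    paths = {"s1": [("a", "b"), ("a", "c")], "s2": [("a", "b"), ("d",)]}
    demand = {"s1": 1.0, "s2": 1.0}
    capacity = {"a": 1.5, "b": 1.0, "c": 1.0, "d": 1.0}
    print(anneal(paths, demand, capacity))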
The findings from this research provide evidence that the implementation of Network Coding in WMNs nearly doubles the level of the calculated aggregated throughput values. The magnitude of this throughput increase can be further improved by additional manipulation of the network traffic dispersion. This is achieved by utilising link–state methods, rather than distance vector methods, to establish paths for the sessions of flow present in the WMNs. / Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2012.
|
850 |
System-level design of power efficient FSMD architectures / Agarwal, Nainesh 06 May 2009 (has links)
Power dissipation in CMOS circuits is of growing concern as the computational requirements of portable, battery-operated devices increase. The ability to easily develop application-specific circuits, rather than program general-purpose architectures, can provide tremendous power savings. To this end, we present a design platform for rapidly developing power-efficient hardware architectures starting at the system level. This high-level VLSI design platform, called CoDeL, allows hardware description at the algorithm level, and thus dramatically reduces design time and power dissipation. We compare the CoDeL platform to a modern DSP and find that the CoDeL platform produces designs with somewhat slower run times but dramatically lower power dissipation.
The CoDeL compiler produces an FSMD (Finite State Machine with Datapath) implementation of the circuit. This regular structure can be exploited to further reduce power through various techniques.
To reduce dynamic power dissipation in the resulting architecture, the CoDeL compiler automatically inserts clock gating for registers. Power analysis shows that CoDeL's automated, high-level clock gating provides considerably more power savings than existing automated clock gating tools.
To reduce static power, we use the CoDeL platform to analyze the potential and performance impact of power gating individual registers. We propose a static gating method, with very low area overhead, which uses the information available to the CoDeL compiler to predict, at compile time, when the registers can be powered off and powered on. Static branch prediction is used to more intelligently traverse the finite state machine description of the circuit to discover gating opportunities. Using simulation and estimation, we find that CoDeL with backward branch prediction gives the best overall combination of gating potential and performance. Compared to a dynamic time-based technique, this method gives dramatically more power savings, without any additional performance loss.
Finally, we propose techniques to efficiently partition an FSMD using Integer Linear Programming and a simulated annealing approach. The FSMD is split into two or more simpler communicating processors. These separate processors can then be clock gated or power gated to achieve considerable power savings, since only one processor is active at any given time. Implementation and estimation show that significant power savings can be expected when the original machine is partitioned into two or more submachines.
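One generic way to pose such a partitioning problem as an integer linear program is sketched below (an illustrative, textbook-style two-way formulation, not the thesis's actual model; the variables, weights and balance bounds are introduced here for illustration). With binary variables x_s assigning each state s to one of two submachines and w_{st} the observed transition frequency between states s and t:

    \min \sum_{(s,t)} w_{st}\, z_{st}
    \text{s.t.}\quad z_{st} \ge x_s - x_t, \qquad z_{st} \ge x_t - x_s \quad \forall (s,t),
    \qquad L \le \sum_{s} x_s \le U, \qquad x_s \in \{0,1\}, \quad z_{st} \ge 0.

Here z_{st} linearises |x_s - x_t| and therefore counts the cross-partition transitions being minimised, while L and U bound the size of each submachine; when the exact ILP becomes too large, simulated annealing can search the same objective by flipping state assignments.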
|