401

Lookaside Load Balancing in a Service Mesh Environment / Extern Lastbalansering i en Service Mesh Miljö

Johansson, Erik January 2020 (has links)
As more online services are migrated from monolithic systems to decoupled, distributed microservices, the need for efficient internal load balancing solutions increases. Today, there exist two main approaches for load balancing internal traffic between microservices. One approach uses either a central or a sidecar proxy to load balance queries over all available server endpoints. The other approach lets clients themselves decide which of the available endpoints to send queries to. This study investigates a new approach called lookaside load balancing. This approach consists of a load balancer that uses the control plane to gather a list of service endpoints and their current load. The load balancer can then dynamically provide clients with a subset of suitable endpoints to connect to directly. The endpoint distribution is controlled by a lookaside load balancing algorithm. This study presents such an algorithm, which works by changing the endpoint assignment in order to keep the current load between an upper and a lower bound. In order to compare these three load balancing approaches, a test environment is constructed in Kubernetes and modeled to resemble a real service mesh. With this test environment, we perform four experiments. The first experiment aims at finding suitable settings for the lookaside load balancing algorithm as well as a baseline load configuration for clients and servers. The second experiment evaluates the underlying network infrastructure to test for possible bias in latency measurements. The final two experiments evaluate each load balancing approach in both high- and low-load scenarios. Results show that lookaside load balancing can achieve performance similar to client-side load balancing in terms of latency and load distribution, but with a smaller CPU and memory footprint. When load is high and uneven, or when compute resource usage should be minimized, the centralized proxy approach is better.
With regard to traffic flow control and failure resilience, we show that lookaside load balancing is better than client-side load balancing. We conclude that lookaside load balancing can be an alternative to client-side load balancing, as well as to proxy load balancing, in some scenarios.
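The bound-keeping reassignment described in this abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names, the load thresholds, and the uniform per-client load share are all assumptions made for the example.

```python
def rebalance(endpoints, assignments, lower=0.3, upper=0.7):
    """Reassign clients so that every endpoint's load tends toward [lower, upper].

    endpoints:   dict endpoint -> current load, normalized to [0, 1]
    assignments: dict client -> endpoint
    Returns an updated client -> endpoint mapping (illustrative sketch only).
    """
    new_assignments = dict(assignments)
    overloaded = [e for e, load in endpoints.items() if load > upper]
    underloaded = sorted((e for e, load in endpoints.items() if load < lower),
                         key=lambda e: endpoints[e])
    for hot in overloaded:
        # Move clients one at a time from a hot endpoint to the coldest one.
        victims = [c for c, e in new_assignments.items() if e == hot]
        while victims and underloaded and endpoints[hot] > upper:
            client = victims.pop()
            cold = underloaded[0]
            new_assignments[client] = cold
            # Assumption: each client contributes one uniform share of load.
            share = 1.0 / max(len(assignments), 1)
            endpoints[hot] -= share
            endpoints[cold] += share
            if endpoints[cold] >= lower:
                underloaded.pop(0)
    return new_assignments
```

Because the lookaside balancer only hands out endpoint subsets and never sits on the data path, a loop like this runs in the control plane while clients keep their direct connections.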
402

A study of gas lift on oil/water flow in vertical risers

Brini Ahmed, Salem Kalifa January 2014 (has links)
Gas lift is a means of enhancing oil recovery from hydrocarbon reservoirs. Gas injected at the production riser base reduces the gravity component of the pressure drop and thereby increases the supply of oil from the reservoir. Gas injection at the base of a riser also helps to mitigate slugging, thus improving the performance of the topside facility. In order to improve the efficiency of the gas lifting technique, a good understanding of the characteristics of gas-liquid multiphase flow in vertical pipes is very important. In this study, experiments on gas/liquid (air/water) two-phase flows, liquid/liquid (oil/water) two-phase flows and gas/liquid/liquid (air/oil/water) three-phase flows were conducted in a 10.5 m high, 52 mm ID vertical riser. These experiments were performed at liquid and gas superficial velocities ranging from 0.25 to 2 m/s and from ~0.1 to ~6.30 m/s, respectively. Dielectric oil and tap water were used as test fluids. A Coriolis mass flow meter, a single-beam gamma densitometer and a wire-mesh sensor (WMS) were employed for investigating the flow characteristics. For the gas/liquid (air/water) two-phase flow experiments, bubbly, slug and churn flow regimes, as well as transition regions, were identified under the experimental conditions. For flow pattern identification and void fraction measurements, the capacitance WMS results are consistent with those obtained simultaneously by the gamma densitometer. Generally, the total pressure gradient along the vertical riser decreased significantly as the injected gas superficial velocity increased. In addition, the rate of decrease in total pressure gradient at the lower injected gas superficial velocities was found to be higher than that at higher gas superficial velocities. The frictional pressure gradient was also found to increase as the injected gas superficial velocity increased.
For the oil-water experiments, mixture density and total pressure gradient across the riser were found to increase with increasing water cut (ranging from 0 to 100%) and/or mixture superficial velocity. Phase slip between the oil and water was calculated and found to be significant at the lower throughputs of 0.25 and 0.5 m/s. The phase inversion point always occurred at an input water cut of 42% when the experiments ran from pure oil to water, and at an input water cut of 45% when they ran from water to pure oil. The phase inversion point was accompanied by a peak in pressure gradient, particularly at the higher oil-water mixture superficial velocities of 1, 1.5 and 2 m/s. The effects of air injection rates on the fluid flow characteristics were studied by examining the total pressure gradient behaviour and identifying the flow pattern from the output signals of the gamma densitometer and the WMS in the air/oil/water experiments. Generally, riser base gas injection does not affect the water cut at the phase inversion point. However, a slight forward shift of the identified phase inversion point was found at the highest injected gas flow rates, where the flow patterns were identified as churn to annular flow. In terms of pressure gradient, the gas lifting efficiency (lowering the pressure gradient) shows greater improvement after the phase inversion point (higher water cuts) than at or before it. It was also found that the measured mean void fraction reaches its lowest value at the phase inversion point. These void fraction results were found to be consistent with previously published results.
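The mechanism by which gas injection reduces the gravity component of the pressure drop can be illustrated with a homogeneous (no-slip) mixture model. This is a sketch only: the fluid property values and the no-slip assumption are illustrative, not taken from the thesis, which explicitly measured significant phase slip at low throughputs.

```python
G = 9.81  # gravitational acceleration, m/s^2

def mixture_density(void_fraction, water_cut, rho_gas=1.2,
                    rho_oil=850.0, rho_water=1000.0):
    """Homogeneous no-slip mixture density in kg/m^3.

    Property values (air at ambient conditions, a light dielectric oil,
    tap water) are assumptions for the example.
    """
    rho_liquid = water_cut * rho_water + (1.0 - water_cut) * rho_oil
    return void_fraction * rho_gas + (1.0 - void_fraction) * rho_liquid

def gravity_pressure_gradient(void_fraction, water_cut):
    """Gravity component of the pressure gradient, Pa/m, in a vertical riser."""
    return mixture_density(void_fraction, water_cut) * G
```

Increasing the void fraction (more injected gas) lowers the mixture density and hence the hydrostatic head, which is the lifting effect; the frictional component, which the experiments show increases with gas velocity, is deliberately left out of this sketch.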
403

Direct numerical simulation of gas transfer at the air-water interface in a buoyant-convective flow environment

Kubrak, Boris January 2014 (has links)
The gas transfer process across the air-water interface in a buoyant-convective environment has been investigated by Direct Numerical Simulation (DNS) to gain improved understanding of the mechanisms that control the process. The process is controlled by a combination of molecular diffusion and turbulent transport by natural convection. The convection that arises when a water surface is cooled is a combination of Rayleigh-Bénard convection and the Rayleigh-Taylor instability. It is therefore necessary to accurately resolve the flow field as well as the molecular diffusion and the turbulent transport, both of which contribute to the total flux. One of the challenges from a numerical point of view is handling the very different levels of diffusion when solving the convection-diffusion equation. The temperature diffusion in water is relatively high, whereas the molecular diffusion for most environmentally important gases is very low. This low molecular diffusion leads to steep gradients in the gas concentration, especially near the interface. Resolving the steep gradients is the limiting factor for an accurate resolution of the gas concentration field. Therefore, a detailed study has been carried out to find the limits of an accurate resolution of the transport of a low-diffusivity scalar. This problem of diffusive scalar transport was studied in numerous 1D, 2D and 3D numerical simulations. A fifth-order weighted essentially non-oscillatory (WENO) scheme was deployed to solve the convection of the scalars, in this case gas concentration and temperature. The WENO scheme was modified and tested in 1D scalar transport to work on non-uniform meshes. To solve the 2D and 3D velocity field, the incompressible Navier-Stokes equations were solved on a staggered mesh. The convective terms were solved using a fourth-order accurate kinetic-energy-conserving discretization.
The diffusive terms were discretized using a fourth-order central finite difference method for the second derivative. For the time integration of the velocity field, a second-order Adams-Bashforth method was employed. The Boussinesq approximation was employed to model the buoyancy due to temperature differences in the water, assuming a linear relationship between temperature and density. A mesh sensitivity study found that the velocity field is fully resolved on a relatively coarse mesh, as the level of turbulence is relatively low. However, a finer mesh is required for the gas concentration field to fully capture the steep gradients that occur because of its low diffusivity. A combined dual meshing approach was used, in which the velocity field was solved on a coarser mesh and the scalar fields (gas concentration and temperature) were solved on an overlaying finer sub-mesh. The velocities were interpolated onto the finer sub-mesh by a second-order method. A mesh sensitivity study identified the minimum mesh size required for an accurate solution of the scalar field over a range of Schmidt numbers from Sc = 20 to Sc = 500. Initially, the Rayleigh-Bénard convection leads to very fine plumes of cold, high-gas-concentration liquid that penetrate the deeper regions. High-concentration areas remain in fine tubes that are fed from the surface. The temperature, however, diffuses much more strongly and rapidly over time, and the results show that temperature alone is not a good identifier for detailed high-concentration areas when the gas transfer is investigated experimentally. At large timescales the temperature field becomes much more homogeneous, whereas the concentration field stays more heterogeneous. However, the temperature can be used to estimate the overall transfer velocity KL. If the temperature behaves like a passive scalar, a relation between the Schmidt or Prandtl number and KL is evident.
A qualitative comparison of the numerical results from this work to existing experiments was also carried out. Laser Induced Fluorescence (LIF) images of the oxygen concentration field and Schlieren photography were compared to the results from the 3D simulations and found to be in good agreement. A detailed quantitative analysis of the process was carried out. A study of the horizontally averaged convective and diffusive mass flux enabled the calculation of the transfer velocity KL at the interface. With KL known, the renewal rate r for the so-called surface renewal model could be determined. The renewal rates were found to be higher than in experiments in a grid-stirred tank. The horizontally averaged mean and fluctuating concentration profiles were analysed, and from these the boundary layer thickness could be accurately monitored over time. Much of the new DNS data obtained in this research would be inaccessible in experiments and reveals previously unknown details of the gas transfer at the air-water interface.
404

Aerosol Transport Simulations in Indoor and Outdoor Environments using Computational Fluid Dynamics (CFD)

Landázuri, Andrea Carolina January 2016 (has links)
This dissertation focuses on aerosol transport modeling in occupational environments and mining sites in Arizona using computational fluid dynamics (CFD). The impacts of human exposure in both environments are explored, with emphasis on turbulence, wind speed, wind direction and particle sizes. The final emissions simulations involved digitalizing available elevation contour plots of one of the mining sites to account for realistic topographical features. The digital elevation map (DEM) of one of the sites was imported into COMSOL MULTIPHYSICS® for subsequent turbulence and particle simulations. Simulation results that include realistic topography show considerable deviations of wind direction. Inter-element correlations of metal and metalloid size-resolved concentration data, collected with a Micro-Orifice Uniform Deposit Impactor (MOUDI) under given wind speeds and directions, provided guidance on groups of metals that coexist throughout mining activities. The groups Fe-Mg, Cr-Fe, Al-Sc, Sc-Fe and Mg-Al are strongly correlated for unrestricted wind directions and speeds, suggesting that the source may be of soil origin (e.g., ore and tailings); groups of elements where Cu is present, in the coarse fraction range, may come from mechanical mining activities and the saltation phenomenon. In addition, MOUDI data at low wind speeds (<2 m/s) and at night showed a strong correlation for particles 1 micrometer in diameter between the groups Sc-Be-Mg, Cr-Al, Cu-Mn, Cd-Pb-Be, Cd-Cr, Cu-Pb, Pb-Cd and As-Cd-Pb. The As-Cd-Pb group correlates strongly in almost all particle size ranges. When restricted low wind speeds were imposed, more groups of elements became evident, which may be explained by the fact that at lower speeds particles are more likely to settle.
Linking these results with the CFD simulations and Pb-isotope results leads to the conclusion that the elements found in association with Pb in the fine fraction originate from the ore that is subsequently processed at the smelter site, whereas the elements associated with Pb in the coarse fraction are of different origin. CFD simulation results not only provide realistic and quantifiable information on potential deleterious effects; the application of CFD also represents an important contribution to current dispersion modeling studies. Computational fluid dynamics can therefore be used as a source apportionment tool to identify areas that affect specific sampling points and susceptible regions under certain meteorological conditions, and these conclusions can be supported with inter-element correlation matrices and lead isotope analysis, especially since access to the mining sites is limited. Additional results showed that grid adaption is a powerful tool: it refines specific regions that require a high level of detail and therefore better resolves the flow, provides a higher number of locations with monotonic convergence than the manual grids, and requires the least computational effort. The CFD simulations used the k-epsilon turbulence model, with the aid of the computer-aided engineering software ANSYS® and COMSOL MULTIPHYSICS®. The success of aerosol transport simulations depends on a good simulation of the turbulent flow, so considerable attention was placed on investigating and choosing the best models in terms of convergence, independence and computational effort. This dissertation also includes preliminary studies of transient discrete phase, Eulerian and species transport modeling, the importance of particle saltation, information on CFD methods, and strategies for future directions.
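The inter-element correlation analysis described above reduces to computing Pearson coefficients between size-resolved concentration series for each element pair. A minimal sketch (function names and the toy data are illustrative; the dissertation's actual matrices come from MOUDI measurements):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_matrix(samples):
    """samples: dict element -> list of concentrations (one per size bin/run).

    Returns a dict keyed by element pairs, e.g. ('Fe', 'Mg') -> r.
    """
    elements = sorted(samples)
    return {(a, b): pearson(samples[a], samples[b])
            for a in elements for b in elements}
```

Element pairs with r close to 1 across size fractions, such as the Fe-Mg and As-Cd-Pb groups reported above, are the candidates for a shared source.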
405

Wireless Rotor Data Acquisition System

Kpodzo, Elias, DiLemmo, Marc, Wang, Wearn-Juhn October 2011 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Flight test data acquisition systems have been widely deployed in helicopter certification programs for several decades. A data acquisition system uses a series of strategically placed sensors to provide the instantaneous status of the helicopter's components and structure. Until recently, however, it has been difficult to collect flight test data from helicopter rotors in motion. Traditional rotor solutions have used slip rings to electrically connect fixed and rotating mechanical elements, but slip rings are inconvenient to use, prone to wear, and notoriously unreliable.
406

A Software Framework for Prioritized Spectrum Access in Heterogeneous Cognitive Radio Networks

Yao, Yong January 2014 (has links)
Today, the radio spectrum is rarely fully utilized. This underutilization occurs across several domains, e.g., time, frequency and geographical location. To provide more efficient utilization of the radio spectrum, Cognitive Radio Networks (CRNs) have been proposed. The key idea is to open up the licensed spectrum to unlicensed users, allowing them to use the so-called spectrum opportunities as long as they do not harmfully interfere with licensed users. An important focus is a limitation of previously reported research efforts: the competition among unlicensed users for spectrum access in heterogeneous CRNs has received only limited consideration. A software framework is introduced, called the PRioritized Opportunistic spectrum Access System (PROAS). In PROAS, the heterogeneity aspects of CRNs are expressed specifically in terms of cross-layer design and various wireless technologies. By considering factors like ease of implementation and efficiency of control, PROAS provides priority-scheduling-based solutions to alleviate the competition problem of unlicensed users in heterogeneous CRNs. These solutions include theoretical models, numerical analysis and experimental simulations for performance evaluation. Using PROAS, three particular CRN models are studied, based on ad-hoc, mesh-network and cellular-network technologies. The reported results show that PROAS has the ability to bridge the gap between research results and the practical implementation of CRNs.
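The priority-scheduling idea at the heart of PROAS can be sketched as a two-class priority queue in which licensed users are always served before unlicensed ones, FIFO within each class. This is an illustrative abstraction, not the framework's actual scheduler; the class labels and function names are assumptions for the example.

```python
import heapq

LICENSED, UNLICENSED = 0, 1  # lower value means higher priority

def schedule(requests):
    """Serve channel-access requests in priority order.

    requests: list of (user, priority) in arrival order.
    The arrival index breaks ties, giving FIFO order within a class.
    """
    heap = []
    for order, (user, priority) in enumerate(requests):
        heapq.heappush(heap, (priority, order, user))
    return [user for _, _, user in
            (heapq.heappop(heap) for _ in range(len(heap)))]
```

Under this discipline an unlicensed user only accesses the spectrum when no licensed request is pending, which is the non-interference constraint the abstract describes.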
407

Coulomb breakup of halo nuclei by a time-dependent method

Capel, Pierre 29 January 2004 (has links)
Halo nuclei are among the strangest nuclear structures. They are viewed as a core containing most of the nucleons, surrounded by one or two loosely bound nucleons. These have a high probability of presence at a large distance from the core, and therefore constitute a sort of halo surrounding the other nucleons. The core, remaining almost unperturbed by the presence of the halo, is seen as a usual nucleus.

The Coulomb breakup reaction is one of the most useful tools to study these nuclei. It corresponds to the dissociation of the halo from the core during a collision with a heavy (high Z) target. In order to correctly extract information about the structure of these nuclei from experimental cross sections, an accurate theoretical description of this mechanism is necessary.

In this work, we present a theoretical method for studying the Coulomb breakup of one-nucleon halo nuclei. This method is based on a semiclassical approximation in which the projectile is assumed to follow a classical trajectory. In this approximation, the projectile is seen as evolving in a time-varying potential simulating its interaction with the target. This leads to the resolution of a time-dependent Schrödinger equation for the projectile wave function.

In our method, the halo nucleus is described with a two-body structure: a pointlike nucleon linked to a pointlike core. In the present state of our model, the interaction between the two clusters is modelled by a local potential.

The main idea of our method is to expand the projectile wave function on a three-dimensional spherical mesh. With this mesh, the representation of the time-dependent potential is fully diagonal. Furthermore, it leads to a simple representation of the Hamiltonian modelling the halo nucleus. This expansion is used to derive an accurate evolution algorithm.

With this method, we study the Coulomb breakup of three nuclei: ¹¹Be, ¹⁵C and ⁸B.

¹¹Be is the best-known one-neutron halo nucleus. Its Coulomb breakup has been extensively studied both experimentally and theoretically. Nevertheless, some uncertainty remains about its structure. The good agreement between our calculations and recent experimental data suggests that it can be seen as an s1/2 neutron loosely bound to a ¹⁰Be core in its 0⁺ ground state. However, the extraction of the corresponding spectroscopic factor has to wait for the publication of these data.

¹⁵C is a candidate one-neutron halo nucleus whose Coulomb breakup has just been studied experimentally. The results of our model are in good agreement with the preliminary experimental data. It seems therefore that ¹⁵C can be seen as a ¹⁴C core in its 0⁺ ground state surrounded by an s1/2 neutron. Our analysis suggests that the spectroscopic factor corresponding to this configuration should be slightly lower than unity.

We have also used our method to study the Coulomb breakup of the candidate one-proton halo nucleus ⁸B. Unfortunately, no quantitative agreement could be obtained between our results and the experimental data. This is mainly due to an inaccuracy in the treatment of the results of our calculations. Accordingly, no conclusion can be drawn about the pertinence of the two-body model of ⁸B before an accurate reanalysis of these results.

In the future, we plan to improve our method in two ways. The first concerns the modelling of the halo nuclei: it would be of particular interest to test other models than the simple two-body structure used up to now. The second is the extension of this semiclassical model to two-neutron halo nuclei. However, this cannot be achieved without significantly improving the time-evolution algorithm so as to reach affordable computational times.
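The semiclassical picture described above reduces the problem to a time-dependent Schrödinger equation for the core-nucleon relative motion. Schematically (the notation below is generic to this class of models and not copied from the thesis):

```latex
i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t}
  = \Bigl[ -\frac{\hbar^{2}}{2\mu}\,\Delta_{\mathbf{r}}
           + V_{cn}(\mathbf{r})
           + V_{\mathrm{proj\text{-}targ}}\bigl(\mathbf{r},\mathbf{R}(t)\bigr)
    \Bigr] \Psi(\mathbf{r},t)
```

Here μ is the core-nucleon reduced mass, V_cn is the local potential binding the halo nucleon to the core, and the last term is the time-varying projectile-target interaction evaluated along the classical trajectory R(t). Expanding Ψ on the three-dimensional spherical mesh makes this potential term fully diagonal, which is what enables the accurate evolution algorithm mentioned above.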
408

Adaptive Aggregation of Voice over IP in Wireless Mesh Networks

Dely, Peter January 2007 (has links)
When using Voice over IP (VoIP) in Wireless Mesh Networks, the overhead induced by the IEEE 802.11 PHY and MAC layers accounts for more than 80% of the channel utilization time, while the actual payload only uses 20% of the time. As a consequence, the Voice over IP capacity is very low. To increase the channel utilization efficiency and the capacity, several IP packets can be aggregated into one large packet and transmitted at once. This paper presents a new hop-by-hop IP packet aggregation scheme for Wireless Mesh Networks.

The size of the aggregated packets is a very important performance factor. Packets that are too small yield poor aggregation efficiency; packets that are too large are likely to be dropped when the channel quality is poor. Two novel distributed protocols for calculating the optimum and the maximum packet size, respectively, are described. The first protocol assesses network load by counting the arrival rate of routing protocol probe messages and constantly measuring the signal-to-noise ratio of the channel; from these, the optimum packet size for the current channel condition can be calculated. The second protocol, a simplified version of the first, measures the signal-to-noise ratio and calculates the maximum packet size.

The latter method is implemented in the ns-2 network simulator. Performance measurements with no aggregation, a fixed maximum packet size and an adaptive maximum packet size are conducted in two different topologies. Simulation results show that packet aggregation can more than double the number of supported VoIP calls in a Wireless Mesh Network. Adaptively determining the maximum packet size is especially useful when the nodes are at different distances or the channel quality is very poor. In that case, adaptive aggregation supports twice as many VoIP calls as fixed maximum packet size aggregation.
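The core of any such scheme is packing queued VoIP payloads into aggregates bounded by the (fixed or adaptively computed) maximum packet size. A minimal greedy sketch; the per-aggregate header size and the function names are assumptions for the example, not values from the paper.

```python
def aggregate(queue, max_size, header=34):
    """Greedily pack queued payloads into aggregates of at most max_size bytes.

    queue:    iterable of payloads (bytes) in arrival order
    header:   assumed fixed per-aggregate overhead in bytes (illustrative)
    Returns a list of aggregates, each a list of payloads.
    """
    aggregates = []
    current, used = [], header
    for pkt in queue:
        # Start a new aggregate when this payload would overflow the budget.
        if used + len(pkt) > max_size and current:
            aggregates.append(current)
            current, used = [], header
        current.append(pkt)
        used += len(pkt)
    if current:
        aggregates.append(current)
    return aggregates
```

With an adaptive scheme, `max_size` would be recomputed per hop from the measured signal-to-noise ratio, shrinking aggregates when the channel degrades.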
409

Convergence rates of adaptive algorithms for deterministic and stochastic differential equations

Moon, Kyoung-Sook January 2001 (has links)
No description available.
410

Tilt-up Panel Investigation

French, Anton January 2014 (has links)
The aim of this report is to investigate the ductile performance of concrete tilt-up panels reinforced with cold-drawn mesh, in order to improve the current seismic assessment procedure. The commercial impact of the project was also investigated. Engineering Advisory Group (EAG) guidelines state that a crack in a panel under face loading may be sufficient to fracture the mesh. The comments made by EAG regarding the performance of cold-drawn mesh may be interpreted as suggesting that assessment of such panels be conducted with a ductility of 1.0. Observations of tilt-up panel performance following the Christchurch earthquakes suggest that a ductility higher than μ = 1.0 is likely to be appropriate for the response of panels to out-of-plane loading. An experimental test frame was designed to subject ten tilt-up panel specimens to a cyclic quasi-static loading protocol. Rotation ductility, calculated from the force-displacement response of the test specimens, was found to range between 2.9 and 5.8. Correlation between tensile tests on 663L mesh and data collected from instrumentation during testing confirmed that the mesh behaves as un-bonded over the pitch length of 150 mm. Recommendation: based on a moment-rotation assessment approach with an un-bonded length equal to the pitch of the mesh, a rotation ductility of μ = 2.5 appears to be appropriate for the seismic assessment of panels reinforced with cold-drawn mesh.
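The role of the un-bonded length in the moment-rotation approach can be illustrated with a simple kinematic sketch: the crack opening demanded by a given rotation is spread over the un-bonded length, which sets the strain in the mesh wire. This is an illustrative model only; the geometry, units and function names are assumptions, not the report's assessment procedure.

```python
def mesh_strain(crack_rotation, bar_depth, unbonded_length=150.0):
    """Strain in a mesh wire crossing a crack (dimensionless).

    crack_rotation:  rotation at the crack, radians
    bar_depth:       lever arm from the rotation axis to the wire, mm (assumed)
    unbonded_length: length over which the wire debonds, mm; 150 mm matches
                     the pitch length reported for the 663L mesh.
    """
    opening = crack_rotation * bar_depth  # crack opening at the wire, mm
    return opening / unbonded_length
```

The sketch makes the mechanism visible: a larger un-bonded length spreads the same crack opening over more wire, lowering the strain and allowing more rotation before the cold-drawn mesh reaches its fracture strain, which is why the bonded-vs-un-bonded assumption drives the assessed ductility.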
