781

Optimisation of a hollow fibre membrane bioreactor for water reuse

Verrecht, Bart January 2010 (has links)
Over the last two decades, implementation of membrane bioreactors (MBRs) has increased due to their superior effluent quality and low plant footprint. However, they are still viewed as a high-cost option with regard to both capital and operating expenditure (capex and opex). The present thesis extends the understanding of how design and operational parameters of membrane bioreactors affect energy demand, and ultimately whole-life cost. A simple heuristic aeration model, based on a general algorithm relating flux to aeration, shows the benefits of adjusting the membrane aeration intensity to the hydraulic load. It is experimentally demonstrated that sustainable operation requires a lower aeration demand under intermittent 10:30 aeration than under continuous aeration, with associated energy savings of up to 75% and no penalty in fouling rate. The applicability of activated sludge modelling (ASM) to MBRs is verified on a community-scale MBR, resulting in accurate predictions of the dynamic nutrient profile. Lastly, a methodology is proposed to optimise energy consumption by linking the biological model with empirical correlations for energy demand, taking into account the impact of high MLSS concentrations on oxygen transfer. The determining factors for costing of MBRs differ significantly with plant size. Operational cost reduction in small MBRs relies on process robustness with minimal manual intervention to suppress labour costs, whereas energy consumption, mainly for aeration, is the major contributor to opex for a large MBR. A cost sensitivity analysis shows that the other main factors influencing the cost of a large MBR, in terms of both capex and opex, are membrane costs and replacement interval, future trends in energy prices, sustainable flux, and the average plant utilisation, which depends on the contingency built in to cope with changes in the feed flow.
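Read as an on:off ratio of 10 s on to 30 s off, the intermittent scheme runs the membrane blowers only a quarter of the time, which is where the up-to-75% figure comes from. A minimal sketch of that arithmetic (the blower rating and operating hours are assumed for illustration, not taken from the thesis):

    # Illustrative energy comparison for intermittent vs. continuous
    # membrane aeration (hypothetical figures; not the thesis model).

    def aeration_energy(blower_kw: float, hours: float, t_on: float, t_off: float) -> float:
        """Energy (kWh) for a blower cycled t_on seconds on, t_off seconds off."""
        duty_cycle = t_on / (t_on + t_off)
        return blower_kw * hours * duty_cycle

    blower_kw = 15.0   # assumed blower power draw, kW
    hours = 24.0       # one day of operation

    continuous = aeration_energy(blower_kw, hours, t_on=1.0, t_off=0.0)
    intermittent = aeration_energy(blower_kw, hours, t_on=10.0, t_off=30.0)

    saving = 1.0 - intermittent / continuous
    print(f"continuous:   {continuous:.0f} kWh/day")
    print(f"10:30 cycled: {intermittent:.0f} kWh/day ({saving:.0%} saving)")
    # -> 75% saving, matching the upper bound reported above.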
782

Flow and transport modeling in large river networks

Tavakoly Zadeh, Ahmad A. 17 September 2014 (has links)
The work presented in this dissertation discusses large-scale flow and transport in river networks and investigates the advantages and disadvantages of grid-based and vector-based river networks. This research uses the Mississippi River basin as a continental case study, and the Guadalupe and San Antonio rivers and the Seine basin in France as regional case studies. The first component of this research extends regional river flow modeling to the continental scale using high-resolution river data from the NHDPlus dataset, and identifies the obstacles to flow computation for a river network with hundreds of thousands of river segments at continental scale. An upscaling process is developed on the vector-based river network to decrease the computational effort and reduce input file size. This research identifies drainage area as a key factor in the flow simulation, especially in a wet climate. The second component presents an enhanced GIS framework for steady-state riverine nitrogen transport modeling in the San Antonio and Guadalupe river network. Results show that the GIS framework can represent the spatial distribution of flow and total nitrogen in a large river network with thousands of connected river segments; however, the temporal features of the GIS environment limit its applicability to large-scale time-varying modeling. The third component presents regional flow and transport modeling with consideration of stream-aquifer interactions at high resolution. The coupled STICS-Eau-Dyssée system is implemented for the entire Seine basin to compute daily nitrate flux in the Seine grid river network. Results show that river-aquifer exchange has a significant impact on river flow and transport modeling in large river networks.
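As a toy illustration of vector-based routing over explicit segment connectivity (the kind of data structure NHDPlus-style networks provide), consider the sketch below. The network, lateral inflows and the linear attenuation factor are all assumptions for illustration; the dissertation's actual routing model is far richer:

    # Minimal sketch of flow routing over a vector-based river network,
    # assuming segment ids are topologically sorted (upstream before
    # downstream). Illustrative only; real continental-scale models are
    # far richer.

    # network: segment id -> list of upstream segment ids
    network = {1: [], 2: [], 3: [1, 2], 4: [3]}
    lateral_inflow = {1: 5.0, 2: 3.0, 3: 1.0, 4: 0.5}  # m^3/s, assumed
    k = 0.7                                            # linear attenuation factor, assumed

    outflow = {}
    for seg in sorted(network):          # topological order by construction
        upstream = sum(outflow[u] for u in network[seg])
        outflow[seg] = k * (upstream + lateral_inflow[seg])

    for seg, q in outflow.items():
        print(f"segment {seg}: {q:.2f} m^3/s")

The point of the vector (rather than grid) representation is visible even here: flow accumulates by walking the explicit connectivity table, so the cost scales with the number of segments rather than the number of grid cells.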
783

Large-scale streaks in wall-bounded turbulent flows: amplification, instability, self-sustaining process and control

Hwang, Yongyun 17 December 2010 (has links)
Wall-bounded turbulent flows such as plane Couette flow, channel and pipe flows, and boundary layers are fundamental problems of interest encountered in many scientific and engineering situations. The goal of the present thesis is to investigate the origin of the large-scale streaky motions observed in wall-bounded turbulent flows. Under the hypothesis that these large-scale streaky motions are sustained by a process similar to the well-known near-wall self-sustaining cycle, the present thesis pursues four separate subjects: (i) non-modal amplification of streaks, (ii) the secondary instability of finite-amplitude streaks, (iii) the existence of a self-sustaining process at large scale, and (iv) turbulent skin-friction reduction by forcing streaks. First, using a linear model with the turbulent mean flow and the related eddy viscosity, it is shown that streaks are strongly amplified by harmonic and stochastic forcing. The amplified streaks undergo a secondary instability, which is associated with the formation of large-scale motions (bulges). The existence of a self-sustaining process involving the amplification and instability of streaks at large scale is proved by quenching the smaller-scale energy-carrying eddies in the near-wall and logarithmic regions. Finally, it is shown that artificially forcing large-scale streaks reduces the turbulent skin friction by up to 10% by attenuating the near-wall streamwise vortices.
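Non-modal amplification of this kind is typically quantified by the response of a stable but non-normal linear operator to harmonic forcing. The toy sketch below (an assumed 2x2 operator, not the eddy-viscosity model of the thesis) shows the mechanism: large energy gains despite every eigenvalue being stable:

    # Toy sketch of non-modal amplification: largest harmonic response of
    # a stable but non-normal linear system dx/dt = A x + f exp(i w t).
    # The operator is illustrative, not the turbulent-mean-flow model above.
    import numpy as np

    A = np.array([[-0.01, 1.0],    # off-diagonal coupling makes A non-normal
                  [ 0.0, -0.02]])

    gains = []
    for w in np.linspace(-0.2, 0.2, 401):
        resolvent = np.linalg.inv(1j * w * np.eye(2) - A)
        gains.append(np.linalg.norm(resolvent, 2))  # largest singular value

    print(f"max energy amplification: {max(gains)**2:.2e}")
    # Both eigenvalues of A are stable, yet the harmonic response is huge --
    # the linear lift-up-type mechanism behind strongly amplified streaks.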
784

Scalable analysis of stochastic process algebra models

Tribastone, Mirco January 2010 (has links)
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components that constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. The computation of performance indices such as throughput, utilisation, and average response time is interpreted deterministically as a function of the ODE solution and related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and the accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements of the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach presented herein, and a graphical framework for model editing and visualisation of performance evaluation results.
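The fluid-limit idea can be illustrated on a toy replicated client-server system (a sketch of the general approach, with assumed rates, not actual PEPA syntax or semantics): population counts evolve by ODEs whose dimension is independent of how many replicas there are.

    # Minimal sketch of the fluid (ODE) view of a replicated client-server
    # system, in the spirit of the deterministic semantics described above.
    # Model structure and rates are illustrative assumptions.
    from scipy.integrate import solve_ivp

    r_think, r_serve, r_log = 1.0, 2.0, 4.0   # assumed action rates
    n_clients, n_servers = 1000, 100          # ODE size does not grow with these

    def fluid(t, y):
        c_think, c_req, s_idle, s_busy = y
        think = r_think * c_think
        serve = r_serve * min(c_req, s_idle)  # shared action: bounded by both sides
        log = r_log * s_busy
        return [serve - think, think - serve, log - serve, serve - log]

    sol = solve_ivp(fluid, (0.0, 50.0), [n_clients, 0.0, n_servers, 0.0], max_step=0.05)
    print(f"waiting clients at equilibrium: {sol.y[1, -1]:.0f}")
    print(f"server utilisation: {sol.y[3, -1] / n_servers:.0%}")

Doubling the populations changes only the initial condition, never the number of equations, which is the essence of the scalability argument made above.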
785

A nano-CMOS based universal voltage level converter for multi-VDD SoCs.

Vadlmudi, Tripurasuparna 05 1900 (has links)
Power dissipation of integrated circuits is the most demanding issue for very large scale integration (VLSI) design engineers, especially for portable and mobile applications. The use of multiple-supply-voltage systems, which employ level converters between voltage islands, is one of the most effective ways to reduce power consumption. In this thesis work, a unique level converter known as the universal level converter (ULC), capable of four distinct level-converting operations, is proposed. The schematic and layout of the ULC are built and simulated using Cadence. The ULC is characterized through three analyses, parametric, power, and load analysis, which show that the design reduces average power consumption by about 85-97% and produces a stable output at voltages as low as 0.45 V, even under varying load conditions.
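The motivation for multi-VDD design is the textbook scaling of dynamic power with the square of the supply voltage. A quick illustrative calculation (the block split and voltages are assumptions for the sketch, not the thesis figures):

    # Why multi-VDD helps: dynamic power scales as C * V^2 * f (textbook
    # relation). The numbers below are illustrative, not thesis results.

    def dynamic_power(c_eff_f: float, vdd: float) -> float:
        """Dynamic power, arbitrary units: effective C*f times VDD squared."""
        return c_eff_f * vdd ** 2

    nominal = dynamic_power(1.0, 1.2)       # whole chip at 1.2 V
    # Assume 70% of the logic is off the critical path and can run at
    # 0.8 V, with a level converter bridging the two voltage islands:
    multi_vdd = dynamic_power(0.3, 1.2) + dynamic_power(0.7, 0.8)

    print(f"power saving: {1 - multi_vdd / nominal:.0%}")   # ~39% here

The level converter is what makes this split safe: signals crossing from the 0.8 V island into the 1.2 V island must be restored to full swing, or the receiving gates leak heavily.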
786

Influence of Antarctic oscillation on intraseasonal variability of large-scale circulations over the Western North Pacific

Burton, Kenneth R., Jr. 03 1900 (has links)
Approved for public release, distribution is unlimited / This study examines Southern Hemisphere mid-latitude wave variations connected to the Antarctic Oscillation (AAO) to establish connections with the 15- to 25-day wave activity in the western North Pacific monsoon trough region. The AAO index, defined from the leading empirical orthogonal function of 700 hPa height anomalies, led to seven distinct circulation patterns that vary in conjunction with the 15- to 25-day monsoon trough mode. For nearly half of the significant events, the onset of 15- to 25-day monsoon trough convective activity coincided with a peak negative AAO index, and the peak in monsoon trough convection coincided with a peak positive index. The remaining events occur either when the AAO is not varying significantly or when the AAO-related Southern Hemisphere mid-latitude circulations do not match the 15- to 25-day transitions. When a significant connection occurs between the Southern Hemisphere mid-latitude circulations related to the AAO and the 15- to 25-day wave activity in the western North Pacific monsoon trough, the mechanism is equatorward Rossby-wave dispersion. When the wave energy flux in the Southern Hemisphere is directed zonally, no connection is established between the AAO and the alternating periods of enhanced and reduced convection in the western North Pacific monsoon trough. / Captain, United States Air Force
787

Secondary large-scale index theory and positive scalar curvature

Zeidler, Rudolf 24 August 2016 (has links)
No description available.
788

Data dissemination in large-cardinality social graphs

Maryokhin, Tymur January 2015 (has links)
Near real-time event streams are a key feature of many popular social media applications. These applications allow users to selectively follow event streams and receive a curated list of real-time events from various sources. Owing to the emphasis on recency, relevance and personalization of content, and the highly variable cardinality of social subgraphs, it is extremely difficult to implement feed following at the scale of major social media applications. This has led to multiple architectural approaches, with no consensus on what constitutes an idiomatic solution. Various theoretical approaches exploit the dynamic nature of social graphs, but not all of them have been applied in practice. In this paper, large-cardinality graphs are placed in the context of existing research to highlight the exceptional data management challenges posed by large-scale real-time social media applications. This work outlines the key characteristics of data dissemination in large-cardinality social graphs and surveys existing research and state-of-the-art approaches in industry, with the goal of stimulating further research in this direction.
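One family of industry approaches to the feed-following problem can be sketched concretely: a hybrid push/pull scheme that materialises feeds at write time for low-fan-out producers and defers high-cardinality ("celebrity") producers to read time. The threshold, storage layout and names below are illustrative assumptions, not a description of any particular system:

    # Simplified sketch of hybrid push/pull feed dissemination: events from
    # low-fan-out producers are pushed to follower inboxes at write time,
    # while events from high-cardinality producers are pulled at read time.
    from collections import defaultdict

    FANOUT_THRESHOLD = 10_000

    followers = defaultdict(set)   # producer -> set of follower ids
    inboxes = defaultdict(list)    # follower id -> events pushed to them
    outboxes = defaultdict(list)   # producer -> that producer's recent events

    def publish(producer: str, event: str) -> None:
        outboxes[producer].append(event)
        if len(followers[producer]) < FANOUT_THRESHOLD:
            for f in followers[producer]:      # push: O(fan-out) writes
                inboxes[f].append(event)

    def read_feed(user: str, follows: list[str]) -> list[str]:
        feed = list(inboxes[user])             # cheap pre-materialised part
        for producer in follows:               # pull from high-cardinality producers
            if len(followers[producer]) >= FANOUT_THRESHOLD:
                feed.extend(outboxes[producer])
        return feed

The trade-off this encodes is exactly the cardinality problem discussed above: pure push makes a celebrity's post cost millions of writes, while pure pull makes every feed read expensive; the threshold splits the graph so neither extreme dominates.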
789

Large-scale and high-quality multi-view stereo

Vu, Hoang Hiep 05 December 2011 (has links)
Acquisition of 3D models of real objects and scenes is indispensable in many practical applications, such as digital archiving, the game and entertainment industries, engineering, and advertising. There are two main methods for 3D acquisition: laser-based reconstruction (active) and image-based reconstruction from multiple photographs of the scene taken from different viewpoints (passive). While laser-based reconstruction achieves high accuracy, it is complex, expensive and difficult to set up for large-scale outdoor reconstruction. Image-based, or multi-view stereo, methods are more versatile, easier, faster and cheaper. When this thesis began, most multi-view methods could handle only low-resolution images under controlled conditions. This thesis targets multi-view stereo at both large scale and high accuracy. We significantly improve on previous methods and combine them into an effective GPU-accelerated multi-view pipeline. From high-resolution images, we produce highly complete and accurate meshes that achieve the best scores on many internationally recognized benchmarks. Aiming at even larger scale, we develop divide-and-conquer approaches to reconstruct many small parts of a big scene and, to combine the separate partial results, we create a new merging method that can automatically and quickly fuse hundreds of meshes. With these components, we successfully reconstruct highly accurate watertight meshes of cities and historical monuments from large collections of high-resolution images (around 1600 images of 5 megapixels each).
790

Higher-order two-point cumulant moments of cosmological fields: theoretical properties and applications.

Bel, Julien 04 December 2012 (has links)
The philosophy of this thesis is that our best chance of finding and characterizing the essential ingredients of a well-grounded cosmological model lies in enlarging the arsenal of methods with which we can hunt for new physics. While it is of paramount importance to continue to refine, de-bias and strengthen the very testing strategies that contributed to establishing the concordance model, it is also crucial to challenge, with new methods, all sectors of the current cosmological paradigm. This thesis therefore takes up the challenge of developing new, high-performance cosmic probes that aim to optimize the scientific output of future large redshift surveys. The goal is twofold. On the theoretical side, I aim to develop new testing strategies that are minimally (if at all) affected by astrophysical uncertainties or by incomplete phenomenological models; this will make cosmological interpretations easier and safer. On the observational side, the goal is to gauge the performance of the proposed strategies using current, state-of-the-art redshift data, and to demonstrate their potential for future large cosmological missions such as BigBOSS and EUCLID.
