431 |
Hypothalamic control of energy homeostasis: impact of the maternal environment and involvement of CNTF / Couvreur, Odile 21 December 2011 (has links) (PDF)
The maintenance of energy homeostasis is controlled by cytokines acting in the central nervous system, notably in the hypothalamus. In particular, leptin, a cytokine produced by adipose tissue, decreases food intake and stimulates weight loss. Obesity is a worldwide epidemic progressing at an alarming rate, notably among children, and is often associated with severe pathologies and endocrine disorders such as leptin or insulin resistance. CNTF (Ciliary Neurotrophic Factor) is a neurocytokine of the same family as leptin; one of its main advantages is that it stimulates weight loss even in cases of leptin resistance, by activating the same signalling pathways as leptin (Benomar et al., 2009). Faced with the worldwide obesity epidemic, in adults and children alike, it appears necessary to decipher the mechanisms involved in the genesis of the disease as well as potential therapeutic agents. The first objective of this thesis was to characterize the impact of a maternal high-fat (HF) diet on the offspring's capacity to control energy homeostasis. Indeed, the concept of "metabolic programming" proposes that perturbations of the perinatal environment can durably influence the offspring, making them more susceptible to developing obesity in a nutritionally rich context. Studies conducted in our laboratory showed that a maternal HF diet can program the acquisition of leptin resistance in the offspring at adulthood (Ferezou-Viala et al., 2007b). We therefore tested the predisposition of these animals to gain weight when fed a hypercaloric (P) diet.
Surprisingly, our data showed that the maternal HF diet protected the offspring against the weight gain induced by the P diet, induced changes in the expression of energy homeostasis markers in the liver and hypothalamus, and caused profound cytoarchitectonic reorganizations in the arcuate nucleus. More precisely, the maternal HF diet was associated with a reorganization of the perivascular astrocytic coverage in the arcuate nucleus of the offspring that persisted into adulthood. In a second part of the thesis, we studied the mechanisms of action of CNTF. Our team recently showed that endogenous CNTF could play a role in the regulation of energy homeostasis. Hypothalamic levels of this cytokine, present in astrocytes and neurons of the arcuate nucleus, increase in animals that resist a hypercaloric diet, which could suggest a protective role of CNTF against weight gain in some individuals (Vacher et al., 2008). To date, however, the mechanisms of action of CNTF remain poorly understood, because CNTF lacks a signal peptide and is therefore not secreted through classical exocytosis mechanisms. Starting from the observation that CNTF and its receptor subunits are distributed similarly among the cells of the arcuate nucleus, we hypothesized that CNTF could exert an intracellular action on the cells of this structure. In this study we demonstrate that CNTF can interact directly with its receptors in the nucleus of anorexigenic neurons of the arcuate nucleus to regulate their transcriptional activity. These data thus propose a new mechanism for the anorexigenic action of CNTF.
|
432 |
Quantization of Random Processes and Related Statistical Problems / Shykula, Mykola January 2006 (has links)
In this thesis we study scalar uniform and non-uniform quantization of random processes (or signals) in an average case setting. Quantization (or discretization) of a signal is a standard task in all analog/digital devices (e.g., digital recorders, remote sensors, etc.). We evaluate the memory capacity (or quantization rate) needed for quantized process realizations by exploiting the correlation structure of the model random process. The thesis consists of an introductory survey of the subject and related theory followed by four included papers (A-D). In Paper A we develop a quantization coding method in which crossings of the quantization levels by a process realization are used for its coding. The asymptotic behavior of the mean quantization rate is investigated in terms of the correlation structure of the original process. For uniform and non-uniform quantization, we assume that the quantization cell width tends to zero and the number of quantization levels tends to infinity, respectively. In Papers B and C we focus on an additive noise model for a quantized random process. Stochastic structures of asymptotic quantization errors are derived for some bounded and unbounded non-uniform quantizers when the number of quantization levels tends to infinity. The obtained results can be applied, for instance, to optimization design problems for quantization levels. Random signals are quantized at sampling points, with further compression. In Paper D the concern is statistical inference for the run-length encoding (RLE) method, one of the compression techniques, applied to quantized stationary Gaussian sequences. This compression method is widely used, for instance, in digital signal and image processing. First, we deal with mean RLE quantization rates for various probabilistic models.
For a time series with unknown stochastic structure, we investigate asymptotic properties (e.g., asymptotic normality) of two estimates for the mean RLE quantization rate based on an observed sample when the sample size tends to infinity. These results can be used in communication theory, signal processing, coding, and compression applications. Some examples and numerical experiments demonstrating applications of the obtained results for synthetic and real data are presented.
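The pipeline studied in Paper D, quantizing a realization and then run-length encoding the level sequence, can be sketched as follows. The cell width and the toy input are invented for illustration; the empirical rate here is simply the number of runs divided by the number of samples.

```python
import math

def quantize_uniform(xs, cell):
    """Map each sample to the index of its uniform quantization cell."""
    return [math.floor(x / cell) for x in xs]

def rle(levels):
    """Run-length encode a sequence of quantization levels."""
    runs = []
    for v in levels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def mean_rle_rate(levels):
    """Empirical RLE quantization rate: runs per sample."""
    return len(rle(levels)) / len(levels)

signal = [0.1, 0.2, 0.6, 0.7, 0.65, 1.3]
levels = quantize_uniform(signal, cell=0.5)   # [0, 0, 1, 1, 1, 2]
print(rle(levels))                            # [(0, 2), (1, 3), (2, 1)]
print(mean_rle_rate(levels))                  # 0.5
```

The stronger the correlation of the underlying process, the longer the runs and the lower this rate, which is why the correlation structure governs the required memory capacity.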
|
433 |
Enabling Timing Analysis of Complex Embedded Software Systems / Kraft, Johan January 2010 (has links)
Cars, trains, trucks, telecom networks and industrial robots are examples of products relying on complex embedded software systems, running on embedded computers. Such systems may consist of millions of lines of program code developed by hundreds of engineers over many years, often decades. Over the long life-cycle of such systems, the main part of the product development costs is typically not the initial development but the software maintenance, i.e., improvements and corrections of defects, over the years. Of the maintenance costs, a major part is the verification of the system after changes have been applied, which often requires a huge amount of testing. However, today's techniques are not sufficient, as defects are often found post-release, by the customers. This area is therefore of high relevance for industry. Complex embedded systems often control machinery where timing is crucial for accuracy and safety. Such systems therefore have important timing requirements, such as maximum response times. However, when maintaining complex embedded software systems, it is difficult to predict how changes may impact the system's run-time behavior and timing, e.g., response times. Analytical and formal methods for timing analysis exist, but are often hard to apply in practice to complex embedded systems, for several reasons. As a result, the industrial practice in deciding the suitability of a proposed change, with respect to its run-time impact, is to rely on the subjective judgment of experienced developers and architects. This is a risky and inefficient trial-and-error approach, which may waste large amounts of person-hours on implementing unsuitable software designs with potential timing or performance problems. Such problems generally cannot be detected until late stages of testing, when the updated software system can be tested at system level under realistic conditions. Even then, it is easy to miss them.
If products are released containing software with latent timing errors, the consequences can be very costly: car recalls, or even accidents. Even when such problems are found through testing, they necessitate design changes late in the development project, which causes delays and increases costs. This thesis presents an approach for impact analysis with respect to run-time behavior, such as timing and performance, for complex embedded systems. The impact analysis is performed through optimizing simulation, where the simulation models are automatically generated from the system implementation. This approach allows the consequences of proposed designs, for new or modified features, to be predicted by prototyping the change in the simulation model at a high level of abstraction, e.g., by increasing the execution time of a particular task. Thereby, designs leading to timing, performance, or resource usage problems can be identified early, before implementation, and late redesigns are thereby avoided, which improves development efficiency and predictability as well as software quality. The contributions presented in this thesis are within four areas related to simulation-based analysis of complex embedded systems: (1) simulation and simulation optimization techniques, (2) automated extraction of simulation models from source code, (3) methods for validation of such simulation models, and (4) run-time recording techniques for model extraction, impact analysis and model validation purposes. Several tools have been developed during this work, of which two are being commercialized in the spin-off company Percepio AB. Note that the Katana approach, in area (2), is the subject of a recent patent application (patent pending). / PROGRESS
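The kind of question this approach answers, namely "what happens to response times if one task's execution time grows?", can be illustrated with a toy fixed-priority scheduling simulation. The task set, periods and execution times below are invented for illustration; the actual models in this work are extracted automatically from the system implementation.

```python
def worst_response(tasks, horizon):
    """Simulate preemptive fixed-priority scheduling at integer time steps
    and return the worst observed response time per task.
    tasks: list of (period, exec_time); list index = priority (0 highest)."""
    remaining = [0] * len(tasks)   # work left in the current job of each task
    release = [0] * len(tasks)     # release time of the current job
    worst = [0] * len(tasks)
    for t in range(horizon):
        for i, (period, c) in enumerate(tasks):
            if t % period == 0:    # a new job of task i is released
                remaining[i] = c
                release[i] = t
        # run the highest-priority task with pending work for one tick
        for i in range(len(tasks)):
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    worst[i] = max(worst[i], t + 1 - release[i])
                break
    return worst

base    = [(5, 2), (20, 6)]   # (period, exec_time); task 0 preempts task 1
changed = [(5, 3), (20, 6)]   # prototyped change: task 0 needs one tick more
print(worst_response(base, 100), worst_response(changed, 100))  # [2, 10] [3, 15]
```

The one-tick increase in the high-priority task inflates the low-priority response time from 10 to 15 ticks, the kind of non-obvious consequence such impact analysis is meant to expose before implementation.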
|
434 |
Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions / Karawatzki, Roman, Leydold, Josef January 2005 (has links) (PDF)
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
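A rough sketch of the idea: hit-and-run moves along random chords of the Ratio-of-Uniforms region {(u, v) : 0 < u <= sqrt(f(v/u))}, and every accepted point yields a sample x = v/u with density proportional to f. The version below uses the standard normal density, a fixed bounding box, and rejection along the chord; it is a simplified illustration of the combination, not the authors' algorithm, and the box size and step bounds are assumptions.

```python
import math
import random

def in_rou_region(u, v):
    """Membership oracle for the Ratio-of-Uniforms region of the
    (unnormalized) standard normal density f(x) = exp(-x^2/2),
    i.e. the set {(u, v) : 0 < u <= sqrt(f(v/u))}."""
    if u <= 1e-9:                 # numerical guard near the u = 0 edge
        return False
    return u <= math.exp(-(v / u) ** 2 / 4.0)

def hit_and_run(n_steps, start=(0.5, 0.0), box=1.0):
    """Approximately uniform sampling from the RoU region by hit-and-run:
    pick a random direction, then draw a uniform point of the chord
    through the current point (rejection along the chord)."""
    u, v = start
    samples = []
    for _ in range(n_steps):
        ang = random.uniform(0.0, 2.0 * math.pi)
        du, dv = math.cos(ang), math.sin(ang)
        while True:               # uniform on the part of the chord inside
            t = random.uniform(-2.0 * box, 2.0 * box)
            cu, cv = u + t * du, v + t * dv
            if in_rou_region(cu, cv):
                u, v = cu, cv
                break
        samples.append(v / u)     # x = v/u has density proportional to f
    return samples

random.seed(1)
xs = hit_and_run(5000)
print(sum(xs) / len(xs))          # near 0 for the standard normal
```

Only density evaluations (through the oracle) are needed, which is what makes the procedure automatic in the sense of the abstract.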
|
435 |
Modeling Constitutive Behavior And Hot Rolling Of Steels / Phaniraj, M P 12 1900 (has links)
Constitutive behavior models for steels are typically semi-empirical, although recently neural networks have also been used. Existing neural network models are highly complex, with large network structures, i.e., many neurons and layers. Furthermore, the network structure differs between grades of steel. In the present study a simple neural network structure, 3:4:1, is developed which models flow behavior better than other models available in the literature. Using this neural network structure, the constitutive behavior of 8 steels (4 carbon steels, V and V-Ti microalloyed steels, an austenitic stainless steel and a high speed steel) could be modeled with reasonable accuracy.
The stress-strain behavior for the vanadium microalloyed steel was obtained from hot compression tests carried out at 850-1150 °C and 0.1-60 s⁻¹. It is found that a better estimate of the constants in the semi-empirical model developed for this steel could be obtained by simultaneous nonlinear regression.
A model that can predict the effect of chemical composition on the constitutive behavior would be industrially useful, e.g., in optimizing rolling schedules for new grades of steel. In the present study, a neural network model, 5:6:1, is developed which predicts the flow behavior for a range of carbon steels. It is found that the effect of manganese is best accounted for by taking Ceq = C + Mn/6 as one of the inputs of the network. Predictions from this model show that the effect of carbon on flow stress is nonlinear.
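A 5:6:1 feed-forward network of this kind is just two affine maps with a sigmoidal hidden layer. The sketch below shows the forward pass, with Ceq = C + Mn/6 assembled as one input as described above; the weight values and the exact choice of the remaining four inputs are placeholders for illustration, not the fitted model from the thesis.

```python
import math

def forward(x, W1, b1, W2, b2):
    """Forward pass of an n:m:1 network: tanh hidden layer, linear output."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def flow_stress_inputs(C, Mn, strain, strain_rate, T):
    """Assemble five network inputs; Ceq = C + Mn/6 replaces separate C and Mn.
    The remaining input choices (log strain rate, scaled T, 1000/T) are
    hypothetical placeholders."""
    ceq = C + Mn / 6.0
    return [ceq, strain, math.log(strain_rate), T / 1000.0, 1000.0 / T]

# Placeholder weights: 6 hidden neurons, 5 inputs each (not fitted values).
W1 = [[0.1 * (i + j) for j in range(5)] for i in range(6)]
b1 = [0.0] * 6
W2 = [0.5] * 6
b2 = 10.0

x = flow_stress_inputs(C=0.1, Mn=0.9, strain=0.3, strain_rate=10.0, T=1273.0)
print(forward(x, W1, b1, W2, b2))   # a numerically valid but meaningless value
```

A real model would fit W1, b1, W2, b2 to measured flow-stress data; the point of the sketch is only how small such a 5:6:1 structure is compared to the large networks criticized above.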
The hot strip mill at Jindal Vijaynagar Steel Ltd., Toranagallu, Karnataka, India, was simulated to calculate the rolling loads, finish rolling temperature (FRT) and microstructure evolution. DEFORM-2D, a commercial finite element package, was used to simulate deformation and heat transfer in the rolling mill. The simulation was carried out for 18 strips of 2-4 mm thickness with compositions in the range 0.025-0.139 %C. The rolling loads and FRT could be calculated within ±15 % and ±15 °C respectively. Analysis based on the variation in the roll diameter, roll gap and the effect of roll flattening and temperature of the roll showed that an error of ±6 % is inherent in the prediction of loads. Simulation results indicated that strain-induced transformation to ferrite occurred in the finishing mill. The microstructure after rolling was validated against experimental data for ferrite microstructure and mechanical properties.
The mechanical properties of steels with predominantly ferrite microstructures depend on the prior austenite grain size, the strain retained before transformation and the cooling rate on the run-out table. A parametric study based on experimental data available in the literature showed that a variation in cooling rate by a factor of two on the run-out table gives rise to only a ±20 MPa variation in the mechanical properties.
|
436 |
A CUSUM test for discrete monitoring of intensity of a Poisson process / Eger, Karl-Heinz 13 June 2010 (has links) (PDF)
This paper deals with CUSUM tests for monitoring the intensity parameter of a Poisson process when the process can be observed only in a restricted manner, at pregiven equidistant time points. In this case the process can be monitored by means of a CUSUM test for the parameter of a corresponding Poisson distribution. For rational reference parameter values, the computation of the average run length reduces to solving a system of simultaneous linear equations. The performance of the obtained CUSUM tests is discussed by means of corresponding examples.
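The CUSUM recursion for detecting an upward shift in Poisson intensity from λ0 to λ1 accumulates the observed counts minus the reference value k = (λ1 - λ0)/ln(λ1/λ0) and signals once a decision limit h is crossed. The sketch below (the parameter values, the limit h and the count sequence are invented for illustration) shows the recursion; the paper's average-run-length computation via linear equations is not reproduced here.

```python
import math

def poisson_cusum(counts, lam0, lam1, h):
    """One-sided CUSUM for a rise in Poisson intensity from lam0 to lam1.
    S_k = max(0, S_{k-1} + X_k - k_ref); return index of first alarm, or
    None if the decision limit h is never reached."""
    k_ref = (lam1 - lam0) / math.log(lam1 / lam0)   # reference value
    s = 0.0
    for i, x in enumerate(counts):
        s = max(0.0, s + x - k_ref)
        if s >= h:
            return i
    return None

counts = [0, 3, 0, 5, 1, 0]
print(poisson_cusum(counts, lam0=1.0, lam1=2.0, h=3.0))  # alarm at index 3
```

When k_ref is rational, the reachable values of S_k form a finite lattice, which is what reduces the average-run-length computation to a finite linear system, as the abstract notes.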
|
437 |
The Omnibus language and integrated verification approach / Wilson, Thomas January 2007 (has links)
This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes, for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed. The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project. Guidelines are presented in order to help users avoid conflicts when using the approaches together. The use of the integrated verification approach to meet two key requirements of safe software component reuse, to have clear descriptions and some form of certification, is discussed along with the specialised facilities provided by the Omnibus tool to manage the distribution of components. The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic. The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language.
The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease-of-use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.
|
438 |
Surge waves caused by the movement of a boundary surface / Röhner, Michael 29 September 2011 (has links) (PDF)
Residual pits of mined-out lignite open-cast mines are used, for land-management and economic reasons, as water reservoirs, flood retention basins, clarification ponds and water extraction plants, as well as for local recreation. These residual pits are largely enclosed by embankments consisting of dumped overburden. When the water level fluctuates, these unreinforced embankments tend to slide. As a consequence of such embankment slides, waves form on the water surface that can reach considerable size. These surge waves exceed the wind waves in the residual pits many times over. In order to predict these phenomena, investigations were carried out in the Hubert Engels Laboratory of the Water Sciences Section.
The development of a generally valid calculation method for the surge wave generated by the movement of part of the embankment enclosing the basin requires the introduction of measurable parameters such as the width of the sliding embankment, the time history of the water displacement, and the depth and layout of the basin. The necessary characteristic numbers can only be determined approximately, so a calculation procedure must be based on simple basin geometries, a water displacement rate that remains constant over the slide duration, and preservation of the embankment crest.
Several methods have become known for calculating the filling surge into still water; they can be traced back to a common equation for the surge height. For the plane propagation of the filling surge over still water, two principal flow regimes result: dissolution into waves, or a breaking surge front. These two regimes, as well as the transition range, are delimited by Froude numbers.
The front of filling surge waves is formed by a solitary wave.
The slide of an embankment was reproduced by the simultaneous horizontal and vertical movement of a plate. The movement of the plate, the resulting waves and the forces on the run-up embankment were recorded by an oscillograph.
The evaluation of the experiments showed agreement between the measured results and calculations according to the laws of the filling surge. The volume of water displaced per second per unit width and the still-water depth determine the resulting surge waves. An influence of the vertical movement component could not be detected in the investigated range. The dynamic forces on the closing embankment can be represented by the momentum of the solitary wave.
The spatial propagation of the surge waves was investigated in a model. It was found that the largest wave heights occur in the direction of the plate movement, while the wave heights in lateral propagation directions are smaller.
Calculation approaches for the maximum wave height of the front were derived.
As a result, a calculation procedure was developed which, starting from the parameters of the slide, yields the properties of the surge waves, including the loads they impose on the run-up embankment. With this procedure it is possible to design embankments economically and to avoid harmful effects of surge waves on the reservoir. Costs previously required for a very flat embankment design can be avoided, while a larger usable storage volume is retained.
The digitization of this work by the Saxon State and University Library Dresden (SLUB) was supported by the Gesellschaft der Förderer des Hubert-Engels-Institutes für Wasserbau und Technische Hydromechanik an der Technischen Universität Dresden e.V.
|
439 |
Big boxes and stormwater / Fite-Wassilak, Alexander H. 11 July 2008 (has links)
Big-box Urban Mixed-use Developments (BUMDs) are mixed-use developments with a consistent typology that incorporate big-box retailers in a central role, and they are becoming popular in the Atlanta region. While BUMDs serve an important economic role, they also cause stormwater problems. This study explores integrating an on-site approach to stormwater management into the design of BUMDs. The new designs not only significantly lower the amount of stormwater run-off, but also have the potential to produce better, more attractive developments.
|
440 |
An investigation into wave run-up on vertical surface piercing cylinders in monochromatic waves / Morris-Thomas, Michael January 2003 (has links)
[Formulae and special characters can only be approximated here. Please see the pdf version of the abstract for an accurate reproduction.] Wave run-up is the vertical uprush of water when an incident wave impinges on a free-surface penetrating body. For large volume offshore structures the wave run-up on the weather side of the supporting columns is particularly important for air-gap design and ultimately the avoidance of pressure impulse loads on the underside of the deck structure. This investigation focuses on the limitations of conventional wave diffraction theory, where the free-surface boundary condition is treated by a Stokes expansion, in predicting the harmonic components of the wave run-up, and on the presentation of a simplified procedure for predicting wave run-up. The wave run-up is studied on fixed vertical cylinders in plane progressive waves. These progressive waves are of a form suitable for description by Stokes' wave theory, whereby the typical energy content of a wave train consists of one fundamental harmonic and corresponding phase-locked Fourier components. The choice of monochromatic waves is indicative of ocean environments for large volume structures in the diffraction regime where the assumption of potential flow theory is applicable, or more formally A/a < O(1) (A and a being the wave amplitude and cylinder radius respectively). One of the unique aspects of this work is the investigation of column geometry effects, in terms of square cylinders with rounded edges, on the wave run-up. The rounded edges of each cylinder are described by the dimensionless parameter rc/a, which denotes the ratio of edge corner radius to half-width of a typical column with longitudinal axis perpendicular to the quiescent free-surface. An experimental campaign was undertaken where the wave run-up on a fixed column in plane progressive waves was measured with wire probes located close to the cylinder.
Based on an appropriate dimensional analysis, the wave environment was represented by a parametric variation of the scattering parameter ka and wave steepness kA (where k denotes the wave number). The effect of column geometry was investigated by varying the edge corner radius ratio within the domain 0 <=rc/a <= 1, where the upper and lower bounds correspond to a circular and square shaped cylinder respectively. The water depth is assumed infinite so that the wave run-up caused purely by wave-structure interaction is examined without the additional influence of a non-decaying horizontal fluid velocity and finite depth effects on wave dispersion. The zero-, first-, second- and third-harmonics of the wave run-up are examined to determine the importance of each with regard to local wave diffraction and incident wave non-linearities. The modulus and phase of these harmonics are compared to corresponding theoretical predictions from conventional diffraction theory to second-order in wave steepness. As a result, a basis is formed for the applicability of a Stokes expansion to the free-surface boundary condition of the diffraction problem, and its limitations in terms of local wave scattering and incident wave non-linearities. An analytical approach is pursued and solved in the long wavelength regime for the interaction of a plane progressive wave with a circular cylinder in an ideal fluid. The classical Stokesian assumption of infinitesimal wave amplitude is invoked to treat the free-surface boundary condition along with an unconventional requirement that the cylinder width is assumed much smaller than the incident wavelength. This additional assumption is justified because critical wavelengths for wave run-up on a fixed cylinder are typically much larger in magnitude than the cylinder's width. 
In the solution, two coupled perturbation schemes, incorporating a classical Stokes expansion and a cylinder slenderness expansion, are invoked and the boundary value problem solved to third-order. The formulation of the diffraction problem in this manner allows third-harmonic diffraction effects, and higher-order effects operating at the first-harmonic, to be found. In general, the complete wave run-up is not well accounted for by a second-order Stokes expansion of the free-surface boundary condition and wave elevation. This is, however, dependent upon the coupling of ka and kA. In particular, whilst the modulus and phase of the second-harmonic are moderately predicted, the mean set-up is not well predicted by a second-order Stokes expansion scheme. This is thought to be caused by higher than second-order non-linear effects, since experimental evidence has revealed higher-order diffraction effects operating at the first-harmonic in waves of moderate to large steepness when k << 1. These higher-order effects, operating at the first-harmonic, can be partially accounted for by the proposed long wavelength formulation. For small ka and large kA, subsequent comparisons with measured results do indeed provide a better agreement than the classical linear diffraction solution of Havelock (1940). To account for the complete wave run-up, a unique approach has been adopted where a correction is applied to a first-harmonic analytical solution. The remaining non-linear portion is accounted for by two methods. The first method is based on regression analysis in terms of ka and kA and provides an additive correction to the first-harmonic solution. The second method involves an amplification correction of the first-harmonic. This utilises Bernoulli's equation applied at the mean free-surface position, where the constant of proportionality is empirically determined and is inversely proportional to ka.
The experimental and numerical results suggest that the wave run-up increases as rc/a → 0; however, this is most significant for short waves and long waves of large steepness. Of the harmonic components, experimental evidence suggests that the effect of a variation in rc/a on the wave run-up is particularly significant for the first-harmonic only. Furthermore, the corner radius effect on the first-harmonic wave run-up is well predicted by numerical calculations using the boundary element method. Given this, the proposed simplified wave run-up model includes an additional geometry correction which accounts for rc/a to first-order in local wave diffraction. From a practical viewpoint, it is the simplified model that is most useful for platform designers to predict the wave run-up on a surface piercing column. It is computationally inexpensive, and the comparison of this model with measured results has proved more promising than previously proposed schemes.
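The harmonic decomposition used throughout this analysis (zero-, first-, second- and third-harmonics of the run-up record) amounts to projecting one period of the measured signal onto complex exponentials. A minimal sketch with a synthetic signal follows; the amplitudes 0.1, 1.0 and 0.3 are invented for illustration, not measured values.

```python
import cmath
import math

def harmonic(samples, n):
    """Complex amplitude of the n-th harmonic of one period of a uniformly
    sampled signal: 2/N * sum x_j exp(-i 2 pi n j / N) for n >= 1; the
    n = 0 term (the mean set-up) is the plain average."""
    N = len(samples)
    coeff = sum(x * cmath.exp(-2j * math.pi * n * j / N)
                for j, x in enumerate(samples)) / N
    return coeff if n == 0 else 2 * coeff

# Synthetic run-up record: mean set-up 0.1, first harmonic 1.0, second 0.3.
N = 64
t = [2 * math.pi * j / N for j in range(N)]
signal = [0.1 + 1.0 * math.cos(x) + 0.3 * math.cos(2 * x) for x in t]

print(abs(harmonic(signal, 0)))  # ~0.1
print(abs(harmonic(signal, 1)))  # ~1.0
print(abs(harmonic(signal, 2)))  # ~0.3
```

Comparing the modulus and phase of each extracted harmonic against the corresponding diffraction-theory prediction is exactly the comparison described in the abstract.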
|