381

A determination of the W boson mass by direct reconstruction using the DELPHI detector at LEPII

Thomas, Julie Eleanor January 1999 (has links)
No description available.
382

A first estimate of σ(e⁺p → e⁺W±X) and studies of high-p_T leptons with the ZEUS detector at HERA

Waters, David January 1998 (has links)
No description available.
383

A pharmacokinetic-pharmacodynamic relationship study between GABA-ergic drugs and anxiety levels in an animal model of PTSD / Jacolene Myburgh

Myburgh, Jacolene January 2005 (has links)
Posttraumatic stress disorder (PTSD) is classified as an anxiety disorder, and its characteristic symptoms (re-experiencing, avoidance, numbing of general responsiveness and hyperarousal) develop in response to a traumatic event. The disorder is characterised by hypothalamic-pituitary-adrenal (HPA) axis abnormalities linked with changes in cortisol; moreover, the hippocampus and cortex also play a role in its neurobiology. With regard to the neurochemistry of the disorder, gamma-aminobutyric acid (GABA) is known to be involved; however, the precise role of GABA in PTSD, and how stress changes GABA concentrations in the brain, are still not fully understood. Another aspect of PTSD that has not been clearly defined is its treatment. Classic anxiolytics such as diazepam are expected to relieve the anxiety linked with PTSD, but studies with this group of drugs have not produced the concrete evidence needed to establish them as a treatment of choice, and other classes of drugs have subsequently been investigated as possible treatment options. Among these is lamotrigine, which in a clinical study was found to be effective in alleviating symptoms of PTSD. Moreover, a possible pharmacokinetic-pharmacodynamic relationship for each of these drugs has also not been elucidated. In order to shed light on some of these uncertainties, an animal model of PTSD, time-dependent sensitisation (TDS), was used. GABA levels in the rat hippocampus and frontal cortex were determined at two different time intervals following the TDS procedure (1 day and 7 days post re-stress). High performance liquid chromatography (HPLC) with electrochemical (EC) detection was used to determine GABA concentrations. To investigate the possible anxiolytic effects of diazepam and lamotrigine in this model, as well as a possible pharmacokinetic-pharmacodynamic relationship for each drug, pharmacokinetic profiles for both drugs were established in order to find the times of peak and trough levels of each drug. Blood samples were collected at different time intervals after drug administration, either from the tail vein of rats (lamotrigine) or directly from the heart (diazepam). Drug concentrations at each time interval were then determined by means of HPLC with ultraviolet (UV) detection. The behaviour of rats was analysed using the elevated plus-maze (EPM) at peak or trough concentrations of the drugs, after either acute administration or a 14-day chronic treatment regime. GABA levels in the hippocampus did not change statistically significantly in response to stress at either 1 day or 7 days post re-stress. In the frontal cortex, however, GABA levels increased in response to stress at 1 day post re-stress, with a statistically insignificant but strong trend towards an increase at 7 days post re-stress. With regard to the pharmacokinetic profiles, the peak concentration of diazepam was found to occur at 60 minutes, and that of lamotrigine at 120 minutes. The behavioural studies indicated that acute treatment with diazepam 3 mg/kg resulted in a statistically significant increase in both the ratio of open arm entries and the ratio of time spent in the open arms at peak level of the drug.
After acute treatment with diazepam 3 mg/kg, a statistically significant decrease in the ratio of time spent in the open arms was also found when the values at peak and trough levels of the drug were compared. In response to chronic treatment with diazepam 3 mg/kg for 14 days, test animals exhibited an increase in the ratio of open arm entries at trough level of the drug, with a statistically insignificant yet definite trend towards an increase at peak level. Acute treatment with lamotrigine 10 mg/kg resulted in no statistically significant change in EPM parameters. In response to chronic treatment, however, a statistically significant increase was found in the ratio of time spent in the open arms at peak level of the drug, with a statistically insignificant trend towards an increase at trough level. From the results of this study, we may therefore conclude that GABA levels in the brain are affected, though in different ways, following TDS stress. A pharmacokinetic-pharmacodynamic relationship between the drugs' levels and aversive behaviour could also be established. Furthermore, more sustained anxiolytic effects are evident following chronic treatment with both drugs than after acute administration. / Thesis (M.Sc. (Pharmacology))--North-West University, Potchefstroom Campus, 2006
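The pharmacokinetic sampling and EPM scoring described above reduce to a few simple calculations. The sketch below is illustrative only: the concentration values, entry counts and times are invented, and it is not the thesis's analysis code. It shows how the time of peak concentration (Tmax) can be read off a concentration-time profile and how the two EPM ratios are formed.

```python
import numpy as np

# Illustrative sketch (not the thesis's analysis code): locate the peak
# (Tmax) of an HPLC concentration-time profile and compute the two EPM
# ratios described above. All numbers are invented for demonstration.

# Hypothetical concentration-time profile for one drug (minutes, ng/mL)
times = np.array([15, 30, 60, 120, 240, 480])
conc = np.array([40.0, 85.0, 130.0, 95.0, 50.0, 20.0])

t_max = times[np.argmax(conc)]          # time of peak concentration
c_max = conc.max()                      # peak concentration
print(f"Tmax = {t_max} min, Cmax = {c_max} ng/mL")

# EPM outcome measures: ratio of open-arm entries and ratio of open-arm time
open_entries, closed_entries = 7, 9     # hypothetical counts
open_time, total_time = 110.0, 300.0    # seconds on the maze (hypothetical)

ratio_open_entries = open_entries / (open_entries + closed_entries)
ratio_open_time = open_time / total_time
print(f"ratio open-arm entries = {ratio_open_entries:.2f}, "
      f"ratio open-arm time = {ratio_open_time:.2f}")
```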
384

Model predictive control of a magnetically suspended flywheel energy storage system / Christiaan Daniël Aucamp

Aucamp, Christiaan Daniël January 2012 (has links)
The goal of this dissertation is to evaluate the effectiveness of model predictive control (MPC) for a magnetically suspended flywheel energy storage uninterruptible power supply (FlyUPS). This research topic was selected to determine whether an advanced control technique such as MPC could perform better than a classical control approach such as decentralised Proportional-plus-Differential (PD) control. Based on a literature study of the FlyUPS system and the MPC strategies available, two MPC strategies were used to design two candidate MPC controllers for the FlyUPS, namely a classical MPC algorithm that incorporates optimisation techniques and the MPC algorithm used in the MATLAB® MPC toolbox™. In order to take the restrictions of the system into consideration, the model used to derive the controllers was reduced to an order of ten according to the Hankel singular value decomposition of the model. Simulation results indicated that the first controller, based on a classical MPC algorithm and optimisation techniques, could not be verified as a viable control strategy for implementation on the physical FlyUPS system due to difficulties in obtaining the desired response. The second controller, derived using the MATLAB® MPC toolbox™, was verified to be a viable control strategy for the FlyUPS by delivering good performance in simulation. The verified MPC controller was then implemented on the FlyUPS, and the implementation was analysed in order to validate that the controller operates as expected through a comparison of the simulation and implementation results. Further analysis was then done by comparing the performance of MPC with decentralised PD control in order to determine the advantages and limitations of using MPC on the FlyUPS. The advantages indicated by the evaluation include the simplicity of a controller design that follows directly from the system specifications and dynamics, and the good performance of the controller within the parameters of the controller design. The limitations identified during this evaluation include the high computational load, which requires a relatively long execution time, and the inability of the MPC controller to adapt to unmodelled system dynamics. Based on this evaluation, MPC can be seen as a viable control strategy for the FlyUPS; however, more research is needed to optimise the MPC approach to yield significant advantages over other control techniques such as decentralised PD control. / Thesis (MIng (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2013
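For readers unfamiliar with MPC, the receding-horizon idea the abstract refers to can be sketched in a few lines. The following is a generic illustration only, not the dissertation's controller: the dissertation used a 10th-order Hankel-reduced FlyUPS model in the MATLAB MPC Toolbox, whereas this sketch uses an invented 2-state plant, invented weights and limits, and cvxpy to solve the constrained finite-horizon quadratic program at each sample.

```python
import numpy as np
import cvxpy as cp

# Minimal finite-horizon linear MPC sketch. The plant, horizon, weights and
# input limit below are invented for illustration; they are not the FlyUPS model.

# Hypothetical discrete-time plant x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
nx, nu = A.shape[0], B.shape[1]

N = 20                      # prediction horizon
Q = np.diag([10.0, 1.0])    # state weighting
R = np.array([[0.1]])       # input weighting
u_max = 1.0                 # actuator limit; constraint handling is a key reason to use MPC

x = cp.Variable((nx, N + 1))
u = cp.Variable((nu, N))
x0 = np.array([0.5, 0.0])   # hypothetical initial deviation from the reference

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]
cost += cp.quad_form(x[:, N], Q)

problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()

# Receding-horizon principle: apply only the first move, then re-solve at the
# next sampling instant with the newly measured state.
print("first control move:", u.value[:, 0])
```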
385

L’agir professionnel des enseignants dans la gestion des difficultés “ordinaires” d’apprentissage au primaire : analyse d’un cas dans une perspective historico-culturelle / Teachers' professional action in managing “ordinary” learning difficulties at primary school: a case analysis from a cultural-historical perspective

Lamouric, Marc January 2016 (has links)
This thesis examines teachers' professional action in managing “ordinary” learning difficulties, in particular through the professional gestures they can mobilise to help pupils overcome them. The originality of the study lies in its capacity to document, with the support of relevant conceptual foundations, the “missing paradigm” (Roiné, 2014) in current analyses of learning difficulties, while distancing itself from so-called pathologising approaches. In other words, it seeks to shift the centre of gravity of the interpretation of learning difficulties towards the pedagogical and didactic sphere, while working towards the definition of a specific form of action within the classroom. Within this approach, which is intended to be psychopedagogical (Benoît, 2005), learning difficulties are considered normal and inherent to learning, which places teacher action at the centre of the solutions to be considered. In seeking to document how this type of difficulty can be addressed within the classroom itself, we defined the notion of “ordinary” learning difficulties and mobilised the concept of professional action (Jorro, 2004, 2006a; Jorro and Crocé-Spinelli, 2010). The latter is structured around four professional gestures (language gestures, gestures for staging knowledge, ethical gestures and gestures of adjustment to the situation). We situated our approach within the cultural-historical theory proposed by Vygotski (1997, 2016), insofar as the concept of the zone of proximal development contains, in our view, both the notion of difficulty and that of support (action) on the part of the teacher. Like other work on teachers' professional action, the present research, based on a case study (Savoie-Zajc, 2011), makes it possible to formulate some meaningful hypotheses about the impact of the professional gestures mobilised by a teacher to intervene with pupils experiencing “ordinary” learning difficulties in a classroom context. This study tends to show educational stakeholders that teacher action can hold all the potential needed to develop a psychopedagogical approach to managing “ordinary” learning difficulties, without it being systematically necessary to outsource their management or call on care professionals (Morel, 2014).
386

Détection de points chauds de déforestation à Bornéo de 2000 à 2009 à partir d'images MODIS / Detection of deforestation hotspots in Borneo from 2000 to 2009 using MODIS imagery

Dorais, Alexis 01 1900 (has links)
Ce travail s’inscrit dans le cadre d’un programme de recherches appuyé par le Conseil de recherches en sciences humaines du Canada. / Les forêts de Bornéo sont inestimables. En plus d’une faune et d’une flore riche et diversifiée, ses milieux naturels constituent d’efficaces réservoirs de carbone. En outre, la matière ligneuse qui y est abondante fait l’objet d’une exploitation intensive. Par contre, c’est le potentiel agricole de l’île qui crée le plus d’enthousiasme, principalement en ce qui concerne la culture du palmier à huile. Pour tenter de mieux comprendre et surveiller le phénomène, nous avons développé des méthodes de détection de la déforestation et de la dégradation des forêts. Ces méthodes doivent tenir compte des caractéristiques propres à l’île. C’est que Bornéo est abondamment affectée par une nébulosité constante qui complexifie considérablement son observation à partir des satellites. Malgré ces contraintes, nous avons produit une série chronologique annuelle des points chauds de déforestation et de dégradation des forêts pour les années 2000 à 2009. / Borneo’s forests are priceless. Beyond the richness and diversity of its fauna and flora, its natural habitats constitute efficient carbon reservoirs. Unfortunately, the vast forests of the island are rapidly being cut down, both by the forestry industry and the rapidly expanding oil palm industry. In this context, we have developed methods to detect deforestation and forest degradation in order to better understand and monitor the phenomena. In doing so, the peculiarities of Borneo, such as the persistent cloud cover, had to be accounted for. Nevertheless, we succeeded in producing a time series of the yearly forest degradation and deforestation hotspots for the years 2000 through 2009.
387

Géo localisation en environnement fermé des terminaux mobiles / Indoor geo-location: static and dynamic geo-location of mobile terminals in indoor environments

Dakkak, Mustapha 29 November 2012 (has links)
Récemment, la localisation statique et dynamique d'un objet ou d'une personne est devenue l'un des plus importantes fonctionnalités d'un système de communication, du fait de ses multiples applications. En effet, connaître la position d'un terminal mobile (MT), en milieu extérieur ou intérieur, est généralement d'une importance majeure pour des applications fournissant des services basés sur la localisation. Ce développement des systèmes de localisation est dû au faible coût des infrastructures de réseau sans fil en milieu intérieur (WLAN). Les techniques permettant de localiser des MTs diffèrent selon les paramètres extraits des signaux radiofréquences émis entre des stations de base (BSs) et des MTs. Les conditions idéales pour effectuer des mesures sont des environnements dépourvus de tout obstacle, permettant des émissions directes entre BS et MT. Ce n'est pas le cas en milieu intérieur, du fait de la présence continuelle d'obstacles dans l'espace, qui dispersent les rayonnements. Les mesures prises dans ces conditions (NLOS, pour Non Line of Sight) sont imprévisibles et diffèrent de celles prises en condition LOS. Afin de réduire les erreurs de mesure, différentes techniques peuvent être utilisées, comme la mitigation, l'approximation, la correction à priori, ou le filtrage. En effet, l'application de systèmes de suivi (TSs) constitue une base substantielle pour la navigation individuelle, les réseaux sociaux, la gestion du trafic, la gestion des ressources mobiles, etc. Différentes techniques sont appliquées pour construire des TSs en milieu intérieur, où le signal est bruité, faible voire inexistant. Bien que les systèmes de localisation globaux (GPS) et les travaux qui en découlent fonctionnent bien hors des bâtiments et dans des canyons urbains, le suivi d'utilisateurs en milieu intérieur est bien plus problématique. De ce fait, le problème de prédiction reste un obstacle essentiel à la construction de TSs fiable dans de tels environnements. Une étape de prédiction est inévitable, en particulier, dans le cas où l'on manque d'informations. De multiples approches ont été proposées dans la littérature, la plupart étant basées sur un filtre linéaire (LF), un filtre de Kalman (KF) et ses variantes, ou sur un filtre particulaire (PF). Les filtres de prédiction sont souvent utilisés dans des problèmes d'estimation et l'application de la dérivation non entière peut limiter l'impact de la perte de performances. Ce travail présente une nouvelle approche pour la localisation intérieure par WLAN utilisant un groupement des coordonnées. Ensuite, une étude comparative des techniques déterministes et des techniques d'apprentissage pour la localisation intérieure est présentée. Enfin, une nouvelle approche souple pour les systèmes de suivi en milieu intérieur, par application de la dérivation non entière, est présentée / Recently, the static and dynamic geo-location of a device or a person has become one of the most important aspects of communication systems because of its multiple applications. In general, knowing the position of a mobile terminal (MT) in outdoor or indoor environments is of major importance for applications providing services based on the location. The development of localization systems has been mainly driven by the availability of affordable indoor wireless local area network (WLAN) infrastructure.
Different techniques exist to localize MTs, differing mainly in the type of metrics extracted from the radio frequency signals exchanged between base stations (BSs) and MTs. Ideal measurements are taken in environments that are free of obstacles, with direct ray paths between BS and MT. This is not the case in indoor environments, where permanent obstacles in the workspace scatter the ray paths. Measurements taken in Non Line Of Sight (NLOS) conditions are unpredictable and differ from those taken in LOS. In order to reduce measurement errors, one can apply different techniques such as mitigation, approximation, prior correction, or filtering. Tracking systems (TSs) have many concrete applications in individual navigation, social networking, asset management, traffic management, mobile resource management, etc. Different techniques are applied to build TSs in indoor environments, where the signal is noisy, weak or even non-existent. While Global Positioning System (GPS) devices work well outside buildings and in urban canyons, tracking an indoor user in a real-world environment is much more problematic. The prediction problem remains an essential obstacle to constructing reliable indoor TSs, and the lack of reliable wireless signals is the main issue for indoor geo-location systems. This calls for some form of prediction and correction to compensate for unreliable signals, which inevitably opens the door to a multitude of challenges. A variety of approaches has been proposed in the literature, the most widely used being those based on prediction filters such as the Linear Filter (LF), the Kalman Filter (KF) and its derivatives, and Particle Filters (PF). Prediction filters are often used in estimation problems, and applying digital fractional differentiation can limit the impact of performance degradation. This work presents a novel approach to WLAN indoor geo-location using coordinate clustering. This approach overcomes the limitations of NLOS methods without applying any mitigation, approximation, prior correction, or filtering. A comparative study of deterministic and learning techniques for indoor geo-location is then presented. Finally, a novel soft approach for indoor tracking systems is presented, obtained by applying digital fractional integration (DFI) to classical prediction filters.
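As a point of reference for the "prediction filter" family the abstract mentions, the sketch below shows a generic constant-velocity Kalman filter for 2-D position tracking. It is not the thesis's DFI-augmented filter; the sampling period, noise covariances and measurement sequence are invented for illustration.

```python
import numpy as np

# Generic constant-velocity Kalman filter for 2-D indoor position tracking.
# Motion model, noise levels and measurements are assumptions for illustration.

dt = 0.5                                  # sampling period (s)
F = np.array([[1, 0, dt, 0],              # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # only position is measured (e.g. a WLAN fix)
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)                      # process noise (assumed)
R = 4.0 * np.eye(2)                       # measurement noise, large for noisy NLOS fixes

x = np.zeros(4)                           # initial state estimate
P = np.eye(4) * 10.0                      # initial uncertainty

def kf_step(x, P, z):
    """One predict/update cycle given a noisy position measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Example: filter a short sequence of noisy indoor position fixes (metres)
for z in [np.array([1.0, 0.8]), np.array([1.6, 1.4]), np.array([2.3, 2.1])]:
    x, P = kf_step(x, P, z)
    print("filtered position:", np.round(x[:2], 2))
```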
388

A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data

Hartley, Michael A. 12 1900 (has links)
Approved for public release; distribution is unlimited. / Budget limitations have reduced the number of military components available for testing, and time constraints have reduced the amount of time available for actual testing, resulting in many items still operating at the end of test cycles. These two factors produce small test populations (small sample sizes) with "heavily" censored data. The assumption of "normal approximation" for estimates based on these small sample sizes reduces the accuracy of the confidence bounds of the probability plots and the associated quantities. This creates a problem in acquisition analysis, because the confidence in the probability estimates influences the number of spare parts required to support a mission or deployment, or determines the length of warranty that ensures proper operation of systems. This thesis develops a method that simulates small samples with censored data and examines the error of the Fisher-Matrix (FM) and Likelihood Ratio Bounds (LRB) confidence methods for two test populations (sizes 10 and 20) with three, five, seven and nine observed failures for the Weibull distribution. The thesis includes a Monte Carlo simulation code written in S-Plus that can be modified by the user to meet their particular needs for any sampling and censoring scheme. To illustrate the approach, the thesis includes a catalog of corrected confidence bounds for the Weibull distribution, which can be used by acquisition analysts to adjust their confidence bounds and obtain a more accurate representation for warranty and reliability work. / Civilian, Department of the Air Force
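The simulation the abstract describes can be prototyped compactly outside S-Plus. The sketch below is an illustrative reconstruction under stated assumptions, not the thesis's code or its catalogued correction factors: it uses invented Weibull parameters, n = 10 units with r = 5 Type II censored failures, and a BFGS inverse-Hessian in place of an exact Fisher information matrix, and it estimates how often a nominal one-sided 95% normal-approximation lower bound on reliability covers the true value.

```python
import numpy as np
from scipy import optimize, stats

# Illustrative Monte Carlo sketch: simulate Type II censored Weibull samples,
# fit by maximum likelihood, and check the empirical coverage of a
# normal-approximation (Fisher-matrix style) lower confidence bound on
# reliability. All parameters below are assumptions for demonstration.

rng = np.random.default_rng(1)
beta_true, eta_true = 2.0, 1000.0      # shape, scale (assumed)
n, r = 10, 5                           # units on test, observed failures
t_mission = 300.0                      # reliability evaluated at this time
R_true = np.exp(-(t_mission / eta_true) ** beta_true)

def neg_log_lik(theta, failures, n_censored, t_censor):
    """Negative log-likelihood for Type II censored Weibull data."""
    beta, eta = np.exp(theta)          # optimise on the log scale for positivity
    z = (failures / eta) ** beta
    ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(failures / eta) - z)
    ll += n_censored * (-(t_censor / eta) ** beta)   # survivors censored at t_censor
    return -ll

cover, reps = 0, 1000
for _ in range(reps):
    t = np.sort(eta_true * rng.weibull(beta_true, size=n))
    failures, t_censor = t[:r], t[r - 1]             # test stops at the r-th failure
    res = optimize.minimize(neg_log_lik, x0=np.log([1.0, np.median(failures)]),
                            args=(failures, n - r, t_censor), method="BFGS")
    beta_hat, eta_hat = np.exp(res.x)
    R_hat = np.exp(-(t_mission / eta_hat) ** beta_hat)

    # Delta-method standard error of R_hat via a numerical gradient and the
    # BFGS inverse-Hessian approximation (the "normal approximation").
    def R_of(theta):
        b, e = np.exp(theta)
        return np.exp(-(t_mission / e) ** b)
    grad = optimize.approx_fprime(res.x, R_of, 1e-6)
    se = np.sqrt(grad @ res.hess_inv @ grad)
    lower = R_hat - stats.norm.ppf(0.95) * se        # one-sided 95% lower bound
    cover += (lower <= R_true)

print(f"empirical coverage of the nominal 95% bound: {cover / reps:.3f}")
```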
389

Outils pour l'optimisation de la consommation des véhicules électriques / Optimization tools for electric vehicles energy consumption

Baouche, Fouad 02 June 2015 (has links)
Le contexte écologique et économique actuel incite les autorités et le public à la réduction des émissions de CO2 et les dépendances vis-à-vis des hydrocarbures. Le transport représente 23 % des émissions de polluants dans le monde, et ce chiffre passe à 39 % pour la France. L’adoption de nouvelles solutions de transport est primordiale pour la réduction de ces émissions. L’électromobilité représente une alternative viable aux véhicules thermiques conventionnels. Si les véhicules électriques permettent une mobilité avec zéro émission, certaines de leurs caractéristiques empêchent leur développement. Les principaux freins à l’adoption de ce type de véhicules sont l’autonomie limitée, le faible déploiement des stations de recharge en milieu urbain (et extra urbain) ainsi que les temps de recharge importants. Aussi, afin de promouvoir l’usage de ce type de mobilité, il incombe de développer des outils visant à optimiser la consommation électrique tenant compte des caractéristiques liées à ce type de mobilité. C’est l’objectif de ce travail de thèse qui se focalise sur le développement d’outils permettant d’optimiser l’usage de véhicules électriques. Pour ce faire, trois grands axes sont définis : la modélisation des véhicules électriques, l’affectation des stations de recharge et le choix d’éco-itinéraires. La première partie de cette thèse s’intéresse à l’estimation de la consommation des véhicules électriques ainsi qu’à la présentation de la librairie de modèles dynamiques VEHLIB d’estimation de la consommation de ce type de véhicules. La seconde partie est consacrée à l’affectation optimale des stations de recharge. Une méthodologie de déploiement d’infrastructures de recharge est proposée pour la ville de Lyon avec prise en compte de la demande de mobilité issue des enquêtes ménages déplacements. La troisième partie de la thèse s’intéresse à la thématique du choix d’éco-itinéraire (green routing). Celle-ci aboutit à la proposition d’une méthodologie multi-objectif de recherche de stations de recharge afin de déterminer des itinéraires optimaux avec déviation vers ces stations lorsque l’état de charge de la batterie du véhicule ne permet pas de terminer le trajet. Pour finir, une expérimentation a été réalisée à l’aide d’un véhicule électrique équipé de capteurs de position et de consommation pour d’une part valider les méthodologies proposées et d’autre part analyser les facteurs exogènes qui influent sur la consommation des véhicules électriques. / The current ecological and economic context encourages the authorities and the public to reduce CO2 emissions and oil dependence. Transportation is responsible for 23% of pollutant emissions in the world, and this proportion rises to 37% in France. The adoption of new transport solutions is primordial to reduce these emissions. Electromobility is a viable alternative to conventional vehicles. While electric vehicles offer mobility with zero emissions, some of their characteristics impede their development. The main obstacles to the adoption of these vehicles are the limited autonomy, the sparse distribution of charging stations in urban areas and the significant charging time. To promote the use of this type of mobility, it is therefore essential to develop tools that optimize the energy consumption and take into account the characteristics associated with this type of mobility. To achieve this, three areas are defined: modeling of electric vehicles, optimized charging station deployment and eco-routing.
The first part of this thesis focuses on the consumption estimation of electric vehicles and the presentation of the dynamic model library VEHLIB. The second part is dedicated to the optimal allocation of charging stations: a methodology for the deployment of electric vehicle charging infrastructure is proposed for the urban area of the city of Lyon, taking into account the mobility demand derived from household travel surveys. The third part of the thesis deals with eco-routing (green routing). A multi-objective methodology for eco-routing with en-route recharging is proposed, whose solutions include a detour to a charging station when the battery state of charge does not allow the trip to be completed. Finally, an experiment was carried out using an electric vehicle equipped with position and consumption sensors in order to validate the proposed methodologies and analyse the exogenous factors that impact electric vehicle consumption.
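The detour idea behind the eco-routing part can be illustrated with a toy example. The sketch below is not the thesis's multi-objective algorithm: the road graph, per-edge energy costs, battery capacity and station placement are all invented, and it simply routes on an energy-weighted graph and inserts a charging stop when the battery cannot cover the energy-optimal route.

```python
import networkx as nx

# Toy illustration of energy-aware routing with a charging detour.
# Graph topology, edge energies (kWh) and battery capacity are assumptions.

G = nx.DiGraph()
edges = [("A", "B", 4.0), ("B", "D", 5.0),          # direct corridor
         ("A", "C", 3.0), ("C", "S", 2.0),          # corridor via station S
         ("S", "D", 3.5), ("B", "S", 1.5)]
for u, v, e in edges:
    G.add_edge(u, v, energy=e)

battery_kwh = 7.0
stations = {"S"}                                    # hypothetical charging station
origin, destination = "A", "D"

def energy(path):
    return sum(G[u][v]["energy"] for u, v in zip(path, path[1:]))

best = nx.shortest_path(G, origin, destination, weight="energy")
if energy(best) <= battery_kwh:
    print("direct route feasible:", best, energy(best), "kWh")
else:
    # Detour: split the trip at a station where the vehicle recharges, keeping
    # each leg within the battery capacity, and pick the cheapest total.
    candidates = []
    for s in stations:
        leg1 = nx.shortest_path(G, origin, s, weight="energy")
        leg2 = nx.shortest_path(G, s, destination, weight="energy")
        if energy(leg1) <= battery_kwh and energy(leg2) <= battery_kwh:
            candidates.append((energy(leg1) + energy(leg2), leg1 + leg2[1:]))
    total, route = min(candidates)
    print("route with charging stop:", route, total, "kWh")
```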
390

Characterisation of ambient atmospheric aerosols using accelerator-based techniques

Sekonya, Kamela Godwin 15 April 2010 (has links)
Atmospheric haze, which builds up over South Africa, including our study areas in Cape Town and the Mpumalanga Highveld, under calm weather conditions, causes public concern. The scope of this study was to determine the concentration and composition of atmospheric aerosol at Khayelitsha (an urban site in the Western Cape) and Ferrobank (an industrial site in Witbank, Mpumalanga). Particulate matter was collected at Khayelitsha from 18 May 2007 to 20 July 2007 (20 samples) using a Partisol-plus sampler and a Tapered Element Oscillating Microbalance (TEOM) sampler. Sampling took place at Ferrobank from 07 February 2008 to 11 March 2008 (6 samples) using a Partisol-plus sampler and an E-sampler. The gravimetric mass of each exposed sample was determined from pre- and post-sampling weighing. The elemental composition of the particulate matter was determined for 16 elements at Khayelitsha using Proton Induced X-ray Emission (PIXE). The concentrations of the elements Al, Si, S, Cl, K, Ca, Ti, Cr, Mn, Fe, Cu, Zn, As, Br, Sn, and Pb were determined by analysing the PIXE spectra obtained. In a similar manner, the elemental composition of the particulate matter was determined for 15 elements at Ferrobank (Al, Si, S, Cl, K, Ca, Ti, Cr, Mn, Fe, Cu, Zn, As, Br and Pb). The average aerosol mass concentrations for different days at the Khayelitsha site were found to vary between 8.5 μg/m3 and 124.38 μg/m3. At the Khayelitsha site, the average aerosol mass concentration exceeded the current South African 24 h air quality standard of 75 μg/m3 on three occasions during the sampling campaign. At the Ferrobank site, no single day exceeded the South African air quality standard during the sampling campaign. Enrichment factors were calculated for each element in the samples of particles with an aerodynamic diameter of less than 10 μm (PM10) in order to identify their possible sources. The analysis yielded five potential sources of PM10: soil dust, sea salt, gasoline emissions, domestic wood and coal combustion. Interestingly, the enrichment factor values for the Khayelitsha samples show that sea salt constitutes a major source of emissions, while for the Ferrobank samples, source apportionment by unique ratios (SPUR) indicates that soil dust and coal emissions are the major sources of pollution. The source apportionment at Khayelitsha shows that sea salt and biomass burning are major sources of air pollution.
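The enrichment factor calculation mentioned above is a simple ratio of ratios, EF(X) = (C_X/C_ref)_aerosol / (C_X/C_ref)_crust. The sketch below shows the arithmetic only: the aerosol concentrations are invented, the crustal abundances are rough textbook-style values, and the choice of Fe as the reference element and of an EF threshold of 10 are common conventions rather than the thesis's actual data or settings.

```python
# Illustrative crustal enrichment factor calculation:
#   EF(X) = (C_X / C_ref)_aerosol / (C_X / C_ref)_crust
# Aerosol values (ng/m3) are invented; crustal abundances (mg/kg) are rough
# approximations used only to show the arithmetic. Fe is taken as the
# crustal reference element, a common (but not the only possible) choice.

aerosol = {"Fe": 850.0, "Si": 2400.0, "Pb": 35.0, "Zn": 60.0, "S": 900.0}
crust = {"Fe": 56300.0, "Si": 282000.0, "Pb": 14.0, "Zn": 70.0, "S": 350.0}

REF = "Fe"

def enrichment_factor(element):
    aerosol_ratio = aerosol[element] / aerosol[REF]
    crust_ratio = crust[element] / crust[REF]
    return aerosol_ratio / crust_ratio

for el in ("Si", "Pb", "Zn", "S"):
    ef = enrichment_factor(el)
    origin = "crustal (soil dust)" if ef < 10 else "non-crustal (anthropogenic or marine)"
    print(f"EF({el}) = {ef:8.1f}  -> likely {origin}")
```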
