  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Essays on economic mobility

Yalonetzky, Gaston Isaias January 2008 (has links)
This thesis is a collection of three essays contributing to the intergenerational and intra-generational mobility literature. The essay on full risk insurance and measurement error examines the likelihood that measurement error may reconcile observed departures from perfect rank immobility in insurable consumption with the mobility predictions of full risk insurance, by generating spurious rank-breaking transitions. The essay shows that under certain assumptions full risk insurance predicts perfect rank immobility and that there exist ranges of error covariance matrices for which the mobility predictions of full risk insurance plus measurement error cannot be rejected in the Peruvian data. A novel approach to testing these mobility predictions is presented. The essay on discrete time-state Markov chain models applied to welfare dynamics shows that higher-order models may fit the data better than the popular first-order, stationary model, and that the order of the chain, in turn, affects the estimation of equilibrium distributions. A best-practice methodology for conducting homogeneity tests between two samples with different optimal orders is proposed, and an index by Shorrocks, based on the trace of the transition matrix, is extended to higher-order discrete Markov chain models. The essay on cohort heterogeneity in intergenerational mobility of education shows how cohort heterogeneity affects the analysis of cross-group homogeneity and the long-term prospects of a welfare variable, based on transition matrix analysis. The essay compares the transition matrices of Peruvian groups divided by gender and ethnicity and finds genuine reductions in the heterogeneity of mobility regimes between male and female groups and between indigenous and non-indigenous groups among the youngest cohorts. The essay proposes a methodology for conducting first-order stochastic dominance analysis with equilibrium distributions and shows that, among the youngest cohorts, the past stochastic dominance of males over females and of the non-indigenous over the indigenous disappears in the long term.
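To make the trace-based index and the equilibrium distributions concrete, here is a minimal sketch for a first-order chain (not the thesis's code; the transition matrix below is hypothetical):

```python
# Hypothetical sketch: Shorrocks' trace-based mobility index and the
# equilibrium (stationary) distribution of a first-order, discrete-state
# Markov chain. Higher-order extensions, as in the essay, would first
# re-encode the state space as tuples of past states.
import numpy as np

def shorrocks_index(P: np.ndarray) -> float:
    """M(P) = (n - trace(P)) / (n - 1); 0 = perfect immobility, larger = more mobile."""
    n = P.shape[0]
    return (n - np.trace(P)) / (n - 1)

def equilibrium_distribution(P: np.ndarray) -> np.ndarray:
    """Left eigenvector of P for eigenvalue 1, normalised to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

# Illustrative 3-state welfare transition matrix (rows sum to 1).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
print(shorrocks_index(P))           # ~0.65
print(equilibrium_distribution(P))  # long-run shares of the three states
```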
322

Geometric modelling and shape optimisation of pharmaceutical tablets : geometric modelling and shape optimisation of pharmaceutical tablets using partial differential equations

Ahmat, Norhayati Binti January 2012 (has links)
Pharmaceutical tablets have been the most dominant form for drug delivery, and they need to be strong enough to withstand external stresses due to packaging and loading conditions before use. The strength of the produced tablets, which is characterised by their compressibility and compactibility, is usually determined through a physical prototype. This process is sometimes quite expensive and time-consuming; simulating it beforehand can therefore overcome this problem. A technique for shape modelling of pharmaceutical tablets based on the use of Partial Differential Equations is presented in this thesis. The volume and the surface area of the generated parametric tablet in various shapes have been estimated numerically. This work also presents an extended formulation of the PDE method in a higher-dimensional space, increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. The shape and size of the generated solid tablets can be changed by exploiting the analytic expressions relating the coefficients associated with the PDE method. The solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model a displacement component of a compressed PDE-based representation of a flat-faced round tablet. The simulation results, which are analysed using the Heckel model, show that the developed model is capable of predicting the compressibility of pharmaceutical powders since it fits the experimental data accurately. The optimal design of pharmaceutical tablets with a particular volume and maximum strength has been obtained using automatic design optimisation, performed by combining the PDE method with a standard method for numerical optimisation.
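For readers unfamiliar with the Heckel model used in the compressibility analysis, a minimal illustration of the underlying linear relation ln(1/(1 - D)) = K*P + A, fitted to made-up compaction data, might look like this (an assumption-laden sketch, not the thesis's implementation):

```python
# Hedged illustration: fitting the Heckel equation ln(1/(1 - D)) = K*P + A,
# where D is relative density and P is compaction pressure.
# The data points below are invented for demonstration only.
import numpy as np

P = np.array([25.0, 50.0, 75.0, 100.0, 150.0])   # pressure, MPa (hypothetical)
D = np.array([0.62, 0.74, 0.81, 0.86, 0.91])     # relative density (hypothetical)

y = np.log(1.0 / (1.0 - D))
K, A = np.polyfit(P, y, 1)           # slope and intercept of the Heckel plot
mean_yield_pressure = 1.0 / K        # commonly reported compressibility measure
print(K, A, mean_yield_pressure)
```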
323

Compact high-repetition-rate terahertz source based on difference frequency generation from an efficient 2-μm dual-wavelength KTP OPO

Mei, Jialin, Zhong, Kai, Wang, Maorong, Liu, Pengxiang, Xu, Degang, Wang, Yuye, Shi, Wei, Yao, Jianquan, Norwood, Robert A., Peyghambarian, Nasser 03 November 2016 (has links)
A compact optical terahertz (THz) source was demonstrated based on an efficient high-repetition-rate doubly resonant optical parametric oscillator (OPO) around 2 μm with two type-II phase-matched KTP crystals in a walk-off-compensated configuration. The KTP OPO was intracavity pumped by an acousto-optical (AO) Q-switched Nd:YVO4 laser and emitted two tunable wavelengths near degeneracy. The tuning range extended continuously from 2.068 μm to 2.191 μm with a maximum output power of 3.29 W at 24 kHz, corresponding to an optical-to-optical conversion efficiency (from 808 nm to 2 μm) of 20.69%. The stable pulsed dual-wavelength operation provided an ideal pump source for generating microwatt-level terahertz waves by difference frequency generation (DFG). A 7.84-mm-long periodically inverted quasi-phase-matched (QPM) GaAs crystal with 6 periods was used to generate a terahertz wave; a maximum voltage of 180 mV at 1.244 THz was acquired with a 4.2-K Si bolometer, corresponding to an average output power of 0.6 μW and a DFG conversion efficiency of 4.32×10⁻⁷. The acceptance bandwidth was found to be larger than 0.35 THz (FWHM). With the 15-mm-long GaSe crystal used in type-II collinear DFG, a tunable THz source ranging from 0.503 THz to 3.63 THz with a maximum output voltage of 268 mV at 1.65 THz was achieved; the corresponding average output power and DFG conversion efficiency were 0.9 μW and 5.86×10⁻⁷, respectively. This provides a potentially practical palm-top tunable THz source for portable applications.
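The relation between the dual pump wavelengths and the generated terahertz frequency is simply the difference of the two optical frequencies; a small back-of-the-envelope sketch (the wavelength pair below is hypothetical, not taken from the paper) is:

```python
# Back-of-the-envelope check: terahertz frequency from difference frequency
# generation between two pump wavelengths, nu_THz = c * |1/lambda1 - 1/lambda2|.
c = 299_792_458.0  # speed of light, m/s

def dfg_frequency_THz(lambda1_um: float, lambda2_um: float) -> float:
    """Difference frequency in THz for two wavelengths given in micrometres."""
    return c * abs(1.0 / (lambda1_um * 1e-6) - 1.0 / (lambda2_um * 1e-6)) / 1e12

# Hypothetical pair near the OPO degeneracy; a ~19 nm separation around
# 2.13 um corresponds to roughly 1.2-1.3 THz.
print(dfg_frequency_THz(2.120, 2.139))   # ~1.26 THz for this invented pair
```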
324

CANNABINOID RECEPTORS IN THE 3D RECONSTRUCTED MOUSE BRAIN: FUNCTION AND REGULATION

Nguyen, Peter 05 August 2010 (has links)
CB1 receptors (CB1R) mediate the psychoactive and therapeutic effects of cannabinoids, including ∆9-tetrahydrocannabinol (THC), the main psychoactive constituent of marijuana. However, therapeutic use is limited by side effects and by tolerance and dependence with chronic administration. Tolerance to cannabinoid-mediated effects is associated with CB1R adaptations, including desensitization (receptor/G-protein uncoupling) and downregulation (receptor degradation). The objective of this thesis is to investigate the regional specificity of CB1R function and regulation. Previous studies have investigated CB1Rs in a subset of regions involved in cannabinoid effects, but an inclusive regional comparison of the relative efficacies of different classes of cannabinoids to activate G-proteins has not been conducted. A novel unbiased whole-brain analysis was developed based on Statistical Parametric Mapping (SPM) of 3D-reconstructed mouse brain images derived from agonist-stimulated [35S]GTPγS autoradiography, which has not been described before. SPM demonstrated regional differences in the relative efficacies of the cannabinoid agonists methanandamide (M-AEA), CP55,940 (CP), and WIN55,212-2 (WIN) in mouse brains. To assess the potential contribution of novel sites, CB1R knockout (KO) mice were used. SPM analysis revealed that WIN, but not CP or M-AEA, stimulated [35S]GTPγS binding in regions that partially overlapped with the expression of CB1Rs. We then examined the role of the regulatory protein beta-arrestin-2 (βarr2) in CB1R adaptations to chronic THC treatment. Deletion of βarr2 reduced CB1R desensitization/downregulation in the cerebellum, caudal periaqueductal gray (PAG), and spinal cord. However, in the hippocampus, amygdala and rostral PAG, similar desensitization was present in both genotypes. Interestingly, enhanced desensitization was found in the hypothalamus and cortex of βarr2 KO animals. Intra-regional differences in the magnitude of desensitization were noted in the caudal hippocampus, where βarr2 KO animals exhibited greater desensitization compared to WT. Regional differences in βarr2-mediated CB1R adaptation were associated with differential effects on tolerance: THC-mediated antinociception, but not catalepsy or hypothermia, was attenuated in βarr2 KO mice. Overall, the SPM studies revealed intra- and inter-regional specificity in the function and regulation of CB1Rs and underscore an advantage of using a whole-brain unbiased approach. Understanding the regulation of CB1R signaling within different anatomical contexts represents an important fundamental prerequisite for the therapeutic exploitation of the cannabinoid system.
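The core operation behind the SPM analysis described above is a voxel-wise statistical comparison across aligned 3D volumes; a rough conceptual sketch (synthetic arrays, an uncorrected threshold, and not the thesis's actual pipeline) is:

```python
# Conceptual sketch only: a voxel-wise two-sample t-test across
# 3D-reconstructed volumes, the basic operation of statistical parametric
# mapping. Arrays are synthetic (subjects x X x Y x Z), assumed aligned.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
agonist  = rng.normal(1.2, 0.2, size=(8, 40, 50, 30))   # e.g. agonist-stimulated binding
baseline = rng.normal(1.0, 0.2, size=(8, 40, 50, 30))   # basal binding

t, p = stats.ttest_ind(agonist, baseline, axis=0)        # one test per voxel
significant = p < 0.001   # uncorrected threshold for illustration only;
                          # a real SPM analysis applies multiple-comparison control
print(int(significant.sum()), "voxels exceed the threshold")
```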
325

Machine learning methods for discrete multi-scale flows : application to finance / Méthodes d'apprentissage pour des flots discrets multi-échelles : application à la finance

Mahler, Nicolas 05 June 2012 (has links)
This research work studies the problem of identifying and predicting the trends of a single financial target variable in a multivariate setting. The machine learning point of view on this problem is presented in chapter I. The efficient market hypothesis, which stands in contradiction with the objective of trend prediction, is first recalled. The different schools of thought in market analysis, which disagree to some extent with the efficient market hypothesis, are reviewed as well. The tenets of fundamental analysis, technical analysis and quantitative analysis are made explicit. We particularly focus on the use of machine learning techniques for computing predictions on time series. The challenges of dealing with dependent and/or non-stationary features while avoiding the usual traps of overfitting and data snooping are emphasized. Extensions of the classical statistical learning framework, particularly transfer learning, are presented. The main contribution of this chapter is the introduction of a research methodology for developing trend-predictive numerical models. It is based on an experimentation protocol made of four interdependent modules. The first module, entitled Data Observation and Modeling Choices, is a preliminary module devoted to the statement of very general modeling choices, hypotheses and objectives. The second module, Database Construction, turns the target and explanatory variables into features and labels in order to train trend-predictive numerical models. The purpose of the third module, entitled Model Construction, is the construction of trend-predictive numerical models. The fourth and last module, entitled Backtesting and Numerical Results, evaluates the accuracy of the trend-predictive numerical models over a "significant" test set via two generic backtesting plans. The first plan computes recognition rates of upward and downward trends. The second plan designs trading rules using the predictions made over the test set. Each trading rule yields a profit and loss account (P&L), which is the money earned cumulatively over time. These backtesting plans are additionally completed by interpretation functionalities, which help to analyze the decision mechanism of the numerical models. These functionalities can be measures of feature prediction ability and measures of model and prediction reliability. They decisively contribute to formulating better data hypotheses and enhancing the time-series representation, database and model construction procedures. This is made explicit in chapter IV. Numerical models, aiming at predicting the trends of the target variables introduced in chapter II, are indeed computed for the model construction methods described in chapter III and thoroughly backtested. The switch from one model construction approach to another is particularly motivated. The dramatic influence of the choice of parameters, at each step of the experimentation protocol, on the formulation of conclusion statements is also highlighted. The RNN procedure, which does not require any parameter tuning, has thus been used to reliably study the efficient market hypothesis. New research directions for designing trend-predictive models are finally discussed.
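A stripped-down sketch of the two generic backtesting plans, recognition rates plus the cumulative P&L of a long/short rule, might look as follows (my own simplification with made-up signals, not the thesis's code):

```python
# Minimal sketch of the two backtesting plans described above: hit rates for
# predicted upward/downward trends, and the cumulative P&L of a rule that goes
# long on a predicted rise and short on a predicted fall. Inputs are invented.
import numpy as np

predicted = np.array([ 1, -1,  1,  1, -1,  1, -1, -1])   # predicted trend signs
realised  = np.array([ 1, -1, -1,  1, -1,  1,  1, -1])   # realised trend signs
returns   = np.array([0.004, -0.010, -0.006, 0.012, -0.003, 0.009, 0.005, -0.002])

up_rate   = np.mean(realised[predicted ==  1] ==  1)     # recognition rate, upward trends
down_rate = np.mean(realised[predicted == -1] == -1)     # recognition rate, downward trends
pnl       = np.cumsum(predicted * returns)               # long/short rule, unit position

print(up_rate, down_rate, pnl[-1])
```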
326

Optimization of a Parallel Mechanism Design with Respect to a Stewart Platform Control Design

Březina, Lukáš January 2010 (has links)
The presented thesis deals with developing a dynamics model of a parallel manipulator that is suitable for control design purposes. The chosen approach is based on modelling the system dynamics in the Matlab SimMechanics simulation environment, followed by linearisation of the model. Among other things, the resulting linear state-space model makes it easy to assess the controllability and observability of the model. Thanks to its relative simplicity, the model is also computationally inexpensive. The approach is demonstrated on the design of a two-level controller for a SimMechanics model of a Stewart platform, on which the designed controller was subsequently tested successfully. A substantial part of the thesis presents an approach to modelling uncertain parameters of the dynamic models of the Stewart platform and of a Maxon RE 35 DC motor, together with its results. The proposed approach is based on modelling parametric uncertainty in such a way that the uncertainty is defined individually for each element of the model's state-space matrices. The uncertainty itself is then determined by the difference between the corresponding parameters of the matrices of the nominal model and of a model with the maximum assumed parameter uncertainty. Owing to its state-space representation, the resulting uncertain model is suitable for controller design based on robust control methods, for example H-infinity norm minimisation. The described method was used to compensate for the shift between the operating points around which the linearisation is performed, and for modelling inaccuracies in selected parameters of the Stewart platform and DC motor models. The obtained models (the SimMechanics model and the uncertain model) were experimentally compared with the behaviour of one of the linear actuators of the Stewart platform. The difference between the data obtained from the SimMechanics simulation and the data measured on the real machine was almost completely covered by the uncertain model. The presented uncertainty-modelling method is very general and applicable to any state-space model.
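The element-wise uncertainty construction described above can be sketched as follows (the 2x2 matrices and the interval sampling step are my own assumptions for illustration, not the thesis's implementation):

```python
# Illustrative sketch: an element-wise uncertain state matrix built from a
# nominal linearised model and a worst-case model, as described above.
# The matrices are hypothetical.
import numpy as np

A_nom = np.array([[ 0.0,  1.0],
                  [-4.0, -0.5]])      # nominal linearised dynamics
A_max = np.array([[ 0.0,  1.0],
                  [-5.2, -0.8]])      # model with maximum assumed parameter uncertainty

delta = np.abs(A_max - A_nom)         # per-element uncertainty radius
A_lo, A_hi = A_nom - delta, A_nom + delta

def sample_uncertain_A(rng: np.random.Generator) -> np.ndarray:
    """Draw one admissible state matrix from the element-wise interval family."""
    return rng.uniform(A_lo, A_hi)

rng = np.random.default_rng(1)
print(sample_uncertain_A(rng))
```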
327

Mise en évidence de nouveaux types de vagues de très grandes amplitudes / Experimental evidence of new types of large-amplitude waves

Leroux, Alphonse 08 November 2013 (has links)
By means of the parametric excitation of surface waves in a Hele-Shaw cell, we demonstrate the existence of new types of solitary and standing waves at the surface of water. These large-amplitude waves are strongly nonlinear; the theoretical study carried out does not account for the shape of the waves, but it does explain the origin of the observed hysteresis, which is essential for understanding the observed phenomena. Indeed, in our experimental configuration the existence of these waves is conditioned by the presence of a bistability domain in the excitation-amplitude versus wave-amplitude plane, within which we have shown that two solutions can coexist, one of zero amplitude and one of non-zero amplitude. The two highly localized, standing surface waves are respectively of odd and even symmetry, both oscillate subharmonically with the forcing frequency, and they differ strongly from the other types of localized patterns; to our knowledge, such a solitary wave of odd symmetry has never been reported before. These Hele-Shaw experiments also revealed envelope waves that are not yet described by any existing model; to our knowledge, this is the first standing envelope wave observed at the surface of water. We also report a new type of standing gravity wave of very large amplitude, having alternately the shape of a star and of a polygon, observed in a laboratory experiment by vibrating a tank vertically. The symmetry of the pattern (the number of branches of the star) is independent of the container form and size, and can be changed according to the amplitude and frequency of the vibration. We show that a nonlinear three-wave resonant coupling mechanism can explain this wave geometry, although this possibility had been rejected for purely gravity waves up to now.
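The three-wave resonance argument can be checked numerically with the deep-water gravity-wave dispersion relation; the toy sketch below (arbitrary wavenumbers, not the thesis's calculation) evaluates the frequency mismatch of a candidate triad:

```python
# Toy check: the three-wave resonance conditions k1 + k2 = k3 and
# w1 + w2 = w3 under the deep-water gravity-wave dispersion w = sqrt(g*|k|).
# The residual shows how far a given triad is from exact resonance.
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def omega(k: np.ndarray) -> float:
    """Deep-water gravity-wave dispersion relation, w = sqrt(g * |k|)."""
    return np.sqrt(g * np.linalg.norm(k))

k1 = np.array([20.0,  0.0])    # rad/m, hypothetical
k2 = np.array([10.0, 17.3])    # rad/m, at roughly 60 degrees to k1
k3 = k1 + k2                   # wavevector condition imposed exactly

mismatch = omega(k1) + omega(k2) - omega(k3)
print(f"frequency mismatch: {mismatch:.3f} rad/s")
```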
328

Data Driven Visual Recognition

Aghazadeh, Omid January 2014 (has links)
This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one which models categories and one which is not category based. We are interested in data-driven solutions for both kinds of problems. In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data-driven models, we demonstrate that, based on a few reference exemplars, our methods are able to detect novelties in the ego-motions of people and changes in the static environments surrounding them. In the category-level part, we study object recognition. We consider both object category classification and localization, and propose scalable data-driven approaches for both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines, such as the same model initialized with less subtly designed procedures. A nonparametric large-margin classifier is introduced and demonstrated to have a multitude of advantages over its competitors: better training and testing time costs, the ability to make use of indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model. We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performance. Based on data-describing measures which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and to correlate significantly with the observed recognition performances. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate. We discuss various aspects of visual recognition problems: what the interplay between representations and classification tasks is, how different models can better adapt to the training data, and so on. We describe and analyze the aforementioned methods, which are designed to tackle different visual recognition problems but share one common characteristic: being data driven.
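One of the ideas above, a mixture of parametric classifiers initialized by clustering the training data, can be sketched roughly as follows (synthetic data and a plain k-means/linear-SVM combination chosen purely for illustration, not the thesis's models):

```python
# Rough sketch: cluster the positive training examples, fit one linear
# classifier per cluster against the negatives, and score with the maximum
# response over components. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_pos = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in ([0, 3], [3, 0])])
X_neg = rng.normal([-2, -2], 1.0, size=(100, 2))

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_pos)
components = []
for c in range(2):
    X = np.vstack([X_pos[clusters == c], X_neg])
    y = np.hstack([np.ones((clusters == c).sum()), np.zeros(len(X_neg))])
    components.append(LinearSVC(C=1.0).fit(X, y))

def score(x: np.ndarray) -> float:
    """Mixture score: maximum decision value over the per-cluster classifiers."""
    return max(m.decision_function(x.reshape(1, -1))[0] for m in components)

print(score(np.array([0.1, 3.1])), score(np.array([-2.0, -2.0])))
```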
329

Development of Acoustic Simulations using Parametric CAD Models in COMSOL

Bouilloux-Lafont, Antoine, Noya Pozo, Rubén January 2019 (has links)
With constantly changing regulations on emissions, heavy commercial vehicle manufacturers have to adapt for their products to preserve their quality while meeting these new requirements. Over the past decades, noise emissions have become a great concern and new, stricter laws demand that companies decrease their vehicle pass-by noise target values. To address the requirements from different disciplines, Scania follows a simulation-driven design process to develop new EATS concept models. Collaboration among engineers from different fields is thereby necessary in order to obtain higher-performance silencers. However, the pre-processing step for acoustic simulations is time-consuming, which can slow the concept development process. In this thesis, a new method was introduced to automate the pre-processing of silencer acoustic models and allow for design optimisation based on acoustic performance results. A common Scania product study case was provided to several theses within the NXD organisation. The collaboration among the master thesis workers aimed to demonstrate the benefits of KBE and MDO and how they can be integrated within Scania's current concept development and product introduction processes. The performed work was divided into the following steps: data collection, method development and concluding work. The first step consisted in gathering sufficient knowledge by conducting a thorough literature review and interviews. Then, an initial method was formulated and tested on a simplified silencer model. Once approved and verified, the method was applied to the EATS study case. The study case showed that a complex product can have its acoustic pre-processing step automated by ensuring good connectivity among the required software and a correct denomination of the geometrical objects involved in the simulations. The method investigated how morphological optimisations can be performed at both global and local levels to enhance the transmission loss of a silencer. Besides optimising the acoustic performance of the models, the method allowed the identification of correlations and inter-dependencies among their design variables and output parameters.
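As a point of reference for what "transmission loss" means for a silencer geometry, a simplified textbook expansion-chamber model (hypothetical dimensions, not Scania's COMSOL setup) is sketched below:

```python
# Simplified illustration: the classical transmission loss of a single
# expansion-chamber silencer, TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(k*L)),
# shown only to indicate the kind of acoustic response a parametric geometry
# would be optimised against. Dimensions are hypothetical.
import numpy as np

c = 343.0    # speed of sound in air, m/s
m = 9.0      # expansion ratio (chamber area / pipe area)
L = 0.45     # chamber length, m

f = np.linspace(20.0, 2000.0, 500)          # frequency range, Hz
k = 2.0 * np.pi * f / c                     # acoustic wavenumber
TL = 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * L) ** 2)

print(f[np.argmax(TL)], TL.max())           # frequency of peak attenuation, dB
```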
330

Fisher's Randomization Test versus Neyman's Average Treatment Test

Georgii Hellberg, Kajsa-Lotta, Estmark, Andreas January 2019 (has links)
The following essay describes and compares Fisher's randomization test and Neyman's average treatment test, with the intention of providing an easily understood blueprint for the practical execution of the tests and the conditions surrounding them. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay is structured so that the tests are first presented and evaluated, then their advantages and limitations are set against each other before they are applied to a data set as a practical example. Lastly, the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after having treated one group with nicotine patches and another with fake nicotine patches, shows a decrease in cigarette consumption for both tests. The tests differ, however, in that the result from the Neyman test can be made valid for the population of interest. Fisher's test, on the other hand, only identifies the effect within the sample; consequently, it cannot draw conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect. First, one could use the Fisher test to check whether any effect at all exists in the experiment, and then one could use the Neyman test to complement the findings of the Fisher test, for example by estimating an average treatment effect.
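A compact sketch of both procedures on made-up data (not the essay's data set) may help fix ideas:

```python
# Worked sketch: Fisher's randomization test of the sharp null of no effect
# for any unit, and Neyman's difference-in-means estimate of the average
# treatment effect with a conservative variance estimate. Data are invented.
import numpy as np

rng = np.random.default_rng(42)
treated = np.array([14, 10, 12,  9, 11,  8, 13, 10])   # cigarettes/day, nicotine patch
control = np.array([18, 15, 17, 16, 14, 19, 15, 17])   # cigarettes/day, placebo patch

obs = treated.mean() - control.mean()

# Fisher: re-randomize treatment labels under the sharp null of no effect.
pooled = np.concatenate([treated, control])
n_t = len(treated)
perm_stats = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    perm_stats.append(perm[:n_t].mean() - perm[n_t:].mean())
p_fisher = np.mean(np.abs(perm_stats) >= abs(obs))

# Neyman: ATE estimate, conservative variance, 95% normal-approximation interval.
ate = obs
var = treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control)
ci = (ate - 1.96 * np.sqrt(var), ate + 1.96 * np.sqrt(var))

print(obs, p_fisher, ate, ci)
```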
