431

Understanding And Guiding Software Product Lines Evolution Based On Requirements Engineering Activities

Oliveira, Raphael Pereira de 10 September 2015 (has links)
Software Product Line (SPL) has emerged as an important strategy to cope with the increasing demand for large-scale product customization. SPL has provided companies with an efficient and effective means of delivering products with higher quality at a lower cost, when compared to traditional software engineering strategies. However, such benefits do not come for free: an SPL must deal with the evolution of its assets to support changes in the environment and in user needs. These changes in an SPL are first represented by requirements. Thus, an SPL should manage the commonality and variability of products by means of a “Requirements Engineering (RE) - change management” process. Hence, besides dealing with the reuse and evolution of requirements in an SPL, RE for SPL also needs an approach to represent commonality and variability information explicitly (e.g., through feature models and use cases). To understand evolution in SPL, this thesis presents two empirical studies within industrial SPL projects and a systematic mapping study on SPL evolution. The two empirical studies evaluated Lehman’s laws of software evolution in two industrial SPL projects, demonstrating that most of the laws hold in SPL environments. The systematic mapping study identified existing approaches in the area and revealed research gaps, such as the fact that most of the proposed approaches evolve SPL requirements in an ad hoc way and were evaluated through feasibility studies. These results motivated the systematization of SPL processes through guidelines, starting with the SPL requirements. An approach to specify SPL requirements, called Feature-Driven Requirements Engineering (FeDRE), was therefore proposed. FeDRE specifies SPL requirements in a systematic way, driven by a feature model. To deal with the evolution of FeDRE requirements, a second approach, called Feature-Driven Requirements Engineering Evolution (FeDRE2), was presented; FeDRE2 guides, in a systematic way, the SPL evolution based on RE activities. Both approaches, FeDRE and FeDRE2, were evaluated and the results, although preliminary, show that they were perceived as easy to use and useful, contributing to the improvement and systematization of SPL processes.
432

Essays on illiquidity premium

Pereira, Ricardo Buscariolli 23 May 2014 (has links)
This dissertation is composed of three related essays on the relationship between illiquidity and returns. Chapter 1 describes the time-series properties of the relationship between market illiquidity and market return, using both yearly and monthly datasets. We find that stationarized versions of the illiquidity measure carry a positive, significant, and puzzlingly high premium. In Chapter 2, we estimate the response of illiquidity to a shock to returns, assuming that causality runs from returns to illiquidity, and find that an increase in firms' returns lowers illiquidity. In Chapter 3 we take both effects into account, addressing the endogeneity of returns and illiquidity, to estimate the liquidity premium; we find evidence that the illiquidity premium is smaller than the previous evidence suggests. Finally, Chapter 4 outlines topics for future research, describing a return decomposition with illiquidity costs.
433

Impact of Product Market Competition on Expected Returns

Liu, Chung-Shin 12 1900 (has links)
x, 94 p. : ill. (some col.) / This paper examines how competition faced by firms affects asset risk and expected returns. Contrary to Hou and Robinson's (2006) findings, I find that cross-industry variation in competition, as measured by the concentration ratio, is not a robust determinant of unconditional expected stock returns. In contrast, within-industry competition, as measured by relative price markup, is positively related to expected stock returns. Moreover, this relation is not captured by commonly used models of expected returns. Using the Markov regime-switching model advocated by Perez-Quiros and Timmermann (2000), I test and find support for Aguerrevere's (2009) recent model of competition and risk dynamics. In particular, systematic risk is greater in more competitive industries during bad times and greater in more concentrated industries during good times. In addition, real investment by firms facing greater competition leads real investment by firms facing less competition, supporting Aguerrevere's notion that less competition results in higher growth options and hence higher risk in good times. / Committee in charge: Dr. Roberto Gutierrez, Chair; Dr. Roberto Gutierrez, Advisor; Dr. Diane Del Guercio, Inside Member; Dr. John Chalmers, Inside Member; Dr. Bruce Blonigen, Outside Member
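For readers unfamiliar with the regime-switching setup mentioned above, the following is a minimal sketch of estimating regime-dependent market betas with a two-state Markov-switching regression in Python (statsmodels). The data file and column names are placeholders, and this illustrates the general technique only, not the dissertation's exact specification.

    # Illustrative two-state Markov regime-switching market model: the loading on
    # the market excess return (beta) is allowed to differ across regimes, in the
    # spirit of Perez-Quiros and Timmermann (2000).  File and column names are
    # placeholders, not the data actually used in the dissertation.
    import pandas as pd
    from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

    data = pd.read_csv("industry_portfolio_returns.csv")   # hypothetical monthly data
    y = data["portfolio_excess_return"]
    x = data[["market_excess_return"]]

    model = MarkovRegression(y, k_regimes=2, exog=x,
                             switching_variance=True)      # beta and variance switch
    res = model.fit()
    print(res.summary())                                   # regime-specific betas
    print(res.smoothed_marginal_probabilities[0])          # P(regime 0) per month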
434

Surfaces quantile : propriétés, convergences et applications / Quantile surfaces : properties, convergence and applications

Ahidar-Coutrix, Adil 03 July 2015 (has links)
Dans la thèse on introduit et on étudie une généralisation spatiale sur $\R^d$ du quantile réel usuel sous la forme d'une surface quantile via des formes $\phi$ et d'un point d'observation $O$. Notre point de départ est de simplement admettre la subjectivité due à l'absence de relation d'ordre totale dans $\R^d$ et donc de développer une vision locale et directionnelle des données. Ainsi, les observations seront ordonnées du point de vue d'un observateur se trouvant à un point $O \in \R^d$. Dans le chapitre 2, on introduit la notion du quantile vue d'un observateur $O$ dans la direction $u \in \Sd$ et de niveau $\alpha$ via des demi-espaces orthogonaux à chaque direction d'observation. Ce choix de classe implique que les résultats de convergence ne dépendent pas du choix de $O$. Sous des hypothèses minimales de régularité, l'ensemble des points quantile vue de $O$ définit une surface fermée. Sous hypothèses minimales, on établit pour les surfaces quantile empiriques associées les théorèmes limites uniformément en le niveau de quantile et la direction d'observation, avec vitesses asymptotiques et bornes d'approximation non-asymptotiques. Principalement la LGNU, la LLI, le TCLU, le principe d'invariance fort uniforme puis enfin l'approximation du type Bahadur-Kiefer uniforme, et avec vitesse d'approximation. Dans le chapitre 3, on étend les résultats du chapitre précédent au cas où les formes $\phi$ sont prises dans une classe plus générale (fonctions, surfaces, projections géodésiques, etc) que des demi-espaces qui correspondent à des projections orthogonales par direction. Dans ce cadre plus général, les résultats dépendent fortement du choix de $O$, et c'est ce qui permet de tirer des interprétations statistiques. Dans le chapitre 4, des conséquences méthodologiques en statistique inférentielle sont tirées. Tout d'abord on introduit une nouvelle notion de champ de profondeurs directionnelles baptisée champ d'altitude. Ensuite, on définit une notion de distance entre lois de probabilité, basée sur la comparaison des deux collections de surfaces quantile du type Gini-Lorrentz. La convergence avec vitesse des mesures empiriques pour cette distance quantile, permet de construire différents tests en contrôlant leurs niveaux et leurs puissances. Enfin, on donne une version des résultats dans le cas où une information auxiliaire est disponible sur une ou plusieurs coordonnées sous la forme de la connaissance exacte de la loi sur une partition finie. / The main issue of the thesis is the development of spatial generalizations on $\R^d$ of the usual real quantile. Facing the usual fact that $\R^d$ is not naturally ordered, our idea is to simply admit subjectivity and thus to define a local viewpoint rather than a global one, anchored at some reference point $O$ and an arbitrary shape $\phi$, with the motivation of crossing the information gathered by changing the viewpoint $O$, the shape $\phi$ and the quantile order $\alpha$. In Chapter 2, we study the spatial quantile points seen from an observer $O$ in a direction $u \in \Sd$ of level $\alpha$ through the class of the half-spaces orthogonal to the direction $u$. This choice implies that the convergence theorems do not depend on the choice of $O$. Under minimal regularity assumptions, the set of all quantile points seen from $O$ is a closed surface.
Under minimal assumptions, we establish for the associated empirical quantile surfaces the convergence theorems uniformly in the quantile level and the observation direction, with asymptotic rates and non-asymptotic approximation bounds. Mainly, we establish the ULLN, the ULIL, the UCLT, the uniform strong invariance principle and, finally, the Bahadur-Kiefer type embedding, with its approximation rate. In Chapter 3, all the results of the previous chapter are extended to the case where the shapes $ \phi $ are taken in a more general class (functions, surfaces, geodesic projections, etc.) than the half-spaces corresponding to orthogonal projections. In this general setting, the results depend strongly on the choice of $ O $. It is this dependence which permits drawing statistical interpretations: mode detection, mass localization, etc. In Chapter 4, some methodological consequences in inferential statistics are drawn. First, we introduce a new concept of directional depth fields called altitude fields. In a second application, a new distance between probability distributions is defined, based on the comparison of two collections of quantile surfaces, which are indexes of the Gini-Lorrentz type. The convergence, with rates, of the empirical quantile measures for this distance allows different tests to be built while controlling their level and their power. A third use of the quantile surfaces concerns the case where $ \alpha = 1/2$. Finally, we give a version of our theorems in the case where auxiliary information is available on one or more coordinates of the random variable. Assuming that the probabilities of the elements of a finite partition are known, the asymptotic variance of the limiting process decreases, and simulations with few points clearly show how the estimated surfaces are drawn toward the true ones.
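As a point of reference for the construction described above, one possible formalization (an editorial reading of the abstract, not necessarily the exact definition used in the thesis) of the quantile of level $\alpha$ seen from $O$ in the direction $u$ through orthogonal half-spaces is
$$ q_{\alpha}(u) = \inf\{\, t \in \R \;:\; P\big(\langle X - O,\, u\rangle \le t\big) \ge \alpha \,\}, \qquad Q_{\alpha}(u) = O + q_{\alpha}(u)\, u, \qquad u \in \Sd,\ \alpha \in (0,1), $$
so that the quantile surface of level $\alpha$ seen from $O$ is the set $\{ Q_{\alpha}(u) : u \in \Sd \}$, and the empirical surface is obtained by replacing $P$ with the empirical measure of the sample.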
435

Previsão de longo prazo de níveis no sistema hidrológico do TAIM / Long-term forecasting of water levels in the Taim Hydrological System

Galdino, Carlos Henrique Pereira Assunção January 2015 (has links)
O crescimento populacional e a degradação dos corpos d’água vêm exercendo pressão à agricultura moderna, a proporcionar respostas mais eficientes quanto ao uso racional da água. Para uma melhor utilização dos recursos hídricos, faz-se necessário compreender o movimento da água na natureza, onde o conhecimento prévio dos fenômenos atmosféricos constitui uma importante ferramenta no planejamento de atividades que utilizam os recursos hídricos como fonte primária de abastecimento. Nesse trabalho foram realizadas previsões de longo prazo com antecedência de sete meses e intervalo de tempo mensal de níveis no Sistema Hidrológico do Taim, utilizando previsões de precipitação geradas por um modelo de circulação global. Para realizar as previsões foi elaborado um modelo hidrológico empírico de regressão, onde foram utilizadas técnicas estatísticas de análise e manipulação de séries históricas para correlacionar os dados disponíveis aos níveis (volumes) de água no banhado. Partindo do pressuposto que as previsões meteorológicas são a maior fonte de incerteza na previsão hidrológica, foi utilizada a técnica de previsão por conjunto (ensemble) e dados do modelo COLA, com 30 membros, para quantificar as incertezas envolvidas. Foi elaborado um algoritmo para gerar todas as possibilidades de regressão linear múltipla com os dados disponíveis, onde oito equações candidatas foram selecionadas para realizar as previsões. Numa análise preliminar dos dados de entrada de precipitações previstas foi observado que o modelo de circulação global não representou os extremos observados de forma satisfatória, sendo executado um processo de remoção do viés. O modelo empírico de simulação foi posteriormente executado em modo contínuo, gerando previsões de longo prazo de níveis para os próximos sete meses, para cada mês no período de junho/2004 a dezembro/2011. Os resultados obtidos mostraram que a metodologia utilizada obteve bons resultados, com desempenho satisfatório até o terceiro mês, decaindo seu desempenho nos meses posteriores, mas configurando-se em uma ferramenta para auxílio à gestão dos recursos hídricos do local de estudo. / Population growth and the degradation of water bodies have been pressuring modern agriculture to provide more efficient responses regarding the rational use of water. For a better use of water resources, it is necessary to understand the movement of water in nature, where prior knowledge of atmospheric phenomena is an important tool in planning activities that use water as the primary source of supply. In this study, long-term forecasts of water levels (seven-month horizon, monthly time step) in the Taim Hydrological System were performed, using rainfall forecasts generated by a global circulation model as input. To perform the predictions, an empirical hydrological regression model was developed, based on statistical techniques for the analysis and manipulation of historical data, correlating the available input data with the levels (volumes) of water in the wetland. Assuming that weather forecasts are the major source of uncertainty in hydrological forecasting, we used an ensemble forecast from COLA 2.2, with 30 members, to quantify the uncertainties involved. An algorithm was developed to generate all possible multiple linear regression models with the available data, from which eight candidate equations were selected for hydrological forecasting.
In a preliminary analysis of the precipitation forecasts, it was observed that the global circulation model did not represent the observed extremes satisfactorily, so a bias-removal procedure was carried out. The empirical model was then run continuously, generating water-level forecasts for the next seven months, for each month of the period June/2004 to December/2011. The results showed that the methodology performs satisfactorily up to lead time three (the third month into the future), with performance decreasing at longer lead times. Despite the loss of performance at the last lead times, the model is a support tool that can help decision making in the management of the water resources of the study area.
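The candidate-equation search and ensemble application described in this abstract can be illustrated with a short sketch (illustrative only: file names, predictor columns and the selection criterion are assumptions, not the model actually built in the thesis):

    # Illustrative sketch: exhaustive multiple-linear-regression search over the
    # available predictors, followed by application of the best candidate to the
    # 30 ensemble members.  Column and file names are hypothetical.
    from itertools import combinations
    import numpy as np
    import pandas as pd

    hist = pd.read_csv("taim_monthly_history.csv")         # hypothetical history
    predictors = ["rain_forecast", "level_lag1", "level_lag2", "evaporation"]
    y = hist["level"].to_numpy()

    def fit_ols(X, y):
        X1 = np.column_stack([np.ones(len(X)), X])         # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return beta, 1.0 - resid.var() / y.var()           # coefficients and R^2

    # Generate every possible regression with the available data and keep the
    # best candidates (the thesis selected eight candidate equations).
    candidates = []
    for k in range(1, len(predictors) + 1):
        for cols in combinations(predictors, k):
            beta, r2 = fit_ols(hist[list(cols)].to_numpy(), y)
            candidates.append((r2, cols, beta))
    best = sorted(candidates, key=lambda c: c[0], reverse=True)[:8]

    # Apply the top equation to each of the 30 bias-corrected ensemble members.
    members = pd.read_csv("cola_ensemble_members.csv")     # hypothetical members
    r2, cols, beta = best[0]
    X1 = np.column_stack([np.ones(len(members)), members[list(cols)].to_numpy()])
    level_forecast = X1 @ beta                             # one forecast per member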
436

Detecção e diagnóstico de falhas baseado em modelos empíricos no subespaço das variáveis de processo (EMPVSUB) / Fault detection and diagnosis based on empirical models in the subspace of the process variables (EMPVSUB)

Bastidas, Maria Eugenia Hidalgo January 2018 (has links)
O escopo desta dissertação é o desenvolvimento de uma metodologia para a detecção e diagnóstico de falhas em processos industriais baseado em modelos empíricos no subespaço das variáveis do processo com expansão não linear das bases. A detecção e o diagnóstico de falhas são fundamentais para aumentar a segurança, confiabilidade e lucratividade de processos industriais. Métodos qualitativos, quantitativos e baseados em dados históricos do processo têm sido estudados amplamente. Para demonstrar as vantagens da metodologia proposta, ela será comparada com duas metodologias consideradas padrão, uma baseada em Análise de Componentes Principais (PCA) e a outra baseada em Mínimos Quadrados Parciais (PLS). Dois estudos de casos são empregados nessa comparação. O primeiro consiste em um tanque de aquecimento com mistura e o segundo contempla o estudo de caso do processo da Tennessee Eastman. As vantagens da metodologia proposta consistem na redução da dimensionalidade dos dados a serem usados para um diagnóstico adequado, além de detectar efetivamente a anormalidade e identificar as variáveis mais relacionadas à falha, permitindo um melhor diagnóstico. Além disso, devido à expansão das bases dos modelos é possível trabalhar efetivamente com sistemas não lineares, através de funções polinomiais e exponenciais dentro do modelo. Adicionalmente o trabalho contém uma metodologia de validação dos resultados da metodologia proposta, que consiste na eliminação das variáveis do melhor modelo obtido pelos Modelos Empíricos, através do método Backward Elimination. A metodologia proposta forneceu bons resultados na área do diagnóstico de falhas: conseguiu-se uma grande diminuição da dimensionalidade nos sistemas estudados em até 93,55%, bem como uma correta detecção de anormalidades e permitiu a determinação das variáveis mais relacionadas às anormalidades do processo. As comparações feitas com as metodologias padrões permitiram demonstrar que a metodologia proposta tem resultados superiores, pois consegue detectar as anormalidades em um espaço dimensional reduzido, detectando comportamentos não lineares e diminuindo incertezas. / Fault detection and diagnosis are critical to increasing the safety, reliability, and profitability of industrial processes. Qualitative and quantitative methods, as well as methods based on process historical data, have been extensively studied. This work proposes a methodology for fault detection and diagnosis based on process historical data and on the creation of empirical models with nonlinear basis expansion (polynomial and exponential bases) and regularization techniques. To demonstrate the advantages of the proposed approach, it is compared with two standard methodologies, Principal Component Analysis (PCA) and Partial Least Squares (PLS), in two case studies: a stirred heating tank and the Tennessee Eastman Process. The advantages of the proposed methodology are the reduction of the dimensionality of the data used, in addition to the effective detection of abnormalities and the identification of the variables most related to the fault. Furthermore, the work contains a methodology to validate the diagnosis results, consisting of variable elimination from the best empirical models with the Backward Elimination algorithm. The proposed methodology achieved a promising performance, since it decreased the dimensionality of the studied systems by up to 93.55%, reducing uncertainties and capturing nonlinear behaviors.
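A minimal sketch of the kind of monitoring pipeline this abstract describes: a nonlinear (polynomial and exponential) basis expansion of the process variables, a regularized empirical model fitted to normal-operation data, and backward elimination of variables. Ridge regression stands in for the regularized empirical model, and the basis, threshold and data are assumptions; this is not the EMPVSUB implementation itself.

    # Illustrative sketch: basis-expanded, regularized empirical model with
    # backward elimination for fault detection.  The basis, threshold and data
    # are assumptions; Ridge regression is used as a stand-in regularized model.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    def expand(X):
        """Polynomial and exponential expansion of the process variables."""
        return np.hstack([X, X**2, np.exp(-np.abs(X))])

    def backward_elimination(X, y, min_vars=2):
        """Drop, one at a time, the variable whose removal hurts the fit least."""
        keep = list(range(X.shape[1]))
        while len(keep) > min_vars:
            scores = []
            for j in keep:
                cols = [c for c in keep if c != j]
                m = Ridge(alpha=1.0).fit(expand(X[:, cols]), y)
                scores.append((r2_score(y, m.predict(expand(X[:, cols]))), j))
            _, drop = max(scores)
            keep.remove(drop)
        return keep

    # Normal-operation data (synthetic here): process variables X, monitored y.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = X[:, 0] + 0.5 * X[:, 3] ** 2 + 0.1 * rng.normal(size=500)

    keep = backward_elimination(X, y)                   # reduced variable subspace
    model = Ridge(alpha=1.0).fit(expand(X[:, keep]), y)
    resid = y - model.predict(expand(X[:, keep]))
    limit = 3.0 * resid.std()                           # simple detection threshold

    def is_faulty(x_new, y_new):
        """Flag a sample whose residual exceeds the normal-operation limit."""
        pred = model.predict(expand(x_new[keep][None, :]))[0]
        return abs(y_new - pred) > limit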
437

Métodos Semi-Empíricos: princípios básicos e aplicações / Semi-Empirical Methods: basics principles and uses

Bretanha Neto, Nelson [UNESP] 18 December 2015 (has links)
Neste trabalho investigamos os efeitos decorrentes da presença de heteroátomos numa folha de “supercoroneno” (molécula C58H18 utilizada como uma primeira aproximação para o grafeno). Mais especificamente, utilizando o método semi-empírico PM6, implementado no pacote computacional MOPAC2012®, estudamos as diferentes curvas de energia (calor de formação). As curvas estudadas foram obtidas pela passagem de um átomo de carbono sobre uma folha de supercoroneno na qual existe um heteroátomo (foram utilizadas três diferentes estruturas: C57H18Si, C57H18Ge e C57H18Sn). Nesse contexto, estudamos também o efeito destes heteroátomos sobre a densidade eletrônica e os orbitais HOMO e LUMO das diferentes estruturas. / In this work we investigate the effects of the presence of heteroatoms on a “supercoronene” sheet (the C58H18 molecule used as a molecular model for graphene). Specifically, we used the semiempirical method PM6, as implemented in the MOPAC2012® software, to study several (formation) energy curves. The energy curves were obtained by positioning a carbon atom over a “supercoronene” sheet in which there is a heteroatom (this was done for three different structures: C57H18Si, C57H18Ge and C57H18Sn). In this context, the effect of these heteroatoms on the electronic density distributions, as well as their effect on the HOMO and LUMO orbitals, was also investigated.
438

Investigações sobre raciocínio e aprendizagem temporal em modelos conexionistas / Investigations about temporal reasoning and learning in connectionist models

Borges, Rafael Vergara January 2007 (has links)
A inteligência computacional é considerada por diferentes autores da atualidade como o destino manifesto da Ciência da Computação. A modelagem de diversos aspectos da cognição, tais como aprendizagem e raciocínio, tem sido a motivação para o desenvolvimento dos paradigmas simbólico e conexionista da inteligência artificial e, mais recentemente, para a integração de ambos com o intuito de unificar as vantagens de cada abordagem em um modelo único. Para o desenvolvimento de sistemas inteligentes, bem como para diversas outras áreas da Ciência da Computação, o tempo é considerado como um componente essencial, e a integração de uma dimensão temporal nestes sistemas é fundamental para conseguir uma representação melhor do comportamento cognitivo. Neste trabalho, propomos o SCTL (Sequential Connectionist Temporal Logic), uma abordagem neuro-simbólica para integrar conhecimento temporal, representado na forma de programas em lógica, em redes neurais recorrentes, de forma que a caracterização semântica de ambas representações sejam equivalentes. Além da estratégia para realizar esta conversão entre representações, e da verificação formal da equivalência semântica, também realizamos uma comparação da estratégia proposta com relação a outros sistemas que realizam representação simbólica e temporal em redes neurais. Por outro lado, também descrevemos, de forma algorítmica, o comportamento desejado para as redes neurais geradas, para realizar tanto inferência quanto aprendizagem sob uma ótica temporal. Este comportamento é analisado em diversos experimentos, buscando comprovar o desempenho de nossa abordagem para a modelagem cognitiva considerando diferentes condições e aplicações. / Computational intelligence is considered, by different authors in present days, the manifest destiny of Computer Science. The modelling of different aspects of cognition, such as learning and reasoning, has been a motivation for the integrated development of the symbolic and connectionist paradigms of artificial intelligence. More recently, such integration has led to the construction of models catering for integrated learning and reasoning. The integration of a temporal dimension into such systems is a relevant task, as it allows for a richer representation of cognitive behaviour, since time is considered an essential component in the development of intelligent systems. This work introduces SCTL (Sequential Connectionist Temporal Logic), a neural-symbolic approach for integrating temporal knowledge, represented as logic programs, into recurrent neural networks. This integration is done in such a way that the semantic characterizations of both representations are equivalent. Besides the strategy for translating one representation into the other, and the verification of their semantic equivalence, we also compare the proposed approach to other systems that perform symbolic and temporal representation in neural networks. Moreover, we describe, algorithmically, the intended behaviour of the generated neural networks for both temporal inference and learning. This behaviour is then evaluated by means of several experiments, in order to analyse the performance of the model in cognitive modelling under different conditions and applications.
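To make the flavour of such a translation concrete, the following is a generic sketch (in the spirit of classic neural-symbolic translations, not the actual SCTL algorithm) in which each rule of a small temporal program becomes a hidden threshold unit and "previous time step" atoms are fed back through recurrent connections. The toy program and weights are illustrative assumptions.

    # Generic illustration of compiling a toy temporal logic program into a
    # recurrent threshold network: one hidden (AND) unit per rule, one output
    # (OR) unit per atom, and feedback links for previous-time-step atoms.
    # This is NOT the SCTL translation, only the general neural-symbolic idea.
    import numpy as np

    atoms = ["a", "b", "c"]
    idx = {p: i for i, p in enumerate(atoms)}
    # Toy program:  a <- prev(c)   ;   b <- a   ;   c <- (fact, always holds)
    rules = [("a", [], ["c"]),      # (head, current-time body, previous-time body)
             ("b", ["a"], []),
             ("c", [], [])]

    n, m = len(atoms), len(rules)
    W_cur, W_prev = np.zeros((m, n)), np.zeros((m, n))
    theta = np.zeros(m)                                # hidden-unit thresholds
    W_out = np.zeros((n, m))
    for r, (head, body, prev) in enumerate(rules):
        for p in body:
            W_cur[r, idx[p]] = 1.0
        for p in prev:
            W_prev[r, idx[p]] = 1.0
        theta[r] = len(body) + len(prev)               # AND: all body atoms needed
        W_out[idx[head], r] = 1.0                      # OR over rules with same head

    def network_step(current, previous):
        hidden = (W_cur @ current + W_prev @ previous >= theta).astype(float)
        return (W_out @ hidden >= 1.0).astype(float)

    previous = np.zeros(n)
    for t in range(3):
        current = np.zeros(n)
        for _ in range(n):                             # settle to a fixed point at t
            current = np.maximum(current, network_step(current, previous))
        print("t =", t, {p: int(current[idx[p]]) for p in atoms})
        previous = current                             # recurrent feedback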
439

A pesquisa empírica sobre o planejamento da execução instrumental : uma reflexão crítica do sujeito de um estudo de caso / Empirical research on instrumental performance planning: a critical reflection by the subject of a case study

Barros, Luís Cláudio January 2008 (has links)
Este trabalho propõe discutir o planejamento da execução instrumental (piano) na área de Práticas Interpretativas. Foram examinadas as diferentes abordagens e os posicionamentos de especialistas na área proposta através da reflexão crítica do conhecimento produzido pelas pesquisas empíricas. O objetivo foi examinar as temáticas, as estratégias de estudo, o campo teórico, os procedimentos metodológicos, a sistematização do processo de aprendizagem do repertório pianístico e as relações interatuantes ocorrentes nas pesquisas descritivas de delineamento experimental, nos estudos de caso, estudos com entrevista e levantamentos selecionados. Foi realizada uma análise crítica sobre as investigações que abordaram o comportamento da prática e as estratégias de estudo, além de uma reflexão sobre a construção dos referenciais dos trabalhos empíricos e os elementos envolvidos no processo de pesquisa. O presente trabalho estabeleceu seus pilares centrais no estudo crítico sobre as pesquisas empíricas na área do planejamento da execução instrumental (o pilar teórico) e a sua conexão com a experiência pessoal deste autor como sujeito de um estudo de caso durante seu Estágio de Doutorado no Exterior (o pilar prático, vivenciado). No estudo de caso, duas estratégias de estudo para resgatar as informações musicais da memória de longa duração foram elaboradas pelo sujeito para melhorar seu desempenho no Teste Experimental II. A partir da inter-relação entre os pilares e de seus reflexos na construção crítica sobre a estruturação dos processos investigativos, buscou-se discutir diversos pontos nevrálgicos nas pesquisas empíricas, como as relações hierárquicas entre pesquisador e sujeito e as possíveis lacunas investigativas. Foi sugerida uma proposta de modelo para maior interação entre as áreas da Psicologia da Música e da área de Práticas Interpretativas. Desse modo, buscou-se examinar as implicações pedagógicas provenientes do profundo conhecimento das etapas do planejamento e do entendimento de como se constrói uma execução instrumental em nível de excelência. / The present thesis discusses research on piano performance planning. It examined different research approaches and points of view of specialists in the music field through a critical reflection about the knowledge produced by empirical research. The goal was to investigate the subjects, especially the strategies, the theoretical framework, methodological procedures, the learning process and the relationship between experimental research, case studies, surveys and interview studies. A critical analysis of the research studies considering the behavior of the musicians during practice, their strategies, the construction of the theoretical references and the elements involved was undertaken. The present work established two pillars: (1) theoretical, focused on the empirical research, and (2) practical, related to the personal experience of the author as the subject in a case study during his doctoral internship. During that time, the present author elaborated two practice strategies to reinforce memory retrieval and to aid him in succeeding in Experimental Test II. As a result of the relationship between these two pillars of construction in the investigative processes, the present work discusses basic points relating to empirical research, such as the investigative problems and the hierarchy between the experimenter and the subject. The author proposed a model of interaction between the Psychology and Music areas.
It is aimed at examining the pedagogical implications based on the profound understanding of performance planning, as well as how to construct an instrumental performance at a level of excellence.
440

Essays on Environmental Spillovers in Supply Chains

January 2018 (has links)
abstract: The phenomenon of global warming and climate change has increasingly attracted attention by researchers in the field of supply chain and operations management. Firms have developed efficient plans and intervention measures to reduce greenhouse gas (GHG) emissions. While a majority of research in supply chain management has adopted a firm-centric view to study environmental management, this dissertation focuses on the context of GHG emissions reduction by considering a firm’s vertical and horizontal relationships with other parties, and the associated spillover effects. A theoretical framework is first proposed to facilitate the field's understanding of the possible spillover effects in GHG emissions reduction via vertical and horizontal interactions. Two empirical studies are then presented to test the spillover effect in GHG emissions reduction, focusing on the vertical interactions - when firms interact with their supply chain members. Drawing data from Bloomberg Environmental Social and Governance, and Bloomberg SPLC, this study conducts econometric analyses using various models. The results suggest that first, a higher level of supply chain GHG emissions is associated with the adoption of emissions reduction programs by a firm, and that this supply chain leakage contributes to the firm’s financial performance. Second, a firm's supply base innovativeness can contribute to its internal GHG emissions reduction, and this effect is contingent on a firm's supply base structure. As such, this dissertation answers the recent call in the field of supply chain and operations management for more empirical research in socially and environmentally responsible value chains. Further, this study contributes to the literature by providing a better understanding of the externalities that value chain members can impose on one another when pursuing sustainability goals. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2018
