1

The Research of investment evaluation on biotechnology company

Lu, Tsung-Hsien 04 August 2010 (has links)
As the development of the biotechnology industry in Taiwan had already established a favorable environment, the government has given the industry fresh impetus in recent years. The Executive Yuan established the "Bio Taiwan Commission (BTC)" in 2004 and raised it to the rank of national policy in order to strengthen the industry's development blueprint. The government has shown its determination to develop this emerging industry by marking biotechnology as a priority at the industry strategy conventions of the past five years and in the "Challenge 2008 National Development Plan". The management problems that accompany an emerging industry need to be resolved one by one. This study focuses on the biotechnology industry, which is characterized by high risk, long development horizons, heavy investment in research and development, and investors' conservative attitudes. Its purpose is to establish valid assessment dimensions and assessment items for the investment evaluation of venture capital, and thereby to reduce investment risk. The study examines past investment evaluation standards for start-up enterprises and generalizes overall assessment dimensions based on the characteristics of the biotechnology industry: management team; product technology; market size and marketing; financial management; and patents. It also examines individual assessment items and discusses their significance levels, including team competency, social experience, product features, product technology and manufacturing, marketing channels, rationality of financial forecasts, capital requirements, patent layout, and contracts. The results demonstrate which items investment managers regard as most important.
The research method is based on questionnaires and in-depth interviews, which establish appropriate investment standards for the biotechnology industry. The results show that the management team is the first priority among the overall assessment dimensions. Goal orientation, team management ability, product life cycle, critical technology, marketing channels, capital expenditure, stock-price rationality, patent scope, and related-party transactions form the first tier of significance. These findings reveal which evaluation factors investors regard as important and provide a reference for the future fund-raising of biotechnology companies. Key words: venture capital, investment evaluation, assessment dimensions, assessment items, significance level
2

The Swedish Housing Market : An empirical analysis of the real price development on the Swedish housing market.

Landberg, Nils January 2016 (has links)
This thesis discusses the real price development on the Swedish housing market and the effect of qualitative variables. The housing market shows signs of being overpriced, and this paper investigates whether these qualitative variables significantly affect the real price development. Price-development data were supplied by Valueguard, and Focus magazine supplied a large dataset for Swedish municipalities measuring the quality of living in each investigated area. Empirical results show that the qualitative variables and increased population have a positive effect on the real price development, while an increased cost of interest has a significant negative effect. Increased amortization rates and interest rates are assumed to slow down an unsustainable price development.
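The kind of analysis the abstract describes can be sketched as a linear regression of real price growth on a quality-of-living index, population growth, and interest cost. This is an illustrative reconstruction, not the thesis's actual model or data; all variable names and coefficients below are assumptions.

```python
import numpy as np

# Hypothetical illustration of the regression design described above.
rng = np.random.default_rng(0)
n = 200
quality = rng.normal(size=n)      # municipality quality-of-living index (assumed scale)
pop_growth = rng.normal(size=n)   # population growth rate
interest = rng.normal(size=n)     # real cost of interest

# Simulated outcome with the signs the abstract reports: positive effects of
# quality and population, negative effect of interest cost.
price_growth = (0.5 * quality + 0.3 * pop_growth - 0.4 * interest
                + rng.normal(scale=0.1, size=n))

# Ordinary least squares via the normal equations (intercept first).
X = np.column_stack([np.ones(n), quality, pop_growth, interest])
beta, *_ = np.linalg.lstsq(X, price_growth, rcond=None)
print(beta)  # signs should match the abstract: +, +, -
```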
3

Audit účetní závěrky v praxi / Audit of financial statement in practice

Martínková, Pavla January 2014 (has links)
The thesis deals with the audit of financial statements. It characterizes the evolution of auditing and the legislation an entity must follow when preparing financial statements, explains the demands placed on the person performing the audit, and describes the audit procedures from the operations preceding acceptance of the engagement to the release of the auditor's report. The practical part deals with a real audit of the financial statements of a selected company.
4

Definição do nível de significância em função do tamanho amostral / Setting the level of significance depending on the sample size

Oliveira, Melaine Cristina de 28 July 2014 (has links)
When testing hypotheses, it is conventional to fix the maximum acceptable Type I error (the probability of rejecting H0 given that it is true), also known as the significance level of the test and denoted alpha, at a single value (typically 0.05). The Type II error, beta (the probability of accepting H0 given that it is false), is usually not even calculated, and it is rarely asked whether the adopted alpha is reasonable for the problem at hand or for the sample size available. This text aims to encourage reflection on these questions, and suggests that the significance level should be a function of the sample size. Instead of fixing a single significance level, we suggest fixing the relative gravity of Type I and Type II errors, based on the losses incurred in each case, and then, given a sample size, defining the ideal significance level as the one that minimizes the linear combination of the decision errors. We present examples of simple, composite, and sharp hypotheses for the comparison of proportions, comparing the conventional approach with the proposed Bayesian approach.
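The idea of choosing alpha by minimizing a weighted sum of the two error probabilities can be illustrated in a toy setting not taken from the thesis: a one-sided test of a normal mean with known unit variance, where we scan rejection cutoffs and keep the one minimizing loss_I*alpha + loss_II*beta. The function name and loss weights are assumptions for illustration.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def optimal_alpha(n, loss_I=1.0, loss_II=1.0, mu1=1.0):
    """Significance level minimizing loss_I*alpha + loss_II*beta for
    H0: mu = 0 vs H1: mu = mu1, rejecting when the sample mean exceeds c.
    Toy setting (assumed): known unit variance, one-sided test."""
    best = None
    for i in range(2001):
        c = -1.0 + 3.0 * i / 2000           # grid of rejection cutoffs
        alpha = 1 - phi(c * sqrt(n))        # P(reject | H0)
        beta = phi((c - mu1) * sqrt(n))     # P(accept | H1)
        risk = loss_I * alpha + loss_II * beta
        if best is None or risk < best[0]:
            best = (risk, alpha)
    return best[1]

# The loss-minimizing alpha shrinks as the sample size grows,
# rather than staying fixed at 0.05:
print(optimal_alpha(10), optimal_alpha(100))
```

For equal losses the optimal cutoff sits where the two error densities cross, so with more data both errors can be driven down and the implied alpha falls well below any fixed convention.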
6

Testing Structure of Covariance Matrix under High-dimensional Regime

Wu, Jiawei January 2020 (has links)
Statisticians are interested in testing the structure of covariance matrices, especially in the high-dimensional scenario in which the dimensionality of the data matrix exceeds the sample size. Many test statistics have been introduced to test whether the covariance matrix has the identity structure (H01: Σ = I_p), the sphericity structure (H02: Σ = σ²I_p), or a diagonal structure (H03: Σ = diag(d_1, d_2, …, d_p)). These statistics work under the assumption that the data follow a multivariate normal distribution. In this thesis we compare the performance of the test statistics for each structure test under the given assumptions and when the distributional assumption is violated, and we compare their sensitivity to outliers. We conduct simulation studies, using significance level, power of the test, and goodness-of-fit tests to evaluate the performance of the structure test statistics. In conclusion, we identify recommended test statistics that perform well under different scenarios. Moreover, we find that the test statistics for the identity structure test are more sensitive than the others to changes in the distributional assumptions and to outliers, while the test statistics for the diagonal structure test tolerate changes in the data matrix better.
7

Utilisation de méthodes de l'astrogéodésie et de la géodésie spatiale pour des études de déformations de l’écorce terrestre : représentations de déformations et de leur degré de signification par des tenseurs régulièrement répartis / Using the Space geodesy for crustal deformation studies : representation of deformations and their significance level by mapping a strain tensor field

Eissa, Leila 02 March 2011 (has links)
Space geodesy tools are now strongly involved in geophysical studies. The horizontal deformation field of a study area is provided either by a velocity field or by a strain tensor field. The latter has the advantage of being independent of the reference frame in which the velocities are expressed. Nevertheless, current methods for computing strain tensors often depend on an arbitrary decomposition into elementary figures built from the geodetic measurement points, and the usual mapping of tensors by their principal axes is difficult to read and interpret without some training. This thesis addresses, first, the problem of computing a continuous deformation field as a grid of regularly spaced strain tensors that depends only weakly on the positions of the measurement points, and second, that of providing an intuitive cartographic representation of these tensors with, for the first time, a simultaneous representation of their significance level on the same map. The uncertainties of the resulting deformation field are estimated in two steps: a Monte Carlo method is applied to determine the error bars associated with the measurements, whose result yields the significance level of the tensors by comparing tensor values with their uncertainties; the constraints imposed by the geometry of the measurement-point distribution are then estimated and combined with the first source of error. The new representation was evaluated through an opinion survey of geophysicists who were offered several candidate representations.
Based on the survey results, we were able to validate the new mapping method, which highlights aspects poorly conveyed by the classical representation, and to select the graphical elements of the map that provide the most intuitive possible representation of a strain tensor field.
8

A comparative study of permutation procedures

Van Heerden, Liske 30 November 1994 (has links)
The unique problems encountered when analyzing weather data sets - that is, measurements taken while conducting a meteorological experiment - have forced statisticians to reconsider conventional analysis methods and investigate permutation test procedures. These problems are simulated in a Monte Carlo study, and the results of the parametric and permutation t-tests are compared with regard to significance level, power, and average confidence-interval length. Seven population distributions are considered - three are variations of the normal distribution, and the others are the gamma, lognormal, rectangular, and empirical distributions. The normal distribution contaminated with zero measurements is also simulated. In the simulated situations in which the variances are unequal, the permutation test procedure was also performed using other test statistics, namely the Scheffé, Welch, and Behrens-Fisher statistics. / Mathematical Sciences / M. Sc. (Statistics)
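A two-sample permutation test of the kind compared in this study can be sketched as follows. For equal group sizes, ranking permutations by the difference in means is equivalent to ranking by the pooled t statistic, so the sketch uses the mean difference; the skewed "rainfall-like" data are an assumption for illustration, not the study's data.

```python
import numpy as np

def perm_t_test(x, y, n_perm=2000, rng=None):
    """Two-sided two-sample permutation test: compare the observed mean
    difference with its distribution under random relabelling of the
    pooled sample (equivalent to the t statistic for equal group sizes)."""
    if rng is None:
        rng = np.random.default_rng(0)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm              # permutation p-value

rng = np.random.default_rng(42)
x = rng.gamma(2.0, size=30)            # skewed data, e.g. rainfall amounts
y = rng.gamma(2.0, size=30) + 1.5      # same shape, shifted location
print(perm_t_test(x, y))               # small p-value: shift is detected
```

Unlike the parametric t-test, the reference distribution here is generated from the data themselves, which is what makes the procedure attractive for the non-normal distributions listed in the abstract.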
9

Modelagem de circuitos neurais do sistema neuromotor e proprioceptor de insetos com o uso da transferência de informação entre conexões neurais / Neural circuits modeling of insects neuromotor system based on information transfer approach and neural connectivity

Endo, Wagner 31 March 2014 (has links)
This work develops a bio-inspired model of the neural circuit of insects. The model is obtained through the first-order analysis given by the STA (Spike Triggered Average) and by the transfer of information between neural signals, applying techniques based on identifying the time delays of maximum information coherence. Two concepts from information theory are used: the DMI (Delayed Mutual Information) and the TE (Transfer Entropy). Both measure information transfer, each with its own characteristics. The DMI is computationally simpler than the TE, since it depends on the statistical analysis of second-order probability density functions, whereas the TE depends on third-order functions; where computational resources are limited, this must be taken into account. Information delays are identified very well by the DMI. However, the DMI fails to distinguish the direction of information flow in systems with indirect information transfer or superposition of information. In those cases the TE is the more suitable tool for determining the direction of flow, owing to the conditional dependence imposed by the common history of the analyzed signals. These issues arise in many neural circuits: in the metathoracic ganglion of insects, local interneurons exhibit different patterns of pathways with superposition of information, since they receive signals from different sensory neurons for the movement of the animals' legs. The main objective of this work is to propose a model of the insect's neural circuit, mapping how the neural signals behave when the leg is subjected to a set of random imposed movements.
The neural responses are reflexes evoked by tactile stimuli that generate movement of the femoro-tibial joint of the hind leg. In these circuits the signals are processed by spiking and nonspiking local interneurons operating in parallel on the information coming from the sensory neurons. The interneurons receive input from mechanoreceptors of the hind leg and the motor joint. Their main characteristic is the ability to communicate with other neurons with or without the presence of nerve impulses (spiking and nonspiking), forming a neural circuit with input signals (sensory neurons) and outputs (motor neurons). The proposed algorithms cover everything from the random generation of mechanical movements and the stimuli in the sensory neurons arriving at the metathoracic ganglion to the responses of the motor neurons. Algorithms and their pseudocode are implemented for both the DMI and the TE. The surrogate-data technique is used to infer measures of statistical significance for the maximum information coherence between neural signals, and the surrogate results are used to compensate the bias errors of the information-transfer measures. An algorithm based on the IAAFT (Iterative Amplitude Adjusted Fourier Transform) generates surrogate data with the same power spectrum as, but different distributions from, the original signals; the DMI and TE values obtained on surrogates provide the baselines corresponding to minimal information transfer. Simulated data are also used to discuss the effects of sample size and of the strength of information coupling. The collected neural signals are available in a database of several experiments on the metathoracic ganglion of locusts.
Each experiment, however, contains few simultaneously recorded signals, so across experiments the signals are subject to variations in sample size and to noise that affects absolute measures of information transfer. To map the neural connections, a methodology based on normalization and bias-error compensation of the information-transfer calculations was used. The measures are normalized by the total entropies of the system: for the DMI, by the geometric mean of the entropies of X and Y; for the TE, by the CMI (Conditional Mutual Information). From these approaches, based on the STA and on information transfer, a structural model of the neural circuit of the locust neuromotor system is presented. Results with the STA and the DMI for the sensory neurons suggest new hypotheses about the functioning of this part of the FeCO (Femoral Chordotonal Organ). For each type of neuron, multiple pathways in the neural circuit were identified through the time delays and the values of maximum information coherence: the spiking interneurons yielded two pathway patterns, while the nonspiking interneurons revealed three distinct patterns. These results were obtained computationally and are consistent with the biological models described in Burrows (1996).
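The delayed mutual information at the core of this approach can be sketched with a histogram estimator: compute I(X_t; Y_{t+d}) over a range of delays d and take the delay of maximum coherence. This is a minimal illustration on synthetic signals, not the thesis's implementation; the bin count and the surrogate-data bias compensation it describes are omitted.

```python
import numpy as np

def delayed_mi(x, y, delay, bins=8):
    """Histogram estimate of the delayed mutual information
    I(X_t ; Y_{t+delay}) in bits. Bias compensation (e.g. via
    IAAFT surrogate data, as in the thesis) is omitted."""
    x, y = x[:len(x) - delay], y[delay:]          # align at the given lag
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                         # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy example: y is a noisy copy of x shifted by 5 samples, so the DMI
# profile should peak at delay = 5, recovering the transfer lag.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 5) + 0.3 * rng.normal(size=5000)
dmis = [delayed_mi(x, y, d) for d in range(10)]
print(int(np.argmax(dmis)))  # → 5
```

As the abstract notes, a DMI peak like this identifies the lag of maximum coherence but not the direction of flow when pathways are indirect or superposed; that is where the (conditional) transfer entropy takes over.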
