21 |
ADOÇÃO DA AGRICULTURA DE PRECISÃO NA AMAZÔNIA: ESTUDO DE CASO NA REGIÃO CONE SUL DO ESTADO DE RONDÔNIA / ADOPTION OF PRECISION AGRICULTURE IN THE AMAZON: A CASE STUDY IN THE SOUTHERN CONE REGION OF THE STATE OF RONDÔNIA
Batista, Jessé Alves, 09 August 2016
In recent years, the adoption of Precision Agriculture (PA) has been a research subject worldwide, and understanding the factors related to it is crucial for developing strategies that can meet the need for dissemination and use of these technologies. The objective of this work was to identify the level of adoption of PA tools in the Southern Cone of Rondônia, a soybean- and corn-producing region in the Brazilian Amazon, and to contribute to an outline of the state of the art of the use of these technologies in Brazil. Data were gathered through a questionnaire applied to 37 PA-adopting farmers in the region. The statistical treatment of the data was based on two methodologies: descriptive statistics, reporting percentages, means, and standard deviations, and Correspondence Analysis (CA), for the association of variables. To compare the PA adoption indexes in Rondônia with those of other Brazilian regions where adoption has been studied, Confidence Intervals (CI) were calculated at the 5% significance level and hypothesis tests were performed to compare the different populations. The results indicated that PA in Rondônia is at an initial adoption phase, the main factors restricting adoption being the high cost of machinery and the low quality of after-sales technical assistance. The autopilot, georeferenced soil sampling, fertility mapping, and variable-rate soil correction were the most adopted tools in the studied region, showing indexes similar to those of other Brazilian regions. The harvester with a yield sensor was the PA machine most frequently found on farms, and harvest mapping through yield maps was pointed out as the next service to be adopted in Rondônia. PA shows great potential for use in the region, especially for most of the tools widely disseminated in Brazil, since they are already present in Rondônia's fields and given the farmers' stated intention to intensify their use.
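The regional comparison described above rests on confidence intervals and two-sample hypothesis tests for adoption proportions. A minimal sketch of such a two-proportion z-test in Python, with purely hypothetical adoption counts (the thesis data are not reproduced here):

```python
import math

def two_proportion_test(x1, n1, x2, n2):
    """Compare adoption proportions of two regions with a z-test
    and a Wald confidence interval for the difference."""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under H0: p1 == p2
    p = (x1 + x2) / (n1 + n2)
    se0 = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se0
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # 95% CI for the difference in proportions (unpooled SE)
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    half = 1.959964 * se  # normal quantile for a 95% interval
    return z, p_value, (p1 - p2 - half, p1 - p2 + half)

# Hypothetical counts: 21 of 37 adopters use the autopilot in region A,
# 30 of 50 in region B (illustrative numbers only).
z, p, ci = two_proportion_test(21, 37, 30, 50)
print(f"z = {z:.3f}, p = {p:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```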
|
22 |
Statistická analýza ROC křivek / Statistical analysis of ROC curves
Kutálek, David, January 2010
The ROC (Receiver Operating Characteristic) curve is a plot derived from two different cumulative distribution functions F0 and F1: the axes carry the values 1 - F0(c) and 1 - F1(c), where the parameter c is a real number. This curve is useful for checking the quality of a discriminant rule that classifies an object into one of two classes; the criterion is the size of the area under the curve. To solve real problems we use point and interval estimation of ROC curves and statistical hypothesis tests about ROC curves.
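A sketch of the construction described in this abstract: the empirical ROC curve is traced by the points (1 - F0(c), 1 - F1(c)) over thresholds c, and the area under it scores the discriminant rule. A minimal NumPy version under assumed Gaussian score samples:

```python
import numpy as np

def empirical_roc(scores0, scores1):
    """Empirical ROC: points (1 - F0(c), 1 - F1(c)) over thresholds c."""
    thresholds = np.concatenate(
        ([-np.inf], np.sort(np.concatenate([scores0, scores1]))))
    fpr = np.array([(scores0 > c).mean() for c in thresholds])  # 1 - F0(c)
    tpr = np.array([(scores1 > c).mean() for c in thresholds])  # 1 - F1(c)
    return fpr, tpr

def auc(fpr, tpr):
    # Trapezoidal area under the curve; sort by FPR first
    order = np.argsort(fpr)
    x, y = fpr[order], tpr[order]
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))

rng = np.random.default_rng(0)
scores0 = rng.normal(0.0, 1.0, 500)   # class 0
scores1 = rng.normal(1.0, 1.0, 500)   # class 1
fpr, tpr = empirical_roc(scores0, scores1)
print(f"AUC ≈ {auc(fpr, tpr):.3f}")   # theoretical value ≈ 0.76 here
```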
|
23 |
Three Essays on Environmental- and Spatial-Based Valuation of Urban Land and Housing
Liu, Lu, 01 May 2010
This dissertation provides a comprehensive examination of the non-market valuation of the effect of open space amenities and local public infrastructure on the value of urban land and housing, accounting for both spatial heterogeneity and project heterogeneity. The demand for raw land is a demand derived from the housing built on it; we therefore need to examine the land market and the housing market together. On the one hand, we estimate the value of urban land in a market that satisfies neither the usual assumptions of a competitive market structure nor incentive compatibility for transaction participants, with an application to a Chinese regional wholesale land market. These two violations of traditional hedonic theory also generate two separate valuations of land with differentiated characteristics. On the other hand, we use a relative plane coordinate system, three-dimensional distances, and an aggregate weight matrix to implement spatial hedonic estimation on high-rise residential buildings in the same regional retail housing market in China. After these two steps, the dissertation focuses on the profit-maximization behavior of the property developer, which is the key link between the factor market (the land market) and the commodity market (the housing market). Two methods are then employed to carry out hypothesis tests on the hedonic price estimation, covering both inputs and outputs. First, a set of partial derivatives of the profit function with respect to the various characteristics gives the relationship between the marginal valuations in the land market and in the housing market. Second, we introduce a joint estimation approach, which we call spatial full information maximum likelihood (SFIML), that considers the land market, the housing market, and the developer's profit-maximization behavior together in the estimation. Finally, we conduct a hypothesis test in both scenarios to examine the validity of our linked-markets assumption in the hedonic price estimation.
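For reference, hedonic estimation of the kind described above regresses (log) prices on characteristics, so that each coefficient approximates an implicit marginal price. A deliberately simplified OLS sketch with invented characteristics and synthetic data; the dissertation's actual specification adds spatial weight matrices and the joint SFIML estimator, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Illustrative characteristics: floor area, distance to open space, floor level
area = rng.uniform(50, 150, n)
dist_park = rng.uniform(0.1, 5.0, n)     # km to nearest open-space amenity
floor = rng.integers(1, 30, n)
# Synthetic log prices: closer amenities imply higher prices
log_price = (10 + 0.008 * area - 0.05 * dist_park + 0.004 * floor
             + rng.normal(0, 0.1, n))

# Hedonic OLS: log P = X beta + e
X = np.column_stack([np.ones(n), area, dist_park, floor])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
for name, b in zip(["const", "area", "dist_park", "floor"], beta):
    print(f"{name:>10}: {b:+.4f}")
# The coefficient on dist_park approximates the implicit marginal price
# of proximity to open space, i.e., the non-market valuation.
```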
|
24 |
Automatic Feature Extraction for Human Activity Recognition on the Edge
Cleve, Oscar; Gustafsson, Sara, January 2019
This thesis evaluates two methods for automatic feature extraction to classify accelerometer data from periodic and sporadic human activities. The first method selects features using individual hypothesis tests, and the second uses a random forest classifier as an embedded feature selector. In this study, the hypothesis-test method was combined with a correlation filter. Both methods used the same initial pool of automatically generated time-series features, and a decision tree classifier performed the human activity recognition task for both. The possibility of running the developed model on a processor with limited computing power was taken into consideration when selecting methods for evaluation. The classification results showed that the random forest method was good at prioritizing among features: with 23 features selected it reached a macro average F1 score of 0.84 and a weighted average F1 score of 0.93, while the hypothesis-test method reached a macro average F1 score of only 0.40 and a weighted average F1 score of 0.63 with the same number of features. In addition to the classification performance, this thesis studies the potential business benefits that automation of feature extraction can bring.
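A compact sketch of the two selection strategies being compared, assuming scikit-learn, a generic feature matrix in place of the automatically generated time-series features, and omitting the correlation filter:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=60, n_informative=12,
                           random_state=0)
k = 23  # number of features kept, as in the thesis comparison

# Method 1: individual hypothesis tests (per-feature ANOVA F-test)
mask_test = SelectKBest(f_classif, k=k).fit(X, y).get_support()

# Method 2: random forest as embedded selector (top-k importances)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
mask_rf = np.zeros(X.shape[1], bool)
mask_rf[np.argsort(rf.feature_importances_)[-k:]] = True

# Same downstream decision tree classifier for both, as in the thesis
for name, mask in [("hypothesis tests", mask_test), ("random forest", mask_rf)]:
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, mask], y, cv=5, scoring="f1_macro").mean()
    print(f"{name:>16}: macro F1 ≈ {score:.2f}")
```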
|
25 |
Marknadens talan : En eventstudie om marknadens reaktion när företag byter VD / The market's voice: An event study of the market reaction when companies change CEOs
Birgersson, Jonna; Nguyen, Silvia, January 2015
Purpose: The purpose of this work is to investigate how the market reacts to a change of CEO and whether the impact on the share price differs when a company replaces a founder-CEO compared to a non-founder-CEO. We chose this phenomenon mainly because CEOs today appear to be replaced more frequently than before. Theory: The efficient market hypothesis, agency theory, and corporate governance. Method: The study is based on a quantitative, deductive research approach. The companies investigated were Swedish companies listed on Nasdaq Stockholm, examined using an event study and two hypothesis tests. Results: The result consists of 20 observations from 20 companies. The cumulative average abnormal return is presented for all days of the event window, together with the results of the hypothesis tests. Analysis: The first hypothesis test shows that the announcement of a CEO change has an impact on the share price. The second shows a difference in impact between announcements concerning a founder-CEO and a non-founder-CEO. The event study shows that the impact of replacing a founder-CEO is positive, while the impact of replacing a non-founder-CEO is negative. Conclusion: The study shows that the announcement of a CEO change has a significant impact on the share price, and that the announcement of a founder-CEO change affects the share price differently from that of a non-founder-CEO change.
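For orientation, the event-study machinery referred to above can be sketched as follows: market-model abnormal returns, cumulative average abnormal returns (CAAR) over the event window, and a t-test on the announcement-day abnormal returns. All data, window lengths, and the injected announcement effect below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_firms, est_len, win = 20, 120, 11          # event window: day -5 .. +5
market = rng.normal(0.0004, 0.01, (n_firms, est_len + win))
stock = 0.9 * market + rng.normal(0, 0.015, market.shape)
stock[:, est_len + 5] += 0.02                # announcement effect on day 0

ars = np.empty((n_firms, win))
for i in range(n_firms):
    # Market model estimated on the pre-event window: R = a + b*Rm + e
    b, a = np.polyfit(market[i, :est_len], stock[i, :est_len], 1)
    expected = a + b * market[i, est_len:]
    ars[i] = stock[i, est_len:] - expected   # abnormal returns

aar = ars.mean(axis=0)                       # average AR per event day
caar = aar.cumsum()                          # cumulative average AR
t, p = stats.ttest_1samp(ars[:, 5], 0.0)     # H0: AR on day 0 is zero
print(f"CAAR over window: {caar[-1]:.4f}, day-0 t = {t:.2f}, p = {p:.3f}")
```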
|
26 |
Multiple Outlier Detection: Hypothesis Tests versus Model Selection by Information Criteria
Lehmann, Rüdiger; Lösler, Michael, 14 June 2017
The detection of multiple outliers can be interpreted as a model selection problem. The models that can be selected are the null model, which indicates an outlier-free set of observations, and a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is a statistical hypothesis test; in geodesy, data snooping is most popular. Another approach arises from information theory: here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. The AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to two test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are discussed. When compared, it turns out that the information criteria approach is simpler and more elegant. Alongside AIC there are many alternative information criteria for selecting different outliers, and it is not clear which one is optimal.
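A minimal illustration of the straight-line test problem mentioned above: each alternative model adds one bias parameter for a suspected outlier, and AIC selects among the null model and the alternatives. This sketch assumes a known observation variance and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 20, 0.5
x = np.linspace(0, 10, n)
y = 2.0 + 0.7 * x + rng.normal(0, sigma, n)
y[7] += 3.0                                    # inject one outlier

def aic(x, y, outlier=None):
    """AIC for a straight-line fit; 'outlier' adds one bias parameter."""
    A = np.column_stack([np.ones_like(x), x])
    k = 2
    if outlier is not None:
        e = np.zeros((len(x), 1))
        e[outlier] = 1.0
        A = np.hstack([A, e])
        k = 3
    res = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    loglik = -0.5 * np.sum(res**2) / sigma**2  # up to a model-free constant
    return 2 * k - 2 * loglik

models = {"null": aic(x, y)}
models.update({f"outlier@{i}": aic(x, y, i) for i in range(n)})
best = min(models, key=models.get)
print(f"selected model: {best}")               # expected: outlier@7
```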
|
27 |
Confiabilidade de rede GPS de referência cadastral municipal - estudo de caso: rede do município de Vitória (ES) / Reliability of a municipal cadastral reference GPS network - a case study: the network of the municipality of Vitória (ES)
Amorim, Geraldo Passos, 25 March 2004
The purpose of this work is to study theories for analyzing the quality of GPS networks, based on the network reliability theory proposed by Baarda in 1968. Statistical hypotheses for outlier detection form the basis of this study, since they are fundamental for constructing the outlier detection tests, for locating and eliminating gross errors, and for analyzing the reliability of the network. Reliability, which expresses the controllability of the network and depends on the redundancy number, is studied in two aspects: internal reliability and external reliability. The cadastral reference network of the municipality of Vitória (ES), chosen for the case study, was established by GPS in 2001; its basic conception was the implantation of 37 pairs of inter-visible vertexes, favoring public places with free access. This network was adjusted in 2001 by the Municipal Government of Vitória, and the adjusted coordinates of the vertexes have since been used to support all topographic and cadastral surveys carried out in the municipality. The 2001 adjustment was a simple adjustment that did not take into account the statistical tests for outlier detection or the location and elimination of gross errors. The practical part of this research comprised the measurement of 21 new vectors (baselines) to form a control network, as established by NBR-14166, the adjustment of this control network (15 vertexes), and the adjustment of the main network (78 vertexes), constrained to the previously adjusted control network. The main difference between the 2001 adjustment, carried out by the Municipal Government of Vitória, and the 2004 adjustment, carried out for this research, was that the new adjustment considered the statistical tests based on the reliability theory proposed by Baarda. The comparison between the results of the two adjustments of the cadastral network of Vitória did not show significant differences between the adjusted coordinates.
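Baarda's data snooping, on which the reliability analysis above rests, tests each standardized least-squares residual against a normal critical value. A stripped-down sketch for a linear Gauss-Markov model with unit weights and known standard deviation; a real network adjustment involves far more structure:

```python
import numpy as np
from scipy.stats import norm

def data_snooping(A, y, sigma, alpha0=0.001):
    """Baarda's w-test: flag the observation with the largest
    standardized residual if it exceeds the critical value."""
    # Least-squares adjustment and residuals
    Qx = np.linalg.inv(A.T @ A)
    v = y - A @ (Qx @ A.T @ y)
    # Cofactor matrix of the residuals: Qv = I - A Qx A^T (unit weights)
    Qv = np.eye(len(y)) - A @ Qx @ A.T
    w = v / (sigma * np.sqrt(np.diag(Qv)))     # w-test statistics
    k = norm.ppf(1 - alpha0 / 2)               # critical value
    i = int(np.argmax(np.abs(w)))
    return (i, w[i]) if abs(w[i]) > k else None

# Example: line fit with one gross error among the observations
rng = np.random.default_rng(4)
x = np.linspace(0, 9, 10)
A = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + rng.normal(0, 0.01, 10)
y[3] += 0.08                                   # gross error
print(data_snooping(A, y, sigma=0.01))         # expect index 3 flagged
```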
|
28 |
O processo de Poisson estendido e aplicações / The extended Poisson process and applications
Salasar, Luis Ernesto Bueno, 14 June 2007
In this dissertation we study how the extended Poisson process can be applied to construct discrete probabilistic models. An extended Poisson process is a continuous-time stochastic process with the natural numbers as state space, obtained as a generalization of the homogeneous Poisson process in which the transition rates depend on the current state of the process. From its transition rates and the Chapman-Kolmogorov differential equations, we can determine the probability distribution at any fixed time of the process. Conversely, given any probability distribution on the natural numbers, it is possible to uniquely determine a sequence of transition rates of an extended Poisson process such that, at some instant, the one-dimensional probability distribution coincides with the given distribution. We can therefore conclude that the extended Poisson process is a very flexible framework for the analysis of discrete data, since it generalizes all discrete probabilistic models. We present the transition rates of extended Poisson processes that generate the Poisson, Binomial, and Negative Binomial distributions, and determine maximum likelihood estimators, confidence intervals, and hypothesis tests for the parameters of the proposed models. We also perform a Bayesian analysis of these models with informative and noninformative priors, presenting posterior summaries and comparing these results with those obtained by means of classical inference.
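The construction in this abstract is easy to illustrate by simulation: a pure-birth process whose state-dependent rates determine the distribution at a fixed time. In the simplest case of constant rates the time-t marginal is Poisson, which the sketch below checks empirically; rates yielding Binomial or Negative Binomial marginals follow the same pattern:

```python
import numpy as np

def simulate_extended_poisson(rates, t, rng):
    """Pure-birth process on the naturals: from state n, wait an
    Exp(rates(n)) holding time, then jump to n + 1. Returns state at t."""
    n, clock = 0, 0.0
    while True:
        clock += rng.exponential(1.0 / rates(n))
        if clock > t:
            return n
        n += 1

rng = np.random.default_rng(5)
lam, t = 2.0, 3.0
# Constant rates: the marginal at time t is Poisson(lam * t)
samples = [simulate_extended_poisson(lambda n: lam, t, rng)
           for _ in range(20000)]
print(f"mean ≈ {np.mean(samples):.2f} (Poisson mean = {lam * t:.2f})")
print(f"var  ≈ {np.var(samples):.2f} (Poisson var  = {lam * t:.2f})")
```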
|
29 |
Squelettisation d’images en niveaux de gris et applications / Skeletonization of grayscale images and applications
Douss, Rabaa, 26 November 2015
Skeletonization is an image transformation that aims to represent objects by their medial axis while preserving their topological characteristics (homotopy). It is widely used in biometrics and character recognition, and also in the extraction of bone microarchitecture. The objective of this thesis is to develop a skeletonization method applied directly to the gray levels of the image, which has the great advantage of freeing the operation from preprocessing such as binarization. A review of grayscale skeletonization methods shows that morphological thinning is one of the most used approaches, owing to its topology-preservation property. However, this approach is sensitive to image noise and produces over-connected skeletons. A first parameterization of the thinning process has been proposed in the literature to lower pixel configurations related to noise. The first contribution of this work is an adjustment of this parameter based on a statistical decision: a hypothesis test is identified for each lowering criterion in order to set the thinning parameter locally. This leads to a skeletonization method called Self Contrast Controlled Thinning (SCCT), which is robust to noise while adapting automatically to the contrast of the image. SCCT is made accessible to application domains through an optimized implementation based on hierarchical queues. Noting the limited effort devoted to evaluating grayscale skeletonization, the second contribution of this work is a quantitative evaluation protocol assessing skeletonization with regard to its required properties, namely the preservation of topology and geometry. This protocol is run on a database of synthetic images and allows us to compare our approach to those of the literature. The third contribution is to structure the skeleton into a graph giving access to structural and morphometric descriptors of the studied objects, enabling the exploitation of the skeleton by experts from the various fields of application. Within the Voxelo project coordinated by the B2OA laboratory of Université Paris Diderot, this structuring is used to extract descriptors of bone microarchitecture quality from high-resolution X-ray images.
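The local statistical decision described above can be caricatured as follows: a pixel is lowered during thinning only when its local contrast is statistically indistinguishable from image noise. The one-sided z-test below is an illustrative reduction of that idea, not the exact SCCT criterion:

```python
import numpy as np
from scipy.stats import norm

def lowerable(center, neighbors, sigma_noise, alpha=0.05):
    """Lower 'center' toward its neighborhood if the local contrast
    could plausibly be noise (H0: contrast <= 0)."""
    contrast = center - np.median(neighbors)
    z = contrast / sigma_noise
    # Lower the pixel only if the contrast is NOT significant
    return z < norm.ppf(1 - alpha)

# Noisy flat patch: the contrast is noise, so the pixel is lowerable
rng = np.random.default_rng(6)
patch = 100 + rng.normal(0, 2, 9)
print(lowerable(patch[4], np.delete(patch, 4), sigma_noise=2.0))  # True
# Strong ridge pixel: significant contrast, kept by the thinning
print(lowerable(130.0, patch[:8], sigma_noise=2.0))               # False
```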
|