581 |
Réduction de la dimension en régression / Dimension reduction in regression. Portier, François, 02 July 2013.
In this thesis, we study the problem of dimension reduction through the following regression model Y=g(BX,e), where X is a p-dimensional vector, Y belongs to R, the function g is unknown and the noise e is independent of X. We are interested in the estimation of the matrix B, of dimension d times p where d is smaller than p, whose knowledge provides good convergence rates for the estimation of g. This problem is addressed using two different approaches. The first one, called inverse regression, requires the linearity condition on X. The second one, called semiparametric, does not require such an assumption but only that X has a smooth density. In the context of inverse regression, we focus on two families of methods respectively based on E[X f(Y)] and E[XX^T f(Y)]. For both families, we provide conditions on f that allow an exhaustive estimation of B, and we compute the optimal function f by minimizing the asymptotic variance. In the semiparametric context, we give a method for estimating the gradient of the regression function. Under classical semiparametric assumptions, we show the asymptotic normality and root-n consistency of our estimator, the exhaustiveness of the estimation of B, and the convergence in the space of processes. In both approaches, a fundamental question arises: how to choose the dimension of B? To answer it, we propose a method that estimates the rank of a matrix by bootstrap hypothesis testing.
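As an illustration of the first family of inverse-regression methods, the sketch below implements sliced inverse regression (Li, 1991), the canonical estimator based on E[X f(Y)] with f an indicator of slices of Y. It is a minimal stand-in, not the thesis's estimator, and the data-generating model is hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Sliced inverse regression: estimate the d-dimensional subspace B
    from within-slice means of the standardized predictors."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(Xc, rowvar=False)
    U, s, _ = np.linalg.svd(Sigma)                     # Sigma is symmetric PSD
    Sigma_inv_sqrt = U @ np.diag(1.0 / np.sqrt(s)) @ U.T
    Z = Xc @ Sigma_inv_sqrt                            # standardized predictors
    slices = np.array_split(np.argsort(y), n_slices)   # equal-count slices of Y
    M = np.zeros((p, p))
    for idx in slices:                                 # weighted covariance of slice means
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)                     # eigenvalues in ascending order
    return Sigma_inv_sqrt @ vecs[:, ::-1][:, :d]       # top-d directions, original scale

# Toy check on Y = g(BX) + e with one true direction (hypothetical data):
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
t = X @ np.array([1.0, -1.0, 0.0, 0.0, 0.0])
y = t + 0.1 * t**3 + 0.1 * rng.normal(size=2000)
print(sir_directions(X, y).ravel())  # roughly proportional to (1, -1, 0, 0, 0)
```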
|
582 |
Altura de manejo do pasto e suas conseqüências sobre a produção animal e a dinâmica de pastagens anuais de inverno / Sward management height and its consequences on animal production and the dynamics of annual winter pastures. Rocha, Lemar Maciel da, January 2008.
The objective of this experiment was to understand and quantify the structural changes of annual pastures typical of the winter season in Rio Grande do Sul (RS) and to evaluate the production potential and carcass characteristics of young beef steers. The experiment was conducted at Fazenda Espinilho, located in São Miguel das Missões, RS. Four treatments were imposed through different sward height management targets (10, 20, 30 and 40 cm), obtained by applying different stocking rates under continuous variable stocking. A completely randomized block design with three replicates was used. The animals were uniform, non-castrated males about ten months old, with no defined breed and an initial average weight of 190 kg. The variables studied were herbage mass (HM), herbage growth rate (GR), total dry matter production (TDMP), leaf lamina/stem+sheath ratio, average daily gain (ADG) and gain per area (GPA). HM increased linearly with sward height: each additional cm above 10 cm corresponded to an increase of about 108 kg/ha in HM. Treatments did not differ in GR or TDMP, whose average values were 55.8 kg DM/ha.day and 8210 kg DM/ha, respectively. The increase in ADG was driven by the increase in the quality and/or quantity of available forage, since the daily herbage allowances for the 10, 20, 30 and 40 cm treatments were 6, 7, 13 and 19 kg DM/100 kg LW, respectively. The response model of ADG to sward height yielded 0.96 and 1.24 kg/animal for the treatments with the lowest and highest ADG, which were 10 and 20 cm, respectively. The highest GPA, observed in the 10 cm treatment (515 kg LW/ha), was due to the higher stocking rate used, and both GPA and stocking rate decreased linearly with increasing sward height. Final live weight before slaughter did not increase with sward height (P>0.05). There was a strong and abrupt decrease in the lamina/stem+sheath ratio in the 30 and 40 cm treatments from September onwards. The fit of four theoretical frequency distributions to sward height data was investigated for each treatment on six different sampling dates, as well as the potential of sward height to predict HM. The height frequency distributions fitted the Normal model in only one of the ninety-six series analyzed. The Gamma distribution was the one that most frequently fitted the height data; however, once grazing started, pasture heterogeneity increased to the point that the frequency distributions fitted none of the investigated models. It is suggested that sward height targets should vary along the grazing season in order to manage the heterogeneity caused by the animal.
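The distribution-fitting step described above can be sketched as follows. The heights are synthetic stand-ins, not the experiment's data, and the thesis does not state which goodness-of-fit test was used, so the Kolmogorov-Smirnov test here is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
heights = rng.gamma(shape=9.0, scale=2.2, size=300)  # hypothetical sward heights (cm)

# Fit Normal and Gamma models and compare goodness of fit (KS test).
for name, dist in [("Normal", stats.norm), ("Gamma", stats.gamma)]:
    params = dist.fit(heights)
    ks = stats.kstest(heights, dist.cdf, args=params)
    print(f"{name}: KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```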
|
583 |
Uma seqüência didática para aquisição/construção da noção de taxa de variação média de uma função / A didactic sequence for the acquisition/construction of the notion of average rate of change of a function. Silveira, Eugênio Cesar, 06 November 2001.
The purpose of this work is to study the process by which students following a
university-level course in the exact sciences acquire/construct the notion of average
rate of change. An understanding of this notion could assist students in interpreting the
meaning of the derivative as the rate of change at a point. A didactic sequence
was elaborated, inspired by Vergnaud (1994), who considers that the teaching and
learning of mathematical notions and concepts should be approached by an exploration
of problems, that is, by developing problem situations which favour new
conceptualisations in their resolution. Eighteen pairs of students from a first year
chemistry course worked on the sequence, which lasted 1,440 minutes in total. The results
indicated that the students advanced their understandings of average rate of change, as
well as their ability to interpret graphs, for example, identifying intervals in which the
function increases or decreases and describing the meaning of points where the
function intersects the axes of the graph.
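The notion at stake is the standard one: the average rate of change of f over an interval, and the derivative as its limit at a point. For example, for f(x) = x^2 on [1, 3] the average rate of change is (9 - 1)/(3 - 1) = 4, while f'(1) = 2.

```latex
\frac{\Delta f}{\Delta x} \;=\; \frac{f(b)-f(a)}{b-a}
\qquad\text{and}\qquad
f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h)-f(a)}{h}.
```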
|
584 |
Modelagem matemática de copolimerização em emulsão de acrilato de butila e estireno para determinação dos valores médios de peso molecular e distribuição de tamanho de partículas. / Mathematical modeling of emulsion copolymerization of n-butyl acrylate and styrene accounting for average molecular weights and particle size distribution. Pereira, Rodrigo Vallejo, 09 October 2015.
A mathematical model of the emulsion copolymerization of styrene and n-butyl acrylate in isothermal batch and semi-batch reactors was developed and showed good agreement with experiments available in the scientific literature. The model includes the solution of the population balance both for the particle size distribution and for the average number of radicals per particle. The moment balances of the molecular weight distribution are also solved to obtain the number- and weight-average molecular weights of the polymer. The problem was solved numerically as a set of differential-algebraic equations, with the population balance solved by the method of fixed pivots. The model predictions were validated against a set of experiments with respect to monomer conversion, average particle diameter, number of particles per liter of emulsion, average number of radicals per particle, particle size distribution, and number- and weight-average molecular weights over process time.
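The reported averages follow from moments of the chain-length distribution. The sketch below shows only this bookkeeping (the population balance and fixed-pivot discretization do not fit in a few lines), with a hypothetical distribution and repeat-unit mass.

```python
import numpy as np

M0 = 104.15                        # g/mol, repeat-unit mass (styrene; illustrative)
n = np.arange(1, 5001)             # chain lengths
P = n * np.exp(-n / 500.0)         # hypothetical chain-length distribution

def lam(k):
    """k-th moment of the chain-length distribution."""
    return np.sum(n.astype(float) ** k * P)

Mn = M0 * lam(1) / lam(0)          # number-average molecular weight
Mw = M0 * lam(2) / lam(1)          # weight-average molecular weight
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {Mw / Mn:.2f}")
```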
|
585 |
Stanovení hodnoty podniku působícího ve stavebnictví / Estimation of the Value of the Firm from the Construction Branch. Baranovičová, Zuzana, January 2015.
The diploma thesis deals with the determination of the value of a company from the construction branch. The thesis is divided into two parts. The first part covers the methods for determining the value of a company. This theoretical knowledge is applied in the second part, namely in subsections on strategic analysis, financial analysis and valuation by the yield method of discounted cash flow. The programs Stratex and Evalent are used to determine the value. The conclusion of the thesis presents the value of the company as of 1 January 2014.
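A minimal sketch of the discounted-cash-flow yield method applied in the thesis, with a Gordon-growth terminal value; all cash flows, the discount rate and the growth rate are hypothetical, not the valued company's figures.

```python
fcf = [1200, 1300, 1350, 1400, 1450]   # forecast free cash flows (illustrative units)
wacc, g = 0.10, 0.02                   # discount rate and perpetual growth (assumed)

# Present value of the explicit forecast period, discounted year by year.
pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
# Gordon-growth continuing value at the end of the forecast horizon.
terminal = fcf[-1] * (1 + g) / (wacc - g)
pv_terminal = terminal / (1 + wacc) ** len(fcf)
print(f"Enterprise value = {pv_explicit + pv_terminal:,.0f}")
```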
|
586 |
Conjugate Heat Transfer and Average Versus Variable Heat Transfer Coefficients. Macbeth, Tyler James, 01 March 2016.
An average heat transfer coefficient, h_bar, is often used to solve heat transfer problems. It should be understood that this is an approximation and may provide inaccurate results, especially when the temperature field is of interest. The proper method to solve heat transfer problems is with a conjugate approach. However, there seems to be a lack of clear explanations of conjugate heat transfer in the literature. The objective of this work is to provide a clear explanation of conjugate heat transfer and to determine the discrepancy in the temperature field when the interface boundary condition is approximated using h_bar compared to a local, or variable, heat transfer coefficient, h(x). Simple one-dimensional problems are presented and solved analytically using both h(x) and h_bar. Due to the one-dimensional assumption, h(x) appears in the governing equation, for which the common methods to solve differential equations with an average coefficient are no longer valid. Two methods, the integral equation and generalized Bessel methods, are presented to handle the variable coefficient. The generalized Bessel method has previously only been used with homogeneous governing equations. This work extends the use of the generalized Bessel method to non-homogeneous problems by developing a relation for the Wronskian of the general solution to the generalized Bessel equation. The solution methods are applied to three problems: an external flow past a flat plate, a conjugate interface between two solids, and a conjugate interface between a fluid and a solid. The main parameter that is varied is a combination of the Biot number and a geometric aspect ratio, A_1^2 = Bi*L^2/d_1^2. The Biot number is assumed small since the problems are one-dimensional, and thus variation in A_1^2 is mostly due to a change in the aspect ratio. A large A_1^2 represents a long and thin solid, whereas a small A_1^2 represents a short and thick solid. It is found that a larger A_1^2 leads to less problem conjugation. This means that use of h_bar has a lesser effect on the temperature field for a long and thin solid. Also, use of h_bar instead of h(x) tends to under-predict the solid temperature. In addition, it was found that A_2^2, the A^2 value for the second subdomain, tends to have more effect on the shape of the temperature profile of solid 1, while A_1^2 has a greater effect on the magnitude of the difference in temperature profiles between the use of h(x) and h_bar. In general, increasing the A^2 values reduced conjugation.
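The distinction the thesis studies can be made concrete with the textbook laminar flat-plate case, where the local coefficient decays as x^(-1/2) and its average over the plate is twice the trailing-edge value:

```latex
\bar{h} = \frac{1}{L}\int_0^L h(x)\,dx,
\qquad
h(x) = C\,x^{-1/2}
\;\Longrightarrow\;
\bar{h} = \frac{C}{L}\cdot 2\sqrt{L} = 2\,h(L).
```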
|
587 |
Impact of Mortgage Characteristics on Retail Mortgage Transaction Completion Time. Tannous, Kareem Atalla, 01 January 2018.
In the mortgage industry, many mortgage lenders cannot manage mortgage workflow systems while meeting and exceeding organizational objectives. Organizations with an above-industry-average turnaround time (ATT) to complete a retail mortgage transaction (RMT) from origination to funding experience revenue losses. Grounded in the proposition that mortgage loan purpose (MLP), mortgage loan type (MLT), and subject property type (SPT) impact ATT to complete an RMT, the purpose of this causal-comparative study was to assess the impact of MLP, MLT, and SPT on ATT to complete an RMT. Using archival data records (N = 146) from a selected mortgage institution in the state of Florida, the results of the 2 x 2 x 2 factorial ANOVA showed that there were no main or interaction effects, F(5, 140) = 0.42, p = .83. Implications for social change include the possibility for mortgage lenders to implement improved workflow processes to reduce costs and improve efficiency metrics and intrinsic value, thereby benefitting organizational stakeholders such as employees and consumers.
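A sketch of the 2 x 2 x 2 factorial ANOVA described above, using statsmodels on synthetic records; the factor levels are hypothetical placeholders, not the study's archival categories.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
n = 146                                                     # matches the study's sample size
df = pd.DataFrame({
    "MLP": rng.choice(["purchase", "refinance"], n),        # mortgage loan purpose
    "MLT": rng.choice(["conventional", "government"], n),   # mortgage loan type
    "SPT": rng.choice(["single_family", "condo"], n),       # subject property type
    "ATT": rng.normal(45.0, 10.0, n),                       # turnaround time, days (synthetic)
})

# Full factorial model: three main effects plus all interactions.
model = ols("ATT ~ C(MLP) * C(MLT) * C(SPT)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```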
|
588 |
Dynamic time warping : apports théoriques pour l'analyse de données temporelles : application à la classification de séries temporelles d'images satellites / Dynamic time warping: theoretical contributions for data mining, application to the classification of satellite image time series. Petitjean, François, 13 September 2012.
Satellite image time series (SITS) are crucial data for Earth observation, and they are becoming increasingly available thanks to upcoming space missions. Current series offer either high temporal resolution (Spot-Végétation, MODIS) or high spatial resolution (Landsat). In the coming years, ESA's Sentinel program will produce satellite image time series with both high spatial and high temporal resolution; for instance, Sentinel-2 will cover the entire Earth's surface every five days, with 10 m to 60 m spatial resolution and 13 spectral bands. In the case of optical imagery, it will be possible to produce land use and land cover change maps with detailed nomenclatures. However, due to meteorological phenomena such as clouds, these time series become irregular in terms of temporal sampling. In order to handle efficiently the huge amount of information that will be produced, new methods have to be developed. This Ph.D. thesis focuses on the comparison of radiometric evolution profiles, and more precisely on the "Dynamic Time Warping" similarity measure, a tool that exploits the temporal structure of satellite image time series in order to provide an efficient and relevant analysis of the remotely observed phenomena.
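A textbook dynamic-programming sketch of the Dynamic Time Warping measure the thesis builds on (not the thesis's optimized implementation); it aligns two radiometric profiles sampled on shifted dates.

```python
import numpy as np

def dtw(a, b):
    """Classic O(nm) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two synthetic radiometric profiles, similar up to a temporal shift:
x = np.sin(np.linspace(0.0, 3.0, 25))
y = np.sin(np.linspace(0.4, 3.4, 30))
print(dtw(x, y))  # small accumulated cost despite the shift
```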
|
589 |
Energy and Transient Power Minimization During Behavioral Synthesis. Mohanty, Saraju P, 17 October 2003.
The proliferation of portable systems and mobile computing platforms has increased the need for the design of low power consuming integrated circuits. The increase in chip density and clock frequencies due to technology advances has made low power design a critical issue. Low power design is further driven by several other factors such as thermal considerations and environmental concerns. In low-power design for battery driven portable applications, the reduction of peak power, peak power differential, average power and energy are equally important. In this dissertation, we propose a framework for the reduction of these parameters through datapath scheduling at the behavioral level. Several ILP-based and heuristic-based scheduling schemes are developed for datapath synthesis assuming: (i) single supply voltage and single frequency (SVSF), (ii) multiple supply voltages and dynamic frequency clocking (MVDFC), and (iii) multiple supply voltages and multicycling (MVMC). The scheduling schemes attempt to minimize: (i) energy, (ii) energy delay product, (iii) peak power, (iv) simultaneous peak power and average power, (v) simultaneous peak power, average power, peak power differential and energy, and (vi) power fluctuation.
A new parameter called "Cycle Power Function" (CPF) is defined which captures the transient power characteristics as the equally weighted sum of normalized mean cycle power and normalized mean cycle differential power. Minimizing this parameter using multiple supply voltages and dynamic frequency clocking results in the reduction of both energy and transient power. The cycle differential power can be modeled as either the absolute deviation from the average power or as the cycle-to-cycle power gradient. The switching activity information is obtained from behavioral simulations. Power fluctuation is modeled as the cycle-to-cycle power gradient and to reduce fluctuation the mean power gradient (MPG) is minimized. The power models take into consideration the effect of switching activity on the power consumption of the functional units.
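A minimal sketch of the CPF computation described above, modeling the differential power as the cycle-to-cycle gradient; the normalization by the per-schedule maxima is an assumption of this sketch.

```python
import numpy as np

def cycle_power_function(cycle_power, weight=0.5):
    """Equally weighted (weight=0.5) sum of normalized mean cycle power
    and normalized mean cycle differential power."""
    p = np.asarray(cycle_power, dtype=float)
    mean_power = p.mean() / p.max()        # normalized mean cycle power
    grad = np.abs(np.diff(p))              # cycle-to-cycle power gradient
    mean_diff = grad.mean() / grad.max()   # normalized mean differential power
    return weight * mean_power + (1.0 - weight) * mean_diff

# Per-cycle power (arbitrary units) of a hypothetical candidate schedule:
print(cycle_power_function([3.1, 2.8, 4.0, 2.2, 3.5]))
```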
Experimental results for selected high-level synthesis benchmark circuits under different constraints indicate that significant reductions in power, energy and energy delay product can be obtained and that the MVDFC and MVMC schemes yield better power reduction compared to the SVSF scheme. Several application-specific VLSI circuits were designed and implemented for digital watermarking of images. Digital watermarking is the process that embeds data called a watermark into a multimedia object such that the watermark can be detected or extracted later to make an assertion about the object. A class of VLSI architectures was proposed for various watermarking algorithms: (i) spatial domain invisible-robust watermarking scheme, (ii) spatial domain invisible-fragile watermarking scheme, (iii) spatial domain visible watermarking scheme, (iv) DCT domain invisible-robust watermarking scheme, and (v) DCT domain visible watermarking scheme. Prototype implementations of (i), (ii) and (iii) are given. The hardware modules can be incorporated in a "JPEG encoder" or in a "digital still camera".
|
590 |
[en] PARTITION-BASED METHOD FOR TWO-STAGE STOCHASTIC LINEAR PROGRAMMING PROBLEMS WITH COMPLETE RECOURSE / [pt] MÉTODO DE PARTIÇÃO PARA PROBLEMAS DE PROGRAMAÇÃO LINEAR ESTOCÁSTICA DOIS ESTÁGIOS COM RECURSO COMPLETO. Gamboa Rodriguez, Carlos Andres, 22 March 2018.
[en] The hardest part of modelling real-world decision-making problems is the uncertainty associated with the realization of future events. Stochastic programming addresses this subject: the goal is to find solutions that are feasible for all possible realizations of the unknown data, optimizing the expected value of some function of the decision variables and the random variables. The most studied approach is based on Monte Carlo simulation and the Sample Average Approximation (SAA) method, which is a discretization of the expected value over a finite set of realizations, or scenarios, uniformly distributed. It is possible to prove that the optimal value and the optimal solution of the SAA problem converge to their counterparts of the true problem when the number of scenarios is sufficiently large. Although this approach is useful, the computational cost of increasing the number of scenarios to obtain a more precise solution is a limiting factor. More importantly, the SAA problem is a function of the generated sample and is therefore random, which means that its solution is also uncertain; to measure this uncertainty it is necessary to replicate the SAA problem in order to estimate the dispersion of the estimated solution, increasing the computational cost even further. The purpose of this work is to present an alternative approach, based on robust optimization techniques and applications of Jensen's inequality, that yields deterministic bounds on the optimal solution by partitioning the support of the distribution of the unknown data (without scenario generation) and taking advantage of convexity. At the end of this work, the convergence of the bounding problem and of the proposed solution algorithms is analyzed.
|