421

What factors influence management in its assessment of goodwill impairment?

Lansgard, Moa, Zheng, Ulfenborg, Ning January 2016 (has links)
To achieve greater harmonization between countries and to simplify comparisons between companies, a number of international financial reporting standards have been established. IFRS 3, issued in 2004 and introduced in Europe in 2005, required listed companies in Sweden, among other countries, to follow the standard when preparing their consolidated accounts. With the new standard came new rules for goodwill: annual amortization was replaced by regular impairment testing. Because impairment decisions depend to a large extent on subjective judgments, management has the opportunity to influence financial reporting to its own advantage. The aim of the present study is to investigate which factors influence management in its assessment of goodwill impairment. The study uses a quantitative method and a deductive approach. Data were collected from the annual reports of Large Cap companies listed on Nasdaq OMX Stockholm during the period 2011-2014. The results show a significant positive relationship between goodwill impairment and (1) a change of CEO and (2) weak earnings. This supports the view that, under certain conditions, management has incentives to influence the accounts in a way that does not necessarily reflect the company's true financial position. It is important for stakeholders and auditors to be aware of the incentives that can lead to such influence.
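To make the reported test concrete, here is a minimal sketch of the kind of regression such a study might run, assuming a hypothetical firm-year panel; the column names (ceo_change, weak_earnings, impairment) and the simulated data are illustrative, not the thesis's data or code.

```python
# Minimal sketch of the association test described above, on a hypothetical
# panel of firm-years; column names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
panel = pd.DataFrame({
    "ceo_change": rng.integers(0, 2, n),     # 1 if the CEO was replaced that year
    "weak_earnings": rng.integers(0, 2, n),  # 1 if pre-impairment earnings were weak
})
# Simulate impairment decisions with a positive effect of both dummies.
logit_p = -2.0 + 1.0 * panel["ceo_change"] + 0.8 * panel["weak_earnings"]
panel["impairment"] = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(panel[["ceo_change", "weak_earnings"]])
model = sm.Logit(panel["impairment"].astype(float), X).fit(disp=0)
print(model.summary())  # positive, significant coefficients mirror the reported result
```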
422

Confidence bands in quantile regression and generalized dynamic semiparametric factor models

Song, Song 01 November 2010 (has links)
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on bootstrap resampling, and the bootstrap approximation is proved to provide a substantial improvement. The case of multidimensional and discrete regressor variables is handled with a partial linear model, and a labor market analysis illustrates the method. High-dimensional time series that exhibit nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine, and financial engineering. A common approach is to separate the modeling of a high-dimensional time series into the time propagation of low-dimensional time series and high-dimensional time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis by smoothed functional principal component analysis; we show the properties of these estimators under the dependent scenario. In the second step, we obtain the detrended low-dimensional (stationary) stochastic process.
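A minimal sketch of the bootstrap-band idea, assuming a cubic polynomial sieve in place of the dissertation's kernel quantile estimator: refit the median regression on bootstrap resamples and use the distribution of sup-norm deviations to calibrate a uniform band. Sample sizes and replicate counts are illustrative.

```python
# Sketch: uniform bootstrap confidence band for a median regression curve,
# using a cubic polynomial sieve instead of the paper's kernel estimator.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.standard_normal(n) * 0.3

def design(x):
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

grid = np.linspace(0, 1, 101)
fit = QuantReg(y, design(x)).fit(q=0.5)
curve = design(grid) @ fit.params

# Pairs bootstrap: refit on resampled (x, y) pairs, record sup deviations.
sup_dev = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    bfit = QuantReg(y[idx], design(x[idx])).fit(q=0.5)
    sup_dev.append(np.max(np.abs(design(grid) @ bfit.params - curve)))

half_width = np.quantile(sup_dev, 0.95)  # calibrates a 95% *uniform* band
lower, upper = curve - half_width, curve + half_width
print(f"uniform half-width: {half_width:.3f}")
```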
423

Geometric approach to multi-scale 3D gesture comparison

Ochoa Mayorga, Victor Manuel 11 1900 (has links)
The present dissertation develops an invariant framework for 3D gesture comparison studies. 3D gesture comparison without Lagrangian models is challenging not only because physics provides no prediction, but also because of the dual geometry representation, spatial dimensionality, and non-linearity associated with 3D kinematics. In 3D spaces it is difficult to compare curves without an alignment operator, since discrete curves are rarely synchronized and do not share a common point in space; one has to assume that every trajectory in the space is unique. The common approach is to assess the similarity between two or more trajectories by estimating an average distance error from the aligned curves, provided that an alignment operator can be found. To avoid the alignment problem, the method uses differential geometry for position and orientation curves. Differential geometry not only reduces the spatial dimensionality but also achieves view invariance. Although the nonlinear signatures may be unbounded or singular, it is shown that pattern recognition between intrinsic signatures using correlations is robust for position and orientation alike. A new mapping for orientation sequences is introduced in order to treat quaternion and Euclidean intrinsic signatures alike. The new mapping projects the 4D hypersphere of orientations onto a 3D Euclidean volume, using the quaternion invariant distance to map rotation sequences into 3D Euclidean curves. However, quaternion spaces are sectionally discrete: continuous rotation functions can only be approximated for small angles, and rotation sequences with large angle variations can only be interpolated in discrete sections. The dissertation introduces two multi-scale approaches that improve numerical stability and bound the signal energy content of the intrinsic signatures: a multilevel least-squares curve-fitting method similar to the Haar wavelet, and a geodesic-distance anisotropic kernel filter. The methodology is tested on 3D gestures for obstetrics training, quantitatively assessing the acquisition and transfer of skill in manipulating obstetric forceps. The results show that multi-scale correlations of intrinsic signatures track and evaluate gesture differences between experts and trainees.
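A minimal sketch of the quaternion machinery mentioned above, under the usual conventions (unit quaternions (w, x, y, z), geodesic distance on the rotation group); the exact projection used in the dissertation is not reproduced here.

```python
# Sketch: quaternion invariant distance and a log-map that turns a unit-
# quaternion sequence into a 3D Euclidean curve; illustrative conventions,
# not the dissertation's exact projection.
import numpy as np

def quat_distance(q1, q2):
    """Invariant geodesic distance on the rotation group, in radians."""
    dot = abs(np.dot(q1, q2))          # |<q1,q2>| handles the q/-q ambiguity
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def quat_log(q):
    """Log-map of a unit quaternion (w, x, y, z) to a vector in R^3."""
    w, v = q[0], q[1:]
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:                 # identity rotation maps to the origin
        return np.zeros(3)
    angle = 2.0 * np.arctan2(norm_v, w)
    return (angle / norm_v) * v

# Map an orientation sequence to a 3D curve; its intrinsic signatures can
# then be correlated between gestures, as in the dissertation.
rng = np.random.default_rng(2)
quats = rng.standard_normal((50, 4))
quats /= np.linalg.norm(quats, axis=1, keepdims=True)
curve = np.array([quat_log(q) for q in quats])
print(curve.shape, quat_distance(quats[0], quats[1]))
```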
425

Changes in the distribution of deaths by age: a nonparametric approach to the study of adult mortality

Ouellette, Nadine 03 1900 (has links)
Over the course of the last century, mortality declined remarkably in all regions of the world, particularly in developed countries. This decline was accompanied by fundamental changes in the age profile of mortality: deaths no longer occur mainly at very young ages but rather above age 65. Our research focuses on monitoring and understanding historical changes in the age-at-death distribution among the elderly. We propose a new, flexible nonparametric smoothing approach based on P-splines that leads to a detailed representation of mortality, as described by the observed data. The results are presented in three scientific papers, which rest on reliable data from the Human Mortality Database, the Canadian Human Mortality Database, and the Registre de la population du Québec ancien. Findings from the first paper suggest that some low-mortality countries may recently have reached the end of the old-age mortality compression era, during which deaths among the elderly tend to concentrate in a progressively shorter age interval. Since the early 1990s in Japan, the modal age at death has continued to increase while the variability of ages at death above the mode has stopped declining: the distribution of ages at death at older ages has been shifting toward higher ages without changing its shape. In France and Canada, women have shown similar developments since the early 2000s, whereas men are still clearly engaged in an old-age mortality compression regime. In the USA, the picture for the latest decade is worrying: for several consecutive years, both women and men recorded substantial declines in their modal age at death, the most common age at death among adults. The second paper looks within national boundaries and examines provincial adult mortality differentials in Canada between 1930 and 2007. Smoothed mortality surfaces reveal that provincial disparities among adults in general, and among the elderly in particular, are substantial and deserve close monitoring. More specifically, based on time trends in the modal age at death and in the standard deviation of ages at death above the mode, provincial disparities at older ages barely narrowed during the period studied, despite the great mortality improvements recorded in all provinces since the early twentieth century. We also find that it is precisely women from the Western and Central provinces who appear to have moved beyond the old-age mortality compression era in Canada. The last paper focuses on adult longevity in eighteenth-century Quebec and provides new insight into the most common adult age at death at the time. Our analysis reveals that the modal age at death increased among French-Canadian adults between 1740-1754 and 1785-1799: from about 73 to almost 76 years among women, and from about 70 to 74 years among men. The particular living conditions of the French-Canadian population at the time could explain this increase.
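A minimal sketch of P-spline smoothing in the sense described above (a B-spline basis plus a difference penalty on the coefficients), assuming a Gaussian rather than the Poisson likelihood a mortality application would normally use; it smooths simulated death counts and reads off the modal age at death. It assumes scipy ≥ 1.8 for BSpline.design_matrix; the knot count and penalty weight are illustrative.

```python
# Sketch: P-spline smoothing of an age-at-death distribution and the modal
# age at death; illustrative Gaussian fit, not the thesis's exact estimator.
import numpy as np
from scipy.interpolate import BSpline

ages = np.arange(50, 111)
rng = np.random.default_rng(3)
true = np.exp(-0.5 * ((ages - 85) / 8.0) ** 2)   # bell-shaped deaths curve
deaths = rng.poisson(1000 * true / true.sum())   # noisy death counts

k, n_knots = 3, 20                               # cubic B-splines
knots = np.linspace(ages.min(), ages.max(), n_knots)
t = np.concatenate([[knots[0]] * k, knots, [knots[-1]] * k])
B = BSpline.design_matrix(ages, t, k).toarray()  # scipy >= 1.8

D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # 2nd-order differences
lam = 10.0                                       # penalty weight (tuning)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ deaths)

fine = np.linspace(ages.min(), ages.max(), 2001)
Bf = BSpline.design_matrix(fine, t, k).toarray()
modal_age = fine[np.argmax(Bf @ coef)]           # modal age at death
print(f"estimated modal age at death: {modal_age:.1f}")
```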
426

Analysis techniques for controlling electric power for high-frequency data: application to load forecasting

Julio Cesar Siqueira 08 January 2014 (has links)
The objective of this study is to develop a statistical algorithm to forecast the power transmitted by the Linhares thermoelectric plant, located in the state of Espírito Santo, as measured at the entry point of the regional utility grid, to be integrated into a platform built around a real-time supervisory system running on MS Windows. To this end, ARIMA(p,d,q) models, regression on orthogonal polynomials, and exponential smoothing techniques were compared to identify the approach best suited to producing forecasts five steps ahead. The data are observations recorded every 5 minutes; the target, however, is to produce these forecasts for observations recorded every 5 seconds. The estimated residuals of the fitted model were analyzed with control charts to check the stability of the process. The forecasts will be used to support the plant operators' decisions in real time, so that the 200,000 kW limit is not exceeded for more than fifteen minutes.
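A minimal sketch of the model comparison described above, assuming a simulated load series: fit ARIMA and exponential smoothing with statsmodels, compare 5-step-ahead RMSE, and screen the ARIMA residuals with 3-sigma control limits. The series, ARIMA order, and limits are illustrative.

```python
# Sketch: ARIMA vs. exponential smoothing for 5-step-ahead load forecasts,
# then a residual control chart; all numbers are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
t = np.arange(2000)
load = 180_000 + 5_000 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 1_500, t.size)
train, test = load[:-5], load[-5:]

arima_res = ARIMA(train, order=(2, 1, 1)).fit()
arima_fc = arima_res.forecast(steps=5)
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(5)

for name, fc in [("ARIMA", arima_fc), ("ExpSmooth", es_fc)]:
    rmse = np.sqrt(np.mean((fc - test) ** 2))
    print(f"{name}: 5-step RMSE = {rmse:,.0f} kW")

# Residual control chart: flag points outside mean +/- 3 standard deviations.
resid = arima_res.resid
mu, sigma = resid.mean(), resid.std()
out_of_control = np.flatnonzero(np.abs(resid - mu) > 3 * sigma)
print(f"{out_of_control.size} residuals outside the 3-sigma limits")
```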
427

Geostatistical co-estimation methods: a study of the effect of correlation between variables on the precision of results

Watanabe, Jorge 29 February 2008 (has links)
This master's dissertation presents the results of an investigation into the co-estimation methods commonly used in geostatistics: ordinary cokriging, collocated cokriging, and kriging with an external drift. Ordinary kriging was also considered, simply to illustrate how that method behaves when the primary variable is poorly sampled. Co-estimation methods depend on a secondary variable sampled over the estimation domain; moreover, this secondary variable should be linearly correlated with the main, or primary, variable. Usually the primary variable is poorly sampled whereas the secondary variable is known over the whole estimation domain. In oil exploration, for instance, the primary variable is porosity as measured on rock samples gathered from drill cores and the secondary variable is seismic amplitude derived from processing seismic reflection data. Primary and secondary variables must present some degree of correlation, but how the methods perform depending on the correlation coefficient is the open question. We therefore tested the co-estimation methods on several data sets presenting different degrees of correlation, generated in the computer by data-transformation algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588, and 0.461. Collocated cokriging was the best of all the methods tested. This method has an internal filter, applied when computing the weight of the secondary variable, which in turn depends on the correlation coefficient: the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation between the primary and secondary variables is low, which is the most striking result of this research.
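A minimal sketch of collocated simple cokriging under a Markov-type screening assumption C12(h) = ρ·C11(h) with standardized variables (the dissertation's exact variogram models are not reproduced): it shows the weight attached to the collocated secondary datum rising with the correlation coefficient ρ, the behavior the abstract describes.

```python
# Sketch: collocated simple cokriging with standardized variables and a
# Markov-type model C12(h) = rho * C11(h); the secondary-variable weight
# grows with rho. Covariance model and data locations are illustrative.
import numpy as np

def cov(h, a=50.0):
    """Exponential covariance, unit sill, practical range ~3a."""
    return np.exp(-np.asarray(h, dtype=float) / a)

prim_xy = np.array([[0.0, 0.0], [60.0, 0.0], [0.0, 80.0]])  # primary samples
target = np.array([20.0, 20.0])                             # estimation point

def collocated_weights(rho):
    n = len(prim_xy)
    d = np.linalg.norm(prim_xy[:, None] - prim_xy[None, :], axis=2)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(d)                      # primary-primary covariances
    d0 = np.linalg.norm(prim_xy - target, axis=1)
    K[:n, n] = K[n, :n] = rho * cov(d0)     # primary vs. collocated secondary
    K[n, n] = 1.0                           # secondary variance (standardized)
    rhs = np.concatenate([cov(d0), [rho]])  # covariances to the target
    return np.linalg.solve(K, rhs)          # last entry = secondary weight

for rho in [0.461, 0.752, 0.993]:
    w = collocated_weights(rho)
    print(f"rho={rho:.3f}: secondary weight = {w[-1]:+.3f}")
```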
429

Advancing Optimal Control Theory Using Trigonometry For Solving Complex Aerospace Problems

Kshitij Mall 17 January 2019 (has links)
Optimal control theory (OCT) has existed since the 1950s. With the advent of modern computers, however, the design community largely delegated the solution of optimal control problems (OCPs) to computationally intensive direct methods rather than methods based on OCT. Recent work has shown that solvers using OCT can leverage parallel computing resources for faster execution, and the need for near-real-time, high-quality solutions to OCPs has renewed the design community's interest in OCT. Certain challenges still prohibit its use on complex practical aerospace problems, such as landing human-class payloads safely on Mars.

To advance OCT, this thesis introduces the Epsilon-Trig regularization method, which simply and efficiently solves bang-bang and singular control problems and resolves the issues of the traditional smoothing regularization method. Benchmark problems from the literature, including the Van der Pol oscillator, the boat problem, and the Goddard rocket problem, were used to verify and validate the Epsilon-Trig method against GPOPS-II.

This study also develops the use of trigonometry for incorporating control bounds and mixed state-control constraints into OCPs, a technique termed Trigonometrization. Results from the literature and GPOPS-II verified and validated the technique on benchmark OCPs. Unlike traditional OCT, Trigonometrization converts a constrained OCP into a two-point boundary value problem rather than a multi-point boundary value problem, significantly reducing the computational effort required to formulate and solve it. This work uses Trigonometrization to solve several complex aerospace problems, including prompt global strike, noise minimization for general aviation, the shuttle re-entry problem, and the g-load-constrained impactor problem. Future work includes extending the Trigonometrization technique to OCPs with pure state constraints.
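A minimal sketch of the trigonometric substitution idea, as a generic illustration rather than the thesis's exact Epsilon-Trig formulation: a bounded control u ∈ [u_min, u_max] is replaced by u = (u_max+u_min)/2 + ((u_max−u_min)/2)·sin θ with θ unconstrained, here inside a crude direct transcription of a double-integrator problem (the thesis applies the substitution within indirect, OCT-based solvers).

```python
# Sketch: enforcing a control bound via the substitution
#   u = (umax+umin)/2 + (umax-umin)/2 * sin(theta)
# inside a crude direct transcription of a double-integrator problem.
# Generic illustration, not the thesis's Epsilon-Trig method.
import numpy as np
from scipy.optimize import minimize

N, T = 40, 2.0
dt = T / N
umin, umax = -1.0, 1.0

def simulate(theta):
    u = (umax + umin) / 2 + (umax - umin) / 2 * np.sin(theta)  # always in bounds
    x = v = 0.0
    for uk in u:                                  # forward Euler integration
        x += v * dt
        v += uk * dt
    return u, x, v

def objective(theta):
    u, x, v = simulate(theta)
    effort = np.sum(u**2) * dt                    # control energy
    penalty = 1e3 * ((x - 1.0)**2 + v**2)         # reach x(T)=1, v(T)=0
    return effort + penalty

res = minimize(objective, np.zeros(N), method="BFGS")  # theta is unconstrained
u_opt, x_T, v_T = simulate(res.x)
print(f"x(T)={x_T:.3f}, v(T)={v_T:.3f}, max|u|={np.abs(u_opt).max():.3f}")
```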
430

Differential mortality at adult and advanced ages by linguistic group in Quebec: a follow-up study over the period 1991-2011

Ah-kion, Cecilia 04 1900 (has links)
No description available.
