271

Stabilisation polynomiale et contrôlabilité exacte des équations des ondes par des contrôles indirects et dynamiques / Polynomial stability and exact controllability of wave equations with indirect and dynamic control

Toufayli, Laila 18 January 2013
La thèse porte essentiellement sur la stabilisation et la contrôlabilité de deux équations des ondes moyennant un seul contrôle agissant sur le bord du domaine. Dans le cas du contrôle dynamique, le contrôle est introduit dans le système par une équation différentielle agissant sur le bord. C'est en effet un système hybride. Le contrôle peut aussi être appliqué directement sur le bord d'une équation ; c'est le cas du contrôle indirect mais non borné. La nature du système ainsi couplé dépend du couplage des équations, et ceci donne divers résultats pour la stabilisation (exponentielle et polynomiale) et la contrôlabilité exacte (espace contrôlable). De nouvelles inégalités d'énergie permettent de mettre en oeuvre la Méthode fréquentielle et la Méthode d'Unicité de Hilbert. / This thesis is concerned with the stabilization and the exact controllability of two wave equations by means of only one control acting on the boundary of the domain. In the case of dynamic control, the control is introduced into the system by a differential equation acting on the boundary; it is indeed a hybrid system. The control can also be applied directly on the boundary of one of the equations; in this case, the control is indirect but unbounded. The behavior of the resulting system depends on the way the equations are coupled. Various results are established for the stabilization (exponential or polynomial) and the exact controllability (the controllable space of initial data). New energy inequalities make it possible to apply the frequency domain method and the Hilbert Uniqueness Method.
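As a schematic illustration of the hybrid structure mentioned above (a PDE coupled with an ODE on the boundary), and not the specific system studied in the thesis, one can consider a one-dimensional wave equation with a hypothetical dynamic boundary control:

```latex
% Schematic example (illustrative assumption, not the thesis's system):
% a 1-d wave equation whose boundary condition at x = 1 is driven by an ODE.
\begin{align*}
  & u_{tt}(x,t) - u_{xx}(x,t) = 0,   && 0 < x < 1,\; t > 0,\\
  & u(0,t) = 0,                      && t > 0,\\
  & u_x(1,t) = -\eta(t),             && t > 0, \quad\text{(boundary driven by the control variable)}\\
  & \eta'(t) = u_t(1,t) - \eta(t),   && t > 0. \quad\text{(dynamic control: ODE acting on the boundary)}
\end{align*}
% The natural energy
%   E(t) = (1/2) \int_0^1 (u_t^2 + u_x^2)\,dx + (1/2)\,\eta(t)^2
% satisfies E'(t) = -\eta(t)^2 \le 0, and the stabilization question is whether
% E(t) decays exponentially or only polynomially to zero.
```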
272

Modélisation multivariée hétéroscédastique et transmission financière / Multivariate heteroskedastic modelling and financial transmission

Sanhaji, Bilel 02 December 2014
Cette thèse de doctorat composée de trois chapitres contribue au développement de tests statistiques et à l'analyse de la transmission financière dans un cadre multivarié hétéroscédastique. Le premier chapitre propose deux tests du multiplicateur de Lagrange de constance des corrélations conditionnelles dans les modèles GARCH multivariés. Si l'hypothèse nulle repose sur des corrélations conditionnelles constantes, l'hypothèse alternative propose une première spécification basée sur des réseaux de neurones artificiels et une seconde représentée par une forme fonctionnelle inconnue qui est linéarisée à l'aide d'un développement de Taylor. Dans le deuxième chapitre, un nouveau modèle est introduit dans le but de tester la non-linéarité des (co)variances conditionnelles. Si l'hypothèse nulle repose sur une fonction linéaire des innovations retardées au carré et des (co)variances conditionnelles, l'hypothèse alternative se caractérise quant à elle par une fonction de transition non-linéaire : exponentielle ou logistique ; une configuration avec effets de levier est également proposée. Dans les deux premiers chapitres, les expériences de simulation et les illustrations empiriques montrent les bonnes performances de nos tests de mauvaise spécification. Le dernier chapitre étudie la transmission d'information en séance et hors séance de cotation en termes de rendements et de volatilités entre la Chine, l'Amérique et l'Europe. Le problème d'asynchronicité est considéré avec soin dans la modélisation bivariée avec la Chine comme référence. / This Ph.D. thesis, composed of three chapters, contributes to the development of test statistics and to the analysis of financial transmission in a multivariate heteroskedastic framework. The first chapter proposes two Lagrange multiplier tests of constancy of conditional correlations in multivariate GARCH models. While the null hypothesis is based on constant conditional correlations, the alternative hypothesis proposes a first specification based on artificial neural networks, and a second specification based on an unknown functional form linearised by a Taylor expansion. In the second chapter, a new model is introduced in order to test for nonlinearity in conditional (co)variances. While the null hypothesis is based on a linear function of the lagged squared innovations and the conditional (co)variances, the alternative hypothesis is characterised by a nonlinear exponential or logistic transition function; a configuration with leverage effects is also proposed. In the first two chapters, simulation experiments and empirical illustrations show the good performance of our misspecification tests. The last chapter studies daytime and overnight information transmission in terms of returns and volatilities between China, America and Europe. The asynchronicity issue is carefully considered in the bivariate modelling with China as the benchmark.
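As background for the first chapter's null hypothesis, here is a minimal, self-contained sketch (with arbitrary illustrative parameters, not taken from the thesis) of a bivariate GARCH(1,1) process with constant conditional correlation:

```python
import numpy as np

# Minimal sketch: simulate a bivariate GARCH(1,1) with a *constant* conditional
# correlation (the null hypothesis of the constancy tests described above).
# Parameter values are arbitrary and chosen only for illustration.
rng = np.random.default_rng(0)
T = 1000
omega, alpha, beta = np.array([0.05, 0.08]), np.array([0.06, 0.09]), np.array([0.90, 0.88])
rho = 0.4                                        # constant conditional correlation
R = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(R)

h = omega / (1.0 - alpha - beta)                 # start at the unconditional variances
eps = np.zeros((T, 2))
for t in range(T):
    z = L @ rng.standard_normal(2)               # correlated standard innovations
    eps[t] = np.sqrt(h) * z                      # returns with conditional variance h_t
    h = omega + alpha * eps[t] ** 2 + beta * h   # GARCH(1,1) variance recursion

# Under the null, the correlation of standardized residuals is roughly rho in every
# subsample; a time-varying estimate would point towards the alternative.
print(np.corrcoef(eps.T))
```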
273

Étude théorique et numérique de la stabilité de certains systèmes distribués avec contrôle frontière de type dynamique / Theoretical and numerical study of the stability of some distributed systems with dynamic boundary control

Sammoury, Mohamad Ali 08 December 2016
Cette thèse est consacrée à l'étude de la stabilisation de certains systèmes distribués avec contrôle frontière de type dynamique. Nous considérons, d'abord, la stabilisation de l'équation de la poutre de Rayleigh avec un seul contrôle frontière dynamique, moment ou force. Nous montrons que le système n'est pas uniformément (autrement dit exponentiellement) stable ; mais par une méthode spectrale, nous établissons le taux polynomial optimal de décroissance de l'énergie du système. Ensuite, nous étudions la stabilisation indirecte de l'équation des ondes avec un amortissement frontière de type dynamique fractionnel. Nous montrons que le taux de décroissance de l'énergie dépend de la nature géométrique du domaine. En utilisant la méthode fréquentielle et une méthode spectrale, nous montrons la non-stabilité exponentielle et nous établissons plusieurs résultats de stabilité polynomiale. Enfin, nous considérons l'approximation de l'équation des ondes mono-dimensionnelle avec un seul amortissement frontière de type dynamique par un schéma de différences finies. Par une méthode spectrale, nous montrons que l'énergie discrétisée ne décroît pas uniformément (par rapport au pas du maillage) polynomialement vers zéro, comme le fait l'énergie du système continu. Nous introduisons alors un terme de viscosité numérique et nous montrons la décroissance polynomiale uniforme de l'énergie de notre schéma discret avec ce terme de viscosité. / This thesis is devoted to the study of the stabilization of some distributed systems with dynamic boundary control. First, we consider the stabilization of the Rayleigh beam equation with only one dynamic boundary control, moment or force. We show that the system is not uniformly (exponentially) stable; however, using a spectral method, we establish the optimal polynomial decay rate of the energy of the system. Next, we study the indirect stability of the wave equation with a fractional dynamic boundary control. We show that the decay rate of the energy depends on the geometry of the domain. Using a frequency-domain approach and a spectral method, we show the non-exponential stability of the system and we establish several polynomial stability results. Finally, we consider the finite difference space discretization of the 1-d wave equation with dynamic boundary control. Using a spectral approach, we first show that the discretized energy does not decay polynomially to zero uniformly with respect to the mesh size, as the energy of the continuous system does. We then introduce a numerical viscosity term and establish the uniform (with respect to the mesh size) polynomial decay of the energy of our discrete scheme.
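To make the last part concrete, here is a minimal sketch of a finite-difference semi-discretization of a 1-d wave equation with a dynamic boundary control and an added numerical viscosity term; the model equations, the form of the viscosity term (h^2 times the discrete Laplacian of the velocity) and all parameter values are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (not the thesis's exact scheme): semi-discretization of
# u_tt = u_xx on (0,1) with u(0,t) = 0, a dynamic boundary control
# u_x(1,t) = -eta(t), eta'(t) = u_t(1,t) - eta(t), plus an assumed numerical
# viscosity term of the commonly used form h^2 * Delta_h u_t.
N = 50
h = 1.0 / N

def laplacian(u, eta):
    """Discrete Laplacian with u(0)=0 and a ghost node enforcing u_x(1) = -eta."""
    up = np.empty(len(u) + 2)
    up[1:-1] = u
    up[0] = 0.0                      # Dirichlet end at x = 0
    up[-1] = u[-2] - 2.0 * h * eta   # ghost node from the boundary condition at x = 1
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2

def rhs(t, y):
    u, v, eta = y[:N], y[N:2 * N], y[-1]
    du = v
    dv = laplacian(u, eta) + h**2 * laplacian(v, 0.0)   # wave equation + viscosity
    deta = v[-1] - eta                                   # boundary ODE (dynamic control)
    return np.concatenate([du, dv, [deta]])

# Initial data: a smooth bump, zero velocity, zero control state.
x = np.linspace(h, 1.0, N)
y0 = np.concatenate([np.sin(np.pi * x), np.zeros(N), [0.0]])
sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=h / 2)

# Discrete energy: 1/2 * h * sum(v^2 + (difference quotients of u)^2) + 1/2 * eta^2.
u_end, v_end, eta_end = sol.y[:N, -1], sol.y[N:2 * N, -1], sol.y[-1, -1]
ux = np.diff(np.concatenate([[0.0], u_end])) / h
E = 0.5 * h * np.sum(v_end**2 + ux**2) + 0.5 * eta_end**2
print("discrete energy at t = 20:", E)
```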
274

Diagnostika vysokonapěťových kondenzátorů pro kaskádní napěťový násobič / Diagnosis of high voltage capacitors for cascade voltage multiplier

Baev, Dmitriy January 2017
The main subject of this final thesis is to find a suitable method for measuring partial discharge (PD) in the dielectric of a high-voltage capacitor. The theoretical part covers the mechanisms of origin and the harmful effects of partial discharge in the high-voltage insulation of a capacitor. It describes the global (galvanic) method of partial discharge measurement, the principle of the cascade voltage multiplier and its main components (high-voltage capacitors and diodes), the facilities for measuring the quality of capacitors for voltage multipliers with their advantages and disadvantages, and the principles of the HIPOTRONICS DDX-8003 system with pulse discrimination. The experimental part of the diploma thesis deals with the diagnostics of high-voltage capacitors by means of laboratory measurements on an electronic bridge and with a partial discharge measurement system. A suitable electrode arrangement is designed that eliminates the influence of corona, which would otherwise make it impossible to measure partial discharges and the dissipation factor (tg δ). The measured data are analysed to determine the quality level and possible degradation of the measured capacitors. The outcome of this project is a proposed methodology for finding poor-quality capacitors in order to increase the reliability of the voltage multiplier.
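As a small worked example of the dissipation factor mentioned above (all values are hypothetical, not measurements from the thesis), for a capacitor modelled as an ideal capacitance in series with an equivalent series resistance:

```python
import math

# Hypothetical worked example: for a series R-C model of a capacitor,
#   tan(delta) = ESR * omega * C.
# The values below are illustrative only.
C = 1.0e-6        # capacitance [F]
ESR = 0.5         # equivalent series resistance [ohm]
f = 50.0          # measurement frequency [Hz]

omega = 2.0 * math.pi * f
tan_delta = ESR * omega * C
delta_deg = math.degrees(math.atan(tan_delta))
print(f"tan(delta) = {tan_delta:.2e}  (loss angle ~ {delta_deg:.4f} deg)")
```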
275

Diagnostika vysokonapěťových kondenzátorů pro kaskádní napěťový násobič / Diagnosis of high voltage capacitors for cascade voltage multiplier

Baev, Dmitriy January 2017
The main subject of this final thesis is to find a suitable method for measuring partial discharge (PD) in the dielectric of a high-voltage capacitor. The theoretical part covers the mechanisms of origin and the harmful effects of partial discharge in the high-voltage insulation of a capacitor. It describes the global (galvanic) method of partial discharge measurement and the principle of the cascade voltage multiplier with its main components (high-voltage capacitors and diodes), and presents the current possibilities for measuring the quality of capacitors for voltage multipliers. The experimental part of the diploma thesis deals with the diagnostics of high-voltage capacitors by means of laboratory measurements on an electronic bridge and with a partial discharge measurement system. It describes the design of a suitable electrode arrangement that eliminates the influence of corona, which would otherwise make it impossible to measure partial discharges and the dissipation factor tg δ. The thesis analyses the data obtained from the measurements and determines the quality level or the degradation of the measured capacitors. The result of the project is a proposed methodology for finding poor-quality capacitors in order to increase the reliability of the voltage multiplier.
276

Frekvenční syntezátor pro mikrovlnné komunikační systémy / Frequency synthesizer for microwave communication systems

Klapil, Filip January 2020
The main aim of the thesis is to develop a frequency synthesizer for microwave communication systems. Specifically, it presents a design of a frequency synthesizer based on a phase-locked loop. At the beginning of the thesis, the principle and basic properties of this method of signal generation are explained, followed by a brief discussion of synthesizer parameters and their influence on the design. The next part of the work analyses the frequency synthesizer circuit built around the MAX2871 phase-locked loop, which is followed by the hardware design of the frequency synthesizer module. The last part of the work deals with the practical implementation, verification of the function, measurement of the achieved parameters and their evaluation.
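As background, the following is a minimal sketch of the generic fractional-N relation on which PLL synthesizers of this class are based; the exact register semantics of the MAX2871 must be taken from its datasheet, and the numeric values here are illustrative assumptions.

```python
# Minimal sketch of the generic fractional-N PLL frequency relation:
#   f_vco = f_pfd * (N + F/M)        (integer part N, fractional part F/M)
#   f_out = f_vco / output_divider
# This is the textbook relation for fractional-N synthesizers such as the MAX2871;
# exact register semantics must be checked against the device datasheet.
# All numeric values below are illustrative assumptions.

def synth_output_frequency(f_ref_hz: float, r_div: int, n_int: int,
                           frac: int, mod: int, out_div: int) -> float:
    """Return the output frequency of a generic fractional-N PLL synthesizer."""
    f_pfd = f_ref_hz / r_div                  # phase-frequency detector comparison frequency
    f_vco = f_pfd * (n_int + frac / mod)      # VCO frequency set by the feedback divider
    return f_vco / out_div                    # final output after the output divider

# Example: 50 MHz reference, R = 1, N = 92, F/M = 16/125, output divider = 2
f_out = synth_output_frequency(50e6, 1, 92, 16, 125, 2)
print(f"f_out = {f_out / 1e9:.6f} GHz")
```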
277

Essays on business taxation

Zeida, Teega-Wendé Hervé 09 1900
Cette thèse explore les effets macroéconomiques et distributionnels de la taxation dans l'économie américaine. Les trois premiers chapitres prennent en considération l'interaction entre l'entrepreneuriat et la distribution de richesse tandis que le dernier discute l'arbitrage du mode de financement d'une diminution d'impôt sur les sociétés sous la contrainte de neutralité fiscale pour le gouvernement. Spécifiquement, le chapitre 1, en utilisant les données du Panel Study of Income Dynamics (PSID), fournit des évidences selon lesquelles le capital humain ou l'expérience entrepreneuriale est quantitativement important pour expliquer les disparités de revenu et de richesse entre les individus au cours de leur cycle de vie. Pour saisir ces tendances, je considère le modèle d'entrepreneuriat de Cagetti et De Nardi (2006), modifié pour prendre en compte la dynamique du cycle de vie. J'introduis également l'accumulation de l'expérience entrepreneuriale, laquelle rend les entrepreneurs plus productifs. Je calibre ensuite deux versions du modèle (avec et sans accumulation d'expérience d'entreprise) en fonction des mêmes données américaines. Les résultats montrent que le modèle avec accumulation d'expérience réplique le mieux les données. La question de recherche du chapitre 2 est opportune à la réforme fiscale récente adoptée aux États-Unis, laquelle est un changement majeur du code fiscal depuis la loi de réforme fiscale de 1986. Le Tax Cuts and Jobs Act (TCJA) voté en décembre 2017 a significativement changé la manière dont le revenu d'affaires est imposé aux États-Unis. Je considère alors le modèle d'équilibre général dynamique avec choix d'occupations développé au chapitre 1 pour une évaluation quantitative des effets macroéconomiques du TCJA, tant dans le court terme que dans le long terme. Le TCJA est modélisé selon ses trois provisions clés : un nouveau taux de déduction de 20% pour les firmes non-incorporées, une baisse du taux fiscal statutaire pour les sociétés incorporées de 35% à 21% et la réduction de 39.6% à 37% du taux marginal supérieur pour les individus. Je trouve que l'économie connaît un taux de croissance du PIB de 0.90% sur une fenêtre fiscale de dix ans et le stock de capital augmente en moyenne de 2.12%. Ces résultats sont consistants avec les évaluations faites par le Congressional Budget Office et le Joint Committee on Taxation. Avec des provisions provisoires, le TCJA génère une réduction dans l'inégalité de la richesse et celle du revenu, mais l'opposé se réalise une fois que les provisions sont rendues permanentes. Dans les deux scénarios, la population subit une perte de bien-être et exprime un faible soutien. Le chapitre 3 répond à la question normative : les entrepreneurs devraient-ils être imposés différemment ? Par conséquent, j'analyse quantitativement la désirabilité d'une taxation basée sur l'occupation dans un modèle à générations imbriquées avec entrepreneuriat et une prise en compte explicite des cohortes transitionnelles. La réforme principale étudiée est le passage d'une taxation progressive fédérale identique tant pour les revenus du travail que pour le bénéfice d'entreprise au niveau individuel à un régime fiscal différentiel où le profit d'affaires fait face à un taux d'imposition proportionnel pendant que le revenu du travail est toujours soumis au code de taxation progressive. Je trouve qu'une taxe proportionnelle de 40% imposée aux entrepreneurs est optimale. Plus généralement, je montre que le taux d'imposition optimal varie entre 15% et 50%, augmentant avec l'aversion du planificateur pour les inégalités et diminuant avec son évaluation relative du bien-être des générations futures. Dans le contexte de la réforme de la fiscalité des entreprises, le chapitre 4 évalue les compromis de neutralité fiscale de revenu dans le financement d'une réduction de l'impôt des sociétés. Pour respecter la neutralité fiscale, le gouvernement utilise trois instruments pour équilibrer son budget, à savoir l'impôt sur le revenu du travail, les dividendes et les gains en capital. Je construis ensuite un modèle d'équilibre général parcimonieux pour obtenir des multiplicateurs budgétaires équilibrés associés à une réforme de l'impôt sur les sociétés. En utilisant une calibration standard de l'économie américaine, je montre que les multiplicateurs liés à l'impôt sur le revenu du travail et à l'impôt sur les dividendes sont négatifs, suggérant ainsi un compromis entre une réduction de l'impôt des sociétés et ces deux taux d'imposition. D'autre part, le multiplicateur lié à l'impôt sur les gains en capital est positif, ce qui prédit une coordination d'une double réduction des taux d'imposition des sociétés et des gains en capital. De plus, les gains de bien-être des différents scénarios sont mitigés. / This thesis explores the macroeconomic and distributional effects of taxation in the U.S. economy. The first three chapters consider the interplay between entrepreneurship and wealth distribution, while the last one discusses the trade-offs of financing a corporate tax cut under revenue neutrality. Specifically, Chapter 1 provides evidence using the Panel Study of Income Dynamics (PSID) that occupation-specific human capital or business experience is quantitatively important in explaining income and wealth disparities among individuals over their life cycle. To capture the data patterns, I build on the Cagetti and De Nardi (2006) occupational choice model, modified to feature life-cycle dynamics. I also introduce managerial skill accumulation, which leads entrepreneurs to become more productive with experience. I then calibrate two versions of the model (with and without accumulation of business experience) to the same U.S. data. Results show that the model with the business experience margin is the closest to the data. Chapter 2's research question is timely given the recent tax reform enacted in the US, which is a major change to the tax code since the 1986 Tax Reform Act. The Tax Cuts and Jobs Act (TCJA) of December 2017 significantly altered how business income is taxed in the US. I consider the dynamic general equilibrium model of entrepreneurship developed in Chapter 1 to provide a quantitative assessment of the macroeconomic effects of the TCJA, both in the short run and in the long run. The TCJA is modeled by its three key provisions: a new 20-percent deduction rate for pass-throughs, a drop in the statutory tax rate for corporations from 35% to 21%, and the reduction of the top marginal tax rate for individuals from 39.6% to 37%. I find that the economy experiences a GDP growth rate of 0.90% over a ten-year window and that the average capital stock increases by 2.12%. These results are consistent with estimates made by the Congressional Budget Office and the Joint Committee on Taxation. With temporary provisions, the TCJA delivers a reduction in wealth and income inequality, but the opposite occurs once the provisions are made permanent. In both scenarios, the population suffers a welfare loss and expresses little support for the reform. Chapter 3 answers the normative question: should entrepreneurs be taxed differently? Accordingly, I quantitatively investigate the desirability of occupation-based taxation in the entrepreneurship model of Chapter 1, when transitional cohorts are explicitly taken into account. The main experiment is to move from a single federal progressive tax schedule for both labor income and business profit at the individual level to a differential tax regime in which business income faces a proportional tax rate while labor income remains subject to the progressive scheme. I find that a proportional tax rate of 40% on entrepreneurs is optimal. More generally, the optimal tax rate varies between 15% and 50%, increasing with the planner's aversion to inequality and decreasing with its relative valuation of future generations' welfare. In the context of business tax reform, Chapter 4 assesses revenue-neutral trade-offs when financing a corporate tax cut. To meet revenue neutrality, the policymaker uses three instruments to balance the government budget, namely the labor income tax, the dividend tax, and the capital gains tax. I then construct a parsimonious general equilibrium model to derive balanced fiscal multipliers associated with corporate tax reform. Using a standard calibration of the US economy, I show that both the labor income tax and dividend tax multipliers are negative, suggesting a trade-off between a corporate tax cut and these two tax rates. On the other hand, the multiplier related to the capital gains tax is positive, which predicts the coordination of a double cut in both the corporate and capital gains tax rates. Moreover, the welfare gains of the different scenarios are mixed.
278

Adungované soustavy diferenciálních rovnic / Adjoint Differential Equations

Kmenta, Karel January 2007
This project deals with solving differential equations. The aim is to find a correct algorithm for transforming higher-order differential equations with time-variable coefficients into equivalent systems of first-order differential equations, then to verify its functionality for equations containing involutions of goniometric functions, and finally to implement the algorithm. The reason for this transformation is the requirement to solve these differential equations in the programme TKSL (Taylor Kunovský simulation language).
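As an illustration of the order-reduction described above (not the thesis's own algorithm), consider the hypothetical second-order equation y'' + t·y' + sin(t)·y = 0 with time-variable coefficients; introducing y1 = y and y2 = y' turns it into a first-order system that a Taylor-series or Runge-Kutta solver can integrate:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch of the standard order-reduction: the second-order equation
#     y'' + t*y' + sin(t)*y = 0        (hypothetical example with time-variable coefficients)
# becomes, with y1 = y and y2 = y', the first-order system
#     y1' = y2
#     y2' = -t*y2 - sin(t)*y1
def first_order_system(t, y):
    y1, y2 = y
    return [y2, -t * y2 - np.sin(t) * y1]

sol = solve_ivp(first_order_system, (0.0, 10.0), [1.0, 0.0], dense_output=True)
print(sol.y[0, -1])   # value of y at t = 10
```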
279

On the Measurement, Theory and Estimation of Fiscal Multipliers: A Contribution to Improve the Forecasting Precision Regarding the Impact of Fiscal Impulses

Gechert, Sebastian 16 July 2014
The study is intended to identify relevant channels and possibly biasing factors with respect to fiscal multipliers, and thus to contribute to improving the precision of multiplier forecasts. This is done, first, by defining the concept of the multiplier used in the present study, presenting the main theoretical channels of influence as discussed in the literature and the problems of empirical identification. Second, by conducting a meta-regression analysis on the reported multipliers from a unique data set of 1069 multiplier observations and the respective study characteristics in order to derive quantitative stylized facts. Third, by developing a simple multiplier model that explicitly takes into account the time elapsing in the multiplier process as an explanatory factor that has been largely overlooked by the relevant theoretical literature. Fourth, by identifying, for US macroeconomic time series data, the extent to which fiscal multiplier estimates could be biased in the presence of financial cycles that have not been taken into account by the relevant empirical literature.
Table of contents:
List of Figures IV
List of Tables VI
List of Acronyms VII
List of Symbols IX
1 General Introduction, Aim and Scope
2 Principles of the Measurement, Theory and Estimation of Fiscal Multipliers
2.1 Introduction 7
2.2 Definition and Measurement of the Fiscal Multiplier 7
2.3 Determinants of the Fiscal Multiplier 14
2.4 Principles of Estimating Fiscal Multipliers 29
2.5 Conclusions 38
3 A Meta-Regression Analysis of Fiscal Multipliers 43
3.1 Introduction 43
3.2 Literature Review 45
3.3 Data Set and Descriptive Statistics 49
3.4 Meta Regression—Method 54
3.5 Meta Regression—Moderator Variables 56
3.6 Meta Regression—Results 60
3.7 Conclusions 74
4 The Multiplier Principle, Credit-Money and Time 82
4.1 Introduction 82
4.2 Literature Review 85
4.3 Developing an Augmented Multiplier Model 89
4.4 Dynamic Stability of the Multiplier Process 106
4.5 Identifying the Lag-length 109
4.6 Conclusions 111
5 Financial Cycles and Fiscal Multiplier Estimations 114
5.1 Introduction 114
5.2 Literature Review 116
5.3 Asset and Credit Markets and Fiscal Multiplier Estimations 118
5.4 A Formal Framework 120
5.5 Empirical Strategy 124
5.6 Data 125
5.7 Structure and Identification 126
5.8 Effects of Fiscal Policy Changes—Baseline vs. Augmented Models 132
5.9 Robustness 140
5.10 Conclusions 142
6 General Conclusions and Research Prospects 148
Bibliography 153
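For readers unfamiliar with the concept, a standard textbook illustration of a fiscal multiplier (not the definition adopted in the thesis itself, which is developed in chapter 2):

```latex
% Textbook Keynesian expenditure multiplier (illustrative only):
% with marginal propensity to consume c, tax rate t and import share m,
% an exogenous spending impulse \Delta G raises income by
\[
  \Delta Y = \frac{1}{1 - c\,(1 - t) + m}\,\Delta G .
\]
% Example: c = 0.8, t = 0.25, m = 0.1  =>  multiplier = 1 / (1 - 0.6 + 0.1) = 2.
```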
280

Robust Subspace Estimation Using Low-rank Optimization. Theory And Applications In Scene Reconstruction, Video Denoising, And Activity Recognition.

Oreifej, Omar 01 January 2013
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the ℓ1 norm. The robust estimation has a two-fold advantage: First, the obtained basis better represents the actual subspace because it does not include contributions from the outliers. Second, the detected outliers are often of a specific interest in many applications, as we will show throughout this thesis. We demonstrate four different formulations and applications for low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves along with the imaging process are complicated and include several noise components; therefore, their results are not satisfactory. In contrast, we propose a novel approach which outperforms the state-of-the-art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage "sparsifies" the noise, and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high quality mean and a better structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage at which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms the state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present, in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of the nuclear norm, Frobenius norm, and ℓ1 norm. Our method is based on two observations: First, the turbulence causes dense and Gaussian noise, and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the ℓ1 norm. Second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, frame size, etc. In this extremely challenging task, high-level features have recently shown a promising direction as in [53, 129], where core low-level events referred to as concepts are annotated and modelled using a portion of the training data, then each event is described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. In order to address this problem, we propose a novel low-rank formulation, which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event, which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise, and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
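As a minimal illustration of the low-rank plus sparse split that underlies these formulations (a generic principal component pursuit sketch with assumed heuristic parameters, not the three-term model or the specific algorithms of the dissertation), the following alternates singular-value thresholding on the low-rank part with entrywise soft-thresholding on the sparse part:

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, n_iter=200):
    """Decompose M into low-rank L and sparse S (generic principal component pursuit sketch)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(M).mean()   # step-size heuristic (assumption)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)   # scaled dual variable of the ADMM-style splitting
    for _ in range(n_iter):
        # L-update: singular value thresholding of (M - S + Y)
        U, sig, Vt = np.linalg.svd(M - S + Y, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding of (M - L + Y)
        R = M - L + Y
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)
        # dual update enforcing the constraint M = L + S
        Y = Y + (M - L - S)
    return L, S

# Toy usage: a rank-1 matrix corrupted by a few large sparse outliers.
rng = np.random.default_rng(1)
L_true = np.outer(rng.standard_normal(60), rng.standard_normal(40))
S_true = np.zeros_like(L_true)
S_true[rng.random(L_true.shape) < 0.05] = 10.0
L_hat, S_hat = robust_pca(L_true + S_true)
print("relative recovery error:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```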
