About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Registration of multiple ToF camera point clouds

Hedlund, Tobias. January 2010.
Buildings, maps, objects and the like can be modeled using a computer or reconstructed in 3D from data captured by different kinds of cameras or laser scanners. This thesis concerns the latter. The recent improvements of Time-of-Flight (ToF) cameras have brought a number of new and interesting research areas to the surface. Registration of several ToF camera point clouds is one such area.

A literature study has been made to summarize the research done in the area over the last two decades. The most popular method for registering point clouds, namely the Iterative Closest Point (ICP) algorithm, has been studied. In addition, an error relaxation algorithm was implemented to minimize the accumulated error of the sequential pairwise ICP.

A few different real-world test scenarios and one scenario with synthetic data were constructed. These data sets were registered with varying outcomes. The camera poses obtained from the sequential ICP were improved by loop closing and error relaxation.

The results illustrate the importance of having good initial guesses for the relative transformations in order to obtain a correct model. Furthermore, the strengths and weaknesses of the sequential ICP and the utilized error relaxation method are shown.
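For context, the sketch below illustrates the basic point-to-point ICP loop the abstract refers to: repeatedly match each source point to its nearest neighbour in the target cloud, solve for the rigid transform that best aligns the matches, and stop when the residual no longer improves. It is a minimal illustration assuming NumPy/SciPy and (N, 3) point arrays, not the thesis's implementation, and it omits the loop closing and error relaxation steps.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, max_iter=50, tol=1e-6):
    """Point-to-point ICP: alternate nearest-neighbour matching and rigid alignment."""
    tree = cKDTree(target)
    src = source.copy()
    R_acc, t_acc = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)               # match each point to its nearest neighbour
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                       # apply the incremental transform
        R_acc, t_acc = R @ R_acc, R @ t_acc + t   # accumulate the total pose
        err = dist.mean()
        if abs(prev_err - err) < tol:             # stop when the residual stops improving
            break
        prev_err = err
    return R_acc, t_acc, src                      # pose estimate and aligned cloud
```

As the abstract notes, ICP of this kind only converges to the correct pose when the initial relative transformation is reasonably good; chaining such pairwise registrations is what accumulates the error that loop closing and error relaxation then reduce.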
2

Detecting change points in time series using the Bayesian approach with perfect simulation: a thesis presented for the degree of Master of Science in Statistics at the University of Canterbury, Christchurch, New Zealand

Richens, Andrew Stephen. 2008.
Thesis (M.Sc.), University of Canterbury, 2008. Typescript (photocopy), dated 8 February 2008. Includes bibliographical references (pp. 112-114). Also available via the World Wide Web.
3

Resource-efficient and fast Point-in-Time joins for Apache Spark: Optimization of time travel operations for the creation of machine learning training datasets

Pettersson, Axel. January 2022.
A common scenario when training modern machine learning models is to use past data to make predictions about the future. When working with multiple structured and time-labeled datasets, it has become increasingly common to use a join operator called the Point-in-Time join, or PIT join, to construct these datasets. The PIT join matches each entry from the left dataset with the entry of the right dataset whose recorded event time is closest to the left row's timestamp, out of all right entries whose event time occurred at or before the left event time. This feature has long been available only in time series data processing tools, but has recently received a new wave of attention due to the rising popularity of feature stores.

To perform such an operation on large amounts of data, data engineers commonly turn to large-scale data processing tools such as Apache Spark. However, Spark has no native implementation of this join, and the community has not reached a clear consensus on how it should be performed. This, together with previous implementations of the PIT join, raises the question: "How to perform fast and resource-efficient Point-in-Time joins in Apache Spark?"

To answer this question, three different algorithms for performing a PIT join in Spark were developed and compared in terms of resource consumption and execution time. The algorithms were benchmarked on generated datasets with varying physical partitioning and sorting structures. Furthermore, their scalability was tested by running them on Apache Spark clusters of varying sizes. The benchmarks showed that the best measurements were achieved by the Early Stop Sort-Merge Join, a modified version of the regular Sort-Merge Join native to Spark. The best-performing datasets were those sorted by timestamp and primary key, ascending or descending, using a suitable number of physical partitions; physical partitioning can improve results further, but the optimal number of partitions varies with the data itself. With the information gathered in this project, data engineers are provided with general guidelines for optimizing their data processing pipelines to perform more resource-efficient and faster PIT joins.
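To make the PIT join semantics concrete, here is a naive reference formulation in PySpark: keep every right row with the same key whose event time is at or before the left timestamp, then retain only the most recent one per left row. This is only a sketch of the matching rule, not the Early Stop Sort-Merge Join studied in the thesis; the column names ("id", "ts") and the "r_" prefix are illustrative assumptions, and the window-over-join formulation is deliberately simple rather than resource-efficient.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.getOrCreate()

def naive_pit_join(left, right, key="id", left_ts="ts", right_ts="ts"):
    """For each left row, attach the right row with the same key whose event time
    is the latest among those at or before the left row's timestamp."""
    # Prefix the right-hand columns so nothing collides after the join.
    right = right.select([F.col(c).alias(f"r_{c}") for c in right.columns])
    left = left.withColumn("_left_id", F.monotonically_increasing_id())

    joined = left.join(
        right,
        (left[key] == right[f"r_{key}"]) & (right[f"r_{right_ts}"] <= left[left_ts]),
        "left",
    )
    # Keep only the most recent matching right row per left row.
    w = Window.partitionBy("_left_id").orderBy(F.col(f"r_{right_ts}").desc())
    return (joined
            .withColumn("_rn", F.row_number().over(w))
            .filter(F.col("_rn") == 1)
            .drop("_rn", "_left_id"))
```

A sort-merge style implementation avoids materializing all candidate matches by scanning both sides in timestamp order and stopping early once a right row's event time passes the left timestamp, which is the idea behind the modified join the thesis found fastest.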
4

Risk analysis on a debt portfolio

Kheliouen, Mohamed Reda. 12 September 2018.
This thesis starts from the observation that a credit portfolio is subject to several risks, arising mainly from the credit quality of the borrower and from his behavior toward his credit lines (drawdown or prepayment). The observed risks turn out to be dynamic and to depend on various factors, both micro- and macroeconomic. Our goal is, on the one hand, to understand how these risks interact in order to manage them efficiently in the present and, on the other hand, to build a forward-looking view of these risks under changing economic conditions in order to allow pro-active management. To address these objectives, the research is organized around three axes, resulting in three chapters in the form of articles.

(i) Analysis of changes in credit ratings with respect to risk factors. The use of multi-factor migration models allowed us to reproduce stylized facts cited in the academic literature and to identify others. We also estimated the business cycle between 2006 and 2014, which captures the crises of 2008 and 2012.

(ii) Design of a cash-flow model that accounts for changes in borrowers' behavior under the influence of their micro- and macroeconomic environments. We demonstrate the influence of credit ratings, the business cycle, estimated recovery rates, and short-term interest rates on the utilization rates of a credit line. The model also provides risk measures such as Cash Flow-at-Risk and Stressed Cash Flow-at-Risk on credit portfolios using Monte Carlo simulations.

(iii) Discussion of the Willingness-to-Pay (WTP) of an ambiguity-neutral decision maker (DM) to reduce risk in the presence of ambiguity over probabilities. We show that introducing several (possibly correlated) sources of ambiguity modifies the welfare level of any risk-averse DM even when that DM is ambiguity-neutral.
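As an illustration of how Cash Flow-at-Risk can be read off Monte Carlo scenarios, the sketch below simulates toy portfolio cash flows and takes CFaR as the gap between the expected cash flow and a low quantile of the simulated distribution. The utilisation dynamics, default rate, and parameter values are invented for the example and are not the model estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def cash_flow_at_risk(cash_flows, alpha=0.95):
    """CFaR at level alpha: shortfall of the simulated cash flow below its mean
    that is not exceeded with probability alpha."""
    cf = np.asarray(cash_flows, dtype=float)
    return cf.mean() - np.quantile(cf, 1.0 - alpha)

# Toy scenario generator: utilisation of each credit line is driven by a common
# "business cycle" factor plus idiosyncratic noise; defaulted lines pay nothing.
n_scenarios, n_lines = 10_000, 100
commitment = rng.uniform(0.5, 2.0, size=n_lines)           # committed amounts per line
cycle = rng.normal(size=(n_scenarios, 1))                   # common macro factor
utilisation = 1.0 / (1.0 + np.exp(-(0.5 * cycle + rng.normal(size=(n_scenarios, n_lines)))))
default = rng.random((n_scenarios, n_lines)) < 0.02         # flat 2% default probability
cash_flows = (commitment * utilisation * (1 - default)).sum(axis=1)

print(f"Expected cash flow : {cash_flows.mean():.2f}")
print(f"CFaR (95%)         : {cash_flow_at_risk(cash_flows, 0.95):.2f}")
```

A stressed variant of the same measure would be obtained by re-running the simulation with the common factor fixed to an adverse value and recomputing the quantile.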
5

Law in 3-Dimensions

March 2013.
This project, overall, involves a theory of law as dimensions. Throughout the history of the study of law, many different theoretical paradigms have emerged, proffering different and competing ways to answer the question 'what is law?'. Traditionally, many of these paradigms have been at irreconcilable odds with one another. Notwithstanding this seeming reality, the goal of this project was to attempt to take three of the leading paradigms in legal theory and provide a way to explain how each might fit into a single coherent theory of law. I set out to accomplish this by drawing on the field of theoretical physics and that field's use of spatial dimensions in explaining various physical phenomena. By engaging in a dimensional analysis of law, I found that I was able to place each paradigm within its own dimension, with that dimension being defined by a specific element of time, and in doing so, much of the conflict between the paradigms came to be ameliorated.

The project has been divided into two main parts. PART I discusses the fundamentals of legal theory (Chapter 1) and the fundamentals of dimensions (Chapter 2). These fundamentals provide a foundation for the dimensional analysis of law that takes place throughout PART II. In Chapter 3, I argue that the three fundamental theses of Positivism coalesce with the 1st dimension of law, which is defined as law as it exists at any one point in time. From there, I argue in Chapter 4 that the 2nd dimension of law, being law as it exists between two points in time (i.e. when cases are adjudicated), is characterized by Pragmatism. I then turn, in Chapter 5, to argue that the 3rd dimension of law, being law as it exists from the very first point in legal time to the ever-changing present day, coalesces with the fundamental theses of Naturalism. Ultimately, then, I argue that a theory of law as dimensions, viewed through the vantage points of these specific elements of time, provides a more complete account of the nature of law.
