131

Vocation Clustering for Heavy-Duty Vehicles

Daniel Patrick Kobold Jr (9719936) 07 January 2021 (has links)
The identification of the vocation of an unknown heavy-duty vehicle is valuable to parts manufacturers, who may not otherwise have consistent access to this information. This study proposes a methodology for vocation identification based on clustering techniques. Two clustering algorithms are considered: K-Means and Expectation Maximization. These algorithms are used to first construct the operating profile of each vocation from a set of vehicles with known vocations. The vocation of an unknown vehicle is then determined using different assignment methods.

These methods fall under two main categories: one-versus-all and one-versus-one. The one-versus-all approach compares an unknown vehicle to all potential vocations. The one-versus-one approach compares the unknown vehicle to two vocations at a time in a tournament fashion. Two types of tournaments are investigated: round-robin and bracket. The accuracy and efficiency of each method are evaluated using the NREL FleetDNA dataset.

The study revealed that some vocations may have unique operating profiles and are therefore easily distinguishable from others. Other vocations, however, can have confounding profiles. This indicates that different vocations may benefit from profiles with varying numbers of clusters. Determining the optimal number of clusters for each vocation can not only improve the assignment accuracy but also enhance the computational efficiency of the application. The optimal number of clusters for each vocation is determined using both static and dynamic techniques. Static approaches refer to methods that are completed prior to training and may require multiple iterations. Dynamic techniques involve clusters being split or removed during training. The results show that the accuracy of dynamic techniques is comparable to that of static approaches while benefiting from reduced computational time.
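To make the assignment logic concrete, here is a minimal sketch of the one-versus-all idea using scikit-learn's K-Means. The vocation names, feature dimensions, and synthetic data are illustrative assumptions, not the FleetDNA features used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: per-vehicle feature vectors (e.g. speed/idle-time summaries),
# grouped by known vocation. Names and shapes are illustrative only.
rng = np.random.default_rng(0)
vocations = {
    "delivery": rng.normal(0.0, 1.0, size=(40, 5)),
    "refuse":   rng.normal(2.0, 1.0, size=(40, 5)),
    "transit":  rng.normal(-2.0, 1.0, size=(40, 5)),
}

# Build an operating profile per vocation: a set of K-Means cluster centers.
profiles = {
    name: KMeans(n_clusters=3, n_init=10, random_state=0).fit(data).cluster_centers_
    for name, data in vocations.items()
}

def assign_one_vs_all(vehicle, profiles):
    """One-versus-all: score the vehicle against every vocation profile and
    pick the vocation whose nearest cluster center is closest."""
    def score(centers):
        return np.linalg.norm(centers - vehicle, axis=1).min()
    return min(profiles, key=lambda name: score(profiles[name]))

unknown = rng.normal(2.1, 1.0, size=5)  # a vehicle that should look like "refuse"
print(assign_one_vs_all(unknown, profiles))
```

A one-versus-one tournament would instead call a pairwise comparison repeatedly, which is why the number of clusters per profile drives the overall computational cost.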
132

Performance Limits of Communication with Energy Harvesting

Znaidi, Mohamed Ridha 04 1900 (has links)
In energy harvesting communications, the transmitter has to adapt its transmission to the availability of the energy harvested during communication. The performance of the transmission depends on the channel conditions, which vary randomly due to mobility and environmental changes. In this work, we consider the problem of power allocation that accounts for the energy arrivals over time and the quality of the channel state information (CSI) available at the transmitter, with the aim of maximizing the throughput. Unlike previous work, the CSI at the transmitter is not perfect and may include estimation errors. We solve this problem subject to the energy harvesting constraints. Assuming perfect knowledge of the CSI at the receiver, we determine the optimal power policy for different models of the energy arrival process (offline and online). In particular, we obtain the power allocation scheme when the transmitter has either perfect CSI or no CSI. We also investigate the case of fading channels with imperfect CSI, which is of particular interest. Moreover, we study the asymptotic behavior of the communication system: specifically, we analyze the average throughput when the average recharge rate goes asymptotically to zero and when it is very high.
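As an illustration of the offline setting, the sketch below maximizes throughput under energy causality constraints, assuming perfect CSI. The energy arrivals, channel gains, and the use of CVXPY are assumptions made for illustration, not the thesis's formulation:

```python
import cvxpy as cp
import numpy as np

# Illustrative offline setting (all values hypothetical): energy arrivals E[t]
# are known in advance and channel power gains h[t] are perfectly known.
E = np.array([2.0, 0.5, 3.0, 1.0])   # harvested energy per slot
h = np.array([0.8, 1.2, 0.5, 1.5])   # channel power gains

p = cp.Variable(len(E), nonneg=True)
throughput = cp.sum(cp.log(1 + cp.multiply(h, p)))

# Energy causality: energy spent up to slot t cannot exceed energy harvested so far.
constraints = [cp.cumsum(p) <= np.cumsum(E)]

cp.Problem(cp.Maximize(throughput), constraints).solve()
print(p.value)  # optimal power per slot (a directional water-filling shape)
```

The online model, where future arrivals are unknown, would replace this one-shot convex program with a policy computed, for instance, by dynamic programming.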
133

Identifying municipalities most likely to contribute to an epidemic outbreak in Sweden using a human mobility network

Bridgwater, Alexander January 2021 (has links)
The importance of modelling the spreading of infectious diseases as part of a public health strategy has been highlighted by the ongoing coronavirus pandemic. This includes identifying the geographical areas or travel routes most likely to contribute to the spreading of an outbreak. These areas and routes can then be monitored as part of an early warning system, be targeted by intervention strategies, e.g. lockdowns, aiming to mitigate the spreading of the disease, or be a focus of vaccination campaigns. This thesis focuses on developing a network-based infection model between the municipalities of Sweden in order to identify the areas most likely to contribute to an epidemic. First, a human mobility model is constructed based on the well-known radiation model. Then a network-based SEIR compartmental model is employed to simulate epidemic outbreaks with various parameters. Finally, the influence maximization problem known from network science is adapted to identify the municipalities having the largest impact on the spreading of infectious diseases. The resulting super-spreading municipalities confirm the known fact that central, highly populated municipalities initially carry a greater risk than their neighbours. However, once these areas are targeted, the remaining identified nodes show a greater variety in geographical location than expected. Furthermore, a correlation can be seen between increased infection time and greater geographical variety, although more empirical data is required to support this claim. For further evaluation of the model, the mobility network was studied due to its central role in generating data for the model parameters. Commuting data in the Gothenburg region were compared to the estimations, showing overall good accuracy with major deviations in a few cases.
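For reference, the radiation model at the heart of the mobility construction has a closed form; the sketch below implements it with hypothetical population numbers (the Swedish municipality data are not reproduced here):

```python
import numpy as np

def radiation_flux(T_i, m, n, s_ij):
    """Expected commuter flow from i to j under the radiation model:
    T_ij = T_i * (m * n) / ((m + s_ij) * (m + n + s_ij)),
    where m and n are the origin/destination populations and s_ij is the
    population inside the circle of radius d(i, j) around i, excluding i and j."""
    return T_i * (m * n) / ((m + s_ij) * (m + n + s_ij))

# Hypothetical example: 10,000 commuters leave municipality i.
print(radiation_flux(T_i=10_000, m=50_000, n=20_000, s_ij=120_000))
```

Evaluating this flux for every ordered pair of municipalities yields the weighted mobility network on which the SEIR dynamics run.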
134

Transitions in new technology and market structure: applications and new methods for discrete choice model estimation

Wang, Shuang 06 November 2021 (has links)
My dissertation consists of three chapters that evaluate the social welfare effects of either antitrust policy or industrial transition, all using discrete choice model estimation as the front end for counterfactual analysis. In the first chapter, I investigate the economic impact of the merger that created the world's largest hotel chain, Marriott's acquisition of Starwood, thereby shedding light on the antitrust authorities' performance in protecting competitive markets for the benefit of consumers. Unlike traditional merger analysis, which focuses on the tradeoff between the upward pricing pressure and the cost synergy among the merging parties while fixing the market structure, I endogenize firms' entry decisions into an oligopoly price competition model. To tackle the associated multiple equilibria issue, I use moment inequality estimation and propose a novel lower probability bound that reduces the computational burden from exponential to linear in the number of players. The chapter also adds to the scant empirical evidence on post-merger cost synergy by showing that each additional affiliated hotel in the local market reduces a hotel's marginal cost by up to 2.3%. A comparison between the simulated with-merger and without-merger equilibria then indicates that this merger enhances social welfare. In particular, for those markets previously not profitable for any firm to enter, because of the post-merger cost saving, Marriott or Starwood would enter 6% - 24% of them, which provides a new perspective for merger reviews.

The second chapter, joint with Mingli Chen, Marc Rysman, and Krzysztof Wozniak, studies the determinants of the US payment system's shift from paper payment instruments, namely cash and check, to digital instruments, such as debit cards and credit cards. With a 5-year transaction-level panel dataset, for the first time in the literature, we can distinguish the short-term effects of transaction size from the long-term changes in households' preferences. To do so, we incorporate a household-product-quarter fixed effect into a multinomial logit model. We develop a new method based on the Minorization-Maximization (MM) algorithm to address the prohibitive computational challenge of estimating over one million fixed effects in such a nonlinear model. Results show that over a short horizon (within a quarter), the probability of using a card generally increases with transaction size but exhibits substantial household heterogeneity. Over the long horizon (the five-year period of the data), using the estimated household-product-quarter fixed effects, we decompose the increase in card usage into different channels and find that only a third of it is due to changes in household preferences. Another significant driver is households' entry into and exit from the sample.

In the third chapter, my coauthors Jacob LaRiviere, Aadharsh Kannan, and I explore the "death of distance" hypothesis with a novel anonymized customer-level dataset on demand for cloud computing, accounting for both spatial and price competition among public cloud providers. We introduce a mixed logit demand model of spatial competition estimable with detailed data from a single firm but only aggregate sales data from a second. We leverage the Expectation-Maximization (EM) algorithm to tackle the customer-level missing data problem for the second firm. Estimation results and counterfactuals show that standard spatial competition economics hold even when the distance relevant for cloud latency is trivial.
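To fix ideas on the second chapter's model (not its MM estimator, which is not reproduced here), the sketch below evaluates multinomial logit choice probabilities with an additive household-product fixed effect; the instrument names and coefficient values are hypothetical:

```python
import numpy as np

# Minimal sketch of a multinomial logit with additive fixed effects, assuming
# hypothetical data: a household chooses among J payment instruments; utility is
# alpha[j] (household-product fixed effect) + beta * log(transaction size).

def choice_probabilities(alpha, beta, log_size):
    """Logit choice probabilities over J instruments for one transaction."""
    u = alpha + beta * log_size             # utility of each instrument, shape (J,)
    u = u - u.max()                         # stabilize the softmax
    e = np.exp(u)
    return e / e.sum()

# Example: 3 instruments (cash, debit, credit) for one household-quarter,
# with fixed effects normalized so that cash = 0.
alpha = np.array([0.0, 0.4, -0.2])
for size in (5.0, 50.0, 500.0):
    print(size, choice_probabilities(alpha, beta=0.3, log_size=np.log(size)))
```

With over a million such alpha parameters across household-product-quarter cells, direct maximum likelihood is impractical, which is what motivates the MM-based method described in the abstract.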
135

Some Financial Applications of Backward Stochastic Differential Equations with Jumps: Utility, Investment, and Pricing

柏原, 聡, KASHIWABARA, Akira 23 March 2012 (has links)
Doctorate (Management) / 85 p. / Hitotsubashi University
136

Language competition: An economic theory of language learning and production

Wiese, Harald 04 June 2018 (has links)
This article employs game theory to contribute to sociolinguistics (or the economics of language). From both the synchronic and the diachronic perspective, we are interested in the conditions (of language learning and literary production) that make some languages dominate others. Two results are particularly noteworthy: (i) Translations have an ambiguous effect on domination. (ii) We offer three different explanations of how a past language like Latin or Sanskrit can develop into a standard for literary production.
137

Algorithmic evaluation of Parameter Estimation for Hidden Markov Models in Finance

Lauri, Linus January 2014 (has links)
Modeling financial time series is of great importance for success within the financial market. Hidden Markov Models are a good way to capture the regime-shifting nature of financial data. This thesis focuses on gaining an in-depth knowledge of Hidden Markov Models in general and of the parameter estimation of the models in particular. The objective is to evaluate whether and how financial data can be fitted well by the model. The subject was requested by Nordea Markets with the purpose of gaining knowledge of HMMs for an eventual implementation of the theory by their index development group. The research chiefly consists of evaluating the algorithmic behavior of estimating the model parameters. HMMs proved to be a good approach for modeling financial data, since many of the time series had properties that supported a regime-shifting approach. The most important factor for an effective algorithm is the number of states, easily explained as the distinguishable clusters of values. The suggested procedure for continuously modeling financial data is an extensive monthly calculation of starting parameters that are then used daily in a less time-consuming run of the EM algorithm.
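A minimal sketch of the suggested monthly/daily scheme, assuming the hmmlearn library and synthetic returns (the choice of library, the data, and all parameter values are illustrative, not the setup used in the thesis):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic two-regime daily returns: a calm regime and a volatile one.
rng = np.random.default_rng(1)
calm   = rng.normal(0.0005, 0.005, size=500)
crisis = rng.normal(-0.001, 0.02,  size=250)
returns = np.concatenate([calm, crisis]).reshape(-1, 1)

# "Extensive monthly calculation": many EM restarts to find good starting parameters.
best = max(
    (GaussianHMM(n_components=2, covariance_type="diag", n_iter=200,
                 random_state=seed).fit(returns) for seed in range(10)),
    key=lambda m: m.score(returns),
)

# "Daily usage": a short, warm-started EM refit as new observations arrive.
best.init_params = ""      # keep the current parameters as the starting point
best.n_iter = 10
best.fit(returns)
print(best.means_.ravel(), best.transmat_)
```

The number of states (n_components) is the key tuning choice the abstract highlights; in practice it would be selected during the monthly calculation, e.g. by a likelihood-based criterion.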
138

Computer Aided Diagnosis In Digital Mammography: Classification Of Mass And Normal Tissue

Shinde, Monika 10 July 2003 (has links)
The work presented here is an important component of an ongoing project to develop an automated mass classification system for breast cancer screening and diagnosis in digital mammogram applications. Specifically, this work investigates the task of automatically separating mass tissue from normal breast tissue given a region of interest in a digitized mammogram. This is a crucial stage in developing a robust automated classification system, because the classification depends on an accurate assessment of the tumor-normal tissue border as well as on information gathered from the tumor area. In this work the Expectation Maximization (EM) method is developed and applied to high-resolution digitized screen-film mammograms with the aim of segmenting normal tissue from mass tissue. Both the raw data and summary data generated by Laws' texture analysis are investigated. Since the ultimate goal is robust classification, the merits of the tissue segmentation are assessed by its impact on the overall classification performance. Based on a 300-image dataset consisting of 97 malignant and 203 benign cases, a 63% sensitivity and an 89% specificity were achieved. Although the segmentation requires further investigation, the development and related computer coding of the EM algorithm were successful. The method was developed to take into account the correlation among input features. This development allows other researchers at this facility to investigate various input features without requiring an intricate understanding of the EM approach.
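As a sketch of the EM segmentation step, assuming scikit-learn's GaussianMixture as a stand-in for the custom EM implementation (the full-covariance option mirrors the abstract's point about modeling input feature correlation; the data are synthetic, not mammogram features):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for two texture features (e.g. Laws' texture energy
# measures) per pixel, for normal tissue and mass tissue.
rng = np.random.default_rng(2)
normal = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=2000)
mass   = rng.multivariate_normal([2, 2], [[1.0, 0.6], [0.6, 1.0]], size=500)
pixels = np.vstack([normal, mass])          # one feature vector per pixel

# covariance_type="full" lets EM model the correlation between input features.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(pixels)
labels = gmm.predict(pixels)                # 0/1 tissue label per pixel
print(gmm.means_)
```

The component with the higher mean texture energy would be interpreted as the mass class, and the resulting label map defines the tumor-normal border passed to the classifier.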
139

Graph Mining for Influence Maximization in Social Networks / Fouille de Graphes pour Maximisation de l'Influence dans les Réseaux Sociaux

Rossi, Maria 17 November 2017 (has links)
Modern graph science has emerged in recent years as a field of interest and has brought significant advances to our knowledge of networks. Until recently, existing data mining algorithms were destined for structured/relational data, while many datasets require a graph representation, such as social networks, networks generated from textual data, 3D protein structures, and chemical compounds. It has therefore become crucially important to be able to extract meaningful information from this kind of data, and toward this end graph mining and analysis methods have proven essential.

The goal of this thesis is to study problems in the area of graph mining, focusing especially on designing new algorithms and tools related to information spreading and, specifically, on how to locate influential entities in real-world networks. This task is crucial in many applications such as information diffusion, epidemic control, and viral marketing.

In the first part of the thesis, we studied spreading processes in social networks, focusing on finding topological characteristics that rank entities in the network based on their influential capabilities. We specifically focused on the K-truss decomposition, which is an extension of the core decomposition of the graph. Extensive experimental analysis showed that the nodes belonging to the maximal K-truss subgraph exhibit better spreading behavior than baseline criteria. Such spreaders can influence a greater part of the network during the first steps of a spreading process, and the total fraction of influenced nodes at the end of the epidemic is also greater. We also observed that members of such dense subgraphs are those achieving the optimal spreading in the network.

In the second part of the thesis, we focused on identifying a group of nodes that, by acting together, maximize the expected number of influenced nodes at the end of the spreading process, a problem formally called Influence Maximization (IM). The IM problem is NP-hard, though there exist efficient approximation algorithms with guarantees of obtaining a solution within 63% of the optimal. As those guarantees rely on a greedy approximation that is computationally expensive, especially for large graphs, we proposed the MATI algorithm, which succeeds in locating the group of users that maximizes the influence while also being scalable. The algorithm takes advantage of the possible paths created in each node's neighborhood to precalculate each node's potential influence, and it produces results competitive in quality with those of baseline algorithms such as Greedy, LDAG, and SimPath.

In the last part of the thesis, we study the privacy point of view of sharing such metrics, which are good influence indicators, in a social network. We focused on designing an efficient, correct, secure, and privacy-preserving algorithm that computes the k-core metric, which measures the influence of each node of the network. We specifically adopted a decentralized approach in which the social network is considered a peer-to-peer (P2P) system. The algorithm is built on the constraint that it should not be possible for a node to reconstruct the graph, partially or entirely, using the information obtained during the algorithm's execution. While a distributed algorithm that computes the nodes' coreness has already been proposed, it does not take dynamic networks into account. Our main contribution is an incremental algorithm that efficiently solves the core maintenance problem in a P2P setting while limiting the number of messages exchanged and the computations performed. We provide a security and privacy analysis of the solution regarding network de-anonymization, show how it relates to previously defined attack models, and discuss countermeasures.
140

Domain-based Collaborative Learning for Enhanced Health Management of Distributed Industrial Assets

Pandhare, Vibhor January 2021 (has links)
No description available.
