  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Computations on Massive Data Sets : Streaming Algorithms and Two-party Communication / Calculs sur des grosses données : algorithmes de streaming et communication entre deux joueurs

Konrad, Christian 05 July 2013 (has links)
In this PhD thesis, we consider two computational models that address problems that arise when processing massive data sets. The first model is the Data Streaming Model. When processing massive data sets, random access to the input data is very costly. Therefore, streaming algorithms only have restricted access to the input data: They sequentially scan the input data once or only a few times. In addition, streaming algorithms use a random access memory of sublinear size in the length of the input. Sequential input access and sublinear memory are drastic limitations when designing algorithms. The major goal of this PhD thesis is to explore the limitations and the strengths of the streaming model. The second model is the Communication Model. When data is processed by multiple computational units at different locations, then the message exchange of the participating parties for synchronizing their calculations is often a bottleneck. The amount of communication should hence be as little as possible. A particular setting is the one-way two-party communication setting. Here, two parties collectively compute a function of the input data that is split among the two parties, and the whole message exchange reduces to a single message from one party to the other one. We study the following four problems in the context of streaming algorithms and one-way two-party communication: (1) Matchings in the Streaming Model. We are given a stream of edges of a graph G=(V,E) with n=|V|, and the goal is to design a streaming algorithm that computes a matching using a random access memory of size O(n polylog n). 
The Greedy matching algorithm fits into this setting and computes a matching of size at least 1/2 times the size of a maximum matching. A long-standing open question is whether the Greedy algorithm is optimal if no assumption about the order of the input stream is made. We show that it is possible to improve on the Greedy algorithm if the input stream is in uniform random order. Furthermore, we show that with two passes an approximation ratio strictly larger than 1/2 can be obtained if no assumption on the order of the input stream is made. (2) Semi-matchings in Streaming and in Two-party Communication. A semi-matching in a bipartite graph G=(A,B,E) is a subset of edges that matches all A vertices exactly once to B vertices, not necessarily in an injective way. The goal is to minimize the maximal number of A vertices that are matched to the same B vertex. We show that for any 0<=ε<=1, there is a one-pass streaming algorithm that computes an O(n^((1-ε)/2))-approximation using Ô(n^(1+ε)) space. Furthermore, we provide upper and lower bounds on the two-party communication complexity of this problem, as well as new results on the structure of semi-matchings. (3) Validity of XML Documents in the Streaming Model. An XML document of length n is a sequence of opening and closing tags. A DTD is a set of local validity constraints of an XML document. We study streaming algorithms for checking whether an XML document fulfills the validity constraints of a given DTD. Our main result is an O(log n)-pass streaming algorithm with 3 auxiliary streams and O(log^2 n) space for this problem. Furthermore, we present one-pass and two-pass sublinear space streaming algorithms for checking validity of XML documents that encode binary trees. (4) Budget-Error-Correcting under Earth-Mover-Distance. We study the following one-way two-party communication problem. Alice and Bob have sets of n points on a d-dimensional grid [Δ]^d for an integer Δ. 
Alice sends a small sketch of her points to Bob and Bob adjusts his point set towards Alice's point set so that the Earth-Mover-Distance of Bob's points and Alice's points decreases. For any k>0, we show that there is an almost tight randomized protocol with communication cost Ô(kd) such that Bob's adjustments lead to an O(d)-approximation compared to the k best possible adjustments that Bob could make.
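The one-pass Greedy algorithm this abstract refers to is simple enough to sketch. The following Python snippet is purely illustrative, not the thesis's code; the example graph and the edge orders are made up to show why edge order matters:

```python
def greedy_matching(edge_stream):
    """One-pass Greedy: keep an edge iff both of its endpoints are still
    free.  Uses O(n) memory over the vertex set and returns a maximal
    matching, whose size is at least 1/2 that of a maximum matching."""
    matched = set()   # vertices already covered by the matching
    matching = []     # edges kept so far
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path a-b-c-d, presenting the middle edge (b, c) first traps
# Greedy at a single edge, while the maximum matching {(a,b), (c,d)}
# has two edges -- exactly the 1/2 barrier the abstract discusses.
print(greedy_matching([("b", "c"), ("a", "b"), ("c", "d")]))  # [('b', 'c')]
print(greedy_matching([("a", "b"), ("b", "c"), ("c", "d")]))  # [('a', 'b'), ('c', 'd')]
```

Such adversarial orders are what make the open question hard; under a uniformly random edge order they become unlikely, which is the intuition behind the improved bound mentioned above.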

先佔優勢與門檻時間之探討-以台灣主機板產業為例 / First-Mover Advantage and Threshold Time: The Case of Taiwan's Motherboard Industry

廖祐呈, Yu-cheng Liao Unknown Date (has links)
This study explores the concept of "threshold time". Through case interviews on the new-product launch processes of Taiwanese motherboard makers, it examines whether a threshold time exists in Taiwan's motherboard industry and, if so, how long it is. It also draws on the integrated conceptual framework of first-mover advantage of Kerin, Varadarajan, and Peterson (1992) to explore the factors that determine the length of the threshold time. Finally, it constructs a "threshold-time conceptual framework" and maps it against the integrated first-mover-advantage framework, which forms the core of the study. Based on the literature review, the case interviews, and the inferences drawn from them, the main conclusions are: 1. The first firm to enter a market does not necessarily enjoy a first-mover advantage; this depends on whether its lead over other firms is long enough. When the lead does not exceed a certain "threshold time", the firm gains no first-mover advantage. 2. Whether the first entrant can obtain a first-mover advantage through time-based competition depends not only on the length of the threshold time but also on the length of the other firms' response times. When a rival's response time exceeds the threshold time, its market share suffers from the first entrant's advantage; conversely, when the response time is shorter than the threshold time, the first entrant's advantage does not affect the rival's market share. 3. Using the Kerin, Varadarajan, and Peterson (1992) framework, the study identifies the factors that influence the threshold time and firms' response times, and classifies the determinants of first-mover advantage into factors affecting the length of the threshold time and factors affecting the length of firms' response times. 4. The Kerin, Varadarajan, and Peterson (1992) framework and this study's threshold-time framework yield consistent, mutually corroborating conclusions. 5. The strategic implication of the threshold-time framework is that firms can influence the magnitude of first-mover advantage by changing the length of the threshold time and of the response time.

Suspension design for off-road construction machines

Rehnberg, Adam January 2011 (has links)
Construction machines, also referred to as engineering vehicles or earth movers, are used in a variety of tasks related to infrastructure development and material handling. While modern construction machines represent a high level of sophistication in several areas, their suspension systems are generally rudimentary or even nonexistent. This leads to unacceptably high vibration levels for the operator, particularly for front loaders and dump trucks, which regularly traverse longer distances at reasonably high velocities. To meet future demands on operator comfort and high-speed capacity, more refined wheel suspensions will have to be developed. The aim of this thesis is therefore to investigate which factors need to be considered in the fundamental design of suspension systems for wheeled construction machines. The ride dynamics of wheeled construction machines are affected by a number of properties specific to this type of vehicle. The pitch inertia is typically high in relation to the mass and wheelbase, which leads to pronounced pitching. The axle loads differ considerably between the loaded and the unloaded condition, necessitating ride height control, and hence the suspension properties may be altered as the vehicle is loaded. Furthermore, the low vertical stiffness of off-road tyres means that changes in the tyre properties will have a large impact on the dynamics of the suspended mass. The impact of these factors has been investigated using analytical models and parameters for a typical wheel loader. Multibody dynamic simulations have also been used to study the effects of suspended axles on the vehicle ride vibrations in more detail. The simulation model has also been compared to measurements performed on a prototype wheel loader with suspended axles. For reasons of manoeuvrability and robustness, many construction machines use articulated frame steering. 
The dynamic behaviour of articulated vehicles has therefore been examined here, focusing on lateral instabilities in the form of “snaking” and “folding”. A multibody dynamics model has been used to investigate how suspended axles influence the snaking stability of an articulated wheel loader. A remote-controlled, articulated test vehicle at model scale has also been developed to enable safe and inexpensive practical experiments. The test vehicle is used to study the influence of several vehicle parameters on snaking stability, including suspension, drive configuration and mass distribution. Comparisons are also made with predictions using a simplified linear model. Off-road tyres represent a further complication of construction machine dynamics, since the tyres’ behaviour is typically highly nonlinear and difficult to evaluate in testing due to the size of the tyres. A rolling test rig for large tyres has been evaluated here, showing that the test rig is capable of producing useful data for validating tyre simulation models of varying complexity. The theoretical and experimental studies presented in this thesis contribute to a deeper understanding of a number of aspects of the dynamic behaviour of construction machines. This work therefore provides a basis for the continued development of wheel suspensions for such vehicles.
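The point about tyre stiffness can be made concrete with a single-degree-of-freedom sketch. The snippet below is illustrative only; the wheel-loader figures (sprung mass and spring rates) are hypothetical, not taken from the thesis. It treats the suspension spring and the tyre as springs in series under the sprung mass and shows how a change in tyre stiffness shifts the bounce frequency:

```python
import math

def bounce_frequency_hz(m_sprung, k_susp, k_tyre):
    """Natural bounce frequency of the sprung mass when the suspension
    spring and the tyre act as springs in series (a one-DOF
    quarter-vehicle simplification)."""
    k_eff = 1.0 / (1.0 / k_susp + 1.0 / k_tyre)  # series stiffness [N/m]
    return math.sqrt(k_eff / m_sprung) / (2.0 * math.pi)

# Hypothetical wheel-loader corner: 5000 kg sprung mass per wheel,
# 400 kN/m suspension spring, and two illustrative tyre stiffnesses.
soft_tyre = bounce_frequency_hz(5000.0, 4.0e5, 8.0e5)   # soft off-road tyre
stiff_tyre = bounce_frequency_hz(5000.0, 4.0e5, 1.6e6)  # stiffer tyre

print(f"soft tyre:  {soft_tyre:.2f} Hz")   # ~1.16 Hz
print(f"stiff tyre: {stiff_tyre:.2f} Hz")  # ~1.27 Hz
```

Doubling the tyre stiffness raises the bounce frequency by roughly 10% in this toy case, which illustrates why changes in tyre properties feed directly into the dynamics of the suspended mass.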

Statistical Methods for Life History Analysis Involving Latent Processes

Shen, Hua January 2014 (has links)
Incomplete data often arise in the study of life history processes. Examples include missing responses, missing covariates, and unobservable latent processes in addition to right censoring. This thesis develops statistical models and methods to address these problems as they arise in oncology and chronic disease. Methods of estimation and inference in parametric, weakly parametric and semiparametric settings are investigated. Studies of chronic diseases routinely sample individuals subject to conditions on an event time of interest. In epidemiology, for example, prevalent cohort studies aiming to evaluate risk factors for survival following onset of dementia require subjects to have survived to the point of screening. In clinical trials designed to assess the effect of experimental cancer treatments on survival, patients are required to survive from the time of cancer diagnosis to recruitment. Such conditions yield samples featuring left-truncated event time distributions. Incomplete covariate data often arise in such settings, but standard methods do not deal with the fact that the covariate distribution is also affected by left truncation. We develop a likelihood and an estimation algorithm for dealing with incomplete covariate data in such settings. An expectation-maximization algorithm deals with the left truncation by using the covariate distribution conditional on the selection criterion. An extension to deal with sub-group analyses in clinical trials is described for the case in which the stratification variable is incompletely observed. In studies of affective disorder, individuals are often observed to experience recurrent symptomatic exacerbations of symptoms warranting hospitalization. Interest lies in modeling the occurrence of such exacerbations over time and identifying associated risk factors to better understand the disease process. 
In some patients, recurrent exacerbations are temporally clustered following disease onset, but cease to occur after a period of time. We develop a dynamic mover-stayer model in which a canonical binary variable associated with each event indicates whether the underlying disease has resolved. An individual whose disease process has not resolved will experience events following a standard point process model governed by a latent intensity. If and when the disease process resolves, the complete data intensity becomes zero and no further events arise. An expectation-maximization algorithm is developed for parametric and semiparametric model fitting based on a discrete time dynamic mover-stayer model and a latent intensity-based model of the underlying point process. The method is applied to a motivating dataset from a cohort of individuals with affective disorder experiencing recurrent hospitalization for their mental health disorder. Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. Extensions of the model-fitting techniques for the dynamic mover-stayer model are developed to incorporate interval censoring. The likelihood and estimation algorithm are developed for piecewise constant baseline rate functions and are shown to yield estimators with small empirical bias in simulation studies. Data on the cumulative number of damaged joints in patients with psoriatic arthritis are analysed to provide an illustrative application.
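To make the mover-stayer idea concrete, here is a deliberately simplified sketch, not the thesis's model or data: an EM fit of a zero-inflated Poisson mixture, where "stayers" contribute only structural zeros and "movers" contribute Poisson-distributed event counts. The toy data and starting values are invented:

```python
import math

def em_zip(counts, n_iter=200):
    """EM for a zero-inflated Poisson mixture: with probability pi an
    individual is a 'stayer' (always zero events); otherwise a 'mover'
    whose event count is Poisson(lam).  Returns (pi, lam) estimates."""
    pi, lam = 0.5, 1.0  # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability of being a stayer.  Only
        # observed zeros can possibly come from a stayer.
        w = [pi / (pi + (1 - pi) * math.exp(-lam)) if y == 0 else 0.0
             for y in counts]
        # M-step: update the mixing weight and the Poisson mean,
        # weighting each observation by its mover probability.
        pi = sum(w) / len(counts)
        movers = sum(1 - wi for wi in w)
        lam = sum((1 - wi) * y for wi, y in zip(w, counts)) / movers
    return pi, lam

# Toy data: many structural zeros plus mover-like event counts.
data = [0] * 40 + [1, 2, 3, 2, 1, 4, 2, 3, 1, 2] * 2
pi_hat, lam_hat = em_zip(data)
print(f"stayer fraction ~ {pi_hat:.2f}, mover event rate ~ {lam_hat:.2f}")
```

The E-step splits each observed zero between the "resolved" and "not resolved" explanations, exactly the kind of latent allocation the dynamic mover-stayer EM performs over event histories rather than plain counts.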

台灣醫學美容保養品產業先進者優勢探討 / A case study on the first-mover advantage of the cosmeceutical industry

周千玉, Chou, Chien Yu Unknown Date (has links)
Medical cosmetology is growing rapidly around the world. Clinics that perform cosmetic procedures provide not only professional medical services but also recommend cosmeceutical skincare products that help patients recover after surgery. These products, validated in clinical trials and endorsed by physicians as beneficial to skin health, enjoy a credibility that sets them apart from ordinary cosmetics, and have thereby opened up an entirely new business opportunity. This research studies the first-mover advantage in the cosmeceutical industry. Previous studies of medical cosmetology have focused mainly on customer-driven issues; there is little research on how cosmeceutical brands run their businesses. Therefore, this research examines the first-mover advantage of the cosmeceutical industry in Taiwan through a case-study methodology and tries to identify the capabilities a successful first mover needs. According to our findings, cosmeceutical companies entering the Taiwan market do enjoy a first-mover advantage. The brands studied use different channels (medical, open-shelf, online) and different brand positioning (premium, mid-range, budget), each occupying a different market segment as a first mover. Each brand builds its own combination of advantages and develops its differentiation: on one hand, brand companies educate consumers and establish the new concept of "medical skincare products", using professionalism to set themselves apart from traditional skincare; on the other hand, they build up their own company advantages. The sources of advantage for entrants in this industry are technological leadership, preemption of key resources, and elimination of customer uncertainty. Drawing on its own resources, each company chooses its own marketing mix and builds relationships with suppliers, channel partners, and end customers. Product diversification and a multi-brand strategy satisfy different customer groups, expanding capacity, reducing costs, and raising market share; the resources thus accumulated fund further product innovation, technological improvement, and brand marketing, which in turn form a natural barrier against later entrants and sustain the first mover's advantage.

Optimalizace procesů realizace vzdělávacích programů / Process Optimization of the Educational Programs Realisation

Nováková, Lenka January 2011 (has links)
The diploma thesis is devoted to the process analysis of the realization of educational programs and their optimization. The theoretical section covers educational programs, the concept of EU educational policy, process questions, and related notions. The analytical part examines the implementation processes of educational programs, an analysis of the international activities of the Faculty of Business, bilateral contracts, and a questionnaire survey. The findings of the analytical part are used in the proposal part, where the research results are evaluated and procedures to improve the processes are drawn up. Proposals and measures to increase student participation and optimize the processes were compiled on the basis of the information gained about the Faculty of Business, and they respect the educational concept of the European Union.

Contactless mobile payments in Europe : Stakeholders' perspective on ecosystem issues and developments

Englund, Rasmus, Turesson, David January 2012 (has links)
A progressive shift from cash- and card-based in-store payments towards contactless mobile payments is currently in the making on the European market. This shift implies that payments in stores would be performed in a fast, simple, secure and preferably less costly manner, between a consumer's mobile phone and a merchant's payment terminal. Technologies such as Near Field Communication (NFC) and Quick Response (QR) codes both facilitate such contactless payments and have already built momentum in many European countries. The result is an undoubtedly tempting new payment experience through the mobile phone. However, the shift entails several uncertainties and issues regarding the crystallization of the new “industry” that is forming. These issues concern social, organizational and market-related aspects, and apply to stakeholders on both the provider and user sides of contactless mobile payment products and services. There is a great need for new research on this matter from a more holistic perspective, where theories on industrial dynamics, development and user adoption could be used to guide and explain these industry-impeding issues as well as reveal new ones. This master thesis aims to answer that call – by using such theories in conjunction with a multi-stakeholder perspective on a wide base of empirically gathered data – in order to find, interpret and shed new light on key issues that impede the development and adoption of contactless mobile payments on the European market. A thorough literature review of the current mobile payments landscape in Europe was first conducted to identify which key issues exist on the European market (for both providers and users of mobile payment solutions); those issues then guided the choice of theories and the construction of the empirical data-gathering methodology. 
The theoretical framework was thus built upon five different but highly interconnected theoretical concepts on new industry evolvement, strategy and adoption. The empirical data were gathered from a two-day conference on mobile payments in Europe, as well as from 10 in-depth interviews with different key stakeholders on the Swedish and European markets. The theoretical framework and the empirical data were later merged for analysis, in order to find, interpret and shed new light on these and other issues in contactless mobile payment development and adoption on the European market. This has led to some key findings and conclusions. Firstly, the literature review on the current mobile payments market in Europe revealed some key issues. On the provider side of the stakeholder spectrum, issues mainly revolve around collaboration and competition, where business models are hard to standardize due to the unevenly distributed control and power over the users. This was seen to relate heavily to the placement of the NFC Secure Element (SE), which holds the consumers' payment credentials, since different stakeholders prefer different SE placements (on the SIM card or integrated in the mobile phone). Some big actors have also created their own, more end-to-end, contactless payment solutions, complicating the evolvement even further. This may lead to further issues related primarily to early and late movers among providers, alternative mobile payment solutions, and interoperability between solutions and technologies as well as across borders. Security concerns have also been highlighted in the literature as a prioritized matter. 
On the user side of the stakeholder spectrum, key issues relate to the adoption of in-store contactless mobile payments, such as investment costs for merchants to implement new hardware and/or software (terminals, mainly NFC-compatible), security concerns, consumers' reluctance to change their payment habits, and uncertainties in the perceived added value of these new types of payments compared to, above all, card payments. Secondly, after merging the theoretical framework with the empirical data for analysis, it was revealed that the uncertain role of mobile network operators creates tensions in the ecosystem on various levels and to various extents. In addition, the preemption strategies utilized by indigenous firms in European countries show the possibility of hampering payment interoperability, and first movers risk hurting not only themselves but the entire mobile payment ecosystem if security breaches are discovered due to technological uncertainties. This is one strong reason for banks to move slower, but they might, contradictorily, risk losing some of their high trustworthiness in the eyes of other stakeholders if they are too passive. Moreover, two additional trade-off issues were discovered (technology/business model standardization versus innovation, and too many features in the provided offering versus too few). The first of these trade-offs is particularly damaging for the ecosystem since there are strong differences in opinions on the matter, as well as on what might increase adoption speed. The second trade-off is important to take into consideration where payment-card penetration is high. An additional factor carrying issues was the explicit focus of providers on only one side (consumers) of a two-sided market (consumers and merchants). Also, merchants cannot be seen as a homogeneous group. Finally, the “chicken and egg” problem does not seem to be such a big problem after all.

Sustainable value creation : A case study of a Professional Service Firm / Hållbart värdeskapande : En fallstudie av ett professionellt tjänsteföretag

Kapsalis, Alexandra, Rosén, Andreas January 2022 (has links)
Today, sustainability has become a central part of every recent global agenda. As the world is establishing a more sustainable path, professional service firms have started setting up new targets and goals to meet the changing demands and regulations of governments, societies, and investors. In an initiative to create sustainable long-term value for multiple stakeholders, professional service firms are investigating alternative approaches to value creation and competitiveness in a previously unknown market. This has proven to be challenging, as value today is often measured in short-term sales and income, and long-term value is often overlooked in the business model and the project selection processes. The aim of the study was to identify and evaluate the challenges professional service firms face when incorporating sustainable value creation into their business model, and also to suggest actions to create competitiveness in a previously unexplored market of sustainable value creation. An abductive case study was performed on a large professional service firm. The data collection consisted of 10 semi-structured interviews with company representatives of different ranks. Additionally, a survey with 30 respondents from the case company was carried out. The study concludes that challenges related to sustainable value creation include: incorporating clear measurement and reporting systems in terms of the triple bottom line, transforming Value Uncaptured into Value Captured, lowering the perceived risk of sustainability incorporation, and convincing the right people to make the right decisions. Moreover, the study presents a framework for sustainable value creation that professional service firms can use as a tool to incorporate sustainability into the business model.  
To create competitiveness in the new market of sustainable value creation, professional service firms could incorporate sustainable value creation through the assessment of tangible and intangible assets in all business cases and offerings, thereby working proactively to meet future demands. This would create competitiveness through a first-mover advantage.
40

視頻網站先進者與後進者之策略行銷分析: 以樂視網、優酷網為例 / Marketing strategy analysis of first and late mover advantage in the video website industry the cases of LeTV, Youku

金肖序 Unknown Date (has links)
With the advent of Web 2.0, many Internet-based products have grown rapidly. After the launch of the video-sharing site YouTube, online video also began to develop in mainland China. However, following the explosive emergence of video websites in 2006-2007, most of them declined in turn; only a few sites managed to survive the competitive market, grow their traffic substantially, or even become profitable. Taking the first mover LeTV and the late mover Youku as case studies, this study applies the strategic marketing 4C framework to analyze which advantages each firm established to secure a foothold in the market, and what strategies built those advantages into core competitive strengths capable of sustaining the sites. Finally, by analyzing the exchange costs established by the two successful companies, the study identifies their core advantages and offers practical suggestions for newly founded video websites.
