621

Market segmentation to become the partner of choice

Deines, Tara January 1900 (has links)
Master of Agribusiness / Department of Agricultural Economics / Kevin Gwinner / The agriculture industry has undergone rapid change in recent years. Extreme population growth, together with shifts in social status, dietary habits, and consumption patterns, has produced a rapidly growing and changing agriculture industry that demands ever greater grain production. The pace of production required to keep feeding the world has heightened competition across the industry. This study analyzes how Company XYZ, a strong competitor in the grain and ethanol industry, can leverage the opportunities this growth has created. To maximize opportunities with each customer and remain competitive in new territories, the company needs a repeatable process. That process must determine how to interpret customer preferences so that the company quickly becomes the first choice of target customers as it expands further into North America and beyond. This thesis focuses on understanding and operationalizing two components: first, identifying the most desirable customers and what makes them desirable; second, understanding, anticipating, and consistently addressing customer needs better than the competition. To analyze customer habits and behaviors, this thesis examines the results of a survey conducted with existing customers. A regression of each customer's overall profitability to the company, and a regression of the customer's ratings of Company XYZ relative to the competition, were used to identify how the discrimination and segmentation factors affect each outcome. A cluster analysis of the survey data is also used to segment customers and develop a structured plan that can be implemented within the company's business practices.
The cluster analysis revealed three dominant clusters into which customers can be segmented. These clusters, in conjunction with the findings from the regression analyses, help identify areas of strength and weakness and inform a plan of action for Company XYZ to implement. The plan, known as the Partner of Choice, focuses on using market segmentation to leverage customized marketing opportunities, behavioral management alignment, employee incentive opportunities, and a structured training program.
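The clustering step this abstract describes (cluster customers on survey features, then act on the clusters) can be sketched with a minimal k-means implementation; the feature choices and data below are hypothetical, not taken from the thesis:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: points are equal-length tuples of floats."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute each centroid as its cluster mean; keep it if the cluster emptied.
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Hypothetical customers described by (annual volume, satisfaction score).
customers = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.9), (9.0, 1.0), (9.2, 1.1)]
centroids, clusters = kmeans(customers, k=3)
```

A production analysis would standardize the features and choose the number of clusters by inspecting cluster quality, as the thesis does with its survey data.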
622

Study on China's Capital Market Segmentation under Fragmented Regulations

January 2015 (has links)
abstract: The Chinese capital market is characterized by high segmentation due to governmental regulations. In this thesis I investigate both the causes and consequences of this market segmentation. Specifically, I address the following questions: (1) to what degree this capital market segmentation is caused by the fragmented regulations in China, (2) what the key characteristics of this market segmentation are, and (3) what impacts this market segmentation has on capital costs and resource allocation. Answers to these questions can have important implications for Chinese policy makers seeking to improve capital market regulatory coordination and efficiency. I organize this thesis as follows. First, I define the concepts of capital market segmentation and fragmented regulation based on literature review and theoretical analysis. Next, on the basis of existing theories and methods in finance and economics, I select a number of indicators to systematically measure the degree of regulatory segmentation in China’s capital market. I then develop an econometric model of capital market frontier efficiency analysis to calculate and analyze China’s capital market segmentation and regulatory fragmentation. Lastly, I use production function analysis and the event study method to examine the impacts of fragmented regulatory segmentation on the connections and price distortions in the equity, debt, and insurance markets. Findings of this thesis enhance the understanding of how institutional forces such as governmental regulations influence the function and efficiency of the capital markets. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2015
623

Particle Image Segmentation Based on Bhattacharyya Distance

January 2015 (has links)
abstract: Image segmentation is of great importance and value in many applications. In computer vision, image segmentation is the tool and process of locating objects and boundaries within images, and the segmentation result may provide more meaningful image data. Generally, there are two fundamental categories of image segmentation algorithms: discontinuity-based and similarity-based. The idea behind discontinuity is locating abrupt changes in image intensity, as are often seen at edges or boundaries. Similarity subdivides an image into regions that fit pre-defined criteria. The algorithm developed in this thesis belongs to the second category. This study addresses the problem of particle image segmentation by measuring the similarity between a sampled region and an adjacent region, based on Bhattacharyya distance and an image feature extraction technique that uses distributions of local binary patterns and pattern contrasts. A boundary smoothing process is developed to improve the accuracy of the segmentation. The novel particle image segmentation algorithm is tested on four different cases of particle image velocimetry (PIV) images. The experimental results provide partitioning of the objects within a 10 percent error rate. Ground-truth segmentation data, which are manually segmented images from each case, are used to calculate the error rate of the segmentations. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
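The core similarity measure named in the abstract can be written down directly. This sketch compares two generic feature histograms (for example, local binary pattern histograms of two regions); the thesis's full feature extraction and boundary smoothing steps are not reproduced:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two histograms (normalized internally)."""
    ps, qs = sum(p), sum(q)
    p = [v / ps for v in p]
    q = [v / qs for v in q]
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient in [0, 1]
    return -math.log(max(bc, 1e-12))  # 0 for identical distributions, large when disjoint

same = bhattacharyya_distance([2, 3, 5], [2, 3, 5])  # near 0: regions look alike
diff = bhattacharyya_distance([1, 0, 0], [0, 0, 1])  # large: no histogram overlap
```

A small distance between a sampled region and its neighbor suggests they belong to the same segment.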
624

Speech segmentation and speaker diarisation for transcription and translation

Sinclair, Mark January 2016 (has links)
This dissertation outlines work related to Speech Segmentation – segmenting an audio recording into regions of speech and non-speech, and Speaker Diarization – further segmenting those regions into those pertaining to homogeneous speakers. Knowing not only what was said but also who said it and when has many useful applications. As well as providing a richer level of transcription for speech, we will show how such knowledge can improve Automatic Speech Recognition (ASR) system performance and can also benefit downstream Natural Language Processing (NLP) tasks such as machine translation and punctuation restoration. While segmentation and diarization may appear to be relatively simple tasks to describe, in practice we find that they are very challenging and are, in general, ill-defined problems. Therefore, we first provide a formalisation of each of the problems as the sub-division of speech within acoustic space and time. Here, we see that the task can become very difficult when we want to partition this domain into our target classes of speakers, whilst avoiding other classes that reside in the same space, such as phonemes. We present a theoretical framework for describing and discussing the tasks, as well as introducing existing state-of-the-art methods and research. Current Speaker Diarization systems are notoriously sensitive to hyper-parameters and lack robustness across datasets. Therefore, we present a method which uses a series of oracle experiments to expose the limitations of current systems and to attribute those limitations to particular system components. We also demonstrate how Diarization Error Rate (DER), the dominant error metric in the literature, is not a comprehensive or reliable indicator of overall performance or of error propagation to subsequent downstream tasks. These results inform our subsequent research. We find that, as a precursor to Speaker Diarization, the task of Speech Segmentation is a crucial first step in the system chain.
Current methods typically do not account for the inherent structure of spoken discourse. As such, we explored a novel method which exploits an utterance-duration prior in order to better model the segment distribution of speech. We show how this method improves not only segmentation, but also the performance of subsequent speech recognition, machine translation and speaker diarization systems. Typical ASR transcriptions do not include punctuation and the task of enriching transcriptions with this information is known as ‘punctuation restoration’. The benefit is not only improved readability but also better compatibility with NLP systems that expect sentence-like units such as in conventional machine translation. We show how segmentation and diarization are related tasks that are able to contribute acoustic information that complements existing linguistically-based punctuation approaches. There is a growing demand for speech technology applications in the broadcast media domain. This domain presents many new challenges including diverse noise and recording conditions. We show that the capacity of existing GMM-HMM based speech segmentation systems is limited for such scenarios and present a Deep Neural Network (DNN) based method which offers a more robust speech segmentation method resulting in improved speech recognition performance for a television broadcast dataset. Ultimately, we are able to show that the speech segmentation is an inherently ill-defined problem for which the solution is highly dependent on the downstream task that it is intended for.
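The thesis's segmenters are GMM-HMM and DNN based; purely for orientation, here is the crudest possible baseline, an energy-threshold speech/non-speech labeller (the frame length and threshold are arbitrary illustrative values):

```python
def segment_speech(samples, frame_len=160, threshold=0.01):
    """Label fixed-length frames speech/non-speech by mean energy,
    then merge runs of equal labels into (start_frame, end_frame, label)."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    labels = ['speech' if sum(s * s for s in f) / len(f) > threshold else 'nonspeech'
              for f in frames]
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))
            start = i
    return segments

# Silence, then a loud square wave, then silence again.
audio = [0.0] * 320 + [0.5, -0.5] * 160 + [0.0] * 320
segs = segment_speech(audio)
```

Real systems replace the energy rule with trained acoustic models and add duration priors of the kind discussed above.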
625

Expertise, Attribution, and Ad Blocking in the World of Online Marketing

Despotakis, Stylianos 01 May 2018 (has links)
In this dissertation, we model and provide insights to some of the main challenges the world of online marketing currently faces. In the first chapter, we study the role of information asymmetry introduced by the presence of experts in online marketplaces and how it affects the strategic decisions of different parties in these markets. In the second chapter, we study the attribution problem in online advertising and examine optimal ways for advertisers to allocate their marketing budget across channels. In the third chapter, we explore the effects of modern ad blockers on users and online platforms. In the first chapter, we examine the effect of the presence of expert buyers on other buyers, the platform, and the sellers in online markets. We model buyer expertise as the ability to accurately predict the quality, or condition, of an item, modeled as its common value. We show that nonexperts may bid more aggressively, even above their expected valuation, to compensate for their lack of information. As a consequence, we obtain two interesting implications. First, auctions with a “hard close” may generate higher revenue than those with a “soft close”. Second, contrary to the linkage principle, an auction platform may obtain a higher revenue by hiding the item’s common-value information from the buyers. We also consider markets where both auctions and posted prices are available and show that the presence of experts allows the sellers of high quality items to signal their quality by choosing to sell via auctions. In the second chapter, we study the problem of attributing credit for customer acquisition to different components of a digital marketing campaign using an analytical model. We investigate attribution contracts through which an advertiser tries to incentivize two publishers that affect customer acquisition. We situate such contracts in a two-stage marketing funnel, where the publishers should coordinate their efforts to drive conversions. 
First, we analyze the popular class of multi-touch contracts where the principal splits the attribution among publishers using fixed weights depending on their position. Our first result shows the following counterintuitive property of optimal multi-touch contracts: higher credit is given to the portion of the funnel where the existing baseline conversion rate is higher. Next, we show that social welfare maximizing contracts can sometimes have even higher conversion rate than optimal multi-touch contracts, highlighting a prisoners’ dilemma effect in the equilibrium for the multi-touch contract. While multi-touch attribution is not globally optimal, there are linear contracts that “coordinate the funnel” to achieve optimal revenue. However, such optimal-revenue contracts require knowledge of the baseline conversion rates by the principal. When this information is not available, we propose a new class of ‘reinforcement’ contracts and show that for a large range of model parameters these contracts yield better revenue than multi-touch. In the third chapter, we study the effects of ad blockers in online advertising. While online advertising is the lifeline of many internet content platforms, the usage of ad blockers has surged in recent years presenting a challenge to platforms dependent on ad revenue. In this chapter, using a simple analytical model with two competing platforms, we show that the presence of ad blockers can actually benefit platforms. In particular, there are conditions under which the optimal equilibrium strategy for the platforms is to allow the use of ad blockers (rather than using an adblock wall, or charging a fee for viewing ad-free content). The key insight is that allowing ad blockers serves to differentiate platform users based on their disutility to viewing ads. This allows platforms to increase their ad intensity on those that do not use the ad blockers and achieve higher returns than in a world without ad blockers. 
We show robustness of these results when we allow a larger combination of platform strategies, as well as by explaining how ad whitelisting schemes offered by modern ad blockers can add value. Our study provides general guidelines for what strategy a platform should follow based on the heterogeneity in the ad sensitivity of their user base.
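The fixed-weight multi-touch contracts analyzed in the second chapter split each conversion's credit by funnel position; mechanically, the split looks like the following sketch (the weights and figures are hypothetical, and the chapter's game-theoretic analysis of publisher incentives is not modeled here):

```python
def multi_touch_payouts(conversions, payment_per_conversion, weights):
    """Split the total attribution payment across funnel positions by fixed weights."""
    total_weight = sum(weights)
    pool = conversions * payment_per_conversion
    return [pool * w / total_weight for w in weights]

# 10 conversions paying $5 each, credited 60/40 to upper/lower funnel publishers.
payouts = multi_touch_payouts(10, 5.0, [0.6, 0.4])
```

The chapter's counterintuitive result is about how such weights should be chosen: optimal multi-touch contracts give higher credit to the funnel stage with the higher baseline conversion rate.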
626

Analýza služeb cestovního ruchu na Lipensku / Analysis of tourism services in the Lipno area

MATĚJČEK, Jindřich January 2008 (has links)
The main aim of my diploma thesis is to analyze the present state of tourism in the Lipno area and to propose an effective, comprehensive strategy for tourism development by identifying the most pressing problems in the territory. A short characterization of the area is given first, followed by an analysis of the current state of tourism. In the second section I evaluate supply and demand in the summer and winter seasons. The thesis also addresses tourism development, covering trends in supply and demand and the problem areas of the sector. In conclusion, I offer proposals for regional development, with priorities broken down into particular activities.
627

Segmentace automobilového trhu ve vybrané firmě / Automotive market segmentation in the chosen company

BUREŠOVÁ, Hana January 2008 (has links)
The main objective of my thesis was to carry out an automotive market segmentation. I chose the company Autocentrum Šmucler, Ltd., as the object of my thesis; the firm sells both new cars and second-hand vehicles. The thesis is divided into a theoretical and a practical part. The theoretical part explains the terms 'marketing', 'market' and 'car market', and then covers everything concerning market segmentation in detail: its process, procedure, purpose, types, criteria for effectiveness, market targeting, and positioning in the market. In the practical part the automotive market segmentation is carried out. I conducted marketing research, in this case a questionnaire survey, which provided all the information needed to characterize each segment.
628

3D Rooftop Detection And Modeling Using Orthographic Aerial Images

January 2013 (has links)
abstract: Automatic detection of extruded features such as rooftops and trees in aerial images is a very active area of research. Elevated features identified from aerial imagery have potential applications in urban planning and in identifying cover for military or flight training. Detecting such features using commonly available geospatial data like orthographic aerial imagery is very challenging because rooftop and tree textures are often camouflaged by similar-looking features like roads, ground and grass. So, additional data such as LIDAR, multispectral imagery and multiple viewpoints are exploited for more accurate detection. However, such data are often not available, or may be improperly registered or inaccurate. In this thesis, we discuss a novel framework that uses only orthographic images for detection and modeling of rooftops. A segmentation scheme is proposed that first assigns either foreground (rooftop) or background labels to certain pixels in the image based on shadows, then employs GrabCut to assign one of those two labels to the remaining pixels based on the initial labeling. Parametric model fitting is performed on the segmented results in order to create a 3D scene and to facilitate roof-shape and height estimation. The framework can also benefit from additional geospatial data such as street maps and LIDAR, if available. / Dissertation/Thesis / M.S. Computer Science 2013
629

Saliency Cut: an Automatic Approach for Video Object Segmentation Based on Saliency Energy Minimization

January 2013 (has links)
abstract: Video object segmentation (VOS) is an important task in computer vision with many applications, e.g., video editing, object tracking, and object-based encoding. Unlike image object segmentation, video object segmentation must consider both spatial and temporal coherence for the object. Despite extensive previous work, the problem is still challenging. Usually, the foreground object in a video draws more attention from humans, i.e., it is salient. In this thesis we tackle the problem from the aspect of saliency, where saliency means a certain subset of visual information selected by a visual system (human or machine). We present a novel unsupervised method for video object segmentation that considers both low-level vision cues and high-level motion cues. In our model, video object segmentation is formulated as a unified energy minimization problem and solved in polynomial time by employing the min-cut algorithm. Specifically, our energy function comprises a unary term and a pair-wise interaction energy term, where the unary term measures region saliency and the interaction term smooths the mutual effects between object saliency and motion saliency. Object saliency is computed in the spatial domain from each discrete frame using multi-scale context features, e.g., color histogram, gradient, and graph-based manifold ranking. Meanwhile, motion saliency is calculated in the temporal domain by extracting phase information from the video. In the experimental section of this thesis, our proposed method is evaluated on several benchmark datasets. On the MSRA 1000 dataset the results demonstrate that our spatial object saliency detection is superior to state-of-the-art methods. Moreover, our temporal motion saliency detector achieves better performance than existing motion detection approaches on the UCF sports action analysis dataset and the Weizmann dataset.
Finally, we present encouraging empirical results and a quantitative evaluation of our approach on two benchmark video object segmentation datasets. / Dissertation/Thesis / M.S. Computer Science 2013
630

Evaluation of hierarchical segmentation for natural vegetation: a case study of the Tehachapi Mountains, California

January 2013 (has links)
abstract: Two critical limitations of hyperspatial imagery are its higher variance and large data size. Although object-based analysis with a multi-scale framework for diverse object sizes is the usual solution, it requires more data sources and a large amount of costly testing. In this study, I used tree density segmentation as the key element of a three-level hierarchical vegetation framework for reducing those costs, and a three-step procedure was used to evaluate its effects. A two-step procedure, which involved environmental stratification and the random walker algorithm, was used for tree density segmentation. I determined whether variation in tone and texture could be reduced within environmental strata, and whether tree density segmentations could be labeled by species associations. At the final level, two tree density segmentations were partitioned into smaller subsets using eCognition in order to label individual species or tree stands in two test areas of differing tree density, and the Z values of Moran's I were used to evaluate whether image objects have mean values different from neighboring segments, as a measure of segmentation accuracy. The two-step procedure was able to delineate tree density segments and label species types robustly, compared to previous hierarchical frameworks. However, eCognition was not able to produce detailed, reasonable image objects with optimal scale parameters for species labeling. This hierarchical vegetation framework is applicable for fine-scale, time-series vegetation mapping to develop baseline data for evaluating climate change impacts on vegetation at low cost, using widely available data and a personal laptop. / Dissertation/Thesis / M.A. Geography 2013
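The Moran's I statistic used above in the segmentation evaluation can be computed from a value vector and a spatial weight matrix. This is the plain statistic only (a Z value additionally requires its expectation and variance under a null hypothesis), and the data below are illustrative, not from the study:

```python
def morans_i(values, weights):
    """Moran's I for values x_i under spatial weight matrix weights[i][j]."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)           # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))  # cross-products of deviations
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Four cells in a row with rook adjacency; smoothly increasing values
# give positive spatial autocorrelation (I > 0).
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
i_stat = morans_i([1.0, 2.0, 3.0, 4.0], w)
```

Values near zero indicate that an image object's mean does not differ systematically from its neighbors', which is the property the evaluation exploits.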
