471 |
Segmentation Marketing: A Case Study on Performance Solutions Group, LLC. Brian, Jordan. 01 May 2015.
The purpose of this research is to show how Performance Solutions Group, LLC can effectively use segmentation marketing, both in its current market and in expansion. The goal is to recommend changes the marketing team at Performance Solutions Group should make. The research examines how segmentation marketing is currently used across broad industries and investigates how Performance Solutions Group can apply it. This case study shows that segmentation marketing is an effective way for Performance Solutions Group to market its services.
|
472 |
An Empirical Study of the Factors Influencing the A-Share/H-Share Price Difference of Dual-Listed Chinese Companies. January 2019.
abstract: China's securities market has long exhibited a price difference between the A shares and H shares of dual-listed companies. This phenomenon of "same shares, same rights, different prices" has long been one of the most debated topics among scholars at home and abroad.
Building on a systematic review of prior research, this dissertation first traces the internal logic behind the A/H price-difference effect and distills nine potential factors influencing the price difference between the A and H shares of dual-listed companies: information asymmetry, demand differences, liquidity differences, differences in speculative trading, risk differences, corporate governance structure, interest-rate differentials, differences in market strength, and exchange-rate expectations. It then constructs new proxy variables for each potential factor, builds a panel data model, and carries out empirical analysis from both a whole-market and an industry perspective to test the candidate drivers of the A/H price difference; all empirical results pass stationarity tests. The results show that, from the whole-market perspective, only corporate governance structure and differences in market strength have no significant effect on the A/H price difference. From the industry perspective, for dual-listed companies in the financial sector, the significant factors are demand differences, liquidity differences, risk differences, differences in market strength, and interest-rate differentials, while information asymmetry, differences in speculative trading, corporate governance structure, and exchange-rate expectations have no significant effect. For dual-listed companies outside the financial sector, the significant factors are information asymmetry, demand differences, liquidity differences, risk differences, differences in speculative trading, differences in market strength, interest-rate differentials, and exchange-rate expectations; corporate governance structure is not a significant factor.
Based on these empirical conclusions and the current state of the A-share and H-share markets, the dissertation proposes recommendations that include expanding the two-way opening of the capital market, vigorously developing institutional investors such as funds, firmly advancing the registration-based reform of stock issuance, promoting financial innovation, and enriching the set of investment instruments. These findings have important theoretical and practical significance for the further improvement of China's capital market. / Dissertation/Thesis / Doctoral Dissertation, Business Administration, 2019
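A minimal sketch, in Python, of the kind of fixed-effects panel regression such an analysis rests on. The data file, column names, and proxy choices below are illustrative assumptions, not the dissertation's actual variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

# one row per dual-listed stock per trading day (hypothetical file/columns)
df = pd.read_csv("ah_panel.csv")

# regress the A/H premium on proxy variables; C(stock) absorbs firm
# fixed effects, and standard errors are clustered by firm
model = smf.ols(
    "premium ~ liquidity_gap + turnover_ratio + rate_spread"
    " + fx_expectation + C(stock)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["stock"]})
print(result.summary())
```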
|
473 |
Using Multiview Annotation to Annotate Multiple Images Simultaneously. Price, Timothy C. 01 June 2017.
For a system to learn an object recognition model, it must have many positive images to learn from. For this reason, datasets of similar objects are built to train the model. Such datasets are most useful when they are large, diverse, and annotated, but obtaining the images and creating the annotations is often slow and costly. We use a method that quickly captures many images of the same objects from different angles and reconstructs them into a 3D model. The 3D reconstruction connects information about the different images of the same object, and we use that information to annotate all of the captured images quickly and cheaply. These annotated images are then used to train the model.
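A minimal sketch of the core projection step this kind of approach relies on: once an object is annotated once against the reconstructed 3D model, camera matrices from the reconstruction carry that annotation into every image. The file names, 3x4 camera matrices, and box-corner inputs are hypothetical placeholders:

```python
import numpy as np

def project_box(P, corners_3d):
    """Project 3D corner points (N, 3) through a 3x4 camera matrix and
    return the enclosing 2D box (xmin, ymin, xmax, ymax)."""
    pts = np.hstack([corners_3d, np.ones((len(corners_3d), 1))])  # homogeneous
    proj = (P @ pts.T).T
    xy = proj[:, :2] / proj[:, 2:3]          # perspective divide
    return xy[:, 0].min(), xy[:, 1].min(), xy[:, 0].max(), xy[:, 1].max()

# one annotation in 3D becomes a 2D box in every image of the capture session
cameras = [np.loadtxt(f"cam_{i}.txt").reshape(3, 4) for i in range(100)]
corners = np.loadtxt("object_corners.txt")   # (8, 3) box corners in 3D
boxes = [project_box(P, corners) for P in cameras]
```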
|
474 |
A Semi-Automated Algorithm for Segmenting the Hippocampus in Patient and Control Populations. Muncy, Nathan McKay. 01 June 2016.
Calculating hippocampal volume from Magnetic Resonance (MR) images is an essential task in many studies of neurocognition in healthy and diseased populations. The 'gold standard' method involves hand tracing, which is accurate but laborious, requiring expertly trained researchers and significant amounts of time; segmenting large datasets with the standard method is impractical. Current automated pipelines are inaccurate at hippocampal demarcation and volumetry. We developed a semi-automated hippocampal segmentation pipeline based on the Advanced Normalization Tools (ANTs) suite of programs. We applied the pipeline to 70 participant scans (26 female) from groups that included participants diagnosed with autism spectrum disorder, healthy older adults (mean age 74), and healthy younger controls. Hippocampal segmentations obtained with the semi-automated pipeline more closely matched those of an expert rater than did segmentations from FreeSurfer or from novice raters. Further, the pipeline performed better when it included manually placed landmarks and when it used a template generated from a heterogeneous sample (one reflecting the full variability of group assignments) rather than from more homogeneous samples (only individuals within a given age range or with a specific neuropsychiatric diagnosis). The semi-automated pipeline also required far less time (5 minutes per brain) than manual segmentation (30-60 minutes per brain) or FreeSurfer (8 hours per brain).
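For a sense of the label-propagation step at the heart of an ANTs-based pipeline, here is a minimal sketch using ANTsPy (the Python wrapper, an assumption here; the thesis describes the ANTs suite itself, and its actual pipeline additionally uses manual landmarks and template construction). File names are placeholders:

```python
import ants

subject = ants.image_read("subject_T1.nii.gz")
template = ants.image_read("template_T1.nii.gz")
template_labels = ants.image_read("template_hippocampus_labels.nii.gz")

# SyN = symmetric diffeomorphic registration, the standard ANTs workhorse
reg = ants.registration(fixed=subject, moving=template,
                        type_of_transform="SyN")

# warp the template's hippocampus labels into subject space;
# nearest-neighbor interpolation keeps label values discrete
seg = ants.apply_transforms(fixed=subject, moving=template_labels,
                            transformlist=reg["fwdtransforms"],
                            interpolator="nearestNeighbor")
ants.image_write(seg, "subject_hippocampus_seg.nii.gz")

# volume in mm^3: labeled-voxel count times voxel volume
voxvol = float(seg.spacing[0] * seg.spacing[1] * seg.spacing[2])
print((seg.numpy() > 0).sum() * voxvol)
```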
|
475 |
Multi-scale convolutional neural networks for segmentation of pulmonary structures in computed tomography. Gerard, Sarah E. 01 December 2018.
Computed tomography (CT) is routinely used for diagnosing lung disease and developing treatment plans, producing images of intricate lung structure at submillimeter resolution. Automated segmentation of anatomical structures in such images is important for efficient processing in clinical and research settings. Convolutional neural networks (ConvNets) are largely successful at image segmentation, learning discriminative abstract features that yield generalizable predictions. However, hardware memory constraints prevent deep networks from being trained on full high-resolution volumetric CT images, so current applications of ConvNets to volumetric medical images operate on a subset of the full image, limiting the network's capacity to learn informative global patterns. Local patterns, such as edges, are necessary for precise boundary localization but suffer from low specificity; global information can disambiguate structures that are locally similar.
The central thesis of this doctoral work is that both local and global information is important for segmentation of anatomical structures in medical images. A novel multi-scale ConvNet is proposed that divides the learning task across multiple networks, each learning features over a different range of scales. It is hypothesized that multi-scale ConvNets will improve segmentation performance, as no compromise needs to be made among image resolution, image extent, and network depth. Three multi-scale models were designed to target segmentation of three pulmonary structures: lungs, fissures, and lobes.
The proposed models were evaluated on diverse datasets and compared to architectures that do not use both local and global features. The lung model was evaluated on humans and three animal species; the results demonstrated that the multi-scale model outperformed single-scale models at different resolutions. The fissure model showed superior performance compared to both a traditional Hessian filter and a standard U-Net architecture that is limited in global extent.
The results demonstrated that multi-scale ConvNets improved pulmonary CT segmentation by incorporating both local and global features using multiple ConvNets within a constrained-memory system. Overall, the proposed pipeline achieved high accuracy and was robust to variations resulting from different imaging protocols, reconstruction kernels, scanners, lung volumes, and pathological alterations; demonstrating its potential for enabling high-throughput image analysis in clinical and research settings.
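A minimal PyTorch sketch of the multi-scale idea described above, not the dissertation's actual architecture: one small network sees a downsampled, wide-extent view for global context, another sees the full-resolution local patch, and their features are fused before the per-voxel prediction. All layer sizes and shapes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True))

class MultiScaleSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.local_net = conv_block(1, 16)    # full-resolution local patch
        self.global_net = conv_block(1, 16)   # downsampled, wide-extent view
        self.head = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, patch, context):
        f_local = self.local_net(patch)
        f_global = self.global_net(context)
        # upsample global features to the patch grid before fusion
        f_global = F.interpolate(f_global, size=f_local.shape[2:],
                                 mode="trilinear", align_corners=False)
        return self.head(torch.cat([f_local, f_global], dim=1))

net = MultiScaleSeg()
patch = torch.randn(1, 1, 64, 64, 64)     # high-res crop
context = torch.randn(1, 1, 32, 32, 32)   # low-res whole-lung view
print(net(patch, context).shape)           # (1, 2, 64, 64, 64)
```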
|
476 |
Automated delineation and quantitative analysis of blood vessels in retinal fundus images. Xu, Xiayu. 01 May 2012.
Automated fundus image analysis plays an important role in the computer-aided diagnosis of ophthalmologic disorders. Many eye disorders, as well as cardiovascular disorders, are known to be related to changes in the retinal vasculature, and many studies have explored these relationships. However, most of these studies are based on limited data obtained with manual or semi-automated methods, owing to the lack of automated techniques for measuring and analyzing the retinal vasculature. In this thesis, a fully automated retinal vessel width measurement technique is proposed. This novel method recasts the two-dimensional problem of accurate vessel boundary delineation as a three-dimensional optimal surface segmentation problem, which is in turn transformed into finding a minimum-cost closed set in a vertex-weighted geometric graph. The problem is modeled differently for straight vessel segments and for branch points, which present different conditions. Furthermore, many retinal image analyses require the locations of the optic disc and fovea as prerequisite information, for example when analyzing the relationship between vessel width and distance to the optic disc. Hence, a simultaneous optic disc and fovea detection method is presented, which performs a two-step classification over three classes. The major contributions of this thesis are: 1) developing a fully automated vessel width measurement technique for retinal blood vessels, 2) developing a simultaneous optic disc and fovea detection method, 3) validating the methods on multiple datasets, and 4) applying the proposed methods in multiple retinal vasculature analysis studies.
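The minimum-cost-closed-set machinery underlying this formulation can be illustrated in a few lines: a maximum-weight closed set in a node-weighted directed graph (the minimum-cost version follows by negating the weights) is recovered from a single s-t minimum cut. The toy node weights and precedence edges below are illustrative, not the thesis's actual vessel graph:

```python
import networkx as nx

weights = {"a": 4, "b": -2, "c": -1, "d": 3, "e": -5}          # node weights
precedence = [("a", "b"), ("a", "c"), ("d", "c"), ("d", "e")]  # closure edges

G = nx.DiGraph()
INF = float("inf")
for v, w in weights.items():
    if w > 0:
        G.add_edge("s", v, capacity=w)    # cutting here forfeits the gain w
    else:
        G.add_edge(v, "t", capacity=-w)   # including v costs -w
for u, v in precedence:
    G.add_edge(u, v, capacity=INF)        # if u is chosen, v must be too

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
closure = source_side - {"s"}             # the maximum-weight closed set
print(closure, sum(weights[v] for v in closure))   # {'a','b','c'}, weight 1
```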
|
477 |
Foreground Removal in a Multi-Camera System. Mortensen, Daniel T. 01 December 2019.
Traditionally, whiteboards have been used to brainstorm, teach, and convey ideas to others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed that can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Prior research has addressed combining multiple images, identifying and removing unrelated objects (also referred to as foreground) in a single image, and correcting for warping differences between camera frames; however, this is the first attempt to solve the problem with a multi-camera system.
The main steps are stitching the input images together, identifying foreground material, and replacing the foreground information with the most recent background (desired) information. The problem thus divides into two main components: fusing multiple images into one cohesive frame, and detecting and removing foreground objects. For the first component, homographic transformations create a mathematical mapping from each input image to the desired reference frame, and blending techniques then remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling are used in conjunction with additional classification algorithms.
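A hedged OpenCV sketch of those two components on a single camera: a homography warps the view onto a shared whiteboard frame, and a stock background model stands in for the thesis's statistical tests when deciding which pixels may refresh the stored board content. The correspondence points and capture setup are placeholders:

```python
import cv2
import numpy as np

# homography from matched reference points (e.g. whiteboard corners)
src = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
dst = np.float32([[50, 40], [1870, 60], [1850, 1030], [70, 1040]])
H, _ = cv2.findHomography(dst, src, cv2.RANSAC)

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
board = None                                  # last-known background content

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    warped = cv2.warpPerspective(frame, H, (1920, 1080))
    fg_mask = subtractor.apply(warped)        # nonzero where foreground
    if board is None:
        board = warped.copy()
    bg = fg_mask == 0
    board[bg] = warped[bg]                    # refresh only background pixels
    cv2.imshow("whiteboard", board)
    if cv2.waitKey(1) == 27:                  # Esc quits
        break
cap.release()
```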
|
478 |
Segmentation of lung tissue in CT images with disease and pathology. Hua, Panfang. 01 December 2010.
Lung segmentation is an important first step in quantitative lung CT image analysis and computer-aided diagnosis. However, accurate, automated lung CT segmentation can be made difficult by the presence of abnormalities: many lung diseases change tissue density, and the resulting intensity changes in the CT data mean that intensity-only segmentation algorithms fail on most pathological lungs. This thesis presents two automatic algorithms for pathological lung segmentation: one based on the geodesic active contour, and one that uses graph search driven by a cost function combining intensity, gradient, boundary smoothness, and rib information. The methods were tested on several 3D thorax CT data sets with lung disease. Using manual segmentation as the gold standard, we validated our methods against Hu's method, computing sensitivity, specificity, and Hausdorff distance for evaluation.
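As a sketch of the first approach, the following uses scikit-image's morphological geodesic active contour on a single 2D slice; the input file and initialization are assumptions, and the thesis's actual method runs in 3D and adds rib information to the cost:

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

ct_slice = np.load("ct_slice.npy").astype(float)   # hypothetical 2D CT slice

# edge-stopping map: near zero at strong intensity gradients (lung border)
gimage = inverse_gaussian_gradient(ct_slice, alpha=100.0, sigma=2.0)

# initialize with a large region inside the thorax and let it evolve
init = np.zeros(ct_slice.shape, dtype=np.int8)
init[60:-60, 60:-60] = 1

lung_mask = morphological_geodesic_active_contour(
    gimage, 300, init_level_set=init,
    smoothing=2,      # regularizes the contour
    balloon=-1,       # shrink the contour until it locks onto edges
)
```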
|
479 |
Novel use of video and image analysis in a video compression system. Stobaugh, John David. 01 May 2015.
As consumer demand for higher-quality video at lower bit-rates increases, so does the need for more sophisticated methods of compressing video into manageable file sizes. This research attempts to address these concerns while still maintaining reasonable encoding times. Modern segmentation and grouping analysis are used with code vectorization techniques and other optimization paradigms to improve quality and performance within the next-generation coding standard, High Efficiency Video Coding (HEVC). On average, this work achieved a 50% decrease in encoder run-time with only marginal decreases in perceived quality.
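As a generic illustration of the vectorization wins referred to above (not the encoder's actual code), here is the sum-of-absolute-differences motion search, a core encoder hot spot, computed for every candidate offset at once with numpy instead of nested loops:

```python
import numpy as np

def sad_search(block, search_area):
    """Return the (dy, dx) offset in search_area minimizing SAD vs block."""
    bh, bw = block.shape
    # stack every candidate window into one array, then reduce in one shot
    windows = np.lib.stride_tricks.sliding_window_view(
        search_area, (bh, bw))               # (H-bh+1, W-bw+1, bh, bw)
    sad = np.abs(windows.astype(np.int32) - block).sum(axis=(2, 3))
    return np.unravel_index(np.argmin(sad), sad.shape)

block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
area = np.random.randint(0, 256, (40, 40), dtype=np.uint8)
print(sad_search(block, area))
```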
|
480 |
Background subtraction using ensembles of classifiers with an extended feature set. Klare, Brendan F. 30 June 2008.
The limitations of foreground segmentation in difficult environments using standard color-space features often result in poor performance during autonomous tracking. This work presents a new approach for classifying foreground and background pixels in image sequences using an ensemble of classifiers, each operating on a single image feature drawn from an extended set of thirteen: the three RGB features, gradient magnitude and orientation features, and eight Haar features. Each classifier implements a Mixture of Gaussians-based unsupervised background classification algorithm, and the non-thresholded classification scores of the classifiers are fused by averaging their outputs into a single hypothesis. Results of the ensemble classifier on three separate and distinct data sets are compared against using RGB features alone by means of ROC graphs. The extended feature set outperforms the RGB features on all three data sets and shows a large improvement on two of them; both are outdoor data sets, one with global illumination changes and the other with many local illumination changes. When using the entire feature set, to operate at a 90% true positive rate, the per-pixel false alarm rate is reduced five-fold on one of these data sets and six-fold on the other.
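A condensed sketch of the fusion idea, using OpenCV's stock Mixture-of-Gaussians model per feature channel as a stand-in for the paper's per-pixel classifiers, and only four of the thirteen features (the three color channels plus gradient magnitude); the file name and fusion threshold are assumptions:

```python
import cv2
import numpy as np

def feature_channels(frame):
    b, g, r = cv2.split(frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # gradient magnitude
    return [b, g, r, mag]

# one unsupervised MoG background model per feature
subtractors = [cv2.createBackgroundSubtractorMOG2(detectShadows=False)
               for _ in range(4)]

cap = cv2.VideoCapture("sequence.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    masks = [sub.apply(ch) for sub, ch in
             zip(subtractors, feature_channels(frame))]
    # average the per-feature decisions into a single hypothesis
    score = np.mean([m.astype(np.float32) / 255.0 for m in masks], axis=0)
    foreground = (score > 0.5).astype(np.uint8) * 255
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(1) == 27:
        break
```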
|