451.
Skin lesion segmentation and classification using deep learning. Unknown Date.
Melanoma, a severe and life-threatening skin cancer, is commonly misdiagnosed
or left undiagnosed. Advances in artificial intelligence, particularly deep learning,
have enabled the design and implementation of intelligent solutions to skin lesion
detection and classification from visible light images, which are capable of performing
early and accurate diagnosis of melanoma and other types of skin diseases. This work
presents solutions to the problems of skin lesion segmentation and classification. The
proposed classification approach leverages convolutional neural networks and transfer
learning. Additionally, the impact of segmentation (i.e., isolating the lesion from the
rest of the image) on the performance of the classifier is investigated, leading to the
conclusion that there is an optimal region between “dermatologist segmented” and
“not segmented” that produces the best results, suggesting that the context around a
lesion is helpful as the model is trained and built. Generative adversarial networks,
in the context of extending limited datasets by creating synthetic samples of skin
lesions, are also explored. The robustness and security of skin lesion classifiers using
convolutional neural networks are examined and stress-tested by implementing
adversarial examples. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
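The abstract mentions stress-testing classifiers with adversarial examples but does not name the attack; the fast gradient sign method (FGSM) is the standard textbook choice and is sketched below on a toy logistic "classifier". The weights, input, and epsilon are made-up illustrative values, not the thesis's model.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method on a logistic model: nudge the input in the
    direction that increases the cross-entropy loss, x' = x + eps*sign(dL/dx)."""
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # analytic gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy "lesion" feature vector and a fixed linear model (illustrative values).
w, b = np.array([1.0, -2.0, 0.5]), 0.0
x = np.array([0.2, 0.1, 0.4])
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.1)  # pushes the logit toward class 0
```

A robust classifier should keep its decision stable under such small, bounded perturbations; a brittle one flips its prediction.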
452.
Using Deep Learning Semantic Segmentation to Estimate Visual Odometry. Unknown Date.
In this research, image segmentation and visual odometry estimation in real time
are addressed, and two main contributions are made to this field. First, a new image
segmentation and classification algorithm named DilatedU-NET is introduced. This
deep-learning-based algorithm is able to process seven frames per second and achieves
over 84% accuracy on the Cityscapes dataset. Second, a new method to estimate visual
odometry is introduced. Using the KITTI benchmark dataset as a baseline, the visual
odometry error proved larger than could be measured accurately. However, the
robust frame rate made up for this, processing 15 frames per second. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
453.
Application of reference point theory to merger activity and characteristics. Unknown Date.
In Essay I, I analyze the impact of the target and bidder reference points on the probability of acquisition under general economic conditions as well as in strong/weak economic periods. I find that the target and the bidder reference points have a significant impact on the probability of a firm becoming a bidder or a target. While the target reference point also has a significant impact on the successful completion of the merger, the bidder reference point does not. In addition, I find that the target reference point is a significant determinant of management-led buyout mergers, while the bidder reference point has a significant impact on the probability of the bidder launching a hostile bid. In Essay II, I focus on the impact of the target and bidder reference points on the method of payment in the context of what the target seeks, what the bidder offers, and what the two parties use as their final method of payment. The analysis is performed under general economic conditions and in strong/weak economic periods. I find that while the target reference point has a strong impact on the method of payment agreed upon between the two parties, the bidder reference point does not. This is especially important given that the bidder reference point influences the consideration offered by the bidder but does not translate into a significant impact on the final method of payment. In Essay III, I examine the impact of the bidder reference point on public targets and the impact of bidder and target reference points on private firms. I analyze the aforementioned relationships under different economic conditions. Consistent with the literature on premium and public targets, I find that the target reference point has a strong and positive relationship with the premium paid for private firms. The relationship is stronger in weak economic times.
/ At the same time, I do not find any evidence that the bidder reference point exerts a significant influence on the premium paid for public firms. Interestingly, the relationship between the bidder reference point and the premium paid for private firms is negative and significant. / Inga Chira. / Thesis (Ph.D.)--Florida Atlantic University, 2013. / Includes bibliography. / Mode of access: World Wide Web. / System requirements: Adobe Reader.
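The abstract does not define the reference point itself; in the merger literature a common proxy is the trailing 52-week-high price relative to the current price. The sketch below assumes that proxy purely for illustration (function name and price values are hypothetical):

```python
import numpy as np

def reference_point_ratio(trailing_prices, current_price):
    """Reference-point proxy: trailing 52-week high divided by the current
    price; values above 1 indicate the stock trades below its peak."""
    return float(np.max(trailing_prices) / current_price)

# Hypothetical daily closes with a peak of 12, currently trading at 6:
ratio = reference_point_ratio(np.array([10.0, 12.0, 8.0]), 6.0)
```

Such ratios, computed for both target and bidder, would then enter probit/logit models of the probability of becoming a bidder or target.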
454.
Segmental contribution accounting system design for marketing performance assessment: a hypothetical case. January 1994.
by Fong Kwan-ting, Ronald, Koo Cheuk-wah, Anthony. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 56-58). / ACKNOWLEDGEMENT --- p.i / ABSTRACT --- p.ii / TABLE OF CONTENTS --- p.iii / LIST OF FIGURES --- p.v / LIST OF EXHIBITS --- p.vi / Chapter / Chapter I. --- INTRODUCTION --- p.1 / Objective of this Project --- p.2 / Planning and Allocating Resources --- p.2 / Controlling Operations --- p.3 / Evaluating the Performance of Segment Managers --- p.3 / Background of C&P Company -- a Hypothetical Case --- p.3 / Chapter II. --- LITERATURE REVIEW --- p.5 / Marketing Performance Assessment --- p.5 / Marketing Efficiency --- p.6 / Marketing Effectiveness --- p.7 / Marketing audit --- p.7 / Marketing effectiveness --- p.9 / Recent Developments of Marketing Performance Assessment --- p.10 / Concluding Remarks --- p.13 / Segmental Contribution Analysis --- p.14 / Terminologies Used in Segmental Contribution Analysis --- p.14 / Direct fixed costs --- p.14 / Common fixed costs --- p.15 / Contribution margin --- p.15 / Performance margin --- p.15 / Segment margin --- p.15 / Residual income analysis --- p.16 / Net income --- p.16 / Segmental Contribution Accounting System --- p.16 / Application of the Proposed Segmental Contribution Accounting System --- p.18 / Contribution margin --- p.18 / Segment margin --- p.18 / Evaluating segment manager's performance --- p.19 / Concluding Remarks --- p.19 / Chapter III. --- SYSTEM DESIGN FOR THE C&P COMPANY --- p.21 / Prototype --- p.21 / Input Formats --- p.21 / Output Formats --- p.22 / Structure Analysis --- p.22 / Data Flow Diagram --- p.22 / System Dictionary --- p.23 / Transform Descriptions --- p.23 / Chapter IV. --- CONCLUSION & DIRECTION FOR FUTURE RESEARCH --- p.24 / EXHIBITS --- p.25 / BIBLIOGRAPHY --- p.56
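The margin hierarchy named in the chapter outline (contribution margin, segment margin, net income) follows standard definitions and can be sketched as below. The function and figures are illustrative, not taken from the thesis; the "performance margin" level is omitted because the abstract does not specify its cost basis.

```python
def segment_report(revenue, variable_costs, direct_fixed_costs,
                   common_fixed_alloc=None):
    """Segmental contribution hierarchy:
    contribution margin = revenue - variable costs;
    segment margin     = contribution margin - direct fixed costs;
    net income (optional) further deducts an allocation of common fixed
    costs, which segment-manager evaluation typically avoids."""
    contribution_margin = revenue - variable_costs
    segment_margin = contribution_margin - direct_fixed_costs
    report = {"contribution_margin": contribution_margin,
              "segment_margin": segment_margin}
    if common_fixed_alloc is not None:
        report["net_income"] = segment_margin - common_fixed_alloc
    return report

# Hypothetical segment: $100 revenue, $40 variable, $20 direct fixed costs.
r = segment_report(100.0, 40.0, 20.0)
```

Evaluating a segment manager on segment margin rather than net income keeps uncontrollable common fixed costs out of the assessment, which is the rationale the outline's "Evaluating segment manager's performance" section points at.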
455.
Understanding users of a freely-available online health risk assessment: an exploration using segmentation. Hodgson, Corinne. January 2015.
Health organizations and governments are investing considerable resources into Internet-based health promotion. There is a large and growing body of research on health “etools” but to date most has been conducted using experimental paradigms; much less is known about those that are freely available. Analysis was conducted of the database generated through the operation of the freely-available health risk assessment (HRA) of the Heart and Stroke Foundation of Ontario. During the study period of February 1 to December 20, 2011, 147,274 HRAs were completed, of which 120,510 (79.8%) included consent for the use of information for research and were completed by adults aged 18 to 90 years. Comparison of Canadian users to national statistics confirmed that the HRA sample is not representative of the general population. The HRA sample is significantly and systematically biased by gender, education, employment, health behaviours, and the prevalence of specific chronic diseases. Etool users may be a large but select segment of the population, those previously described as “Internet health information seekers.” Are all Internet health information seekers the same? To explore this issue, segmentation procedures available in common commercial packages (k-means clustering, two-step clustering, and latent class analysis) were conducted using five combinations of variables. Ten statistically significant solutions were created. The most robust solution divided the sample into four groups differentiated by age (two younger and two older groups) and healthiness, as reflected by disease and modifiable risk factor burden and readiness to make lifestyle changes. These groups suggest that while all users of online health etools may be health information seekers, they vary in the extent to which they are health oriented or health conscientious (i.e., engaging in preventive health behaviours or ready for behaviour change).
It is hoped that this research will provide other organizations with similar databases with a model for analyzing their client populations, thereby increasing our knowledge about health etool users.
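Of the three segmentation procedures listed, k-means clustering is the simplest to illustrate. A minimal NumPy sketch follows, with deterministic initialization for clarity; the thesis used commercial packages, and the two-column "age x disease burden" data here is invented:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: alternate assigning points to the nearest center
    and recomputing each center as the mean of its assigned points."""
    centers = X[:k].astype(float).copy()   # simple deterministic init
    for _ in range(iters):
        # squared Euclidean distance from every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical users as (age, chronic-disease count): two younger, two older.
X = np.array([[20.0, 1.0], [22.0, 1.0], [65.0, 3.0], [70.0, 3.0]])
labels, centers = kmeans(X, 2)
```

With well-separated groups like these, the algorithm recovers the younger/older split, mirroring the age-differentiated clusters the study reports.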
456.
vU-net: edge detection in time-lapse fluorescence live cell images based on convolutional neural networks. Zhang, Xitong. 23 April 2018.
Time-lapse fluorescence live cell imaging has been widely used to study various dynamic processes in cell biology. As the initial step of image analysis, it is important to localize and segment cell edges with high accuracy. However, fluorescence live-cell images usually suffer from low contrast, noise, and uneven illumination in comparison to immunofluorescence images. Deep convolutional neural networks, which learn features directly from training images, have been successfully applied to natural image analysis problems. However, the limited amount of training samples prevents their routine application in fluorescence live-cell image analysis. In this thesis, by exploiting the temporal coherence in time-lapse movies together with the VGG-16 [1] pre-trained model, we demonstrate that we can train a deep neural network using a limited number of image frames to segment entire time-lapse movies. We propose a novel framework, vU-net, which combines the advantages of VGG-16 [1] in feature extraction and U-net [2] in feature reconstruction. Moreover, we design an auxiliary convolutional block at the end of the architecture to enhance edge detection. We evaluate our framework using the dice coefficient and the distance between the predicted edge and the ground truth on high-resolution image datasets of an adhesion marker, paxillin, acquired by a Total Internal Reflection Fluorescence (TIRF) microscope. Our results demonstrate that, on difficult datasets: (i) the testing dice coefficient of vU-net is 3.2% higher than that of U-net with the same amount of training images; (ii) vU-net can match the best prediction results of U-net with one third of the training images needed by U-net; (iii) vU-net produces more robust predictions than U-net. Therefore, vU-net can be applied more practically to challenging live cell movies than U-net, since it requires a small training set and achieves accurate segmentation.
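The dice coefficient used for evaluation has a standard definition; a small sketch follows (the `eps` smoothing term is a common convention for handling empty masks, not necessarily the thesis's exact formula):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A & B| / (|A| + |B|).
    Returns 1.0 for identical non-empty masks, ~0.0 for disjoint ones."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

mask = np.array([[1, 1], [0, 0]])
score = dice_coefficient(mask, mask)  # perfect overlap
```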
457.
Improved 3D Heart Segmentation Using Surface Parameterization for Volumetric Heart Data. Xing, Baoyuan. 24 April 2013.
Imaging modalities such as CT, MRI, and SPECT have had a tremendous impact on diagnosis and treatment planning. These imaging techniques have given doctors the capability to visualize 3D anatomical structures of the human body and soft tissues while being non-invasive. Unfortunately, the 3D images produced by these modalities often have boundaries between the organs and soft tissues that are difficult to delineate due to low signal-to-noise ratios and other factors. Image segmentation is employed as a method for differentiating Regions of Interest in these images by creating artificial contours or boundaries in the images. There are many different techniques for performing segmentation, and automating these methods is an active area of research, but currently there are no generalized methods for automatic segmentation due to the complexity of the problem. Therefore, hand-segmentation is still widely used in the medical community and is the “Gold standard” by which all other segmentation methods are measured. However, existing manual segmentation techniques have several drawbacks: they are time-consuming, introduce slice interpolation errors when segmenting slice-by-slice, and are generally not reproducible. In this thesis, we present a novel semi-automated method for 3D hand-segmentation that uses mesh extraction and surface parameterization to project several 3D meshes onto a 2D plane. We hypothesize that allowing the user to better view the relationships between neighboring voxels will aid in delineating Regions of Interest, reducing segmentation time, alleviating slice interpolation artifacts, and improving reproducibility.
458.
Image processing and forward propagation using binary representations, and robust audio analysis using deep learning. Pedersoli, Fabrizio. 15 March 2019.
The work presented in this thesis consists of three main topics:
document segmentation and classification into text and score,
efficient computation with binary representations, and deep learning
architectures for polyphonic music transcription and classification.
In the case of musical documents, an important
problem is separating text from musical score by detecting the
corresponding boundary boxes. A new algorithm is
proposed for pixel-wise classification of digital documents in musical
score and text. It is based on a bag-of-visual-words approach and
random forest classification. A robust technique for identifying
bounding boxes of text and music score from the pixel-wise
classification is also proposed.
For efficient processing of learned models, we turn our attention to
binary representations. When dealing with binary data, the use of
bit-packing and bit-wise computation can reduce computational time and
memory requirements considerably. Efficiency is a key factor when
processing large scale datasets and in industrial applications.
We propose a bit-packed representation for binary images that encodes
both pixels and square neighborhoods, and design SPmat, an optimized
framework for binary image processing, around it.
Bit-packing and bit-wise computation can also be used for efficient
forward propagation in deep neural networks. Quantized deep neural
networks have recently been proposed with the goal of improving
computational time and memory requirements while maintaining
classification performance as much as possible. A particular type of
quantized neural network is the binary neural network, in which the
weights and activations are constrained to $-1$ and $+1$. In this
thesis, we describe and evaluate Espresso, a novel optimized framework
for fast inference of binary neural networks that takes advantage of
bit-packing and bit-wise computations. Espresso is self-contained,
written in C/CUDA and provides optimized implementations of all the
building blocks needed to perform forward propagation.
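The core trick behind bit-packed forward propagation in binary networks of the kind Espresso targets is the XNOR-popcount dot product: when weights and activations are restricted to -1/+1, a dot product reduces to counting matching bits. A minimal NumPy illustration follows; it is a sketch of the general technique, not Espresso's actual C/CUDA implementation:

```python
import numpy as np

def pack_signs(v):
    """Bit-pack a +/-1 vector: +1 becomes a 1-bit, -1 becomes a 0-bit."""
    return np.packbits((v > 0).astype(np.uint8))

def binary_dot(packed_a, packed_b, n):
    """Dot product of two +/-1 vectors of length n from their packed forms.
    XNOR marks matching signs, so dot = 2 * (matching bits) - n."""
    xnor = np.bitwise_not(np.bitwise_xor(packed_a, packed_b))
    bits = np.unpackbits(xnor)[:n]   # drop the zero-padding beyond n bits
    return 2 * int(bits.sum()) - n

a = np.array([1, -1, 1, 1, -1])
b = np.array([1, 1, 1, -1, -1])
dot = binary_dot(pack_signs(a), pack_signs(b), n=5)
```

Because 8 (or 32/64) sign values fit in one machine word and popcount is a single instruction on modern hardware, this replaces n multiply-accumulates with a handful of bitwise operations, which is where the reported speedups come from.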
Following their recent success, we further investigate deep neural
networks. They have achieved state-of-the-art results and
outperformed traditional machine learning methods in many applications
such as: computer vision, speech recognition, and machine translation.
However, in the case of music information retrieval (MIR) and audio
analysis, shallow neural networks are commonly used. The
effectiveness of deep and very deep architectures for MIR and audio
tasks has not been explored in detail. It is also not clear which
input representation is best for a particular task. We therefore
investigate deep neural networks for the following audio analysis
tasks: polyphonic music transcription, musical genre classification,
and urban sound classification. We analyze the performance of common
classification network architectures using different input
representations, paying specific attention to residual networks. We
also evaluate the robustness of these models in case of degraded audio
using different combinations of training/testing data. Through
experimental evaluation we show that residual networks provide
consistent performance improvements when analyzing degraded audio
across different representations and tasks. Finally, we present a
convolutional architecture based on U-Net that can improve polyphonic
music transcription performance of different baseline transcription
networks. / Graduate
459.
Unconstrained road sign recognition. Al Qader, Akram Abed Al Karim Abed. January 2017.
There are many types of road signs, each of which carries a different meaning and function: some signs regulate traffic, others indicate the state of the road or guide and warn drivers and pedestrians. Existing image-based road sign recognition systems work well under ideal conditions, but experience problems when the lighting conditions are poor or the signs are partially occluded. The aim of this research is to propose techniques to recognize road signs in a real outdoor environment, especially to deal with poor lighting and partially occluded road signs. To achieve this, hybrid segmentation and classification algorithms are proposed. In the first part of the thesis, we propose a hybrid dynamic threshold colour segmentation algorithm based on histogram analysis. A dynamic threshold is very important in road sign segmentation, since road sign colours may change throughout the day due to environmental conditions. In the second part, we propose a geometrical shape symmetry detection and reconstruction algorithm to detect and reconstruct the shape of the sign when it is partially occluded. This algorithm is robust to scale changes and rotations. The last part of this thesis deals with feature extraction and classification. We propose a hybrid feature vector based on histograms of oriented gradients, local binary patterns, and the scale-invariant feature transform. This vector is fed into a classifier that combines a Support Vector Machine (SVM) with a Random Forest and a hybrid SVM k-Nearest Neighbours (kNN) classifier. The overall method proposed in this thesis shows a high accuracy rate of 99.4% in ideal conditions, 98.6% in noisy and fading conditions, 98.4% in poor lighting conditions, and 92.5% for partially occluded road signs on the GRAMUAH traffic signs dataset.
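The abstract does not give the dynamic-threshold algorithm itself; Otsu's method is one standard way to derive a threshold from a channel histogram, and is sketched here purely to illustrate histogram-driven thresholding as opposed to a fixed colour cut-off that fails under changing illumination:

```python
import numpy as np

def otsu_threshold(channel):
    """Histogram-based global threshold (Otsu): pick the level that
    maximizes the between-class variance of the two resulting groups."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                      # weight of the dark class
        mu0 = cum_mean / cum                  # mean of the dark class
        mu1 = (mean_all * total - cum_mean) / (total - cum)
        var_between = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic channel: a dark background population and a bright sign population.
pixels = np.concatenate([np.full(50, 11, dtype=np.uint8),
                         np.full(50, 200, dtype=np.uint8)])
t = otsu_threshold(pixels)
```

Recomputing the threshold per image (or per frame) is what makes the cut-off "dynamic" under varying daylight.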
460.
Segmentação de imagens coloridas baseada na mistura de cores e redes neurais / Segmentation of color images based on color mixture and neural networks. Moraes, Diego Rafael. 26 March 2018.
/ The Color Mixture is a technique for color image segmentation, which creates an “Artificial Retina” based on the color mixture and quantizes the image by projecting all colors onto 256 planes in the RGB cube. It then traverses all those planes with a Gaussian classifier, aiming at image segmentation. However, the current approach has a limitation: the classifier solves exclusively binary problems. Inspired by this “Artificial Retina” of the Color Mixture, we defined a new “Artificial Retina” and proposed replacing the current classifier with an artificial neural network for each of the 256 planes, with the goal of improving current performance and extending the application to multiclass and multiscale problems. We called this new approach the “Neural Color Mixture”. To validate the proposal, we analyzed it statistically in two areas of application. First, for human skin segmentation, its results were compared with eight known methods using four datasets of different sizes. The segmentation accuracy of the approach proposed in this thesis surpassed that of all the compared methods. The second practical evaluation of the proposal was carried out with satellite images, due to the wide applicability in urban and rural areas. For this, we created and made available a database of satellite images, extracted from Google Earth, from ten different regions of the planet, with four zoom scales (500 m, 1000 m, 1500 m and 2000 m), containing at least four classes of interest: tree, soil, street and water. We compared our proposal with a multilayer perceptron neural network (ANN-MLP) and a Support Vector Machine (SVM). Four experiments were performed, and again the proposal was superior.
We concluded that our proposal can be used for multiclass and multiscale color image segmentation problems, and that it possibly allows extending its use to any application, since it involves a training phase in which it adapts itself to the problem.
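How exactly colors are projected onto the 256 planes is defined in the thesis, not in the abstract. The sketch below assumes, purely for illustration, planes perpendicular to the grey diagonal of the RGB cube indexed by mean intensity (0-255), to show how pixels would be routed to per-plane classifiers; the actual Color Mixture projection may differ:

```python
import numpy as np

def mixture_plane(rgb):
    """Map each RGB color to one of 256 hypothetical 'mixture planes',
    here indexed by mean channel intensity. In the Neural Color Mixture,
    one classifier (an ANN per plane) would then handle each index."""
    rgb = np.asarray(rgb, dtype=np.int64)
    return (rgb.sum(axis=-1) // 3).astype(np.uint8)

planes = mixture_plane(np.array([[0, 0, 0], [255, 255, 255], [30, 60, 90]]))
```

Routing pixels this way turns one hard global classification problem into 256 smaller, more local ones, which is the structural idea behind training a separate network per plane.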