201

Semi-supervised adverse drug reaction detection / Halvvägledd upptäckt av läkemedelsrelaterade biverkningar

Ohl, Louis January 2021 (has links)
Pharmacovigilance consists in carefully monitoring drugs in order to re-evaluate their risk to people’s health. The sooner Adverse Drug Reactions are detected, the sooner one can act on them. This thesis aims at discovering such reactions in electronic health records under the constraint of scarce annotated data, in order to replicate the scenario of the Regional Center for Pharmacovigilance of Nice. We investigate how, in a semi-supervised learning design, unlabeled data can contribute to improving classification scores. Results suggest an excellent recall in discovering adverse reactions, and possible classification improvements under specific data distributions.
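As a rough sketch of the semi-supervised design the abstract describes — not the thesis's actual pipeline — unlabeled records can enter training through self-training, for example with scikit-learn's SelfTrainingClassifier. All data and features below are synthetic stand-ins.

```python
# Minimal self-training sketch under the low-annotation scenario described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # stand-in for text features (e.g., TF-IDF)
y = (X[:, 0] > 0).astype(int)            # stand-in ADR / no-ADR labels

# Replicate the scarce-annotation setting: mark most labels as unknown (-1).
y_semi = y.copy()
unlabeled = rng.random(len(y)) < 0.9
y_semi[unlabeled] = -1

base = LogisticRegression(max_iter=1000)
model = SelfTrainingClassifier(base, threshold=0.9)  # pseudo-label confident points
model.fit(X, y_semi)
print("labeled points used:", (~unlabeled).sum(), "accuracy:", model.score(X, y))
```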
202

An Industrial Application of Semi-supervised Techniques for Automatic Surface Inspection of Stainless Steel: Are pseudo-labeling and consistency regularization effective in a real industrial context?

Zoffoli, Mattia January 2022 (has links)
Recent developments in the field of semi-supervised learning aim to avoid the bottleneck of data labeling by leveraging unlabeled data to limit the amount of labeled data needed for training deep learning models. Semi-supervised learning algorithms are showing promising results; however, research has focused on algorithm development without proceeding to test their effectiveness in real-world applications. This research project has adapted and tested some semi-supervised learning algorithms on a dataset extracted from a manufacturing environment, in the context of the surface analysis of stainless steel, in collaboration with Outokumpu Stainless Oy. In particular, a simple algorithm combining pseudo-labeling and consistency regularization has been developed, inspired by the state-of-the-art algorithm FixMatch. The results show some potential: the use of semi-supervised learning techniques significantly reduced overfitting on the training set while maintaining good accuracy on the test set. However, some doubts are raised regarding the application of these techniques in a real environment, due to the imperfect nature of real datasets and the high development cost that the added complexity of these methods introduces.
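The core of the adapted method — pseudo-labeling weakly augmented images and enforcing consistency on strongly augmented ones, as in FixMatch — can be sketched as a single loss function. This is a minimal illustration, not the thesis's exact implementation; the confidence threshold and loss weight are illustrative assumptions.

```python
# FixMatch-style loss sketch: supervised term on labeled images plus a
# confidence-masked consistency term on unlabeled images.
import torch
import torch.nn.functional as F

def fixmatch_style_loss(model, x_lab, y_lab, x_weak, x_strong,
                        threshold=0.95, lambda_u=1.0):
    # Supervised term on the few labeled surface images.
    sup = F.cross_entropy(model(x_lab), y_lab)

    # Pseudo-labels from weakly augmented unlabeled images (no gradient).
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep only confident predictions

    # Consistency: strongly augmented views must match the pseudo-labels.
    unsup = (F.cross_entropy(model(x_strong), pseudo, reduction="none") * mask).mean()
    return sup + lambda_u * unsup
```

The thresholding is what keeps noisy pseudo-labels — a real concern on imperfect industrial data, as the abstract notes — from dominating the unsupervised term.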
203

Representation Learning for Modulation Recognition of LPI Radar Signals Through Clustering / Representationsinlärning för modulationsigenkänning av LPI-radarsignaler genom klustring

Grancharova, Mila January 2020 (has links)
Today, there is a demand for reliable ways to perform automatic modulation recognition of Low Probability of Intercept (LPI) radar signals, not least in the defense industry. This study explores the possibility of performing automatic modulation recognition on these signals through clustering and more specifically how to learn representations of input signals for this task. A semi-supervised approach using a bootstrapped convolutional neural network classifier for representation learning is proposed. A comparison is made between training the representation learner on raw time-series and on spectral representations of the input signals. It is concluded that, overall, the system trained on spectral representations performs better, though both approaches show promise and should be explored further. The proposed system is tested both on known modulation types and on previously unseen modulation types in the task of novelty detection. The results show that the system can successfully identify known modulation types with adjusted mutual information of 0.86 for signal-to-noise ratios ranging from -10 dB to 10 dB. When introducing previously unseen modulations, up to six modulations can be identified with adjusted mutual information above 0.85. Furthermore, it is shown that the system can learn to separate LPI radar signals from telecom signals which are present in most signal environments.
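The evaluation the abstract reports can be sketched as follows: cluster the learned signal representations and score the assignment against the true modulation types with adjusted mutual information. The embeddings below are synthetic stand-ins, not actual LPI radar features.

```python
# Clustering-based evaluation sketch using adjusted mutual information (AMI).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

rng = np.random.default_rng(0)
true_modulation = rng.integers(0, 6, size=600)        # six modulation types
# Stand-in for learned embeddings: class-dependent offsets plus noise.
embeddings = rng.normal(size=(600, 32)) + true_modulation[:, None]

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(embeddings)
print("AMI:", adjusted_mutual_info_score(true_modulation, labels))
```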
204

A Semi-Supervised Support Vector Machine for a Recommender System: Applied to a real estate dataset

Méndez, José January 2021 (has links)
Recommender systems are widely used in e-commerce websites to improve the buying experience of the customer. In recent years, e-commerce has been expanding quickly, and its growth accelerated during the COVID-19 pandemic, when customers and retailers were asked to keep their distance and observe lockdowns. There is therefore an increasing demand for good item recommendations that improve the users’ shopping experience. In this master’s thesis a recommender system for a real-estate website is built, based on Support Vector Machines (SVM). The main characteristic of the model is that it is trained with a few labelled samples and the rest unlabelled, following a semi-supervised machine learning paradigm. The model is constructed step by step, from the simple SVM to the semi-supervised Nested Cost-Sensitive Support Vector Machine (NCS-SVM). Then, we compare our model using four different kernel functions: Gaussian, second-degree polynomial, fourth-degree polynomial, and linear. We also compare a user with strict housing requirements against a user with vague requirements. We finish with a discussion focusing principally on parameter tuning, and briefly on the model’s downsides and ethical considerations.
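The NCS-SVM itself is thesis-specific and not reproduced here, but the four-kernel comparison the abstract describes can be sketched with a plain scikit-learn SVC; the features, labels, and scores below are synthetic stand-ins, not the real-estate data.

```python
# Kernel comparison sketch for the SVM family described above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))               # stand-in listing features
y = (X[:, :2].sum(axis=1) > 0).astype(int)   # stand-in "relevant to user" label

kernels = {
    "Gaussian (RBF)":        SVC(kernel="rbf"),
    "2nd-degree polynomial": SVC(kernel="poly", degree=2),
    "4th-degree polynomial": SVC(kernel="poly", degree=4),
    "linear":                SVC(kernel="linear"),
}
for name, clf in kernels.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```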
205

Action Recognition with Knowledge Transfer

Choi, Jin-Woo 07 January 2021 (has links)
Recent progress on deep neural networks has shown remarkable action recognition performance on videos. The remarkable performance is often achieved by transfer learning: training a model on a large-scale labeled dataset (source) and then fine-tuning the model on small-scale labeled datasets (targets). However, existing action recognition models do not always generalize well to new tasks or datasets, for two reasons. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor generalization performance. ii) Directly testing a model trained on the source data on the target data leads to poor performance, as the source and target distributions differ. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. For the first problem, I propose to learn scene-invariant action representations to mitigate the scene bias in action recognition models. Specifically, I augment the standard cross-entropy loss for action classification with 1) an adversarial loss for the scene types and 2) a human mask confusion loss for videos where the human actors are invisible. These two losses encourage learning representations that are unsuitable for predicting 1) the correct scene types and 2) the correct action types when there is no evidence. I validate the efficacy of the proposed method through transfer learning experiments, transferring the pre-trained model to three different tasks: action classification, temporal action localization, and spatio-temporal action detection. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised domain adaptation (UDA) problem. In the UDA setting, we have many labeled videos as source data and unlabeled videos as target data, so already existing labeled video datasets can serve as source data. The task is to align the source and target feature distributions so that the learned model can generalize well to the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on the action, not the background scene, to learn domain-invariant action representations. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on many labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting and explore semi-supervised video action recognition, where we have many labeled videos as source data and sparsely labeled videos as target data. The semi-supervised setting is practical, as sometimes we can afford a small labeling cost for target data. I propose multiple video data augmentation methods to inject photometric, geometric, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on public benchmarks. / Doctor of Philosophy / Recent progress on deep learning has shown remarkable action recognition performance. The remarkable performance is often achieved by transferring the knowledge learned from existing large-scale data to the small-scale data specific to applications. However, existing action recognition models do not always work well on new tasks and datasets because of the following two problems. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor performance on new datasets and tasks. ii) Directly testing a model trained on the source data on the target data leads to poor performance, as the source and target distributions differ. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. To tackle the first problem, I propose to learn scene-invariant action representations that mitigate background-scene bias in human action recognition models. Specifically, the proposed method learns representations that cannot predict the scene types, nor the correct actions when there is no evidence. I validate the proposed method's effectiveness by transferring the pre-trained model to multiple action understanding tasks. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised learning problem on the target data. In this setting, we have many labeled videos as source data and unlabeled videos as target data, and already existing labeled video datasets can serve as source data. The task is to align the source and target feature distributions so that the learned model can generalize well to the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on the action, not the background scene. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on many labeled target videos. I relax the unsupervised target data setting to a sparsely labeled target data setting. Here, we have many labeled videos as source data and sparsely labeled videos as target data. The setting is practical, as sometimes we can afford a small labeling cost for target data. I propose multiple video data augmentation methods to inject color, spatial, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on public benchmarks.
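The two auxiliary losses described above can be sketched as follows, assuming a generic backbone and two classification heads. This is an illustration of the idea — an adversarial scene loss via gradient reversal, plus a confusion loss that pushes predictions toward uniform on human-masked videos — not the dissertation's exact code; all module names and the loss weight are assumptions.

```python
# Scene-debiasing loss sketch: adversarial scene head + human-mask confusion.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def debiased_action_loss(backbone, action_head, scene_head,
                         video, y_action, video_human_masked, y_scene,
                         lam=0.5):
    feat = backbone(video)

    # Standard cross-entropy for action classification.
    loss_action = F.cross_entropy(action_head(feat), y_action)

    # 1) Adversarial scene loss: the scene head learns to predict the scene,
    #    while the reversed gradient pushes the backbone to discard scene cues.
    loss_scene = F.cross_entropy(scene_head(GradReverse.apply(feat)), y_scene)

    # 2) Human-mask confusion loss: with the actors masked out there is no
    #    action evidence, so predictions are pushed toward uniform
    #    (cross-entropy against a uniform target = negative mean log-prob).
    logp = F.log_softmax(action_head(backbone(video_human_masked)), dim=1)
    loss_confusion = -logp.mean()

    return loss_action + lam * (loss_scene + loss_confusion)
```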
206

Label-Efficient Visual Understanding with Consistency Constraints

Zou, Yuliang 24 May 2022 (has links)
Modern deep neural networks are proficient at solving various visual recognition and understanding tasks, as long as a sufficiently large labeled dataset is available at training time. However, progress on these visual tasks is limited by the number of manual annotations. On the other hand, annotating visual data is usually time-consuming and error-prone, making it challenging to scale up human labeling for many visual tasks. Fortunately, it is easy to collect large-scale, diverse unlabeled visual data from the Internet, and large amounts of annotated synthetic visual data can be acquired effortlessly from game engines. In this dissertation, we explore how to utilize unlabeled data and synthetic labeled data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea is to encourage deep neural networks to produce consistent predictions across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose to use the consistency over different geometric formulations and a cycle consistency over time to tackle low-level scene geometry perception tasks in a self-supervised learning setting. In Part II, we tackle high-level semantic understanding tasks in a semi-supervised learning setting, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains with one single forward pass, without model training or optimization at inference time. / Doctor of Philosophy / Recently, deep learning has emerged as one of the most powerful tools for solving various visual understanding tasks. However, the development of deep learning methods is significantly limited by the amount of manually labeled data. On the other hand, annotating visual data is usually time-consuming and error-prone, making the human labeling process not easily scalable. Fortunately, it is easy to collect large-scale, diverse raw visual data from the Internet (e.g., search engines, YouTube, Instagram), and large amounts of annotated synthetic visual data can be acquired effortlessly from game engines. In this dissertation, we explore how we can utilize raw visual data and synthetic data for various visual tasks, aiming to replace or reduce the direct supervision from manual annotations. The key idea behind this is to encourage deep neural networks to produce consistent predictions of the same visual input across different transformations (e.g., geometric, temporal, photometric). We organize the dissertation as follows. In Part I, we propose using the consistency over different geometric formulations and a forward-backward cycle consistency over time to tackle low-level scene geometry perception tasks, using unlabeled visual data only. In Part II, we tackle high-level semantic understanding tasks using both a small amount of labeled data and a large amount of unlabeled data jointly, with the constraint that different augmented views of the same visual input maintain consistent semantic information. In Part III, we tackle the cross-domain image segmentation problem. By encouraging an adaptive segmentation model to output consistent results for a diverse set of strongly-augmented synthetic data, the model learns to perform test-time adaptation on unseen target domains.
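The augmentation-consistency constraint of Part II can be sketched as a simple loss between two views of the same unlabeled input. The model and augmentations are placeholders, and using KL divergence with one view as a fixed target is one common instantiation, not necessarily the one used in the dissertation.

```python
# Consistency-regularization sketch: two augmented views of the same
# unlabeled input should yield matching predicted distributions.
import torch
import torch.nn.functional as F

def augmentation_consistency_loss(model, x_unlabeled, augment_a, augment_b):
    """Penalize disagreement between predictions on two views of one input."""
    with torch.no_grad():                         # one view acts as a fixed target
        target = F.softmax(model(augment_a(x_unlabeled)), dim=1)
    pred_log = F.log_softmax(model(augment_b(x_unlabeled)), dim=1)
    return F.kl_div(pred_log, target, reduction="batchmean")
```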
207

Interactive Transcription of Old Text Documents

Serrano Martínez-Santos, Nicolás 09 June 2014 (has links)
Nowadays, there are huge collections of handwritten text documents in libraries all over the world. The high demand for these resources has led to the creation of digital libraries in order to facilitate their preservation and provide electronic access to these documents. However, text transcriptions of these document images are not always available to allow users to quickly search for information, or computers to process the information, search for patterns, or draw out statistics. The problem is that manual transcription of these documents is an expensive task from both economical and time viewpoints. This thesis presents a novel approach for efficient Computer Assisted Transcription (CAT) of handwritten text documents using state-of-the-art Handwriting Text Recognition (HTR) systems. The objective of CAT approaches is to efficiently complete a transcription task through human-machine collaboration, as the effort required to generate a manual transcription is high, and automatically generated transcriptions from state-of-the-art systems still do not reach the accuracy required. This thesis is centered on a special application of CAT, that is, the transcription of old text documents when the quantity of user effort available is limited, and thus, the entire document cannot be revised. In this approach, the objective is to generate the best possible transcription by means of the user effort available. This thesis provides a comprehensive view of the CAT process, from feature extraction to user interaction.

First, a statistical approach to generalise interactive transcription is proposed. As its direct application is unfeasible, some assumptions are made to apply it to two different tasks: first, the interactive transcription of handwritten text documents, and next, the interactive detection of the document layout.

Next, the digitisation and annotation process of two real old text documents is described. This process was carried out because of the scarcity of similar resources and the need for annotated data to thoroughly test all the tools and techniques developed in this thesis. These two documents were carefully selected to represent the general difficulties that are encountered when dealing with HTR. Baseline results are presented on these two documents to establish a benchmark with a standard HTR system. Finally, these annotated documents were made freely available to the community. It must be noted that all the techniques and methods developed in this thesis have been assessed on these two real old text documents.

Then, a CAT approach for HTR when user effort is limited is studied and extensively tested. The ultimate goal of applying CAT is achieved by putting together three processes. Given a recognised transcription from an HTR system, the first process consists in locating (possibly) incorrect words and employing the available user effort to supervise them (if necessary). As most words are not expected to be supervised due to the limited user effort available, only a few are selected to be revised. The system presents to the user a small subset of these words according to an estimation of their correctness, or to be more precise, according to their confidence level. Next, the second process starts once these low-confidence words have been supervised. This process updates the recognition of the document taking user corrections into consideration, which improves the quality of those words that were not revised by the user. Finally, the last process adapts the system using the partially revised (and possibly not perfect) transcription obtained so far. In this adaptation, the system intelligently selects the correct words of the transcription. As a result, the adapted system will better recognise future transcriptions. Transcription experiments using this CAT approach show that it is mostly effective when user effort is low.

The last contribution of this thesis is a method for balancing the final transcription quality and the supervision effort applied, using the previously described CAT approach. In other words, this method allows the user to control the amount of errors in the transcriptions obtained from a CAT approach. The motivation of this method is to let users decide on the final quality of the desired documents, as partially erroneous transcriptions can be sufficient to convey the meaning, and the user effort required to produce them might be significantly lower than for a totally manual transcription. Consequently, the system estimates the minimum user effort required to reach the amount of error defined by the user. Error estimation is performed by computing separately the error produced by each recognised word, and thus asking the user to only revise the ones in which most errors occur.

Additionally, an interactive prototype is presented, which integrates most of the interactive techniques presented in this thesis. This prototype has been developed to be used by palaeographic experts, who do not have any background in HTR technologies. After a slight fine-tuning by an HTR expert, the prototype lets the transcribers manually annotate the document or employ the CAT approach presented. All automatic operations, such as recognition, are performed in the background, detaching the transcriber from the details of the system. The prototype was assessed by an expert transcriber and shown to be adequate and efficient for its purpose. The prototype is freely available under the GNU Public Licence (GPL). / Serrano Martínez-Santos, N. (2014). Interactive Transcription of Old Text Documents [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37979
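The first process — spending a limited supervision budget on the least confident words — reduces to a simple selection rule, sketched below. The words and confidence values are made up for illustration; in the real system, confidences come from the HTR decoder.

```python
# Confidence-based word selection sketch for a limited supervision budget.
def select_words_for_review(words, confidences, budget):
    """Return indices of the `budget` lowest-confidence words."""
    ranked = sorted(range(len(words)), key=lambda i: confidences[i])
    return ranked[:budget]

words = ["quijote", "hidalgo", "lan9a", "astillero", "adarga"]
confidences = [0.97, 0.88, 0.31, 0.74, 0.52]   # hypothetical decoder confidences
picked = select_words_for_review(words, confidences, budget=2)
print([words[i] for i in picked])
# -> ['lan9a', 'adarga']: only the two least confident words go to the user.
```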
208

Handling Domain Shift in 3D Point Cloud Perception

Saltori, Cristiano 10 April 2024 (has links)
This thesis addresses the problem of domain shift in 3D point cloud perception. In recent decades, there has been tremendous progress in within-domain training and testing. However, the performance of perception models degrades when training on a source domain and testing on a target domain sampled from a different data distribution. As a result, a change in sensor or geo-location can lead to a harmful drop in model performance. While solutions exist for image perception, the problem remains unresolved for point clouds. The focus of this thesis is the study and design of solutions for mitigating domain shift in 3D point cloud perception. We identify several settings differing in the level of target supervision and the availability of source data. We conduct a thorough study of each setting and introduce a new method to solve domain shift in each configuration. In particular, we study three novel settings in domain adaptation and domain generalization and propose five new methods for mitigating domain shift in 3D point cloud perception. Our methods are used by the research community and, at the time of writing, some of the proposed approaches hold the state of the art. In conclusion, this thesis provides a valuable contribution to the computer vision community, setting the groundwork for future work in cross-domain conditions.
209

A Study on the Use of Unsupervised, Supervised, and Semi-supervised Modeling for Jamming Detection and Classification in Unmanned Aerial Vehicles

Margaux Camille Marie Catafort-Silva (18477354) 02 May 2024 (has links)
<p dir="ltr">In this work, first, unsupervised machine learning is proposed as a study for detecting and classifying jamming attacks targeting unmanned aerial vehicles (UAV) operating at a 2.4 GHz band. Three scenarios are developed with a dataset of samples extracted from meticulous experimental routines using various unsupervised learning algorithms, namely K-means, density-based spatial clustering of applications with noise (DBSCAN), agglomerative clustering (AGG) and Gaussian mixture model (GMM). These routines characterize attack scenarios entailing barrage (BA), single- tone (ST), successive-pulse (SP), and protocol-aware (PA) jamming in three different settings. In the first setting, all extracted features from the original dataset are used (i.e., nine in total). In the second setting, Spearman correlation is implemented to reduce the number of these features. In the third setting, principal component analysis (PCA) is utilized to reduce the dimensionality of the dataset to minimize complexity. The metrics used to compare the algorithms are homogeneity, completeness, v-measure, adjusted mutual information (AMI) and adjusted rank index (ARI). The optimum model scored 1.00, 0.949, 0.791, 0.722, and 0.791, respectively, allowing the detection and classification of these four jamming types with an acceptable degree of confidence.</p><p dir="ltr">Second, following a different study, supervised learning (i.e., random forest modeling) is developed to achieve a binary classification to ensure accurate clustering of samples into two distinct classes: clean and jamming. Following this supervised-based classification, two-class and three-class unsupervised learning is implemented considering three of the four jamming types: BA, ST, and SP. In this initial step, the four aforementioned algorithms are used. This newly developed study is intended to facilitate the visualization of the performance of each algorithm, for example, AGG performs a homogeneity of 1.0, a completeness of 0.950, a V-measure of 0.713, an ARI of 0.557 and an AMI of 0.713, and GMM generates 1, 0.771, 0.645, 0.536 and 0.644, respectively. Lastly, to improve the classification of this study, semi-supervised learning is adopted instead of unsupervised learning considering the same algorithms and dataset. In this case, GMM achieves results of 1, 0.688, 0.688, 0.786 and 0.688 whereas DBSCAN achieves 0, 0.036, 0.028, 0.018, 0.028 for homogeneity, completeness, V-measure, ARI and AMI respectively. Overall, this unsupervised learning is approached as a method for jamming classification, addressing the challenge of identifying newly introduced samples.</p>
210

EFFICIENT INFERENCE AND DOMINANT-SET BASED CLUSTERING FOR FUNCTIONAL DATA

Xiang Wang (18396603) 03 June 2024 (has links)
<p dir="ltr">This dissertation addresses three progressively fundamental problems for functional data analysis: (1) To do efficient inference for the functional mean model accounting for within-subject correlation, we propose the refined and bias-corrected empirical likelihood method. (2) To identify functional subjects potentially from different populations, we propose the dominant-set based unsupervised clustering method using the similarity matrix. (3) To learn the similarity matrix from various similarity metrics for functional data clustering, we propose the modularity guided and dominant-set based semi-supervised clustering method.</p><p dir="ltr">In the first problem, the empirical likelihood method is utilized to do inference for the mean function of functional data by constructing the refined and bias-corrected estimating equation. The proposed estimating equation not only improves efficiency but also enables practically feasible empirical likelihood inference by properly incorporating within-subject correlation, which has not been achieved by previous studies.</p><p dir="ltr">In the second problem, the dominant-set based unsupervised clustering method is proposed to maximize the within-cluster similarity and applied to functional data with a flexible choice of similarity measures between curves. The proposed unsupervised clustering method is a hierarchical bipartition procedure under the penalized optimization framework with the tuning parameter selected by maximizing the clustering criterion called modularity of the resulting two clusters, which is inspired by the concept of dominant set in graph theory and solved by replicator dynamics in game theory. The advantage offered by this approach is not only robust to imbalanced sizes of groups but also to outliers, which overcomes the limitation of many existing clustering methods.</p><p dir="ltr">In the third problem, the metric-based semi-supervised clustering method is proposed with similarity metric learned by modularity maximization and followed by the above proposed dominant-set based clustering procedure. Under semi-supervised setting where some clustering memberships are known, the goal is to determine the best linear combination of candidate similarity metrics as the final metric to enhance the clustering performance. Besides the global metric-based algorithm, another algorithm is also proposed to learn individual metrics for each cluster, which permits overlapping membership for the clustering. This is innovatively different from many existing methods. This method is superiorly applicable to functional data with various similarity metrics between functional curves, while also exhibiting robustness to imbalanced sizes of groups, which are intrinsic to the dominant-set based clustering approach.</p><p dir="ltr">In all three problems, the advantages of the proposed methods are demonstrated through extensive empirical investigations using simulations as well as real data applications.</p>
