  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Řízení o přestupcích v prvním stupni / Proceedings for administrative infractions in the first instance

Lokvenc, Jan January 2011 (has links)
Title of this thesis: Proceedings for administrative infractions in the first instance. The thesis deals with administrative infraction proceedings in the first instance, specifically with the steps taken by the administrative authority before instituting the proceedings. It draws on recent scholarly literature, incorporates new case law, and takes account of the methodologies of the Ministry of the Interior as well as practical problems. Chapter I defines administrative infraction proceedings, their relation to the Administrative Proceedings Act and to the infraction itself, and describes the possibility of using analogy. Chapter II describes the main procedural principles of administrative infraction proceedings and their importance for the proceedings; in addition to the principles following directly from the Misdemeanours Act, it also covers constitutional and administrative-law principles. Chapter III describes the competence of the administrative authority in administrative infraction proceedings, namely subject-matter jurisdiction, territorial jurisdiction and functional competence, and deals with changes of these competences. Furthermore, in this context the thesis addresses the professional competence of authorised officials and contracts under public law....
92

A parameter-adaptive dynamic programming approach for inferring cophylogenies

Merkle, Daniel, Middendorf, Martin, Wieseke, Nikolas 26 October 2018 (has links)
Background: Coevolutionary systems, such as hosts and their parasites, are commonly used model systems for evolutionary studies. Inferring the coevolutionary history from given phylogenies of both groups is often done by employing a set of possible types of events that happened during coevolution. Costs are assigned to the different types of events, and a reconstruction of the common history with a minimal sum of event costs is sought. Results: This paper introduces a new algorithm and a corresponding tool called CoRe-PA that can be used to infer the common history of coevolutionary systems. The proposed method utilizes an event-based concept for reconciliation analyses in which the possible events are cospeciations, sortings, duplications, and (host) switches. All known event-based approaches so far assign costs to each type of cophylogenetic event in order to find a cost-minimal reconstruction. CoRe-PA uses a new parameter-adaptive approach, i.e., no costs have to be assigned to the coevolutionary events in advance. Several biological coevolutionary systems that have already been studied intensively in the literature are used to show the performance of CoRe-PA. Conclusion: From a biological point of view, reasonable cost values for event-based reconciliations can often be estimated only very roughly. CoRe-PA is therefore very useful when it is difficult or impossible to assign exact cost values to the different types of coevolutionary events in advance.
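As a rough, hedged illustration of the cost-based reconciliation idea described above (and not of CoRe-PA's actual parameter-adaptive algorithm), the Python sketch below assigns hypothetical costs to the four event types and selects the cost-minimal candidate reconstruction; all event counts and cost values are invented for the example.

    # Illustrative sketch only: fixed event costs and toy candidate reconstructions.
    # CoRe-PA itself adapts the costs automatically; the numbers here are assumptions.
    EVENT_COSTS = {"cospeciation": 0.0, "sorting": 1.0, "duplication": 2.0, "host_switch": 2.0}

    def reconstruction_cost(event_counts, costs=EVENT_COSTS):
        """Total cost of a reconstruction, given counts per event type."""
        return sum(costs[event] * n for event, n in event_counts.items())

    # Hypothetical candidate reconstructions, each described by its event counts.
    candidates = [
        {"cospeciation": 5, "sorting": 2, "duplication": 0, "host_switch": 1},
        {"cospeciation": 4, "sorting": 0, "duplication": 1, "host_switch": 2},
    ]

    best = min(candidates, key=reconstruction_cost)
    print(best, reconstruction_cost(best))

Under a classical fixed-cost approach the chosen reconstruction depends entirely on the cost values above, which is precisely the dependence that CoRe-PA's parameter-adaptive scheme is designed to avoid.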
93

Automatic classification of fish and bubbles at pixel-level precision in multi-frequency acoustic echograms using U-Net convolutional neural networks

Slonimer, Alex 05 April 2022 (has links)
Multi-frequency backscatter acoustic profilers (echosounders) are used to measure biological and physical phenomena in the ocean in ways that are not possible with optical methods. Echosounders are commonly used on ocean observatories and by commercial fisheries but require significant manual effort to classify species of interest within the collected echograms. The work presented in this thesis tackles the challenging task of automating the identification of fish and other phenomena in echosounder data, with specific application to aggregations of juvenile salmon, schools of herring, and bubbles of air that have been mixed into the water. U-Net convolutional neural networks (CNNs) are used to accomplish this task by identifying classes at the pixel level. The data considered here were collected in Okisollo Channel on the coast of British Columbia, Canada, using an Acoustic Zooplankton and Fish Profiler at four frequencies (67.5, 125, 200, and 455 kHz). The entrainment of air bubbles and the behaviour of fish are both governed by the surrounding physical environment. To improve the classification, simulated channels for water depth and solar elevation angle (a proxy for sunlight) are used to encode the CNNs with information related to the environment, providing spatial and temporal context. The manual annotation of echograms at the pixel level is a challenging process, and a custom application was developed to aid in this task. A relatively small set of annotations was created and is used to train the CNNs. During training, the echogram data are divided into randomly spaced square tiles to encode the models with robust features, and into overlapping tiles for added redundancy during classification. This is done without removing noise in the data, thus ensuring broad applicability. This approach proves highly successful, as evidenced by the best-performing U-Net model producing F1 scores of 93.0%, 87.3% and 86.5% for herring, salmon, and bubble classes, respectively. These models also achieve promising results when applied to echogram data with coarser resolution. One goal in fisheries acoustics is to detect distinct schools of fish. Following the initial pixel-level classification, the results from the best-performing U-Net model are fed through a heuristic module, inspired by traditional fisheries methods, that links connected components of identified fish (school candidates) into distinct school objects. The results are compared to the outputs from a recent study that relied on a Mask R-CNN architecture to apply instance segmentation for classifying fish schools. It is demonstrated that the U-Net/heuristic hybrid technique improves on the Mask R-CNN approach by a small amount for the classification of herring schools, and by a large amount for aggregations of juvenile salmon (improvement in mean average precision from 24.7% to 56.1%). / Graduate
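One plausible reading of the input construction described above, shown as a minimal hedged sketch: the four frequency channels are stacked with the two simulated environmental channels (water depth and solar elevation) and cut into overlapping square tiles before being passed to a U-Net. Array shapes, tile size, and stride are assumptions made for illustration, not the thesis code.

    import numpy as np

    def build_tiles(echograms, depth_map, solar_map, tile=128, stride=64):
        """Stack 4 frequency channels with 2 simulated environment channels
        and cut the result into overlapping square tiles (assumed sizes)."""
        # echograms: (4, H, W); depth_map and solar_map: (H, W)
        stack = np.concatenate([echograms, depth_map[None], solar_map[None]], axis=0)
        _, H, W = stack.shape
        tiles = [
            stack[:, r:r + tile, c:c + tile]
            for r in range(0, H - tile + 1, stride)
            for c in range(0, W - tile + 1, stride)
        ]
        return np.stack(tiles)  # (n_tiles, 6, tile, tile)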
94

Les instances narratives dans Les soleils des indépendances d’Ahmadou Kourouma / The narrative instances in The Suns of Independence by Ahmadou Kourouma

Sylvan, Anna January 2021 (has links)
The African novel The Suns of Independence, written by Ivorian author Ahmadou Kourouma, is considered one of the first to examine the disillusionment of the postcolonial era after independence in Africa. The novel is celebrated for its narrative style, inspired by the Malinke culture and language and characterised by its oral tradition and the interaction between the narrator and his audience. Using the concepts of Gérard Genette (1983), this study analyses the following narrative instances in the novel: the narrator addressing the narratee, proverbs, comparisons and riddles, the narrator addressing a character, the question-and-answer procedure, and the dream. For each narrative instance it discusses the narrator's relationship to the story, the perspective, the narrative level, and the function of the narrator. The findings show that the alternation of narrative instances gives access to more functions of the narrator. The narrative instances in which the narrator addresses the narratee or a character, as well as the question-and-answer procedure, create the illusion of a dialogue between narrator and narratee, thus enhancing the communicative function, whereas proverbs, comparisons, and riddles, apart from connecting with the narratee, also play an important role in explaining and evaluating developments, characters and environments. Other narrative instances, such as the dream, play an important role for the narrative function.
95

Learning Techniques For Information Retrieval And Mining In High-dimensional Databases

Cheng, Hao 01 January 2009 (has links)
The main focus of my research is to design effective learning techniques for information retrieval and mining in high-dimensional databases. There are two main aspects in the retrieval and mining research: accuracy and efficiency. The accuracy problem is how to return results which can better match the ground truth, and the efficiency problem is how to evaluate users' requests and execute learning algorithms as fast as possible. However, these problems are non-trivial because of the complexity of the high-level semantic concepts, the heterogeneous nature of the feature space, the high dimensionality of data representations and the size of the databases. My dissertation is dedicated to addressing these issues. Specifically, my work has five main contributions as follows. The first contribution is a novel manifold learning algorithm, Local and Global Structures Preserving Projection (LGSPP), which defines salient low-dimensional representations for the high-dimensional data. A small number of projection directions are sought in order to properly preserve the local and global structures of the original data. Specifically, two groups of points are extracted for each individual point in the dataset: the first group contains the nearest neighbors of the point, and the other contains a few sampled points far away from it. These two point sets respectively characterize the local and global structures with regard to the data point. The objective of the embedding is to minimize the distances of the points in each local neighborhood and also to disperse the points far away from their respective remote points in the original space. In this way, the relationships between the data in the original space are well preserved with little distortion. The second contribution is a new constrained clustering algorithm. Conventionally, clustering is an unsupervised learning problem, which systematically partitions a dataset into a small set of clusters such that data in each cluster appear similar to each other compared with those in other clusters. In this proposal, partial human knowledge is exploited to find better clustering results. Two kinds of constraints are integrated into the clustering algorithm. One is the must-link constraint, indicating that the two points involved belong to the same cluster. On the other hand, the cannot-link constraint denotes that two points are not within the same cluster. Given the input constraints, data points are arranged into small groups and a graph is constructed to preserve the semantic relations between these groups. The assignment procedure makes a best effort to assign each group to a feasible cluster without violating the constraints. The theoretical analysis reveals that the probability of data points being assigned to the true clusters is much higher with the new proposal than with conventional methods. In general, the new scheme can produce clusters which better match the ground truth and respect the semantic relations between points inferred from the constraints. The third contribution is a unified framework for partition-based dimension reduction techniques, which allows efficient similarity retrieval in the high-dimensional data space. Recent similarity search techniques, such as Piecewise Aggregate Approximation (PAA), Segmented Means (SMEAN) and Mean-Standard deviation (MS), prove to be very effective in reducing data dimensionality by partitioning dimensions into subsets and extracting aggregate values from each dimension subset. 
These partition-based techniques have many advantages, including very efficient multi-phased pruning, while being simple to implement. They, however, are not adaptive to the different characteristics of data in diverse applications. In this study, a unified framework for these partition-based techniques is proposed and the issue of dimension partitions is examined in this framework. An investigation of the relationship between query selectivity and dimension partition schemes reveals indicators that can predict the performance of a partitioning setting. Accordingly, a greedy algorithm is designed to effectively determine a good partitioning of data dimensions so that the performance of the reduction technique is robust with regard to different datasets. The fourth contribution is an effective similarity search technique for databases of point sets. In the conventional model, an object corresponds to a single vector. In the proposed study, an object is represented by a set of points. In general, this new representation can be used in many real-world applications and carries much more local information, but the retrieval and learning problems become very challenging. The Hausdorff distance is the common distance function to measure the similarity between two point sets; however, this metric is sensitive to outliers in the data. To address this issue, a novel similarity function is defined to better capture the proximity of two objects, in which a one-to-one mapping is established between vectors of the two objects. The optimal mapping minimizes the sum of distances between the paired points. The overall distance of the optimal matching is robust and has high retrieval accuracy. The computation of the new distance function is formulated as the classical assignment problem. Lower-bounding techniques and an early-stop mechanism are also proposed to significantly accelerate the expensive similarity search process. The classification problem over point-set data is called Multiple Instance Learning (MIL) in the machine learning community, in which a vector is an instance and an object is a bag of instances. The fifth contribution is to convert the MIL problem into a standard supervised learning problem in the conventional vector space. Specifically, feature vectors of bags are grouped into clusters. Each object is then denoted as a bag of cluster labels, and common patterns of each category are discovered, each of which is further reconstructed into a bag of features. Accordingly, a bag is effectively mapped into a feature space defined by the distances from this bag to all the derived patterns. Standard supervised learning algorithms can then be applied to classify objects into pre-defined categories. The results demonstrate that the proposal has better classification accuracy compared to other state-of-the-art techniques. In the future, I will continue to explore my research in large-scale data analysis algorithms, applications and system developments. In particular, I am interested in applications that analyze massive volumes of online data.
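A minimal sketch of the assignment-based point-set distance described in the fourth contribution, using SciPy's solver for the classical assignment problem; it establishes a one-to-one mapping between the vectors of two objects so that the summed pairwise distance is minimal. This illustrates only the core idea, without the lower-bounding and early-stop accelerations proposed in the dissertation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def point_set_distance(A, B):
        """Minimal-cost one-to-one matching distance between point sets
        A (m, d) and B (n, d); extra points of the larger set stay unmatched."""
        costs = cdist(A, B)                        # pairwise Euclidean distances
        rows, cols = linear_sum_assignment(costs)  # optimal assignment
        return costs[rows, cols].sum()

    a = np.random.rand(5, 3)
    b = np.random.rand(7, 3)
    print(point_set_distance(a, b))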
96

Cost-Based Vectorization of Instance-Based Integration Processes

Boehm, Matthias, Habich, Dirk, Preissler, Steffen, Lehner, Wolfgang 19 January 2023 (has links)
The inefficiency of integration processes - as an abstraction of workflow-based integration tasks - is often caused by low resource utilization and significant waiting times for external systems. With the aim of overcoming these problems, we proposed the concept of process vectorization, in which instance-based integration processes are transparently executed with the pipes-and-filters execution model. Here, the term vectorization is used in the sense of processing a sequence (vector) of messages with one standing process. Although it has been shown that process vectorization achieves a significant throughput improvement, this concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of available threads. In this paper, we present an advanced optimization approach that addresses these problems. To this end, we generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner. Due to the exponential complexity, we provide a heuristic computation approach and formally analyze its optimality. Our evaluation shows that the message throughput can be increased significantly compared to both instance-based execution and rule-based process vectorization.
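As a rough, hedged illustration of why cost-based grouping matters (this is not the paper's cost model or heuristic): in a pipes-and-filters execution the throughput bound is set by the most cost-intensive standing process, so merging cheap operators reduces the number of threads needed without worsening that bound. The per-message operator costs and the greedy merging rule below are invented for the example.

    def greedy_group(op_costs, max_processes):
        """Greedily merge adjacent operators into at most `max_processes`
        standing processes, always merging the cheapest adjacent pair."""
        groups = [[c] for c in op_costs]
        while len(groups) > max_processes:
            i = min(range(len(groups) - 1),
                    key=lambda k: sum(groups[k]) + sum(groups[k + 1]))
            groups[i:i + 2] = [groups[i] + groups[i + 1]]
        return groups

    op_costs = [2.0, 1.0, 1.0, 6.0, 1.5]   # hypothetical per-message costs
    groups = greedy_group(op_costs, max_processes=3)
    bottleneck = max(sum(g) for g in groups)
    print(groups, "throughput bound ~", 1.0 / bottleneck)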
97

Testing Lifestyle Store Website Using JMeter in AWS and GCP

Tangella, Ankhit, Katari, Padmaja January 2022 (has links)
Background: As cloud computing has grown over the last decades, several cloud services have become available on the market, and users may prefer those that are more flexible and efficient. Based on this, we chose to evaluate cloud services in terms of which would be better for the user at delivering the required data from a chosen website, using JMeter for performance testing. Performance is assessed for different numbers of simulated users, and the user interfaces of GCP and AWS are also compared while performing several compute-engine-related operations. Objectives: This thesis aims to test the performance of a website after deploying it on two distinct cloud platforms. After creating instances in AWS and a domain in GCP, as well as the bucket, the website files are uploaded into the bucket, and the GCP and AWS instances are connected to the lifestyle store website. Performance testing of the selected website is then carried out on both services, and the outcomes are compared using the testing tool JMeter. Methods: We choose experimentation as our research methodology; the website is deployed separately on the two cloud platforms and performance testing is applied to each deployment. JMeter is used to test the website's performance on both services and to gather the research results, which are visualized in an aggregate graph, other graphs, and summary reports. The metrics are throughput, average response time, median, percentiles and standard deviation. Results: The results are based on JMeter performance testing of the selected website on the two cloud platforms; the results for AWS and GCP are shown in the aggregate graph. The graph results are used to determine which service is best for users to obtain a response from the website for requested data in the shortest amount of time. We considered 500 and 1000 users and, based on the results, compared the metrics throughput, average response time, standard deviation and percentiles. The 1000-user results are compared to determine which cloud platform performs better. Conclusions: According to the results for 1000 users, AWS has a higher throughput than GCP and a lower average response time. Thus, AWS outperforms GCP in terms of performance.
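For context on the reported metrics, the hedged Python sketch below computes aggregate-report-style figures (throughput, average response time, median, percentiles, and standard deviation) from a list of sample response times; the sample values and test duration are invented for illustration and are not the thesis data.

    import statistics

    # Hypothetical response times in milliseconds and total test duration in seconds.
    response_times_ms = [120, 95, 210, 180, 160, 300, 140, 110, 250, 130]
    test_duration_s = 2.0

    throughput = len(response_times_ms) / test_duration_s   # requests per second
    average = statistics.mean(response_times_ms)
    median = statistics.median(response_times_ms)
    stdev = statistics.pstdev(response_times_ms)
    q = statistics.quantiles(response_times_ms, n=100)      # 99 percentile cut points
    p90, p95, p99 = q[89], q[94], q[98]

    print(f"throughput={throughput:.1f}/s avg={average:.1f}ms median={median}ms "
          f"p90={p90:.1f} p95={p95:.1f} p99={p99:.1f} stdev={stdev:.1f}")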
98

Trajectory Similarity Based Prediction for Remaining Useful Life Estimation

Wang, Tianyi 06 December 2010 (has links)
No description available.
99

Comparing Weak and Strong Annotation Strategies for Multiple Instance Learning in Digital Pathology / Jämförelse av svaga och starka annoteringsstrategier för flerinstansinlärning i digital patologi

Ciallella, Alice January 2022 (has links)
Prostate cancer is the second most diagnosed cancer worldwide, and its diagnosis is done through visual inspection of biopsy tissue by a pathologist, who assigns a score used by doctors to decide on the treatment. However, the scoring system, the Gleason score, is affected by high inter- and intra-observer variability, lack of standardization, and overestimation. Therefore, there is a need for new solutions that can reduce these issues and provide a more accurate diagnosis. Nowadays, high-resolution digital images of biopsy tissues can be obtained and stored. The availability of such images, called Whole Slide Images (WSI), allows the implementation of machine and deep learning models to assist pathologists in diagnosing prostate cancer. Multiple-Instance Learning (MIL) has been shown to reach very promising results in digital pathology and in the binary classification of prostate cancer slides. However, such models require large datasets to ensure good performance. This project investigates the use of small sets of strongly annotated images to create new large datasets for training a MIL model. To evaluate the performance of this approach, the standard dataset is used to obtain baselines for both binary and multiclass classification tasks. For multiclass classification, the International Society of Urological Pathology (ISUP) score is used, which is derived from the Gleason score. The dataset used is the publicly available PANDA dataset. In this project, only the slides from Radboud University Medical Center are used, which amounts to 5,160 images. The MIL model chosen is the Clustering-constrained Attention Multiple instance learning (CLAM) model, which is publicly available. The standard approach reaches a Cohen's kappa (κ) of 0.78 and 0.59 for binary and multiclass classification respectively. To evaluate the new approach, large datasets are created starting from different set sizes. Using 500 images, the model reaches a κ of 0.72 and 0.38 respectively. While the results of the two approaches are comparable for the binary task, the new approach is not beneficial for multiclass classification tasks.
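As a generic illustration of attention-based multiple-instance pooling (the general mechanism that attention-MIL models such as CLAM build on, not CLAM's actual implementation), the sketch below aggregates patch-level embeddings of a whole-slide image into a single slide-level representation using learned attention weights; the projection matrices here are random placeholders.

    import numpy as np

    def attention_mil_pool(patches, V, w):
        """Aggregate patch embeddings (n, d) into one slide embedding (d,)
        via softmax attention: a_i proportional to exp(w . tanh(V h_i))."""
        scores = np.tanh(patches @ V) @ w    # (n,)
        a = np.exp(scores - scores.max())
        a /= a.sum()                         # attention weights sum to 1
        return a @ patches                   # weighted average of patch embeddings

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(500, 64))     # 500 hypothetical patch embeddings
    V = rng.normal(size=(64, 32))            # placeholder projection
    w = rng.normal(size=32)                  # placeholder attention vector
    print(attention_mil_pool(patches, V, w).shape)   # (64,)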
100

Event Detection and Extraction from News Articles

Wang, Wei 21 February 2018 (has links)
Event extraction is a type of information extraction (IE) that extracts specific knowledge of certain incidents from texts. Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially. Therefore, it becomes imperative to develop algorithms that automatically extract machine-readable information from large volumes of text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves event detection and critical information extraction from news articles. (3) Third, the efforts concentrate on event encoding, which aims to extract event extent and arguments from texts. We start by investigating two large-scale event extraction systems (ICEWS and GDELT) in the political science domain. We design a set of experiments to evaluate the quality of the events extracted by the two target systems, in terms of reliability and correctness. The results show that there exist significant discrepancies between the outputs of the automated systems and the hand-coded system, and that the accuracy of both systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Inspired by the successful application of deep learning in Natural Language Processing (NLP), we propose a Multi-Instance Convolutional Neural Network (MI-CNN) model for event detection and critical sentence extraction without sentence-level labels. To evaluate the model, we run a set of experiments on a real-world protest event dataset. The results show that our model outperforms strong baseline models and extracts meaningful key sentences without domain knowledge or manually designed features. We also extend the MI-CNN model and propose an MIMTRNN model for event extraction with distant supervision, to overcome the lack of fine-grained labels and the small size of the training data. The proposed MIMTRNN model systematically integrates RNNs, Multi-Instance Learning, and Multi-Task Learning into a unified framework. The RNN module encodes into the representations of entity mentions the sequential information as well as the dependencies between event arguments, which are very useful in the event extraction task. The Multi-Instance Learning paradigm means the system does not require precise labels at the entity-mention level, which makes it well suited to working with distant supervision for event extraction. The Multi-Task Learning module in our approach is designed to alleviate the potential overfitting problem caused by the relatively small size of the training data. The results of the experiments on two real-world datasets (Cyber-Attack and Civil Unrest) show that our model benefits from each component and significantly outperforms other baseline methods. / Ph. D. / Nowadays the amount of available information (such as news, blogs, and social media) grows exponentially. The demand for making use of this massive online information in decision-making processes is becoming increasingly intense. Therefore, it is imperative to develop algorithms that automatically extract structured information from large volumes of unstructured text data. 
In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves detecting the event and extracting key information about the event in the article. (3) Third, the efforts concentrate on extracting the arguments of the event from the text. We found that there exist significant discrepancies between the outputs of automated systems and the hand-coded system, and that the accuracy of current event extraction systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Our experiments on two real-world event extraction tasks (Cyber-Attack and Civil Unrest) show the effectiveness of our deep learning approaches for detecting and extracting event information from unstructured text data.
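A hedged sketch of the multi-instance idea underlying the MI-CNN described above: per-sentence event scores are aggregated into a document-level prediction (here by max pooling), so only document labels are needed during training and the highest-scoring sentences double as the extracted key sentences. The scores and threshold below are placeholders, not the dissertation's model.

    import numpy as np

    def document_prediction(sentence_scores, threshold=0.5, top_k=3):
        """Aggregate per-sentence event scores into a document label (max pooling)
        and return the indices of the highest-scoring candidate key sentences."""
        scores = np.asarray(sentence_scores)
        doc_score = scores.max()
        label = int(doc_score >= threshold)
        key_sentences = np.argsort(scores)[::-1][:top_k]
        return label, doc_score, key_sentences

    sentence_scores = [0.05, 0.10, 0.92, 0.40, 0.75]   # hypothetical scores
    print(document_prediction(sentence_scores))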
