41

Novel Image Representations and Learning Tasks

January 2017 (has links)
abstract: Computer Vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation some novel approaches to representation learning and task learning are studied. Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning approach is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem. In the case of representation learning, the generality of neural features is investigated first. This investigation leads to some critical understanding of, and results on, feature generality across datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
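The abstract does not spell out how the k-NN rule is lifted from individual instances to bags; a minimal sketch of one plausible reading, using the minimal Hausdorff distance between bags (as in Citation-kNN-style methods) and a majority vote over the k nearest training bags, is given below. The bag distance, function names, and the assumption of integer bag labels are illustrative and not the dissertation's exact method.

```python
import numpy as np

def min_hausdorff(bag_a, bag_b):
    # Minimal Hausdorff distance between two bags (arrays of shape
    # [n_instances, n_features]): the smallest pairwise instance distance.
    diffs = bag_a[:, None, :] - bag_b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min()

def knn_mil_predict(query_bag, train_bags, train_labels, k=3):
    # Classify a bag by majority vote over its k nearest training bags,
    # assuming non-negative integer labels (e.g. 0 = healthy, 1 = pathology).
    dists = np.array([min_hausdorff(query_bag, b) for b in train_bags])
    nearest = np.argsort(dists)[:k]
    votes = np.asarray(train_labels)[nearest]
    return np.bincount(votes).argmax()
```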
42

Materialidades na dramaturgia contemporânea: o Prêmio Shell em São Paulo (2005 - 2015) / Materiality in contemporary dramaturgy: the Shell Award in São Paulo (2005 - 2015)

Gomes, Marcos Nogueira [UNESP] 19 June 2017 (has links)
The purpose of this dissertation is to analyze texts by contemporary playwrights from the city of São Paulo between 2005 and 2015, considering the relationship between text and scene within these dramaturgies and identifying and reflecting on the presence of elements incorporated from a scenic grammar that evoke materialities in their textual structures. The analysis starts from the debate about the change in the status of the text throughout the twentieth century, considering the double movement of emancipation of text and scene from the tutelage of the dramatic genre, which opens up new possibilities of writing and staging that are more permeable to each other. Based on the theoretical references compared in the first chapter, an analytical grid is created for the examination of the selected dramaturgies, with the aim of positioning them, in the third chapter, between two poles, mythos and opsis, and of subsequently considering recurring polarities among the dramaturgical devices found in the texts. The corpus of this dissertation, analyzed in the second chapter, is composed of texts by authors who won the Shell Theater Prize in São Paulo in the Best Author category over a period of eleven years, up to the beginning of this research.
This methodological cut is taken up again at the end of the third chapter with the aim of establishing connections between the process of legitimation of cultural goods by instances of consecration, following the conceptualization of the sociologist Pierre Bourdieu, and the dramaturgical structures analyzed, reflecting on the interplay between theory and practice in the contemporary period, as well as on the function of the author and the notion of authorship in the context of the new status of the theatrical text, and of the place of the playwright, in the process of creation.
43

An Instance based Approach to Find the Types of Correspondence between the Attributes of Heterogeneous Datasets

Riaz, Muhammad Atif, Munir, Sameer January 2012 (has links)
Context: Determining attribute correspondence is the most important, time-consuming and knowledge-intensive part of database integration. It is also used in other data manipulation applications such as data warehousing, data design, the semantic web and e-commerce. Objectives: The aim of this thesis is to investigate how to find the types of correspondence between the attributes of heterogeneous datasets when the schema design information of the datasets is unknown. Methods: A literature review was conducted to extract knowledge about the approaches used to find the correspondence between the attributes of heterogeneous datasets. The knowledge extracted from the literature review was used to develop an instance-based approach for finding the types of correspondence between the attributes of heterogeneous datasets when schema design information is unknown. To validate the proposed approach, an experiment was conducted in a real environment using data provided by the telecom industry (Ericsson, Karlskrona). Evaluation of the results was carried out using the well-known measures from the information retrieval field: precision, recall and F-measure. Results: To find the types of correspondence between the attributes of heterogeneous datasets, good results depend on the ability of the algorithm to avoid unmatched pairs of rows during the Row Similarity Phase. An evaluation of the proposed approach was performed via experiments, yielding an F-measure of 96.7% (average of three experiments). Conclusions: The analysis showed that the proposed approach is feasible to use and provides users a means to find the corresponding attributes and the types of correspondence between corresponding attributes, based on information extracted from similar pairs of rows of the heterogeneous datasets, where their similarity is based on common primary key values.
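As a rough illustration of the instance-based idea (leaving out the thesis's Row Similarity Phase, which first aligns rows via common primary key values), a minimal sketch could compare the value sets of every attribute pair and keep the pairs whose overlap exceeds a threshold; the Jaccard measure and the threshold below are illustrative assumptions, not the thesis's exact algorithm.

```python
def value_overlap(values_a, values_b):
    # Jaccard overlap between the value sets of two attributes.
    a, b = set(values_a), set(values_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def find_correspondences(rows_a, rows_b, threshold=0.6):
    # rows_a / rows_b are lists of dicts, one dict per row. For every
    # attribute pair, compute an instance-based similarity and keep the
    # pairs above the threshold as candidate correspondences.
    matches = []
    for attr_a in rows_a[0].keys():
        for attr_b in rows_b[0].keys():
            sim = value_overlap([r[attr_a] for r in rows_a],
                                [r[attr_b] for r in rows_b])
            if sim >= threshold:
                matches.append((attr_a, attr_b, sim))
    return sorted(matches, key=lambda t: -t[2])
```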
44

Contextual Recurrent Level Set Networks and Recurrent Residual Networks for Semantic Labeling

Le, Ngan Thi Hoang 01 May 2018 (has links)
Semantic labeling is becoming more and more popular among researchers in computer vision and machine learning. Many applications, such as autonomous driving, tracking, indoor navigation, augmented reality systems, semantic searching, and medical imaging, are on the rise, requiring more accurate and efficient segmentation mechanisms. In recent years, deep learning approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have emerged as the dominant paradigm for solving many problems in computer vision and machine learning. The main focus of this thesis is to investigate robust approaches that can tackle challenging semantic labeling tasks, including semantic instance segmentation and scene understanding. In the first approach, we convert the classic variational Level Set method into a learnable deep framework by proposing a novel definition of contour evolution named Recurrent Level Set (RLS). The proposed RLS employs Gated Recurrent Units to solve the energy minimization of a variational Level Set functional. The curve deformation process in RLS is formulated as a hidden-state evolution procedure and is updated by minimizing an energy functional composed of fitting forces and contour length. We show that by sharing the convolutional features in a fully end-to-end trainable framework, RLS can be extended to Contextual Recurrent Level Set (CRLS) Networks to address the problem of semantic segmentation in the wild. The experimental results show that our proposed RLS improves both computational time and segmentation accuracy compared to classic variational Level Set-based methods, whereas the fully end-to-end system CRLS achieves competitive performance compared to state-of-the-art semantic segmentation approaches on the PASCAL VOC 2012 and MS COCO 2014 databases. The second proposed approach, Contextual Recurrent Residual Networks (CRRN), inherits the merits of sequence learning and residual learning in order to simultaneously model long-range contextual information and learn a powerful visual representation within a single deep network. Our proposed CRRN deep network consists of three parts corresponding to sequential input data, sequential output data and hidden state, as in a recurrent network. Each unit in the hidden state is designed as a combination of two components: a context-based component via sequence learning and a visual-based component via residual learning. That means each hidden unit in our proposed CRRN simultaneously (1) learns long-range contextual dependencies via the context-based component, where the relationship between the current unit and the previous units is modeled as sequential information under an undirected cyclic graph (UCG), and (2) provides a powerful encoded visual representation via the residual component, which contains blocks of convolution and/or batch normalization layers equipped with an identity skip connection. Furthermore, unlike previous scene labeling approaches [1, 2, 3], our method not only exploits long-range context and visual representation but is also formed as a fully end-to-end trainable system that effectively leads to the optimal model. In contrast to other existing deep learning networks which are based on pretrained models, our fully end-to-end CRRN is completely trained from scratch. The experiments are conducted on four challenging scene labeling datasets, i.e. SiftFlow, CamVid, Stanford Background, and SUN, and compared against various state-of-the-art scene labeling methods.
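As a rough sketch of what such a hidden unit could look like (not the authors' exact CRRN design; the convolutional GRU-style gating, layer choices, and channel sizes below are assumptions made for illustration), the combination of a context-based component and a residual visual-based component might be written as:

```python
import torch
import torch.nn as nn

class ContextResidualUnit(nn.Module):
    # Simplified sketch: a context-based component (convolutional GRU-style
    # update over the previous hidden state) followed by a visual-based
    # residual component (conv + batch norm with an identity skip connection).
    def __init__(self, channels):
        super().__init__()
        self.update_gate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.candidate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x, h_prev):
        # Context-based update of the hidden state (GRU-like gating).
        gates_in = torch.cat([x, h_prev], dim=1)
        z = torch.sigmoid(self.update_gate(gates_in))
        h_tilde = torch.tanh(self.candidate(gates_in))
        h_context = (1 - z) * h_prev + z * h_tilde
        # Visual-based residual refinement with identity skip connection.
        return torch.relu(h_context + self.residual(h_context))
```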
45

Právní a ekonomické aspekty institutu věřitele poslední instance / Legal and economic aspects of the institute of the lender of last resort

Vágnerová, Tereza January 2021 (has links)
Legal and economic aspects of the institute of the lender of last resort. Abstract: The master's thesis deals with the legal and economic aspects of the institute of the lender of last resort, a key institute that helps ensure the stability of financial markets. The thesis is divided into four parts. The first part focuses on the economic aspects of this institute; in particular, it deals with the general definition of the term lender of last resort and its historical genesis, as well as the principles and forms of assistance provided by the lender of last resort. The key insight of this part is the Thornton-Bagehot doctrine of the lender of last resort. The second part of the thesis deals with the legal regulation of the lender of last resort in the Czech Republic. The third part focuses on relevant European legislation and provides a summary of the relevant case law of the Court of Justice of the European Union, namely the Pringle, Gauweiler and Weiss cases. The fourth and final part of the thesis outlines the legal framework of this institute in the USA. The second through fourth parts of the thesis also analyze the behavior of three central banks (the CNB, the ECB and the Fed) in fulfilling the role of lender of last resort during the global financial crisis of 2008, the European debt crisis and the current coronavirus crisis,...
46

Stanovení výše pojistného plnění za škodu na rodinném domě v Lužci nad Vltavou způsobenou povodní / Determination of the Amount of Insurance Settlements for the Damage Caused by Flood on a Detached House in Lužec nad Vltavou

Kadlec, Miroslav January 2012 (has links)
The purpose of this thesis is to identify issues in the insurance coverage of a family house which was flooded. The main focus of my work is to value a property by the cost method, as well as documenting damages and determining the appropriate cost of the required repairs or reconstruction in accordance with the insurance contract. Furthermore, I will determine a new insurance value of the property. In the theoretical section, fundamental concepts are outlined, in accordance with applicable laws and regulations.
47

Matched instances of Quantum Sat (QSat): Product state solutions of restrictions

Goerdt, Andreas 18 January 2019 (has links)
Matched instances of the quantum satisfiability problem have an interesting property: they always have a product state solution. However, it is not clear how to find such a solution efficiently. Recently some progress on this question has been made by considering restricted instances of this problem. In this note we consider a different restriction of the problem which turns out to be solvable by techniques of linear algebra.
48

Instance-Based Matching of Large Life Science Ontologies

Kirsten, Toralf, Thor, Andreas, Rahm, Erhard 06 February 2019 (has links)
Ontologies are heavily used in the life sciences, so there is increasing value in matching different ontologies in order to determine related conceptual categories. We propose a simple yet powerful methodology for instance-based ontology matching which utilizes the associations between molecular-biological objects and ontologies. The approach can build on many existing ontology associations for instance objects such as sequences and proteins, and thus makes heavy use of available domain knowledge. Furthermore, the approach is flexible and extensible, since each instance source with associations to the ontologies of interest can contribute to the ontology mapping. We study several approaches to determine the instance-based similarity of ontology categories. We perform an extensive experimental evaluation using protein associations for different species to match subontologies of the Gene Ontology and OMIM. We also provide a comparison with metadata-based ontology matching.
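The paper studies several instance-based similarity measures; as one minimal sketch (the Dice coefficient, the threshold, and the input format are illustrative assumptions), category-to-category similarity can be computed from the overlap of the instance objects, such as protein accessions, associated with each category:

```python
def dice_similarity(instances_a, instances_b):
    # Dice coefficient between the instance sets associated with two
    # ontology categories; Jaccard or cosine would work similarly.
    a, b = set(instances_a), set(instances_b)
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

def match_ontologies(assoc_a, assoc_b, threshold=0.5):
    # assoc_a / assoc_b map category ids to the instance objects
    # (e.g. protein accessions) associated with them. Returns candidate
    # correspondences whose instance-based similarity exceeds the threshold.
    mapping = []
    for cat_a, inst_a in assoc_a.items():
        for cat_b, inst_b in assoc_b.items():
            sim = dice_similarity(inst_a, inst_b)
            if sim >= threshold:
                mapping.append((cat_a, cat_b, sim))
    return sorted(mapping, key=lambda t: -t[2])
```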
49

Improving Automatic Image Annotation Using Metadata

Wahlquist, Gustav January 2021 (has links)
Detecting and outlining products in images is beneficial for many use cases in e-commerce, such as automatically identifying and locating products within images and proposing matches for the detections. This study investigated how metadata associated with images of products could help boost the performance of an existing approach, with the ultimate goal of reducing the manual labour needed to annotate images. The thesis explored whether approximate pseudo masks could be generated for products in images by leveraging metadata as image-level labels and subsequently using the masks to train a Mask R-CNN. However, this approach did not yield satisfactory results. Further, the study found that by incorporating the metadata directly in the Mask R-CNN, an mAP performance increase of nearly 5% was achieved. Furthermore, utilising the available metadata to divide the training samples for a KNN model into subsets increased top-3 accuracy by up to 16%. By representing the data with embeddings created by a pre-trained CNN, the KNN model performed better, with both higher accuracy and more reasonable suggestions.
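A hedged sketch of the embedding-plus-KNN idea with metadata-based subsetting is shown below; the ResNet-50 backbone, the metadata field names, and k = 3 are assumptions for illustration, not the thesis's exact setup (which uses a Mask R-CNN pipeline and its own metadata schema).

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

# Pre-trained CNN used only as a feature extractor (classifier head removed).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(image_path):
    # 2048-dimensional embedding of a single image.
    with torch.no_grad():
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        return backbone(x).squeeze(0).numpy()

def knn_for_subset(samples, category, k=3):
    # Fit a KNN model only on training samples whose metadata matches the
    # query's category, mirroring the metadata-based subsetting idea.
    subset = [s for s in samples if s["category"] == category]
    X = np.stack([embed(s["image_path"]) for s in subset])
    y = [s["label"] for s in subset]
    return KNeighborsClassifier(n_neighbors=k).fit(X, y)
```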
50

Comparison of GCP and AWS using usability heuristics and cognitive walkthrough while creating and launching Virtual Machine instances in Virtual Private Cloud

Cherukuri, Prudhvi Nath Naidu, Ganja, Sree Kavya January 2021 (has links)
Cloud computing has become increasingly important over the years, as the need for computational resources, data storage, and networking capabilities in the field of information technology has increased. Several large corporations offer these services to small companies or to end users, such as GCP, AWS, Microsoft Azure, IBM Cloud, and many more. The main aim of this thesis is to compare the GCP and AWS consoles in terms of the user interface while performing tasks related to the compute engine. A cognitive walkthrough has been performed on tasks such as the creation of a VPC, the creation of VM instances, and launching them; from the results, both interfaces are compared using usability heuristics. Background: As the usage of cloud computing has increased over the years, the companies offering these services have grown accordingly. Although there are many cloud services available on the market, users tend to choose the services that are more flexible and efficient to use. Accordingly, our research compares the cloud services in terms of user interaction and user experience. Within the topic of user interaction and experience, evaluation techniques and principles such as cognitive walkthrough and usability heuristics are suitable for our research. Here the comparison is made between the GCP and AWS user interfaces while performing tasks related to the compute engine. Objectives: The main objectives of this thesis are to create a VPC and VM instances and to launch VM instances in two different cloud services, GCP and AWS, and to find out which of these two cloud services has the better user interface from the perspective of the user. Method: The process of finding the better user interface among the GCP and AWS cloud services is based on a cognitive walkthrough and a comparison against usability heuristics. The cognitive walkthrough is performed on the chosen tasks in both services, and the interfaces are then compared using usability heuristics to obtain the results of our research. Results: The results obtained from the cognitive walkthrough and the comparison with usability heuristics are shown in graphical formats such as bar graphs and pie charts, and the comparison results are shown in tabular form. The results cannot be universal, as they are observational results from the cognitive walkthrough and usability heuristic evaluation. Conclusion: After performing the above-mentioned methods, it was observed that the user interface of GCP is more flexible and efficient in terms of user interaction and experience. Although the user experience may vary based on users' level of experience with cloud services, in our research the novice and moderate users chose GCP as the better interactive system over AWS. Keywords: Cloud computing, VM instance, Cognitive walkthrough, Usability heuristics, User interface.
