  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
901

Získávání znalostí z databází pohybujících se objektů / Knowledge Discovery in Databases of Moving Objects

Chovanec, Vladimír January 2011 (has links)
The aim of this master's thesis is to become familiar with the problems of data mining and classification. The thesis also builds on the SUNAR application, which is extended in the practical part with SVM-based classification of persons passing between cameras. In the conclusion, we discuss ways to improve classification and person recognition in SUNAR.
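The inter-camera matching problem described above can be illustrated with a small sketch. This is not the thesis's SUNAR/SVM implementation: it substitutes nearest-match-by-cosine-similarity over invented appearance feature vectors, purely to show the shape of the task.

```python
# Hypothetical sketch: matching a person re-observed by camera B against
# identities seen by camera A, via cosine similarity of appearance
# features (e.g. color histograms). The SUNAR thesis uses an SVM for
# this step; this stand-in only illustrates the matching problem.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_identity(query, gallery):
    """Return the known identity whose appearance vector best matches query."""
    return max(gallery, key=lambda name: cosine(query, gallery[name]))

gallery = {                        # features learned from camera A (toy values)
    "person_1": [0.9, 0.1, 0.0],
    "person_2": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]         # the same person re-observed by camera B
print(match_identity(query, gallery))  # person_1
```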
902

A learning framework for zero-knowledge game playing agents

Duminy, Willem Harklaas 17 October 2007 (has links)
The subjects of perfect-information games, machine learning and computational intelligence combine in an experiment that investigates a method to build the skill of a game-playing agent from zero game knowledge. The skill of a playing agent is determined by two aspects: the first is the quantity and quality of the knowledge it uses, and the second is its search capacity. This thesis introduces a novel representation language that combines symbolic and numeric elements to capture game knowledge. As far as search is concerned, an extension to an existing knowledge-based search method is developed. Empirical tests show an improvement over alpha-beta, especially under learning conditions where the knowledge may be weak. Current machine learning techniques as applied to game agents are reviewed, and from these techniques a learning framework is established. The data-mining algorithm ID3 and the computational intelligence technique Particle Swarm Optimisation (PSO) form the key learning components of this framework. The classification trees produced by ID3 are subjected to new post-pruning processes defined specifically for the representation language mentioned above. Different combinations of these pruning processes are tested and a dominant combination is chosen for use in the learning framework. As an extension to PSO, tournaments are introduced as a relative fitness function. A variety of alternative tournament methods are described and experiments are conducted to evaluate them. The final design decisions are incorporated into the learning framework configuration, and learning experiments are conducted on Checkers and some of its variations. These experiments show that learning has occurred, but also highlight the need for further development and experimentation. Some ideas in this regard conclude the thesis. / Dissertation (MSc)--University of Pretoria, 2007. / Computer Science / MSc / Unrestricted
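A deliberately stripped-down sketch of the tournament-as-relative-fitness idea: a global-best-only PSO variant over a single agent parameter, where head-to-head "games" are decided by closeness to a hidden optimum. The game, the coefficients and the swarm settings are all invented for illustration; the thesis's framework is far richer.

```python
# Minimal, hypothetical illustration: PSO where fitness is not an absolute
# score but the number of round-robin wins against the rest of the swarm.
import random

random.seed(0)
OPTIMUM = 0.7  # hidden "perfect" parameter, unknown to the learner

def plays_better(a, b):
    """A stand-in game: the agent whose parameter is nearer OPTIMUM wins."""
    return abs(a - OPTIMUM) < abs(b - OPTIMUM)

def tournament_score(x, swarm):
    """Relative fitness: round-robin wins against the rest of the swarm."""
    return sum(plays_better(x, other) for other in swarm)

swarm = [random.uniform(0.0, 1.0) for _ in range(10)]
velocity = [0.0] * len(swarm)

for _ in range(40):
    champion = max(swarm, key=lambda x: tournament_score(x, swarm))
    for i, x in enumerate(swarm):
        r1 = random.random()
        # global-best-only update: inertia plus a pull toward the champion
        velocity[i] = 0.5 * velocity[i] + r1 * (champion - x)
        swarm[i] = x + velocity[i]

champion = max(swarm, key=lambda x: tournament_score(x, swarm))
print(round(champion, 2))
```

The point of the tournament is that no absolute evaluation function is needed: fitness is defined entirely by play against peers, which is what makes the approach viable for zero-knowledge agents.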
903

Die Entdeckung des radioaktiven Bleis durch Hofmann und Strauss im Jahre 1900

Niese, Siegfried 10 January 2017 (has links)
In 1900, Karl Andreas Hofmann and Eduard Strauss of the Chemical State Laboratory in Munich discovered radioactive lead in uranium-bearing minerals; it was identified by Ernest Rutherford as a decay product of radium and later as the lead isotope 210Pb. In 1898 Marie and Pierre Curie, and in 1899 Julius Elster and Friedrich Geitel, had found no radioactivity in lead separated from pitchblende, because they measured the radiation by air ionization, which is very sensitive to α-rays but not to the β-rays emitted by 210Pb and its daughter nuclide 210Bi, which Hofmann and Strauss detected with photographic plates.
904

APPLICATION OF FINANCIAL MARKET MODELS IN THE HOTEL INDUSTRY

Haejin Kim (9597320) 16 December 2020 (has links)
<p>In this dissertation, I investigated price dynamics in the hotel room-night market and attempted to explain pricing decisions from a market perspective. Since the dynamics of the hotel room-night market parallel those of financial markets, financial market models allowed for the examination of various aspects of hotel room pricing decisions.</p><p>In the first study, advance-purchase discounts were estimated through application of an option pricing model considering property-specific attributes. Non-refundable advance-purchase discounts are a commonly used rate fence. One challenge to their implementation, however, is deciding upon the precise magnitude of the discount. Quan’s (2002) study on the price of room reservations is a good starting point, but it is a conceptual model that assumes away other property-specific factors. This study thus tested the idea that advance-purchase discounts are affected by various components, including the value of the right to cancel a reservation (i.e., the cancelation option value) and room- and property-specific factors in the hotel room-night market (e.g., uncertainty, reviews, and seasonality). The analysis supported this hypothesis and additionally revealed that advance-purchase discounts are smaller for rooms with high review ratings in a high-demand period. Interestingly, the divergence between advance-purchase discounts and the cancelation option value widened in high-demand periods, which implies a tendency by hotels to adjust their room rates rather than the amount of the discount for customers who book their stay well in advance. Theoretically, this study contributes to the finance literature by extending the application of the option pricing model to real options on non-financial assets, and to the hospitality literature by demonstrating the effects of property-specific attributes on advance-purchase discount magnitude. The results also have implications for the hospitality industry, providing an analytical framework by which hoteliers can estimate property-specific advance-purchase discounts.</p><p>The second study concentrated on the effect of rate parity agreements on the hotel room-night market’s efficiency at reflecting product characteristics in room rates. It examined the impact of rate parity agreements between hotels and online travel agencies (OTAs) by comparing hotel rates between Europe and the United States. The study found that room rates were less sensitive to property quality attributes under rate parity clauses: the reflection of property quality in room rates was less efficient when hotels had rate parity agreements with OTAs. Furthermore, the results supported the claim that rate parity exacerbates price increases in periods of high demand, which indicates possible collusion between suppliers (hotels) and distributors (OTAs). The findings have theoretical implications, testing the market efficiency of the hotel room-night market and confirming the impact at the property level. The study also offers pricing decision-makers a perspective on how rate parity agreements influence their pricing decisions. Lastly, the findings provide support for recent policies in Europe that restrict rate parity agreements between hotels and OTAs.</p><p>The third study empirically examined hoteliers’ response to demand by observing the price movement of two rates with different cancelation policies: free-cancelation rates and non-refundable rates. By modifying Hasbrouck’s (1995) information share approach, this study examined the non-refundable rates’ contribution to the price discovery process. The quality of the accommodation as perceived by customers, one of the primary determinants of the price discovery process, was included in the analysis. The results suggested that non-refundable rates contributed more to the information variance than free-cancelation rates did. The findings also suggested that consumers’ perceived quality and volatility influence non-refundable rates’ contribution to the price discovery process. The results have practical implications for market participants, as they help to build an understanding of aggregated demand and its impact on pricing. Non-refundable rates are generally regarded as just one of many kinds of discounted rates, but the results of this study suggest that hoteliers should carefully consider the role that non-refundable rates play in their pricing strategy.<br></p>
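Quan's (2002) observation that a refundable reservation embeds a cancelation option can be made concrete with a toy calculation: if the spot rate at the stay date falls below the booked rate K, the guest cancels and rebooks, so the fair advance-purchase discount approximates the value of the option given up. The lognormal rate model and all numbers below are illustrative assumptions, not the dissertation's calibrated model.

```python
# Toy Monte Carlo sketch of the cancelation-option intuition. The payoff
# max(K - S, 0) is the saving from canceling and rebooking when the
# stay-date rate S drops below the booked refundable rate K.
import math
import random

random.seed(1)

def cancelation_option_value(K, spot0, sigma, n=50_000):
    """Monte Carlo estimate of E[max(K - S, 0)] for a lognormal stay-date
    rate S with mean spot0 (drift-corrected lognormal draws)."""
    total = 0.0
    for _ in range(n):
        s = spot0 * math.exp(sigma * random.gauss(0.0, 1.0) - 0.5 * sigma ** 2)
        total += max(K - s, 0.0)
    return total / n

K = 120.0                                   # refundable rate booked in advance
opt = cancelation_option_value(K, spot0=120.0, sigma=0.25)
print(f"cancelation option ~ {opt:.1f}, fair non-refundable rate ~ {K - opt:.1f}")
```

With these invented parameters the option is worth roughly a tenth of the room rate, which is the order of magnitude of typical non-refundable discounts; property-specific factors would shift this value, which is the first study's point.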
905

FCART: A New FCA-based System for Data Analysis and Knowledge Discovery

Neznanov, Alexey A., Ilvovsky, Dmitry A., Kuznetsov, Sergei O. 28 May 2013 (has links)
We introduce a new software system called Formal Concept Analysis Research Toolbox (FCART). Our goal is to create a universal integrated environment for knowledge and data engineers. FCART is constructed upon an iterative data analysis methodology and provides a built-in set of research tools based on Formal Concept Analysis techniques for working with object-attribute data representations. The provided toolset allows for the fast integration of extensions on several levels: from internal scripts to plugins. FCART was successfully applied in several data mining and knowledge discovery tasks. Examples of applying the system in medicine and criminal investigations are considered.
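The core FCA operation that FCART builds on, deriving the formal concepts (extent, intent pairs) of an object-attribute context, can be shown in a few lines. The context below is invented and the enumeration is brute force; FCART itself uses far more scalable algorithms.

```python
# Minimal Formal Concept Analysis sketch: brute-force enumeration of all
# formal concepts of a tiny object-attribute context.
from itertools import combinations

context = {                        # object -> attributes it has (toy data)
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = {"flies", "swims", "hunts"}

def extent(attrs):                 # objects possessing every attribute in attrs
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):                  # attributes shared by every object in objs
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# A formal concept is a pair (extent, intent) closed under both derivations;
# (B', B'') is a concept for every attribute set B, so enumerating B suffices.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), "<->", sorted(i))
```

The eight concepts printed form the concept lattice of this context, the structure on which FCA-based analysis operates.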
906

Literature Study and Assessment of Trajectory Data Mining Tools / Litteraturstudie och utvärdering av verktyg för datautvinning från rörelsebanedata

Kihlström, Petter January 2015 (has links)
With the development of technologies such as Global Navigation Satellite Systems (GNSS), mobile computing, and Information and Communication Technology (ICT), the procedure of sampling positional data has lately been significantly simplified. This enables the aggregation of large amounts of moving-object data (i.e. trajectories) containing potential information about the moving objects. Within Knowledge Discovery in Databases (KDD), automated processes for realizing this information, called trajectory data mining, have been implemented. The objectives of this study are to examine 1) how trajectory data mining tasks are defined at an abstract level, 2) what types of information can be extracted from trajectory data, 3) what solutions trajectory data mining tools implement for different tasks, 4) how tools use visualization, and 5) what the limiting aspects of input data are and how those limitations are treated. The topic of trajectory data mining is examined in a literature review, in which a large number of academic papers found through web searches were screened for information relevant to the above objectives. The literature review found that there are several challenges along the path from raw data to useful knowledge about moving objects. For example, the discrete modelling of movements as polylines is associated with an inherent uncertainty, since the location between two sampled positions is unknown. To reduce this uncertainty and prepare raw data for mining, the data often need to be processed in some way. The nature of this pre-processing depends on the sampling rate and accuracy of the raw input data, as well as on the requirements of the specific mining method. A further major challenge is to define relevant knowledge and effective methods for extracting it from the data. Furthermore, conveying mining results to users is an important function. Presenting results in an informative way, both at the level of individual trajectories and of sets of trajectories, is a vital but far from trivial task, for which visualization is an effective approach. Abstractly defined instructions for data mining are formally denoted as tasks. There are four main categories of mining tasks: 1) managing uncertainty, 2) extrapolation, 3) anomaly detection, and 4) pattern detection. The recitation of tasks within this study provides a basis for an assessment of tools used for the execution of these tasks. The dimensions of comparison were selected with the intention of covering the essential parts of the knowledge discovery process, and the measures used to appraise them were chosen so that the results correctly reflect the 1) sophistication, 2) user-friendliness, and 3) flexibility of the tools. The focus of this thesis is freely available tools, for which the range proved to be very small and fragmented. The tools found and reported on are: MoveMine 2.0, MinUS, GeT_Move, and M-Atlas. The tools are reviewed entirely on the basis of their documentation. The performance of the tools proved to vary along all dimensional measures except visualization and a graphical user interface, which all tools provide. Overall the systems perform well with regard to user-friendliness, somewhat well with regard to sophistication, and poorly with regard to flexibility. However, since the range of tasks that the tools intend to solve varies, it may not be appropriate to compare the tools in terms of better or worse. This thesis further provides some theoretical insights for users regarding the requirements on their knowledge, both concerning the technical aspects of the tools and the nature of the moving objects.
Furthermore, the future of trajectory data mining is discussed in terms of constraints on information extraction as well as requirements for the development of tools, where the need for a more robust open-source solution is emphasised. Finally, this thesis can altogether be regarded as providing guidance on which trajectory data mining tools to use depending on the application. Work to complement this thesis by comparing the actual performance of the tools in use is desirable.
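One of the pre-processing steps discussed above — reducing the uncertainty of raw, discretely sampled trajectories before mining — can be sketched as stay-point detection. The thresholds, the planar distance, and the toy trajectory are illustrative assumptions; real tools use geodesic distances and tuned parameters.

```python
# Hedged sketch of a common trajectory pre-processing step: detecting
# "stay points" (places where a moving object lingered) in raw GPS fixes.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def stay_points(traj, max_dist=0.1, min_pts=3):
    """traj: list of (x, y, t). A stay point is the centroid of >= min_pts
    consecutive fixes that all lie within max_dist of the first fix."""
    stays, i = [], 0
    while i < len(traj):
        j = i + 1
        while j < len(traj) and dist(traj[i][:2], traj[j][:2]) <= max_dist:
            j += 1
        if j - i >= min_pts:
            xs = [p[0] for p in traj[i:j]]
            ys = [p[1] for p in traj[i:j]]
            stays.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        i = j
    return stays

traj = [(0, 0, 0), (0.02, 0.01, 1), (0.03, 0.02, 2),   # lingering near origin
        (1, 1, 3), (2, 2, 4)]                          # then moving away
print([(round(x, 3), round(y, 3)) for x, y in stay_points(traj)])  # [(0.017, 0.01)]
```

Replacing runs of noisy fixes with a single representative point is one way mining methods cope with the sampling uncertainty that the review identifies as an inherent limitation of polyline trajectory models.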
907

New Computational Methods for Literature-Based Discovery

Ding, Juncheng 05 1900 (has links)
In this work, we leverage recent developments in computer science to address several of the challenges in current literature-based discovery (LBD) solutions. First, existing LBD solutions either cannot use semantics or are too computationally complex. To solve these problems, we propose a generative model, OverlapLDA, based on topic modeling, which has been shown to be both effective and efficient in extracting semantics from a corpus. We also introduce an inference method for OverlapLDA, and conduct extensive experiments showing its effectiveness and efficiency in LBD. Second, we extend LBD to a more complex and realistic setting, in which more than one concept may connect the input concepts and the connectivity pattern between concepts can be more complex than a chain. Current LBD solutions can hardly complete the LBD task in this new setting. We represent hypotheses as concept sets and propose LBDSetNet, based on graph neural networks, to solve this problem. We also introduce different training schemes based on self-supervised learning to train LBDSetNet without relying on comprehensively labeled hypotheses, which are extremely costly to obtain. Our comprehensive experiments show that LBDSetNet outperforms strong baselines on simple hypotheses and addresses complex hypotheses.
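For readers unfamiliar with LBD, the chain-shaped hypotheses that this work generalizes go back to Swanson's "ABC" co-occurrence pattern, sketched below on a tiny invented corpus modeled on his classic fish-oil/Raynaud example. The OverlapLDA and LBDSetNet models described above replace this simple heuristic with learned semantics.

```python
# The classic "ABC" literature-based discovery heuristic: propose
# intermediate concepts B that co-occur with A in one literature and with
# C in another, when A and C never appear together, so A-B-C is a
# candidate hidden link. The mini-corpus is invented for illustration.
papers = [
    {"fish oil", "blood viscosity"},
    {"fish oil", "platelet aggregation"},
    {"blood viscosity", "raynaud syndrome"},
    {"platelet aggregation", "raynaud syndrome"},
    {"fish oil", "taste"},
]

def abc_candidates(a, c, corpus):
    """Return B-terms linking the literatures of a and c, if a and c are
    never directly connected in the same paper."""
    with_a = {t for p in corpus if a in p for t in p} - {a}
    with_c = {t for p in corpus if c in p for t in p} - {c}
    direct = any(a in p and c in p for p in corpus)
    return set() if direct else with_a & with_c

print(sorted(abc_candidates("fish oil", "raynaud syndrome", papers)))
# ['blood viscosity', 'platelet aggregation']
```

The thesis's second setting generalizes exactly this picture: the connecting structure may be a set of concepts with an arbitrary connectivity pattern rather than a single chain.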
908

Interpretable Fine-Grained Visual Categorization

Guo, Pei 16 June 2021 (has links)
Not all categories are created equal in object recognition. Fine-grained visual categorization (FGVC) is a branch of visual object recognition that aims to distinguish subordinate categories within a basic-level category. Examples include classifying an image of a bird into specific species like "Western Gull" or "California Gull". Such subordinate categories exhibit characteristics like small inter-category variation and large intra-class variation, making distinguishing them extremely difficult. To address such challenges, an algorithm should be able to focus on object parts and be invariant to object pose. Like many other computer vision tasks, FGVC has witnessed phenomenal advancement following the resurgence of deep neural networks. However, the proposed deep models are usually treated as black boxes. Network interpretation and understanding aims to unveil the features learned by neural networks and explain the reason behind network decisions. It is not only a necessary component for building trust between humans and algorithms, but also an essential step towards continuous improvement in this field. This dissertation is a collection of papers that contribute to FGVC and neural network interpretation and understanding. Our first contribution is an algorithm named Pose and Appearance Integration for Recognizing Subcategories (PAIRS) which performs pose estimation and generates a unified object representation as the concatenation of pose-aligned region features. As the second contribution, we propose the task of semantic network interpretation. For filter interpretation, we represent the concepts a filter detects using an attribute probability density function. We propose the task of semantic attribution using textual summarization that generates an explanatory sentence consisting of the most important visual attributes for decision-making, as found by a general Bayesian inference algorithm. 
Pooling has been a key component of convolutional neural networks and is of special interest in FGVC. Our third contribution is an empirical and experimental study towards a thorough yet intuitive understanding, and an extensive benchmark, of popular pooling approaches. Our fourth contribution is a novel LMPNet for weakly supervised keypoint discovery: a novel leaky max pooling layer is proposed that explicitly encourages sparse feature maps to be learned, and a learnable clustering layer groups the keypoint proposals into final keypoint predictions. 2020 marks the 10th year since the beginning of fine-grained visual categorization, so it is of great importance to summarize the representative works in this domain. Our last contribution is a comprehensive survey of FGVC containing nearly 200 relevant papers that cover 7 common themes.
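The pooling operations being benchmarked can be grounded with a bare-bones example: max and average pooling over non-overlapping 2x2 windows of a feature map. (The leaky max pooling proposed in the dissertation modifies gradient behaviour at training time, which a forward-pass sketch like this cannot show.)

```python
# Bare-bones pooling: reduce a feature map by applying an aggregation
# function over non-overlapping 2x2 windows (assumes even dimensions).
def pool2x2(fmap, op):
    h, w = len(fmap), len(fmap[0])
    return [[op([fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1]])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

avg = lambda xs: sum(xs) / len(xs)

fmap = [[1, 3, 0, 2],
        [4, 2, 1, 1],
        [0, 0, 5, 6],
        [0, 0, 7, 8]]
print(pool2x2(fmap, max))   # [[4, 2], [0, 8]]
print(pool2x2(fmap, avg))   # [[2.5, 1.0], [0.0, 6.5]]
```

Max pooling keeps only the strongest activation per window while average pooling blends all of them; the trade-off between the two is precisely what makes pooling choice consequential for fine-grained recognition.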
909

Secure Context-Aware Mobile SIP User Agent

Merha, Bemnet Tesfaye January 2009 (has links)
Context awareness is an important aspect of pervasive and ubiquitous computing. By utilizing contextual information gathered from the environment, applications can adapt to the user’s specific situation. In this thesis, user context is used to automatically discover multimedia devices and services that can be used by a mobile Session Initiation Protocol (SIP) user agent. The location of the user is captured using various sensing technologies to allow users of our SIP user agent to interact with network-attached projectors, speakers, and cameras in a home or office environment. In order to determine the location of the user, we have developed and evaluated a context aggregation framework that gathers and analyzes contextual information from various sources, such as passive infrared sensors, infrared beacons, and light intensity and temperature sensors. Once the location of the user is determined, the Service Location Protocol (SLP) is used to search for services. For this purpose, we have implemented a mobile SLP user agent and integrated it with an existing SIP user agent. The resulting mobile SIP user agent is able to dynamically utilize the multimedia devices around it without requiring the user to do any manual configuration. This thesis also addresses the challenge of building a trust relationship between the user agent and the multimedia services: we propose a mechanism which enables the user agent to authenticate service advertisements before starting to redirect media streams. The measurements we have performed indicate that the proposed context aggregation framework provides more accurate location determination when additional sensors are incorporated. Furthermore, the performance measurements indicate that the delay incurred by introducing context awareness to the SIP user agent is acceptable for a small deployment such as a home or office environment.
In order to realize large-scale deployments, future investigations are recommended to further improve the performance of the framework.
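The kind of sensor fusion the context aggregation framework performs can be caricatured as weighted voting: each sensor suggests a room, weighted by its reliability, and the highest-scoring room is taken as the user's location. The sensors, weights, and room names below are invented; the thesis's framework aggregates richer evidence than this.

```python
# Hypothetical sketch of location determination by weighted sensor voting.
def fuse(readings, weights):
    """readings: sensor -> room it suggests; weights: sensor -> reliability.
    Returns the room with the highest total weighted support."""
    scores = {}
    for sensor, room in readings.items():
        scores[room] = scores.get(room, 0.0) + weights.get(sensor, 0.0)
    return max(scores, key=scores.get)

weights  = {"ir_beacon": 0.6, "pir": 0.3, "temperature": 0.1}
readings = {"ir_beacon": "office_12", "pir": "corridor", "temperature": "office_12"}
print(fuse(readings, weights))  # office_12
```

Adding sensors adds votes, which mirrors the thesis's finding that location accuracy improves as additional sensors are incorporated.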
910

Vytěžování databáze Poradny pro poruchy metabolismu / Data mining of the database of Consulting centre for metabolism disorders

Senft, Martin January 2014 (has links)
This thesis applies the data mining method of decision rules to data from the Consulting Centre for Metabolism Disorders at University Hospital Pilsen. The tool used is the LISp-Miner system, developed at the University of Economics, Prague. The decision rules found are evaluated by a specialist. The main parts of this thesis are the following: an overview of the main data mining methods and of methods for evaluating results, a description of the application of the data mining method to the data, and a description and evaluation of the results.
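The decision-rule mining that LISp-Miner automates can be miniaturized as follows: enumerate single-attribute rules of the form "IF attribute = value THEN disorder" over toy patient records and keep those meeting support and confidence thresholds. The records, attributes, and thresholds are invented for illustration; LISp-Miner's procedures are far more general.

```python
# Hedged, minimal decision-rule mining over invented patient records.
records = [
    {"bmi": "high", "glucose": "high", "disorder": True},
    {"bmi": "high", "glucose": "high", "disorder": True},
    {"bmi": "high", "glucose": "low",  "disorder": False},
    {"bmi": "low",  "glucose": "low",  "disorder": False},
    {"bmi": "low",  "glucose": "high", "disorder": True},
]

def rules(data, target, min_support=2, min_conf=0.8):
    """Keep rules 'attr=value -> target' with enough support (covered
    positive cases) and confidence (positive fraction of covered cases)."""
    found = []
    attrs = {(a, r[a]) for r in data for a in r if a != target}
    for a, v in attrs:
        covered = [r for r in data if r[a] == v]
        hits = [r for r in covered if r[target]]
        if len(hits) >= min_support and len(hits) / len(covered) >= min_conf:
            found.append((a, v, len(hits) / len(covered)))
    return sorted(found)

print(rules(records, "disorder"))  # [('glucose', 'high', 1.0)]
```

Rules that survive the thresholds are exactly the candidates that would then be handed to a specialist for evaluation, as the thesis describes.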
