911

Kernel-Based Data Mining Approach with Variable Selection for Nonlinear High-Dimensional Data

Baek, Seung Hyun 01 May 2010 (has links)
In statistical data mining research, datasets often exhibit nonlinearity and high dimensionality, and analyzing such datasets in a comprehensive manner with traditional statistical methodologies has become difficult. Kernel-based data mining is one of the most effective statistical methodologies for investigating a variety of problems in areas including pattern recognition, machine learning, bioinformatics, chemometrics, and statistics. In particular, the analysis of high-dimensional data calls for statistically sophisticated procedures that emphasize the reliability of results and computational efficiency. In this dissertation, first, a novel wrapper method called SVM-ICOMP-RFE, which hybridizes a support vector machine (SVM) and recursive feature elimination (RFE) with an information-theoretic measure of complexity (ICOMP), is introduced and developed to classify high-dimensional data sets and to carry out subset selection of the variables in the original data space, finding the subset that best discriminates between groups; RFE ranks the variables according to the ICOMP criterion. Second, a dual-variables functional support vector machine approach is proposed that uses both the first and second derivatives of the degradation profiles. A modified floating search algorithm for repeated variable selection, with newly added degradation path points, is presented to find a few good variables while reducing the computation time for on-line implementation. Third, a two-stage scheme for the classification of near-infrared (NIR) spectral data is proposed: in the first stage, the proposed multi-scale vertical energy thresholding (MSVET) procedure reduces the dimension of the high-dimensional spectral data; in the second stage, a few important wavelet coefficients are selected using the proposed SVM gradient-RFE. Fourth, a novel methodology for discriminant analysis called PDCM, based on the human decision-making process, is proposed. It consists of three basic steps emulating the thinking process: perception, decision, and cognition. In these steps, two concepts, support vector machines for classification and information complexity, are integrated to evaluate learning models.
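The SVM-ICOMP-RFE procedure itself is not given in the abstract. As a rough, hedged sketch of the recursive-feature-elimination backbone it builds on, the following uses scikit-learn's standard weight-based SVM-RFE on synthetic data; the ICOMP ranking criterion, which is the dissertation's contribution, is not implemented here.

```python
# Sketch of SVM-based recursive feature elimination (RFE).
# Note: ranking uses scikit-learn's squared-weight criterion, not the
# ICOMP criterion developed in the dissertation; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# A linear-kernel SVM exposes coef_, which RFE uses to rank variables;
# the lowest-ranked variable is dropped and the SVM is refit, recursively.
svm = SVC(kernel="linear")
selector = RFE(estimator=svm, n_features_to_select=5, step=1).fit(X, y)

print("Selected variables:", [i for i, kept in enumerate(selector.support_) if kept])
```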
912

Mixture model cluster analysis under different covariance structures using information complexity

Erar, Bahar 01 August 2011 (has links)
In this thesis, a mixture-model cluster analysis technique under different covariance structures of the component densities is developed and presented. The technique captures the compactness, orientation, shape, and volume of the component clusters in one expert system, handles Gaussian high-dimensional heterogeneous data sets, and adds flexibility to currently practiced cluster analysis techniques. Two approaches to parameter estimation are considered and compared: one using the Expectation-Maximization (EM) algorithm and another following a Bayesian framework using the Gibbs sampler. We develop and score several forms of the ICOMP criterion of Bozdogan (1994, 2004) as our fitness function to choose the number of component clusters, to choose the correct component covariance matrix structure among nine candidate covariance structures, and to select the optimal parameters and the best-fitting mixture model. We demonstrate our approach on simulated datasets and on a real large data set, focusing on early detection of breast cancer, and show that our approach lowers the probability of classification error relative to existing methods.
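As a hedged illustration of this kind of model selection, the sketch below scans cluster counts and covariance structures with scikit-learn's GaussianMixture (EM estimation) and scores each candidate with BIC as a stand-in for ICOMP; scikit-learn offers four covariance structures rather than the nine considered in the thesis, and the data are synthetic.

```python
# Mixture-model clustering with joint selection of the number of clusters
# and the covariance structure. BIC is used here as a stand-in criterion;
# the thesis scores forms of ICOMP over nine candidate structures.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

best = None
for k in range(1, 6):
    for cov in ("full", "tied", "diag", "spherical"):
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             random_state=0).fit(X)
        score = gm.bic(X)  # lower is better
        if best is None or score < best[0]:
            best = (score, k, cov)

print("Best model: k=%d, covariance=%s, BIC=%.1f" % (best[1], best[2], best[0]))
```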
913

Beyond usability -- affect in web browsing

Deng, Liqiong 02 June 2009 (has links)
This research concentrates on the visual aesthetics of a website, investigating web users' affective/emotional reactions to different designs of web homepage aesthetics and the influence of those reactions on users' subsequent behaviors. Drawing on existing theories and empirical findings in environmental psychology, human-computer interaction, aesthetics, and the marketing research literature, a research model is developed to explore the relationships between the visual aesthetic qualities of a website homepage (webpage visual complexity and order), the emotional states induced in users, and users' approach behaviors toward the website. The model predicts that the visual aesthetics of a web homepage elicit specific emotional responses by provoking intrinsic feelings of pleasantness/unpleasantness and arousal, as well as motivational pleasantness/unpleasantness, in web users. These elicited emotional responses, which mediate the effect of homepage aesthetic features, in turn affect web users' subsequent behaviors toward the website, such as further approaching/exploring the website or avoiding it. A set of pilot studies and a main laboratory experiment were conducted to test the model and its associated hypotheses. Based on the results of the pilot studies, 12 versions of a gift website's homepage, varying across four levels of complexity and three levels of order, were selected as the stimulus materials for the main experiment. A total of 467 undergraduate students participated in the main study. During the main study, we instructed the participants to browse the homepage stimuli for either a goal-oriented web search activity or an excitement/enjoyment-seeking web browsing activity, and we measured how they felt about the homepage and their degree of approach/avoidance tendency toward the entire website. The results generally confirmed that a web user's initial emotional responses (i.e., pleasantness and arousal), evoked by the aesthetic qualities of the first website homepage he/she encounters, carry over to his/her subsequent approach behaviors toward the website.
914

Interactions in Decentralized Environments

Allen, Martin William 01 February 2009 (has links)
The decentralized partially observable Markov decision process (Dec-POMDP) is a powerful formal model for studying multiagent problems where cooperative, coordinated action is optimal, but each agent acts based on local data alone. Unfortunately, Dec-POMDPs are known to be fundamentally intractable: they are NEXP-complete in the worst case, and have been empirically observed to be beyond feasible optimal solution. To get around these obstacles, researchers have focused on special classes of the general Dec-POMDP problem, restricting the degree to which agent actions can interact with one another. In some cases, it has been proven that such structured forms of interaction can in fact reduce worst-case complexity. Where formal proofs have been lacking, empirical observations suggest that this may also be true for other cases, although less is known precisely. This thesis unifies a range of this existing work, extending the analysis to establish novel complexity results for some popular restricted-interaction models. We also establish new results concerning cases for which reduced complexity has been proven, showing correspondences between basic structural features and the potential for dimensionality reduction when employing mathematical programming techniques. As our new complexity results establish that worst-case intractability is more widespread than previously known, we look to new ways of analyzing the potential average-case difficulty of Dec-POMDP instances. As this would be extremely difficult using the tools of traditional complexity theory, we take a more empirical approach. In so doing, we identify new analytical measures that apply to all Dec-POMDPs, whatever their structure. These measures allow us to identify problems that are potentially easier to solve on average, and we validate this claim empirically. As we show, the performance of well-known optimal dynamic programming methods correlates with our new measure of difficulty. Finally, we explore the approximate case, showing that our measure works well as a predictor of difficulty there too, and provides a means of setting algorithm parameters to achieve far more efficient performance.
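For reference, a Dec-POMDP is standardly specified as a tuple of agents, global states, per-agent action and observation sets, and joint transition, observation, and reward functions. The sketch below writes that tuple out as a plain data structure; the field names are illustrative and not taken from the thesis.

```python
# Minimal data-structure sketch of the Dec-POMDP tuple
# <I, S, {A_i}, P, R, {Omega_i}, O>; names are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

JointAction = Tuple[str, ...]       # one local action per agent, in agent order
JointObservation = Tuple[str, ...]  # one local observation per agent

@dataclass
class DecPOMDP:
    agents: List[str]                    # I: agent identifiers
    states: List[str]                    # S: global states
    actions: Dict[str, List[str]]        # A_i: local actions per agent
    observations: Dict[str, List[str]]   # Omega_i: local observations per agent
    transition: Callable[[str, JointAction], Dict[str, float]]            # P(s' | s, a)
    observe: Callable[[str, JointAction], Dict[JointObservation, float]]  # O(o | s', a)
    reward: Callable[[str, JointAction], float]                           # R(s, a), shared by the team
```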
915

Modellierung von Aktienkursen im Lichte der Komplexitätsforschung [Modelling stock prices in the light of complexity research]

Kauper, Benjamin, Kunze, Karl-Kuno January 2011 (has links)
This paper offers empirical evidence on the power of Sornette et al.'s (2001) model of bubbles and crashes for the German stock market between 1960 and 2009. We identify relevant time periods and describe them with the function given by Sornette et al.'s model. Our results show some evidence for predicting crashes through the log-periodic structures hidden in stock price trajectories. For the DAX, most of the relevant parameters determining the shape of the log-periodic structures lie within the interval expected from the research of Sornette et al. Furthermore, the paper implicitly shows that the timing of former crashes can be predicted with the presented formula. We conclude that the concept of financial time series conceived as purely random objects should be generalised so as to admit complexity.
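The function referred to is the log-periodic power law (LPPL) of Sornette et al. A hedged sketch of one common parameterisation follows; the exact form and parameter ranges used in the paper are not reproduced here, and the parameter values below are illustrative.

```python
# One common form of the log-periodic power-law (LPPL) function used to
# describe pre-crash price trajectories; parameter values are illustrative.
import numpy as np

def lppl(t, t_c, A, B, C, m, omega, phi):
    """log p(t) = A + B*(t_c - t)**m * (1 + C*cos(omega*ln(t_c - t) + phi))"""
    dt = t_c - t  # time remaining until the critical (crash) time t_c
    return A + B * dt**m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

# Evaluate a bubble-like log-price trajectory over 1000 trading days,
# with the critical time t_c set 50 days past the end of the sample.
t = np.arange(0, 1000)
log_price = lppl(t, t_c=1050.0, A=7.0, B=-0.5, C=0.1, m=0.5, omega=8.0, phi=0.0)
```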
916

Resource-Predictable and Efficient Monitoring of Events

Mellin, Jonas January 2004 (has links)
We present a formally specified event specification language (Solicitor). Solicitor is suitable for real-time systems, since it results in resource-predictable and efficient event monitors. In event monitoring, event expressions defined in an event specification language control the monitoring by matching incoming streams of event occurrences against the event expressions. When an event expression has a complete set of matching event occurrences, the event type that this expression defines has occurred. Each event expression is specified by combining contributing event types with event operators such as sequence, conjunction, and disjunction; contributing event types may be primitive, representing happenings of interest in a system, or composite, specified by event expressions. The formal specification of Solicitor is based on a formal schema that separates two important aspects of an event expression: event operators and event contexts. The event operators aspect addresses the relative constraints between contributing event occurrences, whereas the event contexts aspect addresses the selection of event occurrences from an event stream with respect to event occurrences that are used or invalidated during event monitoring. The formal schema also contains an abstract model of event monitoring. Given this formal specification, we present realization issues, a time complexity study, and a proof of the limited resource requirements of event monitoring. We propose an architecture for resource-predictable and efficient event monitoring. In particular, this architecture meets the requirements of real-time systems by defining how event monitoring and tasks are associated. A declarative way of specifying this association is proposed within our architecture. Moreover, an efficient memory management scheme for event composition is presented. This scheme meets the requirements of event monitoring in distributed systems. The architecture has been validated by implementing an executable component prototype that is part of the DeeDS prototype. The results of the time complexity study are validated by experiments. Our experiments corroborate the theory in terms of complexity classes of event composition in different event contexts. However, the experimental platform is not representative of operational real-time systems and, thus, the constants derived from our experiments cannot be used for such systems.
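As a hedged, much-simplified illustration of the three event operators named above, the sketch below matches composite events over a stream of timestamped occurrences; Solicitor's event contexts, which govern which occurrences are consumed or invalidated, are deliberately omitted.

```python
# Toy composite-event matching with sequence, conjunction and disjunction
# operators, in the spirit of (but not taken from) Solicitor.
from typing import List, Tuple

Occurrence = Tuple[str, float]  # (primitive event type, timestamp)

def sequence(stream: List[Occurrence], a: str, b: str) -> bool:
    """A;B occurs if some occurrence of a is later followed by one of b."""
    times_a = [t for e, t in stream if e == a]
    if not times_a:
        return False
    first_a = min(times_a)
    return any(e == b and t > first_a for e, t in stream)

def conjunction(stream: List[Occurrence], a: str, b: str) -> bool:
    """A&B occurs if both a and b occur, in any order."""
    types = {e for e, _ in stream}
    return a in types and b in types

def disjunction(stream: List[Occurrence], a: str, b: str) -> bool:
    """A|B occurs if at least one of a or b occurs."""
    return any(e in (a, b) for e, _ in stream)

stream = [("alarm", 1.0), ("sensor", 2.5), ("alarm", 3.0)]
print(sequence(stream, "alarm", "sensor"))  # True: alarm@1.0 precedes sensor@2.5
```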
917

Exploring using complexity thinking to extend the modelling of student retention in higher education physics and engineering

Forsman, Jonas January 2011 (has links)
No description available.
918

How to cope with a turbulent environment / Att hantera en turbulent miljö

Hedlöf, Carina, Janson, Ulrika January 2000 (has links)
Background: Due to constant change and unpredictability in a turbulent environment, the traditional way of planning no longer seems to work. New approaches to the external and internal conditions therefore need to be developed in order to cope with the environmental turbulence. Purpose: The purpose of this thesis is to define a turbulent environment and to identify how an organisation can cope with such an environment. Procedure: We have developed a frame of reference consisting mainly of theories regarding turbulent environments and change. In addition, we have selected eight guiding factors, which we have used when studying, systemising, and comparing how the contemporary literature suggests that an organisation can cope with a turbulent environment. Results: We conclude that, given a definition of the environment as fast-changing and chaotic in nature, where changes are continuous, emergent, small, big, or somewhere in between, and where paradoxes play an important role, it is necessary to develop an organisational structure, leadership, human resources, and corporate culture in which the objective is always to create dynamics and to build in an acceptance of change.
919

Appropriate Modelling Complexity: An Application to Mass Balance Modelling of Lake Vänern, Sweden

Dahl, Magnus January 2004 (has links)
This work is about finding an appropriate modelling complexity for a mass-balance model for phosphorus in Lake Vänern, Sweden. A statistical analysis of 30 years of water quality data shows that the epilimnion and hypolimnion have different water quality and should be treated separately in a model. Further vertical division is not motivated. Horizontally, the lake should be divided into the two main basins, Värmlandssjön and Dalbosjön. Shallow near-shore areas, bays, and areas close to point sources have to be considered as specific sub-basins if they are to be modelled correctly. These results lead to the use of a model based on ordinary differential equations. The model applied is named LEEDS (Lake Eutrophication Effect Dose Sensitivity) and considers phosphorus and suspended particles. Several modifications were made for the application of the model to Lake Vänern. The two major ones are a revision of the equations governing the outflow of phosphorus and suspended particles through the outflow river, and the inclusion of chemical oxygen demand (COD) into the model, in order to model emissions from pulp and paper mills. The model has also been modified to handle several sub-basins. The LEEDS model has been compared to three other eutrophication models applied to Lake Vänern. Two were simple models developed as parts of catchment area models, and the third was a lake model with higher resolution than the LEEDS model. The models showed a good fit to calibration and validation data, and were compared in two nutrient emission scenarios and in a scenario with increased temperature, corresponding to the greenhouse effect.
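As a hedged sketch of what such an ODE-based, multi-basin mass balance looks like in code, the toy model below couples two basins with sedimentation, inter-basin exchange, and a single outflow; the rate constants, units, and equations are generic illustrations, not those of LEEDS.

```python
# Toy two-basin phosphorus mass-balance model (arbitrary units); the
# structure, not the coefficients, mirrors a lake model like LEEDS.
import numpy as np
from scipy.integrate import solve_ivp

def mass_balance(t, P, load, k_sed, k_mix, k_out):
    """P = [P1, P2]: phosphorus in the two main basins.
    load: external load into basin 1; k_sed: sedimentation rate;
    k_mix: inter-basin exchange rate; k_out: outflow rate from basin 2."""
    P1, P2 = P
    mix = k_mix * (P1 - P2)           # net transport, basin 1 -> basin 2
    dP1 = load - k_sed * P1 - mix
    dP2 = mix - k_sed * P2 - k_out * P2
    return [dP1, dP2]

sol = solve_ivp(mass_balance, (0.0, 3650.0), [600.0, 400.0],
                args=(5.0, 0.01, 0.002, 0.005),
                t_eval=np.linspace(0.0, 3650.0, 100))
print("Phosphorus at the end of the simulation:", sol.y[:, -1])
```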
920

Översättningar av konkurrens i ekonomiska laboratorier : Om ekonomiska teoriers förenkling, komplexitet och fördunkling i hälso- och sjukvården [Translations of competition in economic laboratories: On the simplification, complexity, and obfuscation of economic theories in health care]

Jensen, Tommy Nöhr January 2004 (has links)
FRAGMENTED SUMMARY (translated from Swedish): The actors in Stockholm County Council (Stockholms läns landsting) simulate market and competition, and are well aware that they are doing so (the Great Procurement, Den Stora Upphandlingen: an internal market within an administration-governed structure). That is not what this alternative story about competition reveals. The actors know exactly what they are trying to do when they attempt to organise health care in Stockholm County Council by letting themselves be influenced by other "worlds". A particularly relevant "other world" is the economist and economic theory. But in this seemingly deliberate process something happens: complexity arises, and many different worlds appear and circulate within Stockholm County Council. The translation process of competition does not stop there. The actors intensify their efforts to deal with what has become complicated. New proposals for how health care should be organised are constructed, each advancing the One Best Way to remedy the observed problems. The situation becomes obscured, and economic theory has by now been translated so many times, by so many different actors, that countless copies of it circulate. It is this process of simplification, complexity, and obfuscation that is the essence of my re-presentation and of my attempt to understand competition processes. It is an essence that I capture and illustrate through how actors in actor-networks translate (associate, enrol, and label) the neoclassical market. But other things are also at stake. Actors translate both people and things, while things in turn influence human conceptions and actions. The capacity of things to construct people is limited, however, since things act on the basis of given codes, a given Order (which can, of course, break down and take unforeseen paths, as computer programs do), but which are themselves filled with human aims and intentions. Certain "sociotechnical" things are designed to centre "worlds", for example Stockholm County Council expressed in figures in an annual report. Nevertheless, it is only people who, in the processes of simplification, complexity, and obfuscation, can step forward and construct and centre heterogeneously material worlds; yet by centring masses of heterogeneous materials (people as well as things), things can reveal epistemological possibilities for human action. In other words, people can travel far out into the world with the help of things. One empirical observation is that the further the actors in Stockholm County Council health care travel in these centred, heterogeneously material worlds, the more short-sighted they become, while the potential to cause serious side effects grows ever larger. Another empirical observation of the study is that side effects are handled in the same way in neoclassical economic theory as in health care practice: internalised and well-delimited economic transactions are assumed to be the norm, and side effects the exceptions, in the market. In fact, it is precisely the other way around: side effects are the norm, and internalised and well-delimited economic transactions are the exceptions.
