  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1301

Gestion des données dans les réseaux sociaux / Data management in social networks

Maniu, Silviu 28 September 2012
We address in this thesis some of the issues raised by the emergence of social applications on the Web, focusing on two important directions: efficient social search in online applications and the inference of signed social links from interactions between users in collaborative Web applications. We start by considering social search in tagging (or bookmarking) applications. This problem requires a significant departure from existing, socially agnostic techniques. In a network-aware context, one can (and should) exploit the social links, which can indicate how users relate to the seeker and how much weight their tagging actions should have in the result build-up. We propose an algorithm that has the potential to scale to current applications, and validate it via extensive experiments. As social search applications can be thought of as part of a wider class of context-aware applications, we consider context-aware query optimization based on views, focusing on two important sub-problems. First, handling the possible differences in context between the various views and an input query leads to view results having uncertain scores, i.e., score ranges valid for the new context. As a consequence, current top-k algorithms are no longer directly applicable and need to be adapted to handle such uncertainty in object scores. Second, adapted view selection techniques are needed, which can leverage both the descriptions of queries and statistics over their results.
Finally, we present an approach for inferring a signed network (a "web of trust") from user-generated content in Wikipedia. We investigate mechanisms by which relationships between Wikipedia contributors - in the form of signed directed links - can be inferred based on their interactions. Our study sheds light on principles underlying a signed network that is captured by social interaction. We investigate whether this network over Wikipedia contributors indeed represents a plausible configuration of link signs, by studying its global and local network properties and, at an application level, by assessing its impact on the classification of Wikipedia articles.
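The uncertain-score setting described above can be illustrated with a small sketch (illustrative function and variable names, not the thesis's actual algorithm): when each object's score is only known to lie in a range, an object is certainly a top-k answer iff fewer than k rivals could possibly outscore its lower bound.

```python
def certain_top_k(score_ranges, k):
    """Objects guaranteed to be in the top-k under every score
    assignment consistent with the given (lo, hi) ranges."""
    result = []
    for obj, (lo, _hi) in score_ranges.items():
        # rivals whose upper bound exceeds obj's lower bound could
        # still outscore obj in some consistent score assignment
        rivals = sum(1 for other, (_olo, ohi) in score_ranges.items()
                     if other != obj and ohi > lo)
        if rivals < k:
            result.append(obj)
    return result
```

With four objects, `a` and `c` are certain top-2 answers, but no object is a certain top-1 answer, since `c`'s upper bound exceeds `a`'s lower bound.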
1302

Model checking nekonečně stavových systémů založený na inferenci jazyků / Model Checking Infinite-State Systems Using Language Inference

Rozehnal, Pavel Unknown Date
Regular model checking is a method for verifying infinite-state systems based on coding their configurations as words over a finite alphabet, sets of configurations as finite automata, and transitions as finite transducers. We implement regular model checking using inference of regular languages. The method builds on the observation that for infinite-state systems whose behavior can be modeled using length-preserving transducers, there is a finite computation for obtaining all reachable configurations. Our new approach to regular model checking via inference of regular languages is based on Angluin's L* algorithm, which is used to find an invariant that can answer the question of whether the system satisfies some property. We also provide an introduction to the theory of finite automata, model checking, SAT solving, and the Angluin L* and Biermann algorithms for learning finite automata.
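The L*-style loop can be sketched in miniature (a toy illustration, not the thesis's implementation): the learner keeps a prefix set S and suffix set E, closes the observation table, builds a hypothesis DFA from the distinct rows, and refines with counterexamples. Here the equivalence query is simulated by brute-force comparison on short strings, and the target is the regular language of strings over {a, b} with an even number of a's.

```python
from itertools import product

ALPHABET = "ab"

def member(w):
    # membership oracle: strings with an even number of a's
    return w.count("a") % 2 == 0

def row(prefix, suffixes):
    return tuple(member(prefix + s) for s in suffixes)

def hyp_accepts(w, states, E):
    # run the hypothesis DFA; states maps a row to its
    # representative prefix in S, starting from the empty prefix
    cur = ""
    for ch in w:
        cur = states[row(cur + ch, E)]
    return member(cur)

def lstar(max_len=6):
    S, E = [""], [""]
    while True:
        # closedness: every one-letter extension of a prefix in S
        # must share a row with some prefix already in S
        closed = True
        for p, a in product(list(S), ALPHABET):
            if row(p + a, E) not in {row(s, E) for s in S}:
                S.append(p + a)
                closed = False
        if not closed:
            continue
        states = {row(s, E): s for s in S}
        # simulated equivalence query over all strings up to max_len
        cex = next((w for n in range(max_len + 1)
                    for w in map("".join, product(ALPHABET, repeat=n))
                    if hyp_accepts(w, states, E) != member(w)), None)
        if cex is None:
            return states, E
        for i in range(len(cex) + 1):   # add cex and all its prefixes
            if cex[:i] not in S:
                S.append(cex[:i])
```

For this two-state target language the table closes after one round and the hypothesis is exact. (The full algorithm also needs a consistency check on the table, omitted here for brevity.)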
1303

Optimal Linear Filtering For Weak Target Detection in Radio Frequency Tomography

Akroush, Muftah Emhemed 15 June 2020
No description available.
1304

Models and Algorithms to Solve Electric Vehicle Charging Stations Designing and Managing Problem under Uncertainty

Quddus, Md Abdul 14 December 2018
This dissertation studies a framework in support of electric vehicle (EV) charging station expansion and management decisions. In the first part of the dissertation, we present a mathematical model for designing and managing electric vehicle charging stations, considering both long-term planning decisions and short-term hourly operational decisions (e.g., number of batteries charged, discharged through Battery-to-Grid (B2G), stored, Vehicle-to-Grid (V2G), renewable, and grid power usage) over a pre-specified planning horizon and under stochastic power demand. The model captures, and linearizes, the non-linear load congestion effect that increases exponentially as the electricity consumed by plugged-in EVs approaches the capacity of the charging station. The study proposes a hybrid decomposition algorithm that utilizes a Sample Average Approximation and an enhanced Progressive Hedging algorithm (PHA) inside a Constraint Generation algorithmic framework to efficiently solve the proposed optimization model. A case study based on a road network of Washington, D.C. is presented to visualize and validate the modeling results. Computational experiments demonstrate the effectiveness of the proposed algorithm in solving the problem in a practical amount of time. Findings of the study include that incorporating the load congestion factor encourages the opening of large-sized charging stations, increases the number of stored batteries, and that higher congestion costs call for a decrease in the opening of new charging stations. The second part of the dissertation is dedicated to investigating the performance of a collaborative decision model to optimize electricity flow among commercial buildings, electric vehicle charging stations, and the power grid under power demand uncertainty. A two-stage stochastic programming model is proposed to incorporate energy sharing and collaborative decisions among network entities with the aim of overall energy network cost minimization.
We use San Francisco, California as a testing ground to visualize and validate the modeling results. Computational experiments draw managerial insights into how different key input parameters (e.g., grid power unavailability, power collaboration restriction) affect the overall energy network design and cost. Finally, a novel disruption prevention model is proposed for designing and managing EV charging stations with respect to both long-term planning and short-term operational decisions, over a pre-determined planning horizon and under stochastic power demand. Long-term planning decisions determine the type, location, and time of established charging stations, while short-term operational decisions manage power resource utilization. A non-linear term is introduced into the model to prevent the evolution of excessive temperature on a power line under stochastic exogenous factors such as outside temperature and air velocity. Since the research problem is NP-hard, a Sample Average Approximation method enhanced with a Scenario Decomposition algorithm based on a Lagrangian Decomposition scheme is proposed to obtain a good-quality solution within a reasonable computational time. As a testing ground, the road network of Washington, D.C. is considered to visualize and validate the modeling results. The results of the analysis provide a number of managerial insights to help decision makers achieve a more reliable and cost-effective electricity supply network.
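The Sample Average Approximation step used in both parts can be sketched in miniature (illustrative names and toy data; the dissertation's actual model is a large mixed-integer program): sample demand scenarios once, then score each candidate first-stage decision by its average sampled cost.

```python
import random

def saa_best_capacity(build_cost, shortage_cost, demand_sampler,
                      candidates, n_scenarios=2000, seed=7):
    """Sample Average Approximation for a toy capacity-sizing
    problem: minimize build cost plus expected shortage penalty."""
    rng = random.Random(seed)
    scenarios = [demand_sampler(rng) for _ in range(n_scenarios)]

    def avg_cost(capacity):
        shortage = sum(max(d - capacity, 0.0) for d in scenarios)
        return build_cost * capacity + shortage_cost * shortage / n_scenarios

    return min(candidates, key=avg_cost)
```

For uniform(50, 150) demand and a 10:1 shortage-to-build cost ratio, the newsvendor critical fractile is 0.9, so the chosen capacity should land near 140 on a grid of candidates.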
1305

Automatic Development of Pharmacokinetic Structural Models

Hamdan, Alzahra January 2022
Introduction: The current development strategy of population pharmacokinetic models is a complex and iterative process that is manually performed by modellers. Such a strategy is time-consuming, subjective, and dependent on the modellers' experience. This thesis presents a novel model building tool that automates the development process of pharmacokinetic (PK) structural models. Methods: Modelsearch is a tool in the Pharmpy library, an open-source package for pharmacometrics modelling, that searches for the best structural model using an exhaustive stepwise search algorithm. Given a dataset, a starting model and a pre-specified model search space of structural model features, the tool creates and fits a series of candidate models that are then ranked based on a selection criterion, leading to the selection of the best model. The Modelsearch tool was used to develop structural models for 10 clinical PK datasets (5 orally and 5 i.v. administered drugs). A starting model for each dataset was generated using the assemblerr package in R, which included first-order (FO) absorption without any absorption delay for oral drugs, one-compartment disposition, FO elimination, a proportional residual error model, and inter-individual variability (IIV) on the starting model parameters with a correlation between clearance (CL) and central volume of distribution (VC). The model search space included aspects of absorption and absorption delay (for oral drugs), distribution and elimination. In order to understand the effects of different IIV structures on structural model selection, five model search approaches were investigated that differ in the IIV structure of candidate models: 1. naïve pooling, 2. IIV on starting model parameters only, 3. additional IIV on the mean delay time parameter, 4. additional diagonal IIVs on newly added parameters, and 5. full block IIVs.
Additionally, the implementation of structural model selection in the workflow of fully automatic model development was investigated. Three strategies were evaluated: SIR, SRI, and RSI, depending on the development order of the structural model (S), IIV model (I) and residual error model (R). Moreover, the NONMEM errors encountered when using the tool were investigated and categorized in order to be handled in the automatic model building workflow. Results: Differences in the final selected structural models for each drug were observed between the five different model search approaches. The same distribution components were selected through Approaches 1 and 2 for 6/10 drugs. Approach 2 also identified an absorption delay component in 4/5 oral drugs, whilst the naïve pooling approach only identified an absorption delay model in 2 drugs. Compared to Approaches 1 and 2, Approaches 3, 4 and 5 tended to select more complex models and more often resulted in minimization errors during the search. For the SIR, SRI and RSI investigations, the same structural model was selected for 9/10 drugs, with a significantly higher run time for the RSI strategy compared to the other strategies. The NONMEM errors were categorized into four categories based on suggested handling, which is valuable for further improving the tool's automatic error handling. Conclusions: The Modelsearch tool was able to automatically select a structural model with different strategies for setting the IIV model structure. This novel tool enables the evaluation of numerous combinations of model components, which would not be possible using a traditional manual model building strategy. Furthermore, the tool is flexible and can support multiple research investigations into how best to implement structural model selection in a fully automatic model development workflow.
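The exhaustive stepwise idea — enumerate candidate structural features, fit each candidate, rank by a selection criterion — can be sketched as follows (hypothetical `fit` callable and feature names; the real Modelsearch tool fits NONMEM models and ranks them by criteria such as BIC):

```python
from itertools import product

def exhaustive_search(fit, search_space, base):
    """Fit every combination of structural features from the search
    space and return the candidate with the lowest criterion value."""
    best_model, best_score = dict(base), fit(base)
    for combo in product(*search_space.values()):
        candidate = dict(base, **dict(zip(search_space, combo)))
        score = fit(candidate)
        if score < best_score:
            best_model, best_score = candidate, score
    return best_model
```

With a toy criterion that counts deviations from a "true" model, the search recovers that model from an 18-candidate space.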
1306

Optimal distribution network reconfiguration using meta-heuristic algorithms

Asrari, Arash 01 January 2015
Finding the optimal configuration of power distribution system topology is an NP-hard combinatorial optimization problem. It becomes more complex when the time-varying nature of loads in large-scale distribution systems is taken into account. In the second chapter of this dissertation, a systematic approach is proposed to tackle the computational burden of the procedure. To solve the optimization problem, a novel adaptive fuzzy-based parallel genetic algorithm (GA) is proposed that employs the concept of parallel computing in identifying the optimal configuration of the network. The integration of fuzzy logic into the GA enhances the efficiency of the parallel GA by adaptively modifying the migration rates between different processors during the optimization process. A computationally efficient graph encoding method based on the Dandelion coding strategy is developed which automatically generates radial topologies and prevents the construction of infeasible radial networks during the optimization process. The main shortcoming of the algorithm proposed in Chapter 2 is that it identifies only a single solution, meaning that the system operator has no option but to rely on the found solution. That is why a novel hybrid optimization algorithm is proposed in the third chapter of this dissertation that determines Pareto frontiers, as candidate solutions, for the multi-objective distribution network reconfiguration problem. Implementing this model, the system operator has more flexibility in choosing the best configuration among the alternative solutions. The proposed hybrid optimization algorithm combines the concept of fuzzy Pareto dominance (FPD) with the shuffled frog leaping algorithm (SFLA) to recognize non-dominated suboptimal solutions identified by SFLA.
The local search step of SFLA is also customized for power systems applications so that it automatically creates and analyzes only feasible, radial configurations in its optimization procedure, which significantly increases the convergence speed of the algorithm. In the fourth chapter, the problem of optimal network reconfiguration is solved for the case in which the system operator employs an optimization algorithm that automatically modifies its parameters during the optimization process. Defining three fuzzy functions, the probabilities of crossover and mutation are adaptively tuned as the algorithm proceeds, so that premature convergence is avoided while the convergence speed of identifying the optimal configuration does not decrease. This modified genetic algorithm is considered a step towards making the parallel GA, presented in the second chapter of this dissertation, more robust against getting stuck in local optima. In the fifth chapter, the concentration is on finding a potential smart grid solution that yields more high-quality suboptimal configurations of distribution networks. This chapter is considered an improvement on the third chapter of this dissertation for two reasons: (1) fuzzy logic is used in the partitioning step of SFLA to improve the proposed optimization algorithm and to yield a more accurate classification of frogs; (2) the problem of system reconfiguration is solved considering the presence of distributed generation (DG) units in the network. In order to study the new paradigm of integrating smart grids into power systems, it is analyzed how the quality of suboptimal solutions is affected when DG units are continuously added to the distribution network.
The heuristic optimization algorithm which is proposed in Chapter 3 and improved in Chapter 5 is implemented on a smaller case study in Chapter 6 to demonstrate that the solution identified through the optimization process is the same as the optimal solution found by an exhaustive search.
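The non-dominated filtering at the heart of the multi-objective reconfiguration model can be sketched as follows (toy objective vectors such as (power loss, switching operations); the dissertation's actual method combines fuzzy Pareto dominance with SFLA):

```python
def pareto_front(solutions):
    """Keep solutions not dominated by any other (minimization):
    t dominates s when t is <= s in every objective and t != s.
    In this sketch, exact duplicates would eliminate each other."""
    return [s for s in solutions
            if not any(all(ti <= si for ti, si in zip(t, s)) and t != s
                       for t in solutions)]
```

For five candidate configurations, the three trade-off points survive while the two dominated ones are discarded.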
1307

A Comparative Study on Optimization Algorithms and its efficiency

Ahmed Sheik, Kareem January 2022
Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many problems in the real world are continuous, and it is not easy to find global solutions. However, advances in computer technology increase the speed of computations [1]. The optimization method, an efficient numerical simulator, and a realistic depiction of the physical process that we intend to describe and optimize are interconnected components of any optimization problem [2]. Objectives: A literature review on existing optimization algorithms is performed. Ten different benchmark functions are considered and are implemented on the chosen existing algorithms, such as the GA (Genetic Algorithm), the ACO (Ant Colony Optimization) method, and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches based on metrics such as CPU time, optimality, accuracy, and mean best standard deviation. Methods: In this research work, a mixed-method approach is used. A literature review is performed based on the existing optimization algorithms. In addition, an experiment is conducted using ten different benchmark functions with the existing optimization algorithms (the PSO algorithm, the ACO algorithm, GA, and PIBO) to measure their efficiency based on the four factors of CPU time, optimality, accuracy, and mean best standard deviation. This tells us which optimization algorithms perform better. Results: The experimental findings are presented in this section. Using the standard functions on the suggested method and the other methods, the metrics of CPU time, optimality, accuracy, and mean best standard deviation are considered, and the results are tabulated; graphs are made using the data obtained.
Analysis and Discussion: The research questions are addressed based on the results of the experiment that has been conducted. Conclusion: We conclude the research by analyzing the existing optimization methods and the algorithms' performance. PIBO performs much better, as shown by the results for the optimality metrics, best mean, standard deviation, and accuracy, but has a significant drawback in CPU time: its time taken is much higher than that of the PSO algorithm and almost as high as that of GA, while it still performs much better than the ACO algorithm.
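The experimental setup — run each optimizer on a benchmark function and record best value found and CPU time — can be sketched generically (simple random search and hill climbing stand in for the thesis's PSO/ACO/GA/PIBO, which require full implementations; names and parameters here are illustrative):

```python
import random
import time

def sphere(x):
    # classic benchmark function: global minimum 0 at the origin
    return sum(v * v for v in x)

def random_search(f, dim, iters, rng):
    # best of `iters` uniform samples in [-5, 5]^dim
    return min(f([rng.uniform(-5, 5) for _ in range(dim)])
               for _ in range(iters))

def hill_climb(f, dim, iters, rng):
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0, 0.5) for v in x]
        fc = f(cand)
        if fc < fx:                 # greedy: accept only improvements
            x, fx = cand, fc
    return fx

def run_benchmark(algo, f, dim=5, iters=2000, seed=1):
    """Return (best value found, CPU time used) for one algorithm."""
    rng = random.Random(seed)
    t0 = time.process_time()
    best = algo(f, dim, iters, rng)
    return best, time.process_time() - t0
```

Comparing the two optimizers on the sphere function with equal budgets shows the kind of table the thesis builds: the local optimizer reaches a far better objective value than blind sampling at comparable cost.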
1308

Simultaneous localization and mapping for autonomous robot navigation in a dynamic noisy environment

Agunbiade, Olusanya Yinka 11 1900
D. Tech. (Department of Information Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Simultaneous Localization and Mapping (SLAM) is a significant problem that has been extensively researched in robotics. Its contribution to autonomous robot navigation has attracted researchers towards focusing on this area. In the past, various techniques have been proposed to address the SLAM problem, with remarkable achievements, but several factors can degrade the effectiveness of a SLAM technique. These factors include environmental noise (light intensity and shadow), dynamic environments, the kidnapped-robot problem, and computational cost. These problems create inconsistency that can lead to erroneous results in implementation. In an attempt to address these problems, a novel SLAM technique known as DIK-SLAM was proposed. DIK-SLAM is a SLAM technique upgraded with filtering algorithms and several re-modifications of the Monte Carlo algorithm to increase its robustness while taking computational complexity into consideration. The morphological technique and the Normalized Differences Index (NDI) are filters introduced into the novel technique to overcome shadow. The dark channel model and specular-to-diffuse are filters introduced to overcome light intensity. These filters operate in parallel since computational cost is a concern. The re-modified Monte Carlo algorithm, based on initial localization and a grid map technique, was introduced to overcome the kidnapped-robot problem and dynamic environments, respectively. In this study, a publicly available dataset (TUM-RGBD) and a privately generated dataset from a university in South Africa were employed for evaluation of the filtering algorithms. Experiments were carried out using Matlab simulation and were evaluated using quantitative and qualitative methods.
The experimental results obtained showed an improved performance of DIK-SLAM when compared with the original Monte Carlo algorithm and another SLAM technique available in the literature. The DIK-SLAM algorithm discussed in this study has the potential to improve autonomous robot navigation, path planning, and exploration while reducing robot accident rates and human injuries.
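The Monte Carlo algorithm that DIK-SLAM re-modifies can be sketched as a minimal 1-D particle filter (a toy predict-weight-resample cycle on a cyclic grid; the thesis's algorithm is far richer, and all names and noise levels below are illustrative assumptions):

```python
import random

def monte_carlo_localization(world, moves, readings, n=500, seed=0):
    """1-D cyclic grid: world[i] is the binary landmark reading at
    cell i; each move shifts the robot right by one cell."""
    rng = random.Random(seed)
    size = len(world)
    particles = [rng.randrange(size) for _ in range(n)]
    for u, z in zip(moves, readings):
        # motion update with a little noise
        particles = [(p + u + rng.choice((-1, 0, 0, 0, 1))) % size
                     for p in particles]
        # sensor update: weight particles by agreement, then resample
        weights = [0.9 if world[p] == z else 0.1 for p in particles]
        particles = rng.choices(particles, weights=weights, k=n)
    return max(set(particles), key=particles.count)   # mode estimate
```

On a map whose landmark pattern makes the five-step reading sequence unambiguous, the particle cloud concentrates near the robot's true final cell.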
1309

Portfolio management using computational intelligence approaches. Forecasting and Optimising the Stock Returns and Stock Volatilities with Fuzzy Logic, Neural Network and Evolutionary Algorithms.

Skolpadungket, Prisadarng January 2013
Portfolio optimisation has a number of constraints resulting from practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints. Exhaustive search to reach the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by the estimation error problem, caused by a lack of ability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions which are evenly distributed along the efficient frontier, while MOGA is more time efficient. An Evolutionary Artificial Neural Network (EANN) is proposed. It automatically evolves the ANN's initial values and structures (hidden nodes and layers). The EANN gives better performance in stock return forecasts in comparison with Ordinary Least Squares estimation and with Back Propagation and Elman Recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, based on fuzzy logic-like rules, are proposed to select the best models given an economic scenario. Their predictive performance is better than that of the comparison forecasting models. MOGA and SPEA2 are modified to include a third objective to handle model risk, and are evaluated and tested for their performance. The results show that they perform better than those without the third objective.
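The cardinality and floor constraints mentioned above are often handled in GAs with a repair operator; a minimal sketch (a common GA trick with hypothetical parameter values, not the thesis's exact scheme):

```python
def repair_portfolio(weights, k, floor=0.05):
    """Keep only the k largest weights (cardinality constraint),
    lift held weights to at least `floor`, and renormalize to 1.
    Single repair pass: a production GA would iterate to a fixed
    point, since renormalizing can push a weight back below floor."""
    held = sorted(range(len(weights)), key=lambda i: -weights[i])[:k]
    w = [max(weights[i], floor) if i in held else 0.0
         for i in range(len(weights))]
    total = sum(w)
    return [v / total for v in w]
```

For example, a five-asset candidate repaired with k = 3 ends up holding exactly three assets with weights summing to one.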
1310

An Online Input Estimation Algorithm For A Coupled Inverse Heat Conduction-Microstructure Problem

Ali, Salam K. 09 1900
This study focuses on developing a new online recursive numerical algorithm for a coupled nonlinear inverse heat conduction-microstructure problem. This algorithm is essential in identifying, designing and controlling many industrial applications such as the quenching process for heat treating of materials, chemical vapor deposition and industrial baking. In order to develop the above algorithm, a systematic four-stage research plan has been conducted.

The first and second stages were devoted to thoroughly reviewing the existing inverse heat conduction techniques. Unlike most inverse heat conduction solution methods, which are batch-form techniques, the online input estimation algorithm can be used for controlling the process in real time. Therefore, in the first stage, the effect of different parameters of the online input estimation algorithm on the estimate bias was investigated. These parameters are the stabilizing parameter, the standard deviation of the measurement errors, the temporal step size, the spatial step size, the location of the thermocouple, as well as the initial assumptions for the state error covariance and the error covariance of the input estimate. Furthermore, three different discretization schemes, namely explicit, implicit and Crank-Nicolson, were employed in the input estimation algorithm to evaluate their effect on the algorithm's performance.

The effect of changing the stabilizing parameter was investigated using three different forms of boundary conditions covering most practical boundary heat flux conditions. These cases are: square, triangular and mixed-function heat fluxes. The most important finding of this investigation is that a robust range of the stabilizing parameter was found which achieves the desired trade-off between the filter's tracking ability and its sensitivity to measurement errors.
For the three considered cases, it was found that there is a common optimal value of the stabilizing parameter at which the estimate bias is minimal. This finding is important for practical applications, since this parameter is usually unknown; the study therefore provides needed guidance for choosing it.

In stage three of this study, a new, more efficient direct numerical algorithm was developed to predict the thermal and microstructure fields during quenching of steel rods. The present algorithm solves the full nonlinear heat conduction equation using a central finite-difference scheme coupled with a fourth-order Runge-Kutta nonlinear solver. Numerical results obtained using the present algorithm have been validated against experimental data and numerical results available in the literature. In addition to its accurate predictions, the present algorithm does not require iterations; hence, it is computationally more efficient than previous numerical algorithms.

The work performed in stage four of this research focused on developing and applying an inverse algorithm to estimate the surface temperatures and surface heat flux of a steel cylinder during the quenching process. The conventional online input estimation algorithm was modified and used for the first time to handle this coupled nonlinear problem. The nonlinearity of the problem was treated explicitly, which resulted in a non-iterative algorithm suitable for real-time control of the quenching process. The obtained results were validated using experimental data and numerical results obtained by solving the direct problem with the direct solver developed in stage three of this work. These results showed that the algorithm efficiently reconstructs the shape of the convective surface heat flux. / Thesis / Doctor of Philosophy (PhD)
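The augmented-state idea behind online input estimation can be sketched for a scalar lumped-capacitance model (a deliberate simplification of the coupled nonlinear problem in the thesis; parameter values are illustrative): stack the unknown input q with the temperature T, model q as a random walk, and run a standard Kalman filter.

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def estimate_input(measurements, dt=0.1, h=0.5, r=0.04, q_drive=0.5):
    """Kalman filter on the augmented state (T, q), where
    T' = T + dt*(q - h*T) and the unknown input q is a random walk.
    Returns the history of input estimates."""
    T, q = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 10.0]]          # state error covariance
    F = [[1.0 - h * dt, dt], [0.0, 1.0]]   # transition matrix
    history = []
    for z in measurements[1:]:
        # predict
        T = F[0][0] * T + F[0][1] * q
        P = matmul(matmul(F, P), transpose(F))
        P[1][1] += q_drive                 # random-walk drive on q
        # update with measurement z of T (H = [1, 0])
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        innov = z - T
        T, q = T + K0 * innov, q + K1 * innov
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        history.append(q)
    return history
```

Feeding the filter temperatures simulated with a constant true input q = 2 shows the estimate converging toward the true input, which is the recursive behavior that makes such estimators usable for real-time control.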
