  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Hydrological data interpolation using entropy

Ilunga, Masengo 17 November 2006 (has links)
Faculty of Engineering and Built Environment School of Civil and Environmental Engineering 0105772w imasengo@yahoo.com / Missing data, insufficient record lengths and poor data quality are common problems in hydrological series, and they are much more prevalent in developing countries than in developed ones. This situation can severely affect the outcome of water systems managers' decisions (e.g. reliability of a design, establishment of operating policies for water supply, etc.). Thus, numerous data interpolation (infilling) techniques have evolved in hydrology to deal with missing data. The current study presents a methodology that combines different approaches to coping with missing (limited) hydrological data, using the theories of entropy, artificial neural networks (ANN) and expectation-maximization (EM) techniques. This methodology is formulated into a model named the ENANNEX model. The study does not use any physical characteristics of the catchment areas but deals only with the limited information (e.g. streamflow or rainfall) at the target gauge and its similar nearby base gauge(s). The entropy concept was confirmed to be a versatile tool. It was first used for quantifying the information content of hydrological variables (e.g. rainfall or streamflow). The same concept (through the directional information transfer index, i.e. DIT) was used in the selection of base/subject gauges. Finally, the DIT notion was also extended to evaluating the performance of the hydrological data-infilling techniques (i.e. ANN and EM techniques). The methodology was applied to annual total rainfall, annual mean flow series, annual maximum flows and 6-month flow series (means) of selected catchments in drainage region D "Orange" of South Africa. These data regimes can be regarded as useful for design-oriented studies, flood studies, water balance studies, etc.
The results from the case studies showed that the DIT is as good an index for infilling-technique selection as other criteria, e.g. statistical and graphical ones, with the added feature of being a non-dimensional informational index. The data interpolation techniques, viz. ANNs and EM (existing methods applied and not yet applied in hydrology), and their new features have also been presented. This study showed that the standard techniques (e.g. backpropagation, BP, and EM) as well as their respective variants could be selected in the missing hydrological data estimation process. The capability of the different data interpolation techniques to maintain the statistical characteristics (e.g. mean, variance) of the target gauge was also taken into account. The relationship between the accuracy of the estimated series (obtained by applying a data-infilling technique) and the gap duration was then investigated through the DIT notion, and it was shown that a decay (power or exponential) function could describe that relationship well. In other words, the amount of uncertainty removed from the target station in a station pair, via a given technique, could be known for a given gap duration. It was noticed that the performance of the different techniques depends on the gap duration at the target gauge, the station pair involved in the missing data estimation and the type of data regime. This study also showed that it was possible, through the entropy approach, to assess (preliminarily) model performance for simulating runoff data at a site where absolutely no record exists: a case study was conducted at the Bedford site (in South Africa). Two simulation models, viz. the RAFLER and WRSM2000 models, were assessed in this respect, and both were found suitable for simulating flows at Bedford.
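The directional information transfer index at the heart of this methodology can be sketched numerically. The Python snippet below is illustrative only (the histogram binning, base-2 logarithm and function names are our assumptions, not the thesis's code): it estimates marginal entropy from a histogram and a DIT-style ratio, i.e. the fraction of the target gauge's uncertainty removed by knowing the base gauge.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def dit(x, y, bins=10):
    """Directional information transfer from series y to series x:
    mutual information I(X;Y) normalized by the marginal entropy H(X),
    estimated from a 2-D histogram of the paired observations."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1)          # marginal of x
    py = joint.sum(axis=0)          # marginal of y
    hx = entropy(px)
    mutual = hx + entropy(py) - entropy(joint.ravel())
    return mutual / hx              # DIT in [0, 1]
```

A perfectly informative base gauge gives a DIT near 1, while an unrelated gauge gives a value near 0, which is what makes the index usable for base-gauge selection.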
62

Contributions to the Shape Synthesis of Directivity-Maximized Dielectric Resonator Antennas

Nassor, Mohammed 08 August 2023 (has links)
Antennas are an important component of wireless ("without wires") communications, regardless of their use. As these systems have become increasingly complex, antenna design requirements have become more demanding. Conventional antenna design consists of selecting some canonical radiator structure described by a handful of key dimensions, and then adjusting these using an optimization algorithm that improves some performance-related objective function that is (during optimization) repeatedly evaluated via a full-wave computational electromagnetics model of the structure. This approach has been employed to great effect in the enormously successful development of wireless communications antenna technology thus far, but is limiting in the sense that the "design space" is restricted to a library of canonical (or near-canonical) shapes. As increased design constraints and more complicated placement requirements arise, such an approach to antenna design could eventually become a bottleneck. The use of antenna shape synthesis, a process also referred to as inverse design, can widen the "design space" to include such aspects as occupancy and fabrication constraints, the presence of a platform, even weight constraints, and much more. Dielectric resonator antennas (DRAs) hold the promise of lower losses at higher frequencies. This thesis uses a three-dimensional shape optimization algorithm along with a characteristic mode analysis and a genetic algorithm to shape-synthesize DRAs. Until now, only a limited amount of work on such shape synthesis has been performed for single-feed fixed-beam DRAs.
In this thesis we extend this approach by devising and implementing a new shaping methodology for significantly more complex problems, namely directivity-maximized multi-port fixed-beam DRAs, and multi-port DRAs capable of the beam-steering required to satisfy certain spherical coverage constraints, where the location, type and number of feed-ports need not be specified prior to shaping. The approach enables even low-profile enhanced-directivity DRAs to be shape synthesized.
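The shaping loop described above pairs an electromagnetic evaluation (characteristic mode analysis / full-wave simulation) with a genetic algorithm over a discretised dielectric volume. As a rough illustration only, the Python sketch below shows the genetic-algorithm skeleton on a binary voxel-occupancy vector; all names, parameters and the toy objective are assumptions, and in the real workflow the fitness call would be a directivity evaluation from an electromagnetic solver, not a closed-form function.

```python
import random

def genetic_shape_search(objective, n_voxels=32, pop_size=20,
                         generations=40, seed=1):
    """Toy GA skeleton for voxel-based shape synthesis.
    `objective` maps a 0/1 voxel occupancy list to a fitness score;
    in the thesis that role is played by an electromagnetic evaluation
    of directivity, here it is any user-supplied surrogate."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_voxels)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_voxels)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_voxels)] ^= 1     # single-bit mutation
            children.append(child)
        pop = parents + children                    # elitist replacement
    return max(pop, key=objective)

# toy surrogate: reward material concentrated in the first half of the volume
best = genetic_shape_search(lambda v: sum(v[:16]) - sum(v[16:]))
```

Because the parents are carried over unchanged each generation, the best shape found never regresses, which matters when each fitness evaluation is an expensive simulation.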
63

Sparsification of Social Networks Using Random Walks

Wilder, Bryan 01 May 2015 (has links)
Analysis of large network datasets has become increasingly important. Algorithms have been designed to find many kinds of structure, with numerous applications across the social and biological sciences. However, a tradeoff is always present between accuracy and scalability; otherwise promising techniques can be computationally infeasible when applied to networks with huge numbers of nodes and edges. One way of extending the reach of network analysis is to sparsify the graph by retaining only a subset of its edges. The reduced network could prove much more tractable. For this thesis, I propose a new sparsification algorithm that preserves the properties of a random walk on the network. Specifically, the algorithm finds a subset of edges that best preserves the stationary distribution of a random walk by minimizing the Kullback-Leibler divergence between a walk on the original and sparsified graphs. A highly efficient greedy search strategy is developed to optimize this objective. Experimental results are presented that test the performance of the algorithm on the influence maximization task. These results demonstrate that sparsification allows near-optimal solutions to be found in a small fraction of the runtime that would be required using the full network. Two cases are shown where sparsification allows an influence maximization algorithm to be applied to a dataset that previous work had considered intractable.
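The objective described above — choose edges so that the sparsified walk's stationary distribution stays close, in KL divergence, to the original one — can be sketched in a few lines. The Python toy below is our simplification, not the thesis algorithm: it exploits the fact that on an undirected graph the walk's stationary distribution is degree-proportional, and greedily adds the edge that most reduces the KL divergence of that degree distribution.

```python
import numpy as np

def stationary(adj):
    """Stationary distribution of a random walk on an undirected graph:
    proportional to node degree."""
    deg = adj.sum(axis=1)
    total = deg.sum()
    return deg / total if total > 0 else np.full(len(adj), 1.0 / len(adj))

def greedy_sparsify(adj, k):
    """Keep k edges, greedily choosing at each step the edge whose
    addition minimizes KL(pi_full || pi_kept). A toy stand-in for the
    thesis objective, which compares the walks themselves."""
    n = len(adj)
    pi_full = stationary(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
    kept = np.zeros_like(adj)
    eps = 1e-12  # smoothing so the KL term stays finite for empty nodes
    for _ in range(min(k, len(edges))):
        def kl_after(e):
            i, j = e
            trial = kept.copy()
            trial[i, j] = trial[j, i] = 1
            pi = stationary(trial)
            return np.sum(pi_full * np.log((pi_full + eps) / (pi + eps)))
        best = min(edges, key=kl_after)
        edges.remove(best)
        kept[best[0], best[1]] = kept[best[1], best[0]] = 1
    return kept
```

This quadratic-time loop is only for exposition; the point of the thesis is precisely a greedy strategy efficient enough for huge graphs.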
64

Optimal Vehicle Path Generator Using Optimization Methods

Ramanata, Peeroon Pete 24 April 1998 (has links)
This research explores the idea of developing an optimal path generator that can be used in conjunction with a feedback steering controller to automate track testing experiments. This study specifically concentrates on applying optimization concepts to generate paths that meet two separate objective functions: minimum time and maximum tire forces. A three-degree-of-freedom vehicle model is used to approximate the handling dynamics of the vehicle. Inputs into the vehicle model are steering angle and longitudinal force at the tire; these two variables approximate the two controls that are essential in operating a vehicle. A third-order Runge-Kutta integration routine is used to integrate the vehicle dynamics equations of motion, and the Optimization Toolbox of Matlab is used to evaluate the optimization algorithm. The vehicle is constrained by a series of conditions, including travel within the boundaries of the track, traction force limits at the tire, vehicle speed, and steering. The simulation results show that optimization applied to vehicle dynamics can be useful in designing an automated track testing system. The optimal path generator can be used to develop meaningful test paths on existing test tracks. This study can be used to generate an accelerated tire wear test path, to perform parametric studies of suspension geometry design using vehicle dynamics handling test data, and to increase repeatability in generating track testing results. <i> Vita removed at author's request. GMc 3/13/2013</i> / Master of Science
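The third-order Runge-Kutta step used to integrate the equations of motion can be written in a few lines. The Python sketch below uses Kutta's classical third-order scheme; the scalar test problem is our assumption, whereas the thesis integrates a three-degree-of-freedom vehicle state vector.

```python
def rk3_step(f, t, y, h):
    """One step of Kutta's third-order Runge-Kutta method for y' = f(t, y).
    In the vehicle model y would be the state vector (e.g. yaw rate,
    lateral and longitudinal velocity); here it may simply be a float."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

# accuracy check on y' = y, y(0) = 1, whose exact solution is e^t
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk3_step(lambda t, y: y, t, y, h)
    t += h
# y now approximates e = 2.71828... with error well below 1e-4
```

The local error of this scheme is O(h^4), giving a global error of O(h^3), which is typically an adequate compromise between accuracy and the cost of repeated evaluations inside an optimization loop.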
65

Brexit och den svenska vänstern : Socialdemokraternas, Vänsterpartiets och Miljöpartiets inställning till Storbritanniens utträde ur EU / Brexit and the Swedish Left : The Attitudes of the Social Democrats, the Left Party and the Greens towards the United Kingdom Leaving the EU

Olanås, Henrik January 2017 (has links)
The purpose of this bachelor thesis is to examine how the Swedish parliamentary left viewed Brexit and its expected consequences. The standpoints concerning Brexit that were presented by the Social Democrats (S; SAP), the Left Party (V) and the Greens (MP) during the foreign policy debates of 2016 and 2017, and during eight of the consultations with the Committee on EU Affairs, from December 2015 to September 2016, are analysed. The actions of the three parties are explained with the help of the concepts politicization, programme realization, vote maximization and maximization of parliamentary influence. The standpoints are categorized using a qualitative text analysis. The conclusion is that the Social Democrats and the Greens had a negative attitude towards the United Kingdom leaving the EU, and they argued that the result of the referendum was a matter of regret. According to the Social Democrats and the Greens, Brexit meant that the EU had to start fulfilling the wishes of the citizens; otherwise the legitimacy of the union would be damaged even further. The Left Party neither approved nor disapproved of Brexit, but it did consider the event a historic opportunity to reform the EU. The analysis of the standpoints showed that Brexit couldn’t be classified as a politicized (contentious) question for the Swedish left. The actions of the Social Democrats are seen as an attempt to achieve all the strategic goals: programme realization, vote maximization and maximization of parliamentary influence. The Left Party prioritized vote maximization over the other goals, while the Greens prioritized maximization of parliamentary influence at the expense of programme realization.
66

利用標籤社會網絡之影響力最大化達到目標式廣告行銷 / Influence maximization in labeled social network for target advertising

李法賢 Unknown Date (has links)
Viral marketing achieves advertising effect through interpersonal interaction, via consumer-to-consumer recommendation. How, then, should an advertiser conduct viral marketing? With limited resources, the advertiser must identify the influential people in a population who can recommend a product or concept to more consumers and persuade those consumers to adopt their opinions. Using a social network, the relationships between consumers can be represented simply as nodes and edges, and influence maximization is the problem of selecting the k most influential consumers in the network, i.e. the k consumers who can influence the most others. Advertising, however, is strongly oriented towards a target audience: its purpose is to influence the target consumers so that they buy the product. We therefore pose the labeled influence maximization problem on a labeled social network; unlike previous studies, we add label conditions, seeking to influence as many nodes satisfying those conditions as possible. For labeled social networks, we adapt four methods for the influence maximization problem (Greedy, NewGreedy, CELFGreedy and DegreeDiscount) to find approximate solutions that influence the most nodes satisfying the label conditions, and we also propose two new methods, ProximityDiscount and MaximumCoverage. Offline, we precompute the proximity between pairs of nodes; when marketers devise marketing strategies online, the system uses these proximities to find an approximate solution online. The experiments use a social network drawn from the Internet Movie Database, and the results show that, balancing efficiency and effectiveness, ProximityDiscount is well suited to solving the labeled influence maximization problem. / The influence maximization problem is to find a small subset of nodes (seed nodes) in a social network that maximizes the spread of influence. But when marketers advertise a product, they have a target audience in mind, which influence maximization does not take into account. This thesis addresses a new problem, called the labeled influence maximization problem, which is to find a subset of nodes in a labeled social network that influences the target audience and maximizes the profit of influence. In a labeled social network, every node has a label, and every label has a profit which can be set by marketers. We propose six algorithms to solve the labeled influence maximization problem. To accommodate its objective, four algorithms, called LabeledGreedy, LabeledNewGreedy, LabeledCELFGreedy, and LabeledDegreeDiscount, are modified from previous studies on original influence maximization. Moreover, we propose two new algorithms, called ProximityDiscount and MaximumCoverage, which offline compute the proximities of any two nodes in the labeled social network.
When marketers make strategies online, the system returns an approximate solution using these proximities. Experiments were performed on the labeled social network constructed from the Internet Movie Database; the results show that ProximityDiscount provides a good balance of efficiency and effectiveness.
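The DegreeDiscount family of heuristics named above admits a compact sketch. The Python function below is illustrative only: the discount formula follows Chen et al.'s DegreeDiscount heuristic with the plain degree replaced by a label-aware degree, which is our assumption for exposition rather than the thesis's exact LabeledDegreeDiscount or ProximityDiscount method.

```python
def labeled_degree_discount(graph, labels, target, k, p=0.1):
    """Pick k seed nodes favouring those whose neighbours carry the
    target label, discounting nodes whose neighbourhood is already
    covered by chosen seeds.
    graph: dict node -> set of neighbours; labels: dict node -> label;
    p: assumed propagation probability of the independent cascade."""
    def labeled_deg(v):
        return sum(1 for u in graph[v] if labels[u] == target)

    score = {v: labeled_deg(v) for v in graph}
    t = {v: 0 for v in graph}        # number of seeded neighbours so far
    seeds = []
    for _ in range(min(k, len(graph))):
        v = max((u for u in graph if u not in seeds), key=lambda u: score[u])
        seeds.append(v)
        for u in graph[v]:
            if u not in seeds:
                t[u] += 1
                # DegreeDiscount-style update: d - 2t - (d - t) * t * p,
                # with the labeled degree standing in for d
                score[u] = (labeled_deg(u) - 2 * t[u]
                            - (len(graph[u]) - t[u]) * t[u] * p)
    return seeds
```

On a small example, a hub surrounded by target-labeled neighbours is chosen first, which is exactly the behaviour target advertising wants from a seed-selection heuristic.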
67

Modelling and Analysis of Interconnects for Deep Submicron Systems-on-Chip

Pamunuwa, Dinesh January 2003 (has links)
The last few decades have been a very exciting period in the development of micro-electronics and brought us to the brink of implementing entire systems on a single chip, on a hitherto unimagined scale. However an unforeseen challenge has cropped up in the form of managing wires, which have become the main bottleneck in performance, masking the blinding speed of active devices. A major problem is that increasingly complicated effects need to be modelled, but the computational complexity of any proposed model needs to be low enough to allow many iterations in a design cycle. This thesis addresses the issue of closed form modelling of the response of coupled interconnect systems. Following a strict mathematical approach, second order models for the transfer functions of coupled RC trees based on the first and second moments of the impulse response are developed. The 2-pole-1-zero transfer function that is the best possible from the available information is obtained for the signal path from each driver to the output in multiple aggressor systems. This allows the complete response to be estimated accurately by summing up the individual waveforms. The model represents the minimum complexity for a 2-pole-1-zero estimate, for this class of circuits. Also proposed are new techniques for the optimisation of wires in on-chip buses. Rather than minimising the delay over each individual wire, the configuration that maximises the total bandwidth over a number of parallel wires is investigated. It is shown from simulations that there is a unique optimal solution which does not necessarily translate to the maximum possible number of wires, and in fact deviates considerably from it when the resources available for repeaters are limited. Analytic guidelines dependent only on process parameters are derived for optimal sizing of wires and repeaters.
Finally, regular tiled architectures with a common communication backplane are being proposed as the most efficient way to implement systems-on-chip in the deep submicron regime. This thesis also considers the feasibility of implementing a regular packet-switched network-on-chip in a typical future deep submicron technology. All major physical issues and challenges are discussed for two different architectures and important limitations are identified.
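The thesis builds its 2-pole-1-zero models from the first and second moments of the impulse response. The first moment of an RC tree is the classical Elmore delay, computable from shared-path resistances; the Python sketch below is a standard textbook computation, not code from the thesis, and its quadratic-time formulation is chosen for clarity rather than speed.

```python
def elmore_delays(parent, R, C):
    """First moment (Elmore delay) at every node of an RC tree.
    parent[i] is the parent of node i (parent[0] = -1 for the root),
    R[i] is the resistance of the branch into node i, and C[i] is the
    capacitance at node i. Uses the shared-path-resistance formula
    m1_i = sum_k R(shared path of i and k) * C_k."""
    n = len(parent)

    def path(i):
        # branches (identified by their downstream node) from root to i
        nodes = set()
        while i > 0:
            nodes.add(i)
            i = parent[i]
        return nodes

    paths = [path(i) for i in range(n)]
    delays = []
    for i in range(n):
        d = 0.0
        for k in range(n):
            shared = paths[i] & paths[k]
            d += sum(R[b] for b in shared) * C[k]
        delays.append(d)
    return delays
```

For a two-segment RC line (R1, C1 then R2, C2) this reproduces the familiar closed form R1·C1 + (R1 + R2)·C2 at the far end; the thesis's contribution is to carry the second moment as well, yielding a 2-pole-1-zero estimate rather than a single time constant.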
69

Topics in Network Utility Maximization : Interior Point and Finite-step Methods

Akhil, P T January 2017 (has links) (PDF)
Network utility maximization has emerged as a powerful tool in studying flow control, resource allocation and other cross-layer optimization problems. In this work, we study a flow control problem in the optimization framework. The objective is to maximize the sum utility of the users subject to the flow constraints of the network. The utility maximization is solved in a distributed setting: the network operator does not know the user utility functions, and the users know neither the rate choices of other users nor the flow constraints of the network. We build upon a popular decomposition technique proposed by Kelly [Eur. Trans. Telecommun., 8(1), 1997] to solve the utility maximization problem in this distributed setting. The technique decomposes the utility maximization problem into a user problem, solved by each user, and a network problem, solved by the network. We propose an iterative algorithm based on this decomposition. In each iteration, the users communicate to the network their willingness to pay for the network resources, and the network allocates rates in a proportionally fair manner based on the prices communicated by the users. The new feature of the proposed algorithm is that the rates allocated by the network remain feasible at all times. We show that the iterates put out by the algorithm asymptotically track a differential inclusion, and, via Lyapunov theory, that the solution to the differential inclusion converges to the system optimal point. As a benchmark we use a popular algorithm due to Kelly et al. [J. of the Oper. Res. Soc., 49(3), 1998] that involves fast user updates coupled with slow network updates in the form of additive increase and multiplicative decrease of the user flows. The proposed algorithm may be viewed as one with fast user updates and fast network updates that keeps the iterates feasible at all times.
Simulations suggest that our proposed algorithm converges faster than the aforementioned benchmark algorithm. When the flows originate or terminate at a single node, the network problem is the maximization of a so-called d-separable objective function over the bases of a polymatroid. The solution is the lexicographically optimal base of the polymatroid. We map the problem of finding the lexicographically optimal base of a polymatroid to the geometrical problem of finding the concave cover of a set of points on a two-dimensional plane. We also describe an algorithm that finds the concave cover in linear time. Next, we consider the minimization of a more general objective function, i.e., a separable convex function, over the bases of a polymatroid with a special structure. We propose a novel decomposition algorithm and show the proof of correctness and optimality of the algorithm via the theory of polymatroids. Further, motivated by the need to handle piece-wise linear concave utility functions, we extend the decomposition algorithm to handle the case when the separable convex functions are not continuously differentiable or not strictly convex. We then provide a proof of its correctness and optimality.
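The Kelly decomposition alluded to above has a particularly clean form on a single shared link: the network problem (maximize Σ w_i log x_i subject to Σ x_i ≤ C) has the closed-form proportionally fair solution x_i = C·w_i/Σ w_j, and each user's problem reduces to reporting w_i = x_i·U_i′(x_i). The Python sketch below is our single-link illustration of that alternation, not the thesis algorithm, which additionally keeps multi-link iterates feasible at all times.

```python
def network_allocation(w, capacity):
    """Network problem on one shared link: proportionally fair rates
    for willingness-to-pay vector w (maximize sum w_i log x_i subject
    to sum x_i <= capacity), solved in closed form."""
    total = sum(w)
    return [capacity * wi / total for wi in w]

def kelly_iteration(alphas, capacity, rounds=20):
    """Alternate user updates w_i = x_i * U_i'(x_i) and network
    allocations for utilities U_i(x) = alpha_i * log x. With log
    utilities the reported w_i equals alpha_i, so the alternation
    settles immediately on the weighted proportionally fair point."""
    n = len(alphas)
    x = [capacity / n] * n                 # feasible starting rates
    for _ in range(rounds):
        w = [xi * (a / xi) for xi, a in zip(x, alphas)]  # = alpha_i here
        x = network_allocation(w, capacity)
    return x
```

Note that every iterate allocates exactly the link capacity, mirroring (in this trivial setting) the feasibility-at-all-times property that distinguishes the proposed algorithm from the additive-increase/multiplicative-decrease benchmark.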
70

Training of Hidden Markov models as an instance of the expectation maximization algorithm

Majewsky, Stefan 27 July 2017 (has links) (PDF)
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, in particular its correctness and convergence properties.
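The Baum-Welch algorithm discussed here can be sketched compactly. The NumPy implementation below is a textbook version without numerical scaling (so it is only suitable for short sequences), and the function names are ours, not from this work or [BSV15]; it performs one EM update and exhibits the characteristic EM guarantee that the data likelihood never decreases.

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: alpha[t, i] = P(o_1..o_t, state_t = i)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """Backward algorithm: beta[t, i] = P(o_{t+1}..o_T | state_t = i)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(A, B, pi, obs):
    """One EM (Baum-Welch) update of (A, B, pi) from one observation
    sequence: E-step computes state and transition posteriors, M-step
    re-normalizes their expected counts."""
    T, N = len(obs), len(pi)
    alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood            # P(state_t = i | obs)
    xi = np.zeros((T - 1, N, N))                 # transition posteriors
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A * B[:, obs[t + 1]]
                 * beta[t + 1]) / likelihood
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi
```

Running one step on any parameter setting and re-scoring the sequence with the forward algorithm shows the likelihood is non-decreasing, which is exactly the property inherited from the generic EM algorithm.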
