61

Globalization On the Ground: Health, Development, and Volunteerism in Meatu, Tanzania

Nichols-Belo, Amy 20 August 2003 (has links)
AHEAD (Adventures in Health, Education, and Agricultural Development) is a small grass-roots non-governmental organization working in the rural Meatu District in Northern Tanzania. The AHEAD project employs Tanzanian nurses who provide health education, child weighing and nutritional counseling, family planning, and antenatal services. AHEAD has recently developed a water quality testing initiative that uses solar pasteurization to combat unsafe water supplies. Dr. Robert Metcalf, an AHEAD volunteer, offers "expertise" to Meatu through the transfer of solar cooking technology. Each summer, AHEAD brings volunteers into this setting who carry with them both "altruistic" and non-altruistic reasons for volunteering, economic and social capital, and a taste of the world beyond Meatu. This thesis looks at the Summer 2001 AHEAD experience ethnographically from three perspectives: 1) as public health practice, 2) in relation to the contested domain of international "development", and 3) situated within the larger literature of non-profit and volunteer action research. These three snapshots of AHEAD suggest a project of globalization, theorized as the flow of people, goods, and information across boundaries. / Master of Science
62

Analysis and feedback control of the scanning laser epitaxy process applied to nickel-base superalloys

Bansal, Rohan 08 April 2013 (has links)
Scanning Laser Epitaxy (SLE) is a new layer-by-layer additive manufacturing process being developed in the Direct Digital Manufacturing Laboratory at Georgia Tech. SLE allows for the fabrication of three-dimensional objects with specified microstructure through the controlled melting and re-solidification of a metal powder placed atop a base substrate. This dissertation discusses the work done to date on assessing the feasibility of using SLE to both repair single crystal (SX) turbine airfoils and manufacture functionally graded turbine components. Current processes such as selective laser melting (SLM) are not able to create structures with defined microstructure and often have issues with warping of underlying layers due to the high temperature gradients present when scanning a high-power laser beam. Additionally, other methods of repair and buildup have typically been plagued by crack formation, equiaxed grains, stray grains, and grain multiplication that can occur when dendrite arms are separated from their main dendrites due to remelting. In this work, it is shown that the SLE process is capable of creating fully dense, crack-free equiaxed, directionally-solidified, and SX structures. The SLE process, though, is found to be currently constrained by the cumbersome method of choosing proper parameters and a relative lack of repeatability. Therefore, it is hypothesized that a real-time feedback control scheme based upon a robust offline model will be necessary both to create specified defect-free microstructures and to improve the repeatability of the process enough to allow for multi-layer growth. The proposed control schemes are based upon temperature data feedback provided at high frame rate by a thermal imaging camera. This data is used in both PID and model reference adaptive control (MRAC) schemes and drives the melt pool temperature during processing towards a reference melt pool temperature that has been found to give a desired microstructure in the robust offline model of the process. The real-time control schemes will enable the groundbreaking capabilities of the SLE process to create engine-ready net shape turbine components from raw powder material.
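To make the feedback scheme concrete, here is a minimal sketch of a discrete PID loop of the kind described, driving a measured melt-pool temperature toward a reference value. The gains, the sample time, and the toy thermal plant are illustrative assumptions, not values from the dissertation; an MRAC variant would adapt the gains online instead of fixing them, but the measurement-to-command structure is the same.

```python
# Illustrative sketch (not the dissertation's implementation): a discrete PID
# loop that drives a measured melt-pool temperature toward a reference value.
# Gains, sample time, and the toy plant below are hypothetical.

class MeltPoolPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measurement):
        """Return a laser-power correction from the temperature error."""
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


if __name__ == "__main__":
    # Toy first-order thermal plant standing in for the melt pool (Kelvin).
    pid = MeltPoolPID(kp=0.8, ki=0.4, kd=0.05, dt=0.01)
    temperature, reference = 300.0, 1650.0
    for _ in range(1000):
        power = pid.update(reference, temperature)
        temperature += 0.01 * (power - 0.002 * (temperature - 300.0))
    print(f"final temperature: {temperature:.1f} K")
```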
63

[en] A HYBRID NEURO-EVOLUTIONARY APPROACH FOR DYNAMIC WEIGHTED AGGREGATION OF TIME SERIES FORECASTERS / [pt] ABORDAGEM HÍBRIDA NEURO-EVOLUCIONÁRIA PARA PONDERAÇÃO DINÂMICA DE PREVISORES

CESAR DAVID REVELO APRAEZ 18 February 2019 (has links)
[pt] Estudos empíricos na área de séries temporais indicam que combinar modelos preditivos, originados a partir de diferentes técnicas de modelagem, leva a previsões consensuais superiores, em termos de acurácia, às previsões individuais dos modelos envolvidos na combinação. No presente trabalho é apresentada uma metodologia de combinação convexa de modelos estatísticos de previsão, cujo sucesso depende da forma como os pesos de combinação de cada modelo são estimados. Uma Rede Neural Artificial Perceptron Multi-camada (Multilayer Perceptron - MLP) é utilizada para gerar dinamicamente vetores de pesos ao longo do horizonte de previsão, sendo estes dependentes da contribuição individual de cada previsor observada nos dados históricos da série. O ajuste dos parâmetros da rede MLP é efetuado através de um algoritmo de treinamento híbrido, que integra técnicas de busca global, baseadas em computação evolucionária, junto com o algoritmo de busca local backpropagation, de modo a otimizar de forma simultânea tanto os pesos quanto a arquitetura da rede, visando, assim, a gerar de forma automática um modelo de ponderação dinâmica de previsores de alto desempenho. O modelo proposto, batizado de Neural Expert Weighting - Genetic Algorithm (NEW-GA), foi avaliado em diversos experimentos comparativos com outros modelos de ponderação de previsores, assim como também com os modelos individuais envolvidos na combinação, contemplando 15 séries temporais divididas em dois estudos de casos: séries de derivados de petróleo e séries da versão reduzida da competição NN3, uma competição entre metodologias de previsão, com maior ênfase nos modelos baseados em Redes Neurais. Os resultados demonstraram o potencial do NEW-GA em fornecer modelos acurados de previsão de séries temporais. / [en] Empirical studies on time series indicate that combining forecasting models generated from different modeling techniques leads to consensus forecasts that are more accurate than the forecasts of the individual models involved in the combination scheme. In this work, we present a methodology for the convex combination of statistical forecasting models, whose success depends on how the combination weights of each model are estimated. A Multilayer Perceptron (MLP) artificial neural network is used to dynamically generate weighting vectors over the forecast horizon, which depend on the individual contribution of each forecaster observed in the historical data of the series. The MLP network parameters are adjusted via a hybrid training algorithm that integrates global search techniques, based on evolutionary computation, with the backpropagation local search algorithm, in order to simultaneously optimize both the weights and the network architecture. This approach aims to automatically generate a dynamic weighted forecast aggregation model with high performance. The proposed model, called Neural Expert Weighting - Genetic Algorithm (NEW-GA), was compared with other forecaster combination models, as well as with the individual models involved in the combination scheme, on 15 time series divided into two case studies: petroleum product series and the reduced set of the NN3 forecasting competition, a competition between forecasting methodologies with an emphasis on neural network models. The results obtained demonstrate the potential of NEW-GA to provide accurate models for time series forecasting.
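As a rough illustration of dynamically weighted convex combination (not the NEW-GA model itself, which trains an MLP with a hybrid genetic/backpropagation algorithm), the sketch below recomputes the combination weights at each step from the recent errors of each forecaster via a softmax; the data and parameters are synthetic.

```python
# Illustrative sketch only: convex combination of several forecasters whose
# weights are recomputed at each step from recent individual errors (a
# softmax stand-in for the MLP/GA weighting model described in the abstract).
import numpy as np

def dynamic_convex_combination(forecasts, actuals, window=5, temperature=1.0):
    """forecasts: (n_models, n_steps) array; actuals: (n_steps,) array."""
    n_models, n_steps = forecasts.shape
    combined = np.empty(n_steps)
    for t in range(n_steps):
        lo = max(0, t - window)
        if t == 0:
            weights = np.full(n_models, 1.0 / n_models)     # no history yet
        else:
            recent_err = np.mean(np.abs(forecasts[:, lo:t] - actuals[lo:t]), axis=1)
            weights = np.exp(-recent_err / temperature)
            weights /= weights.sum()          # convex: non-negative, sums to 1
        combined[t] = weights @ forecasts[:, t]
    return combined

# Toy usage with three synthetic forecasters of a noisy sine series.
rng = np.random.default_rng(0)
time = np.arange(60)
actual = np.sin(0.2 * time)
models = np.stack([actual + rng.normal(0, s, time.size) for s in (0.1, 0.3, 0.6)])
print(dynamic_convex_combination(models, actual)[:5])
```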
64

雙人決策秘書問題的研究 / A Variation of Two Decision Makers in a Secretary Problem

周冠群, Chou, Guan-Chun Unknown Date (has links)
Chen, Rosenberg和Shepp(1997)的“雙人決策者的秘書問題“(A Secretary Problem with Two Decision Makers),探討在完整訊息(Full Information)與選擇次序不變的情況下,具有優先選擇權的決策者佔有較大優勢。這裡所謂的優勢意指在雙方最終選擇的大小為勝負條件所產生獲勝機率的比較。而本篇文章主要是延伸此一探討,意即在若不變動兩者選擇的次序,但賦予後選擇決策者較多資訊的條件下,能否平衡雙方的優劣勢。我們首先討論後決策者擁有預知下一步(One-step look-ahead)資訊能力的條件下,雙方優勢的改變;隨之若是在後決策者能預知完全資訊的情況下,是否能平衡雙方的優劣勢。而事實上,即便在後決策者擁有所有資訊的條件,仍無法完全改變此一情況;更進一步而言,先選擇決策者甚至在不知道後決策者已掌握了所有資訊的情況下,仍可佔有獲勝機率大於後決策者的優勢。這裡我們將提供理論與理論上的數值結果。 / Chen, Rosenberg, and Shepp (1997) considered a variation of the "secretary problem" in which the salary demands of a group of applicants come from a known and continuous distribution (i.e., the full information case) and these applicants are interviewed sequentially by two managers, say, I and II. For every applicant, manager I has the right to interview and hire him/her first. If manager I rejects the applicant, manager II can interview him/her. No recall is allowed once an applicant is rejected by both managers, and neither manager can interview and hire another applicant once he/she has hired one. The manager who hires the applicant with the lower salary wins the game. Chen et al. show that manager I has a bigger winning chance than manager II in the full information case. This study extends the paper by Chen et al. by giving extra information to manager II. In particular, suppose that manager II can look a few applicants ahead, i.e., he/she knows the salary demands of applicants before manager I interviews them. However, under the full-information assumption, even if manager II is a clairvoyant who can see every salary demand that will arrive in the future, his/her winning probability is still less than that of manager I. We provide a theoretical proof and simulations to confirm this result.
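The game setup can be illustrated with a Monte Carlo simulation. The sketch below uses simple fixed-threshold policies for both managers, not the optimal or clairvoyant strategies analysed in the thesis, so it only demonstrates the mechanics: i.i.d. Uniform(0,1) salary demands, manager I choosing first, and the lower hired salary winning.

```python
# Illustrative Monte Carlo sketch of the two-manager hiring game with
# simplified fixed-threshold policies (NOT the strategies from the thesis).
import random

def play(n, thr1, thr2):
    salaries = [random.random() for _ in range(n)]
    hire1 = hire2 = None
    for i, salary in enumerate(salaries):
        last = (i == n - 1)
        if hire1 is None and (salary < thr1 or (last and hire2 is not None)):
            hire1 = salary            # manager I takes the applicant first
            continue                  # manager II never sees this applicant
        if hire2 is None and (salary < thr2 or last):
            hire2 = salary            # manager II takes the rejected applicant
    if hire1 is None or hire2 is None:
        return None                   # a manager ended up without a hire; skip
    return 1 if hire1 < hire2 else 2

def estimate(n=10, thr1=0.3, thr2=0.3, trials=100_000):
    wins, games = {1: 0, 2: 0}, 0
    for _ in range(trials):
        w = play(n, thr1, thr2)
        if w is not None:
            wins[w] += 1
            games += 1
    return wins[1] / games, wins[2] / games

if __name__ == "__main__":
    p1, p2 = estimate()
    print(f"P(manager I wins) ~ {p1:.3f}, P(manager II wins) ~ {p2:.3f}")
```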
65

Short Term Electricity Price Forecasting In Turkish Electricity Market

Ozguner, Erdem 01 November 2012 (has links) (PDF)
With the aim of higher economic efficiency, considerable and radical changes have occurred in the worldwide electricity sector since the beginning of the 1980s. Before then, the electricity sector was controlled by state-owned, vertically integrated monopolies which managed and controlled all generation, transmission, distribution, and retail activities, and consumers bought electricity at a price set by these monopolies. After the liberalization and restructuring of the electric power sector, separation and privatization of these activities have become widespread. The main purpose is to ensure competition in the market, where suppliers and consumers compete with each other to sell or buy electricity, and consumers buy electricity at a price which is based on competition and determined according to the sell and purchase bids given by producers and customers, rather than a price set by the government. Due to increasing competition in the electricity market, accurate electricity price forecasts have become a vital need for all market participants. Accurate forecasts of electricity prices can help suppliers derive their bidding strategy and optimally design their bilateral agreements in order to maximize their profits and hedge against risks. Consumers need accurate price forecasts to derive their electricity usage and bidding strategy for minimizing their utilization costs. This thesis presents in detail the determination of the system day-ahead price (SGOF) in the day-ahead market and the system marginal price (SMF) in the balancing power market, and develops artificial neural network (ANN) models together with multiple linear regression models to forecast these electricity prices in the Turkish electricity market. The methods used for price forecasting in the literature are also discussed and compared. A series of historical data from the Turkish electricity market is used to understand the characteristics of the market, and the input factors which influence the electricity price are determined for creating ANN models for price forecasting in this market. Since the factors influencing SGOF and SMF are different, different ANN models are developed for forecasting these prices. For SGOF forecasting, historical price and load values are enough for accurate forecasting; for SMF forecasting, however, the net instruction volume arising from real-time system imbalances is needed in order to increase the forecasting accuracy.
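As an illustration of the two model families compared in the thesis, the sketch below fits a multiple linear regression model and a small MLP to lagged price and load features on synthetic hourly data. The data, the lag choices, and the network size are assumptions for demonstration only, not the SGOF/SMF input factors identified in the thesis.

```python
# Illustrative sketch with synthetic data: forecast a price series from lagged
# price and load values using multiple linear regression and a small MLP.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
hours = 24 * 120
load = 1000 + 200 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 30, hours)
price = 0.05 * load + 10 * np.sin(2 * np.pi * np.arange(hours) / 168) + rng.normal(0, 3, hours)

# Features: price and load 24 and 168 hours earlier (same hour yesterday / last week).
lags = [24, 168]
start = max(lags)
X = np.column_stack([price[start - l:hours - l] for l in lags] +
                    [load[start - l:hours - l] for l in lags])
y = price[start:]

split = len(y) - 24 * 7                       # hold out the last week
mlr = LinearRegression().fit(X[:split], y[:split])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X[:split], y[:split])

for name, model in [("MLR", mlr), ("ANN", ann)]:
    err = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
    print(f"{name} mean absolute error on the hold-out week: {err:.2f}")
```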
66

Practical Real-Time with Look-Ahead Scheduling / Praktikable Echtzeit durch vorausschauende Einplanung

Roitzsch, Michael 21 October 2013 (has links) (PDF)
In my dissertation, I present ATLAS — the Auto-Training Look-Ahead Scheduler. ATLAS improves service to applications with regard to two non-functional properties: timeliness and overload detection. Timeliness is an important requirement to ensure user interface responsiveness and the smoothness of multimedia operations. Overload can occur when applications ask for more computation time than the machine can offer. Interactive systems have to handle overload situations dynamically at runtime. ATLAS provides timely service to applications, accessible through an easy-to-use interface. Deadlines specify timing requirements, workload metrics describe jobs. ATLAS employs machine learning to predict job execution times. Deadline misses are detected before they occur, so applications can react early.
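A minimal sketch of the auto-training idea, with an entirely hypothetical interface rather than the ATLAS implementation: jobs carry a deadline and workload metrics, execution times are predicted from completed jobs by least squares, and a likely deadline miss is flagged at submission time, before it occurs.

```python
# Illustrative sketch only (hypothetical interface, not the ATLAS API).
import numpy as np

class LookAheadScheduler:
    def __init__(self, n_metrics):
        self.X, self.y = [], []                  # history of (metrics, runtime)
        self.coef = np.zeros(n_metrics)

    def job_completed(self, metrics, measured_runtime):
        """Auto-training step: refit the runtime predictor on observed jobs."""
        self.X.append(metrics)
        self.y.append(measured_runtime)
        self.coef, *_ = np.linalg.lstsq(np.array(self.X), np.array(self.y), rcond=None)

    def submit(self, metrics, deadline, now):
        predicted = float(np.dot(self.coef, metrics))
        overload = predicted > (deadline - now)  # detected before the miss occurs
        return predicted, overload

# Toy usage: true runtime is roughly 2 ms per work unit plus 1 ms per frame.
sched = LookAheadScheduler(n_metrics=2)
rng = np.random.default_rng(1)
for _ in range(50):
    m = rng.uniform(1, 10, size=2)
    sched.job_completed(m, 2.0 * m[0] + 1.0 * m[1] + rng.normal(0, 0.1))
print(sched.submit(metrics=np.array([8.0, 4.0]), deadline=15.0, now=0.0))
```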
67

Σχεδίαση παράλληλης διάταξης επεξεργαστών σε ένα chip : δημιουργία και μελέτη high radix RNS αθροιστή

Γιαννοπούλου, Λεμονιά 09 July 2013 (has links)
Η άθροιση μεγάλων αριθμών είναι μια χρονοβόρα και ενεργοβόρα διαδικασία. Πολλές μέθοδοι έχουν αναπτυχθεί για να μειωθεί η καθυστέρηση υπολογισμού του αθροίσματος λόγω της μετάδοσης κρατουμένου. Τέτοιες είναι η πρόβλεψη κρατουμένου (carry look ahead) και η επιλογή κρατουμένου (carry select). Αυτές οι αρχιτεκτονικές δεν είναι επαρκώς επεκτάσιμες για μεγάλους αριθμούς (με πολλά bits) ή πολλούς αριθμούς, διότι παράγονται μεγάλα και ενεργοβόρα κυκλώματα. Στην παρούσα εργασία μελετάται η μέθοδος υπολοίπου (RNS), η οποία χρησιμοποιεί συστήματα αριθμών μεγαλύτερα από το δυαδικό. Ορίζεται μια βάση τριών αριθμών και οι αριθμοί αναπαρίστανται στα εκάστοτε τρία συστήματα της βάσης. Η άθροιση γίνεται παράλληλα σε κάθε σύστημα και τέλος οι αριθμοί μετατρέπονται πάλι στο δυαδικό. Τα πλεονεκτήματα αυτής της προσέγγισης είναι η παραλληλία και η απουσία μεγάλων κυκλωμάτων διάδοσης κρατουμένου. Το μειονέκτημα είναι ότι χρειάζονται κυκλώματα μετατροπής από και προς το δυαδικό σύστημα. Αυτού του είδους οι αθροιστές συγκρίνονται για κατανάλωση ενέργειας με τους γνωστούς carry look ahead και carry select. Διαπιστώθηκε ότι οι RNS αθροιστές καταναλώνουν λιγότερη ενέργεια. / The addition of numbers with many bits is a time- and power-consuming task. Many methods have been developed to reduce the delay in computing the sum caused by carry propagation, such as Carry Look Ahead (CLA) and Carry Select (CS). These techniques do not scale well to numbers with many bits, or to sets of many numbers, because the resulting circuits are large and power hungry. In this thesis, the Residue Number System (RNS) technique is investigated, which uses number systems with radices larger than two. A base of three moduli is defined, and the numbers that participate in the sum are represented uniquely with respect to each modulus of the base. The addition is performed in parallel in each modulus channel, and finally the result is transformed back to the binary number system. The advantages of this technique are the parallelization of the process and the absence of long carry-propagation circuits. The disadvantage is that conversion circuits from and to the binary system are needed. The RNS adders are compared with CLA and CS adders in terms of power consumption, and it is found that the RNS adders consume less energy.
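The RNS scheme described above can be sketched in a few lines: convert the operands to residues with respect to a base of pairwise-coprime moduli, add each residue channel independently so no carry crosses channels, and convert the result back to binary with the Chinese Remainder Theorem. The base (7, 15, 16) below is an arbitrary example, not the one studied in the thesis; in hardware each channel would be a small independent adder, which is where the parallelism and the absence of long carry chains come from, while the conversions are the overhead being weighed against that gain.

```python
# Illustrative RNS addition with a three-modulus base (example moduli only).
from math import prod

MODULI = (7, 15, 16)               # pairwise coprime; dynamic range M = 1680
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Each channel is an independent small adder; no inter-channel carry.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese Remainder Theorem reconstruction back to ordinary binary.
    total = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        total += ri * Mi * pow(Mi, -1, m)    # pow(Mi, -1, m): modular inverse
    return total % M

x, y = 615, 842
s = rns_add(to_rns(x), to_rns(y))
assert from_rns(s) == (x + y) % M
print(x, "+", y, "=", from_rns(s), "via residues", s)
```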
68

[en] FATIGUE CRACK PROPAGATION MODELLING BY ACCUMULATED DAMAGE INSIDE PLASTIC ZONE / [pt] MODELAGEM DA PROPAGAÇÃO DA TRINCA DE FADIGA ATRAVÉS DO DANO ACUMULADO NA ZONA PLÁSTICA

SAMUEL ELIAS FERREIRA 13 December 2018 (has links)
[pt] Após identificar que uma trinca de fadiga permanecia fechada durante parte do ciclo, Elber assumiu que o dano era induzido apenas pela fração do carregamento acima da carga necessária para abrir a trinca. Diversos modelos foram propostos utilizando o Delta Keff como força motriz da propagação, como os modelos da faixa plástica (strip-yield), que são amplamente utilizados para prever vida residual de componentes trincados. Embora o fenômeno do fechamento da trinca esteja provado, sua real importância na propagação da trinca de fadiga ainda é controversa. Outros mecanismos, além do fechamento da trinca, foram utilizados na tentativa de explicar os efeitos de sequência do carregamento na propagação em amplitude variável como o campo de tensão residual à frente da trinca. Mesmo após mais de 50 anos de pesquisas desde a proposição da primeira regra de propagação por Paris ainda não há consenso nem sobre o mecanismo nem sobre a modelagem. Esse trabalho tem como objetivo apresentar uma modelagem para prever propagação da trinca de fadiga com base na hipótese de que o dano acumulado por deformação plástica seria a força motriz para propagação. A modelagem proposta se diferencia de outros modelos de acúmulo de dano por permitir que o contato existente entre as superfícies da trinca exerça influência sobre as deformações plásticas à frente de sua ponta. Os resultados mostram que a modelagem proposta possui capacidade de reproduzir curvas de propagação semelhante ao modelo strip-yield. / [en] After identifying that a fatigue crack remains closed during part of the load cycle, Elber assumed that the damage was induced only by the part of the cycle above the load required to open the crack. Several models based on Delta Keff were developed, such as the strip-yield ones, which are widely used to predict the residual lives of cracked components. Although the crack closure phenomenon is well proven, its actual significance for propagation is still controversial. Other mechanisms beyond crack closure, such as the residual stress field ahead of the crack tip, have been used in trying to explain the sequence effects in variable amplitude crack propagation. However, even after more than 50 years of research since the first propagation rule proposed by Paris, there is no consensus about either the mechanism or the modelling. This work aims to present a model to predict fatigue crack growth based on the hypothesis that the damage accumulated by cyclic plastic strain is the driving force for propagation. The proposed model differs from other damage accumulation models by allowing the contact between the crack surfaces to influence the plastic strain ahead of the crack tip. The results show that the proposed model is able to reproduce propagation curves similar to those of the strip-yield model.
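For context, the classical Elber-style calculation referenced above (not the damage-accumulation model proposed in this thesis) integrates a Paris-type rule driven by Delta Keff, i.e. the part of the stress-intensity range above the crack-opening level. A sketch with arbitrary material constants and a constant opening ratio:

```python
# Illustrative Delta-Keff-driven crack growth integration (classical approach
# discussed in the abstract, not the thesis's model). Constants are arbitrary.
import math

C, m = 1e-11, 3.0          # Paris constants (da/dN in m/cycle, K in MPa*sqrt(m))
d_sigma, R = 100.0, 0.1    # stress range [MPa] and load ratio
opening_ratio = 0.3        # Kop/Kmax, assumed constant for this sketch

def cycles_to_grow(a0, af, da=1e-5):
    """Integrate crack growth from length a0 to af [m] in small increments."""
    a, n = a0, 0.0
    while a < af:
        k_max = (d_sigma / (1.0 - R)) * math.sqrt(math.pi * a)  # center crack, Y = 1
        k_op = opening_ratio * k_max
        dk_eff = k_max - k_op              # effective range: crack fully open part
        dadn = C * dk_eff ** m             # Paris-type growth rate
        n += da / dadn
        a += da
    return n

print(f"{cycles_to_grow(1e-3, 1e-2):,.0f} cycles from a 1 mm to a 10 mm crack")
```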
69

Metascheduling of HPC Jobs in Day-Ahead Electricity Markets

Murali, Prakash January 2014 (has links) (PDF)
High performance grid computing is a key enabler of large scale collaborative computational science. With the promise of exascale computing, high performance grid systems are expected to incur electricity bills that grow super-linearly over time. In order to achieve cost effectiveness in these systems, it is essential for the scheduling algorithms to exploit the electricity price variations, both in space and time, that are prevalent in dynamic electricity price markets. Typically, a job submission in the batch queues used in these systems incurs a variable queue waiting time before the resources necessary for its execution become available. In variably-priced electricity markets, the electricity prices fluctuate over discrete intervals of time. Hence, the electricity prices incurred during a job execution will depend on the start and end time of the job. Our thesis consists of two parts. In the first part, we develop a method to predict the start and end time of a job at each system in the grid. In batch queue systems, similar jobs that arrive during similar system queue and processor states experience similar queue waiting times. We have developed an adaptive algorithm for the prediction of queue waiting times on a parallel system based on spatial clustering of the history of job submissions at the system. We represent each job as a point in a feature space using the job characteristics, the queue state, and the state of the compute nodes at the time of job submission. For each incoming job, we use an adaptive distance function, which assigns a real-valued distance to each history job submission based on its similarity to the incoming job. Using a spatial clustering algorithm and a simple empirical characterization of the system states, we identify an appropriate prediction model for the job from among the standard deviation minimization method, ridge regression, and k-weighted averaging. We have evaluated our adaptive prediction framework using historical production workload traces of many supercomputer systems with varying system and job characteristics, including two Top500 systems. Across workloads, our predictions result in up to a 22% reduction in the average absolute error and up to a 56% reduction in the percentage prediction errors over existing techniques. To predict the execution time of a job, we use a simple model based on the estimate of the job runtime provided by the user at the time of job submission. In the second part of the thesis, we have developed a metascheduling algorithm that schedules jobs to the individual batch systems of a grid to reduce both the electricity prices for the systems and the response times for the users. We formulate the metascheduling problem as a Minimum Cost Maximum Flow problem and leverage execution period and electricity price predictions to accurately estimate the cost of job execution at a system. The network simplex algorithm is used to minimize the response time and electricity cost of job execution using an appropriate flow network. Using trace-based simulation with real and synthetic workload traces, and real electricity price data sets, we demonstrate our approach on two currently operational grids, XSEDE and NorduGrid. Our experimental setup collectively comprises more than 433K processors spread across 58 compute systems in 17 geographically distributed locations. Experiments show that our approach simultaneously optimizes the total electricity cost and the average response time of the grid, without being unfair to users of the local batch systems.
Considering that currently operational HPC systems budget millions of dollars for annual operational costs, our approach, which can save $167K in annual electricity bills compared to a baseline strategy for one of the grids in our test suite with over 76,000 cores, is very relevant for reducing grid operational costs in the coming years.
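A toy version of the flow formulation can be written with the network simplex routine in networkx; the job and site names below are hypothetical and the edge costs are made-up stand-ins for the weighted combination of predicted queue wait and electricity cost used in the thesis.

```python
# Illustrative sketch only: place jobs on grid systems via minimum-cost flow.
import networkx as nx

jobs = ["job1", "job2", "job3"]
systems = {"siteA": 2, "siteB": 1}            # how many of these jobs each site can take

# cost[job][site] ~ alpha * predicted_wait + beta * predicted_electricity_cost (made up)
cost = {"job1": {"siteA": 5, "siteB": 9},
        "job2": {"siteA": 7, "siteB": 4},
        "job3": {"siteA": 6, "siteB": 8}}

G = nx.DiGraph()
G.add_node("src", demand=-len(jobs))          # every job must be placed
G.add_node("sink", demand=len(jobs))
for j in jobs:
    G.add_edge("src", j, capacity=1, weight=0)
    for s in systems:
        G.add_edge(j, s, capacity=1, weight=cost[j][s])
for s, cap in systems.items():
    G.add_edge(s, "sink", capacity=cap, weight=0)

flow_cost, flow = nx.network_simplex(G)       # the algorithm named in the abstract
assignment = {j: s for j in jobs for s in systems if flow[j].get(s, 0) == 1}
print(flow_cost, assignment)
```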
70

Predictive Energy Management of Long-Haul Hybrid Trucks: Using Quadratic Programming and Branch-and-Bound

Jonsson Holm, Erik January 2021 (has links)
This thesis presents a predictive energy management controller for long-haul hybrid trucks. In a receding horizon control framework, the vehicle speed reference, battery energy reference, and engine on/off decision are optimized over a prediction horizon. A mixed-integer quadratic program (MIQP) is formulated by making modelling approximations and by including the binary engine on/off decision in the optimal control problem. The branch-and-bound algorithm is applied to solve this problem. Simulation results show fuel consumption reductions of 10-15%, depending on the driving cycle, compared to a conventional truck. The hybrid truck without the predictive control saves significantly less, reducing fuel consumption by 3-8%. A sensitivity analysis studies the effects on branch-and-bound iterations and fuel consumption when parameters related to the binary engine on/off decision are varied. In addition, it is shown that the control strategy can maintain a safe time gap to a leading vehicle. Also, the introduction of the battery temperature state makes it possible to approximately model the dynamic battery power limitations over the prediction horizon. The main contributions of the thesis are the MIQP control problem formulation, the strategy to solve it with the branch-and-bound method, and the sensitivity analysis.
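To illustrate the solution strategy rather than the thesis controller itself, the sketch below runs a tiny branch-and-bound loop over the binary variables of a box-constrained MIQP, solving each relaxation with L-BFGS-B; the problem data are arbitrary stand-ins for the engine on/off and continuous decisions.

```python
# Illustrative branch-and-bound for a small MIQP  min 0.5*x'Qx + c'x  with some
# variables restricted to {0, 1} and the rest continuous in [0, 1].
import numpy as np
from scipy.optimize import minimize

def solve_relaxation(Q, c, lo, hi):
    obj = lambda x: 0.5 * x @ Q @ x + c @ x
    grad = lambda x: Q @ x + c
    res = minimize(obj, (lo + hi) / 2.0, jac=grad,
                   bounds=list(zip(lo, hi)), method="L-BFGS-B")
    return res.x, res.fun

def branch_and_bound(Q, c, binary_idx, tol=1e-6):
    n = len(c)
    best_x, best_val = None, np.inf
    stack = [(np.zeros(n), np.ones(n))]                   # nodes = (lower, upper) bounds
    while stack:
        lo, hi = stack.pop()
        x, val = solve_relaxation(Q, c, lo, hi)
        if val >= best_val - tol:                         # bound: prune this subtree
            continue
        frac = [i for i in binary_idx if min(x[i], 1 - x[i]) > tol]
        if not frac:                                      # all binaries integral: leaf
            best_x, best_val = x, val
            continue
        i = max(frac, key=lambda j: min(x[j], 1 - x[j]))  # branch on most fractional
        for fixed in (0.0, 1.0):
            lo2, hi2 = lo.copy(), hi.copy()
            lo2[i] = hi2[i] = fixed
            stack.append((lo2, hi2))
    return best_x, best_val

# Toy 4-variable instance: x0, x1 binary (e.g. engine on/off), x2, x3 continuous.
Q = np.diag([2.0, 2.0, 1.0, 1.0])
c = np.array([-1.5, 0.5, -1.0, -0.2])
x, val = branch_and_bound(Q, c, binary_idx=[0, 1])
print(np.round(x, 3), round(val, 3))
```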
