About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Measuring the Effectiveness of China’s Capital Flow Management and Progress on Capital Account Liberalization

Yow, Xinying 01 January 2016 (has links)
China’s goal of eventually making the renminbi (RMB) “fully convertible” necessarily requires that its capital account be fully liberalized; this paper investigates China’s ongoing changes to its capital controls and its progress on liberalizing the capital account. The first portion of the paper studies deviations from covered interest parity, a common measure of capital controls. Econometric analysis provides evidence of significant and persistent RMB/USD interest rate differentials, calculated from monthly data on 1-month yields for the sample period 1999 to 2014. At the same time, evidence of cointegration between the onshore and offshore yields suggests that capital flows are not fully restricted in the long run. The second portion of the paper analyzes constructed de jure capital control indices based on the IMF’s AREAER documents, following Chen and Qian (2015), and actual capital account flows based on China’s Balance of Payments. The constructed de jure indices quantify the intensity of changes in capital controls, capturing the gradualist style that China adopts in implementing its policies. The index reveals that China has increased its pace of capital account liberalization in recent years compared to the past and, in particular, prioritizes liberalizing controls on outward FDI flows and equity securities inflows. The constructed de jure indices and the respective flows for FDI and equity securities are found to be highly correlated, implying that flows have been responsive to changes in the controls. It also indicates that, prior to the lifting of restrictions, the capital controls had been relatively effective.
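As context for the first portion, a deviation from covered interest parity (CIP) over a 1-month horizon can be written in the standard textbook form below; the notation is generic and not necessarily the exact specification used in the thesis.

```latex
% 1-month CIP deviation for the RMB/USD pair (generic notation):
%   i_t   : onshore 1-month RMB interest rate
%   i_t^* : 1-month USD interest rate
%   S_t   : spot RMB per USD exchange rate
%   F_t   : 1-month forward RMB per USD exchange rate
\delta_t = \left(1 + i_t\right) - \left(1 + i_t^{*}\right)\frac{F_t}{S_t}
```

Under unrestricted capital mobility, arbitrage should keep \(\delta_t\) close to zero; persistent, sizeable deviations are commonly read as evidence of binding capital controls.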
242

An Information Security Control Assessment Methodology for Organizations

Otero, Angel Rafael 01 January 2014 (has links)
In an era of heavy use of and dependence on information systems, the threat of information security incidents that could jeopardize the information held by organizations is increasingly serious. Alarming findings in the literature point to inadequacies in information security practices, particularly the evaluation of information security controls (ISC) in organizations. Research efforts have resulted in various methodologies developed to address the information security controls assessment problem. A closer look at these traditional methodologies highlights various weaknesses that can prevent an effective information security controls assessment in organizations. This dissertation develops a methodology that addresses such weaknesses when evaluating information security controls in organizations. The methodology, implemented with the Fuzzy Logic Toolbox of MATLAB, applies fuzzy set theory, which allows imprecise criteria to be assessed more accurately than traditional methodologies permit. It is argued and evidenced that evaluating information security controls using fuzzy set theory addresses existing weaknesses found in the literature for traditional evaluation methodologies and thus leads to a more thorough and precise assessment. This, in turn, results in a more effective selection of information security controls and enhanced information security in organizations. The main contribution of this research to the information security literature is the development of a fuzzy set theory-based assessment methodology that provides a thorough evaluation of ISC in organizations. The methodology addresses the weaknesses and limitations identified in existing ISC assessment methodologies, resulting in enhanced information security in organizations. The methodology can also be implemented in a spreadsheet or software tool, promoting its use in practical scenarios where highly complex methodologies for ISC selection are impractical. Moreover, the methodology fuses multiple evaluation criteria to provide a holistic view of the overall quality of information security controls, and it is easily extended to include additional evaluation criteria not considered within this dissertation. This is one of the most meaningful contributions of this dissertation. Finally, the methodology provides a mechanism to evaluate the quality of information security controls in various domains. Overall, the methodology presented in this dissertation proved to be a feasible technique for evaluating information security controls in organizations.
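A minimal sketch of the kind of fuzzy-set-based scoring such a methodology builds on, written in Python rather than MATLAB; the criteria, weights, membership functions, and rating scale are hypothetical illustrations, not the dissertation's actual model.

```python
# Minimal fuzzy-set-based control scoring sketch (hypothetical criteria,
# weights and membership functions; not the dissertation's MATLAB model).

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic ratings over a 0-10 adequacy scale for each evaluation criterion.
LOW    = lambda x: triangular(x, 0, 2, 5)
MEDIUM = lambda x: triangular(x, 2, 5, 8)
HIGH   = lambda x: triangular(x, 5, 8, 10)

def assess_control(ratings, weights):
    """Aggregate per-criterion ratings into one crisp adequacy score.

    ratings: criterion -> rating on the 0-10 scale (possibly imprecise input)
    weights: criterion -> relative importance (assumed to sum to 1)
    """
    score = 0.0
    for criterion, x in ratings.items():
        mu_low, mu_med, mu_high = LOW(x), MEDIUM(x), HIGH(x)
        total = (mu_low + mu_med + mu_high) or 1.0
        # Defuzzify with representative values 0, 5 and 10 for LOW/MEDIUM/HIGH.
        crisp = (mu_low * 0 + mu_med * 5 + mu_high * 10) / total
        score += weights[criterion] * crisp
    return score

ratings = {"coverage": 7.5, "cost_effectiveness": 4.0, "auditability": 8.5}
weights = {"coverage": 0.5, "cost_effectiveness": 0.3, "auditability": 0.2}
print(round(assess_control(ratings, weights), 2))  # roughly 7.6 on the 0-10 scale
```

The point of the fuzzy treatment is that an assessor can give imprecise, qualitative judgments ("fairly good coverage") and still obtain a defensible aggregate score, which is what the dissertation argues traditional checklist-style assessments handle poorly.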
243

Complexity Management to design and produce customer-specific hydraulic controls for mobile applications

Krüßmann, Martin, Tischler, Karin 03 May 2016 (has links) (PDF)
Complexity management is the key to success for mobile machinery, where the variety of customers and applications requires individual solutions. This paper presents the way Bosch Rexroth supports each OEM with hydraulic controls – from specification and conception through application and production. It gives examples of how platforms and processes are optimized according to customer needs. The demand for flexible, short-term deliveries is met by agile production using Industry 4.0 technologies.
244

Three Essays in International Macroeconomics

Nanovsky, Simeon Boyanov 01 January 2015 (has links)
This dissertation spans topics related to global trade, oil prices, optimum currency areas, the eurozone, monetary independence, capital controls and the international monetary policy trilemma. It consists of four chapters and three essays. Chapter one provides a brief summary of all three essays. Chapter two investigates the impact of oil prices on global trade. It is concluded that when oil prices increase, countries start trading relatively more with their neighbors. As an application this chapter provides a new estimate of the eurozone effect on trade. Chapter three continues to study the eurozone and asks whether it is an optimum currency area using the member countries’ desired monetary policies. It is concluded that Greece, Spain, and Ireland have desired policies that are the least compatible with the common euro policy and are therefore the least likely to have formed an optimum currency area with the euro. Chapter four provides a new methodology in testing the international trilemma hypothesis. It is concluded that the trilemma holds in the context of the Asian countries.
245

Towards a non-intrusive traffic surveillance system using digital image processing

Lorio, Berino 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: With the increased focus on the use of innovative and state-of-the-art technology in Intelligent Transport Systems (ITS), the need for more accurate and more detailed road traffic flow data has become apparent. Data obtained from vehicle detector loops, which merely act as vehicle presence sensors, is no longer reliable or accurate enough. This type of sensor also has to be inserted into the road surface, temporarily obstructing traffic flow, and must be replaced after pavement reconstruction. One solution to this problem is to develop a traffic surveillance system that uses video image processing. In cities where Intelligent Transport Systems are used extensively, roadways are monitored through closed-circuit television (CCTV) cameras mounted on roadside posts and watched by traffic control centre personnel. These cameras can serve a dual purpose, being used both for human monitoring and as inputs to video image processing systems. This study investigated some of the digital image processing techniques that could be used in a traffic surveillance system. The report leads the reader through the various steps in the processing of a scene by a traffic surveillance system based on feature tracking, and discusses the pitfalls and problems that were experienced. The tracker was tested using three image sequences and the results are presented in the final chapter of this report.
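The feature-tracking pipeline described in this record can be illustrated with a short sketch using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade optical flow. This is a generic illustration of the approach, not the tracker implemented in the thesis, and the input file name is hypothetical.

```python
# Generic feature-tracking sketch: Shi-Tomasi corners tracked frame-to-frame
# with pyramidal Lucas-Kanade optical flow. Illustrative only; not the tracker
# implemented in the thesis. "traffic.avi" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("traffic.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

def detect_features(gray):
    # Strong corner features (vehicle edges, lights, plates) to track.
    return cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)

prev_pts = detect_features(prev_gray)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track each feature into the new frame.
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     prev_pts, None)
    good_new = next_pts[status.flatten() == 1]
    good_old = prev_pts[status.flatten() == 1]

    # Feature displacements approximate vehicle motion between frames;
    # grouping nearby features with similar motion yields vehicle tracks,
    # counts and speed estimates.
    for (x0, y0), (x1, y1) in zip(good_old.reshape(-1, 2),
                                  good_new.reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)

    cv2.imshow("tracks", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break

    prev_gray = gray
    prev_pts = good_new.reshape(-1, 1, 2)
    # Re-detect when too few features survive (vehicles leave, new ones enter).
    if len(prev_pts) < 50:
        prev_pts = detect_features(gray)

cap.release()
cv2.destroyAllWindows()
```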
246

Self-organising traffic control algorithms at signalised intersections

Einhorn, Mark David 04 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: The debilitating social, economic and environmental ramifications of traffic congestion are experienced in large cities the world over. The optimisation of traffic signal timings at signalised road intersections attempts to mitigate these adverse effects of traffic congestion by reducing the delay time experienced by vehicles in a transport network. Today, traffic signal control schemes may be classified into one of two main classes, namely fixed-time traffic signal control strategies, which are typically cyclic in nature, and vehicle-actuated traffic signal control strategies, which are typically acyclic in nature. Generally, cyclic control strategies tend to lack flexibility and are unable to adapt to short-term fluctuations in traffic flow rates, resulting in green times that are either too long or too short. On the other hand, acyclic control strategies tend to lack coordination between intersections, resulting in vehicles being required to stop at the majority of signalised intersections they encounter. Self-organising traffic signal control has been proposed as an attractive alternative form of control which both exhibits flexibility and facilitates global coordination between intersections as a result of localised signal switching policies. Two examples of existing self-organising traffic signal control algorithms from the literature are an algorithm proposed by Lammer and Helbing in 2008 and an algorithm proposed by Gershenson and Rosenblueth in 2012. These algorithms have been shown to outperform both optimised fixed-time traffic signal control techniques and state-of-the-art vehicle-actuated traffic signal control techniques in terms of reducing vehicle delay time in a transport network. A drawback of both of these self-organising approaches, however, is that their effective operation relies on carefully selected parameter values; poorly selected parameter values may render these algorithms very ineffectual.
In this dissertation, three novel self-organising traffic signal control algorithms are proposed. These three algorithms assume the use of existing radar detection sensors mounted at the intersection to provide the necessary input data. The radar detection sensors are capable of detecting and tracking individual vehicles approaching an intersection, providing real-time information pertaining to their physical dimensions, velocities, and ranges from the intersection in terms of both time and distance. The three traffic signal control algorithms are free of any user-specified parameters, relying solely on the data provided by the radar detection sensors to inform their signal switching policies. The first of these traffic signal control algorithms is inspired by inventory control theory, and draws parallels between the monetary costs typically considered in inventory control models and the delay time costs associated with traffic control at signalised intersections, which the algorithm attempts to minimise. The second novel traffic control algorithm is inspired by the chemical process of osmosis, in which solvent molecules move unaided from a region where they are highly concentrated, across a semi-permeable membrane, into a region of high solute molecule concentration. The algorithm models vehicles approaching an intersection as solvent molecules and the physical space available for the vehicles to occupy once they have passed through the intersection as solute molecules. Following this analogy, the intersection is considered to be the semi-permeable membrane. The third traffic control algorithm is a hybrid of the inventory- and osmosis-inspired algorithms together with an intersection utilisation maximisation technique, which prevents unnecessary or prolonged underutilisation of an intersection. The three novel traffic control algorithms, together with the algorithms of Lammer and Helbing, and of Gershenson and Rosenblueth, as well as a fixed-time control algorithm, are implemented in a purpose-built microscopic traffic simulation modelling framework. Several measures are employed to evaluate the relative performances of the algorithms. These measures include the usual mean and maximum delay times incurred by vehicles and the saturation level of the roadways in the transport network, as well as three novel performance indicators: the mean number of stops made by vehicles, their mean normalised delay time, and the mean normalised number of stops made. The algorithms are compared in the context of a linear corridor road network topology as well as a grid road network topology under various traffic flow conditions. The overall performance of the novel hybrid traffic signal control algorithm is found to be superior for the corridor road network topology, while the performance of the osmosis-inspired algorithm is found to be superior for the grid road network topology.
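A minimal sketch of the flavour of self-organising control these algorithms share: each intersection switches based only on locally sensed demand (queued vehicles and anticipated arrivals), with no fixed cycle and no hand-tuned thresholds beyond what the sensors report. This is an illustrative toy rule, not the inventory-, osmosis-inspired, or hybrid algorithms proposed in the dissertation, and all names and numbers are hypothetical.

```python
# Toy self-organising switching rule for one intersection: serve whichever
# approach currently carries the greater "pressure" (queued vehicles plus
# vehicles the radar sensor expects within a short horizon), capped by the
# space available downstream. Illustrative only.
from dataclasses import dataclass

@dataclass
class Approach:
    queue: int             # vehicles currently stopped at the stop line
    arrivals_10s: int      # vehicles the sensor expects within the next 10 s
    downstream_space: int  # free vehicle slots beyond the intersection

def pressure(a: Approach) -> int:
    # Demand that can actually be discharged: capped by downstream space,
    # loosely echoing the osmosis analogy (flow toward available space).
    return min(a.queue + a.arrivals_10s, a.downstream_space)

def choose_green(north_south: Approach, east_west: Approach, current: str) -> str:
    p_ns, p_ew = pressure(north_south), pressure(east_west)
    # Keep the current phase on ties to avoid needless switching losses.
    if p_ns == p_ew:
        return current
    return "NS" if p_ns > p_ew else "EW"

# Example: a platoon approaching east-west outweighs a short north-south queue.
ns = Approach(queue=3, arrivals_10s=1, downstream_space=20)
ew = Approach(queue=2, arrivals_10s=8, downstream_space=20)
print(choose_green(ns, ew, current="NS"))  # -> "EW"
```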
247

Indoor secondary organic aerosol formation : influence of particle controls, mixtures, and surfaces

Waring, Michael Shannon 22 October 2009 (has links)
Ozone (O₃) and terpenoids react to produce secondary organic aerosol (SOA). This work explored novel ways that these reactions form SOA indoors through five investigations in two categories: (i) the impacts of particle controls on indoor SOA formation, and (ii) two fundamental aspects of indoor SOA formation. For category (i), two investigations examined ion generators, air purifiers that are ineffective at removing particles and emit ozone during operation. With a terpenoid source present (an air freshener), ion generators acted as steady-state SOA generators, both in a 15 m³ chamber and in a 27 m³ room. The final investigation in category (i) modeled how heating, ventilating, and air-conditioning (HVAC) systems influence SOA formation. Influential HVAC parameters were flow rates, particle filtration, and indoor temperature for the residential and commercial models, as well as ozone removal by particle-laden filters for the commercial model. For category (ii), the first investigation measured SOA formation from ozone reactions with single terpenoids and terpenoid mixtures in a 90 L Teflon-film chamber, at low and high ozone concentrations. For low ozone, experiments with only d-limonene yielded the largest SOA number formation relative to other mixtures, some of which had three times the effective amount of reactive terpenoids. This trend was not observed for high-ozone experiments, and these results imply that ozone-limited reactions with d-limonene form byproducts with high nucleation potential. The second investigation in category (ii) explored SOA formation from ozone reactions with surface-adsorbed terpenoids. A model framework was developed to describe SOA formation due to ozone/terpenoid surface reactions, and experiments in a 283 L chamber determined the SOA yield for ozone/d-limonene surface reactions. The observed molar yields were 0.14–0.16 over a range of relative humidities, and lower relative humidity led to higher SOA number formation from surface reactions. Building materials on which ozone/d-limonene surface reactions are predicted to lead to substantial SOA formation are those with initially low surface reactivity, such as glass, sealed materials, or metals. The results from category (ii) suggest significant, previously unexplored mechanisms of SOA number formation indoors.
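For context, the molar yield quoted for the surface reactions follows the usual definition of an SOA yield: aerosol formed per amount of precursor reacted. The symbols below are generic notation, not necessarily the thesis's own.

```latex
% Generic SOA yield definitions (mass form, and the molar form quoted above):
Y_{\text{mass}} = \frac{\Delta M_{\text{SOA}}}{\Delta m_{\text{terpenoid reacted}}},
\qquad
Y_{\text{molar}} = \frac{\Delta n_{\text{SOA}}}{\Delta n_{\text{d-limonene reacted}}} \approx 0.14\text{--}0.16
```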
248

Aqueous Phase Tracers of Chemical Weathering in a Semi-arid Mountain Critical Zone

Jardine, Angela Beth January 2011 (has links)
Chemical weathering reactions are important for the physical, chemical, and biological development of the critical zone. We present findings from aqueous-phase chemical analyses of surface and soil pore waters during a 15-month study in a small semi-arid mountain catchment of the Santa Catalina Mountain Critical Zone Observatory. Stream water geochemical solutes derive from two distinct stores: fractured bedrock baseflow stores and soil quickflow stores. Solid-phase observations of albite, anorthite, and K-feldspar transformation to Ca-montmorillonite and kaolinite are supported by stream water saturation states calculated with a PHREEQC geochemical model. While differences in mineral assemblages, soil depths, and horizonation suggest greater weathering in schist versus granite lithologies and in hillslope divergent versus convergent zones, soil pore water solute ratio analysis does not readily distinguish these differences. However, preliminary investigation of aqueous rare earth elements suggests detectable lithologic and landscape-position differences warranting focus in future research efforts.
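The saturation states computed with PHREEQC are conventionally expressed as a saturation index; the following is the standard definition rather than anything specific to this thesis.

```latex
% Saturation index of a mineral phase: ion activity product (IAP) of its
% dissolution reaction relative to the equilibrium constant K_sp.
\mathrm{SI} = \log_{10}\!\left(\frac{\mathrm{IAP}}{K_{\mathrm{sp}}}\right)
```

Negative SI indicates undersaturation (the phase tends to dissolve, as observed here for the feldspars), while positive SI indicates supersaturation (the phase may precipitate, as for the clay products).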
249

On the Variability of Hydrologic Catchment Response: Inherent and External Controls

Heidbuechel, Ingo January 2013 (has links)
Hydrologic catchment response varies in time. The goal of this dissertation is to investigate how and why it varies and what controls these variations. In order to tackle these questions, the first step is to develop a method that permits capturing the temporal variation of transit time distributions (TTDs). To this end, the established transfer function-convolution approach using time series of stable water isotopes was modified so that it is now able to determine variable mean transit times (mTTs). The type and the shape parameter of the transfer function also vary in time. We found that antecedent moisture content, saturated hydraulic conductivity, soil depth and subsequent precipitation intensity are all potential controls. We propose a dimensionless number that integrates these controls and relates available storage to incoming and outgoing water fluxes, in combination with information on antecedent moisture conditions, to predict TTD type and shape. The individual TTDs for every time step produced by this model can be superimposed, summed and normalized to create a classification tool for catchments that is based on their general response behavior to precipitation events: the master transit time distribution. With this model in hand, the hydrologic response for three consecutive monsoon seasons in ten nested subcatchments was examined. It was found that the major response controls changed between the years in accordance with three hydrologic response modes. The mTT correlated most strongly with soil depth in the first year, with hydraulic conductivity in the second year, and with curvature in the third year. These variations were produced by differences in precipitation patterns that led to differences in soil saturation and consequently to different dominant flow processes: in the first year most of the water left the catchment via fast flow paths (macropore flow, overland flow), in the second year shallow subsurface flow in the soil matrix was more dominant, and in the third year most outflowing water derived from slow base flow. To better predict hydrologic catchment response, we propose applying a dimensionless number to determine the catchment response mode for every time step before selecting the appropriate response control.
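The transfer function-convolution approach referred to here relates the catchment's output isotope signal to its input signal through a transit time distribution. In its standard lumped-parameter form (generic notation, not necessarily the dissertation's own):

```latex
% Lumped-parameter convolution of the input tracer signal with a transit
% time distribution g; letting g depend on t yields the time-variable TTDs
% and variable mean transit times discussed above.
\delta_{\mathrm{out}}(t) = \int_{0}^{\infty} g(t,\tau)\,\delta_{\mathrm{in}}(t-\tau)\,\mathrm{d}\tau,
\qquad
\mathrm{mTT}(t) = \int_{0}^{\infty} \tau\, g(t,\tau)\,\mathrm{d}\tau
```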
250

Accounting for Product Recalls: How to Promote Strong Corporate Governance and Business Ethics

Amirdjanian, Kevin 01 January 2013 (has links)
The purpose of this paper is two-fold. The first goal is to qualitatively explore what ethics is and how companies can create a lasting culture of ethics. I explore the meanings of ethics, corporate ethics, and the responsibilities that companies have to shareholders and the public. The second goal is to demonstrate how a culture of ethics can help to prevent product recalls by creating a control environment that catches potentially dangerous products before they leave the facility. This is achieved through an analysis of three case studies: 1) Johnson & Johnson’s response to the Chicago Tylenol Murders of 1982, 2) Peanut Corporation of America’s response to its peanut butter recalls in 2009, and 3) Toyota Motor Corp’s response to the recalls of 2009-2011. The paper concludes by discussing trends in product safety over the last forty years and explaining why business ethics are an economic imperative, not just in preventing product recalls but also in protecting consumers.
