91

MATHEMATICAL PROGRAMMING IN BANACH SPACES

Unknown Date
Source: Dissertation Abstracts International, Volume: 33-10, Section: B, page: 4933. / Thesis (Ph.D.)--The Florida State University, 1972.
92

GENERALIZED CONVEXITY IN NONLINEAR PROGRAMMING (INVEXITY)

Unknown Date
Many results in mathematical programming involving convex functions hold for a more general class of functions, called invex functions. Examples of such functions are given. It is shown that several types of generalized convex functions are special cases of invex functions. The relationship between convexity and some generalizations of invexity is illustrated. A nonlinear problem with equality constraints is studied and necessary and sufficient conditions for optimality are stated in terms of invexity. Also, weak, strong and converse dual theorems for fractional programming are given using invexity conditions. Finally, a sufficient condition for invexity is established through the use of linear programming. / Source: Dissertation Abstracts International, Volume: 48-07, Section: B, page: 2087. / Thesis (Ph.D.)--The Florida State University, 1987.
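For reference, a standard definition of the invex class discussed above (following Hanson's original formulation; the notation here is ours, not the dissertation's): a differentiable function f is invex if there exists a vector-valued function η such that

```latex
f(x) - f(u) \;\ge\; \eta(x, u)^{\top}\, \nabla f(u) \qquad \text{for all } x,\, u .
```

Convexity is the special case η(x, u) = x − u; the key consequence of invexity is that every stationary point (∇f(u) = 0) is a global minimizer.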
93

Analytic center cutting plane and path-following interior-point methods in convex programming and variational inequalities

Sharifi Mokhtarian, Faranak. January 1997
No description available.
94

The geochemistry of chromium in the supergene environment : chromium (VI) and related species

Shaddick, Lindsay Raymond, University of Western Sydney, College of Science, Technology and Environment, School of Science, Food and Horticulture. January 2003
A description of the role of chromium in aqueous solution with respect to its geochemistry and the formation of secondary chromium minerals in the supergene environment is developed. Secondary chromium minerals are relatively rare in Nature, and apart from the lead chromate crocoite, which is by far the most common and also the most keenly sought after by collectors due to its great beauty, little is known of related species. In an attempt to redress this situation, the initial aim of this study is to present an up-to-date list and description of secondary chromium minerals. It has long been recognised that secondary chromium mineralisation, in the form of chromates and compound chromates, occurs as a minor component of total secondary mineralisation in the oxidised zones of some sulphide ore bodies. A thermochemical approach is adopted, and a model is established to describe the geochemistry of chromium in aqueous solution as applied to mineral formation in the supergene environment and its transport in near-surface waters. The model confirms that the species distribution of chromium in aqueous solution is strongly pH dependent and that, at concentrations appropriate to those experienced in Nature, chromate is the only important species in basic solutions under highly oxidising conditions. / Master of Science (Hons.)
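The pH dependence of Cr(VI) speciation described above can be sketched with a simple acid–base fraction calculation. This is an illustrative sketch, not the thesis's thermochemical model: the pKa for HCrO4⁻ ⇌ CrO4²⁻ + H⁺ is an assumed round figure (about 6.5), and polynuclear species such as dichromate are ignored.

```python
PKA_HCRO4 = 6.5  # assumed pKa for HCrO4- <=> CrO4^2- + H+ (illustrative value)

def chromate_fraction(pH: float) -> float:
    """Fraction of dissolved Cr(VI) present as CrO4^2-, treating the system
    as a single monoprotic equilibrium (Henderson-Hasselbalch form)."""
    return 1.0 / (1.0 + 10 ** (PKA_HCRO4 - pH))
```

Consistent with the model's conclusion, the fraction is near zero in acidic solutions and approaches one in basic solutions, with the crossover at the assumed pKa.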
95

A method for determining the source mechanism in small earthquakes with application to the Pacific Northwest region

Gallagher, John Neil. 28 January 1968
A technique was developed in the present study to determine fault-plane solutions for small earthquakes. The method uses the direction and amplitude of initial P-wave motions recorded at a small number of seismic stations for epicentral distances less than 2000 km. Seismic arrivals recorded on short-period seismograms were identified as p, P or Pn waves for crustal shocks and P waves for subcrustal shocks. Source amplitudes were converted from station amplitudes using known theoretical methods, based on determining angles of incidence at the surface of the earth and straight ray paths in experimental crustal models. Source amplitudes were calculated for three stations and were then projected back to the earthquake source. The source amplitudes were compared to amplitudes that correspond to more than 6000 theoretical amplitude patterns. The pattern which most nearly fitted the first motions was taken as the fault-plane solution. P-wave amplitudes, velocity structures, focal depth and wave attenuation were varied to show the relative deviations of the dip and strike in a fault-plane solution. When the S-wave was identified, it was found that polarization could be determined for epicentral distances less than 20°. Thirty-three earthquakes in the Pacific Northwest region were analyzed, and twenty-two fault-plane solutions were determined by the method described in this paper. Seven additional fault-plane solutions were determined using the well-known Byerly method. The fault-plane solutions generally showed large dip-slip components. This was particularly evident in fault-plane solutions for earthquakes occurring off the coast of Oregon and northern California, and west of the Cascade Mountains. The solutions for earthquakes east of the Cascade Range and off the coast of British Columbia have either dip-slip or strike-slip components.
The solutions obtained by the present technique were compared with solutions for generally larger earthquakes in western North America as previously determined by other investigators, using the Byerly method. Satisfactory agreement was found between the two methods. Two general tectonic hypotheses are proposed from the study of earthquake stresses in the Pacific Northwest region. / Graduation date: 1969
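The catalog-matching step described above (comparing observed source amplitudes against thousands of theoretical patterns and taking the best fit) can be sketched as a normalized least-squares search. The function names and the misfit measure here are illustrative assumptions, not the thesis's actual procedure:

```python
def best_fit_pattern(observed, catalog):
    """Return (index, misfit) of the theoretical amplitude pattern in
    `catalog` that best matches `observed` station amplitudes.
    Both are normalized to unit peak amplitude so only the relative
    pattern, not absolute magnitude, is compared."""
    def normalize(v):
        scale = max(abs(x) for x in v) or 1.0
        return [x / scale for x in v]

    obs = normalize(observed)
    best_i, best_m = -1, float("inf")
    for i, pattern in enumerate(catalog):
        pat = normalize(pattern)
        misfit = sum((o - p) ** 2 for o, p in zip(obs, pat))
        if misfit < best_m:
            best_i, best_m = i, misfit
    return best_i, best_m
```

In the thesis's setting the catalog would hold the 6000-plus theoretical radiation patterns, each associated with a candidate strike and dip.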
96

Nonstationary Erlang Loss Queues and Networks

Alnowibet, Khalid Abdulaziz. 23 April 2004
The nonstationary Erlang loss model is a queueing system consisting of a finite number of servers and no waiting room, with a nonstationary arrival process or a time-dependent service rate. The Erlang loss model is commonly used to model and evaluate many communication systems. Often, these types of service systems encounter a change in the arrival rate over time while the service rate remains either constant or changes very little over time. In view of this, the focus of this research is nonstationary Erlang loss queues and networks with time-dependent arrival rates and constant service rates. We developed an iterative scheme, referred to as the fixed point approximation (FPA), to obtain the time-dependent blocking probability and other measures for a single-class nonstationary Erlang loss queue and a nonstationary multi-rate Erlang loss queue. The FPA method was compared against exact numerical results and two other methods, namely MOL and PSA, for various nonstationary Erlang loss queues with sinusoidal arrival rates. Although we used sinusoidal functions to model the time-dependent arrival rate, the solution can be obtained for any arrival rate function. Experimental results demonstrate that the FPA algorithm provides an exact solution for the nonstationary Erlang loss queue. The FPA algorithm was also applied to the case of multi-rate nonstationary Erlang loss queues and the results obtained were compared with simulation. We generalized the FPA algorithm for networks of nonstationary Erlang loss queues with Markovian branching, and compared its accuracy to simulation. Finally, FPA was used to analyze networks of nonstationary Erlang loss queues with population constraints. Numerical results showed that FPA provides a good approximation.
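The PSA (pointwise stationary approximation) baseline mentioned above can be sketched concisely: at each time t, plug the instantaneous offered load λ(t)/μ into the stationary Erlang B formula. This is a sketch of the standard PSA idea, not of the FPA algorithm itself, and the function names are ours:

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Stationary Erlang B blocking probability, computed with the
    numerically stable recursion B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def psa_blocking(lam, mu: float, servers: int, t: float) -> float:
    """Pointwise stationary approximation of the time-dependent blocking
    probability: evaluate Erlang B at the instantaneous load lam(t)/mu."""
    return erlang_b(lam(t) / mu, servers)
```

With a sinusoidal rate such as `lam = lambda t: 10 + 5 * math.sin(t)`, sampling `psa_blocking(lam, 1.0, 12, t)` over t traces an approximate time-varying blocking curve; PSA is known to lag the exact transient behavior, which is what motivates methods like MOL and FPA.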
97

Min-Cost Multicommodity Network Flows: A Linear Case for the Convergence and Reoptimization of Multiple Single-Commodity Network Flows

Kramer, Jeremy Daniel. 11 May 2009
Network Flow problems are prevalent in Operations Research, Computer Science, Industrial Engineering and Management Science. They constitute a class of problems that are frequently faced by real world applications, including transportation, telecommunications, production planning, etc. While many problems can be modeled as Network Flows, these problems can quickly become unwieldy in size and difficult to solve. One particularly large instance is the Min-Cost Multicommodity Network Flow problem. Due to the time-sensitive nature of the industry, faster algorithms are always desired: recent advances in decomposition methods may provide a remedy. One area of improvement is the cost reoptimization of the min-cost single commodity network flow subproblems that arise from the decomposition. Since similar single commodity network flow problems are solved, information from the previous solution provides a "warm-start" of the current solution. While certain single commodity network flow algorithms may be faster "from scratch," the goal is to reduce the overall time of computation. Reoptimization is the key to this endeavor. Three single commodity network flow algorithms, namely, cost scaling, network simplex and relaxation, will be examined. They are known to reoptimize well. The overall goal is to analyze the effectiveness of this approach within one particular class of network problems.
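A minimal single-commodity min-cost flow solver of the kind the decomposition's subproblems require can be sketched with successive shortest paths (Bellman–Ford on the residual graph). This is an illustrative from-scratch sketch and deliberately omits the warm-start/reoptimization machinery that is the thesis's actual focus:

```python
def min_cost_flow(n, edges, source, sink, demand):
    """Successive shortest paths min-cost flow on a directed graph with
    `n` nodes. `edges` is a list of (u, v, capacity, cost). Returns the
    total cost of routing `demand` units source->sink, or None if infeasible."""
    graph = [[] for _ in range(n)]   # graph[u] = indices into the edge arrays
    to, cap, cost = [], [], []

    def add_edge(u, v, c, w):
        # Forward edge and zero-capacity reverse edge, stored as a pair
        # so that e ^ 1 is always the reverse of edge e.
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)

    for u, v, c, w in edges:
        add_edge(u, v, c, w)

    total = 0
    while demand > 0:
        # Bellman-Ford shortest path in the residual graph (handles the
        # negative residual costs introduced by reverse edges).
        dist = [float("inf")] * n
        dist[source] = 0
        parent = [-1] * n
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        parent[to[e]] = e
                        updated = True
            if not updated:
                break
        if dist[sink] == float("inf"):
            return None  # no augmenting path: demand cannot be met

        # Push the bottleneck amount along the shortest path.
        push, v = demand, sink
        while v != source:
            e = parent[v]
            push = min(push, cap[e])
            v = to[e ^ 1]
        v = sink
        while v != source:
            e = parent[v]
            cap[e] -= push
            cap[e ^ 1] += push
            v = to[e ^ 1]
        total += push * dist[sink]
        demand -= push
    return total
```

A reoptimizing variant, in the spirit of the thesis, would retain the residual capacities (and, for scaling or simplex methods, node potentials or the spanning-tree basis) between successive, slightly perturbed instances instead of restarting from zero flow each time.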
98

Exact and Heuristic Methods for solving the View-Selection Problem for Aggregate Queries

Asgharzadeh Talebi, Zohreh. 12 June 2006
In this thesis we present a formal study of the following view-selection problem: Given a set of queries, a database, and an upper bound on the amount of disk space that can be used to store materialized views, return definitions of views that, when materialized in the database, would reduce the evaluation costs of the queries. Optimizing the layout of stored data using view selection has a direct impact on the performance of the entire database system. At the same time, the optimization problem is intractable, even under natural restrictions on the types of queries of interest. We introduce an integer-programming model to obtain optimal solutions for the view-selection problem for aggregate queries on data warehouses. Through a computational experiment we show that this model can be used to solve realistic-size instances of the problem. We also report the results of the post-optimality analysis that we performed to determine the impact of changing certain input characteristics on the optimal solution. We solve large instances by applying several methods of reducing the size of the search space. We compare our approach to the leading heuristic procedure in the field [20].
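The optimization problem above can be illustrated, at toy scale, by exhaustive search over view subsets under the disk-space budget. The data layout here (a size per view plus a per-query cost when that view is materialized) is an assumed simplification for illustration, not the thesis's integer-programming model:

```python
from itertools import combinations

def total_cost(chosen, views, queries):
    """Total evaluation cost when each query uses its cheapest option:
    either its base cost or any materialized view that answers it."""
    cost = 0
    for q, base in queries.items():
        options = [base] + [views[v][1][q] for v in chosen if q in views[v][1]]
        cost += min(options)
    return cost

def select_views(views, queries, space_budget):
    """Exhaustive view selection. `views` maps a view name to
    (size, {query: cost_with_view}); `queries` maps a query to its base
    cost. Returns the subset of views fitting in `space_budget` that
    minimizes total query cost."""
    names = list(views)
    best_set, best_cost = set(), total_cost(set(), views, queries)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            if sum(views[v][0] for v in subset) > space_budget:
                continue
            c = total_cost(set(subset), views, queries)
            if c < best_cost:
                best_set, best_cost = set(subset), c
    return best_set, best_cost
```

The search space is exponential in the number of candidate views, which is exactly why the thesis turns to an integer-programming formulation and search-space reductions for realistic instances.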
99

Modeling to Quantify the Capacity and Efficacy of Emergency Preparedness and Response Systems: A Study of the North Carolina Health Alert Network

Wynter, Sharolyn Antonia. 05 August 2009
Following the attacks of September 11th, the growing fear of a bioterrorist attack emerged within the United States and pushed the threat of bioterrorism to the forefront of the public health emergency preparedness and response agenda. Despite the investment of more than six billion dollars in federal funding towards emergency preparedness and response initiatives, well-defined and broadly accepted performance measures for determining the efficacy of these systems have yet to be established. Because of the complex and dynamic conditions under which emergency preparedness and response systems must perform, it is becoming apparent that traditional measures of evaluating the performance of public health systems simply will not suffice. The inability to accurately capture and quantify this information has created knowledge gaps which hinder our ability to measure our true level of preparedness and ultimately weaken our response capacity. It is therefore essential that we develop methodologies that enable us to establish valid metrics which capture the information needed to quantify the capacity and efficacy of these systems. As a key information-sharing and communication component of North Carolina's Public Health Information Network (NC PHIN), the North Carolina Health Alert Network (NCHAN) serves as a promising means to measure emergency preparedness and response capacity. The goal of this thesis is to present a methodology for extending approaches in operations research and systems engineering to better understand the value of emergency preparedness and response systems, such as NCHAN. Ultimately, we seek to determine how NCHAN has contributed to emergency preparedness and response by quantifying the added value of the system to the greater "preparedness and response" process. We demonstrate the use of statistical analysis, simulation and the IDEF0 mapping process as valid tools for modeling and quantifying the less-tangible aspects of emergency preparedness and response.
We find that although the capacity exists within NCHAN to increase emergency preparedness and response, other factors, such as usage variability amongst NCHAN users, lack of integration with other NC PHIN components, and the limited capacity of tangible system resources (such as labs, funding and public health practitioners), limit the efficacy of NCHAN. These findings suggest that user standardization, component integration and proper resource allocation will be necessary in order to realize the true value of emergency preparedness and response systems.
100

Simple Strategies to Improve Data Warehouse Performance

Mathews, Reena. 18 May 2004
Data warehouse management is fast becoming one of the most popular and important topics in industry today. For business executives, it promises significant competitive advantage for their companies, while presenting information systems managers with a way to overcome the obstacles in providing business information to managers and other users. The company studied here faces the problem of inefficient performance of its data warehouse. To find an appropriate solution to this problem, we first review the data warehouse concept and its basic architecture, followed by an in-depth study of the company's data warehouse and the various issues affecting it. We propose and evaluate a set of solutions, including classification of suppliers, implementing a corporate commodity classification and coding system, and obtaining level-three spend details for PCard purchases. The experimental results show considerable improvement in data quality and data warehouse performance. We further support these recommendations by evaluating the return on investment for improved-quality data. Lastly, we discuss the future scope and other possible improvement techniques for obtaining better results.
