91

Calculations for positioning with the Global Navigation Satellite System

Cheng, Chao-heh January 1998 (has links)
No description available.
92

Optimization of linear time-invariant dynamic systems without Lagrange multipliers

Veeraklaew, Tawiwat January 1995 (has links)
No description available.
93

Variable threshold detection with weighted PCM signal transmitted over Gaussian channel

Ahn, Seung Choon January 1986 (has links)
No description available.
94

Weighted Density Approximations for Kohn-Sham Density Functional Theory

Cuevas-Saavedra, Rogelio 10 1900 (has links)
Approximating the exchange-correlation energy in density functional theory (DFT) is a crucial task. As the only missing element in Kohn-Sham DFT, the search for better exchange-correlation functionals has been an active field of research for fifty years. Many models and approximations are known, and they can be summarized in what is known as Jacob's ladder. All the functionals on that ladder are local in the sense that they rely on the information of only one electronic coordinate. That is, even though the exchange-correlation hole, the cornerstone of density functional theory, is a two-electron-coordinate quantity, one of the coordinates is averaged over in "Jacob's ladder functionals." This makes the calculations considerably more efficient. On the other hand, some of the important constraints on the form of the exchange-correlation functional become inaccessible in the one-point forms. The violation of these constraints produces functionals plagued by systematic errors, leading to qualitatively incorrect descriptions of some chemical and physical processes.

In this thesis the idea of a weighted density approximation (WDA) is explored. More specifically, a symmetric and normalized two-point functional is proposed for the exchange-correlation energy. The functional is based entirely on the hole for the uniform electron gas. By construction, these functionals fulfill two of the most important constraints: the normalization of the exchange-correlation hole and the uniform electron gas limit. The findings suggest that we should pursue a whole new generation of "new Jacob's ladder" functionals.

A further step was considered. Given the relevance of the long-range behavior of the exchange-correlation hole, a study of the electronic direct correlation function was performed. The idea was to build up the long-range character of the hole from convoluted pieces of the simple and short-ranged direct correlation function. This direct correlation function provides better results, at least for the correlation energy in the spin-polarized uniform electron gas.

The advantage of one-point functionals is their computational efficiency. We therefore attempted to develop new methods that mitigate the relative computational inefficiency of two-point functionals. This led to new methods for evaluating the six-dimensional integrals that are inherent to the exchange-correlation energy. / Doctor of Philosophy (PhD)
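Two of the exact constraints mentioned in this abstract have simple closed forms. As a hedged illustration (standard textbook expressions, not the particular functional constructed in the thesis), the exchange-correlation hole normalization and a generic two-point, weighted-density-style energy expression read:

```latex
% Exact normalization of the exchange-correlation hole at every reference point r:
\int \rho_{\mathrm{xc}}(\mathbf{r},\mathbf{r}')\,\mathrm{d}\mathbf{r}' = -1
% Generic two-point form of the exchange-correlation energy built from that hole:
E_{\mathrm{xc}}[\rho] = \frac{1}{2}\iint
  \frac{\rho(\mathbf{r})\,\rho_{\mathrm{xc}}(\mathbf{r},\mathbf{r}')}
       {\lvert \mathbf{r}-\mathbf{r}' \rvert}\,
  \mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}'
```

In a generic weighted density approximation, the hole is modeled on the uniform-electron-gas hole evaluated at an effective (weighted) density chosen so that the normalization above is satisfied exactly.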
95

Log Linear Models for Prediction and Analysis of Networks

Ouzienko, Vladimir January 2012 (has links)
The heightened research activity in the interdisciplinary field of network science can be attributed to the emergence of social network applications. Researchers understood early on that data describing how entities interconnect is highly valuable and that it offers a deeper understanding of the entities themselves. This is why so many studies of various kinds of networks have appeared in the last 10-15 years. The study of networks from the perspective of computer science usually has two objectives. The first is to develop statistical mechanisms capable of accurately describing and modeling observed real-world networks. A good fit of such a mechanism suggests the correctness of the model's assumptions and leads to a better understanding of the network. The second goal is more practical: a well-performing model can be used to predict what will happen to the network in the future, and the information gleaned from the network can be leveraged to predict what will happen to the network's entities. One important leitmotif of network research and analysis is the wide adoption of log-linear models. In this work we apply this philosophy to the study and evaluation of log-linear statistical models in various types of networks. We begin by proposing a new Temporal Exponential Random Graph Model (tERGM) for analysis and prediction in binary temporal social networks. We then extend the model for application to partially observed networks that change over time. Lastly, we generalize the tERGM to predict real-valued weighted links in temporal non-social networks. Log-linear models are not limited to networks that change over time; they can also be applied to static networks. One such static network is a social network composed of patients undergoing hemodialysis. Hemodialysis is prescribed to people suffering from end-stage renal disease; the treatment requires attendance at a hemodialysis clinic on a fixed schedule for a prolonged period, and this is how the social ties are formed. The new log-linear Social Latent Vectors (SLV) model was applied to study this static social network. The results of the SLV experiments suggest that the social relationships formed by patients influence individual patients' clinical outcomes. The study demonstrates how social network analysis can be applied to better understand a network's constituents. / Computer and Information Science
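For orientation, a hedged sketch of the log-linear form that underlies exponential random graph models follows; the tERGM and SLV models developed in the thesis add temporal and latent structure not shown here.

```latex
% Static ERGM: the probability of a network y is log-linear in a vector of
% network statistics g(y) with parameter vector \theta:
P_{\theta}(Y = y) = \frac{\exp\!\left(\theta^{\top} g(y)\right)}
                         {\sum_{y' \in \mathcal{Y}} \exp\!\left(\theta^{\top} g(y')\right)}
% Temporal variant (tERGM-style): each snapshot is conditioned on the previous one,
P_{\theta}\!\left(Y^{t} = y^{t} \mid Y^{t-1}\right)
  = \frac{\exp\!\left(\theta^{\top} g\!\left(y^{t}, Y^{t-1}\right)\right)}
         {Z\!\left(\theta, Y^{t-1}\right)}
```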
96

Structure of Invariant Subspaces for Left-Invertible Operators on Hilbert Space

Sutton, Daniel Joseph 15 September 2010 (has links)
This dissertation is primarily concerned with studying the invariant subspaces of left-invertible weighted shifts, with generalizations to left-invertible operators where applicable. The two main problems researched can be stated together as: when does a weighted shift have the one-dimensional wandering subspace property for all of its closed, invariant subspaces? This can fail either by having a subspace that is not generated by its wandering subspace, or by having a subspace with an index greater than one. For the former, we show that every left-invertible weighted shift is similar to another weighted shift whose residual space, with respect to being generated by the wandering subspace, has dimension $n$, where $n$ is any finite number. For the latter, we derive necessary and sufficient conditions for a pure, left-invertible operator with an index of one to have a closed, invariant subspace with an index greater than one. We use these conditions to show that if a closed, invariant subspace for an operator in a class of weighted shifts has a vector in $l^1$, then it must have an index equal to one, and to produce closed, invariant subspaces with an index of two for operators in another class of weighted shifts. / Ph. D.
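For readers outside operator theory, a hedged sketch of the standard definitions behind this abstract (weighted shift, wandering subspace, index), in the notation commonly used for shifts on $\ell^2$:

```latex
% Unilateral weighted shift with weight sequence (w_n), bounded below
% (hence left-invertible) when \inf_n |w_n| > 0:
S e_n = w_n e_{n+1}, \qquad n \ge 0
% For a closed S-invariant subspace M, the wandering subspace and index are
E = \mathcal{M} \ominus S\mathcal{M}, \qquad
\operatorname{ind}(\mathcal{M}) = \dim E
% and M has the one-dimensional wandering subspace property when \dim E = 1 and
\mathcal{M} = \bigvee_{k \ge 0} S^{k} E
```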
97

On a Selection of Advanced Markov Chain Monte Carlo Algorithms for Everyday Use: Weighted Particle Tempering, Practical Reversible Jump, and Extensions

Carzolio, Marcos Arantes 08 July 2016 (has links)
We are entering an exciting era, rich in the availability of data via sources such as the Internet, satellites, particle colliders, telecommunication networks, computer simulations, and the like. The confluence of increasing computational resources, volumes of data, and variety of statistical procedures has brought us to a modern enlightenment. Within the next century, these tools will combine to reveal unforeseeable insights into the social and natural sciences. Perhaps the largest headwind we now face is our collectively slow-moving imagination. Like a car on an open road, learning is limited by its own rate. Historically, slow information dissemination and the unavailability of experimental resources limited our learning. To that point, any methodological contribution that helps in the conversion of data into knowledge will accelerate us along this open road. Furthermore, if that contribution is accessible to others, the speedup in knowledge discovery scales exponentially. Markov chain Monte Carlo (MCMC) is a broad class of powerful algorithms, typically used for Bayesian inference. Despite their variety and versatility, these algorithms rarely become mainstream workhorses because they can be difficult to implement. The humble goal of this work is to bring to the table a few more highly versatile and robust, yet easily-tuned algorithms. Specifically, we introduce weighted particle tempering, a parallelizable MCMC procedure that is adaptable to large computational resources. We also explore and develop a highly practical implementation of reversible jump, the most generalized form of Metropolis-Hastings. Finally, we combine these two algorithms into reversible jump weighted particle tempering, and apply it on a model and dataset that was partially collected by the author and his collaborators, halfway around the world. It is our hope that by introducing, developing, and exhibiting these algorithms, we can make a reasonable contribution to the ever-growing body of MCMC research. / Ph. D.
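Since the abstract centers on Metropolis-Hastings-based samplers, a minimal random-walk Metropolis sketch is included below for orientation. It is generic background only, not the weighted particle tempering or reversible jump machinery developed in the thesis.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps=10_000, step_size=0.5, seed=0):
    """Generic random-walk Metropolis sampler (background sketch only).

    log_target : function returning the log of an unnormalized target density.
    x0         : starting point (1-D array-like).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    log_p = log_target(x)
    for i in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.size)
        log_p_prop = log_target(proposal)
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples[i] = x
    return samples

# Example: draw from a standard two-dimensional Gaussian.
draws = random_walk_metropolis(lambda x: -0.5 * np.dot(x, x), x0=np.zeros(2))
```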
98

Enabling Grid Integration of Combined Heat and Power Plants

Rajasekeran, Sangeetha 17 August 2020 (has links)
In a world where calls for climate action grow louder by the day, the role of renewable energy and energy-efficient generation sources has become extremely important. One such energy-efficient resource that can increase the penetration of renewable energy into the grid is Combined Heat and Power technology. Combined Heat and Power (CHP) plants produce useful thermal and electrical power output from a single input fuel source and are widely used in the industrial and commercial sectors for reliable on-site power production. However, several unfavorable policies combined with inconsistent regulations have discouraged investments in this technology and reduced participation of such facilities in grid operations. The potential benefits offered by this technology are numerous - improving grid resiliency during emergencies, deferring transmission system updates, and reducing toxic emissions, to name a few. With an increased share of renewable energy sources in the generation mix, there is a pressing need for reliable base generation that can meet grid requirements without contributing negatively to the environment. Since CHP units are good candidates to help achieve this two-fold requirement, it is important to understand the present barriers to their deployment and grid involvement. In this thesis work, we explore some of these challenges and propose suitable grid integration technology as well as market participation approaches for better involvement of distributed CHP units in the industrial and commercial sectors. / Master of Science / Combined Heat and Power is a generation technology that uses a single fuel source to produce two useful outputs - electric power and thermal energy - by capturing and reusing the exhaust steam by-product. These generating units have much higher efficiencies than conventional power plants and lower fuel emissions, and they have been a popular choice among industries and commercial buildings with a need for uninterrupted heat and power. With increasing calls for climate action and large-scale deployment of renewable energy sources, there is a greater need for reliable baseline generation that can handle the fluctuations and uncertainty of such renewables. This need can be met by CHP units owing to their geographic distribution and high operating duration. Compared to conventional generators, CHPs also provide a myriad of other benefits for grid operators, as well as environmental benefits. However, unfavorable and inconsistent regulatory procedures have discouraged these facility owners from actively engaging in providing grid services. Therefore, it is imperative to look into some of the existing policies and understand where changes and incentives need to be made. In this work, we look into methods that can ease CHP integration from a technological and an economic point of view, with the aim of encouraging grid operators and CHP owners to be more active participants.
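As a hedged, purely illustrative calculation (round numbers, not figures from this thesis), the efficiency advantage of CHP comes from counting both useful outputs against the same fuel input:

```latex
% Combined (total) CHP efficiency: electrical plus useful thermal output per unit fuel.
\eta_{\mathrm{CHP}} = \frac{W_{\mathrm{el}} + Q_{\mathrm{th}}}{Q_{\mathrm{fuel}}}
                    = \frac{35 + 45}{100} = 0.80
% compared with roughly 0.35--0.40 for a conventional plant that rejects its heat.
```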
99

Input Sensitive Analysis of a Minimum Metric Bipartite Matching Algorithm

Nayyar, Krati 29 June 2017 (has links)
In various business and military settings, there is an expectation of on-demand delivery of supplies and services. Typically, several delivery vehicles (also called servers) carry these supplies. Requests arrive one at a time, and when a request arrives, a server is assigned to it at a cost proportional to the distance between the server and the request. Bad assignments not only lead to larger costs but also create bottlenecks by increasing delivery time. There is, therefore, a need to design decision-making algorithms that produce cost-effective assignments of servers to requests in real time. In this thesis, we consider the online bipartite matching problem where each server can serve exactly one request. In the online minimum metric bipartite matching problem, we are provided with a set of server locations in a metric space. Requests arrive one at a time and have to be immediately and irrevocably matched to a free server. The total cost of matching all the requests to servers, also known as the online matching, is the sum of the costs of all the edges in the matching. There are many well-studied models for request generation. We study the problem in the adversarial model, where an adversary who knows the decisions made by the algorithm generates a request sequence to maximize the ratio of the cost of the online matching to the minimum-cost matching (also called the competitive ratio). An algorithm is α-competitive if the cost of the online matching is at most α times the minimum cost. A recently discovered robust and deterministic online algorithm (which we refer to as the robust matching or RM-Algorithm) was shown to have optimal competitive ratios in the adversarial model and in a relatively weaker random arrival model. We extend the analysis of the RM-Algorithm in the adversarial model and show that the competitive ratio of the algorithm is sensitive to the input, i.e., for "nice" input metric spaces or "nice" server placements, the performance guarantees of the RM-Algorithm are significantly better. In fact, we show that the performance is almost optimal for any fixed metric space and server locations. / Master of Science
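To make the setting concrete, here is a hedged sketch of a naive greedy online matcher (assign each arriving request to the nearest free server) and the resulting cost ratio against an offline optimum. It illustrates the problem only and is not the RM-Algorithm analyzed in the thesis.

```python
import itertools
import math

def greedy_online_matching(servers, requests):
    """Assign each arriving request to the nearest currently free server.

    servers, requests: lists of points (tuples); Euclidean distance is the metric.
    Returns the total cost of the online matching.
    """
    free = list(servers)
    total = 0.0
    for r in requests:
        s = min(free, key=lambda p: math.dist(p, r))  # nearest free server
        total += math.dist(s, r)
        free.remove(s)
    return total

def optimal_offline_cost(servers, requests):
    """Brute-force minimum-cost perfect matching (fine for tiny instances)."""
    return min(
        sum(math.dist(s, r) for s, r in zip(perm, requests))
        for perm in itertools.permutations(servers)
    )

# Classic 1-D example where greedy performs poorly: the first request sits just
# past the midpoint, so greedy uses the far server and pays dearly for request 2.
servers = [(0.0, 0.0), (2.0, 0.0)]
requests = [(1.1, 0.0), (2.0, 0.0)]
online = greedy_online_matching(servers, requests)
offline = optimal_offline_cost(servers, requests)
print(f"greedy = {online:.1f}, optimal = {offline:.1f}, ratio = {online / offline:.2f}")
```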
100

Diffusion Weighted MR Imaging in the Differentiation between Metastatic and Benign Lymph Nodes in Canine Patients with Head and Neck Disease

Stahle, Jessica Anne 14 July 2016 (has links)
In dogs with large primary tumors, regional lymph node involvement or evidence of distant metastasis is associated with a worse prognosis and significantly decreased survival. Lymph node size alone has been shown to be an insufficient predictor for the accurate clinical staging of some canine neoplasia, including oral malignant melanoma. However, regional lymph nodes of the oral cavity, such as the medial retropharyngeal lymph nodes, are difficult to access for routine sampling. Diffusion-weighted magnetic resonance imaging (DWI) has demonstrated the ability to differentiate metastatic from inflammatory/benign lymph nodes in clinical studies with human cancer patients through the calculation of quantitative values of diffusion termed apparent diffusion coefficients (ADC). The objective of this exploratory study was to evaluate DWI and ADC as potential future methods for detecting malignant lymph nodes in dogs with naturally occurring disease. We hypothesized that DWI would identify significantly different ADC values between benign and metastatic lymph nodes in a group of canine patients with head or neck disease. Our results demonstrated that two of four observers identified a significant difference between the mean ADC values of the benign and metastatic lymph nodes. When data from all four observers were pooled, the difference between the mean ADC values of the benign and metastatic lymph nodes approached but did not reach significance (P-value: 0.0566). Therefore, our hypothesis was not supported. However, DWI shows promise in its ability to differentiate benign from metastatic lymph nodes, and further studies with larger patient numbers are warranted. / Master of Science
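For reference, a hedged sketch of how an ADC value is conventionally obtained from diffusion-weighted signal intensities under the standard mono-exponential model (the b-value and signal ratio below are illustrative, not data from this study):

```latex
% Mono-exponential diffusion model relating the signal at b-value b to the b = 0 signal:
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}}
\quad\Longrightarrow\quad
\mathrm{ADC} = -\frac{1}{b}\,\ln\!\frac{S(b)}{S_0}
% Example: b = 800\ \mathrm{s/mm^2},\ S(b)/S_0 = 0.45
% \;\Rightarrow\; \mathrm{ADC} \approx 1.0 \times 10^{-3}\ \mathrm{mm^2/s}
```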
