241 |
Trend-Filtered Projection for Principal Component Analysis
Li, Liubo, Li January 2017 (has links)
No description available.
|
242 |
The Brunn-Minkowski Inequality and Related Results
Mullin, Trista A. 25 June 2018 (has links)
No description available.
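No description is available for this entry; for reference, the classical inequality named in the title is usually stated as follows (standard form, not taken from the thesis):

```latex
% Brunn-Minkowski inequality: for nonempty compact sets A, B in R^n,
% with A + B = { a + b : a in A, b in B } the Minkowski sum,
\[
  \operatorname{vol}_n(A + B)^{1/n} \;\ge\; \operatorname{vol}_n(A)^{1/n} + \operatorname{vol}_n(B)^{1/n}.
\]
```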
|
243 |
EVALUATION OF FLATNESS TOLERANCE AND DATUMS IN COMPUTATIONAL METROLOGY
CHEPURI, SHAMBAIAH January 2000 (has links)
No description available.
|
244 |
An open source object oriented platform for rapid design of high performance path following interior-point methods
Chiş, Voicu January 2008 (has links)
Interior point methods (IPMs) are a powerful tool in convex optimization. From the theoretical point of view, the convex set of feasible solutions is represented by a so-called barrier functional, and the only information required by the algorithms is the evaluation of the barrier, its gradient, and its Hessian. As a result, IPM algorithms can be used for many types of convex problems, and their theoretical performance depends on the properties of the barrier. In practice, performance depends on how the data structure is exploited at the linear algebra level. In this thesis, we make use of the object-oriented paradigm supported by C++ to create a platform where the aforementioned generality of IPM algorithms is retained and the possibility to exploit the data structure is available. We illustrate the power of such an approach on optimization problems arising in the field of radiation therapy, in particular Intensity Modulated Radiation Therapy. / Thesis / Master of Science (MSc)
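To make the "barrier value, gradient, Hessian" interface concrete, the sketch below runs damped Newton steps of a generic log-barrier method. It is an illustrative toy, not the thesis's C++ platform; the problem data, step rule, and barrier are standard textbook choices.

```python
# Minimal sketch: damped-Newton barrier steps for min c^T x s.t. a_i^T x < b_i,
# using only the barrier's value, gradient, and Hessian.
import numpy as np

def log_barrier(A, b, x):
    """Value, gradient and Hessian of F(x) = -sum_i log(b_i - a_i^T x)."""
    s = b - A @ x                          # slacks, must stay strictly positive
    val = -np.sum(np.log(s))
    grad = A.T @ (1.0 / s)
    hess = A.T @ np.diag(1.0 / s**2) @ A
    return val, grad, hess

def central_path_step(c, A, b, x, t):
    """One damped Newton step on t*c^T x + F(x); all the solver needs is F."""
    _, g, H = log_barrier(A, b, x)
    step = np.linalg.solve(H, -(t * c + g))
    alpha = 1.0                            # backtrack to stay strictly feasible
    while np.any(b - A @ (x + alpha * step) <= 0):
        alpha *= 0.5
    return x + alpha * step

# toy usage: min x1 + x2 on the box 0 <= x <= 1, written as Ax <= b
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])
for t in [1.0, 10.0, 100.0]:               # crude central-path schedule
    for _ in range(20):
        x = central_path_step(c, A, b, x, t)
print(x)                                    # approaches the optimum (0, 0)
```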
|
245 |
Enhancements to Transportation Analysis and Simulation Systems
Jeihani Koohbanani, Mansoureh 22 December 2004 (has links)
Urban travel demand forecasting and traffic assignment models are important tools in developing transportation plans for a metropolitan area. These tools provide forecasts of urban travel patterns under various transportation supply conditions. The predicted travel patterns then provide useful information in planning the transportation system. Traffic assignment is the assignment of origin-destination flows to transportation routes, based on factors that affect route choice.
The urban travel demand models, developed in the mid-1950s, provided accurate and precise answers to the planning and policy issues being addressed at that time, which mainly revolved around expansion of the highway system to meet the rapidly growing travel demand. However, urban transportation planning and analysis have undergone changes over the years, while the structure of the travel demand models has remained largely unchanged except for the introduction of disaggregate choice models beginning in the mid-1970s. Legislative and analytical requirements that exceed the capabilities of these models and methodologies have driven new technical approaches such as TRANSIMS.
The Transportation Analysis and Simulation System, or TRANSIMS, is an integrated system of travel forecasting models designed to give transportation planners accurate and complete information on traffic impacts, congestion, and pollution. It was developed by the Los Alamos National Laboratory to address new transportation and air quality forecasting procedures required by the Clean Air Act, the Intermodal Surface Transportation Efficiency Act, and other regulations.
TRANSIMS includes six different modules: Population Synthesizer, Activity Generator, Route Planner, Microsimulator, Emissions Estimator, and Feedback. This package has been under development since 1994 and needs significant improvements within some of its modules. This dissertation enhances the interaction between the Route Planner and Microsimulator modules, in order to improve the dynamic traffic assignment process in TRANSIMS, and also enhances the Emissions Estimator module.
The traditional trip assignment is static in nature. Static assignment models assume that traffic is in a steady state, link volumes are time invariant, the time to traverse a link depends only on the number of vehicles on that link, and vehicle queues are stacked vertically and do not spill back onto upstream links in the network. Thus, a matrix of steady-state origin-destination (O-D) trip rates is assigned simultaneously to shortest paths from each origin to a destination. To address these limitations, dynamic traffic assignment models have been proposed. In dynamic traffic assignment models, the demand is allowed to be time varying, so that the number of vehicles passing through a link and the corresponding link travel times become time-dependent. In contrast with the static case, the dynamic traffic assignment problem is still relatively unexplored and a precise formulation is not clearly established. Most models in the literature do not present a solution algorithm, and among the presented methods, most are not suitable for large-scale networks. Among the suggested solution methodologies that claim to be applicable to large-scale networks, very few have actually been tested on such networks. Furthermore, most of these models have stability and convergence problems.
A solution methodology for computing dynamic user equilibria in large-scale transportation networks is presented in this dissertation. This method, which stems from the convex simplex method, routes one traveler at a time on the network and updates the link volumes and link travel times after each routing. Therefore, this method is dynamic in two aspects: it is time-dependent, and it routes travelers based on the most updated link travel times. To guarantee finite termination, an additional stopping criterion is adopted.
The proposed model is implemented within TRANSIMS, the Transportation Analysis and Simulation System, and is applied to a large-scale network. The current user equilibrium computation in TRANSIMS involves simply an iterative process between the Route Planner and the Microsimulator modules. In the first run, the Route Planner uses free-flow speeds on each link to estimate the travel times used to find the shortest paths; this is not accurate because other vehicles are present on the link, so the speed is not simply the free-flow speed. Therefore, some paths might not be the shortest paths due to congestion. The Microsimulator produces new travel times based on accurate vehicle speeds. These travel times are fed back to the Route Planner, and new routes are determined as the shortest paths for selected travelers. This procedure does not necessarily lead to a user equilibrium solution. The existing problems in this procedure are addressed by our proposed algorithm as follows.
TRANSIMS routes one person at a time but does not update the link travel times, so each traveler is routed regardless of other travelers on the network. The current stopping criterion is based only on visualization, and the procedure might oscillate. Also, the current traffic assignment spends a huge amount of time iterating frequently between the Route Planner and the Microsimulator. For example, in the Portland study, 21 iterations between the Route Planner and the Microsimulator were performed, taking 33:29 hours using three 500-MHz CPUs (parallel processing). These difficulties are addressed by distributing travelers on the network in a better manner from the beginning in the Route Planner, to avoid the frequent iterations between the Route Planner and the Microsimulator that are required to redistribute them. By updating the link travel times using a link performance function, a near-equilibrium is obtained in only one iteration. Travelers are distributed in the network with regard to other travelers in the first iteration; therefore, there is no need to redistribute them using the time-consuming iterative process. To avoid problems caused by the use of a link performance function, an iterative procedure between the current Route Planner and the Microsimulator is performed, and a user equilibrium is found after a few iterations. Using an appropriate descent-based stopping criterion, finite termination of the procedure is guaranteed. An illustration using real data pertaining to the transportation network of Portland, Oregon, is presented along with comparative analyses.
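The abstract does not spell out the link performance function or routing data structures; as a rough illustration of the "route one traveler at a time, then update link travel times" idea, the sketch below assumes a standard BPR (Bureau of Public Roads) volume-delay function and plain Dijkstra shortest paths. It is not the dissertation's convex-simplex-based procedure; the graph and link dictionaries are illustrative assumptions.

```python
# Sketch only: one traveler is routed at a time on current travel times, and link
# times are updated after each routing (assumption: BPR-style delay function).
import heapq

def bpr_time(t0, volume, capacity, alpha=0.15, beta=4):
    """Bureau of Public Roads travel-time function (a common assumed choice)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def shortest_path(graph, times, origin, dest):
    """Dijkstra on current link travel times; returns the list of links used."""
    dist, prev = {origin: 0.0}, {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            nd = d + times[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dest
    while node != origin:                       # walk predecessors back to origin
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))

def incremental_assignment(graph, t0, capacity, trips):
    """Route travelers one at a time, updating volumes and times after each one."""
    volume = {link: 0 for link in t0}
    times = dict(t0)
    for origin, dest in trips:                  # one traveler per (origin, dest) trip
        for link in shortest_path(graph, times, origin, dest):
            volume[link] += 1
            times[link] = bpr_time(t0[link], volume[link], capacity[link])
    return volume, times
```

Here `graph` maps each node to its outgoing neighbors, and `t0` (free-flow times) and `capacity` are dictionaries keyed by (tail, head) links; these names are illustrative only.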
The TRANSIMS framework contains a vehicle emissions module that estimates tailpipe emissions for light- and heavy-duty vehicles and evaporative emissions for light-duty vehicles. It uses as inputs the emissions arrays obtained from the Comprehensive Modal Emissions Model (CMEM). This dissertation describes and validates the TRANSIMS framework for modeling vehicle emissions. Specifically, it identifies an error in the model calculations and enhances the emission modeling formulation. Furthermore, the dissertation compares the TRANSIMS emission estimates to on-road emission measurements and to other state-of-the-art emission models, including the VT-Micro and CMEM models. / Ph. D.
|
246 |
A Deterministic Approach to Partitioning Neural Network Training Data for the Classification Problem
Smith, Gregory Edward 28 September 2006 (has links)
The classification problem in discriminant analysis involves identifying a function that accurately classifies observations as originating from one of two or more mutually exclusive groups. Because no single classification technique works best for all problems, many different techniques have been developed. For business applications, neural networks have become the most commonly used classification technique, and though they often outperform traditional statistical classification methods, their performance may be hindered by failings in the use of training data. This problem can be exacerbated by small data set size.
In this dissertation, we identify and discuss a number of potential problems with the typical random partitioning of neural network training data for the classification problem, and we introduce deterministic partitioning methods that overcome these obstacles and improve classification accuracy on new validation data. A traditional statistical distance measure enables this deterministic partitioning. Heuristics for both the two-group classification problem and the k-group classification problem are presented. We show that these heuristics result in generalizable neural network models that produce more accurate classification results, on average, than several commonly used classification techniques.
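The statistical distance measure behind the heuristic is not named in the abstract; purely as a sketch of the idea, the snippet below assumes Mahalanobis distance to each group's centroid and alternates ranked observations between the training and validation partitions. The thesis's actual two-group and k-group heuristics are more involved.

```python
# Sketch of distance-based deterministic partitioning (assumption: Mahalanobis
# distance to the group centroid; not the thesis's exact heuristic).
import numpy as np

def mahalanobis_to_centroid(X):
    """Mahalanobis distance of each row of X to the group mean."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def deterministic_split(X, y, train_frac=0.8):
    """Within each group, rank observations by distance and alternate them
    between training and validation so both partitions span the group."""
    train_idx, val_idx = [], []
    for g in np.unique(y):
        idx = np.where(y == g)[0]
        order = idx[np.argsort(mahalanobis_to_centroid(X[idx]))]
        period = max(int(round(1.0 / (1.0 - train_frac))), 2)
        for rank, i in enumerate(order):
            (val_idx if rank % period == period - 1 else train_idx).append(i)
    return np.array(train_idx), np.array(val_idx)
```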
In addition, we compare several two-group simulated and real-world data sets with respect to the interior and boundary positions of observations within their groups' convex polyhedrons. We show by example that projecting the interior points of simulated data to the boundary of their group polyhedrons generates convex shapes similar to real-world data group convex polyhedrons. Our two-group deterministic partitioning heuristic is then applied to the repositioned simulated data, producing results superior to several commonly used classification techniques. / Ph. D.
|
247 |
Distributed, Stable Topology Control of Multi-Robot Systems with Asymmetric Interactions
Mukherjee, Pratik 17 June 2021 (links)
Multi-robot systems have witnessed a swell of interest in the past few years because of their various applications, such as agricultural autonomy, medical robotics, industrial and commercial automation, and search and rescue. In this thesis, we investigate the behavior of multi-robot systems with respect to stable topology control in asymmetric interaction settings.
From a theoretical perspective, we first classify stable topologies and identify the conditions under which we can determine whether a topology is stable or not. Then, we design a limited field-of-view (FOV) controller for robots that use sensors such as cameras for coordination, which induce asymmetric robot-to-robot interactions. Finally, we conduct a rigorous theoretical analysis to qualitatively determine which interactions are suitable for stable directed topology control of multi-robot systems with asymmetric interactions. In this regard, we solve an optimal topology selection problem to determine the topology with the best interactions based on a suitable metric that represents the quality of interaction. Further, we solve this optimization problem in a distributed manner and validate the distributed formulation with extensive simulations. For experimental purposes, we developed a portable multi-robot testbed that enables us to conduct multi-robot topology control experiments in both indoor and outdoor settings and validate our theoretical findings.
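The stability conditions themselves are the subject of the thesis and are not reproduced here; as a hedged illustration of how a limited camera FOV induces asymmetric (directed) interactions, the sketch below builds the directed interaction graph from robot positions and headings and checks a generic rooted-reachability condition. The FOV and range values, and the reachability check, are illustrative assumptions only.

```python
# Illustrative sketch only: directed interaction graph induced by a limited camera
# field of view, plus a generic check that some robot can reach all others along
# directed edges (a standard connectivity notion, used here purely as an example).
import numpy as np

def fov_edges(positions, headings, fov_deg=60.0, sensing_range=5.0):
    """Directed edge i -> j if robot i sees robot j inside its camera FOV."""
    n = len(positions)
    edges = set()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            offset = positions[j] - positions[i]
            dist = np.linalg.norm(offset)
            bearing = np.arctan2(offset[1], offset[0]) - headings[i]
            bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
            if dist <= sensing_range and abs(bearing) <= np.radians(fov_deg) / 2:
                edges.add((i, j))
    return edges

def has_rooted_reachability(n, edges):
    """True if some robot can reach every other robot along directed edges."""
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
    for root in range(n):
        seen, stack = {root}, [root]
        while stack:
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if len(seen) == n:
            return True
    return False
```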
Therefore, the contribution of this thesis is twofold: i) We provide a rigorous theoretical analysis of stable coordination of multi-robot systems with directed graphs, demonstrating the graph structures that induce stability for a broad class of coordination objectives;
ii) We develop a testbed that enables validating multi-robot topology control in both indoor and outdoor settings. / Doctor of Philosophy / In this thesis, we address the problem of collaborative tasks in a multi-robot system, where we investigate how interactions among members of the multi-robot system can induce instability. We conduct a rigorous theoretical analysis, identify when the system will be unstable, and hence classify the interactions that lead to stable multi-robot coordination. Our theoretical analysis tries to emulate realistic interactions in a multi-robot system, such as the limited interactions (blind spots) that exist when on-board cameras are used to detect and track other robots in the vicinity, and we study how these limited interactions induce instability in the multi-robot system. To verify our theoretical analysis experimentally, we developed a portable multi-robot testbed that enables us to test our theory on stable coordination of a multi-robot system with a team of Unmanned Aerial Vehicles (UAVs) in both indoor and outdoor settings. With this feature of the testbed, we are able to investigate the difference in the multi-robot system's behavior when tested in controlled indoor environments versus an uncontrolled outdoor environment. Ultimately, the motivation behind this thesis is to emulate realistic conditions for multi-robot cooperation and to investigate suitable conditions for robots to work in a stable and safe manner. Therefore, our contribution is twofold: i) We provide a rigorous theoretical analysis that enables stable coordination of multi-robot systems with limited interactions induced by sensor capabilities such as cameras; ii) We developed a testbed that enables testing of our theoretical contribution with a team of real robots in realistic environmental conditions.
|
248 |
Interval convex programming, orthogonal linear programming, and program generation procedures
Ristroph, John Heard 05 January 2010 (has links)
Three topics are developed: interval convex programming, orthogonal linear programming, and program generation techniques. The interval convex programming problem is similar to the convex programming problem of the real number system, except that all parameters are specified as intervals of real numbers rather than as real scalars. The interval programming solution procedure involves the solution of a series of 2n real-valued convex programs, where n is the dimension of the space. The solution of an interval programming problem is an interval vector which contains all possible solutions to any real-valued convex program that may be realized.
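The 2n-program procedure itself is not detailed in the abstract; the toy snippet below only illustrates what an interval-valued datum and a componentwise solution range look like, by solving the realized linear programs at the endpoints of an interval right-hand side with SciPy. It is a sampled illustration, not the thesis's algorithm, and it makes no enclosure guarantee.

```python
# Toy illustration only: solve the realized LPs at the endpoints of an interval
# parameter and report the componentwise range of the optimizers.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[-1.0, -1.0]])          # encodes x1 + x2 >= b as -x1 - x2 <= -b
b_interval = (4.0, 6.0)               # right-hand side known only as an interval

solutions = []
for b in b_interval:                  # endpoint realizations of the interval datum
    res = linprog(c, A_ub=A, b_ub=[-b], bounds=[(0, None), (0, None)])
    if res.success:
        solutions.append(res.x)

solutions = np.array(solutions)
lo, hi = solutions.min(axis=0), solutions.max(axis=0)
print("sampled solution range per coordinate:", list(zip(lo, hi)))
```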
Attempts to improve the efficiency of the interval convex programming problem led to the eventual development of a new solution procedure for the real-valued linear programming problem: orthogonal linear programming. This new algorithm evolved from some heuristic procedures that were initially examined in the attempt to improve solution efficiency. In the course of testing these heuristics, which were unsuccessful, procedures were developed whereby it is possible to generate discrete and continuous mathematical programs with randomly chosen parameters but known solutions. / Ph. D.
|
249 |
A Study of Machine Learning Approaches for Biomedical Signal Processing
Shen, Minjie 10 June 2021 (has links)
The introduction of high-throughput molecular profiling technologies provides the capability of studying diverse biological systems at the molecular level. However, due to various limitations of measurement instruments, data preprocessing is often required in biomedical research. Improper preprocessing will have a negative impact on downstream analytics tasks. This thesis studies two important preprocessing topics: missing value imputation and between-sample normalization.
Missing data is a major issue in quantitative proteomics data analysis. While many methods have been developed for imputing missing values in high-throughput proteomics data, comparative assessment of the accuracy of existing methods remains inconclusive, mainly because the true missing mechanisms are complex and the existing evaluation methodologies are imperfect. Moreover, few studies have provided an outlook on current and future development.
We first report an assessment of eight representative methods collectively targeting three typical missing mechanisms. The selected methods are compared on both realistic simulation and real proteomics datasets, and the performance is evaluated using three quantitative measures. We then discuss fused regularization matrix factorization, a popular low-rank matrix factorization framework with similarity and/or biological regularization, which is extendable to integrating multi-omics data such as gene expressions or clinical variables. We further explore the potential application of convex analysis of mixtures, a biologically inspired latent variable modeling strategy, to missing value imputation. The preliminary results on proteomics data are provided together with an outlook into future development directions.
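As a rough sketch of the low-rank matrix factorization idea (without the fused similarity or biological regularization terms, which are the thesis's contribution), the snippet below imputes missing entries by alternating least squares with a plain ridge penalty. The rank and hyperparameters are arbitrary illustrative choices.

```python
# Minimal sketch of low-rank matrix-factorization imputation via alternating
# least squares; the fused/biological regularization of FRMF is omitted here.
import numpy as np

def mf_impute(X, rank=3, lam=0.1, n_iter=50, seed=0):
    """Impute NaNs in X by fitting X ~ W @ H on the observed entries only."""
    rng = np.random.default_rng(seed)
    mask = ~np.isnan(X)
    n, m = X.shape
    W = rng.normal(size=(n, rank))
    H = rng.normal(size=(rank, m))
    for _ in range(n_iter):
        for i in range(n):                      # ridge-regularized row-factor update
            obs = mask[i]
            A = H[:, obs] @ H[:, obs].T + lam * np.eye(rank)
            W[i] = np.linalg.solve(A, H[:, obs] @ X[i, obs])
        for j in range(m):                      # ridge-regularized column-factor update
            obs = mask[:, j]
            A = W[obs].T @ W[obs] + lam * np.eye(rank)
            H[:, j] = np.linalg.solve(A, W[obs].T @ X[obs, j])
    X_hat = W @ H
    return np.where(mask, X, X_hat)             # keep observed values, fill missing ones
```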
While a few winners emerged from our comparative assessment, data-driven evaluation of imputation methods is imperfect because performance is evaluated indirectly on artificial missing or masked values, not on authentic missing values. Imputation accuracy may vary with signal intensity. Fused regularization matrix factorization provides a possibility of incorporating external information. Convex analysis of mixtures presents a biologically plausible new approach.
Data normalization is essential to ensure accurate inference and comparability of gene expressions across samples or conditions. Ideally, gene expressions should be rescaled based on consistently expressed reference genes. However, for normalizing biologically diverse samples, the most commonly used reference genes have exhibited striking expression variability, and distribution-based approaches can be problematic when differentially expressed genes are significantly asymmetric.
We introduce a Cosine score based iterative normalization (Cosbin) strategy to normalize biologically diverse samples. The between-sample normalization is based on iteratively identified consistently expressed genes, where differentially expressed genes are sequentially eliminated according to scale-invariant Cosine scores.
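The published Cosbin algorithm is distributed as R scripts; the Python sketch below is only one simplified reading of the stated idea: iteratively keep the genes whose cross-group expression profile is closest, in cosine angle, to a flat "consistently expressed" profile, and rescale samples by those genes. The dropping schedule and thresholds are illustrative assumptions, not the published method.

```python
# Simplified reading of cosine-score-based iterative normalization (not the
# authors' Cosbin R implementation).
import numpy as np

def cosine_to_flat(group_means):
    """Cosine similarity of each gene's group-mean profile to the all-ones vector."""
    ones = np.ones(group_means.shape[1])
    norms = np.linalg.norm(group_means, axis=1) * np.linalg.norm(ones)
    return (group_means @ ones) / np.maximum(norms, 1e-12)

def cosbin_like_normalize(X, groups, n_rounds=5, drop_frac=0.2):
    """X: genes x samples expression matrix; groups: per-sample group labels."""
    X = X.astype(float)
    keep = np.arange(X.shape[0])
    for _ in range(n_rounds):
        # rescale each sample using the currently retained (putatively consistent) genes
        ref = X[keep].mean()
        scale = ref / X[keep].mean(axis=0)
        X = X * scale
        # score genes by cosine of their group-mean profile to a flat profile
        means = np.column_stack([X[keep][:, groups == g].mean(axis=1)
                                 for g in np.unique(groups)])
        scores = cosine_to_flat(means)
        n_drop = int(drop_frac * len(keep))
        if n_drop == 0:
            break
        keep = keep[np.argsort(scores)[n_drop:]]   # drop the least consistent genes
    return X, keep
```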
We evaluate the performance of Cosbin and four other representative normalization methods (Total count, TMM/edgeR, DESeq2, DEGES/TCC) on both idealistic and realistic simulation data sets. Cosbin consistently outperforms the other methods across various performance criteria. Implemented in open-source R scripts and applicable to grouped or individual samples, the Cosbin tool will allow biologists to detect subtle yet important molecular signals across known or novel phenotypic groups. / Master of Science / Data preprocessing is often required due to various limitations of measurement instruments in biomedical research. This thesis studies two important preprocessing topics: missing value imputation and between-sample normalization.
Missing data is a major issue in quantitative proteomics data analysis. Imputation is the process of substituting for missing values. We propose a more realistic assessment workflow which can preserve the original data distribution, and then assess eight representative general-purpose imputation strategies. We explore two biologically inspired imputation approaches: fused regularization matrix factorization (FRMF) and convex analysis of mixtures (CAM) imputation. FRMF integrates external information such as clinical variables and multi-omics data into imputation, while CAM imputation incorporates biological assumptions. We show that the integration of biological information improves the imputation performance.
Data normalization is required to ensure correct comparison. For gene expression data, between-sample normalization is needed. We propose a Cosine score based iterative normalization (Cosbin) strategy to normalize biologically diverse samples. We show that Cosbin significantly outperforms other methods in both idealistic and realistic simulations. Implemented in open-source R scripts and applicable to grouped or individual samples, the Cosbin tool will allow biologists to detect subtle yet important molecular signals across known or novel cell types.
|
250 |
A Complexity-Theoretic Perspective on Convex Geometry
Nadimpalli, Shivam January 2024 (has links)
This thesis considers algorithmic and structural aspects of high-dimensional convex sets with respect to the standard Gaussian measure.
Among our contributions, (i) we introduce a notion of "influence" for convex sets that yields the first quantitative strengthening of Royen's celebrated Gaussian correlation inequality; (ii) we investigate the approximability of general convex sets by intersections of halfspaces, where the approximation quality is measured with respect to the standard Gaussian distribution; and (iii) we give the first lower bounds for testing convexity and estimating the distance to convexity of an unknown set in the black-box query model.
Our results and techniques are inspired by a number of fundamental ingredients and results from the analysis of Boolean functions in complexity theory, such as the influence of variables, noise sensitivity, and various extremal constructions.
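As a small concrete companion to contribution (ii), the snippet below estimates the standard Gaussian measure of an intersection of halfspaces by Monte Carlo sampling; it is unrelated to the thesis's proofs and is included only to make the objects under study tangible.

```python
# Monte Carlo estimate of the standard Gaussian measure of {x : Wx <= t},
# i.e. an intersection of halfspaces (the approximators in contribution (ii)).
import numpy as np

def gaussian_measure_halfspace_intersection(W, t, n_samples=200_000, seed=0):
    """Estimate Pr_{x ~ N(0, I_n)}[ W x <= t componentwise ]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, W.shape[1]))
    inside = np.all(x @ W.T <= t, axis=1)
    return inside.mean()

# example: the cube [-1, 1]^3 written as an intersection of 6 halfspaces
W = np.vstack([np.eye(3), -np.eye(3)])
t = np.ones(6)
print(gaussian_measure_halfspace_intersection(W, t))   # about (2*Phi(1) - 1)^3 ~= 0.318
```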
|