1. A New Model for Providing Prehospital Medical Care in Large Stadiums / Spaite, Daniel W.; Criss, Elizabeth A.; Valenzuela, Terence D.; Meislin, Harvey W.; Smith, Roger; Nelson, Allie. 01 January 1988
To determine proper priorities for the provision of health care in large stadiums, we studied the pattern of medical incidents in a major college facility and combined it with previously reported data from four other large stadiums. Medical incidents were uncommon (1.20 to 5.23 per 10,000 people), and true medical emergencies were rarer still (0.09 to 0.31 per 10,000 people). Cardiac arrest was rare (0.01 to 0.04 events per 10,000 people); however, the rates of successful resuscitation in three studies were 85% or higher. The previous studies were descriptive and offered no specific recommendations on medical aid system configuration or response times. A model is proposed to deliver a rapid advanced life support response to victims of cardiac arrest. We believe that the use of this model in large stadiums throughout the United States could save as many as 100 lives during each football season.
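The per-attendee rates above translate into small but non-negligible absolute counts once season attendance is taken into account. Below is a minimal sketch of that arithmetic, using the quoted per-10,000 rates and an assumed schedule of six home games at 55,000 attendees each (the schedule and attendance figures are illustrative, not from the study):

```python
# Back-of-the-envelope event counts implied by the per-10,000-attendee rates
# quoted in the abstract. Games per season and attendance per game are
# assumptions made purely for illustration.
GAMES_PER_SEASON = 6
ATTENDANCE_PER_GAME = 55_000

RATES_PER_10K = {
    "medical incidents": (1.20, 5.23),
    "true emergencies":  (0.09, 0.31),
    "cardiac arrests":   (0.01, 0.04),
}

season_attendance = GAMES_PER_SEASON * ATTENDANCE_PER_GAME
for event, (low, high) in RATES_PER_10K.items():
    lo = season_attendance / 10_000 * low
    hi = season_attendance / 10_000 * high
    print(f"{event}: roughly {lo:.1f} to {hi:.1f} expected per season")
```

Even at the low end of the cardiac arrest range, events accumulate across the country's many large stadiums each season, which is consistent with the authors' estimate that a rapid advanced life support response model could save on the order of 100 lives per football season nationally.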
2. Decentralized model reference adaptive systems with variable structure controllers / Al-Abbass, Faysal. January 1986
No description available.
3. Integrating local information for inference and optimization in machine learning / Zhu, Zhanxing. January 2016
In practice, machine learners often care about two key issues: how to obtain a more accurate answer from limited data, and how to handle large-scale data (often referred to as "Big Data" in industry) for efficient inference and optimization. One solution to the first issue is aggregating learned predictions from diverse local models. For the second issue, integrating information from subsets of the large-scale data is a proven way to reduce computation. In this thesis, we develop novel frameworks and schemes that address several scenarios within each of these two issues. For aggregating diverse models, in particular probabilistic predictions from different models, we introduce a spectrum of compositional methods, Rényi divergence aggregators, which are maximum entropy distributions subject to biases from individual models, with the Rényi divergence parameter dependent on the bias. Experiments on various simulated and real-world datasets verify the findings. We also show the theoretical connections between Rényi divergence aggregators and machine learning markets with isoelastic utilities. The second issue involves inference and optimization with large-scale data. We consider two important scenarios: optimizing large-scale Convex-Concave Saddle Point problems with a Separable structure, referred to as Sep-CCSP, and large-scale Bayesian posterior sampling. Two settings of the Sep-CCSP problem are considered: with strongly convex functions and with non-strongly convex functions. We develop efficient stochastic coordinate descent methods for both cases, which allow fast parallel processing of large-scale data. Both theoretically and empirically, the developed methods are shown to perform comparably to, and often better than, state-of-the-art methods. To handle the scalability issue in Bayesian posterior sampling, the stochastic approximation technique is employed, i.e., only a small mini-batch of data items is touched to approximate the full likelihood or its gradient. To deal with the subsampling error introduced by stochastic approximation, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining the desired target distribution. This method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
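The stochastic approximation step described above can be illustrated with plain stochastic-gradient Langevin dynamics, in which only a mini-batch enters each gradient evaluation. The sketch below is that simple baseline applied to a toy Gaussian model, not the covariance-controlled adaptive thermostat developed in the thesis, whose purpose is precisely to dissipate the extra noise this subsampling introduces:

```python
import numpy as np

# Minimal stochastic-gradient Langevin dynamics (SGLD) sketch for a 1-D Gaussian
# mean with a flat prior and unit observation variance. This is the plain
# stochastic-approximation baseline, not the thesis's adaptive thermostat.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)   # synthetic data set
N, batch_size, step = len(data), 100, 1e-4

theta, samples = 0.0, []
for t in range(5_000):
    batch = rng.choice(data, size=batch_size, replace=False)
    # Mini-batch estimate of the full-data log-likelihood gradient.
    grad_est = N / batch_size * np.sum(batch - theta)
    # Langevin update: half gradient step plus Gaussian injection noise.
    theta += 0.5 * step * grad_est + np.sqrt(step) * rng.normal()
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1_000:]))
```

The N/batch_size scaling keeps the mini-batch gradient an unbiased estimate of the full-data gradient; the parameter-dependent variance of that estimate is the subsampling error the proposed thermostat is designed to control.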
4. Some exact and approximate methods for large scale systems steady-state availability analysis / Chien, Ying-Che. January 1995
System availability is the probability that the system is operable at instant t. Markov chains are one model used for system availability analysis. The exact analytical solution for steady-state system availability in terms of component failure rates and repair rates is difficult to find because of the large number of simultaneous linear equations that result from the model. Although exact analytical solutions have been developed for series and parallel systems and for some other small systems, they have not been developed for large-scale general systems with n distinct components. Some methods for approximate analytical solutions have been suggested, but limitations on network types, oversimplified state-merging conditions, and the lack of predicted approximation errors make these methods difficult to use. Markov state transition graphs can be classified as symmetric or asymmetric. A symmetric Markov graph has two-way transitions between each pair of communicating nodes; an asymmetric Markov graph has at least one pair of communicating nodes with only one-way transitions. In this research, failure rates and repair rates are assumed to be component-dependent only. Exact analytical solutions are developed for systems with symmetric Markov graphs. Pure series systems, pure parallel systems, and general k-out-of-n systems are examples of systems with symmetric Markov graphs. Instead of solving a large number of linear equations from the Markov model to find the steady-state system availability, it is shown that only algebraic operations on component failure rates and repair rates are necessary; in fact, for this class of systems, the exact analytical solutions are relatively easy to obtain. Approximate analytical solutions for systems with asymmetric Markov graphs are also developed, based on the exact solutions for the corresponding symmetric Markov graphs. The approximate solutions are shown to be close to the exact solutions for large-scale, complex systems, and they are shown to be lower bounds for the exact solutions. Design principles to improve system availability are derived from the analytical solutions, and important components can be identified easily with the iteration procedure and computer programs provided in this research.
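For the simplest symmetric structures, the closed-form results reduce to familiar component-availability algebra: each component's steady-state availability is μ/(λ + μ), a series system is up only when every component is up, and a pure parallel system is down only when every component is down. Below is a minimal sketch of that algebra for independent components; the component rates are made up purely for illustration:

```python
import math

# Steady-state availability from component failure rates (lambda) and repair
# rates (mu): A_i = mu_i / (lambda_i + mu_i). A series system requires every
# component up; a parallel system fails only when every component is down.
# The rates below are illustrative, not taken from the thesis.
components = [
    {"name": "pump A", "failure_rate": 1e-4, "repair_rate": 1e-2},
    {"name": "pump B", "failure_rate": 2e-4, "repair_rate": 1e-2},
    {"name": "valve",  "failure_rate": 5e-5, "repair_rate": 5e-3},
]

def availability(c):
    return c["repair_rate"] / (c["failure_rate"] + c["repair_rate"])

series = math.prod(availability(c) for c in components)
parallel = 1.0 - math.prod(1.0 - availability(c) for c in components)

print(f"series availability:   {series:.6f}")
print(f"parallel availability: {parallel:.6f}")
```

General k-out-of-n structures with independent, identical components follow the same pattern, summing the binomial probabilities of having at least k components up.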
5. Generic VLSI architectures: chip designs for image processing applications / Le Riguer, E. M. J. January 2001
No description available.
6. Epi-CHO, an episomal expression system for recombinant protein production in CHO cells / Kunaparaju, Raj Kumar. Biotechnology & Biomolecular Sciences, Faculty of Science, UNSW. January 2008
The aim of this project is to develop a transient expression system for Chinese Hamster Ovary (CHO) cells based on autonomous replication and retention of plasmid DNA. The expression system, named Epi-CHO, comprises (1) a recombinant CHO-K1 cell line encoding the Polyoma (Py) virus large T-antigen (PyLT-Ag), and (2) a DNA expression vector, pPy/EBV, encoding the Py origin of replication (PyOri) for autonomous replication and the Epstein-Barr virus (EBV) Nuclear Antigen-1 (EBNA-1) and EBV origin of replication (OriP) for plasmid retention. The CHO-K1 cell line expressing PyLT-Ag, named CHO-T, was adapted to suspension growth in serum-free medium (EXCELL-302) to facilitate large-scale transient transfection and recombinant (r) protein production. PyLT-Ag expressed in CHO-T supported replication of PyOri-containing plasmids and enhanced growth and r-protein production. A scalable cationic lipid-based transfection was optimised for CHO-T cells using LipofectAMINE-2000. Destabilised Enhanced Green Fluorescent Protein (D2EGFP) and Human Growth Hormone (HGH) were used as reporter proteins to demonstrate transgene expression and productivity. Transfection of CHO-T cells with the pPy/EBV vector encoding D2EGFP showed prolonged and enhanced EGFP expression, and transfection with pPy/EBV encoding HGH resulted in a final concentration of 75 mg/L of HGH in the culture supernatant 11 days after transfection.
7. The role of language and culture in large-scale assessment: a study of the 2009 Texas Assessment of Knowledge and Skills / Lima Gonzalez, Cynthia Esperanza. 09 September 2013
The inclusion of all students in large-scale assessment mandated by No Child Left Behind (2003) requires that these assessments be developed to allow all students to show what they know, and that the results be comparable and equitable across diverse cultural and linguistic populations. This study examined the validity of the 5th grade 2009 Science Texas Assessment of Knowledge and Skills (TAKS) for diverse cultural and linguistic groups. The student groups were selected based on all possible combinations of three variables: ethnicity (White and Hispanic), test language (English and Spanish), and Limited English Proficiency (LEP) classification. Validity was assessed at the item and construct levels and was analyzed from psychometric, cultural, and linguistic stances. At the item level, Differential Item Functioning (DIF) analysis was conducted using the Mantel-Haenszel procedure. Biased items were found for all pairwise group comparisons, with a high number of DIF items between groups that differed in English proficiency (approximately 50% of the test items) and a low number of DIF items between groups that differed only in ethnicity (approximately 15% of the test items). However, an analysis of the Item Characteristic Curves (ICCs) revealed that items classified by the Mantel-Haenszel procedure as advantaging the LEP groups did so only for students at low proficiency levels, while at high proficiency levels the advantage was for the non-LEP groups. At the construct level, the structure of the English version of the TAKS was compared across three student groups using Confirmatory Factor Analysis with multiple groups. The hypothesized structure based on the TAKS blueprint was rejected for the group composed of White, non-LEP students (MLM χ²(734) = 1042.110; CFI = 0.845; RMSEA = 0.020), but it was a good fit for the Hispanic, non-LEP group (MLM χ²(734) = 819.356; CFI = 0.980; RMSEA = 0.011) and the LEP group (MLM χ²(734) = 805.124; CFI = 0.985; RMSEA = 0.010). The results call for a reinterpretation of the achievement gap observed in TAKS scores between these populations and highlight the need for further development of guidelines to help build fair large-scale tests for all students.
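The Mantel-Haenszel procedure referenced above compares, within each ability stratum (usually the total test score), the odds of a correct response for the reference and focal groups and pools them into a common odds ratio. Below is a minimal sketch of that computation with invented counts; operational DIF analyses add the Mantel-Haenszel chi-square test and the ETS delta-scale classification:

```python
import math

# Mantel-Haenszel common odds ratio for one item, stratified by total score.
# Within stratum k: A = reference-group correct, B = reference-group incorrect,
#                   C = focal-group correct,     D = focal-group incorrect.
# alpha_MH = sum_k(A_k * D_k / T_k) / sum_k(B_k * C_k / T_k)
# The counts below are invented purely for illustration.
strata = [
    # (A, B, C, D) per score stratum
    (40, 60, 30, 70),
    (55, 45, 48, 52),
    (70, 30, 66, 34),
    (85, 15, 82, 18),
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den

# Odds ratios far from 1 (MH D-DIF far from 0) flag potential DIF.
mh_d_dif = -2.35 * math.log(alpha_mh)
print(f"alpha_MH = {alpha_mh:.3f}, MH D-DIF = {mh_d_dif:.3f}")
```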
8. Measuring the Universe with High-Precision Large-Scale Structure / Mehta, Kushal Tushar. January 2014
Baryon acoustic oscillations (BAOs) are used to obtain precision measurements of cosmological parameters from large-scale surveys. Although BAO measurements are robust against most systematics, certain theoretical uncertainties can affect BAO and galaxy clustering measurements. In this thesis I use data from the Sloan Digital Sky Survey (SDSS) to measure cosmological parameters, and I use N-body and smoothed-particle hydrodynamic (SPH) simulations with halo occupation distributions (HODs) to quantify the effect of those theoretical uncertainties. I investigate the effect of galaxy bias on BAO measurements by creating mock galaxy catalogs from large N-body simulations at z = 1. I find no additional shift in the acoustic scale (0.10% ± 0.10%) for the less biased HODs (b < 3) and a mild shift (0.79% ± 0.31%) for the highly biased HODs (b > 3). I present the methodology and implementation of the simple one-step reconstruction technique introduced by Eisenstein et al. (2007), applied to biased tracers in N-body simulations. Reconstruction reduces the error bars on the acoustic scale measurement by a factor of 1.5-2 and removes any additional shift due to galaxy bias for all HODs (0.07% ± 0.15%). Padmanabhan et al. (2012) and Xu et al. (2012) use this reconstruction technique on the SDSS DR7 data to measure D_V(z = 0.35)(r_s^fid/r_s) = 1356 ± 25 Mpc. Here I use this measurement in combination with measurements from the cosmic microwave background and the Supernova Legacy Survey to constrain cosmological parameters. I find the data consistent with a ΛCDM Universe with flat geometry. In particular, I measure H₀ = 69.8 ± 1.2 km/s/Mpc, w = -0.97 ± 0.17, and Ω_k = -0.004 ± 0.005 in the ΛCDM, wCDM, and oCDM models, respectively. Next, I measure the effect of large-scale (5 Mpc) halo environment density on the HOD using an SPH simulation at z = 0, 0.35, 0.5, 0.75, and 1.0. I do not find any significant dependence of the HOD on halo environment density for different galaxy mass thresholds, for red and blue galaxies, or at different redshifts. I use the MultiDark N-body simulation to measure the possible effect of environment density on the galaxy correlation function ξ(r). I find that environment density enhances ξ(r) by ~3% at scales of 1-20 h⁻¹ Mpc at z = 0, and by up to ~12% at 0.3 h⁻¹ Mpc and ~8% at 1-4 h⁻¹ Mpc at z = 1.
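The distance quoted from Padmanabhan et al. (2012) and Xu et al. (2012) is the volume-averaged BAO distance, D_V(z) = [(1 + z)² D_A²(z) · cz/H(z)]^(1/3), rescaled by the fiducial-to-true sound horizon ratio. Below is a minimal sketch of D_V for a flat ΛCDM cosmology; the H₀ and Ω_m values are illustrative placeholders, not the thesis's best fit:

```python
import numpy as np
from scipy.integrate import quad

# Volume-averaged BAO distance D_V(z) = [(1+z)^2 * D_A(z)^2 * c*z / H(z)]^(1/3)
# for a flat LambdaCDM cosmology. Parameter values are illustrative only.
C_KM_S = 299_792.458            # speed of light [km/s]
H0, OMEGA_M = 70.0, 0.30        # assumed H0 [km/s/Mpc] and matter density

def hubble(z):
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + (1 - OMEGA_M))

def comoving_distance(z):
    integral, _ = quad(lambda zp: C_KM_S / hubble(zp), 0.0, z)
    return integral                                   # [Mpc], flat geometry

def d_v(z):
    d_a = comoving_distance(z) / (1 + z)              # angular diameter distance
    return ((1 + z) ** 2 * d_a ** 2 * C_KM_S * z / hubble(z)) ** (1 / 3)

print(f"D_V(0.35) ~ {d_v(0.35):.0f} Mpc")
```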
9. Decomposition and decentralized output control of large-scale systems / Finney, John D. 05 1900
No description available.
10. Generating Accurate Dependencies for Large Software / Wang, Pei. 06 November 2014
Dependencies between program elements can reflect the architecture, design, and implementation of a software project. According to an industry report, intra- and inter-module dependencies can be a significant source of latent threats to software maintainability in long-term software development, especially when the software reaches millions of lines of code.
This thesis introduces the design and implementation of an accurate and scalable analysis tool that extracts code dependencies from large C/C++ software projects. The tool analyzes both symbol-level and module-level dependencies of a software system and provides a utilization-based dependency model. The accurate dependencies generated by the tool can serve as input to other software analysis suites; the results alone can also help developers identify potentially underutilized and inconsistent dependencies in the software. Such information points to refactoring opportunities and assists developers with large-scale refactoring tasks.
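A rough approximation of symbol-level dependencies between compiled C/C++ modules can be built by matching each object file's undefined symbols against the symbols defined in the others, for example by parsing `nm` output. The sketch below is only a toy illustration of that idea under assumed file names; it is not the thesis's tool, which works at a finer granularity and adds the utilization-based model:

```python
import subprocess
from collections import defaultdict

# Toy dependency extraction: an object file is taken to depend on whichever
# object file defines a symbol that it references but does not define itself.
object_files = ["parser.o", "lexer.o", "main.o"]   # assumed example inputs

defined = {}                      # symbol name -> defining object file
referenced = defaultdict(set)     # object file -> undefined symbols it uses

for obj in object_files:
    out = subprocess.run(["nm", obj], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        kind, symbol = parts[-2], parts[-1]
        if kind == "U":                         # referenced here, defined elsewhere
            referenced[obj].add(symbol)
        elif kind.upper() in {"T", "D", "B"}:   # text/data/bss definitions
            defined[symbol] = obj

deps = defaultdict(set)
for obj, symbols in referenced.items():
    for sym in symbols:
        provider = defined.get(sym)
        if provider and provider != obj:
            deps[obj].add(provider)

for obj in object_files:
    print(f"{obj} -> {sorted(deps[obj])}")
```

Aggregating the per-object edges by directory or library then yields the kind of module-level view that large-scale refactoring decisions are usually based on.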