41

James Moffett’s Search for Harmony: A Biography of One Reformer’s Evolution in English Education

Potts, Shannon Alice January 2024 (has links)
James Moffett (1929-1996) was an American educator, theorist, author, and consultant whose work focused on the reform of English education, in particular writing instruction. The researcher of this dissertation contends that, despite his tremendous influence on the field of English education, Moffett has not received proper credit for his contributions. This neglect stems from the bifurcated path his career took as he developed an interest not only in reforming English education but also in reforming the educational system itself. This biography traces Moffett's contributions to the field of English education and considers how the story of his life shaped his professional work. The researcher looks back across James Moffett's story, in his publications, professional writings, and personal life, to consider him as an integrated person and to ask whether a central driving force of his professional work can be defined. The biography uses Hamilton's concept of polychromatic portraiture to draw together knowledge from archival documents in The James Porter Moffett Papers at the University of California, Santa Barbara, the Carnegie Corporation of New York Records at Columbia University, and archival documents hosted on ERIC, along with published research on Moffett and his era in English education, interviews with Moffett's contemporaries, and biographical references contained in Moffett's own writing.
42

Machine-Learned Anatomic Subtyping, Longitudinal Disease Evaluation and Quantitative Image Analysis on Chest Computed Tomography: Applications to Emphysema, COPD, and Breast Density

Wysoczanski, Artur January 2024 (has links)
Chronic obstructive pulmonary disease (COPD) and emphysema together constitute one of the leading causes of death in the United States and worldwide; meanwhile, breast cancer has the highest incidence and second-highest mortality burden of all cancers in women. Imaging markers relevant to each of these conditions are readily identifiable on chest computed tomography (CT): (1) visually appreciable variants in airway tree structure exist that are associated with increased odds of developing COPD; (2) CT emphysema subtypes (CTES), based on lung texture and spatial features, have been identified by unsupervised clustering and correlate with functional measures and clinical outcomes; (3) dysanapsis, or the ratio of airway caliber to lung volume, is the strongest known predictor of COPD risk; and (4) breast density (i.e., the extent of fibroglandular tissue within the breast) is strongly associated with breast cancer risk. Machine- and deep-learning frameworks present an opportunity to address unmet needs in each of these directions by leveraging data from large CT cohorts. Unsupervised learning approaches serve to discover new, image-based phenotypes. While topologic and geometric variation in the structure of the CT-resolved airway tree is well described, tree-structural subtypes are not fully characterized. Similarly, while the clinical correlates of CTES have been described in large cohort studies, the association of CTES with structural and functional measures of the lung parenchyma is only partially described, and the time-dependent evolution of emphysematous lung texture has not been studied. Supervised approaches are required to automate CT image assessment or to estimate CT-based measures from incomplete input data. While dysanapsis can be directly quantified on full-lung CT, the lungs are often only partially imaged in large CT datasets; total lung volume must then be regressed from the observed partial image. Breast density grades, meanwhile, are generally assessed visually, which is laborious to perform at scale. Moreover, current automated methods rely on segmentation followed by intensity thresholding, excluding higher-order features that may contribute to the radiologist's assessment. In this thesis, we present a series of machine-learning methods that address each of these gaps, using CT scans from the Multi-Ethnic Study of Atherosclerosis (MESA), the SubPopulations and InteRmediate Outcome Measures in COPD (SPIROMICS) Study, and an institutional chest CT dataset acquired at Columbia University Irving Medical Center. First, we design a novel graph-based clustering framework for identifying tree-structure subtypes in Billera-Holmes-Vogtmann (BHV) tree-space, using the airway trees segmented from the full-lung CT scans of MESA Lung Exam 5. We characterize the behavior of our clustering algorithm on a synthetic dataset, describe the geometric and topological variation across tree-structure clusters, and demonstrate the algorithm's robustness to perturbation of the input dataset and the graph tuning parameter. Second, in MESA Lung Exam 5 CT scans, we quantify the loss of small-diameter airway and pulmonary vessel branches within CTES-labeled lung tissue, demonstrating that depletion of these structures is concentrated within CTES regions and that the magnitude of this effect is CTES-specific.
In a sample of 278 SPIROMICS Visit 1 participants, we find that CTES demonstrate distinct patterns of gas trapping and functional small airways disease (fSAD) on expiratory CT imaging. In the CT scans of SPIROMICS participants imaged at Visit 1 and Visit 5, we update the CTES clustering pipeline to identify longitudinal emphysema patterns (LEPs), which refine CTES by defining subphenotypes informative of time-dependent texture change. Third, we develop a multi-view convolutional neural network (CNN) model to estimate total lung volume (TLV) from cardiac CT scans and lung masks in MESA Lung Exam 5. We demonstrate that our model outperforms regression on imaged lung volume, and is robust to same-day repeated imaging and longitudinal follow-up within MESA. Our model is directly applicable to multiple large-scale cohorts containing cardiac CT and totaling over ten thousand participants. Finally, we design a 3-D CNN model for end-to-end automated breast density assessment on chest CT, trained and evaluated on an institutional chest CT dataset of patients imaged at Columbia University Irving Medical Center. We incorporate ordinal regression frameworks for density grade prediction which outperform binary or multi-class classification objectives, and we demonstrate that model performance on identifying high breast density is comparable to the inter-rater reliability of expert radiologists on this task.
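
Illustrative sketch, not from the thesis: the abstract above mentions ordinal regression objectives for breast density grade prediction. The snippet below shows one common way such an objective can be set up in Python/PyTorch, by turning a grade into cumulative "grade > k" binary targets; the four-grade assumption, names, and shapes are hypothetical, and threshold ordering is not enforced in this minimal version.

    # Minimal, assumed sketch of an ordinal-regression head for density grades.
    import torch
    import torch.nn as nn

    NUM_GRADES = 4  # assumed number of density grades

    class OrdinalHead(nn.Module):
        """Maps a feature vector to K-1 cumulative logits for P(grade > k)."""
        def __init__(self, in_features: int, num_grades: int = NUM_GRADES):
            super().__init__()
            # One shared scalar score plus per-threshold offsets.
            self.score = nn.Linear(in_features, 1)
            self.thresholds = nn.Parameter(torch.zeros(num_grades - 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.score(x) - self.thresholds  # shape (batch, K-1)

    def ordinal_targets(grades: torch.Tensor, num_grades: int = NUM_GRADES):
        # grade g -> binary vector [g > 0, g > 1, ..., g > K-2]
        ks = torch.arange(num_grades - 1)
        return (grades.unsqueeze(1) > ks).float()

    if __name__ == "__main__":
        feats = torch.randn(8, 128)                  # stand-in CNN features
        grades = torch.randint(0, NUM_GRADES, (8,))  # stand-in labels
        head = OrdinalHead(128)
        # Training loss: binary cross-entropy over the K-1 cumulative targets.
        loss = nn.functional.binary_cross_entropy_with_logits(
            head(feats), ordinal_targets(grades))
        print(float(loss))

At prediction time, the grade can be recovered by counting how many cumulative logits exceed zero, which is what makes this framing ordinal rather than multi-class.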
43

Advanced Data Visualization and Accuracy Measurements of COVID-19 Projections in US Counties for Informed Public Health Decision-Making

Yaman, Tonguc January 2024 (has links)
Background: The COVID-19 pandemic posed an unparalleled challenge to public health systems worldwide, characterized by its high transmissibility and the initial absence of accessible testing, treatments, and vaccines. The lack of public awareness and the scarcity of readily available public health information about this once-in-a-century disaster further intensified the need for innovative solutions to bridge these gaps. In response, Shaman Labs [1,2], leveraging its deep expertise in forecasting for influenza [3], Ebola, and various SARS viruses, initiated the development of country-wide COVID-19 projections within weeks of the WHO's declaration of the pandemic [4-6]. Almost immediately thereafter, it became necessary to create a sophisticated online platform: a system capable of displaying county-specific COVID-19 forecasts, including daily estimated infections, cases, and deaths. The platform was designed to let users select any county, state, or national geography and compare it with another under various scenarios of social distancing measures. Additionally, the architecture of the system was required to support regular integration of updated data, ensuring the tool's ongoing relevance and utility. Columbia University's data visualization system aimed to communicate epidemiological forecasts to various stakeholders. At the onset of the COVID-19 pandemic, amid escalating uncertainty and the pressing need for reliable data, Dr. Rundle played a pivotal role in briefing key stakeholders on the unfolding crisis. His efforts were directed towards providing Senator Ron Johnson, Chairman of the U.S. Senate Committee on Homeland Security & Governmental Affairs, and Congresswoman Anna Eshoo, as well as their staff, with up-to-date projections and analyses derived from the Classic Data Visualization tools. Dr. Rundle's consultative role extended to a diverse array of institutions, including the U.S. Army Corps of Engineers, the U.S. Air Force, and the Federal Reserve Board, as well as advising private entities such as Pfizer, MetLife, and Unilever. His expertise facilitated informed planning and response efforts across various levels of government and sectors, underscoring the critical role of sophisticated data visualization from the earliest stages of the pandemic. This Integrated Learning Experience (ILE) examines the development and implementation of the Time Machine platform, focusing on its application in visualizing and analyzing COVID-19 epidemiological forecasts. The study explores methods for improving forecast data presentation, analysis, and accuracy assessment.

Methods: The body of this work unfolds through a series of chapters that collectively address the multifaceted functionality and impact of the Time Machine platform. Initially, the work focuses on the construction of the Time Machine platform, a web-based interactive R user interface coupled with a cloud-based database system and tailored to the intuitive visualization of epidemiological forecasts, detailing the technical and design considerations essential for enabling users to interpret complex data more effectively. Following this, the implementation of a rigorous data-discovery framework is presented, which examines case-reporting inconsistencies across different regions using GitHub and Windows scripting, thereby highlighting the significance of accurate data collection and the impact of discrepancies on public health decisions.
The narrative then transitions to the implementation of strictly proper scoring methods, including the weighted interval score, to assess the accuracy of the forecasts provided by the Time Machine platform, using a dedicated R library with testing supported by an MS Excel sandbox, underscoring the importance of reliable predictions in the management of public health crises. Lastly, a detailed analysis is conducted, encompassing countrywide data (3,142 counties) over an extended period (147 weeks) and utilizing Generalized Estimating Equations (GEE) to identify key predictors of forecast accuracy, offering insights into the factors that either enhance or detract from the reliability of epidemiological predictions.

Results: The deployment of the Classic Data Visualization and the subsequent evolution of the Time Machine platform have significantly advanced epidemiological forecast visualization capabilities. The Time Machine platform was designed with an automated data-refresh system, allowing for regular updates of epidemiological forecast data and reported actuals. The project developed tools for monitoring and evaluating the quality of public health reporting, aiming to improve the accuracy and timeliness of data used in public health decisions. Additionally, the research implemented methods for standardizing forecast accuracy assessments, including the normalization of scores to enable comparisons across different geographical scales. These approaches were designed to support both local and national-level pandemic response efforts. Accuracy analyses across different phases of the pandemic revealed a 42% improvement in forecast accuracy from Phase 1 to Phase 7. Larger populations (a 27% increase per unit increase on a base-10 logarithmic scale) and higher county-level activity (a 45% increase from the lowest to the highest quartile) were associated with better estimates. The analysis also highlighted the significant impact of reporting quality on forecast accuracy. On the other hand, the study identified challenges in predicting case surges, showing a 27% decline in accuracy during periods of rising infections compared with declining periods. The regression results highlight the potential benefits of improving data collection and providing timely feedback to forecasting teams.

Conclusion: This study demonstrates the potential of advanced data visualization and accuracy measurement techniques in improving epidemiological forecasting. The findings suggest that factors such as urbanicity, case reporting quality, and pandemic phase significantly influence forecast accuracy. Further research is needed to refine these models and enhance their applicability across various public health scenarios.
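
Illustrative sketch, not the thesis code: the weighted interval score mentioned above is a strictly proper score built from a forecast's median and central prediction intervals. The thesis describes an R implementation; the minimal Python version below follows the standard formulation, and the interval widths and observed count in the example are made up.

    # Assumed, minimal implementation of the weighted interval score (WIS).
    def interval_score(lower, upper, y, alpha):
        """Proper score for a central (1 - alpha) prediction interval."""
        penalty = 0.0
        if y < lower:
            penalty = (2.0 / alpha) * (lower - y)
        elif y > upper:
            penalty = (2.0 / alpha) * (y - upper)
        return (upper - lower) + penalty

    def weighted_interval_score(median, intervals, y):
        """intervals: list of (alpha, lower, upper) central prediction intervals."""
        k = len(intervals)
        total = 0.5 * abs(y - median)  # absolute error of the median, weight 1/2
        for alpha, lower, upper in intervals:
            total += (alpha / 2.0) * interval_score(lower, upper, y, alpha)
        return total / (k + 0.5)

    # Example: a forecast with 50% and 90% intervals scored against an observed count.
    print(weighted_interval_score(
        median=120,
        intervals=[(0.5, 100, 150), (0.1, 80, 200)],
        y=135))

Lower scores indicate better forecasts, and because the score is in the units of the forecast target, normalization (as described in the Results) is what allows comparisons across counties of very different sizes.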
44

Some crises in higher education.

Eisenhart, Charles Robert, January 1954 (has links)
Thesis (Ed.D.)--Teachers College, Columbia University. / Typescript. Sponsor: Karl W. Bigelow. Dissertation Committee: R. Freeman Butts, Ralph R. Field. Type C project. Includes bibliographical references (leaves 419-437).
45

Optimization and Decision-Making in Decentralized Finance, Scheduling, and Graphical Game Theory

Patange, Utkarsh January 2024 (has links)
We consider the problem of optimization and decision-making in various settings involving complex systems. In particular, we consider specific problems in decentralized finance, which we address using insights from mathematical finance; in course-mode selection, which we solve by applying mixed-integer programming; and in social networks, which we approach using tools from graphical game theory.

In the first part of the thesis, we model and analyze fixed spread liquidation lending in DeFi as implemented by popular pooled lending protocols such as AAVE, JustLend, and Compound. Empirically, we observe that over 70% of liquidations occur in the absence of any downward price jumps. Then, assuming the borrowers monitor their loans with exponentially distributed horizons, we compute the expected liquidation cost incurred by the borrowers in closed form as a function of the monitoring frequency. We compare this cost against liquidation data obtained from AAVE protocol V2, and observe a match with our model assuming the borrowers monitor their loans five to six times more often than they interact with the pool. Such borrowers must balance the financing cost against the likelihood of liquidation. We compute the optimal health factor in this situation assuming a financing rate for the collateral. Empirically, we observe that borrowers are often more conservative than the model predicts, though on average, model predictions match empirical observations.

In the second part of the thesis, we consider the problem of hybrid scheduling that Columbia Business School faced during the Covid-19 pandemic and describe the system that we implemented to address it. The system allows some students to attend in-person classes with social distancing while their peers attend online, and schedules vary by day. We consider two variations of this problem: one where students have unique, individualized class enrollments, and one where they are grouped in teams that are enrolled in identical classes. We formulate both problems as mixed-integer programs. In the first setting, students who are scheduled to attend all classes in person on a given day may, at times, be required to attend a particular class on that day online due to social distancing constraints. We count these instances as "excess." We minimize excess and related objectives, and analyze and solve the relaxed linear program. In the second setting, we schedule the teams so that each team's in-person attendance is balanced over the days of the week and spread out over the entire term. Our objective is to maximize interaction between different teams. Our program was used to schedule over 2,500 students in student-level scheduling and about 790 students in team-level scheduling from the Fall 2020 through Summer 2021 terms at Columbia Business School.

In the third part of the thesis, we consider a social network in which individuals choose actions that optimize a utility depending on their neighbors' actions. We assume that a central authority aiming to maximize social welfare at equilibrium can intervene by paying some cost to shift individual incentives, and that the cost is bounded above by a budget. The intervention that maximizes social welfare can be computed using the spectral decomposition of the adjacency matrix of the graph, yet this is infeasible in practice if the adjacency matrix is unknown.
We study the question of designing intervention strategies for graphs where the adjacency matrix is unknown and is drawn from some distribution. For several commonly studied random graph models, we show that the competitive ratio of an intervention proportional to the first eigenvector of the expected adjacency matrix approaches 1 in probability as the graph size increases. We also provide several efficient sampling-based approaches for approximately recovering the first eigenvector when we do not know the distribution. On the whole, our analysis compares three categories of interventions: those that use no data about the network, those that use some data (such as distributional knowledge or queries to the graph), and those that are fully optimal. We evaluate these intervention strategies on synthetic and real-world network data, and our results suggest that analysis of random graph models can be useful for determining when certain heuristics may perform well in practice.
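
Illustrative sketch, not from the thesis: the abstract above notes that the benchmark intervention is proportional to the first eigenvector of the (expected) adjacency matrix, scaled to the available budget. The snippet below just computes that direction for a toy undirected graph; the budget normalization and the example matrix are assumptions.

    # Assumed sketch: intervention proportional to the leading eigenvector,
    # scaled so that its squared Euclidean norm equals the budget.
    import numpy as np

    def spectral_intervention(adjacency: np.ndarray, budget: float) -> np.ndarray:
        eigvals, eigvecs = np.linalg.eigh(adjacency)  # symmetric (undirected) adjacency assumed
        v1 = eigvecs[:, np.argmax(eigvals)]           # leading eigenvector
        if v1.sum() < 0:                              # fix the arbitrary sign
            v1 = -v1
        return np.sqrt(budget) * v1 / np.linalg.norm(v1)

    # Toy undirected graph on four nodes.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(spectral_intervention(A, budget=4.0))

When only a random graph model is known, the same computation applied to the expected adjacency matrix gives the data-light heuristic whose competitive ratio the abstract says approaches 1 as the graph grows.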
