31

Retrofit urbano: uma abordagem para apoio de tomada de decisão / Urban retrofitting: an approach to support decision-making.

Negreiros, Iara 07 December 2018
Accommodating growing populations in cities will have major implications not only for employment, housing and the construction industry, but also for urban infrastructure including transportation, energy, water and open or green space. Infrastructure constraints currently include an ageing, underutilized and inadequate existing built environment, as well as a lack of integration in planning, design and management strategies for future infrastructure development in long-term scenarios. As with building retrofit, in which interventions take place in individual buildings and their constituent systems, urban retrofitting can be understood as a set of interventions designed to upgrade and sustain an urban area by providing a long-term practical response to its current problems and pressures. Such interventions must take into account the future population's needs by ensuring that the present urban infrastructure provides a firm basis for launching and achieving a city's ambitions for the future. One of the main requirements for urban retrofitting is a clearly defined set of goals and metrics for monitoring purposes.
This thesis presents a method for urban retrofit implementation at city scale using a visual tool to support decision-making and urban planning processes. Using Sustainable Development Goals (SDG) targets, the 100 ISO 37120:2014 'indicators for city services and quality of life', Simple Moving Average (SMA) trend analysis, clustering and city benchmarking, the method produces an adaptive, flexible dashboard that can aggregate and filter data by ISO 37120 section, indicator classification, and temporal and spatial scale, among other dimensions. The resulting dashboard is interactive and user-friendly, and can be fully accessed at https://bit.ly/2EDnZ4J. We use Sorocaba, a medium-sized, well-located city in São Paulo State in Brazil, as a case study, focusing on the challenges and opportunities arising from exceptional urban population growth, and ranking key retrofit interventions in Sorocaba as possible forerunners of future urban development scenarios.
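As a rough illustration of the SMA-trend and clustering-benchmark steps this abstract describes, the sketch below smooths a yearly indicator series with a simple moving average and then groups cities with similar indicator profiles using k-means. It is a minimal sketch, not the author's implementation; all city names, indicator values and parameters are invented.

```python
# Minimal sketch of SMA trend analysis plus clustering-based benchmarking.
# All data are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def simple_moving_average(series, window=3):
    """Smooth a yearly indicator series with a trailing moving average."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(series, dtype=float), kernel, mode="valid")

# Trend for one hypothetical ISO 37120-style indicator, yearly values.
yearly = [11.0, 11.4, 11.9, 12.0, 12.3, 12.1, 12.4, 12.1]
print("SMA(3):", simple_moving_average(yearly))

# Benchmarking: cluster cities with similar indicator profiles (3 indicators).
cities = ["Sorocaba", "CityB", "CityC", "CityD"]
profiles = np.array([
    [12.1, 8.4, 95.0],
    [10.3, 9.1, 88.0],
    [25.7, 4.2, 70.0],
    [24.9, 4.8, 72.5],
])
X = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for city, label in zip(cities, labels):
    print(city, "-> cluster", label)
```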
32

Summary Conclusions: Computation of Minimum Volume Covering Ellipsoids*

Sun, Peng; Freund, Robert M.
We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points a₁, ..., aₘ ∈ ℝⁿ. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer. / Singapore-MIT Alliance (SMA)
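For context, the problem this abstract studies is usually posed as the following convex program (a standard formulation from the optimization literature, not one quoted from the paper): the ellipsoid is E = { x : ||Ax - b||_2 <= 1 }, and since the volume of E shrinks as det A grows, minimizing -log det A minimizes the volume.

```latex
% Standard convex formulation of the minimum volume covering ellipsoid.
% Variables: a symmetric positive definite matrix A and a vector b.
\begin{aligned}
\min_{A \succ 0,\; b} \quad & -\log \det A \\
\text{s.t.} \quad & \lVert A a_i - b \rVert_2 \le 1, \qquad i = 1, \dots, m.
\end{aligned}
```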
33

Análise de Algoritmos de Agrupamento para Base de Dados Textuais / Analysis of Clustering Algorithms for Text Databases

Luiz Gonzaga Paula de Almeida 31 August 2008
The increasing volume of digitally stored text makes it necessary to build computational tools that allow efficient and effective organization of, and access to, the information and knowledge it contains. This problem is extremely relevant in biomedicine, since most of the knowledge generated there is formalized in scientific articles and access to them must be as easy and fast as possible. The research field known as Text Mining addresses this problem by seeking to identify new, previously unknown information and knowledge in text databases. One of its tasks is discovering groups of correlated texts in a database, a problem known as text clustering. For this purpose, text databases are commonly represented in the Vector Space Model, in which each text is represented by a feature vector whose entries are the frequencies of the words or terms that occur in it. The set of vectors forms a document-term matrix, which is sparse and high-dimensional. To attenuate the problems caused by these characteristics, a subset of terms is usually selected, producing a new document-term matrix with reduced dimensionality that is then used by the clustering algorithms.
This work (i) introduces and implements two term-selection algorithms and (ii) evaluates the k-means, spectral and graph-partitioning clustering algorithms on five previously classified text databases. The databases were pre-processed with methods described in the literature to produce the document-term matrices. The results indicate that the proposed term-selection algorithms, used to reduce the document-term matrices, improve the performance of the evaluated clustering algorithms, and that k-means and spectral clustering outperform graph partitioning on text databases, with or without feature selection.
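The pipeline this abstract describes (document-term matrix, term selection, clustering) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration on toy documents, not the thesis's implementation: tf-idf weighting stands in for the raw term frequencies, and the max_features cap stands in for the proposed term-selection algorithms.

```python
# Toy text-clustering pipeline: document-term matrix -> reduced terms -> k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "gene expression in tumor cells",
    "protein folding and gene regulation",
    "airport runway capacity analysis",
    "air traffic flow and runway delay",
]

# Build a sparse document-term matrix; max_features acts as a crude
# term-selection step, keeping only the most frequent terms.
vectorizer = TfidfVectorizer(max_features=50)
X = vectorizer.fit_transform(docs)
print("document-term matrix shape:", X.shape)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)
```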
34

Estabilidade em análise de agrupamento via modelo AMMI com reamostragem "bootstrap" / Stability in clustering analysis through the AMMI methodology with bootstrap

Débora Robert de Godoi 11 October 2013
The objective of this work is to propose a new methodology for interpreting the stability of clustering methods for vegetation data, using the AMMI methodology and bootstrap resampling to gain confidence in the clusters formed. The data come from the Department of Genetics of the Luiz de Queiroz College of Agriculture and concern soybean yield. First, the AMMI methodology is applied; then the Euclidean distance matrix is estimated, based on both the original data and the bootstrap resamples, for the application of the clustering methods (nearest neighbor, furthest neighbor, average linkage, centroid, median and Ward). The validity of the clusters formed is assessed with the cophenetic correlation coefficient, and the Mantel test is used to present the empirical distribution of the cophenetic correlation coefficients. The clusterings obtained by the different methods are, in most cases, quite similar, indicating that in principle any of these methods would be suitable for the representation. The method that yields discrepant results in the dendrogram, for both the original and the bootstrap data, is Ward's. This study is promising for analyzing the validity of clusters formed from vegetation data.
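The validity check used here, the cophenetic correlation coefficient per linkage method, is easy to reproduce with standard tools. Below is a minimal sketch on synthetic data, not the thesis's analysis: for each of the six linkage methods named above it measures how well the dendrogram's cophenetic distances preserve the original Euclidean distances.

```python
# Hierarchical clustering plus cophenetic correlation, on synthetic data.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))       # e.g., 20 genotypes x 4 traits (invented)
d = pdist(X, metric="euclidean")   # condensed Euclidean distance matrix

for method in ["single", "complete", "average", "centroid", "median", "ward"]:
    Z = linkage(d, method=method)
    c, _ = cophenet(Z, d)          # correlation of cophenetic vs. original distances
    print(f"{method:>8}: cophenetic correlation = {c:.3f}")
```

A bootstrap version would wrap the same computation in a resampling loop over the rows of X to obtain the empirical distribution of the coefficients, as the abstract describes.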
35

Análise de agrupamento e estabilidade para aquisição e validação de conhecimento em bases de dados de alta dimensionalidade / Clustering and stability analysis for knowledge acquisition and validation in high-dimensional databases

Brum, Vinicius Campista 28 August 2015
Clustering analysis is a descriptive, unsupervised data mining task that uses unlabeled samples to find natural groups, i.e. groups of closely related samples such that samples within the same cluster are more similar to each other than to samples in any other cluster. Evaluation, or validation, is considered an essential task within clustering analysis. Its techniques can be divided into two kinds: unsupervised (internal) validation techniques and supervised (external) validation techniques.
Recent works introduced an internal clustering validation approach that evaluates and improves the stability of a clustering algorithm by identifying and removing samples that are considered harmful and should therefore be studied separately. Through experimentation, it was identified that this approach has undesirable characteristics: it can remove an entire cluster from the dataset and still fail to improve clustering stability. Taking these issues into account, this work develops a broader approach that uses a genetic algorithm for clustering and data stability analysis. The approach aims to guarantee a stability improvement, to reduce the number of samples flagged for removal and to let the user control the stability analysis process, which gives that process greater applicability and reliability. The approach was evaluated with different clustering algorithms and datasets; a genotype dataset was also used for knowledge acquisition and validation. The results show that the proposed approach guarantees a stability improvement and reduces the number of samples flagged for removal. They also suggest the approach is a promising tool for knowledge acquisition and validation in genome-wide association studies (GWAS). / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
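The notion of clustering stability at the heart of this abstract can be made concrete with a generic resampling check: cluster two overlapping subsamples and measure how much the two labelings agree on the shared samples. The sketch below uses the adjusted Rand index for this; it is a common stability measure shown only as illustration, not the genetic-algorithm approach proposed in the thesis, and all data are synthetic.

```python
# Clustering stability via subsampling: agreement (ARI) on shared samples.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])

scores = []
for _ in range(20):
    idx_a = rng.choice(len(X), size=80, replace=False)
    idx_b = rng.choice(len(X), size=80, replace=False)
    la = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx_a])
    lb = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx_b])
    # Compare labels only on the samples present in both subsamples.
    _, pa, pb = np.intersect1d(idx_a, idx_b, return_indices=True)
    scores.append(adjusted_rand_score(la[pa], lb[pb]))

print("mean stability (ARI):", round(float(np.mean(scores)), 3))
```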
36

Implementation of Advanced Analytics on Customer Satisfaction Process in Comparison to Traditional Data Analytics

Akula, Venkata Ganesh Ashish 06 September 2019
No description available.
37

CITY NETWORK RESILIENCE QUANTIFICATION UNDER SYSTEMIC RISKS: A HYBRID MACHINE LEARNING-GENETIC ALGORITHM APPROACH

Hassan, Rasha January 2020
Disruptions due to either natural or anthropogenic hazards significantly impact the operation of critical infrastructure networks because they may instigate network-level cascade (i.e., systemic) risks. Quantifying and enhancing the resilience of such complex, dynamically evolving networks therefore helps minimize both the possibility and the consequences of systemic risks. Focusing on robustness, one of the key resilience attributes, and on transportation networks, a key class of critical infrastructure, the current study develops a hybrid analysis approach combining complex network theory and genetic algorithms. To demonstrate the approach, the robustness of a city transportation network is quantified by integrating complex network topology measures with a dynamic flow redistribution model. The network robustness is subsequently investigated under different operational measures, and the corresponding absorptive capacity thresholds are quantified. Finally, the robustness of the network under different failure scenarios is evaluated using genetic algorithms coupled with k-means clustering to classify the different network components. The hybrid approach developed in the current study is expected to facilitate optimizing potential systemic risk mitigation strategies for critical infrastructure networks under disruptive events. / Thesis / Master of Applied Science (MASc)
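A small topology-only sketch illustrates the kind of robustness quantification this abstract refers to: remove the most central nodes of a toy grid network and track how the largest connected component shrinks. This is illustrative only and much simpler than the thesis's approach, which couples topology measures with a dynamic flow redistribution model and genetic algorithms.

```python
# Robustness of a toy "city grid" network under targeted node removal.
import networkx as nx

G = nx.grid_2d_graph(10, 10)   # stand-in for a road network
n0 = G.number_of_nodes()

# Attack the highest-betweenness nodes first (a common failure scenario).
ranked = sorted(nx.betweenness_centrality(G).items(),
                key=lambda kv: kv[1], reverse=True)

for frac in (0.05, 0.10, 0.20):
    H = G.copy()
    H.remove_nodes_from([node for node, _ in ranked[:int(frac * n0)]])
    giant = max(nx.connected_components(H), key=len)
    print(f"removed {frac:.0%} of nodes -> largest component holds "
          f"{len(giant) / n0:.2f} of the network")
```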
38

DESIGN OF MULTI-MATERIAL STRUCTURES FOR CRASHWORTHINESS USING HYBRID CELLULAR AUTOMATON

Sajjad Raeisi 30 July 2021
The design of vehicle components for crashworthiness is one of the most challenging problems in the automotive industry. The safety of the occupants during a crash event relies on the energy absorption capability of vehicle structures; the body components of a vehicle are therefore required to be lightweight, highly integrated structures. Reducing vehicle weight is another crucial design requirement, since fuel economy is directly related to the mass of a vehicle. To address these requirements, various design concepts for vehicle bodies have been proposed using high-strength steel and different aluminum alloys, but cost has always been an obstacle to completely replacing regular body steels with more advanced alloys. To this end, the integration of numerical simulation and structural optimization techniques has been widely practiced. Advances in nonlinear structural design have shown promising potential to generate innovative, safe and lightweight vehicle structures, and the implementation of structural optimization techniques can shorten the design cycle time for new models; a reduced design cycle time can help automakers stay ahead of their competitors. During the last few decades, numerous structural optimization methods have been proposed. The vast majority use mathematical programming, which relies on the availability of sensitivity analyses of the objective functions. Because of this requirement, these methods remain limited to linear (or partially nonlinear) material models under static loading conditions; in other words, they are not able to capture all the non-linearities involved in multi-body crash simulation. As an alternative, heuristic approaches, which do not need sensitivity analyses, have been developed to address structural optimization problems for crashworthiness. The Hybrid Cellular Automaton (HCA), a bio-inspired algorithm, is a well-practiced heuristic method that has shown promising capabilities in the structural design of vehicle components. The HCA has been continuously developed over the last two decades and adapted to specific structural design applications. Despite these advancements, some fundamental aspects of the algorithm are still not adequately addressed in the literature. For instance, the HCA is numerically implemented as a closed-loop control system, and the local controllers that dictate the design variable updates need parameter tuning to efficiently solve different sets of problems. Previous studies suggest default values for the controllers, but there is still no well-organized strategy to tune these parameters, and proper tuning still relies on the designer's experience.

Moreover, multi-material structures have become a perceived necessity for the automotive industry to address vehicle design requirements such as weight, safety and cost. However, structural design methods for crashworthiness, including the HCA, are mainly applied to binary (solid/void) design problems, and conventional methods for the design of multi-material structures do not fully utilize the capabilities of premium materials. In other words, a well-established method for the design of multi-material structures that can account for material cost, the bonding between different (especially categorical) materials and manufacturing considerations is still an open problem. Lastly, the HCA algorithm relies on only one hyper-parameter, the mass fraction, to synthesize structures: for a given problem it provides a single design option directed by the mass constraint, and it cannot tailor the dynamic response of the structure, namely the intrusion and deceleration profiles.

The main objective of this dissertation is to develop new methodologies, built upon the HCA algorithm, for the design of structures for crashworthiness applications. The first contribution is a self-tuning scheme for the controller of the algorithm; the proposed strategy eliminates the need to manually tune the controller for different problems and improves computational performance and numerical stability. The second contribution is a systematic approach to the design of multi-material crashworthy structures: the HCA algorithm is integrated with an ordered multi-material SIMP (Solid Isotropic Material with Penalization) interpolation. The proposed multi-material HCA (MMHCA) framework is computationally efficient, since no additional design variables are introduced, and can synthesize multi-material structures subject to volume fraction constraints. In addition, an elemental bonding method is introduced to simulate the laser welding applied to multi-material structures, and the effect of the bonding strength on the final topology designs is studied using numerical simulations. In a last step, after the multi-material designs are obtained, the HCA is used to remove a desired number of bonding elements and reduce the weld length.

The third contribution is a new Cluster-based Structural Optimization (CBSO) method for the design of multi-material structures. It introduces a new Cluster Validity Index with manufacturing considerations, referred to as CVI_m, which characterizes cluster quality in structural design in terms of volume fraction, size and interface as measures of manufacturability. This multi-material design approach comprises three main steps: generating the conceptual design with the adaptive HCA algorithm; clustering the design domain using Multi-objective Genetic Algorithm (MOGA) optimization; and, in the third step, using MOGA optimization to choose categorical materials so as to optimize the crash indicators (e.g., peak intrusion, peak contact force, load uniformity) or the cost of the raw materials. The effectiveness of the algorithm is investigated using numerical examples.
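The ordered multi-material SIMP interpolation mentioned above can be sketched briefly. The snippet below is a simplified illustration inspired by ordered-SIMP schemes in the literature, not the dissertation's implementation: a single normalized density per element is mapped to a Young's modulus by penalized interpolation between ordered candidate materials, so intermediate densities are unattractive to the optimizer. Material names and property values are invented.

```python
# Simplified ordered multi-material SIMP-style stiffness interpolation.
import numpy as np

# Hypothetical candidate materials, ordered by normalized density.
densities = np.array([0.0, 0.4, 0.7, 1.0])      # void, polymer, Al, steel
moduli    = np.array([1e-6, 3.0, 70.0, 210.0])  # GPa (illustrative values)

def ordered_simp_modulus(x, p=3.0):
    """Map a normalized density x in [0, 1] to a penalized modulus."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    i = np.clip(np.searchsorted(densities, x, side="right") - 1,
                0, len(densities) - 2)
    lo, hi = densities[i], densities[i + 1]
    t = (x - lo) / (hi - lo)   # position within the current material interval
    # Penalizing t pushes the optimizer toward discrete material choices.
    return moduli[i] + (t ** p) * (moduli[i + 1] - moduli[i])

for x in (0.0, 0.2, 0.4, 0.55, 0.7, 0.9, 1.0):
    print(f"x = {x:.2f} -> E = {float(ordered_simp_modulus(x)):8.3f} GPa")
```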
39

PV Module Performance Under Real-world Test Conditions - A Data Analytics Approach

Hu, Yang 12 June 2014
No description available.
40

Airport Performance Metrics Analysis: Application to Terminal Airspace, Deicing, and Throughput

Alsalous, Osama 08 June 2022
The Federal Aviation Administration (FAA) continuously assesses the operational performance of the National Airspace System (NAS), analyzing trends in the aviation industry to help develop strategies for a more efficient air transportation system. To measure the performance of various elements of the aviation system, the FAA and the International Civil Aviation Organization (ICAO) developed nineteen key performance indicators (KPIs). This dissertation contains three research studies, each written in journal format, addressing select KPIs. The studies aim to answer questions that help understand and improve different aspects of airport operational efficiency. In the first study, we model flight times within the terminal airspace and compare our results with the baseline methodology that the FAA uses for benchmarking. In the second study, we analyze the efficiency of deicing operations at Chicago O'Hare (ORD) by developing an algorithm that analyzes radar data, and we use a simulation model to calculate potential improvements in the deicing operations. Lastly, we present the results of a clustering analysis of the response of airports to demand and capacity changes during the COVID-19 pandemic. The findings of these studies add to the literature by providing a methodology that predicts travel times within the last 100 nautical miles with greater accuracy, by providing deicing times per aircraft type, and by providing insight into factors related to airport response to shock events. These findings will be useful for air traffic management decision makers, as well as for other researchers in related future studies and airport simulations. / Doctor of Philosophy /
The Federal Aviation Administration (FAA) is the transportation agency that regulates all aspects of civil aviation in the United States. The FAA continuously analyzes trends in the aviation industry to help develop a more efficient air transportation system, measuring the performance of its various elements. For example, some indicators focus on the departure phase of flights, measuring departure punctuality and additional taxi-out time; on the arrivals side, indicators measure the additional time spent in the last 100 nautical miles of flight; and other indicators measure the performance of the airport as a whole, such as peak capacity and peak throughput. This dissertation contains three research studies, each aimed at answering questions that help understand and improve a different aspect of airport operational efficiency. The first study focuses on arrivals: we model flight times within the last 100 nautical miles, incorporating factors such as wind and weather conditions to predict those times with greater accuracy than the baseline methodology the FAA currently uses. The resulting, more accurate benchmarks help decision makers, such as airport managers, understand the factors causing arrival delays. In the second study, we analyze the efficiency of deicing operations, which can be a major source of departure delays during winter weather. We use radar data at Chicago O'Hare airport to analyze real-life operations, and we developed a simulation model that recreates actual scenarios and runs what-if scenarios to estimate potential improvements in the process. Our results showed potential savings of 25% in time spent in the deicing system if the airport switched to a first-come, first-served queueing discipline rather than leaving each airline to manage its own separate area. Lastly, we present an analysis of the response of airports to demand and capacity changes during the COVID-19 pandemic, grouping airports by the changes in their throughput and capacity during two time periods. The first part of the study compares airport operations during 2019 with the "shock event" in 2020; the second compares the changes during 2020 with the "recovery" period, using data from 2021. This analysis shows which airports reacted similarly during the shock and the recovery, and it relates airport response to factors such as the kinds of airlines that use the airport, hub size, location in a multi-airport city and the percentage of cargo operations. The results can help in understanding airport resilience based on known airport characteristics, which is particularly useful for predicting airport response to future disruptive events.
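The 25% figure above comes from the dissertation's simulation model; the toy sketch below only illustrates the underlying queueing intuition, namely that a pooled first-come-first-served queue outperforms partitioned, airline-dedicated queues of the same total capacity. All rates and counts are invented, and the model is far simpler than the one used in the study.

```python
# Pooled FCFS deicing queue vs. two airline-dedicated queues (toy model).
import heapq
import random

def mean_wait(arrivals, services, pad_groups):
    """FCFS wait when pads are split into groups; flight i may only use
    group i % len(pad_groups). One group = a fully shared queue."""
    free = [[0.0] * n for n in pad_groups]   # next-free time per pad
    for f in free:
        heapq.heapify(f)
    waits = []
    for i, (t, s) in enumerate(zip(arrivals, services)):
        q = free[i % len(free)]
        start = max(t, heapq.heappop(q))     # earliest available pad
        waits.append(start - t)
        heapq.heappush(q, start + s)
    return sum(waits) / len(waits)

random.seed(0)
n = 2000
arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(1 / 2.0)                    # ~1 arrival / 2 min
    arrivals.append(t)
services = [random.expovariate(1 / 7.0) for _ in range(n)]  # ~7 min deicing

print(f"shared FCFS (4 pads): mean wait {mean_wait(arrivals, services, [4]):.1f} min")
print(f"split queues (2 + 2): mean wait {mean_wait(arrivals, services, [2, 2]):.1f} min")
```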
