21

A Curved Graphene Nanoribbon with Multi-Edge Structure and High Intrinsic Charge Carrier Mobility

Niu, Wenhui, Ma, Ji, Soltani, Paniz, Zheng, Wenhao, Liu, Fupin, Popov, Alexey A., Weigand, Jan J., Komber, Hartmut, Poliani, Emanuele, Casiraghi, Cinzia, Droste, Jörn, Hansen, Michael Ryan, Osella, Silvio, Beljonne, David, Bonn, Mischa, Wang, Hai I., Feng, Xinliang, Liu, Junzhi, Mai, Yiyong 28 October 2021
Structurally well-defined graphene nanoribbons (GNRs) have emerged as highly promising materials for next-generation nanoelectronics. The electronic properties of GNRs critically depend on their edge topologies. Here, we demonstrate the efficient bottom-up synthesis of a curved GNR (cGNR) with a combined cove, zigzag, and armchair edge structure. The curvature of the cGNR is elucidated through the corresponding model compounds tetrabenzo[a,cd,j,lm]perylene (1) and diphenanthrene-fused tetrabenzo[a,cd,j,lm]perylene (2), whose structures are unambiguously confirmed by single-crystal X-ray analysis. The resultant multi-edged cGNR exhibits a well-resolved absorption in the near-infrared (NIR) region with a maximum at 850 nm, corresponding to a narrow optical energy gap of ∼1.22 eV. Employing terahertz (THz) spectroscopy, we find a long scattering time of ∼60 fs, corresponding to a record intrinsic mobility of ∼600 cm² V⁻¹ s⁻¹ for photogenerated charge carriers in the cGNR.
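Read alongside the abstract above, the quoted values are mutually consistent within a simple Drude picture; the effective mass below is inferred here purely as an illustration and is not reported in the abstract:

\[ \mu = \frac{e\tau}{m^{*}} \;\;\Rightarrow\;\; m^{*} = \frac{e\tau}{\mu} \approx \frac{(1.602\times 10^{-19}\,\mathrm{C})(6\times 10^{-14}\,\mathrm{s})}{0.06\,\mathrm{m^{2}\,V^{-1}\,s^{-1}}} \approx 1.6\times 10^{-31}\,\mathrm{kg} \approx 0.18\, m_{e}. \]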
22

Downward Continuation of Bouguer Gravity Anomalies and Residual Aeromagnetic Anomalies by Means of Finite Differences

Arenson, John Dean January 1975
The depths to buried bodies, characterized by anomalous gravity and magnetic properties, are determined by a combination of two numerical techniques. An upward continuation integral is evaluated by a method due to Paul and Nagy, using elemental squares and low-order polynomials to describe the behavior of the gravity or magnetic data between observed data points. Downward continuation of the magnetic or gravity data is then performed by a finite-difference technique as described by Bullard and Cooper. The applicability of the techniques is assessed by comparison with depths determined by other means over the same anomalies and with various rule-of-thumb methods prevalent in the geophysical literature; the relative speed and cost of the particular computer system used are also taken into account. The results show that, although the initial cost of the computer program is high, the combined technique is as good as, and at times better than, the rule-of-thumb methods in determining the depth to the anomaly-causing body, and is useful when more than an approximate depth is of interest.
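For context on the two techniques combined in the abstract above, a standard form of the upward continuation integral and of the finite-difference relation used to continue a harmonic field downward by one grid level are sketched here (generic textbook expressions, not necessarily the exact discretization used in the thesis):

\[ g(x, y, -h) = \frac{h}{2\pi} \iint \frac{g(x', y', 0)}{\left[ (x - x')^{2} + (y - y')^{2} + h^{2} \right]^{3/2}} \, dx'\, dy', \qquad h > 0, \]

and, because the field satisfies Laplace's equation in source-free regions, on a cubic grid with equal horizontal and vertical spacing,

\[ g_{\mathrm{below}} \approx 6\,g_{0} - g_{\mathrm{above}} - \left( g_{E} + g_{W} + g_{N} + g_{S} \right), \]

where \(g_{0}\) is the value at the central node, \(g_{\mathrm{above}}\) the value one level up, and \(g_{E}, g_{W}, g_{N}, g_{S}\) the four horizontal neighbours of \(g_{0}\).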
23

Výpočet ustáleného chodu sítě 22 kV v zadané oblasti / Steady state calculation of 22kV network

Kaplanová, Klára January 2012
This Master's thesis proposes a new operating state of the 22 kV distribution network in the Prostějov district after connection of the new Prostějov – Západ substation. The PASS DAISY OFF-LINE Bizon program is used to calculate power losses and evaluate network configuration options, as well as to build a new model of the district's distribution network and to design modifications to the current state of the given location; a description of this calculation and simulation program is also included. The theoretical part describes methods for calculating the steady state of a distribution network, with emphasis on the mathematical method used by the PASS DAISY OFF-LINE Bizon program, a modified Newton-Raphson method. The aim of this work is to prepare technical documentation for the E.ON company, reflecting the new operational disconnection points and the planned overhead and cable lines with their corresponding parameters. These changes in network configuration will redistribute the feeding areas; as a result, the power supplied from individual feeding points and the power flows in the lines will change. With the new operational connection, a reduction of losses and an improvement of voltage conditions are expected. One of the aims of this work is to update the network model in the PASS DAISY OFF-LINE Bizon program to match the current state of the network in the given area. The outcome of this work is a comparison, from the perspective of the distribution network operator, between the current condition and the new operating condition with the Prostějov – Západ substation connected.
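As background to the Newton-Raphson power-flow calculation referenced above (the generic textbook formulation, not the specific modification implemented in PASS DAISY OFF-LINE Bizon), the nodal power mismatches and the iterative update are

\[ \Delta P_{i} = P_{i}^{\mathrm{spec}} - V_{i} \sum_{k} V_{k} \left( G_{ik} \cos\theta_{ik} + B_{ik} \sin\theta_{ik} \right), \qquad \Delta Q_{i} = Q_{i}^{\mathrm{spec}} - V_{i} \sum_{k} V_{k} \left( G_{ik} \sin\theta_{ik} - B_{ik} \cos\theta_{ik} \right), \]

\[ \begin{bmatrix} \Delta\boldsymbol{\theta} \\ \Delta\mathbf{V} \end{bmatrix} = \mathbf{J}^{-1} \begin{bmatrix} \Delta\mathbf{P} \\ \Delta\mathbf{Q} \end{bmatrix}, \]

where \(\theta_{ik} = \theta_{i} - \theta_{k}\), \(G + jB\) is the bus admittance matrix, and \(\mathbf{J}\) is the Jacobian of the mismatch equations; the update is applied to the bus voltage angles and magnitudes and repeated until the mismatches fall below a tolerance.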
24

Machine Learning methods in shotgun proteomics

Truong, Patrick January 2023
As high-throughput biology experiments generate increasing amounts of data, the field is naturally turning to data-driven methods for the analysis and extraction of novel insights. These insights into biological systems are crucial for understanding disease progression, drug targets, treatment development, and diagnostic methods, ultimately leading to improved human health and well-being as well as deeper insight into cellular biology. Biological data sources such as the genome, transcriptome, proteome, metabolome, and metagenome provide critical information about biological system structure, function, and dynamics. The focus of this licentiate thesis is on proteomics, the study of proteins, which is a natural starting point for understanding biological function since proteins are crucial functional components of cells: they play central roles in enzymatic reactions, structural support, transport, storage, cell signaling, and immune system function. In addition, proteomics has vast data repositories, and technical and methodological improvements are continually being made to yield even more data. However, generating proteomic data involves multiple steps, each prone to errors, making sophisticated models essential to handle technical and biological artifacts and account for uncertainty in the data. In this licentiate thesis, the use of machine learning and probabilistic methods to extract information from mass-spectrometry-based proteomic data is investigated. The thesis starts with an introduction to proteomics, including a basic biological background, followed by a description of how mass-spectrometry-based proteomics experiments are performed and the challenges in proteomic data analysis. The statistics of proteomic data analysis are also explored, and state-of-the-art software and tools related to each step of the proteomics data analysis pipeline are presented. The thesis concludes with a discussion of future work and the presentation of two original research works. The first research work focuses on adapting Triqler, a probabilistic graphical model for protein quantification developed for data-dependent acquisition (DDA) data, to data-independent acquisition (DIA) data. Challenges in this study included verifying that DIA data conformed with the model used in Triqler, addressing benchmarking issues, and modifying Triqler's missing value model for DIA data. The study showed that DIA data conformed with the properties required by Triqler, implemented a protein inference harmonization strategy, modified the missing value model for DIA data, and concluded by showing that Triqler outperformed current protein quantification techniques. The second research work focused on developing a novel deep-learning-based MS2-intensity predictor by incorporating the transformer self-attention mechanism into Prosit, an established recurrent neural network (RNN) based deep learning framework for MS2 spectrum intensity prediction. RNNs are a type of neural network that processes sequential data efficiently by carrying information forward from previous steps. The transformer self-attention mechanism instead allows a model to attend to different parts of its input sequence independently during processing, enabling it to capture dependencies and relationships between elements more effectively.
Transformers therefore remedy some of the drawbacks of RNNs, so we hypothesized that implementing the MS2-intensity predictor with transformers rather than an RNN would improve its performance. Hence, Prosit-transformer was developed, and the study showed that both the model training time and the similarity between the predicted and observed MS2 spectra improved. These original research works address various challenges in computational proteomics and contribute to the development of data-driven life science.
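The transformer self-attention mechanism referred to in the abstract above can be illustrated with a minimal scaled dot-product attention sketch (a generic NumPy illustration, not code from Prosit-transformer; the array shapes are arbitrary):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal self-attention: every position attends to every position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence axis
    return weights @ V                                 # weighted sum of value vectors

# Toy example: a "peptide" of 5 positions embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (5, 8): one contextualized vector per sequence position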
25

Simulações Financeiras em GPU / Finance and Stochastic Simulation on GPU

Souza, Thársis Tuani Pinto 26 April 2013
Given the uncertainty of their variables, it is common to model financial problems with stochastic processes. Furthermore, real problems in this area have a high computational cost, which suggests the use of High Performance Computing (HPC) to handle them. New generations of graphics hardware (GPU) enable general-purpose computing while maintaining high memory bandwidth and large computing power. Therefore, this type of architecture is an excellent alternative in HPC and computational finance. The main purpose of this work is to study the computational and mathematical tools needed for stochastic modeling in finance using GPUs. We present GPUs as a platform for general-purpose computing. We then analyze a variety of random number generators, in both sequential and parallel architectures, and introduce the fundamental mathematical tools for stochastic calculus and Monte Carlo simulation. With this background, we present two case studies in finance: "Optimal Trading Stops" and "Market Risk Management". In the first case, we solve the problem of obtaining the optimal gain on a "Stop Gain" stock trading strategy; the proposed solution is scalable and has inherent parallelism on the GPU. For the second case, we propose a parallel algorithm to compute market risk, as well as techniques for improving the quality of the solutions. In our experiments, there was a fourfold improvement in the quality of the stochastic simulation and a speedup of over 50 times.
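As an illustration of the kind of Monte Carlo market-risk computation discussed above (a plain NumPy sketch on the CPU under geometric Brownian motion; the thesis targets GPUs, and its actual models and parameters are not reproduced here):

import numpy as np

def mc_value_at_risk(s0, mu, sigma, horizon, n_paths, alpha=0.99, seed=42):
    """One-step GBM Monte Carlo estimate of Value-at-Risk at confidence alpha."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal price under geometric Brownian motion.
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    pnl = s_t - s0                       # profit and loss per simulated path
    return -np.quantile(pnl, 1 - alpha)  # loss not exceeded with probability alpha

# Example: one million simulated 10-day outcomes for a single asset position.
var_99 = mc_value_at_risk(s0=100.0, mu=0.05, sigma=0.3, horizon=10 / 252, n_paths=1_000_000)
print(f"99% 10-day VaR: {var_99:.2f}")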
27

Optimal Deep Learning Assisted Design of Socially and Environmentally Efficient Steel Concrete Composite Bridges under Constrained Budgets

Martínez Muñoz, David 06 September 2023
Thesis by compendium / Infrastructure design is strongly influenced by the search for solutions that consider the impact on the economy, the environment, and society. These criteria are strongly related to the definition of sustainability given by the Brundtland Commission in 1987. This milestone posed a challenge for technicians, scientists, and legislators alike: to generate methods, criteria, tools, and regulations that would allow the concept of sustainability to be included in the development and design of new infrastructure. Since then, small advances have been made in the search for sustainability, but more are needed in the short term. As an action plan, the United Nations established the Sustainable Development Goals, setting the year 2030 as the target for achieving them; within these goals, infrastructure is identified as a critical point. Traditionally, methods have been developed to obtain optimal designs from the point of view of economic impact. However, although recent advances have been made in implementing and using full life-cycle analysis methods, a clear consensus is still lacking, especially for the social pillar of sustainability. Given that sustainability encompasses different criteria, which in principle do not necessarily go hand in hand, the pursuit of sustainability is posed not only as an optimization problem but also as a multi-criteria decision-making problem. The main objective of this doctoral thesis is to propose different methodologies for obtaining optimal designs that introduce the pillars of sustainability into the design of steel-concrete composite bridges. A three-span composite box-girder bridge is proposed as a representative structural problem. Given the complexity of the structure, which involves 34 discrete variables, optimization with exact mathematical methods is intractable, so the use of metaheuristic algorithms is proposed. This complexity also translates into a high computational cost for the model, so a deep neural network model is implemented to allow design validation without running the full structural analysis. Given the problem's discrete nature, discretization techniques are proposed to adapt the algorithms to the structural optimization problem. In addition, to improve the solutions obtained from these discrete algorithms, hybridization methods based on the K-means technique and mutation operators are introduced depending on the type of algorithm. The algorithms used fall into two branches: trajectory-based algorithms, namely Simulated Annealing, Threshold Accepting, and Old Bachelor Acceptance; and swarm intelligence algorithms, namely Jaya, the Sine Cosine Algorithm, and Cuckoo Search. The Life Cycle Assessment methodology defined in the ISO 14040 standard is used to evaluate the social and environmental impact of the proposed designs, allowing the impact to be assessed and compared with other designs. The single-objective evaluation of the different criteria leads to the conclusion that cost optimization is associated with a reduction of the environmental and social impact of the structure; however, optimizing the environmental and social criteria does not necessarily reduce costs. Therefore, to perform a multi-objective optimization and find a compromise solution, a technique based on Game Theory is implemented, proposing a cooperative game strategy. Entropy-based weighting is used to assign criteria weights for the aggregate objective function. The criteria considered are the three pillars of sustainability and the constructive ease of the top slab. Applying this technique yields an optimal design with respect to the three pillars of sustainability, from which the constructive ease is further improved. / I would like to thank the Spanish Ministry of Science and Innovation. This research would not have been possible without the support of grant FPU-18/01592, funded by MCIN/AEI/10.13039/501100011033, "ESF invests in your future", as well as the financial assistance provided by DIMALIFE (BIA2017-85098-R) and HYDELIFE (PID2020-117056RB-I00), both funded by MCIN/AEI/10.13039/501100011033, and "ERDF A way of making Europe". / Martínez Muñoz, D. (2023). Optimal Deep Learning Assisted Design of Socially and Environmentally Efficient Steel Concrete Composite Bridges under Constrained Budgets [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/195967
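As a sketch of the trajectory-based metaheuristics named above (a generic discrete simulated annealing loop; the objective, neighbourhood, and cooling schedule are placeholders, not the thesis's 34-variable bridge model):

import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1000.0, cooling=0.95, steps_per_t=50, t_min=1e-3):
    """Minimize `objective` over discrete designs, accepting worse moves with a temperature-dependent probability."""
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            y = neighbor(x)                  # random perturbation of the current design
            fy = objective(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy                # accept improving or, occasionally, worsening moves
                if fx < f_best:
                    best, f_best = x, fx
        t *= cooling                         # geometric cooling schedule
    return best, f_best

# Toy usage: integer-coded "design variables" minimizing a simple separable cost.
def step(x):
    y = list(x)
    i = random.randrange(len(y))
    y[i] = min(9, max(0, y[i] + random.choice((-1, 1))))  # nudge one variable within bounds
    return y

random.seed(1)
x0 = [random.randrange(10) for _ in range(5)]
cost = lambda x: sum((v - 7) ** 2 for v in x)
best, f_best = simulated_annealing(cost, step, x0)
print(best, f_best)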
28

Modelling space-use and habitat preference from wildlife telemetry data

Aarts, Geert January 2007
Management and conservation of animal populations requires information on where the animals are, why they are there, and where else they could be. These objectives are typically approached by collecting data on the animals' use of space, relating these to prevailing environmental conditions, and employing these relations to predict usage in other geographical regions. Technical advances in wildlife telemetry have accomplished manifold increases in the amount and quality of available data, creating the need for a statistical framework that can use them to make population-level inferences about habitat preference and space-use. This has been slow in coming because wildlife telemetry data are, by definition, spatio-temporally autocorrelated, unbalanced, presence-only observations of behaviorally complex animals responding to a multitude of cross-correlated environmental variables. I review the evolution of techniques for the analysis of space-use and habitat preference, from simple hypothesis tests to modern modeling approaches, and outline the essential features of a framework that emerges naturally from these foundations. Within this framework, I discuss eight challenges inherent in the spatial analysis of telemetry data and, for each, propose solutions that can work in tandem. Specifically, I propose a logistic, mixed-effects approach that uses generalized additive transformations of the environmental covariates and is fitted to a response data set comprising the telemetry and simulated observations under a case-control design. I apply this framework to non-trivial case studies using data from satellite-tagged grey seals (Halichoerus grypus) foraging off the east and west coasts of Scotland, and northern gannets (Morus bassanus) from Bass Rock. I find that sea-bottom depth and sediment type explain little of the variation in gannet usage, but grey seals from different regions strongly prefer coarse sediment types, the ideal burrowing habitat of sandeels, their preferred prey. The results also suggest that prey aggregation within the water column might be as important as horizontal heterogeneity. More importantly, I conclude that, despite the complex behavior of the study species, flexible empirical models can capture the environmental relationships that shape population distributions.
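The case-control use-availability design described above can be sketched as follows (a simplified illustration with synthetic data and ordinary logistic regression; the thesis's actual model adds mixed effects and generalized additive covariate transformations):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic environment: each location has a depth and a sediment-coarseness covariate.
n_used, n_avail = 500, 2000
depth_used = rng.normal(60, 15, n_used)        # telemetry ("used") locations
coarse_used = rng.normal(0.7, 0.15, n_used)
depth_avail = rng.normal(80, 30, n_avail)      # simulated availability ("control") locations
coarse_avail = rng.uniform(0, 1, n_avail)

X = np.column_stack([
    np.concatenate([depth_used, depth_avail]),
    np.concatenate([coarse_used, coarse_avail]),
])
y = np.concatenate([np.ones(n_used), np.zeros(n_avail)])  # 1 = used, 0 = available

# The fitted coefficients describe relative preference for each covariate,
# up to the intercept, which absorbs the used:available sampling ratio.
model = LogisticRegression().fit(X, y)
print(dict(zip(["depth", "sediment_coarseness"], model.coef_[0])))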
