1 |
Representative Environments for Reduced Estimation Time of Wide Area Acoustic Performance. Fabre, Josette, 14 May 2010.
Advances in ocean modeling (Barron et al., 2006) have made ocean forecasts, and even ensembles representing ocean uncertainty (e.g., Coelho et al., 2009), increasingly available. This facilitates nowcasts (current-time ocean fields / analyses) and forecasts (predicted ocean fields) of acoustic propagation conditions in the ocean, which can greatly improve the planning of acoustic experiments. Modeling of acoustic transmission loss (TL) provides information about how the environment affects acoustic performance for the systems and system configurations of interest. It is, however, very time consuming to compute acoustic propagation to and from many potential source and receiver locations on an area-wide grid for multiple analysis / forecast times, ensembles, and scenarios of interest. Currently, to make such wide-area predictions, an area is gridded and acoustic predictions for multiple directions (or radials) at each grid point, for a single time period or ensemble, are computed to estimate performance on the grid. This grid generally does not consider the environment; it can neglect important environmental-acoustic features or overcompute in areas of environmental-acoustic isotropy. This effort develops two methods to pre-examine the area and time frame in terms of environmental acoustics and prescribe an environmentally optimized computational grid, one that exploits environmental-acoustic similarities and differences to characterize an area, time frame, and ensemble with fewer acoustic model predictions and thus less computation time. The savings allow a more thorough characterization of the time frame and area of interest. The first method is based on critical factors in the environment that typically indicate acoustic response; the second is based on a more robust, full-waveguide, mode-based description of the environment. Results for the critical-factors method show it to be a viable solution for most cases studied; its limitations arise in areas of high loss, which may not be of concern for exercise planning. The mode-based method is developed for range-independent environments and shows significant promise for future development.
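The abstract does not give the grid-reduction algorithm itself, but the idea of reusing one acoustic calculation for many environmentally similar grid points can be pictured with a generic clustering sketch. The Python sketch below groups synthetic sound-speed profiles with k-means and keeps one representative per cluster; the profile data, cluster count, and the use of k-means are assumptions made for illustration, not the thesis's critical-factors or mode-based methods.

```python
# Generic illustration (not the thesis's method): cluster sound-speed profiles
# so an acoustic model run is needed only once per representative environment
# instead of once per grid point. Profiles, cluster count, and k-means are
# assumptions made for this sketch.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic environment: 500 grid points, each with a sound-speed profile
# sampled at 40 depths (m/s); in practice these would come from ocean
# nowcast/forecast fields such as those cited in the abstract.
n_points, n_depths = 500, 40
profiles = 1500.0 + rng.normal(0.0, 5.0, size=(n_points, n_depths))

k = 12  # trade-off between computation time and fidelity of the wide-area picture
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)

# One representative grid point per cluster: the member closest to the centroid
representatives = []
for c in range(k):
    members = np.where(labels == c)[0]
    centroid = profiles[members].mean(axis=0)
    nearest = members[np.argmin(np.linalg.norm(profiles[members] - centroid, axis=1))]
    representatives.append(nearest)

print(f"acoustic model runs reduced from {n_points} to {len(representatives)}")
```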
|
2 |
Les bases de données environnementales : entre complexité et simplification : mutualisation et intégration d’outils partagés et adaptés à l’observatoire O-LiFE / Environmental databases: between complexity and simplification: mutualization and integration of shared tools adapted to the O-LiFE Observatory. Hajj-Hassan, Hicham, 19 December 2016.
O-LiFE is an environmental observatory dedicated to the study of resources and biodiversity in the critical zone of life, focused on the Mediterranean. It is also a structure at the interface between basic research and stakeholders. Initiated as a collaboration between Lebanese and French teams, the platform focuses first on systemic observation of the natural environment around the themes of water, biodiversity, and environmental management. The foundation of the observatory is the implementation of a transdisciplinary approach to the challenge of global change. Structuring, sharing, sustaining, and adding value to environmental data is a priority objective, enabling a wide community to converge towards a truly systemic and transdisciplinary approach to environmental issues in the Mediterranean. Building an information system that allows data to be fully interconnected is therefore a priority. This implementation is complicated, however, by several challenges: end users and data producers do not share the same needs, and the data are naturally heterogeneous. In this thesis, we detail the analysis and work carried out to design the architecture of the observatory's information system. The work began with a survey to better understand the existing data sources. We then proposed to use observation data management environments based on shared ontologies and on the recommendations of recognized consortia (OGC). Extensions are proposed to support distinct points of view on the data through multi-mapping. This extension decouples the data producer's original vision from the many possible uses of the data, obtained by cross-referencing with other data sources and/or other points of view. Finally, we applied the methodology to the O-LiFE data and were able to extract inter-database crossings (between two distinct data sources) and intra-database crossings (by juxtaposing distinct points of view on the same data source). This work demonstrates the fundamental role of information system tools and observatories in bringing together scientific communities and stakeholders to address the major environmental challenges facing society, particularly in the Mediterranean.
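As a rough illustration of the multi-mapping idea described above (distinct points of view layered over one producer record), the following Python sketch reinterprets a single observation under two hypothetical viewpoints. All field, station, and viewpoint names are invented; the observatory's actual model follows shared ontologies and OGC-style observation schemas.

```python
# Hypothetical sketch of multi-mapping: one raw observation record, several
# viewpoint mappings that reinterpret its fields for different communities.
# Field and viewpoint names are invented for illustration only.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Observation:
    station: str
    variable: str
    value: float
    unit: str

# A "viewpoint" is just a function translating the producer's record into
# the vocabulary another community expects.
Viewpoint = Callable[[Observation], Dict[str, Any]]

hydrology_view: Viewpoint = lambda o: {
    "site": o.station, "parameter": o.variable, "result": o.value, "uom": o.unit}

biodiversity_view: Viewpoint = lambda o: {
    "habitat": o.station, "pressure_indicator": o.variable, "level": o.value}

obs = Observation(station="Litani-01", variable="nitrate", value=3.2, unit="mg/L")
for name, view in {"hydrology": hydrology_view, "biodiversity": biodiversity_view}.items():
    print(name, view(obs))
```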
|
3 |
QSBMR Quantitative Structure Biomagnification Relationships: Studies Regarding Persistent Environmental Pollutants in the Baltic Sea Biota. Lundstedt-Enkel, Katrin, January 2005.
I have studied persistent environmental pollutants in herring (Clupea harengus), in adult guillemot (Uria aalge), and in guillemot eggs from the Baltic Sea. The studied contaminants were organochlorines (OCs), namely dichlorodiphenyltrichloroethanes (DDTs), polychlorinated biphenyls (PCBs), hexachlorobenzene (HCB), and hexachlorocyclohexanes (HCHs), and brominated flame retardants (BFRs), namely polybrominated diphenylethers (PBDEs) and hexabromocyclododecane (HBCD). The highest concentration in both species was shown by p,p′DDE, with a concentration in guillemot egg (geometric mean (GM) with 95% confidence interval) of 18200 (17000 – 19600) ng/g lipid weight. The BFR with the highest concentration in guillemot egg was HBCD, with a GM concentration of 140 (120 – 160) ng/g lw. To extract additional and essential information from the data, not obtainable with univariate or bivariate statistics alone, I used multivariate data analysis techniques: principal components analysis (PCA), partial least squares regression (PLS), soft independent modelling of class analogy (SIMCA), and PLS discriminant analysis (PLS-DA). I found, for example, that there are significant negative correlations between egg weight and the concentrations of HCB and p,p′DDE; that concentrations of OCs and BFRs in the organisms co-varied, so that concentrations of OCs can be used to calculate concentrations of BFRs; and that several contaminants (e.g., HBCD) had higher concentrations in guillemot egg than in guillemot muscle, several (e.g., BDE47) showed no concentration difference between muscle and egg, and one contaminant (BDE154) showed a higher concentration in guillemot muscle than in egg. In this thesis I developed a new method, "randomly sampled ratios" (RSR), to calculate biomagnification factors (BMFs), i.e., the ratio between the concentration of a contaminant in an organism and the concentration of the same contaminant in its food. With this new method, BMFs are reported with an estimate of variation. Contaminants that biomagnify include p,p′DDE, CB118, HCB, βHCH, and all of the BFRs; those that do not include p,p′DDT, αHCH, and CB101. Lastly, to investigate which of the contaminants' descriptors (physical-chemical and other properties and characteristics) correlate with their biomagnification, I modeled the contaminants' respective RSR-based BMFs versus ~100 descriptors and showed that ~20 descriptors in combination were important, either favoring or counteracting biomagnification between herring and guillemot.
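The thesis defines the biomagnification factor as the ratio between a contaminant's concentration in an organism and in its food, with the RSR method attaching an estimate of variation. The Python sketch below illustrates one plausible reading of that idea, pairing randomly drawn predator and prey concentrations and summarizing the ratio distribution; the data are invented and the exact RSR procedure in the thesis may differ.

```python
# Rough illustration of a "randomly sampled ratios" style biomagnification
# factor: pair randomly drawn guillemot and herring concentrations and
# summarize the ratio distribution. Concentrations below are invented and the
# exact RSR procedure in the thesis may differ.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lipid-normalized concentrations (ng/g lipid weight) of one contaminant
herring   = rng.lognormal(mean=4.0, sigma=0.4, size=30)   # prey
guillemot = rng.lognormal(mean=6.5, sigma=0.4, size=25)   # predator

n_ratios = 10_000
ratios = rng.choice(guillemot, n_ratios) / rng.choice(herring, n_ratios)

gm_bmf = np.exp(np.mean(np.log(ratios)))        # geometric-mean BMF
lo, hi = np.percentile(ratios, [2.5, 97.5])     # the variation estimate RSR is meant to give
print(f"BMF (geometric mean) = {gm_bmf:.1f}, 95% interval = [{lo:.1f}, {hi:.1f}]")
```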
|
4 |
Chukchi Sea environmental data management in a relational database. Yang, Fengyan, 29 October 2013.
Environmental data hold important information regarding humanity's past, present, and future, and are managed in various ways. The database structure most commonly used in contemporary applications is the relational database, yet its use for managing environmental data in the scientific world is not as widespread as in business enterprises. Attention is drawn by the diverse nature and rapidly growing volume of environmental data, which has increased substantially in recent years. Environmental data for the Chukchi Sea, with its potential oil resources, have become important for characterizing the physical, chemical, and biological environment. Substantial data have recently been collected by researchers from the Chukchi Sea Offshore Monitoring in the Drilling Area: Chemical and Benthos (COMIDA CAB) project. A modified Observations Data Model was employed for storing, retrieving, visualizing, and sharing the data. Throughout this project-based study, the heterogeneity of the environmental data was reconciled and the relational database model was modified and implemented. Data were transformed into shareable information, which improves data interoperability between different software applications (e.g., ArcGIS and SQL Server). The results confirm the feasibility and extendibility of employing relational databases for environmental data management.
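To make the Observations Data Model idea concrete, here is a much-simplified relational layout in Python using SQLite; the real project used SQL Server and a modified ODM schema, so the table and column names below are illustrative assumptions only.

```python
# Simplified sketch of an Observations-Data-Model-like relational layout,
# shown with SQLite for portability; the actual project used SQL Server and a
# richer schema. Table and column names here are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Sites (
    SiteID INTEGER PRIMARY KEY,
    SiteCode TEXT, Latitude REAL, Longitude REAL);
CREATE TABLE Variables (
    VariableID INTEGER PRIMARY KEY,
    VariableName TEXT, Units TEXT);
CREATE TABLE DataValues (
    ValueID INTEGER PRIMARY KEY,
    DataValue REAL, LocalDateTime TEXT,
    SiteID INTEGER REFERENCES Sites(SiteID),
    VariableID INTEGER REFERENCES Variables(VariableID));
""")
conn.execute("INSERT INTO Sites VALUES (1, 'CHUK-A', 70.5, -165.2)")
conn.execute("INSERT INTO Variables VALUES (1, 'Water temperature', 'degC')")
conn.execute("INSERT INTO DataValues VALUES (1, -1.4, '2012-08-03T12:00', 1, 1)")

# A join like this is what lets GIS or statistics tools consume the shared data
for row in conn.execute("""
    SELECT s.SiteCode, v.VariableName, d.DataValue, v.Units
    FROM DataValues d JOIN Sites s USING (SiteID) JOIN Variables v USING (VariableID)"""):
    print(row)
```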
|
5 |
Technology For Social Innovation: Open Data Platform for Monitoring the Condition of the Environment. Patrzalek, Roksana, January 2022.
This study investigates how data about environmental conditions can be used to provide individuals with a tool to make claims against industrial companies that cause pollution and affect people's lives. Based on extensive research into the political, social, technological, and ethical context of the use case, the design solution introduces an open data platform where individuals and NGOs can collect, store, analyse, and reuse collected data. The proposed design outcome creates a bridge between users outside the technological scope and the emerging field of IoT devices in order to make data collection affordable and accessible. It introduces a workflow for implementing off-the-shelf technology into the digital infrastructure, together with supporting functionalities to use the collected information for context-specific purposes. The concept was developed with strong emphasis on implementing democratic values in the design solution, such as protection of personal data, distributed governance, and transparency.
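Purely as an illustration of the kind of open, reusable record such a platform might exchange with off-the-shelf sensors, the short Python sketch below serializes one citizen-collected measurement to JSON; every field name and value is an assumption, not the platform's actual schema.

```python
# Minimal illustration of a citizen-collected measurement serialized to an
# open, reusable format (JSON). Field names and values are assumptions for
# illustration; the platform described in the thesis defines its own schema.
import json
from datetime import datetime, timezone

reading = {
    "sensor_id": "pm25-node-017",          # hypothetical off-the-shelf sensor
    "parameter": "PM2.5",
    "value": 38.4,
    "unit": "ug/m3",
    "location": {"lat": 51.76, "lon": 19.46},
    "timestamp": datetime(2022, 3, 14, 9, 30, tzinfo=timezone.utc).isoformat(),
    "license": "CC-BY-4.0",                # openness is part of the design intent
}
print(json.dumps(reading, indent=2))
```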
|
6 |
Integration of LCA into the building design process. Jerome, Adeline, January 2019.
The required estimation of a building's performance can no longer be limited to its energy efficiency. Environmental issues are an increasing concern in national policies. However, information about construction products is still segmented into several distinct databases: the construction company gathers data for its design process in private pricing databases, while environmental declarations from manufacturers are available in a public database. Interconnecting the different pieces of information about the same product is difficult because of differences in data formatting and representation. The objective of this project was to provide first tools to facilitate this interconnection between the company's design process and environmental data, considering the incoming requirements of the new thermal regulation of 2020. This led to the creation of a SQL environmental database, based on environmental declarations, better suited to statistical analysis than a document-based database. Specific data management functions were also developed to homogenise unit representation and to identify product performances for the purpose of multi-criteria analysis of products. Finally, an estimation of the distinctiveness of products through a selection of keywords was tested. Comparing lists of words performed well for classifying products into a limited number of lots, but it was not sufficient to identify items relating to the same construction product. Further work is therefore needed on the creation of a semantic metric model of construction vocabulary.
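The keyword-comparison experiment can be pictured with a small sketch. The Python code below scores overlap between a pricing-database item and hypothetical environmental declarations using Jaccard similarity; the metric, identifiers, and word lists are assumptions chosen for illustration, and, as the abstract notes, simple word overlap was found insufficient to match individual products.

```python
# Sketch of keyword-list comparison for matching a priced construction product
# with an environmental declaration. Jaccard similarity is an illustrative
# choice; the thesis found simple word overlap classified products into lots
# but could not reliably match individual products.
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

pricing_item = {"insulation", "mineral", "wool", "100mm", "panel"}
declarations = {
    "EPD-001": {"mineral", "wool", "insulation", "panel"},   # hypothetical IDs
    "EPD-002": {"concrete", "block", "load", "bearing"},
}
best = max(declarations, key=lambda k: jaccard(pricing_item, declarations[k]))
print(best, round(jaccard(pricing_item, declarations[best]), 2))
```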
|
7 |
Adaptação de algoritmos de processamento de dados ambientais para o contexto de Big Data [Adaptation of environmental data processing algorithms to the Big Data context]. Campos, Guilherme Falcão da Silva, 23 November 2015.
Environmental research depends on sensor-generated data to create time series for the variables being analyzed. The amount of data tends to increase as more and more sensors are created and installed. Over time the datasets become massive and require new ways of storing and processing the data. This work seeks ways to address these problems using a technological solution capable of storing and processing large amounts of data. The solution used is Apache Hadoop, a tool aimed at Big Data problems. To evaluate the tool, different datasets were used and several time-series analysis algorithms were adapted, covering both chaotic and non-chaotic series. The implementations were the wavelet transform, a similarity search using the Euclidean distance function, the calculation of the box-counting dimension, and the calculation of the correlation dimension. These implementations were adapted to the MapReduce distributed processing paradigm.
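To show the general pattern of adapting such an algorithm to MapReduce, the sketch below expresses a Euclidean similarity search as a map phase (best window distance per series) and a reduce phase (global nearest series), in plain Python rather than on Hadoop; the data and function names are illustrative only.

```python
# Illustration of the MapReduce pattern applied to Euclidean similarity search
# over many sensor time series. Plain Python stands in for Hadoop here; the
# dissertation's actual implementation runs on Apache Hadoop.
import math
from functools import reduce

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def map_phase(record, query):
    series_id, values = record
    # emit (series_id, distance of its best-aligned window to the query)
    best = min(euclidean(values[i:i + len(query)], query)
               for i in range(len(values) - len(query) + 1))
    return (series_id, best)

def reduce_phase(best_so_far, candidate):
    return candidate if candidate[1] < best_so_far[1] else best_so_far

query = [0.1, 0.4, 0.8, 0.4, 0.1]
dataset = {
    "station_A": [0.0, 0.1, 0.5, 0.9, 0.5, 0.1, 0.0],   # invented sensor series
    "station_B": [1.0, 1.1, 1.3, 1.2, 1.1, 1.0, 0.9],
}
mapped = [map_phase(item, query) for item in dataset.items()]
print(reduce(reduce_phase, mapped))   # nearest series and its distance
```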
|
8 |
Membrane Bioreactor-based Wastewater Treatment Plant Energy Consumption: Environmental Data Science Modeling and Analysis. Cheng, Tuoyuan, 10 1900.
Wastewater Treatment Plants (WWTPs) are sophisticated systems that have to sustain qualified long-term performance regardless of temporally volatile volumes or compositions of the incoming wastewater. Membrane filtration in Membrane Bioreactors (MBRs) reduces the WWTP footprint and produces effluents of proper quality. The energy or electric power consumption of WWTPs, mainly from aeration equipment and pumping, is directly linked to greenhouse gas emissions and economic input. Biological treatment requires oxygen from aeration to perform aerobic decomposition of aquatic pollutants, while pumping consumes energy to overcome friction in the channels, piping systems, and membrane filtration.

In this thesis, we researched monitoring and forecasting models for the Influent Conditions (ICs) of full-scale WWTPs to facilitate energy consumption budgeting and raise early alarms for latent abnormal events. Accurate and efficient forecasts of ICs could avoid unexpected system disruption, maintain steady product quality, support efficient downstream processes, improve reliability, and save energy. We carried out a numerical study of bioreactor microbial ecology for MBR microbial communities to identify indicator species and typical working conditions that would assist in reactor status confirmation and support energy consumption budgeting. To quantify membrane fouling and cleaning effects at various scales, we proposed quantitative methods based on Matérn covariances to analyze biofouling layer thickness and roughness obtained from Optical Coherence Tomography (OCT) images taken from gravity-driven MBRs under various working conditions. Such methods would support practitioners in designing suitable data-driven process operation or replacement cycles and lead to quantified WWTP monitoring and energy saving.

For future research, we would investigate data from other full-scale water or wastewater treatment processes with higher sampling frequency and apply kernel machine learning techniques for global process monitoring. The forecasting models would be incorporated into optimization scenarios to support data-driven decision-making. Samples from more MBRs would be considered to gather information on microbial community structures and the corresponding oxygen-energy consumption under various working conditions. We would also investigate the relationship between pressure drop and spatial roughness measures, and adopt metrics related to anisotropic Matérn covariances to quantify directional effects under various operation and cleaning conditions.
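The Matérn-covariance approach mentioned above can be pictured with a textbook example: the closed-form Matérn covariance with smoothness ν = 3/2 evaluated over spatial lags, as might be fitted to biofouling thickness profiles extracted from OCT images. The Python sketch below uses this standard form with invented parameters; it is not the thesis's specific estimator.

```python
# Textbook Matérn covariance (smoothness nu = 3/2) that could be fitted to
# spatial lags of biofouling-layer thickness from OCT images. Parameters and
# lags below are invented; this is not the thesis's specific estimator.
import numpy as np

def matern_32(lag, sigma2=1.0, rho=1.0):
    """Matern covariance, nu = 3/2: sigma^2 * (1 + sqrt(3)*d/rho) * exp(-sqrt(3)*d/rho)."""
    d = np.sqrt(3.0) * np.abs(np.asarray(lag, dtype=float)) / rho
    return sigma2 * (1.0 + d) * np.exp(-d)

lags_mm = np.linspace(0.0, 1.0, 6)   # spatial lags along the membrane surface
for rho in (0.1, 0.4):               # smaller range parameter -> rougher-looking layer
    print(f"rho = {rho} mm:", np.round(matern_32(lags_mm, rho=rho), 3))
```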
|
9 |
Hur låter miljöförstöring? : Självgenererande och slumpmässig musik sprungen ur statistiska data / What does environmental pollution sound like? : Self-generative and randomized music interprets data. Wahlström, Gustav, January 2020.
What does environmental pollution sound like? Self-generative and randomized music interprets data is a master's thesis focused on transforming data and letting it control music and sound. If we create artistic outputs from data, will it allow us to experience and understand the original data in a new way? The core, and the result, of this project are seven generative compositions, created and controlled by different kinds of environmental data, which explore the research areas of sonification and generative music by asking the question: What does environmental pollution sound like? Generative music means that the music creates, develops, and changes itself based on the tools established within this project. The thesis also focuses on the method of developing these tools in order to enable similar productions in the future. Building on earlier experience of using randomized events to manipulate details in a production, this project delves deeper into applying the same technique to a whole composition. The seven compositions also form the basis for exploring the research area of sonification. Earlier research tends to approach the subject from a scientific perspective; the purpose of this project was instead to approach it within the frame of musical performance and from an artistic perspective. Sonification means the use of non-speech audio to perceptualize data, offering an alternative, or a complement, to visualizing the original data. Drawing on these seven compositions, the thesis then reflects on generative music in general, and sonification in particular, including the opportunities, future research, and authenticity of sonification. (The attached audio file is a collage of the seven compositions produced in the project, pending their publication in full.)
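The mapping step at the heart of such sonification (data values driving musical parameters) can be sketched in a few lines. The Python example below scales an invented pollution series onto a pentatonic pitch set; the data, scale, and mapping are assumptions for illustration and are not the generative tools built in the thesis.

```python
# Minimal illustration of the mapping step in sonification: scale an invented
# pollution series onto a pentatonic pitch set (MIDI note numbers). The data,
# scale, and mapping are assumptions, not the generative tools from the thesis.
pollution_index = [38.0, 39.1, 41.5, 40.2, 44.8, 47.3, 46.1, 49.9]

pentatonic = [60, 62, 64, 67, 69, 72]        # C major pentatonic plus the octave
lo, hi = min(pollution_index), max(pollution_index)

def to_pitch(value: float) -> int:
    idx = round((value - lo) / (hi - lo) * (len(pentatonic) - 1))
    return pentatonic[idx]

melody = [to_pitch(v) for v in pollution_index]
print(melody)   # rising pollution values produce a rising melodic contour
```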
|
10 |
Stochastic Multimedia Modelling of Watershed-Scale Microbial Transport in Surface Water. Safwat, Amr M., 10 October 2014.
No description available.
|