1 |
Some Common Subsequence Problems of Multiple Sequences and Their Applications
Huang, Kuo-Si 14 July 2007 (has links)
The longest common subsequence (LCS) problem is a famous and classical problem in computer science and molecular biology. The common subsequence of multiple sequences reveals the identical and similar parts of these sequences. This dissertation focuses on approximation algorithms for finding the LCS of $k$ input sequences (the $k$-LCS problem), the merged LCS problem, and the mosaic LCS problem. These three problems seek, respectively, the identical relationships among $k$ sequences, the interleaving relationship between a target sequence and a merged sequence formed from a pair of sequences, and the mosaic relationship between a target sequence and a set of sequences.
Given $k$ input sequences, the $k$-LCS problem is to find the LCS common to all of them. We first propose two $\sigma$-approximate algorithms for the $k$-LCS problem with time complexities $O(\sigma k n)$ and $O(\sigma^{2} k n + \sigma^{3} n)$, respectively, where $\sigma$ is the alphabet size and $n$ is the length of the sequences. Experimental results show that our algorithms for the 2-LCS problem can serve as a good filter for selecting candidate sequences in database searching.
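For reference, the exact two-sequence LCS that these approximation algorithms relax can be computed with the classical quadratic-time dynamic program. The sketch below is that textbook baseline, not an algorithm from the dissertation; the function name is illustrative.

```python
def lcs_length(x: str, y: str) -> int:
    """Classical O(len(x) * len(y)) dynamic program for the length
    of the longest common subsequence of two sequences."""
    m, n = len(x), len(y)
    # dp[i][j] = LCS length of the prefixes x[:i] and y[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one symbol
    return dp[m][n]

print(lcs_length("AGCAT", "GAC"))  # → 2 (e.g. "GA" or "AC")
```

The approximation algorithms in the abstract trade this exactness for running times linear in $n$, which matters when $k$ sequences must be compared at once.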
Given a target sequence $T$ and a pair of merging sequences $A$ and $B$, the merged LCS problem is to find the LCS of $T$ and the optimal sequence obtained by merging $A$ and $B$ alternately. Its goal is to find a merging order that reveals the interleaving relationship between the sequences. We first propose an algorithm with $O(n^{3})$ time for solving the problem, where $n$ is the sequence length. We further incorporate block information of the input sequences in the blocked merged LCS problem. To solve the latter problem, we propose an algorithm with time complexity $O(n^{2}m_{b})$, where $m_{b}$ is the number of blocks. Based on the S-table technique, we then design an improved algorithm with $O(n^{2} + nm_{b}^{2})$ time.
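The $O(n^{3})$ bound quoted above can be met with a three-dimensional dynamic program over prefixes of $T$, $A$ and $B$. The sketch below is one plausible formulation of that recurrence, not the dissertation's own algorithm; it exploits the fact that the LCS of $T$ with the best interleaving of $A$ and $B$ is the longest subsequence of $T$ that splits into an in-order merge of a subsequence of $A$ and a subsequence of $B$.

```python
def merged_lcs(t: str, a: str, b: str) -> int:
    """Length of the LCS of t and an optimally merged (interleaved)
    sequence of a and b.  State L[i][j][k] = that length restricted to
    t[:i], a[:j], b[:k]; O(|t|*|a|*|b|) time and space."""
    nt, na, nb = len(t), len(a), len(b)
    L = [[[0] * (nb + 1) for _ in range(na + 1)] for _ in range(nt + 1)]
    for i in range(nt + 1):
        for j in range(na + 1):
            for k in range(nb + 1):
                best = 0
                if i:
                    best = max(best, L[i - 1][j][k])   # skip t[i-1]
                if j:
                    best = max(best, L[i][j - 1][k])   # skip a[j-1]
                if k:
                    best = max(best, L[i][j][k - 1])   # skip b[k-1]
                if i and j and t[i - 1] == a[j - 1]:
                    best = max(best, L[i - 1][j - 1][k] + 1)  # match via a
                if i and k and t[i - 1] == b[k - 1]:
                    best = max(best, L[i - 1][j][k - 1] + 1)  # match via b
                L[i][j][k] = best
    return L[nt][na][nb]

print(merged_lcs("abab", "aa", "bb"))  # → 4: merging "aa","bb" as "abab"
```

With $|A|, |B|, |T| \le n$ this is $O(n^{3})$, matching the stated complexity of the first algorithm.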
Additionally, we wish to characterize the relationship between one sequence and a set of sequences. Given a target sequence $T$ and a set $S$ of source sequences, the mosaic LCS problem is to find the LCS of $T$ and a mosaic sequence $C$ composed of $k$ sequences from $S$, with repetition allowed. Based on the concept of break points in $T$, we propose a divide-and-conquer algorithm with $O(n^2 m|S| + n^3 \log k)$ time, where $n$ is the length of $T$ and $m$ is the maximal length of the sequences in $S$. Again based on the S-table technique, an improved algorithm with $O(n(m+k)|S|)$ time is obtained by applying an efficient preprocessing step.
|
2 |
Effects of English as a Corporate Language on Communication in a Nordic Merged Company
Gynne (Leppänen), Annaliina January 2004 (has links)
In the business world, facilitating corporate communication through the use of a single language has become almost standard procedure. There is little knowledge, however, of how working in a language other than one's mother tongue affects thought processes and functionality at work. This study attempts to clarify some issues around the subject. Its purpose is to explore the impact of the corporate language, English, on managers' communication within the organisation. The target group includes Finnish and Swedish managers working at a Nordic IT corporation, TietoEnator. The study was conducted by combining theoretical material on communication, language and culture with the empirical results of seven qualitative interviews. The results show that using a shared corporate language has both advantages and disadvantages. English helps in company internationalisation and in creating a sense of belonging, but it also complicates everyday communication. The main disadvantage is a lack of social communication between members of different nations at an informal level. The main conclusion is that the corporate language is not always sufficient to fulfil the social needs of the members of the organisation. Through this lack of socialisation, the organisation may lose some of its competitive advantage in the business markets.
|
3 |
Verification based on unfoldings of Petri nets with read arcs / Vérification à l'aide de dépliages de réseaux de Petri étendus avec des arcs de lecture
Rodríguez, César 12 December 2013 (has links)
L'être humain fait des erreurs, en particulier dans la réalisation de taches complexes comme la construction des systèmes informatiques modernes. Nous nous intéresserons dans cette thèse à la vérification assistée par ordinateur du bon fonctionnement des systèmes informatiques. Les systèmes informatiques actuels sont de grande complexité. Afin de garantir leur fiabilité, la vérification automatique est une alternative au 'testing' et à la simulation. Elle propose d'utiliser des ordinateurs pour explorer exhaustivement l'ensemble des états du système, ce qui est problématique: même des systèmes assez simples peuvent atteindre un grand nombre d'états. L'utilisation des bonnes représentations des espaces d'états est essentielle pour surmonter la complexité des problèmes posés en vérification automatique. La vérification des systèmes concurrents amène des difficultés additionnelles, car l'analyse doit, en principe, examiner tous les ordres possibles d'exécution des actions concurrentes. Le dépliage des réseaux de Petri est une technique largement étudiée pour la vérification des systèmes concurrents. Il représentent l'espace d'états du système par un ordre partiel, ce qui se révèle aussi naturel qu'efficace pour la vérification automatique. Nous nous intéressons à la vérification des systèmes concurrents modélisés par des réseaux de Petri, en étudiant deux techniques remarquables de vérification: le 'model checking' et le diagnostic. Nous étudions les dépliages des réseaux de Petri étendus avec des arcs de lecture. Ces dépliages, aussi appelés dépliages contextuels, semblent être une meilleure représentation des systèmes contenant des actions concurrentes qui lisent des ressources partagées : ils peuvent être exponentiellement plus compacts dans ces cas. Ce travail contient des contributions théoriques et pratiques. 
Dans un premier temps, nous étudions la construction des dépliages contextuels, en proposant des algorithmes et des structures de données pour leur construction efficace. Nous combinons les dépliages contextuels avec les 'merged process', une autre représentation des systèmes concurrents qui contourne l'explosion d'états dérivée du non-déterminisme. Cette nouvelle structure, appelée 'contextual merged process', est souvent exponentiellement plus compacte, ce que nous montrons expérimentalement. Ensuite, nous nous intéressons à la vérification à l'aide des dépliages contextuels. Nous traduisons vers SAT le problème d'atteignabilité des dépliages contextuels, en abordant les problèmes issus des cycles de conflit asymétrique. Nous introduisons également une méthode de diagnostic avec des hypothèses d'équité, cette fois pour des dépliages ordinaires. Enfin, nous implémentons ces algorithmes dans le but de produire un outil de vérification compétitif et robuste. L'évaluation de nos méthodes sur un ensemble d'exemples standards, et leur comparaison avec des techniques issues des dépliages ordinaires, montrent que la vérification avec des dépliages contextuels est plus efficace que les techniques existantes dans de nombreux cas. Ceci suggère que les dépliages contextuels, et les structures d'évènements asymétriques en général, méritent une place légitime dans la recherche en concurrence, également du point de vu de leur efficacité. / Humans make mistakes, especially when faced to complex tasks, such as the construction of modern hardware or software. This thesis focuses on machine-assisted techniques to guarantee that computers behave correctly. Modern computer systems are large and complex. Automated formal verification stands as an alternative to testing or simulation to ensuring their reliability. It essentially proposes to employ computers to exhaustively check the system behavior. 
Unfortunately, automated verification suffers from the state-space explosion problem: even relatively small systems can reach a huge number of states. Using the right representation for the system behavior is a key step in tackling the inherent complexity of the problems that automated verification solves. The verification of concurrent systems poses additional issues, as their analysis requires evaluating, conceptually, all possible execution orders of their concurrent actions. Petri net unfoldings are a well-established verification technique for concurrent systems. They represent behavior by partial orders, which is not only natural but also efficient for automatic verification. This dissertation focuses on the verification of concurrent systems, employing Petri nets to formalize them, and studies two prominent verification techniques: model checking and fault diagnosis. We investigate the unfoldings of Petri nets extended with read arcs. The unfoldings of these so-called contextual nets seem to be a better representation for systems exhibiting concurrent read access to shared resources: they can be exponentially smaller than conventional unfoldings in these cases. Theoretical and practical contributions are made. We first study the construction of contextual unfoldings, introducing algorithms and data structures that enable their efficient computation. We integrate contextual unfoldings with merged processes, another representation of concurrent behavior that alleviates the explosion caused by non-determinism. The resulting structure, called contextual merged processes, is often orders of magnitude smaller than unfoldings, as we experimentally demonstrate. Next, we develop verification techniques based on unfoldings. We define SAT encodings for the reachability problem in contextual unfoldings, thus solving the problem of detecting cycles of asymmetric conflict.
An unfolding-based decision procedure for fault diagnosis under fairness constraints is also presented, in this case only for conventional unfoldings. Finally, we implement our verification algorithms, aiming to produce a competitive model checker able to handle realistic benchmarks. We then evaluate our methods on a standard set of benchmarks and compare them with existing unfolding-based techniques. The experiments demonstrate that reachability checking based on contextual unfoldings outperforms existing techniques in a wide range of cases. This suggests that contextual unfoldings, and asymmetric event structures in general, have a rightful place in research on concurrency, also from an efficiency point of view.
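As a toy illustration of the models involved (the nets themselves, not the unfolding construction), a 1-safe Petri net with read arcs can be explored explicitly. A read arc tests a token without consuming it, which is exactly what makes concurrent read access to a shared resource cheap: transitions that only read the same place do not conflict. The sketch below enumerates reachable markings by brute force, the explicit-state approach that unfolding-based methods are designed to avoid; all names are illustrative.

```python
from collections import deque

def reachable_markings(transitions, initial):
    """Explicit-state exploration of a 1-safe Petri net with read arcs.
    `transitions` maps a name to (pre, read, post): pre-places are
    consumed, read-places are only tested, post-places are produced."""
    start = frozenset(initial)
    seen = {start}
    queue = deque([start])
    while queue:
        m = queue.popleft()
        for name, (pre, read, post) in transitions.items():
            if pre <= m and read <= m:            # transition enabled?
                m2 = (m - pre) | post             # fire: keep read tokens
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

# Two transitions t1, t2 both *read* the shared place r while
# consuming their private places p1, p2 respectively.
ts = {
    "t1": (frozenset({"p1"}), frozenset({"r"}), frozenset({"q1"})),
    "t2": (frozenset({"p2"}), frozenset({"r"}), frozenset({"q2"})),
}
marks = reachable_markings(ts, {"p1", "p2", "r"})
print(len(marks))  # → 4: t1 and t2 commute, r is never consumed
```

In an unfolding, the two read accesses to `r` would not need to be ordered, whereas encoding read arcs as consume-and-restore loops forces an ordering and can blow the unfolding up exponentially.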
|
4 |
Strategic consensus building : A single case study in a merged organization
Buijs, Sonja, Langguth, Julia January 2017 (has links)
Background: Considering the high rate of merger failure during strategy implementation, there is a need to elaborate on strategic consensus building during this major organizational change. Purpose: To gain understanding of the strategic consensus building process in a merged organization from a teleological perspective. The pre-merger influence and intervening circumstances are expected to affect the process of consensus building. Methodology: A single case study approach was taken, interviewing twelve senior managers from two hierarchical levels as well as five managers from the corporate strategy department of a merged organization to gain a comprehensive understanding of the research topic. Findings: The empirical findings indicated that consensus on strategic priorities is essential for the further development of a merged organization. In addition, this study identified three facilitators of strategic consensus building: vertical communication, transparency, and agility.
|
5 |
The Perception of Effectiveness in Merged Information Services Organizations: Combining Library and Information Technology Services at Liberal Arts Institutions
Stemmer, John K. 10 August 2007 (has links)
No description available.
|
6 |
Verification based on unfoldings of Petri nets with read arcs
Rodríguez, César 12 December 2013 (has links) (PDF)
Humans make mistakes, especially when faced with complex tasks, such as the construction of modern hardware or software. This thesis focuses on machine-assisted techniques to guarantee that computers behave correctly. Modern computer systems are large and complex. Automated formal verification stands as an alternative to testing or simulation for ensuring their reliability. It essentially proposes to employ computers to exhaustively check the system behavior. Unfortunately, automated verification suffers from the state-space explosion problem: even relatively small systems can reach a huge number of states. Using the right representation for the system behavior is a key step in tackling the inherent complexity of the problems that automated verification solves. The verification of concurrent systems poses additional issues, as their analysis requires evaluating, conceptually, all possible execution orders of their concurrent actions. Petri net unfoldings are a well-established verification technique for concurrent systems. They represent behavior by partial orders, which is not only natural but also efficient for automatic verification. This dissertation focuses on the verification of concurrent systems, employing Petri nets to formalize them, and studies two prominent verification techniques: model checking and fault diagnosis. We investigate the unfoldings of Petri nets extended with read arcs. The unfoldings of these so-called contextual nets seem to be a better representation for systems exhibiting concurrent read access to shared resources: they can be exponentially smaller than conventional unfoldings in these cases. Theoretical and practical contributions are made. We first study the construction of contextual unfoldings, introducing algorithms and data structures that enable their efficient computation. 
We integrate contextual unfoldings with merged processes, another representation of concurrent behavior that alleviates the explosion caused by non-determinism. The resulting structure, called contextual merged processes, is often orders of magnitude smaller than unfoldings, as we experimentally demonstrate. Next, we develop verification techniques based on unfoldings. We define SAT encodings for the reachability problem in contextual unfoldings, thus solving the problem of detecting cycles of asymmetric conflict. An unfolding-based decision procedure for fault diagnosis under fairness constraints is also presented, in this case only for conventional unfoldings. Finally, we implement our verification algorithms, aiming to produce a competitive model checker able to handle realistic benchmarks. We then evaluate our methods on a standard set of benchmarks and compare them with existing unfolding-based techniques. The experiments demonstrate that reachability checking based on contextual unfoldings outperforms existing techniques in a wide range of cases. This suggests that contextual unfoldings, and asymmetric event structures in general, have a rightful place in research on concurrency, also from an efficiency point of view.
|
7 |
Learning in the third space : a sociocultural perspective on learning with analogies
Bellocchi, Alberto January 2009 (has links)
Research on analogies in science education has focussed on student interpretation of teacher and textbook analogies, psychological aspects of learning with analogies and structured approaches for teaching with analogies. Few studies have investigated how analogies might be pivotal in students’ growing participation in chemical discourse. To study analogies in this way requires a sociocultural perspective on learning that focuses on ways in which language, signs, symbols and practices mediate participation in chemical discourse. This study reports research findings from a teacher-research study of two analogy-writing activities in a chemistry class. The study began with a theoretical model, Third Space, which informed analyses and interpretation of data. Third Space was operationalized into two sub-constructs called Dialogical Interactions and Hybrid Discourses. The aims of this study were to investigate sociocultural aspects of learning chemistry with analogies in order to identify classroom activities where students generate Dialogical Interactions and Hybrid Discourses, and to refine the operationalization of Third Space.
These aims were addressed through three research questions, studied through an instrumental case study design. The study was conducted in my Year 11 chemistry class at City State High School for the duration of one semester. Data were generated through a range of data collection methods and analysed through discourse analysis, using the Dialogical Interactions and Hybrid Discourse sub-constructs as coding categories. Results indicated that student interactions differed between analogical activities and mathematical problem-solving activities. Specifically, students drew on discourses other than school chemical discourse to construct analogies, and their growing participation in chemical discourse was tracked using the Third Space model as an interpretive lens.
Results of this study led to modification of the theoretical model adopted at the beginning of the study to a new model called Merged Discourse. Merged Discourse represents the mutual relationship that formed during analogical activities between the Analog Discourse and the Target Discourse. This model can be used for interpreting and analysing classroom discourse centred on analogical activities from sociocultural perspectives. That is, it can be used to code classroom discourse to reveal students’ growing participation with chemical (or scientific) discourse consistent with sociocultural perspectives on learning.
|
8 |
Time series analysis of SAR images using persistent scatterer (PS), small baseline (SB) and merged approaches in regions with small surface deformation / Analyse des séries temporelles des images SAR par le biais des méthodes « persistent scatterer » (PS), « small baseline » (SB) et l'approche de fusion dans les régions à petite déformation de surface
Bouraoui, Seyfallah 02 July 2013 (has links)
This thesis concerns the study of surface deformation (small and large) that can be detected using interferometric (InSAR) processing of SAR (Synthetic Aperture Radar, C-band: λ = 5.6 cm) images and the associated signal. Recent developments in InSAR processing techniques allow surface deformation to be monitored with millimetre-level accuracy. So-called conventional InSAR processing uses a pair of SAR images ("Master" and "Slave") to measure the phase difference between two acquisitions of the same scene taken at different times. The uncertainties in measurements obtained from conventional InSAR processing are numerous: signal decorrelation due to atmospheric delay, the topographic contribution and the orbital positions are the major handicaps of this technique. In 2001, Ferretti et al. introduced a new method called Permanent Scatterer (PS-InSAR), also known as Persistent Scatterer. This method uses a series of Slave images to build interferograms with the same Master image, improving the line-of-sight (LOS) signal correlation for each pixel (PS) by using the best reflectors, those with maximal correlation (in amplitude and/or phase) in time and space. A large number of algorithms (variants) have been developed for this purpose using the same principle. In 2002, Berardino et al. published a new algorithm for monitoring surface deformation based on interferograms produced from pairs of SAR images with a small spatial baseline separation (SBAS). In this thesis, the InSAR techniques are applied to several case studies of small surface deformation: 1) subsidence in an oilfield area, 2) landslides in an urban area, and 3) slow deformation across fault zones in seismic regions. To study small deformation I use the two multi-temporal InSAR algorithms (PS and SBAS) implemented in the StaMPS software (Hooper, 2008), and I computed the combined (hybrid) PS/SBAS method for all the case studies presented in this thesis. In addition, open-source software is used throughout: ROI_PAC (Rosen et al., 2004) to coregister the SAR images and Doris (Kampes et al., 2003) to compute interferograms from them. [...] / This thesis aims at the study of small to large surface deformation that can be detected using remote-sensing interferometric synthetic aperture radar (InSAR) methods. New developments in InSAR processing techniques allow the monitoring of surface deformation with millimetre accuracy. Conventional InSAR uses a pair of SAR images ("Master" and "Slave") to measure the phase difference between two images of the same scene taken at different times. The uncertainties in conventional InSAR measurements due to atmospheric delay, topographic changes and orbital artifacts are the handicaps of this method. The idea of InSAR is to measure the phase difference between two SAR acquisitions; these measurements reflect ground movement relative to the satellite position.
In an interferogram, the red-to-blue colours indicate pixel movement towards or away from the satellite in the line-of-sight (LOS) direction. In the 2000s, many radar missions were launched, and SAR acquisitions and InSAR applications saw explosive growth across different geophysical studies owing to the wealth of SAR data and its easy accessibility. Mining these data requires a new generation of InSAR processing. In 2001, Ferretti and others introduced a new method, Permanent Scatterer InSAR (PS), based on using more than one Slave image in InSAR processing with the same Master image. The method enhances the LOS signal for each pixel (PS) by selecting the best time- and/or space-correlated signal (from amplitude and/or phase) across the acquisitions. A large number of algorithm variants were developed for this purpose using the same principle. In 2002, Berardino et al. developed a new algorithm for monitoring surface deformation based on combining stacks of InSAR results from SAR pairs respecting a small baseline (SB) separation. These two methods now represent the existing approaches to time-series (TS) analysis of SAR images. In addition, the StaMPS software, introduced by Hooper and others in 2008, can combine the two methods to take advantage of both TS approaches in terms of best signal correlation and reduced signal noise. In this thesis, the time-series studies of surface changes associated with different geophysical phenomena serve two purposes: first, to present the PS and SBAS results and discuss the reliability of the obtained InSAR signal in comparison with previous studies of the same geophysical case or with field observations; second, to use the combined method to cross-validate the results obtained separately with the different TS techniques.
The validation of the obtained signal is ensured by these two steps: the PS and SBAS methods should give essentially the same interferograms and LOS displacement signal (in sign and magnitude), and these results are then compared with previous studies or with observations in the field. In this thesis, the InSAR techniques are applied to different case studies of small surface deformation [...]
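The LOS displacement discussed above follows from the unwrapped interferometric phase through the standard repeat-pass relation d = λ·Δφ/(4π), so one full 2π fringe corresponds to λ/2 of LOS motion. A minimal numeric sketch for C-band (λ = 5.6 cm); the sign convention (whether positive means motion towards or away from the satellite) varies between processors, so it is left aside here.

```python
import math

def los_displacement_mm(delta_phi_rad: float, wavelength_m: float = 0.056) -> float:
    """Convert an unwrapped interferometric phase change (radians) to
    line-of-sight displacement (millimetres) for repeat-pass InSAR.
    One full 2*pi fringe corresponds to wavelength/2 of LOS motion."""
    return (wavelength_m / (4 * math.pi)) * delta_phi_rad * 1000.0

# One C-band fringe (2*pi radians) = 5.6 cm / 2 = 28 mm of LOS motion.
print(round(los_displacement_mm(2 * math.pi), 1))  # → 28.0
```

This λ/2-per-fringe sensitivity is why C-band InSAR can resolve the millimetre-scale deformation rates targeted by the PS and SBAS time-series methods.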
|
9 |
Breakdown Characteristics in SiC and Improvement of PiN Diodes toward Ultrahigh-Voltage Applications / 超高耐圧応用を目指したSiCにおける絶縁破壊特性の基礎研究およびPiNダイオードの高性能化
Niwa, Hiroki 23 March 2016 (links)
Kyoto University / 0048 / New system, doctoral program / Doctor of Engineering / Degree No. Kō 19722 / Engineering Doctorate No. 4177 / 新制||工||1644 (University Library) / 32758 / Department of Electronic Science and Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor 木本 恒暢, Professor 髙岡 義寛, Professor 山田 啓文 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
10 |
Coupled processes in seasonally frozen soils : Merging experiments and simulations
Wu, Mousong January 2016 (links)
Soil freezing/thawing is important for the coupled transport of water, heat and solutes. Because of this complexity, uncertainty affects both experimental and simulation work on frozen soils. Solutes in frozen soil depress the freezing point, introducing uncertainty into simulations of water, heat and solute processes and into estimates of frozen-soil evaporation. High salinity and a shallow groundwater table can lead to high soil evaporation during winter. Seasonal courses of the surface energy and water balance were shown to influence soil water and heat dynamics, as well as salt accumulation during winter. Water and solutes accumulated during the freezing period resulted in high evaporation during the thawing period and enhanced surface salinization. Diurnal changes in surface energy partitioning produced significant freezing/thawing and evaporation/condensation cycles in the surface layer, which could in turn affect the atmosphere. Uncertainties in experiments and simulations were detected when investigating seasonally frozen soils, with limited methods and simplified representations of reality, in two agricultural fields in northern China. Soil water and solute contents proved more uncertain than soil temperatures in both measurements and simulations. Combining experiments with a process-based model (CoupModel) proved useful for understanding freezing/thawing processes and identifying uncertainty, with Monte Carlo-based methods used to evaluate the simulations. Correlations between parameters and model performance indices had to be considered carefully when calibrating the process-based model. Parameters related to soil hydraulic and surface energy processes were more sensitive when different datasets were used for calibration.
When multiple model performance indicators were used for multi-objective evaluation, the trade-offs between them proved to be a source of uncertainty in calibration. More faithful representations of reality in the model (e.g., soil hydraulic and thermal properties) and more detailed measurements as input (e.g., soil liquid water content and solute concentration) would be effective in reducing uncertainty. The relationships between groundwater, soil and climate change are of high interest for a better understanding of the water and energy balance of cold regions. / Soil freezing/thawing strongly affects the coupled transport of water, heat and solutes and is of far-reaching significance for the study of hydrological processes in cold and arid regions. In frozen soil, solutes lower the freezing point and alter the freeze/thaw regime, and thereby water and heat transport. Seasonal patterns in the surface water and energy balance of freezing/thawing soil strongly influence soil water and heat transport and salt accumulation, while diurnal freeze/thaw cycles in the surface layer strongly affect surface-layer water and heat processes and, in turn, near-surface meteorological conditions. Experiments showed that high solute content and a shallow groundwater table favour surface evaporation: higher salinity lowers the freezing point, so that more liquid water is available for evaporation at the same sub-zero temperature, while a shallow groundwater table promotes upward accumulation of water and salts during the freezing period and thus large surface evaporation and upward recharge from deeper layers during thawing. For example, with an initial groundwater depth of 1.5 m, the cumulative evaporation over the freeze/thaw period was 51.0, 96.6 and 114.0 mm for initial salt contents of 0.2%, 0.4% and 0.6% g/g, respectively; the same increasing trend was confirmed for other initial groundwater depths, with shallower groundwater giving larger cumulative evaporation. Because of the uncertainty of both the limited experimental data and the simplified numerical models, results on water, heat and solutes in freezing/thawing soil are themselves uncertain; combining experimental results with numerical models through uncertainty analysis helps in understanding freeze/thaw processes and handling uncertainty, and provides a reference for improving experimental methods and refining numerical models. The uncertainty analysis showed strong correlations among model parameters, and between parameters and model performance indices, which cause uncertainty in the simulation results. Comparison of different modelling approaches showed that more complete models accounting for more detailed processes improve simulation efficiency and reduce the uncertainty of the results; the uncertainty of the experimental data also had a marked effect, so accurate data and sounder experimental methods help reduce the uncertainty of the simulations. Given the complexity of cold-region hydrological processes and their importance under climate change, further research on the relationships among groundwater, soil water, heat and salt, and climate change in cold regions is needed to support more rational water-resource management strategies for these regions. / Freezing and thawing are important for the coupled flows of water, heat and solutes in soil. The complexity of the relationships between solutes, unfrozen water and the freezing point creates uncertainty in simulations of water, heat and solute processes in the soil and of evaporation from the soil surface. Season-dependent patterns in the energy and water balance of the soil surface affect the heat and water dynamics of the soil and the accumulation of salts during winter.
Diurnal variations in the partitioning of the surface energy balance give rise to freezing/thawing as well as evaporation and condensation in shallow layers, with feedbacks to the state of the atmospheric surface layer. Uncertainties in both experiments and simulations were traced in investigations of seasonally frozen soil in two provinces of northern China; limitations in methodology and simplifications of nature's complexity could be clarified. The combination of experiments and process-based modelling with CoupModel proved successful for understanding freezing/thawing and could characterize uncertainty using Monte Carlo techniques. Correlation between parameters and model performance was an important part of the calibration procedure. An improved process description of the soil and reduced parameter uncertainty can be obtained if more detailed measurements are also included in future studies. The relationships between groundwater, soil and climate change are of the greatest interest for better describing the water and energy balances of cold regions.
|