91 |
'Designing out Crime' – A Spatial and Temporal Analysis of Crime in Umeå
Zugschwerdt, Marc January 2017 (has links)
The creation of sustainable and safe environments is increasingly a focus for urban planners and architects. Cities should be designed in a way that contributes to social cohesion, shaping an inclusive environment and focusing on the wellbeing of their citizens. Nevertheless, these processes can be undermined by public crime and the fear of crime, which affects not only personal safety but also people’s behaviour. The reasons why criminality occurs are manifold, shaped by a dynamic set of socioeconomic, demographic, personal and environmental factors. In recent years, the impact of factors related to urban and environmental design and planning has received rising attention in the field of crime prevention. However, the implementation of strategies regarding ‘crime prevention through environmental design’ or ‘designing out crime’ is still at an early stage in Sweden. This study aims to investigate spatial and temporal patterns of public crime for the case of Umeå in order to identify potential risk areas, which could receive particular attention regarding crime prevention through environmental design (CPTED). To this end, a GIS-based spatial analysis was used to detect statistically significant hotspots of crime and, furthermore, to assess the development of these hotspots over time. To understand the nature of public crime and criminal behaviour in Umeå more holistically, temporal aspects of the occurrence of crime were also analysed. One particularly vulnerable neighbourhood was examined with a qualitative field observation based on the principles of crime prevention through environmental design, in order to assess in which way the built environment is designed and suited to prevent and deter criminality. Umeå displays rather clear patterns of higher crime activity, assigned to seasonal, weekly and daily periods, which are connected to higher activity in the public space.
From a spatial perspective, too, certain patterns are detectable, with a higher vulnerability to crime at spots which generate higher activity, such as shopping areas or neighbourhoods with nightlife and transport hub functions, and in general neighbourhoods with a higher building density. The neighbourhood of Ålidhem displayed a particularly high concentration of criminality, marked as a constant or even intensifying hotspot for the entire period of investigation. The results of the field observation regarding CPTED principles indicate above all a lack of maintenance; furthermore, the street and building layout contributes to disorientation. On the other hand, the area is in most cases well equipped for natural surveillance and provides a large number of locations for leisure and recreation in order to strengthen social cohesion.
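The kernel-density hotspot step described in this abstract can be sketched in a few lines; the incident coordinates, grid points and bandwidth below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def kde_intensity(points, grid, bandwidth):
    """Gaussian kernel density estimate of incident intensity at each grid point."""
    diffs = grid[:, None, :] - points[None, :, :]        # shape (n_grid, n_points, 2)
    sq = (diffs ** 2).sum(axis=2) / bandwidth ** 2       # squared scaled distances
    weights = np.exp(-0.5 * sq)                          # Gaussian kernel
    return weights.sum(axis=1) / (2 * np.pi * bandwidth ** 2 * len(points))

# Hypothetical incidents: a cluster around (2, 2) plus uniform background noise.
rng = np.random.default_rng(0)
incidents = np.vstack([rng.normal(2.0, 0.3, (50, 2)),
                       rng.uniform(0.0, 5.0, (20, 2))])
grid = np.array([[2.0, 2.0], [4.5, 0.5]])
density = kde_intensity(incidents, grid, bandwidth=0.5)
# The grid point inside the cluster should show a much higher intensity.
```

In a real hotspot study the grid would cover the whole study area and the peaks would then be screened for statistical significance.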
|
92 |
Clustering on groups for human tracking with 3D LiDAR
Utterström, Simon January 2023 (has links)
3D LiDAR people detection and tracking applications rely on extracting individual people from the point cloud for reliable tracking. A recurring problem for these applications is under-segmentation caused by people standing close or interacting with each other, which in turn causes the system to lose tracking. To address this challenge, we propose Kernel Density Estimation Clustering with Grid (KDEG) based on Kernel Density Estimation Clustering. KDEG leverages a grid to save density estimates computed in parallel, finding cluster centers by selecting local density maxima in the grid. KDEG reaches a remarkable accuracy of 98.4%, compared to HDBSCAN and Scan Line Run (SLR) with 80.1% and 62.0% accuracy respectively. Furthermore, KDEG is measured to be highly efficient, with a running time similar to state-of-the-art methods SLR and Curved Voxel Clustering. To show the potential of KDEG, an experiment with a real tracking application on two people walking shoulder to shoulder was performed. This experiment saw a significant increase in the number of accurately tracked frames from 5% to 78% by utilizing KDEG, displaying great potential for real-world applications. In parallel, we also explored HDBSCAN as an alternative to DBSCAN. We propose a number of modifications to HDBSCAN, including the projection of points to the groundplane, for improved clustering on human groups. HDBSCAN with the proposed modifications demonstrates a commendable accuracy of 80.1%, surpassing DBSCAN while maintaining a similar running time. Running time is however found to be lacking for both HDBSCAN and DBSCAN compared to more efficient methods like KDEG and SLR. / The work was carried out on site in Tokyo at Chuo University, without collaboration with Umeå University such as an exchange programme or similar. The work was partly funded by the Scandinavia-Japan Sasakawa Foundation. The work did not run during a regular term, but started 2023-05-01 and ended in August 2023.
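The grid-based density idea behind KDEG can be illustrated with a toy sketch (a simplification under our own assumptions, not the authors' implementation): accumulate point density into grid cells, then take cells that dominate their 3x3 neighbourhood as cluster-centre candidates.

```python
import numpy as np

def grid_density_peaks(points, cell=0.5, extent=10.0):
    """Bin 2-D points into a grid; cells that are maxima of their 3x3
    neighbourhood are returned as cluster-centre candidates."""
    n = int(extent / cell)
    grid = np.zeros((n, n))
    for i, j in np.clip((points / cell).astype(int), 0, n - 1):
        grid[i, j] += 1
    peaks = []
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            patch = grid[i - 1:i + 2, j - 1:j + 2]
            if grid[i, j] > 0 and grid[i, j] == patch.max():
                peaks.append(((i + 0.5) * cell, (j + 0.5) * cell))
    return peaks

# Two hypothetical people standing apart; each should yield a density peak.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal([2.0, 2.0], 0.2, (40, 2)),
                   rng.normal([7.0, 7.0], 0.2, (40, 2))])
centres = grid_density_peaks(cloud)
```

KDEG itself replaces the plain histogram with kernel density estimates computed in parallel; the peak-selection step is the shared idea.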
|
93 |
Introducing the IP Heaviness Classification System in IP Valuation : Valuing Intellectual Capital Across Industries / Introduktion av IP-Tunghet inom värdering av immateriella tillgångar
Lostorp, Henrik, Karlsson, Elias January 2024 (has links)
Valuing Intellectual Property assets is increasingly critical in today’s economy, where intangible assets constitute a significant portion of business value. This thesis addresses the challenges inherent in the IP valuation process, particularly the subjectivity and variability associated with different IP types and valuation methodologies. Building upon existing disaggregation methods, it proposes the IP-heaviness (IPH) classification system and develops an objective valuation model for IP assets. The goal of the model is to estimate the range of IP Contribution (IPC) to company value across different industry groups. Our study employed Kernel Density Estimation and Monte Carlo Simulation to analyze the dataset and generate a larger data sample. We then developed the IPH classification system, which categorizes industries based on their reliance on IP as a value contributor, grouping them by similar levels of IP dependence. This structured approach allows for a preliminary estimation of the IP contribution for each group, providing a standardized framework for IP valuation. Each IPH group was assigned its own probability density curve to represent its potential IPC value. Ultimately, our model produced confidence intervals for each IPH group, offering a reliable measure of the IP contribution within each category. Our findings reveal significant variability in the impact of IP on company value across different industries. Higher IPH groups, representing industries with substantial IP reliance, show a greater proportion of their value attributed to IP assets. Conversely, lower IPH groups, with less reliance on IP, exhibit lower IP contributions. The IPH classification system addresses the challenges of traditional IP valuation methods by providing a more objective and transparent approach.
It enhances the comparability of companies within and across IPH groups and reduces subjectivity in the valuation process.
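The KDE-plus-Monte-Carlo step can be sketched as follows; the "observed" IP-contribution ratios and the Beta shape are invented for illustration, and drawing from a Gaussian KDE is done via the smoothed bootstrap (resample an observation, add kernel noise):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical observed IP-contribution (IPC) ratios for one IPH group.
observed = rng.beta(4, 2, size=30)

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
h = 1.06 * observed.std(ddof=1) * len(observed) ** (-1 / 5)

# Smoothed bootstrap == drawing samples from the Gaussian KDE of the data.
draws = rng.choice(observed, size=10_000) + rng.normal(0.0, h, size=10_000)

lo, hi = np.percentile(draws, [2.5, 97.5])   # 95% interval for the group's IPC
```

Repeating this per IPH group yields the per-group density curves and confidence intervals the abstract describes.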
|
94 |
Nonparametric estimation of the off-pulse interval(s) of a pulsar light curve / Willem Daniël Schutte
Schutte, Willem Daniël January 2014 (has links)
The main objective of this thesis is the development of a nonparametric sequential estimation
technique for the off-pulse interval(s) of a source function originating from a pulsar. It is important
to identify the off-pulse interval of each pulsar accurately, since the properties of the off-pulse
emissions are further researched by astrophysicists in an attempt to detect potential emissions
from the associated pulsar wind nebula (PWN). The identification technique currently used in the
literature is subjective in nature, since it is based on the visual inspection of the histogram estimate
of the pulsar light curve. The developed nonparametric estimation technique is not only objective
in nature, but also accurate in the estimation of the off-pulse interval of a pulsar, as evident from
the simulation study and the application of the developed technique to observed pulsar data.
The first two chapters of this thesis are devoted to a literature study that provides background
information on the pulsar environment and gamma-ray astronomy, together with an explanation of the
on-pulse and off-pulse interval of a pulsar and the importance thereof for the present study. This
is followed by a discussion on some fundamental circular statistical ideas, as well as an overview
of kernel density estimation techniques. These two statistical topics are then united in order to
illustrate kernel density estimation techniques applied to circular data, since this concept is the
starting point of the developed nonparametric sequential estimation technique.
Once the basic theoretical background of the pulsar environment and circular kernel density
estimation has been established, the new sequential off-pulse interval estimator is formulated. The
estimation technique will be referred to as `SOPIE'. A number of tuning parameters form part
of SOPIE, and therefore the performed simulation study not only serves as an evaluation of the
performance of SOPIE, but also as a mechanism to establish which tuning parameter configurations
consistently perform better than some other configurations.
In conclusion, the optimal parameter configurations are utilised in the application of SOPIE to
pulsar data. For several pulsars, the sequential off-pulse interval estimators are compared to the
off-pulse intervals published in research papers, which were identified with the subjective "eye-ball"
technique. It is found that the sequential off-pulse interval estimators are closely related to the
off-pulse intervals identified with subjective visual inspection, with the benefit that the estimated
intervals are objectively obtained with a nonparametric estimation technique. / PhD (Statistics), North-West University, Potchefstroom Campus, 2014
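The circular kernel density idea that SOPIE starts from can be sketched with a von Mises kernel; the pulse location, concentration and sample sizes below are illustrative, not taken from the thesis:

```python
import numpy as np

def vonmises_kde(phases, grid, kappa=30.0):
    """Circular KDE: mean of von Mises kernels centred on each phase (radians)."""
    k = np.exp(kappa * np.cos(grid[:, None] - phases[None, :]))
    return k.mean(axis=1) / (2.0 * np.pi * np.i0(kappa))

# Hypothetical light curve: an on-pulse peak at phase pi over a uniform background.
rng = np.random.default_rng(7)
phases = np.concatenate([rng.vonmises(np.pi, 30.0, 300),
                         rng.uniform(0.0, 2.0 * np.pi, 200)]) % (2.0 * np.pi)
grid = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = vonmises_kde(phases, grid)
peak_phase = grid[np.argmax(f)]   # should sit near the injected pulse at pi
```

An off-pulse interval would then be sought where the estimated circular density stays close to the uniform background level.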
|
95 |
A multi-wavelength study of a sample of galaxy clusters / Susan Wilson
Wilson, Susan January 2012 (has links)
In this dissertation we aim to perform a multi-wavelength analysis of galaxy clusters. We discuss
various methods for clustering in order to determine physical parameters of galaxy clusters
required for this type of study. A selection of galaxy clusters was chosen from 4 papers, (Popesso
et al. 2007b, Yoon et al. 2008, Loubser et al. 2008, Brownstein & Moffat 2006) and restricted
by redshift and galactic latitude to reveal a sample of 40 galaxy clusters with 0.0 < z < 0.15.
Data mining using Virtual Observatory (VO) and a literature survey provided some background
information about each of the galaxy clusters in our sample with respect to optical, radio and
X-ray data. Using Kaye's Mixture Model (KMM) and the Gaussian Mixture Model (GMM),
we determine the most likely cluster member candidates for each source in our sample. We compare
the results obtained to SIMBAD's method of hierarchy. We show that the GMM provides
a very robust method to determine member candidates but in order to ensure that the right
candidates are chosen we apply a select choice of outlier tests to our sources. We determine
a method based on a combination of GMM, the QQ Plot and the Rosner test that provides a
robust and consistent method for determining galaxy cluster members. Comparison between
calculated physical parameters (velocity dispersion, radius, mass and temperature) and values
obtained from the literature shows that the majority of our galaxy clusters agree within a 3σ range.
Inconsistencies are thought to be due to dynamically active clusters that have substructure or
are undergoing mergers, making galaxy member identification difficult. Six correlations between
different physical parameters in the optical and X-ray wavelengths were consistent with
published results. Comparing the velocity dispersion with the X-ray temperature, we found a
relation of σ ∝ T^0.43 as compared to σ ∝ T^0.5 obtained from Bird et al. (1995). The X-ray luminosity-
temperature and X-ray luminosity-velocity dispersion relations gave the results L_X ∝ T^2.44
and L_X ∝ σ^2.40, which lie within the uncertainty of the results given by Rozgacheva & Kuvshinova
(2010). These results all suggest that our method for determining galaxy cluster members is
efficient and application to higher redshift sources can be considered. Further studies on galaxy
clusters with substructure must be performed in order to improve this method. In future work,
the physical parameters obtained here will be further compared to X-ray and radio properties
in order to determine a link between bent radio sources and the galaxy cluster environment. / MSc (Space Physics), North-West University, Potchefstroom Campus, 2013
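Scaling relations of the form quoted above (e.g. σ ∝ T^0.43) are power laws, so the exponent can be recovered with an ordinary least-squares fit in log-log space; the synthetic "clusters" and the normalisation below are illustrative, not the thesis's data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.uniform(1.0, 10.0, 40)                            # hypothetical temperatures (keV)
sigma = 350.0 * T ** 0.43 * rng.lognormal(0.0, 0.05, 40)  # sigma ∝ T^0.43 with scatter

# Fit log(sigma) = a + b*log(T); the slope b estimates the power-law exponent.
b, a = np.polyfit(np.log(T), np.log(sigma), 1)
```

The fitted slope should land close to the injected 0.43, which is how an empirical σ–T exponent is compared against published values.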
|
98 |
Three essays on the econometric analysis of high-frequency data
Malec, Peter 27 June 2013 (has links)
Diese Dissertation behandelt die ökonometrische Analyse von hochfrequenten Finanzmarktdaten. Kapitel 1 stellt einen neuen Ansatz zur Modellierung von seriell abhängigen positiven Variablen, die einen nichttrivialen Anteil an Nullwerten aufweisen, vor. Letzteres ist ein weitverbreitetes Phänomen in hochfrequenten Finanzmarktzeitreihen. Eingeführt wird eine flexible Punktmassenmischverteilung, ein maßgeschneiderter semiparametrischer Spezifikationstest sowie eine neue Art von multiplikativem Fehlermodell (MEM). Kapitel 2 beschäftigt sich mit dem Umstand, dass feste symmetrische Kerndichteschätzer eine geringe Präzision aufweisen, falls eine positive Zufallsvariable mit erheblicher Wahrscheinlichkeitsmasse nahe Null gegeben ist. Wir legen dar, dass Gammakernschätzer überlegen sind, wobei ihre relative Präzision von der genauen Form der Dichte sowie des Kerns abhängt. Wir führen einen verbesserten Gammakernschätzer sowie eine datengetriebene Methodik für die Wahl des geeigneten Typs von Gammakern ein. Kapitel 3 wendet sich der Frage nach dem Nutzen von Hochfrequenzdaten für hochdimensionale Portfolioallokationsanwendungen zu. Wir betrachten das Problem der Konstruktion von globalen Minimum-Varianz-Portfolios auf der Grundlage der Konstituenten des S&P 500. Wir zeigen auf, dass Prognosen, welche auf Hochfrequenzdaten basieren, im Vergleich zu Methoden, die tägliche Renditen verwenden, eine signifikant geringere Portfoliovolatilität implizieren. Letzteres geht mit spürbaren Nutzengewinnen aus der Sicht eines Investors mit hoher Risikoaversion einher. / In three essays, this thesis deals with the econometric analysis of financial market data sampled at intraday frequencies. Chapter 1 presents a novel approach to model serially dependent positive-valued variables realizing a nontrivial proportion of zero outcomes. This is a typical phenomenon in financial high-frequency time series. 
We introduce a flexible point-mass mixture distribution, a tailor-made semiparametric specification test and a new type of multiplicative error model (MEM). Chapter 2 addresses the problem that fixed symmetric kernel density estimators exhibit low precision for positive-valued variables with a large probability mass near zero, which is common in high-frequency data. We show that gamma kernel estimators are superior, while their relative performance depends on the specific density and kernel shape. We suggest a refined gamma kernel and a data-driven method for choosing the appropriate type of gamma kernel estimator. Chapter 3 turns to the debate about the merits of high-frequency data in large-scale portfolio allocation. We consider the problem of constructing global minimum variance portfolios based on the constituents of the S&P 500. We show that forecasts based on high-frequency data can yield a significantly lower portfolio volatility than approaches using daily returns, implying noticeable utility gains for a risk-averse investor.
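The boundary problem in Chapter 2 can be sketched with the standard Chen (2000)-style gamma kernel, taken here as a simplifying assumption (the thesis's refined kernel differs in detail): at evaluation point x the kernel is a gamma density with shape x/b + 1 and scale b, which adapts its form near zero instead of spilling mass below it.

```python
import numpy as np
from math import gamma

def gamma_kernel_kde(x, data, b):
    """Gamma-kernel density estimate at x >= 0: average over the data of a
    gamma pdf with shape x/b + 1 and scale b (the kernel adapts near zero)."""
    shape = x / b + 1.0
    pdf = data ** (shape - 1.0) * np.exp(-data / b) / (b ** shape * gamma(shape))
    return pdf.mean()

# Hypothetical positive-valued sample with heavy mass near zero, e.g. durations.
rng = np.random.default_rng(5)
data = rng.exponential(1.0, 2000)    # true density e^{-x}: f(0) = 1, f(1) ~ 0.37

near_zero = gamma_kernel_kde(0.0, data, b=0.1)
at_one = gamma_kernel_kde(1.0, data, b=0.1)
```

A fixed symmetric kernel would roughly halve the estimate at the boundary; the gamma kernel keeps the near-zero estimate close to the true value.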
|
99 |
Análise dos atropelamentos de mamíferos em uma rodovia no estado de São Paulo utilizando Self-Organizing Maps. / Using Self-Organizing Maps to analyse wildlife-vehicle collisions on a highway in São Paulo state.
Tsuda, Larissa Sayuri 05 July 2018 (has links)
A construção e ampliação de rodovias gera impactos significativos ao meio ambiente. Os principais impactos ao meio biótico são a supressão de vegetação, redução da riqueza e abundância de espécies de fauna como decorrência da fragmentação de habitats e aumento dos riscos de atropelamento de animais silvestres e domésticos. O objetivo geral do trabalho foi identificar padrões espaciais nos atropelamentos de fauna silvestre por espécie (nome popular) utilizando ferramentas de análise espacial e machine learning. Especificamente, buscou-se compreender a relação entre atropelamentos de animais silvestres e variáveis que representam características de uso e cobertura do solo e caracterização da rodovia, tais como formação florestal, corpos d'água, silvicultura, áreas edificadas, velocidade máxima permitida, volume de tráfego, entre outras. Os atropelamentos de fauna silvestre foram analisados por espécie atropelada, a fim de identificar os padrões espaciais dos atropelamentos específicos para cada espécie. As ferramentas de análise espacial empregadas foram a Função K - para determinar o padrão de distribuição dos registros de atropelamento de fauna, o Estimador de Densidade de Kernel - para gerar estimativas de densidade de pontos sobre a rodovia, a Análise de Hotspots - para identificar os trechos mais críticos de atropelamento de fauna e, por fim, o Self-Organizing Maps (SOM), um tipo de rede neural artificial, que reorganiza amostras de dados n-dimensionais de acordo com a similaridade entre elas. Os resultados das análises de padrões pontuais foram importantes para entender que os pontos de atropelamento possuem padrões de distribuição espacial que variam por espécie. Os eventos ocorrem espacialmente agrupados e não estão homogeneamente distribuídos ao longo da rodovia. De maneira geral, os animais apresentam trechos de maior intensidade de atropelamento em locais distintos.
O SOM permitiu analisar as relações entre múltiplas variáveis, lineares e não-lineares, tais como são os dados ecológicos, e encontrar padrões espaciais distintos por espécie. A maior parte dos animais foi atropelada próxima de fragmentos florestais e de corpos d'água, e distante de cultivo de cana-de-açúcar, silvicultura e área edificada. Porém, uma parte considerável das mortes de animais dos tipos com maior número de atropelamentos ocorreu em áreas com paisagem diversificada, incluindo alta densidade de drenagem, fragmentos florestais, silvicultura e áreas edificadas. / The construction and expansion of roads cause significant impacts on the environment. The main potential impacts to biotic environment are vegetation suppression, reduction of the abundance and richness of species due to forest fragmentation and increase of animal (domestic and wildlife) vehicle collisions. The general objective of this work was to identify spatial patterns in wildlife-vehicle collisions individually per species by using spatial analysis and machine learning. Specifically, the relationship between wildlife-vehicle collisions and variables that represent land use and road characterization features - such as forests, water bodies, silviculture, sugarcane fields, built environment, speed limit and traffic volume - was investigated. The wildlife-vehicle collisions were analyzed per species, in order to identify the spatial patterns for each species separately. The spatial analysis tools used in this study were K-Function - to determine the distribution pattern of roadkill, Kernel Density Estimator (KDE) - to identify the location and intensity of hotspots and hotzones. Self-Organizing Maps (SOM), an artificial neural network (ANN), was selected to reorganize the multi-dimensional data according to the similarity between them. The results of the spatial pattern analysis were important to perceive that the point data pattern varies between species.
The events occur spatially clustered and are not uniformly distributed along the highway. In general, wildlife-vehicle collisions have their hotzones in different locations. SOM was able to analyze the relationship between multiple variables, linear and non-linear, such as ecological data, and to establish distinct spatial patterns for each species. Most of the wildlife was run over close to forest areas and water bodies, and distant from sugarcane, silviculture and built environments. But a considerable part of the wildlife-vehicle collisions occurred in areas with diverse landscape, including high density of water bodies, silviculture and built environments.
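A self-organizing map can be sketched from scratch in a few lines; the grid size, decay schedule and the two synthetic "species" profiles below are illustrative choices, not the thesis's configuration:

```python
import numpy as np

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small 2-D self-organizing map with a Gaussian neighbourhood
    whose learning rate and radius shrink linearly over the epochs."""
    rng = np.random.default_rng(seed)
    w = rng.random((rows, cols, data.shape[1]))          # unit weight vectors
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)
        sigma = sigma0 * (1.0 - t / epochs) + 0.5
        for x in rng.permutation(data):
            d = ((w - x) ** 2).sum(axis=2)
            bi, bj = np.unravel_index(d.argmin(), d.shape)    # best-matching unit
            nb = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma ** 2))
            w += lr * nb[:, :, None] * (x - w)               # pull neighbourhood toward x
    return w

# Two well-separated synthetic profiles; their best-matching units should differ.
rng = np.random.default_rng(9)
data = np.vstack([rng.normal(0.2, 0.05, (30, 3)), rng.normal(0.8, 0.05, (30, 3))])
w = train_som(data)
def bmu(x):
    return np.unravel_index(((w - x) ** 2).sum(axis=2).argmin(), w.shape[:2])
```

In the roadkill analysis, each input vector would hold the landscape and road variables of one collision record, and similar records would land on nearby map units.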
|
100 |
Urban Growth Modeling Based on Land-use Changes and Road Network Expansion
Rui, Yikang January 2013 (has links)
A city is considered as a complex system. It consists of numerous interactive sub-systems and is affected by diverse factors including governmental land policies, population growth, transportation infrastructure, and market behavior. Land use and transportation systems are considered as the two most important subsystems determining urban form and structure in the long term. Meanwhile, urban growth is one of the most important topics in urban studies, and its main driving forces are population growth and transportation development. Modeling and simulation are believed to be powerful tools to explore the mechanisms of urban evolution and provide planning support in growth management. The overall objective of the thesis is to analyze and model urban growth based on the simulation of land-use changes and the modeling of road network expansion. Since most previous urban growth models apply fixed transport networks, the evolution of road networks was particularly modeled. Besides, urban growth modeling is an interdisciplinary field, so this thesis made big efforts to integrate knowledge and methods from other scientific and technical areas to advance geographical information science, especially the aspects of network analysis and modeling. A multi-agent system was applied to model urban growth in Toronto, where population growth is considered as the main driving factor of urban growth. Agents were adopted to simulate different types of interactive individuals who promote urban expansion. The multi-agent model with spatio-temporal allocation criteria was shown to be effective in simulation. Then, an urban growth model for long-term simulation was developed by integrating land-use development with procedural road network modeling. The dynamic idealized traffic flow estimated by the space syntax metric was not only used for selecting major roads, but also for calculating accessibility in land-use simulation.
The model was applied in the city centre of Stockholm and confirmed the reciprocal influence between land use and street network during the long-term growth. To further study network growth modeling, a novel weighted network model, involving nonlinear growth and neighboring connections, was built from the perspective of promising complex networks. Both mathematical analysis and numerical simulation were examined in the evolution process, and the effects of neighboring connections were particularly investigated to study the preferential attachment mechanisms in the evolution. Since a road network is a weighted planar graph, the growth model for urban street networks was subsequently modeled. It succeeded in reproducing diverse patterns, and each pattern was examined by a series of measures. The similarity between the properties of derived patterns and empirical studies implies that there is a universal growth mechanism in the evolution of urban morphology. To better understand the complicated relationship between land use and road network, centrality indices from different aspects were fully analyzed in a case study over Stockholm. The correlation coefficients between different land-use types and road network centralities suggest that various centrality indices, reflecting human activities in different ways, can capture land development and consequently influence urban structure. The strength of this thesis lies in its interdisciplinary approaches to analyze and model urban growth.
The integration of ‘bottom-up’ land-use simulation and road network growth model in urban growth simulation is the major contribution. The road network growth model in terms of complex network science is another contribution to advance spatial network modeling within the field of GIScience. The works in this thesis vary from a novel theoretical weighted network model to the particular models of land use, urban street network and hybrid urban growth, and to the specific applications and statistical analysis in real cases. These models help to improve our understanding of urban growth phenomena and urban morphological evolution through long-term simulations. The simulation results can further support urban planning and growth management. The study of hybrid models integrating methods and techniques from multidisciplinary fields has attracted a lot of attention and still needs constant efforts in the near future. / QC 20130514
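The preferential-attachment mechanism underlying such growth models can be sketched in pure Python; this is an unweighted Barabási-Albert-style toy, not the thesis's weighted planar model:

```python
import random

def grow_network(n, m=2, seed=0):
    """Grow a graph where each new node links to m existing nodes chosen
    with probability proportional to degree (via a stub list)."""
    random.seed(seed)
    edges = [(0, 1)]
    stubs = [0, 1]                 # each node appears once per incident edge
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(random.choice(stubs))      # degree-proportional choice
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = grow_network(200)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# Early nodes accumulate links, producing the heavy-tailed degree distribution.
```

The weighted model in the thesis adds edge weights, nonlinear growth and neighboring connections on top of this basic rich-get-richer rule, and the street-network version further constrains the graph to be planar.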
|