1 |
The interpretation of market related information and data in the South African residential property market affects at what stage each individual party lies in the real estate market. Yudelowitz, Dani Menachem, 01 September 2008.
In recent times the emergence of the property cycle, and the effects it has on the property market, has caused the relevant parties involved in the market to place more emphasis on how these cycles work. The overall objective of this study is to establish whether the interpretation of market related data affects the position of these parties relative to one another on the property curve. The study concentrates on the use of market indicators, indices and variables in determining an individual's position on the property market curve. It also examines how this market data is retrieved and what effect the method of retrieval has on how the parties interpret the data.
The methodology adopted for this study involves the collection and interpretation of market related indices and indicators relevant to the property market over a ten-year period from 1996 to 2006. This data was then used to establish the key indicators in use. A questionnaire was sent to the relevant parties involved in the property market to ascertain their main sources of market information and how this data is collected and interpreted. The sample was limited to individuals in the Gauteng region. The collected data was examined and presented in the form of line graphs, histograms and pie charts.
The data was then examined and presented in four areas: the major sources of information used by parties for market related data; where these parties lie relative to one another on the property curve; the effect that the different sources of information have on each party; and, finally, by how much these parties lag or lead one another on the curve.
|
2 |
Cotton utilization in women's apparel : gender, apparel purchase decisions, and fiber composition. Stewart Stevens, Sara Marisa, 21 October 2014.
A cursory review of domestic apparel production data from 'Cotton Counts Its Customers' reports by The National Cotton Council of America showed a discrepancy between the amounts of cotton utilized in domestically produced women's apparel and that for men's apparel. It appeared that the men's apparel sector had a higher percentage market share of cotton than women's apparel. For both genders, cotton's dwindling market share was similar to that of diminishing domestic US apparel production overall. Since the majority of apparel in the U.S. is imported, import data was obtained from the United States International Trade Commission and compiled with the domestic apparel data to offer a more expansive view of cotton's market share and its use separated by gender. The compilation of domestic and import apparel data followed the overall trend of a higher percentage of weight of cotton being used in men's apparel than in women's. Apparel categories that are challenging for cotton but may offer potential for expanded utilization with increased performance were Coats, Underwear/Nightwear, Suits, and Dresses.
In an attempt to add context to the apparel market data, we explored two stages of the apparel supply chain: the first at the retail setting, the second at the consumer purchase and wear decision level. At the retail level, we investigated the availability of fiber composition information and its use as part of the assortment offered to consumers. Two stores were selected for this exploratory phase, and retail availability by gender and fiber content was physically tallied in the two retail settings. In both retail assortments, there was no emphasis on fiber composition as part of the information offered to the consumer. For the consumer wanting to find cotton apparel in these two settings, prior knowledge regarding the feel or look of cotton would seem necessary to facilitate locating cotton among the assortment of apparel. Fiber blends can offer cotton-like appearance and hand, so fiber composition tags could give consumers certainty regarding the garments they are buying. In addition to the observations above, we also noted in both stores a prevalence of cotton in men's apparel and a larger presence of man-made fibers in women's apparel, which reflects the overall market situation.
Finally, the second exploratory stage focused on clothing diaries and a wardrobe inventory provided by a small purposeful sample of respondents to examine the role of fiber composition, cotton in particular, in the individual's garment purchase and daily-use decisions. The findings suggested that fiber composition was an important part of the daily garment selection process, based upon the daily activity and a set of personal beliefs about what the diary respondent felt that fiber had to offer. Similar to the market data results, in the clothing diary responses males showed a greater tendency to select both 100% cotton Tops and Bottoms than did the female respondents. Overall, cotton appeared challenged by man-made and other fibers when the respondents needed to "dress up", to attend to athletic activity, or to satisfy the need for specific functionalities such as rapid drying.
|
3 |
Agent based modelling and simulation : an examination of customer retention in the UK mobile market. Hassouna, Mohammed Bassam, January 2012.
Customer retention is an important issue for any business, especially in mature markets such as the UK mobile market, where new customers can only be acquired from competitors. Different methods and techniques have been used to investigate customer retention, including statistical methods and data mining. However, due to the increasing complexity of the mobile market, the effectiveness of these techniques is questionable. This study proposes Agent-Based Modelling and Simulation (ABMS) as a novel approach to investigating customer retention. ABMS is an emerging means of simulating behaviour and examining behavioural consequences: agents represent customers, and agent relationships represent processes of agent interaction. This study follows the design science paradigm to build and evaluate a generic, reusable agent-based model (CubSim) to examine the factors affecting customer retention, based on data extracted from a UK mobile operator. Based on these data, two data mining models are built to gain a better understanding of the problem domain and to identify the main limitations of data mining. This is followed by two interrelated development cycles: (1) build the CubSim model, starting with modelling customer interaction with the market, including interaction with the service provider and other competing operators in the market; and (2) extend the CubSim model by incorporating interaction among customers. The key contribution of this study lies in using ABMS to identify and model the key factors that affect customer retention simultaneously and jointly. In this manner, the CubSim model is better suited to account for the dynamics of customer churn behaviour in the UK mobile market than existing models. Another important contribution of this study is that it provides empirical, actionable insight into customer retention. In particular, and most interestingly, the experimental results show that applying a mixed customer retention strategy targeting both high value customers and customers with a large personal network outperforms traditional customer retention strategies, which focus only on the customer's value.
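To illustrate the mechanics of the approach, a minimal agent-based churn sketch in Python follows. It is illustrative only, not the CubSim model: all parameters (base churn rate, peer influence weight, network size) are assumed values chosen for demonstration.

    # Minimal agent-based churn sketch (illustrative; not the CubSim model).
    import random

    random.seed(42)

    class Customer:
        def __init__(self, cid, value):
            self.cid = cid
            self.value = value        # monthly revenue the customer generates
            self.churned = False
            self.friends = []         # the customer's personal network

        def step(self, churn_base=0.02, peer_weight=0.10):
            # Churn probability rises with the fraction of friends who left,
            # mirroring the customer-to-customer interaction of cycle (2).
            if self.churned:
                return
            peer_churn = sum(f.churned for f in self.friends) / len(self.friends)
            if random.random() < churn_base + peer_weight * peer_churn:
                self.churned = True

    customers = [Customer(i, random.uniform(10, 100)) for i in range(1000)]
    for c in customers:
        c.friends = random.sample([x for x in customers if x is not c], k=5)

    for month in range(24):           # simulate two years, month by month
        for c in customers:
            c.step()

    retained = sum(not c.churned for c in customers)
    print(f"retained after 24 months: {retained}/1000")

A retention strategy can then be tested by lowering churn_base for a targeted subset (high value customers, or customers with many friends) and comparing retained counts across runs.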
|
4 |
A Test of Catastrophe Theory Applied to Corporate Failure. Gregory-Allen, Russell B. (Russell Brian), 08 1900.
Catastrophe theory (CT) is a relatively new mathematical theory that comprehensively describes a system exhibiting discontinuous behavior when subjected to continuous stimuli. This study tests the theory using capital-market data: a time series of stock returns on firms that filed for Chapter 11 reorganization during 1980-1985. The CT model used is based on a corporate failure model suggested by Francis, Hastings and Fabozzi (1983). The model predicts that 1) as the filing date approaches, there will be a structural shift in the underlying stock-return generating process of the filing firm, and 2) firms with lower operating risk will have a smaller jump than firms with higher operating risk, corresponding to their relative positions within the bifurcation set of the catastrophe cusp.
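For context, the cusp catastrophe underlying such models is conventionally written with a quartic potential. The form below is the standard textbook parameterization, offered as background rather than as the exact specification of Francis, Hastings and Fabozzi (1983):

    V(x) = \tfrac{1}{4} x^4 + \tfrac{1}{2} \alpha x^2 + \beta x,
    \qquad
    \frac{\partial V}{\partial x} = x^3 + \alpha x + \beta = 0.

Equilibria lie on the surface x^3 + \alpha x + \beta = 0, and the bifurcation set, inside which the system has multiple equilibria and can jump discontinuously between them, is 4\alpha^3 + 27\beta^2 = 0. In a corporate-failure reading, x is the state of the stock-return process and (\alpha, \beta) are continuous control factors such as operating risk, so a firm's position relative to the bifurcation set governs the size of the jump at filing.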
|
5 |
Sitting on a Goldmine : Exploring Institutional Enablement for Real Estate Market Data Accessibility in Ghana. Otoo-Ankrah, Naa Kwaamah, January 2023.
The Ghanaian real estate market, though thriving, grapples with insufficient market data. The lack of data renders the market nontransparent and increases transaction costs. Considering the market's performance over the past few years, there is great potential for even more growth if this problem is addressed. This research aims to provide an understanding of the data needs of the market and the effects of data paucity on the market. It also explores the potential that state institutions offer to ameliorate the problem. The data for this study is collected from interviews with real estate valuers and data aggregation firms that operate in the Ghanaian market, as well as from acts of parliament. The research outlines the perspectives of valuers regarding the problem and the provisions that legal documents make for improving access to market data. This is conducted through qualitative methods. The research finds that the problems with data inaccessibility affect not only market transactions but also the training of valuers and research about the market. The results indicate that government legislation makes provisions that should enable the data collected by different agencies to be made publicly available; however, it appears that a lack of incentives and a lack of enforcement have resulted in the status quo: stakeholders in the market seem to be sitting on a goldmine. Relevant stakeholders in the market therefore need to drive change in data provision for a more transparent and efficient market.
|
6 |
Algorithm design on multicore processors for massive-data analysis. Agarwal, Virat, 28 June 2010.
Analyzing massive data sets and streams is computationally very challenging. Data sets in systems biology, network analysis and security use network abstraction to construct large-scale graphs. Graph algorithms such as traversal and search are memory-intensive and typically require very little computation, with access patterns that are irregular and fine-grained. The increasing streaming data rates in various domains such as security, mining, and finance leave algorithm designers with only a handful of clock cycles (with current general purpose computing technology) to process every incoming byte of data in-core in real time. This, along with the increasing complexity of mining patterns and other analytics, puts further pressure on an already high computational requirement. Processing streaming data in finance comes with the additional constraint of low latency, which restricts the algorithm from using common techniques, such as batching, to obtain high throughput.
The primary contributions of this dissertation are the design of novel parallel data analysis algorithms for graph traversal on large-scale graphs, pattern recognition and keyword scanning on massive streaming data, financial market data feed processing and analytics, and data transformation. These algorithms capture the machine-independent aspects, to guarantee portability with performance to future processors, with high performance implementations on multicore processors that embed processor-specific optimizations. Our breadth-first search graph traversal algorithm demonstrates a capability to process massive graphs with billions of vertices and edges on commodity multicore processors at rates that are competitive with supercomputing results in the recent literature. We also present high performance scalable keyword scanning on streaming data using a novel automata compression algorithm, a model of computation based on small software content addressable memories (CAMs), and a unique data layout that forces data re-use and minimizes memory traffic. Using a high-level algorithmic approach to process financial feeds, we present a solution that decodes and normalizes option market data at rates an order of magnitude higher than the current needs of the market, yet remains portable and flexible to other feeds in this domain. In this dissertation we discuss in detail the algorithm design challenges of processing massive data and present solutions and techniques that we believe can be used and extended to solve future research problems in this domain.
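As a minimal illustration of the level-synchronous pattern that breadth-first search implementations of this kind parallelize, the following serial Python sketch shows the frontier expansion whose irregular, data-dependent memory accesses make large-graph traversal hard; the dissertation's actual implementations are multicore-optimized and far more elaborate.

    # Level-synchronous BFS: each frontier can be expanded in parallel, and
    # the adjacency-list reads are irregular and fine-grained, matching the
    # memory-bound behavior described above. Serial sketch for illustration.
    def bfs_levels(adj, source):
        """adj: list of adjacency lists; returns the BFS level of every vertex."""
        level = [-1] * len(adj)
        level[source] = 0
        frontier = [source]
        depth = 0
        while frontier:
            depth += 1
            next_frontier = []
            for u in frontier:
                for v in adj[u]:              # data-dependent, irregular reads
                    if level[v] == -1:        # unvisited
                        level[v] = depth
                        next_frontier.append(v)
            frontier = next_frontier
        return level

    # toy graph: edges 0-1, 0-2, 1-3
    adj = [[1, 2], [0, 3], [0], [1]]
    print(bfs_levels(adj, 0))                 # [0, 1, 1, 2]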
|
7 |
Modelling regime shifts for foreign exchange market data using hidden Markov models / Modellering av regimskiften för valutamarknadsdata genom dolda Markovkedjor. Persson, Liam, January 2021.
Financial data is often said to follow different market regimes. These regimes, which are not possible to observe directly, are assumed to influence the observable returns. In this thesis such regimes are modeled using hidden Markov models. We investigate whether the five currency pairs EUR/NOK, USD/NOK, EUR/USD, EUR/SEK, and USD/SEK exhibit market regimes that can be described using hidden Markov modeling. We find the optimal number of states and study the mean, variance, and correlations in each market regime.
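A minimal sketch of the modelling step follows, assuming the Python library hmmlearn and synthetic returns in place of the actual FX data; the two-state choice and all numbers are illustrative assumptions.

    # Fit a 2-state Gaussian hidden Markov model to daily log-returns.
    # Synthetic data stands in for the EUR/NOK, USD/NOK, ... series; in the
    # thesis the number of states is itself selected, e.g. by comparing
    # likelihood-based criteria across candidate state counts.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # a calm regime followed by a volatile regime
    returns = np.concatenate([rng.normal(0.0002, 0.003, 500),
                              rng.normal(-0.0005, 0.012, 300)]).reshape(-1, 1)

    model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
    model.fit(returns)
    states = model.predict(returns)     # most likely regime per observation

    for k in range(2):
        r = returns[states == k]
        print(f"regime {k}: mean={r.mean():.5f}, std={r.std():.5f}")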
|
8 |
Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems / Inferens av buffer-kötider i dataprocesseringssystem med hjälp av Gaussiska processer. Hall, Otto, January 2017.
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of only data throughput in eight small historical intervals. The training and test sets are constructed from raw market data, and we resort to pruning operations to shrink the datasets to approximately 0.0005 of their original size in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R² statistic of 0.8399 over six test sets of approximately equal size to the training set. Testing on non-pruned datasets indicates shortcomings in the generalization procedure, where input vectors corresponding to low-latency target values are associated with less accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. However, for the purposes of this study it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, both in regard to pruning procedures and Gaussian Processes, and open up for promising continued research.
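A minimal sketch of the regression step follows, assuming scikit-learn and synthetic throughput/latency data in place of the pruned market-data sets; the kernel choice and all numbers are illustrative assumptions.

    # Gaussian Process Regression of latency on throughput measured in eight
    # small historical intervals (mirroring the feature set described above).
    # Synthetic data; the kernel is an assumed RBF-plus-noise choice.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(200, 8))     # throughput in 8 recent intervals
    y = 1.0 + 3.0 * X.mean(axis=1) ** 2 + rng.normal(0, 0.05, 200)  # latency

    kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=0.01)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    X_new = rng.uniform(0, 1, size=(5, 8))
    mean, std = gpr.predict(X_new, return_std=True)  # predictive mean and std
    print(np.round(mean, 3), np.round(std, 3))

The predictive standard deviation is one reason Gaussian Processes suit this setting: a dynamic optimizer can choose to act only when the latency prediction is confident.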
|
9 |
Visualizing the Ethiopian Commodity Market. Rogstadius, Jakob, January 2009.
The Ethiopia Commodity Exchange (ECX), like many other data intensive organizations, is having difficulties making full use of the vast amounts of data that it collects. This MSc thesis identifies areas within the organization where concepts from the academic fields of information visualization and visual analytics can be applied to address this issue. Software solutions are designed and implemented in two areas with the purpose of evaluating the approach and demonstrating to potential users, developers and managers what can be achieved using this method. A number of presentation methods are proposed for the ECX website, which previously contained no graphing functionality for market data, to make it easier for users to find trends, patterns and outliers in prices and trade volumes of commodities traded at the exchange. A software application is also developed to support the ECX market surveillance team by drastically improving its capabilities of investigating complex trader relationships. Finally, as ECX lacked previous experience with visualization, one software developer was trained in computer graphics and involved in the work, to enable continued maintenance and future development of new visualization solutions within the organization.
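As an illustration of the kind of market-data graphing proposed for the ECX website, a minimal sketch with synthetic prices and volumes follows; matplotlib, the units, and all numbers are assumptions for demonstration.

    # A simple price-and-volume view of the kind a market-data page might offer.
    # Synthetic random-walk prices; commodity and units are placeholders.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    days = np.arange(120)
    price = 100 + np.cumsum(rng.normal(0, 1, 120))   # synthetic price path
    volume = rng.integers(50, 500, 120)              # synthetic traded lots

    fig, (ax_p, ax_v) = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
    ax_p.plot(days, price)
    ax_p.set_ylabel("price")
    ax_v.bar(days, volume)
    ax_v.set_ylabel("volume (lots)")
    ax_v.set_xlabel("trading day")
    fig.suptitle("Synthetic commodity price and trade volume")
    plt.show()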
|
10 |
Aktuelle Themen in der Unternehmensbewertung / Current Topics in Business Valuation. Arnold, Sven, 23 April 2013.
This cumulative dissertation addresses questions in financial economics in the area of business valuation. It discusses current topics that represent unsolved problems in theory or practice. Notably, the first three articles examine the effect of the German interest barrier (Zinsschranke) on the value of the tax savings arising from debt financing (the tax shield). The following three articles focus on the consistent modelling of financing policies and on the effect that the possibility of corporate insolvency has on firm value. The seventh and eighth articles deal with capital structure and further important parameters for business valuation.
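As background to the first three articles (a standard valuation identity, not the dissertation's own derivation), the value contribution of the tax shield under an autonomous, predetermined debt policy is commonly written as

    V^L = V^U + V^{TS},
    \qquad
    V^{TS} = \sum_{t=1}^{T} \frac{\tau \, r_D \, D_{t-1}}{(1 + r_D)^t},

where \tau is the corporate tax rate, r_D the cost of debt and D_{t-1} the outstanding debt. The interest barrier caps the interest expense that is deductible in a given year (broadly, at 30% of taxable EBITDA, with exceptions), which truncates the numerator \tau r_D D_{t-1} whenever interest exceeds the cap and thereby reduces the tax shield value that these articles analyze.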
|