
Essays in the Empirical Analysis of Venture Capital and Entrepreneurship

Romain, Astrid 09 February 2007
EXECUTIVE SUMMARY

This thesis analyses aspects of Venture Capital (VC) and high-tech entrepreneurship. The focus is both at the macroeconomic level, comparing venture capital from an international point of view, and at the company and founder level for Technology-Based Small Firms (TBSF) in Belgium. The approach is mainly empirical.

The work is divided into two parts. The first part focuses on venture capital. First, we test the impact of VC on productivity. We then identify the determinants of VC and test their impact on the relative level of VC for a panel of countries. The second part concerns technology-based small firms in Belgium. The objective is twofold. It first aims at creating a database on Belgian TBSF to better understand the importance of entrepreneurship. To this end, a national survey was developed and the statistical results were analysed. Secondly, it provides an analysis of the role of universities in the employment performance of TBSF. A broad summary of each chapter is presented below.

PART 1: VENTURE CAPITAL

The Economic Impact of Venture Capital. The objective of this chapter is to evaluate the macroeconomic impact of venture capital. The main assumption is that VC can be considered similar in several respects to business R&D performed by large firms. We test whether VC contributes to economic growth through two main channels. The first is innovation, characterized by the introduction of new products, processes or services on the market. The second is the development of an absorptive capacity. These hypotheses are tested quantitatively with a production function model for a panel data set of 16 OECD countries from 1990 to 2001. The results show that the accumulation of VC is a significant factor contributing directly to Multi-Factor Productivity (MFP) growth. The social rate of return to VC is significantly higher than the social rate of return to business or public R&D. VC also has an indirect impact on MFP in the sense that it improves the output elasticity of R&D. Increased VC intensity makes it easier to absorb the knowledge generated by universities and firms, and therefore improves aggregate economic performance.

Technological Opportunity, Entrepreneurial Environment and Venture Capital Development. The objective of this chapter is to identify the main determinants of venture capital. We develop a theoretical model in which three main types of factors affect the demand and supply of VC: macroeconomic conditions, technological opportunity, and the entrepreneurial environment. The model is evaluated with a panel dataset of 16 OECD countries over the period 1990-2000. The estimates show that VC intensity is pro-cyclical: it reacts positively and significantly to GDP growth. Interest rates affect VC intensity mainly because entrepreneurs create a demand for this type of funding. Indicators of technological opportunity, such as the stock of knowledge and the number of triadic patents, positively and significantly affect the relative level of VC. Labour market rigidities reduce the impact of the GDP growth rate and of the stock of knowledge, whereas a minimum level of entrepreneurship is required for the available stock of knowledge to have a positive effect on VC intensity.

PART 2: TECHNOLOGY-BASED SMALL FIRMS

Survey in Belgium. The first purpose of this chapter is to present the existing literature on the performance of companies. In order to gain quantitative insight into the entrepreneurial growth process, an original survey of TBSF in Belgium was launched in 2002. The second purpose is to describe the methodology of our national TBSF survey. This survey has two main merits. The first lies in the quality of the information: most national and international surveys have been developed at firm level and only a few exist at founder level, whereas the TBSF database contains information at both the firm and the entrepreneur level. The second merit concerns the subjects covered. The TBSF survey tackles the financing of firms (availability of public funds, role of venture capitalists, availability of business angels, etc.), the framework conditions (e.g. the quality and availability of infrastructure and communication channels, the level of academic and public research, the patenting process) and, finally, the socio-cultural factors associated with the entrepreneurs and their environment (e.g. level of education, their parents' education, gender).

Statistical Evidence. The main characteristic of the companies in our sample is that employment and profits net of taxation do not follow the same trend: employment may decrease while results after taxes stay constant. Only a few companies enjoyed growth in both employment and results after taxes between 1998 and 2003. On the financing front, our findings suggest that internal finance in the form of personal funds, as well as funds from family and friends, is the primary source of capital to start up a high-tech company in Belgium. Entrepreneurs rely on their own personal savings in 84 percent of cases. Commercial bank loans are the secondary source of finance. This part of external financing (debt finance) exceeds the combined angel funds and venture capital funds (equity finance). On the entrepreneur front, the preliminary results show that 80 percent of entrepreneurs in this study have a university degree, while 42 percent hold postgraduate degrees (i.e. master's and doctorate). In terms of research activities, 88 percent of the entrepreneurs holding a Ph.D. or a post-doctorate collaborate with Belgian higher education institutes. Moreover, more than 90 percent of these entrepreneurs work in a university spin-off.

The Contribution of Universities to Employment Growth. The objective of this chapter is to test whether universities play a role among the determinants of employment growth in Belgian TBSF. The empirical model is based on our original survey of 87 Belgian TBSF. The results suggest that both academic spin-offs and TBSF created on the basis of an idea originating from business R&D activities are associated with above-average growth in employees. As most high-tech entrepreneurs hold at least a university degree, the level of education has no significant impact. Nevertheless, these results must be taken with caution, as they are highly sensitive to the presence of outliers. Young high-tech firms are by definition highly volatile and might therefore be difficult to model.

CONCLUSION

In this last chapter, recommendations for policy-makers are drawn from the results of the thesis. The possible interventions of governments are classified according to whether they influence the demand or the supply of entrepreneurship and/or VC. We present possible actions such as direct intervention in VC funds; public-sector action on labour market rigidities, the pension system, patent and research policy, the level of entrepreneurial activity, bankruptcy legislation, entrepreneurial education, and the development of university spin-offs; and the creation of a national database of TBSF.
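[Editor's illustration] To make the kind of panel estimation described in Part 1 concrete, here is a minimal sketch of a production-function-style regression of MFP growth on lagged VC intensity and R&D stocks with country fixed effects. The file name, column names and exact specification are assumptions for illustration, not the thesis's actual model.

```python
# Hypothetical sketch of a panel regression linking MFP growth to VC intensity.
# The data file and variable names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("oecd_panel_1990_2001.csv")  # assumed columns used below

model = smf.ols(
    "mfp_growth ~ vc_intensity_lag1 + business_rd_stock + public_rd_stock"
    " + C(country)",   # country fixed effects
    data=panel,
)
# Cluster standard errors by country, a common choice for country panels.
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(result.summary())
```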

Problem-based learning : A study of suggestions for solving learning difficulties presented during English lessons in the school context

Kihlstenius, Therese January 2010
This study aims to connect problem-based learning with the problematic aspects of English lessons in school, in order to generate suggestions for solving these problems. The study took place at an upper secondary school in central Sweden, where six unstructured observations were conducted during English lessons in order to identify the problematic aspects. The students who participated in the study were sixteen to eighteen years old. The theoretical framework was based on literature on problem-based learning and learning in general. The essential features of problem-based learning were summarized and connected with the problematic aspects and classroom activities from the observations. In this way, it was possible to arrive at suggestions for solving some problems, such as unwillingness to speak and lack of motivation among students. The conclusions of this essay are that the problematic aspects concerned unwillingness to speak, students not understanding the learning materials, and the teacher not being supportive. The solutions to these problems dealt mostly with triggering motivation by presenting problem-solving tasks, working with interaction and metacognition, and planning the tasks in accordance with the students' zone of proximal development. Furthermore, the teacher should act as a guide in the classroom to help the students along the way. The students should also receive positive and constructive feedback from the teacher, which will improve their learning.

Design of 3D Graphic Tile-based Rendering Engine for Embedded Systems

Tsai, Chung-hua 03 September 2007
Due to the increasing demand for three-dimensional (3D) graphics applications in various consumer electronics, developing a low-cost 3D graphics hardware accelerator suitable for embedded systems has become an important issue. A typical 3D graphics accelerator includes a geometry sub-system and a rendering sub-system. In this thesis, a highly efficient 3D graphics rendering intellectual property (IP) based on the tile-based approach is proposed. The entire rendering IP consists of several modules. The main contributions of this thesis are the development of the setup engine and the rasterization module, and the integration of all modules into the rendering IP. For the setup engine, the thesis develops a folded arithmetic unit architecture, consisting mainly of one iterative divider, three multipliers and several adders, which can finish the overall computation of the setup equations in fewer than 50 cycles. For the rasterization module, the thesis develops several scan-conversion algorithms, including hierarchical, fast-skip, and boundary-edge test methods suitable for the tile-based rendering process. The ordinary line-drawing algorithm for the scan-line boundary search, and the direct in-out test approach, are not efficient in a tile-based setting, since the shape of triangle primitives may become irregular after tiling. Our experimental results show that the boundary-edge test leads to the most compact design, since it transforms the normal in-out test circuit for a single pixel into one that detects the two end-points of the scan-line simultaneously. In addition, the rasterization module can be divided into scan-line and fragment-generation parts, which helps the optimization and speedup of each part to achieve the desired overall fill-rate goal. Our simulations show a fill-rate improvement of around 60% with this approach. Finally, the thesis integrates all sub-modules into the complete rendering IP core. The IP has been realized in a 0.18 um technology, with a total gate count of 504k. It can run at up to 166 MHz and deliver a peak fill rate of 333M pixels/sec and 1.3G texels/sec. The IP has been thoroughly verified, achieving more than 95% code coverage. It has also been integrated with an OpenGL ES software module, the Linux operating system and a geometry module, and successfully prototyped on the ARM Versatile platform.
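[Editor's illustration] To show the kind of pixel in-out test that tile-based rasterizers build on, here is a minimal software sketch of an edge-function test applied to the pixels of a single tile. It illustrates the baseline "direct in-out test" the thesis improves upon, not the hardware boundary-edge circuit itself; all names are hypothetical.

```python
# Illustrative edge-function in-out test over one tile.
# A pixel is covered if it lies on the non-negative side of all three edges
# (assuming counter-clockwise vertex order).

def edge(ax, ay, bx, by, px, py):
    """Signed area of (a, b, p); >= 0 means p is on or left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_tile(tri, tile_x, tile_y, tile_size=16):
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = []
    for py in range(tile_y, tile_y + tile_size):
        for px in range(tile_x, tile_x + tile_size):
            if (edge(x0, y0, x1, y1, px, py) >= 0 and
                edge(x1, y1, x2, y2, px, py) >= 0 and
                edge(x2, y2, x0, y0, px, py) >= 0):
                covered.append((px, py))
    return covered

# Example: pixels of the 16x16 tile at (0, 0) covered by one small triangle.
print(len(rasterize_tile(((1, 1), (14, 2), (4, 13)), 0, 0)))
```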

Window-based congestion control : Modeling, analysis and design

Möller, Niels January 2008
This thesis presents a model for the ACK-clock inner loop, common to virtually all Internet congestion control protocols, and analyzes the stability properties of this inner loop, as well as the stability and fairness properties of several window update mechanisms built on top of the ACK-clock. Aided by the model for the inner loop, two new congestion control mechanisms are constructed, for wired and wireless networks. Internet traffic can be divided into two main types: TCP traffic and real-time traffic. Sending rates for TCP traffic, e.g., file sharing, are governed by window-based congestion control and adjust continuously to the network load. The sending rates for real-time traffic, e.g., voice over IP, are mostly independent of the network load. The current version of the Transmission Control Protocol (TCP) results in large queueing delays at bottlenecks, and poor quality for real-time applications that share a bottleneck link with TCP.

The first contribution is a new model for the dynamic relationship between window sizes, sending rates, and queue sizes. This system, with window sizes as inputs and queue sizes as outputs, is the inner loop at the core of window-based congestion control. The new model unifies two models that have been widely used in the literature. The dynamics of this system, including the static gain and the time constant, depend on the amount of cross traffic which is not subject to congestion control. The model is validated using ns-2 simulations, and it is shown that the system is stable. For moderate cross traffic, the system convergence time is a couple of round-trip times.

When introducing a new congestion control protocol, one important question is how flows using different protocols share resources. The second contribution is an analysis of fairness when a flow using TCP Westwood+ is introduced in a network that is also used by a TCP New Reno flow. It is shown that the sharing of capacity depends on the buffer size at the bottleneck link. With a buffer size matching the bandwidth-delay product, both flows get equal shares. If the buffer size is smaller, Westwood+ gets a larger share; in the limit of zero buffering, it gets all the capacity. If the buffer size is larger, New Reno gets a larger share; in the limit of very large buffers, it gets 3/4 of the capacity.

The third contribution is a new congestion control mechanism that maintains small queues. The overall control structure is similar to the combination of TCP with Active Queue Management (AQM) and explicit congestion notification, where routers mark some packets according to a probability that depends on the queue size. The key ideas are to take advantage of the stability of the inner loop, and to use control laws for setting and reacting to packet marks that result in more frequent feedback than with AQM. Stability analysis for the single-flow, single-bottleneck topology gives a simple stability condition, which can be used to guide tuning. Simulations, both of the fluid-flow differential equations and in the ns-2 packet simulator, show that the protocol maintains small queues. The simulations also indicate that tuning, using a single control parameter per link, is fairly easy.

The final contribution is a split-connection scheme for downloads to a mobile terminal. A wireless mobile terminal requests a file from a web server, via a proxy. During the file transfer, the Radio Network Controller (RNC) informs the proxy about bandwidth changes over the radio channel and the current RNC queue length. A novel control mechanism in the proxy uses this information to adjust the window size. In simulation studies, including one based on detailed radio-layer simulations, both the user response time and the link utilization are improved compared to TCP New Reno, Eifel and Snoop, both for a dedicated channel and for the shared channel in High-Speed Downlink Packet Access.
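[Editor's illustration] A minimal fluid-flow sketch of the ACK-clock inner loop described above: fixed window sizes drive ACK-clocked sending rates, which together with uncontrolled cross traffic fill a single bottleneck queue. The equations are a common textbook form under stated assumptions, not necessarily the exact model derived in the thesis; all parameter values are illustrative.

```python
# Toy fluid model of the ACK-clock inner loop at a single bottleneck:
# each flow's rate is window / RTT, the RTT grows with the queue, and the
# queue integrates the excess arrival rate.

def simulate(windows, capacity=1000.0, base_rtt=0.1, cross=200.0,
             dt=0.001, steps=5000):
    q = 0.0  # bottleneck queue size in packets
    for _ in range(steps):
        rtt = base_rtt + q / capacity          # queueing delay adds to the RTT
        rate = sum(w / rtt for w in windows)   # ACK-clocked sending rates
        q = max(0.0, q + (rate + cross - capacity) * dt)
    return q

# Two flows with fixed windows of 60 packets each; the queue settles near the
# level at which the total arrival rate equals the link capacity (~50 packets).
print(simulate([60.0, 60.0]))
```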

Identification and Analysis of Important Proteins in Protein Interaction Networks Using Functional and Topological Information

Reddy, Joseph January 2008
Studying protein interaction networks using functional and topological information is important for understanding cellular organization and functionality. This study deals with identifying important proteins in protein interaction networks using SWEMODE (Lubovac et al., 2006) and analyzing topological and functional properties of these proteins with the help of information derived from the modular organization of protein interaction networks, as well as information available in public resources, in this case annotation sources describing the functionality of proteins. Multi-modular proteins are short-listed from the modules generated by SWEMODE. Properties of these short-listed proteins are then analyzed using functional information from the SGD Gene Ontology (GO) (Dwight et al., 2002) and MIPS functional categories (Ruepp et al., 2004). Topological features such as lethality and centrality of these proteins are also investigated, using graph-theoretic properties and information on lethal genes from Yeast Hub (Kei-Hoi et al., 2005). The findings of the study based on GO terms reveal that these important proteins are mostly involved in the biological process of "organelle organization and biogenesis", and a majority of these proteins belong to the MIPS "cellular organization" and "transcription" functional categories. A study of lethality reveals that multi-modular proteins are more likely to be lethal than proteins present only in a single module. An examination of centrality (degree of connectivity of proteins) in the network reveals that the ratio of the number of important proteins to the number of hubs at different hub sizes increases with the hub size (degree).
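[Editor's illustration] A small sketch of the kind of topological comparison described above: degree (centrality) and lethality of proteins assigned to several modules versus proteins found in only one. The file formats, column layout and module assignment below are assumptions for illustration; the actual study uses SWEMODE output and Yeast Hub data.

```python
# Compare mean degree and lethal fraction of multi-modular vs single-module
# proteins in a protein interaction network. Input file names are assumed.
import networkx as nx

g = nx.read_edgelist("yeast_ppi.txt")                  # protein-protein edges
lethal = set(open("lethal_genes.txt").read().split())  # one gene name per token
modules = {}                                           # protein -> set of module ids
for i, line in enumerate(open("module_assignments.txt")):
    for protein in line.split():
        modules.setdefault(protein, set()).add(i)

multi = [p for p, m in modules.items() if len(m) > 1 and p in g]
single = [p for p, m in modules.items() if len(m) == 1 and p in g]

def lethal_fraction(proteins):
    return sum(p in lethal for p in proteins) / max(len(proteins), 1)

def mean_degree(proteins):
    return sum(dict(g.degree(proteins)).values()) / max(len(proteins), 1)

print("multi-modular:  degree %.1f, lethal %.2f"
      % (mean_degree(multi), lethal_fraction(multi)))
print("single-module:  degree %.1f, lethal %.2f"
      % (mean_degree(single), lethal_fraction(single)))
```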

Understanding Customers Attitudes towards Technology-Based Self-Service : A Case Study on ATMs

Annam, Balasubrahmanyam, Yallapragada, Narasimha Rao January 2006
In present-day society, technological innovations play a significant role in every phase of human life, and human interaction with machines has become essential in the service sector. In the past, a number of efforts have been made in the service marketing literature to understand how companies can better deliver their services with the help of self-service technology. Companies now have many possibilities to realize service offerings through large investments in self-service technologies, as technology has become the driving force for serving customers effectively and delivering services. It has become challenging for companies to serve customers effectively within a prescribed time while providing the right products at lower cost. To address this issue, most organizations are interested in employing self-service technologies (such as ATMs, ticket vending machines and online auctions). The purpose of the thesis can be traced to the fact that a large part of the service sector is changing from personnel-based delivery to technology-based self-service. The theoretical problem of the present study is to draw on service marketing and service quality in order to provide a better understanding of customers' attitudes and preferences towards technology-based self-services (ATMs). Many researchers have addressed customers' attitudes towards technology-based self-service delivery from a service quality perspective. The present paper on service quality and technology-based self-service concerns expected use rather than actual use, and customers' expectations about new self-service technologies. To fulfil this purpose, we conducted a pilot case study of customers' attitudes towards the technology (speed, accuracy, ease of use, privacy) when using ATMs and their perceptions of self-service technologies. For data collection we carried out 26 open interviews and 150 questionnaire-based interviews, and the data analysis is based on both qualitative and quantitative methods, supported by qualitative information and literature reviews. In terms of important findings, ease of use, speed, control and accuracy are the main attributes for service quality and customer satisfaction.

An Error Analysis Model for Adaptive Deformation Simulation

Kocak, Umut, Lundin Palmerius, Karljohan, Cooper, Matthew January 2012
With the widespread use of deformation simulations in medical applications, the realism of force feedback has become an important issue. In order to reach real-time performance with sufficient realism, the approach of adaptivity, i.e. solving different parts of the system with different resolutions and refresh rates, has commonly been deployed. The change in accuracy resulting from the use of adaptivity, however, has received scant attention in the deformation simulation field. Presentation of error metrics is rare, while more focus is given to real-time stability. We propose an abstract pipeline for performing error analysis for different types of deformation techniques, which can take different simulation parameters into account. A case study is also performed using the pipeline, and the various uses of the error estimation are discussed.
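[Editor's illustration] One concrete example of the kind of error metric such a pipeline might report is the relative RMS error of nodal displacements between an adaptive solution and a fully refined reference solution. This is an assumed metric for illustration only; the paper's actual pipeline and metrics are not reproduced here.

```python
# Relative RMS displacement error between an adaptive and a reference solution,
# both sampled at the same nodes. Purely illustrative.
import numpy as np

def relative_rms_error(u_adaptive, u_reference):
    """Both inputs: (n_nodes, 3) arrays of nodal displacements."""
    diff = np.linalg.norm(u_adaptive - u_reference, axis=1)
    ref = np.linalg.norm(u_reference, axis=1)
    return np.sqrt(np.mean(diff**2)) / max(np.sqrt(np.mean(ref**2)), 1e-12)

# Example with synthetic fields, only to show the call.
rng = np.random.default_rng(0)
u_ref = rng.normal(size=(100, 3))
u_adp = u_ref + 0.05 * rng.normal(size=(100, 3))
print(relative_rms_error(u_adp, u_ref))
```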

Barriers to implementing holistic, community-based treatment for offenders with fetal alcohol conditions

Mitten, H. Rae 02 February 2007
The thesis contends that holistic, community-based treatment is preferable to carceral options for offenders with fetal alcohol conditions, presents emerging support for this contention, identifies barriers to the implementation of community-based treatment, and culminates with analyses of ways of influencing policy reform or of legally mandating non-carceral treatment options. Potential avenues to be examined include:

• Charter of Rights and Freedoms, s. 15, including an analysis from Eldridge, Law, and Auton, based on the duty to accommodate disabilities;
• Constitution Act, 1982, s. 35 and its recognition and affirmation of such relevant treaty rights as the alcohol ban, particularly as the ban operates as a contextual factor in a s. 15 Charter analysis as applied to affected treaty beneficiaries; and
• Articles 23, 24 and 40 of the Convention on the Rights of the Child, and Article 12(1) of the International Covenant on Economic, Social and Cultural Rights, particularly as they influence the s. 1 analysis under the Charter.

A remedy mandating a positive state obligation to provide community-based treatment would likely require favourable cost-benefit analyses, as well as evidence of the effectiveness of the treatment (the latter to be studied in a subsequent interdisciplinary Ph.D. program using qualitative research techniques). The implications of a finding of disability and mental disorder related to fetal alcohol conditions will be examined. The present research topic lies at the interface of health and justice, and is multidisciplinary in nature, as fetal alcohol conditions influence every aspect of affected individuals' lives. Moreover, the problem is situated in its historical, ideological, global, and trans-disciplinary context.

An agent-based simulation model of structural change in agriculture

Stolniuk, Peter Charles 04 April 2008
Like many North American agricultural regions, Saskatchewan has experienced significant structural change in farming. Structural change encompasses evolution in the distribution of farm sizes, land tenure and financial characteristics, as well as variations in demographic and production characteristics. These issues are often a source of discontent among farm populations, as they imply that these populations are forced to adapt in a number of potentially unpleasant ways. The changes have profound and sometimes poorly understood effects on the rural economy; for example, structural change affects the rural population and therefore the demand for rural infrastructure.

Traditional farm-level analysis is often conducted using a representative farm or group, but this framework cannot capture the growing heterogeneity of modern farm operators or the current operating environment in agricultural regions. Farm profiles vary by demographic characteristics, such as age and education, and by resource endowments. Agent-based simulation captures this heterogeneity through a farm-by-farm analysis in which, after initialization, the regional economy evolves over time.

A synthetic population is created based on survey data, and the land characteristics are based on actual land data for CAR 7B of Saskatchewan. A number of different price and yield time paths were created using a bootstrap procedure on historical data, and the model was evolved to potential agricultural structures that may occur in the model region 30 years in the future.

Structural change occurs endogenously as farms interact in land markets and make decisions on land use. Agents compete for available land in purchase and lease markets, with land selling to the highest bidder. The dynamic nature of agent-based models allows individual farms to adjust land use in response to changing economic conditions and individual preferences. How individuals organize their resources is critical to farm survival and growth.

The results indicate that many of the trends are the same under the different price and yield time paths; however, the rate of change is significantly affected by the particular path that occurs. The model predicts that the trend towards fewer and larger farms will continue into the future: the proportion of smaller farms will decline and the proportion of large farms will increase, while mid-sized farms will remain relatively unchanged. The proportion of mixed farms, land use, and total livestock numbers depend significantly on the price and yield time path. The structure that actually emerges will be the result of the individual price and yield time path that occurs.
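[Editor's illustration] A minimal sketch of the bootstrap step described above, in which alternative price/yield time paths are generated by resampling historical years with replacement. The file name, resampling-by-whole-year scheme and number of paths are assumptions for illustration.

```python
# Bootstrap illustrative price/yield time paths from historical data.
import numpy as np
import pandas as pd

hist = pd.read_csv("historical_prices_yields.csv")  # assumed: one row per year
rng = np.random.default_rng(42)

def bootstrap_path(history, n_years=30):
    """Resample historical years with replacement to build one 30-year path."""
    rows = rng.integers(0, len(history), size=n_years)
    return history.iloc[rows].reset_index(drop=True)

# Generate, say, 100 alternative 30-year price/yield paths to drive the model.
paths = [bootstrap_path(hist) for _ in range(100)]
print(paths[0].head())
```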

The accuracy of financial analysts and market response

Yang, Zhaochun (Fiona) 23 June 2006
Financial analysts play an intermediary role in financial markets, so there are two steps before information is fully absorbed into the stock price: analysts' reaction to the information, and investors' reaction to analysts' recommendations. Any observed inefficiency in stock pricing could therefore result from two possibilities: analysts failed to fully incorporate the market information into their stock analysis, or the information released in the analysts' reports is not fully believed by investors.

The documented optimism of financial analysts may suggest the possibility of the latter case. To test the accuracy of analysts from another perspective, we follow a market microstructure model and use intraday market data to estimate the probability of an information event, the probability of good or bad news, and the rates at which different traders arrive at the market.

By comparing these estimates between days with and without recommendation changes, we find inconsistent results with regard to a difference in the probability of an information event. For some stocks, we do observe an increase in the likelihood of news on days when analysts change their recommendations, but this is not the case for most stocks. However, even though analysts are inaccurate most of the time, uninformed investors usually believe them. Furthermore, it seems that uninformed investors disbelieve analyst recommendation changes precisely in those instances when analysts are most accurate.

Because of this, we hypothesise that market makers might suspect that orders in the opposite direction of an analyst's recommendation change are more likely to come from informed traders. This is consistent with the intuition that most traders are uninformed and will simply follow the advice of a perceived expert, and that those who do not follow that advice may therefore be more likely to have special information of their own. We check whether there are any differences in the probability of information-based trading (PIN), and in the conditional probabilities of information-based trading conditioned on a sell (PIN|sell) or a buy (PIN|buy), between days with and without recommendation changes. We did not find any significant difference, indicating that although we may observe a higher arrival rate of informed traders on recommendation-change days, the probabilities of information-based trading do not change substantially. More informed traders seem to come to the market merely because the higher arrival rate of uninformed traders on recommendation days gives them a good opportunity to camouflage their behaviour. The specialists likely would not have to change their behaviour on those days by increasing or shifting bid-ask spreads, since the increased costs from the higher volume of informed trading are balanced by increased profits from the higher volume of uninformed trading.

Furthermore, regressions of the probabilities of informed trading (conditional or unconditional) on firm size, trading volume, and the volatility of daily returns show nothing significant, so we were not able to identify influential factors that affect informed trading or explain differences in informed trading between firms.
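[Editor's illustration] For context on the PIN measure referred to above: in the standard Easley-Kiefer-O'Hara-Paperman microstructure framework, the unconditional probability of information-based trading is a simple function of the estimated arrival rates. The sketch below assumes the parameters have already been estimated (typically by maximum likelihood on daily buy/sell counts) and does not reproduce the thesis's conditional PIN|buy and PIN|sell variants.

```python
# PIN in the standard EKOP model, given estimated parameters:
# alpha  - probability of an information event on a given day
# mu     - arrival rate of informed traders when an event occurs
# eps_b, eps_s - arrival rates of uninformed buyers and sellers

def pin(alpha, mu, eps_b, eps_s):
    """Unconditional probability that a given trade is information-based."""
    return alpha * mu / (alpha * mu + eps_b + eps_s)

# Example with illustrative parameter values.
print(round(pin(alpha=0.4, mu=30.0, eps_b=25.0, eps_s=25.0), 3))  # ~0.194
```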
