
Motivations and Outcomes of Firms' Leveraging of Alliance Knowledge

Zhou, Shihao 22 February 2017
Today, firms increasingly rely on strategic alliances to reach for unique technological knowledge that they cannot develop internally. However, prior literature reports inconsistent findings regarding the drivers and outcomes of a firm's leveraging of alliance partners' technological knowledge. In this dissertation, I consider opposing propositions from prior studies simultaneously and examine two research questions: 1) what motivates a firm to search for technological knowledge from alliance partners? and 2) how do configurations of alliance knowledge and the alliance network affect firm innovation? I argue that the motivation for alliance knowledge search is determined by the allocation of managerial attention between local and distant domains. While distant attention motivates alliance knowledge search, local attention suppresses it. I hypothesize that innovation performance below the aspiration level intensifies both local and distant attention and has an inverted U-shaped relationship with alliance knowledge search intensity. This curvilinear relationship is moderated by the focal firm's knowledge stock size, since firms with large knowledge stocks are more likely to develop distant attention in the presence of poor innovation performance. I further argue that exploration and exploitation play key roles in the configurations of both alliance knowledge and the alliance network. Alliance knowledge leveraging contributes more to firm innovation if the firm can establish a balance between exploration and exploitation. I propose that balancing exploration and exploitation within a single domain (e.g., searching for moderately explorative alliance knowledge) generates substantial managerial costs. However, firms can balance exploration and exploitation across domains: they can leverage explorative knowledge through exploitative alliances, such as repeated partnerships and strong ties. I test the related hypotheses using longitudinal data from the U.S. biopharmaceutical industry.
Results show that: 1) innovation performance below the aspiration level has an inverted U-shaped relationship with alliance knowledge search, demonstrating that both distant and local attention play important roles in developing the motivation for alliance knowledge search; 2) increasing knowledge stock size amplifies both the positive and negative effects of innovation performance below aspiration; 3) the technological distance of searched alliance knowledge has a negative linear effect on firm innovation; and 4) leveraging explorative knowledge from repeated partnerships, but not from strong ties, leads to superior innovation performance, supporting the idea of establishing the balance across domains. The findings make important contributions to the alliance knowledge leveraging, aspiration, and exploration-exploitation literatures. The managerial implications of the study are also discussed. / Ph. D.

Metaheuristics for the waste collection vehicle routing problem with time windows

Benjamin, Aida Mauziah January 2011
In this thesis we consider a waste collection problem in which there is a set of waste disposal facilities, a set of customers at which waste is collected, and an unlimited number of homogeneous vehicles based at a single depot. Empty vehicles leave the depot and collect waste from customers, emptying themselves at the waste disposal facilities as and when necessary; vehicles return to the depot empty. We take into consideration time windows associated with customers, disposal facilities and the depot, as well as a driver rest period. The problem is solved heuristically. A neighbour set is defined for each customer as the set of customers that are close and have compatible time windows. This thesis uses six different procedures to obtain initial solutions for the problem. The initial solutions from these procedures are then improved in terms of distance travelled using our phase 1 and phase 2 procedures, while the number of vehicles used is reduced using our vehicle reduction (VR) procedure. In a further attempt to improve the solutions, three metaheuristic algorithms are presented, namely tabu search (TS), variable neighbourhood search (VNS) and variable neighbourhood tabu search (VNTS). Moreover, we present modified disposal facility positioning (DFP), reverse order and change tracking procedures. Using the procedures presented in the thesis, four solution procedures are reported for two benchmark problem sets, namely the waste collection vehicle routing problem with time windows (VRPTW) and the multi-depot vehicle routing problem with inter-depot routes (MDVRPI). Our solutions for the waste collection VRPTW problems are compared with those of Kim et al. (2006), and our solutions for the MDVRPI problems are compared with those of Crevier et al. (2007). Computational results for the waste collection VRPTW problems indicate that our algorithms produce better quality solutions than Kim et al. (2006) in terms of both distance travelled and number of vehicles used.
However, for the MDVRPI problems, the solutions of Crevier et al. (2007) outperform ours.
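To make the kind of construction heuristic described above concrete, the sketch below builds a single route greedily, visiting customers in due-time order and inserting a disposal stop whenever the vehicle would overflow. The data structures and the greedy rule are illustrative assumptions, not the thesis's actual six initial-solution procedures:

```python
# Illustrative greedy route construction for waste collection with time
# windows. Customer fields and the insertion rule are assumptions made
# for this sketch, not the procedures used in the thesis.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    ready: float    # earliest allowed service start
    due: float      # latest allowed service start
    service: float  # service duration
    demand: float   # amount of waste collected

def build_route(customers, capacity):
    """Visit customers in order of their due times, emptying the vehicle
    at a disposal facility whenever capacity would be exceeded."""
    route, load, time = [], 0.0, 0.0
    for c in sorted(customers, key=lambda c: c.due):
        if load + c.demand > capacity:
            route.append("DISPOSAL")   # empty the vehicle mid-route
            load = 0.0
        start = max(time, c.ready)
        if start > c.due:              # time window violated: skip customer
            continue
        route.append(c.name)
        load += c.demand
        time = start + c.service
    route.append("DISPOSAL")           # vehicles must return to the depot empty
    return route
```

A solution like this would then be the starting point for the improvement phases and the metaheuristics (TS, VNS, VNTS) discussed in the abstract.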

Google Econometrics: An Application to the Czech Republic

Platil, Lukáš January 2014
This thesis examines the applicability of Google Econometrics - the use of search volume data for particular queries as explanatory variables in time series modeling - in the case of the Czech Republic. We analyze the contribution of Google data by comparing out-of-sample nowcasting performance and in-sample fit with control variables in three related areas: an autoregressive model for unemployment, vector autoregression and logit models for GDP and household consumption, and a Granger causality test for consumer confidence. The improvement in the quality of unemployment nowcasting is modest but statistically significant; a sentiment index based on Google queries shows a reciprocal relationship with the official Consumer Confidence Indicator, and it also provides superior nowcasts for household consumption as well as in-sample fit in logit models; its performance in GDP nowcasting is average among the control variables. Overall, the results suggest that Google Econometrics is also applicable to the Czech Republic, despite the fact that the internet penetration rate and Google's popularity were lower over the analyzed period than in the developed economies where these methods were usually tested. In the future, Google data may be used together with other leading and coincident indicators to...
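The core idea of augmenting an autoregressive nowcasting model with a search-volume regressor can be sketched on synthetic data. Everything here is simulated (the series, coefficients, and noise level are assumptions); the point is only the model comparison, not the thesis's actual estimates:

```python
# Synthetic sketch of the nowcasting comparison: an AR(1) model of a
# series (e.g. unemployment) with and without a same-period search-volume
# index as an extra regressor. Data and coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 120
search = rng.normal(size=T)          # standardized search-volume index
y = np.zeros(T)
for t in range(1, T):
    # Assumed true process: persistence plus a search-volume signal and noise
    y[t] = 0.8 * y[t - 1] + 0.5 * search[t] + 0.1 * rng.normal()

def fit_ols(X, target):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

target = y[1:]
X_ar = np.column_stack([np.ones(T - 1), y[:-1]])   # AR(1) baseline
X_aug = np.column_stack([X_ar, search[1:]])        # + Google regressor

def rmse(X):
    beta = fit_ols(X, target)
    return float(np.sqrt(np.mean((target - X @ beta) ** 2)))

rmse_ar, rmse_aug = rmse(X_ar), rmse(X_aug)
```

Because query volumes are observable before the official statistic is released, the augmented model's better fit illustrates the "nowcasting" gain the thesis measures against control variables.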

An Investigation into User Text Query and Text Descriptor Construction

Pfitzner, Darius Mark, pfit0022@flinders.edu.au January 2009
Cognitive limitations such as those described in Miller's (1956) work on channel capacity and Cowan's (2001) on short-term memory are factors in determining user cognitive load and, in turn, task performance. Inappropriate user cognitive load can reduce user efficiency in goal realization. For instance, if the user's attentional capacity is not appropriately applied to the task, distractor processing can tend to appropriate capacity from it. Conversely, if a task drives users beyond their short-term memory envelope, information loss may occur in its translation to long-term memory and subsequent retrieval for task-based processing. To manage user cognitive capacity in the task of text search, the interface should allow users to draw on their powerful and innate pattern recognition abilities. This harmonizes with Johnson-Laird's (1983) proposal that propositional representation is tied to mental models. Combined with the theory that knowledge is highly organized when stored in memory, an appropriate approach for cognitive load optimization would be to graphically present single documents, or clusters thereof, with an appropriate number and type of descriptors. These descriptors are commonly words and/or phrases. Information theory research suggests that words have different levels of importance in document topic differentiation. Although keyword identification is well researched, there is a lack of basic research into human preference regarding query formation and the heuristics users employ in search. This lack extends to features as elementary as the number of words preferred to describe and/or search for a document. Understanding these preferences will help balance the processing overheads of tasks like clustering against user cognitive load, to realize a more efficient document retrieval process.
Common approaches such as search engine log analysis cannot provide this degree of understanding and do not allow clear identification of the intended set of target documents. This research endeavours to improve the manner in which text search returns are presented so that user performance in real-world situations is enhanced. To this end we explore both how to present search information and results graphically to facilitate optimal cognitive and perceptual load/utilization, and how people use textual information in describing documents or constructing queries.
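The claim above that words differ in topic-differentiating importance is the intuition behind TF-IDF weighting, which can be sketched on a toy corpus. The corpus and scoring details are illustrative assumptions, not part of the thesis:

```python
# Toy TF-IDF sketch: a term that appears in few documents carries more
# topic-differentiating information than one spread across the corpus.
import math
from collections import Counter

docs = [
    "waste collection vehicle routing",
    "vehicle routing time windows",
    "search engine query personalisation",
]

def tfidf(doc_tokens, corpus):
    """Score each term of a document by term frequency times inverse
    document frequency over the corpus."""
    n = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for term, count in tf.items():
        df = sum(term in d.split() for d in corpus)  # document frequency
        scores[term] = (count / len(doc_tokens)) * math.log(n / df)
    return scores

scores = tfidf(docs[0].split(), docs)
# "waste" appears in only one document while "vehicle" appears in two,
# so "waste" is the stronger descriptor for the first document.
```

Descriptors chosen this way are one candidate for the "appropriate number and type of descriptors" the abstract argues should accompany graphically presented documents.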

Personalisation of web information search: an agent based approach

Gopinathan-Leela, Ligon January 2005
The main purpose of this research is to find an effective way to personalise information searching on the Internet using middleware search agents, namely Personalised Search Agents (PSA). The PSA acts between users and search engines, and applies new and existing techniques to mine and exploit relevant and personalised information for users. Much research has already been done on personalising filters, a middleware technique that can act between user and search engines to deliver more personalised results. These personalising filters apply one or more of the popular techniques for search result personalisation, such as the category concept, learning from user actions and using metasearch engines. In developing the PSA, these techniques have been investigated and incorporated to create an effective middleware agent for web search personalisation. In this thesis, a conceptual model for the Personalised Search Agent is developed, implemented in a prototype, and benchmarked against existing web search practices. A system development methodology with flexible and iterative procedures that switch between conceptual design and prototype development was adopted as the research methodology. In the conceptual model of the PSA, a multi-layer client-server architecture is used, applying generalisation-specialisation features. The client and the server are structurally the same but differ in the level of generalisation and in their interfaces. The client handles personalising information for one user, whereas the server combines the personalising information of all the clients (i.e. its users) to generate a global profile. Both client and server apply the category concept, where user-selected URLs are mapped against categories. The PSA learns the user-relevant URLs both by requesting explicit feedback and by implicitly capturing user actions (for instance, the active time spent by the user on a URL).
The PSA also employs a keyword-generating algorithm, and tries different combinations of words in a user search string by effectively combining them with the relevant category values. The core functionalities of the conceptual model for the PSA were implemented in a prototype, which was used to test the ideas in the real world. The results were benchmarked against those of existing search engines to determine the efficiency of the PSA over conventional searching. A comparison of the test results revealed that the PSA is more effective and efficient in finding relevant and personalised results for individual users, and possesses a sense of the individual user rather than the general user sense of traditional search engines. The PSA is a novel architecture and contributes to the domain of web information searching by delivering new ideas such as active-time-based user relevancy calculations, automatic generation of sensible search keyword combinations, and the implementation of a multi-layer agent architecture. Moreover, the PSA has high potential for future extensions. Because it captures highly personalised data, data mining techniques that employ case-based reasoning could make the PSA a more responsive, more accurate and more effective tool for personalised information searching.
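The "active time spent on a URL" signal mentioned above can be sketched as a simple relevancy score. The cap, normalization, and feedback weight below are assumptions for illustration, not the PSA's actual calculation:

```python
# Sketch of active-time-based relevancy: a URL the user dwells on longer
# is weighted as more relevant, optionally blended with explicit feedback.
# The decay cap and blending weight are illustrative assumptions.
def relevance_score(active_seconds, explicit_feedback=None,
                    cap=300.0, feedback_weight=0.5):
    """Combine implicit dwell time (capped and normalized to [0, 1])
    with optional explicit feedback in [0, 1]."""
    implicit = min(active_seconds, cap) / cap
    if explicit_feedback is None:
        return implicit
    return (1 - feedback_weight) * implicit + feedback_weight * explicit_feedback

# A page viewed for 150 s with no explicit rating scores 0.5; an explicit
# positive rating (1.0) pulls the combined score up to 0.75.
```

Capping the dwell time prevents a single left-open tab from dominating the profile, which is one plausible reason an agent would normalize the implicit signal before combining it with explicit feedback.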

Optimization of Code-Constellation for M-ary CDMA Systems

Chen, Yang-Wen 02 September 2006
In this thesis, we propose and evaluate quasi-optimal algorithms for solving the code-constellation optimization problem in M-ary CDMA systems. The M-ary CDMA system is a new CDMA architecture: the more spreading codes each user employs, the higher the bandwidth efficiency that can be achieved, since more bits are packed into each symbol. We use a code, which we refer to as a "mapping code", to help form a multidimensional spherical code-constellation. The M codewords of the mapping code correspond one-to-one to the M points of the code-constellation, so the code-constellation optimization problem is a combinatorial optimization problem. We show that an exhaustive search (ES) algorithm would have to compute and check all possible subsets, which makes the problem intractable. Based on the exhaustive search algorithm, we propose a symmetric points search (SPS) algorithm to reduce the computational complexity, although it is not an optimal algorithm. In addition, we propose a quasi-optimal algorithm, namely the Manhattan distance search (MDS) algorithm. Numerical results and comparisons are provided to illustrate that the computational complexity of the Manhattan distance search algorithm increases linearly with the dimension of the code-constellation and that its performance is better than that of the others.
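The combinatorial nature of the problem can be illustrated with a toy exhaustive search: pick M constellation points from N candidates so as to maximize the minimum pairwise distance. The candidate points and the distance criterion below are illustrative assumptions, not the thesis's actual constellation model:

```python
# Toy exhaustive search over code-constellation subsets: choose M points
# from N candidates maximizing the minimum pairwise Euclidean distance.
# Checking all C(N, M) subsets is what makes the problem intractable at
# scale; the circle of candidate points is an illustrative assumption.
from itertools import combinations
import math

def min_pairwise_dist(points):
    return min(math.dist(a, b) for a, b in combinations(points, 2))

def exhaustive_search(candidates, M):
    best, best_d = None, -1.0
    for subset in combinations(candidates, M):   # C(N, M) subsets
        d = min_pairwise_dist(subset)
        if d > best_d:
            best, best_d = subset, d
    return best, best_d

# Eight points on the unit circle; choosing M = 4 recovers an equally
# spaced square, whose minimum pairwise distance is sqrt(2).
cands = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
         for k in range(8)]
best, d = exhaustive_search(cands, 4)
```

Heuristics like the SPS and MDS algorithms described above aim to avoid this C(N, M) enumeration while staying close to the exhaustive-search optimum.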

Design and analysis of an information search engine for the electronic library (eLABa)

Gilaitis, Antanas 15 July 2009
In this work, several search systems were analyzed and their advantages and disadvantages identified, and the relevant system development technologies were compared. Based on this evaluation, an information search system for the electronic library (eLABa) was developed. Its main advantage is that search covers not only document metadata (the description) but also the document text. Possible optimizations of search execution were determined experimentally. The system is intended to help people search for electronic documents in the FEDORA repository. It has the following features: indexing of Fedora FOXML records, including the text contents of datastreams, and searching over the index. Registered users can save results to a history page and repeat a search later, and can go directly to an electronic document after a search.
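The system's key feature, indexing full text alongside metadata, can be sketched as a minimal inverted index. The record structure below is an illustrative assumption, not the eLABa/FOXML schema:

```python
# Minimal inverted-index sketch of full-text search over both metadata
# and document text. The record layout is an assumption for illustration.
from collections import defaultdict

def build_index(records):
    """records: {doc_id: {"metadata": str, "fulltext": str}}
    Returns a term -> set-of-doc-ids inverted index covering both fields."""
    index = defaultdict(set)
    for doc_id, rec in records.items():
        for field in ("metadata", "fulltext"):
            for term in rec[field].lower().split():
                index[term].add(doc_id)
    return index

records = {
    "etd-1": {"metadata": "routing thesis",
              "fulltext": "vehicle routing heuristics"},
    "etd-2": {"metadata": "search thesis",
              "fulltext": "query personalisation agents"},
}
index = build_index(records)
# A term occurring only in a document's body ("heuristics") is still
# findable, which is exactly the advantage over metadata-only search.
```

Production systems index the extracted datastream text the same way, just with tokenization, stemming, and ranking layered on top.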

Exploiting Context in Dealing with Programming Errors and Exceptions in the IDE

September 2014
Studies show that software developers spend about 19% of their development time web surfing. While collecting necessary information through traditional web search, they face several practical challenges. First, traditional search does not consider the context (i.e., the surroundings and circumstances) of a programming problem unless the developers capture it themselves when formulating the search query, and it forces the developers to switch frequently between their working environment (e.g., the IDE) and the web browser. Second, the technical details (e.g., the stack trace) of an encountered exception often contain a lot of information, and they cannot be used directly as a search query given that traditional search engines do not support long queries. Third, traditional search generally returns hundreds of results, and the developers need to analyze the result pages manually, one by one, in order to extract a working solution; both manually analyzing a page for content relevant to the encountered exception (and its context) and working out an appropriate solution are non-trivial tasks. Fourth, traditional code search engines share the same limitations as web search engines, and they also do not help much in collecting code examples that can be used for handling the encountered exceptions. In this thesis, we present a context-aware and IDE-based approach that helps one overcome these four challenges. In our first study, we propose and evaluate a context-aware meta search engine for programming errors and exceptions. The meta search collects results for any exception encountered in the IDE from three popular search engines (Google, Bing and Yahoo) and one programming Q&A site (Stack Overflow), refines and ranks the results against the detailed context of the encountered exception, and then recommends them within the IDE.
From this study, we not only explore the potential of the context-aware, meta-search-based approach but also realize the significance of appropriate search queries in searching for programming solutions. In the second study, we propose and evaluate an automated query recommendation approach that exploits the technical details of an encountered exception and recommends a ranked list of search queries. We found the recommended queries quite promising and comparable to queries suggested by experts. We also note that the support for developers can be further complemented by post-search content analysis. In the third study, we propose and evaluate an IDE-based, context-aware content recommendation approach that identifies and recommends the sections of a web page that are relevant to the exception encountered in the IDE. The idea is to reduce the cognitive effort developers spend searching for content of interest (i.e., relevance) in the page, and we found the approach quite effective through extensive experiments and a limited user study. In the fourth study, we propose and evaluate a context-aware code search engine that collects code examples from a number of GitHub code repositories, where the examples contain high-quality handlers for the exception of interest. We validate the performance of each of our proposed approaches against the existing relevant literature and also through several mini user studies. Finally, in order to further validate the applicability of our approaches, we integrate them into an Eclipse plug-in prototype, ExcClipse. We then conduct a task-oriented user study with six participants, and report the findings, which are significantly promising.
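The query-recommendation step above, turning a long stack trace into a short query, can be sketched with simple token ranking. The tokenization, stop list, and ranking rules are illustrative assumptions, not the thesis's algorithm:

```python
# Sketch of deriving a short search query from an exception stack trace:
# split identifiers, drop boilerplate tokens, and keep the most frequent
# terms. The stop list and ranking rule are illustrative assumptions.
import re
from collections import Counter

NOISE = {"at", "java", "lang", "caused", "by", "unknown", "source", "exception"}

def suggest_query(stack_trace, max_terms=5):
    # Split camelCase identifiers, then extract alphabetic tokens.
    spaced = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", stack_trace)
    tokens = re.findall(r"[A-Za-z]+", spaced)
    words = [t.lower() for t in tokens
             if t.lower() not in NOISE and len(t) > 2]
    ranked = [w for w, _ in Counter(words).most_common(max_terms)]
    return " ".join(ranked)

trace = """java.lang.NullPointerException
    at com.example.cache.CacheLoader.loadEntry(CacheLoader.java:42)
    at com.example.cache.CacheLoader.reload(CacheLoader.java:77)"""
query = suggest_query(trace)
```

A frequency-only ranking like this over-weights repeated class names, which is one reason the thesis evaluates recommended queries against expert-suggested ones rather than trusting any single heuristic.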

Simultaneously searching with multiple algorithm settings: an alternative to parameter tuning for suboptimal single-agent search

Valenzano, Richard
Many single-agent search algorithms have parameters that need to be tuned. Although settings found by offline tuning will exhibit strong average performance, properly selecting parameter settings for each problem can result in substantially reduced search effort. We consider the use of dovetailing as a way to deal with this issue. This procedure performs search with multiple parameter settings simultaneously. We present results testing the use of dovetailing with the weighted A*, weighted IDA*, weighted RBFS, and BULB algorithms on the sliding tile and pancake puzzle domains. Dovetailing will be shown to significantly improve weighted IDA*, often by several orders of magnitude, and generally enhance weighted RBFS. In the case of weighted A* and BULB, dovetailing will be shown to be an ineffective addition to these algorithms. A trivial parallelization of dovetailing will also be shown to decrease the search time in all considered domains.
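Dovetailing as described above, running several parameter settings in interleaved fashion and stopping when any one of them finds a solution, can be sketched with a toy searcher. The "search" below is a stand-in step counter, not the thesis's weighted A*/IDA*/RBFS/BULB solvers; all numbers are illustrative assumptions:

```python
# Sketch of dovetailing: interleave several parameter settings of a
# suboptimal search, one step each in round-robin order, and return the
# first setting to finish. The toy "searcher" is an assumption standing
# in for a real weighted search algorithm.
from itertools import cycle

def make_searcher(weight, goal=20):
    """Toy searcher: a higher weight reaches the goal in fewer steps but
    returns a worse (higher) solution cost. Yields None until done,
    then yields the solution cost."""
    state = 0
    while state < goal:
        state += weight          # stand-in for one node expansion
        yield None
    yield goal + weight          # stand-in for a suboptimal solution cost

def dovetail(weights):
    searchers = [(w, make_searcher(w)) for w in weights]
    for w, s in cycle(searchers):    # round-robin: one step per setting
        result = next(s)
        if result is not None:
            return w, result         # first setting to finish wins
```

With settings [1, 2, 5], the greediest setting (weight 5) finishes first, mirroring the observation that per-problem parameter selection, here emulated by running all settings at once, can beat any single offline-tuned setting.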
