About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Headquarters for government flying services

Choi, Chi-fung, Nelson. January 1996 (has links)
Thesis (M. Arch.) -- University of Hong Kong, 1996. Includes a special report study entitled "Skin and skeleton of a hangar" and bibliographical references.
542

Rule based stochastic tree search

Kumar, Mukund 08 February 2012 (has links)
This work presents an enhancement of a search process suited to problems that can be solved using a graph-grammar-based generative tree. A generative grammar can produce a vast number of design alternatives from a seed graph of the problem and a set of transformation rules; the task is to find the best solution in this space with as few evaluations as possible. In a previous paper, an interactive algorithm for searching a graph grammar representation was presented and demonstrated on the problem of tying a necktie; the work here builds on that process to make it useful for solving engineering problems. To test the search process, two problems are chosen: a photovoltaic array topology optimization problem and an electromechanical product redesign problem. It is shown that the search process converges on the best solution within a few hundred evaluations, a manageable number compared to the large solution space of millions of candidates. Further optimization and tweaks are made to the process to control exploration versus exploitation and to find the parameters giving the fastest convergence and the best solution.
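To make the idea concrete, the sketch below shows one way a rule-based stochastic tree search over grammar-generated designs could look. The rules, the toy objective, and the exploration parameter are illustrative stand-ins, not the thesis's actual grammar or evaluator.

```python
import random

# Illustrative stochastic tree search over grammar-generated designs.
# `rules` and `evaluate` below are hypothetical stand-ins for the thesis's
# graph-grammar transformation rules and engineering evaluator.

def stochastic_tree_search(seed, rules, evaluate, budget=300, exploration=0.2):
    best, best_score = seed, evaluate(seed)
    frontier = [seed]
    for _ in range(budget):
        # Exploit the best-known design most of the time; explore a random one otherwise.
        parent = best if random.random() > exploration else random.choice(frontier)
        child = random.choice(rules)(parent)      # apply one transformation rule
        score = evaluate(child)
        frontier.append(child)
        if score > best_score:
            best, best_score = child, score
    return best, best_score

# Toy usage: designs are tuples of integers, rules grow or tweak them, and the
# objective rewards sums close to a target value of 10.
rules = [lambda d: d + (1,),
         lambda d: d[:-1] + (d[-1] + 1,) if d else d]
evaluate = lambda d: -abs(sum(d) - 10)
print(stochastic_tree_search((0,), rules, evaluate))
```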
543

Theory and techniques for synthesizing efficient breadth-first search algorithms

Nedunuri, Srinivas 05 October 2012 (has links)
The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally, each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad hoc. The ad hoc nature of algorithm development is especially limiting when developing algorithms for a family of related problems. Guided program synthesis is an existing methodology for the systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search, and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), by incorporating dominance relations to constrain the size of the search frontier to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, allowing the dominated space to be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this with the program schema for EBFS results in efficient greedy algorithms. Our third contribution is the application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well, and their space requirements also become quite prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task and requires domain expertise and insight. The correctness of the rules is also not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived, which can then be turned into control rules, and demonstrate the method on a planning problem (Logistics).
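A minimal sketch of the EBFS idea follows: an ordinary breadth-first search whose frontier is pruned by a dominance relation and capped by a breadth bound, so that a bound of one yields a greedy algorithm. The interfaces and the toy usage are hypothetical; the thesis works with formally derived schemas and dominance relations rather than this ad hoc code.

```python
from typing import Callable, Iterable, List, Optional, TypeVar

S = TypeVar("S")

def ebfs(start: S,
         expand: Callable[[S], Iterable[S]],
         is_goal: Callable[[S], bool],
         dominates: Callable[[S, S], bool],
         bound: int = 1) -> Optional[S]:
    """Breadth-first search whose frontier is pruned by dominance and capped at `bound`.

    With bound == 1 the search degenerates to a greedy algorithm."""
    frontier: List[S] = [start]
    while frontier:
        for state in frontier:
            if is_goal(state):
                return state
        children = [c for s in frontier for c in expand(s)]
        # Eliminate any candidate dominated by another candidate at the same level.
        survivors = [c for c in children
                     if not any(dominates(o, c) for o in children if o is not c)]
        # A real instance would also rank survivors by problem-specific merit first.
        frontier = survivors[:bound]
    return None

# Toy usage: reach a value of at least 10 by adding 1 or 3; larger partial sums dominate.
print(ebfs(0, expand=lambda s: [s + 1, s + 3], is_goal=lambda s: s >= 10,
           dominates=lambda a, b: a > b, bound=1))   # -> 12
```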
544

Information structures and their effects on consumption decisions and prices

Moreno González, Othón M. 06 November 2013 (has links)
This work analyzes the effects that different information structures on the demand side of the market have on consumption decisions and the way prices are determined. We develop three theoretical models to address this issue in a systematic way. First, we focus our attention on consumers' awareness, or lack thereof, of substitute products in the market, and the strategic interaction between firms competing in prices and costly advertising in such an environment. We find that prior information held by consumers can drastically change the advertising equilibrium predictions. In particular, we provide sufficient conditions for the existence of three types of equilibria, in addition to one previously found in the literature, and provide a necessary condition for a fourth type of equilibrium. Additionally, we show that the effect of the resulting advertising strategies on the expected transaction price is qualitatively significant, although ambiguous when compared to the case of a newly formed market. We can establish, however, that the transaction price is increasing in the size of the smaller firm's captive market. In the second chapter, we study the optimal timing for buying a durable good with an embedded option to resell it at some point in the future, as well as its reservation price, where the agent faces Knightian uncertainty about the process generating the market prices. The problem is modeled as a stopping problem with multiple priors in continuous time with infinite horizon. We find that the direction of the change in the buyer's reservation price depends on the particular parametrization of the model. Furthermore, the change in the buying threshold due to an increase in ambiguity is greater as the fraction of the market price at which the agent can resell the good decreases, and the value of the embedded option is decreasing in the perceived level of ambiguity. Finally, we introduce Knightian uncertainty into a model of price search by letting consumers be ambiguous regarding the industry's cost of production. We characterize the equilibria of this game for high and low levels of the search cost and show that firms extract abnormal profits for low realizations of the marginal cost. Furthermore, we show that, as the search cost goes to zero, the equilibrium of the game under the low-cost regime does not converge to Bertrand marginal-cost pricing. Instead, firms follow a mixed strategy that includes all prices between the high and low production costs.
545

Automating program transformations based on examples of systematic edits

Meng, Na 16 January 2015 (has links)
Programmers make systematic edits (similar, but not identical, changes to multiple places) during software development and maintenance in order to add features and fix bugs. Finding all the correct locations and making the edits correctly is a tedious and error-prone process. Existing tools for automating systematic edits are limited because they do not create general-purpose edit scripts or suggest edit locations, except for specialized or trivial edits. Since many similar changes occur in similar contexts (in code with similar surrounding dependency relations and syntactic structures), there is an opportunity to automate program transformations based on examples of systematic edits. By inferring systematic edits and the relevant context from one or more exemplar changes, automated approaches can (1) apply similar changes to other locations, (2) locate code that requires similar changes, and (3) refactor code which undergoes systematic edits. This thesis seeks to improve programmer productivity and software correctness by automating parts of systematic editing and refactoring. Applying similar, but not identical, code changes to multiple locations with similar contexts requires (1) understanding and relating the common program context (a program's syntactic structure, control, and data flow) relevant to the edits in order to propagate code changes from one location to others, and (2) recognizing differences between locations in order to customize code changes for each location. Prior approaches for propagating nontrivial, general-purpose code changes from one location to another either do not observe the program context when placing edits, or do not handle the differences between locations when customizing edits, producing syntactically invalid or incorrectly modified programs. We design a novel technique and implement it in a tool called Sydit. Our approach first creates an abstract, context-aware edit script which contains a syntax subtree enclosing the exemplar edit, with all concrete identifiers abstracted, and a sequence of edit operations. It then applies the edit script to user-selected locations by establishing both context matching and identifier matching to correctly place and customize the edit. Although Sydit is effective in helping developers correctly apply edits to multiple locations, programmers are still on their own to identify all the appropriate locations. When developers omit some of the locations, the edit script inferred from a single code location is not always well suited to help them find the locations. One approach to inferring the edit script is to encode the concrete context. However, the resulting edit script is too specific to the source location, and therefore can only identify locations which contain syntax trees identical to the source location (false negatives). Another approach is to encode the context with all identifiers abstracted, but the resulting edit script may match too many locations (false positives). To suggest edit locations, we use multiple examples to create a partially abstract, context-aware edit script, and use this edit script to both find edit locations and transform the code. Our experiments show that edit scripts from multiple examples have high precision and recall in finding edit locations and high accuracy when applying systematic edits, because the common context extracted from multiple examples, together with the identified common concrete identifiers, improves the location search without sacrificing edit application accuracy.
For systematic edits which insert or update duplicated code, our systematic editing approaches may encourage developers in the bad practice of creating or evolving duplicated code. We investigate and evaluate an approach that automatically refactors cloned code based on the extent of systematic edits, by factoring out common code and parameterizing any differences. Our investigation finds that refactoring systematically edited code is not always feasible or desirable. When refactoring is desirable, systematic edits offer a better way to scope the refactoring as compared to whole-method refactoring. Automatic clone removal refactoring cannot obviate the need for systematic editing. Developers need tool support for both automatic refactoring and systematic editing. Based on the systematic changes already made by developers for a subset of change locations, our automated approaches facilitate propagating general-purpose systematic changes across large programs, identifying locations requiring systematic changes missed by developers, and refactoring code undergoing systematic edits to reduce code duplication and future repetitive code changes. The combination of these techniques opens a new way of helping developers automate tedious and error-prone tasks when they add features, fix bugs, and maintain software. These techniques also have the potential to guide automated software development and maintenance activities based on existing code changes mined from version histories for bug fixes, feature additions, refactoring, and software migration.
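The sketch below illustrates the identifier-abstraction idea behind a context-aware edit script: concrete names in the exemplar edit are replaced by placeholders, and the abstract edit is later re-concretized for a new location. This is a deliberately simplified, text-based stand-in; Sydit itself operates on abstract syntax trees and infers edit operations from AST differencing.

```python
import re

# Simplified, text-based illustration of identifier abstraction in an edit script.
# Real example-based tools such as Sydit work on syntax trees, not regular expressions.

def abstract_edit(snippet, identifiers):
    """Replace concrete identifiers with placeholders v0, v1, ... and return the mapping."""
    mapping = {name: f"v{i}" for i, name in enumerate(identifiers)}
    template = snippet
    for name, slot in mapping.items():
        template = re.sub(rf"\b{re.escape(name)}\b", slot, template)
    return template, mapping

def apply_edit(template, binding):
    """Customize the abstract edit for a new location by binding placeholders to local names."""
    out = template
    for slot, name in binding.items():
        out = re.sub(rf"\b{slot}\b", name, out)
    return out

template, _ = abstract_edit("if (buf != null) buf.close();", ["buf"])
print(apply_edit(template, {"v0": "stream"}))   # if (stream != null) stream.close();
```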
546

Investigating sleepiness and distraction in simple and complex tasks

Wales, Alan January 2009 (has links)
The cost of sleepiness-related accidents runs into tens of billions of dollars per year in America alone (Leger, 1994), and sleepiness can play a contributing role in motor vehicle accidents and large-scale industrial disasters (Reason, 1990). Likewise, the effects of an ill-timed distraction, or a general lack of attention to a main task, can be the difference between elevated risk and a simple lack of productivity. The interaction between sleepiness and distraction is poorly researched, and little is known about the mechanisms and scale of the problems associated with this interaction. Therefore, we sought to determine the effects of overnight and daytime sleepiness combined with various levels of distraction on three tasks, ranging from a simple vigilance task to a challenging luggage x-ray inspection task. The first and second studies examined overnight sleepiness (7pm to 7am) in twenty-four healthy participants (mean age 23.2 years, the same for both studies) using a psychomotor task compared to a systems monitoring task, while also manipulating peripheral distraction through a television playing a comedy series. The results showed significant effects of sleepiness on the psychomotor task and evidence for interactive effects of distraction, whereas the systems monitoring task showed no changes with either sleepiness or distraction. Subjects were far more prone to distraction when sleepy for both tasks, and EEG findings suggest that increases in alpha frequency (8-13Hz) power reflect impairments of performance. There is a decaying exponential relationship between the probability of a subject's eyes being open and the response time, such that responses longer than three seconds are 95% likely to have occurred with the eyes closed. The third study used a sample of twelve young (mean age 20.8 years) and twelve older (mean age 60.0 years) participants, and examined the effects of sleep restriction (< 5hrs vs normal sleep) with three levels of distraction (no distraction, peripheral distraction in the form of television, and cognitive distraction in the form of a simulated conversation by means of a verbal fluency task). The task used was an x-ray luggage search simulator that is functionally similar to the task used for airport security screening. The practice day showed that speed and accuracy on the task improved with successive sessions, but that the older group was markedly slower and less accurate than the younger group even before the experimental manipulations. There was no effect of daytime sleep restriction for either the younger or older groups between the two experimental days. However, distraction was found to impair the performance of both young and old, with the cognitive distraction proving to be the most difficult condition. Overall, it is concluded that overnight sleepiness impairs performance in monotonous tasks, but these risks can be diminished by making tasks more engaging. Distractions can affect performance, but may be difficult to quantify as subjects create strategies that allow themselves to attend to distractions during the undemanding moments of a task. Continuous cognitive distraction does affect performance, particularly in older subjects, who are less able to manage concurrent demands effectively. Humans appear capable of coping with a 40% loss of their usual sleep quota, or 24 hours of sleep restriction, on complex tasks, but performance degrades markedly on monotonous tasks.
Performance on simple and complex tasks is impaired by distracters when the effect of distraction is large enough, but the magnitude of impairment depends on how challenging the task is and on how well the subject is able to cope with the distractions.
547

Influence modeling in behavioral data

Li, Liangda 21 September 2015 (has links)
Understanding influence in behavioral data has become increasingly important for analyzing the causes and effects of human behaviors under various scenarios. Influence modeling enables us to learn not only how human behaviors drive the diffusion of memes spread in different kinds of networks, but also how chain reactions evolve in the sequential behaviors of people. In this thesis, I propose to investigate appropriate probabilistic models for efficiently and effectively modeling influence, and the applications and extensions of the proposed models to the analysis of behavioral data in computational sustainability and information search. One fundamental problem in influence modeling is learning the degree of influence between individuals, which we call social infectivity. In the first part of this work, we study how to efficiently and effectively learn social infectivity in diffusion phenomena in social networks and other applications. We replace the pairwise infectivity in multidimensional Hawkes processes with linear combinations of time-varying features, and optimize the associated coefficients with lasso regularization. In the second part of this work, we investigate the modeling of influence between marked events in the application of energy consumption, which tracks the diffusion of the mixed daily routines of household members. Specifically, we leverage temporal and energy consumption information recorded by smart meters in households for influence modeling, through a novel probabilistic model that combines marked point processes with topic models. The learned influence is expected to reveal the sequential appliance usage patterns of household members, and thereby helps address the problem of energy disaggregation. In the third part of this work, we investigate a complex influence modeling scenario which requires simultaneous learning of both infectivity and influence existence. Specifically, we study the modeling of influence in search behaviors, where the influence tracks the diffusion of the mixed search intents of search engine users in information search. We leverage temporal and textual information in query logs for influence modeling, through a novel probabilistic model that combines point processes with topic models. The learned influence is expected to link queries that serve the same information need, and thereby helps address the problem of search task identification. The modeling of influence with the Markov property also helps us to understand the chain reaction in the interaction of search engine users with a query auto-completion (QAC) engine within each query session. The fourth part of this work studies how a user's present interaction with a QAC engine influences his/her interaction in the next step. We propose a novel probabilistic model based on Markov processes, which leverages such influence in the prediction of users' click choices among the suggested queries of QAC engines, and accordingly improves the suggestions to better satisfy users' search intents. In the fifth part of this work, we study the mutual influence between users' behaviors on query auto-completion (QAC) logs and normal click logs across different query sessions. We propose a probabilistic model to explore the correlation between users' behavior patterns on QAC and click logs, and expect to capture the mutual influence between users' behaviors in QAC and click sessions.
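A minimal sketch of the first part's modeling idea appears below: the pairwise infectivity in a multidimensional Hawkes intensity is replaced by a linear combination of features, whose coefficients would be fit with a lasso (L1) penalty. The feature design, the exponential decay kernel, and the toy data are assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np

# Illustrative Hawkes intensity with feature-parameterized pairwise infectivity.
# The feature function, decay kernel, and toy data are assumptions; a real fit would
# maximize the log-likelihood of observed events with an L1 (lasso) penalty on `coeffs`.

def intensity(u, t, events, mu, coeffs, features, decay=1.0):
    """lambda_u(t) = mu[u] + sum over past events (v, t_v) of a_uv * exp(-decay * (t - t_v)),
    where a_uv = max(0, coeffs . features(u, v, t_v)) replaces a fixed pairwise infectivity."""
    lam = mu[u]
    for v, t_v in events:
        if t_v < t:
            a_uv = max(0.0, float(coeffs @ features(u, v, t_v)))
            lam += a_uv * np.exp(-decay * (t - t_v))
    return lam

def lasso_penalty(coeffs, reg=0.1):
    """L1 regularizer encouraging a sparse set of feature weights."""
    return reg * np.abs(coeffs).sum()

# Toy usage: two users, a two-dimensional feature vector, one past event by user 1.
mu = np.array([0.1, 0.2])
coeffs = np.array([0.5, -0.3])
features = lambda u, v, t_v: np.array([1.0, float(u == v)])
print(intensity(0, 2.0, [(1, 1.0)], mu, coeffs, features))
```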
548

Thermo-economic optimization of a heat recovery steam generator (HRSG) system using Tabu search

Liu, Zelong 11 November 2010 (has links)
Heat Recovery Steam Generator (HRSG) systems, in conjunction with a primary gas turbine and a secondary steam turbine, can provide advanced modern power generation with high thermal efficiency at low cost. To achieve such efficiencies at low cost, near-optimal settings of the HRSG parameters must be employed. Unfortunately, current approaches to obtaining such parameter settings are very limited. The published literature associated with the Tabu Search (TS) metaheuristic has shown conclusively that it is a powerful methodology for the solution of very challenging, large, practical combinatorial optimization problems. This report documents a hybrid TS-direct pattern search (TS-DPS) approach applied to the thermoeconomic optimization of a three-pressure-level HRSG system. To the best of our knowledge, this algorithm is the first to be developed that is capable of successfully optimizing a practical HRSG system. A requirement of the TS-DPS technique was the creation of a robust simulation module to evaluate the associated, extremely complex, 19-variable objective function. The simulation module was specially constructed to allow the evaluation of infeasible solutions, a highly preferable capability for methods like TS-DPS. The direct pattern search context is explicitly embodied within the TS neighborhoods, permitting different neighborhood structures to be tested and compared. Advanced TS is used to control the associated continuum discretization with minimal memory requirements. Our computational studies show that TS is a very effective method for solving this HRSG optimization problem.
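The sketch below shows how a hybrid of Tabu search and a pattern-search neighborhood over a discretized continuous design vector might be organized, including a penalty so that infeasible points can still be evaluated. The toy objective stands in for the report's 19-variable HRSG simulation, and the tabu rule and aspiration criterion are simplified assumptions.

```python
# Illustrative hybrid of Tabu search with a pattern-search neighborhood over a
# discretized continuous design vector. The toy objective and feasibility penalty
# stand in for the report's 19-variable HRSG simulation module.

def tabu_search(x0, objective, step=0.5, iters=200, tabu_len=20):
    x, best, best_f = list(x0), list(x0), objective(x0)
    tabu = []
    for _ in range(iters):
        # Pattern-search neighborhood: +/- one step along each coordinate.
        moves = [(i, s) for i in range(len(x)) for s in (+step, -step)]
        candidates = []
        for i, s in moves:
            y = list(x)
            y[i] += s
            f = objective(y)
            # Tabu moves are skipped unless they beat the best solution (aspiration).
            if (i, s) not in tabu or f < best_f:
                candidates.append((f, (i, s), y))
        if not candidates:
            break
        f, move, x = min(candidates, key=lambda c: c[0])
        tabu.append((move[0], -move[1]))          # forbid immediately reversing the move
        tabu = tabu[-tabu_len:]
        if f < best_f:
            best, best_f = list(x), f
    return best, best_f

# Toy objective: quadratic bowl plus a penalty that lets infeasible (negative) points
# be evaluated rather than rejected outright.
def objective(x):
    penalty = 100.0 * sum(max(0.0, -xi) for xi in x)
    return sum((xi - 3.0) ** 2 for xi in x) + penalty

print(tabu_search([0.0, 0.0, 0.0], objective))    # converges toward [3.0, 3.0, 3.0]
```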
549

Evidence of intelligent neural control of human eyes

Najemnik, Jiri 22 June 2011 (has links)
Nearly all imaginable human activities rest on context-appropriate dynamic control of the flow of retinal data into the nervous system via eye movements. The brain's task is to move the eyes so as to exert intelligent predictive control over the informational content of the retinal data stream. An intelligent oculomotor controller would first model the future contingent upon each possible next action in the oculomotor repertoire, then rank-order the repertoire by assigning a value v(a,t) to each possible action a at each time t, and execute the oculomotor action with the highest predicted value each time. We present striking evidence of such intelligent neural control of human eyes in a laboratory task of visual search for a small target camouflaged by a natural-like stochastic texture, a task in which the value of fixating a given location naturally corresponds to the expected information gain about the unknown location of the target. Human searchers behave as if maintaining a map of beliefs (represented as probabilities) about the target location, updating their beliefs with the visual data obtained on each fixation optimally using Bayes' rule. On average, human eye movement patterns appear remarkably consistent with an intelligent strategy of moving the eyes to maximize the expected information gain, but inconsistent with the strategy of always foveating the currently most likely location of the target (a prevalent intuition in existing theories). We derive principled, simple, accurate, and robust mathematical formulas to compute belief and information value maps across the search area on each fixation (or time step). The formulas are exact expressions in the limiting cases of a small amount of information extracted, which occurs when the number of potential target locations is infinite, or when the time step is vanishingly small (used for online control of fixation duration). Under these circumstances, the computation of the information value map reduces to a linear filtering of beliefs on each time step, and beliefs can be maintained simply as running weighted averages. A model algorithm employing these simple computations captures many statistical properties of human eye movements in our search task.
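A toy version of the belief-map update is sketched below: a prior over candidate target locations is combined, via Bayes' rule, with noisy responses whose reliability depends on distance from the fixated location. The Gaussian observation model and the specific noise values are assumptions; the thesis derives exact formulas and a linear-filtering form for the information value map, which this sketch omits.

```python
import numpy as np

# Toy Bayesian belief-map update for visual search. The Gaussian observation model
# and the eccentricity-dependent noise values are assumptions made for illustration.

def update_beliefs(prior, responses, sigma):
    """Posterior over target locations via Bayes' rule.

    responses[j] is the noisy evidence collected at location j on this fixation;
    sigma[j] is its noise level (larger far from the fixated location)."""
    n = len(prior)
    log_like = np.array([
        -0.5 * np.sum(((responses - (np.arange(n) == i)) / sigma) ** 2)
        for i in range(n)
    ])
    post = prior * np.exp(log_like - log_like.max())
    return post / post.sum()

# Toy usage: 4 candidate locations, uniform prior, one fixation at location 0.
prior = np.full(4, 0.25)
sigma = np.array([0.5, 1.0, 1.5, 2.0])        # noise grows with distance from fixation
responses = np.array([0.1, 0.9, 0.0, 0.2])    # noisy evidence hinting at location 1
print(update_beliefs(prior, responses, sigma))
```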
550

Online Content Popularity in the Twitterverse: A Case Study of Online News

2014 January 1900 (has links)
With the advancement of internet technology, online news content has become very popular. People can now get live updates of the world's news through online news sites. Social networking sites are also very popular among internet users for sharing pictures, videos, news links and other online content. Twitter is one of the most popular social networking and microblogging sites. With Twitter's URL shortening service, a news link can be included in a tweet using only a small number of characters, allowing the rest of the tweet to be used for expressing views on the news story. Social links can be unidirectional in Twitter, allowing people to follow any person or organization, get their tweet updates, and share those updates with their own followers if desired. Through Twitter, thousands of news links are tweeted every day. Whenever there is a popular news story, different news sites will publish identical or nearly identical versions ("clones") of that story. Though these clones have the same or very similar content, the level of popularity they achieve may be quite different due to content-agnostic factors such as influential tweeters, time of publication, and the popularity of the news sites. It is very important for content provider sites to know which factors play an important role in making their news links popular. In this thesis research, a data set is collected containing the tweets made for the 218 members of 25 distinct sets of news story clones. The collected data is analyzed with respect to basic popularity characteristics concerning the number of tweets of various types, the relative publication times of clone set members, tweet timing, and the number of tweeter followers. Then, several other factors are investigated to see their impact in making some news story clones more popular than others. It is found that multiple content-agnostic factors, including the maximum number of followers and self-promotional tweets, have an impact on the overall popularity of a news site's stories, and a first step is taken toward quantifying their relative importance.
