Pakistan Research Repository
 

Title of Thesis

Learning to learn: An Automated and Continuous Approach to Learning in Imperfect Environments

Author(s)

Hasan Mujtaba

Institute/University/Department Details
Department of Computer Science / FAST-National University of Computer and Emerging Sciences, Islamabad
Session
2010
Subject
Computer Science
Number of Pages
155
Keywords (Extracted from title, table of contents and abstract of thesis)
Reproduce, Machine, Automated, Environment, Learning, Dissemination, Imperfect, Realization, Performance, Processes, Historical, Approach, Continuous

Abstract
Our quest to understand, model, and reproduce natural intelligence has opened new avenues of research. One such area is artificial intelligence (AI), the branch of computer science that aims to create machines able to engage in activities that humans consider intelligent. The ability to create intelligence in a machine has intrigued humans ever since the advent of computers, and with recent advancements in computer science we come closer every day to realizing our dream of smarter, intelligent machines. Researchers constantly design new algorithms and methods; however, these techniques must be evaluated and their performance compared before they can be accepted. For this purpose games have caught the attention of AI researchers, and gaming environments have proven to be excellent test beds for such evaluation. Although games have redeemed AI research, one limitation most researchers have imposed is that of perfect information. A perfect-information environment implies that the information available to the agents in the environment does not change: agents can detect entities they have been trained for but will ignore entities for which no training has taken place. This limitation results in agents that do not gain a single iota of learning while they are in the environment; whatever learning took place during their training, they will not build upon it. This would all be fine if we were living in a static world of perfect information, but we do not!
Social learning ensures proper dissemination of information within a species, and forgetfulness is an inherent feature of co-evolutionary processes. Keeping this in view, we have also explored the integration of historical information and the ability to retain and recall past learning experiences. We tested a social-learning-based flavor of our Continuous Learning Framework (CLF) to see whether learning from the past is profitable for agents. Each species was allowed to maintain a social pool of successful strategies. Results from these experiments show that drawing a strategy from the pool yields a significant boost in performance when the environmental conditions are similar to those under which the strategy was established. This social pool acts as a general reservoir of knowledge, similar in nature to the knowledge we humans have inherited from ancient civilizations. This historical information also boosts performance by eliminating the "reinvention of the wheel" phenomenon common to evolutionary strategies.
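The social pool described above can be pictured as a keyed archive: a species stores strategies tagged with the environmental conditions under which they succeeded, and an agent seeds its learning from the stored strategy whose conditions most resemble the current ones. A minimal sketch, assuming a strategy is a parameter list and conditions are numeric feature vectors; the names `SocialPool`, `deposit`, and `recall` are illustrative, not taken from the thesis:

```python
import math

class SocialPool:
    """Hypothetical sketch of a species-level pool of successful strategies.

    Each entry pairs a strategy with the environmental conditions under
    which it proved successful, plus the fitness it achieved.
    """

    def __init__(self):
        self.entries = []  # list of (conditions, strategy, fitness)

    def deposit(self, conditions, strategy, fitness):
        """Store a successful strategy tagged with its environment."""
        self.entries.append((list(conditions), list(strategy), fitness))

    def recall(self, conditions):
        """Return the stored strategy whose recorded conditions are closest
        (Euclidean distance) to the current ones, or None if empty."""
        if not self.entries:
            return None
        stored, strategy, _ = min(
            self.entries,
            key=lambda entry: math.dist(entry[0], conditions),
        )
        return strategy

# Usage: the species deposits strategies as conditions change; a new agent
# recalls the one established under the most similar environment.
pool = SocialPool()
pool.deposit(conditions=[0.9, 0.1], strategy=[1, 2, 3], fitness=0.8)
pool.deposit(conditions=[0.2, 0.7], strategy=[4, 5, 6], fitness=0.9)
print(pool.recall([0.85, 0.15]))  # closest to the first entry: [1, 2, 3]
```

Recalling by condition similarity is what limits the performance boost to environments resembling the one in which a strategy was established, matching the behaviour reported in the abstract.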
This research not only presents a new way of learning within a dynamic and uncertain medium but also aims to establish the importance of learning in such an imperfect environment. Much work remains to be done along this path. Possible future directions include designing better performance-evaluation criteria for agents residing in different locations of the environment, and establishing individual archives for learning based on personal experience.

Download Full Thesis (2,111 KB)
S. No.  Chapter  Title of the Chapter                          Page  Size
1       0        CONTENTS                                      vi    71 KB
2       1        INTRODUCTION                                  1     141 KB
                 1.1 Problem Statement
                 1.2 Background and Motivation
                 1.3 Contribution
                 1.4 Thesis Organization
3       2        AI & EVOLUTIONARY LEARNING                    12    508 KB
                 2.1 Introduction
                 2.2 Definitions of Artificial Intelligence
                 2.3 Disciplines of AI
                 2.4 Machine Learning
                 2.5 Artificial Neural Networks
                 2.6 Particle Swarm Optimization
                 2.7 Computer Game Playing
                 2.8 Limitations of Traditional Learning
                 2.9 Summary
4       3        IMPERFECTION IN EVOLUTIONARY SYSTEMS          47    155 KB
                 3.1 Introduction
                 3.2 Imperfect Evolutionary Systems
                 3.3 Components of an Imperfect Evolutionary System
                 3.4 An Individual's Relationship with its Environment
                 3.5 Intelligence in an IES
                 3.6 Summary
5       4        CONTINUOUS LEARNING FRAMEWORK                 59    111 KB
                 4.1 Introduction
                 4.2 Continuity of Learning
                 4.3 Individual Learning
                 4.4 Social Learning
                 4.5 Continuous Learning Framework
                 4.6 Summary
6       5        DYNAMIC & IMPERFECT GAMING ENVIRONMENT        69    531 KB
                 5.1 Introduction
                 5.2 An Imperfect World
                 5.3 Agents and Artifacts in the Environment
                 5.4 Phases in the Environment
                 5.5 Agent Movements
                 5.6 Summary
7       6        CONTINUITY OF LEARNING IN DIGE                84    349 KB
                 6.1 Introduction
                 6.2 Specification of the Learning Algorithm
                 6.3 Experimental Setup
                 6.4 Perfect Environment
                 6.5 Coping with Imperfection
                 6.6 Retaining Historical Lessons
                 6.7 Adding a New Dimension to Learning
                 6.8 Summary
8       7        ADAPTIVE GAME INTELLIGENCE                    110   114 KB
                 7.1 Introduction
                 7.2 Predictability in Games
                 7.3 Automated Game Learning
                 7.4 Summary
9       8        CONCLUSION                                    117   39 KB
                 8.1 Answers to Problem Statements
                 8.2 Contributions
                 8.3 Limitations
                 8.4 Future Work
                 8.5 Summary
10      9        REFERENCES                                    120   111 KB