My First Research Paper! (To Be Published)


Greetings! I want to share with you my first AI-related paper, which will be published soon. I might (or might not) upload the whole paper in another post later to get your reviews, but for now I’m showing the abstract and keywords.

Abstract

Research in learning and planning in real-time strategy (RTS) games is very interesting to several industries, such as the military industry, robotics, and most importantly the game industry. Recent published work on online case-based planning in RTS games does not include the capability of online learning from experience, so the knowledge certainty remains constant, which leads to inefficient decisions. In this paper, an intelligent agent model based on both online case-based planning (OLCBP) and reinforcement learning (RL) techniques is proposed. In addition, the proposed model has been evaluated using empirical simulation on Wargus (an open-source clone of the well-known real-time strategy game Warcraft II). This evaluation shows that the proposed model increases the certainty of the case base by learning from experience, and hence improves the process of decision making for selecting more efficient, effective and successful plans.

Keywords

Case-Based Reasoning, Reinforcement Learning, Online Case-Based Planning, Real-Time Strategy Games, Sarsa(λ) Learning, Eligibility Traces, Intelligent Agent.

Natural Language Processing – The Big Picture


Introduction

Since I’m personally hungry for big pictures of everything around me, since natural language processing (NLP) is highly important for computers to control humankind, since many AI-related careers depend on it, and since it is used extensively for commercial purposes: for all those reasons, I will give a big picture of it (yes, I mean NLP).

What’s Natural Language Processing?

The simple definition is obvious: making computers understand or generate human text in a certain language. The complete definition, however, is: a set of computational techniques for analyzing and representing naturally occurring texts (at one or more levels) for the purpose of achieving human-like language processing for a range of applications.

NLP can be done for any language, of any mode or genre, for oral or written texts. It works over multiple types or levels of language processing, starting from the level of understanding a single word up to the level of understanding the big picture of a complete book.

A brain with language samples. (Image courtesy of MIT OCW.)

Related Sciences?

Linguistics: focuses on formal, structural models of language and the discovery of language universals – in fact the field of NLP was originally referred to as Computational Linguistics.

Computer Science: is concerned with developing internal representations of data and efficient processing of these structures.

Cognitive Psychology: looks at language usage as a window into human cognitive processes, and has the goal of modeling the use of language in a psychologically plausible way.

Language Processing VS Language Generation

NLP may focus on language processing or generation. The first of these refers to the analysis of language for the purpose of producing a meaningful representation, while the latter refers to the production of language from a representation. The task of language processing is equivalent to the role of reader/listener, while the task of language generation is that of the writer/speaker. While much of the theory and technology are shared by these two divisions, Natural Language Generation also requires a planning capability. That is, the generation system requires a plan or model of the goal of the interaction in order to decide what the system should generate at each point in an interaction.

What are its sub-problems?

NLP is performed by solving a number of sub-problems, where each sub-problem constitutes one of the levels mentioned earlier. Note that only a portion of those levels might be applied, not necessarily all of them; for example, some applications require only the first 3 levels. Also, the levels could be applied in a different order, independent of their granularity.

Level 1 – Phonology : This level is applied only if the text originates from speech. It deals with the interpretation of speech sounds within and across words. A speech sound might give a big hint about the meaning of a word or a sentence.

Level 2 – Morphology : Deals with understanding distinct words according to their morphemes (the smallest units of meaning). For example, the word preregistration can be morphologically analyzed into three separate morphemes: the prefix “pre”, the root “registra”, and the suffix “tion”.
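To make this level concrete, here is a minimal sketch of a naive morphological splitter in Python; the affix lists and the greedy matching strategy are my own toy assumptions, nothing like a real morphological analyzer:

```python
# A toy morphological analyzer: split a word into prefix + root + suffix
# by greedily matching small, hand-picked affix lists. Real analyzers use
# lexicons and finite-state machinery; this is only an illustration.

PREFIXES = ["pre", "un", "re", "dis"]          # toy inventory (assumption)
SUFFIXES = ["tion", "ment", "ness", "ing"]     # toy inventory (assumption)

def analyze(word):
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    root = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, root, suffix

print(analyze("preregistration"))  # -> ('pre', 'registra', 'tion')
```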

Level 3 – Lexical : Deals with understanding everything about individual words: their part of speech, their meanings, and their relations to other words.

Level 4 – Syntactic : Deals with analyzing the words of a sentence so as to uncover the grammatical structure of the sentence.
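As a small illustration of this level, here is a sketch using the NLTK library (assuming it is installed); the toy grammar is my own invention, not something shipped with NLTK:

```python
import nltk

# A toy context-free grammar (an invented example, for illustration only).
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the' | 'a'
N -> 'dog' | 'man'
V -> 'saw'
""")

# The chart parser uncovers the grammatical structure of the sentence.
parser = nltk.ChartParser(grammar)
for tree in parser.parse("the dog saw a man".split()):
    tree.pretty_print()
```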

Level 5 – Semantic : Determines the possible meanings of a sentence by focusing on the interactions among word-level meanings in the sentence. Some people may think it’s the level which determines the meaning, but actually all the levels do.

Level 6 – Discourse : Focuses on the properties of the text as a whole that convey meaning by making connections between component sentences.

Level 7 – Pragmatic : Explains how extra meaning is read into texts without actually being encoded in them. This requires much world knowledge, including the understanding of intentions, plans, and goals. Consider the following 2 sentences:

  • The city councilors refused the demonstrators a permit because they feared violence.
  • The city councilors refused the demonstrators a permit because they advocated revolution.

The meaning of “they” in the 2 sentences is different. In order to figure out the difference, world knowledge stored in knowledge bases and inference modules should be utilized.

What are the approaches for performing NLP?

Natural language processing approaches fall roughly into four categories: symbolic, statistical, connectionist, and hybrid. Symbolic and statistical approaches have coexisted since the early days of this field. Connectionist NLP work first appeared in the 1960s.

Symbolic Approach: Symbolic approaches perform deep analysis of linguistic phenomena and are based on explicit representation of facts about language through well-understood knowledge representation schemes and associated algorithms. The primary source of evidence in symbolic systems comes from human-developed rules.

Statistical Approach: Statistical approaches employ various mathematical techniques and often use large text input to develop approximate generalized models of linguistic phenomena based on actual examples of these phenomena provided by the text input without adding significant linguistic or world knowledge. In contrast to symbolic approaches, statistical approaches use observable data as the primary source of evidence.
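As a tiny taste of the statistical approach, the sketch below estimates bigram probabilities from raw text alone, with no linguistic or world knowledge added; the mini corpus is an invented toy example:

```python
from collections import Counter

# Estimate P(next_word | word) purely from counts over observed text:
# no grammar, no hand-written rules, just data.
corpus = "the dog saw the man the man saw the dog".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of the first word of each pair

def prob(word, nxt):
    return bigrams[(word, nxt)] / unigrams[word]

print(prob("the", "dog"))  # 0.5: two of the four occurrences of "the" precede "dog"
```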

Connectionist Approach: Similar to the statistical approaches, connectionist approaches also develop generalized models from examples of linguistic phenomena. What separates connectionism from other statistical methods is that connectionist models combine statistical learning with various theories of representation – thus the connectionist representations allow transformation, inference, and manipulation of logic formulae. In addition, in connectionist systems, linguistic models are harder to observe due to the fact that connectionist architectures are less constrained than statistical ones.

NLP Applications

Information Retrieval – Information Extraction – Question-Answering – Summarization – Machine Translation – Dialogue Systems

References

Liddy, E. D. Natural Language Processing. In Encyclopedia of Library and Information Science, 2nd Ed. Marcel Dekker, Inc.

Ubiquitous Computing and AI


Introduction

OK! Today I give a very big picture of that crucial term called “Ubiquitous Computing” and its relation to AI (yes, I mean Artificial Intelligence). Actually, it’s a shame for any AI geek not to know it. Just remember that the word “ubiquitous” means “existing or being everywhere, or in all places, at the same time”.

What is Ubiquitous Computing?

I can describe Ubiquitous Computing (UbiComp) as “the incorporation of computers into the background of human life without any physical interaction with them”. It’s considered the future third era of computing, where the first era was the mainframe era and the second era is the personal computer era (what we are living now). The term Ubiquitous Computing – also known as Calm Technology – was coined by Mark Weiser, the father of Ubiquitous Computing, in the late 80s (so it’s not a new thing).

Ubiquitous Computing involves tens, hundreds or even thousands of different-sized (often tiny) computers sensing the environment, deducing things, communicating, and performing actions to help a human without the human actually using any interface. Thousands of computers should be embedded in everyday objects such as paper, pens, books, doors, buildings, walls, food containers, clothes, furniture, equipment, etc. to support a human’s life.

Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box (the computer itself). Ubiquitous Computing, however, means no human attention to any computer interface while using it. Take a look at motor technology, which is already considered ubiquitous: a glance through the shop manual of a typical automobile, for example, reveals twenty-two motors and twenty-five more solenoids. They start the engine, clean the windshield, lock and unlock the doors, and so on. By paying careful attention it might be possible to know whenever one activated a motor, but there would be no point to it. Computers in the Ubiquitous Computing era should be just like that.

Suppose you want to lift a heavy object. You can call in your strong assistant to lift it for you, or you can be yourself made effortlessly, unconsciously, stronger and just lift it. There are times when both are good. Much of the past and current effort for better computers has been aimed at the former; ubiquitous computing aims at the latter.

An example of life with Ubiquitous Computing.

AI and UbiComp?

According to this essay, AI will play a major role in UbiComp in 3 different ways:

  1. Ubiquitous Computing needs a transparent interface to work, which means a natural way of communicating with humankind. This involves a lot of artificial intelligence, such as gesture recognition, sound and speech recognition, and computer vision.
  2. Ubiquitous Computing needs computers to be aware of their context, such as location and time. Artificial intelligence plays an important role in context awareness, where it helps the computer perceive people’s location and generate the proper service accordingly. For example, when you are at the office, you may want to read some business reports, but when you go back home, you want to watch a movie and enjoy a coffee for a rest. These scenarios impose requirements on artificial intelligence agents (see the sketch after this list).
  3. Ubiquitous Computing will also benefit from automated learning from past experience and from capturing people’s experience. Learning agents are introduced into this framework to perceive people’s behavior and make decisions based on people’s preferences.
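Here is a minimal sketch of the context-awareness idea from point 2; the contexts and services are invented placeholders, not part of any real UbiComp framework:

```python
# A toy context-aware agent: perceive (location, hour) and choose a
# service accordingly. A real agent would learn these rules from the
# user's behavior instead of having them hard-coded.

def suggest_service(location, hour):
    if location == "office" and 9 <= hour < 17:
        return "open today's business reports"
    if location == "home" and hour >= 18:
        return "queue up a movie and start the coffee machine"
    return "stay quiet in the background"

print(suggest_service("office", 10))  # open today's business reports
print(suggest_service("home", 20))    # queue up a movie and start the coffee machine
```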

References

Shang, Yu Liang, The Role and Possibility of Artificial Intelligence in Ubiquitous Computing, 1993

Mark Weiser, “The Computer for the Twenty-First Century,” Scientific American, pp. 94-104, September 1991

http://www.peterindia.net/UbiquitousComputingLinks.html

Machine Learning – A Slight Introduction


Introduction

My goal in this post is to simplify machine learning as much as possible. I have summarized what is itself considered a summary, in question/answer form, to let interested people get the big picture rapidly.

What is Machine Learning?

Machine learning is the study of computer algorithms that make the computer learn. The learning is based on examples, direct experience or instruction. In general, machine learning is about learning to do better in the future based on what was experienced in the past.

What’s its relation to Artificial Intelligence?

Machine learning is a core subarea of Artificial Intelligence (AI) because:

  1. It is very unlikely that we will be able to build any kind of intelligent system capable of any of the facilities that we associate with intelligence, such as language or vision, without using learning to get there. These tasks are otherwise simply too difficult to solve.
  2. We would not consider a system to be truly intelligent if it were incapable of learning since learning is at the core of intelligence.

Although a subarea of AI, machine learning also intersects broadly with other fields, especially statistics, but also mathematics, physics, theoretical computer science and more.

Examples of machine learning applications?

Optical Character Recognition (OCR) – Face Detection – Spam Filtering – Topic Spotting – Spoken Language Understanding – Medical Diagnosis – Customer Segmentation – Fraud Detection – Weather Prediction

Machine learning’s general approaches?

Supervised Learning

Supervised learning simply makes the computer learn by showing it examples. An example of this could be telling the computer that a specific handwritten “Z” is really the letter “Z”. Afterward, when the computer is asked whether some letter is a “Z” or not, it can answer.

A supervised learning problem could either be a classification or a regression. In classification, we want to categorize objects into fixed categories. In regression, on the other hand, we are trying to predict a real value. For instance, we may wish to predict how much it will rain tomorrow.
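Here is a minimal sketch of supervised classification using a 1-nearest-neighbor rule; the feature vectors standing in for handwritten letters are invented toy numbers:

```python
# Toy supervised classification: each training example is (features, label).
# A 1-nearest-neighbor rule predicts the label of the closest known example.

train = [
    ((2.0, 9.0), "Z"),   # invented feature vectors for handwritten letters
    ((2.2, 8.7), "Z"),
    ((7.5, 1.0), "O"),
    ((7.9, 1.3), "O"),
]

def predict(x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], x))
    return label

print(predict((2.1, 8.9)))  # -> "Z": the nearest training examples are Z's
```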

Unsupervised Learning

Unsupervised learning simply makes the computer divide, on its own, a set of objects into a number of groups based on the differences between them. For example, if a group of fruits (cucumbers and tomatoes) is the set of objects introduced, the computer, based on the differences in color, size and smell, will tell that certain objects (which it doesn’t know are called cucumbers) belong to one group and that the other objects (which it doesn’t know are called tomatoes) belong to another group.
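The sketch below groups the fruit example with a bare-bones k-means style loop; the (redness, length) feature numbers are invented for illustration:

```python
# Toy unsupervised grouping: split objects into 2 groups without labels.
# The computer never sees the names "cucumber" or "tomato".

objects = [(0.9, 0.30), (0.8, 0.35), (0.85, 0.28),   # red-ish and round
           (0.2, 0.90), (0.25, 0.95), (0.15, 0.85)]  # green-ish and long

centers = [objects[0], objects[3]]                    # crude initialization
for _ in range(10):
    groups = [[], []]
    for obj in objects:                               # assign to nearest center
        d = [sum((a - b) ** 2 for a, b in zip(obj, c)) for c in centers]
        groups[d.index(min(d))].append(obj)
    centers = [tuple(sum(v) / len(g) for v in zip(*g))  # recompute the centers
               for g in groups]

print(groups[0])  # one group of similar objects
print(groups[1])  # the other group
```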

Reinforcement Learning

Sometimes it’s not a single action (such as figuring out the type of a fruit) that is important; what is important is the policy, that is, the sequence of actions that reaches the goal. There is no such thing as the best action in isolation; an action is good if it’s part of a good policy. A good example is game playing, where a single move by itself is not that important; it is the sequence of right moves that is good.
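A tiny invented illustration of this point: the action with the best immediate reward can belong to the worse overall sequence.

```python
# Two fixed action sequences in an invented two-step game. The greedy
# first move pays more immediately but leads to a bad ending; the
# patient policy wins overall.

returns = {
    ("grab", "trapped"): [10, -100],   # big immediate reward, bad ending
    ("wait", "goal"):    [1, 50],      # small first reward, good ending
}

for policy, rewards in returns.items():
    print(policy, "-> total return:", sum(rewards))
# ('grab', 'trapped') -> total return: -90
# ('wait', 'goal') -> total return: 51
```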

A simple example of a machine learning problem?

In Figure 1, supervised learning is demonstrated; notice that it consists of 2 phases (which could be done at the same time):

  1. Training phase: where the computer learns what the right things to do are. As you can see, the computer learns by example that bats, leopards, zebras and mice are land mammals (+ve sign), while ants, dolphins, sea lions, sharks and chickens are not (-ve sign).
  2. Testing phase: where the computer evaluates what it has learnt. It is asked to state whether the tiger, tuna and platypus are land mammals or not.

A Tiny Learning Problem

Basic Definitions for a supervised learning classification problem

  • An example (sometimes also called an instance) is the object that is being classified. For instance, in OCR, the images are the examples.
  • An example is described by a set of attributes, also known as features or variables. For instance, in medical diagnosis, a patient might be described by attributes such as gender, age, weight, blood pressure, body temperature, etc.
  • The label is the category that we are trying to predict. For instance, in OCR, the labels are the possible letters or digits being represented. During training, the learning algorithm is supplied with labeled examples, while during testing, only unlabeled examples are provided.
  • The rule used for mapping from an example to a label is called a concept.
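Putting those definitions together in code, here is a sketch using the land-mammal data of Figure 1; the particular rule below is an invented stand-in for whatever concept a learner might actually find:

```python
# Labeled training examples: each example is described by attributes,
# and the label says whether it is a land mammal (True) or not (False).
train = [
    ({"name": "zebra",   "legs": 4, "lives_on_land": True},  True),
    ({"name": "mouse",   "legs": 4, "lives_on_land": True},  True),
    ({"name": "dolphin", "legs": 0, "lives_on_land": False}, False),
    ({"name": "chicken", "legs": 2, "lives_on_land": True},  False),
]

# A concept: a rule mapping an example to a label (an invented stand-in).
def concept(example):
    return example["lives_on_land"] and example["legs"] == 4

mistakes = sum(concept(x) != y for x, y in train)
print("mistakes on the training data:", mistakes)  # 0 for this toy rule

# Testing phase: the concept predicts labels for unlabeled examples.
print(concept({"name": "tiger", "legs": 4, "lives_on_land": True}))  # True
```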

3 conditions for learning to succeed

There are 3 conditions that must be met for learning to succeed.

  1. We need enough data.
  2. We need to find a rule (concept) that makes a low number of mistakes on the training data.
  3. We need that rule to be as simple as possible.

Note that the last two requirements are typically in conflict with one another: we sometimes can only find a rule that makes a low number of mistakes by choosing a rule that is more complex, and conversely, choosing a simple rule can sometimes come at the cost of allowing more mistakes on the training data. Finding the right balance is perhaps the most central problem of machine learning. The notion that simple rules should be preferred is often referred to as “Occam’s razor.”

References

Rob Schapire, COS 511: Theoretical Machine Learning, Lecture 1

Ethem Alpaydin, Introduction to Machine Learning, 2004.

Machines Uncontrollable!


Introduction

Artificial Intelligence started in the fifties with a very optimistic vision. Scientists at that time predicted that machines outsmarting humans would be built within 10-20 years. Unfortunately (or maybe luckily) this was far more optimistic than reality turned out to be. AI scientists then went through an era of pessimism and thought that machines outsmarting man was impossible.

However, the optimistic era restarted some years ago. This was due to the great increase in computing power over the years, which has kept following Moore’s Law (computing power roughly doubles every 2 years).

“AI Scientists Worry Machines May Outsmart Man” is an article in The New York Times. It describes scientists debating whether there should be limits on research that might lead to loss of human control over computer-based systems.

Evil computer

Why worry?

So what is new that makes scientists start to worry? Unstoppable computer viruses, Predator drones, self-driving cars, software-based personal assistants, service robots in the home and robots navigating the world are all signs of machines that could easily harm mankind if one of the following happened:

  • Man lost control over them.
  • They were used by criminals.
  • Evil was programmed into them for research.
  • They had BUGS 🙂

The Singularity

Technological singularity refers to the prediction that humans will create machines that outsmart humans, causing the end of the human era as machines take control. The idea of an “intelligence explosion”, in which smart machines would design even more intelligent machines, was proposed by the mathematician I. J. Good in 1965. To understand the idea better, please watch The Matrix.

The power of forgetting nothing

A friend and I were chatting with a researcher working with DARPA about this issue during Cairo ICT 2010. I was saying that the singularity is very far away and that building extremely intelligent machines is impossible in the coming years. He, the researcher, stated that it’s not that far. Although it’s very hard for humans to invent algorithms that make machines outsmart humans, machines still have a power not found in humans: the power of remembering everything 🙂. According to him, this fact will lead machines to learn extremely fast and outsmart humans in a short time.

Just imagine: all the experience and science a man learns in 40 years could be learned by a machine in, say, 7 months. I think you can now understand what I mean!

Case-Based Reasoning VS Reinforcement Learning – Part 1 – Introduction


Introduction to the introduction

Greetings! Today I talk about the AI topic I’m most concerned with nowadays: RL, CBR, hybrid CBR/RL techniques, and the comparison of CBR and RL techniques. As far as I know, the hybridization of CBR with RL is a relatively modern research topic (since 2005). However, I have never seen any document comparing RL to CBR (I’d love to see one).

I will give a slight introduction today.

Case-based reasoning

Case-based reasoning (CBR) is a machine learning technique which aims to solve a new problem (case) using a database (case-base) of old problems (cases). So it depends on past experience.

CBR Cycle
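For readers who like code, here is a minimal sketch of the classic retrieve/reuse/revise/retain cycle; the flat feature-tuple representation of problems and the stubbed execution step are my own simplifications:

```python
# A minimal CBR loop: retrieve the most similar old case, reuse its
# solution, revise after observing the outcome, retain what worked.
# Cases are (problem_features, solution) pairs: a toy representation.

case_base = [((1.0, 2.0), "plan_A"), ((8.0, 9.0), "plan_B")]

def apply_plan(solution):
    return True   # stub standing in for executing the plan in the world

def retrieve(problem):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(case_base, key=lambda case: dist(case[0], problem))

def solve(problem):
    _, solution = retrieve(problem)            # retrieve + reuse
    if apply_plan(solution):                   # revise: test the outcome
        case_base.append((problem, solution))  # retain the confirmed case
    return solution

print(solve((1.5, 2.5)))  # -> "plan_A", and the case-base grows by one case
```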

Reinforcement learning

Reinforcement Learning (RL) is a sub-field of machine learning. It’s sometimes described as sitting between supervised and unsupervised learning. It simulates human learning based on trial and error.

RL Cycle
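And here is the RL loop distilled to a tabular value update on an invented one-state (bandit-style) problem; the payoff numbers are arbitrary:

```python
import random

# Trial and error on a one-state, two-action problem: the agent discovers
# that action "b" pays better purely from rewards, with no examples given.
Q = {"a": 0.0, "b": 0.0}                 # learned action values
alpha, epsilon = 0.1, 0.2                # learning rate, exploration rate
payoff = {"a": 1.0, "b": 2.0}            # invented environment payoffs

for _ in range(1000):
    # epsilon-greedy: mostly exploit the best known action, sometimes explore
    if random.random() < epsilon:
        action = random.choice(list(Q))
    else:
        action = max(Q, key=Q.get)
    r = payoff[action] + random.gauss(0, 0.1)   # noisy reward from the world
    Q[action] += alpha * (r - Q[action])        # move the value toward the reward

print(Q)  # Q["b"] ends up near 2.0, so the greedy policy prefers "b"
```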

  • NB: For people unaware of the above two techniques, please read more about them before proceeding.

Can we compare CBR to RL?

What I think is: yes, we can. Some people say, “RL is a problem but CBR is a solution to a problem, so how can you compare them?” I answer: “When I say RL, I implicitly mean RL techniques such as TD-learning and Q-learning.” CBR solves the learning problem by depending on past experience, while RL solves the learning problem by depending on trial and error.

How are CBR and RL similar anyway?

  • In CBR we have the case-base; in RL we have the state-action space. Each case consists of a problem and its solution; in RL the action is likewise considered the solution to the current state.
  • In CBR there can be a value for each case that measures the performance of that case; in RL each state-action pair has its value.
  • In RL, rewards from the environment are the way of updating the state-action pairs’ values. In CBR there are no rewards, but after applying each case a revision process is performed, after testing it, to update its performance value.
  • Retrieving a case in CBR is done in RL under the name of “the policy of choosing actions from the action space”.
  • In CBR, adaptation is performed after retrieving a case; in RL no adaptation is performed because the action space contains ALL the possible solutions.

There are more examples, but those are enough.

So are CBR and RL techniques the same thing?

Of course not, there are many differences between them. CBR solves the learning problem by depending on past experience while RL solves the learning problem depending on trial and error.

What is the significance of hybrid CBR/RL Techniques?

It’s as if we make the computer learn BOTH by trial and error and from past experience. Of course this often leads to better results. We can say that each completes the other.
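Here is a hedged sketch of the flavor of such a hybrid; this is my own toy illustration, not the method of any of the papers cited below. The case-base supplies candidate solutions from past experience, while an RL-style reward update maintains each case’s value:

```python
# Toy hybrid CBR/RL flavor: cases carry a learned value, the value is
# revised from rewards (trial and error), and retrieval prefers similar
# cases with high learned value. Purely illustrative.

cases = [[(1.0, 2.0), "plan_A", 0.0],     # [problem_features, solution, value]
         [(1.2, 1.8), "plan_B", 0.0]]
alpha = 0.1
payoff = {"plan_A": -1.0, "plan_B": 1.0}  # invented environment payoffs

def retrieve(problem):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    # prefer cases that are both similar and historically successful
    return max(cases, key=lambda c: c[2] - dist(c[0], problem))

def learn(problem):
    case = retrieve(problem)
    r = payoff[case[1]]                   # reward for executing the case's plan
    case[2] += alpha * (r - case[2])      # RL-style revision of the case value

for _ in range(200):
    learn((1.1, 1.9))

print(cases)  # plan_B's value rises toward 1.0, so retrieval converges on it
```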

How is CBR hybridized with RL and vice versa?

CBR needs RL techniques in the revision phase. Q-learning is used in the paper “Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL” (2007); another example is the master’s thesis “A CBR/RL System for Learning Micromanagement in Real-Time Strategy Games” (2009).

RL needs CBR for function approximation; an example is the paper “CBR for State Value Function Approximation in Reinforcement Learning” (2005).

RL needs CBR for learning a continuous action model (also approximation); an example is “Learning Continuous Action Models in a Real-Time Strategy Environment” (2008).

Heuristically accelerated RL – one of the heuristics here is the case-base; an example is “Improving Reinforcement Learning by Using Case Based Heuristics” (2009).

Other related work:

  • Continuous Case Based Reasoning – 1993
  • Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces – 1997
  • A New Heuristic Approach for Dual Control – 1997

So the questions here are …

  • When should we use reinforcement learning and when should we use case-based reasoning?
  • Are there cases where applying one of them is completely ruled out from a technical point of view?
  • If we are dealing with an infinite number of state-action pairs, what is the preferred solution? Could both techniques serve well separately?
  • How can we make the best use of a hybrid CBR/RL approach to tackle the problem?

I’ll do my best to answer them in future post(s). However, I’d be delighted if an expert could answer them now.